Compare commits

..

30 Commits

Author SHA1 Message Date
Radosław Kapka
2e322b464c Keep only the latest value in the health channel (#14087)
* Increase health tracker channel buffer size

* keep only the latest value

* Make health test blocking as a regression test for PR #14807

* Fix new race conditions in the MockHealthClient

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2024-06-10 07:54:10 -05:00
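
The "keep only the latest value" approach above can be illustrated with a non-blocking send into a size-1 channel; this is a minimal sketch assuming a single writer, with illustrative names rather than Prysm's actual health-tracker code.

package health

// sendLatest delivers the most recent health status on a channel with buffer
// size 1. If the previous value was never consumed it is dropped, so the
// writer never blocks and the reader only ever sees the latest value.
func sendLatest(ch chan bool, healthy bool) {
    for {
        select {
        case ch <- healthy:
            return
        default:
            // Buffer is full: discard the stale value and retry the send.
            select {
            case <-ch:
            default:
            }
        }
    }
}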
kasey
26c0178ce4 always close cache warm chan to prevent blocking (#14080)
* always close cache warm chan to prevent blocking

* test that waitForCache does not block

* combine defers to reduce cognitive overhead

* lint

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-06-05 09:26:49 -05:00
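
A minimal sketch of the close-to-unblock pattern this commit describes: the warm-up routine closes its channel on every exit path, so waitForCache can never hang. Field and function names here are assumptions for illustration, not Prysm's actual cache code.

package cache

import "context"

type service struct {
    cacheWarmed chan struct{} // closed exactly once when warming finishes
}

// warmCache closes the channel whether fill succeeds or returns early with an
// error, so goroutines blocked in waitForCache are always released.
func (s *service) warmCache(ctx context.Context, fill func(context.Context) error) {
    defer close(s.cacheWarmed)
    if err := fill(ctx); err != nil {
        return // waiters are still unblocked by the deferred close
    }
}

func (s *service) waitForCache(ctx context.Context) error {
    select {
    case <-s.cacheWarmed:
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}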
Nishant Das
b6f93e5305 Change It To Debug (#14072) 2024-06-03 10:04:03 -05:00
Nishant Das
7ad870fdba Update Libp2p Dependencies (#14060)
* Update to v0.35.0 and v0.11.0

* Update Protobuf

* Update bazel deps

(cherry picked from commit 568273453b)
2024-05-31 13:33:30 -05:00
Nishant Das
8b9651606d Restrict Dials From Discovery (#14052)
* Fix Excessive Subnet Dials

* Handle backoff in Iterator

* Slow Down Lookups

* Add Flag To Configure Dials

* Preston's Review

* Update cmd/beacon-chain/flags/base.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Reduce polling period

* Manu's Review

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
(cherry picked from commit 7a4ecb6060)
2024-05-31 13:33:23 -05:00
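
One way to picture the dial restriction and slower lookups described above is a semaphore channel sized by a configurable limit; the types and names below are assumptions for illustration, not Prysm's actual discovery code.

package p2p

type peerCandidate struct {
    addr string
}

// dialFromDiscovery caps the number of in-flight dials produced by the
// discovery loop. When every slot is busy the send on sem blocks, which
// naturally slows down lookups instead of flooding the dialer.
func dialFromDiscovery(candidates <-chan peerCandidate, maxDials int, dial func(peerCandidate)) {
    sem := make(chan struct{}, maxDials)
    for c := range candidates {
        sem <- struct{}{} // blocks while maxDials dials are already in flight
        go func(pc peerCandidate) {
            defer func() { <-sem }()
            dial(pc)
        }(c)
    }
}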
terence
a2a77b5bfc Fix dependent root retrieval genesis case (#14053)
* Fix dependent root retrieval genesis case

* Remove print

(cherry picked from commit 43c7659d18)
2024-05-31 13:32:38 -05:00
Radosław Kapka
b807afbd7d Only log error when aggregator check fails (#14046)
* Only log error when aggregator check fails

* review

(cherry picked from commit 2f2152e039)
2024-05-31 13:32:21 -05:00
terence
6df83ebef7 Fix CommitteeAssignments to not return every validator (#14039)
* Rewrite CommitteeAssignments to not return every validator

* Potuz's feedback

(cherry picked from commit c35889d4c6)
2024-05-31 13:31:43 -05:00
Sammy Rosso
7b4d23801a Fix race conditions + cleanup (#14041)
(cherry picked from commit 10dedd5ced)
2024-05-31 13:31:34 -05:00
kasey
63d31cfa1e paranoid underflow protection without error handling (#14044)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
(cherry picked from commit 62b5c43d87)
2024-05-31 13:31:05 -05:00
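
"Paranoid underflow protection without error handling" presumably means clamping unsigned subtraction at zero instead of propagating an error; a minimal illustrative sketch:

package mathutil

// SubOrZero subtracts b from a, clamping at zero instead of letting the
// uint64 wrap around on underflow; no error value is returned.
func SubOrZero(a, b uint64) uint64 {
    if b > a {
        return 0
    }
    return a - b
}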
james-prysm
f947fefa19 WebFix develop (#14040)
* fixing issues introduced by PR 13593

* missed setting db

* linting

(cherry picked from commit 2e84208169)
2024-05-31 13:30:41 -05:00
Sammy Rosso
7a55dc6cc3 Fix TestNodeHealth_Concurrently race condition (#14033)
(cherry picked from commit 4d190c41cc)
2024-05-31 13:29:50 -05:00
Radosław Kapka
024163b923 Substantial VC cleanup (#13593)
* Cleanup part 1

* Cleanup part 2

* Cleanup part 3

* remove lock field init

* doc for SignerConfig

* remove vars

* use full Keymanager word in function

* revert interface rename

* linter

* fix build issues

* review

(cherry picked from commit 30cc23c5de)
2024-05-31 13:26:16 -05:00
Radosław Kapka
8eb964c3e5 Remove Beacon API Postman collection (#14014)
(cherry picked from commit 8a12b78684)
2024-05-31 13:25:05 -05:00
Preston Van Loon
69c60a6611 Enable experimental_remote_downloader in CI. (#13996)
(cherry picked from commit 49a6d02e12)
2024-05-31 13:24:51 -05:00
Radosław Kapka
7de5381af8 Update state readme (#13890)
* README.md for the state package

* Update beacon-chain/state/state-native/README.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Revert "Update beacon-chain/state/state-native/README.md"

This reverts commit 6a4be3bae5.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
(cherry picked from commit a35535043e)
2024-05-31 13:23:46 -05:00
Brandon Liu
25dfed5cf4 use time.NewTimer() to avoid possible memory leaks (#13800)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
(cherry picked from commit 41edee9fe9)
2024-05-31 13:23:27 -05:00
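
The leak this commit refers to is the usual time.After pitfall: the timer it allocates is only reclaimed when it fires. time.NewTimer with a deferred Stop releases it on every path. A hedged sketch, not the actual changed call site:

package waitutil

import (
    "context"
    "errors"
    "time"
)

// waitWithTimeout waits for done, a timeout, or context cancellation. The
// timer is stopped on every return path, so it never lingers in the runtime
// the way an abandoned time.After timer would.
func waitWithTimeout(ctx context.Context, done <-chan struct{}, timeout time.Duration) error {
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    select {
    case <-done:
        return nil
    case <-timer.C:
        return errors.New("timed out")
    case <-ctx.Done():
        return ctx.Err()
    }
}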
Nishant Das
b9f0fc52ea Handle Each Blob In Its Own Goroutine (#13959)
(cherry picked from commit e9606b3635)
2024-05-31 13:23:16 -05:00
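
Handling each blob in its own goroutine typically fans out with a WaitGroup and records the first error; the sketch below assumes a verify callback and is illustrative only.

package blobs

import "sync"

// verifyAll checks each blob in its own goroutine and reports the first error.
func verifyAll(blobs [][]byte, verify func([]byte) error) error {
    var (
        wg       sync.WaitGroup
        mu       sync.Mutex
        firstErr error
    )
    for _, b := range blobs {
        wg.Add(1)
        go func(blob []byte) {
            defer wg.Done()
            if err := verify(blob); err != nil {
                mu.Lock()
                if firstErr == nil {
                    firstErr = err
                }
                mu.Unlock()
            }
        }(b)
    }
    wg.Wait()
    return firstErr
}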
dependabot[bot]
3a186a1347 Bump golang.org/x/net from 0.21.0 to 0.23.0 (#13895)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.21.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.21.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
(cherry picked from commit ed7c4bb6a7)
2024-05-31 13:23:00 -05:00
Nishant Das
c2117e5747 Update Libp2p Dependencies (#13960)
* Update Libp2p

* Update Go Sum

(cherry picked from commit aa847991e0)
2024-05-31 11:47:57 -05:00
terence
0a9e99ce74 Remove unused validator map copy method (#13954)
(cherry picked from commit 49f3531aed)
2024-05-31 11:47:33 -05:00
Sammy Rosso
c3486c58cb Run correct test (#13935)
(cherry picked from commit ae16d5f52c)
2024-05-31 11:46:34 -05:00
Preston Van Loon
5ddd5cb3fb beacon-chain/cache: Convert tests to cache_test blackbox testing (#13920)
* beacon-chain/cache: convert to blackbox tests (package cache_test)

* Move balanceCacheKey to its own file to satisfy go fuzz build

(cherry picked from commit 3233e64ace)
2024-05-31 11:45:57 -05:00
kasey
f079db62bd use [32]byte keys in the filesystem cache (#13885)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
(cherry picked from commit 8d9024f01f)
2024-05-31 11:45:35 -05:00
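
Switching to [32]byte keys lets block roots act as comparable map keys without retaining the caller's slice; a small illustrative sketch (a cache could then be declared as map[[32]byte]entry):

package cache

// toKey copies a root into a fixed-size array. Arrays are comparable in Go,
// so the result can be used directly as a map key, and the copy avoids
// retaining or aliasing the caller's slice.
func toKey(root []byte) [32]byte {
    var k [32]byte
    copy(k[:], root)
    return k
}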
Sammy Rosso
a5425d9c97 Remove EnableEIP4881 flag (#13826)
* Remove EnableEIP4881 flag

* Gaz

* Fix missing error handler

* Remove old tree and fix tests

* Gaz

* Fix build import

* Replace depositcache

* Add pendingDeposit tests

* Nishant's fix

* Fix unsafe uint64 to int

* Fix other unsafe uint64 to int

* Remove: RemovePendingDeposit

* Deprecate and remove DisableEIP4881 flag

* Check: index not greater than deposit count

* Move index check

(cherry picked from commit c8d6f47749)
2024-05-31 11:45:21 -05:00
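
Two of the bullets above (the unsafe uint64-to-int conversions and the index-versus-deposit-count check) amount to bounds-checking before converting; a sketch with assumed names, not the actual Prysm helpers:

package deposits

import (
    "fmt"
    "math"
)

// safeIndex validates a deposit index before converting it to int for slicing:
// it must not exceed the deposit count and must fit in the platform int.
func safeIndex(idx, depositCount uint64) (int, error) {
    if idx > depositCount {
        return 0, fmt.Errorf("index %d is greater than deposit count %d", idx, depositCount)
    }
    if idx > math.MaxInt {
        return 0, fmt.Errorf("index %d overflows int", idx)
    }
    return int(idx), nil
}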
Manu NALEPA
ea5c14affb Revert "zig: Update zig to recent main branch commit (#13142)" (#13908)
This reverts commit b24b60dbd8.

(cherry picked from commit a6f134e48e)
2024-05-31 11:45:21 -05:00
Preston Van Loon
f4f00e8f35 spectests: fail hard on missing test folders (#13913)
(cherry picked from commit fdbb5136d9)
2024-05-31 11:45:21 -05:00
Preston Van Loon
1fb4c38503 Refactor beacon-chain/core/helpers tests to be black box (#13906)
(cherry picked from commit 2c66918594)
2024-05-31 11:45:21 -05:00
Manu NALEPA
d041d8da32 Do not remove blobs DB in slasher. (#13881)
(cherry picked from commit a0dac292ff)
2024-05-31 11:44:40 -05:00
terence
11684392d1 Simplify prune invalid by reusing existing fork choice store call (#13878)
(cherry picked from commit 75857e7177)
2024-05-31 11:44:28 -05:00
1034 changed files with 36724 additions and 72086 deletions

View File

@@ -1,4 +1,4 @@
FROM golang:1.22-alpine
FROM golang:1.21-alpine
COPY entrypoint.sh /entrypoint.sh

View File

@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.22.3'
go-version: '1.21.5'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
@@ -36,7 +36,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.22.3'
go-version: '1.21.5'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}

View File

@@ -28,14 +28,14 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.22
- name: Set up Go 1.21
uses: actions/setup-go@v3
with:
go-version: '1.22.3'
go-version: '1.21.5'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
go install github.com/securego/gosec/v2/cmd/gosec@v2.19.0
go install github.com/securego/gosec/v2/cmd/gosec@v2.15.0
gosec -exclude-generated -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
lint:
@@ -45,10 +45,10 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.22
- name: Set up Go 1.21
uses: actions/setup-go@v3
with:
go-version: '1.22.3'
go-version: '1.21.5'
id: go
- name: Golangci-lint
@@ -64,7 +64,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: '1.22.3'
go-version: '1.21.5'
id: go
- name: Check out code into the Go module directory

View File

@@ -6,7 +6,7 @@ run:
- proto
- tools/analyzers
timeout: 10m
go: '1.22.3'
go: '1.21.5'
linters:
enable-all: true
@@ -34,6 +34,7 @@ linters:
- dogsled
- dupl
- durationcheck
- errorlint
- exhaustive
- exhaustruct
- forbidigo
@@ -51,7 +52,6 @@ linters:
- gofumpt
- gomnd
- gomoddirectives
- gosec
- inamedparam
- interfacebloat
- ireturn

View File

@@ -224,7 +224,6 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
"@org_golang_x_tools//go/analysis/passes/errorsas:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",

MODULE.bazel.lock generated
View File

@@ -1123,27 +1123,6 @@
"recordedRepoMappingEntries": []
}
},
"@@bazel_tools//tools/test:extensions.bzl%remote_coverage_tools_extension": {
"general": {
"bzlTransitiveDigest": "l5mcjH2gWmbmIycx97bzI2stD0Q0M5gpDc0aLOHKIm8=",
"recordedFileInputs": {},
"recordedDirentsInputs": {},
"envVariables": {},
"generatedRepoSpecs": {
"remote_coverage_tools": {
"bzlFile": "@@bazel_tools//tools/build_defs/repo:http.bzl",
"ruleClassName": "http_archive",
"attributes": {
"sha256": "7006375f6756819b7013ca875eab70a541cf7d89142d9c511ed78ea4fefa38af",
"urls": [
"https://mirror.bazel.build/bazel_coverage_output_generator/releases/coverage_output_generator-v2.6.zip"
]
}
}
},
"recordedRepoMappingEntries": []
}
},
"@@rules_java~//java:extensions.bzl%toolchains": {
"general": {
"bzlTransitiveDigest": "tJHbmWnq7m+9eUBnUdv7jZziQ26FmcGL9C5/hU3Q9UQ=",

View File

@@ -182,7 +182,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.22.4",
go_version = "1.21.8",
nogo = "@//:nogo",
)
@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.3"
consensus_spec_version = "v1.4.0"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-+byv+GUOQytex5GgtjBGVoNDseJZbiBdAjEtlgCbjEo=",
sha256 = "c282c0f86f23f3d2e0f71f5975769a4077e62a7e3c7382a16bd26a7e589811a0",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-JJUy/jT1h3kGQkinTuzL7gMOA1+qgmPgJXVrYuH63Cg=",
sha256 = "4649c35aa3b8eb0cfdc81bee7c05649f90ef36bede5b0513e1f2e8baf37d6033",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-T2VM4Qd0SwgGnTjWxjOX297DqEsovO9Ueij1UEJy48Y=",
sha256 = "c5a03f724f757456ffaabd2a899992a71d2baf45ee4db65ca3518f2b7ee928c8",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-OP9BCBcQ7i+93bwj7ktY8pZ5uWsGjgTe4XTp7BDhX+I=",
sha256 = "cd1c9d97baccbdde1d2454a7dceb8c6c61192a3b581eee12ffc94969f2db8453",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -332,14 +332,14 @@ http_archive(
filegroup(
name = "configs",
srcs = [
"metadata/config.yaml",
"custom_config_data/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-b7ZTT+olF+VXEJYNTV5jggNtCkt9dOejm1i2VE+zy+0=",
strip_prefix = "holesky-874c199423ccd180607320c38cbaca05d9a1573a",
url = "https://github.com/eth-clients/holesky/archive/874c199423ccd180607320c38cbaca05d9a1573a.tar.gz", # 2024-06-18
sha256 = "5f4be6fd088683ea9db45c863b9c5a1884422449e5b59fd2d561d3ba0f73ffd9",
strip_prefix = "holesky-9d9aabf2d4de51334ee5fed6c79a4d55097d1a43",
url = "https://github.com/eth-clients/holesky/archive/9d9aabf2d4de51334ee5fed6c79a4d55097d1a43.tar.gz", # 2024-01-22
)
http_archive(

View File

@@ -9,20 +9,22 @@ import (
"net/http"
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
blocktest "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks/testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
type testRT struct {

View File

@@ -11,7 +11,6 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
@@ -29,7 +28,6 @@ go_library(
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -51,11 +49,9 @@ go_test(
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -6,7 +6,6 @@ import (
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
@@ -23,7 +22,7 @@ type SignedBid interface {
type Bid interface {
Header() (interfaces.ExecutionData, error)
BlobKzgCommitments() ([][]byte, error)
Value() primitives.Wei
Value() []byte
Pubkey() []byte
Version() int
IsNil() bool
@@ -126,8 +125,8 @@ func (b builderBid) Version() int {
}
// Value --
func (b builderBid) Value() primitives.Wei {
return primitives.LittleEndianBytesToWei(b.p.Value)
func (b builderBid) Value() []byte {
return b.p.Value
}
// Pubkey --
@@ -166,7 +165,7 @@ func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {
// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header)
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlobKzgCommitments --
@@ -180,8 +179,8 @@ func (b builderBidCapella) Version() int {
}
// Value --
func (b builderBidCapella) Value() primitives.Wei {
return primitives.LittleEndianBytesToWei(b.p.Value)
func (b builderBidCapella) Value() []byte {
return b.p.Value
}
// Pubkey --
@@ -223,8 +222,8 @@ func (b builderBidDeneb) Version() int {
}
// Value --
func (b builderBidDeneb) Value() primitives.Wei {
return primitives.LittleEndianBytesToWei(b.p.Value)
func (b builderBidDeneb) Value() []byte {
return b.p.Value
}
// Pubkey --
@@ -250,7 +249,7 @@ func (b builderBidDeneb) HashTreeRootWith(hh *ssz.Hasher) error {
// Header --
func (b builderBidDeneb) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header)
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlobKzgCommitments --

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"math/big"
"net"
"net/http"
"net/url"
@@ -13,11 +14,11 @@ import (
"text/template"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -281,68 +282,133 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
return err
}
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full execution payload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
if !sb.IsBlinded() {
return nil, nil, errNotBlinded
}
// massage the proto struct type data into the api response type.
mj, err := structs.SignedBeaconBlockMessageJsoner(sb)
if err != nil {
return nil, nil, errors.Wrap(err, "error generating blinded beacon block post request")
}
body, err := json.Marshal(mj)
if err != nil {
return nil, nil, errors.Wrap(err, "error marshaling blinded block post request to json")
}
postOpts := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(sb.Version()))
r.Header.Set("Content-Type", api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
// post the blinded block - the execution payload response should contain the unblinded payload, along with the
// blobs bundle if it is post deneb.
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), postOpts)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the blinded block to the builder api")
}
// ExecutionPayloadResponse parses just the outer container and the Value key, enabling it to use the .Value
// key to determine which underlying data type to use to finish the unmarshaling.
ep := &ExecutionPayloadResponse{}
if err := json.Unmarshal(rb, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling the builder ExecutionPayloadResponse")
}
if strings.ToLower(ep.Version) != version.String(sb.Version()) {
return nil, nil, errors.Wrapf(errResponseVersionMismatch, "req=%s, recv=%s", strings.ToLower(ep.Version), version.String(sb.Version()))
}
// This parses the rest of the response and returns the inner data field.
pp, err := ep.ParsePayload()
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to parse execution payload from builder with version=%s", ep.Version)
}
// Get the payload as a proto.Message so it can be wrapped as an execution payload interface.
pb, err := pp.PayloadProto()
if err != nil {
return nil, nil, err
}
ed, err := blocks.NewWrappedExecutionData(pb)
if err != nil {
return nil, nil, err
}
bb, ok := pp.(BlobBundler)
if ok {
bbpb, err := bb.BundleProto()
switch sb.Version() {
case version.Bellatrix:
psb, err := sb.PbBlindedBellatrixBlock()
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to extract blobs bundle from builder response with version=%s", ep.Version)
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
return ed, bbpb, nil
b, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockBellatrix to json marshalable type")
}
body, err := json.Marshal(b)
if err != nil {
return nil, nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockBellatrix value body in SubmitBlindedBlock")
}
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Bellatrix))
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Accept", "application/json")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), versionOpt)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the SignedBlindedBeaconBlockBellatrix to the builder api")
}
ep := &ExecPayloadResponse{}
if err := json.Unmarshal(rb, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling the builder SubmitBlindedBlock response")
}
if strings.ToLower(ep.Version) != version.String(version.Bellatrix) {
return nil, nil, errors.New("not a bellatrix payload")
}
p, err := ep.ToProto()
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayload(p)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
return payload, nil, nil
case version.Capella:
psb, err := sb.PbBlindedCapellaBlock()
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockCapella to json marshalable type")
}
body, err := json.Marshal(b)
if err != nil {
return nil, nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockCapella value body in SubmitBlindedBlockCapella")
}
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Capella))
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Accept", "application/json")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), versionOpt)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the SignedBlindedBeaconBlockCapella to the builder api")
}
ep := &ExecPayloadResponseCapella{}
if err := json.Unmarshal(rb, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling the builder SubmitBlindedBlockCapella response")
}
if strings.ToLower(ep.Version) != version.String(version.Capella) {
return nil, nil, errors.New("not a capella payload")
}
p, err := ep.ToProto()
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadCapella(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
return payload, nil, nil
case version.Deneb:
psb, err := sb.PbBlindedDenebBlock()
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockDeneb{Message: psb.Message, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockDeneb to json marshalable type")
}
body, err := json.Marshal(b)
if err != nil {
return nil, nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockDeneb value body in SubmitBlindedBlockDeneb")
}
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Deneb))
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Accept", "application/json")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), versionOpt)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the SignedBlindedBeaconBlockDeneb to the builder api")
}
ep := &ExecPayloadResponseDeneb{}
if err := json.Unmarshal(rb, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling the builder SubmitBlindedBlockDeneb response")
}
if strings.ToLower(ep.Version) != version.String(version.Deneb) {
return nil, nil, errors.New("not a deneb payload")
}
p, blobBundle, err := ep.ToProto()
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadDeneb(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
return payload, blobBundle, nil
default:
return nil, nil, fmt.Errorf("unsupported block version %s", version.String(sb.Version()))
}
return ed, nil, nil
}
// Status asks the remote builder server for a health check. A response of 200 with an empty body is the success/healthy

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"math/big"
"net/http"
"net/url"
"strconv"
@@ -15,7 +16,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
@@ -198,12 +198,12 @@ func TestClient_GetHeader(t *testing.T) {
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedTxRoot, withdrawalsRoot))
require.Equal(t, uint64(1), bidHeader.GasUsed())
// this matches the value in the testExampleHeaderResponse
bidStr := "652312848583266388373324160190187140051835877600158453279131187530910662656"
value, err := stringToUint256(bidStr)
value, err := stringToUint256("652312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
require.Equal(t, 0, value.Int.Cmp(primitives.WeiToBigInt(bid.Value())))
require.Equal(t, bidStr, primitives.WeiToBigInt(bid.Value()).String())
require.Equal(t, fmt.Sprintf("%#x", value.SSZBytes()), fmt.Sprintf("%#x", bid.Value()))
bidValue := bytesutil.ReverseByteOrder(bid.Value())
require.DeepEqual(t, bidValue, value.Bytes())
require.DeepEqual(t, big.NewInt(0).SetBytes(bidValue), value.Int)
})
t.Run("capella", func(t *testing.T) {
hc := &http.Client{
@@ -230,11 +230,12 @@ func TestClient_GetHeader(t *testing.T) {
withdrawalsRoot, err := bidHeader.WithdrawalsRoot()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedWithdrawalsRoot, withdrawalsRoot))
bidStr := "652312848583266388373324160190187140051835877600158453279131187530910662656"
value, err := stringToUint256(bidStr)
value, err := stringToUint256("652312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
require.Equal(t, 0, value.Int.Cmp(primitives.WeiToBigInt(bid.Value())))
require.Equal(t, bidStr, primitives.WeiToBigInt(bid.Value()).String())
require.Equal(t, fmt.Sprintf("%#x", value.SSZBytes()), fmt.Sprintf("%#x", bid.Value()))
bidValue := bytesutil.ReverseByteOrder(bid.Value())
require.DeepEqual(t, bidValue, value.Bytes())
require.DeepEqual(t, big.NewInt(0).SetBytes(bidValue), value.Int)
})
t.Run("deneb", func(t *testing.T) {
hc := &http.Client{
@@ -261,13 +262,12 @@ func TestClient_GetHeader(t *testing.T) {
withdrawalsRoot, err := bidHeader.WithdrawalsRoot()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedWithdrawalsRoot, withdrawalsRoot))
bidStr := "652312848583266388373324160190187140051835877600158453279131187530910662656"
value, err := stringToUint256(bidStr)
value, err := stringToUint256("652312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
require.Equal(t, 0, value.Int.Cmp(primitives.WeiToBigInt(bid.Value())))
require.Equal(t, bidStr, primitives.WeiToBigInt(bid.Value()).String())
require.Equal(t, fmt.Sprintf("%#x", value.SSZBytes()), fmt.Sprintf("%#x", bid.Value()))
bidValue := bytesutil.ReverseByteOrder(bid.Value())
require.DeepEqual(t, bidValue, value.Bytes())
require.DeepEqual(t, big.NewInt(0).SetBytes(bidValue), value.Int)
kcgCommitments, err := bid.BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, len(kcgCommitments) > 0, true)
@@ -432,7 +432,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
_, _, err = c.SubmitBlindedBlock(ctx, sbbb)
require.ErrorIs(t, err, errResponseVersionMismatch)
require.ErrorContains(t, "not a bellatrix payload", err)
})
t.Run("not blinded", func(t *testing.T) {
sbb, err := blocks.NewSignedBeaconBlock(&eth.SignedBeaconBlockBellatrix{Block: &eth.BeaconBlockBellatrix{Body: &eth.BeaconBlockBodyBellatrix{ExecutionPayload: &v1.ExecutionPayload{}}}})

View File

@@ -9,15 +9,11 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"google.golang.org/protobuf/proto"
)
var errInvalidUint256 = errors.New("invalid Uint256")
@@ -48,9 +44,6 @@ func sszBytesToUint256(b []byte) (Uint256, error) {
// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256.
func (s Uint256) SSZBytes() []byte {
if s.Int == nil {
s.Int = big.NewInt(0)
}
if !math.IsValidUint256(s.Int) {
return []byte{}
}
@@ -98,9 +91,6 @@ func (s Uint256) MarshalJSON() ([]byte, error) {
// MarshalText returns a text byte representation of Uint256.
func (s Uint256) MarshalText() ([]byte, error) {
if s.Int == nil {
s.Int = big.NewInt(0)
}
if !math.IsValidUint256(s.Int) {
return nil, errors.Wrapf(errInvalidUint256, "value=%s", s.Int)
}
@@ -156,8 +146,6 @@ func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
}
return &eth.BuilderBid{
Header: header,
// Note that SSZBytes() reverses byte order for the little-endian representation.
// Uint256.Bytes() is big-endian, SSZBytes takes this value and reverses it.
Value: bb.Value.SSZBytes(),
Pubkey: bb.Pubkey,
}, nil
@@ -277,11 +265,6 @@ func (r *ExecPayloadResponse) ToProto() (*v1.ExecutionPayload, error) {
return r.Data.ToProto()
}
func (r *ExecutionPayload) PayloadProto() (proto.Message, error) {
pb, err := r.ToProto()
return pb, err
}
// ToProto returns a ExecutionPayload Proto
func (p *ExecutionPayload) ToProto() (*v1.ExecutionPayload, error) {
txs := make([][]byte, len(p.Transactions))
@@ -413,51 +396,6 @@ func FromProtoDeneb(payload *v1.ExecutionPayloadDeneb) (ExecutionPayloadDeneb, e
}, nil
}
var errInvalidTypeConversion = errors.New("unable to translate between api and foreign type")
// ExecutionPayloadResponseFromData converts an ExecutionData interface value to a payload response.
// This involves serializing the execution payload value so that the abstract payload envelope can be used.
func ExecutionPayloadResponseFromData(ed interfaces.ExecutionData, bundle *v1.BlobsBundle) (*ExecutionPayloadResponse, error) {
pb := ed.Proto()
var data interface{}
var err error
var ver string
switch pbStruct := pb.(type) {
case *v1.ExecutionPayload:
ver = version.String(version.Bellatrix)
data, err = FromProto(pbStruct)
if err != nil {
return nil, errors.Wrap(err, "failed to convert a Bellatrix ExecutionPayload to an API response")
}
case *v1.ExecutionPayloadCapella:
ver = version.String(version.Capella)
data, err = FromProtoCapella(pbStruct)
if err != nil {
return nil, errors.Wrap(err, "failed to convert a Capella ExecutionPayload to an API response")
}
case *v1.ExecutionPayloadDeneb:
ver = version.String(version.Deneb)
payloadStruct, err := FromProtoDeneb(pbStruct)
if err != nil {
return nil, errors.Wrap(err, "failed to convert a Deneb ExecutionPayload to an API response")
}
data = &ExecutionPayloadDenebAndBlobsBundle{
ExecutionPayload: &payloadStruct,
BlobsBundle: FromBundleProto(bundle),
}
default:
return nil, errInvalidTypeConversion
}
encoded, err := json.Marshal(data)
if err != nil {
return nil, errors.Wrapf(err, "failed to marshal execution payload version=%s", ver)
}
return &ExecutionPayloadResponse{
Version: ver,
Data: encoded,
}, nil
}
// ExecHeaderResponseCapella is the response of builder API /eth/v1/builder/header/{slot}/{parent_hash}/{pubkey} for Capella.
type ExecHeaderResponseCapella struct {
Data struct {
@@ -486,8 +424,6 @@ func (bb *BuilderBidCapella) ToProto() (*eth.BuilderBidCapella, error) {
}
return &eth.BuilderBidCapella{
Header: header,
// Note that SSZBytes() reverses byte order for the little-endian representation.
// Uint256.Bytes() is big-endian, SSZBytes takes this value and reverses it.
Value: bytesutil.SafeCopyBytes(bb.Value.SSZBytes()),
Pubkey: bytesutil.SafeCopyBytes(bb.Pubkey),
}, nil
@@ -587,42 +523,6 @@ type ExecPayloadResponseCapella struct {
Data ExecutionPayloadCapella `json:"data"`
}
// ExecutionPayloadResponse allows for unmarshaling just the Version field of the payload.
// This allows it to return different ExecutionPayload types based on the version field.
type ExecutionPayloadResponse struct {
Version string `json:"version"`
Data json.RawMessage `json:"data"`
}
// ParsedPayload can retrieve the underlying protobuf message for the given execution payload response.
type ParsedPayload interface {
PayloadProto() (proto.Message, error)
}
// BlobBundler can retrieve the underlying blob bundle protobuf message for the given execution payload response.
type BlobBundler interface {
BundleProto() (*v1.BlobsBundle, error)
}
func (r *ExecutionPayloadResponse) ParsePayload() (ParsedPayload, error) {
var toProto ParsedPayload
switch r.Version {
case version.String(version.Bellatrix):
toProto = &ExecutionPayload{}
case version.String(version.Capella):
toProto = &ExecutionPayloadCapella{}
case version.String(version.Deneb):
toProto = &ExecutionPayloadDenebAndBlobsBundle{}
default:
return nil, consensusblocks.ErrUnsupportedVersion
}
if err := json.Unmarshal(r.Data, toProto); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal the response .Data field with the stated version schema")
}
return toProto, nil
}
// ExecutionPayloadCapella is a field of ExecPayloadResponseCapella.
type ExecutionPayloadCapella struct {
ParentHash hexutil.Bytes `json:"parent_hash"`
@@ -647,11 +547,6 @@ func (r *ExecPayloadResponseCapella) ToProto() (*v1.ExecutionPayloadCapella, err
return r.Data.ToProto()
}
func (p *ExecutionPayloadCapella) PayloadProto() (proto.Message, error) {
pb, err := p.ToProto()
return pb, err
}
// ToProto returns a ExecutionPayloadCapella Proto.
func (p *ExecutionPayloadCapella) ToProto() (*v1.ExecutionPayloadCapella, error) {
txs := make([][]byte, len(p.Transactions))
@@ -1026,10 +921,8 @@ func (bb *BuilderBidDeneb) ToProto() (*eth.BuilderBidDeneb, error) {
return &eth.BuilderBidDeneb{
Header: header,
BlobKzgCommitments: kzgCommitments,
// Note that SSZBytes() reverses byte order for the little-endian representation.
// Uint256.Bytes() is big-endian, SSZBytes takes this value and reverses it.
Value: bytesutil.SafeCopyBytes(bb.Value.SSZBytes()),
Pubkey: bytesutil.SafeCopyBytes(bb.Pubkey),
Value: bytesutil.SafeCopyBytes(bb.Value.SSZBytes()),
Pubkey: bytesutil.SafeCopyBytes(bb.Pubkey),
}, nil
}
@@ -1235,12 +1128,6 @@ func (r *ExecPayloadResponseDeneb) ToProto() (*v1.ExecutionPayloadDeneb, *v1.Blo
if r.Data == nil {
return nil, nil, errors.New("data field in response is empty")
}
if r.Data.ExecutionPayload == nil {
return nil, nil, errors.Wrap(consensusblocks.ErrNilObject, "nil execution payload")
}
if r.Data.BlobsBundle == nil {
return nil, nil, errors.Wrap(consensusblocks.ErrNilObject, "nil blobs bundle")
}
payload, err := r.Data.ExecutionPayload.ToProto()
if err != nil {
return nil, nil, err
@@ -1252,26 +1139,8 @@ func (r *ExecPayloadResponseDeneb) ToProto() (*v1.ExecutionPayloadDeneb, *v1.Blo
return payload, bundle, nil
}
func (r *ExecutionPayloadDenebAndBlobsBundle) PayloadProto() (proto.Message, error) {
if r.ExecutionPayload == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil execution payload in combined deneb payload")
}
pb, err := r.ExecutionPayload.ToProto()
return pb, err
}
func (r *ExecutionPayloadDenebAndBlobsBundle) BundleProto() (*v1.BlobsBundle, error) {
if r.BlobsBundle == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil blobs bundle")
}
return r.BlobsBundle.ToProto()
}
// ToProto returns the ExecutionPayloadDeneb Proto.
func (p *ExecutionPayloadDeneb) ToProto() (*v1.ExecutionPayloadDeneb, error) {
if p == nil {
return nil, errors.Wrap(consensusblocks.ErrNilObject, "nil execution payload")
}
txs := make([][]byte, len(p.Transactions))
for i := range p.Transactions {
txs[i] = bytesutil.SafeCopyBytes(p.Transactions[i])

View File

@@ -12,15 +12,12 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/math"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
@@ -1603,6 +1600,7 @@ func TestBuilderBidUnmarshalUint256(t *testing.T) {
require.NoError(t, expectedValue.UnmarshalText([]byte(base10)))
r := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testBuilderBid), r))
//require.Equal(t, expectedValue, r.Data.Message.Value)
marshaled := r.Data.Message.Value.String()
require.Equal(t, base10, marshaled)
require.Equal(t, 0, expectedValue.Cmp(r.Data.Message.Value.Int))
@@ -1909,41 +1907,3 @@ func TestErrorMessage_non200Err(t *testing.T) {
})
}
}
func TestEmptyResponseBody(t *testing.T) {
t.Run("empty buffer", func(t *testing.T) {
var b []byte
r := &ExecutionPayloadResponse{}
err := json.Unmarshal(b, r)
var syntaxError *json.SyntaxError
ok := errors.As(err, &syntaxError)
require.Equal(t, true, ok)
})
t.Run("empty object", func(t *testing.T) {
empty := []byte("{}")
emptyResponse := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal(empty, emptyResponse))
_, err := emptyResponse.ParsePayload()
require.ErrorIs(t, err, consensusblocks.ErrUnsupportedVersion)
})
versions := []int{version.Bellatrix, version.Capella, version.Deneb}
for i := range versions {
vstr := version.String(versions[i])
t.Run("populated version without payload"+vstr, func(t *testing.T) {
in := &ExecutionPayloadResponse{Version: vstr}
encoded, err := json.Marshal(in)
require.NoError(t, err)
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal(encoded, epr))
pp, err := epr.ParsePayload()
require.NoError(t, err)
pb, err := pp.PayloadProto()
if err == nil {
require.NoError(t, err)
require.Equal(t, false, pb == nil)
} else {
require.ErrorIs(t, err, consensusblocks.ErrNilObject)
}
})
}
}

View File

@@ -2,10 +2,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"event_stream.go",
"utils.go",
],
srcs = ["event_stream.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/event",
visibility = ["//visibility:public"],
deps = [
@@ -18,10 +15,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"event_stream_test.go",
"utils_test.go",
],
srcs = ["event_stream_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/require:go_default_library",

View File

@@ -102,8 +102,6 @@ func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
}()
// Create a new scanner to read lines from the response body
scanner := bufio.NewScanner(resp.Body)
// Set the split function for the scanning operation
scanner.Split(scanLinesWithCarriage)
var eventType, data string // Variables to store event type and data
@@ -115,7 +113,7 @@ func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
close(eventsChannel)
return
default:
line := scanner.Text()
line := scanner.Text() // TODO(13730): scanner does not handle /r and does not fully adhere to https://html.spec.whatwg.org/multipage/server-sent-events.html#the-eventsource-interface
// Handle the event based on your specific format
if line == "" {
// Empty line indicates the end of an event

View File

@@ -43,9 +43,8 @@ func TestEventStream(t *testing.T) {
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, r *http.Request) {
flusher, ok := w.(http.Flusher)
require.Equal(t, true, ok)
for i := 1; i <= 3; i++ {
events := [3]string{"event: head\ndata: data%d\n\n", "event: head\rdata: data%d\r\r", "event: head\r\ndata: data%d\r\n\r\n"}
_, err := fmt.Fprintf(w, events[i-1], i)
for i := 1; i <= 2; i++ {
_, err := fmt.Fprintf(w, "event: head\ndata: data%d\n\n", i)
require.NoError(t, err)
flusher.Flush() // Trigger flush to simulate streaming data
time.Sleep(100 * time.Millisecond) // Simulate delay between events
@@ -63,7 +62,7 @@ func TestEventStream(t *testing.T) {
// Collect events
var events []*Event
for len(events) != 3 {
for len(events) != 2 {
select {
case event := <-eventsChannel:
log.Info(event)
@@ -72,7 +71,7 @@ func TestEventStream(t *testing.T) {
}
// Assertions to verify the events content
expectedData := []string{"data1", "data2", "data3"}
expectedData := []string{"data1", "data2"}
for i, event := range events {
if string(event.Data) != expectedData[i] {
t.Errorf("Expected event data %q, got %q", expectedData[i], string(event.Data))

View File

@@ -1,36 +0,0 @@
package event
import (
"bytes"
)
// adapted from ScanLines in scan.go to handle carriage return characters as separators
func scanLinesWithCarriage(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i, j := bytes.IndexByte(data, '\n'), bytes.IndexByte(data, '\r'); i >= 0 || j >= 0 {
in := i
// Select the first index of \n or \r or the second index of \r if it is followed by \n
if i < 0 || (i > j && i != j+1 && j >= 0) {
in = j
}
// We have a full newline-terminated line.
return in + 1, dropCR(data[0:in]), nil
}
// If we're at EOF, we have a final, non-terminated line. Return it.
if atEOF {
return len(data), dropCR(data), nil
}
// Request more data.
return 0, nil, nil
}
// dropCR drops a terminal \r from the data.
func dropCR(data []byte) []byte {
if len(data) > 0 && data[len(data)-1] == '\r' {
return data[0 : len(data)-1]
}
return data
}

View File

@@ -1,97 +0,0 @@
package event
import (
"bufio"
"bytes"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestScanLinesWithCarriage(t *testing.T) {
testCases := []struct {
name string
input string
expected []string
}{
{
name: "LF line endings",
input: "line1\nline2\nline3",
expected: []string{"line1", "line2", "line3"},
},
{
name: "CR line endings",
input: "line1\rline2\rline3",
expected: []string{"line1", "line2", "line3"},
},
{
name: "CRLF line endings",
input: "line1\r\nline2\r\nline3",
expected: []string{"line1", "line2", "line3"},
},
{
name: "Mixed line endings",
input: "line1\nline2\rline3\r\nline4",
expected: []string{"line1", "line2", "line3", "line4"},
},
{
name: "Empty lines",
input: "line1\n\nline2\r\rline3",
expected: []string{"line1", "", "line2", "", "line3"},
},
{
name: "Empty lines 2",
input: "line1\n\rline2\n\rline3",
expected: []string{"line1", "", "line2", "", "line3"},
},
{
name: "No line endings",
input: "single line without ending",
expected: []string{"single line without ending"},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
scanner := bufio.NewScanner(bytes.NewReader([]byte(tc.input)))
scanner.Split(scanLinesWithCarriage)
var lines []string
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
require.NoError(t, scanner.Err())
require.Equal(t, len(tc.expected), len(lines), "Number of lines does not match")
for i, line := range lines {
require.Equal(t, tc.expected[i], line, "Line %d does not match", i)
}
})
}
}
// TestScanLinesWithCarriageEdgeCases tests edge cases and potential error scenarios
func TestScanLinesWithCarriageEdgeCases(t *testing.T) {
t.Run("Empty input", func(t *testing.T) {
scanner := bufio.NewScanner(bytes.NewReader([]byte("")))
scanner.Split(scanLinesWithCarriage)
require.Equal(t, scanner.Scan(), false)
require.NoError(t, scanner.Err())
})
t.Run("Very long line", func(t *testing.T) {
longLine := bytes.Repeat([]byte("a"), bufio.MaxScanTokenSize+1)
scanner := bufio.NewScanner(bytes.NewReader(longLine))
scanner.Split(scanLinesWithCarriage)
require.Equal(t, scanner.Scan(), false)
require.NotNil(t, scanner.Err())
})
t.Run("Line ending at max token size", func(t *testing.T) {
input := append(bytes.Repeat([]byte("a"), bufio.MaxScanTokenSize-1), '\n')
scanner := bufio.NewScanner(bytes.NewReader(input))
scanner.Split(scanLinesWithCarriage)
require.Equal(t, scanner.Scan(), true)
require.Equal(t, string(bytes.Repeat([]byte("a"), bufio.MaxScanTokenSize-1)), scanner.Text())
})
}

View File

@@ -14,7 +14,7 @@ go_library(
"//validator:__subpackages__",
],
deps = [
"//api/server/middleware:go_default_library",
"//api/server:go_default_library",
"//runtime:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",

View File

@@ -11,7 +11,7 @@ import (
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/runtime"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
@@ -104,7 +104,7 @@ func (g *Gateway) Start() {
}
}
corsMux := middleware.CorsHandler(g.cfg.allowedOrigins).Middleware(g.cfg.router)
corsMux := server.CorsHandler(g.cfg.allowedOrigins).Middleware(g.cfg.router)
if g.cfg.muxHandler != nil {
g.cfg.router.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {

View File

@@ -2,14 +2,29 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["error.go"],
srcs = [
"error.go",
"middleware.go",
"util.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server",
visibility = ["//visibility:public"],
deps = [
"@com_github_gorilla_mux//:go_default_library",
"@com_github_rs_cors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["error_test.go"],
srcs = [
"error_test.go",
"middleware_test.go",
"util_test.go",
],
embed = [":go_default_library"],
deps = ["//testing/assert:go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -1,7 +1,6 @@
package server
import (
"errors"
"fmt"
"strings"
)
@@ -16,8 +15,7 @@ type DecodeError struct {
// NewDecodeError wraps an error (either the initial decoding error or another DecodeError).
// The current field that failed decoding must be passed in.
func NewDecodeError(err error, field string) *DecodeError {
var de *DecodeError
ok := errors.As(err, &de)
de, ok := err.(*DecodeError)
if ok {
return &DecodeError{path: append([]string{field}, de.path...), err: de.err}
}

api/server/middleware.go Normal file
View File

@@ -0,0 +1,32 @@
package server
import (
"net/http"
"github.com/gorilla/mux"
"github.com/rs/cors"
)
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
NormalizeQueryValues(query)
r.URL.RawQuery = query.Encode()
next.ServeHTTP(w, r)
})
}
// CorsHandler sets the cors settings on api endpoints
func CorsHandler(allowOrigins []string) mux.MiddlewareFunc {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler
}

View File

@@ -1,29 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"middleware.go",
"util.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/middleware",
visibility = ["//visibility:public"],
deps = [
"@com_github_gorilla_mux//:go_default_library",
"@com_github_rs_cors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"middleware_test.go",
"util_test.go",
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -1,112 +0,0 @@
package middleware
import (
"fmt"
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/rs/cors"
)
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
NormalizeQueryValues(query)
r.URL.RawQuery = query.Encode()
next.ServeHTTP(w, r)
})
}
// CorsHandler sets the cors settings on api endpoints
func CorsHandler(allowOrigins []string) mux.MiddlewareFunc {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler
}
// ContentTypeHandler checks request for the appropriate media types otherwise returning a http.StatusUnsupportedMediaType error
func ContentTypeHandler(acceptedMediaTypes []string) mux.MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// skip the GET request
if r.Method == http.MethodGet {
next.ServeHTTP(w, r)
return
}
contentType := r.Header.Get("Content-Type")
if contentType == "" {
http.Error(w, "Content-Type header is missing", http.StatusUnsupportedMediaType)
return
}
accepted := false
for _, acceptedType := range acceptedMediaTypes {
if strings.Contains(strings.TrimSpace(contentType), strings.TrimSpace(acceptedType)) {
accepted = true
break
}
}
if !accepted {
http.Error(w, fmt.Sprintf("Unsupported media type: %s", contentType), http.StatusUnsupportedMediaType)
return
}
next.ServeHTTP(w, r)
})
}
}
// AcceptHeaderHandler checks if the client's response preference is handled
func AcceptHeaderHandler(serverAcceptedTypes []string) mux.MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
acceptHeader := r.Header.Get("Accept")
// header is optional and should skip if not provided
if acceptHeader == "" {
next.ServeHTTP(w, r)
return
}
accepted := false
acceptTypes := strings.Split(acceptHeader, ",")
// follows rules defined in https://datatracker.ietf.org/doc/html/rfc2616#section-14.1
for _, acceptType := range acceptTypes {
acceptType = strings.TrimSpace(acceptType)
if acceptType == "*/*" {
accepted = true
break
}
for _, serverAcceptedType := range serverAcceptedTypes {
if strings.HasPrefix(acceptType, serverAcceptedType) {
accepted = true
break
}
if acceptType != "/*" && strings.HasSuffix(acceptType, "/*") && strings.HasPrefix(serverAcceptedType, acceptType[:len(acceptType)-2]) {
accepted = true
break
}
}
if accepted {
break
}
}
if !accepted {
http.Error(w, fmt.Sprintf("Not Acceptable: %s", acceptHeader), http.StatusNotAcceptable)
return
}
next.ServeHTTP(w, r)
})
}
}

View File

@@ -1,204 +0,0 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := NormalizeQueryValuesHandler(nextHandler)
tests := []struct {
name string
inputQuery string
expectedQuery string
}{
{
name: "3 values",
inputQuery: "key=value1,value2,value3",
expectedQuery: "key=value1&key=value2&key=value3", // replace with expected normalized value
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, http.NoBody)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", rr.Code, http.StatusOK)
}
if req.URL.RawQuery != test.expectedQuery {
t.Errorf("query not normalized: got %v want %v", req.URL.RawQuery, test.expectedQuery)
}
if rr.Body.String() != "next handler" {
t.Errorf("next handler was not executed")
}
})
}
}
func TestContentTypeHandler(t *testing.T) {
acceptedMediaTypes := []string{api.JsonMediaType, api.OctetStreamMediaType}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := ContentTypeHandler(acceptedMediaTypes)(nextHandler)
tests := []struct {
name string
contentType string
expectedStatusCode int
isGet bool
}{
{
name: "Accepted Content-Type - application/json",
contentType: api.JsonMediaType,
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Content-Type - ssz format",
contentType: api.OctetStreamMediaType,
expectedStatusCode: http.StatusOK,
},
{
name: "Unsupported Content-Type - text/plain",
contentType: "text/plain",
expectedStatusCode: http.StatusUnsupportedMediaType,
},
{
name: "Missing Content-Type",
contentType: "",
expectedStatusCode: http.StatusUnsupportedMediaType,
},
{
name: "GET request skips content type check",
contentType: "",
expectedStatusCode: http.StatusOK,
isGet: true,
},
{
name: "Content type contains charset is ok",
contentType: "application/json; charset=utf-8",
expectedStatusCode: http.StatusOK,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
httpMethod := http.MethodPost
if tt.isGet {
httpMethod = http.MethodGet
}
req := httptest.NewRequest(httpMethod, "/", nil)
if tt.contentType != "" {
req.Header.Set("Content-Type", tt.contentType)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if status := rr.Code; status != tt.expectedStatusCode {
t.Errorf("handler returned wrong status code: got %v want %v", status, tt.expectedStatusCode)
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := AcceptHeaderHandler(acceptedTypes)(nextHandler)
tests := []struct {
name string
acceptHeader string
expectedStatusCode int
}{
{
name: "Accepted Accept-Type - application/json",
acceptHeader: "application/json",
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Accept-Type - application/octet-stream",
acceptHeader: "application/octet-stream",
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Accept-Type with parameters",
acceptHeader: "application/json;q=0.9, application/octet-stream;q=0.8",
expectedStatusCode: http.StatusOK,
},
{
name: "Unsupported Accept-Type - text/plain",
acceptHeader: "text/plain",
expectedStatusCode: http.StatusNotAcceptable,
},
{
name: "Missing Accept header",
acceptHeader: "",
expectedStatusCode: http.StatusOK,
},
{
name: "*/* is accepted",
acceptHeader: "*/*",
expectedStatusCode: http.StatusOK,
},
{
name: "application/* is accepted",
acceptHeader: "application/*",
expectedStatusCode: http.StatusOK,
},
{
name: "/* is unsupported",
acceptHeader: "/*",
expectedStatusCode: http.StatusNotAcceptable,
},
{
name: "application/ is unsupported",
acceptHeader: "application/",
expectedStatusCode: http.StatusNotAcceptable,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
if tt.acceptHeader != "" {
req.Header.Set("Accept", tt.acceptHeader)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if status := rr.Code; status != tt.expectedStatusCode {
t.Errorf("handler returned wrong status code: got %v want %v", status, tt.expectedStatusCode)
}
})
}
}
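
A minimal wiring sketch (not from the repo) of how the three handlers exercised by the tests above might be chained around a business handler. The signatures are inferred from the test usage — `ContentTypeHandler` and `AcceptHeaderHandler` return middleware constructors, `NormalizeQueryValuesHandler` wraps a handler directly — and the package placement plus the helper name `newAPIHandler` are assumptions.

```go
// Sketch only: assumes it lives alongside ContentTypeHandler,
// AcceptHeaderHandler and NormalizeQueryValuesHandler (this diff moves them
// from `middleware` to `server`), and that api.JsonMediaType /
// api.OctetStreamMediaType are the accepted media types.
package server

import (
	"net/http"

	"github.com/prysmaticlabs/prysm/v5/api"
)

// newAPIHandler (hypothetical name) stacks the middleware around a handler.
func newAPIHandler(h http.Handler) http.Handler {
	accepted := []string{api.JsonMediaType, api.OctetStreamMediaType}
	h = NormalizeQueryValuesHandler(h)   // innermost: rewrite comma-separated query values
	h = ContentTypeHandler(accepted)(h)  // reject unsupported Content-Type on non-GET requests
	h = AcceptHeaderHandler(accepted)(h) // reject unsupported Accept headers
	return h
}
```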

View File

@@ -0,0 +1,54 @@
package server
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := NormalizeQueryValuesHandler(nextHandler)
tests := []struct {
name string
inputQuery string
expectedQuery string
}{
{
name: "3 values",
inputQuery: "key=value1,value2,value3",
expectedQuery: "key=value1&key=value2&key=value3", // replace with expected normalized value
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, nil)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", rr.Code, http.StatusOK)
}
if req.URL.RawQuery != test.expectedQuery {
t.Errorf("query not normalized: got %v want %v", req.URL.RawQuery, test.expectedQuery)
}
if rr.Body.String() != "next handler" {
t.Errorf("next handler was not executed")
}
})
}
}
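
The test above pins down the normalization contract: comma-separated values in a single query parameter become repeated keys. A self-contained stand-in (not the package's implementation) that reproduces that behaviour with `net/url`:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeQueryValues splits comma-separated query values into repeated
// keys, mirroring "key=value1,value2,value3" -> "key=value1&key=value2&key=value3".
func normalizeQueryValues(q url.Values) {
	for key, vals := range q {
		var split []string
		for _, v := range vals {
			split = append(split, strings.Split(v, ",")...)
		}
		q[key] = split
	}
}

func main() {
	u, _ := url.Parse("/test?key=value1,value2,value3")
	q := u.Query()
	normalizeQueryValues(q)
	u.RawQuery = q.Encode()
	fmt.Println(u.RawQuery) // key=value1&key=value2&key=value3
}
```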

View File

@@ -26,7 +26,6 @@ go_library(
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",

View File

@@ -1,34 +1,10 @@
package structs
import "encoding/json"
// MessageJsoner describes a signed consensus type wrapper that can return the `.Message` field in a json envelope
// encoded as a []byte, for use as a json.RawMessage value when encoding the outer envelope.
type MessageJsoner interface {
MessageRawJson() ([]byte, error)
}
// SignedMessageJsoner embeds MessageJsoner and adds a method to also retrieve the Signature field as a string.
type SignedMessageJsoner interface {
MessageJsoner
SigString() string
}
type SignedBeaconBlock struct {
Message *BeaconBlock `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlock{}
func (s *SignedBeaconBlock) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlock) SigString() string {
return s.Signature
}
type BeaconBlock struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -53,16 +29,6 @@ type SignedBeaconBlockAltair struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockAltair{}
func (s *SignedBeaconBlockAltair) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockAltair) SigString() string {
return s.Signature
}
type BeaconBlockAltair struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -88,16 +54,6 @@ type SignedBeaconBlockBellatrix struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockBellatrix{}
func (s *SignedBeaconBlockBellatrix) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockBellatrix) SigString() string {
return s.Signature
}
type BeaconBlockBellatrix struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -124,16 +80,6 @@ type SignedBlindedBeaconBlockBellatrix struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBlindedBeaconBlockBellatrix{}
func (s *SignedBlindedBeaconBlockBellatrix) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBlindedBeaconBlockBellatrix) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBellatrix struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -160,16 +106,6 @@ type SignedBeaconBlockCapella struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockCapella{}
func (s *SignedBeaconBlockCapella) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockCapella) SigString() string {
return s.Signature
}
type BeaconBlockCapella struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -197,16 +133,6 @@ type SignedBlindedBeaconBlockCapella struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBlindedBeaconBlockCapella{}
func (s *SignedBlindedBeaconBlockCapella) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBlindedBeaconBlockCapella) SigString() string {
return s.Signature
}
type BlindedBeaconBlockCapella struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -246,16 +172,6 @@ type SignedBeaconBlockDeneb struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockDeneb{}
func (s *SignedBeaconBlockDeneb) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockDeneb) SigString() string {
return s.Signature
}
type BeaconBlockDeneb struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -292,16 +208,6 @@ type SignedBlindedBeaconBlockDeneb struct {
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBlindedBeaconBlockDeneb{}
func (s *SignedBlindedBeaconBlockDeneb) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBlindedBeaconBlockDeneb) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBodyDeneb struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
@@ -317,94 +223,6 @@ type BlindedBeaconBlockBodyDeneb struct {
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type SignedBeaconBlockContentsElectra struct {
SignedBlock *SignedBeaconBlockElectra `json:"signed_block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type BeaconBlockContentsElectra struct {
Block *BeaconBlockElectra `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type SignedBeaconBlockElectra struct {
Message *BeaconBlockElectra `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockElectra{}
func (s *SignedBeaconBlockElectra) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockElectra) SigString() string {
return s.Signature
}
type BeaconBlockElectra struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyElectra `json:"body"`
}
type BeaconBlockBodyElectra struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadElectra `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type BlindedBeaconBlockElectra struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyElectra `json:"body"`
}
type SignedBlindedBeaconBlockElectra struct {
Message *BlindedBeaconBlockElectra `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBlindedBeaconBlockElectra{}
func (s *SignedBlindedBeaconBlockElectra) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBlindedBeaconBlockElectra) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBodyElectra struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderElectra `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type SignedBeaconBlockHeaderContainer struct {
Header *SignedBeaconBlockHeader `json:"header"`
Root string `json:"root"`
@@ -533,49 +351,3 @@ type ExecutionPayloadHeaderDeneb struct {
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadElectra struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
DepositRequests []*DepositRequest `json:"deposit_requests"`
WithdrawalRequests []*WithdrawalRequest `json:"withdrawal_requests"`
ConsolidationRequests []*ConsolidationRequest `json:"consolidation_requests"`
}
type ExecutionPayloadHeaderElectra struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
DepositRequestsRoot string `json:"deposit_requests_root"`
WithdrawalRequestsRoot string `json:"withdrawal_requests_root"`
ConsolidationRequestsRoot string `json:"consolidation_requests_root"`
}
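
The MessageJsoner/SignedMessageJsoner interfaces removed near the top of this file let callers marshal the inner `message` once and splice it into the outer signed envelope as a `json.RawMessage`. A self-contained sketch of that consumption pattern, with toy types standing in for the block structs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Interface shapes copied from the removed code above.
type MessageJsoner interface {
	MessageRawJson() ([]byte, error)
}

type SignedMessageJsoner interface {
	MessageJsoner
	SigString() string
}

// signedEnvelope is the generic outer wrapper: the inner message is embedded
// verbatim as raw JSON, the signature is taken from SigString.
type signedEnvelope struct {
	Message   json.RawMessage `json:"message"`
	Signature string          `json:"signature"`
}

func marshalSigned(s SignedMessageJsoner) ([]byte, error) {
	msg, err := s.MessageRawJson()
	if err != nil {
		return nil, err
	}
	return json.Marshal(&signedEnvelope{Message: msg, Signature: s.SigString()})
}

// Toy implementation used only to exercise marshalSigned.
type signedThing struct {
	Slot string
	Sig  string
}

func (s *signedThing) MessageRawJson() ([]byte, error) {
	return json.Marshal(struct {
		Slot string `json:"slot"`
	}{Slot: s.Slot})
}

func (s *signedThing) SigString() string { return s.Sig }

func main() {
	out, err := marshalSigned(&signedThing{Slot: "1", Sig: "0xdeadbeef"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"message":{"slot":"1"},"signature":"0xdeadbeef"}
}
```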

View File

@@ -8,10 +8,11 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v5/container/slice"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
@@ -340,42 +341,6 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
}, nil
}
func (s *SignedAggregateAttestationAndProofElectra) ToConsensus() (*eth.SignedAggregateAttestationAndProofElectra, error) {
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
sig, err := bytesutil.DecodeHexWithLength(s.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SignedAggregateAttestationAndProofElectra{
Message: msg,
Signature: sig,
}, nil
}
func (a *AggregateAttestationAndProofElectra) ToConsensus() (*eth.AggregateAttestationAndProofElectra, error) {
aggIndex, err := strconv.ParseUint(a.AggregatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
}
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, 96)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
return &eth.AggregateAttestationAndProofElectra{
AggregatorIndex: primitives.ValidatorIndex(aggIndex),
Aggregate: agg,
SelectionProof: proof,
}, nil
}
func (a *Attestation) ToConsensus() (*eth.Attestation, error) {
aggBits, err := hexutil.Decode(a.AggregationBits)
if err != nil {
@@ -405,41 +370,6 @@ func AttFromConsensus(a *eth.Attestation) *Attestation {
}
}
func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
aggBits, err := hexutil.Decode(a.AggregationBits)
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
}
sig, err := bytesutil.DecodeHexWithLength(a.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
committeeBits, err := hexutil.Decode(a.CommitteeBits)
if err != nil {
return nil, server.NewDecodeError(err, "CommitteeBits")
}
return &eth.AttestationElectra{
AggregationBits: aggBits,
Data: data,
Signature: sig,
CommitteeBits: committeeBits,
}, nil
}
func AttElectraFromConsensus(a *eth.AttestationElectra) *AttestationElectra {
return &AttestationElectra{
AggregationBits: hexutil.Encode(a.AggregationBits),
Data: AttDataFromConsensus(a.Data),
Signature: hexutil.Encode(a.Signature),
CommitteeBits: hexutil.Encode(a.CommitteeBits),
}
}
func (a *AttestationData) ToConsensus() (*eth.AttestationData, error) {
slot, err := strconv.ParseUint(a.Slot, 10, 64)
if err != nil {
@@ -694,18 +624,6 @@ func (s *AttesterSlashing) ToConsensus() (*eth.AttesterSlashing, error) {
return &eth.AttesterSlashing{Attestation_1: att1, Attestation_2: att2}, nil
}
func (s *AttesterSlashingElectra) ToConsensus() (*eth.AttesterSlashingElectra, error) {
att1, err := s.Attestation1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation1")
}
att2, err := s.Attestation2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation2")
}
return &eth.AttesterSlashingElectra{Attestation_1: att1, Attestation_2: att2}, nil
}
func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
indices := make([]uint64, len(a.AttestingIndices))
var err error
@@ -731,31 +649,6 @@ func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
}, nil
}
func (a *IndexedAttestationElectra) ToConsensus() (*eth.IndexedAttestationElectra, error) {
indices := make([]uint64, len(a.AttestingIndices))
var err error
for i, ix := range a.AttestingIndices {
indices[i], err = strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("AttestingIndices[%d]", i))
}
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
}
sig, err := bytesutil.DecodeHexWithLength(a.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.IndexedAttestationElectra{
AttestingIndices: indices,
Data: data,
Signature: sig,
}, nil
}
func WithdrawalsFromConsensus(ws []*enginev1.Withdrawal) []*Withdrawal {
result := make([]*Withdrawal, len(ws))
for i, w := range ws {
@@ -773,126 +666,6 @@ func WithdrawalFromConsensus(w *enginev1.Withdrawal) *Withdrawal {
}
}
func WithdrawalRequestsFromConsensus(ws []*enginev1.WithdrawalRequest) []*WithdrawalRequest {
result := make([]*WithdrawalRequest, len(ws))
for i, w := range ws {
result[i] = WithdrawalRequestFromConsensus(w)
}
return result
}
func WithdrawalRequestFromConsensus(w *enginev1.WithdrawalRequest) *WithdrawalRequest {
return &WithdrawalRequest{
SourceAddress: hexutil.Encode(w.SourceAddress),
ValidatorPubkey: hexutil.Encode(w.ValidatorPubkey),
Amount: fmt.Sprintf("%d", w.Amount),
}
}
func (w *WithdrawalRequest) ToConsensus() (*enginev1.WithdrawalRequest, error) {
src, err := bytesutil.DecodeHexWithLength(w.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
pubkey, err := bytesutil.DecodeHexWithLength(w.ValidatorPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "ValidatorPubkey")
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
return &enginev1.WithdrawalRequest{
SourceAddress: src,
ValidatorPubkey: pubkey,
Amount: amount,
}, nil
}
func ConsolidationRequestsFromConsensus(cs []*enginev1.ConsolidationRequest) []*ConsolidationRequest {
result := make([]*ConsolidationRequest, len(cs))
for i, c := range cs {
result[i] = ConsolidationRequestFromConsensus(c)
}
return result
}
func ConsolidationRequestFromConsensus(c *enginev1.ConsolidationRequest) *ConsolidationRequest {
return &ConsolidationRequest{
SourceAddress: hexutil.Encode(c.SourceAddress),
SourcePubkey: hexutil.Encode(c.SourcePubkey),
TargetPubkey: hexutil.Encode(c.TargetPubkey),
}
}
func (c *ConsolidationRequest) ToConsensus() (*enginev1.ConsolidationRequest, error) {
srcAddress, err := bytesutil.DecodeHexWithLength(c.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
srcPubkey, err := bytesutil.DecodeHexWithLength(c.SourcePubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourcePubkey")
}
targetPubkey, err := bytesutil.DecodeHexWithLength(c.TargetPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "TargetPubkey")
}
return &enginev1.ConsolidationRequest{
SourceAddress: srcAddress,
SourcePubkey: srcPubkey,
TargetPubkey: targetPubkey,
}, nil
}
func DepositRequestsFromConsensus(ds []*enginev1.DepositRequest) []*DepositRequest {
result := make([]*DepositRequest, len(ds))
for i, d := range ds {
result[i] = DepositRequestFromConsensus(d)
}
return result
}
func DepositRequestFromConsensus(d *enginev1.DepositRequest) *DepositRequest {
return &DepositRequest{
Pubkey: hexutil.Encode(d.Pubkey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Index: fmt.Sprintf("%d", d.Index),
}
}
func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
pubkey, err := bytesutil.DecodeHexWithLength(d.Pubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "Pubkey")
}
withdrawalCredentials, err := bytesutil.DecodeHexWithLength(d.WithdrawalCredentials, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "WithdrawalCredentials")
}
amount, err := strconv.ParseUint(d.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
sig, err := bytesutil.DecodeHexWithLength(d.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
index, err := strconv.ParseUint(d.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
return &enginev1.DepositRequest{
Pubkey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: sig,
Index: index,
}, nil
}
func ProposerSlashingsToConsensus(src []*ProposerSlashing) ([]*eth.ProposerSlashing, error) {
if src == nil {
return nil, errNilValue
@@ -1158,138 +931,6 @@ func AttesterSlashingFromConsensus(src *eth.AttesterSlashing) *AttesterSlashing
}
}
func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth.AttesterSlashingElectra, error) {
if src == nil {
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 2)
if err != nil {
return nil, err
}
attesterSlashings := make([]*eth.AttesterSlashingElectra, len(src))
for i, s := range src {
if s == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1", i))
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2", i))
}
a1Sig, err := bytesutil.DecodeHexWithLength(s.Attestation1.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation1.AttestingIndices, 2048)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.AttestingIndices", i))
}
a1AttestingIndices := make([]uint64, len(s.Attestation1.AttestingIndices))
for j, ix := range s.Attestation1.AttestingIndices {
attestingIndex, err := strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.AttestingIndices[%d]", i, j))
}
a1AttestingIndices[j] = attestingIndex
}
a1Data, err := s.Attestation1.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Data", i))
}
a2Sig, err := bytesutil.DecodeHexWithLength(s.Attestation2.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation2.AttestingIndices, 2048)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.AttestingIndices", i))
}
a2AttestingIndices := make([]uint64, len(s.Attestation2.AttestingIndices))
for j, ix := range s.Attestation2.AttestingIndices {
attestingIndex, err := strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.AttestingIndices[%d]", i, j))
}
a2AttestingIndices[j] = attestingIndex
}
a2Data, err := s.Attestation2.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.Data", i))
}
attesterSlashings[i] = &eth.AttesterSlashingElectra{
Attestation_1: &eth.IndexedAttestationElectra{
AttestingIndices: a1AttestingIndices,
Data: a1Data,
Signature: a1Sig,
},
Attestation_2: &eth.IndexedAttestationElectra{
AttestingIndices: a2AttestingIndices,
Data: a2Data,
Signature: a2Sig,
},
}
}
return attesterSlashings, nil
}
func AttesterSlashingsElectraFromConsensus(src []*eth.AttesterSlashingElectra) []*AttesterSlashingElectra {
attesterSlashings := make([]*AttesterSlashingElectra, len(src))
for i, s := range src {
attesterSlashings[i] = AttesterSlashingElectraFromConsensus(s)
}
return attesterSlashings
}
func AttesterSlashingElectraFromConsensus(src *eth.AttesterSlashingElectra) *AttesterSlashingElectra {
a1AttestingIndices := make([]string, len(src.Attestation_1.AttestingIndices))
for j, ix := range src.Attestation_1.AttestingIndices {
a1AttestingIndices[j] = fmt.Sprintf("%d", ix)
}
a2AttestingIndices := make([]string, len(src.Attestation_2.AttestingIndices))
for j, ix := range src.Attestation_2.AttestingIndices {
a2AttestingIndices[j] = fmt.Sprintf("%d", ix)
}
return &AttesterSlashingElectra{
Attestation1: &IndexedAttestationElectra{
AttestingIndices: a1AttestingIndices,
Data: &AttestationData{
Slot: fmt.Sprintf("%d", src.Attestation_1.Data.Slot),
CommitteeIndex: fmt.Sprintf("%d", src.Attestation_1.Data.CommitteeIndex),
BeaconBlockRoot: hexutil.Encode(src.Attestation_1.Data.BeaconBlockRoot),
Source: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_1.Data.Source.Epoch),
Root: hexutil.Encode(src.Attestation_1.Data.Source.Root),
},
Target: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_1.Data.Target.Epoch),
Root: hexutil.Encode(src.Attestation_1.Data.Target.Root),
},
},
Signature: hexutil.Encode(src.Attestation_1.Signature),
},
Attestation2: &IndexedAttestationElectra{
AttestingIndices: a2AttestingIndices,
Data: &AttestationData{
Slot: fmt.Sprintf("%d", src.Attestation_2.Data.Slot),
CommitteeIndex: fmt.Sprintf("%d", src.Attestation_2.Data.CommitteeIndex),
BeaconBlockRoot: hexutil.Encode(src.Attestation_2.Data.BeaconBlockRoot),
Source: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_2.Data.Source.Epoch),
Root: hexutil.Encode(src.Attestation_2.Data.Source.Root),
},
Target: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_2.Data.Target.Epoch),
Root: hexutil.Encode(src.Attestation_2.Data.Target.Root),
},
},
Signature: hexutil.Encode(src.Attestation_2.Signature),
},
}
}
func AttsToConsensus(src []*Attestation) ([]*eth.Attestation, error) {
if src == nil {
return nil, errNilValue
@@ -1317,33 +958,6 @@ func AttsFromConsensus(src []*eth.Attestation) []*Attestation {
return atts
}
func AttsElectraToConsensus(src []*AttestationElectra) ([]*eth.AttestationElectra, error) {
if src == nil {
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 8)
if err != nil {
return nil, err
}
atts := make([]*eth.AttestationElectra, len(src))
for i, a := range src {
atts[i], err = a.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
}
}
return atts, nil
}
func AttsElectraFromConsensus(src []*eth.AttestationElectra) []*AttestationElectra {
atts := make([]*AttestationElectra, len(src))
for i, a := range src {
atts[i] = AttElectraFromConsensus(a)
}
return atts
}
func DepositsToConsensus(src []*Deposit) ([]*eth.Deposit, error) {
if src == nil {
return nil, errNilValue
@@ -1474,37 +1088,3 @@ func DepositSnapshotFromConsensus(ds *eth.DepositSnapshot) *DepositSnapshot {
ExecutionBlockHeight: fmt.Sprintf("%d", ds.ExecutionDepth),
}
}
func PendingBalanceDepositsFromConsensus(ds []*eth.PendingBalanceDeposit) []*PendingBalanceDeposit {
deposits := make([]*PendingBalanceDeposit, len(ds))
for i, d := range ds {
deposits[i] = &PendingBalanceDeposit{
Index: fmt.Sprintf("%d", d.Index),
Amount: fmt.Sprintf("%d", d.Amount),
}
}
return deposits
}
func PendingPartialWithdrawalsFromConsensus(ws []*eth.PendingPartialWithdrawal) []*PendingPartialWithdrawal {
withdrawals := make([]*PendingPartialWithdrawal, len(ws))
for i, w := range ws {
withdrawals[i] = &PendingPartialWithdrawal{
Index: fmt.Sprintf("%d", w.Index),
Amount: fmt.Sprintf("%d", w.Amount),
WithdrawableEpoch: fmt.Sprintf("%d", w.WithdrawableEpoch),
}
}
return withdrawals
}
func PendingConsolidationsFromConsensus(cs []*eth.PendingConsolidation) []*PendingConsolidation {
consolidations := make([]*PendingConsolidation, len(cs))
for i, c := range cs {
consolidations[i] = &PendingConsolidation{
SourceIndex: fmt.Sprintf("%d", c.SourceIndex),
TargetIndex: fmt.Sprintf("%d", c.TargetIndex),
}
}
return consolidations
}
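
The Electra request helpers deleted in this file follow one round-trip convention: byte slices become 0x-hex strings, integers become decimal strings, and ToConsensus validates lengths and number formats on the way back. A sketch of that round trip for WithdrawalRequest, assuming a branch where these helpers exist and that the structs package lives at `api/server/structs` (import path assumed):

```go
package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/api/server/structs"
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)

func main() {
	src := &enginev1.WithdrawalRequest{
		SourceAddress:   make([]byte, 20), // 20-byte execution address
		ValidatorPubkey: make([]byte, 48), // 48-byte BLS pubkey
		Amount:          32_000_000_000,   // gwei
	}

	// Consensus -> JSON representation: bytes become 0x-hex strings,
	// integers become decimal strings.
	jsonReq := structs.WithdrawalRequestFromConsensus(src)
	fmt.Println(jsonReq.SourceAddress, jsonReq.Amount)

	// JSON representation -> consensus: lengths and number formats are
	// checked, so malformed input surfaces as a decode error.
	back, err := jsonReq.ToConsensus()
	if err != nil {
		panic(err)
	}
	fmt.Println(back.Amount == src.Amount) // true
}
```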

File diff suppressed because it is too large

View File

@@ -593,185 +593,3 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
HistoricalSummaries: hs,
}, nil
}
func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateElectra, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
execData, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
srcPayload, ok := execData.Proto().(*enginev1.ExecutionPayloadHeaderElectra)
if !ok {
return nil, errPayloadHeaderNotFound
}
payload, err := ExecutionPayloadHeaderElectraFromConsensus(srcPayload)
if err != nil {
return nil, err
}
srcHs, err := st.HistoricalSummaries()
if err != nil {
return nil, err
}
hs := make([]*HistoricalSummary, len(srcHs))
for i, s := range srcHs {
hs[i] = HistoricalSummaryFromConsensus(s)
}
nwi, err := st.NextWithdrawalIndex()
if err != nil {
return nil, err
}
nwvi, err := st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
drsi, err := st.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
dbtc, err := st.DepositBalanceToConsume()
if err != nil {
return nil, err
}
ebtc, err := st.ExitBalanceToConsume()
if err != nil {
return nil, err
}
eee, err := st.EarliestExitEpoch()
if err != nil {
return nil, err
}
cbtc, err := st.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
ece, err := st.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pbd, err := st.PendingBalanceDeposits()
if err != nil {
return nil, err
}
ppw, err := st.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pc, err := st.PendingConsolidations()
if err != nil {
return nil, err
}
return &BeaconStateElectra{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
LatestExecutionPayloadHeader: payload,
NextWithdrawalIndex: fmt.Sprintf("%d", nwi),
NextWithdrawalValidatorIndex: fmt.Sprintf("%d", nwvi),
HistoricalSummaries: hs,
DepositRequestsStartIndex: fmt.Sprintf("%d", drsi),
DepositBalanceToConsume: fmt.Sprintf("%d", dbtc),
ExitBalanceToConsume: fmt.Sprintf("%d", ebtc),
EarliestExitEpoch: fmt.Sprintf("%d", eee),
ConsolidationBalanceToConsume: fmt.Sprintf("%d", cbtc),
EarliestConsolidationEpoch: fmt.Sprintf("%d", ece),
PendingBalanceDeposits: PendingBalanceDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
}, nil
}
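
BeaconStateElectraFromConsensus above is mostly one repeated conversion: roots and byte slices are hex-encoded, unsigned integers are rendered as decimal strings. A self-contained illustration of that pattern using the same go-ethereum hexutil package the real code imports:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// rootsToHex mirrors the block/state root handling: each root becomes a
// 0x-prefixed hex string.
func rootsToHex(roots [][]byte) []string {
	out := make([]string, len(roots))
	for i, r := range roots {
		out[i] = hexutil.Encode(r)
	}
	return out
}

// uintsToDecimal mirrors the balances/slashings handling: each uint64 becomes
// a decimal string.
func uintsToDecimal(vals []uint64) []string {
	out := make([]string, len(vals))
	for i, v := range vals {
		out[i] = fmt.Sprintf("%d", v)
	}
	return out
}

func main() {
	fmt.Println(rootsToHex([][]byte{{0xde, 0xad}}))    // [0xdead]
	fmt.Println(uintsToDecimal([]uint64{32000000000})) // [32000000000]
}
```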

View File

@@ -196,48 +196,3 @@ type DepositSnapshot struct {
ExecutionBlockHash string `json:"execution_block_hash"`
ExecutionBlockHeight string `json:"execution_block_height"`
}
type GetIndividualVotesRequest struct {
Epoch string `json:"epoch"`
PublicKeys []string `json:"public_keys,omitempty"`
Indices []string `json:"indices,omitempty"`
}
type GetIndividualVotesResponse struct {
IndividualVotes []*IndividualVote `json:"individual_votes"`
}
type IndividualVote struct {
Epoch string `json:"epoch"`
PublicKey string `json:"public_keys,omitempty"`
ValidatorIndex string `json:"validator_index"`
IsSlashed bool `json:"is_slashed"`
IsWithdrawableInCurrentEpoch bool `json:"is_withdrawable_in_current_epoch"`
IsActiveInCurrentEpoch bool `json:"is_active_in_current_epoch"`
IsActiveInPreviousEpoch bool `json:"is_active_in_previous_epoch"`
IsCurrentEpochAttester bool `json:"is_current_epoch_attester"`
IsCurrentEpochTargetAttester bool `json:"is_current_epoch_target_attester"`
IsPreviousEpochAttester bool `json:"is_previous_epoch_attester"`
IsPreviousEpochTargetAttester bool `json:"is_previous_epoch_target_attester"`
IsPreviousEpochHeadAttester bool `json:"is_previous_epoch_head_attester"`
CurrentEpochEffectiveBalanceGwei string `json:"current_epoch_effective_balance_gwei"`
InclusionSlot string `json:"inclusion_slot"`
InclusionDistance string `json:"inclusion_distance"`
InactivityScore string `json:"inactivity_score"`
}
type ChainHead struct {
HeadSlot string `json:"head_slot"`
HeadEpoch string `json:"head_epoch"`
HeadBlockRoot string `json:"head_block_root"`
FinalizedSlot string `json:"finalized_slot"`
FinalizedEpoch string `json:"finalized_epoch"`
FinalizedBlockRoot string `json:"finalized_block_root"`
JustifiedSlot string `json:"justified_slot"`
JustifiedEpoch string `json:"justified_epoch"`
JustifiedBlockRoot string `json:"justified_block_root"`
PreviousJustifiedSlot string `json:"previous_justified_slot"`
PreviousJustifiedEpoch string `json:"previous_justified_epoch"`
PreviousJustifiedBlockRoot string `json:"previous_justified_block_root"`
OptimisticStatus bool `json:"optimistic_status"`
}
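
For the removed GetIndividualVotesRequest type, a small sketch of the JSON body it (de)serializes, derived from the tags above; the structs import path is an assumption and the type only exists on the side of the comparison that still declares it:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/api/server/structs"
)

func main() {
	req := &structs.GetIndividualVotesRequest{
		Epoch:   "100",
		Indices: []string{"0", "1"}, // PublicKeys is omitted via omitempty
	}
	b, _ := json.Marshal(req)
	fmt.Println(string(b)) // {"epoch":"100","indices":["0","1"]}
}
```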

View File

@@ -118,34 +118,3 @@ type GetValidatorPerformanceResponse struct {
MissingValidators [][]byte `json:"missing_validators,omitempty"`
InactivityScores []uint64 `json:"inactivity_scores,omitempty"`
}
type GetValidatorParticipationResponse struct {
Epoch string `json:"epoch"`
Finalized bool `json:"finalized"`
Participation *ValidatorParticipation `json:"participation"`
}
type ValidatorParticipation struct {
GlobalParticipationRate string `json:"global_participation_rate" deprecated:"true"`
VotedEther string `json:"voted_ether" deprecated:"true"`
EligibleEther string `json:"eligible_ether" deprecated:"true"`
CurrentEpochActiveGwei string `json:"current_epoch_active_gwei"`
CurrentEpochAttestingGwei string `json:"current_epoch_attesting_gwei"`
CurrentEpochTargetAttestingGwei string `json:"current_epoch_target_attesting_gwei"`
PreviousEpochActiveGwei string `json:"previous_epoch_active_gwei"`
PreviousEpochAttestingGwei string `json:"previous_epoch_attesting_gwei"`
PreviousEpochTargetAttestingGwei string `json:"previous_epoch_target_attesting_gwei"`
PreviousEpochHeadAttestingGwei string `json:"previous_epoch_head_attesting_gwei"`
}
type ActiveSetChanges struct {
Epoch string `json:"epoch"`
ActivatedPublicKeys []string `json:"activated_public_keys"`
ActivatedIndices []string `json:"activated_indices"`
ExitedPublicKeys []string `json:"exited_public_keys"`
ExitedIndices []string `json:"exited_indices"`
SlashedPublicKeys []string `json:"slashed_public_keys"`
SlashedIndices []string `json:"slashed_indices"`
EjectedPublicKeys []string `json:"ejected_public_keys"`
EjectedIndices []string `json:"ejected_indices"`
}

View File

@@ -29,13 +29,6 @@ type Attestation struct {
Signature string `json:"signature"`
}
type AttestationElectra struct {
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
CommitteeBits string `json:"committee_bits"`
}
type AttestationData struct {
Slot string `json:"slot"`
CommitteeIndex string `json:"index"`
@@ -85,17 +78,6 @@ type AggregateAttestationAndProof struct {
SelectionProof string `json:"selection_proof"`
}
type SignedAggregateAttestationAndProofElectra struct {
Message *AggregateAttestationAndProofElectra `json:"message"`
Signature string `json:"signature"`
}
type AggregateAttestationAndProofElectra struct {
AggregatorIndex string `json:"aggregator_index"`
Aggregate *AttestationElectra `json:"aggregate"`
SelectionProof string `json:"selection_proof"`
}
type SyncCommitteeSubscription struct {
ValidatorIndex string `json:"validator_index"`
SyncCommitteeIndices []string `json:"sync_committee_indices"`
@@ -196,11 +178,6 @@ type AttesterSlashing struct {
Attestation2 *IndexedAttestation `json:"attestation_2"`
}
type AttesterSlashingElectra struct {
Attestation1 *IndexedAttestationElectra `json:"attestation_1"`
Attestation2 *IndexedAttestationElectra `json:"attestation_2"`
}
type Deposit struct {
Proof []string `json:"proof"`
Data *DepositData `json:"data"`
@@ -219,12 +196,6 @@ type IndexedAttestation struct {
Signature string `json:"signature"`
}
type IndexedAttestationElectra struct {
AttestingIndices []string `json:"attesting_indices"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type SyncAggregate struct {
SyncCommitteeBits string `json:"sync_committee_bits"`
SyncCommitteeSignature string `json:"sync_committee_signature"`
@@ -236,39 +207,3 @@ type Withdrawal struct {
ExecutionAddress string `json:"address"`
Amount string `json:"amount"`
}
type DepositRequest struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Index string `json:"index"`
}
type WithdrawalRequest struct {
SourceAddress string `json:"source_address"`
ValidatorPubkey string `json:"validator_pubkey"`
Amount string `json:"amount"`
}
type ConsolidationRequest struct {
SourceAddress string `json:"source_address"`
SourcePubkey string `json:"source_pubkey"`
TargetPubkey string `json:"target_pubkey"`
}
type PendingBalanceDeposit struct {
Index string `json:"index"`
Amount string `json:"amount"`
}
type PendingPartialWithdrawal struct {
Index string `json:"index"`
Amount string `json:"amount"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
type PendingConsolidation struct {
SourceIndex string `json:"source_index"`
TargetIndex string `json:"target_index"`
}

View File

@@ -140,43 +140,3 @@ type BeaconStateDeneb struct {
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
}
type BeaconStateElectra struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderElectra `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
DepositRequestsStartIndex string `json:"deposit_requests_start_index"`
DepositBalanceToConsume string `json:"deposit_balance_to_consume"`
ExitBalanceToConsume string `json:"exit_balance_to_consume"`
EarliestExitEpoch string `json:"earliest_exit_epoch"`
ConsolidationBalanceToConsume string `json:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch string `json:"earliest_consolidation_epoch"`
PendingBalanceDeposits []*PendingBalanceDeposit `json:"pending_balance_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
}

View File

@@ -1,4 +1,4 @@
package middleware
package server
import (
"net/url"

View File

@@ -1,4 +1,4 @@
package middleware
package server
import (
"testing"

View File

@@ -64,7 +64,6 @@ go_library(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
@@ -139,7 +138,6 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
@@ -183,7 +181,6 @@ go_test(
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -6,6 +6,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
@@ -18,7 +20,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -48,8 +49,6 @@ type ForkchoiceFetcher interface {
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
NewSlot(context.Context, primitives.Slot) error
ProposerBoost() [32]byte
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error)
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.

View File

@@ -22,6 +22,13 @@ func (s *Service) GetProposerHead() [32]byte {
return s.cfg.ForkChoiceStore.GetProposerHead()
}
// ShouldOverrideFCU returns the corresponding value from forkchoice
func (s *Service) ShouldOverrideFCU() bool {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ShouldOverrideFCU()
}
// SetForkChoiceGenesisTime sets the genesis time in Forkchoice
func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
s.cfg.ForkChoiceStore.Lock()
@@ -92,10 +99,3 @@ func (s *Service) FinalizedBlockHash() [32]byte {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.FinalizedPayloadBlockHash()
}
// ParentRoot wraps a call to the corresponding method in forkchoice
func (s *Service) ParentRoot(root [32]byte) ([32]byte, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ParentRoot(root)
}
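
The wrappers in this file all follow the same shape: take the fork-choice store's read lock, delegate, and release with defer. A generic, self-contained illustration of that pattern with a hypothetical store type:

```go
package main

import (
	"fmt"
	"sync"
)

// store stands in for the fork-choice store; it embeds an RWMutex the same way.
type store struct {
	sync.RWMutex
	head [32]byte
}

func (s *store) headRoot() [32]byte { return s.head }

type service struct{ fc *store }

// HeadRoot wraps the store call under the read lock so writers, which take
// the write lock, cannot race with readers.
func (s *service) HeadRoot() [32]byte {
	s.fc.RLock()
	defer s.fc.RUnlock()
	return s.fc.headRoot()
}

func main() {
	svc := &service{fc: &store{head: [32]byte{1}}}
	fmt.Println(svc.HeadRoot())
}
```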

View File

@@ -74,8 +74,8 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch {
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
switch err {
case execution.ErrAcceptedSyncingPayloadStatus:
forkchoiceUpdatedOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"headSlot": headBlk.Slot(),
@@ -83,7 +83,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
case execution.ErrInvalidPayloadStatus:
forkchoiceUpdatedInvalidNodeCount.Inc()
headRoot := arg.headRoot
if len(lastValidHash) == 0 {
@@ -139,6 +139,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
"newHeadRoot": fmt.Sprintf("%#x", bytesutil.Trunc(r[:])),
}).Warn("Pruned invalid blocks")
return pid, invalidBlock{error: ErrInvalidPayload, root: arg.headRoot, invalidAncestorRoots: invalidRoots}
default:
log.WithError(err).Error(ErrUndefinedExecutionEngineError)
return nil, nil
@@ -230,18 +231,18 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
} else {
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, []common.Hash{}, &common.Hash{} /*empty version hashes and root before Deneb*/)
}
switch {
case err == nil:
switch err {
case nil:
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
case execution.ErrAcceptedSyncingPayloadStatus:
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
case execution.ErrInvalidPayloadStatus:
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
error: ErrInvalidPayload,
@@ -323,8 +324,8 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
var attr payloadattribute.Attributer
switch st.Version() {
case version.Deneb, version.Electra:
withdrawals, _, err := st.ExpectedWithdrawals()
case version.Deneb:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return emptyAttri
@@ -341,7 +342,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
return emptyAttri
}
case version.Capella:
withdrawals, _, err := st.ExpectedWithdrawals()
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return emptyAttri
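
The hunks above toggle between `switch err { case sentinel }` and `switch { case errors.Is(err, sentinel) }`. The difference matters once errors are wrapped: plain equality only matches the exact sentinel value, while errors.Is also matches a sentinel wrapped with %w. A self-contained comparison:

```go
package main

import (
	"errors"
	"fmt"
)

var errSyncing = errors.New("payload accepted, syncing")

// classifyExact matches only the exact sentinel value.
func classifyExact(err error) string {
	switch err {
	case errSyncing:
		return "syncing"
	default:
		return "unknown"
	}
}

// classifyIs also matches the sentinel when it is wrapped.
func classifyIs(err error) string {
	switch {
	case errors.Is(err, errSyncing):
		return "syncing"
	default:
		return "unknown"
	}
}

func main() {
	wrapped := fmt.Errorf("engine call failed: %w", errSyncing)
	fmt.Println(classifyExact(wrapped)) // unknown
	fmt.Println(classifyIs(wrapped))    // syncing
}
```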

View File

@@ -855,63 +855,41 @@ func Test_GetPayloadAttributeV2(t *testing.T) {
require.Equal(t, 0, len(a))
}
func Test_GetPayloadAttributeV3(t *testing.T) {
var testCases = []struct {
name string
st bstate.BeaconState
}{
{
name: "deneb",
st: func() bstate.BeaconState {
st, _ := util.DeterministicGenesisStateDeneb(t, 1)
return st
}(),
},
{
name: "electra",
st: func() bstate.BeaconState {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
return st
}(),
},
}
func Test_GetPayloadAttributeDeneb(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
for _, test := range testCases {
t.Run(test.name, func(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateDeneb(t, 1)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
attr := service.getPayloadAttribute(ctx, test.st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, test.st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, test.st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
attrV3, err := attr.PbV3()
require.NoError(t, err)
hr := service.headRoot()
require.Equal(t, hr, [32]byte(attrV3.ParentBeaconBlockRoot))
attrV3, err := attr.PbV3()
require.NoError(t, err)
hr := service.headRoot()
require.Equal(t, hr, [32]byte(attrV3.ParentBeaconBlockRoot))
})
}
}
func Test_UpdateLastValidatedCheckpoint(t *testing.T) {

View File

@@ -401,7 +401,7 @@ func (s *Service) saveOrphanedOperations(ctx context.Context, orphanedRoot [32]b
}
for _, a := range orphanedBlk.Block().Body().Attestations() {
// if the attestation is one epoch older, it wouldn't be useful to save it.
if a.GetData().Slot+params.BeaconConfig().SlotsPerEpoch < s.CurrentSlot() {
if a.Data.Slot+params.BeaconConfig().SlotsPerEpoch < s.CurrentSlot() {
continue
}
if helpers.IsAggregated(a) {

View File

@@ -312,14 +312,14 @@ func TestSaveOrphanedAtts(t *testing.T) {
require.NoError(t, service.saveOrphanedOperations(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []ethpb.Att{
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
blk2.Block.Body.Attestations[0],
blk1.Block.Body.Attestations[0],
}
atts := service.cfg.AttPool.AggregatedAttestations()
sort.Slice(atts, func(i, j int) bool {
return atts[i].GetData().Slot > atts[j].GetData().Slot
return atts[i].Data.Slot > atts[j].Data.Slot
})
require.DeepEqual(t, wantAtts, atts)
}
@@ -389,14 +389,14 @@ func TestSaveOrphanedOps(t *testing.T) {
require.NoError(t, service.saveOrphanedOperations(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []ethpb.Att{
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
blk2.Block.Body.Attestations[0],
blk1.Block.Body.Attestations[0],
}
atts := service.cfg.AttPool.AggregatedAttestations()
sort.Slice(atts, func(i, j int) bool {
return atts[i].GetData().Slot > atts[j].GetData().Slot
return atts[i].Data.Slot > atts[j].Data.Slot
})
require.DeepEqual(t, wantAtts, atts)
require.Equal(t, 1, len(service.cfg.SlashingPool.PendingProposerSlashings(ctx, st, false)))
@@ -517,14 +517,14 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
require.NoError(t, service.saveOrphanedOperations(ctx, r3, r4))
require.Equal(t, 3, service.cfg.AttPool.AggregatedAttestationCount())
wantAtts := []ethpb.Att{
wantAtts := []*ethpb.Attestation{
blk3.Block.Body.Attestations[0],
blk2.Block.Body.Attestations[0],
blk1.Block.Body.Attestations[0],
}
atts := service.cfg.AttPool.AggregatedAttestations()
sort.Slice(atts, func(i, j int) bool {
return atts[i].GetData().Slot > atts[j].GetData().Slot
return atts[i].Data.Slot > atts[j].Data.Slot
})
require.DeepEqual(t, wantAtts, atts)
}
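
One side of these test diffs stores attestations as `[]ethpb.Att` and reads fields through `GetData()`, the other uses the concrete `[]*ethpb.Attestation` and `.Data`. The interface form lets Phase0 and Electra attestation variants share one slice; a stand-in illustration (not the real ethpb types):

```go
package main

import "fmt"

type AttestationData struct{ Slot uint64 }

// Att abstracts over attestation variants, as the interface-based code does.
type Att interface {
	GetData() *AttestationData
}

type Attestation struct{ Data *AttestationData }

func (a *Attestation) GetData() *AttestationData { return a.Data }

type AttestationElectra struct {
	Data          *AttestationData
	CommitteeBits []byte
}

func (a *AttestationElectra) GetData() *AttestationData { return a.Data }

func main() {
	atts := []Att{
		&Attestation{Data: &AttestationData{Slot: 1}},
		&AttestationElectra{Data: &AttestationData{Slot: 2}},
	}
	for _, a := range atts {
		fmt.Println(a.GetData().Slot) // works for either variant
	}
}
```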

View File

@@ -16,7 +16,7 @@ import (
)
const (
FinalityBranchNumOfLeaves = 6
finalityBranchNumOfLeaves = 6
)
// CreateLightClientFinalityUpdate - implements https://github.com/ethereum/consensus-specs/blob/3d235740e5f1e641d3b160c8688f26e7dc5a1894/specs/altair/light-client/full-node.md#create_light_client_finality_update
@@ -69,7 +69,7 @@ func NewLightClientOptimisticUpdateFromBeaconState(
// assert sum(block.message.body.sync_aggregate.sync_committee_bits) >= MIN_SYNC_COMMITTEE_PARTICIPANTS
syncAggregate, err := block.Block().Body().SyncAggregate()
if err != nil {
return nil, fmt.Errorf("could not get sync aggregate %w", err)
return nil, fmt.Errorf("could not get sync aggregate %v", err)
}
if syncAggregate.SyncCommitteeBits.Count() < params.BeaconConfig().MinSyncCommitteeParticipants {
@@ -85,18 +85,18 @@ func NewLightClientOptimisticUpdateFromBeaconState(
header := state.LatestBlockHeader()
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, fmt.Errorf("could not get state root %w", err)
return nil, fmt.Errorf("could not get state root %v", err)
}
header.StateRoot = stateRoot[:]
headerRoot, err := header.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get header root %w", err)
return nil, fmt.Errorf("could not get header root %v", err)
}
blockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get block root %w", err)
return nil, fmt.Errorf("could not get block root %v", err)
}
if headerRoot != blockRoot {
@@ -114,14 +114,14 @@ func NewLightClientOptimisticUpdateFromBeaconState(
// attested_header.state_root = hash_tree_root(attested_state)
attestedStateRoot, err := attestedState.HashTreeRoot(ctx)
if err != nil {
return nil, fmt.Errorf("could not get attested state root %w", err)
return nil, fmt.Errorf("could not get attested state root %v", err)
}
attestedHeader.StateRoot = attestedStateRoot[:]
// assert hash_tree_root(attested_header) == block.message.parent_root
attestedHeaderRoot, err := attestedHeader.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get attested header root %w", err)
return nil, fmt.Errorf("could not get attested header root %v", err)
}
if attestedHeaderRoot != block.Block().ParentRoot() {
@@ -175,13 +175,13 @@ func NewLightClientFinalityUpdateFromBeaconState(
if finalizedBlock.Block().Slot() != 0 {
tempFinalizedHeader, err := finalizedBlock.Header()
if err != nil {
return nil, fmt.Errorf("could not get finalized header %w", err)
return nil, fmt.Errorf("could not get finalized header %v", err)
}
finalizedHeader = migration.V1Alpha1SignedHeaderToV1(tempFinalizedHeader).GetMessage()
finalizedHeaderRoot, err := finalizedHeader.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get finalized header root %w", err)
return nil, fmt.Errorf("could not get finalized header root %v", err)
}
if finalizedHeaderRoot != bytesutil.ToBytes32(attestedState.FinalizedCheckpoint().Root) {
@@ -204,7 +204,7 @@ func NewLightClientFinalityUpdateFromBeaconState(
var bErr error
finalityBranch, bErr = attestedState.FinalizedRootProof(ctx)
if bErr != nil {
return nil, fmt.Errorf("could not get finalized root proof %w", bErr)
return nil, fmt.Errorf("could not get finalized root proof %v", bErr)
}
} else {
finalizedHeader = &ethpbv1.BeaconBlockHeader{
@@ -215,8 +215,8 @@ func NewLightClientFinalityUpdateFromBeaconState(
BodyRoot: make([]byte, 32),
}
finalityBranch = make([][]byte, FinalityBranchNumOfLeaves)
for i := 0; i < FinalityBranchNumOfLeaves; i++ {
finalityBranch = make([][]byte, finalityBranchNumOfLeaves)
for i := 0; i < finalityBranchNumOfLeaves; i++ {
finalityBranch[i] = make([]byte, 32)
}
}
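
The error-formatting change in this file swaps between %w and %v in fmt.Errorf. The practical difference is that %w produces a wrapped error that errors.Is and errors.As can still inspect, while %v flattens the cause into plain text. A small standalone illustration (not Prysm code):

package main

import (
    "errors"
    "fmt"
)

var errNotFound = errors.New("not found")

func main() {
    wrapped := fmt.Errorf("could not get sync aggregate: %w", errNotFound)
    flattened := fmt.Errorf("could not get sync aggregate: %v", errNotFound)

    fmt.Println(errors.Is(wrapped, errNotFound))   // true: the cause is preserved
    fmt.Println(errors.Is(flattened, errNotFound)) // false: only the message survives
}
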

View File

@@ -153,7 +153,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
require.DeepSSZEqual(t, zeroHash, update.FinalizedHeader.ParentRoot, "Finalized header parent root is not zero")
require.DeepSSZEqual(t, zeroHash, update.FinalizedHeader.StateRoot, "Finalized header state root is not zero")
require.DeepSSZEqual(t, zeroHash, update.FinalizedHeader.BodyRoot, "Finalized header body root is not zero")
require.Equal(t, FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
require.Equal(t, finalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
for _, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, zeroHash, leaf, "Leaf is not zero")
}

View File

@@ -369,6 +369,6 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
func reportAttestationInclusion(blk interfaces.ReadOnlyBeaconBlock) {
for _, att := range blk.Body().Attestations() {
attestationInclusionDelay.Observe(float64(blk.Slot() - att.GetData().Slot))
attestationInclusionDelay.Observe(float64(blk.Slot() - att.Data.Slot))
}
}
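
attestationInclusionDelay.Observe records how many slots passed between an attestation's slot and the slot of the block that included it. A hedged sketch of that metric pattern using the Prometheus client directly; the metric name, buckets, and helper below are made up for illustration and are not Prysm's actual definitions:

package metricsexample

import (
    "github.com/prometheus/client_golang/prometheus"
)

// inclusionDelay is a hypothetical stand-in for attestationInclusionDelay.
var inclusionDelay = prometheus.NewHistogram(prometheus.HistogramOpts{
    Name:    "example_attestation_inclusion_delay_slots",
    Help:    "Slots between an attestation's slot and the slot of the including block.",
    Buckets: prometheus.LinearBuckets(1, 1, 32),
})

func init() {
    prometheus.MustRegister(inclusionDelay)
}

// recordInclusion mirrors the blk.Slot() - attestation slot observation in the diff above.
func recordInclusion(blockSlot, attSlot uint64) {
    inclusionDelay.Observe(float64(blockSlot - attSlot))
}
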

View File

@@ -36,17 +36,17 @@ import (
//
// # Update latest messages for attesting indices
// update_latest_messages(store, indexed_attestation.attesting_indices, attestation)
func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time.Duration) error {
func (s *Service) OnAttestation(ctx context.Context, a *ethpb.Attestation, disparity time.Duration) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onAttestation")
defer span.End()
if err := helpers.ValidateNilAttestation(a); err != nil {
return err
}
if err := helpers.ValidateSlotTargetEpoch(a.GetData()); err != nil {
if err := helpers.ValidateSlotTargetEpoch(a.Data); err != nil {
return err
}
tgt := a.GetData().Target.Copy()
tgt := ethpb.CopyCheckpoint(a.Data.Target)
// Note that target root check is ignored here because it was performed in sync's validation pipeline:
// validate_aggregate_proof.go and validate_beacon_attestation.go
@@ -67,7 +67,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
}
// Verify attestation beacon block is known and not from the future.
if err := s.verifyBeaconBlock(ctx, a.GetData()); err != nil {
if err := s.verifyBeaconBlock(ctx, a.Data); err != nil {
return errors.Wrap(err, "could not verify attestation beacon block")
}
@@ -75,16 +75,16 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// validate_aggregate_proof.go and validate_beacon_attestation.go
// Verify attestations can only affect the fork choice of subsequent slots.
if err := slots.VerifyTime(genesisTime, a.GetData().Slot+1, disparity); err != nil {
if err := slots.VerifyTime(genesisTime, a.Data.Slot+1, disparity); err != nil {
return err
}
// Use the target state to verify attesting indices are valid.
committees, err := helpers.AttestationCommittees(ctx, baseState, a)
committee, err := helpers.BeaconCommitteeFromState(ctx, baseState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return err
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, a, committees...)
indexedAtt, err := attestation.ConvertToIndexed(ctx, a, committee)
if err != nil {
return err
}
@@ -97,7 +97,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// We assume trusted attestation in this function has verified signature.
// Update forkchoice store with the new attestation for updating weight.
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Target.Epoch)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.AttestingIndices, bytesutil.ToBytes32(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
return nil
}
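
The spec flow in OnAttestation is: validate the attestation's structure and timing, resolve its committee, convert it to an indexed attestation, then feed the attesting indices into fork choice. A heavily simplified, self-contained sketch of that ordering with hypothetical local types; it is not Prysm's API, and real validation covers far more:

package main

import (
    "errors"
    "fmt"
)

type attData struct {
    Slot            uint64
    BeaconBlockRoot [32]byte
    TargetEpoch     uint64
}

type att struct {
    Data            *attData
    AggregationBits []bool // index i set means committee member i attested
}

type forkChoice struct{ votes map[uint64]uint64 } // validator index -> latest target epoch

func (f *forkChoice) processAttestation(indices []uint64, _ [32]byte, targetEpoch uint64) {
    for _, i := range indices {
        f.votes[i] = targetEpoch
    }
}

// onAttestation mirrors the validate -> convert-to-indexed -> apply ordering of the handler above.
func onAttestation(fc *forkChoice, a *att, committee []uint64) error {
    if a == nil || a.Data == nil {
        return errors.New("nil attestation")
    }
    if len(a.AggregationBits) != len(committee) {
        return errors.New("aggregation bits do not match committee size")
    }
    var indices []uint64
    for i, set := range a.AggregationBits {
        if set {
            indices = append(indices, committee[i])
        }
    }
    fc.processAttestation(indices, a.Data.BeaconBlockRoot, a.Data.TargetEpoch)
    return nil
}

func main() {
    fc := &forkChoice{votes: map[uint64]uint64{}}
    a := &att{Data: &attData{Slot: 5, TargetEpoch: 1}, AggregationBits: []bool{true, false, true}}
    if err := onAttestation(fc, a, []uint64{10, 11, 12}); err != nil {
        panic(err)
    }
    fmt.Println(fc.votes) // map[10:1 12:1]
}
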

View File

@@ -7,11 +7,9 @@ import (
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
@@ -75,7 +73,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
tests := []struct {
name string
a ethpb.Att
a *ethpb.Attestation
wantedErr string
}{
{
@@ -127,36 +125,25 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
}
func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
eval := func(ctx context.Context, service *Service, genesisState state.BeaconState, pks []bls.SecretKey) {
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].GetData().Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
}
service, tr := minimalTestService(t)
ctx := tr.ctx
t.Run("pre-Electra", func(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
genesisState, pks := util.DeterministicGenesisState(t, 64)
eval(ctx, service, genesisState, pks)
})
t.Run("post-Electra", func(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
genesisState, pks := util.DeterministicGenesisStateElectra(t, 64)
eval(ctx, service, genesisState, pks)
})
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
}
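
One side of this hunk wraps the same assertions in pre-Electra and post-Electra subtests by closing over a shared eval helper. A minimal sketch of that subtest pattern with generic stand-in fixtures, not the beacon-chain helpers:

package forks

import "testing"

func TestAcrossForks(t *testing.T) {
    // eval holds the assertions shared by every fork variant.
    eval := func(t *testing.T, state string) {
        if state == "" {
            t.Fatal("empty state")
        }
    }
    t.Run("pre-Electra", func(t *testing.T) {
        eval(t, "phase0-state") // hypothetical fixture
    })
    t.Run("post-Electra", func(t *testing.T) {
        eval(t, "electra-state") // hypothetical fixture
    })
}
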
func TestService_GetRecentPreState(t *testing.T) {

View File

@@ -6,6 +6,9 @@ import (
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -29,8 +32,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.
@@ -142,7 +143,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
b := blks[0].Block()
// Retrieve incoming block's pre state.
if err := s.verifyBlkPreState(ctx, b.ParentRoot()); err != nil {
if err := s.verifyBlkPreState(ctx, b); err != nil {
return err
}
preState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, b.ParentRoot())
@@ -365,17 +366,17 @@ func (s *Service) handleEpochBoundary(ctx context.Context, slot primitives.Slot,
func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock, st state.BeaconState) error {
// Feed in block's attestations to fork choice store.
for _, a := range blk.Body().Attestations() {
committees, err := helpers.AttestationCommittees(ctx, st, a)
committee, err := helpers.BeaconCommitteeFromState(ctx, st, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return err
}
indices, err := attestation.AttestingIndices(a, committees...)
indices, err := attestation.AttestingIndices(a.AggregationBits, committee)
if err != nil {
return err
}
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
r := bytesutil.ToBytes32(a.Data.BeaconBlockRoot)
if s.cfg.ForkChoiceStore.HasNode(r) {
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Target.Epoch)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.Data.Target.Epoch)
} else if err := s.cfg.AttPool.SaveBlockAttestation(a); err != nil {
return err
}
@@ -386,7 +387,7 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Re
// InsertSlashingsToForkChoiceStore inserts attester slashing indices to fork choice store.
// To call this function, it's the caller's responsibility to ensure the slashing object is valid.
// This function requires a write lock on forkchoice.
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []ethpb.AttSlashing) {
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []*ethpb.AttesterSlashing) {
for _, slashing := range slashings {
indices := blocks.SlashableAttesterIndices(slashing)
for _, index := range indices {

View File
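
handleBlockAttestations in the hunk above applies an attestation to fork choice immediately when its target block is already known, and otherwise parks it in the attestation pool for later. A compact sketch of that apply-or-queue decision with hypothetical local types (not Prysm's store or pool APIs):

package main

import "fmt"

type store struct{ known map[[32]byte]bool }

func (s *store) hasNode(root [32]byte) bool { return s.known[root] }

type pool struct{ saved int }

func (p *pool) saveBlockAttestation() { p.saved++ }

// handleAttestation mirrors the branch in handleBlockAttestations: process when the
// target block is in fork choice, otherwise defer to the pool.
func handleAttestation(s *store, p *pool, blockRoot [32]byte) string {
    if s.hasNode(blockRoot) {
        return "processed in fork choice"
    }
    p.saveBlockAttestation()
    return "queued in attestation pool"
}

func main() {
    s := &store{known: map[[32]byte]bool{{1}: true}}
    p := &pool{}
    fmt.Println(handleAttestation(s, p, [32]byte{1})) // processed in fork choice
    fmt.Println(handleAttestation(s, p, [32]byte{2})) // queued in attestation pool
}
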

@@ -7,6 +7,9 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -14,7 +17,6 @@ import (
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -23,8 +25,6 @@ import (
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -286,7 +286,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
defer span.End()
// Verify incoming block has a valid pre state.
if err := s.verifyBlkPreState(ctx, b.ParentRoot()); err != nil {
if err := s.verifyBlkPreState(ctx, b); err != nil {
return nil, err
}
@@ -312,10 +312,11 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
}
// verifyBlkPreState validates input block has a valid pre-state.
func (s *Service) verifyBlkPreState(ctx context.Context, parentRoot [field_params.RootLength]byte) error {
func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.ReadOnlyBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "blockChain.verifyBlkPreState")
defer span.End()
parentRoot := b.ParentRoot()
// Loosen the check to HasBlock because state summary gets saved in batches
// during initial syncing. There's no risk given a state summary object is just a
// subset of the block object.
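
One side of this hunk narrows verifyBlkPreState to take only the 32-byte parent root rather than the whole block. Fixed-size root arrays are typically produced from byte slices with a copy-based helper; a standalone sketch of that conversion, mirroring what a ToBytes32-style helper does (the function names here are written just for illustration):

package main

import "fmt"

// toBytes32 copies up to 32 bytes of b into a fixed-size array, zero-padding when b is shorter.
func toBytes32(b []byte) [32]byte {
    var out [32]byte
    copy(out[:], b)
    return out
}

// verifyPreState stands in for a function that only needs the parent root,
// not the whole block (hypothetical signature).
func verifyPreState(parentRoot [32]byte) error {
    return nil
}

func main() {
    root := toBytes32([]byte("parent-root-bytes"))
    fmt.Println(verifyPreState(root) == nil) // true
}
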

View File

@@ -117,7 +117,7 @@ func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: 1, Root: root[:]}))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, root, st))
require.NoError(t, service.verifyBlkPreState(ctx, wsb.Block().ParentRoot()))
require.NoError(t, service.verifyBlkPreState(ctx, wsb.Block()))
}
func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
@@ -824,10 +824,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: r[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, r))
atts := make([]ethpb.Att, len(b.Block.Body.Attestations))
for i, a := range b.Block.Body.Attestations {
atts[i] = a
}
atts := b.Block.Body.Attestations
require.NoError(t, service.cfg.AttPool.SaveAggregatedAttestations(atts))
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
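
One side of this hunk adds a copy loop before SaveAggregatedAttestations while the other passes the slice straight through. The loop exists because Go will not implicitly convert a []*ConcreteType into a []InterfaceType; the elements must be reassigned one by one. A small standalone demonstration of why (types here are illustrative):

package main

import "fmt"

type Att interface{ Slot() uint64 }

type attestation struct{ slot uint64 }

func (a *attestation) Slot() uint64 { return a.slot }

func save(atts []Att) { fmt.Println("saved", len(atts), "attestations") }

func main() {
    concrete := []*attestation{{slot: 1}, {slot: 2}}

    // save(concrete) would not compile: []*attestation is not []Att,
    // even though *attestation implements Att.
    converted := make([]Att, len(concrete))
    for i, a := range concrete {
        converted[i] = a
    }
    save(converted) // saved 2 attestations
}
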
@@ -1963,134 +1960,68 @@ func TestNoViableHead_Reboot(t *testing.T) {
}
func TestOnBlock_HandleBlockAttestations(t *testing.T) {
t.Run("pre-Electra", func(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
service, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err = util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err = util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
// prepare another block that is not inserted
st3, err := transition.ExecuteStateTransition(ctx, st, wsb)
require.NoError(t, err)
b3, err := util.GenerateFullBlock(st3, keys, util.DefaultBlockGenConfig(), 3)
require.NoError(t, err)
wsb3, err := consensusblocks.NewSignedBeaconBlock(b3)
require.NoError(t, err)
// prepare another block that is not inserted
st3, err := transition.ExecuteStateTransition(ctx, st, wsb)
require.NoError(t, err)
b3, err := util.GenerateFullBlock(st3, keys, util.DefaultBlockGenConfig(), 3)
require.NoError(t, err)
wsb3, err := consensusblocks.NewSignedBeaconBlock(b3)
require.NoError(t, err)
require.Equal(t, 1, len(wsb.Block().Body().Attestations()))
a := wsb.Block().Body().Attestations()[0]
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
require.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r))
require.Equal(t, 1, len(wsb.Block().Body().Attestations()))
a := wsb.Block().Body().Attestations()[0]
r := bytesutil.ToBytes32(a.Data.BeaconBlockRoot)
require.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r))
require.Equal(t, 1, len(wsb.Block().Body().Attestations()))
a3 := wsb3.Block().Body().Attestations()[0]
r3 := bytesutil.ToBytes32(a3.GetData().BeaconBlockRoot)
require.Equal(t, false, service.cfg.ForkChoiceStore.HasNode(r3))
require.Equal(t, 1, len(wsb.Block().Body().Attestations()))
a3 := wsb3.Block().Body().Attestations()[0]
r3 := bytesutil.ToBytes32(a3.Data.BeaconBlockRoot)
require.Equal(t, false, service.cfg.ForkChoiceStore.HasNode(r3))
require.NoError(t, service.handleBlockAttestations(ctx, wsb.Block(), st)) // fine to use the same committee as st
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, service.handleBlockAttestations(ctx, wsb3.Block(), st3)) // fine to use the same committee as st
require.Equal(t, 1, len(service.cfg.AttPool.BlockAttestations()))
})
t.Run("post-Electra", func(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisStateElectra(t, 64)
require.NoError(t, service.saveGenesisData(ctx, st))
genesis, err := blocks.NewGenesisBlockForState(ctx, st)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, genesis), "Could not save genesis block")
parentRoot, err := genesis.Block().HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
st, err = service.HeadState(ctx)
require.NoError(t, err)
defaultConfig := util.DefaultBlockGenConfig()
defaultConfig.NumWithdrawalRequests = 1
defaultConfig.NumDepositRequests = 2
defaultConfig.NumConsolidationRequests = 1
b, err := util.GenerateFullBlockElectra(st, keys, defaultConfig, 1)
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err = util.GenerateFullBlockElectra(st, keys, defaultConfig, 2)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
// prepare another block that is not inserted
st3, err := transition.ExecuteStateTransition(ctx, st, wsb)
require.NoError(t, err)
b3, err := util.GenerateFullBlockElectra(st3, keys, defaultConfig, 3)
require.NoError(t, err)
wsb3, err := consensusblocks.NewSignedBeaconBlock(b3)
require.NoError(t, err)
require.Equal(t, 1, len(wsb.Block().Body().Attestations()))
a := wsb.Block().Body().Attestations()[0]
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
require.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(r))
require.Equal(t, 1, len(wsb.Block().Body().Attestations()))
a3 := wsb3.Block().Body().Attestations()[0]
r3 := bytesutil.ToBytes32(a3.GetData().BeaconBlockRoot)
require.Equal(t, false, service.cfg.ForkChoiceStore.HasNode(r3))
require.NoError(t, service.handleBlockAttestations(ctx, wsb.Block(), st)) // fine to use the same committee as st
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, service.handleBlockAttestations(ctx, wsb3.Block(), st3)) // fine to use the same committee as st
require.Equal(t, 1, len(service.cfg.AttPool.BlockAttestations()))
})
require.NoError(t, service.handleBlockAttestations(ctx, wsb.Block(), st)) // fine to use the same committee as st
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, service.handleBlockAttestations(ctx, wsb3.Block(), st3)) // fine to use the same committee as st
require.Equal(t, 1, len(service.cfg.AttPool.BlockAttestations()))
}
func TestFillMissingBlockPayloadId_DiffSlotExitEarly(t *testing.T) {

View File

@@ -13,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -32,7 +31,7 @@ type AttestationStateFetcher interface {
// AttestationReceiver interface defines the methods of chain service receive and processing new attestations.
type AttestationReceiver interface {
AttestationStateFetcher
VerifyLmdFfgConsistency(ctx context.Context, att ethpb.Att) error
VerifyLmdFfgConsistency(ctx context.Context, att *ethpb.Attestation) error
InForkchoice([32]byte) bool
}
@@ -52,13 +51,13 @@ func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Chec
}
// VerifyLmdFfgConsistency verifies that an attestation's LMD and FFG votes are consistent with each other.
func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a ethpb.Att) error {
r, err := s.TargetRootForEpoch([32]byte(a.GetData().BeaconBlockRoot), a.GetData().Target.Epoch)
func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestation) error {
r, err := s.TargetRootForEpoch([32]byte(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
if err != nil {
return err
}
if !bytes.Equal(a.GetData().Target.Root, r[:]) {
return fmt.Errorf("FFG and LMD votes are not consistent, block root: %#x, target root: %#x, canonical target root: %#x", a.GetData().BeaconBlockRoot, a.GetData().Target.Root, r)
if !bytes.Equal(a.Data.Target.Root, r[:]) {
return fmt.Errorf("FFG and LMD votes are not consistent, block root: %#x, target root: %#x, canonical target root: %#x", a.Data.BeaconBlockRoot, a.Data.Target.Root, r)
}
return nil
}
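
VerifyLmdFfgConsistency checks that the FFG target root an attestation claims matches the canonical target root reached from its LMD head vote. A stripped-down sketch of that comparison; targetRootForEpoch below is a hypothetical stand-in for the real lookup:

package main

import (
    "bytes"
    "fmt"
)

// targetRootForEpoch is a hypothetical stand-in for resolving the canonical
// checkpoint root at an epoch, starting from the attested head block.
func targetRootForEpoch(headRoot []byte, epoch uint64) []byte {
    return headRoot // toy chain: the head is also the epoch boundary block
}

func verifyLmdFfgConsistency(beaconBlockRoot, targetRoot []byte, targetEpoch uint64) error {
    canonical := targetRootForEpoch(beaconBlockRoot, targetEpoch)
    if !bytes.Equal(targetRoot, canonical) {
        return fmt.Errorf("FFG and LMD votes are not consistent: target root %#x, canonical %#x", targetRoot, canonical)
    }
    return nil
}

func main() {
    head := []byte{0xaa}
    fmt.Println(verifyLmdFfgConsistency(head, []byte{0xaa}, 1)) // <nil>
    fmt.Println(verifyLmdFfgConsistency(head, []byte{0xbb}, 1)) // error
}
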
@@ -171,13 +170,13 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
// Based on the spec, don't process the attestation until the subsequent slot.
// This delays consideration in the fork choice until their slot is in the past.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#validate_on_attestation
nextSlot := a.GetData().Slot + 1
nextSlot := a.Data.Slot + 1
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), nextSlot, disparity); err != nil {
continue
}
hasState := s.cfg.BeaconDB.HasStateSummary(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.hasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasState := s.cfg.BeaconDB.HasStateSummary(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
hasBlock := s.hasBlock(ctx, bytesutil.ToBytes32(a.Data.BeaconBlockRoot))
if !(hasState && hasBlock) {
continue
}
@@ -186,31 +185,18 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
log.WithError(err).Error("Could not delete fork choice attestation in pool")
}
if !helpers.VerifyCheckpointEpoch(a.GetData().Target, s.genesisTime) {
if !helpers.VerifyCheckpointEpoch(a.Data.Target, s.genesisTime) {
continue
}
if err := s.receiveAttestationNoPubsub(ctx, a, disparity); err != nil {
var fields logrus.Fields
if a.Version() >= version.Electra {
fields = logrus.Fields{
"slot": a.GetData().Slot,
"committeeCount": a.CommitteeBitsVal().Count(),
"committeeIndices": a.CommitteeBitsVal().BitIndices(),
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().Target.Root)),
"aggregatedCount": a.GetAggregationBits().Count(),
}
} else {
fields = logrus.Fields{
"slot": a.GetData().Slot,
"committeeIndex": a.GetData().CommitteeIndex,
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().Target.Root)),
"aggregatedCount": a.GetAggregationBits().Count(),
}
}
log.WithFields(fields).WithError(err).Warn("Could not process attestation for fork choice")
log.WithFields(logrus.Fields{
"slot": a.Data.Slot,
"committeeIndex": a.Data.CommitteeIndex,
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.Data.BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.Data.Target.Root)),
"aggregationCount": a.AggregationBits.Count(),
}).WithError(err).Warn("Could not process attestation for fork choice")
}
}
}
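
One side of this hunk builds the warning's log fields differently depending on the attestation version, since Electra attestations carry committee bits instead of a single committee index. A minimal sketch of that conditional structured-logging pattern with logrus; the field values are illustrative placeholders:

package main

import (
    "errors"

    "github.com/sirupsen/logrus"
)

func warnAttestationFailure(err error, slot uint64, electra bool) {
    var fields logrus.Fields
    if electra {
        fields = logrus.Fields{
            "slot":           slot,
            "committeeCount": 2, // hypothetical: derived from committee bits in real code
        }
    } else {
        fields = logrus.Fields{
            "slot":           slot,
            "committeeIndex": 0, // hypothetical: single committee index pre-Electra
        }
    }
    logrus.WithFields(fields).WithError(err).Warn("Could not process attestation for fork choice")
}

func main() {
    warnAttestationFailure(errors.New("unknown block"), 12, true)
    warnAttestationFailure(errors.New("unknown block"), 12, false)
}
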
@@ -220,7 +206,7 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
// 1. Validate attestation, update validator's latest vote
// 2. Apply fork choice to the processed attestation
// 3. Save latest head info
func (s *Service) receiveAttestationNoPubsub(ctx context.Context, att ethpb.Att, disparity time.Duration) error {
func (s *Service) receiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation, disparity time.Duration) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.receiveAttestationNoPubsub")
defer span.End()

View File

@@ -73,7 +73,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(atts[0].GetData().Target.Root)
tRoot := bytesutil.ToBytes32(atts[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
@@ -83,11 +83,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
attsToSave := make([]ethpb.Att, len(atts))
for i, a := range atts {
attsToSave[i] = a
}
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(attsToSave))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
service.processAttestations(ctx, 0)
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations()))
require.LogsDoNotContain(t, hook, "Could not process attestation for fork choice")
@@ -125,14 +121,10 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
// Generate attestations for this block in Slot 1
atts, err := util.GenerateAttestations(copied, pks, 1, 1, false)
require.NoError(t, err)
attsToSave := make([]ethpb.Att, len(atts))
for i, a := range atts {
attsToSave[i] = a
}
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(attsToSave))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
// Verify the target is in forkchoice
require.Equal(t, true, fcs.HasNode(bytesutil.ToBytes32(atts[0].GetData().BeaconBlockRoot)))
require.Equal(t, tRoot, bytesutil.ToBytes32(atts[0].GetData().BeaconBlockRoot))
require.Equal(t, true, fcs.HasNode(bytesutil.ToBytes32(atts[0].Data.BeaconBlockRoot)))
require.Equal(t, tRoot, bytesutil.ToBytes32(atts[0].Data.BeaconBlockRoot))
require.Equal(t, true, fcs.HasNode(service.originBlockRoot))
// Insert a new block to forkchoice

View File

@@ -13,7 +13,6 @@ import (
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -53,7 +52,7 @@ type BlobReceiver interface {
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
ReceiveAttesterSlashing(ctx context.Context, slashings *ethpb.AttesterSlashing)
}
// ReceiveBlock is a function that defines the operations (minus pubsub)
@@ -77,20 +76,59 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := s.FinalizedCheckpt().Epoch
currentEpoch := coreTime.CurrentEpoch(preState)
currentCheckpoints := s.saveCurrentCheckpoints(preState)
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, blockCopy, blockRoot)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
eg, _ := errgroup.WithContext(ctx)
var postState state.BeaconState
eg.Go(func() error {
var err error
postState, err = s.validateStateTransition(ctx, preState, blockCopy)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
daStartTime := time.Now()
if avs != nil {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, blockCopy); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -112,9 +150,34 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
tracing.AnnotateError(span, err)
return err
}
if err := s.updateCheckpoints(ctx, currentCheckpoints, preState, postState, blockRoot); err != nil {
return err
if coreTime.CurrentEpoch(postState) > currentEpoch && s.cfg.ForkChoiceStore.IsCanonical(blockRoot) {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
newFinalized, err := s.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
if err != nil {
return errors.Wrap(err, "could not update finalized checkpoint")
}
// Send finalized events and finalized deposits in the background
if newFinalized {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go s.sendNewFinalizedEvent(ctx, postState)
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
if features.Get().EnableSlasher {
go s.sendBlockAttestationsToSlasher(blockCopy, preState)
@@ -134,153 +197,31 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := s.handleCaches(); err != nil {
return err
}
s.reportPostBlockProcessing(blockCopy, blockRoot, receivedTime, daWaitedTime)
return nil
}
type ffgCheckpoints struct {
j, f, c primitives.Epoch
}
func (s *Service) saveCurrentCheckpoints(state state.BeaconState) (cp ffgCheckpoints) {
// Save current justified and finalized epochs for future use.
cp.j = s.CurrentJustifiedCheckpt().Epoch
cp.f = s.FinalizedCheckpt().Epoch
cp.c = coreTime.CurrentEpoch(state)
return
}
func (s *Service) updateCheckpoints(
ctx context.Context,
cp ffgCheckpoints,
preState, postState state.BeaconState,
blockRoot [32]byte,
) error {
if coreTime.CurrentEpoch(postState) > cp.c && s.cfg.ForkChoiceStore.IsCanonical(blockRoot) {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, cp.j); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
newFinalized, err := s.updateFinalizationOnBlock(ctx, preState, postState, cp.f)
if err != nil {
return errors.Wrap(err, "could not update finalized checkpoint")
}
// Send finalized events and finalized deposits in the background
if newFinalized {
// hook to process all post state finalization tasks
s.executePostFinalizationTasks(ctx, postState)
}
return nil
}
func (s *Service) validateExecutionAndConsensus(
ctx context.Context,
preState state.BeaconState,
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
) (state.BeaconState, bool, error) {
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return nil, false, err
}
eg, _ := errgroup.WithContext(ctx)
var postState state.BeaconState
eg.Go(func() error {
var err error
postState, err = s.validateStateTransition(ctx, preState, block)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, block, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return nil, false, err
}
return postState, isValidPayload, nil
}
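
validateExecutionAndConsensus runs the consensus state transition and the execution-engine payload notification concurrently and fails if either returns an error. A self-contained sketch of that errgroup fan-out with toy work functions; the real validations are the ones shown in the diff:

package main

import (
    "context"
    "fmt"

    "golang.org/x/sync/errgroup"
)

// validateConsensus and validateExecution stand in for the state transition and
// the engine newPayload call; both succeed trivially here.
func validateConsensus(ctx context.Context) (string, error) { return "post-state", ctx.Err() }

func validateExecution(ctx context.Context) (bool, error) { return true, ctx.Err() }

func validateBoth(ctx context.Context) (string, bool, error) {
    eg, ctx := errgroup.WithContext(ctx)

    var postState string
    eg.Go(func() error {
        var err error
        postState, err = validateConsensus(ctx)
        return err
    })

    var isValidPayload bool
    eg.Go(func() error {
        var err error
        isValidPayload, err = validateExecution(ctx)
        return err
    })

    // Wait returns the first non-nil error from either goroutine, if any.
    if err := eg.Wait(); err != nil {
        return "", false, err
    }
    return postState, isValidPayload, nil
}

func main() {
    st, ok, err := validateBoth(context.Background())
    fmt.Println(st, ok, err) // post-state true <nil>
}
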
func (s *Service) handleDA(
ctx context.Context,
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
avs das.AvailabilityStore,
) (time.Duration, error) {
daStartTime := time.Now()
if avs != nil {
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return 0, err
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability")
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
return daWaitedTime, nil
}
func (s *Service) reportPostBlockProcessing(
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
receivedTime time.Time,
daWaitedTime time.Duration,
) {
// Reports on block and fork choice metrics.
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
finalized := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
reportSlotMetrics(block.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
// Log block sync status.
cp = s.cfg.ForkChoiceStore.JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix()), daWaitedTime); err != nil {
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix()), daWaitedTime); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data
if err := logPayload(block.Block()); err != nil {
if err := logPayload(blockCopy.Block()); err != nil {
log.WithError(err).Error("Unable to log debug block payload data")
}
// Log state transition data.
if err := logStateTransitionData(block.Block()); err != nil {
if err := logStateTransitionData(blockCopy.Block()); err != nil {
log.WithError(err).Error("Unable to log state transition data")
}
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
chainServiceProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
}
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go func() {
finalizedState.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
s.sendNewFinalizedEvent(ctx, finalizedState)
}()
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
cancel()
}()
return nil
}
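
Both versions in this hunk kick off finalization follow-ups in the background, and the deposit insertion gets its own deadline derived from context.Background so it is not cancelled along with the caller's request context. A sketch of that detached-timeout pattern; depositDeadline and the task body are placeholders:

package main

import (
    "context"
    "fmt"
    "time"
)

const depositDeadline = 50 * time.Millisecond // placeholder value

func insertFinalizedDeposits(ctx context.Context) {
    select {
    case <-ctx.Done():
        fmt.Println("deposit insertion stopped:", ctx.Err())
    case <-time.After(10 * time.Millisecond):
        fmt.Println("deposit insertion completed")
    }
}

func onNewFinalization() {
    // Deliberately detach from the caller's context so an unrelated cancellation
    // does not abort the insertion; bound it with its own deadline instead.
    depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
    go func() {
        insertFinalizedDeposits(depCtx)
        cancel()
    }()
}

func main() {
    onNewFinalization()
    time.Sleep(100 * time.Millisecond) // give the background goroutine time to finish
}
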
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
@@ -354,10 +295,10 @@ func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
}
// ReceiveAttesterSlashing receives an attester slashing and inserts it to forkchoice
func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing) {
func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.AttesterSlashing) {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
s.InsertSlashingsToForkChoiceStore(ctx, []ethpb.AttSlashing{slashing})
s.InsertSlashingsToForkChoiceStore(ctx, []*ethpb.AttesterSlashing{slashing})
}
// prunePostBlockOperationPools only runs on new head otherwise should return a nil.
@@ -538,17 +479,17 @@ func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySigne
// is done in the background to avoid adding more load to this critical code path.
ctx := context.TODO()
for _, att := range signed.Block().Body().Attestations() {
committees, err := helpers.AttestationCommittees(ctx, preState, att)
committee, err := helpers.BeaconCommitteeFromState(ctx, preState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
log.WithError(err).Error("Could not get attestation committees")
log.WithError(err).Error("Could not get attestation committee")
return
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committees...)
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
log.WithError(err).Error("Could not convert to indexed attestation")
return
}
s.cfg.SlasherAttestationsFeed.Send(&types.WrappedIndexedAtt{IndexedAtt: indexedAtt})
s.cfg.SlasherAttestationsFeed.Send(indexedAtt)
}
}

View File

@@ -6,13 +6,11 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
blockchainTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/voluntaryexits"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -22,7 +20,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -418,76 +415,3 @@ func Test_sendNewFinalizedEvent(t *testing.T) {
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
}
func Test_executePostFinalizationTasks(t *testing.T) {
logHook := logTest.NewGlobal()
headState, err := util.NewBeaconStateElectra()
require.NoError(t, err)
finalizedStRoot, err := headState.HashTreeRoot(context.Background())
require.NoError(t, err)
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*122 + 1
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.StateRoot = finalizedStRoot[:]
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
hexKey := "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
key, err := hexutil.Decode(hexKey)
require.NoError(t, err)
require.NoError(t, headState.SetValidators([]*ethpb.Validator{
{
PublicKey: key,
WithdrawalCredentials: make([]byte, fieldparams.RootLength),
},
}))
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetFinalizedCheckpoint(&ethpb.Checkpoint{
Epoch: 123,
Root: headRoot[:],
}))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
s, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, headRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
// check the cache
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
// check deposit
require.LogsContain(t, logHook, "Finalized deposit insertion completed at index")
}

View File

@@ -11,6 +11,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
@@ -42,7 +44,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
@@ -329,8 +330,6 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "failed to initialize blockchain service")
}
saved.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
return nil
}
@@ -505,7 +504,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
}
genesisBlk, err := s.cfg.BeaconDB.GenesisBlock(ctx)
if err != nil || genesisBlk == nil || genesisBlk.IsNil() {
return fmt.Errorf("could not load genesis block: %w", err)
return fmt.Errorf("could not load genesis block: %v", err)
}
genesisBlkRoot, err := genesisBlk.Block().HashTreeRoot()
if err != nil {

View File

@@ -8,10 +8,9 @@ import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
@@ -30,7 +29,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -205,7 +203,7 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
BlockHash: make([]byte, 32),
})
require.NoError(t, err)
genState, err = altair.ProcessPreGenesisDeposits(ctx, genState, deposits)
genState, err = blocks.ProcessPreGenesisDeposits(ctx, genState, deposits)
require.NoError(t, err)
_, err = bc.initializeBeaconChain(ctx, time.Unix(0, 0), genState, &ethpb.Eth1Data{DepositRoot: hashTreeRoot[:], BlockHash: make([]byte, 32)})
@@ -501,65 +499,6 @@ func TestChainService_EverythingOptimistic(t *testing.T) {
require.Equal(t, true, op)
}
func TestStartFromSavedState_ValidatorIndexCacheUpdated(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableStartOptimistic: true,
})
defer resetFn()
genesis := util.NewBeaconBlockElectra()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := util.NewBeaconBlockElectra()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headState, err := util.NewBeaconState()
require.NoError(t, err)
hexKey := "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
key, err := hexutil.Decode(hexKey)
require.NoError(t, err)
hexKey2 := "0x42247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
key2, err := hexutil.Decode(hexKey2)
require.NoError(t, err)
require.NoError(t, headState.SetValidators([]*ethpb.Validator{
{
PublicKey: key,
WithdrawalCredentials: make([]byte, fieldparams.RootLength),
},
{
PublicKey: key2,
WithdrawalCredentials: make([]byte, fieldparams.RootLength),
},
}))
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
c, tr := minimalTestService(t, WithFinalizedStateAtStartUp(headState))
ctx, beaconDB, stateGen := tr.ctx, tr.db, tr.sg
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, c.StartFromSavedState(headState))
index, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(0), index) // first index
index2, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key2))
require.Equal(t, true, ok)
require.Equal(t, primitives.ValidatorIndex(1), index2) // first index
}
// MockClockSetter satisfies the ClockSetter interface for testing the conditions where blockchain.Service should
// call SetGenesis.
type MockClockSetter struct {

View File

@@ -13,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
@@ -50,7 +49,7 @@ func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
func (mb *mockBroadcaster) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
func (mb *mockBroadcaster) BroadcastAttestation(_ context.Context, _ uint64, _ *ethpb.Attestation) error {
mb.broadcastCalled = true
return nil
}
@@ -121,7 +120,6 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithSyncChecker(mock.MockChecker{}),
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)
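
The comment above the append explains the ordering: defaults first, caller-supplied options afterwards, so later options win when they touch the same field. A tiny sketch of that functional-options ordering with a generic config type, not the blockchain Service:

package main

import "fmt"

type config struct{ workers int }

type Option func(*config)

func WithWorkers(n int) Option { return func(c *config) { c.workers = n } }

func newConfig(opts ...Option) *config {
    defOpts := []Option{WithWorkers(4)} // default
    // Append the variadic opts so they override the defaults by being applied afterwards.
    opts = append(defOpts, opts...)
    c := &config{}
    for _, o := range opts {
        o(c)
    }
    return c
}

func main() {
    fmt.Println(newConfig().workers)                // 4 (default)
    fmt.Println(newConfig(WithWorkers(16)).workers) // 16 (override wins)
}
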

View File

@@ -414,8 +414,8 @@ func (*ChainService) HeadGenesisValidatorsRoot() [32]byte {
}
// VerifyLmdFfgConsistency mocks VerifyLmdFfgConsistency and always returns nil.
func (*ChainService) VerifyLmdFfgConsistency(_ context.Context, a ethpb.Att) error {
if !bytes.Equal(a.GetData().BeaconBlockRoot, a.GetData().Target.Root) {
func (*ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
if !bytes.Equal(a.Data.BeaconBlockRoot, a.Data.Target.Root) {
return errors.New("LMD and FFG miss matched")
}
return nil
@@ -495,7 +495,7 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
}
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (*ChainService) ReceiveAttesterSlashing(context.Context, ethpb.AttSlashing) {}
func (*ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
// IsFinalized mocks the same method in the chain service.
func (s *ChainService) IsFinalized(_ context.Context, blockRoot [32]byte) bool {

View File

@@ -2,6 +2,7 @@ package testing
import (
"context"
"math/big"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client/builder"
@@ -54,13 +55,13 @@ func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.
}
return w, nil, s.ErrSubmitBlindedBlock
case version.Capella:
w, err := blocks.WrappedExecutionPayloadCapella(s.PayloadCapella)
w, err := blocks.WrappedExecutionPayloadCapella(s.PayloadCapella, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap capella payload")
}
return w, nil, s.ErrSubmitBlindedBlock
case version.Deneb:
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb)
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap deneb payload")
}

View File

@@ -36,7 +36,6 @@ go_library(
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
@@ -79,7 +78,6 @@ go_test(
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -48,7 +48,7 @@ func ProcessAttestationsNoVerifySignature(
func ProcessAttestationNoVerifySignature(
ctx context.Context,
beaconState state.BeaconState,
att ethpb.Att,
att *ethpb.Attestation,
totalBalance uint64,
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "altair.ProcessAttestationNoVerifySignature")
@@ -58,24 +58,24 @@ func ProcessAttestationNoVerifySignature(
return nil, err
}
delay, err := beaconState.Slot().SafeSubSlot(att.GetData().Slot)
delay, err := beaconState.Slot().SafeSubSlot(att.Data.Slot)
if err != nil {
return nil, fmt.Errorf("att slot %d can't be greater than state slot %d", att.GetData().Slot, beaconState.Slot())
return nil, fmt.Errorf("att slot %d can't be greater than state slot %d", att.Data.Slot, beaconState.Slot())
}
participatedFlags, err := AttestationParticipationFlagIndices(beaconState, att.GetData(), delay)
participatedFlags, err := AttestationParticipationFlagIndices(beaconState, att.Data, delay)
if err != nil {
return nil, err
}
committees, err := helpers.AttestationCommittees(ctx, beaconState, att)
committee, err := helpers.BeaconCommitteeFromState(ctx, beaconState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return nil, err
}
indices, err := attestation.AttestingIndices(att, committees...)
indices, err := attestation.AttestingIndices(att.AggregationBits, committee)
if err != nil {
return nil, err
}
return SetParticipationAndRewardProposer(ctx, beaconState, att.GetData().Target.Epoch, indices, participatedFlags, totalBalance)
return SetParticipationAndRewardProposer(ctx, beaconState, att.Data.Target.Epoch, indices, participatedFlags, totalBalance)
}
// SetParticipationAndRewardProposer retrieves and sets the epoch participation bits in state. Based on the epoch participation, it rewards

View File
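
The SafeSubSlot call in the attestation-processing hunk above refuses to underflow when the attestation slot is ahead of the state slot, which is how the handler can return the "att slot can't be greater than state slot" error instead of wrapping around. A standalone sketch of such a checked subtraction on an unsigned slot type; the method name here is illustrative:

package main

import "fmt"

type Slot uint64

// safeSub returns s - other, or an error instead of wrapping around when other > s.
func (s Slot) safeSub(other Slot) (Slot, error) {
    if other > s {
        return 0, fmt.Errorf("subtraction underflow: %d - %d", s, other)
    }
    return s - other, nil
}

func main() {
    stateSlot, attSlot := Slot(10), Slot(7)
    delay, err := stateSlot.safeSub(attSlot)
    fmt.Println(delay, err) // 3 <nil>

    _, err = Slot(3).safeSub(Slot(5))
    fmt.Println(err) // subtraction underflow: 3 - 5
}
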

@@ -195,95 +195,47 @@ func TestProcessAttestations_InvalidAggregationBitsLength(t *testing.T) {
}
func TestProcessAttestations_OK(t *testing.T) {
t.Run("pre-Electra", func(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, 100)
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, 100)
aggBits := bitfield.NewBitlist(3)
aggBits.SetBitAt(0, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := util.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Root: mockRoot[:]},
},
AggregationBits: aggBits,
})
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
sigs := make([]bls.Signature, len(attestingIndices))
for i, indice := range attestingIndices {
sb, err := signing.ComputeDomainAndSign(beaconState, 0, att.Data, params.BeaconConfig().DomainBeaconAttester, privKeys[indice])
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
sigs[i] = sig
}
att.Signature = bls.AggregateSignatures(sigs).Marshal()
block := util.NewBeaconBlockAltair()
block.Block.Body.Attestations = []*ethpb.Attestation{att}
err = beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.NoError(t, err)
aggBits := bitfield.NewBitlist(3)
aggBits.SetBitAt(0, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := util.HydrateAttestation(&ethpb.Attestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Root: mockRoot[:]},
},
AggregationBits: aggBits,
})
t.Run("post-Electra", func(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateElectra(t, 100)
aggBits := bitfield.NewBitlist(3)
aggBits.SetBitAt(0, true)
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(0, true)
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
att := util.HydrateAttestationElectra(&ethpb.AttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Root: mockRoot[:]},
},
AggregationBits: aggBits,
CommitteeBits: committeeBits,
})
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
cfc := beaconState.CurrentJustifiedCheckpoint()
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att.AggregationBits, committee)
require.NoError(t, err)
sigs := make([]bls.Signature, len(attestingIndices))
for i, indice := range attestingIndices {
sb, err := signing.ComputeDomainAndSign(beaconState, 0, att.Data, params.BeaconConfig().DomainBeaconAttester, privKeys[indice])
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
sigs[i] = sig
}
att.Signature = bls.AggregateSignatures(sigs).Marshal()
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
sigs := make([]bls.Signature, len(attestingIndices))
for i, indice := range attestingIndices {
sb, err := signing.ComputeDomainAndSign(beaconState, 0, att.Data, params.BeaconConfig().DomainBeaconAttester, privKeys[indice])
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
sigs[i] = sig
}
att.Signature = bls.AggregateSignatures(sigs).Marshal()
block := util.NewBeaconBlockAltair()
block.Block.Body.Attestations = []*ethpb.Attestation{att}
block := util.NewBeaconBlockElectra()
block.Block.Body.Attestations = []*ethpb.AttestationElectra{att}
err = beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.NoError(t, err)
})
err = beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
require.NoError(t, err)
}
func TestProcessAttestationNoVerify_SourceTargetHead(t *testing.T) {
@@ -321,7 +273,7 @@ func TestProcessAttestationNoVerify_SourceTargetHead(t *testing.T) {
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
indices, err := attestation.AttestingIndices(att, committee)
indices, err := attestation.AttestingIndices(att.AggregationBits, committee)
require.NoError(t, err)
for _, index := range indices {
has, err := altair.HasValidatorFlag(p[index], params.BeaconConfig().TimelyHeadFlagIndex)

View File

@@ -5,32 +5,11 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// ProcessPreGenesisDeposits processes a deposit for the beacon state before chainstart.
func ProcessPreGenesisDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
var err error
beaconState, err = ProcessDeposits(ctx, beaconState, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
beaconState, err = blocks.ActivateValidatorWithEffectiveBalance(beaconState, deposits)
if err != nil {
return nil, err
}
return beaconState, nil
}
// ProcessDeposits processes validator deposits for beacon state Altair.
func ProcessDeposits(
ctx context.Context,
@@ -54,158 +33,23 @@ func ProcessDeposits(
return beaconState, nil
}
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// apply_deposit(
// state=state,
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// signature=deposit.data.signature,
// )
// ProcessDeposit processes validator deposit for beacon state Altair.
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
beaconState, isNewValidator, err := blocks.ProcessDeposit(beaconState, deposit, verifySignature)
if err != nil {
return nil, err
}
return ApplyDeposit(beaconState, deposit.Data, verifySignature)
}
// ApplyDeposit
// Spec pseudocode definition:
// def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
//
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// amount=amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if bls.Verify(pubkey, signing_root, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, verifySignature bool) (state.BeaconState, error) {
pubKey := data.PublicKey
amount := data.Amount
withdrawalCredentials := data.WithdrawalCredentials
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
valid, err := blocks.IsValidDepositSignature(data)
if err != nil {
return nil, err
}
if !valid {
return beaconState, nil
}
}
if err := AddValidatorToRegistry(beaconState, pubKey, withdrawalCredentials, amount); err != nil {
return nil, err
}
} else {
if err := helpers.IncreaseBalance(beaconState, index, amount); err != nil {
return nil, err
}
}
return beaconState, nil
}
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64) -> None:
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, 0)
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.inactivity_scores, index, uint64(0)) // New in Altair
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
if err := beaconState.AppendBalance(amount); err != nil {
return err
}
// only active in altair and only when it's a new validator (after append balance)
if beaconState.Version() >= version.Altair {
if isNewValidator {
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
return nil, err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
return nil, err
}
if err := beaconState.AppendCurrentParticipationBits(0); err != nil {
return err
return nil, err
}
}
return nil
}
// GetValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> Validator:
//
// effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
//
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=effective_balance,
// )
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}
return beaconState, nil
}

View File
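The apply_deposit and get_validator_from_deposit pseudocode above comes down to two rules: a deposit for a known pubkey only increases that validator's balance, while a new validator's effective balance is rounded down to EFFECTIVE_BALANCE_INCREMENT and capped at MAX_EFFECTIVE_BALANCE. A self-contained sketch of that arithmetic, with the mainnet constants hard-coded as assumptions:

package main

// Sketch only: mainnet constants are assumed inline rather than read from config.

import "fmt"

const (
	gwei                      = uint64(1_000_000_000)
	effectiveBalanceIncrement = 1 * gwei  // assumed mainnet value: 1 ETH in Gwei
	maxEffectiveBalance       = 32 * gwei // assumed mainnet value: 32 ETH in Gwei
)

// effectiveBalanceFromDeposit mirrors the spec formula:
// min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE).
func effectiveBalanceFromDeposit(amount uint64) uint64 {
	eb := amount - amount%effectiveBalanceIncrement
	if eb > maxEffectiveBalance {
		eb = maxEffectiveBalance
	}
	return eb
}

func main() {
	fmt.Println(effectiveBalanceFromDeposit(33_700_000_000)) // 32000000000 (capped)
	fmt.Println(effectiveBalanceFromDeposit(17_300_000_000)) // 17000000000 (rounded down)
}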

@@ -30,59 +30,6 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeAltair(state)
require.NoError(t, err)
r, err := altair.ProcessPreGenesisDeposits(ctx, s, []*ethpb.Deposit{deposit})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := altair.ProcessPreGenesisDeposits(ctx, s, []*ethpb.Deposit{deposit})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := altair.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}

View File
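The removed fuzz tests above all assert the same invariant: whenever a state transition returns an error, the returned state must be nil. A generic sketch of that property check, using the same gofuzz helper the tests import and a stand-in transition function:

package main

// Sketch only: process is a stand-in for any state transition under test.

import (
	"errors"
	"fmt"

	fuzz "github.com/google/gofuzz"
)

// process must never return a non-nil result together with an error.
func process(input uint64) (*uint64, error) {
	if input%2 == 1 {
		return nil, errors.New("odd input rejected")
	}
	out := input * 2
	return &out, nil
}

func main() {
	fuzzer := fuzz.NewWithSeed(0)
	var input uint64
	for i := 0; i < 10_000; i++ {
		fuzzer.Fuzz(&input)
		r, err := process(input)
		if err != nil && r != nil {
			panic(fmt.Sprintf("result must be nil on error, got %v (err: %v)", *r, err))
		}
	}
}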

@@ -7,14 +7,11 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
@@ -47,6 +44,36 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
require.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
beaconState, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
},
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
require.ErrorContains(t, want, err)
}
func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
@@ -218,158 +245,3 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(100)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
for i := range dep {
proof, err := dt.MerkleProof(i)
require.NoError(t, err)
dep[i].Proof = proof
}
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := altair.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
require.Equal(t, false, ok, "bad pubkey should not exist in state")
for i := 1; i < newState.NumValidators(); i++ {
val, err := newState.ValidatorAtIndex(primitives.ValidatorIndex(i))
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance, "unequal effective balance")
require.Equal(t, primitives.Epoch(0), val.ActivationEpoch)
require.Equal(t, primitives.Epoch(0), val.ActivationEligibilityEpoch)
}
if newState.Eth1DepositIndex() != 100 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 99 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 100 {
t.Errorf("Expected validator list to have length 100, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 100 {
t.Errorf("Expected validator balances list to have length 100, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestProcessDeposit_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
sr, err := signing.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 32))
require.NoError(t, err)
sig := sk.Sign(sr[:])
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
registry := []*ethpb.Validator{
{
PublicKey: []byte{1, 2, 3},
},
{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: []byte{1},
},
}
balances := []uint64{0, 50}
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(beaconState, deposit, true /*verifySignature*/)
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
}
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{deposit},
},
}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
},
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}

View File
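Both MerkleBranchFailsVerification tests above hinge on the spec's is_valid_merkle_branch check: fold the proof into the leaf hash, choosing the concatenation order from the index bits, and compare against the expected root. A minimal sha256-based sketch of that verification (not the Prysm trie package):

package main

// Sketch only: a generic Merkle-branch check, not Prysm's container/trie code.

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// isValidMerkleBranch folds the proof into the leaf, picking the hash input
// order from the index's bits, and compares the result with the root.
func isValidMerkleBranch(leaf [32]byte, branch [][32]byte, depth, index uint64, root [32]byte) bool {
	if uint64(len(branch)) < depth {
		return false
	}
	value := leaf
	for i := uint64(0); i < depth; i++ {
		if (index>>i)&1 == 1 {
			value = sha256.Sum256(append(branch[i][:], value[:]...))
		} else {
			value = sha256.Sum256(append(value[:], branch[i][:]...))
		}
	}
	return bytes.Equal(value[:], root[:])
}

func main() {
	// Two-leaf tree: root = H(leaf0 || leaf1).
	leaf0 := sha256.Sum256([]byte("deposit-0"))
	leaf1 := sha256.Sum256([]byte("deposit-1"))
	root := sha256.Sum256(append(leaf0[:], leaf1[:]...))
	fmt.Println(isValidMerkleBranch(leaf0, [][32]byte{leaf1}, 1, 0, root)) // true
	fmt.Println(isValidMerkleBranch(leaf1, [][32]byte{leaf0}, 1, 0, root)) // false: wrong index
}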

@@ -18,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -102,8 +101,7 @@ func NextSyncCommittee(ctx context.Context, s state.BeaconState) (*ethpb.SyncCom
// candidate_index = active_validator_indices[shuffled_index]
// random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
// effective_balance = state.validators[candidate_index].effective_balance
// # [Modified in Electra:EIP7251]
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_byte:
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
// sync_committee_indices.append(candidate_index)
// i += 1
// return sync_committee_indices
@@ -123,11 +121,6 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]primi
cIndices := make([]primitives.ValidatorIndex, 0, syncCommitteeSize)
hashFunc := hash.CustomSHA256Hasher()
maxEB := cfg.MaxEffectiveBalanceElectra
if s.Version() < version.Electra {
maxEB = cfg.MaxEffectiveBalance
}
for i := primitives.ValidatorIndex(0); uint64(len(cIndices)) < params.BeaconConfig().SyncCommitteeSize; i++ {
if ctx.Err() != nil {
return nil, ctx.Err()
@@ -147,7 +140,7 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]primi
}
effectiveBal := v.EffectiveBalance()
if effectiveBal*maxRandomByte >= maxEB*uint64(randomByte) {
if effectiveBal*maxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
cIndices = append(cIndices, cIndex)
}
}

View File
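NextSyncCommitteeIndices above performs the spec's weighted sampling: draw a shuffled candidate and a random byte, and accept the candidate when effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte, repeating until the committee is full. A simplified, self-contained sketch of that accept/reject loop; the deterministic shuffle and hash are replaced here by a seeded PRNG, which is an assumption for illustration only:

package main

// Sketch only: a seeded PRNG replaces the spec's shuffle and hash.

import (
	"fmt"
	"math/rand"
)

const (
	maxRandomByte       = 255
	maxEffectiveBalance = uint64(32_000_000_000) // assumed: 32 ETH in Gwei
)

// sampleSyncCommittee keeps drawing candidates until size indices are
// accepted; higher effective balance means higher acceptance probability.
// Duplicates are allowed, just as in sync committee selection.
func sampleSyncCommittee(effectiveBalances []uint64, size int, seed int64) []int {
	rng := rand.New(rand.NewSource(seed))
	committee := make([]int, 0, size)
	for len(committee) < size {
		candidate := rng.Intn(len(effectiveBalances))
		randomByte := uint64(rng.Intn(maxRandomByte + 1))
		if effectiveBalances[candidate]*maxRandomByte >= maxEffectiveBalance*randomByte {
			committee = append(committee, candidate)
		}
	}
	return committee
}

func main() {
	balances := []uint64{32_000_000_000, 16_000_000_000, 1_000_000_000}
	fmt.Println(sampleSyncCommittee(balances, 5, 42))
}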

@@ -13,14 +13,13 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
)
func TestSyncCommitteeIndices_CanGet(t *testing.T) {
getState := func(t *testing.T, count uint64, vers int) state.BeaconState {
getState := func(t *testing.T, count uint64) state.BeaconState {
validators := make([]*ethpb.Validator, count)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -28,28 +27,17 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
EffectiveBalance: params.BeaconConfig().MinDepositAmount,
}
}
var st state.BeaconState
var err error
switch vers {
case version.Altair:
st, err = state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
case version.Electra:
st, err = state_native.InitializeFromProtoElectra(&ethpb.BeaconStateElectra{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
default:
t.Fatal("Unknown version")
}
st, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
require.NoError(t, st.SetValidators(validators))
return st
}
type args struct {
validatorCount uint64
epoch primitives.Epoch
state state.BeaconState
epoch primitives.Epoch
}
tests := []struct {
name string
@@ -60,32 +48,32 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
{
name: "genesis validator count, epoch 0",
args: args{
validatorCount: params.BeaconConfig().MinGenesisActiveValidatorCount,
epoch: 0,
state: getState(t, params.BeaconConfig().MinGenesisActiveValidatorCount),
epoch: 0,
},
wantErr: false,
},
{
name: "genesis validator count, epoch 100",
args: args{
validatorCount: params.BeaconConfig().MinGenesisActiveValidatorCount,
epoch: 100,
state: getState(t, params.BeaconConfig().MinGenesisActiveValidatorCount),
epoch: 100,
},
wantErr: false,
},
{
name: "less than optimal validator count, epoch 100",
args: args{
validatorCount: params.BeaconConfig().MaxValidatorsPerCommittee,
epoch: 100,
state: getState(t, params.BeaconConfig().MaxValidatorsPerCommittee),
epoch: 100,
},
wantErr: false,
},
{
name: "no active validators, epoch 100",
args: args{
validatorCount: 0, // Regression test for divide by zero. Issue #13051.
epoch: 100,
state: getState(t, 0), // Regression test for divide by zero. Issue #13051.
epoch: 100,
},
wantErr: true,
errString: "no active validator indices",
@@ -93,18 +81,13 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
for _, v := range []int{version.Altair, version.Electra} {
t.Run(version.String(v), func(t *testing.T) {
helpers.ClearCache()
st := getState(t, tt.args.validatorCount, v)
got, err := altair.NextSyncCommitteeIndices(context.Background(), st)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.Equal(t, int(params.BeaconConfig().SyncCommitteeSize), len(got))
}
})
helpers.ClearCache()
got, err := altair.NextSyncCommitteeIndices(context.Background(), tt.args.state)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.Equal(t, int(params.BeaconConfig().SyncCommitteeSize), len(got))
}
})
}

View File

@@ -28,87 +28,87 @@ import (
// process_historical_roots_update(state)
// process_participation_flag_updates(state) # [New in Altair]
// process_sync_committee_updates(state) # [New in Altair]
func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "altair.ProcessEpoch")
defer span.End()
if state == nil || state.IsNil() {
return errors.New("nil state")
return nil, errors.New("nil state")
}
vp, bp, err := InitializePrecomputeValidators(ctx, state)
if err != nil {
return err
return nil, err
}
// New in Altair.
vp, bp, err = ProcessEpochParticipation(ctx, state, bp, vp)
if err != nil {
return err
return nil, err
}
state, err = precompute.ProcessJustificationAndFinalizationPreCompute(state, bp)
if err != nil {
return errors.Wrap(err, "could not process justification")
return nil, errors.Wrap(err, "could not process justification")
}
// New in Altair.
state, vp, err = ProcessInactivityScores(ctx, state, vp)
if err != nil {
return errors.Wrap(err, "could not process inactivity updates")
return nil, errors.Wrap(err, "could not process inactivity updates")
}
// New in Altair.
state, err = ProcessRewardsAndPenaltiesPrecompute(state, bp, vp)
if err != nil {
return errors.Wrap(err, "could not process rewards and penalties")
return nil, errors.Wrap(err, "could not process rewards and penalties")
}
state, err = e.ProcessRegistryUpdates(ctx, state)
if err != nil {
return errors.Wrap(err, "could not process registry updates")
return nil, errors.Wrap(err, "could not process registry updates")
}
// Modified in Altair and Bellatrix.
proportionalSlashingMultiplier, err := state.ProportionalSlashingMultiplier()
if err != nil {
return err
return nil, err
}
state, err = e.ProcessSlashings(state, proportionalSlashingMultiplier)
if err != nil {
return err
return nil, err
}
state, err = e.ProcessEth1DataReset(state)
if err != nil {
return err
return nil, err
}
state, err = e.ProcessEffectiveBalanceUpdates(state)
if err != nil {
return err
return nil, err
}
state, err = e.ProcessSlashingsReset(state)
if err != nil {
return err
return nil, err
}
state, err = e.ProcessRandaoMixesReset(state)
if err != nil {
return err
return nil, err
}
state, err = e.ProcessHistoricalDataUpdate(state)
if err != nil {
return err
return nil, err
}
// New in Altair.
state, err = ProcessParticipationFlagUpdates(state)
if err != nil {
return err
return nil, err
}
// New in Altair.
_, err = ProcessSyncCommitteeUpdates(ctx, state)
state, err = ProcessSyncCommitteeUpdates(ctx, state)
if err != nil {
return err
return nil, err
}
return nil
return state, nil
}

View File
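ProcessEpoch above is a straight pipeline: every stage takes the state and returns either a new state or an error, and the first failure aborts the whole transition; the signature change only makes the final state an explicit return value. A generic sketch of that pattern:

package main

// Sketch only: a toy state and pipeline, not Prysm's BeaconState interfaces.

import (
	"errors"
	"fmt"
)

type state struct{ epoch uint64 }

type stage func(*state) (*state, error)

// runPipeline threads the state through each stage, stopping at the first error.
func runPipeline(s *state, stages ...stage) (*state, error) {
	if s == nil {
		return nil, errors.New("nil state")
	}
	var err error
	for _, st := range stages {
		s, err = st(s)
		if err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	advance := func(s *state) (*state, error) { s.epoch++; return s, nil }
	final, err := runPipeline(&state{epoch: 9}, advance, advance)
	if err != nil {
		panic(err)
	}
	fmt.Println(final.epoch) // 11
}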

@@ -13,9 +13,9 @@ import (
func TestProcessEpoch_CanProcess(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(context.Background(), st)
newState, err := altair.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
require.Equal(t, uint64(0), newState.Slashings()[2], "Unexpected slashed balance")
b := st.Balances()
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(b)))
@@ -45,9 +45,9 @@ func TestProcessEpoch_CanProcess(t *testing.T) {
func TestProcessEpoch_CanProcessBellatrix(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(context.Background(), st)
newState, err := altair.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
require.Equal(t, uint64(0), newState.Slashings()[2], "Unexpected slashed balance")
b := st.Balances()
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(b)))

View File

@@ -154,11 +154,11 @@ func TranslateParticipation(ctx context.Context, state state.BeaconState, atts [
if err != nil {
return nil, err
}
committee, err := helpers.BeaconCommitteeFromState(ctx, state, att.GetData().Slot, att.GetData().CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(ctx, state, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return nil, err
}
indices, err := attestation.AttestingIndices(att, committee)
indices, err := attestation.AttestingIndices(att.AggregationBits, committee)
if err != nil {
return nil, err
}

View File

@@ -55,7 +55,7 @@ func TestTranslateParticipation(t *testing.T) {
committee, err := helpers.BeaconCommitteeFromState(ctx, s, pendingAtts[0].Data.Slot, pendingAtts[0].Data.CommitteeIndex)
require.NoError(t, err)
indices, err := attestation.AttestingIndices(pendingAtts[0], committee)
indices, err := attestation.AttestingIndices(pendingAtts[0].AggregationBits, committee)
require.NoError(t, err)
for _, index := range indices {
has, err := altair.HasValidatorFlag(participation[index], params.BeaconConfig().TimelySourceFlagIndex)

View File

@@ -46,7 +46,7 @@ func ProcessAttestationsNoVerifySignature(
func VerifyAttestationNoVerifySignature(
ctx context.Context,
beaconState state.ReadOnlyBeaconState,
att ethpb.Att,
att *ethpb.Attestation,
) error {
ctx, span := trace.StartSpan(ctx, "core.VerifyAttestationNoVerifySignature")
defer span.End()
@@ -56,7 +56,7 @@ func VerifyAttestationNoVerifySignature(
}
currEpoch := time.CurrentEpoch(beaconState)
prevEpoch := time.PrevEpoch(beaconState)
data := att.GetData()
data := att.Data
if data.Target.Epoch != prevEpoch && data.Target.Epoch != currEpoch {
return fmt.Errorf(
"expected target epoch (%d) to be the previous epoch (%d) or the current epoch (%d)",
@@ -76,11 +76,11 @@ func VerifyAttestationNoVerifySignature(
}
}
if err := helpers.ValidateSlotTargetEpoch(att.GetData()); err != nil {
if err := helpers.ValidateSlotTargetEpoch(att.Data); err != nil {
return err
}
s := att.GetData().Slot
s := att.Data.Slot
minInclusionCheck := s+params.BeaconConfig().MinAttestationInclusionDelay <= beaconState.Slot()
if !minInclusionCheck {
return fmt.Errorf(
@@ -102,63 +102,27 @@ func VerifyAttestationNoVerifySignature(
)
}
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, beaconState, att.GetData().Target.Epoch)
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, beaconState, att.Data.Target.Epoch)
if err != nil {
return err
}
c := helpers.SlotCommitteeCount(activeValidatorCount)
if uint64(att.Data.CommitteeIndex) >= c {
return fmt.Errorf("committee index %d >= committee count %d", att.Data.CommitteeIndex, c)
}
var indexedAtt ethpb.IndexedAtt
if err := helpers.VerifyAttestationBitfieldLengths(ctx, beaconState, att); err != nil {
return errors.Wrap(err, "could not verify attestation bitfields")
}
if att.Version() >= version.Electra {
if att.GetData().CommitteeIndex != 0 {
return errors.New("committee index must be 0 post-Electra")
}
committeeIndices := att.CommitteeBitsVal().BitIndices()
committees := make([][]primitives.ValidatorIndex, len(committeeIndices))
participantsCount := 0
var err error
for i, ci := range committeeIndices {
if uint64(ci) >= c {
return fmt.Errorf("committee index %d >= committee count %d", ci, c)
}
committees[i], err = helpers.BeaconCommitteeFromState(ctx, beaconState, att.GetData().Slot, primitives.CommitteeIndex(ci))
if err != nil {
return err
}
participantsCount += len(committees[i])
}
if att.GetAggregationBits().Len() != uint64(participantsCount) {
return fmt.Errorf("aggregation bits count %d is different than participant count %d", att.GetAggregationBits().Len(), participantsCount)
}
indexedAtt, err = attestation.ConvertToIndexed(ctx, att, committees...)
if err != nil {
return err
}
} else {
if uint64(att.GetData().CommitteeIndex) >= c {
return fmt.Errorf("committee index %d >= committee count %d", att.GetData().CommitteeIndex, c)
}
// Verify attesting indices are correct.
committee, err := helpers.BeaconCommitteeFromState(ctx, beaconState, att.GetData().Slot, att.GetData().CommitteeIndex)
if err != nil {
return err
}
if committee == nil {
return errors.New("no committee exist for this attestation")
}
if err := helpers.VerifyBitfieldLength(att.GetAggregationBits(), uint64(len(committee))); err != nil {
return errors.Wrap(err, "failed to verify aggregation bitfield")
}
indexedAtt, err = attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
return err
}
// Verify attesting indices are correct.
committee, err := helpers.BeaconCommitteeFromState(ctx, beaconState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return err
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
return err
}
return attestation.IsValidAttestationIndices(ctx, indexedAtt)
@@ -169,7 +133,7 @@ func VerifyAttestationNoVerifySignature(
func ProcessAttestationNoVerifySignature(
ctx context.Context,
beaconState state.BeaconState,
att ethpb.Att,
att *ethpb.Attestation,
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "core.ProcessAttestationNoVerifySignature")
defer span.End()
@@ -179,15 +143,15 @@ func ProcessAttestationNoVerifySignature(
}
currEpoch := time.CurrentEpoch(beaconState)
data := att.GetData()
s := att.GetData().Slot
data := att.Data
s := att.Data.Slot
proposerIndex, err := helpers.BeaconProposerIndex(ctx, beaconState)
if err != nil {
return nil, err
}
pendingAtt := &ethpb.PendingAttestation{
Data: data,
AggregationBits: att.GetAggregationBits(),
AggregationBits: att.AggregationBits,
InclusionDelay: beaconState.Slot() - s,
ProposerIndex: proposerIndex,
}
@@ -205,6 +169,23 @@ func ProcessAttestationNoVerifySignature(
return beaconState, nil
}
// VerifyAttestationSignature converts an attestation into an indexed attestation and verifies
// the signature in that attestation.
func VerifyAttestationSignature(ctx context.Context, beaconState state.ReadOnlyBeaconState, att *ethpb.Attestation) error {
if err := helpers.ValidateNilAttestation(att); err != nil {
return err
}
committee, err := helpers.BeaconCommitteeFromState(ctx, beaconState, att.Data.Slot, att.Data.CommitteeIndex)
if err != nil {
return err
}
indexedAtt, err := attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
return err
}
return VerifyIndexedAttestation(ctx, beaconState, indexedAtt)
}
// VerifyIndexedAttestation determines the validity of an indexed attestation.
//
// Spec pseudocode definition:
@@ -222,7 +203,7 @@ func ProcessAttestationNoVerifySignature(
// domain = get_domain(state, DOMAIN_BEACON_ATTESTER, indexed_attestation.data.target.epoch)
// signing_root = compute_signing_root(indexed_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_attestation.signature)
func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBeaconState, indexedAtt ethpb.IndexedAtt) error {
func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBeaconState, indexedAtt *ethpb.IndexedAttestation) error {
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()
@@ -231,14 +212,14 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBea
}
domain, err := signing.Domain(
beaconState.Fork(),
indexedAtt.GetData().Target.Epoch,
indexedAtt.Data.Target.Epoch,
params.BeaconConfig().DomainBeaconAttester,
beaconState.GenesisValidatorsRoot(),
)
if err != nil {
return err
}
indices := indexedAtt.GetAttestingIndices()
indices := indexedAtt.AttestingIndices
var pubkeys []bls.PublicKey
for i := 0; i < len(indices); i++ {
pubkeyAtIdx := beaconState.PubkeyAtIndex(primitives.ValidatorIndex(indices[i]))

View File
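Per the quoted pseudocode, VerifyIndexedAttestation first checks that the attesting indices are non-empty, within the per-committee cap, sorted, and unique before any signature work. A self-contained sketch of that pre-signature index check, with the mainnet cap assumed:

package main

// Sketch only: the 2048 cap is the assumed mainnet MAX_VALIDATORS_PER_COMMITTEE.

import (
	"errors"
	"fmt"
)

// validAttestingIndices enforces the pre-signature checks: non-empty, unique,
// sorted ascending, within the cap, and pointing at existing validators.
func validAttestingIndices(indices []uint64, numValidators, maxPerCommittee uint64) error {
	if len(indices) == 0 {
		return errors.New("no attesting indices")
	}
	if uint64(len(indices)) > maxPerCommittee {
		return errors.New("validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE")
	}
	for i, idx := range indices {
		if idx >= numValidators {
			return fmt.Errorf("index %d out of range", idx)
		}
		if i > 0 && indices[i-1] >= idx {
			return errors.New("indices are not sorted and unique")
		}
	}
	return nil
}

func main() {
	fmt.Println(validAttestingIndices([]uint64{1, 4, 9}, 16, 2048)) // <nil>
	fmt.Println(validAttestingIndices([]uint64{4, 4, 9}, 16, 2048)) // indices are not sorted and unique
}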

@@ -44,7 +44,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices1, err := attestation.AttestingIndices(att1, committee)
attestingIndices1, err := attestation.AttestingIndices(att1.AggregationBits, committee)
require.NoError(t, err)
sigs := make([]bls.Signature, len(attestingIndices1))
for i, indice := range attestingIndices1 {
@@ -66,7 +66,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
committee, err = helpers.BeaconCommitteeFromState(context.Background(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices2, err := attestation.AttestingIndices(att2, committee)
attestingIndices2, err := attestation.AttestingIndices(att2.AggregationBits, committee)
require.NoError(t, err)
sigs = make([]bls.Signature, len(attestingIndices2))
for i, indice := range attestingIndices2 {
@@ -221,83 +221,6 @@ func TestVerifyAttestationNoVerifySignature_BadAttIdx(t *testing.T) {
require.ErrorContains(t, "committee index 100 >= committee count 1", err)
}
func TestVerifyAttestationNoVerifySignature_Electra(t *testing.T) {
var mockRoot [32]byte
copy(mockRoot[:], "hello-world")
var zeroSig [fieldparams.BLSSignatureLength]byte
beaconState, _ := util.DeterministicGenesisState(t, 100)
err := beaconState.SetSlot(beaconState.Slot() + params.BeaconConfig().MinAttestationInclusionDelay)
require.NoError(t, err)
ckp := beaconState.CurrentJustifiedCheckpoint()
copy(ckp.Root, "hello-world")
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(ckp))
require.NoError(t, beaconState.AppendCurrentEpochAttestations(&ethpb.PendingAttestation{}))
t.Run("ok", func(t *testing.T) {
aggBits := bitfield.NewBitlist(3)
aggBits.SetBitAt(1, true)
committeeBits := bitfield.NewBitvector64()
committeeBits.SetBitAt(0, true)
att := &ethpb.AttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, 32)},
},
AggregationBits: aggBits,
CommitteeBits: committeeBits,
}
att.Signature = zeroSig[:]
err = blocks.VerifyAttestationNoVerifySignature(context.TODO(), beaconState, att)
assert.NoError(t, err)
})
t.Run("non-zero committee index", func(t *testing.T) {
att := &ethpb.AttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, 32)},
CommitteeIndex: 1,
},
AggregationBits: bitfield.NewBitlist(1),
CommitteeBits: bitfield.NewBitvector64(),
}
err = blocks.VerifyAttestationNoVerifySignature(context.TODO(), beaconState, att)
assert.ErrorContains(t, "committee index must be 0 post-Electra", err)
})
t.Run("index of committee too big", func(t *testing.T) {
aggBits := bitfield.NewBitlist(3)
committeeBits := bitfield.NewBitvector64()
committeeBits.SetBitAt(63, true)
att := &ethpb.AttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, 32)},
},
AggregationBits: aggBits,
CommitteeBits: committeeBits,
}
att.Signature = zeroSig[:]
err = blocks.VerifyAttestationNoVerifySignature(context.TODO(), beaconState, att)
assert.ErrorContains(t, "committee index 63 >= committee count 1", err)
})
t.Run("wrong aggregation bits count", func(t *testing.T) {
aggBits := bitfield.NewBitlist(123)
committeeBits := bitfield.NewBitvector64()
committeeBits.SetBitAt(0, true)
att := &ethpb.AttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, 32)},
},
AggregationBits: aggBits,
CommitteeBits: committeeBits,
}
att.Signature = zeroSig[:]
err = blocks.VerifyAttestationNoVerifySignature(context.TODO(), beaconState, att)
assert.ErrorContains(t, "aggregation bits count 123 is different than participant count 3", err)
})
}
func TestConvertToIndexed_OK(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, 2*params.BeaconConfig().SlotsPerEpoch)
@@ -463,7 +386,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
sig := keys[0].Sign([]byte{'t', 'e', 's', 't'})
list := bitfield.Bitlist{0b11111}
var atts []ethpb.Att
var atts []*ethpb.Attestation
for i := uint64(0); i < 1000; i++ {
atts = append(atts, &ethpb.Attestation{
Data: &ethpb.AttestationData{
@@ -479,7 +402,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
_, err := blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
assert.ErrorContains(t, want, err)
atts = []ethpb.Att{}
atts = []*ethpb.Attestation{}
list = bitfield.Bitlist{0b10000}
for i := uint64(0); i < 1000; i++ {
atts = append(atts, &ethpb.Attestation{
@@ -578,109 +501,53 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
}
t.Run("pre-Electra", func(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
},
})
domain, err := signing.Domain(st.Fork(), st.Fork().Epoch, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
root, err := signing.ComputeSigningRoot(att1.Data, domain)
require.NoError(t, err)
var sigs []bls.Signature
for i, u := range comm1 {
att1.AggregationBits.SetBitAt(uint64(i), true)
sigs = append(sigs, keys[u].Sign(root[:]))
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 1,
},
})
root, err = signing.ComputeSigningRoot(att2.Data, domain)
require.NoError(t, err)
sigs = nil
for i, u := range comm2 {
att2.AggregationBits.SetBitAt(uint64(i), true)
sigs = append(sigs, keys[u].Sign(root[:]))
}
att2.Signature = bls.AggregateSignatures(sigs).Marshal()
set, err := blocks.AttestationSignatureBatch(ctx, st, []ethpb.Att{att1, att2})
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Multiple signatures were unable to be verified.")
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
Data: &ethpb.AttestationData{
Slot: 1,
},
})
t.Run("post-Electra", func(t *testing.T) {
st, err := util.NewBeaconStateElectra()
require.NoError(t, err)
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
domain, err := signing.Domain(st.Fork(), st.Fork().Epoch, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
root, err := signing.ComputeSigningRoot(att1.Data, domain)
require.NoError(t, err)
var sigs []bls.Signature
for i, u := range comm1 {
att1.AggregationBits.SetBitAt(uint64(i), true)
sigs = append(sigs, keys[u].Sign(root[:]))
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
commBits1 := primitives.NewAttestationCommitteeBits()
commBits1.SetBitAt(0, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
CommitteeBits: commBits1,
Data: &ethpb.AttestationData{
Slot: 1,
},
})
domain, err := signing.Domain(st.Fork(), st.Fork().Epoch, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
root, err := signing.ComputeSigningRoot(att1.Data, domain)
require.NoError(t, err)
var sigs []bls.Signature
for i, u := range comm1 {
att1.AggregationBits.SetBitAt(uint64(i), true)
sigs = append(sigs, keys[u].Sign(root[:]))
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
commBits2 := primitives.NewAttestationCommitteeBits()
commBits2.SetBitAt(1, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
CommitteeBits: commBits2,
Data: &ethpb.AttestationData{
Slot: 1,
},
})
root, err = signing.ComputeSigningRoot(att2.Data, domain)
require.NoError(t, err)
sigs = nil
for i, u := range comm2 {
att2.AggregationBits.SetBitAt(uint64(i), true)
sigs = append(sigs, keys[u].Sign(root[:]))
}
att2.Signature = bls.AggregateSignatures(sigs).Marshal()
set, err := blocks.AttestationSignatureBatch(ctx, st, []ethpb.Att{att1, att2})
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Multiple signatures were unable to be verified.")
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 1,
},
})
root, err = signing.ComputeSigningRoot(att2.Data, domain)
require.NoError(t, err)
sigs = nil
for i, u := range comm2 {
att2.AggregationBits.SetBitAt(uint64(i), true)
sigs = append(sigs, keys[u].Sign(root[:]))
}
att2.Signature = bls.AggregateSignatures(sigs).Marshal()
set, err := blocks.AttestationSignatureBatch(ctx, st, []*ethpb.Attestation{att1, att2})
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)
assert.Equal(t, true, verified, "Multiple signatures were unable to be verified.")
}
func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
@@ -740,6 +607,6 @@ func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
}
att2.Signature = bls.AggregateSignatures(sigs).Marshal()
_, err = blocks.AttestationSignatureBatch(ctx, st, []ethpb.Att{att1, att2})
_, err = blocks.AttestationSignatureBatch(ctx, st, []*ethpb.Attestation{att1, att2})
require.NoError(t, err)
}

View File

@@ -7,11 +7,13 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/slice"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/slashings"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -38,7 +40,7 @@ import (
func ProcessAttesterSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
slashings []*ethpb.AttesterSlashing,
slashFunc slashValidatorFunc,
) (state.BeaconState, error) {
var err error
@@ -55,7 +57,7 @@ func ProcessAttesterSlashings(
func ProcessAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
slashing *ethpb.AttesterSlashing,
slashFunc slashValidatorFunc,
) (state.BeaconState, error) {
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
@@ -75,7 +77,19 @@ func ProcessAttesterSlashing(
return nil, err
}
if helpers.IsSlashableValidator(val.ActivationEpoch(), val.WithdrawableEpoch(), val.Slashed(), currentEpoch) {
beaconState, err = slashFunc(ctx, beaconState, primitives.ValidatorIndex(validatorIndex))
cfg := params.BeaconConfig()
var slashingQuotient uint64
switch {
case beaconState.Version() == version.Phase0:
slashingQuotient = cfg.MinSlashingPenaltyQuotient
case beaconState.Version() == version.Altair:
slashingQuotient = cfg.MinSlashingPenaltyQuotientAltair
case beaconState.Version() >= version.Bellatrix:
slashingQuotient = cfg.MinSlashingPenaltyQuotientBellatrix
default:
return nil, errors.New("unknown state version")
}
beaconState, err = slashFunc(ctx, beaconState, primitives.ValidatorIndex(validatorIndex), slashingQuotient, cfg.ProposerRewardQuotient)
if err != nil {
return nil, errors.Wrapf(err, "could not slash validator index %d",
validatorIndex)
@@ -90,20 +104,20 @@ func ProcessAttesterSlashing(
}
// VerifyAttesterSlashing validates the attestation data in both attestations in the slashing object.
func VerifyAttesterSlashing(ctx context.Context, beaconState state.ReadOnlyBeaconState, slashing ethpb.AttSlashing) error {
func VerifyAttesterSlashing(ctx context.Context, beaconState state.ReadOnlyBeaconState, slashing *ethpb.AttesterSlashing) error {
if slashing == nil {
return errors.New("nil slashing")
}
if slashing.FirstAttestation() == nil || slashing.SecondAttestation() == nil {
if slashing.Attestation_1 == nil || slashing.Attestation_2 == nil {
return errors.New("nil attestation")
}
if slashing.FirstAttestation().GetData() == nil || slashing.SecondAttestation().GetData() == nil {
if slashing.Attestation_1.Data == nil || slashing.Attestation_2.Data == nil {
return errors.New("nil attestation data")
}
att1 := slashing.FirstAttestation()
att2 := slashing.SecondAttestation()
data1 := att1.GetData()
data2 := att2.GetData()
att1 := slashing.Attestation_1
att2 := slashing.Attestation_2
data1 := att1.Data
data2 := att2.Data
if !IsSlashableAttestationData(data1, data2) {
return errors.New("attestations are not slashable")
}
@@ -143,11 +157,11 @@ func IsSlashableAttestationData(data1, data2 *ethpb.AttestationData) bool {
}
// SlashableAttesterIndices returns the intersection of attester indices from both attestations in this slashing.
func SlashableAttesterIndices(slashing ethpb.AttSlashing) []uint64 {
if slashing == nil || slashing.FirstAttestation() == nil || slashing.SecondAttestation() == nil {
func SlashableAttesterIndices(slashing *ethpb.AttesterSlashing) []uint64 {
if slashing == nil || slashing.Attestation_1 == nil || slashing.Attestation_2 == nil {
return nil
}
indices1 := slashing.FirstAttestation().GetAttestingIndices()
indices2 := slashing.SecondAttestation().GetAttestingIndices()
indices1 := slashing.Attestation_1.AttestingIndices
indices2 := slashing.Attestation_2.AttestingIndices
return slice.IntersectionUint64(indices1, indices2)
}

View File
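IsSlashableAttestationData, referenced above, encodes the two slashing conditions from the spec: a double vote (different data, same target epoch) or a surround vote (one attestation's source and target strictly enclose the other's). A minimal sketch using plain epoch numbers:

package main

// Sketch only: attData is a cut-down stand-in for ethpb.AttestationData.

import "fmt"

type attData struct {
	sourceEpoch uint64
	targetEpoch uint64
	root        string // stand-in for the rest of the attestation data
}

// isSlashable reports whether the pair is a double vote or a surround vote.
func isSlashable(d1, d2 attData) bool {
	doubleVote := d1 != d2 && d1.targetEpoch == d2.targetEpoch
	surroundVote := d1.sourceEpoch < d2.sourceEpoch && d2.targetEpoch < d1.targetEpoch
	return doubleVote || surroundVote
}

func main() {
	a := attData{sourceEpoch: 2, targetEpoch: 5, root: "A"}
	b := attData{sourceEpoch: 3, targetEpoch: 4, root: "B"}
	fmt.Println(isSlashable(a, b)) // true: a surrounds b
	c := attData{sourceEpoch: 2, targetEpoch: 5, root: "C"}
	fmt.Println(isSlashable(a, c)) // true: double vote at the same target epoch
}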

@@ -57,11 +57,7 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
AttesterSlashings: slashings,
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
assert.ErrorContains(t, "attestations are not slashable", err)
}
@@ -96,11 +92,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -152,11 +144,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -225,11 +213,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusAltair(t *testing.T) {
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -298,11 +282,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -371,11 +351,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatusCapella(t *testing.T) {
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()


@@ -216,7 +216,7 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer.Fuzz(a)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.SlashValidator)
r, err := ProcessAttesterSlashings(ctx, s, []*ethpb.AttesterSlashing{a}, v.SlashValidator)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
@@ -297,6 +297,75 @@ func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
}
}
func TestFuzzVerifyAttestation_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
attestation := &ethpb.Attestation{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attestation)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
err = VerifyAttestationSignature(ctx, s, attestation)
_ = err
}
}
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposits := make([]*ethpb.Deposit, 100)
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
fuzzer.Fuzz(deposits[i])
}
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessDeposits(ctx, s, deposits)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessPreGenesisDeposits(ctx, s, []*ethpb.Deposit{deposit})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, _, err := ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
}
}
func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
@@ -306,7 +375,7 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
err = VerifyDeposit(s, deposit)
err = verifyDeposit(s, deposit)
_ = err
}
}


@@ -91,11 +91,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
},
}
ss := make([]ethpb.AttSlashing, len(b.Block.Body.AttesterSlashings))
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, b.Block.Body.AttesterSlashings, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {


@@ -5,6 +5,7 @@ import (
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -16,6 +17,24 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// ProcessPreGenesisDeposits processes a deposit for the beacon state before chainstart.
func ProcessPreGenesisDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
var err error
beaconState, err = ProcessDeposits(ctx, beaconState, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
beaconState, err = ActivateValidatorWithEffectiveBalance(beaconState, deposits)
if err != nil {
return nil, err
}
return beaconState, nil
}
// ActivateValidatorWithEffectiveBalance updates validator's effective balance, and if it's above MaxEffectiveBalance, validator becomes active in genesis.
func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposits []*ethpb.Deposit) (state.BeaconState, error) {
for _, d := range deposits {
@@ -47,6 +66,38 @@ func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposi
return beaconState, nil
}
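
ActivateValidatorWithEffectiveBalance implements the genesis activation rule: a pre-genesis validator becomes eligible and active at epoch 0 once its effective balance has reached MaxEffectiveBalance. A hedged sketch of that rule with a local stand-in struct (field names are illustrative, not the exact Prysm validator type):

type genesisValidator struct {
    EffectiveBalance           uint64
    ActivationEligibilityEpoch uint64
    ActivationEpoch            uint64
}

// activateIfFullyFunded marks a genesis validator active once its effective
// balance equals the configured maximum (effective balances are capped, so
// "equals" and "at least" coincide here).
func activateIfFullyFunded(v *genesisValidator, maxEffectiveBalance uint64) {
    if v.EffectiveBalance == maxEffectiveBalance {
        v.ActivationEligibilityEpoch = 0 // genesis epoch
        v.ActivationEpoch = 0            // genesis epoch
    }
}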
// ProcessDeposits is one of the operations performed on each processed
// beacon block to verify queued validators from the Ethereum 1.0 Deposit Contract
// into the beacon chain.
//
// Spec pseudocode definition:
//
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
// Attempt to verify all deposit signatures at once, if this fails then fall back to processing
// individual deposits with signature verification enabled.
batchVerified, err := BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, err
}
for _, d := range deposits {
if d == nil || d.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, _, err = ProcessDeposit(beaconState, d, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(d.Data.PublicKey))
}
}
return beaconState, nil
}
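
The strategy in ProcessDeposits is to verify every deposit signature in one aggregate pass and fall back to per-deposit verification only when that pass does not succeed, so a single malformed signature cannot force rejecting the whole block. A schematic of that control flow; the deposit type and both verify callbacks are placeholders, not Prysm APIs:

type depositStub struct{ signature []byte }

// processDepositsSketch: one batched signature check first, individual checks
// only if the batch did not verify; deposits failing both are skipped.
func processDepositsSketch(
    deps []depositStub,
    verifyAll func([]depositStub) bool,
    verifyOne func(depositStub) bool,
) []depositStub {
    allValid := verifyAll(deps) // single aggregate BLS verification
    accepted := make([]depositStub, 0, len(deps))
    for _, d := range deps {
        if allValid || verifyOne(d) { // fallback to per-deposit verification
            accepted = append(accepted, d)
        }
    }
    return accepted
}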
// BatchVerifyDepositsSignatures batch verifies deposit signatures.
func BatchVerifyDepositsSignatures(ctx context.Context, deposits []*ethpb.Deposit) (bool, error) {
var err error
@@ -63,28 +114,102 @@ func BatchVerifyDepositsSignatures(ctx context.Context, deposits []*ethpb.Deposi
return verified, nil
}
// IsValidDepositSignature returns whether deposit_data is valid
// def is_valid_deposit_signature(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> bool:
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
//
// deposit_message = DepositMessage( pubkey=pubkey, withdrawal_credentials=withdrawal_credentials, amount=amount, )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// return bls.Verify(pubkey, signing_root, signature)
func IsValidDepositSignature(data *ethpb.Deposit_Data) (bool, error) {
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
if err != nil {
return false, err
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// pubkey = deposit.data.pubkey
// amount = deposit.data.amount
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if not bls.Verify(pubkey, signing_root, deposit.data.signature):
// return
//
// # Add validator and balance entries
// state.validators.append(get_validator_from_deposit(state, deposit))
// state.balances.append(amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, bool, error) {
var newValidator bool
if err := verifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, newValidator, err
}
return nil, newValidator, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := verifyDepositDataSigningRoot(data, domain); err != nil {
// Ignore this error as in the spec pseudo code.
log.WithError(err).Debug("Skipping deposit: could not verify deposit data signature")
return false, nil
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, newValidator, err
}
return true, nil
pubKey := deposit.Data.PublicKey
amount := deposit.Data.Amount
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
if err != nil {
return nil, newValidator, err
}
if err := verifyDepositDataSigningRoot(deposit.Data, domain); err != nil {
// Ignore this error as in the spec pseudo code.
log.WithError(err).Debug("Skipping deposit: could not verify deposit data signature")
return beaconState, newValidator, nil
}
}
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
if err := beaconState.AppendValidator(&ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: deposit.Data.WithdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}); err != nil {
return nil, newValidator, err
}
newValidator = true
if err := beaconState.AppendBalance(amount); err != nil {
return nil, newValidator, err
}
} else if err := helpers.IncreaseBalance(beaconState, index, amount); err != nil {
return nil, newValidator, err
}
return beaconState, newValidator, nil
}
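
The tail of ProcessDeposit is the spec's two-way branch: an unknown pubkey appends a brand-new validator and balance entry, while a known pubkey is simply topped up. A toy model of that branch, with a map from pubkey to index standing in for the state's registry lookup (purely illustrative, not the beacon state API):

// applyDepositSketch returns the updated balances after one deposit.
func applyDepositSketch(indexByPubkey map[string]int, balances []uint64, pubkey []byte, amount uint64) []uint64 {
    if idx, known := indexByPubkey[string(pubkey)]; known {
        balances[idx] += amount // repeated deposit: increase existing balance
        return balances
    }
    indexByPubkey[string(pubkey)] = len(balances) // new validator entry
    return append(balances, amount)
}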
// VerifyDeposit verifies the deposit data and signature given the beacon state and deposit information
func VerifyDeposit(beaconState state.ReadOnlyBeaconState, deposit *ethpb.Deposit) error {
func verifyDeposit(beaconState state.ReadOnlyBeaconState, deposit *ethpb.Deposit) error {
// Verify Merkle proof of deposit and deposit trie root.
if deposit == nil || deposit.Data == nil {
return errors.New("received nil deposit or nil deposit data")


@@ -9,42 +9,59 @@ import (
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
// Same validator created 3 valid deposits within the same block
dep, _, err := util.DeterministicDepositsAndKeysSameValidator(3)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(dep))
require.NoError(t, err)
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
// 3 deposits from the same validator
Deposits: []*ethpb.Deposit{dep[0], dep[1], dep[2]},
},
}
leaf, err := deposit.Data.HashTreeRoot()
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
require.NoError(t, err, "Expected block deposits to process correctly")
deposit.Proof = proof
require.NoError(t, err)
ok, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, true, ok)
assert.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
func TestVerifyDeposit_MerkleBranchFailsVerification(t *testing.T) {
func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
}
leaf, err := deposit.Data.HashTreeRoot()
@@ -57,7 +74,13 @@ func TestVerifyDeposit_MerkleBranchFailsVerification(t *testing.T) {
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
beaconState, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{deposit},
},
}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Eth1Data: &ethpb.Eth1Data{
DepositRoot: []byte{0},
BlockHash: []byte{1},
@@ -65,31 +88,312 @@ func TestVerifyDeposit_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
err = blocks.VerifyDeposit(beaconState, deposit)
require.ErrorContains(t, want, err)
_, err = blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}
func TestIsValidDepositSignature_Ok(t *testing.T) {
func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(dep))
require.NoError(t, err)
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{dep[0]},
},
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
require.NoError(t, err, "Expected block deposits to process correctly")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
"Expected state validator balances index 0 to equal %d, received %d",
dep[0].Data.Amount,
newState.Balances()[1],
)
}
}
func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
depositData := &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 0,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
}
dm := &ethpb.DepositMessage{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
Amount: 0,
}
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(dm, domain)
sr, err := signing.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 32))
require.NoError(t, err)
sig := sk.Sign(sr[:])
depositData.Signature = sig.Marshal()
valid, err := blocks.IsValidDepositSignature(depositData)
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, valid)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
b := util.NewBeaconBlock()
b.Block = &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Deposits: []*ethpb.Deposit{deposit},
},
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1, 2, 3},
},
{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: []byte{1},
},
}
balances := []uint64{0, 50}
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
})
require.NoError(t, err)
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
require.NoError(t, err, "Process deposit failed")
assert.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
func TestProcessDeposit_AddsNewValidatorDeposit(t *testing.T) {
// Similar to TestProcessDeposits_AddsNewValidatorDeposit except that this test directly calls ProcessDeposit
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
eth1Data, err := util.DeterministicEth1Data(len(dep))
require.NoError(t, err)
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, isNewValidator, err := blocks.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Process deposit failed")
assert.Equal(t, true, isNewValidator, "Expected isNewValidator to be true")
assert.Equal(t, 2, len(newState.Validators()), "Expected validator list to have length 2")
assert.Equal(t, 2, len(newState.Balances()), "Expected validator balances list to have length 2")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
"Expected state validator balances index 1 to equal %d, received %d",
dep[0].Data.Amount,
newState.Balances()[1],
)
}
}
func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
// Same test settings as in TestProcessDeposit_AddsNewValidatorDeposit, except that we use an invalid signature
dep, _, err := util.DeterministicDepositsAndKeys(1)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, isNewValidator, err := blocks.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
assert.Equal(t, false, isNewValidator, "Expected isNewValidator to be false")
if newState.Eth1DepositIndex() != 1 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 1 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 1 {
t.Errorf("Expected validator list to have length 1, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 1 {
t.Errorf("Expected validator balances list to have length 1, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
dep, _, err := util.DeterministicDepositsAndKeys(100)
require.NoError(t, err)
dep[0].Data.Signature = make([]byte, 96)
dt, _, err := util.DepositTrieFromDeposits(dep)
require.NoError(t, err)
for i := range dep {
proof, err := dt.MerkleProof(i)
require.NoError(t, err)
dep[i].Proof = proof
}
root, err := dt.HashTreeRoot()
require.NoError(t, err)
eth1Data := &ethpb.Eth1Data{
DepositRoot: root[:],
DepositCount: 1,
}
registry := []*ethpb.Validator{
{
PublicKey: []byte{1},
WithdrawalCredentials: []byte{1, 2, 3},
},
}
balances := []uint64{0}
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: eth1Data,
Fork: &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
},
})
require.NoError(t, err)
newState, err := blocks.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
require.Equal(t, false, ok, "bad pubkey should not exist in state")
for i := 1; i < newState.NumValidators(); i++ {
val, err := newState.ValidatorAtIndex(primitives.ValidatorIndex(i))
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance, "unequal effective balance")
require.Equal(t, primitives.Epoch(0), val.ActivationEpoch)
require.Equal(t, primitives.Epoch(0), val.ActivationEligibilityEpoch)
}
if newState.Eth1DepositIndex() != 100 {
t.Errorf(
"Expected Eth1DepositIndex to be increased by 99 after processing an invalid deposit, received change: %v",
newState.Eth1DepositIndex(),
)
}
if len(newState.Validators()) != 100 {
t.Errorf("Expected validator list to have length 100, received: %v", len(newState.Validators()))
}
if len(newState.Balances()) != 100 {
t.Errorf("Expected validator balances list to have length 100, received: %v", len(newState.Balances()))
}
if newState.Balances()[0] != 0 {
t.Errorf("Expected validator balance at index 0 to stay 0, received: %v", newState.Balances()[0])
}
}
func TestProcessDeposit_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
Amount: 1000,
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
}
sr, err := signing.ComputeSigningRoot(deposit.Data, bytesutil.ToBytes(3, 32))
require.NoError(t, err)
sig := sk.Sign(sr[:])
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
registry := []*ethpb.Validator{
{
PublicKey: []byte{1, 2, 3},
},
{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: []byte{1},
},
}
balances := []uint64{0, 50}
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Balances: balances,
Eth1Data: &ethpb.Eth1Data{
DepositRoot: root[:],
BlockHash: root[:],
},
})
require.NoError(t, err)
newState, isNewValidator, err := blocks.ProcessDeposit(beaconState, deposit, true /*verifySignature*/)
require.NoError(t, err, "Process deposit failed")
assert.Equal(t, false, isNewValidator, "Expected isNewValidator to be false")
assert.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}


@@ -58,9 +58,10 @@ func AreEth1DataEqual(a, b *ethpb.Eth1Data) bool {
// votes to see if they match the eth1data.
func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb.Eth1Data) (bool, error) {
voteCount := uint64(0)
data = ethpb.CopyETH1Data(data)
for _, vote := range beaconState.Eth1DataVotes() {
if AreEth1DataEqual(vote, data.Copy()) {
if AreEth1DataEqual(vote, data) {
voteCount++
}
}
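
Per the spec, a candidate eth1data has enough support when it holds a strict majority of the votes collected over the eth1 voting period, i.e. voteCount * 2 > slots in the period. A hedged restatement of that threshold with the period length passed in explicitly rather than read from config:

// hasEnoughSupport reports whether voteCount is a strict majority of the
// eth1 voting period (EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH slots).
func hasEnoughSupport(voteCount, slotsPerVotingPeriod uint64) bool {
    return voteCount*2 > slotsPerVotingPeriod
}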


@@ -186,7 +186,7 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
if len(newETH1DataVotes) <= 1 {
t.Error("Expected new ETH1 data votes to have length > 1")
}
if !proto.Equal(beaconState.Eth1Data(), b.Block.Body.Eth1Data.Copy()) {
if !proto.Equal(beaconState.Eth1Data(), ethpb.CopyETH1Data(b.Block.Body.Eth1Data)) {
t.Errorf(
"Expected latest eth1 data to have been set to %v, received %v",
b.Block.Body.Eth1Data,


@@ -98,8 +98,6 @@ func ProcessVoluntaryExits(
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Only exit validator if it has no pending withdrawals in the queue
// assert get_pending_balance_to_withdraw(state, voluntary_exit.validator_index) == 0 # [New in Electra:EIP7251]
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
@@ -115,6 +113,7 @@ func VerifyExitAndSignature(
return errors.New("nil exit")
}
currentSlot := state.Slot()
fork := state.Fork()
genesisRoot := state.GenesisValidatorsRoot()
@@ -129,7 +128,7 @@ func VerifyExitAndSignature(
}
exit := signed.Exit
if err := verifyExitConditions(state, validator, exit); err != nil {
if err := verifyExitConditions(validator, currentSlot, exit); err != nil {
return err
}
domain, err := signing.Domain(fork, exit.Epoch, params.BeaconConfig().DomainVoluntaryExit, genesisRoot)
@@ -158,16 +157,14 @@ func VerifyExitAndSignature(
// assert get_current_epoch(state) >= voluntary_exit.epoch
// # Verify the validator has been active long enough
// assert get_current_epoch(state) >= validator.activation_epoch + SHARD_COMMITTEE_PERIOD
// # Only exit validator if it has no pending withdrawals in the queue
// assert get_pending_balance_to_withdraw(state, voluntary_exit.validator_index) == 0 # [New in Electra:EIP7251]
// # Verify signature
// domain = get_domain(state, DOMAIN_VOLUNTARY_EXIT, voluntary_exit.epoch)
// signing_root = compute_signing_root(voluntary_exit, domain)
// assert bls.Verify(validator.pubkey, signing_root, signed_voluntary_exit.signature)
// # Initiate exit
// initiate_validator_exit(state, voluntary_exit.validator_index)
func verifyExitConditions(st state.ReadOnlyBeaconState, validator state.ReadOnlyValidator, exit *ethpb.VoluntaryExit) error {
currentEpoch := slots.ToEpoch(st.Slot())
func verifyExitConditions(validator state.ReadOnlyValidator, currentSlot primitives.Slot, exit *ethpb.VoluntaryExit) error {
currentEpoch := slots.ToEpoch(currentSlot)
// Verify the validator is active.
if !helpers.IsActiveValidatorUsingTrie(validator, currentEpoch) {
return errors.New("non-active validator cannot exit")
@@ -190,17 +187,5 @@ func verifyExitConditions(st state.ReadOnlyBeaconState, validator state.ReadOnly
validator.ActivationEpoch()+params.BeaconConfig().ShardCommitteePeriod,
)
}
if st.Version() >= version.Electra {
// Only exit validator if it has no pending withdrawals in the queue.
ok, err := st.HasPendingBalanceToWithdraw(exit.ValidatorIndex)
if err != nil {
return fmt.Errorf("unable to retrieve pending balance to withdraw for validator %d: %w", exit.ValidatorIndex, err)
}
if ok {
return fmt.Errorf("validator %d must have no pending balance to withdraw", exit.ValidatorIndex)
}
}
return nil
}
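
verifyExitConditions mirrors the spec's eligibility rules: the validator must currently be active, must not have already initiated an exit, the exit's epoch must have been reached, and the validator must have been active for at least ShardCommitteePeriod epochs. A compact, hedged restatement using plain epoch numbers (farFuture stands in for FAR_FUTURE_EPOCH):

func exitAllowed(current, activation, exitEpoch, requestEpoch, shardCommitteePeriod, farFuture uint64) bool {
    isActive := activation <= current && current < exitEpoch // is_active_validator
    notExiting := exitEpoch == farFuture                     // exit not yet initiated
    epochReached := current >= requestEpoch                  // voluntary_exit.epoch reached
    longEnough := current >= activation+shardCommitteePeriod // active long enough
    return isActive && notExiting && epochReached && longEnough
}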


@@ -135,11 +135,8 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
}
func TestVerifyExitAndSignature(t *testing.T) {
// Remove after electra fork epoch is defined.
cfg := params.BeaconConfig()
cfg.ElectraForkEpoch = cfg.DenebForkEpoch * 2
params.SetActiveTestCleanup(t, cfg)
// End remove section.
undo := util.HackDenebMaxuint(t)
defer undo()
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
tests := []struct {
@@ -172,7 +169,7 @@ func TestVerifyExitAndSignature(t *testing.T) {
wantErr: "nil exit",
},
{
name: "Happy Path phase0",
name: "Happy Path",
setup: func() (*ethpb.Validator, *ethpb.SignedVoluntaryExit, state.ReadOnlyBeaconState, error) {
fork := &ethpb.Fork{
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
@@ -280,52 +277,6 @@ func TestVerifyExitAndSignature(t *testing.T) {
return validator, signedExit, bs, nil
},
},
{
name: "EIP-7251 - pending balance to withdraw must be zero",
setup: func() (*ethpb.Validator, *ethpb.SignedVoluntaryExit, state.ReadOnlyBeaconState, error) {
fork := &ethpb.Fork{
PreviousVersion: params.BeaconConfig().DenebForkVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: params.BeaconConfig().ElectraForkEpoch,
}
signedExit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: params.BeaconConfig().DenebForkEpoch + 1,
ValidatorIndex: 0,
},
}
electraSlot, err := slots.EpochStart(params.BeaconConfig().ElectraForkEpoch)
require.NoError(t, err)
bs, keys := util.DeterministicGenesisState(t, 1)
bs, err = state_native.InitializeFromProtoUnsafeElectra(&ethpb.BeaconStateElectra{
GenesisValidatorsRoot: bs.GenesisValidatorsRoot(),
Fork: fork,
Slot: electraSlot,
Validators: bs.Validators(),
})
if err != nil {
return nil, nil, nil, err
}
validator := bs.Validators()[0]
validator.ActivationEpoch = 1
err = bs.UpdateValidatorAtIndex(0, validator)
require.NoError(t, err)
sb, err := signing.ComputeDomainAndSign(bs, signedExit.Exit.Epoch, signedExit.Exit, params.BeaconConfig().DomainVoluntaryExit, keys[0])
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
signedExit.Signature = sig.Marshal()
// Give validator a pending balance to withdraw.
require.NoError(t, bs.AppendPendingPartialWithdrawal(&ethpb.PendingPartialWithdrawal{
Index: 0,
Amount: 500,
}))
return validator, signedExit, bs, nil
},
wantErr: "validator 0 must have no pending balance to withdraw",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {


@@ -1,5 +1,3 @@
package blocks
var ProcessBLSToExecutionChange = processBLSToExecutionChange
var VerifyBlobCommitmentCount = verifyBlobCommitmentCount


@@ -163,42 +163,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateElectra:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockElectra{
Block: &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadElectra{
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{ // Deneb difference.
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),


@@ -2,13 +2,11 @@ package blocks
import (
"bytes"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -202,13 +200,13 @@ func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) err
// block_hash=payload.block_hash,
// transactions_root=hash_tree_root(payload.transactions),
// )
func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) (state.BeaconState, error) {
payload, err := body.Execution()
if err != nil {
return nil, err
}
if err := verifyBlobCommitmentCount(body); err != nil {
return nil, err
func ProcessPayload(st state.BeaconState, payload interfaces.ExecutionData) (state.BeaconState, error) {
var err error
if st.Version() >= version.Capella {
st, err = ProcessWithdrawals(st, payload)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err := ValidatePayloadWhenMergeCompletes(st, payload); err != nil {
return nil, err
@@ -222,20 +220,70 @@ func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBod
return st, nil
}
func verifyBlobCommitmentCount(body interfaces.ReadOnlyBeaconBlockBody) error {
if body.Version() < version.Deneb {
return nil
}
kzgs, err := body.BlobKzgCommitments()
// ValidatePayloadHeaderWhenMergeCompletes validates the payload header when the merge completes.
func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header interfaces.ExecutionData) error {
// Skip validation if the state is not merge compatible.
complete, err := IsMergeTransitionComplete(st)
if err != nil {
return err
}
if len(kzgs) > field_params.MaxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
if !complete {
return nil
}
// Validate current header's parent hash matches state header's block hash.
h, err := st.LatestExecutionPayloadHeader()
if err != nil {
return err
}
if !bytes.Equal(header.ParentHash(), h.BlockHash()) {
return ErrInvalidPayloadBlockHash
}
return nil
}
// ValidatePayloadHeader validates the payload header.
func ValidatePayloadHeader(st state.BeaconState, header interfaces.ExecutionData) error {
// Validate header's random mix matches with state in current epoch
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
return err
}
if !bytes.Equal(header.PrevRandao(), random) {
return ErrInvalidPayloadPrevRandao
}
// Validate header's timestamp matches with state in current slot.
t, err := slots.ToTime(st.GenesisTime(), st.Slot())
if err != nil {
return err
}
if header.Timestamp() != uint64(t.Unix()) {
return ErrInvalidPayloadTimeStamp
}
return nil
}
// ProcessPayloadHeader processes the payload header.
func ProcessPayloadHeader(st state.BeaconState, header interfaces.ExecutionData) (state.BeaconState, error) {
var err error
if st.Version() >= version.Capella {
st, err = ProcessWithdrawals(st, header)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err := ValidatePayloadHeaderWhenMergeCompletes(st, header); err != nil {
return nil, err
}
if err := ValidatePayloadHeader(st, header); err != nil {
return nil, err
}
if err := st.SetLatestExecutionPayloadHeader(header); err != nil {
return nil, err
}
return st, nil
}
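
The two header checks above are deliberately narrow: the payload's prev_randao must equal the state's RANDAO mix for the current epoch, and its timestamp must equal the wall-clock time of the state's slot, i.e. genesis_time + slot * SECONDS_PER_SLOT in spec terms. A hedged sketch of the timestamp side of that check (plain integers, no state accessors):

// expectedPayloadTime is the only timestamp a payload may carry for a slot.
func expectedPayloadTime(genesisTime, slot, secondsPerSlot uint64) uint64 {
    return genesisTime + slot*secondsPerSlot
}

// validPayloadTime reports whether a payload timestamp matches its slot.
func validPayloadTime(payloadTime, genesisTime, slot, secondsPerSlot uint64) bool {
    return payloadTime == expectedPayloadTime(genesisTime, slot, secondsPerSlot)
}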
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte


@@ -1,7 +1,7 @@
package blocks_test
import (
"fmt"
"math/big"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
@@ -14,7 +14,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
@@ -583,18 +582,14 @@ func Test_ProcessPayload(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
body, err := consensusblocks.NewBeaconBlockBody(&ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: tt.payload,
})
wrappedPayload, err := consensusblocks.WrappedExecutionPayload(tt.payload)
require.NoError(t, err)
st, err := blocks.ProcessPayload(st, body)
st, err := blocks.ProcessPayload(st, wrappedPayload)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)
payload, err := body.Execution()
require.NoError(t, err)
want, err := consensusblocks.PayloadToHeader(payload)
want, err := consensusblocks.PayloadToHeader(wrappedPayload)
require.Equal(t, tt.err, err)
h, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
@@ -615,15 +610,13 @@ func Test_ProcessPayloadCapella(t *testing.T) {
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
payload.PrevRandao = random
body, err := consensusblocks.NewBeaconBlockBody(&ethpb.BeaconBlockBodyCapella{
ExecutionPayload: payload,
})
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload, big.NewInt(0))
require.NoError(t, err)
_, err = blocks.ProcessPayload(st, body)
_, err = blocks.ProcessPayload(st, wrapped)
require.NoError(t, err)
}
func Test_ProcessPayload_Blinded(t *testing.T) {
func Test_ProcessPayloadHeader(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
@@ -671,13 +664,7 @@ func Test_ProcessPayload_Blinded(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p, ok := tt.header.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
body, err := consensusblocks.NewBeaconBlockBody(&ethpb.BlindedBeaconBlockBodyBellatrix{
ExecutionPayloadHeader: p,
})
require.NoError(t, err)
st, err := blocks.ProcessPayload(st, body)
st, err := blocks.ProcessPayloadHeader(st, tt.header)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
@@ -742,7 +729,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err = blocks.ValidatePayload(st, tt.header)
err = blocks.ValidatePayloadHeader(st, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -799,7 +786,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err = blocks.ValidatePayloadWhenMergeCompletes(tt.state, tt.header)
err = blocks.ValidatePayloadHeaderWhenMergeCompletes(tt.state, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -887,7 +874,7 @@ func emptyPayloadHeaderCapella() (interfaces.ExecutionData, error) {
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
})
}, big.NewInt(0))
}
func emptyPayload() *enginev1.ExecutionPayload {
@@ -920,15 +907,3 @@ func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
Withdrawals: make([]*enginev1.Withdrawal, 0),
}
}
func TestVerifyBlobCommitmentCount(t *testing.T) {
b := &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}
rb, err := consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Body()))
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, fieldparams.MaxBlobsPerBlock+1)}}
rb, err = consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", fieldparams.MaxBlobsPerBlock+1), blocks.VerifyBlobCommitmentCount(rb.Body()))
}


@@ -12,14 +12,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"google.golang.org/protobuf/proto"
)
type slashValidatorFunc func(
ctx context.Context,
st state.BeaconState,
vid primitives.ValidatorIndex) (state.BeaconState, error)
type slashValidatorFunc func(ctx context.Context, st state.BeaconState, vid primitives.ValidatorIndex, penaltyQuotient, proposerRewardQuotient uint64) (state.BeaconState, error)
// ProcessProposerSlashings is one of the operations performed
// on each processed beacon block to slash proposers based on
@@ -77,7 +75,19 @@ func ProcessProposerSlashing(
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
beaconState, err = slashFunc(ctx, beaconState, slashing.Header_1.Header.ProposerIndex)
cfg := params.BeaconConfig()
var slashingQuotient uint64
switch {
case beaconState.Version() == version.Phase0:
slashingQuotient = cfg.MinSlashingPenaltyQuotient
case beaconState.Version() == version.Altair:
slashingQuotient = cfg.MinSlashingPenaltyQuotientAltair
case beaconState.Version() >= version.Bellatrix:
slashingQuotient = cfg.MinSlashingPenaltyQuotientBellatrix
default:
return nil, errors.New("unknown state version")
}
beaconState, err = slashFunc(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, slashingQuotient, cfg.ProposerRewardQuotient)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
}
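
The switch above only picks the fork-specific MinSlashingPenaltyQuotient; the slash function then applies it as an immediate penalty of effective_balance / quotient and (in phase0 terms) pays the including proposer a share of the whistleblower reward derived from WhistleBlowerRewardQuotient and ProposerRewardQuotient. A hedged sketch of that arithmetic (phase0 formulas only; later forks weight the proposer share differently):

// slashingPenalty is the immediate balance reduction applied to the slashed
// proposer: effective balance divided by the fork's penalty quotient.
func slashingPenalty(effectiveBalance, minSlashingPenaltyQuotient uint64) uint64 {
    return effectiveBalance / minSlashingPenaltyQuotient
}

// proposerReward is the phase0 share of the whistleblower reward paid to the
// proposer who included the slashing.
func proposerReward(effectiveBalance, whistleblowerRewardQuotient, proposerRewardQuotient uint64) uint64 {
    whistleblowerReward := effectiveBalance / whistleblowerRewardQuotient
    return whistleblowerReward / proposerRewardQuotient
}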


@@ -179,7 +179,7 @@ func randaoSigningData(ctx context.Context, beaconState state.ReadOnlyBeaconStat
func createAttestationSignatureBatch(
ctx context.Context,
beaconState state.ReadOnlyBeaconState,
atts []ethpb.Att,
atts []*ethpb.Attestation,
domain []byte,
) (*bls.SignatureBatch, error) {
if len(atts) == 0 {
@@ -191,26 +191,31 @@ func createAttestationSignatureBatch(
msgs := make([][32]byte, len(atts))
descs := make([]string, len(atts))
for i, a := range atts {
sigs[i] = a.GetSignature()
committees, err := helpers.AttestationCommittees(ctx, beaconState, a)
sigs[i] = a.Signature
c, err := helpers.BeaconCommitteeFromState(ctx, beaconState, a.Data.Slot, a.Data.CommitteeIndex)
if err != nil {
return nil, err
}
ia, err := attestation.ConvertToIndexed(ctx, a, committees...)
ia, err := attestation.ConvertToIndexed(ctx, a, c)
if err != nil {
return nil, err
}
if err := attestation.IsValidAttestationIndices(ctx, ia); err != nil {
return nil, err
}
indices := ia.GetAttestingIndices()
aggP, err := beaconState.AggregateKeyFromIndices(indices)
indices := ia.AttestingIndices
pubkeys := make([][]byte, len(indices))
for i := 0; i < len(indices); i++ {
pubkeyAtIdx := beaconState.PubkeyAtIndex(primitives.ValidatorIndex(indices[i]))
pubkeys[i] = pubkeyAtIdx[:]
}
aggP, err := bls.AggregatePublicKeys(pubkeys)
if err != nil {
return nil, err
}
pks[i] = aggP
root, err := signing.ComputeSigningRoot(ia.GetData(), domain)
root, err := signing.ComputeSigningRoot(ia.Data, domain)
if err != nil {
return nil, errors.Wrap(err, "could not get signing root of object")
}
@@ -228,7 +233,7 @@ func createAttestationSignatureBatch(
// AttestationSignatureBatch retrieves all the related attestation signature data such as the relevant public keys,
// signatures and attestation signing data and collate it into a signature batch object.
func AttestationSignatureBatch(ctx context.Context, beaconState state.ReadOnlyBeaconState, atts []ethpb.Att) (*bls.SignatureBatch, error) {
func AttestationSignatureBatch(ctx context.Context, beaconState state.ReadOnlyBeaconState, atts []*ethpb.Attestation) (*bls.SignatureBatch, error) {
if len(atts) == 0 {
return bls.NewSet(), nil
}
@@ -238,10 +243,10 @@ func AttestationSignatureBatch(ctx context.Context, beaconState state.ReadOnlyBe
dt := params.BeaconConfig().DomainBeaconAttester
// Split attestations by fork. Note: the signature domain will differ based on the fork.
var preForkAtts []ethpb.Att
var postForkAtts []ethpb.Att
var preForkAtts []*ethpb.Attestation
var postForkAtts []*ethpb.Attestation
for _, a := range atts {
if slots.ToEpoch(a.GetData().Slot) < fork.Epoch {
if slots.ToEpoch(a.Data.Slot) < fork.Epoch {
preForkAtts = append(preForkAtts, a)
} else {
postForkAtts = append(postForkAtts, a)
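
Conceptually, createAttestationSignatureBatch turns each attestation into a (signature, aggregate public key, signing root) triple so that all attestations in a block can be verified in one batched BLS operation. A toy container showing how such a batch is assembled; the types here are illustrative and not Prysm's bls.SignatureBatch:

// sigBatchSketch holds parallel slices: one verification triple per attestation.
type sigBatchSketch struct {
    signatures [][]byte
    publicKeys [][]byte // one aggregate key of the attesting validators each
    messages   [][32]byte
}

// add appends one attestation's verification triple to the batch.
func (b *sigBatchSketch) add(sig, aggregatePubkey []byte, signingRoot [32]byte) {
    b.signatures = append(b.signatures, sig)
    b.publicKeys = append(b.publicKeys, aggregatePubkey)
    b.messages = append(b.messages, signingRoot)
}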


@@ -120,35 +120,32 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
//
// Spec pseudocode definition:
//
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
// expected_withdrawals, partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in Electra:EIP7251]
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
//
// assert len(payload.withdrawals) == len(expected_withdrawals)
// expected_withdrawals = get_expected_withdrawals(state)
// assert len(payload.withdrawals) == len(expected_withdrawals)
//
// for expected_withdrawal, withdrawal in zip(expected_withdrawals, payload.withdrawals):
// assert withdrawal == expected_withdrawal
// decrease_balance(state, withdrawal.validator_index, withdrawal.amount)
// for expected_withdrawal, withdrawal in zip(expected_withdrawals, payload.withdrawals):
// assert withdrawal == expected_withdrawal
// decrease_balance(state, withdrawal.validator_index, withdrawal.amount)
//
// # Update pending partial withdrawals [New in Electra:EIP7251]
// state.pending_partial_withdrawals = state.pending_partial_withdrawals[partial_withdrawals_count:]
// # Update the next withdrawal index if this block contained withdrawals
// if len(expected_withdrawals) != 0:
// latest_withdrawal = expected_withdrawals[-1]
// state.next_withdrawal_index = WithdrawalIndex(latest_withdrawal.index + 1)
//
// # Update the next withdrawal index if this block contained withdrawals
// if len(expected_withdrawals) != 0:
// latest_withdrawal = expected_withdrawals[-1]
// state.next_withdrawal_index = WithdrawalIndex(latest_withdrawal.index + 1)
//
// # Update the next validator index to start the next withdrawal sweep
// if len(expected_withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
// # Next sweep starts after the latest withdrawal's validator index
// next_validator_index = ValidatorIndex((expected_withdrawals[-1].validator_index + 1) % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
// else:
// # Advance sweep by the max length of the sweep if there was not a full set of withdrawals
// next_index = state.next_withdrawal_validator_index + MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP
// next_validator_index = ValidatorIndex(next_index % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
// # Update the next validator index to start the next withdrawal sweep
// if len(expected_withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
// # Next sweep starts after the latest withdrawal's validator index
// next_validator_index = ValidatorIndex((expected_withdrawals[-1].validator_index + 1) % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
// else:
// # Advance sweep by the max length of the sweep if there was not a full set of withdrawals
// next_index = state.next_withdrawal_validator_index + MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP
// next_validator_index = ValidatorIndex(next_index % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
func ProcessWithdrawals(st state.BeaconState, executionData interfaces.ExecutionData) (state.BeaconState, error) {
expectedWithdrawals, partialWithdrawalsCount, err := st.ExpectedWithdrawals()
expectedWithdrawals, err := st.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals")
}
@@ -165,11 +162,6 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
if len(wds) != len(expectedWithdrawals) {
return nil, fmt.Errorf("execution payload header has %d withdrawals when %d were expected", len(wds), len(expectedWithdrawals))
}
wdRoot, err = ssz.WithdrawalSliceRoot(wds, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
@@ -190,13 +182,6 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
return nil, errors.Wrap(err, "could not decrease balance")
}
}
if st.Version() >= version.Electra {
if err := st.DequeuePartialWithdrawals(partialWithdrawalsCount); err != nil {
return nil, fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return nil, errors.Wrap(err, "could not set next withdrawal index")
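
The sweep-index rule quoted in the pseudocode above has two cases: a payload carrying the maximum number of withdrawals resumes the sweep right after the last withdrawn validator, otherwise the sweep simply advances by the configured sweep bound. A small sketch of that arithmetic (all arguments are plain counts and indices):

// nextSweepStart returns where the next withdrawal sweep begins.
func nextSweepStart(lastWithdrawnValidator, currentStart, numValidators, withdrawalsInPayload, maxPerPayload, sweepBound uint64) uint64 {
    if withdrawalsInPayload == maxPerPayload {
        // Full payload: next sweep starts after the last withdrawn validator.
        return (lastWithdrawnValidator + 1) % numValidators
    }
    // Partial payload: advance by the maximum sweep length.
    return (currentStart + sweepBound) % numValidators
}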


@@ -1,6 +1,7 @@
package blocks_test
import (
"math/big"
"math/rand"
"testing"
@@ -12,7 +13,6 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/common"
@@ -20,7 +20,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -251,12 +250,12 @@ func TestProcessBlindWithdrawals(t *testing.T) {
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
type args struct {
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PendingPartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
}
type control struct {
NextWithdrawalValidatorIndex primitives.ValidatorIndex
@@ -284,7 +283,7 @@ func TestProcessBlindWithdrawals(t *testing.T) {
Amount: withdrawalAmount(i),
}
}
PendingPartialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
partialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
@@ -322,12 +321,12 @@ func TestProcessBlindWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21),
partialWithdrawal(7, 21),
},
},
Control: control{
@@ -387,12 +386,12 @@ func TestProcessBlindWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 22), PendingPartialWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
partialWithdrawal(7, 22), partialWithdrawal(19, 23), partialWithdrawal(28, 24),
},
},
Control: control{
@@ -407,14 +406,14 @@ func TestProcessBlindWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
partialWithdrawal(89, 22), partialWithdrawal(1, 23), partialWithdrawal(2, 24),
fullWithdrawal(7, 25), partialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
},
@@ -454,17 +453,17 @@ func TestProcessBlindWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(1, 22), PendingPartialWithdrawal(2, 23), PendingPartialWithdrawal(3, 24),
PendingPartialWithdrawal(4, 25), PendingPartialWithdrawal(5, 26), PendingPartialWithdrawal(6, 27),
PendingPartialWithdrawal(7, 28), PendingPartialWithdrawal(8, 29), PendingPartialWithdrawal(9, 30),
PendingPartialWithdrawal(21, 31), PendingPartialWithdrawal(22, 32), PendingPartialWithdrawal(23, 33),
PendingPartialWithdrawal(24, 34), PendingPartialWithdrawal(25, 35), PendingPartialWithdrawal(26, 36),
PendingPartialWithdrawal(27, 37),
partialWithdrawal(1, 22), partialWithdrawal(2, 23), partialWithdrawal(3, 24),
partialWithdrawal(4, 25), partialWithdrawal(5, 26), partialWithdrawal(6, 27),
partialWithdrawal(7, 28), partialWithdrawal(8, 29), partialWithdrawal(9, 30),
partialWithdrawal(21, 31), partialWithdrawal(22, 32), partialWithdrawal(23, 33),
partialWithdrawal(24, 34), partialWithdrawal(25, 35), partialWithdrawal(26, 36),
partialWithdrawal(27, 37),
},
},
Control: control{
@@ -492,12 +491,12 @@ func TestProcessBlindWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "failure wrong number of partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 37,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Name: "failure wrong number of partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 37,
PartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21), PendingPartialWithdrawal(9, 22),
partialWithdrawal(7, 21), partialWithdrawal(9, 22),
},
},
Control: control{
@@ -541,7 +540,7 @@ func TestProcessBlindWithdrawals(t *testing.T) {
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
fullWithdrawal(7, 22), fullWithdrawal(19, 23), partialWithdrawal(28, 24),
fullWithdrawal(1, 25),
},
},
@@ -565,10 +564,10 @@ func TestProcessBlindWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "failure validator not partially withdrawable",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{notPartiallyWithdrawable},
Name: "failure validator not partially withdrawable",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PartialWithdrawalIndices: []primitives.ValidatorIndex{notPartiallyWithdrawable},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(notPartiallyWithdrawable, 22),
},
@@ -612,7 +611,7 @@ func TestProcessBlindWithdrawals(t *testing.T) {
st.Balances[idx] = withdrawalAmount(idx)
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
}
for _, idx := range arguments.PendingPartialWithdrawalIndices {
for _, idx := range arguments.PartialWithdrawalIndices {
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
st.Balances[idx] = withdrawalAmount(idx)
}
@@ -630,8 +629,8 @@ func TestProcessBlindWithdrawals(t *testing.T) {
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
if test.Args.PendingPartialWithdrawalIndices == nil {
test.Args.PendingPartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
if test.Args.PartialWithdrawalIndices == nil {
test.Args.PartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
@@ -644,7 +643,10 @@ func TestProcessBlindWithdrawals(t *testing.T) {
require.NoError(t, err)
wdRoot, err := ssz.WithdrawalSliceRoot(test.Args.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
p, err := consensusblocks.WrappedExecutionPayloadHeaderCapella(&enginev1.ExecutionPayloadHeaderCapella{WithdrawalsRoot: wdRoot[:]})
p, err := consensusblocks.WrappedExecutionPayloadHeaderCapella(
&enginev1.ExecutionPayloadHeaderCapella{WithdrawalsRoot: wdRoot[:]},
big.NewInt(0),
)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {
@@ -671,13 +673,12 @@ func TestProcessWithdrawals(t *testing.T) {
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
type args struct {
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PendingPartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
PendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal // Electra
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
}
type control struct {
NextWithdrawalValidatorIndex primitives.ValidatorIndex
@@ -705,7 +706,7 @@ func TestProcessWithdrawals(t *testing.T) {
Amount: withdrawalAmount(i),
}
}
PendingPartialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
partialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
@@ -743,12 +744,12 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21),
partialWithdrawal(7, 21),
},
},
Control: control{
@@ -775,7 +776,7 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "less than max sweep at end",
Name: "Less than max sweep at end",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{80, 81, 82, 83},
@@ -792,7 +793,7 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "less than max sweep and beginning",
Name: "Less than max sweep and beginning",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{4, 5, 6},
@@ -808,12 +809,12 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 22), PendingPartialWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
partialWithdrawal(7, 22), partialWithdrawal(19, 23), partialWithdrawal(28, 24),
},
},
Control: control{
@@ -828,14 +829,14 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
partialWithdrawal(89, 22), partialWithdrawal(1, 23), partialWithdrawal(2, 24),
fullWithdrawal(7, 25), partialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
},
@@ -849,36 +850,6 @@ func TestProcessWithdrawals(t *testing.T) {
},
},
},
{
Args: args{
Name: "success many withdrawals with pending partial withdrawals in state",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
PendingPartialWithdrawals: []*ethpb.PendingPartialWithdrawal{
{
Index: 11,
Amount: withdrawalAmount(11) - maxEffectiveBalance,
},
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success more than max fully withdrawals",
@@ -905,17 +876,17 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(1, 22), PendingPartialWithdrawal(2, 23), PendingPartialWithdrawal(3, 24),
PendingPartialWithdrawal(4, 25), PendingPartialWithdrawal(5, 26), PendingPartialWithdrawal(6, 27),
PendingPartialWithdrawal(7, 28), PendingPartialWithdrawal(8, 29), PendingPartialWithdrawal(9, 30),
PendingPartialWithdrawal(21, 31), PendingPartialWithdrawal(22, 32), PendingPartialWithdrawal(23, 33),
PendingPartialWithdrawal(24, 34), PendingPartialWithdrawal(25, 35), PendingPartialWithdrawal(26, 36),
PendingPartialWithdrawal(27, 37),
partialWithdrawal(1, 22), partialWithdrawal(2, 23), partialWithdrawal(3, 24),
partialWithdrawal(4, 25), partialWithdrawal(5, 26), partialWithdrawal(6, 27),
partialWithdrawal(7, 28), partialWithdrawal(8, 29), partialWithdrawal(9, 30),
partialWithdrawal(21, 31), partialWithdrawal(22, 32), partialWithdrawal(23, 33),
partialWithdrawal(24, 34), partialWithdrawal(25, 35), partialWithdrawal(26, 36),
partialWithdrawal(27, 37),
},
},
Control: control{
@@ -943,12 +914,12 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "failure wrong number of partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 37,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Name: "failure wrong number of partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 37,
PartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21), PendingPartialWithdrawal(9, 22),
partialWithdrawal(7, 21), partialWithdrawal(9, 22),
},
},
Control: control{
@@ -992,7 +963,7 @@ func TestProcessWithdrawals(t *testing.T) {
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
fullWithdrawal(7, 22), fullWithdrawal(19, 23), partialWithdrawal(28, 24),
fullWithdrawal(1, 25),
},
},
@@ -1016,10 +987,10 @@ func TestProcessWithdrawals(t *testing.T) {
},
{
Args: args{
Name: "failure validator not partially withdrawable",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{notPartiallyWithdrawable},
Name: "failure validator not partially withdrawable",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PartialWithdrawalIndices: []primitives.ValidatorIndex{notPartiallyWithdrawable},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(notPartiallyWithdrawable, 22),
},
@@ -1044,97 +1015,65 @@ func TestProcessWithdrawals(t *testing.T) {
}
}
prepareValidators := func(st state.BeaconState, arguments args) error {
prepareValidators := func(st *ethpb.BeaconStateCapella, arguments args) (state.BeaconState, error) {
validators := make([]*ethpb.Validator, numValidators)
if err := st.SetBalances(make([]uint64, numValidators)); err != nil {
return err
}
st.Balances = make([]uint64, numValidators)
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = epochInFuture
v.WithdrawalCredentials = make([]byte, 32)
v.WithdrawalCredentials[31] = byte(i)
if err := st.UpdateBalancesAtIndex(primitives.ValidatorIndex(i), v.EffectiveBalance-uint64(rand.Intn(1000))); err != nil {
return err
}
st.Balances[i] = v.EffectiveBalance - uint64(rand.Intn(1000))
validators[i] = v
}
for _, idx := range arguments.FullWithdrawalIndices {
if idx != notWithdrawableIndex {
validators[idx].WithdrawableEpoch = epochInPast
}
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
st.Balances[idx] = withdrawalAmount(idx)
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
}
for _, idx := range arguments.PendingPartialWithdrawalIndices {
for _, idx := range arguments.PartialWithdrawalIndices {
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
st.Balances[idx] = withdrawalAmount(idx)
}
return st.SetValidators(validators)
st.Validators = validators
return state_native.InitializeFromProtoCapella(st)
}
for _, test := range tests {
t.Run(test.Args.Name, func(t *testing.T) {
for _, fork := range []int{version.Capella, version.Electra} {
t.Run(version.String(fork), func(t *testing.T) {
saved := params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = maxSweep
if test.Args.Withdrawals == nil {
test.Args.Withdrawals = make([]*enginev1.Withdrawal, 0)
}
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
if test.Args.PendingPartialWithdrawalIndices == nil {
test.Args.PendingPartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
var st state.BeaconState
var p interfaces.ExecutionData
switch fork {
case version.Capella:
spb := &ethpb.BeaconStateCapella{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
}
st, err = state_native.InitializeFromProtoUnsafeCapella(spb)
require.NoError(t, err)
p, err = consensusblocks.WrappedExecutionPayloadCapella(&enginev1.ExecutionPayloadCapella{Withdrawals: test.Args.Withdrawals})
require.NoError(t, err)
case version.Electra:
spb := &ethpb.BeaconStateElectra{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
PendingPartialWithdrawals: test.Args.PendingPartialWithdrawals,
}
st, err = state_native.InitializeFromProtoUnsafeElectra(spb)
require.NoError(t, err)
p, err = consensusblocks.WrappedExecutionPayloadElectra(&enginev1.ExecutionPayloadElectra{Withdrawals: test.Args.Withdrawals})
require.NoError(t, err)
default:
t.Fatalf("Add a beacon state setup for version %s", version.String(fork))
}
err = prepareValidators(st, test.Args)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {
require.NotNil(t, err)
} else {
require.NoError(t, err)
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
saved := params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = maxSweep
if test.Args.Withdrawals == nil {
test.Args.Withdrawals = make([]*enginev1.Withdrawal, 0)
}
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
if test.Args.PartialWithdrawalIndices == nil {
test.Args.PartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
spb := &ethpb.BeaconStateCapella{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
}
st, err := prepareValidators(spb, test.Args)
require.NoError(t, err)
p, err := consensusblocks.WrappedExecutionPayloadCapella(&enginev1.ExecutionPayloadCapella{Withdrawals: test.Args.Withdrawals}, big.NewInt(0))
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {
require.NotNil(t, err)
} else {
require.NoError(t, err)
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
}
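For readers scanning the Control tables above: the expected balances encode the core rule these tests exercise, namely that a full withdrawal drains the validator's balance to zero while a partial withdrawal only sweeps the excess above the maximum effective balance. A minimal editor's sketch of that expectation (not code from this diff; the helper name is illustrative):

// expectedPostWithdrawalBalance mirrors the Control.Balances entries above:
// fully withdrawn validators end at 0, partially withdrawn ones end at the
// maximum effective balance, and everyone else keeps their balance.
func expectedPostWithdrawalBalance(balance, maxEffectiveBalance uint64, full, partial bool) uint64 {
	switch {
	case full:
		return 0
	case partial && balance > maxEffectiveBalance:
		return maxEffectiveBalance
	default:
		return balance
	}
}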

View File

@@ -1,86 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"attestation.go",
"churn.go",
"consolidations.go",
"deposits.go",
"effective_balance_updates.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
"upgrade.go",
"validator.go",
"withdrawals.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//contracts/deposit:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"churn_test.go",
"consolidations_test.go",
"deposit_fuzz_test.go",
"deposits_test.go",
"effective_balance_updates_test.go",
"registry_updates_test.go",
"transition_test.go",
"upgrade_test.go",
"validator_test.go",
"withdrawals_test.go",
],
deps = [
":go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)
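(Editor's note, not part of the diff: while these targets existed, the deleted package could be exercised with a standard Bazel invocation such as bazel test //beacon-chain/core/electra:go_default_test, the label being inferred from the importpath declared in the go_library rule above.)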

View File

@@ -1,7 +0,0 @@
package electra
import "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
var (
ProcessAttestationsNoVerifySignature = altair.ProcessAttestationsNoVerifySignature
)

View File

@@ -1,85 +0,0 @@
package electra
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// ComputeConsolidationEpochAndUpdateChurn fulfills the consensus spec definition below. This method
// calls mutating methods to the beacon state.
//
// Spec definition:
//
// def compute_consolidation_epoch_and_update_churn(state: BeaconState, consolidation_balance: Gwei) -> Epoch:
// earliest_consolidation_epoch = max(
// state.earliest_consolidation_epoch, compute_activation_exit_epoch(get_current_epoch(state)))
// per_epoch_consolidation_churn = get_consolidation_churn_limit(state)
// # New epoch for consolidations.
// if state.earliest_consolidation_epoch < earliest_consolidation_epoch:
// consolidation_balance_to_consume = per_epoch_consolidation_churn
// else:
// consolidation_balance_to_consume = state.consolidation_balance_to_consume
//
// # Consolidation doesn't fit in the current earliest epoch.
// if consolidation_balance > consolidation_balance_to_consume:
// balance_to_process = consolidation_balance - consolidation_balance_to_consume
// additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1
// earliest_consolidation_epoch += additional_epochs
// consolidation_balance_to_consume += additional_epochs * per_epoch_consolidation_churn
//
// # Consume the balance and update state variables.
// state.consolidation_balance_to_consume = consolidation_balance_to_consume - consolidation_balance
// state.earliest_consolidation_epoch = earliest_consolidation_epoch
//
// return state.earliest_consolidation_epoch
func ComputeConsolidationEpochAndUpdateChurn(ctx context.Context, s state.BeaconState, consolidationBalance primitives.Gwei) (primitives.Epoch, error) {
earliestEpoch, err := s.EarliestConsolidationEpoch()
if err != nil {
return 0, err
}
earliestConsolidationEpoch := max(earliestEpoch, helpers.ActivationExitEpoch(slots.ToEpoch(s.Slot())))
activeBal, err := helpers.TotalActiveBalance(s)
if err != nil {
return 0, err
}
perEpochConsolidationChurn := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
// New epoch for consolidations.
var consolidationBalanceToConsume primitives.Gwei
if earliestEpoch < earliestConsolidationEpoch {
consolidationBalanceToConsume = perEpochConsolidationChurn
} else {
consolidationBalanceToConsume, err = s.ConsolidationBalanceToConsume()
if err != nil {
return 0, err
}
}
// Consolidation doesn't fit in the current earliest epoch.
if consolidationBalance > consolidationBalanceToConsume {
balanceToProcess := consolidationBalance - consolidationBalanceToConsume
// additional_epochs = (balance_to_process - 1) // per_epoch_consolidation_churn + 1
additionalEpochs, err := math.Div64(uint64(balanceToProcess-1), uint64(perEpochConsolidationChurn))
if err != nil {
return 0, err
}
additionalEpochs++
earliestConsolidationEpoch += primitives.Epoch(additionalEpochs)
consolidationBalanceToConsume += primitives.Gwei(additionalEpochs) * perEpochConsolidationChurn
}
// Consume the balance and update state variables.
if err := s.SetConsolidationBalanceToConsume(consolidationBalanceToConsume - consolidationBalance); err != nil {
return 0, err
}
if err := s.SetEarliestConsolidationEpoch(earliestConsolidationEpoch); err != nil {
return 0, err
}
return earliestConsolidationEpoch, nil
}
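To make the arithmetic above concrete, an editor's worked example (not part of the diff) using the figures from the churn tests that follow: with a per-epoch consolidation churn of 232 ETH and no carried-over balance, a 235 ETH consolidation does not fit in the earliest epoch, so one extra epoch is added and 229 ETH of churn is left to consume.

// exampleChurnOverflow walks the spec math above with concrete Gwei values,
// matching the "flows into another epoch" case in churn_test.go.
func exampleChurnOverflow() uint64 {
	perEpochChurn := uint64(232_000_000_000)  // 232 ETH of consolidation churn per epoch
	balanceToConsume := perEpochChurn         // earliest consolidation epoch is new, so the full churn is available
	consolidation := uint64(235_000_000_000)  // 235 ETH being consolidated
	balanceToProcess := consolidation - balanceToConsume       // 3 ETH does not fit in this epoch
	additionalEpochs := (balanceToProcess-1)/perEpochChurn + 1 // (3-1)/232 + 1 = 1 extra epoch
	balanceToConsume += additionalEpochs * perEpochChurn       // 232 + 232 = 464 ETH now consumable
	return balanceToConsume - consolidation                    // 464 - 235 = 229 ETH left to consume
}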

View File

@@ -1,149 +0,0 @@
package electra_test
import (
"context"
"fmt"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func createValidatorsWithTotalActiveBalance(totalBal primitives.Gwei) []*eth.Validator {
num := totalBal / primitives.Gwei(params.BeaconConfig().MinActivationBalance)
vals := make([]*eth.Validator, num)
for i := range vals {
wd := make([]byte, 32)
wd[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wd[31] = byte(i)
vals[i] = &eth.Validator{
ActivationEpoch: primitives.Epoch(0),
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: []byte(fmt.Sprintf("val_%d", i)),
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawalCredentials: wd,
}
}
if totalBal%primitives.Gwei(params.BeaconConfig().MinActivationBalance) != 0 {
vals = append(vals, &eth.Validator{
ActivationEpoch: primitives.Epoch(0),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: uint64(totalBal) % params.BeaconConfig().MinActivationBalance,
})
}
return vals
}
func TestComputeConsolidationEpochAndUpdateChurn(t *testing.T) {
// Test setup: create a state with 32M ETH total active balance.
// In this state, the churn is expected to be 232 ETH per epoch.
tests := []struct {
name string
state state.BeaconState
consolidationBalance primitives.Gwei
expectedEpoch primitives.Epoch
expectedConsolidationBalanceToConsume primitives.Gwei
}{
{
name: "compute consolidation with no consolidation balance",
state: func(t *testing.T) state.BeaconState {
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 9,
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: 0, // 0 ETH
expectedEpoch: 15, // current epoch + 1 + MaxSeedLookahead
expectedConsolidationBalanceToConsume: 232000000000, // 232 ETH
},
{
name: "new epoch for consolidations",
state: func(t *testing.T) state.BeaconState {
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 9,
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: 32000000000, // 32 ETH
expectedEpoch: 15, // current epoch + 1 + MaxSeedLookahead
expectedConsolidationBalanceToConsume: 200000000000, // 200 ETH
},
{
name: "flows into another epoch",
state: func(t *testing.T) state.BeaconState {
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 9,
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: 235000000000, // 235 ETH
expectedEpoch: 16, // Flows into another epoch.
expectedConsolidationBalanceToConsume: 229000000000, // 229 ETH
},
{
name: "not a new epoch, fits in remaining balance of current epoch",
state: func(t *testing.T) state.BeaconState {
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 15,
ConsolidationBalanceToConsume: 200000000000, // 200 ETH
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: 32000000000, // 32 ETH
expectedEpoch: 15, // Fits into current earliest consolidation epoch.
expectedConsolidationBalanceToConsume: 168000000000, // 168 ETH

},
{
name: "not a new epoch, does not fit in remaining balance of current epoch",
state: func(t *testing.T) state.BeaconState {
s, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: slots.UnsafeEpochStart(10),
EarliestConsolidationEpoch: 15,
ConsolidationBalanceToConsume: 200000000000, // 200 ETH
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
})
require.NoError(t, err)
return s
}(t),
consolidationBalance: 232000000000, // 232 ETH
expectedEpoch: 16, // Flows into another epoch.
expectedConsolidationBalanceToConsume: 200000000000, // 200 ETH
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotEpoch, err := electra.ComputeConsolidationEpochAndUpdateChurn(context.TODO(), tt.state, tt.consolidationBalance)
require.NoError(t, err)
require.Equal(t, tt.expectedEpoch, gotEpoch)
// Check consolidation balance to consume is set on the state.
cbtc, err := tt.state.ConsolidationBalanceToConsume()
require.NoError(t, err)
require.Equal(t, tt.expectedConsolidationBalanceToConsume, cbtc)
// Check earliest consolidation epoch was set on the state.
gotEpoch, err = tt.state.EarliestConsolidationEpoch()
require.NoError(t, err)
require.Equal(t, tt.expectedEpoch, gotEpoch)
})
}
}
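An editor's aside on where the 232 ETH per epoch in these tests comes from (assuming the mainnet Electra constants, which are not shown in this diff): the consolidation churn limit is the balance churn limit minus the activation/exit churn limit. With 32M ETH of active balance that works out to roughly 488 ETH minus the 256 ETH activation/exit cap.

// consolidationChurnFor32MEth is a rough derivation of the 232 ETH/epoch figure
// (editor's sketch; the constants are assumed mainnet values, not taken from this diff).
func consolidationChurnFor32MEth() uint64 {
	const gweiPerEth = 1_000_000_000
	const churnLimitQuotient = 65_536
	const maxActivationExitChurn = 256 * gweiPerEth // MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT
	const increment = 1 * gweiPerEth                // EFFECTIVE_BALANCE_INCREMENT
	totalActive := uint64(32_000_000) * gweiPerEth  // 32M ETH of active balance, as in the tests above
	balanceChurn := totalActive / churnLimitQuotient
	balanceChurn -= balanceChurn % increment     // round down to a full increment: 488 ETH
	return balanceChurn - maxActivationExitChurn // 488 - 256 = 232 ETH per epoch for consolidations
}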

View File

@@ -1,254 +0,0 @@
package electra
import (
"bytes"
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// ProcessPendingConsolidations implements the spec definition below. This method makes mutating
// calls to the beacon state.
//
// Spec definition:
//
// def process_pending_consolidations(state: BeaconState) -> None:
// next_pending_consolidation = 0
// for pending_consolidation in state.pending_consolidations:
// source_validator = state.validators[pending_consolidation.source_index]
// if source_validator.slashed:
// next_pending_consolidation += 1
// continue
// if source_validator.withdrawable_epoch > get_current_epoch(state):
// break
//
// # Churn any target excess active balance of target and raise its max
// switch_to_compounding_validator(state, pending_consolidation.target_index)
// # Move active balance to target. Excess balance is withdrawable.
// active_balance = get_active_balance(state, pending_consolidation.source_index)
// decrease_balance(state, pending_consolidation.source_index, active_balance)
// increase_balance(state, pending_consolidation.target_index, active_balance)
// next_pending_consolidation += 1
//
// state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) error {
_, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
defer span.End()
if st == nil || st.IsNil() {
return errors.New("nil state")
}
currentEpoch := slots.ToEpoch(st.Slot())
var nextPendingConsolidation uint64
pendingConsolidations, err := st.PendingConsolidations()
if err != nil {
return err
}
for _, pc := range pendingConsolidations {
sourceValidator, err := st.ValidatorAtIndex(pc.SourceIndex)
if err != nil {
return err
}
if sourceValidator.Slashed {
nextPendingConsolidation++
continue
}
if sourceValidator.WithdrawableEpoch > currentEpoch {
break
}
if err := SwitchToCompoundingValidator(st, pc.TargetIndex); err != nil {
return err
}
activeBalance, err := st.ActiveBalanceAtIndex(pc.SourceIndex)
if err != nil {
return err
}
if err := helpers.DecreaseBalance(st, pc.SourceIndex, activeBalance); err != nil {
return err
}
if err := helpers.IncreaseBalance(st, pc.TargetIndex, activeBalance); err != nil {
return err
}
nextPendingConsolidation++
}
if nextPendingConsolidation > 0 {
return st.SetPendingConsolidations(pendingConsolidations[nextPendingConsolidation:])
}
return nil
}
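As an editor's illustration of the balance bookkeeping in the loop above (not code from the diff): the source's active balance, capped at its max effective balance, moves to the target, and any excess above the cap stays on the source where the regular withdrawal sweep can pick it up.

// moveActiveBalance mirrors the decrease/increase pair above for one consolidation:
// min(balance, max effective balance) moves to the target, the rest stays behind.
func moveActiveBalance(sourceBalance, targetBalance, sourceMaxEffective uint64) (newSource, newTarget uint64) {
	active := sourceBalance
	if active > sourceMaxEffective {
		active = sourceMaxEffective // get_active_balance caps at the validator's max effective balance
	}
	return sourceBalance - active, targetBalance + active
}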
// ProcessConsolidationRequests implements the spec definition below. This method makes mutating
// calls to the beacon state.
//
// def process_consolidation_request(
// state: BeaconState,
// consolidation_request: ConsolidationRequest
// ) -> None:
// # If the pending consolidations queue is full, consolidation requests are ignored
// if len(state.pending_consolidations) == PENDING_CONSOLIDATIONS_LIMIT:
// return
// # If there is too little available consolidation churn limit, consolidation requests are ignored
// if get_consolidation_churn_limit(state) <= MIN_ACTIVATION_BALANCE:
// return
//
// validator_pubkeys = [v.pubkey for v in state.validators]
// # Verify pubkeys exists
// request_source_pubkey = consolidation_request.source_pubkey
// request_target_pubkey = consolidation_request.target_pubkey
// if request_source_pubkey not in validator_pubkeys:
// return
// if request_target_pubkey not in validator_pubkeys:
// return
// source_index = ValidatorIndex(validator_pubkeys.index(request_source_pubkey))
// target_index = ValidatorIndex(validator_pubkeys.index(request_target_pubkey))
// source_validator = state.validators[source_index]
// target_validator = state.validators[target_index]
//
// # Verify that source != target, so a consolidation cannot be used as an exit.
// if source_index == target_index:
// return
//
// # Verify source withdrawal credentials
// has_correct_credential = has_execution_withdrawal_credential(source_validator)
// is_correct_source_address = (
// source_validator.withdrawal_credentials[12:] == consolidation_request.source_address
// )
// if not (has_correct_credential and is_correct_source_address):
// return
//
// # Verify that target has execution withdrawal credentials
// if not has_execution_withdrawal_credential(target_validator):
// return
//
// # Verify the source and the target are active
// current_epoch = get_current_epoch(state)
// if not is_active_validator(source_validator, current_epoch):
// return
// if not is_active_validator(target_validator, current_epoch):
// return
// # Verify exits for source and target have not been initiated
// if source_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return
// if target_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return
//
// # Initiate source validator exit and append pending consolidation
// source_validator.exit_epoch = compute_consolidation_epoch_and_update_churn(
// state, source_validator.effective_balance
// )
// source_validator.withdrawable_epoch = Epoch(
// source_validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// )
// state.pending_consolidations.append(PendingConsolidation(
// source_index=source_index,
// target_index=target_index
// ))
func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, reqs []*enginev1.ConsolidationRequest) error {
if len(reqs) == 0 || st == nil {
return nil
}
activeBal, err := helpers.TotalActiveBalance(st)
if err != nil {
return err
}
churnLimit := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
if churnLimit <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
return nil
}
curEpoch := slots.ToEpoch(st.Slot())
ffe := params.BeaconConfig().FarFutureEpoch
minValWithdrawDelay := params.BeaconConfig().MinValidatorWithdrawabilityDelay
pcLimit := params.BeaconConfig().PendingConsolidationsLimit
for _, cr := range reqs {
if ctx.Err() != nil {
return fmt.Errorf("cannot process consolidation requests: %w", ctx.Err())
}
if npc, err := st.NumPendingConsolidations(); err != nil {
return fmt.Errorf("failed to fetch number of pending consolidations: %w", err) // This should never happen.
} else if npc >= pcLimit {
return nil
}
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
if !ok {
continue
}
tgtIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.TargetPubkey))
if !ok {
continue
}
if srcIdx == tgtIdx {
continue
}
srcV, err := st.ValidatorAtIndex(srcIdx)
if err != nil {
return fmt.Errorf("failed to fetch source validator: %w", err) // This should never happen.
}
tgtV, err := st.ValidatorAtIndexReadOnly(tgtIdx)
if err != nil {
return fmt.Errorf("failed to fetch target validator: %w", err) // This should never happen.
}
// Verify source withdrawal credentials
if !helpers.HasExecutionWithdrawalCredentials(srcV) {
continue
}
// Confirm source_validator.withdrawal_credentials[12:] == consolidation_request.source_address
if len(srcV.WithdrawalCredentials) != 32 || len(cr.SourceAddress) != 20 || !bytes.HasSuffix(srcV.WithdrawalCredentials, cr.SourceAddress) {
continue
}
// Target validator must have their withdrawal credentials set appropriately.
if !helpers.HasExecutionWithdrawalCredentials(tgtV) {
continue
}
// Both validators must be active.
if !helpers.IsActiveValidator(srcV, curEpoch) || !helpers.IsActiveValidatorUsingTrie(tgtV, curEpoch) {
continue
}
// Neither validator are exiting.
if srcV.ExitEpoch != ffe || tgtV.ExitEpoch() != ffe {
continue
}
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {
return fmt.Errorf("failed to compute consolidaiton epoch: %w", err)
}
srcV.ExitEpoch = exitEpoch
srcV.WithdrawableEpoch = exitEpoch + minValWithdrawDelay
if err := st.UpdateValidatorAtIndex(srcIdx, srcV); err != nil {
return fmt.Errorf("failed to update validator: %w", err) // This should never happen.
}
if err := st.AppendPendingConsolidation(&eth.PendingConsolidation{SourceIndex: srcIdx, TargetIndex: tgtIdx}); err != nil {
return fmt.Errorf("failed to append pending consolidation: %w", err) // This should never happen.
}
}
return nil
}
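Finally, an editor's sketch (not from the diff) of how a request reaches the function above from a test. The ConsolidationRequest field names come from their usage in the loop; the surrounding test plumbing, variable names, and fixed-size key/address types are assumptions.

// processOneConsolidationRequest wraps a single request and feeds it to
// ProcessConsolidationRequests.
func processOneConsolidationRequest(ctx context.Context, t *testing.T, st state.BeaconState, srcPub, tgtPub [48]byte, srcAddr [20]byte) {
	req := &enginev1.ConsolidationRequest{
		SourcePubkey:  srcPub[:],  // BLS pubkey of the validator giving up its active balance
		TargetPubkey:  tgtPub[:],  // BLS pubkey of the validator absorbing it
		SourceAddress: srcAddr[:], // execution address that must match the source's withdrawal credentials
	}
	// Requests that fail the checks above are skipped silently, so an error here
	// points at a state access failure rather than an invalid request.
	require.NoError(t, electra.ProcessConsolidationRequests(ctx, st, []*enginev1.ConsolidationRequest{req}))
}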

Some files were not shown because too many files have changed in this diff.