Compare commits

..

35 Commits

Author SHA1 Message Date
terence tsao
c2bea8a504 implement eip-7928 as a gloas fork without eip-7732 2025-09-17 10:22:23 -07:00
james-prysm
df86f57507 deprecate /prysm/v1/beacon/blobs (#15643)
* adding in check for post fulu support

* gaz

* fixing unit tests and typo

* gaz

* manu feedback

* addressing manu's suggestion

* fixing tests
2025-09-10 21:44:50 +00:00
james-prysm
9b0a3e9632 default validator client rest mode to ssz for post block (#15645)
* wip

* fixing tests

* updating unit tests for coverage

* gofmt

* changelog

* consolidated propose beacon block tests and improved code coverage

* gaz

* partial review feedback

* continuation fixing feedback on functions

* adding log

* only fall back on 406

* fixing broken tests

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek review feedback

* more feedback

* radek suggestion

* fixing unit tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-10 21:08:11 +00:00
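The "only fall back on 406" bullet above amounts to simple content negotiation: submit the block as SSZ first and retry as JSON only when the beacon node answers 406 Not Acceptable. A minimal sketch of that idea, assuming hypothetical names (`postBlock`, `endpoint`); Prysm's actual REST client differs:

```go
// Sketch only: SSZ-first block submission with a JSON fallback gated on 406.
package restfallback

import (
	"bytes"
	"fmt"
	"net/http"
)

// postBlock submits the SSZ encoding first and retries with JSON only when the
// server answers 406 Not Acceptable (i.e. it cannot accept the SSZ media type).
func postBlock(client *http.Client, endpoint string, sszBody, jsonBody []byte) error {
	resp, err := client.Post(endpoint, "application/octet-stream", bytes.NewReader(sszBody))
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotAcceptable {
		if resp.StatusCode >= 400 {
			return fmt.Errorf("block submission failed: %s", resp.Status)
		}
		return nil // SSZ accepted; no fallback needed
	}
	// 406: the server rejected the SSZ content type, so retry as JSON.
	resp, err = client.Post(endpoint, "application/json", bytes.NewReader(jsonBody))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("json fallback failed: %s", resp.Status)
	}
	return nil
}
```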
kasey
5e079aa62c justin's suggestion to unwedge CI (#15679)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-10 20:31:29 +00:00
terence
5c68ec5c39 spectests: Add Fulu proposer lookahead epoch processing tests (#15667) 2025-09-10 18:51:19 +00:00
terence
5410232bef spectests: Add fulu fork transition tests for mainnet and minimal (#15666) 2025-09-10 18:51:04 +00:00
Radosław Kapka
3f5c4df7e0 Optimize processing of slashings (#14990)
* Calculate max epoch and churn for slashing once

* calculate once for proposer and attester slashings

* changelog <3

* introduce struct

* check if err is nil in ProcessVoluntaryExits

* rename exitData to exitInfo and return from functions

* cleanup + tests

* cleanup after rebase

* Potuz's review

* pre-calculate total active balance

* remove `slashValidatorFunc` closure

* Avoid a second validator loop

    🤖 Generated with [Claude Code](https://claude.ai/code)

    Co-Authored-By: Claude <noreply@anthropic.com>

* remove balance parameter from slashing functions

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: potuz <potuz@prysmaticlabs.com>
2025-09-10 18:14:11 +00:00
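The gist of this optimization: exit-queue data (highest exit epoch and churn) is computed once per block and mutated as slashings are applied, instead of being recomputed for every slashing. A rough sketch of the idea; the real `ExitInfo` lives in `beacon-chain/core/validators`, and any field here beyond `HighestExitEpoch` is an assumption for illustration:

```go
// Sketch of the compute-once exit queue threaded through slashing processing.
package slashingsketch

// Epoch mirrors primitives.Epoch for this self-contained sketch.
type Epoch uint64

// ExitInfo caches exit-queue state so it is derived from the state only once.
// Only HighestExitEpoch appears in the diff; Churn is assumed here.
type ExitInfo struct {
	HighestExitEpoch Epoch
	Churn            uint64 // exits already scheduled at HighestExitEpoch
}

// assignExitEpoch hands out the next exit epoch, advancing the queue when the
// per-epoch churn limit is saturated, and mutates the shared ExitInfo so the
// next slashing in the same block sees the updated queue.
func (e *ExitInfo) assignExitEpoch(churnLimit uint64) Epoch {
	if e.Churn >= churnLimit {
		e.HighestExitEpoch++
		e.Churn = 0
	}
	e.Churn++
	return e.HighestExitEpoch
}
```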
Preston Van Loon
5c348dff59 Add erigon/caplin to list of known p2p agents. (#15678) 2025-09-10 17:57:24 +00:00
james-prysm
8136ff7c3a adding remote signer changes for fulu (#15498)
* adding web3signer changes for fulu

* missed adding fulu values

* add accounting for fulu version

* updated web3signer version and fixing linting

* Update validator/keymanager/remote-web3signer/types/requests_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/types/mock/mocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek suggestions

* removing redundant check and removing old function, changed changelog to reflect

* gaz

* radek's suggestion

* fixing test as v1 proof is no longer used

* fixing more tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-10 16:58:53 +00:00
Preston Van Loon
f690af81fa Add diagnostic logging for 'invalid data returned from peer' errors (#15674)
When peers return invalid data during initial sync, log the specific
validation failure reason. This helps identify:
- Whether peer exceeded requested block count
- Whether peer exceeded MAX_REQUEST_BLOCKS protocol limit
- Whether blocks are outside the requested slot range
- Whether blocks are out of order (not increasing or wrong step)

Each log includes the specific condition that failed, making it easier
to debug whether the issue is with peer implementations or request
validation logic.
2025-09-09 22:08:37 +00:00
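A minimal sketch of the kind of per-condition validation being logged, with illustrative names, a simplified block type, and an assumed limit value (not Prysm's actual sync code):

```go
// Sketch only: each rejected by-range response names the rule that failed,
// which is exactly what makes the new diagnostic logs useful.
package syncsketch

import "fmt"

const maxRequestBlocks = 1024 // protocol constant; value assumed for illustration

type roBlock struct{ Slot uint64 }

// validateRange mirrors the checks described above; step is assumed non-zero.
func validateRange(blocks []roBlock, startSlot, count, step uint64) error {
	if uint64(len(blocks)) > count {
		return fmt.Errorf("peer returned %d blocks, more than the %d requested", len(blocks), count)
	}
	if uint64(len(blocks)) > maxRequestBlocks {
		return fmt.Errorf("peer returned %d blocks, exceeding MAX_REQUEST_BLOCKS=%d", len(blocks), maxRequestBlocks)
	}
	end := startSlot + count*step
	var prev uint64
	for i, b := range blocks {
		if b.Slot < startSlot || b.Slot >= end {
			return fmt.Errorf("block slot %d outside requested range [%d, %d)", b.Slot, startSlot, end)
		}
		if (b.Slot-startSlot)%step != 0 {
			return fmt.Errorf("block slot %d does not land on the requested step %d", b.Slot, step)
		}
		if i > 0 && b.Slot <= prev {
			return fmt.Errorf("blocks out of order: slot %d follows %d", b.Slot, prev)
		}
		prev = b.Slot
	}
	return nil
}
```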
kasey
029b896c79 refactor subscription code to enable peer discovery asap (#15660)
* refactor subscription code to enable peer discovery asap

* fix subscription tests

* hunting down the other test initialization bugs

* refactor subscription parameter tracking type

* manu naming feedback

* manu naming feedback

* missing godoc

* protect tracker from nil subscriptions

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-09 21:00:34 +00:00
Jun Song
e1117a7de2 SSZ-QL: Handle Vector type & Add SSZ tag parser for multi-dimensional parsing (#15668)
* Add vectorInfo

* Add 2D bytes field for test

* Add tag_parser for parsing SSZ tags

* Integrate tag parser with analyzer

* Add ByteList test case

* Changelog

* Better printing feature with Stringer implementation

* Return error for non-determined case without printing other values

* Update tag_parser.go: handle Vector and List as mutually exclusive (inspired by OffchainLabs/fastssz)

* Make linter happy

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-09 20:35:06 +00:00
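The multi-dimensional tag parsing above can be pictured as splitting a fastssz-style `ssz-size` tag into per-dimension entries, where a number fixes a Vector dimension and `?` leaves a List dimension open (the two being mutually exclusive per dimension). A hedged sketch; the real parser's types and rules live in the ssz-ql package:

```go
// Sketch only: parse an ssz-size tag value such as "32,48" or "?,48".
package tagsketch

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSSZSize returns the size of each dimension and whether it is
// variable-length ("?" marks a List dimension, a number a Vector dimension).
func parseSSZSize(tag string) (dims []int, variable []bool, err error) {
	for _, part := range strings.Split(tag, ",") {
		if part == "?" {
			dims = append(dims, 0)
			variable = append(variable, true)
			continue
		}
		n, convErr := strconv.Atoi(part)
		if convErr != nil || n <= 0 {
			return nil, nil, fmt.Errorf("invalid ssz-size dimension %q", part)
		}
		dims = append(dims, n)
		variable = append(variable, false)
	}
	return dims, variable, nil
}
```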
Potuz
39b2a02f66 Remove BLS change broadcasting at the fork (#15659)
* Remove BLS change broadcasting at the fork

* Changelog
2025-09-09 16:47:16 +00:00
Galoretka
4e8a710b64 chore: Remove duplicated test case from TestCollector (#15672)
* chore: Remove duplicated test case from TestCollector

* Create Galoretka_fix-leakybucket-test-duplicate.md
2025-09-09 12:49:54 +00:00
Preston Van Loon
7191a5bcdf PeerDAS: find data column peers when tracking validators (#15654)
* Subscription: Get fanout peers in all data column subnets when at least one validator is connected

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Updated test structure to @nalepae suggestion

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-09 00:31:15 +00:00
Muzry
d335a52c49 fixed the empty dirs not being removed (#15573)
* fixed the empty dirs not being removed

* update list empty dirs first

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
2025-09-08 20:55:50 +00:00
NikolaiKryshnev
c7401f5e75 Fix link (#15631)
* Update CHANGELOG.md

* Update DEPENDENCIES.md

* Create 2025-08-25-docs-links.md
2025-09-08 20:40:18 +00:00
Preston Van Loon
0057cc57b5 spectests: Set mainnet spectests to size=large. This helps with some CI timeouts. CI is showing an average time of 4m39s, which is too close to the 5m timeout for a moderate test (#15664) 2025-09-08 20:01:34 +00:00
Jun Song
b1dc5e485d SSZ-QL: Handle List type & Populate the actual value dynamically (#15637)
* Add VariableTestContainer in ssz_query.proto

* Add listInfo

* Use errors.New for making an error with a static string literal

* Add listInfo field when analyzing the List type

* Persist the field order in the container

* Add actualOffset and goFieldName at fieldInfo

* Add PopulateFromValue function & update test runner

* Handle slice of ssz object for marshalling

* Add CalculateOffsetAndLength test

* Add comments for better doc

* Changelog :)

* Apply reviews from Radek

* Remove actualOffset and update offset field instead

* Add Nested container of variable-sized for testing nested path

* Fix offset adding logics: for variable-sized field, always add 4 instead of its fixed size

* Fix multiple import issue

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-08 16:50:24 +00:00
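The offset fix noted above follows from SSZ serialization: in a container's fixed part, a variable-sized field occupies a 4-byte offset slot rather than its own length. A small sketch with illustrative types:

```go
// Sketch of the SSZ fixed-part rule behind the "always add 4" fix.
package sszsketch

const offsetSize = 4 // SSZ offsets are uint32

type field struct {
	fixedSize int  // serialized size when the field is fixed-size
	variable  bool // List/ByteList and friends
}

// fixedPartLength returns the byte length of the container's fixed part:
// variable-sized fields contribute only a 4-byte offset into the heap part.
func fixedPartLength(fields []field) int {
	total := 0
	for _, f := range fields {
		if f.variable {
			total += offsetSize
		} else {
			total += f.fixedSize
		}
	}
	return total
}
```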
Radosław Kapka
f035da6fc5 Aggregate and pack sync committee messages (#15608)
* Aggregate and pack sync committee messages

* test

* simplify error check

* changelog <3

* fix assert import

* remove parallelization

* use sync committee cache

* ignore bits already set in pool messages

* fuzz fix

* cleanup

* test panic fix

* clear cache in tests
2025-09-08 15:17:19 +00:00
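The "ignore bits already set in pool messages" step above is what prevents double-counting when packing pooled messages into an aggregate. A hedged sketch with illustrative types (not Prysm's actual pool code):

```go
// Sketch only: merge pooled sync committee contributions into an aggregate,
// skipping validators whose participation bit is already set.
package aggsketch

type contribution struct {
	ValidatorPosition int // index within the sync committee
	Signature         []byte
}

// pack flips the bit for each newly covered position and returns the
// contributions that were actually added (and thus need signature aggregation).
func pack(bits []bool, pool []contribution) (added []contribution) {
	for _, c := range pool {
		if bits[c.ValidatorPosition] {
			continue // already aggregated; re-adding would double-count the signature
		}
		bits[c.ValidatorPosition] = true
		added = append(added, c)
	}
	return added
}
```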
Muzry
854f4bc9a3 fix getBlockAttestationsV2 to return [] instead of null when data is empty (#15651) 2025-09-08 14:45:08 +00:00
james-prysm
1933adedbf update to v1.6.0-alpha.6 (#15658)
* wip workspace update

* update ethspecify yml

* fixing ethspecify

* fixing ethspecify

* fixing loader test

* changelog and reverting something in configs

* adding bad hash
2025-09-05 17:20:13 +00:00
Potuz
278b796e43 Fix next epoch proposer duties (#15642)
* Fix next epoch proposer duties

* Do not update state's slot when computing the proposer

Also do not call Fulu's proposer lookahead if the requested epoch is not
current or next.

* retract Terence's test

* Fix tests

* removing epoch check to pass spec test

* reverting rollback and fixing test setup

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-08-29 21:45:23 +00:00
Manu NALEPA
8e52d0c3c6 Retry fetch origin data column sidecars. (#15634)
* `convertToAddrInfo`: Add peer details on error.

* `fetchOriginColumns`: Remove unused argument.

* `fetchOriginColumns`: Fix typo

* `isSidecarIndexRequested`: Log requested indices.

* `fetchOriginColumns`: Retry.

* Add changelog.

* Fix Preston comment.

* `custodyGroupCountFromPeerENR`: Add agent on error messages.

* Update beacon-chain/sync/initial-sync/service.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix `TestCustodyGroupCountFromPeer`.

* `s.fetchOriginColumns`: Use `maxAttempts` and `delay` as parameters to ease unit testing.

* Implement `TestFetchOriginColumns`.

* `SendDataColumnSidecarsByRangeRequest` and `SendDataColumnSidecarsByRootRequest`: Add option to downscore the peer on RPC error.

* `fetchOriginColumns`: Remove max attempts, and downscore peers on RPC fault.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-08-28 21:16:34 +00:00
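The end state described above (no fixed attempt cap, downscoring peers on RPC fault) can be pictured roughly as below; all names are illustrative, and the real flow goes through the `SendDataColumnSidecarsByRangeRequest`/`ByRoot` options:

```go
// Sketch only: try peers in turn, penalizing each one that fails an RPC.
package retrysketch

import (
	"errors"
	"time"
)

type peerID string

func fetchWithRetry(peers []peerID, fetch func(peerID) error, downscore func(peerID), delay time.Duration) error {
	var lastErr error
	for _, p := range peers {
		if err := fetch(p); err != nil {
			downscore(p) // penalize the peer on RPC fault
			lastErr = err
			time.Sleep(delay) // back off before trying the next peer
			continue
		}
		return nil
	}
	return errors.Join(errors.New("all peers failed"), lastErr)
}
```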
james-prysm
d339e09509 fulu block proposals with datacolumn broadcast (#15628)
* propose block changes from peerdas branch

* breaking out broadcast code into its own helper, changing fulu broadcast for rest api to properly send datasidecars

* renamed validate blobsidecars to validate blobs, and added check for max blobs

* gofmt

* adding in batch verification for blobs

* changelog

* adding kzg tests, moving new kzg functions to validation.go

* linting and other small fixes

* fixing linting issues and adding some proposer tests

* missing dependencies

* fixing test

* fixing more tests

* gaz

* removed return on broadcast data columns

* more cleanup and unit test adjustments

* missed removal of unneeded field

* adding data column receiver initialization

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* partial review feedback from manu

* gaz

* reverting some code to peerdas as I don't believe the broadcast code needs to be reused

* missed removal of build dependency

* fixing tests and adding another test based on manu's suggestion

* fixing linting

* Update beacon-chain/rpc/eth/beacon/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/blockchain/kzg/validation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review changes

* adding missed test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-28 16:27:30 +00:00
Pop Chunhapanya
8ec460223c Swap the wrong arguments in a call (#15639)
* Swap the wrong arguments in a call

I saw that the names of the passed arguments and the ones of the
function parameters don't match, so I suspect that it's a bug.

* Add changelog

* Add validation for the fillInForkChoiceMissingBlocks checkpoints.

* Add test for checkpoint epoch validation in fillInForkChoiceMissingBlocks.

* Use a sentinel error rather than error string

---------

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-27 21:29:44 +00:00
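The sentinel-error approach adopted here (see the `fillInForkChoiceMissingBlocks` diff further down) lets callers distinguish the swapped-argument bug with `errors.Is`. A minimal illustration, simplified from the actual change:

```go
// Sketch only: guard against swapped checkpoint arguments with a sentinel error.
package checkpointsketch

import (
	"errors"
	"fmt"
)

// ErrInvalidCheckpointArgs matches the sentinel added in the diff below.
var ErrInvalidCheckpointArgs = errors.New("finalized checkpoint cannot be greater than justified checkpoint")

type checkpoint struct{ Epoch uint64 }

func fillInMissingBlocks(finalized, justified checkpoint) error {
	// Finalization implies justification, so a finalized epoch beyond the
	// justified one can only mean a caller swapped the arguments.
	if finalized.Epoch > justified.Epoch {
		return ErrInvalidCheckpointArgs
	}
	return nil
}

func example() error {
	if err := fillInMissingBlocks(checkpoint{5}, checkpoint{3}); err != nil {
		if errors.Is(err, ErrInvalidCheckpointArgs) {
			return fmt.Errorf("caller bug, arguments swapped: %w", err)
		}
		return err
	}
	return nil
}
```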
Potuz
349d9d2fd0 Start from Justified checkpoint by default (#15636)
By default, when starting a node we load the finalized checkpoint from
the db and set it as head. When the chain has not been finalizing for a
while and the user does not start from the latest head, it may still be
beneficial to start from the latest justified checkpoint, which must be
a descendant of the finalized one.
2025-08-27 15:50:10 +00:00
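Conceptually the change picks the justified rather than the finalized root as the startup head; since justification never trails finalization, this only moves the starting point forward. A simplified sketch (the real selection, including the force-head override, is in the `startupHeadRoot` diff below):

```go
// Sketch only: prefer the justified checkpoint root at startup.
package startupsketch

type checkpoint struct {
	Epoch uint64
	Root  [32]byte
}

// startupHead prefers the justified root: the justified block is a descendant
// of the finalized one, so starting there skips no finalized history.
func startupHead(finalized, justified checkpoint) [32]byte {
	if justified.Epoch >= finalized.Epoch {
		return justified.Root
	}
	return finalized.Root // defensive; should not happen on a consistent db
}
```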
Sahil Sojitra
e0aecb9c32 Refactor to use atomic types (#15625)
* refactor to use atomic types

* added changlog fragment file

* update change log
2025-08-26 19:57:14 +00:00
Potuz
4a1ab70929 Change error message (#15635) 2025-08-26 17:24:32 +00:00
Manu NALEPA
240cd1d058 FetchDataColumnSidecars: If possible, try to reconstruct after retrieving sidecars from peers if some are still missing. (#15593)
* `FetchDataColumnSidecars`: If possible, try to reconstruct after retrieving sidecars from peers if some are still missing.

* `randomPeer`: Deterministic randomness.
Before this commit, `randomPeer` was non-deterministic even with a deterministic random source. The reason is that we iterated over a map (whose iteration order is random) and then stopped the iteration at a chosen random index (which can be deterministic if the random source is deterministic).
After this commit, `randomPeer` and all its callers are fully deterministic when using a deterministic random source (see the sketch after this entry).

* Fix Potuz's comment.

* Fix James' comment.

* `tryReconstructFromStorageAndPeers`: Improve godoc.
2025-08-25 14:31:02 +00:00
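The `randomPeer` fix hinges on a Go property: map iteration order is randomized, so even a seeded `rand.Source` cannot make a range-and-stop selection reproducible. One hedged way to restore determinism (not necessarily the commit's exact code) is to fix the iteration order first:

```go
// Sketch only: reproducible random selection from a map-keyed peer set.
package peersketch

import (
	"math/rand"
	"sort"
)

// randomPeer picks one peer reproducibly for a given seeded source; ranging
// over the map directly would reintroduce Go's randomized iteration order.
func randomPeer(peers map[string]struct{}, src rand.Source) (string, bool) {
	if len(peers) == 0 {
		return "", false
	}
	keys := make([]string, 0, len(peers))
	for p := range peers {
		keys = append(keys, p)
	}
	sort.Strings(keys) // deterministic order; only the index stays random
	return keys[rand.New(src).Intn(len(keys))], true
}
```

With a fixed seed, repeated calls over the same peer set now return the same peer, which is what makes unit tests of the callers stable.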
Jun Song
26d8b6b786 Initialize SSZ-QL package with support for fixed-size types (#15588)
* Add basic PathElement

* Add ssz_type.go

* Add basic sszInfo

* Add containerInfo

* Add basic analyzer without analyzing list/vector

* Add analyzer for homogeneous collection types

* Add offset/length calculator

* Add testutil package in encoding/ssz/query

* Add first round trip test for IndexedAttestationElectra

* Go mod tidy

* Add Print function for debugging purpose

* Add changelog

* Add testonly flag for testutil package & Nit for nogo

* Apply reviews from Radek

* Replace fastssz with prysmaticlabs one

* Add proto/ssz_query package for testing purpose

* Update encoding/ssz/query tests to decouple with beacon types

* Use require.* instead of assert.*

* Fix import name for proto ssz_query package

* Remove uint8/uint16 and some byte arrays in FixedTestContainer

* Add newline for files

* Fix comment about byte array in ssz_query.proto

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-25 14:29:26 +00:00
Manu NALEPA
92c359456e PeerDAS: Add various missing items (#15629)
* `startBaseServices`: Warm data column storage cache.

* `TestFindPeers_NodeDeduplication`: Use `t.context`.

* `BUILD.bazel`: Move `# gazelle.ignore` to the top of the file.

Rationale: this directive is applied to the whole file, regardless of its position in the file.

* Improve `TestConstructGenericBeaconBlock`: Courtesy of Terence

* Add `TestDataColumnStoragePath_FlagSpecified`.

* `appFlags`: Move `flags.SubscribeAllDataSubnets` (cosmetic).

* `appFlags`: Add `storage.DataColumnStoragePathFlag`.

* Add changelog.
2025-08-25 13:06:57 +00:00
terence
d48ed44c4c Update consensus spec to v1.6.0-alpha.5 and adjust minimal config (#15621) 2025-08-23 01:19:07 +00:00
Radosław Kapka
dbab624f3d Update Github bug template (#15623) 2025-08-22 13:53:05 +00:00
Potuz
83e9cc3dbb Update gohashtree (#15619) 2025-08-22 13:19:28 +00:00
264 changed files with 17660 additions and 5051 deletions

View File

@@ -1,6 +1,6 @@
name: 🐞 Bug report
description: Report a bug or problem with running Prysm
labels: ["Bug"]
type: "Bug"
body:
- type: markdown
attributes:

View File

@@ -2993,7 +2993,7 @@ There are two known issues with this release:
### Added
-- Web3Signer support. See the [documentation](https://docs.prylabs.network/docs/next/wallet/web3signer) for more
+- Web3Signer support. See the [documentation](https://prysm.offchainlabs.com/docs/manage-wallet/web3signer/) for more
details.
- Bellatrix support. See [kiln testnet instructions](https://hackmd.io/OqIoTiQvS9KOIataIFksBQ?view)
- Weak subjectivity sync / checkpoint sync. This is an experimental feature and may have unintended side effects for

View File

@@ -2,7 +2,7 @@
Prysm is go project with many complicated dependencies, including some c++ based libraries. There
are two parts to Prysm's dependency management. Go modules and bazel managed dependencies. Be sure
-to read [Why Bazel?](https://github.com/OffchainLabs/documentation/issues/138) to fully
+to read [Why Bazel?](https://prysm.offchainlabs.com/docs/install-prysm/install-with-bazel/#why-bazel) to fully
understand the reasoning behind an additional layer of build tooling via Bazel rather than a pure
"go build" project.

View File

@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.4"
consensus_spec_version = "v1.6.0-alpha.6"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-MaN4zu3o0vWZypUHS5r4D8WzJF4wANoadM8qm6iyDs4=",
"minimal": "sha256-aZGNPp/bBvJgq3Wf6vyR0H6G3DOkbSuggEmOL4jEmtg=",
"mainnet": "sha256-C7jjosvpzUgw3GPajlsWBV02ZbkZ5Uv4ikmOqfDGajI=",
"general": "sha256-7wkWuahuCO37uVYnxq8Badvi+jY907pBj68ixL8XDOI=",
"minimal": "sha256-Qy/f27N0LffS/ej7VhIubwDejD6LMK0VdenKkqtZVt4=",
"mainnet": "sha256-3H7mu5yE+FGz2Wr/nc8Nd9aEu93YoEpsYtn0zBSoeDE=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-qreawRS77l8CebiNww8z727qUItw7KlHY1Xqj7IrPdk=",
integrity = "sha256-uvz3XfMTGfy3/BtQQoEp5XQOgrWgcH/5Zo/gR0iiP+k=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -1,9 +1,8 @@
package httprest
import (
"time"
"net/http"
"time"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
)

View File

@@ -102,7 +102,6 @@ func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, c
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}

View File

@@ -1,10 +1,30 @@
package kzg
import (
"fmt"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ckzg4844 "github.com/ethereum/c-kzg-4844/v2/bindings/go"
"github.com/pkg/errors"
)
func bytesToBlob(blob []byte) *GoKZG.Blob {
var ret GoKZG.Blob
copy(ret[:], blob)
return &ret
}
func bytesToCommitment(commitment []byte) (ret GoKZG.KZGCommitment) {
copy(ret[:], commitment)
return
}
func bytesToKZGProof(proof []byte) (ret GoKZG.KZGProof) {
copy(ret[:], proof)
return
}
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(blobSidecars ...blocks.ROBlob) error {
if len(blobSidecars) == 0 {
@@ -27,18 +47,121 @@ func Verify(blobSidecars ...blocks.ROBlob) error {
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
-func bytesToBlob(blob []byte) *GoKZG.Blob {
-var ret GoKZG.Blob
-copy(ret[:], blob)
-return &ret
// VerifyBlobKZGProofBatch verifies KZG proofs for multiple blobs using batch verification.
// This is more efficient than verifying each blob individually when len(blobs) > 1.
// For single blob verification, it uses the optimized single verification path.
func VerifyBlobKZGProofBatch(blobs [][]byte, commitments [][]byte, proofs [][]byte) error {
if len(blobs) != len(commitments) || len(blobs) != len(proofs) {
return errors.Errorf("number of blobs (%d), commitments (%d), and proofs (%d) must match", len(blobs), len(commitments), len(proofs))
}
if len(blobs) == 0 {
return nil
}
// Optimize for single blob case - use single verification to avoid batch overhead
if len(blobs) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(blobs[0]),
bytesToCommitment(commitments[0]),
bytesToKZGProof(proofs[0]))
}
// Use batch verification for multiple blobs
ckzgBlobs := make([]ckzg4844.Blob, len(blobs))
ckzgCommitments := make([]ckzg4844.Bytes48, len(commitments))
ckzgProofs := make([]ckzg4844.Bytes48, len(proofs))
for i := range blobs {
if len(blobs[i]) != len(ckzg4844.Blob{}) {
return fmt.Errorf("blobs len (%d) differs from expected (%d)", len(blobs[i]), len(ckzg4844.Blob{}))
}
if len(commitments[i]) != len(ckzg4844.Bytes48{}) {
return fmt.Errorf("commitments len (%d) differs from expected (%d)", len(commitments[i]), len(ckzg4844.Blob{}))
}
if len(proofs[i]) != len(ckzg4844.Bytes48{}) {
return fmt.Errorf("proofs len (%d) differs from expected (%d)", len(proofs[i]), len(ckzg4844.Blob{}))
}
ckzgBlobs[i] = ckzg4844.Blob(blobs[i])
ckzgCommitments[i] = ckzg4844.Bytes48(commitments[i])
ckzgProofs[i] = ckzg4844.Bytes48(proofs[i])
}
valid, err := ckzg4844.VerifyBlobKZGProofBatch(ckzgBlobs, ckzgCommitments, ckzgProofs)
if err != nil {
return errors.Wrap(err, "batch verification")
}
if !valid {
return errors.New("batch KZG proof verification failed")
}
return nil
}
-func bytesToCommitment(commitment []byte) (ret GoKZG.KZGCommitment) {
-copy(ret[:], commitment)
-return
-}
// VerifyCellKZGProofBatchFromBlobData verifies cell KZG proofs in batch format directly from blob data.
// This is more efficient than reconstructing data column sidecars when you have the raw blob data and cell proofs.
// For PeerDAS/Fulu, the execution client provides cell proofs in flattened format via BlobsBundleV2.
// For single blob verification, it optimizes by computing cells once and verifying efficiently.
func VerifyCellKZGProofBatchFromBlobData(blobs [][]byte, commitments [][]byte, cellProofs [][]byte, numberOfColumns uint64) error {
blobCount := uint64(len(blobs))
expectedCellProofs := blobCount * numberOfColumns
-func bytesToKZGProof(proof []byte) (ret GoKZG.KZGProof) {
-copy(ret[:], proof)
-return
if uint64(len(cellProofs)) != expectedCellProofs {
return errors.Errorf("expected %d cell proofs, got %d", expectedCellProofs, len(cellProofs))
}
if len(commitments) != len(blobs) {
return errors.Errorf("number of commitments (%d) must match number of blobs (%d)", len(commitments), len(blobs))
}
if blobCount == 0 {
return nil
}
// Handle multiple blobs - compute cells for all blobs
allCells := make([]Cell, 0, expectedCellProofs)
allCommitments := make([]Bytes48, 0, expectedCellProofs)
allIndices := make([]uint64, 0, expectedCellProofs)
allProofs := make([]Bytes48, 0, expectedCellProofs)
for blobIndex := range blobs {
if len(blobs[blobIndex]) != len(Blob{}) {
return fmt.Errorf("blobs len (%d) differs from expected (%d)", len(blobs[blobIndex]), len(Blob{}))
}
// Convert blob to kzg.Blob type
blob := Blob(blobs[blobIndex])
// Compute cells for this blob
cells, err := ComputeCells(&blob)
if err != nil {
return errors.Wrapf(err, "failed to compute cells for blob %d", blobIndex)
}
// Add cells and corresponding data for each column
for columnIndex := range numberOfColumns {
cellProofIndex := uint64(blobIndex)*numberOfColumns + columnIndex
if len(commitments[blobIndex]) != len(Bytes48{}) {
return fmt.Errorf("commitments len (%d) differs from expected (%d)", len(commitments[blobIndex]), len(Bytes48{}))
}
if len(cellProofs[cellProofIndex]) != len(Bytes48{}) {
return fmt.Errorf("proofs len (%d) differs from expected (%d)", len(cellProofs[cellProofIndex]), len(Bytes48{}))
}
allCells = append(allCells, cells[columnIndex])
allCommitments = append(allCommitments, Bytes48(commitments[blobIndex]))
allIndices = append(allIndices, columnIndex)
allProofs = append(allProofs, Bytes48(cellProofs[cellProofIndex]))
}
}
// Batch verify all cells
valid, err := VerifyCellKZGProofBatch(allCommitments, allIndices, allCells, allProofs)
if err != nil {
return errors.Wrap(err, "cell batch verification")
}
if !valid {
return errors.New("cell KZG proof batch verification failed")
}
return nil
}
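A usage sketch for the two batch helpers added above, assuming the `kzg` package from this diff with an initialized trusted setup (`Start()` has been called); the column count and wiring are illustrative:

```go
// Sketch only: calling the new batch verification entry points.
package main

import (
	"log"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
)

// verifySidecars shows both verification paths; inputs are assumed to come
// from a block proposal (blobs, commitments, and flattened cell proofs).
func verifySidecars(blobs, commitments, proofs, cellProofs [][]byte) error {
	// Blob-level batch verification (pre-Fulu proofs).
	if err := kzg.VerifyBlobKZGProofBatch(blobs, commitments, proofs); err != nil {
		return err
	}
	// Cell-level verification for PeerDAS/Fulu: numberOfColumns proofs per
	// blob, flattened, as provided by the execution client's BlobsBundleV2.
	const numberOfColumns = 128
	return kzg.VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
}

func main() {
	if err := verifySidecars(nil, nil, nil, nil); err != nil {
		log.Fatal(err)
	}
}
```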

View File

@@ -37,6 +37,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
require.NoError(t, Start())
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
@@ -45,3 +46,432 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}
func TestVerifyBlobKZGProofBatch(t *testing.T) {
// Initialize KZG for testing
require.NoError(t, Start())
t.Run("valid single blob batch", func(t *testing.T) {
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
proofs := [][]byte{proof[:]}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.NoError(t, err)
})
t.Run("valid multiple blob batch", func(t *testing.T) {
blobCount := 3
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
proofs := make([][]byte, blobCount)
for i := 0; i < blobCount; i++ {
blob := random.GetRandBlob(int64(i))
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
proofs[i] = proof[:]
}
err := VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.NoError(t, err)
})
t.Run("empty inputs should pass", func(t *testing.T) {
err := VerifyBlobKZGProofBatch([][]byte{}, [][]byte{}, [][]byte{})
require.NoError(t, err)
})
t.Run("mismatched input lengths", func(t *testing.T) {
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
// Test different mismatch scenarios
err = VerifyBlobKZGProofBatch(
[][]byte{blob[:]},
[][]byte{},
[][]byte{proof[:]},
)
require.ErrorContains(t, "number of blobs (1), commitments (0), and proofs (1) must match", err)
err = VerifyBlobKZGProofBatch(
[][]byte{blob[:], blob[:]},
[][]byte{commitment[:]},
[][]byte{proof[:], proof[:]},
)
require.ErrorContains(t, "number of blobs (2), commitments (1), and proofs (2) must match", err)
})
t.Run("invalid commitment should fail", func(t *testing.T) {
blob := random.GetRandBlob(123)
_, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
// Use a different blob's commitment (mismatch)
differentBlob := random.GetRandBlob(456)
wrongCommitment, _, err := GenerateCommitmentAndProof(differentBlob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{wrongCommitment[:]}
proofs := [][]byte{proof[:]}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
// Single blob optimization uses different error message
require.ErrorContains(t, "can't verify opening proof", err)
})
t.Run("invalid proof should fail", func(t *testing.T) {
blob := random.GetRandBlob(123)
commitment, _, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
// Use wrong proof
invalidProof := make([]byte, 48) // All zeros
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
proofs := [][]byte{invalidProof}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.ErrorContains(t, "short buffer", err)
})
t.Run("mixed valid and invalid proofs should fail", func(t *testing.T) {
// First blob - valid
blob1 := random.GetRandBlob(123)
commitment1, proof1, err := GenerateCommitmentAndProof(blob1)
require.NoError(t, err)
// Second blob - invalid proof
blob2 := random.GetRandBlob(456)
commitment2, _, err := GenerateCommitmentAndProof(blob2)
require.NoError(t, err)
invalidProof := make([]byte, 48) // All zeros
blobs := [][]byte{blob1[:], blob2[:]}
commitments := [][]byte{commitment1[:], commitment2[:]}
proofs := [][]byte{proof1[:], invalidProof}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.ErrorContains(t, "batch verification", err)
})
t.Run("batch KZG proof verification failed", func(t *testing.T) {
// Create multiple blobs with mismatched commitments and proofs to trigger batch verification failure
blob1 := random.GetRandBlob(123)
blob2 := random.GetRandBlob(456)
// Generate valid proof for blob1
commitment1, proof1, err := GenerateCommitmentAndProof(blob1)
require.NoError(t, err)
// Generate valid proof for blob2 but use wrong commitment (from blob1)
_, proof2, err := GenerateCommitmentAndProof(blob2)
require.NoError(t, err)
// Use blob2 data with blob1's commitment and blob2's proof - this should cause batch verification to fail
blobs := [][]byte{blob1[:], blob2[:]}
commitments := [][]byte{commitment1[:], commitment1[:]} // Wrong commitment for blob2
proofs := [][]byte{proof1[:], proof2[:]}
err = VerifyBlobKZGProofBatch(blobs, commitments, proofs)
require.ErrorContains(t, "batch KZG proof verification failed", err)
})
}
func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
// Initialize KZG for testing
require.NoError(t, Start())
t.Run("valid single blob cell verification", func(t *testing.T) {
numberOfColumns := uint64(128)
// Generate blob and commitment
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
// Compute cells and proofs
cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
require.NoError(t, err)
// Create flattened cell proofs (like execution client format)
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = cellsAndProofs.Proofs[i][:]
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.NoError(t, err)
})
t.Run("valid multiple blob cell verification", func(t *testing.T) {
numberOfColumns := uint64(128)
blobCount := 2
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
for i := range blobCount {
// Generate blob and commitment
randBlob := random.GetRandBlob(int64(i))
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
// Compute cells and proofs
cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
// Add cell proofs for this blob
for j := range numberOfColumns {
allCellProofs = append(allCellProofs, cellsAndProofs.Proofs[j][:])
}
}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, allCellProofs, numberOfColumns)
require.NoError(t, err)
})
t.Run("empty inputs should pass", func(t *testing.T) {
err := VerifyCellKZGProofBatchFromBlobData([][]byte{}, [][]byte{}, [][]byte{}, 128)
require.NoError(t, err)
})
t.Run("mismatched blob and commitment count", func(t *testing.T) {
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
err := VerifyCellKZGProofBatchFromBlobData(
[][]byte{blob[:]},
[][]byte{}, // Empty commitments
[][]byte{},
128,
)
require.ErrorContains(t, "expected 128 cell proofs", err)
})
t.Run("wrong cell proof count", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
// Wrong number of cell proofs - should be 128 for 1 blob, but provide 10
wrongCellProofs := make([][]byte, 10)
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, wrongCellProofs, numberOfColumns)
require.ErrorContains(t, "expected 128 cell proofs, got 10", err)
})
t.Run("invalid cell proofs should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
// Create invalid cell proofs (all zeros)
invalidCellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
invalidCellProofs[i] = make([]byte, 48) // All zeros
}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, invalidCellProofs, numberOfColumns)
require.ErrorContains(t, "cell batch verification", err)
})
t.Run("mismatched commitment should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
// Generate blob and correct cell proofs
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
require.NoError(t, err)
// Generate wrong commitment from different blob
randBlob2 := random.GetRandBlob(456)
var differentBlob Blob
copy(differentBlob[:], randBlob2[:])
wrongCommitment, err := BlobToKZGCommitment(&differentBlob)
require.NoError(t, err)
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = cellsAndProofs.Proofs[i][:]
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{wrongCommitment[:]}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.ErrorContains(t, "cell KZG proof batch verification failed", err)
})
t.Run("invalid blob data that should cause ComputeCells to fail", func(t *testing.T) {
numberOfColumns := uint64(128)
// Create invalid blob (not properly formatted)
invalidBlobData := make([]byte, 10) // Too short
commitment := make([]byte, 48) // Dummy commitment
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = make([]byte, 48)
}
blobs := [][]byte{invalidBlobData}
commitments := [][]byte{commitment}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.NotNil(t, err)
require.ErrorContains(t, "blobs len (10) differs from expected (131072)", err)
})
t.Run("invalid commitment size should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
// Create invalid commitment (wrong size)
invalidCommitment := make([]byte, 32) // Should be 48 bytes
cellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
cellProofs[i] = make([]byte, 48)
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{invalidCommitment}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, cellProofs, numberOfColumns)
require.ErrorContains(t, "commitments len (32) differs from expected (48)", err)
})
t.Run("invalid cell proof size should fail", func(t *testing.T) {
numberOfColumns := uint64(128)
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
// Create invalid cell proofs (wrong size)
invalidCellProofs := make([][]byte, numberOfColumns)
for i := range numberOfColumns {
if i == 0 {
invalidCellProofs[i] = make([]byte, 32) // Wrong size - should be 48
} else {
invalidCellProofs[i] = make([]byte, 48)
}
}
blobs := [][]byte{blob[:]}
commitments := [][]byte{commitment[:]}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, invalidCellProofs, numberOfColumns)
require.ErrorContains(t, "proofs len (32) differs from expected (48)", err)
})
t.Run("multiple blobs with mixed invalid commitments", func(t *testing.T) {
numberOfColumns := uint64(128)
blobCount := 2
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
// First blob - valid
randBlob1 := random.GetRandBlob(123)
var blob1 Blob
copy(blob1[:], randBlob1[:])
commitment1, err := BlobToKZGCommitment(&blob1)
require.NoError(t, err)
blobs[0] = blob1[:]
commitments[0] = commitment1[:]
// Second blob - use invalid commitment size
randBlob2 := random.GetRandBlob(456)
var blob2 Blob
copy(blob2[:], randBlob2[:])
blobs[1] = blob2[:]
commitments[1] = make([]byte, 32) // Wrong size
// Add cell proofs for both blobs
for i := 0; i < blobCount; i++ {
for j := uint64(0); j < numberOfColumns; j++ {
allCellProofs = append(allCellProofs, make([]byte, 48))
}
}
err = VerifyCellKZGProofBatchFromBlobData(blobs, commitments, allCellProofs, numberOfColumns)
require.ErrorContains(t, "commitments len (32) differs from expected (48)", err)
})
t.Run("multiple blobs with mixed invalid cell proof sizes", func(t *testing.T) {
numberOfColumns := uint64(128)
blobCount := 2
blobs := make([][]byte, blobCount)
commitments := make([][]byte, blobCount)
var allCellProofs [][]byte
for i := 0; i < blobCount; i++ {
randBlob := random.GetRandBlob(int64(i))
var blob Blob
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
// Add cell proofs - make some invalid in the second blob
for j := uint64(0); j < numberOfColumns; j++ {
if i == 1 && j == 64 {
// Invalid proof size in middle of second blob's proofs
allCellProofs = append(allCellProofs, make([]byte, 20))
} else {
allCellProofs = append(allCellProofs, make([]byte, 48))
}
}
}
err := VerifyCellKZGProofBatchFromBlobData(blobs, commitments, allCellProofs, numberOfColumns)
require.ErrorContains(t, "proofs len (20) differs from expected (48)", err)
})
}

View File

@@ -159,7 +159,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
}
// Fill in missing blocks
-if err := s.fillInForkChoiceMissingBlocks(ctx, blks[0], preState.CurrentJustifiedCheckpoint(), preState.FinalizedCheckpoint()); err != nil {
+if err := s.fillInForkChoiceMissingBlocks(ctx, blks[0], preState.FinalizedCheckpoint(), preState.CurrentJustifiedCheckpoint()); err != nil {
return errors.Wrap(err, "could not fill in missing blocks to forkchoice")
}

View File

@@ -30,6 +30,10 @@ import (
"github.com/sirupsen/logrus"
)
// ErrInvalidCheckpointArgs may be returned when the finalized checkpoint has an epoch greater than the justified checkpoint epoch.
// If you are seeing this error, make sure you haven't mixed up the order of the arguments in the method you are calling.
var ErrInvalidCheckpointArgs = errors.New("finalized checkpoint cannot be greater than justified checkpoint")
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() primitives.Slot {
return slots.CurrentSlot(s.genesisTime)
@@ -454,6 +458,9 @@ func (s *Service) ancestorByDB(ctx context.Context, r [32]byte, slot primitives.
// This is useful for block tree visualizer and additional vote accounting.
func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
fCheckpoint, jCheckpoint *ethpb.Checkpoint) error {
if fCheckpoint.Epoch > jCheckpoint.Epoch {
return ErrInvalidCheckpointArgs
}
pendingNodes := make([]*forkchoicetypes.BlockAndCheckpoints, 0)
// Fork choice only matters from last finalized slot.

View File

@@ -375,6 +375,81 @@ func TestFillForkChoiceMissingBlocks_FinalizedSibling(t *testing.T) {
require.Equal(t, ErrNotDescendantOfFinalized.Error(), err.Error())
}
func TestFillForkChoiceMissingBlocks_ErrorCases(t *testing.T) {
tests := []struct {
name string
finalizedEpoch primitives.Epoch
justifiedEpoch primitives.Epoch
expectedError error
}{
{
name: "finalized epoch greater than justified epoch",
finalizedEpoch: 5,
justifiedEpoch: 3,
expectedError: ErrInvalidCheckpointArgs,
},
{
name: "valid case - finalized equal to justified",
finalizedEpoch: 3,
justifiedEpoch: 3,
expectedError: nil,
},
{
name: "valid case - finalized less than justified",
finalizedEpoch: 2,
justifiedEpoch: 3,
expectedError: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
service, tr := minimalTestService(t)
ctx, beaconDB := tr.ctx, tr.db
st, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, service.saveGenesisData(ctx, st))
// Create a simple block for testing
blk := util.NewBeaconBlock()
blk.Block.Slot = 10
blk.Block.ParentRoot = service.originBlockRoot[:]
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, blk)
// Create checkpoints with test case epochs
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: tt.finalizedEpoch,
Root: service.originBlockRoot[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: tt.justifiedEpoch,
Root: service.originBlockRoot[:],
}
// Set up forkchoice store to avoid other errors
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, service.originBlockRoot, service.originBlockRoot, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
err = service.fillInForkChoiceMissingBlocks(
t.Context(), wsb, finalizedCheckpoint, justifiedCheckpoint)
if tt.expectedError != nil {
require.ErrorIs(t, err, tt.expectedError)
} else {
// For valid cases, we might get other errors (like block not being descendant of finalized)
// but we shouldn't get the checkpoint validation error
if err != nil && errors.Is(err, tt.expectedError) {
t.Errorf("Unexpected checkpoint validation error: %v", err)
}
}
})
}
}
// blockTree1 constructs the following tree:
//
// /- B1
@@ -2132,13 +2207,13 @@ func TestNoViableHead_Reboot(t *testing.T) {
// Forkchoice has the genesisRoot loaded at startup
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.cfg.ForkChoiceStore.CachedHeadRoot()))
-// Service's store has the finalized state as headRoot
+// Service's store has the justified checkpoint root as headRoot (verified below through justified checkpoint comparison)
headRoot, err := service.HeadRoot(ctx)
require.NoError(t, err)
-require.Equal(t, genesisRoot, bytesutil.ToBytes32(headRoot))
+require.NotEqual(t, bytesutil.ToBytes32(params.BeaconConfig().ZeroHash[:]), bytesutil.ToBytes32(headRoot)) // Ensure head is not zero
optimistic, err := service.IsOptimistic(ctx)
require.NoError(t, err)
-require.Equal(t, false, optimistic)
+require.Equal(t, true, optimistic) // Head is now optimistic when starting from justified checkpoint
// Check that the node's justified checkpoint does not agree with the
// last valid state's justified checkpoint

View File

@@ -20,7 +20,7 @@ func (s *Service) setupForkchoice(st state.BeaconState) error {
return errors.Wrap(err, "could not set up forkchoice checkpoints")
}
if err := s.setupForkchoiceTree(st); err != nil {
return errors.Wrap(err, "could not set up forkchoice root")
return errors.Wrap(err, "could not set up forkchoice tree")
}
if err := s.initializeHead(s.ctx, st); err != nil {
return errors.Wrap(err, "could not initialize head from db")
@@ -30,24 +30,24 @@ func (s *Service) setupForkchoice(st state.BeaconState) error {
func (s *Service) startupHeadRoot() [32]byte {
headStr := features.Get().ForceHead
-cp := s.FinalizedCheckpt()
-fRoot := s.ensureRootNotZeros([32]byte(cp.Root))
+jp := s.CurrentJustifiedCheckpt()
+jRoot := s.ensureRootNotZeros([32]byte(jp.Root))
if headStr == "" {
-return fRoot
+return jRoot
}
if headStr == "head" {
root, err := s.cfg.BeaconDB.HeadBlockRoot()
if err != nil {
log.WithError(err).Error("Could not get head block root, starting with finalized block as head")
return fRoot
log.WithError(err).Error("Could not get head block root, starting with justified block as head")
return jRoot
}
log.Infof("Using Head root of %#x", root)
return root
}
root, err := bytesutil.DecodeHexWithLength(headStr, 32)
if err != nil {
log.WithError(err).Error("Could not parse head root, starting with finalized block as head")
return fRoot
log.WithError(err).Error("Could not parse head root, starting with justified block as head")
return jRoot
}
return [32]byte(root)
}

View File

@@ -32,7 +32,7 @@ func Test_startupHeadRoot(t *testing.T) {
})
defer resetCfg()
require.Equal(t, service.startupHeadRoot(), gr)
require.LogsContain(t, hook, "Could not get head block root, starting with finalized block as head")
require.LogsContain(t, hook, "Could not get head block root, starting with justified block as head")
})
st, _ := util.DeterministicGenesisState(t, 64)

View File

@@ -67,6 +67,30 @@ func (s *SyncCommitteeCache) Clear() {
s.cache = cache.NewFIFO(keyFn)
}
// CurrentPeriodPositions returns the current period positions of the given validator indices with respect to
// the sync committee. If an input validator index has no assignment, an empty list is returned
// for that validator. If the input root does not exist in the cache, `ErrNonExistingSyncCommitteeKey` is returned.
// Manually checking the state for the index position is recommended when `ErrNonExistingSyncCommitteeKey` is returned.
func (s *SyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
s.lock.RLock()
defer s.lock.RUnlock()
pos, err := s.positionsInCommittee(root, indices)
if err != nil {
return nil, err
}
result := make([][]primitives.CommitteeIndex, len(pos))
for i, p := range pos {
if p == nil {
result[i] = []primitives.CommitteeIndex{}
} else {
result[i] = p.currentPeriod
}
}
return result, nil
}
// CurrentPeriodIndexPosition returns current period index position of a validator index with respect to
// the sync committee. If the input validator index has no assignment, an empty list will be returned.
// If the input root does not exist in cache, `ErrNonExistingSyncCommitteeKey` is returned.
@@ -104,11 +128,7 @@ func (s *SyncCommitteeCache) NextPeriodIndexPosition(root [32]byte, valIdx primi
return pos.nextPeriod, nil
}
-// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
-// of validator index to its index(s) position in the sync committee.
-func (s *SyncCommitteeCache) idxPositionInCommittee(
-root [32]byte, valIdx primitives.ValidatorIndex,
-) (*positionInCommittee, error) {
+func (s *SyncCommitteeCache) positionsInCommittee(root [32]byte, indices []primitives.ValidatorIndex) ([]*positionInCommittee, error) {
obj, exists, err := s.cache.GetByKey(key(root))
if err != nil {
return nil, err
@@ -121,13 +141,33 @@ func (s *SyncCommitteeCache) idxPositionInCommittee(
if !ok {
return nil, errNotSyncCommitteeIndexPosition
}
-idxInCommittee, ok := item.vIndexToPositionMap[valIdx]
-if !ok {
-SyncCommitteeCacheMiss.Inc()
result := make([]*positionInCommittee, len(indices))
for i, idx := range indices {
idxInCommittee, ok := item.vIndexToPositionMap[idx]
if ok {
SyncCommitteeCacheHit.Inc()
result[i] = idxInCommittee
} else {
SyncCommitteeCacheMiss.Inc()
result[i] = nil
}
}
return result, nil
}
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of validator index to its index(s) position in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
positions, err := s.positionsInCommittee(root, []primitives.ValidatorIndex{valIdx})
if err != nil {
return nil, err
}
if len(positions) == 0 {
return nil, nil
}
-SyncCommitteeCacheHit.Inc()
-return idxInCommittee, nil
+return positions[0], nil
}
// UpdatePositionsInCommittee updates caching of validators position in sync committee in respect to

View File

@@ -16,6 +16,11 @@ func NewSyncCommittee() *FakeSyncCommitteeCache {
return &FakeSyncCommitteeCache{}
}
// CurrentPeriodPositions -- fake
func (s *FakeSyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
return nil, nil
}
// CurrentEpochIndexPosition -- fake.
func (s *FakeSyncCommitteeCache) CurrentPeriodIndexPosition(root [32]byte, valIdx primitives.ValidatorIndex) ([]primitives.CommitteeIndex, error) {
return nil, nil

View File

@@ -5,6 +5,7 @@ import (
"sort"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
@@ -39,11 +40,11 @@ func ProcessAttesterSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
-slashFunc slashValidatorFunc,
+exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
for _, slashing := range slashings {
-beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, slashFunc)
+beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
@@ -56,7 +57,7 @@ func ProcessAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
-slashFunc slashValidatorFunc,
+exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
@@ -75,10 +76,9 @@ func ProcessAttesterSlashing(
return nil, err
}
if helpers.IsSlashableValidator(val.ActivationEpoch(), val.WithdrawableEpoch(), val.Slashed(), currentEpoch) {
-beaconState, err = slashFunc(ctx, beaconState, primitives.ValidatorIndex(validatorIndex))
+beaconState, err = validators.SlashValidator(ctx, beaconState, primitives.ValidatorIndex(validatorIndex), exitInfo)
if err != nil {
-return nil, errors.Wrapf(err, "could not slash validator index %d",
-validatorIndex)
+return nil, errors.Wrapf(err, "could not slash validator index %d", validatorIndex)
}
slashedAny = true
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -44,11 +45,10 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
Target: &ethpb.Checkpoint{Epoch: 1}},
})}}
-var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
-Validators: registry,
+Validators: []*ethpb.Validator{{}},
Slot: currentSlot,
})
require.NoError(t, err)
@@ -62,16 +62,15 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
-_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
+_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
assert.ErrorContains(t, "attestations are not slashable", err)
}
func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T) {
-var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
-Validators: registry,
+Validators: []*ethpb.Validator{{}},
Slot: currentSlot,
})
require.NoError(t, err)
@@ -101,7 +100,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
-_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
+_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -243,7 +242,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
-newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
+newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -265,3 +264,83 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
})
}
}
func TestProcessAttesterSlashing_ExitEpochGetsUpdated(t *testing.T) {
st, keys := util.DeterministicGenesisStateElectra(t, 8)
bal, err := helpers.TotalActiveBalance(st)
require.NoError(t, err)
perEpochChurn := helpers.ActivationExitChurnLimit(primitives.Gwei(bal))
vals := st.Validators()
// We set the total effective balance of slashed validators
// higher than the churn limit for a single epoch.
vals[0].EffectiveBalance = uint64(perEpochChurn / 3)
vals[1].EffectiveBalance = uint64(perEpochChurn / 3)
vals[2].EffectiveBalance = uint64(perEpochChurn / 3)
vals[3].EffectiveBalance = uint64(perEpochChurn / 3)
require.NoError(t, st.SetValidators(vals))
sl1att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
sl1att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
slashing1 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl1att1,
Attestation_2: sl1att2,
}
sl2att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{2, 3},
})
sl2att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{2, 3},
})
slashing2 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl2att1,
Attestation_2: sl2att2,
}
domain, err := signing.Domain(st.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(sl1att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := keys[0].Sign(signingRoot[:])
sig1 := keys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl1att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[0].Sign(signingRoot[:])
sig1 = keys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att2.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att2.Signature = aggregateSig.Marshal()
exitInfo := v.ExitInformation(st)
assert.Equal(t, primitives.Epoch(0), exitInfo.HighestExitEpoch)
_, err = blocks.ProcessAttesterSlashings(t.Context(), st, []ethpb.AttSlashing{slashing1, slashing2}, exitInfo)
require.NoError(t, err)
assert.Equal(t, primitives.Epoch(6), exitInfo.HighestExitEpoch)
}

View File

@@ -191,7 +191,7 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer.Fuzz(p)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
-r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.SlashValidator)
+r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
}
@@ -224,7 +224,7 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer.Fuzz(a)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
-r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.SlashValidator)
+r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
@@ -334,7 +334,7 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
-r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e})
+r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
}
@@ -351,7 +351,7 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
}

View File

@@ -94,7 +94,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {

View File

@@ -9,7 +9,6 @@ import (
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -50,13 +49,12 @@ func ProcessVoluntaryExits(
ctx context.Context,
beaconState state.BeaconState,
exits []*ethpb.SignedVoluntaryExit,
exitInfo *v.ExitInfo,
) (state.BeaconState, error) {
// Avoid calculating the epoch churn if no exits exist.
if len(exits) == 0 {
return beaconState, nil
}
maxExitEpoch, churn := v.MaxExitEpochAndChurn(beaconState)
var exitEpoch primitives.Epoch
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
@@ -68,15 +66,8 @@ func ProcessVoluntaryExits(
if err := VerifyExitAndSignature(val, beaconState, exit); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, exitEpoch, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, maxExitEpoch, churn)
if err == nil {
if exitEpoch > maxExitEpoch {
maxExitEpoch = exitEpoch
churn = 1
} else if exitEpoch == maxExitEpoch {
churn++
}
} else if !errors.Is(err, v.ErrValidatorAlreadyExited) {
beaconState, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, exitInfo)
if err != nil && !errors.Is(err, v.ErrValidatorAlreadyExited) {
return nil, err
}
}

View File
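Note: the operations above now share a single *validators.ExitInfo per block instead of rescanning the validator registry on every call. Below is a minimal caller-side sketch of that pattern, assuming the import paths used in these diffs; processBlockExits is a hypothetical wrapper, not a function in the Prysm codebase.

package example

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
	v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
)

// processBlockExits is a hypothetical wrapper showing how one ExitInfo is
// computed per block and threaded through every exit-initiating operation.
func processBlockExits(ctx context.Context, st state.BeaconState, blk interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
	body := blk.Body()
	exitInfo := v.ExitInformation(st) // one validator-registry scan per block

	st, err := blocks.ProcessProposerSlashings(ctx, st, body.ProposerSlashings(), exitInfo)
	if err != nil {
		return nil, err
	}
	st, err = blocks.ProcessAttesterSlashings(ctx, st, body.AttesterSlashings(), exitInfo)
	if err != nil {
		return nil, err
	}
	// exitInfo now reflects any exits initiated by the slashings above, so the
	// voluntary exits queue behind them without another registry scan.
	return blocks.ProcessVoluntaryExits(ctx, st, body.VoluntaryExits(), exitInfo)
}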

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -46,7 +47,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
}
want := "validator has not been active long enough to exit"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
assert.ErrorContains(t, want, err)
}
@@ -76,7 +77,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
}
want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
assert.ErrorContains(t, want, err)
}
@@ -124,7 +125,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
},
}
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
require.NoError(t, err, "Could not process exits")
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {

View File

@@ -7,9 +7,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -19,11 +19,6 @@ import (
// ErrCouldNotVerifyBlockHeader is returned when a block header's signature cannot be verified.
var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")
type slashValidatorFunc func(
ctx context.Context,
st state.BeaconState,
vid primitives.ValidatorIndex) (state.BeaconState, error)
// ProcessProposerSlashings is one of the operations performed
// on each processed beacon block to slash proposers based on
// slashing conditions if any slashable events occurred.
@@ -54,11 +49,11 @@ func ProcessProposerSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []*ethpb.ProposerSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, slashFunc)
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
@@ -71,7 +66,7 @@ func ProcessProposerSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
if slashing == nil {
@@ -80,7 +75,7 @@ func ProcessProposerSlashing(
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
beaconState, err = slashFunc(ctx, beaconState, slashing.Header_1.Header.ProposerIndex)
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
}

View File

@@ -50,7 +50,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
}
want := "mismatched header slots"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -83,7 +83,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
want := "expected slashing headers to differ"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -133,7 +133,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
"validator with key %#x is not slashable",
bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -172,7 +172,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -220,7 +220,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -268,7 +268,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -316,7 +316,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()

View File

@@ -84,8 +84,8 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
// Handle validator ejections.
for _, idx := range eligibleForEjection {
var err error
// exitQueueEpoch and churn arguments are not used in electra.
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, 0 /*exitQueueEpoch*/, 0 /*churn*/)
// exit info is not used in electra
st, err = validators.InitiateValidatorExit(ctx, st, idx, &validators.ExitInfo{})
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return fmt.Errorf("failed to initiate validator exit at index %d: %w", idx, err)
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -46,18 +47,21 @@ var (
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)
func ProcessOperations(
ctx context.Context,
st state.BeaconState,
block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
func ProcessOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
// 6110 validations are in VerifyOperationLengths
bb := block.Body()
// Electra extends the altair operations.
st, err := ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), v.SlashValidator)
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), v.SlashValidator)
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -68,7 +72,7 @@ func ProcessOperations(
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}

View File

@@ -147,9 +147,8 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
if isFullExitRequest {
// Only exit validator if it has no pending withdrawals in the queue
if pendingBalanceToWithdraw == 0 {
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
var err error
st, _, err = validators.InitiateValidatorExit(ctx, st, vIdx, maxExitEpoch, churn)
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
if err != nil {
return nil, err
}

View File

@@ -99,8 +99,7 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
for _, idx := range eligibleForEjection {
// It is fine to do a quadratic loop here since this should
// barely happen
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
st, err = validators.InitiateValidatorExit(ctx, st, idx, validators.ExitInformation(st))
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}

View File

@@ -16,10 +16,10 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err := electra.ProcessEpoch(ctx, state); err != nil {
return errors.Wrap(err, "could not process epoch in fulu transition")
}
return processProposerLookahead(ctx, state)
return ProcessProposerLookahead(ctx, state)
}
func processProposerLookahead(ctx context.Context, state state.BeaconState) error {
func ProcessProposerLookahead(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "fulu.processProposerLookahead")
defer span.End()

View File

@@ -0,0 +1,19 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["upgrade.go"],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -0,0 +1,189 @@
package gloas
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
// UpgradeToGloas upgrades the input beacon state and returns its Gloas version.
// This implements the upgrade_to_eip7928 function from the specification.
func UpgradeToGloas(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(beaconState)
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
depositRequestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
depositBalanceToConsume, err := beaconState.DepositBalanceToConsume()
if err != nil {
return nil, err
}
exitBalanceToConsume, err := beaconState.ExitBalanceToConsume()
if err != nil {
return nil, err
}
earliestExitEpoch, err := beaconState.EarliestExitEpoch()
if err != nil {
return nil, err
}
consolidationBalanceToConsume, err := beaconState.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
earliestConsolidationEpoch, err := beaconState.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pendingDeposits, err := beaconState.PendingDeposits()
if err != nil {
return nil, err
}
pendingPartialWithdrawals, err := beaconState.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pendingConsolidations, err := beaconState.PendingConsolidations()
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
// Create the new Gloas execution payload header with block_access_list_root
// as specified in the EIP7928 upgrade function
latestExecutionPayloadHeader := &enginev1.ExecutionPayloadHeaderGloas{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
BlobGasUsed: blobGasUsed,
ExcessBlobGas: excessBlobGas,
BlockAccessListRoot: make([]byte, 32), // New in EIP7928 - empty Root()
}
s := &ethpb.BeaconStateGloas{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().GloasForkVersion, // Modified in EIP7928
Epoch: epoch,
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: latestExecutionPayloadHeader,
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: depositRequestsStartIndex,
DepositBalanceToConsume: depositBalanceToConsume,
ExitBalanceToConsume: exitBalanceToConsume,
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: consolidationBalanceToConsume,
EarliestConsolidationEpoch: earliestConsolidationEpoch,
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
post, err := state_native.InitializeFromProtoUnsafeGloas(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post gloas beaconState")
}
return post, nil
}

View File

@@ -317,23 +317,15 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments := make(map[primitives.ValidatorIndex][]primitives.Slot)
originalStateSlot := state.Slot()
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
// Skip proposer assignment for genesis slot.
if slot == 0 {
continue
}
// Set the state's current slot.
if err := state.SetSlot(slot); err != nil {
return nil, err
}
// Determine the proposer index for the current slot.
i, err := BeaconProposerIndex(ctx, state)
i, err := BeaconProposerIndexAtSlot(ctx, state, slot)
if err != nil {
return nil, errors.Wrapf(err, "could not check proposer at slot %d", state.Slot())
return nil, errors.Wrapf(err, "could not check proposer at slot %d", slot)
}
// Append the slot to the proposer's assignments.
@@ -342,12 +334,6 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments[i] = append(proposerAssignments[i], slot)
}
// Reset state back to its original slot.
if err := state.SetSlot(originalStateSlot); err != nil {
return nil, err
}
return proposerAssignments, nil
}

View File

@@ -87,6 +87,11 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
return total, nil
}
// UpdateTotalActiveBalanceCache updates the cache with the given total active balance.
func UpdateTotalActiveBalanceCache(s state.BeaconState, total uint64) error {
return balanceCache.AddTotalEffectiveBalance(s, total)
}
// IncreaseBalance increases validator with the given 'index' balance by 'delta' in Gwei.
//
// Spec pseudocode definition:

View File

@@ -297,3 +297,30 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
require.ErrorContains(t, "addition overflows", helpers.IncreaseBalance(state, test.i, test.nb))
}
}
func TestUpdateTotalActiveBalanceCache(t *testing.T) {
helpers.ClearCache()
// Create a test state with some validators
validators := []*ethpb.Validator{
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 31 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
Slot: 0,
})
require.NoError(t, err)
// Test updating cache with a specific total
testTotal := uint64(95 * 1e9) // 32 + 32 + 31 = 95
err = helpers.UpdateTotalActiveBalanceCache(state, testTotal)
require.NoError(t, err)
// Verify the cache was updated by retrieving the total active balance
// which should now return the cached value
cachedTotal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
assert.Equal(t, testTotal, cachedTotal, "Cache should return the updated total")
}

View File

@@ -21,6 +21,39 @@ var (
syncCommitteeCache = cache.NewSyncCommittee()
)
// CurrentPeriodPositions returns the committee position indices in the current period sync committee for the given validators.
func CurrentPeriodPositions(st state.BeaconState, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return nil, err
}
pos, err := syncCommitteeCache.CurrentPeriodPositions(root, indices)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
committee, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
// Fill in the cache on miss.
go func() {
if err := syncCommitteeCache.UpdatePositionsInCommittee(root, st); err != nil {
log.WithError(err).Error("Could not fill sync committee cache on miss")
}
}()
pos = make([][]primitives.CommitteeIndex, len(indices))
for i, idx := range indices {
pubkey := st.PubkeyAtIndex(idx)
pos[i] = findSubCommitteeIndices(pubkey[:], committee.Pubkeys)
}
return pos, nil
}
if err != nil {
return nil, err
}
return pos, nil
}
// IsCurrentPeriodSyncCommittee returns true if the input validator index belongs in the current period sync committee
// along with the sync committee root.
// 1. Checks if the public key exists in the sync committee cache

View File

@@ -17,6 +17,38 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestCurrentPeriodPositions(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
}
syncCommittee.Pubkeys[i] = bytesutil.PadTo(k, 48)
}
state, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Validators: validators,
})
require.NoError(t, err)
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
require.NoError(t, helpers.SyncCommitteeCache().UpdatePositionsInCommittee([32]byte{}, state))
positions, err := helpers.CurrentPeriodPositions(state, []primitives.ValidatorIndex{0, 1})
require.NoError(t, err)
require.Equal(t, 2, len(positions))
require.Equal(t, 1, len(positions[0]))
assert.Equal(t, primitives.CommitteeIndex(0), positions[0][0])
require.Equal(t, 1, len(positions[1]))
assert.Equal(t, primitives.CommitteeIndex(1), positions[1][0])
}
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
helpers.ClearCache()

View File

@@ -309,23 +309,29 @@ func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primiti
if err != nil {
return 0, errors.Wrap(err, "could not get proposer lookahead")
}
spe := params.BeaconConfig().SlotsPerEpoch
if e == stateEpoch {
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch], nil
return lookAhead[slot%spe], nil
}
// The caller is requesting the proposer for the next epoch
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch+params.BeaconConfig().SlotsPerEpoch], nil
return lookAhead[spe+slot%spe], nil
}
// BeaconProposerIndexAtSlot returns the proposer index at the given slot from the
// point of view of the given state, treated as the head state.
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
if state.Version() >= version.Fulu {
return beaconProposerIndexAtSlotFulu(state, slot)
}
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(state.Slot())
// Even if the state is post Fulu, we may request a past proposer index.
if state.Version() >= version.Fulu && e >= params.BeaconConfig().FuluForkEpoch {
// We can use the cached lookahead only for the current and the next epoch.
if e == stateEpoch || e == stateEpoch+1 {
return beaconProposerIndexAtSlotFulu(state, slot)
}
}
// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
// For simplicity, the node will skip caching of genesis epoch.
if e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
// For simplicity, the node will skip caching of the genesis epoch. If the passed state has not yet reached the requested epoch, we do not check the cache.
if e <= stateEpoch && e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
s, err := slots.EpochEnd(e - 1)
if err != nil {
return 0, err

View File
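To make the lookahead index arithmetic above concrete (assuming mainnet's 32 slots per epoch and the 64-entry, two-epoch lookahead): with the state at slot 100 (epoch 3), a request for slot 100 reads lookAhead[100 % 32] = lookAhead[4], while a request for slot 130 (epoch 4, the next epoch) reads lookAhead[32 + 130 % 32] = lookAhead[34]. The refactor only hoists SlotsPerEpoch into spe; both forms of the next-epoch index, slot%spe+spe and spe+slot%spe, resolve to the same entry.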

@@ -1161,6 +1161,10 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
}
func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
lookahead := make([]uint64, 64)
lookahead[0] = 15
lookahead[1] = 16
@@ -1180,8 +1184,4 @@ func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
idx, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 130)
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(42), idx)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 95)
require.ErrorContains(t, "slot 95 is not in the current epoch 3 or the next epoch", err)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 160)
require.ErrorContains(t, "slot 160 is not in the current epoch 3 or the next epoch", err)
}

View File

@@ -108,6 +108,12 @@ func CanUpgradeToFulu(slot primitives.Slot) bool {
return epochStart && fuluEpoch
}
func CanUpgradeToGloas(slot primitives.Slot) bool {
epochStart := slots.IsEpochStart(slot)
gloasEpoch := slots.ToEpoch(slot) == params.BeaconConfig().GloasForkEpoch
return epochStart && gloasEpoch
}
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//

View File

@@ -24,6 +24,7 @@ go_library(
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/execution:go_default_library",
"//beacon-chain/core/fulu:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/fulu"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
@@ -389,6 +390,15 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToGloas(slot) {
state, err = gloas.UpgradeToGloas(ctx, state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
upgraded = true
}
if upgraded {
log.WithField("version", version.String(state.Version())).Info("Upgraded state to")
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
b "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition/interop"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -374,15 +375,18 @@ func ProcessBlockForStateRoot(
}
// This calls altair block operations.
func altairOperations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -393,7 +397,7 @@ func altairOperations(
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
@@ -401,15 +405,18 @@ func altairOperations(
}
// This calls phase 0 block operations.
func phase0Operations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block proposer slashings")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block attester slashings")
}
@@ -420,5 +427,9 @@ func phase0Operations(
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
}
return b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
return st, nil
}

View File

@@ -13,34 +13,55 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/math"
mathutil "github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
// ExitInfo provides information about validator exits in the state.
type ExitInfo struct {
HighestExitEpoch primitives.Epoch
Churn uint64
TotalActiveBalance uint64
}
// ErrValidatorAlreadyExited is an error raised when trying to process an exit of
// an already exited validator
var ErrValidatorAlreadyExited = errors.New("validator already exited")
// MaxExitEpochAndChurn returns the maximum non-FAR_FUTURE_EPOCH exit
// epoch and the number of them
func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, churn uint64) {
// ExitInformation scans the validator registry once and returns the highest exit epoch, the churn at that epoch, and the total active balance for the given state.
func ExitInformation(s state.BeaconState) *ExitInfo {
exitInfo := &ExitInfo{}
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
currentEpoch := slots.ToEpoch(s.Slot())
totalActiveBalance := uint64(0)
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
e := val.ExitEpoch()
if e != farFutureEpoch {
if e > maxExitEpoch {
maxExitEpoch = e
churn = 1
} else if e == maxExitEpoch {
churn++
if e > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = e
exitInfo.Churn = 1
} else if e == exitInfo.HighestExitEpoch {
exitInfo.Churn++
}
}
// Calculate total active balance in the same loop
if helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
totalActiveBalance += val.EffectiveBalance()
}
return nil
})
_ = err
return
// Floor the total active balance at EFFECTIVE_BALANCE_INCREMENT, per the spec
exitInfo.TotalActiveBalance = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, totalActiveBalance)
return exitInfo
}
// InitiateValidatorExit takes in validator index and updates
@@ -64,59 +85,117 @@ func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, c
// # Set validator exit epoch and withdrawable epoch
// validator.exit_epoch = exit_queue_epoch
// validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex, exitQueueEpoch primitives.Epoch, churn uint64) (state.BeaconState, primitives.Epoch, error) {
func InitiateValidatorExit(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, 0, err
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, validator.ExitEpoch, ErrValidatorAlreadyExited
return s, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitQueueEpoch {
exitQueueEpoch = exitableEpoch
churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return nil, 0, errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if churn >= currentChurn {
exitQueueEpoch, err = exitQueueEpoch.SafeAdd(1)
if err != nil {
return nil, 0, err
}
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitQueueEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, 0, err
return nil, err
}
}
validator.ExitEpoch = exitQueueEpoch
validator.WithdrawableEpoch, err = exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, 0, err
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, 0, err
return nil, err
}
return s, exitQueueEpoch, nil
return s, nil
}
// InitiateValidatorExitForTotalBal has the same functionality as InitiateValidatorExit;
// the only difference is how the total active balance is obtained. InitiateValidatorExit
// calculates it inside the function, while InitiateValidatorExitForTotalBal receives it as a
// function argument.
func InitiateValidatorExitForTotalBal(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
totalActiveBalance primitives.Gwei,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance, primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, err
}
}
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
}
return s, nil
}
func initiateValidatorExitPreElectra(ctx context.Context, s state.BeaconState, exitInfo *ExitInfo) error {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = exitableEpoch
exitInfo.Churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if exitInfo.Churn >= currentChurn {
exitInfo.HighestExitEpoch, err = exitInfo.HighestExitEpoch.SafeAdd(1)
if err != nil {
return err
}
exitInfo.Churn = 1
} else {
exitInfo.Churn = exitInfo.Churn + 1
}
return nil
}
// SlashValidator slashes the malicious validator's balance and awards
@@ -152,9 +231,12 @@ func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primiti
func SlashValidator(
ctx context.Context,
s state.BeaconState,
slashedIdx primitives.ValidatorIndex) (state.BeaconState, error) {
maxExitEpoch, churn := MaxExitEpochAndChurn(s)
s, _, err := InitiateValidatorExit(ctx, s, slashedIdx, maxExitEpoch, churn)
slashedIdx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
var err error
s, err = InitiateValidatorExitForTotalBal(ctx, s, slashedIdx, exitInfo, primitives.Gwei(exitInfo.TotalActiveBalance))
if err != nil && !errors.Is(err, ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)
}

View File
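A hedged, self-contained model of the churn bookkeeping above may help. simulateExitQueue below is illustrative only: it ignores ActivationExitEpoch and takes the churn limit as a plain argument instead of deriving it from the state via ValidatorExitChurnLimit.

// simulateExitQueue is a hypothetical, simplified model of the bookkeeping in
// initiateValidatorExitPreElectra; it is not part of the Prysm codebase.
func simulateExitQueue(info *validators.ExitInfo, churnLimit uint64) {
	if info.Churn >= churnLimit {
		info.HighestExitEpoch++ // the queue for this epoch is full, roll over
		info.Churn = 1
		return
	}
	info.Churn++
}

Starting from &validators.ExitInfo{HighestExitEpoch: 10, Churn: 3} with a churn limit of 4, two successive exits land the first at epoch 10 (churn becomes 4) and the second at epoch 11 (churn resets to 1), the same rollover exercised by TestInitiateValidatorExit_ChurnOverflow below.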

@@ -49,9 +49,11 @@ func TestInitiateValidatorExit_AlreadyExited(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, 0, 199, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: 199, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, 0, exitInfo)
require.ErrorIs(t, err, validators.ErrValidatorAlreadyExited)
require.Equal(t, exitEpoch, epoch)
assert.Equal(t, primitives.Epoch(199), exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
v, err := newState.ValidatorAtIndex(0)
require.NoError(t, err)
assert.Equal(t, exitEpoch, v.ExitEpoch, "Already exited")
@@ -68,9 +70,11 @@ func TestInitiateValidatorExit_ProperExit(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
require.NoError(t, err)
require.Equal(t, exitedEpoch+2, epoch)
assert.Equal(t, exitedEpoch+2, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(2), exitInfo.Churn)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, v.ExitEpoch, "Exit epoch was not the highest")
@@ -88,9 +92,11 @@ func TestInitiateValidatorExit_ChurnOverflow(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 4)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 4}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
require.NoError(t, err)
require.Equal(t, exitedEpoch+3, epoch)
assert.Equal(t, exitedEpoch+3, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
// Because of exit queue overflow,
// validator who init exited has to wait one more epoch.
@@ -110,7 +116,8 @@ func TestInitiateValidatorExit_WithdrawalOverflows(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
_, _, err = validators.InitiateValidatorExit(t.Context(), state, 1, params.BeaconConfig().FarFutureEpoch-1, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: params.BeaconConfig().FarFutureEpoch - 1, Churn: 1}
_, err = validators.InitiateValidatorExit(t.Context(), state, 1, exitInfo)
require.ErrorContains(t, "addition overflows", err)
}
@@ -146,12 +153,11 @@ func TestInitiateValidatorExit_ProperExit_Electra(t *testing.T) {
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), ebtc)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, 0, 0) // exitQueueEpoch and churn are not used in electra
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, &validators.ExitInfo{}) // exit info is not used in electra
require.NoError(t, err)
// Expect that the exit epoch is the next available epoch with max seed lookahead.
want := helpers.ActivationExitEpoch(exitedEpoch + 1)
require.Equal(t, want, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, want, v.ExitEpoch, "Exit epoch was not the highest")
@@ -190,7 +196,7 @@ func TestSlashValidator_OK(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Phase0)
@@ -244,7 +250,7 @@ func TestSlashValidator_Electra(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Electra)
@@ -505,8 +511,8 @@ func TestValidatorMaxExitEpochAndChurn(t *testing.T) {
for _, tt := range tests {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
epoch, churn := validators.MaxExitEpochAndChurn(s)
require.Equal(t, tt.wantedEpoch, epoch)
require.Equal(t, tt.wantedChurn, churn)
exitInfo := validators.ExitInformation(s)
require.Equal(t, tt.wantedEpoch, exitInfo.HighestExitEpoch)
require.Equal(t, tt.wantedChurn, exitInfo.Churn)
}
}

View File

@@ -116,19 +116,43 @@ func (l *periodicEpochLayout) pruneBefore(before primitives.Epoch) (*pruneSummar
}
// Roll up summaries and clean up per-epoch directories.
rollup := &pruneSummary{}
// Track which period directories might be empty after epoch removal
periodsToCheck := make(map[string]struct{})
for epoch, sum := range sums {
rollup.blobsPruned += sum.blobsPruned
rollup.failedRemovals = append(rollup.failedRemovals, sum.failedRemovals...)
rmdir := l.epochDir(epoch)
periodDir := l.periodDir(epoch)
if len(sum.failedRemovals) == 0 {
if err := l.fs.Remove(rmdir); err != nil {
log.WithField("dir", rmdir).WithError(err).Error("Failed to remove epoch directory while pruning")
} else {
periodsToCheck[periodDir] = struct{}{}
}
} else {
log.WithField("dir", rmdir).WithField("numFailed", len(sum.failedRemovals)).WithError(err).Error("Unable to remove epoch directory due to pruning failures")
}
}
// Clean up empty period directories
for periodDir := range periodsToCheck {
entries, err := afero.ReadDir(l.fs, periodDir)
if err != nil {
log.WithField("dir", periodDir).WithError(err).Debug("Failed to read period directory contents")
continue
}
// Only attempt to remove if directory is empty
if len(entries) == 0 {
if err := l.fs.Remove(periodDir); err != nil {
log.WithField("dir", periodDir).WithError(err).Error("Failed to remove empty period directory")
}
}
}
return rollup, nil
}

View File

@@ -4,6 +4,7 @@ import (
"encoding/binary"
"os"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -195,3 +196,48 @@ func TestLayoutPruneBefore(t *testing.T) {
})
}
}
func TestLayoutByEpochPruneBefore(t *testing.T) {
roots := testRoots(10)
cases := []struct {
name string
pruned []testIdent
remain []testIdent
err error
sum pruneSummary
}{
{
name: "single epoch period cleanup",
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 367076, index: 0}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[1], epoch: 371176, index: 0}}, // Different period
},
sum: pruneSummary{blobsPruned: 1},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t, WithLayout(LayoutNameByEpoch))
pruned := testSetupBlobIdentPaths(t, fs, bs, c.pruned)
remain := testSetupBlobIdentPaths(t, fs, bs, c.remain)
time.Sleep(1 * time.Second)
for _, id := range pruned {
_, err := fs.Stat(bs.layout.sszPath(id))
require.Equal(t, true, os.IsNotExist(err))
dirs := bs.layout.blockParentDirs(id)
for i := len(dirs) - 1; i > 0; i-- {
_, err = fs.Stat(dirs[i])
require.Equal(t, true, os.IsNotExist(err))
}
}
for _, id := range remain {
_, err := fs.Stat(bs.layout.sszPath(id))
require.NoError(t, err)
}
})
}
}

View File

@@ -57,6 +57,11 @@ var (
GetPayloadMethodV5,
GetBlobsV2,
}
gloasEngineEndpoints = []string{
NewPayloadMethodV5,
GetPayloadMethodV6,
}
)
const (
@@ -67,6 +72,8 @@ const (
NewPayloadMethodV3 = "engine_newPayloadV3"
// NewPayloadMethodV4 is the engine_newPayloadVX method added at Electra.
NewPayloadMethodV4 = "engine_newPayloadV4"
// NewPayloadMethodV5 is the engine_newPayloadVX method added at Gloas.
NewPayloadMethodV5 = "engine_newPayloadV5"
// ForkchoiceUpdatedMethod v1 request string for JSON-RPC.
ForkchoiceUpdatedMethod = "engine_forkchoiceUpdatedV1"
// ForkchoiceUpdatedMethodV2 v2 request string for JSON-RPC.
@@ -83,6 +90,8 @@ const (
GetPayloadMethodV4 = "engine_getPayloadV4"
// GetPayloadMethodV5 is the get payload method added for fulu
GetPayloadMethodV5 = "engine_getPayloadV5"
// GetPayloadMethodV6 is the get payload method added for gloas
GetPayloadMethodV6 = "engine_getPayloadV6"
// BlockByHashMethod request string for JSON-RPC.
BlockByHashMethod = "eth_getBlockByHash"
// BlockByNumberMethod request string for JSON-RPC.
@@ -181,6 +190,18 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
return nil, handleRPCError(err)
}
}
case *pb.ExecutionPayloadGloas:
if executionRequests == nil {
return nil, errors.New("execution requests are required for gloas execution payload")
}
flattenedRequests, err := pb.EncodeExecutionRequests(executionRequests)
if err != nil {
return nil, errors.Wrap(err, "failed to encode execution requests")
}
err = s.rpcClient.CallContext(ctx, result, NewPayloadMethodV5, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
if err != nil {
return nil, handleRPCError(err)
}
default:
return nil, errors.New("unknown execution data type")
}
@@ -273,6 +294,9 @@ func (s *Service) ForkchoiceUpdated(
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
epoch := slots.ToEpoch(slot)
if epoch >= params.BeaconConfig().GloasForkEpoch {
return GetPayloadMethodV6, &pb.ExecutionBundleGloas{}
}
if epoch >= params.BeaconConfig().FuluForkEpoch {
return GetPayloadMethodV5, &pb.ExecutionBundleFulu{}
}
@@ -325,6 +349,10 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
supportedEngineEndpoints = append(supportedEngineEndpoints, fuluEngineEndpoints...)
}
if params.GloasEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, gloasEngineEndpoints...)
}
elSupportedEndpointsSlice := make([]string, len(supportedEngineEndpoints))
if err := s.rpcClient.CallContext(ctx, &elSupportedEndpointsSlice, ExchangeCapabilities, supportedEngineEndpoints); err != nil {
return nil, handleRPCError(err)

View File

@@ -86,9 +86,7 @@ go_test(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/monitor:go_default_library",
@@ -101,7 +99,6 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],

View File

@@ -318,6 +318,7 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
}
beacon.BlobStorage.WarmCache()
beacon.DataColumnStorage.WarmCache()
log.Debugln("Starting Slashing DB")
if err := beacon.startSlasherDB(cliCtx, clearer); err != nil {
@@ -550,11 +551,6 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
return errors.Wrap(err, "could not ensure embedded genesis")
}
// Validate sync options when starting with an empty database
if err := b.validateSyncFlags(); err != nil {
return err
}
if b.CheckpointInitializer != nil {
log.Info("Checkpoint sync - Downloading origin state and block")
if err := b.CheckpointInitializer.Initialize(b.ctx, b.db); err != nil {
@@ -569,52 +565,6 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
log.WithField("address", depositAddress).Info("Deposit contract")
return nil
}
// validateSyncFlags ensures that when starting with an empty database,
// the user has explicitly chosen either genesis sync or checkpoint sync.
func (b *BeaconNode) validateSyncFlags() error {
// Check if database has an origin checkpoint (indicating it's not empty)
_, err := b.db.OriginCheckpointBlockRoot(b.ctx)
if err == nil {
// Database is not empty, validation is not needed
return nil
}
if !errors.Is(err, db.ErrNotFoundOriginBlockRoot) {
// Some other error occurred
return errors.Wrap(err, "could not check origin checkpoint block root")
}
// if genesis exists, also consider DB non-empty.
if gb, err := b.db.GenesisBlock(b.ctx); err == nil && gb != nil && !gb.IsNil() {
return nil
}
// Database is empty, check if user has provided required flags
syncFromGenesis := b.cliCtx.Bool(flags.SyncFromGenesis.Name)
hasCheckpointSync := b.CheckpointInitializer != nil
if !syncFromGenesis && !hasCheckpointSync {
return errors.New("when starting with an empty database, you must specify either:\n" +
" --sync-from-genesis (to sync from genesis)\n" +
" --checkpoint-sync-url <url> (to sync from a remote beacon node)\n" +
" --checkpoint-state <path> and --checkpoint-block <path> (to sync from local files)\n\n" +
"Checkpoint sync is recommended for faster syncing.")
}
// Check for conflicting sync options
if syncFromGenesis && hasCheckpointSync {
return errors.New("conflicting sync options: cannot use both --sync-from-genesis and checkpoint sync flags. " +
"Please choose either genesis sync or checkpoint sync, not both.")
}
if syncFromGenesis {
log.Warn("Syncing from genesis is enabled. This will take a very long time and is not recommended. " +
"Consider using checkpoint sync instead with --checkpoint-sync-url.")
}
return nil
}
func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context, clearer *dbClearer) error {
if !b.slasherEnabled {
return nil
@@ -991,6 +941,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
DataColumnReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,

View File
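The new validateSyncFlags above reduces to a small decision over three inputs. A condensed sketch of that decision, assuming the same semantics; the function and parameter names are made up for illustration.

package sketch

import "errors"

// syncChoiceValid mirrors the branches above: validation only applies to an
// empty database, conflicting flags fail, no flags fail, exactly one passes.
func syncChoiceValid(emptyDB, syncFromGenesis, hasCheckpoint bool) error {
    switch {
    case !emptyDB:
        return nil // non-empty DB: nothing to validate
    case syncFromGenesis && hasCheckpoint:
        return errors.New("conflicting sync options")
    case !syncFromGenesis && !hasCheckpoint:
        return errors.New("choose --sync-from-genesis or checkpoint sync")
    default:
        return nil
    }
}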

@@ -7,7 +7,6 @@ import (
"net/http"
"net/http/httptest"
"path/filepath"
"strings"
"testing"
"time"
@@ -15,19 +14,15 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/monitor"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/runtime"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/prometheus/client_golang/prometheus"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
@@ -54,7 +49,6 @@ func TestNodeClose_OK(t *testing.T) {
set.Bool("demo-config", true, "demo configuration")
set.String("deposit-contract", "0x0000000000000000000000000000000000000000", "deposit contract address")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
cmd.ValidatorMonitorIndicesFlag.Value = &cli.IntSlice{}
cmd.ValidatorMonitorIndicesFlag.Value.SetInt(1)
@@ -80,7 +74,6 @@ func TestNodeStart_Ok(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
@@ -111,7 +104,6 @@ func TestNodeStart_SyncChecker(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
@@ -151,7 +143,6 @@ func TestClearDB(t *testing.T) {
set.String("datadir", tmp, "node data directory")
set.Bool(cmd.ForceClearDB.Name, true, "force clear db")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
set.Bool("sync-from-genesis", true, "sync from genesis")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
context, cancel := newCliContextWithCancel(&app, set)
@@ -271,128 +262,3 @@ func TestCORS(t *testing.T) {
})
}
}
// TestValidateSyncFlags tests the validateSyncFlags function with real database instances
func TestValidateSyncFlags(t *testing.T) {
tests := []struct {
expectWarning bool
expectError bool
hasCheckpointInitializer bool
syncFromGenesis bool
dbHasOriginCheckpoint bool
expectedErrorContains string
name string
}{
{
name: "Database not empty - validation skipped",
dbHasOriginCheckpoint: true,
syncFromGenesis: false,
expectError: false,
},
{
name: "Empty DB, no sync flags - should fail",
dbHasOriginCheckpoint: false,
syncFromGenesis: false,
expectError: true,
expectedErrorContains: "when starting with an empty database, you must specify either",
},
{
name: "Empty DB, sync from genesis - should succeed with warning",
dbHasOriginCheckpoint: false,
syncFromGenesis: true,
expectError: false,
expectWarning: true,
},
{
name: "Empty DB, checkpoint sync - should succeed",
dbHasOriginCheckpoint: false,
hasCheckpointInitializer: true,
expectError: false,
},
{
name: "Empty DB, conflicting sync options - should fail",
dbHasOriginCheckpoint: false,
syncFromGenesis: true,
hasCheckpointInitializer: true,
expectError: true,
expectedErrorContains: "conflicting sync options",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Isolate Prometheus metrics per subtest to avoid duplicate registration across DB setups.
reg := prometheus.NewRegistry()
prometheus.DefaultRegisterer = reg
prometheus.DefaultGatherer = reg
ctx := context.Background()
// Set up real database for testing (empty to start).
beaconDB := testDB.SetupDB(t)
// Populate database if needed (simulate "non-empty" via origin checkpoint).
if tt.dbHasOriginCheckpoint {
err := beaconDB.SaveOriginCheckpointBlockRoot(ctx, [32]byte{0x01})
require.NoError(t, err)
}
// Set up CLI flags
flagSet := flag.NewFlagSet("test", flag.ContinueOnError)
flagSet.Bool(flags.SyncFromGenesis.Name, tt.syncFromGenesis, "")
app := cli.App{}
cliCtx := cli.NewContext(&app, flagSet, nil)
// Create BeaconNode with test setup
beaconNode := &BeaconNode{
ctx: ctx,
db: beaconDB,
cliCtx: cliCtx,
}
// Set CheckpointInitializer if needed
if tt.hasCheckpointInitializer {
beaconNode.CheckpointInitializer = &mockCheckpointInitializer{}
}
// Capture log output for warning detection
hook := logTest.NewGlobal()
defer hook.Reset()
// Call the function under test
err := beaconNode.validateSyncFlags()
// Validate results
if tt.expectError {
require.NotNil(t, err)
if tt.expectedErrorContains != "" {
require.ErrorContains(t, tt.expectedErrorContains, err)
}
} else {
require.NoError(t, err)
}
// Check for warning log if expected
if tt.expectWarning {
found := false
for _, entry := range hook.Entries {
if entry.Level.String() == "warning" &&
strings.Contains(entry.Message, "Syncing from genesis is enabled") {
found = true
break
}
}
require.Equal(t, true, found, "Expected warning log about genesis sync")
}
})
}
}
// mockCheckpointInitializer is a simple mock for testing
type mockCheckpointInitializer struct{}
func (m *mockCheckpointInitializer) Initialize(ctx context.Context, db db.Database) error {
return nil
}

View File

@@ -155,6 +155,7 @@ func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
log := log.WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
"agent": agentString(pid, s.Host()),
})
// Retrieve the ENR of the peer.

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
@@ -269,6 +270,7 @@ func TestCustodyGroupCountFromPeer(t *testing.T) {
service := &Service{
peers: peers,
metaData: tc.metadata,
host: testp2p.NewTestP2P(t).Host(),
}
// Retrieve the custody count from the remote peer.
@@ -329,6 +331,7 @@ func TestCustodyGroupCountFromPeerENR(t *testing.T) {
service := &Service{
peers: peers,
host: testp2p.NewTestP2P(t).Host(),
}
actual := service.custodyGroupCountFromPeerENR(pid)

View File

@@ -684,7 +684,7 @@ func (s *Service) filterPeer(node *enode.Node) bool {
peerData, multiAddrs, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
log.WithError(err).WithField("node", node.String()).Debug("Could not convert to peer data")
return false
}
@@ -851,7 +851,7 @@ func convertToMultiAddr(nodes []*enode.Node) []ma.Multiaddr {
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, []ma.Multiaddr, error) {
multiAddrs, err := retrieveMultiAddrsFromNode(node)
if err != nil {
return nil, nil, err
return nil, nil, errors.Wrap(err, "retrieve multiaddrs from node")
}
if len(multiAddrs) == 0 {

View File

@@ -969,7 +969,7 @@ func TestFindPeers_NodeDeduplication(t *testing.T) {
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
ctx := t.Context()
// Create LocalNodes and manipulate sequence numbers
localNode1 := createTestNodeWithID(t, "node1")
@@ -1193,8 +1193,6 @@ func TestFindPeers_received_bad_existing_node(t *testing.T) {
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
// Create LocalNode with same ID but different sequences
localNode1 := createTestNodeWithID(t, "testnode")
node1_seq1 := localNode1.Node() // Get current node
@@ -1213,7 +1211,7 @@ func TestFindPeers_received_bad_existing_node(t *testing.T) {
MaxPeers: 30,
},
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
peers: peers.NewStatus(t.Context(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
@@ -1243,7 +1241,7 @@ func TestFindPeers_received_bad_existing_node(t *testing.T) {
service.dv5Listener = testp2p.NewMockListener(localNode, iter)
// Run findPeers - node1_seq1 gets processed first, then callback marks peer bad, then node1_seq2 fails
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
ctxWithTimeout, cancel := context.WithTimeout(t.Context(), 1*time.Second)
defer cancel()
result, err := service.findPeers(ctxWithTimeout, 3)

View File
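The context swaps in the test file above adopt t.Context(), added to the testing package in Go 1.24. A minimal sketch of the pattern; the test name is illustrative.

package sketch

import (
    "context"
    "testing"
    "time"
)

func TestWithAutoCanceledContext(t *testing.T) {
    // t.Context() is canceled automatically just before the test ends,
    // so goroutines blocked on it cannot outlive the test.
    ctx, cancel := context.WithTimeout(t.Context(), time.Second)
    defer cancel()
    select {
    case <-ctx.Done():
    case <-time.After(10 * time.Millisecond):
        // normal path: do test work here
    }
}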

@@ -32,6 +32,9 @@ var gossipTopicMappings = map[string]func() proto.Message{
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
switch topic {
case BlockSubnetTopicFormat:
if epoch >= params.BeaconConfig().GloasForkEpoch {
return &ethpb.SignedBeaconBlockGloas{}
}
if epoch >= params.BeaconConfig().FuluForkEpoch {
return &ethpb.SignedBeaconBlockFulu{}
}
@@ -144,4 +147,7 @@ func init() {
// Specially handle Fulu objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockFulu{})] = BlockSubnetTopicFormat
// Specially handle Gloas objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockGloas{})] = BlockSubnetTopicFormat
}

View File
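The mapping above is ordered newest fork first, so the first satisfied epoch bound wins. A hedged sketch of that dispatch shape with illustrative type names and fork fields.

package sketch

type ForkEpochs struct {
    Gloas, Fulu, Electra uint64
}

// blockTypeForEpoch resolves the concrete block type for a topic at a given
// epoch; checking from the newest fork down is what makes the switch correct.
func blockTypeForEpoch(epoch uint64, f ForkEpochs) string {
    switch {
    case epoch >= f.Gloas:
        return "SignedBeaconBlockGloas"
    case epoch >= f.Fulu:
        return "SignedBeaconBlockFulu"
    default:
        return "SignedBeaconBlockElectra" // earlier forks elided
    }
}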

@@ -18,6 +18,7 @@ var (
"lodestar",
"js-libp2p",
"rust-libp2p",
"erigon/caplin",
}
p2pPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_peer_count",

View File

@@ -50,6 +50,7 @@ const (
// TestP2P represents a p2p implementation that can be used for testing.
type TestP2P struct {
mu sync.Mutex
t *testing.T
BHost host.Host
EnodeID enode.ID
@@ -63,6 +64,7 @@ type TestP2P struct {
custodyInfoMut sync.RWMutex // protects custodyGroupCount and earliestAvailableSlot
earliestAvailableSlot primitives.Slot
custodyGroupCount uint64
enr *enr.Record
}
// NewTestP2P initializes a new p2p test service.
@@ -103,6 +105,7 @@ func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
pubsub: ps,
joinedTopics: map[string]*pubsub.Topic{},
peers: peerStatuses,
enr: new(enr.Record),
}
}
@@ -241,6 +244,8 @@ func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler)
// JoinTopic will join PubSub topic, if not already joined.
func (p *TestP2P) JoinTopic(topic string, opts ...pubsub.TopicOpt) (*pubsub.Topic, error) {
p.mu.Lock()
defer p.mu.Unlock()
if _, ok := p.joinedTopics[topic]; !ok {
joinedTopic, err := p.pubsub.Join(topic, opts...)
if err != nil {
@@ -310,8 +315,8 @@ func (p *TestP2P) Host() host.Host {
}
// ENR returns the enr of the local peer.
func (*TestP2P) ENR() *enr.Record {
return new(enr.Record)
func (p *TestP2P) ENR() *enr.Record {
return p.enr
}
// NodeID returns the node id of the local peer.

View File

@@ -85,6 +85,11 @@ func InitializeDataMaps() {
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockGloas{Block: &ethpb.BeaconBlockGloas{Body: &ethpb.BeaconBlockBodyGloas{ExecutionPayload: &enginev1.ExecutionPayloadGloas{}}}},
)
},
}
// Reset our metadata map.
@@ -110,6 +115,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
},
}
// Reset our attestation map.
@@ -135,6 +143,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.Att, error) {
return &ethpb.SingleAttestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ethpb.Att, error) {
return &ethpb.SingleAttestation{}, nil
},
}
// Reset our aggregate attestation map.
@@ -160,6 +171,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
},
}
// Reset our aggregate attestation map.
@@ -185,6 +199,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.AttSlashing, error) {
return &ethpb.AttesterSlashingElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ethpb.AttSlashing, error) {
return &ethpb.AttesterSlashingElectra{}, nil
},
}
// Reset our light client optimistic update map.
@@ -204,6 +221,12 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
}
// Reset our light client finality update map.
@@ -223,5 +246,11 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
}
}

View File
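Each map in InitializeDataMaps keys a constructor by a 4-byte fork version, letting wire handlers build the right empty object for a fork digest. A reduced sketch of the pattern; the versions and types here are invented for illustration.

package sketch

type block interface{ Name() string }

type fulu struct{}

func (fulu) Name() string { return "fulu" }

type gloas struct{}

func (gloas) Name() string { return "gloas" }

var blockFactories = map[[4]byte]func() block{
    {5, 0, 0, 0}: func() block { return fulu{} },
    {6, 0, 0, 0}: func() block { return gloas{} },
}

// emptyBlockFor returns a fresh, empty block for a known fork version.
func emptyBlockFor(version [4]byte) (block, bool) {
    f, ok := blockFactories[version]
    if !ok {
        return nil, false // unknown fork version
    }
    return f(), true
}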

@@ -1230,6 +1230,7 @@ func (s *Service) prysmBeaconEndpoints(
methods: []string{http.MethodGet},
},
{
// Warning: no longer supported post-Fulu fork
template: "/prysm/v1/beacon/blobs",
name: namespace + ".PublishBlobs",
middleware: []middleware.Middleware{

View File

@@ -18,6 +18,7 @@ go_library(
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/altair:go_default_library",
@@ -60,7 +61,6 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -84,6 +84,7 @@ go_test(
"//api:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/signing:go_default_library",
@@ -124,7 +125,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",

View File

@@ -13,6 +13,7 @@ import (
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
corehelpers "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
@@ -32,7 +33,6 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"
@@ -334,26 +334,26 @@ func (s *Server) GetBlockAttestationsV2(w http.ResponseWriter, r *http.Request)
consensusAtts := blk.Block().Body().Attestations()
v := blk.Block().Version()
var attStructs []interface{}
attStructs := make([]interface{}, len(consensusAtts))
if v >= version.Electra {
for _, att := range consensusAtts {
for index, att := range consensusAtts {
a, ok := att.(*eth.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("unable to convert consensus attestations electra of type %T", att), http.StatusInternalServerError)
return
}
attStruct := structs.AttElectraFromConsensus(a)
attStructs = append(attStructs, attStruct)
attStructs[index] = attStruct
}
} else {
for _, att := range consensusAtts {
for index, att := range consensusAtts {
a, ok := att.(*eth.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("unable to convert consensus attestation of type %T", att), http.StatusInternalServerError)
return
}
attStruct := structs.AttFromConsensus(a)
attStructs = append(attStructs, attStruct)
attStructs[index] = attStruct
}
}
@@ -942,14 +942,13 @@ func decodePhase0JSON(body []byte) (*eth.GenericSignedBeaconBlock, error) {
// broadcastSidecarsIfSupported broadcasts blob sidecars when an equivocated block occurs.
func broadcastSidecarsIfSupported(ctx context.Context, s *Server, b interfaces.SignedBeaconBlock, gb *eth.GenericSignedBeaconBlock, versionHeader string) error {
switch versionHeader {
case version.String(version.Fulu):
return s.broadcastSeenBlockSidecars(ctx, b, gb.GetFulu().Blobs, gb.GetFulu().KzgProofs)
case version.String(version.Electra):
return s.broadcastSeenBlockSidecars(ctx, b, gb.GetElectra().Blobs, gb.GetElectra().KzgProofs)
case version.String(version.Deneb):
return s.broadcastSeenBlockSidecars(ctx, b, gb.GetDeneb().Blobs, gb.GetDeneb().KzgProofs)
default:
// other forks before Deneb do not support blob sidecars
// Forks after Fulu do not support blob sidecars; they use data columns instead, so there is no need to rebroadcast
return nil
}
}
@@ -1053,7 +1052,7 @@ func (s *Server) validateConsensus(ctx context.Context, b *eth.GenericSignedBeac
return nil
}
if err := s.validateBlobSidecars(blk, blobs, proofs); err != nil {
if err := s.validateBlobs(blk, blobs, proofs); err != nil {
return err
}
@@ -1067,23 +1066,41 @@ func (s *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error
return nil
}
func (s *Server) validateBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, proofs [][]byte) error {
func (s *Server) validateBlobs(blk interfaces.SignedBeaconBlock, blobs [][]byte, proofs [][]byte) error {
if blk.Version() < version.Deneb {
return nil
}
kzgs, err := blk.Block().Body().BlobKzgCommitments()
numberOfColumns := params.BeaconConfig().NumberOfColumns
commitments, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get blob kzg commitments")
}
if len(blobs) != len(proofs) || len(blobs) != len(kzgs) {
return errors.New("number of blobs, proofs, and commitments do not match")
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(blk.Block().Slot())
if len(blobs) > maxBlobsPerBlock {
return fmt.Errorf("number of blobs over max, %d > %d", len(blobs), maxBlobsPerBlock)
}
for i, blob := range blobs {
b := kzg4844.Blob(blob)
if err := kzg4844.VerifyBlobProof(&b, kzg4844.Commitment(kzgs[i]), kzg4844.Proof(proofs[i])); err != nil {
return errors.Wrap(err, "could not verify blob proof")
if blk.Version() >= version.Fulu {
// For Fulu blocks, proofs are cell proofs (blobs * numberOfColumns)
expectedProofsCount := uint64(len(blobs)) * numberOfColumns
if uint64(len(proofs)) != expectedProofsCount || len(blobs) != len(commitments) {
return fmt.Errorf("number of blobs (%d), cell proofs (%d), and commitments (%d) do not match (expected %d cell proofs)", len(blobs), len(proofs), len(commitments), expectedProofsCount)
}
// For Fulu blocks, proofs are cell proofs from the execution client's BlobsBundleV2
// Verify cell proofs directly without reconstructing data column sidecars
if err := kzg.VerifyCellKZGProofBatchFromBlobData(blobs, commitments, proofs, numberOfColumns); err != nil {
return errors.Wrap(err, "could not verify cell proofs")
}
} else {
// For pre-Fulu blocks, proofs are blob proofs (1:1 with blobs)
if len(blobs) != len(proofs) || len(blobs) != len(commitments) {
return errors.Errorf("number of blobs (%d), proofs (%d), and commitments (%d) do not match", len(blobs), len(proofs), len(commitments))
}
// Use batch verification for better performance
if err := kzg.VerifyBlobKZGProofBatch(blobs, commitments, proofs); err != nil {
return errors.Wrap(err, "could not verify blob proofs")
}
}
return nil
}
@@ -1627,6 +1644,8 @@ func (s *Server) broadcastSeenBlockSidecars(
if err != nil {
return err
}
// Broadcast blob sidecars with forkchoice checking
for _, sc := range scs {
r, err := sc.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {

View File
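The Fulu branch of validateBlobs assumes the flattened cell-proof layout from the execution client: each blob contributes NumberOfColumns proofs stored contiguously. A small sketch of the count and index arithmetic, with the standard 128-column value used only as an example.

package main

import "fmt"

// cellProofIndex returns where blob blobIdx's proof for column colIdx sits in
// the flattened proofs slice: all of blob 0's columns, then blob 1's, and so on.
func cellProofIndex(blobIdx, colIdx, numberOfColumns uint64) uint64 {
    return blobIdx*numberOfColumns + colIdx
}

func main() {
    const columns = 128 // standard PeerDAS column count, illustrative here
    const blobs = 3
    fmt.Println("expected proofs:", blobs*columns)                  // 384
    fmt.Println("blob 2, column 5:", cellProofIndex(2, 5, columns)) // 261
}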

@@ -17,19 +17,19 @@ func TestBlocks_NewSignedBeaconBlock_EquivocationFix(t *testing.T) {
var block structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &block)
require.NoError(t, err)
// Convert to generic format
genericBlock, err := block.ToGeneric()
require.NoError(t, err)
// Test the FIX: pass genericBlock.Block instead of genericBlock
// This is what our fix changed in handlers.go, lines 704 and 858
_, err = blocks.NewSignedBeaconBlock(genericBlock.Block)
require.NoError(t, err, "NewSignedBeaconBlock should work with genericBlock.Block")
// Test the BROKEN version: pass genericBlock directly (this should fail)
_, err = blocks.NewSignedBeaconBlock(genericBlock)
if err == nil {
t.Errorf("NewSignedBeaconBlock should fail with whole genericBlock but succeeded")
}
}
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
@@ -40,7 +41,6 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
@@ -910,6 +910,100 @@ func TestGetBlockAttestations(t *testing.T) {
})
})
})
t.Run("empty-attestations", func(t *testing.T) {
t.Run("v1", func(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{} // Explicitly set empty attestations
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Ensure data is empty array, not null
require.NotNil(t, resp.Data)
assert.Equal(t, 0, len(resp.Data))
})
t.Run("v2-pre-electra", func(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{} // Explicitly set empty attestations
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Ensure data is "[]", not null
require.NotNil(t, resp.Data)
assert.Equal(t, string(json.RawMessage("[]")), string(resp.Data))
})
t.Run("v2-electra", func(t *testing.T) {
eb := util.NewBeaconBlockFulu()
eb.Block.Body.Attestations = []*eth.AttestationElectra{} // Explicitly set empty attestations
esb, err := blocks.NewSignedBeaconBlock(eb)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: esb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Ensure data is "[]", not null
require.NotNil(t, resp.Data)
assert.Equal(t, string(json.RawMessage("[]")), string(resp.Data))
assert.Equal(t, "fulu", resp.Version)
})
})
}
func TestGetBlindedBlock(t *testing.T) {
@@ -4781,25 +4875,329 @@ func TestServer_broadcastBlobSidecars(t *testing.T) {
require.LogsContain(t, hook, "Broadcasted blob sidecar for already seen block")
}
func Test_validateBlobSidecars(t *testing.T) {
func Test_validateBlobs(t *testing.T) {
require.NoError(t, kzg.Start())
blob := util.GetRandBlob(123)
commitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
proof := GoKZG.KZGProof{128, 110, 116, 170, 56, 111, 126, 87, 229, 234, 211, 42, 110, 150, 129, 206, 73, 142, 167, 243, 90, 149, 240, 240, 236, 204, 143, 182, 229, 249, 81, 27, 153, 171, 83, 70, 144, 250, 42, 1, 188, 215, 71, 235, 30, 7, 175, 86}
// Generate proper commitment and proof for the blob
var kzgBlob kzg.Blob
copy(kzgBlob[:], blob[:])
commitment, err := kzg.BlobToKZGCommitment(&kzgBlob)
require.NoError(t, err)
proof, err := kzg.ComputeBlobKZGProof(&kzgBlob, commitment)
require.NoError(t, err)
blk := util.NewBeaconBlockDeneb()
blk.Block.Body.BlobKzgCommitments = [][]byte{commitment[:]}
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
require.NoError(t, s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
require.NoError(t, s.validateBlobs(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
require.ErrorContains(t, "number of blobs, proofs, and commitments do not match", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{}))
require.ErrorContains(t, "number of blobs (1), proofs (0), and commitments (1) do not match", s.validateBlobs(b, [][]byte{blob[:]}, [][]byte{}))
sk, err := bls.RandKey()
require.NoError(t, err)
blk.Block.Body.BlobKzgCommitments = [][]byte{sk.PublicKey().Marshal()}
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.ErrorContains(t, "could not verify blob proof: can't verify opening proof", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
require.ErrorContains(t, "could not verify blob proofs", s.validateBlobs(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
blobs := [][]byte{}
commitments := [][]byte{}
proofs := [][]byte{}
for i := 0; i < 10; i++ {
blobs = append(blobs, blob[:])
commitments = append(commitments, commitment[:])
proofs = append(proofs, proof[:])
}
t.Run("pre-Deneb block should return early", func(t *testing.T) {
// Create a pre-Deneb block (e.g., Capella)
blk := util.NewBeaconBlockCapella()
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should return nil for pre-Deneb blocks regardless of blobs
require.NoError(t, s.validateBlobs(b, [][]byte{}, [][]byte{}))
require.NoError(t, s.validateBlobs(b, blobs[:1], proofs[:1]))
})
t.Run("Deneb block with valid single blob", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Body.BlobKzgCommitments = [][]byte{commitment[:]}
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
require.NoError(t, s.validateBlobs(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
})
t.Run("Deneb block with max blobs (6)", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 100
testCfg.DeprecatedMaxBlobsPerBlock = 6
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 10 // Deneb slot
blk.Block.Body.BlobKzgCommitments = commitments[:6]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should pass with exactly 6 blobs
require.NoError(t, s.validateBlobs(b, blobs[:6], proofs[:6]))
})
t.Run("Deneb block exceeding max blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 100
testCfg.DeprecatedMaxBlobsPerBlock = 6
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 10 // Deneb slot
blk.Block.Body.BlobKzgCommitments = commitments[:7]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should fail with 7 blobs when max is 6
err = s.validateBlobs(b, blobs[:7], proofs[:7])
require.ErrorContains(t, "number of blobs over max, 7 > 6", err)
})
t.Run("Electra block with valid blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
// Set up Electra config with max 9 blobs
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockElectra()
blk.Block.Slot = 160 // Electra slot (epoch 5+)
blk.Block.Body.BlobKzgCommitments = commitments[:9]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should pass with 9 blobs in Electra
require.NoError(t, s.validateBlobs(b, blobs[:9], proofs[:9]))
})
t.Run("Electra block exceeding max blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
// Set up Electra config with max 9 blobs
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockElectra()
blk.Block.Slot = 160 // Electra slot
blk.Block.Body.BlobKzgCommitments = commitments[:10]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should fail with 10 blobs when max is 9
err = s.validateBlobs(b, blobs[:10], proofs[:10])
require.ErrorContains(t, "number of blobs over max, 10 > 9", err)
})
t.Run("Fulu block with valid cell proofs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.FuluForkEpoch = 10
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
testCfg.NumberOfColumns = 128 // Standard PeerDAS configuration
params.OverrideBeaconConfig(testCfg)
// Create Fulu block with proper cell proofs
blk := util.NewBeaconBlockFulu()
blk.Block.Slot = 320 // Epoch 10 (Fulu fork)
// Generate valid commitments and cell proofs for testing
blobCount := 2
commitments := make([][]byte, blobCount)
fuluBlobs := make([][]byte, blobCount)
var kzgBlobs []kzg.Blob
for i := 0; i < blobCount; i++ {
blob := util.GetRandBlob(int64(i))
fuluBlobs[i] = blob[:]
var kzgBlob kzg.Blob
copy(kzgBlob[:], blob[:])
kzgBlobs = append(kzgBlobs, kzgBlob)
// Generate commitment
commitment, err := kzg.BlobToKZGCommitment(&kzgBlob)
require.NoError(t, err)
commitments[i] = commitment[:]
}
blk.Block.Body.BlobKzgCommitments = commitments
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
// Generate cell proofs for the blobs (flattened format like execution client)
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellProofs := make([][]byte, uint64(blobCount)*numberOfColumns)
for blobIdx := 0; blobIdx < blobCount; blobIdx++ {
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlobs[blobIdx])
require.NoError(t, err)
for colIdx := uint64(0); colIdx < numberOfColumns; colIdx++ {
cellProofIdx := uint64(blobIdx)*numberOfColumns + colIdx
cellProofs[cellProofIdx] = cellsAndProofs.Proofs[colIdx][:]
}
}
s := &Server{}
// Should use cell batch verification for Fulu blocks
require.NoError(t, s.validateBlobs(b, fuluBlobs, cellProofs))
})
t.Run("Fulu block with invalid cell proof count", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.FuluForkEpoch = 10
testCfg.NumberOfColumns = 128
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockFulu()
blk.Block.Slot = 320 // Epoch 10 (Fulu fork)
// Create valid commitments but wrong number of cell proofs
blobCount := 2
commitments := make([][]byte, blobCount)
fuluBlobs := make([][]byte, blobCount)
for i := 0; i < blobCount; i++ {
blob := util.GetRandBlob(int64(i))
fuluBlobs[i] = blob[:]
var kzgBlob kzg.Blob
copy(kzgBlob[:], blob[:])
commitment, err := kzg.BlobToKZGCommitment(&kzgBlob)
require.NoError(t, err)
commitments[i] = commitment[:]
}
blk.Block.Body.BlobKzgCommitments = commitments
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
// Wrong number of cell proofs (should be blobCount * numberOfColumns)
wrongCellProofs := make([][]byte, 10) // Too few proofs
s := &Server{}
err = s.validateBlobs(b, fuluBlobs, wrongCellProofs)
require.ErrorContains(t, "do not match", err)
})
t.Run("Deneb block with invalid blob proof", func(t *testing.T) {
blob := util.GetRandBlob(123)
invalidProof := make([]byte, 48) // All zeros - invalid proof
sk, err := bls.RandKey()
require.NoError(t, err)
blk := util.NewBeaconBlockDeneb()
blk.Block.Body.BlobKzgCommitments = [][]byte{sk.PublicKey().Marshal()}
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
err = s.validateBlobs(b, [][]byte{blob[:]}, [][]byte{invalidProof})
require.ErrorContains(t, "could not verify blob proofs", err)
})
t.Run("empty blobs and proofs should pass", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Body.BlobKzgCommitments = [][]byte{}
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
require.NoError(t, s.validateBlobs(b, [][]byte{}, [][]byte{}))
})
t.Run("BlobSchedule with progressive increases (BPO)", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
// Set up config with BlobSchedule (BPO - Blob Parameter Only fork)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 100
testCfg.FuluForkEpoch = 200
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
// Define blob schedule with progressive increases
testCfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 0, MaxBlobsPerBlock: 3}, // Start with 3 blobs
{Epoch: 10, MaxBlobsPerBlock: 5}, // Increase to 5 at epoch 10
{Epoch: 20, MaxBlobsPerBlock: 7}, // Increase to 7 at epoch 20
{Epoch: 30, MaxBlobsPerBlock: 9}, // Increase to 9 at epoch 30
}
params.OverrideBeaconConfig(testCfg)
s := &Server{}
// Test epoch 0-9: max 3 blobs
t.Run("epoch 0-9: max 3 blobs", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 5 // Epoch 0
blk.Block.Body.BlobKzgCommitments = commitments[:3]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, s.validateBlobs(b, blobs[:3], proofs[:3]))
// Should fail with 4 blobs
blk.Block.Body.BlobKzgCommitments = commitments[:4]
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = s.validateBlobs(b, blobs[:4], proofs[:4])
require.ErrorContains(t, "number of blobs over max, 4 > 3", err)
})
// Test epoch 30+: max 9 blobs
t.Run("epoch 30+: max 9 blobs", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 960 // Epoch 30
blk.Block.Body.BlobKzgCommitments = commitments[:9]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, s.validateBlobs(b, blobs[:9], proofs[:9]))
// Should fail with 10 blobs
blk.Block.Body.BlobKzgCommitments = commitments[:10]
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = s.validateBlobs(b, blobs[:10], proofs[:10])
require.ErrorContains(t, "number of blobs over max, 10 > 9", err)
})
})
}
func TestGetPendingConsolidations(t *testing.T) {

View File
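The BPO subtests above depend on resolving the effective max from the schedule: the entry with the greatest activation epoch not exceeding the block's epoch applies. A hedged sketch of that lookup, assuming the schedule is sorted by ascending epoch; the type and function names are illustrative.

package main

import "fmt"

type blobScheduleEntry struct {
    Epoch            uint64
    MaxBlobsPerBlock int
}

// maxBlobsAt walks an ascending schedule and keeps the last entry whose
// activation epoch has been reached.
func maxBlobsAt(epoch uint64, schedule []blobScheduleEntry) int {
    max := 0
    for _, e := range schedule {
        if epoch >= e.Epoch {
            max = e.MaxBlobsPerBlock
        }
    }
    return max
}

func main() {
    schedule := []blobScheduleEntry{{0, 3}, {10, 5}, {20, 7}, {30, 9}}
    fmt.Println(maxBlobsAt(12, schedule)) // 5
    fmt.Println(maxBlobsAt(30, schedule)) // 9
}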

@@ -68,7 +68,8 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
Code: http.StatusInternalServerError,
}
}
st, err = coreblocks.ProcessAttesterSlashings(ctx, st, blk.Body().AttesterSlashings(), validators.SlashValidator)
exitInfo := validators.ExitInformation(st)
st, err = coreblocks.ProcessAttesterSlashings(ctx, st, blk.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get attester slashing rewards: " + err.Error(),
@@ -82,7 +83,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
Code: http.StatusInternalServerError,
}
}
st, err = coreblocks.ProcessProposerSlashings(ctx, st, blk.Body().ProposerSlashings(), validators.SlashValidator)
st, err = coreblocks.ProcessProposerSlashings(ctx, st, blk.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get proposer slashing rewards: " + err.Error(),

View File

@@ -1029,8 +1029,8 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v ", err), http.StatusInternalServerError)
return
}
// Advance state with empty transitions up to the requested epoch start slot for pre fulu state only. Fulu state utilizes proposer look ahead field.
if st.Slot() < epochStartSlot && st.Version() != version.Fulu {
// Notice that even for Fulu requests for the next epoch, we are only advancing the state to the start of the current epoch.
if st.Slot() < epochStartSlot {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get head root: %v ", err), http.StatusInternalServerError)

View File

@@ -2645,78 +2645,6 @@ func TestGetProposerDuties(t *testing.T) {
})
}
func TestGetProposerDuties_FuluState(t *testing.T) {
helpers.ClearCache()
// Create a Fulu state with slot 0 (before epoch 1 start slot which is 32)
fuluState, err := util.NewBeaconStateFulu()
require.NoError(t, err)
require.NoError(t, fuluState.SetSlot(0)) // Set to slot 0
// Create some validators for the test
depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
require.NoError(t, err)
validators := make([]*ethpbalpha.Validator, len(deposits))
for i, deposit := range deposits {
validators[i] = &ethpbalpha.Validator{
PublicKey: deposit.Data.PublicKey,
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawalCredentials: make([]byte, 32),
}
}
require.NoError(t, fuluState.SetValidators(validators))
// Set up block roots
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
roots := make([][]byte, fieldparams.BlockRootsLength)
roots[0] = genesisRoot[:]
require.NoError(t, fuluState.SetBlockRoots(roots))
chainSlot := primitives.Slot(0)
chain := &mockChain.ChainService{
State: fuluState, Root: genesisRoot[:], Slot: &chainSlot,
}
db := dbutil.SetupDB(t)
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), genesisRoot))
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{0: fuluState}},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
BeaconDB: db,
}
// Request epoch 1 duties, which should require advancing from slot 0 to slot 32
// But for Fulu state, this advancement should be skipped
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
request.SetPathValue("epoch", "1")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetProposerDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
// Verify the state was not advanced - it should still be at slot 0
// This is the key assertion for the regression test
assert.Equal(t, primitives.Slot(0), fuluState.Slot(), "Fulu state should not have been advanced")
resp := &structs.GetProposerDutiesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Should still return proposer duties despite not advancing the state
assert.Equal(t, true, len(resp.Data) > 0, "Should return proposer duties even without state advancement")
}
func TestGetSyncCommitteeDuties(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)

View File

@@ -186,6 +186,7 @@ func (s *Server) GetChainHead(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, response)
}
// Warning: blob sidecars are no longer supported after the Fulu fork
func (s *Server) PublishBlobs(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.PublishBlobs")
defer span.End()
@@ -215,6 +216,15 @@ func (s *Server) PublishBlobs(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not decode blob sidecar: "+err.Error(), http.StatusBadRequest)
return
}
scEpoch := slots.ToEpoch(sc.SignedBlockHeader.Header.Slot)
if scEpoch < params.BeaconConfig().DenebForkEpoch {
httputil.HandleError(w, "Blob sidecars not supported for pre deneb", http.StatusBadRequest)
return
}
if scEpoch > params.BeaconConfig().FuluForkEpoch {
httputil.HandleError(w, "Blob sidecars not supported for post fulu blobs", http.StatusBadRequest)
return
}
readOnlySc, err := blocks.NewROBlobWithRoot(sc, bytesutil.ToBytes32(root))
if err != nil {

View File
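The new gate converts the sidecar's slot to an epoch and accepts only the Deneb-through-Fulu window; note that the Fulu fork epoch itself still passes, since only strictly later epochs are rejected. A worked sketch assuming mainnet's 32 slots per epoch.

package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value; illustrative here

// blobSidecarAccepted mirrors the handler's bounds: at or after Deneb,
// and no later than the Fulu fork epoch itself.
func blobSidecarAccepted(slot, denebEpoch, fuluEpoch uint64) bool {
    epoch := slot / slotsPerEpoch
    return epoch >= denebEpoch && epoch <= fuluEpoch
}

func main() {
    // Mirrors TestPublishBlobs_PostFulu: Deneb and Fulu both at epoch 0,
    // so slot 33 (epoch 1) is rejected while slot 31 (epoch 0) passes.
    fmt.Println(blobSidecarAccepted(31, 0, 0)) // true
    fmt.Println(blobSidecarAccepted(33, 0, 0)) // false
}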

@@ -1017,6 +1017,11 @@ func TestPublishBlobs_BadBlockRoot(t *testing.T) {
}
func TestPublishBlobs(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 0 // Set Deneb fork to epoch 0 so slot 0 is valid
params.OverrideBeaconConfig(cfg)
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
@@ -1032,3 +1037,94 @@ func TestPublishBlobs(t *testing.T) {
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 1)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), true)
}
func TestPublishBlobs_PreDeneb(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 10 // Set Deneb fork to epoch 10, so slot 0 is pre-Deneb
params.OverrideBeaconConfig(cfg)
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequest)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Blob sidecars not supported for pre deneb", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_PostFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 0
cfg.FuluForkEpoch = 0 // Set Fulu fork to epoch 0
params.OverrideBeaconConfig(cfg)
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
// Create a custom request with a slot in epoch 1 (which is > FuluForkEpoch of 0)
postFuluRequest := fmt.Sprintf(`{
"block_root" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"blob_sidecars" : {
"sidecars" : [
{
"blob" : "%s",
"index" : "0",
"kzg_commitment" : "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"kzg_commitment_inclusion_proof" : [
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000"
],
"kzg_proof" : "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"signed_block_header" : {
"message" : {
"body_root" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"parent_root" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"proposer_index" : "0",
"slot" : "33",
"state_root" : "0x0000000000000000000000000000000000000000000000000000000000000000"
},
"signature" : "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
}
}
]
}
}`, rpctesting.Blob)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(postFuluRequest)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Blob sidecars not supported for post fulu blobs", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}

View File

@@ -19,7 +19,7 @@ func TestServer_GetBeaconConfig(t *testing.T) {
conf := params.BeaconConfig()
confType := reflect.TypeOf(conf).Elem()
numFields := confType.NumField()
// Count only exported fields, as unexported fields are not included in the config
exportedFields := 0
for i := 0; i < numFields; i++ {

View File

@@ -1,3 +1,5 @@
# gazelle:ignore
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
@@ -37,6 +39,7 @@ go_library(
"//api/client/builder:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
@@ -47,6 +50,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -63,8 +67,8 @@ go_library(
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -81,7 +85,7 @@ go_library(
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//genesis:go_default_library",
"//genesis:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
@@ -181,7 +185,6 @@ common_deps = [
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
]
# gazelle:ignore
go_test(
name = "go_default_test",
timeout = "moderate",

View File

@@ -57,6 +57,12 @@ func (vs *Server) constructGenericBeaconBlock(
return nil, fmt.Errorf("expected *BlobsBundleV2, got %T", blobsBundler)
}
return vs.constructFuluBlock(blockProto, isBlinded, bidStr, bundle), nil
case version.Gloas:
bundle, ok := blobsBundler.(*enginev1.BlobsBundleV2)
if blobsBundler != nil && !ok {
return nil, fmt.Errorf("expected *BlobsBundleV2, got %T", blobsBundler)
}
return vs.constructGloasBlock(blockProto, isBlinded, bidStr, bundle), nil
default:
return nil, fmt.Errorf("unknown block version: %d", sBlk.Version())
}
@@ -120,3 +126,12 @@ func (vs *Server) constructFuluBlock(blockProto proto.Message, isBlinded bool, p
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Fulu{Fulu: fuluContents}, IsBlinded: false, PayloadValue: payloadValue}
}
func (vs *Server) constructGloasBlock(blockProto proto.Message, isBlinded bool, payloadValue string, bundle *enginev1.BlobsBundleV2) *ethpb.GenericBeaconBlock {
gloasContents := &ethpb.BeaconBlockContentsGloas{Block: blockProto.(*ethpb.BeaconBlockGloas)}
if bundle != nil {
gloasContents.KzgProofs = bundle.Proofs
gloasContents.Blobs = bundle.Blobs
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Gloas{Gloas: gloasContents}, IsBlinded: false, PayloadValue: payloadValue}
}

View File
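The Gloas case mirrors the guarded assertion already used for Fulu: a nil bundler is tolerated, while a non-nil value of any other concrete type errors out before block construction. A self-contained sketch with an invented bundle type.

package main

import "fmt"

type blobsBundleV2 struct {
    Blobs, Proofs [][]byte
}

// asBundleV2 tolerates a nil interface (no bundle supplied) while still
// rejecting a wrong concrete type early.
func asBundleV2(v interface{}) (*blobsBundleV2, error) {
    b, ok := v.(*blobsBundleV2)
    if v != nil && !ok {
        return nil, fmt.Errorf("expected *blobsBundleV2, got %T", v)
    }
    return b, nil
}

func main() {
    if b, err := asBundleV2(nil); err == nil {
        fmt.Println("nil bundler ok:", b == nil) // true
    }
    if _, err := asBundleV2("wrong"); err != nil {
        fmt.Println(err) // expected *blobsBundleV2, got string
    }
}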

@@ -29,12 +29,19 @@ func TestConstructGenericBeaconBlock(t *testing.T) {
require.NoError(t, err)
r1, err := eb.Block.HashTreeRoot()
require.NoError(t, err)
result, err := vs.constructGenericBeaconBlock(b, nil, primitives.ZeroWei())
bundle := &enginev1.BlobsBundleV2{
KzgCommitments: [][]byte{{1, 2, 3}},
Proofs: [][]byte{{4, 5, 6}},
Blobs: [][]byte{{7, 8, 9}},
}
result, err := vs.constructGenericBeaconBlock(b, bundle, primitives.ZeroWei())
require.NoError(t, err)
r2, err := result.GetFulu().Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, r1, r2)
require.Equal(t, result.IsBlinded, false)
require.DeepEqual(t, bundle.Blobs, result.GetFulu().GetBlobs())
require.DeepEqual(t, bundle.Proofs, result.GetFulu().GetKzgProofs())
})
// Test for Electra version

View File

@@ -15,9 +15,11 @@ import (
blockfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/block"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -58,28 +60,31 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
if err != nil {
log.WithError(err).Error("Could not convert slot to time")
}
log.WithFields(logrus.Fields{
"slot": req.Slot,
"sinceSlotStartTime": time.Since(t),
}).Info("Begin building block")
log := log.WithField("slot", req.Slot)
log.WithField("sinceSlotStartTime", time.Since(t)).Info("Begin building block")
// A syncing validator should not produce a block.
if vs.SyncChecker.Syncing() {
log.Error("Fail to build block: node is syncing")
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
// An optimistic validator MUST NOT produce a block (i.e., sign across the DOMAIN_BEACON_PROPOSER domain).
if slots.ToEpoch(req.Slot) >= params.BeaconConfig().BellatrixForkEpoch {
if err := vs.optimisticStatus(ctx); err != nil {
log.WithError(err).Error("Fail to build block: node is optimistic")
return nil, status.Errorf(codes.Unavailable, "Validator is not ready to propose: %v", err)
}
}
head, parentRoot, err := vs.getParentState(ctx, req.Slot)
if err != nil {
log.WithError(err).Error("Fail to build block: could not get parent state")
return nil, err
}
sBlk, err := getEmptyBlock(req.Slot)
if err != nil {
log.WithError(err).Error("Fail to build block: could not get empty block")
return nil, status.Errorf(codes.Internal, "Could not prepare block: %v", err)
}
// Set slot, graffiti, randao reveal, and parent root.
@@ -101,8 +106,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
}
resp, err := vs.BuildBlockParallel(ctx, sBlk, head, req.SkipMevBoost, builderBoostFactor)
log := log.WithFields(logrus.Fields{
"slot": req.Slot,
log = log.WithFields(logrus.Fields{
"sinceSlotStartTime": time.Since(t),
"validator": sBlk.Block().ProposerIndex(),
})
@@ -275,6 +279,11 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
//
// ProposeBeaconBlock handles the proposal of beacon blocks.
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
var (
blobSidecars []*ethpb.BlobSidecar
dataColumnSidecars []*ethpb.DataColumnSidecar
)
ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock")
defer span.End()
@@ -291,7 +300,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
return nil, status.Errorf(codes.Internal, "Could not hash tree root: %v", err)
}
// For post-Fulu blinded blocks, submit to relay and return early
// For post-Fulu blinded blocks (including Gloas), submit to relay and return early
if block.IsBlinded() && slots.ToEpoch(block.Block().Slot()) >= params.BeaconConfig().FuluForkEpoch {
err := vs.BlockBuilder.SubmitBlindedBlockPostFulu(ctx, block)
if err != nil {
@@ -300,11 +309,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
var sidecars []*ethpb.BlobSidecar
if block.IsBlinded() {
block, sidecars, err = vs.handleBlindedBlock(ctx, block)
block, blobSidecars, err = vs.handleBlindedBlock(ctx, block)
} else if block.Version() >= version.Deneb {
sidecars, err = vs.blobSidecarsFromUnblindedBlock(block, req)
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(block, req)
}
if err != nil {
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
@@ -323,10 +331,9 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
errChan <- nil
}()
if err := vs.broadcastAndReceiveBlobs(ctx, sidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
if err := vs.broadcastAndReceiveSidecars(ctx, block, root, blobSidecars, dataColumnSidecars); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive sidecars: %v", err)
}
wg.Wait()
if err := <-errChan; err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive block: %v", err)
@@ -335,12 +342,35 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
// broadcastAndReceiveSidecars broadcasts and receives blob or data column sidecars, depending on the block version.
func (vs *Server) broadcastAndReceiveSidecars(
ctx context.Context,
block interfaces.SignedBeaconBlock,
root [fieldparams.RootLength]byte,
blobSidecars []*ethpb.BlobSidecar,
dataColumnSideCars []*ethpb.DataColumnSidecar,
) error {
if block.Version() >= version.Fulu {
if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSideCars, root); err != nil {
return errors.Wrap(err, "broadcast and receive data columns")
}
return nil
}
if err := vs.broadcastAndReceiveBlobs(ctx, blobSidecars, root); err != nil {
return errors.Wrap(err, "broadcast and receive blobs")
}
return nil
}
// handleBlindedBlock processes blinded beacon blocks (pre-Fulu only).
// Post-Fulu blinded blocks are handled directly in ProposeBeaconBlock.
// Post-Fulu blinded blocks (including Gloas) are handled directly in ProposeBeaconBlock.
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) {
if block.Version() < version.Bellatrix {
return nil, nil, errors.New("pre-Bellatrix blinded block")
}
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return nil, nil, errors.New("unconfigured block builder")
}
@@ -367,16 +397,34 @@ func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.Signe
return copiedBlock, sidecars, nil
}
func (vs *Server) blobSidecarsFromUnblindedBlock(block interfaces.SignedBeaconBlock, req *ethpb.GenericSignedBeaconBlock) ([]*ethpb.BlobSidecar, error) {
func (vs *Server) handleUnblindedBlock(
block interfaces.SignedBeaconBlock,
req *ethpb.GenericSignedBeaconBlock,
) ([]*ethpb.BlobSidecar, []*ethpb.DataColumnSidecar, error) {
rawBlobs, proofs, err := blobsAndProofs(req)
if err != nil {
return nil, err
return nil, nil, err
}
return BuildBlobSidecars(block, rawBlobs, proofs)
if block.Version() >= version.Fulu {
dataColumnSideCars, err := peerdas.ConstructDataColumnSidecars(block, rawBlobs, proofs)
if err != nil {
return nil, nil, errors.Wrap(err, "construct data column sidecars")
}
return nil, dataColumnSideCars, nil
}
blobSidecars, err := BuildBlobSidecars(block, rawBlobs, proofs)
if err != nil {
return nil, nil, errors.Wrap(err, "build blob sidecars")
}
return blobSidecars, nil, nil
}
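Conceptually, ConstructDataColumnSidecars turns per-blob cell data into one sidecar per column, each carrying that column's cell from every blob. A minimal sketch of that regrouping on plain byte matrices; shapes and names are illustrative only, not Prysm's types:

package main

import "fmt"

// regroupColumns turns cells[blob][column] into columns[column][blob],
// mirroring how one sidecar per column carries that column's cell from every blob.
func regroupColumns(cells [][][]byte, numberOfColumns int) [][][]byte {
    columns := make([][][]byte, numberOfColumns)
    for col := 0; col < numberOfColumns; col++ {
        columns[col] = make([][]byte, 0, len(cells))
        for blob := range cells {
            columns[col] = append(columns[col], cells[blob][col])
        }
    }
    return columns
}

func main() {
    // Two blobs, four columns of one-byte "cells" for illustration.
    cells := [][][]byte{
        {{0x00}, {0x01}, {0x02}, {0x03}},
        {{0x10}, {0x11}, {0x12}, {0x13}},
    }
    columns := regroupColumns(cells, 4)
    fmt.Printf("column 2 holds %x and %x\n", columns[2][0], columns[2][1]) // 02 and 12
}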
// broadcastReceiveBlock broadcasts a block and handles its reception.
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [32]byte) error {
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [fieldparams.RootLength]byte) error {
protoBlock, err := block.Proto()
if err != nil {
return errors.Wrap(err, "protobuf conversion failed")
@@ -392,18 +440,14 @@ func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.Si
}
// broadcastAndReceiveBlobs handles the broadcasting and reception of blob sidecars.
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [32]byte) error {
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [fieldparams.RootLength]byte) error {
eg, eCtx := errgroup.WithContext(ctx)
for i, sc := range sidecars {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
subIdx := i
sCar := sc
for subIdx, sc := range sidecars {
eg.Go(func() error {
if err := vs.P2P.BroadcastBlob(eCtx, uint64(subIdx), sCar); err != nil {
if err := vs.P2P.BroadcastBlob(eCtx, uint64(subIdx), sc); err != nil {
return errors.Wrap(err, "broadcast blob failed")
}
readOnlySc, err := blocks.NewROBlobWithRoot(sCar, root)
readOnlySc, err := blocks.NewROBlobWithRoot(sc, root)
if err != nil {
return errors.Wrap(err, "ROBlob creation failed")
}
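The dropped per-iteration copies rely on modern Go loop-variable semantics (Go 1.22 and later), where subIdx and sc are fresh bindings on every iteration, so goroutine closures no longer need manual copies. A minimal self-contained sketch of the pattern, not Prysm code:

package main

import (
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    sidecars := []string{"sidecar-0", "sidecar-1", "sidecar-2"}
    var eg errgroup.Group
    for i, sc := range sidecars {
        // With Go 1.22+ semantics, i and sc are new variables each iteration,
        // so the closure sees the values from its own iteration without copies.
        eg.Go(func() error {
            fmt.Println(i, sc)
            return nil
        })
    }
    if err := eg.Wait(); err != nil {
        fmt.Println(err)
    }
}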
@@ -421,6 +465,53 @@ func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethp
return eg.Wait()
}
// broadcastAndReceiveDataColumns handles the broadcasting and reception of data column sidecars.
func (vs *Server) broadcastAndReceiveDataColumns(
ctx context.Context,
sidecars []*ethpb.DataColumnSidecar,
root [fieldparams.RootLength]byte,
) error {
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
eg, _ := errgroup.WithContext(ctx)
for _, sidecar := range sidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(sidecar, root)
if err != nil {
return errors.Wrap(err, "new read-only data column with root")
}
// We build this block ourselves, so we can upgrade the read-only data column sidecar into a verified one.
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
eg.Go(func() error {
// Compute the subnet index based on the column index.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
if err := vs.P2P.BroadcastDataColumnSidecar(root, subnet, sidecar); err != nil {
return errors.Wrap(err, "broadcast data column")
}
return nil
})
}
if err := eg.Wait(); err != nil {
return errors.Wrap(err, "wait for data columns to be broadcasted")
}
if err := vs.DataColumnReceiver.ReceiveDataColumns(verifiedRODataColumns); err != nil {
return errors.Wrap(err, "receive data column")
}
for _, verifiedRODataColumn := range verifiedRODataColumns {
vs.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.DataColumnSidecarReceived,
Data: &operation.DataColumnSidecarReceivedData{DataColumn: &verifiedRODataColumn}, // #nosec G601
})
}
return nil
}
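ComputeSubnetForDataColumnSidecar presumably maps a column index onto a gossip subnet by taking it modulo the subnet count, as in the PeerDAS spec. A self-contained sketch; the subnet count below is an assumed value, the real one comes from the beacon config:

package main

import "fmt"

// Assumed for illustration only: subnet = column_index % DATA_COLUMN_SIDECAR_SUBNET_COUNT.
const assumedDataColumnSidecarSubnetCount = 128

func computeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
    return columnIndex % assumedDataColumnSidecarSubnetCount
}

func main() {
    for _, idx := range []uint64{0, 5, 127, 128, 300} {
        fmt.Printf("column %d -> subnet %d\n", idx, computeSubnetForDataColumnSidecar(idx))
    }
}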
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
@@ -541,6 +632,9 @@ func blobsAndProofs(req *ethpb.GenericSignedBeaconBlock) ([][]byte, [][]byte, er
case req.GetFulu() != nil:
dbBlockContents := req.GetFulu()
return dbBlockContents.Blobs, dbBlockContents.KzgProofs, nil
case req.GetGloas() != nil:
dbBlockContents := req.GetGloas()
return dbBlockContents.Blobs, dbBlockContents.KzgProofs, nil
default:
return nil, nil, errors.Errorf("unknown request type provided: %T", req)
}

View File

@@ -1,8 +1,10 @@
package validator
import (
"bytes"
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -15,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
)
func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock) {
@@ -51,21 +54,28 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
if vs.SyncCommitteePool == nil {
return nil, errors.New("sync committee pool is nil")
}
// Contributions have to match the input root
contributions, err := vs.SyncCommitteePool.SyncCommitteeContributions(slot)
poolContributions, err := vs.SyncCommitteePool.SyncCommitteeContributions(slot)
if err != nil {
return nil, err
}
proposerContributions := proposerSyncContributions(contributions).filterByBlockRoot(root)
// Contributions have to match the input root
proposerContributions := proposerSyncContributions(poolContributions).filterByBlockRoot(root)
// Each sync subcommittee is 128 bits and the sync committee is 512 bits for mainnet.
aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions)
if err != nil {
return nil, errors.Wrap(err, "could not get aggregated sync committee messages")
}
proposerContributions = append(proposerContributions, aggregatedContributions...)
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
var bitsHolder [][]byte
for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
for i := uint64(0); i < subcommitteeCount; i++ {
bitsHolder = append(bitsHolder, ethpb.NewSyncCommitteeAggregationBits())
}
sigsHolder := make([]bls.Signature, 0, params.BeaconConfig().SyncCommitteeSize/params.BeaconConfig().SyncCommitteeSubnetCount)
sigsHolder := make([]bls.Signature, 0, params.BeaconConfig().SyncCommitteeSize/subcommitteeCount)
for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
for i := uint64(0); i < subcommitteeCount; i++ {
cs := proposerContributions.filterBySubIndex(i)
aggregates, err := synccontribution.Aggregate(cs)
if err != nil {
@@ -107,3 +117,107 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
SyncCommitteeSignature: syncSigBytes[:],
}, nil
}
func (vs *Server) aggregatedSyncCommitteeMessages(
ctx context.Context,
slot primitives.Slot,
root [32]byte,
poolContributions []*ethpb.SyncCommitteeContribution,
) ([]*ethpb.SyncCommitteeContribution, error) {
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / subcommitteeCount
sigsPerSubcommittee := make([][][]byte, subcommitteeCount)
bitsPerSubcommittee := make([]bitfield.Bitfield, subcommitteeCount)
for i := uint64(0); i < subcommitteeCount; i++ {
sigsPerSubcommittee[i] = make([][]byte, 0, subcommitteeSize)
bitsPerSubcommittee[i] = ethpb.NewSyncCommitteeAggregationBits()
}
// Get committee position(s) for each message's validator index.
scMessages, err := vs.SyncCommitteePool.SyncCommitteeMessages(slot)
if err != nil {
return nil, errors.Wrap(err, "could not get sync committee messages")
}
messageIndices := make([]primitives.ValidatorIndex, 0, len(scMessages))
messageSigs := make([][]byte, 0, len(scMessages))
for _, msg := range scMessages {
if bytes.Equal(root[:], msg.BlockRoot) {
messageIndices = append(messageIndices, msg.ValidatorIndex)
messageSigs = append(messageSigs, msg.Signature)
}
}
st, err := vs.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}
positions, err := helpers.CurrentPeriodPositions(st, messageIndices)
if err != nil {
return nil, errors.Wrap(err, "could not get sync committee positions")
}
// Based on committee position(s), set the appropriate subcommittee bit and signature.
for i, ci := range positions {
for _, index := range ci {
k := uint64(index)
subnetIndex := k / subcommitteeSize
indexMod := k % subcommitteeSize
// Existing aggregated contributions from the pool intersecting with aggregates
// created from single sync committee messages can result in bit intersections
// that fail to produce the best possible final aggregate. Ignoring bits that are
// already set in pool contributions makes intersections impossible.
intersects := false
for _, poolContrib := range poolContributions {
if poolContrib.SubcommitteeIndex == subnetIndex && poolContrib.AggregationBits.BitAt(indexMod) {
intersects = true
}
}
if !intersects && !bitsPerSubcommittee[subnetIndex].BitAt(indexMod) {
bitsPerSubcommittee[subnetIndex].SetBitAt(indexMod, true)
sigsPerSubcommittee[subnetIndex] = append(sigsPerSubcommittee[subnetIndex], messageSigs[i])
}
}
}
// Aggregate.
result := make([]*ethpb.SyncCommitteeContribution, 0, subcommitteeCount)
for i := uint64(0); i < subcommitteeCount; i++ {
aggregatedSig := make([]byte, 96)
aggregatedSig[0] = 0xC0
if len(sigsPerSubcommittee[i]) != 0 {
contrib, err := aggregateSyncSubcommitteeMessages(slot, root, i, bitsPerSubcommittee[i], sigsPerSubcommittee[i])
if err != nil {
// Skip aggregating this subcommittee
log.WithError(err).Errorf("Could not aggregate sync messages for subcommittee %d", i)
continue
}
result = append(result, contrib)
}
}
return result, nil
}
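The division and modulo above translate a position in the full sync committee into a subcommittee index and a bit within it. A worked sketch using mainnet-style sizes (512 members, 4 subnets, hence 128 per subcommittee); the constants are assumptions for illustration:

package main

import "fmt"

func main() {
    const (
        syncCommitteeSize        = uint64(512) // assumed mainnet value
        syncCommitteeSubnetCount = uint64(4)   // assumed mainnet value
    )
    subcommitteeSize := syncCommitteeSize / syncCommitteeSubnetCount // 128

    // A committee position of 130 lands in subcommittee 1, bit 2.
    position := uint64(130)
    subnetIndex := position / subcommitteeSize
    indexMod := position % subcommitteeSize
    fmt.Printf("position %d -> subcommittee %d, bit %d\n", position, subnetIndex, indexMod)
}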
func aggregateSyncSubcommitteeMessages(
slot primitives.Slot,
root [32]byte,
subcommitteeIndex uint64,
bits bitfield.Bitfield,
sigs [][]byte,
) (*ethpb.SyncCommitteeContribution, error) {
var err error
uncompressedSigs := make([]bls.Signature, len(sigs))
for i, sig := range sigs {
uncompressedSigs[i], err = bls.SignatureFromBytesNoValidation(sig)
if err != nil {
return nil, errors.Wrap(err, "could not create signature from bytes")
}
}
return &ethpb.SyncCommitteeContribution{
Slot: slot,
BlockRoot: root[:],
SubcommitteeIndex: subcommitteeIndex,
AggregationBits: bits.Bytes(),
Signature: bls.AggregateSignatures(uncompressedSigs).Marshal(),
}, nil
}

View File

@@ -3,13 +3,67 @@ package validator
import (
"testing"
chainmock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/synccommittee"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/prysmaticlabs/go-bitfield"
)
func TestProposer_GetSyncAggregate_OK(t *testing.T) {
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
proposerServer := &Server{
HeadFetcher: &chainmock.ChainService{State: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
conts := []*ethpb.SyncCommitteeContribution{
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
}
for _, cont := range conts {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
}
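The expected bitvectors follow from OR-ing the saved contributions per subcommittee: for slot 1, 0b0001 | 0b1001 | 0b1110 = 0b1111 = 0xf, and for slot 2 the single 0b10101010 contribution yields 0xaa. A quick check of the slot 1 case:

package main

import "fmt"

func main() {
    contribs := []byte{0b0001, 0b1001, 0b1110}
    var agg byte
    for _, c := range contribs {
        agg |= c
    }
    fmt.Printf("aggregated bits: %04b (0x%02x)\n", agg, agg) // 1111 (0x0f)
}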
func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
@@ -25,3 +79,123 @@ func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
}
require.DeepEqual(t, want, agg)
}
func TestProposer_GetSyncAggregate_IncludesSyncCommitteeMessages(t *testing.T) {
// TEST SETUP
// - validator 0 is selected twice in subcommittee 0 (indexes [0,1])
// - validator 1 is selected once in subcommittee 0 (index 2)
// - validator 2 is selected twice in subcommittee 1 (indexes [0,1])
// - validator 3 is selected once in subcommittee 1 (index 2)
// - sync committee aggregates in the pool have index 3 set for both subcommittees
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
helpers.ClearCache()
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
vals := make([]*ethpb.Validator, 4)
vals[0] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf0}, 48)}
vals[1] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf1}, 48)}
vals[2] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf2}, 48)}
vals[3] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf3}, 48)}
require.NoError(t, st.SetValidators(vals))
sc := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
sc.Pubkeys[0] = vals[0].PublicKey
sc.Pubkeys[1] = vals[0].PublicKey
sc.Pubkeys[2] = vals[1].PublicKey
sc.Pubkeys[subcommitteeSize] = vals[2].PublicKey
sc.Pubkeys[subcommitteeSize+1] = vals[2].PublicKey
sc.Pubkeys[subcommitteeSize+2] = vals[3].PublicKey
require.NoError(t, st.SetCurrentSyncCommittee(sc))
proposerServer := &Server{
HeadFetcher: &chainmock.ChainService{State: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
msgs := []*ethpb.SyncCommitteeMessage{
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 0, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 1, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 2, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 3, Signature: bls.NewAggregateSignature().Marshal()},
}
for _, msg := range msgs {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg))
}
subcommittee0AggBits := ethpb.NewSyncCommitteeAggregationBits()
subcommittee0AggBits.SetBitAt(3, true)
subcommittee1AggBits := ethpb.NewSyncCommitteeAggregationBits()
subcommittee1AggBits.SetBitAt(3, true)
conts := []*ethpb.SyncCommitteeContribution{
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: subcommittee0AggBits, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: subcommittee1AggBits, BlockRoot: r[:]},
}
for _, cont := range conts {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
// The final sync aggregates must have indexes [0,1,2,3] set for both subcommittees
sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r)
require.NoError(t, err)
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(0))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(1))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(2))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(3))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize+1))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize+2))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize+3))
}
func Test_aggregatedSyncCommitteeMessages_NoIntersectionWithPoolContributions(t *testing.T) {
helpers.ClearCache()
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
vals := make([]*ethpb.Validator, 4)
vals[0] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf0}, 48)}
vals[1] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf1}, 48)}
vals[2] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf2}, 48)}
vals[3] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf3}, 48)}
require.NoError(t, st.SetValidators(vals))
sc := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
sc.Pubkeys[0] = vals[0].PublicKey
sc.Pubkeys[1] = vals[1].PublicKey
sc.Pubkeys[2] = vals[2].PublicKey
sc.Pubkeys[3] = vals[3].PublicKey
require.NoError(t, st.SetCurrentSyncCommittee(sc))
proposerServer := &Server{
HeadFetcher: &chainmock.ChainService{State: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
msgs := []*ethpb.SyncCommitteeMessage{
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 0, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 1, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 2, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 3, Signature: bls.NewAggregateSignature().Marshal()},
}
for _, msg := range msgs {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg))
}
subcommitteeAggBits := ethpb.NewSyncCommitteeAggregationBits()
subcommitteeAggBits.SetBitAt(3, true)
cont := &ethpb.SyncCommitteeContribution{
Slot: 1,
SubcommitteeIndex: 0,
Signature: bls.NewAggregateSignature().Marshal(),
AggregationBits: subcommitteeAggBits,
BlockRoot: r[:],
}
aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont})
require.NoError(t, err)
require.Equal(t, 1, len(aggregated))
assert.Equal(t, false, aggregated[0].AggregationBits.BitAt(3))
}

View File

@@ -473,6 +473,11 @@ func isVersionCompatible(bidVersion, headBlockVersion int) bool {
return true
}
// Allow Electra bids for Gloas blocks - they have compatible payload formats
if bidVersion == version.Electra && headBlockVersion == version.Gloas {
return true
}
// For all other cases, require exact version match
return false
}
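A hedged sketch of the added rule: an Electra bid is accepted for a Gloas head block, while mismatched versions otherwise fall through to the exact-match requirement. The version constants below are placeholders, not Prysm's runtime/version values:

package main

import "fmt"

// Placeholder version ordering for illustration only.
const (
    electra = iota
    fulu
    gloas
)

func bidAllowed(bidVersion, headVersion int) bool {
    if bidVersion == headVersion {
        return true
    }
    // Mirrors the special case above: Electra bids remain usable for Gloas blocks.
    return bidVersion == electra && headVersion == gloas
}

func main() {
    fmt.Println(bidAllowed(electra, gloas)) // true
    fmt.Println(bidAllowed(fulu, gloas))    // false
}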

View File

@@ -16,6 +16,11 @@ func getEmptyBlock(slot primitives.Slot) (interfaces.SignedBeaconBlock, error) {
var err error
epoch := slots.ToEpoch(slot)
switch {
case epoch >= params.BeaconConfig().GloasForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{Block: &ethpb.BeaconBlockGloas{Body: &ethpb.BeaconBlockBodyGloas{}}})
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not initialize block for proposal: %v", err)
}
case epoch >= params.BeaconConfig().FuluForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{}}})
if err != nil {

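getEmptyBlock selects the newest fork whose activation epoch has been reached, with the new Gloas case checked first. A minimal sketch of that epoch-based selection using made-up fork epochs:

package main

import "fmt"

type fork struct {
    name  string
    epoch uint64
}

// pickFork returns the latest fork whose activation epoch is <= the given epoch.
// Forks must be ordered from newest to oldest, matching the switch above.
func pickFork(forks []fork, epoch uint64) string {
    for _, f := range forks {
        if epoch >= f.epoch {
            return f.name
        }
    }
    return "phase0"
}

func main() {
    // Hypothetical fork epochs for illustration only.
    forks := []fork{{"gloas", 600}, {"fulu", 500}, {"electra", 400}}
    fmt.Println(pickFork(forks, 550)) // fulu
    fmt.Println(pickFork(forks, 700)) // gloas
}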
View File

@@ -136,7 +136,7 @@ func (vs *Server) getLocalPayloadFromEngine(
}
var attr payloadattribute.Attributer
switch st.Version() {
case version.Deneb, version.Electra, version.Fulu:
case version.Deneb, version.Electra, version.Fulu, version.Gloas:
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
return nil, err

View File

@@ -4,16 +4,23 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)
func (vs *Server) getSlashings(ctx context.Context, head state.BeaconState) ([]*ethpb.ProposerSlashing, []ethpb.AttSlashing) {
var err error
exitInfo := v.ExitInformation(head)
if err := helpers.UpdateTotalActiveBalanceCache(head, exitInfo.TotalActiveBalance); err != nil {
log.WithError(err).Warn("Could not update total active balance cache")
}
proposerSlashings := vs.SlashingsPool.PendingProposerSlashings(ctx, head, false /*noLimit*/)
validProposerSlashings := make([]*ethpb.ProposerSlashing, 0, len(proposerSlashings))
for _, slashing := range proposerSlashings {
_, err := blocks.ProcessProposerSlashing(ctx, head, slashing, v.SlashValidator)
_, err = blocks.ProcessProposerSlashing(ctx, head, slashing, exitInfo)
if err != nil {
log.WithError(err).Warn("Could not validate proposer slashing for block inclusion")
continue
@@ -23,7 +30,7 @@ func (vs *Server) getSlashings(ctx context.Context, head state.BeaconState) ([]*
attSlashings := vs.SlashingsPool.PendingAttesterSlashings(ctx, head, false /*noLimit*/)
validAttSlashings := make([]ethpb.AttSlashing, 0, len(attSlashings))
for _, slashing := range attSlashings {
_, err := blocks.ProcessAttesterSlashing(ctx, head, slashing, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashing(ctx, head, slashing, exitInfo)
if err != nil {
log.WithError(err).Warn("Could not validate attester slashing for block inclusion")
continue

View File

@@ -6,6 +6,7 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
builderTest "github.com/OffchainLabs/prysm/v6/beacon-chain/builder/testing"
@@ -894,6 +895,9 @@ func injectSlashings(t *testing.T, st state.BeaconState, keys []bls.SecretKey, s
}
func TestProposer_ProposeBlock_OK(t *testing.T) {
// Initialize KZG for Fulu blocks
require.NoError(t, kzg.Start())
tests := []struct {
name string
block func([32]byte) *ethpb.GenericSignedBeaconBlock
@@ -1098,6 +1102,131 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
},
err: "blob KZG commitments don't match number of blobs or KZG proofs",
},
{
name: "fulu block no blob",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
sb := &ethpb.SignedBeaconBlockContentsFulu{
Block: &ethpb.SignedBeaconBlockFulu{
Block: &ethpb.BeaconBlockElectra{Slot: 5, ParentRoot: parent[:], Body: util.HydrateBeaconBlockBodyElectra(&ethpb.BeaconBlockBodyElectra{})},
},
}
blk := &ethpb.GenericSignedBeaconBlock_Fulu{Fulu: sb}
return &ethpb.GenericSignedBeaconBlock{Block: blk, IsBlinded: false}
},
},
{
name: "fulu block with single blob and cell proofs",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
numberOfColumns := uint64(128)
// For Fulu, we have cell proofs (blobs * numberOfColumns)
cellProofs := make([][]byte, numberOfColumns)
for i := uint64(0); i < numberOfColumns; i++ {
cellProofs[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
// Blob must be exactly 131072 bytes
blob := make([]byte, 131072)
blob[0] = 0x01
sb := &ethpb.SignedBeaconBlockContentsFulu{
Block: &ethpb.SignedBeaconBlockFulu{
Block: &ethpb.BeaconBlockElectra{
Slot: 5, ParentRoot: parent[:],
Body: util.HydrateBeaconBlockBodyElectra(&ethpb.BeaconBlockBodyElectra{
BlobKzgCommitments: [][]byte{bytesutil.PadTo([]byte("kc"), 48)},
}),
},
},
KzgProofs: cellProofs,
Blobs: [][]byte{blob},
}
blk := &ethpb.GenericSignedBeaconBlock_Fulu{Fulu: sb}
return &ethpb.GenericSignedBeaconBlock{Block: blk, IsBlinded: false}
},
},
{
name: "fulu block with multiple blobs and cell proofs",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
numberOfColumns := uint64(128)
blobCount := 3
// For Fulu, we have cell proofs (blobs * numberOfColumns)
cellProofs := make([][]byte, uint64(blobCount)*numberOfColumns)
for i := range cellProofs {
cellProofs[i] = bytesutil.PadTo([]byte{byte(i % 256)}, 48)
}
// Create properly sized blobs (131072 bytes each)
blobs := make([][]byte, blobCount)
for i := 0; i < blobCount; i++ {
blob := make([]byte, 131072)
blob[0] = byte(i + 1)
blobs[i] = blob
}
sb := &ethpb.SignedBeaconBlockContentsFulu{
Block: &ethpb.SignedBeaconBlockFulu{
Block: &ethpb.BeaconBlockElectra{
Slot: 5, ParentRoot: parent[:],
Body: util.HydrateBeaconBlockBodyElectra(&ethpb.BeaconBlockBodyElectra{
BlobKzgCommitments: [][]byte{
bytesutil.PadTo([]byte("kc"), 48),
bytesutil.PadTo([]byte("kc1"), 48),
bytesutil.PadTo([]byte("kc2"), 48),
},
}),
},
},
KzgProofs: cellProofs,
Blobs: blobs,
}
blk := &ethpb.GenericSignedBeaconBlock_Fulu{Fulu: sb}
return &ethpb.GenericSignedBeaconBlock{Block: blk, IsBlinded: false}
},
},
{
name: "fulu block wrong cell proof count (should be blobs * 128)",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
// Wrong number of cell proofs - should be 2 * 128 = 256, but providing only 2
// Create properly sized blobs
blob1 := make([]byte, 131072)
blob1[0] = 0x01
blob2 := make([]byte, 131072)
blob2[0] = 0x02
sb := &ethpb.SignedBeaconBlockContentsFulu{
Block: &ethpb.SignedBeaconBlockFulu{
Block: &ethpb.BeaconBlockElectra{
Slot: 5, ParentRoot: parent[:],
Body: util.HydrateBeaconBlockBodyElectra(&ethpb.BeaconBlockBodyElectra{
BlobKzgCommitments: [][]byte{
bytesutil.PadTo([]byte("kc"), 48),
bytesutil.PadTo([]byte("kc1"), 48),
},
}),
},
},
KzgProofs: [][]byte{{0x01}, {0x02}}, // Wrong: should be 256 cell proofs
Blobs: [][]byte{blob1, blob2},
}
blk := &ethpb.GenericSignedBeaconBlock_Fulu{Fulu: sb}
return &ethpb.GenericSignedBeaconBlock{Block: blk, IsBlinded: false}
},
err: "blobs and cells proofs mismatch",
},
{
name: "blind fulu block with blob commitments",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBlindedBeaconBlockFulu()
blockToPropose.Message.Slot = 5
blockToPropose.Message.ParentRoot = parent[:]
txRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
withdrawalsRoot, err := ssz.WithdrawalSliceRoot([]*enginev1.Withdrawal{}, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
blockToPropose.Message.Body.ExecutionPayloadHeader.TransactionsRoot = txRoot[:]
blockToPropose.Message.Body.ExecutionPayloadHeader.WithdrawalsRoot = withdrawalsRoot[:]
blockToPropose.Message.Body.BlobKzgCommitments = [][]byte{bytesutil.PadTo([]byte{0x01}, 48)}
blk := &ethpb.GenericSignedBeaconBlock_BlindedFulu{BlindedFulu: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
err: "commitment value doesn't match block", // Known issue with mock builder cell proof mismatch
},
}
for _, tt := range tests {
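The new Fulu cases hinge on the cell-proof count being the blob count times the number of columns (128 in these tests). A tiny self-contained check of that relationship; the column count is taken from the test comments above:

package main

import "fmt"

const assumedNumberOfColumns = 128 // per-blob cell proofs expected in these tests

func cellProofCountValid(blobCount, proofCount int) bool {
    return proofCount == blobCount*assumedNumberOfColumns
}

func main() {
    fmt.Println(cellProofCountValid(1, 128)) // true: single-blob case above
    fmt.Println(cellProofCountValid(3, 384)) // true: three-blob case above
    fmt.Println(cellProofCountValid(2, 2))   // false: the "wrong cell proof count" case
}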
@@ -1111,15 +1240,29 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
db := dbutil.SetupDB(t)
// Create cell proofs for Fulu blocks (128 proofs per blob)
numberOfColumns := uint64(128)
cellProofs := make([][]byte, numberOfColumns)
for i := uint64(0); i < numberOfColumns; i++ {
cellProofs[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
// Create properly sized blob for mock builder
mockBlob := make([]byte, 131072)
mockBlob[0] = 0x03
// Use the same commitment as in the blind block test
mockCommitment := bytesutil.PadTo([]byte{0x01}, 48)
proposerServer := &Server{
BlockReceiver: c,
BlockNotifier: c.BlockNotifier(),
P2P: mockp2p.NewTestP2P(t),
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: tt.useBuilder, PayloadCapella: emptyPayloadCapella(), PayloadDeneb: emptyPayloadDeneb(),
BlobBundle: &enginev1.BlobsBundle{KzgCommitments: [][]byte{bytesutil.PadTo([]byte{0x01}, 48)}, Proofs: [][]byte{{0x02}}, Blobs: [][]byte{{0x03}}}},
BeaconDB: db,
BlobReceiver: c,
OperationNotifier: c.OperationNotifier(),
BlobBundle: &enginev1.BlobsBundle{KzgCommitments: [][]byte{mockCommitment}, Proofs: [][]byte{{0x02}}, Blobs: [][]byte{{0x03}}},
BlobBundleV2: &enginev1.BlobsBundleV2{KzgCommitments: [][]byte{mockCommitment}, Proofs: cellProofs, Blobs: [][]byte{mockBlob}}},
BeaconDB: db,
BlobReceiver: c,
DataColumnReceiver: c, // Add DataColumnReceiver for Fulu blocks
OperationNotifier: c.OperationNotifier(),
}
blockToPropose := tt.block(bsRoot)
res, err := proposerServer.ProposeBeaconBlock(t.Context(), blockToPropose)
@@ -2953,49 +3096,6 @@ func TestProposer_DeleteAttsInPool_Aggregated(t *testing.T) {
assert.Equal(t, 0, len(atts), "Did not delete unaggregated attestation")
}
func TestProposer_GetSyncAggregate_OK(t *testing.T) {
proposerServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
conts := []*ethpb.SyncCommitteeContribution{
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
}
for _, cont := range conts {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
}
func TestProposer_PrepareBeaconProposer(t *testing.T) {
type args struct {
request *ethpb.PrepareBeaconProposerRequest

View File

@@ -69,6 +69,7 @@ type Server struct {
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher

View File

@@ -89,6 +89,7 @@ type Config struct {
AttestationReceiver blockchain.AttestationReceiver
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
ExecutionChainService execution.Chain
ChainStartFetcher execution.ChainStartFetcher
ExecutionChainInfoFetcher execution.ChainInfoFetcher
@@ -238,6 +239,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
P2P: s.cfg.Broadcaster,
BlockReceiver: s.cfg.BlockReceiver,
BlobReceiver: s.cfg.BlobReceiver,
DataColumnReceiver: s.cfg.DataColumnReceiver,
MockEth1Votes: s.cfg.MockEth1Votes,
Eth1BlockFetcher: s.cfg.ExecutionChainService,
PendingDepositsFetcher: s.cfg.PendingDepositFetcher,

View File

@@ -8,6 +8,7 @@ go_library(
"mock_blocker.go",
"mock_exec_chain_info_fetcher.go",
"mock_genesis_timefetcher.go",
"mock_sidecars.go",
"mock_stater.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil",

View File

@@ -0,0 +1,44 @@
package testutil
import ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
// CreateDataColumnSidecar generates a filled dummy data column sidecar
func CreateDataColumnSidecar(index uint64, data []byte) *ethpb.DataColumnSidecar {
return &ethpb.DataColumnSidecar{
Index: index,
Column: [][]byte{data},
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
},
KzgCommitments: [][]byte{make([]byte, 48)},
KzgProofs: [][]byte{make([]byte, 48)},
KzgCommitmentsInclusionProof: [][]byte{make([]byte, 32)},
}
}
// CreateBlobSidecar generates a filled dummy blob sidecar
func CreateBlobSidecar(index uint64, blob []byte) *ethpb.BlobSidecar {
return &ethpb.BlobSidecar{
Index: index,
Blob: blob,
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
},
KzgCommitment: make([]byte, 48),
KzgProof: make([]byte, 48),
}
}
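A short usage sketch of the new helpers as they might appear in a test; the import path matches the package declared above, and the assertions are illustrative:

package testutil_test

import (
    "testing"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
)

func TestMockSidecarHelpers(t *testing.T) {
    column := testutil.CreateDataColumnSidecar(3, []byte{0xaa})
    if column.Index != 3 {
        t.Fatalf("unexpected column index: %d", column.Index)
    }

    blob := testutil.CreateBlobSidecar(1, make([]byte, 131072))
    if blob.Index != 1 {
        t.Fatalf("unexpected blob index: %d", blob.Index)
    }
}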

View File

@@ -67,6 +67,13 @@ func WithNower(n Nower) ClockOpt {
}
}
// WithTimeAsNow will create a Nower based on the given time.Time and set it as the Now() implementation.
func WithTimeAsNow(t time.Time) ClockOpt {
return func(g *Clock) {
g.now = func() time.Time { return t }
}
}
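WithTimeAsNow is the usual functional-option trick of pinning Now for tests. A stripped-down, self-contained sketch of the same pattern; the names are illustrative and not the startup package's API:

package main

import (
    "fmt"
    "time"
)

type clock struct {
    now func() time.Time
}

type clockOpt func(*clock)

// withTimeAsNow pins Now() to a fixed instant, mirroring the option above.
func withTimeAsNow(t time.Time) clockOpt {
    return func(c *clock) {
        c.now = func() time.Time { return t }
    }
}

func newClock(opts ...clockOpt) *clock {
    c := &clock{now: time.Now}
    for _, o := range opts {
        o(c)
    }
    return c
}

func main() {
    fixed := time.Date(2025, 9, 17, 0, 0, 0, 0, time.UTC)
    c := newClock(withTimeAsNow(fixed))
    fmt.Println(c.now().Equal(fixed)) // true
}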
// NewClock constructs a Clock value from a genesis timestamp (t) and a Genesis Validator Root (vr).
// The WithNower ClockOpt can be used in tests to specify an alternate `time.Now` implementation,
// for instance to return a value for `Now` spanning a certain number of slots from genesis time, to control the current slot.

View File

@@ -265,6 +265,7 @@ type WriteOnlyEth1Data interface {
AppendEth1DataVotes(val *ethpb.Eth1Data) error
SetEth1DepositIndex(val uint64) error
ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (primitives.Epoch, error)
ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error)
}
// WriteOnlyValidators defines a struct which only has write access to validators methods.

View File

@@ -52,6 +52,7 @@ type BeaconState struct {
latestExecutionPayloadHeader *enginev1.ExecutionPayloadHeader
latestExecutionPayloadHeaderCapella *enginev1.ExecutionPayloadHeaderCapella
latestExecutionPayloadHeaderDeneb *enginev1.ExecutionPayloadHeaderDeneb
latestExecutionPayloadHeaderGloas *enginev1.ExecutionPayloadHeaderGloas
// Capella fields
nextWithdrawalIndex uint64

View File

@@ -17,6 +17,10 @@ func (b *BeaconState) LatestExecutionPayloadHeader() (interfaces.ExecutionData,
b.lock.RLock()
defer b.lock.RUnlock()
if b.version >= version.Gloas {
return blocks.WrappedExecutionPayloadHeaderGloas(b.latestExecutionPayloadHeaderGloas.Copy())
}
if b.version >= version.Deneb {
return blocks.WrappedExecutionPayloadHeaderDeneb(b.latestExecutionPayloadHeaderDeneb.Copy())
}

View File

@@ -44,11 +44,27 @@ func (b *BeaconState) ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (prim
return 0, err
}
return b.exitEpochAndUpdateChurn(primitives.Gwei(activeBal), exitBalance)
}
// ExitEpochAndUpdateChurnForTotalBal has the same functionality as ExitEpochAndUpdateChurn,
// except that the total active balance is passed in as an argument instead of being
// calculated inside the function.
func (b *BeaconState) ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error) {
if b.version < version.Electra {
return 0, errNotSupported("ExitEpochAndUpdateChurnForTotalBal", b.version)
}
return b.exitEpochAndUpdateChurn(totalActiveBalance, exitBalance)
}
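ExitEpochAndUpdateChurnForTotalBal threads a precomputed total active balance into the churn math below, where ActivationExitChurnLimit presumably follows the Electra spec formula: a balance-based churn floored at a minimum, rounded down to an effective-balance increment, and capped at a maximum. A hedged sketch; the constants are assumed Electra values, not read from Prysm's config:

package main

import "fmt"

// Gwei constants assumed from the Electra spec; treat them as illustrative only.
const (
    minPerEpochChurnLimitElectra        = 128_000_000_000 // 128 ETH
    maxPerEpochActivationExitChurnLimit = 256_000_000_000 // 256 ETH
    churnLimitQuotient                  = 65_536
    effectiveBalanceIncrement           = 1_000_000_000 // 1 ETH
)

func activationExitChurnLimit(totalActiveBalance uint64) uint64 {
    churn := totalActiveBalance / churnLimitQuotient
    if churn < minPerEpochChurnLimitElectra {
        churn = minPerEpochChurnLimitElectra
    }
    churn -= churn % effectiveBalanceIncrement
    if churn > maxPerEpochActivationExitChurnLimit {
        churn = maxPerEpochActivationExitChurnLimit
    }
    return churn
}

func main() {
    // ~10M ETH of total active balance gives roughly 152 ETH of churn per epoch.
    fmt.Println(activationExitChurnLimit(10_000_000_000_000_000)) // 152000000000
}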
func (b *BeaconState) exitEpochAndUpdateChurn(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error) {
b.lock.Lock()
defer b.lock.Unlock()
earliestExitEpoch := max(b.earliestExitEpoch, helpers.ActivationExitEpoch(slots.ToEpoch(b.slot)))
perEpochChurn := helpers.ActivationExitChurnLimit(primitives.Gwei(activeBal)) // Guaranteed to be non-zero.
perEpochChurn := helpers.ActivationExitChurnLimit(totalActiveBalance) // Guaranteed to be non-zero.
// New epoch for exits
var exitBalanceToConsume primitives.Gwei

View File

@@ -55,6 +55,17 @@ func (b *BeaconState) SetLatestExecutionPayloadHeader(val interfaces.ExecutionDa
b.latestExecutionPayloadHeaderDeneb = latest
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderDeneb)
return nil
case *enginev1.ExecutionPayloadGloas:
if !(b.version >= version.Gloas) {
return fmt.Errorf("wrong state version (%s) for gloas execution payload", version.String(b.version))
}
latest, err := consensusblocks.PayloadToHeaderGloas(val)
if err != nil {
return errors.Wrap(err, "could not convert payload to header")
}
b.latestExecutionPayloadHeaderGloas = latest
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderGloas)
return nil
case *enginev1.ExecutionPayloadHeader:
if b.version != version.Bellatrix {
return fmt.Errorf("wrong state version (%s) for bellatrix execution payload header", version.String(b.version))
@@ -76,6 +87,13 @@ func (b *BeaconState) SetLatestExecutionPayloadHeader(val interfaces.ExecutionDa
b.latestExecutionPayloadHeaderDeneb = header
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderDeneb)
return nil
case *enginev1.ExecutionPayloadHeaderGloas:
if !(b.version >= version.Gloas) {
return fmt.Errorf("wrong state version (%s) for gloas execution payload header", version.String(b.version))
}
b.latestExecutionPayloadHeaderGloas = header
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderGloas)
return nil
default:
return errors.New("value must be an execution payload header")
}

View File

@@ -112,6 +112,24 @@ var (
electraFields,
types.ProposerLookahead,
)
gloasFields = append(
altairFields,
types.LatestExecutionPayloadHeaderGloas,
types.NextWithdrawalIndex,
types.NextWithdrawalValidatorIndex,
types.HistoricalSummaries,
types.DepositRequestsStartIndex,
types.DepositBalanceToConsume,
types.ExitBalanceToConsume,
types.EarliestExitEpoch,
types.ConsolidationBalanceToConsume,
types.EarliestConsolidationEpoch,
types.PendingDeposits,
types.PendingPartialWithdrawals,
types.PendingConsolidations,
types.ProposerLookahead,
)
)
const (
@@ -122,6 +140,7 @@ const (
denebSharedFieldRefCount = 7
electraSharedFieldRefCount = 10
fuluSharedFieldRefCount = 11
gloasSharedFieldRefCount = 11
)
// InitializeFromProtoPhase0 the beacon state from a protobuf representation.
@@ -159,6 +178,11 @@ func InitializeFromProtoFulu(st *ethpb.BeaconStateFulu) (state.BeaconState, erro
return InitializeFromProtoUnsafeFulu(proto.Clone(st).(*ethpb.BeaconStateFulu))
}
// InitializeFromProtoGloas the beacon state from a protobuf representation.
func InitializeFromProtoGloas(st *ethpb.BeaconStateGloas) (state.BeaconState, error) {
return InitializeFromProtoUnsafeGloas(proto.Clone(st).(*ethpb.BeaconStateGloas))
}
// InitializeFromProtoUnsafePhase0 directly uses the beacon state protobuf fields
// and sets them as fields of the BeaconState type.
func InitializeFromProtoUnsafePhase0(st *ethpb.BeaconState) (state.BeaconState, error) {
@@ -731,6 +755,105 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateFulu) (state.BeaconState
return b, nil
}
// InitializeFromProtoUnsafeGloas directly uses the beacon state protobuf fields
// and sets them as fields of the BeaconState type.
func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconState, error) {
if st == nil {
return nil, errors.New("received nil state")
}
hRoots := customtypes.HistoricalRoots(make([][32]byte, len(st.HistoricalRoots)))
for i, r := range st.HistoricalRoots {
hRoots[i] = bytesutil.ToBytes32(r)
}
proposerLookahead := make([]primitives.ValidatorIndex, len(st.ProposerLookahead))
for i, v := range st.ProposerLookahead {
proposerLookahead[i] = primitives.ValidatorIndex(v)
}
fieldCount := params.BeaconConfig().BeaconStateFuluFieldCount
b := &BeaconState{
version: version.Gloas,
genesisTime: st.GenesisTime,
genesisValidatorsRoot: bytesutil.ToBytes32(st.GenesisValidatorsRoot),
slot: st.Slot,
fork: st.Fork,
latestBlockHeader: st.LatestBlockHeader,
historicalRoots: hRoots,
eth1Data: st.Eth1Data,
eth1DataVotes: st.Eth1DataVotes,
eth1DepositIndex: st.Eth1DepositIndex,
slashings: st.Slashings,
previousEpochParticipation: st.PreviousEpochParticipation,
currentEpochParticipation: st.CurrentEpochParticipation,
justificationBits: st.JustificationBits,
previousJustifiedCheckpoint: st.PreviousJustifiedCheckpoint,
currentJustifiedCheckpoint: st.CurrentJustifiedCheckpoint,
finalizedCheckpoint: st.FinalizedCheckpoint,
currentSyncCommittee: st.CurrentSyncCommittee,
nextSyncCommittee: st.NextSyncCommittee,
latestExecutionPayloadHeaderGloas: st.LatestExecutionPayloadHeader,
nextWithdrawalIndex: st.NextWithdrawalIndex,
nextWithdrawalValidatorIndex: st.NextWithdrawalValidatorIndex,
historicalSummaries: st.HistoricalSummaries,
depositRequestsStartIndex: st.DepositRequestsStartIndex,
depositBalanceToConsume: st.DepositBalanceToConsume,
exitBalanceToConsume: st.ExitBalanceToConsume,
earliestExitEpoch: st.EarliestExitEpoch,
consolidationBalanceToConsume: st.ConsolidationBalanceToConsume,
earliestConsolidationEpoch: st.EarliestConsolidationEpoch,
pendingDeposits: st.PendingDeposits,
pendingPartialWithdrawals: st.PendingPartialWithdrawals,
pendingConsolidations: st.PendingConsolidations,
proposerLookahead: proposerLookahead,
id: types.Enumerator.Inc(),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
}
b.blockRootsMultiValue = NewMultiValueBlockRoots(st.BlockRoots)
b.stateRootsMultiValue = NewMultiValueStateRoots(st.StateRoots)
b.randaoMixesMultiValue = NewMultiValueRandaoMixes(st.RandaoMixes)
b.balancesMultiValue = NewMultiValueBalances(st.Balances)
b.validatorsMultiValue = NewMultiValueValidators(st.Validators)
b.inactivityScoresMultiValue = NewMultiValueInactivityScores(st.InactivityScores)
b.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, gloasSharedFieldRefCount)
for _, f := range gloasFields {
b.dirtyFields[f] = true
b.rebuildTrie[f] = true
b.dirtyIndices[f] = []uint64{}
trie, err := fieldtrie.NewFieldTrie(f, types.BasicArray, nil, 0)
if err != nil {
return nil, err
}
b.stateFieldLeaves[f] = trie
}
// Initialize field reference tracking for shared data.
b.sharedFieldReferences[types.HistoricalRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[types.Eth1DataVotes] = stateutil.NewRef(1)
b.sharedFieldReferences[types.Slashings] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PreviousEpochParticipationBits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.CurrentEpochParticipationBits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.LatestExecutionPayloadHeaderGloas] = stateutil.NewRef(1) // New in Gloas.
b.sharedFieldReferences[types.HistoricalSummaries] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
state.Count.Inc()
// Finalizer runs when dst is being destroyed in garbage collection.
runtime.SetFinalizer(b, finalizerCleanup)
return b, nil
}
// Copy returns a deep copy of the beacon state.
func (b *BeaconState) Copy() state.BeaconState {
b.lock.RLock()
@@ -752,6 +875,8 @@ func (b *BeaconState) Copy() state.BeaconState {
fieldCount = params.BeaconConfig().BeaconStateElectraFieldCount
case version.Fulu:
fieldCount = params.BeaconConfig().BeaconStateFuluFieldCount
case version.Gloas:
fieldCount = params.BeaconConfig().BeaconStateFuluFieldCount
}
dst := &BeaconState{
@@ -806,6 +931,7 @@ func (b *BeaconState) Copy() state.BeaconState {
latestExecutionPayloadHeader: b.latestExecutionPayloadHeader.Copy(),
latestExecutionPayloadHeaderCapella: b.latestExecutionPayloadHeaderCapella.Copy(),
latestExecutionPayloadHeaderDeneb: b.latestExecutionPayloadHeaderDeneb.Copy(),
latestExecutionPayloadHeaderGloas: b.latestExecutionPayloadHeaderGloas.Copy(),
id: types.Enumerator.Inc(),
@@ -842,6 +968,8 @@ func (b *BeaconState) Copy() state.BeaconState {
dst.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, electraSharedFieldRefCount)
case version.Fulu:
dst.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, fuluSharedFieldRefCount)
case version.Gloas:
dst.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, gloasSharedFieldRefCount)
}
for field, ref := range b.sharedFieldReferences {
@@ -937,6 +1065,8 @@ func (b *BeaconState) initializeMerkleLayers(ctx context.Context) error {
b.dirtyFields = make(map[types.FieldIndex]bool, params.BeaconConfig().BeaconStateElectraFieldCount)
case version.Fulu:
b.dirtyFields = make(map[types.FieldIndex]bool, params.BeaconConfig().BeaconStateFuluFieldCount)
case version.Gloas:
b.dirtyFields = make(map[types.FieldIndex]bool, params.BeaconConfig().BeaconStateFuluFieldCount)
default:
return fmt.Errorf("unknown state version (%s) when computing dirty fields in merklization", version.String(b.version))
}
@@ -1149,6 +1279,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return b.latestExecutionPayloadHeaderCapella.HashTreeRoot()
case types.LatestExecutionPayloadHeaderDeneb:
return b.latestExecutionPayloadHeaderDeneb.HashTreeRoot()
case types.LatestExecutionPayloadHeaderGloas:
return b.latestExecutionPayloadHeaderGloas.HashTreeRoot()
case types.NextWithdrawalIndex:
return ssz.Uint64Root(b.nextWithdrawalIndex), nil
case types.NextWithdrawalValidatorIndex:

View File

@@ -88,6 +88,8 @@ func (f FieldIndex) String() string {
return "latestExecutionPayloadHeaderCapella"
case LatestExecutionPayloadHeaderDeneb:
return "latestExecutionPayloadHeaderDeneb"
case LatestExecutionPayloadHeaderGloas:
return "latestExecutionPayloadHeaderGloas"
case NextWithdrawalIndex:
return "nextWithdrawalIndex"
case NextWithdrawalValidatorIndex:
@@ -171,7 +173,7 @@ func (f FieldIndex) RealPosition() int {
return 22
case NextSyncCommittee:
return 23
case LatestExecutionPayloadHeader, LatestExecutionPayloadHeaderCapella, LatestExecutionPayloadHeaderDeneb:
case LatestExecutionPayloadHeader, LatestExecutionPayloadHeaderCapella, LatestExecutionPayloadHeaderDeneb, LatestExecutionPayloadHeaderGloas:
return 24
case NextWithdrawalIndex:
return 25
@@ -251,6 +253,7 @@ const (
LatestExecutionPayloadHeader
LatestExecutionPayloadHeaderCapella
LatestExecutionPayloadHeaderDeneb
LatestExecutionPayloadHeaderGloas
NextWithdrawalIndex
NextWithdrawalValidatorIndex
HistoricalSummaries

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"batch_verifier.go",
"block_batcher.go",
"broadcast_bls_changes.go",
"context.go",
"custody.go",
"data_column_sidecars.go",
@@ -165,7 +164,6 @@ go_test(
"batch_verifier_test.go",
"blobs_test.go",
"block_batcher_test.go",
"broadcast_bls_changes_test.go",
"context_test.go",
"custody_test.go",
"data_column_sidecars_test.go",
@@ -194,6 +192,7 @@ go_test(
"slot_aware_cache_test.go",
"subscriber_beacon_aggregate_proof_test.go",
"subscriber_beacon_blocks_test.go",
"subscriber_data_column_sidecar_test.go",
"subscriber_test.go",
"subscription_topic_handler_test.go",
"sync_fuzz_test.go",
@@ -266,6 +265,7 @@ go_test(
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/equality:go_default_library",
"//genesis:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",

View File

@@ -1,88 +0,0 @@
package sync
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/config/params"
types "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
const broadcastBLSChangesRateLimit = 128
// This routine broadcasts known BLS changes at the Capella fork.
func (s *Service) broadcastBLSChanges(currSlot types.Slot) {
capellaSlotStart, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
if err != nil {
// only possible error is an overflow, so we exit early from the method
return
}
if currSlot != capellaSlotStart {
return
}
changes, err := s.cfg.blsToExecPool.PendingBLSToExecChanges()
if err != nil {
log.WithError(err).Error("could not get BLS to execution changes")
}
if len(changes) == 0 {
return
}
source := rand.NewGenerator()
length := len(changes)
broadcastChanges := make([]*ethpb.SignedBLSToExecutionChange, length)
for i := 0; i < length; i++ {
idx := source.Intn(len(changes))
broadcastChanges[i] = changes[idx]
changes = append(changes[:idx], changes[idx+1:]...)
}
go s.rateBLSChanges(s.ctx, broadcastChanges)
}
func (s *Service) broadcastBLSBatch(ctx context.Context, ptr *[]*ethpb.SignedBLSToExecutionChange) {
limit := broadcastBLSChangesRateLimit
if len(*ptr) < broadcastBLSChangesRateLimit {
limit = len(*ptr)
}
st, err := s.cfg.chain.HeadStateReadOnly(ctx)
if err != nil {
log.WithError(err).Error("could not get head state")
return
}
for _, ch := range (*ptr)[:limit] {
if ch != nil {
_, err := blocks.ValidateBLSToExecutionChange(st, ch)
if err != nil {
log.WithError(err).Error("could not validate BLS to execution change")
continue
}
if err := s.cfg.p2p.Broadcast(ctx, ch); err != nil {
log.WithError(err).Error("could not broadcast BLS to execution changes.")
}
}
}
*ptr = (*ptr)[limit:]
}
func (s *Service) rateBLSChanges(ctx context.Context, changes []*ethpb.SignedBLSToExecutionChange) {
s.broadcastBLSBatch(ctx, &changes)
if len(changes) == 0 {
return
}
ticker := time.NewTicker(500 * time.Millisecond)
for {
select {
case <-s.ctx.Done():
return
case <-ticker.C:
s.broadcastBLSBatch(ctx, &changes)
if len(changes) == 0 {
return
}
}
}
}


@@ -1,157 +0,0 @@
package sync
import (
"testing"
"time"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
testingdb "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
mockp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestBroadcastBLSChanges(t *testing.T) {
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.CapellaForkEpoch = c.BellatrixForkEpoch.Add(2)
params.OverrideBeaconConfig(c)
chainService := &mockChain.ChainService{
Genesis: time.Now(),
ValidatorsRoot: [32]byte{'A'},
}
s := NewService(t.Context(),
WithP2P(mockp2p.NewTestP2P(t)),
WithInitialSync(&mockSync.Sync{IsSyncing: false}),
WithChainService(chainService),
WithOperationNotifier(chainService.OperationNotifier()),
WithBlsToExecPool(blstoexec.NewPool()),
)
var emptySig [96]byte
s.cfg.blsToExecPool.InsertBLSToExecChange(&ethpb.SignedBLSToExecutionChange{
Message: &ethpb.BLSToExecutionChange{
ValidatorIndex: 10,
FromBlsPubkey: make([]byte, 48),
ToExecutionAddress: make([]byte, 20),
},
Signature: emptySig[:],
})
capellaStart, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
s.broadcastBLSChanges(capellaStart + 1)
}
func TestRateBLSChanges(t *testing.T) {
logHook := logTest.NewGlobal()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.CapellaForkEpoch = c.BellatrixForkEpoch.Add(2)
params.OverrideBeaconConfig(c)
chainService := &mockChain.ChainService{
Genesis: time.Now(),
ValidatorsRoot: [32]byte{'A'},
}
p1 := mockp2p.NewTestP2P(t)
s := NewService(t.Context(),
WithP2P(p1),
WithInitialSync(&mockSync.Sync{IsSyncing: false}),
WithChainService(chainService),
WithOperationNotifier(chainService.OperationNotifier()),
WithBlsToExecPool(blstoexec.NewPool()),
)
beaconDB := testingdb.SetupDB(t)
s.cfg.stateGen = stategen.New(beaconDB, doublylinkedtree.New())
s.cfg.beaconDB = beaconDB
s.initCaches()
st, keys := util.DeterministicGenesisStateCapella(t, 256)
s.cfg.chain = &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Now().Add(-time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Duration(10)),
State: st,
}
for i := 0; i < 200; i++ {
message := &ethpb.BLSToExecutionChange{
ValidatorIndex: primitives.ValidatorIndex(i),
FromBlsPubkey: keys[i+1].PublicKey().Marshal(),
ToExecutionAddress: bytesutil.PadTo([]byte("address"), 20),
}
epoch := params.BeaconConfig().CapellaForkEpoch + 1
domain, err := signing.Domain(st.Fork(), epoch, params.BeaconConfig().DomainBLSToExecutionChange, st.GenesisValidatorsRoot())
assert.NoError(t, err)
htr, err := signing.Data(message.HashTreeRoot, domain)
assert.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: keys[i+1].Sign(htr[:]).Marshal(),
}
s.cfg.blsToExecPool.InsertBLSToExecChange(signed)
}
require.Equal(t, false, p1.BroadcastCalled.Load())
slot, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
s.broadcastBLSChanges(slot)
time.Sleep(100 * time.Millisecond) // Need a sleep for the go routine to be ready
require.Equal(t, true, p1.BroadcastCalled.Load())
require.LogsDoNotContain(t, logHook, "could not")
p1.BroadcastCalled.Store(false)
time.Sleep(500 * time.Millisecond) // Need a sleep for the second batch to be broadcast
require.Equal(t, true, p1.BroadcastCalled.Load())
require.LogsDoNotContain(t, logHook, "could not")
}
func TestBroadcastBLSBatch_changes_slice(t *testing.T) {
message := &ethpb.BLSToExecutionChange{
FromBlsPubkey: make([]byte, 48),
ToExecutionAddress: make([]byte, 20),
}
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: make([]byte, 96),
}
changes := make([]*ethpb.SignedBLSToExecutionChange, 200)
for i := 0; i < len(changes); i++ {
changes[i] = signed
}
p1 := mockp2p.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now(),
ValidatorsRoot: [32]byte{'A'},
}
s := NewService(t.Context(),
WithP2P(p1),
WithInitialSync(&mockSync.Sync{IsSyncing: false}),
WithChainService(chainService),
WithOperationNotifier(chainService.OperationNotifier()),
WithBlsToExecPool(blstoexec.NewPool()),
)
beaconDB := testingdb.SetupDB(t)
s.cfg.stateGen = stategen.New(beaconDB, doublylinkedtree.New())
s.cfg.beaconDB = beaconDB
s.initCaches()
st, _ := util.DeterministicGenesisStateCapella(t, 32)
s.cfg.chain = &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Now().Add(-time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Duration(10)),
State: st,
}
s.broadcastBLSBatch(s.ctx, &changes)
require.Equal(t, 200-128, len(changes))
}


@@ -106,6 +106,9 @@ func (s *Service) custodyGroupCount() (uint64, error) {
// validatorsCustodyRequirement computes the custody requirement based on the
// finalized state and the tracked validators.
func (s *Service) validatorsCustodyRequirement() (uint64, error) {
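// If no tracked validators cache is configured, there are no tracked validators,
// so the validator-driven custody requirement is zero.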
if s.trackedValidatorsCache == nil {
return 0, nil
}
// Get the indices of the tracked validators.
indices := s.trackedValidatorsCache.Indices()


@@ -3,7 +3,6 @@ package sync
import (
"bytes"
"context"
"math"
"slices"
"sync"
"time"
@@ -30,13 +29,14 @@ import (
// DataColumnSidecarsParams stores the common parameters needed to
// fetch data column sidecars from peers.
type DataColumnSidecarsParams struct {
Ctx context.Context // Context
Tor blockchain.TemporalOracle // Temporal oracle, useful to get the current slot
P2P prysmP2P.P2P // P2P network interface
RateLimiter *leakybucket.Collector // Rate limiter for outgoing requests
CtxMap ContextByteVersions // Context map, useful to know if a message is mapped to the correct fork
Storage filesystem.DataColumnStorageReader // Data columns storage
NewVerifier verification.NewDataColumnsVerifier // Data columns verifier to check the conformity of incoming data column sidecars
Ctx context.Context // Context
Tor blockchain.TemporalOracle // Temporal oracle, useful to get the current slot
P2P prysmP2P.P2P // P2P network interface
RateLimiter *leakybucket.Collector // Rate limiter for outgoing requests
CtxMap ContextByteVersions // Context map, useful to know if a message is mapped to the correct fork
Storage filesystem.DataColumnStorageReader // Data columns storage
NewVerifier verification.NewDataColumnsVerifier // Data columns verifier to check the conformity of incoming data column sidecars
DownscorePeerOnRPCFault bool // Downscore a peer if it commits an RPC fault. Not responding with any sidecars is considered a fault.
}
// FetchDataColumnSidecars retrieves data column sidecars from storage and peers for the given
@@ -64,7 +64,7 @@ func FetchDataColumnSidecars(
indices := sortedSliceFromMap(indicesMap)
slotsWithCommitments := make(map[primitives.Slot]bool)
indicesByRootToQuery := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
missingIndicesByRoot := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
indicesByRootStored := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
result := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
@@ -83,7 +83,7 @@ func FetchDataColumnSidecars(
root := roBlock.Root()
// Step 1: Get the requested sidecars for this root if available in storage
requestedColumns, err := tryGetDirectColumns(params.Storage, root, indices)
requestedColumns, err := tryGetStoredColumns(params.Storage, root, indices)
if err != nil {
return nil, errors.Wrapf(err, "try get direct columns for root %#x", root)
}
@@ -107,7 +107,7 @@ func FetchDataColumnSidecars(
indicesToQueryMap, indicesStoredMap := categorizeIndices(params.Storage, root, indices)
if len(indicesToQueryMap) > 0 {
indicesByRootToQuery[root] = indicesToQueryMap
missingIndicesByRoot[root] = indicesToQueryMap
}
if len(indicesStoredMap) > 0 {
indicesByRootStored[root] = indicesStoredMap
@@ -115,40 +115,57 @@ func FetchDataColumnSidecars(
}
// Early return if no sidecars need to be queried from peers.
if len(indicesByRootToQuery) == 0 {
if len(missingIndicesByRoot) == 0 {
return result, nil
}
// Step 3b: Request missing sidecars from peers.
start, count := time.Now(), computeTotalCount(indicesByRootToQuery)
fromPeersResult, err := tryRequestingColumnsFromPeers(params, roBlocks, slotsWithCommitments, indicesByRootToQuery)
start, count := time.Now(), computeTotalCount(missingIndicesByRoot)
fromPeersResult, err := tryRequestingColumnsFromPeers(params, roBlocks, slotsWithCommitments, missingIndicesByRoot)
if err != nil {
return nil, errors.Wrap(err, "request from peers")
}
log.WithFields(logrus.Fields{"duration": time.Since(start), "count": count}).Debug("Requested data column sidecars from peers")
for root, verifiedSidecars := range fromPeersResult {
result[root] = append(result[root], verifiedSidecars...)
// Step 3c: If needed, try to reconstruct missing sidecars from storage and fetched data.
fromReconstructionResult, err := tryReconstructFromStorageAndPeers(params.Storage, fromPeersResult, indicesMap, missingIndicesByRoot)
if err != nil {
return nil, errors.Wrap(err, "reconstruct from storage and peers")
}
// Step 3c: Load the stored sidecars.
for root, indicesStored := range indicesByRootStored {
requestedColumns, err := tryGetDirectColumns(params.Storage, root, sortedSliceFromMap(indicesStored))
for root, verifiedSidecars := range fromReconstructionResult {
result[root] = verifiedSidecars
}
for root := range fromPeersResult {
if _, ok := fromReconstructionResult[root]; ok {
// We already have what we need from peers + reconstruction
continue
}
result[root] = append(result[root], fromPeersResult[root]...)
storedIndices := indicesByRootStored[root]
if len(storedIndices) == 0 {
continue
}
storedColumns, err := tryGetStoredColumns(params.Storage, root, sortedSliceFromMap(storedIndices))
if err != nil {
return nil, errors.Wrapf(err, "try get direct columns for root %#x", root)
}
result[root] = append(result[root], requestedColumns...)
result[root] = append(result[root], storedColumns...)
}
return result, nil
}
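// Illustrative caller sketch (not part of this diff): how DataColumnSidecarsParams and
// FetchDataColumnSidecars fit together. The service field names used below (s.cfg.chain,
// s.cfg.p2p, s.rateLimiter, s.cfg.dataColumnStorage, s.newColumnsVerifier) are assumptions
// made for the example, not names taken from this change.
//
//	params := DataColumnSidecarsParams{
//		Ctx:                     ctx,
//		Tor:                     s.cfg.chain, // anything implementing blockchain.TemporalOracle
//		P2P:                     s.cfg.p2p,
//		RateLimiter:             s.rateLimiter,
//		CtxMap:                  ctxMap,
//		Storage:                 s.cfg.dataColumnStorage,
//		NewVerifier:             s.newColumnsVerifier,
//		DownscorePeerOnRPCFault: true,
//	}
//	// indices: the column indices this node needs, keyed by index.
//	sidecarsByRoot, err := FetchDataColumnSidecars(params, roBlocks, indices)
//	if err != nil {
//		return errors.Wrap(err, "fetch data column sidecars")
//	}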
// tryGetDirectColumns attempts to retrieve all requested columns directly from storage
// if they are all available. Returns the columns if successful, and nil if at least one
// tryGetStoredColumns attempts to retrieve all requested data column sidecars directly from storage
// if they are all available. Returns the sidecars if successful, and nil if at least one
// requested sidecar is not available in the storage.
func tryGetDirectColumns(storage filesystem.DataColumnStorageReader, blockRoot [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error) {
func tryGetStoredColumns(storage filesystem.DataColumnStorageReader, blockRoot [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error) {
// Check if all requested indices are present in cache
storedIndices := storage.Summary(blockRoot).Stored()
allRequestedPresent := true
@@ -234,9 +251,9 @@ func categorizeIndices(storage filesystem.DataColumnStorageReader, blockRoot [fi
// It explores the connected peers to find those that are expected to custody the requested columns
// and returns only once every requested column has either been retrieved or has been
// requested from all possible peers.
// Returns a map of block roots to their verified read-only data column sidecars and a map of block roots.
// Returns an error if at least one requested column could not be retrieved.
// WARNING: This function alters `missingIndicesByRoot`. The caller should NOT use it after running this function.
// WARNING: This function alters `missingIndicesByRoot` by removing successfully retrieved columns.
// After running this function, the caller can inspect the (modified) `missingIndicesByRoot` map
// to see whether any sidecars are still missing.
func tryRequestingColumnsFromPeers(
p DataColumnSidecarsParams,
roBlocks []blocks.ROBlock,
@@ -284,8 +301,7 @@ func tryRequestingColumnsFromPeers(
}
// Remove the verified sidecars from the missing indices map and compute the new verified columns by root.
newMissingIndicesByRoot, localVerifiedColumnsByRoot := updateResults(verifiedRoDataColumnSidecars, missingIndicesByRoot)
missingIndicesByRoot = newMissingIndicesByRoot
localVerifiedColumnsByRoot := updateResults(verifiedRoDataColumnSidecars, missingIndicesByRoot)
for root, verifiedRoDataColumns := range localVerifiedColumnsByRoot {
verifiedColumnsByRoot[root] = append(verifiedColumnsByRoot[root], verifiedRoDataColumns...)
}
@@ -297,11 +313,67 @@ func tryRequestingColumnsFromPeers(
}
}
if len(missingIndicesByRoot) > 0 {
return nil, errors.New("not all requested data column sidecars were retrieved from peers")
return verifiedColumnsByRoot, nil
}
// tryReconstructFromStorageAndPeers attempts to reconstruct missing data column sidecars
// using the data available in the storage and the data fetched from peers.
// If, for at least one root, the reconstruction is not possible, an error is returned.
func tryReconstructFromStorageAndPeers(
storage filesystem.DataColumnStorageReader,
fromPeersByRoot map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn,
indices map[uint64]bool,
missingIndicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
) (map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn, error) {
if len(missingIndicesByRoot) == 0 {
// Nothing to do, return early.
return nil, nil
}
return verifiedColumnsByRoot, nil
minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
start := time.Now()
result := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn, len(missingIndicesByRoot))
for root := range missingIndicesByRoot {
// Check if a reconstruction is possible based on what we have from the store and fetched from peers.
summary := storage.Summary(root)
storedCount := summary.Count()
fetchedCount := uint64(len(fromPeersByRoot[root]))
if storedCount+fetchedCount < minimumColumnsCountToReconstruct {
return nil, errors.Errorf("cannot reconstruct all needed columns for root %#x. stored: %d, fetched: %d, minimum: %d", root, storedCount, fetchedCount, minimumColumnsCountToReconstruct)
}
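// As a worked example: under typical mainnet PeerDAS parameters there are 128 columns and
// roughly half of them are needed to reconstruct, so a node with 50 columns in storage would
// need at least 14 more from peers before reconstruction can proceed. The exact threshold is
// whatever MinimumColumnCountToReconstruct() returns; the figures above are illustrative only.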
// Load all we have in the store.
storedSidecars, err := storage.Get(root, nil)
if err != nil {
return nil, errors.Wrapf(err, "failed to get stored sidecars for root %#x", root)
}
sidecars := make([]blocks.VerifiedRODataColumn, 0, storedCount+fetchedCount)
sidecars = append(sidecars, storedSidecars...)
sidecars = append(sidecars, fromPeersByRoot[root]...)
// Attempt reconstruction.
reconstructedSidecars, err := peerdas.ReconstructDataColumnSidecars(sidecars)
if err != nil {
return nil, errors.Wrapf(err, "failed to reconstruct data columns for root %#x", root)
}
// Select only sidecars we need.
for _, sidecar := range reconstructedSidecars {
if indices[sidecar.Index] {
result[root] = append(result[root], sidecar)
}
}
}
log.WithFields(logrus.Fields{
"rootCount": len(missingIndicesByRoot),
"elapsed": time.Since(start),
}).Debug("Reconstructed from storage and peers")
return result, nil
}
// selectPeers selects peers to query the sidecars.
@@ -314,7 +386,7 @@ func selectPeers(
count int,
origIndicesByRootByPeer map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool,
) (map[goPeer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool, error) {
const randomPeerTimeout = 30 * time.Second
const randomPeerTimeout = 2 * time.Minute
// Select peers to query the missing sidecars from.
indicesByRootByPeer := copyIndicesByRootByPeer(origIndicesByRootByPeer)
@@ -371,12 +443,14 @@ func selectPeers(
}
// updateResults updates the missing indices and verified sidecars maps based on the newly verified sidecars.
// WARNING: This function alters `missingIndicesByRoot` by removing verified sidecars.
// After running this function, the caller can inspect the (modified) `missingIndicesByRoot` map
// to see whether any sidecars are still missing.
func updateResults(
verifiedSidecars []blocks.VerifiedRODataColumn,
origMissingIndicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
) (map[[fieldparams.RootLength]byte]map[uint64]bool, map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn) {
missingIndicesByRoot map[[fieldparams.RootLength]byte]map[uint64]bool,
) map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn {
// Copy the original map to avoid modifying it directly.
missingIndicesByRoot := copyIndicesByRoot(origMissingIndicesByRoot)
verifiedSidecarsByRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
for _, verifiedSidecar := range verifiedSidecars {
blockRoot := verifiedSidecar.BlockRoot()
@@ -393,7 +467,7 @@ func updateResults(
}
}
return missingIndicesByRoot, verifiedSidecarsByRoot
return verifiedSidecarsByRoot
}
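// Caller-side sketch (illustrative, not part of this diff): because updateResults now mutates
// the missing-indices map in place, the caller keeps using the same map after the call to
// decide whether anything still needs to be reconstructed.
//
//	verifiedByRoot := updateResults(verifiedSidecars, missingIndicesByRoot)
//	for root, cols := range verifiedByRoot {
//		result[root] = append(result[root], cols...)
//	}
//	if len(missingIndicesByRoot) > 0 {
//		// Some requested sidecars were not delivered; fall back to reconstruction.
//	}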
// fetchDataColumnSidecarsFromPeers retrieves data column sidecars from peers.
@@ -783,15 +857,13 @@ func randomPeer(
for ctx.Err() == nil {
nonRateLimitedPeers := make([]goPeer.ID, 0, len(indicesByRootByPeer))
for peer := range indicesByRootByPeer {
remaining := int64(math.MaxInt64)
if rateLimiter != nil {
remaining = rateLimiter.Remaining(peer.String())
}
if remaining >= int64(count) {
if rateLimiter == nil || rateLimiter.Remaining(peer.String()) >= int64(count) {
nonRateLimitedPeers = append(nonRateLimitedPeers, peer)
}
}
slices.Sort(nonRateLimitedPeers)
if len(nonRateLimitedPeers) == 0 {
log.WithFields(logrus.Fields{
"peerCount": peerCount,


@@ -3,10 +3,12 @@ package sync
import (
"context"
"fmt"
"math/rand"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
@@ -19,7 +21,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -36,6 +37,7 @@ func TestFetchDataColumnSidecars(t *testing.T) {
// Slot 2: No commitment
// Slot 3: All sidecars are saved except the needed ones
// Slot 4: Some sidecars are in the storage, others have to be retrieved from peers.
// Slot 5: Some sidecars are in the storage, others have to be retrieved from peers but peers do not deliver all requested sidecars.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
@@ -93,6 +95,27 @@ func TestFetchDataColumnSidecars(t *testing.T) {
err = storage.Save(toStore4)
require.NoError(t, err)
// Block 5
minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
block5, _, verifiedSidecars5 := util.GenerateTestFuluBlockWithSidecars(t, blobCount, util.WithSlot(5))
root5 := block5.Root()
toStoreCount := minimumColumnsCountToReconstruct - 1
toStore5 := make([]blocks.VerifiedRODataColumn, 0, toStoreCount)
for i := uint64(0); uint64(len(toStore5)) < toStoreCount; i++ {
sidecar := verifiedSidecars5[minimumColumnsCountToReconstruct+i]
if sidecar.Index == 81 {
continue
}
toStore5 = append(toStore5, sidecar)
}
err = storage.Save(toStore5)
require.NoError(t, err)
// Custody columns with this private key and 4-cgc: 31, 81, 97, 105
privateKeyBytes := [32]byte{1}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes[:])
require.NoError(t, err)
@@ -105,12 +128,12 @@ func TestFetchDataColumnSidecars(t *testing.T) {
p2p.Connect(other)
p2p.Peers().SetChainState(other.PeerID(), &ethpb.StatusV2{
HeadSlot: 4,
HeadSlot: 5,
})
expectedRequest := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 4,
Count: 1,
Count: 2,
Columns: []uint64{31, 81},
}
@@ -138,6 +161,9 @@ func TestFetchDataColumnSidecars(t *testing.T) {
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), verifiedSidecars4[81].DataColumnSidecar)
assert.NoError(t, err)
err = WriteDataColumnSidecarChunk(stream, clock, other.Encoding(), verifiedSidecars5[81].DataColumnSidecar)
assert.NoError(t, err)
err = stream.CloseWrite()
assert.NoError(t, err)
})
@@ -157,9 +183,10 @@ func TestFetchDataColumnSidecars(t *testing.T) {
// no root2 (no commitments in this block)
root3: {verifiedSidecars3[31], verifiedSidecars3[81], verifiedSidecars3[106]},
root4: {verifiedSidecars4[31], verifiedSidecars4[81], verifiedSidecars4[106]},
root5: {verifiedSidecars5[31], verifiedSidecars5[81], verifiedSidecars5[106]},
}
blocks := []blocks.ROBlock{block1, block2, block3, block4}
blocks := []blocks.ROBlock{block1, block2, block3, block4, block5}
actual, err := FetchDataColumnSidecars(params, blocks, indices)
require.NoError(t, err)
@@ -202,7 +229,7 @@ func TestCategorizeIndices(t *testing.T) {
func TestSelectPeers(t *testing.T) {
const (
count = 3
seed = 46
seed = 42
)
params := DataColumnSidecarsParams{
@@ -210,7 +237,7 @@ func TestSelectPeers(t *testing.T) {
RateLimiter: leakybucket.NewCollector(1., 10, time.Second, false /* deleteEmptyBuckets */),
}
randomSource := rand.NewGenerator()
randomSource := rand.New(rand.NewSource(seed))
indicesByRootByPeer := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
"peer1": {
@@ -225,19 +252,7 @@ func TestSelectPeers(t *testing.T) {
},
}
expected_1 := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
"peer1": {
{1}: {12: true, 13: true},
{2}: {13: true, 14: true, 15: true},
{3}: {14: true, 15: true},
},
"peer2": {
{1}: {14: true},
{3}: {16: true},
},
}
expected_2 := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
expected := map[peer.ID]map[[fieldparams.RootLength]byte]map[uint64]bool{
"peer1": {
{1}: {12: true},
{3}: {15: true},
@@ -251,11 +266,6 @@ func TestSelectPeers(t *testing.T) {
actual, err := selectPeers(params, randomSource, count, indicesByRootByPeer)
expected := expected_1
if len(actual["peer1"]) == 2 {
expected = expected_2
}
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for peerID := range expected {
@@ -291,8 +301,8 @@ func TestUpdateResults(t *testing.T) {
verifiedSidecars[2].BlockRoot(): {verifiedSidecars[2], verifiedSidecars[3]},
}
actualMissingIndicesByRoot, actualVerifiedSidecarsByRoot := updateResults(verifiedSidecars, missingIndicesByRoot)
require.DeepEqual(t, expectedMissingIndicesByRoot, actualMissingIndicesByRoot)
actualVerifiedSidecarsByRoot := updateResults(verifiedSidecars, missingIndicesByRoot)
require.DeepEqual(t, expectedMissingIndicesByRoot, missingIndicesByRoot)
require.DeepEqual(t, expectedVerifiedSidecarsByRoot, actualVerifiedSidecarsByRoot)
}
@@ -857,8 +867,8 @@ func TestComputeIndicesByRootByPeer(t *testing.T) {
func TestRandomPeer(t *testing.T) {
// Fixed seed.
const seed = 42
randomSource := rand.NewGenerator()
const seed = 43
randomSource := rand.New(rand.NewSource(seed))
t.Run("no peers", func(t *testing.T) {
pid, err := randomPeer(t.Context(), randomSource, leakybucket.NewCollector(4, 8, time.Second, false /* deleteEmptyBuckets */), 1, nil)
@@ -889,7 +899,11 @@ func TestRandomPeer(t *testing.T) {
pid, err := randomPeer(t.Context(), randomSource, collector, count, indicesByRootByPeer)
require.NoError(t, err)
require.Equal(t, true, map[peer.ID]bool{peer1: true, peer2: true, peer3: true}[pid])
require.Equal(t, peer1, pid)
pid, err = randomPeer(t.Context(), randomSource, collector, count, indicesByRootByPeer)
require.NoError(t, err)
require.Equal(t, peer2, pid)
})
}


@@ -12,6 +12,7 @@ import (
// forkWatcher is a background routine that watches for incoming forks. Depending on the epoch,
// it is in charge of subscribing/unsubscribing the relevant topics at the fork boundaries.
func (s *Service) forkWatcher() {
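// Block until initial sync has completed before starting the per-slot fork checks.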
<-s.initialSyncComplete
slotTicker := slots.NewSlotTicker(s.cfg.clock.GenesisTime(), params.BeaconConfig().SecondsPerSlot)
for {
select {
@@ -28,9 +29,6 @@ func (s *Service) forkWatcher() {
log.WithError(err).Error("Unable to check for fork in the previous epoch")
continue
}
// Broadcast BLS changes at the Capella fork boundary
s.broadcastBLSChanges(currSlot)
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
slotTicker.Done()


@@ -2,96 +2,84 @@ package sync
import (
"context"
"fmt"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func defaultClockWithTimeAtEpoch(epoch primitives.Epoch) *startup.Clock {
now := genesis.Time().Add(params.EpochsDuration(epoch, params.BeaconConfig()))
return startup.NewClock(genesis.Time(), genesis.ValidatorsRoot(), startup.WithTimeAsNow(now))
}
func testForkWatcherService(t *testing.T, current primitives.Epoch) *Service {
closedChan := make(chan struct{})
close(closedChan)
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: genesis.Time(),
ValidatorsRoot: genesis.ValidatorsRoot(),
}
ctx, cancel := context.WithTimeout(t.Context(), 10*time.Millisecond)
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: defaultClockWithTimeAtEpoch(current),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
initialSyncComplete: closedChan,
}
return r
}
func TestService_CheckForNextEpochFork(t *testing.T) {
closedChan := make(chan struct{})
close(closedChan)
params.SetupTestConfigCleanup(t)
genesis.StoreEmbeddedDuringTest(t, params.BeaconConfig().ConfigName)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 1096*2
params.BeaconConfig().InitializeForkSchedule()
tests := []struct {
name string
svcCreator func(t *testing.T) *Service
currEpoch primitives.Epoch
wantErr bool
postSvcCheck func(t *testing.T, s *Service)
name string
svcCreator func(t *testing.T) *Service
checkRegistration func(t *testing.T, s *Service)
forkEpoch primitives.Epoch
epochAtRegistration func(primitives.Epoch) primitives.Epoch
nextForkEpoch primitives.Epoch
}{
{
name: "no fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(time.Duration(-params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().SlotsPerEpoch))) * time.Second)
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 10,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
},
name: "no fork in the next epoch",
forkEpoch: params.BeaconConfig().AltairForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 2 },
nextForkEpoch: params.BeaconConfig().BellatrixForkEpoch,
checkRegistration: func(t *testing.T, s *Service) {},
},
{
name: "altair fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
name: "altair fork in the next epoch",
forkEpoch: params.BeaconConfig().AltairForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
nextForkEpoch: params.BeaconConfig().BellatrixForkEpoch,
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().AltairForkEpoch)
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
@@ -99,375 +87,132 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
assert.Equal(t, true, rpcMap[p2p.RPCBlocksByRangeTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlocksByRootTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCMetaDataTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
expected := fmt.Sprintf(p2p.SyncContributionAndProofSubnetTopicFormat+s.cfg.p2p.Encoding().ProtocolSuffix(), digest)
assert.Equal(t, true, s.subHandler.topicExists(expected), "subnet topic doesn't exist")
// TODO: we should check subcommittee indices here but we need to work with the committee cache to do it properly
/*
subIndices := mapFromCount(params.BeaconConfig().SyncCommitteeSubnetCount)
for idx := range subIndices {
topic := fmt.Sprintf(p2p.SyncCommitteeSubnetTopicFormat, digest, idx)
expected := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
assert.Equal(t, true, s.subHandler.topicExists(expected), fmt.Sprintf("subnet topic %s doesn't exist", expected))
}
*/
},
},
{
name: "bellatrix fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-4 * oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 3
bCfg.BellatrixForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
name: "capella fork in the next epoch",
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().CapellaForkEpoch)
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
expected := fmt.Sprintf(p2p.BlsToExecutionChangeSubnetTopicFormat+s.cfg.p2p.Encoding().ProtocolSuffix(), digest)
assert.Equal(t, true, s.subHandler.topicExists(expected), "subnet topic doesn't exist")
},
forkEpoch: params.BeaconConfig().CapellaForkEpoch,
nextForkEpoch: params.BeaconConfig().DenebForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
{
name: "deneb fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
bCfg := params.BeaconConfig().Copy()
bCfg.DenebForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().DenebForkEpoch)
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
subIndices := mapFromCount(params.BeaconConfig().BlobsidecarSubnetCount)
for idx := range subIndices {
topic := fmt.Sprintf(p2p.BlobSubnetTopicFormat, digest, idx)
expected := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
assert.Equal(t, true, s.subHandler.topicExists(expected), fmt.Sprintf("subnet topic %s doesn't exist", expected))
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
forkEpoch: params.BeaconConfig().DenebForkEpoch,
nextForkEpoch: params.BeaconConfig().ElectraForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
{
name: "electra fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().ElectraForkEpoch)
subIndices := mapFromCount(params.BeaconConfig().BlobsidecarSubnetCountElectra)
for idx := range subIndices {
topic := fmt.Sprintf(p2p.BlobSubnetTopicFormat, digest, idx)
expected := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
assert.Equal(t, true, s.subHandler.topicExists(expected), fmt.Sprintf("subnet topic %s doesn't exist", expected))
}
bCfg := params.BeaconConfig().Copy()
bCfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
forkEpoch: params.BeaconConfig().ElectraForkEpoch,
nextForkEpoch: params.BeaconConfig().FuluForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
{
name: "fulu fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
bCfg := params.BeaconConfig().Copy()
bCfg.FuluForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
trackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
checkRegistration: func(t *testing.T, s *Service) {
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCMetaDataTopicV3+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
forkEpoch: params.BeaconConfig().FuluForkEpoch,
nextForkEpoch: params.BeaconConfig().FuluForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcCreator(t)
if err := s.registerForUpcomingFork(tt.currEpoch); (err != nil) != tt.wantErr {
t.Errorf("registerForUpcomingFork() error = %v, wantErr %v", err, tt.wantErr)
current := tt.epochAtRegistration(tt.forkEpoch)
s := testForkWatcherService(t, current)
wg := attachSpawner(s)
require.NoError(t, s.registerForUpcomingFork(s.cfg.clock.CurrentEpoch()))
wg.Wait()
tt.checkRegistration(t, s)
if current != tt.forkEpoch-1 {
return
}
tt.postSvcCheck(t, s)
// Ensure the topics were registered for the upcoming fork
digest := params.ForkDigest(tt.forkEpoch)
assert.Equal(t, true, s.subHandler.digestExists(digest))
// After this point we are checking deregistration, which doesn't apply if there isn't a higher
// nextForkEpoch.
if tt.forkEpoch >= tt.nextForkEpoch {
return
}
nextDigest := params.ForkDigest(tt.nextForkEpoch)
// Move the clock to just before the next fork epoch and ensure deregistration is correct
wg = attachSpawner(s)
s.cfg.clock = defaultClockWithTimeAtEpoch(tt.nextForkEpoch - 1)
require.NoError(t, s.registerForUpcomingFork(s.cfg.clock.CurrentEpoch()))
wg.Wait()
// deregister as if it is the epoch after the next fork epoch
require.NoError(t, s.deregisterFromPastFork(tt.nextForkEpoch+1))
assert.Equal(t, false, s.subHandler.digestExists(digest))
assert.Equal(t, true, s.subHandler.digestExists(nextDigest))
})
}
}
func TestService_CheckForPreviousEpochFork(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
svcCreator func(t *testing.T) *Service
currEpoch primitives.Epoch
wantErr bool
postSvcCheck func(t *testing.T, s *Service)
}{
{
name: "no fork in the previous epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
clock := startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: clock,
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
err := r.registerRPCHandlers()
assert.NoError(t, err)
return r
},
currEpoch: 10,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
ptcls := s.cfg.p2p.Host().Mux().Protocols()
pMap := make(map[string]bool)
for _, p := range ptcls {
pMap[string(p)] = true
}
assert.Equal(t, true, pMap[p2p.RPCGoodByeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCStatusTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCPingTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCMetaDataTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
},
},
{
name: "altair fork in the previous epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-4 * oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
clock := startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 3
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: clock,
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
prevGenesis := chainService.Genesis
// To allow registration of v1 handlers
chainService.Genesis = time.Now().Add(-1 * oneEpoch())
err := r.registerRPCHandlers()
assert.NoError(t, err)
chainService.Genesis = prevGenesis
previous, err := r.rpcHandlerByTopicFromFork(version.Phase0)
assert.NoError(t, err)
next, err := r.rpcHandlerByTopicFromFork(version.Altair)
assert.NoError(t, err)
handlerByTopic := addedRPCHandlerByTopic(previous, next)
for topic, handler := range handlerByTopic {
r.registerRPC(topic, handler)
}
digest := params.ForkDigest(0)
assert.NoError(t, err)
r.registerSubscribers(0, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
r.registerSubscribers(3, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(0)
assert.Equal(t, false, s.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
assert.Equal(t, true, s.subHandler.digestExists(digest))
ptcls := s.cfg.p2p.Host().Mux().Protocols()
pMap := make(map[string]bool)
for _, p := range ptcls {
pMap[string(p)] = true
}
assert.Equal(t, true, pMap[p2p.RPCGoodByeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCStatusTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCPingTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCMetaDataTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRangeTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRootTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, false, pMap[p2p.RPCMetaDataTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, false, pMap[p2p.RPCBlocksByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, false, pMap[p2p.RPCBlocksByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
},
},
{
name: "bellatrix fork in the previous epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-4 * oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
clock := startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 1
bCfg.BellatrixForkEpoch = 3
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: clock,
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
digest := params.ForkDigest(1)
r.registerSubscribers(1, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
r.registerSubscribers(3, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(1)
assert.Equal(t, false, s.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
assert.Equal(t, true, s.subHandler.digestExists(digest))
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcCreator(t)
if err := s.deregisterFromPastFork(tt.currEpoch); (err != nil) != tt.wantErr {
t.Errorf("registerForUpcomingFork() error = %v, wantErr %v", err, tt.wantErr)
}
tt.postSvcCheck(t, s)
})
func attachSpawner(s *Service) *sync.WaitGroup {
wg := new(sync.WaitGroup)
s.subscriptionSpawner = func(f func()) {
wg.Add(1)
go func() {
defer wg.Done()
f()
}()
}
return wg
}
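// attachSpawner makes the asynchronous subscription work deterministic in tests: every function
// handed to subscriptionSpawner runs in a goroutine tracked by the WaitGroup, so wg.Wait()
// returns only after all fork-registration work has finished. How the production service spawns
// these functions when no spawner is injected is not shown in this diff.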
// oneEpoch returns the duration of one epoch.


@@ -34,6 +34,7 @@ go_library(
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -108,7 +109,9 @@ go_test(
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_paulbellamy_ratecounter//:go_default_library",

Some files were not shown because too many files have changed in this diff.