Compare commits


5 Commits

Author    SHA1         Message                                        Date
satushh   10856fc373   Merge branch 'develop' into fulu-bcn-cfg       2026-01-05 21:11:04 +05:30
satushh   bb3df7415a   Merge branch 'develop' into fulu-bcn-cfg       2026-01-05 20:49:27 +05:30
satushh   565a7de7bf   add update timeout                             2026-01-05 20:48:34 +05:30
satushh   3b2752b18c   changelog                                      2025-12-19 18:01:56 +05:30
satushh   c51a74325e   add missing fulu constants to beacon config    2025-12-19 18:00:43 +05:30
150 changed files with 1439 additions and 4666 deletions

.gitignore (vendored): 3 changes

@@ -44,6 +44,3 @@ tmp
# spectest coverage reports
report.txt
# execution client data
execution/

CHANGELOG.md

@@ -4,66 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.2](https://github.com/prysmaticlabs/prysm/compare/v7.1.1...v7.1.2) - 2026-01-07
Happy new year! This patch release is very small. The main improvement is better management of pending attestation aggregation via [PR 16153](https://github.com/OffchainLabs/prysm/pull/16153).
### Added
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16169)
### Changed
- The `/eth/v2/beacon/pool/attestations` and `/eth/v1/beacon/pool/sync_committees` endpoints now return a 503 error if the node is still syncing. The REST API also now broadcasts immediately, in the same way as gRPC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16152)
- `validateDataColumn`: Remove error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16157)
- Pending aggregates: when multiple aggregated attestations in the pending queue differ only by the aggregator index, only process one of them (a sketch of this deduplication follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16153)
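The pending-aggregate change is easy to picture as keying the queue by everything except the aggregator index. Below is a minimal, self-contained sketch of that idea; `pendingAggregate` and `dedupKey` are hypothetical stand-ins, not Prysm's actual types.

```go
package main

import "fmt"

type pendingAggregate struct {
	AggregatorIndex uint64
	DataRoot        [32]byte // hash tree root of the attestation data
	AggregationBits []byte   // participation bitfield
}

// dedupKey identifies an aggregate by everything except the aggregator index.
type dedupKey struct {
	dataRoot [32]byte
	bits     string
}

// dedupe keeps only one aggregate per (data root, aggregation bits) pair,
// dropping duplicates that differ solely by aggregator index.
func dedupe(queue []pendingAggregate) []pendingAggregate {
	seen := make(map[dedupKey]struct{}, len(queue))
	out := make([]pendingAggregate, 0, len(queue))
	for _, agg := range queue {
		k := dedupKey{dataRoot: agg.DataRoot, bits: string(agg.AggregationBits)}
		if _, ok := seen[k]; ok {
			continue // same attestation content, different aggregator: skip
		}
		seen[k] = struct{}{}
		out = append(out, agg)
	}
	return out
}

func main() {
	a := pendingAggregate{AggregatorIndex: 1, DataRoot: [32]byte{0xAA}, AggregationBits: []byte{0b1011}}
	b := a
	b.AggregatorIndex = 2 // only the aggregator differs
	fmt.Println(len(dedupe([]pendingAggregate{a, b}))) // 1
}
```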
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16151)
- Do not process slots and copy states for next epoch proposers after Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16168)
## [v7.1.1](https://github.com/prysmaticlabs/prysm/compare/v7.1.0...v7.1.1) - 2025-12-18
Release highlights:
- Fixed potential deadlock scenario in data column batch verification
- Improved processing and metrics for cells and proofs
We are aware of [an issue](https://github.com/OffchainLabs/prysm/issues/16160) where Prysm struggles to sync from an out-of-sync state. We will publish another release before the end of the year to address it.
Our postmortem for the December 4th mainnet incident has been published on our [documentation site](https://prysm.offchainlabs.com/docs/misc/mainnet-postmortems/).
### Added
- Track the dependent root of the latest finalized checkpoint in forkchoice. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16103)
- Proposal design document for implementing default graffiti. Graffiti is currently empty by default; the proposal is to make it of the form `GE168dPR63af`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15983)
- Add support for detecting and logging per-address reachability via libp2p AutoNAT v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16100)
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement (a sketch of such a check follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16134)
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
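The `httputil.HandleError` rule is the kind of check that fits naturally in a `golang.org/x/tools/go/analysis` pass. The sketch below is an illustrative reconstruction under that assumption, not the analyzer merged in the PR; for simplicity it also flags a `HandleError` that is the last statement of a block, which the real analyzer may treat differently.

```go
package handlererr

import (
	"go/ast"

	"golang.org/x/tools/go/analysis"
)

var Analyzer = &analysis.Analyzer{
	Name: "handleerrorreturn",
	Doc:  "checks that httputil.HandleError calls are followed by a return statement",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		ast.Inspect(file, func(n ast.Node) bool {
			block, ok := n.(*ast.BlockStmt)
			if !ok {
				return true
			}
			for i, stmt := range block.List {
				expr, ok := stmt.(*ast.ExprStmt)
				if !ok {
					continue
				}
				call, ok := expr.X.(*ast.CallExpr)
				if !ok || !isHandleError(call) {
					continue
				}
				// The HandleError call must be immediately followed by a return.
				if i == len(block.List)-1 || !isReturn(block.List[i+1]) {
					pass.Reportf(call.Pos(), "httputil.HandleError must be followed by a return statement")
				}
			}
			return true
		})
	}
	return nil, nil
}

// isHandleError matches calls of the syntactic form httputil.HandleError(...).
func isHandleError(call *ast.CallExpr) bool {
	sel, ok := call.Fun.(*ast.SelectorExpr)
	if !ok || sel.Sel.Name != "HandleError" {
		return false
	}
	pkg, ok := sel.X.(*ast.Ident)
	return ok && pkg.Name == "httputil"
}

func isReturn(stmt ast.Stmt) bool {
	_, ok := stmt.(*ast.ReturnStmt)
	return ok
}
```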
### Changed
- Optimized `MigrateToCold` by removing a brute-force for loop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16101)
- The e2e sync committee evaluator now skips the first slot after startup. The fork epoch is already skipped for these checks; this extra skip applies only at startup, because Altair always starts from epoch 0 and validators need time to warm up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16145)
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs (a sketch of this parallelization follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
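Both parallelization changes follow the same shape: fan the per-blob KZG work out to goroutines that each write to their own result slot, then wait. A minimal sketch, assuming a hypothetical `computeCellsAndProofs` stand-in for the real per-blob call:

```go
package main

import (
	"fmt"
	"sync"
)

type blob []byte
type cellsAndProofs struct{ cells, proofs [][]byte }

// computeCellsAndProofs is a stand-in for the per-blob KZG computation.
func computeCellsAndProofs(b blob) cellsAndProofs {
	return cellsAndProofs{}
}

// computeAll computes cells and proofs for every blob in parallel.
// Each goroutine writes to its own slice index, so no locking is needed.
func computeAll(blobs []blob) []cellsAndProofs {
	results := make([]cellsAndProofs, len(blobs))
	var wg sync.WaitGroup
	for i, b := range blobs {
		wg.Add(1)
		go func(i int, b blob) {
			defer wg.Done()
			results[i] = computeCellsAndProofs(b)
		}(i, b)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(len(computeAll(make([]blob, 4)))) // 4
}
```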
### Removed
- Removed an unnecessary copy from `Eth1DataHasEnoughSupport`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16118)
### Fixed
- Fixed an incorrect constructor return type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16084)
- Fixed possible race when validating two attestations at the same time. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16105)
- Fix missing return after version header check in SubmitAttesterSlashingsV2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16126)
- Fixed a deadlock in data column gossip KZG batch verification when a caller times out before the result can be delivered. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16141)
- Fixed a state replay issue in the REST API caused by the attester and sync committee duties endpoints. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16136)
- Do not error when the committee has been computed correctly but updating the cache failed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16142)
- Prevent blocked sends to the KZG batch verifier when the caller's context is already canceled, avoiding useless queueing and potential hangs (a sketch of this guard follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16144)
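The last fix is the classic context-aware channel send: check the caller's context before and during the send so a canceled caller never queues work or blocks forever. A minimal sketch, with `verifyRequest` and the verifier channel as hypothetical stand-ins for the KZG batch verifier's types:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type verifyRequest struct {
	result chan error
}

func submit(ctx context.Context, verifier chan<- verifyRequest, req verifyRequest) error {
	// Fast path: do not even attempt to queue if the caller already gave up.
	if err := ctx.Err(); err != nil {
		return err
	}
	// Blocking send that aborts as soon as the caller's context is canceled,
	// preventing a hang when the verifier is busy or the caller times out.
	select {
	case verifier <- req:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	verifier := make(chan verifyRequest) // unbuffered with no reader: a bare send would block forever
	err := submit(ctx, verifier, verifyRequest{result: make(chan error, 1)})
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}
```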
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features and fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.


@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.7.0-alpha.0"
consensus_spec_version = "v1.6.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-b+rJOuVqq+Dy53quPcNYcQwPFoMU7Wp7tdUVe7n0g8w=",
"minimal": "sha256-qxRIxtjPxVsVCY90WsBJKhk0027XDSmhjnRvRN14V1c=",
"mainnet": "sha256-NsuOQG3LzeiEE1TrWuvQ6vu6BboHv7h7f/RTS0pWkCs=",
"general": "sha256-54hTaUNF9nLg+hRr3oHoq0yjZpW3MNiiUUuCQu6Rajk=",
"minimal": "sha256-1JHIGg3gVMjvcGYRHR5cwdDgOvX47oR/MWp6gyAeZfA=",
"mainnet": "sha256-292h3W2Ffts0YExgDTyxYe9Os7R0bZIXuAaMO8P6kl4=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-hwNdUBgdBrkk6pWIpNYbzbwswUuOu6AMD2exN8uv+QQ=",
integrity = "sha256-VzBgrEokvYSMIIXVnSA5XS9I3m9oxpvToQGxC1N5lzw=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)


@@ -17,7 +17,6 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpbv1 "github.com/OffchainLabs/prysm/v7/proto/eth/v1"
@@ -131,10 +130,12 @@ func TestService_ReceiveBlock(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
// Hacky sleep, should use a better way to be able to resolve the race
// between event being sent out and processed.
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
},
},
{
@@ -221,10 +222,10 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
})
wg.Wait()
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
// Verify fork choice has processed the block. (Genesis block and the new block)
assert.Equal(t, 2, s.cfg.ForkChoiceStore.NodeCount())
}
@@ -264,10 +265,10 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
},
},
}
@@ -511,9 +512,8 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
@@ -552,9 +552,8 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
@@ -597,13 +596,13 @@ func TestProcessLightClientBootstrap(t *testing.T) {
s.executePostFinalizationTasks(s.ctx, l.AttestedState)
// Wait for the light client bootstrap to be saved (runs in goroutine)
var b interfaces.LightClientBootstrap
require.Eventually(t, func() bool {
var err error
b, err = s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
return err == nil && b != nil
}, 5*time.Second, 50*time.Millisecond, "Light client bootstrap was not saved within timeout")
// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)
// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
require.NoError(t, err)
require.NotNil(t, b)
btst, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.FinalizedState.Slot(), l.FinalizedState, l.FinalizedBlock)
require.NoError(t, err)


@@ -26,7 +26,6 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",


@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -213,12 +212,7 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
if features.Get().LowValcountSweep {
bound := min(uint64(st.NumValidators()), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex += primitives.ValidatorIndex(bound)
} else {
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1


@@ -1,45 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["bid.go"],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["bid_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/validator-client:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)


@@ -1,197 +0,0 @@
package gloas
import (
"errors"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// ProcessExecutionPayloadBid processes a signed execution payload bid in the Gloas fork.
// Spec v1.7.0-alpha.0 (pseudocode):
// process_execution_payload_bid(state: BeaconState, block: BeaconBlock):
//
// signed_bid = block.body.signed_execution_payload_bid
// bid = signed_bid.message
// builder_index = bid.builder_index
// amount = bid.value
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// assert amount == 0
// assert signed_bid.signature == G2_POINT_AT_INFINITY
// else:
// assert is_active_builder(state, builder_index)
// assert can_builder_cover_bid(state, builder_index, amount)
// assert verify_execution_payload_bid_signature(state, signed_bid)
// assert bid.slot == block.slot
// assert bid.parent_block_hash == state.latest_block_hash
// assert bid.parent_block_root == block.parent_root
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state))
// if amount > 0:
// state.builder_pending_payments[...] = BuilderPendingPayment(weight=0, withdrawal=BuilderPendingWithdrawal(fee_recipient=bid.fee_recipient, amount=amount, builder_index=builder_index))
// state.latest_execution_payload_bid = bid
func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) error {
signedBid, err := block.Body().SignedExecutionPayloadBid()
if err != nil {
return fmt.Errorf("failed to get signed execution payload bid: %w", err)
}
wrappedBid, err := blocks.WrappedROSignedExecutionPayloadBid(signedBid)
if err != nil {
return fmt.Errorf("failed to wrap signed bid: %w", err)
}
bid, err := wrappedBid.Bid()
if err != nil {
return fmt.Errorf("failed to get bid from wrapped bid: %w", err)
}
builderIndex := bid.BuilderIndex()
amount := bid.Value()
if builderIndex == params.BeaconConfig().BuilderIndexSelfBuild {
if amount != 0 {
return fmt.Errorf("self-build amount must be zero, got %d", amount)
}
if wrappedBid.Signature() != common.InfiniteSignature {
return errors.New("self-build signature must be point at infinity")
}
} else {
ok, err := st.IsActiveBuilder(builderIndex)
if err != nil {
return fmt.Errorf("builder active check failed: %w", err)
}
if !ok {
return fmt.Errorf("builder %d is not active", builderIndex)
}
ok, err = st.CanBuilderCoverBid(builderIndex, amount)
if err != nil {
return fmt.Errorf("builder balance check failed: %w", err)
}
if !ok {
return fmt.Errorf("builder %d cannot cover bid amount %d", builderIndex, amount)
}
if err := validatePayloadBidSignature(st, wrappedBid); err != nil {
return fmt.Errorf("bid signature validation failed: %w", err)
}
}
if err := validateBidConsistency(st, bid, block); err != nil {
return fmt.Errorf("bid consistency validation failed: %w", err)
}
if amount > 0 {
feeRecipient := bid.FeeRecipient()
pendingPayment := &ethpb.BuilderPendingPayment{
Weight: 0,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: feeRecipient[:],
Amount: amount,
BuilderIndex: builderIndex,
},
}
slotIndex := params.BeaconConfig().SlotsPerEpoch + (bid.Slot() % params.BeaconConfig().SlotsPerEpoch)
if err := st.SetBuilderPendingPayment(slotIndex, pendingPayment); err != nil {
return fmt.Errorf("failed to set pending payment: %w", err)
}
}
if err := st.SetExecutionPayloadBid(bid); err != nil {
return fmt.Errorf("failed to cache execution payload bid: %w", err)
}
return nil
}
// validateBidConsistency checks that the bid is consistent with the current beacon state.
func validateBidConsistency(st state.BeaconState, bid interfaces.ROExecutionPayloadBid, block interfaces.ReadOnlyBeaconBlock) error {
if bid.Slot() != block.Slot() {
return fmt.Errorf("bid slot %d does not match block slot %d", bid.Slot(), block.Slot())
}
latestBlockHash, err := st.LatestBlockHash()
if err != nil {
return fmt.Errorf("failed to get latest block hash: %w", err)
}
if bid.ParentBlockHash() != latestBlockHash {
return fmt.Errorf("bid parent block hash mismatch: got %x, expected %x",
bid.ParentBlockHash(), latestBlockHash)
}
if bid.ParentBlockRoot() != block.ParentRoot() {
return fmt.Errorf("bid parent block root mismatch: got %x, expected %x",
bid.ParentBlockRoot(), block.ParentRoot())
}
randaoMix, err := helpers.RandaoMix(st, slots.ToEpoch(st.Slot()))
if err != nil {
return fmt.Errorf("failed to get randao mix: %w", err)
}
if bid.PrevRandao() != [32]byte(randaoMix) {
return fmt.Errorf("bid prev randao mismatch: got %x, expected %x", bid.PrevRandao(), randaoMix)
}
return nil
}
// validatePayloadBidSignature verifies the BLS signature on a signed execution payload bid.
// It validates that the signature was created by the builder specified in the bid
// using the appropriate domain for the beacon builder.
func validatePayloadBidSignature(st state.ReadOnlyBeaconState, signedBid interfaces.ROSignedExecutionPayloadBid) error {
bid, err := signedBid.Bid()
if err != nil {
return fmt.Errorf("failed to get bid: %w", err)
}
builder, err := st.BuilderAtIndex(bid.BuilderIndex())
if err != nil {
return fmt.Errorf("failed to get builder: %w", err)
}
if len(builder.Pubkey) != fieldparams.BLSPubkeyLength {
return fmt.Errorf("invalid builder public key length: %d", len(builder.Pubkey))
}
publicKey, err := bls.PublicKeyFromBytes(builder.Pubkey)
if err != nil {
return fmt.Errorf("invalid builder public key: %w", err)
}
signatureBytes := signedBid.Signature()
signature, err := bls.SignatureFromBytes(signatureBytes[:])
if err != nil {
return fmt.Errorf("invalid signature format: %w", err)
}
currentEpoch := slots.ToEpoch(bid.Slot())
domain, err := signing.Domain(
st.Fork(),
currentEpoch,
params.BeaconConfig().DomainBeaconBuilder,
st.GenesisValidatorsRoot(),
)
if err != nil {
return fmt.Errorf("failed to compute signing domain: %w", err)
}
signingRoot, err := signedBid.SigningRoot(domain)
if err != nil {
return fmt.Errorf("failed to compute signing root: %w", err)
}
if !signature.Verify(publicKey, signingRoot[:]) {
return fmt.Errorf("signature verification failed: %w", signing.ErrSigFailedToVerify)
}
return nil
}


@@ -1,628 +0,0 @@
package gloas
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
validatorpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1/validator-client"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
fastssz "github.com/prysmaticlabs/fastssz"
"google.golang.org/protobuf/proto"
)
type stubBlockBody struct {
signedBid *ethpb.SignedExecutionPayloadBid
}
func (s stubBlockBody) Version() int { return version.Gloas }
func (s stubBlockBody) RandaoReveal() [96]byte { return [96]byte{} }
func (s stubBlockBody) Eth1Data() *ethpb.Eth1Data { return nil }
func (s stubBlockBody) Graffiti() [32]byte { return [32]byte{} }
func (s stubBlockBody) ProposerSlashings() []*ethpb.ProposerSlashing { return nil }
func (s stubBlockBody) AttesterSlashings() []ethpb.AttSlashing { return nil }
func (s stubBlockBody) Attestations() []ethpb.Att { return nil }
func (s stubBlockBody) Deposits() []*ethpb.Deposit { return nil }
func (s stubBlockBody) VoluntaryExits() []*ethpb.SignedVoluntaryExit { return nil }
func (s stubBlockBody) SyncAggregate() (*ethpb.SyncAggregate, error) { return nil, nil }
func (s stubBlockBody) IsNil() bool { return s.signedBid == nil }
func (s stubBlockBody) HashTreeRoot() ([32]byte, error) { return [32]byte{}, nil }
func (s stubBlockBody) Proto() (proto.Message, error) { return nil, nil }
func (s stubBlockBody) Execution() (interfaces.ExecutionData, error) { return nil, nil }
func (s stubBlockBody) BLSToExecutionChanges() ([]*ethpb.SignedBLSToExecutionChange, error) {
return nil, nil
}
func (s stubBlockBody) BlobKzgCommitments() ([][]byte, error) { return nil, nil }
func (s stubBlockBody) ExecutionRequests() (*enginev1.ExecutionRequests, error) {
return nil, nil
}
func (s stubBlockBody) PayloadAttestations() ([]*ethpb.PayloadAttestation, error) {
return nil, nil
}
func (s stubBlockBody) SignedExecutionPayloadBid() (*ethpb.SignedExecutionPayloadBid, error) {
return s.signedBid, nil
}
func (s stubBlockBody) MarshalSSZ() ([]byte, error) { return nil, nil }
func (s stubBlockBody) MarshalSSZTo([]byte) ([]byte, error) { return nil, nil }
func (s stubBlockBody) UnmarshalSSZ([]byte) error { return nil }
func (s stubBlockBody) SizeSSZ() int { return 0 }
type stubBlock struct {
slot primitives.Slot
proposer primitives.ValidatorIndex
parentRoot [32]byte
body stubBlockBody
v int
}
func (s stubBlock) Slot() primitives.Slot { return s.slot }
func (s stubBlock) ProposerIndex() primitives.ValidatorIndex { return s.proposer }
func (s stubBlock) ParentRoot() [32]byte { return s.parentRoot }
func (s stubBlock) StateRoot() [32]byte { return [32]byte{} }
func (s stubBlock) Body() interfaces.ReadOnlyBeaconBlockBody { return s.body }
func (s stubBlock) IsNil() bool { return false }
func (s stubBlock) IsBlinded() bool { return false }
func (s stubBlock) HashTreeRoot() ([32]byte, error) { return [32]byte{}, nil }
func (s stubBlock) Proto() (proto.Message, error) { return nil, nil }
func (s stubBlock) MarshalSSZ() ([]byte, error) { return nil, nil }
func (s stubBlock) MarshalSSZTo([]byte) ([]byte, error) { return nil, nil }
func (s stubBlock) UnmarshalSSZ([]byte) error { return nil }
func (s stubBlock) SizeSSZ() int { return 0 }
func (s stubBlock) Version() int { return s.v }
func (s stubBlock) AsSignRequestObject() (validatorpb.SignRequestObject, error) {
return nil, nil
}
func (s stubBlock) HashTreeRootWith(*fastssz.Hasher) error { return nil }
func buildGloasState(t *testing.T, slot primitives.Slot, proposerIdx primitives.ValidatorIndex, builderIdx primitives.BuilderIndex, balance uint64, randao [32]byte, latestHash [32]byte, builderPubkey [48]byte) *state_native.BeaconState {
t.Helper()
cfg := params.BeaconConfig()
blockRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
stateRoots := make([][]byte, cfg.SlotsPerHistoricalRoot)
for i := range blockRoots {
blockRoots[i] = bytes.Repeat([]byte{0xAA}, 32)
stateRoots[i] = bytes.Repeat([]byte{0xBB}, 32)
}
randaoMixes := make([][]byte, cfg.EpochsPerHistoricalVector)
for i := range randaoMixes {
randaoMixes[i] = randao[:]
}
withdrawalCreds := make([]byte, 32)
withdrawalCreds[0] = cfg.BuilderWithdrawalPrefixByte
validatorCount := int(proposerIdx) + 1
validators := make([]*ethpb.Validator, validatorCount)
balances := make([]uint64, validatorCount)
for i := range validatorCount {
validators[i] = &ethpb.Validator{
PublicKey: builderPubkey[:],
WithdrawalCredentials: withdrawalCreds,
EffectiveBalance: balance,
Slashed: false,
ActivationEligibilityEpoch: 0,
ActivationEpoch: 0,
ExitEpoch: cfg.FarFutureEpoch,
WithdrawableEpoch: cfg.FarFutureEpoch,
}
balances[i] = balance
}
payments := make([]*ethpb.BuilderPendingPayment, cfg.SlotsPerEpoch*2)
for i := range payments {
payments[i] = &ethpb.BuilderPendingPayment{Withdrawal: &ethpb.BuilderPendingWithdrawal{}}
}
var builders []*ethpb.Builder
if builderIdx != params.BeaconConfig().BuilderIndexSelfBuild {
builderCount := int(builderIdx) + 1
builders = make([]*ethpb.Builder, builderCount)
builders[builderCount-1] = &ethpb.Builder{
Pubkey: builderPubkey[:],
Version: []byte{0},
ExecutionAddress: bytes.Repeat([]byte{0x01}, 20),
Balance: primitives.Gwei(balance),
DepositEpoch: 0,
WithdrawableEpoch: cfg.FarFutureEpoch,
}
}
stProto := &ethpb.BeaconStateGloas{
Slot: slot,
GenesisValidatorsRoot: bytes.Repeat([]byte{0x11}, 32),
Fork: &ethpb.Fork{
CurrentVersion: bytes.Repeat([]byte{0x22}, 4),
PreviousVersion: bytes.Repeat([]byte{0x22}, 4),
Epoch: 0,
},
BlockRoots: blockRoots,
StateRoots: stateRoots,
RandaoMixes: randaoMixes,
Validators: validators,
Balances: balances,
LatestBlockHash: latestHash[:],
BuilderPendingPayments: payments,
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{},
Builders: builders,
FinalizedCheckpoint: &ethpb.Checkpoint{
Epoch: 1,
},
}
st, err := state_native.InitializeFromProtoGloas(stProto)
require.NoError(t, err)
return st.(*state_native.BeaconState)
}
func signBid(t *testing.T, sk common.SecretKey, bid *ethpb.ExecutionPayloadBid, fork *ethpb.Fork, genesisRoot [32]byte) [96]byte {
t.Helper()
epoch := slots.ToEpoch(primitives.Slot(bid.Slot))
domain, err := signing.Domain(fork, epoch, params.BeaconConfig().DomainBeaconBuilder, genesisRoot[:])
require.NoError(t, err)
root, err := signing.ComputeSigningRoot(bid, domain)
require.NoError(t, err)
sig := sk.Sign(root[:]).Marshal()
var out [96]byte
copy(out[:], sig)
return out
}
func TestProcessExecutionPayloadBid_SelfBuildSuccess(t *testing.T) {
slot := primitives.Slot(12)
proposerIdx := primitives.ValidatorIndex(0)
builderIdx := params.BeaconConfig().BuilderIndexSelfBuild
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
pubKey := [48]byte{}
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: common.InfiniteSignature[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
require.NoError(t, ProcessExecutionPayloadBid(state, block))
stateProto, ok := state.ToProto().(*ethpb.BeaconStateGloas)
require.Equal(t, true, ok)
slotIndex := params.BeaconConfig().SlotsPerEpoch + (slot % params.BeaconConfig().SlotsPerEpoch)
require.Equal(t, primitives.Gwei(0), stateProto.BuilderPendingPayments[slotIndex].Withdrawal.Amount)
}
func TestProcessExecutionPayloadBid_SelfBuildNonZeroAmountFails(t *testing.T) {
slot := primitives.Slot(2)
proposerIdx := primitives.ValidatorIndex(0)
builderIdx := params.BeaconConfig().BuilderIndexSelfBuild
randao := [32]byte{}
latestHash := [32]byte{1}
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, [48]byte{})
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: common.InfiniteSignature[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err := ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "self-build amount must be zero", err)
}
func TestProcessExecutionPayloadBid_PendingPaymentAndCacheBid(t *testing.T) {
slot := primitives.Slot(8)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
pub := sk.PublicKey().Marshal()
var pubKey [48]byte
copy(pubKey[:], pub)
balance := params.BeaconConfig().MinActivationBalance + 1_000_000
state := buildGloasState(t, slot, proposerIdx, builderIdx, balance, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 500_000,
ExecutionPayment: 1,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: sig[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx, // not self-build
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
require.NoError(t, ProcessExecutionPayloadBid(state, block))
stateProto, ok := state.ToProto().(*ethpb.BeaconStateGloas)
require.Equal(t, true, ok)
slotIndex := params.BeaconConfig().SlotsPerEpoch + (slot % params.BeaconConfig().SlotsPerEpoch)
require.Equal(t, primitives.Gwei(500_000), stateProto.BuilderPendingPayments[slotIndex].Withdrawal.Amount)
require.NotNil(t, stateProto.LatestExecutionPayloadBid)
require.Equal(t, primitives.BuilderIndex(1), stateProto.LatestExecutionPayloadBid.BuilderIndex)
require.Equal(t, primitives.Gwei(500_000), stateProto.LatestExecutionPayloadBid.Value)
}
func TestProcessExecutionPayloadBid_BuilderNotActive(t *testing.T) {
slot := primitives.Slot(4)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0x01}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0x02}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
// Make builder inactive by setting withdrawable_epoch.
stateProto := state.ToProto().(*ethpb.BeaconStateGloas)
stateProto.Builders[int(builderIdx)].WithdrawableEpoch = 0
stateIface, err := state_native.InitializeFromProtoGloas(stateProto)
require.NoError(t, err)
state = stateIface.(*state_native.BeaconState)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x03}, 32),
BlockHash: bytes.Repeat([]byte{0x04}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x05}, 32),
FeeRecipient: bytes.Repeat([]byte{0x06}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "is not active", err)
}
func TestProcessExecutionPayloadBid_CannotCoverBid(t *testing.T) {
slot := primitives.Slot(5)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0x0A}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0x0B}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+10, randao, latestHash, pubKey)
stateProto := state.ToProto().(*ethpb.BeaconStateGloas)
// Add pending balances to push below required balance.
stateProto.BuilderPendingWithdrawals = []*ethpb.BuilderPendingWithdrawal{
{Amount: 15, BuilderIndex: builderIdx},
}
stateProto.BuilderPendingPayments = []*ethpb.BuilderPendingPayment{
{Withdrawal: &ethpb.BuilderPendingWithdrawal{Amount: 20, BuilderIndex: builderIdx}},
}
stateIface, err := state_native.InitializeFromProtoGloas(stateProto)
require.NoError(t, err)
state = stateIface.(*state_native.BeaconState)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 25,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "cannot cover bid amount", err)
}
func TestProcessExecutionPayloadBid_InvalidSignature(t *testing.T) {
slot := primitives.Slot(6)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
// Use an invalid signature.
invalidSig := [96]byte{1}
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: invalidSig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "bid signature validation failed", err)
}
func TestProcessExecutionPayloadBid_SlotMismatch(t *testing.T) {
slot := primitives.Slot(10)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot + 1, // mismatch
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "bid slot", err)
}
func TestProcessExecutionPayloadBid_ParentHashMismatch(t *testing.T) {
slot := primitives.Slot(11)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: bytes.Repeat([]byte{0x11}, 32), // mismatch
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "parent block hash mismatch", err)
}
func TestProcessExecutionPayloadBid_ParentRootMismatch(t *testing.T) {
slot := primitives.Slot(12)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
parentRoot := bytes.Repeat([]byte{0x22}, 32)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: parentRoot,
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bytes.Repeat([]byte{0x99}, 32)), // mismatch
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "parent block root mismatch", err)
}
func TestProcessExecutionPayloadBid_PrevRandaoMismatch(t *testing.T) {
slot := primitives.Slot(13)
builderIdx := primitives.BuilderIndex(1)
proposerIdx := primitives.ValidatorIndex(2)
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
sk, err := bls.RandKey()
require.NoError(t, err)
var pubKey [48]byte
copy(pubKey[:], sk.PublicKey().Marshal())
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: bytes.Repeat([]byte{0x01}, 32), // mismatch
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
signed := &ethpb.SignedExecutionPayloadBid{Message: bid, Signature: sig[:]}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err = ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "prev randao mismatch", err)
}


@@ -40,7 +40,6 @@ go_library(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",


@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -539,10 +538,6 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}
if flags.Get().DisableGetBlobsV2 {
return []*pb.BlobAndProofV2{}, nil
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)


@@ -75,6 +75,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))
timeForGoroutinesToFinish := 20 * time.Microsecond
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
@@ -86,9 +87,8 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update when previous is nil")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -102,7 +102,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
@@ -117,9 +117,8 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update with supermajority")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
@@ -133,7 +132,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
@@ -147,9 +146,8 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after a new finality update with increased finality slot")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
@@ -163,7 +161,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
@@ -177,7 +175,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}


@@ -22,7 +22,6 @@ import (
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"
@@ -358,99 +357,58 @@ func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []bl
return nil
}
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network.
// For sidecars with available peers, it uses batch publishing.
// For sidecars without peers, it finds peers first and then publishes individually.
// Both paths run in parallel. It returns when all broadcasts are complete, or the context is cancelled.
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network, after ensuring
// there is at least one peer in each needed subnet. If not, it will attempt to find one before broadcasting.
// It returns when all broadcasts are complete, or the context is cancelled (whichever comes first).
func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [fieldparams.VersionLength]byte, sidecars []blocks.VerifiedRODataColumn) {
type rootAndIndex struct {
root [fieldparams.RootLength]byte
index uint64
}
var timings sync.Map
var (
wg sync.WaitGroup
timings sync.Map
)
logLevel := logrus.GetLevel()
slotPerRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, 1)
topicFunc := func(sidecar blocks.VerifiedRODataColumn) (topic string, wrappedSubIdx uint64, subnet uint64) {
subnet = peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
topic = dataColumnSubnetToTopic(subnet, forkDigest)
wrappedSubIdx = subnet + dataColumnSubnetVal
return
}
sidecarsWithPeers := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
var sidecarsWithoutPeers []blocks.VerifiedRODataColumn
// Categorize sidecars by peer availability.
for _, sidecar := range sidecars {
slotPerRoot[sidecar.BlockRoot()] = sidecar.Slot()
topic, wrappedSubIdx, _ := topicFunc(sidecar)
// Check if we have a peer for this subnet (use RLock for read-only check).
mu := s.subnetLocker(wrappedSubIdx)
mu.RLock()
hasPeer := s.hasPeerWithSubnet(topic)
mu.RUnlock()
if hasPeer {
sidecarsWithPeers = append(sidecarsWithPeers, sidecar)
continue
}
sidecarsWithoutPeers = append(sidecarsWithoutPeers, sidecar)
}
var batchWg, individualWg sync.WaitGroup
// Batch publish sidecars that already have peers
var messageBatch pubsub.MessageBatch
for _, sidecar := range sidecarsWithPeers {
batchWg.Go(func() {
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
ctx := trace.NewContext(s.ctx, span)
wg.Go(func() {
// Add tracing to the function.
ctx, span := trace.StartSpan(s.ctx, "p2p.broadcastDataColumnSidecars")
defer span.End()
topic, _, _ := topicFunc(sidecar)
// Compute the subnet for this data column sidecar.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
if err := s.batchObject(ctx, &messageBatch, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot batch data column sidecar")
return
}
// Build the topic corresponding to subnet column subnet and this fork digest.
topic := dataColumnSubnetToTopic(subnet, forkDigest)
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
}
})
}
// Compute the wrapped subnet index.
wrappedSubIdx := subnet + dataColumnSubnetVal
// For sidecars without peers, find peers and publish individually (no batching).
for _, sidecar := range sidecarsWithoutPeers {
individualWg.Go(func() {
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
ctx := trace.NewContext(s.ctx, span)
defer span.End()
topic, wrappedSubIdx, subnet := topicFunc(sidecar)
// Find peers for this sidecar's subnet.
// Find peers if needed.
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot find peers if needed")
return
}
// Publish individually (not batched) since we just found peers.
// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot broadcast data column sidecar")
return
}
// Increase the number of successful broadcasts.
dataColumnSidecarBroadcasts.Inc()
// Record the timing for log purposes.
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
@@ -458,18 +416,8 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
})
}
// Wait for batch to be populated, then publish.
batchWg.Wait()
if len(sidecarsWithPeers) > 0 {
if err := s.pubsub.PublishBatch(&messageBatch); err != nil {
log.WithError(err).Error("Cannot publish batch for data column sidecars")
} else {
dataColumnSidecarBroadcasts.Add(float64(len(sidecarsWithPeers)))
}
}
// Wait for all individual publishes to complete.
individualWg.Wait()
// Wait for all broadcasts to finish.
wg.Wait()
// The rest of this function is only for debug logging purposes.
if logLevel < logrus.DebugLevel {
@@ -556,68 +504,28 @@ func (s *Service) findPeersIfNeeded(
return nil
}
// encodeGossipMessage encodes an object for gossip transmission.
// It returns the encoded bytes and the full topic with protocol suffix.
func (s *Service) encodeGossipMessage(obj ssz.Marshaler, topic string) ([]byte, string, error) {
buf := new(bytes.Buffer)
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
return nil, "", fmt.Errorf("could not encode message: %w", err)
}
return buf.Bytes(), topic + s.Encoding().ProtocolSuffix(), nil
}
// broadcastObject broadcasts a message to other peers in our gossip mesh.
// method to broadcast messages to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
defer span.End()
span.SetAttributes(trace.StringAttribute("topic", topic))
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
if err != nil {
buf := new(bytes.Buffer)
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
err := errors.Wrap(err, "could not encode message")
tracing.AnnotateError(span, err)
return err
}
if span.IsRecording() {
id := hash.FastSum64(data)
messageLen := int64(len(data))
id := hash.FastSum64(buf.Bytes())
messageLen := int64(buf.Len())
// lint:ignore uintcast -- It's safe to do this for tracing.
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}
if err := s.PublishToTopic(ctx, fullTopic, data); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
return nil
}
// batchObject adds an object to a message batch for a future broadcast.
// The caller MUST publish the batch after all messages have been added.
func (s *Service) batchObject(ctx context.Context, batch *pubsub.MessageBatch, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.batchObject")
defer span.End()
span.SetAttributes(trace.StringAttribute("topic", topic))
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
if span.IsRecording() {
id := hash.FastSum64(data)
messageLen := int64(len(data))
// lint:ignore uintcast -- It's safe to do this for tracing.
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}
if err := s.addToBatch(ctx, batch, fullTopic, data); err != nil {
if err := s.PublishToTopic(ctx, topic+s.Encoding().ProtocolSuffix(), buf.Bytes()); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err

View File

@@ -32,8 +32,6 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"google.golang.org/protobuf/proto"
)
@@ -72,10 +70,7 @@ func TestService_Broadcast(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -189,10 +184,7 @@ func TestService_BroadcastAttestation(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -381,15 +373,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
_, err = tpHandle.Subscribe()
require.NoError(t, err)
// This test specifically tests discovery-based peer finding, which requires
// time for nodes to discover each other. Using a fixed sleep here is intentional
// as we're testing the discovery timing behavior.
time.Sleep(500 * time.Millisecond)
// Verify mesh establishment after discovery
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0 && len(p2.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(500 * time.Millisecond) // libp2p fails without this delay...
nodePeers := p.pubsub.ListPeers(topic)
nodePeers2 := p2.pubsub.ListPeers(topic)
@@ -458,10 +442,7 @@ func TestService_BroadcastSyncCommittee(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -538,10 +519,7 @@ func TestService_BroadcastBlob(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -604,10 +582,7 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -683,10 +658,7 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(p.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
@@ -797,10 +769,8 @@ func TestService_BroadcastDataColumn(t *testing.T) {
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
// Wait for libp2p mesh to establish
require.Eventually(t, func() bool {
return len(service.pubsub.ListPeers(topic)) > 0
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
// libp2p fails without this delay
time.Sleep(50 * time.Millisecond)
// Broadcast to peers and wait.
err = service.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{verifiedRoSidecar})
@@ -817,190 +787,3 @@ func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, service.Encoding().DecodeGossip(msg.Data, &result))
require.DeepEqual(t, &result, verifiedRoSidecar)
}
type topicInvoked struct {
topic string
pid peer.ID
}
// rpcOrderTracer is a RawTracer implementation that captures the order of SendRPC calls.
// It records the topics of messages sent via pubsub to verify round-robin ordering.
type rpcOrderTracer struct {
mu sync.Mutex
invoked []*topicInvoked
byTopic map[string][]peer.ID
}
func (t *rpcOrderTracer) SendRPC(rpc *pubsub.RPC, pid peer.ID) {
t.mu.Lock()
defer t.mu.Unlock()
for _, msg := range rpc.GetPublish() {
invoked := &topicInvoked{topic: msg.GetTopic(), pid: pid}
t.invoked = append(t.invoked, invoked)
t.byTopic[invoked.topic] = append(t.byTopic[invoked.topic], invoked.pid)
}
}
func newRpcOrderTracer() *rpcOrderTracer {
return &rpcOrderTracer{byTopic: make(map[string][]peer.ID)}
}
func (t *rpcOrderTracer) getTopics() []string {
t.mu.Lock()
defer t.mu.Unlock()
result := make([]string, len(t.invoked))
for i := range t.invoked {
result[i] = t.invoked[i].topic
}
return result
}
// No-op implementations for other RawTracer methods.
func (*rpcOrderTracer) AddPeer(peer.ID, protocol.ID) {}
func (*rpcOrderTracer) RemovePeer(peer.ID) {}
func (*rpcOrderTracer) Join(string) {}
func (*rpcOrderTracer) Leave(string) {}
func (*rpcOrderTracer) Graft(peer.ID, string) {}
func (*rpcOrderTracer) Prune(peer.ID, string) {}
func (*rpcOrderTracer) ValidateMessage(*pubsub.Message) {}
func (*rpcOrderTracer) DeliverMessage(*pubsub.Message) {}
func (*rpcOrderTracer) RejectMessage(*pubsub.Message, string) {}
func (*rpcOrderTracer) DuplicateMessage(*pubsub.Message) {}
func (*rpcOrderTracer) ThrottlePeer(peer.ID) {}
func (*rpcOrderTracer) RecvRPC(*pubsub.RPC) {}
func (*rpcOrderTracer) DropRPC(*pubsub.RPC, peer.ID) {}
func (*rpcOrderTracer) UndeliverableMessage(*pubsub.Message) {}
// TestService_BroadcastDataColumnRoundRobin verifies that when broadcasting multiple
// data column sidecars, messages are interleaved in round-robin order by column index
// rather than sending all copies of one column before the next.
//
// Without batch publishing: A,A,A,A,B,B,B,B (all peers for column A, then all for column B)
// With batch publishing: A,B,A,B,A,B,A,B (interleaved by message ID)
func TestService_BroadcastDataColumnRoundRobin(t *testing.T) {
const (
port = 2100
topicFormat = DataColumnSubnetTopicFormat
)
ctx := t.Context()
// Load the KZG trusted setup.
err := kzg.Start()
require.NoError(t, err)
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
// Create a tracer to capture the order of SendRPC calls.
tracer := newRpcOrderTracer()
// Create the publisher node with the tracer injected.
p1 := p2ptest.NewTestP2PWithPubsubOptions(t, []pubsub.Option{pubsub.WithRawTracer(tracer)})
// Create subscriber peers.
expectedPeers := []*p2ptest.TestP2P{
p2ptest.NewTestP2P(t),
p2ptest.NewTestP2P(t),
}
// Connect peers.
for _, p := range expectedPeers {
p1.Connect(p)
}
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")
// Create a host for discovery.
_, pkey, ipAddr := createHost(t, port)
// Create a shared DB for the service.
db := testDB.SetupDB(t)
// Create and close the custody info channel immediately since custodyInfo is already set.
custodyInfoSet := make(chan struct{})
close(custodyInfoSet)
service := &Service{
ctx: ctx,
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{DB: db},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(ctx, &peers.StatusConfig{ScorerParams: &scorers.Config{}}),
custodyInfo: &custodyInfo{},
custodyInfoSet: custodyInfoSet,
}
// Create a listener for discovery.
listener, err := service.startDiscoveryV5(ipAddr, pkey)
require.NoError(t, err)
service.dv5Listener = listener
digest, err := service.currentForkDigest()
require.NoError(t, err)
// Create multiple data column sidecars with different column indices.
// Use indices that map to distinct subnets: 0, 32, 64 (with 128 columns and 128 subnets, each column index maps to its own subnet).
columnIndices := []uint64{0, 32, 64}
params := make([]util.DataColumnParam, len(columnIndices))
for i, idx := range columnIndices {
params[i] = util.DataColumnParam{Index: idx}
}
_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
expectedTopics := make(map[string]bool)
// Subscribe peers to the relevant topics.
for _, idx := range columnIndices {
subnet := peerdas.ComputeSubnetForDataColumnSidecar(idx)
topic := fmt.Sprintf(topicFormat, digest, subnet) + service.Encoding().ProtocolSuffix()
for _, p := range expectedPeers {
_, err = p.SubscribeToTopic(topic)
require.NoError(t, err)
}
expectedTopics[topic] = true
}
// libp2p needs some time to establish mesh connections.
time.Sleep(100 * time.Millisecond)
// Broadcast all sidecars.
err = service.BroadcastDataColumnSidecars(ctx, verifiedRoSidecars)
require.NoError(t, err)
// Give some time for messages to be sent.
time.Sleep(100 * time.Millisecond)
topics := tracer.getTopics()
if len(topics) == 0 {
t.Fatal("Expected at least one message for each topic to be sent to each peer")
}
unseen := make(map[string]bool)
for k := range expectedTopics {
unseen[k] = true
}
// Verify the round-robin invariant: no topic may repeat before every expected topic has been seen once.
// In round-robin order, each topic appears once before any topic repeats.
for _, topic := range topics {
if !expectedTopics[topic] {
continue
}
if !unseen[topic] {
t.Errorf("Topic %s repeated before all topics were seen once. This violates round-robin ordering.", topic)
}
delete(unseen, topic)
if len(unseen) == 0 {
break // all have been seen
}
}
require.Equal(t, 0, len(unseen))
// Verify that we actually saw all expected topics.
for topic := range expectedTopics {
require.Equal(t, len(expectedPeers), len(tracer.byTopic[topic]))
}
}
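// Minimal sketch (hypothetical, for illustration only) of the ordering property the
// test above asserts: batch publishing drains per-column queues round-robin, so
// {A,A},{B,B} is sent as A,B,A,B rather than A,A,B,B.
func interleaveRoundRobin(queues [][]string) []string {
	var out []string
	for i := 0; ; i++ {
		progressed := false
		for _, q := range queues {
			if i < len(q) {
				// Take the i-th message of each queue in turn.
				out = append(out, q[i])
				progressed = true
			}
		}
		if !progressed {
			return out
		}
	}
}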

View File

@@ -482,12 +482,12 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
s.Start()
<-exitRoutine
}()
time.Sleep(50 * time.Millisecond) // Wait for service initialization
time.Sleep(50 * time.Millisecond)
var vr [32]byte
require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
require.Eventually(t, func() bool {
return len(s.host.Network().Peers()) == 5
}, 10*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
time.Sleep(4 * time.Second)
ps := s.host.Network().Peers()
assert.Equal(t, 5, len(ps), "Not all peers added to peerstore")
require.NoError(t, s.Stop())
exitRoutine <- true
}

View File

@@ -99,27 +99,6 @@ func (s *Service) PublishToTopic(ctx context.Context, topic string, data []byte,
}
}
// addToBatch joins (if necessary) a topic and adds the message to a message batch.
func (s *Service) addToBatch(ctx context.Context, batch *pubsub.MessageBatch, topic string, data []byte, opts ...pubsub.PubOpt) error {
topicHandle, err := s.JoinTopic(topic)
if err != nil {
return fmt.Errorf("joining topic: %w", err)
}
// Wait for at least 1 peer to be available to receive the published message.
for {
if flags.Get().MinimumSyncPeers == 0 || len(topicHandle.ListPeers()) > 0 {
return topicHandle.AddToBatch(ctx, batch, data, opts...)
}
select {
case <-ctx.Done():
return errors.Wrapf(ctx.Err(), "unable to find requisite number of peers for topic %s, 0 peers found to publish to", topic)
case <-time.After(100 * time.Millisecond):
// reenter the for loop after 100ms
}
}
}
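// Illustrative sketch (not from the diff) of the batch contract addToBatch
// participates in: messages are staged into a pubsub.MessageBatch and the caller
// publishes the whole batch once, letting pubsub interleave sends across topics.
// `sidecars` and `topicFor` are assumed placeholders.
//
//	var batch pubsub.MessageBatch
//	for _, sc := range sidecars {
//		data, fullTopic, err := s.encodeGossipMessage(sc, topicFor(sc))
//		if err != nil {
//			return err
//		}
//		if err := s.addToBatch(ctx, &batch, fullTopic, data); err != nil {
//			return err
//		}
//	}
//	// The caller MUST publish after all messages have been added.
//	return s.pubsub.PublishBatch(&batch)
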
// SubscribeToTopic joins (if necessary) and subscribes to PubSub topic.
func (s *Service) SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub.Subscription, error) {
s.awaitStateInitialized() // Genesis time and genesis validators root are required to subscribe.

View File

@@ -80,9 +80,8 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
}()
var vr [32]byte
require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
require.Eventually(t, func() bool {
return s.started
}, 5*time.Second, 100*time.Millisecond, "Expected service to be started")
time.Sleep(time.Second * 2)
assert.Equal(t, true, s.started, "Expected service to be started")
s.Start()
require.LogsContain(t, hook, "Attempted to start p2p service when it was already started")
require.NoError(t, s.Stop())
@@ -261,9 +260,17 @@ func TestListenForNewNodes(t *testing.T) {
err = cs.SetClock(startup.NewClock(genesisTime, gvr))
require.NoError(t, err, "Could not set clock in service")
require.Eventually(t, func() bool {
return len(s.host.Network().Peers()) == peerCount
}, 5*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
actualPeerCount := len(s.host.Network().Peers())
for range 40 {
if actualPeerCount == peerCount {
break
}
time.Sleep(100 * time.Millisecond)
actualPeerCount = len(s.host.Network().Peers())
}
assert.Equal(t, peerCount, actualPeerCount, "Not all peers added to peerstore")
err = s.Stop()
require.NoError(t, err, "Failed to stop service")

View File

@@ -70,11 +70,6 @@ type TestP2P struct {
// NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
return NewTestP2PWithPubsubOptions(t, nil, userOptions...)
}
// NewTestP2PWithPubsubOptions initializes a new p2p test service with custom pubsub options.
func NewTestP2PWithPubsubOptions(t *testing.T, pubsubOpts []pubsub.Option, userOptions ...config.Option) *TestP2P {
ctx := context.Background()
options := []config.Option{
libp2p.ResourceManager(&network.NullResourceManager{}),
@@ -89,14 +84,10 @@ func NewTestP2PWithPubsubOptions(t *testing.T, pubsubOpts []pubsub.Option, userO
h, err := libp2p.New(options...)
require.NoError(t, err)
defaultPubsubOpts := []pubsub.Option{
ps, err := pubsub.NewFloodSub(ctx, h,
pubsub.WithMessageSigning(false),
pubsub.WithStrictSignatureVerification(false),
}
allPubsubOpts := append(defaultPubsubOpts, pubsubOpts...)
ps, err := pubsub.NewGossipSub(ctx, h, allPubsubOpts...)
)
if err != nil {
t.Fatal(err)
}

View File

@@ -657,9 +657,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 1
}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
@@ -678,9 +677,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 2
}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("phase0 att post electra", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
@@ -800,9 +798,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 1
}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
@@ -821,9 +818,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
require.Eventually(t, func() bool {
return s.AttestationsPool.UnaggregatedAttestationCount() == 2
}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
@@ -1379,9 +1375,9 @@ func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.SubmitBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
require.Eventually(t, func() bool {
return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages) == numValidators
}, time.Second, 10*time.Millisecond, "Broadcast should be called with all messages")
time.Sleep(100 * time.Millisecond) // Delay to let the routine start
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages))
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
require.Equal(t, len(poolChanges), len(signedChanges))
@@ -1595,10 +1591,10 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
s.SubmitBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
require.StringContains(t, "One or more messages failed validation", writer.Body.String())
require.Eventually(t, func() bool {
return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages)+1 == numValidators
}, time.Second, 10*time.Millisecond, "Broadcast should be called with expected messages")
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
require.Equal(t, len(poolChanges)+1, len(signedChanges))

View File

@@ -181,6 +181,12 @@ func prepareConfigSpec() (map[string]any, error) {
data[tag] = convertValueForJSON(val, tag)
}
// Add derived values that are computed from other config values.
data["UPDATE_TIMEOUT"] = strconv.FormatUint(
uint64(config.SlotsPerEpoch)*uint64(config.EpochsPerSyncCommitteePeriod),
10,
)
return data, nil
}
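// Worked example (illustrative, not part of the diff): with the mainnet presets
// SLOTS_PER_EPOCH = 32 and EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256, the derived
// value is UPDATE_TIMEOUT = 32 * 256 = 8192 slots (one full sync committee
// period). The spec test below overrides these to 27 and 66, giving 1782.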

View File

@@ -168,8 +168,11 @@ func TestGetSpec(t *testing.T) {
config.BlobsidecarSubnetCount = 101
config.BlobsidecarSubnetCountElectra = 102
config.SyncMessageDueBPS = 103
config.BuilderWithdrawalPrefixByte = byte('b')
config.BuilderIndexSelfBuild = primitives.BuilderIndex(125)
config.FieldElementsPerCell = 104
config.FieldElementsPerExtBlob = 105
config.KzgCommitmentsInclusionProofDepth = 106
config.CellsPerExtBlob = 107
config.NumberOfColumns = 108
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -192,9 +195,6 @@ func TestGetSpec(t *testing.T) {
var daap [4]byte
copy(daap[:], []byte{'0', '0', '0', '7'})
config.DomainAggregateAndProof = daap
var dbb [4]byte
copy(dbb[:], []byte{'0', '0', '0', '8'})
config.DomainBeaconBuilder = dbb
var dam [4]byte
copy(dam[:], []byte{'1', '0', '0', '0'})
config.DomainApplicationMask = dam
@@ -210,7 +210,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 178, len(data))
assert.Equal(t, 181, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -424,14 +424,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x0a000000", v)
case "DOMAIN_APPLICATION_BUILDER":
assert.Equal(t, "0x00000001", v)
case "DOMAIN_BEACON_BUILDER":
assert.Equal(t, "0x30303038", v)
case "DOMAIN_BLOB_SIDECAR":
assert.Equal(t, "0x00000000", v)
case "BUILDER_WITHDRAWAL_PREFIX":
assert.Equal(t, "0x62", v)
case "BUILDER_INDEX_SELF_BUILD":
assert.Equal(t, "125", v)
case "TRANSITION_TOTAL_DIFFICULTY":
assert.Equal(t, "0", v)
case "TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":
@@ -592,6 +586,18 @@ func TestGetSpec(t *testing.T) {
blobSchedule, ok := v.([]any)
assert.Equal(t, true, ok)
assert.Equal(t, 2, len(blobSchedule))
case "FIELD_ELEMENTS_PER_CELL":
assert.Equal(t, "104", v)
case "FIELD_ELEMENTS_PER_EXT_BLOB":
assert.Equal(t, "105", v)
case "KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH":
assert.Equal(t, "106", v)
case "CELLS_PER_EXT_BLOB":
assert.Equal(t, "107", v)
case "NUMBER_OF_COLUMNS":
assert.Equal(t, "108", v)
case "UPDATE_TIMEOUT":
assert.Equal(t, "1782", v) // SlotsPerEpoch (27) * EpochsPerSyncCommitteePeriod (66)
default:
t.Errorf("Incorrect key: %s", k)
}

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"error.go",
"interfaces.go",
"interfaces_gloas.go",
"prometheus.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/state",

View File

@@ -63,7 +63,6 @@ type ReadOnlyBeaconState interface {
ReadOnlyDeposits
ReadOnlyConsolidations
ReadOnlyProposerLookahead
readOnlyGloasFields
ToProtoUnsafe() any
ToProto() any
GenesisTime() time.Time
@@ -99,7 +98,6 @@ type WriteOnlyBeaconState interface {
WriteOnlyWithdrawals
WriteOnlyDeposits
WriteOnlyProposerLookahead
writeOnlyGloasFields
SetGenesisTime(val time.Time) error
SetGenesisValidatorsRoot(val []byte) error
SetSlot(val primitives.Slot) error

View File

@@ -1,19 +0,0 @@
package state
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
type writeOnlyGloasFields interface {
SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error
SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error
}
type readOnlyGloasFields interface {
BuilderAtIndex(primitives.BuilderIndex) (*ethpb.Builder, error)
IsActiveBuilder(primitives.BuilderIndex) (bool, error)
CanBuilderCoverBid(primitives.BuilderIndex, primitives.Gwei) (bool, error)
LatestBlockHash() ([32]byte, error)
}

View File

@@ -14,7 +14,6 @@ go_library(
"getters_deposits.go",
"getters_eth1.go",
"getters_exit.go",
"getters_gloas.go",
"getters_misc.go",
"getters_participation.go",
"getters_payload_header.go",
@@ -37,7 +36,6 @@ go_library(
"setters_deposit_requests.go",
"setters_deposits.go",
"setters_eth1.go",
"setters_gloas.go",
"setters_misc.go",
"setters_participation.go",
"setters_payload_header.go",
@@ -99,13 +97,11 @@ go_test(
"getters_deposit_requests_test.go",
"getters_deposits_test.go",
"getters_exit_test.go",
"getters_gloas_test.go",
"getters_participation_test.go",
"getters_setters_lookahead_test.go",
"getters_test.go",
"getters_validator_test.go",
"getters_withdrawal_test.go",
"gloas_test.go",
"hasher_test.go",
"mvslice_fuzz_test.go",
"proofs_test.go",
@@ -117,7 +113,6 @@ go_test(
"setters_deposit_requests_test.go",
"setters_deposits_test.go",
"setters_eth1_test.go",
"setters_gloas_test.go",
"setters_misc_test.go",
"setters_participation_test.go",
"setters_payload_header_test.go",
@@ -161,7 +156,6 @@ go_test(
"@com_github_google_go_cmp//cmp:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_google_protobuf//testing/protocmp:go_default_library",
],

View File

@@ -72,13 +72,11 @@ type BeaconState struct {
// Gloas fields
latestExecutionPayloadBid *ethpb.ExecutionPayloadBid
builders []*ethpb.Builder
nextWithdrawalBuilderIndex primitives.BuilderIndex
executionPayloadAvailability []byte
builderPendingPayments []*ethpb.BuilderPendingPayment
builderPendingWithdrawals []*ethpb.BuilderPendingWithdrawal
latestBlockHash []byte
payloadExpectedWithdrawals []*enginev1.Withdrawal
latestWithdrawalsRoot []byte
id uint64
lock sync.RWMutex
@@ -136,13 +134,11 @@ type beaconStateMarshalable struct {
PendingConsolidations []*ethpb.PendingConsolidation `json:"pending_consolidations" yaml:"pending_consolidations"`
ProposerLookahead []primitives.ValidatorIndex `json:"proposer_look_ahead" yaml:"proposer_look_ahead"`
LatestExecutionPayloadBid *ethpb.ExecutionPayloadBid `json:"latest_execution_payload_bid" yaml:"latest_execution_payload_bid"`
Builders []*ethpb.Builder `json:"builders" yaml:"builders"`
NextWithdrawalBuilderIndex primitives.BuilderIndex `json:"next_withdrawal_builder_index" yaml:"next_withdrawal_builder_index"`
ExecutionPayloadAvailability []byte `json:"execution_payload_availability" yaml:"execution_payload_availability"`
BuilderPendingPayments []*ethpb.BuilderPendingPayment `json:"builder_pending_payments" yaml:"builder_pending_payments"`
BuilderPendingWithdrawals []*ethpb.BuilderPendingWithdrawal `json:"builder_pending_withdrawals" yaml:"builder_pending_withdrawals"`
LatestBlockHash []byte `json:"latest_block_hash" yaml:"latest_block_hash"`
PayloadExpectedWithdrawals []*enginev1.Withdrawal `json:"payload_expected_withdrawals" yaml:"payload_expected_withdrawals"`
LatestWithdrawalsRoot []byte `json:"latest_withdrawals_root" yaml:"latest_withdrawals_root"`
}
func (b *BeaconState) MarshalJSON() ([]byte, error) {
@@ -198,13 +194,11 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
PendingConsolidations: b.pendingConsolidations,
ProposerLookahead: b.proposerLookahead,
LatestExecutionPayloadBid: b.latestExecutionPayloadBid,
Builders: b.builders,
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
ExecutionPayloadAvailability: b.executionPayloadAvailability,
BuilderPendingPayments: b.builderPendingPayments,
BuilderPendingWithdrawals: b.builderPendingWithdrawals,
LatestBlockHash: b.latestBlockHash,
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawals,
LatestWithdrawalsRoot: b.latestWithdrawalsRoot,
}
return json.Marshal(marshalable)
}

View File

@@ -56,7 +56,9 @@ func (r StateRoots) MarshalSSZTo(dst []byte) ([]byte, error) {
func (r StateRoots) MarshalSSZ() ([]byte, error) {
marshalled := make([]byte, fieldparams.StateRootsLength*32)
for i, r32 := range r {
copy(marshalled[i*32:(i+1)*32], r32[:])
for j, rr := range r32 {
marshalled[i*32+j] = rr
}
}
return marshalled, nil
}

View File

@@ -1,128 +0,0 @@
package state_native
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
// LatestBlockHash returns the hash of the latest execution block.
func (b *BeaconState) LatestBlockHash() ([32]byte, error) {
if b.version < version.Gloas {
return [32]byte{}, errNotSupported("LatestBlockHash", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
if b.latestBlockHash == nil {
return [32]byte{}, nil
}
return [32]byte(b.latestBlockHash), nil
}
// BuilderAtIndex returns a copy of the builder at the provided index.
func (b *BeaconState) BuilderAtIndex(builderIndex primitives.BuilderIndex) (*ethpb.Builder, error) {
if b.version < version.Gloas {
return nil, errNotSupported("BuilderAtIndex", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
return b.builderAtIndex(builderIndex)
}
// IsActiveBuilder returns true if the builder placement is finalized and it has not initiated exit.
// Spec v1.7.0-alpha.0 (pseudocode):
// def is_active_builder(state: BeaconState, builder_index: BuilderIndex) -> bool:
//
// builder = state.builders[builder_index]
// return (
// builder.deposit_epoch < state.finalized_checkpoint.epoch
// and builder.withdrawable_epoch == FAR_FUTURE_EPOCH
// )
func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("IsActiveBuilder", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
builder, err := b.builderAtIndex(builderIndex)
if err != nil {
return false, err
}
finalizedEpoch := b.finalizedCheckpoint.Epoch
return builder.DepositEpoch < finalizedEpoch && builder.WithdrawableEpoch == params.BeaconConfig().FarFutureEpoch, nil
}
// CanBuilderCoverBid returns true if the builder has enough balance to cover the given bid amount.
// Spec v1.7.0-alpha.0 (pseudocode):
// def can_builder_cover_bid(state: BeaconState, builder_index: BuilderIndex, bid_amount: Gwei) -> bool:
//
// builder_balance = state.builders[builder_index].balance
// pending_withdrawals_amount = get_pending_balance_to_withdraw_for_builder(state, builder_index)
// min_balance = MIN_DEPOSIT_AMOUNT + pending_withdrawals_amount
// if builder_balance < min_balance:
// return False
// return builder_balance - min_balance >= bid_amount
func (b *BeaconState) CanBuilderCoverBid(builderIndex primitives.BuilderIndex, bidAmount primitives.Gwei) (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("CanBuilderCoverBid", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
builder, err := b.builderAtIndex(builderIndex)
if err != nil {
return false, err
}
pendingBalanceToWithdraw := b.builderPendingBalanceToWithdraw(builderIndex)
minBalance := params.BeaconConfig().MinDepositAmount + pendingBalanceToWithdraw
balance := uint64(builder.Balance)
if balance < minBalance {
return false, nil
}
return balance-minBalance >= uint64(bidAmount), nil
}
func (b *BeaconState) builderAtIndex(builderIndex primitives.BuilderIndex) (*ethpb.Builder, error) {
idx := uint64(builderIndex)
if idx >= uint64(len(b.builders)) {
return nil, fmt.Errorf("builder index %d out of range (len=%d)", builderIndex, len(b.builders))
}
builder := b.builders[idx]
if builder == nil {
return nil, fmt.Errorf("builder at index %d is nil", builderIndex)
}
return ethpb.CopyBuilder(builder), nil
}
// builderPendingBalanceToWithdraw mirrors get_pending_balance_to_withdraw_for_builder in the spec,
// summing both pending withdrawals and pending payments for a builder.
func (b *BeaconState) builderPendingBalanceToWithdraw(builderIndex primitives.BuilderIndex) uint64 {
var total uint64
for _, withdrawal := range b.builderPendingWithdrawals {
if withdrawal.BuilderIndex == builderIndex {
total += uint64(withdrawal.Amount)
}
}
for _, payment := range b.builderPendingPayments {
if payment.Withdrawal.BuilderIndex == builderIndex {
total += uint64(payment.Withdrawal.Amount)
}
}
return total
}
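// Worked example (illustrative, mirroring the deleted test further below): a
// builder with balance = MIN_DEPOSIT_AMOUNT + 50 gwei, one pending withdrawal of
// 10 gwei and one pending payment of 15 gwei has a pending balance to withdraw of
// 25 gwei, so min_balance = MIN_DEPOSIT_AMOUNT + 25 and the spendable headroom is
// balance - min_balance = 25 gwei: a 20 gwei bid can be covered, a 30 gwei bid cannot.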

View File

@@ -1,149 +0,0 @@
package state_native_test
import (
"bytes"
"testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func TestLatestBlockHash(t *testing.T) {
t.Run("returns error before gloas", func(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
_, err := st.LatestBlockHash()
require.ErrorContains(t, "is not supported", err)
})
t.Run("returns zero hash when unset", func(t *testing.T) {
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{})
require.NoError(t, err)
got, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, [32]byte{}, got)
})
t.Run("returns configured hash", func(t *testing.T) {
hashBytes := bytes.Repeat([]byte{0xAB}, 32)
var want [32]byte
copy(want[:], hashBytes)
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
LatestBlockHash: hashBytes,
})
require.NoError(t, err)
got, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, want, got)
})
}
func TestBuilderAtIndex(t *testing.T) {
t.Run("returns error before gloas", func(t *testing.T) {
stIface, _ := util.DeterministicGenesisState(t, 1)
native, ok := stIface.(*state_native.BeaconState)
require.Equal(t, true, ok)
_, err := native.BuilderAtIndex(0)
require.ErrorContains(t, "is not supported", err)
})
t.Run("returns copy", func(t *testing.T) {
orig := &ethpb.Builder{Balance: 42}
stIface, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{orig},
})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
got, err := st.BuilderAtIndex(0)
require.NoError(t, err)
require.Equal(t, primitives.Gwei(42), got.Balance)
got.Balance = 99
require.Equal(t, primitives.Gwei(42), orig.Balance)
})
t.Run("out of range returns error", func(t *testing.T) {
stIface, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{},
})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
_, err = st.BuilderAtIndex(1)
require.ErrorContains(t, "out of range", err)
})
}
func TestBuilderHelpers(t *testing.T) {
t.Run("is active builder", func(t *testing.T) {
st, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Balance: 10,
DepositEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
},
},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1},
})
require.NoError(t, err)
active, err := st.IsActiveBuilder(0)
require.NoError(t, err)
require.Equal(t, true, active)
// Not active when withdrawable epoch is set.
stProto := &ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Balance: 10,
DepositEpoch: 0,
WithdrawableEpoch: 1,
},
},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 2},
}
stInactive, err := state_native.InitializeFromProtoGloas(stProto)
require.NoError(t, err)
active, err = stInactive.IsActiveBuilder(0)
require.NoError(t, err)
require.Equal(t, false, active)
})
t.Run("can builder cover bid", func(t *testing.T) {
stIface, err := state_native.InitializeFromProtoGloas(&ethpb.BeaconStateGloas{
Builders: []*ethpb.Builder{
{
Balance: primitives.Gwei(params.BeaconConfig().MinDepositAmount + 50),
DepositEpoch: 0,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
},
},
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{
{Amount: 10, BuilderIndex: 0},
},
BuilderPendingPayments: []*ethpb.BuilderPendingPayment{
{Withdrawal: &ethpb.BuilderPendingWithdrawal{Amount: 15, BuilderIndex: 0}},
},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1},
})
require.NoError(t, err)
st := stIface.(*state_native.BeaconState)
ok, err := st.CanBuilderCoverBid(0, 20)
require.NoError(t, err)
require.Equal(t, true, ok)
ok, err = st.CanBuilderCoverBid(0, 30)
require.NoError(t, err)
require.Equal(t, false, ok)
})
}

View File

@@ -305,12 +305,10 @@ func (b *BeaconState) ToProtoUnsafe() any {
PendingConsolidations: b.pendingConsolidations,
ProposerLookahead: lookahead,
ExecutionPayloadAvailability: b.executionPayloadAvailability,
Builders: b.builders,
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
BuilderPendingPayments: b.builderPendingPayments,
BuilderPendingWithdrawals: b.builderPendingWithdrawals,
LatestBlockHash: b.latestBlockHash,
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawals,
LatestWithdrawalsRoot: b.latestWithdrawalsRoot,
}
default:
return nil
@@ -609,12 +607,10 @@ func (b *BeaconState) ToProto() any {
PendingConsolidations: b.pendingConsolidationsVal(),
ProposerLookahead: lookahead,
ExecutionPayloadAvailability: b.executionPayloadAvailabilityVal(),
Builders: b.buildersVal(),
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
BuilderPendingPayments: b.builderPendingPaymentsVal(),
BuilderPendingWithdrawals: b.builderPendingWithdrawalsVal(),
LatestBlockHash: b.latestBlockHashVal(),
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawalsVal(),
LatestWithdrawalsRoot: b.latestWithdrawalsRootVal(),
}
default:
return nil

View File

@@ -1,7 +1,6 @@
package state_native
import (
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
@@ -48,22 +47,6 @@ func (b *BeaconState) builderPendingWithdrawalsVal() []*ethpb.BuilderPendingWith
return withdrawals
}
// buildersVal returns a copy of the builders registry.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) buildersVal() []*ethpb.Builder {
if b.builders == nil {
return nil
}
builders := make([]*ethpb.Builder, len(b.builders))
for i := range builders {
builder := b.builders[i]
builders[i] = ethpb.CopyBuilder(builder)
}
return builders
}
// latestBlockHashVal returns a copy of the latest block hash.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) latestBlockHashVal() []byte {
@@ -77,17 +60,15 @@ func (b *BeaconState) latestBlockHashVal() []byte {
return hash
}
// payloadExpectedWithdrawalsVal returns a copy of the payload expected withdrawals.
// latestWithdrawalsRootVal returns a copy of the latest withdrawals root.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) payloadExpectedWithdrawalsVal() []*enginev1.Withdrawal {
if b.payloadExpectedWithdrawals == nil {
func (b *BeaconState) latestWithdrawalsRootVal() []byte {
if b.latestWithdrawalsRoot == nil {
return nil
}
withdrawals := make([]*enginev1.Withdrawal, len(b.payloadExpectedWithdrawals))
for i, withdrawal := range b.payloadExpectedWithdrawals {
withdrawals[i] = withdrawal.Copy()
}
root := make([]byte, len(b.latestWithdrawalsRoot))
copy(root, b.latestWithdrawalsRoot)
return withdrawals
return root
}

View File

@@ -1,43 +0,0 @@
package state_native
import (
"testing"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/stretchr/testify/require"
)
func TestBuildersVal(t *testing.T) {
st := &BeaconState{}
require.Nil(t, st.buildersVal())
st.builders = []*ethpb.Builder{
{Pubkey: []byte{0x01}, ExecutionAddress: []byte{0x02}, Balance: 3},
nil,
}
got := st.buildersVal()
require.Len(t, got, 2)
require.Nil(t, got[1])
require.Equal(t, st.builders[0], got[0])
require.NotSame(t, st.builders[0], got[0])
}
func TestPayloadExpectedWithdrawalsVal(t *testing.T) {
st := &BeaconState{}
require.Nil(t, st.payloadExpectedWithdrawalsVal())
st.payloadExpectedWithdrawals = []*enginev1.Withdrawal{
{Index: 1, ValidatorIndex: 2, Address: []byte{0x03}, Amount: 4},
nil,
}
got := st.payloadExpectedWithdrawalsVal()
require.Len(t, got, 2)
require.Nil(t, got[1])
require.Equal(t, st.payloadExpectedWithdrawals[0], got[0])
require.NotSame(t, st.payloadExpectedWithdrawals[0], got[0])
}

View File

@@ -342,15 +342,6 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
}
if state.version >= version.Gloas {
buildersRoot, err := stateutil.BuildersRoot(state.builders)
if err != nil {
return nil, errors.Wrap(err, "could not compute builders merkleization")
}
fieldRoots[types.Builders.RealPosition()] = buildersRoot[:]
nextWithdrawalBuilderIndexRoot := ssz.Uint64Root(uint64(state.nextWithdrawalBuilderIndex))
fieldRoots[types.NextWithdrawalBuilderIndex.RealPosition()] = nextWithdrawalBuilderIndexRoot[:]
epaRoot, err := stateutil.ExecutionPayloadAvailabilityRoot(state.executionPayloadAvailability)
if err != nil {
return nil, errors.Wrap(err, "could not compute execution payload availability merkleization")
@@ -375,12 +366,8 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
lbhRoot := bytesutil.ToBytes32(state.latestBlockHash)
fieldRoots[types.LatestBlockHash.RealPosition()] = lbhRoot[:]
expectedWithdrawalsRoot, err := ssz.WithdrawalSliceRoot(state.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not compute payload expected withdrawals root")
}
fieldRoots[types.PayloadExpectedWithdrawals.RealPosition()] = expectedWithdrawalsRoot[:]
lwrRoot := bytesutil.ToBytes32(state.latestWithdrawalsRoot)
fieldRoots[types.LatestWithdrawalsRoot.RealPosition()] = lwrRoot[:]
}
return fieldRoots, nil
}

View File

@@ -1,63 +0,0 @@
package state_native
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
// SetExecutionPayloadBid sets the latest execution payload bid in the state.
func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid) error {
if b.version < version.Gloas {
return errNotSupported("SetExecutionPayloadBid", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
parentBlockHash := h.ParentBlockHash()
parentBlockRoot := h.ParentBlockRoot()
blockHash := h.BlockHash()
randao := h.PrevRandao()
blobKzgCommitmentsRoot := h.BlobKzgCommitmentsRoot()
feeRecipient := h.FeeRecipient()
b.latestExecutionPayloadBid = &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: parentBlockRoot[:],
BlockHash: blockHash[:],
PrevRandao: randao[:],
GasLimit: h.GasLimit(),
BuilderIndex: h.BuilderIndex(),
Slot: h.Slot(),
Value: h.Value(),
ExecutionPayment: h.ExecutionPayment(),
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot[:],
FeeRecipient: feeRecipient[:],
}
b.markFieldAsDirty(types.LatestExecutionPayloadBid)
return nil
}
// SetBuilderPendingPayment sets a builder pending payment at the specified index.
func (b *BeaconState) SetBuilderPendingPayment(index primitives.Slot, payment *ethpb.BuilderPendingPayment) error {
if b.version < version.Gloas {
return errNotSupported("SetBuilderPendingPayment", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
if uint64(index) >= uint64(len(b.builderPendingPayments)) {
return fmt.Errorf("builder pending payments index %d out of range (len=%d)", index, len(b.builderPendingPayments))
}
b.builderPendingPayments[index] = ethpb.CopyBuilderPendingPayment(payment)
b.markFieldAsDirty(types.BuilderPendingPayments)
return nil
}

View File

@@ -1,140 +0,0 @@
package state_native
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
type testExecutionPayloadBid struct {
parentBlockHash [32]byte
parentBlockRoot [32]byte
blockHash [32]byte
prevRandao [32]byte
blobKzgCommitmentsRoot [32]byte
feeRecipient [20]byte
gasLimit uint64
builderIndex primitives.BuilderIndex
slot primitives.Slot
value primitives.Gwei
executionPayment primitives.Gwei
}
func (t testExecutionPayloadBid) ParentBlockHash() [32]byte { return t.parentBlockHash }
func (t testExecutionPayloadBid) ParentBlockRoot() [32]byte { return t.parentBlockRoot }
func (t testExecutionPayloadBid) PrevRandao() [32]byte { return t.prevRandao }
func (t testExecutionPayloadBid) BlockHash() [32]byte { return t.blockHash }
func (t testExecutionPayloadBid) GasLimit() uint64 { return t.gasLimit }
func (t testExecutionPayloadBid) BuilderIndex() primitives.BuilderIndex {
return t.builderIndex
}
func (t testExecutionPayloadBid) Slot() primitives.Slot { return t.slot }
func (t testExecutionPayloadBid) Value() primitives.Gwei { return t.value }
func (t testExecutionPayloadBid) ExecutionPayment() primitives.Gwei {
return t.executionPayment
}
func (t testExecutionPayloadBid) BlobKzgCommitmentsRoot() [32]byte { return t.blobKzgCommitmentsRoot }
func (t testExecutionPayloadBid) FeeRecipient() [20]byte { return t.feeRecipient }
func (t testExecutionPayloadBid) IsNil() bool { return false }
func TestSetExecutionPayloadBid(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.SetExecutionPayloadBid(testExecutionPayloadBid{})
require.ErrorContains(t, "is not supported", err)
})
t.Run("sets bid and marks dirty", func(t *testing.T) {
var (
parentBlockHash = [32]byte(bytes.Repeat([]byte{0xAB}, 32))
parentBlockRoot = [32]byte(bytes.Repeat([]byte{0xCD}, 32))
blockHash = [32]byte(bytes.Repeat([]byte{0xEF}, 32))
prevRandao = [32]byte(bytes.Repeat([]byte{0x11}, 32))
blobRoot = [32]byte(bytes.Repeat([]byte{0x22}, 32))
feeRecipient [20]byte
)
copy(feeRecipient[:], bytes.Repeat([]byte{0x33}, len(feeRecipient)))
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
}
bid := testExecutionPayloadBid{
parentBlockHash: parentBlockHash,
parentBlockRoot: parentBlockRoot,
blockHash: blockHash,
prevRandao: prevRandao,
blobKzgCommitmentsRoot: blobRoot,
feeRecipient: feeRecipient,
gasLimit: 123,
builderIndex: 7,
slot: 9,
value: 11,
executionPayment: 22,
}
require.NoError(t, st.SetExecutionPayloadBid(bid))
require.NotNil(t, st.latestExecutionPayloadBid)
require.DeepEqual(t, parentBlockHash[:], st.latestExecutionPayloadBid.ParentBlockHash)
require.DeepEqual(t, parentBlockRoot[:], st.latestExecutionPayloadBid.ParentBlockRoot)
require.DeepEqual(t, blockHash[:], st.latestExecutionPayloadBid.BlockHash)
require.DeepEqual(t, prevRandao[:], st.latestExecutionPayloadBid.PrevRandao)
require.DeepEqual(t, blobRoot[:], st.latestExecutionPayloadBid.BlobKzgCommitmentsRoot)
require.DeepEqual(t, feeRecipient[:], st.latestExecutionPayloadBid.FeeRecipient)
require.Equal(t, uint64(123), st.latestExecutionPayloadBid.GasLimit)
require.Equal(t, primitives.BuilderIndex(7), st.latestExecutionPayloadBid.BuilderIndex)
require.Equal(t, primitives.Slot(9), st.latestExecutionPayloadBid.Slot)
require.Equal(t, primitives.Gwei(11), st.latestExecutionPayloadBid.Value)
require.Equal(t, primitives.Gwei(22), st.latestExecutionPayloadBid.ExecutionPayment)
require.Equal(t, true, st.dirtyFields[types.LatestExecutionPayloadBid])
})
}
func TestSetBuilderPendingPayment(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
st := &BeaconState{version: version.Fulu}
err := st.SetBuilderPendingPayment(0, &ethpb.BuilderPendingPayment{})
require.ErrorContains(t, "is not supported", err)
})
t.Run("sets copy and marks dirty", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, 2),
}
payment := &ethpb.BuilderPendingPayment{
Weight: 2,
Withdrawal: &ethpb.BuilderPendingWithdrawal{
Amount: 99,
BuilderIndex: 1,
},
}
require.NoError(t, st.SetBuilderPendingPayment(1, payment))
require.DeepEqual(t, payment, st.builderPendingPayments[1])
require.Equal(t, true, st.dirtyFields[types.BuilderPendingPayments])
// Mutating the original should not affect the state copy.
payment.Withdrawal.Amount = 12345
require.Equal(t, primitives.Gwei(99), st.builderPendingPayments[1].Withdrawal.Amount)
})
t.Run("returns error on out of range index", func(t *testing.T) {
st := &BeaconState{
version: version.Gloas,
dirtyFields: make(map[types.FieldIndex]bool),
builderPendingPayments: make([]*ethpb.BuilderPendingPayment, 1),
}
err := st.SetBuilderPendingPayment(2, &ethpb.BuilderPendingPayment{})
require.ErrorContains(t, "out of range", err)
require.Equal(t, false, st.dirtyFields[types.BuilderPendingPayments])
})
}

View File

@@ -120,13 +120,11 @@ var (
)
gloasAdditionalFields = []types.FieldIndex{
types.Builders,
types.NextWithdrawalBuilderIndex,
types.ExecutionPayloadAvailability,
types.BuilderPendingPayments,
types.BuilderPendingWithdrawals,
types.LatestBlockHash,
types.PayloadExpectedWithdrawals,
types.LatestWithdrawalsRoot,
}
gloasFields = slices.Concat(
@@ -147,7 +145,7 @@ const (
denebSharedFieldRefCount = 7
electraSharedFieldRefCount = 10
fuluSharedFieldRefCount = 11
gloasSharedFieldRefCount = 13 // Adds Builders + BuilderPendingWithdrawals to the shared-ref set and LatestExecutionPayloadHeader is removed
gloasSharedFieldRefCount = 12 // Adds BuilderPendingWithdrawals to the shared-ref set and LatestExecutionPayloadHeader is removed
)
// InitializeFromProtoPhase0 the beacon state from a protobuf representation.
@@ -819,13 +817,11 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
pendingConsolidations: st.PendingConsolidations,
proposerLookahead: proposerLookahead,
latestExecutionPayloadBid: st.LatestExecutionPayloadBid,
builders: st.Builders,
nextWithdrawalBuilderIndex: st.NextWithdrawalBuilderIndex,
executionPayloadAvailability: st.ExecutionPayloadAvailability,
builderPendingPayments: st.BuilderPendingPayments,
builderPendingWithdrawals: st.BuilderPendingWithdrawals,
latestBlockHash: st.LatestBlockHash,
payloadExpectedWithdrawals: st.PayloadExpectedWithdrawals,
latestWithdrawalsRoot: st.LatestWithdrawalsRoot,
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
@@ -865,7 +861,6 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
b.sharedFieldReferences[types.Builders] = stateutil.NewRef(1) // New in Gloas.
b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1) // New in Gloas.
state.Count.Inc()
@@ -937,7 +932,6 @@ func (b *BeaconState) Copy() state.BeaconState {
pendingDeposits: b.pendingDeposits,
pendingPartialWithdrawals: b.pendingPartialWithdrawals,
pendingConsolidations: b.pendingConsolidations,
builders: b.builders,
// Everything else, too small to be concerned about, constant size.
genesisValidatorsRoot: b.genesisValidatorsRoot,
@@ -954,12 +948,11 @@ func (b *BeaconState) Copy() state.BeaconState {
latestExecutionPayloadHeaderCapella: b.latestExecutionPayloadHeaderCapella.Copy(),
latestExecutionPayloadHeaderDeneb: b.latestExecutionPayloadHeaderDeneb.Copy(),
latestExecutionPayloadBid: b.latestExecutionPayloadBid.Copy(),
nextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
executionPayloadAvailability: b.executionPayloadAvailabilityVal(),
builderPendingPayments: b.builderPendingPaymentsVal(),
builderPendingWithdrawals: b.builderPendingWithdrawalsVal(),
latestBlockHash: b.latestBlockHashVal(),
payloadExpectedWithdrawals: b.payloadExpectedWithdrawalsVal(),
latestWithdrawalsRoot: b.latestWithdrawalsRootVal(),
id: types.Enumerator.Inc(),
@@ -1335,10 +1328,6 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return stateutil.ProposerLookaheadRoot(b.proposerLookahead)
case types.LatestExecutionPayloadBid:
return b.latestExecutionPayloadBid.HashTreeRoot()
case types.Builders:
return stateutil.BuildersRoot(b.builders)
case types.NextWithdrawalBuilderIndex:
return ssz.Uint64Root(uint64(b.nextWithdrawalBuilderIndex)), nil
case types.ExecutionPayloadAvailability:
return stateutil.ExecutionPayloadAvailabilityRoot(b.executionPayloadAvailability)
@@ -1348,8 +1337,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return stateutil.BuilderPendingWithdrawalsRoot(b.builderPendingWithdrawals)
case types.LatestBlockHash:
return bytesutil.ToBytes32(b.latestBlockHash), nil
case types.PayloadExpectedWithdrawals:
return ssz.WithdrawalSliceRoot(b.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
case types.LatestWithdrawalsRoot:
return bytesutil.ToBytes32(b.latestWithdrawalsRoot), nil
}
return [32]byte{}, errors.New("invalid field index provided")
}

View File

@@ -116,10 +116,6 @@ func (f FieldIndex) String() string {
return "pendingConsolidations"
case ProposerLookahead:
return "proposerLookahead"
case Builders:
return "builders"
case NextWithdrawalBuilderIndex:
return "nextWithdrawalBuilderIndex"
case ExecutionPayloadAvailability:
return "executionPayloadAvailability"
case BuilderPendingPayments:
@@ -128,8 +124,8 @@ func (f FieldIndex) String() string {
return "builderPendingWithdrawals"
case LatestBlockHash:
return "latestBlockHash"
case PayloadExpectedWithdrawals:
return "payloadExpectedWithdrawals"
case LatestWithdrawalsRoot:
return "latestWithdrawalsRoot"
default:
return fmt.Sprintf("unknown field index number: %d", f)
}
@@ -215,20 +211,16 @@ func (f FieldIndex) RealPosition() int {
return 36
case ProposerLookahead:
return 37
case Builders:
return 38
case NextWithdrawalBuilderIndex:
return 39
case ExecutionPayloadAvailability:
return 40
return 38
case BuilderPendingPayments:
return 41
return 39
case BuilderPendingWithdrawals:
return 42
return 40
case LatestBlockHash:
return 43
case PayloadExpectedWithdrawals:
return 44
return 41
case LatestWithdrawalsRoot:
return 42
default:
return -1
}
@@ -295,13 +287,11 @@ const (
PendingPartialWithdrawals // Electra: EIP-7251
PendingConsolidations // Electra: EIP-7251
ProposerLookahead // Fulu: EIP-7917
Builders // Gloas: EIP-7732
NextWithdrawalBuilderIndex // Gloas: EIP-7732
ExecutionPayloadAvailability // Gloas: EIP-7732
BuilderPendingPayments // Gloas: EIP-7732
BuilderPendingWithdrawals // Gloas: EIP-7732
LatestBlockHash // Gloas: EIP-7732
PayloadExpectedWithdrawals // Gloas: EIP-7732
LatestWithdrawalsRoot // Gloas: EIP-7732
)
// Enumerator keeps track of the number of states created since the node's start.

View File

@@ -6,7 +6,6 @@ go_library(
"block_header_root.go",
"builder_pending_payments_root.go",
"builder_pending_withdrawals_root.go",
"builders_root.go",
"eth1_root.go",
"execution_payload_availability_root.go",
"field_root_attestation.go",

View File

@@ -1,12 +0,0 @@
package stateutil
import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
// BuildersRoot computes the SSZ root of a slice of Builder.
func BuildersRoot(slice []*ethpb.Builder) ([32]byte, error) {
return ssz.SliceRoot(slice, uint64(fieldparams.BuilderRegistryLimit))
}

View File

@@ -70,6 +70,7 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
topic := "/eth2/%x/beacon_block"
go r.startDiscoveryAndSubscriptions()
time.Sleep(100 * time.Millisecond)
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
@@ -82,11 +83,9 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
msg.Block.ParentRoot = util.Random32Bytes(t)
msg.Signature = sk.Sign([]byte("data")).Marshal()
p2p.ReceivePubSub(topic, msg)
// Wait for chainstart event to be processed
require.Eventually(t, func() bool {
return r.chainStarted.IsSet()
}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event.")
// wait for chainstart to be sent
time.Sleep(400 * time.Millisecond)
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
}
func TestSyncHandlers_WaitForChainStart(t *testing.T) {
@@ -218,18 +217,20 @@ func TestSyncService_StopCleanly(t *testing.T) {
p2p.Digest, err = r.currentForkDigest()
require.NoError(t, err)
// Wait for chainstart and topics to be registered
require.Eventually(t, func() bool {
return r.chainStarted.IsSet() && len(r.cfg.p2p.PubSub().GetTopics()) > 0 && len(r.cfg.p2p.Host().Mux().Protocols()) > 0
}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event or topics not registered.")
// wait for chainstart to be sent
time.Sleep(2 * time.Second)
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
require.NotEqual(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.NotEqual(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
// Both pubsub and rpc topics should be unsubscribed.
require.NoError(t, r.Stop())
// Wait for pubsub topics to be deregistered.
require.Eventually(t, func() bool {
return len(r.cfg.p2p.PubSub().GetTopics()) == 0 && len(r.cfg.p2p.Host().Mux().Protocols()) == 0
}, 5*time.Second, 50*time.Millisecond, "Pubsub topics were not deregistered")
// Sleep to allow pubsub topics to be deregistered.
time.Sleep(1 * time.Second)
require.Equal(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.Equal(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
}
func TestService_Stop_SendsGoodbyeMessages(t *testing.T) {

View File

@@ -48,14 +48,7 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
return errors.Wrap(err, "new ro block with root")
}
go func() {
if err := s.processSidecarsFromExecutionFromBlock(ctx, roBlock); err != nil {
log.WithError(err).WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"slot": block.Slot(),
}).Error("Failed to process sidecars from execution from block")
}
}()
go s.processSidecarsFromExecutionFromBlock(ctx, roBlock)
if err := s.cfg.chain.ReceiveBlock(ctx, signed, root, nil); err != nil {
if blockchain.IsInvalidBlock(err) {
@@ -76,37 +69,28 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
}
return err
}
if err := s.processPendingAttsForBlock(ctx, root); err != nil {
return errors.Wrap(err, "process pending atts for block")
}
return nil
}
// processSidecarsFromExecutionFromBlock retrieves (if available) sidecar data from the execution client,
// builds the corresponding sidecars, saves them to storage, and broadcasts them over P2P if necessary.
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) error {
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) {
if roBlock.Version() >= version.Fulu {
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromBlock(roBlock)); err != nil {
// Do not log if the context was cancelled on purpose.
// (Still log other context errors such as deadlines exceeded).
if errors.Is(err, context.Canceled) {
return nil
}
return errors.Wrap(err, "process data column sidecars from execution")
log.WithError(err).Error("Failed to process data column sidecars from execution")
return
}
return nil
return
}
if roBlock.Version() >= version.Deneb {
s.processBlobSidecarsFromExecution(ctx, roBlock)
return nil
return
}
return nil
}
// processBlobSidecarsFromExecution retrieves (if available) blob sidecars data from the execution client,
@@ -184,6 +168,7 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
key := fmt.Sprintf("%#x", source.Root())
if _, err, _ := s.columnSidecarsExecSingleFlight.Do(key, func() (any, error) {
const delay = 250 * time.Millisecond
secondsPerHalfSlot := time.Duration(params.BeaconConfig().SecondsPerSlot/2) * time.Second
commitments, err := source.Commitments()
if err != nil {
@@ -201,6 +186,9 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, errors.Wrap(err, "column indices to sample")
}
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
defer cancel()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
@@ -221,11 +209,6 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, nil
}
// Return if the context is done.
if ctx.Err() != nil {
return nil, ctx.Err()
}
if iteration == 0 {
dataColumnsRecoveredFromELAttempts.Inc()
}
@@ -237,10 +220,20 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
}
// No sidecars are retrieved from the EL, retry later
constructedCount := uint64(len(constructedSidecars))
constructedSidecarCount = uint64(len(constructedSidecars))
if constructedSidecarCount == 0 {
if ctx.Err() != nil {
return nil, ctx.Err()
}
time.Sleep(delay)
continue
}
dataColumnsRecoveredFromELTotal.Inc()
// Boundary check.
if constructedSidecarCount > 0 && constructedSidecarCount != fieldparams.NumberOfColumns {
if constructedSidecarCount != fieldparams.NumberOfColumns {
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
}
@@ -249,24 +242,14 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
}
if constructedCount > 0 {
dataColumnsRecoveredFromELTotal.Inc()
log.WithFields(logrus.Fields{
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
"proposerIndex": source.ProposerIndex(),
"iteration": iteration,
"type": source.Type(),
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
return nil, nil
}
// Wait before retrying.
time.Sleep(delay)
return nil, nil
}
}); err != nil {
return err
@@ -301,11 +284,6 @@ func (s *Service) broadcastAndReceiveUnseenDataColumnSidecars(
unseenIndices[sidecar.Index] = true
}
// Exit early if there is nothing to broadcast or receive.
if len(unseenSidecars) == 0 {
return nil, nil
}
// Broadcast all the data column sidecars we reconstructed but did not see via gossip (non blocking).
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, unseenSidecars); err != nil {
return nil, errors.Wrap(err, "broadcast data column sidecars")
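For reference, the retry loop above combines three patterns: per-root deduplication via singleflight, a context deadline of half a slot, and a fixed delay between attempts. A minimal Go sketch of the same shape, with a hypothetical fetch function standing in for the EL call:
package example
import (
	"context"
	"time"
	"golang.org/x/sync/singleflight"
)
var group singleflight.Group
// fetchWithRetry deduplicates concurrent calls per key, bounds the work with a
// deadline, and sleeps a fixed delay between attempts of a hypothetical fetch.
func fetchWithRetry(ctx context.Context, key string, deadline time.Duration, fetch func() (int, error)) (int, error) {
	const delay = 250 * time.Millisecond
	v, err, _ := group.Do(key, func() (any, error) {
		ctx, cancel := context.WithTimeout(ctx, deadline)
		defer cancel()
		for {
			if n, err := fetch(); err == nil && n > 0 {
				return n, nil // got a payload
			}
			if ctx.Err() != nil {
				return 0, ctx.Err() // give up once the deadline passes
			}
			time.Sleep(delay) // wait before retrying
		}
	})
	if err != nil {
		return 0, err
	}
	return v.(int), nil
}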

View File

@@ -194,8 +194,7 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
},
seenBlobCache: lruwrpr.New(1),
}
err := s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
require.NoError(t, err)
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
require.Equal(t, tt.expectedBlobCount, len(chainService.Blobs))
})
}
@@ -294,8 +293,7 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
roBlock, err := blocks.NewROBlock(sb)
require.NoError(t, err)
err = s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
require.NoError(t, err)
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
require.Equal(t, tt.expectedDataColumnCount, len(chainService.DataColumns))
})
}

View File

@@ -25,12 +25,12 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
}
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
return wrapDataColumnError(sidecar, "receive data column sidecar", err)
return errors.Wrap(err, "receive data column sidecar")
}
wg.Go(func() error {
if err := s.processDataColumnSidecarsFromReconstruction(ctx, sidecar); err != nil {
return wrapDataColumnError(sidecar, "process data column sidecars from reconstruction", err)
return errors.Wrap(err, "process data column sidecars from reconstruction")
}
return nil
@@ -38,13 +38,7 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
wg.Go(func() error {
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromSidecar(sidecar)); err != nil {
if errors.Is(err, context.Canceled) {
// Do not log if the context was cancelled on purpose.
// (Still log other context errors such as deadlines exceeded).
return nil
}
return wrapDataColumnError(sidecar, "process data column sidecars from execution", err)
return errors.Wrap(err, "process data column sidecars from execution")
}
return nil
@@ -116,7 +110,3 @@ func (s *Service) allDataColumnSubnets(_ primitives.Slot) map[uint64]bool {
return allSubnets
}
func wrapDataColumnError(sidecar blocks.VerifiedRODataColumn, message string, err error) error {
return fmt.Errorf("%s - slot %d, root %s: %w", message, sidecar.SignedBlockHeader.Header.Slot, fmt.Sprintf("%#x", sidecar.BlockRoot()), err)
}
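`wrapDataColumnError` annotates failures with the sidecar's slot and block root while keeping the cause inspectable. A self-contained sketch of that wrapping pattern:
package main
import (
	"context"
	"errors"
	"fmt"
)
// wrapWithContext adds identifying fields to an error while preserving the
// wrapped cause via %w, so errors.Is/errors.As still work on the result.
func wrapWithContext(msg string, slot uint64, root [32]byte, err error) error {
	return fmt.Errorf("%s - slot %d, root %#x: %w", msg, slot, root, err)
}
func main() {
	err := wrapWithContext("receive data column sidecar", 42, [32]byte{0x01}, context.Canceled)
	fmt.Println(errors.Is(err, context.Canceled)) // true
}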

View File

@@ -614,10 +614,11 @@ func TestVerifyIndexInCommittee_SeenAggregatorEpoch(t *testing.T) {
},
}
require.Eventually(t, func() bool {
res, _ := r.validateAggregateAndProof(t.Context(), "", msg)
return res != pubsub.ValidationAccept
}, time.Second, 10*time.Millisecond, "Expected validation to reject duplicate aggregate")
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers.
if res, err := r.validateAggregateAndProof(t.Context(), "", msg); res == pubsub.ValidationAccept {
_ = err
t.Fatal("Validated status is true")
}
}
func TestValidateAggregateAndProof_BadBlock(t *testing.T) {

View File

@@ -992,6 +992,7 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
// Mark the proposer/slot as seen
r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers
// Prepare and validate the second message (clone)
buf := new(bytes.Buffer)
@@ -1009,11 +1010,9 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
}
// Since this is not an equivocation (same signature), it should be ignored
// Wait for the cached value to propagate through buffers
require.Eventually(t, func() bool {
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
return err == nil && res == pubsub.ValidationIgnore
}, time.Second, 10*time.Millisecond, "block with same signature should be ignored")
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
assert.NoError(t, err)
assert.Equal(t, pubsub.ValidationIgnore, res, "block with same signature should be ignored")
// Verify no slashings were created
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")

View File

@@ -0,0 +1,3 @@
## Fixed
- Fix missing return after version header check in SubmitAttesterSlashingsV2.

View File

@@ -0,0 +1,3 @@
## Fixed
- Fixed an incorrect constructor return type. [#16084](https://github.com/OffchainLabs/prysm/pull/16084)

View File

@@ -0,0 +1,2 @@
### Ignored
- Reverts the AutoNAT v2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100, as the libp2p upgrade fails inter-op testing.

View File

@@ -0,0 +1,3 @@
### Fixed
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.
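A minimal sketch of the guard described above, assuming a worker queue fed over a channel (the types are illustrative, not the verifier's actual API):
package example
import "context"
type work struct{ id int }
// submit skips queueing when the caller has already given up, and never
// blocks indefinitely on a saturated queue.
func submit(ctx context.Context, queue chan<- work, w work) error {
	if err := ctx.Err(); err != nil {
		return err // context already canceled: useless to queue
	}
	select {
	case queue <- w:
		return nil
	case <-ctx.Done():
		return ctx.Err() // avoid a blocked send if the queue stays full
	}
}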

View File

@@ -1,5 +0,0 @@
### Added
- Added an ephemeral debug log file for beacon and validator nodes that captures debug-level logs for 24 hours. It
also keeps one backup in case of size-based rotation. The log files are stored in `datadir/logs/`. This feature is
enabled by default and can be disabled by setting the `--disable-ephemeral-log-file` flag.

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p.

View File

@@ -1,4 +0,0 @@
### Changed
- Moved verbosity settings to be configurable per hook, rather than just globally. This allows us to control the
verbosity of each output independently.

View File

@@ -0,0 +1,3 @@
### Added
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices.
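A sketch of what such a wrapper typically provides (not necessarily Prysm's exact implementation): the SSZ hash tree root of a uint64 is its little-endian encoding placed in the first 8 bytes of a zeroed 32-byte root.
package example
import "encoding/binary"
// BuilderIndex is a typed uint64 for builder registry indices.
type BuilderIndex uint64
// HashTreeRoot returns the SSZ root of the index: the value serialized
// little-endian into the first 8 bytes of an otherwise zero 32-byte array.
func (b BuilderIndex) HashTreeRoot() ([32]byte, error) {
	var root [32]byte
	binary.LittleEndian.PutUint64(root[:8], uint64(b))
	return root, nil
}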

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix deadlock in data column gossip KZG batch verification when a caller times out preventing result delivery.
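The deadlock shape: a worker blocks sending a result to a caller that has already timed out and will never read. A sketch of the usual fix, dropping the result when the caller's context is done (channel and result types are illustrative):
package example
import "context"
type result struct{ ok bool }
// deliver never blocks forever: if the caller timed out, the result is
// discarded so the batch verifier can keep making progress.
func deliver(ctx context.Context, out chan<- result, r result) {
	select {
	case out <- r:
	case <-ctx.Done():
		// Caller gave up; drop the result instead of deadlocking.
	}
}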

View File

@@ -0,0 +1,3 @@
### Changed
- The `/eth/v2/beacon/pool/attestations` and `/eth/v1/beacon/pool/sync_committees` endpoints now return a 503 error if the node is still syncing. The REST API also now broadcasts immediately, matching the gRPC behavior.
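A sketch of the handler behavior described above, with a hypothetical syncChecker and broadcast function (not Prysm's actual handler code):
package example
import "net/http"
type syncChecker interface{ Syncing() bool }
// submitToPool returns 503 while the node syncs; otherwise it broadcasts
// immediately, mirroring the gRPC path.
func submitToPool(s syncChecker, broadcast func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if s.Syncing() {
			http.Error(w, "beacon node is currently syncing", http.StatusServiceUnavailable)
			return
		}
		if err := broadcast(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}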

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed a state replay issue in the REST API caused by the attester and sync committee duties endpoints.

View File

@@ -1,3 +0,0 @@
### Changed
- Changed the validator client's interpretation of `/eth/v1/node/health` from the `IsHealthy` check to `IsReady`; a 206 response now returns false because the node is still syncing.

View File

@@ -0,0 +1,3 @@
### Changed
- The e2e sync committee evaluator now skips the first slot after startup (we already skip the fork epoch for these checks). The skip applies only at startup, because Altair is always active from epoch 0 and validators need time to warm up.

View File

@@ -1,3 +0,0 @@
### Fixed
- Don't call trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize) twice.

3
changelog/manu-agg.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them.
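One way to implement this is to key pending aggregates on everything except the aggregator index, so duplicates collapse to a single entry. A hypothetical sketch (the field choice is illustrative):
package example
import (
	"crypto/sha256"
	"encoding/binary"
)
// pendingKey derives a dedup key from the attestation data root, the
// aggregation bits, and the slot, deliberately omitting the aggregator index.
func pendingKey(dataRoot [32]byte, aggregationBits []byte, slot uint64) [32]byte {
	h := sha256.New()
	h.Write(dataRoot[:])
	h.Write(aggregationBits)
	var b [8]byte
	binary.LittleEndian.PutUint64(b[:], slot)
	h.Write(b[:])
	var key [32]byte
	copy(key[:], h.Sum(nil))
	return key
}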

View File

@@ -0,0 +1,2 @@
### Changed
- `validateDataColumn`: Remove error logs.

View File

@@ -0,0 +1,2 @@
### Ignored
- Added a test requirement to `PULL_REQUEST_TEMPLATE.md`.

View File

@@ -1,2 +0,0 @@
### Added
- `--disable-get-blobs-v2` flag.

View File

@@ -0,0 +1,7 @@
### Added
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs.
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer.
### Changed
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs.
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs.
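A sketch of how a histogram like this is typically defined and fed with client_golang (bucket boundaries are illustrative):
package example
import (
	"time"
	"github.com/prometheus/client_golang/prometheus"
)
var cellsAndProofsMs = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name:    "cells_and_proofs_from_structured_computation_milliseconds",
	Help:    "Time to compute cells and proofs from structured blobs.",
	Buckets: prometheus.ExponentialBuckets(10, 2, 10), // 10ms .. ~5s
})
func init() { prometheus.MustRegister(cellsAndProofsMs) }
// timeComputation measures an arbitrary computation in milliseconds.
func timeComputation(compute func()) {
	start := time.Now()
	compute()
	cellsAndProofsMs.Observe(float64(time.Since(start).Milliseconds()))
}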

View File

@@ -1,3 +0,0 @@
### Added
- Batch publish data columns for faster data propagation.

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed possible race when validating two attestations at the same time.

View File

@@ -0,0 +1,3 @@
### Added
- Track the dependent root of the latest finalized checkpoint in forkchoice.

View File

@@ -1,2 +0,0 @@
### Added
- Add a feature flag to pass spectests with low validator count.

View File

@@ -0,0 +1,3 @@
### Fixed
- Do not process slots and copy states for next epoch proposers after Fulu.

View File

@@ -0,0 +1,3 @@
### Fixed
- Do not error when committee has been computed correctly but updating the cache failed.

View File

@@ -1,2 +0,0 @@
### Added
- Update spectests to v1.7.0-alpha.0

View File

@@ -1,3 +0,0 @@
### Ignored
- Updated changelog for v7.1.2

3
changelog/pvl-v7.0.1.md Normal file
View File

@@ -0,0 +1,3 @@
### Ignored
- Updated CHANGELOG.md for v7.0.1 patch release

3
changelog/pvl-v7.1.0.md Normal file
View File

@@ -0,0 +1,3 @@
### Ignored
- Changelog for v7.1.0

View File

@@ -1,3 +0,0 @@
### Ignored
- Added changelog for v7.1.1

View File

@@ -1,3 +0,0 @@
### Changed
- Replaced `time.Sleep` with `require.Eventually` polling in tests to fix flaky behavior caused by race conditions between goroutines and assertions.
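The pattern, as seen in the test diffs above; this sketch uses testify's require.Eventually, which polls a condition at a fixed tick instead of sleeping a fixed duration:
package example
import (
	"testing"
	"time"
	"github.com/stretchr/testify/require"
)
func TestEventuallyPattern(t *testing.T) {
	done := make(chan struct{})
	go func() {
		time.Sleep(50 * time.Millisecond) // simulated background work
		close(done)
	}()
	// Poll every 10ms, up to 1s, rather than sleeping and asserting once.
	require.Eventually(t, func() bool {
		select {
		case <-done:
			return true
		default:
			return false
		}
	}, time.Second, 10*time.Millisecond, "background work never completed")
}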

View File

@@ -0,0 +1,3 @@
### Added
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement.
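The shape the analyzer enforces, sketched with a stand-in for `httputil.HandleError` (the real helper writes the error response; the rule is only that the call must be followed by a return):
package example
import "net/http"
// handleError stands in for httputil.HandleError in this sketch.
func handleError(w http.ResponseWriter, msg string, code int) {
	http.Error(w, msg, code)
}
func getSomething(fetch func() (string, error)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		v, err := fetch()
		if err != nil {
			handleError(w, err.Error(), http.StatusInternalServerError)
			return // omitting this return is exactly what the analyzer flags
		}
		_, _ = w.Write([]byte(v))
	}
}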

View File

@@ -0,0 +1,3 @@
### Ignored
- Use `WriteStateFetchError` in API handlers whenever possible.

View File

@@ -0,0 +1,3 @@
### Removed
- Removed an unnecessary copy from `Eth1DataHasEnoughSupport`.

View File

@@ -0,0 +1,3 @@
### Added
- Added the missing Fulu values to the beacon config so that presets are not missing from the `/eth/v1/config/spec` beacon API response.

View File

@@ -0,0 +1,3 @@
### Added
- Proposal design document for implementing graffiti. It is currently empty by default, and the idea is to have it take the form `GE168dPR63af`.

View File

@@ -0,0 +1,3 @@
### Changed
- Optimise `migrateToCold` by avoiding a brute-force for loop.

View File

@@ -1,3 +0,0 @@
### Changed
- Performance improvement in state (`MarshalSSZTo`): use `copy()` instead of an unnecessary byte-by-byte loop.

View File

@@ -1,3 +0,0 @@
### Added
- Added basic Gloas builder support (`Builder` message and `BeaconStateGloas` `builders`/`next_withdrawal_builder_index` fields)

View File

@@ -1,3 +0,0 @@
### Added
- Add Gloas latest execution bid processing

View File

@@ -356,15 +356,4 @@ var (
Usage: "A comma-separated list of exponents (of 2) in decreasing order, defining the state diff hierarchy levels. The last exponent must be greater than or equal to 5.",
Value: cli.NewIntSlice(21, 18, 16, 13, 11, 9, 5),
}
// DisableEphemeralLogFile disables the 24 hour debug log file.
DisableEphemeralLogFile = &cli.BoolFlag{
Name: "disable-ephemeral-log-file",
Usage: "Disables the creation of a debug log file that keeps 24 hours of logs.",
Value: false,
}
// DisableGetBlobsV2 disables the engine_getBlobsV2 usage.
DisableGetBlobsV2 = &cli.BoolFlag{
Name: "disable-get-blobs-v2",
Usage: "Disables the engine_getBlobsV2 usage.",
}
)

View File

@@ -17,7 +17,6 @@ type GlobalFlags struct {
SubscribeToAllSubnets bool
Supernode bool
SemiSupernode bool
DisableGetBlobsV2 bool
MinimumSyncPeers int
MinimumPeersPerSubnet int
MaxConcurrentDials int
@@ -73,11 +72,6 @@ func ConfigureGlobalFlags(ctx *cli.Context) error {
cfg.SemiSupernode = true
}
if ctx.Bool(DisableGetBlobsV2.Name) {
log.Warning("Disabling `engine_getBlobsV2` API")
cfg.DisableGetBlobsV2 = true
}
// State-diff-exponents
cfg.StateDiffExponents = ctx.IntSlice(StateDiffExponents.Name)
if features.Get().EnableStateDiff {

View File

@@ -148,7 +148,6 @@ var appFlags = []cli.Flag{
flags.SlasherDirFlag,
flags.SlasherFlag,
flags.JwtId,
flags.DisableGetBlobsV2,
storage.BlobStoragePathFlag,
storage.DataColumnStoragePathFlag,
storage.BlobStorageLayout,
@@ -158,7 +157,6 @@ var appFlags = []cli.Flag{
dasFlags.BackfillOldestSlot,
dasFlags.BlobRetentionEpochFlag,
flags.BatchVerifierLimit,
flags.DisableEphemeralLogFile,
}
func init() {
@@ -171,15 +169,8 @@ func before(ctx *cli.Context) error {
return errors.Wrap(err, "failed to load flags from config file")
}
// determine log verbosity
verbosity := ctx.String(cmd.VerbosityFlag.Name)
verbosityLevel, err := logrus.ParseLevel(verbosity)
if err != nil {
return errors.Wrap(err, "failed to parse log verbosity")
}
logs.SetLoggingLevel(verbosityLevel)
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
// disabling logrus default output so we can control it via different hooks
@@ -193,9 +184,8 @@ func before(ctx *cli.Context) error {
formatter.ForceColors = true
logrus.AddHook(&logs.WriterHook{
Formatter: formatter,
Writer: os.Stderr,
AllowedLevels: logrus.AllLevels[:verbosityLevel+1],
Formatter: formatter,
Writer: os.Stderr,
})
case "fluentd":
f := joonix.NewFormatter()
@@ -219,17 +209,11 @@ func before(ctx *cli.Context) error {
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}
if !ctx.Bool(flags.DisableEphemeralLogFile.Name) {
if err := logs.ConfigureEphemeralLogFile(ctx.String(cmd.DataDirFlag.Name), ctx.App.Name); err != nil {
log.WithError(err).Error("Failed to configure debug log file")
}
}
if err := cmd.ExpandSingleEndpointIfFile(ctx, flags.ExecutionEngineEndpoint); err != nil {
return errors.Wrap(err, "failed to expand single endpoint")
}
@@ -306,7 +290,7 @@ func startNode(ctx *cli.Context, cancel context.CancelFunc) error {
if err != nil {
return err
}
logrus.SetLevel(level)
// Set libp2p logger to only panic logs for the info level.
golog.SetAllLoggers(golog.LevelPanic)

View File

@@ -169,7 +169,6 @@ var appHelpFlagGroups = []flagGroup{
flags.ExecutionJWTSecretFlag,
flags.JwtId,
flags.InteropMockEth1DataVotesFlag,
flags.DisableGetBlobsV2,
},
},
{ // Flags relevant to configuring beacon chain monitoring.
@@ -200,7 +199,6 @@ var appHelpFlagGroups = []flagGroup{
cmd.LogFormat,
cmd.LogFileName,
cmd.VerbosityFlag,
flags.DisableEphemeralLogFile,
},
},
{ // Feature flags.

View File

@@ -86,7 +86,7 @@ func main() {
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format, level); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}

View File

@@ -410,12 +410,6 @@ var (
Usage: "Maximum number of health checks to perform before exiting if not healthy. Set to 0 or a negative number for indefinite checks.",
Value: DefaultMaxHealthChecks,
}
// DisableEphemeralLogFile disables the 24 hour debug log file.
DisableEphemeralLogFile = &cli.BoolFlag{
Name: "disable-ephemeral-log-file",
Usage: "Disables the creation of a debug log file that keeps 24 hours of logs.",
Value: false,
}
)
// DefaultValidatorDir returns OS-specific default validator directory.

View File

@@ -115,7 +115,6 @@ var appFlags = []cli.Flag{
debug.BlockProfileRateFlag,
debug.MutexProfileFractionFlag,
cmd.AcceptTosFlag,
flags.DisableEphemeralLogFile,
}
func init() {
@@ -148,14 +147,6 @@ func main() {
return err
}
// determine log verbosity
verbosity := ctx.String(cmd.VerbosityFlag.Name)
verbosityLevel, err := logrus.ParseLevel(verbosity)
if err != nil {
return errors.Wrap(err, "failed to parse log verbosity")
}
logs.SetLoggingLevel(verbosityLevel)
logFileName := ctx.String(cmd.LogFileName.Name)
format := ctx.String(cmd.LogFormat.Name)
@@ -172,9 +163,8 @@ func main() {
formatter.ForceColors = true
logrus.AddHook(&logs.WriterHook{
Formatter: formatter,
Writer: os.Stderr,
AllowedLevels: logrus.AllLevels[:verbosityLevel+1],
Formatter: formatter,
Writer: os.Stderr,
})
case "fluentd":
f := joonix.NewFormatter()
@@ -195,17 +185,11 @@ func main() {
}
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName, format, verbosityLevel); err != nil {
if err := logs.ConfigurePersistentLogging(logFileName, format); err != nil {
log.WithError(err).Error("Failed to configure logging to disk.")
}
}
if !ctx.Bool(flags.DisableEphemeralLogFile.Name) {
if err := logs.ConfigureEphemeralLogFile(ctx.String(cmd.DataDirFlag.Name), ctx.App.Name); err != nil {
log.WithError(err).Error("Failed to configure debug log file")
}
}
// Fix data dir for Windows users.
outdatedDataDir := filepath.Join(file.HomeDir(), "AppData", "Roaming", "Eth2Validators")
currentDataDir := flags.DefaultValidatorDir()

View File

@@ -75,7 +75,6 @@ var appHelpFlagGroups = []flagGroup{
cmd.GrpcMaxCallRecvMsgSizeFlag,
cmd.AcceptTosFlag,
cmd.ApiTimeoutFlag,
flags.DisableEphemeralLogFile,
},
},
{

View File

@@ -80,7 +80,6 @@ type Flags struct {
SaveInvalidBlob bool // SaveInvalidBlob saves invalid blob to temp.
EnableDiscoveryReboot bool // EnableDiscoveryReboot allows the node to have its local listener to be rebooted in the event of discovery issues.
LowValcountSweep bool // LowValcountSweep bounds withdrawal sweep by validator count.
// KeystoreImportDebounceInterval specifies the time duration the validator waits to reload new keys if they have
// changed on disk. This feature is for advanced use cases only.
@@ -285,10 +284,6 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(ignoreUnviableAttestations)
cfg.IgnoreUnviableAttestations = true
}
if ctx.Bool(lowValcountSweep.Name) {
logEnabled(lowValcountSweep)
cfg.LowValcountSweep = true
}
if ctx.IsSet(EnableStateDiff.Name) {
logEnabled(EnableStateDiff)

View File

@@ -211,13 +211,6 @@ var (
Name: "ignore-unviable-attestations",
Usage: "Ignores attestations whose target state is not viable with respect to the current head (avoid expensive state replay from lagging attesters).",
}
// lowValcountSweep bounds withdrawal sweep by validator count.
lowValcountSweep = &cli.BoolFlag{
Name: "low-valcount-sweep",
Usage: "Uses validator count bound for withdrawal sweep when validator count is less than MaxValidatorsPerWithdrawalsSweep.",
Value: false,
Hidden: true,
}
)
// devModeFlags holds list of flags that are set when development mode is on.
@@ -279,7 +272,6 @@ var BeaconChainFlags = combinedFlags([]cli.Flag{
enableExperimentalAttestationPool,
forceHeadFlag,
blacklistRoots,
lowValcountSweep,
}, deprecatedBeaconFlags, deprecatedFlags, upcomingDeprecation)
func combinedFlags(flags ...[]cli.Flag) []cli.Flag {

View File

@@ -19,4 +19,6 @@ func testFieldParametersMatchConfig(t *testing.T) {
require.Equal(t, uint64(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().MaxAttestations)), uint64(fieldparams.CurrentEpochAttestationsLength))
require.Equal(t, uint64(params.BeaconConfig().EpochsPerSlashingsVector), uint64(fieldparams.SlashingsLength))
require.Equal(t, params.BeaconConfig().SyncCommitteeSize, uint64(fieldparams.SyncCommitteeLength))
require.Equal(t, params.BeaconConfig().NumberOfColumns, uint64(fieldparams.NumberOfColumns))
require.Equal(t, params.BeaconConfig().FieldElementsPerCell, uint64(fieldparams.CellsPerBlob))
}

View File

@@ -9,7 +9,6 @@ const (
RandaoMixesLength = 65536 // EPOCHS_PER_HISTORICAL_VECTOR
HistoricalRootsLength = 16777216 // HISTORICAL_ROOTS_LIMIT
ValidatorRegistryLimit = 1099511627776 // VALIDATOR_REGISTRY_LIMIT
BuilderRegistryLimit = 1099511627776 // BUILDER_REGISTRY_LIMIT
Eth1DataVotesLength = 2048 // SLOTS_PER_ETH1_VOTING_PERIOD
PreviousEpochAttestationsLength = 4096 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
CurrentEpochAttestationsLength = 4096 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH

View File

@@ -9,7 +9,6 @@ const (
RandaoMixesLength = 64 // EPOCHS_PER_HISTORICAL_VECTOR
HistoricalRootsLength = 16777216 // HISTORICAL_ROOTS_LIMIT
ValidatorRegistryLimit = 1099511627776 // VALIDATOR_REGISTRY_LIMIT
BuilderRegistryLimit = 1099511627776 // BUILDER_REGISTRY_LIMIT
Eth1DataVotesLength = 32 // SLOTS_PER_ETH1_VOTING_PERIOD
PreviousEpochAttestationsLength = 1024 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
CurrentEpochAttestationsLength = 1024 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH

View File

@@ -56,12 +56,10 @@ type BeaconChainConfig struct {
EffectiveBalanceIncrement uint64 `yaml:"EFFECTIVE_BALANCE_INCREMENT" spec:"true"` // EffectiveBalanceIncrement is used for converting the high balance into the low balance for validators.
// Initial value constants.
BLSWithdrawalPrefixByte byte `yaml:"BLS_WITHDRAWAL_PREFIX" spec:"true"` // BLSWithdrawalPrefixByte is used for BLS withdrawal and it's the first byte.
ETH1AddressWithdrawalPrefixByte byte `yaml:"ETH1_ADDRESS_WITHDRAWAL_PREFIX" spec:"true"` // ETH1AddressWithdrawalPrefixByte is used for withdrawals and it's the first byte.
CompoundingWithdrawalPrefixByte byte `yaml:"COMPOUNDING_WITHDRAWAL_PREFIX" spec:"true"` // CompoundingWithdrawalPrefixByte is used for compounding withdrawals and it's the first byte.
BuilderWithdrawalPrefixByte byte `yaml:"BUILDER_WITHDRAWAL_PREFIX" spec:"true"` // BuilderWithdrawalPrefixByte is used for builder withdrawals and it's the first byte.
BuilderIndexSelfBuild primitives.BuilderIndex `yaml:"BUILDER_INDEX_SELF_BUILD" spec:"true"` // BuilderIndexSelfBuild indicates proposer self-built payloads.
ZeroHash [32]byte // ZeroHash is used to represent a zeroed out 32 byte array.
BLSWithdrawalPrefixByte byte `yaml:"BLS_WITHDRAWAL_PREFIX" spec:"true"` // BLSWithdrawalPrefixByte is used for BLS withdrawal and it's the first byte.
ETH1AddressWithdrawalPrefixByte byte `yaml:"ETH1_ADDRESS_WITHDRAWAL_PREFIX" spec:"true"` // ETH1AddressWithdrawalPrefixByte is used for withdrawals and it's the first byte.
CompoundingWithdrawalPrefixByte byte `yaml:"COMPOUNDING_WITHDRAWAL_PREFIX" spec:"true"` // CompoundingWithdrawalPrefixByte is used for compounding withdrawals and it's the first byte.
ZeroHash [32]byte // ZeroHash is used to represent a zeroed out 32 byte array.
// Time parameters constants.
GenesisDelay uint64 `yaml:"GENESIS_DELAY" spec:"true"` // GenesisDelay is the minimum number of seconds to delay starting the Ethereum Beacon Chain genesis. Must be at least 1 second.
@@ -141,7 +139,6 @@ type BeaconChainConfig struct {
DomainApplicationMask [4]byte `yaml:"DOMAIN_APPLICATION_MASK" spec:"true"` // DomainApplicationMask defines the BLS signature domain for application mask.
DomainApplicationBuilder [4]byte `yaml:"DOMAIN_APPLICATION_BUILDER" spec:"true"` // DomainApplicationBuilder defines the BLS signature domain for application builder.
DomainBLSToExecutionChange [4]byte `yaml:"DOMAIN_BLS_TO_EXECUTION_CHANGE" spec:"true"` // DomainBLSToExecutionChange defines the BLS signature domain to change withdrawal addresses to ETH1 prefix
DomainBeaconBuilder [4]byte `yaml:"DOMAIN_BEACON_BUILDER" spec:"true"` // DomainBeaconBuilder defines the BLS signature domain for beacon block builder.
// Prysm constants.
GenesisValidatorsRoot [32]byte // GenesisValidatorsRoot is the root hash of the genesis validators.
@@ -292,6 +289,11 @@ type BeaconChainConfig struct {
MaxRequestDataColumnSidecars uint64 `yaml:"MAX_REQUEST_DATA_COLUMN_SIDECARS" spec:"true"` // MaxRequestDataColumnSidecars is the maximum number of data column sidecars in a single request
ValidatorCustodyRequirement uint64 `yaml:"VALIDATOR_CUSTODY_REQUIREMENT" spec:"true"` // ValidatorCustodyRequirement is the minimum number of custody groups an honest node with validators attached custodies and serves samples from
BalancePerAdditionalCustodyGroup uint64 `yaml:"BALANCE_PER_ADDITIONAL_CUSTODY_GROUP" spec:"true"` // BalancePerAdditionalCustodyGroup is the balance increment corresponding to one additional group to custody.
FieldElementsPerCell uint64 `yaml:"FIELD_ELEMENTS_PER_CELL" spec:"true"` // FieldElementsPerCell is the number of field elements in a cell.
FieldElementsPerExtBlob uint64 `yaml:"FIELD_ELEMENTS_PER_EXT_BLOB" spec:"true"` // FieldElementsPerExtBlob is the number of field elements in an extended blob.
KzgCommitmentsInclusionProofDepth uint64 `yaml:"KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH" spec:"true"` // KzgCommitmentsInclusionProofDepth is the depth of the KZG commitments inclusion proof.
CellsPerExtBlob uint64 `yaml:"CELLS_PER_EXT_BLOB" spec:"true"` // CellsPerExtBlob is the number of cells in an extended blob.
NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns is the number of data columns in the network.
// Networking Specific Parameters
MaxPayloadSize uint64 `yaml:"MAX_PAYLOAD_SIZE" spec:"true"` // MAX_PAYLOAD_SIZE is the maximum allowed size of uncompressed payload in gossip messages and rpc chunks.

View File

@@ -29,7 +29,6 @@ var placeholderFields = []string{
"ATTESTATION_DEADLINE",
"ATTESTATION_DUE_BPS_GLOAS",
"BLOB_SIDECAR_SUBNET_COUNT_FULU",
"CELLS_PER_EXT_BLOB",
"CONTRIBUTION_DUE_BPS_GLOAS",
"EIP6110_FORK_EPOCH",
"EIP6110_FORK_VERSION",
@@ -44,26 +43,20 @@ var placeholderFields = []string{
"EIP7928_FORK_EPOCH",
"EIP7928_FORK_VERSION",
"EPOCHS_PER_SHUFFLING_PHASE",
"FIELD_ELEMENTS_PER_CELL", // Configured as a constant in config/fieldparams/mainnet.go
"FIELD_ELEMENTS_PER_EXT_BLOB", // Configured in proto/ssz_proto_library.bzl
"GLOAS_FORK_EPOCH",
"GLOAS_FORK_VERSION",
"INCLUSION_LIST_SUBMISSION_DEADLINE",
"INCLUSION_LIST_SUBMISSION_DUE_BPS",
"KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH", // Configured in proto/ssz_proto_library.bzl
"MAX_BYTES_PER_INCLUSION_LIST",
"MAX_REQUEST_BLOB_SIDECARS_FULU",
"MAX_REQUEST_INCLUSION_LIST",
"MAX_REQUEST_PAYLOADS", // Compile time constant on BeaconBlockBody.ExecutionRequests
"MIN_BUILDER_WITHDRAWABILITY_DELAY",
"NUMBER_OF_COLUMNS", // Configured as a constant in config/fieldparams/mainnet.go
"PAYLOAD_ATTESTATION_DUE_BPS",
"PROPOSER_INCLUSION_LIST_CUTOFF",
"PROPOSER_INCLUSION_LIST_CUTOFF_BPS",
"PROPOSER_SELECTION_GAP",
"SYNC_MESSAGE_DUE_BPS_GLOAS",
"TARGET_NUMBER_OF_PEERS",
"UPDATE_TIMEOUT",
"VIEW_FREEZE_CUTOFF_BPS",
"VIEW_FREEZE_DEADLINE",
"WHISK_EPOCHS_PER_SHUFFLING_PHASE",

Some files were not shown because too many files have changed in this diff.