Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 13:58:09 -05:00)

Compare commits: 12 commits (graffiti-i...eas)
| SHA1 |
|---|
| 77ea14fdee |
| 0aff28e0e7 |
| c96d188468 |
| 0fcb922702 |
| 3646a77bfb |
| 1541558261 |
| 1a6252ade4 |
| 27c009e7ff |
| ffad861e2c |
| 792fa22099 |
| c5b3d3531c |
| cc4510bb77 |
.gitignore (vendored)

@@ -44,3 +44,6 @@ tmp
# spectest coverage reports
report.txt

# execution client data
execution/
CHANGELOG.md
@@ -4,6 +4,66 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v7.1.2](https://github.com/prysmaticlabs/prysm/compare/v7.1.1...v7.1.2) - 2026-01-07

Happy new year! This patch release is very small. The main improvement is better management of pending attestation aggregation via [PR 16153](https://github.com/OffchainLabs/prysm/pull/16153).

### Added

- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16169)

### Changed
- The `/eth/v2/beacon/pool/attestations` and `/eth/v1/beacon/pool/sync_committees` endpoints now return a 503 error if the node is still syncing. The REST API also now broadcasts immediately, mirroring the gRPC behavior. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16152)
- `validateDataColumn`: Remove error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16157)
- Pending aggregates: When the pending queue holds multiple aggregated attestations that differ only by aggregator index, only one of them is processed (see the sketch below). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16153)
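As a rough illustration of that change, the deduplication can be thought of as a single pass over the pending queue keyed on the underlying attestation. The sketch below uses a simplified stand-in struct rather than Prysm's actual types, and it is not the implementation from the PR.

```go
package pending

// aggregate is a simplified stand-in for a pending aggregate-and-proof message;
// only the fields relevant to deduplication are modeled here.
type aggregate struct {
	AggregatorIndex uint64
	AttRoot         [32]byte // root of the aggregated attestation (data + aggregation bits)
}

// dedupe keeps a single aggregate per underlying attestation, ignoring which
// validator happened to act as the aggregator.
func dedupe(queue []aggregate) []aggregate {
	seen := make(map[[32]byte]bool, len(queue))
	out := make([]aggregate, 0, len(queue))
	for _, agg := range queue {
		if seen[agg.AttRoot] {
			continue // same attestation already queued under another aggregator
		}
		seen[agg.AttRoot] = true
		out = append(out, agg)
	}
	return out
}
```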

### Fixed

- Fix the missing fork version object mapping for Fulu in light client p2p. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16151)
- Do not process slots and copy states for next epoch proposers after Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16168)

## [v7.1.1](https://github.com/prysmaticlabs/prysm/compare/v7.1.0...v7.1.1) - 2025-12-18

Release highlights:

- Fixed a potential deadlock scenario in data column batch verification
- Improved processing and metrics for cells and proofs

We are aware of [an issue](https://github.com/OffchainLabs/prysm/issues/16160) where Prysm struggles to sync from an out-of-sync state. We will have another release before the end of the year to address this issue.

Our postmortem document for the December 4th mainnet issue has been published on our [documentation site](https://prysm.offchainlabs.com/docs/misc/mainnet-postmortems/).

### Added

- Track the dependent root of the latest finalized checkpoint in forkchoice. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16103)
- Proposal design document for implementing default graffiti. Graffiti is currently empty by default; the proposal is to populate it with a compact string of the form `GE168dPR63af` (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15983)
- Add support for detecting and logging per-address reachability via libp2p AutoNAT v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16100)
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16134)
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
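For the graffiti proposal above, one plausible reading of the `GE168dPR63af` shape is a pair of two-letter client codes, each followed by four hex characters of a version or commit identifier. The helper below is purely a hypothetical illustration of composing such a string; the authoritative format is whatever the linked design document specifies.

```go
package graffiti

import "fmt"

// clientInfo describes one client for graffiti purposes: a two-letter code
// (e.g. "GE", "PR") and a short hex fragment of its version or commit.
// Both fields are illustrative assumptions, not Prysm API types.
type clientInfo struct {
	Code    string
	ShortID string
}

// defaultGraffiti joins two client identifiers into a compact string
// such as "GE168dPR63af".
func defaultGraffiti(el, cl clientInfo) string {
	return fmt.Sprintf("%s%s%s%s", el.Code, el.ShortID, cl.Code, cl.ShortID)
}
```

For example, `defaultGraffiti(clientInfo{"GE", "168d"}, clientInfo{"PR", "63af"})` yields `GE168dPR63af`.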

### Changed

- Optimized the migrate-to-cold state migration by removing a brute-force `for` loop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16101)
- The e2e sync committee evaluator now skips the first slot after startup (the fork epoch was already skipped for these checks). The skip applies only at startup, because Altair is active from epoch 0 in e2e runs and validators need time to warm up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16145)
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs (see the sketch after this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
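The two parallelization entries amount to fanning the per-blob cell-and-proof computation out across goroutines and collecting the results. A minimal sketch of that pattern, assuming a generic `computeOne` callback in place of Prysm's actual `ComputeCellsAndProofsFromFlat`/`FromStructured` routines:

```go
package kzgutil

import "golang.org/x/sync/errgroup"

// computeAll runs one cell-and-proof computation per blob in parallel.
// computeOne stands in for a per-blob routine; error handling and concurrency
// limits in the real PR may differ.
func computeAll[B, R any](blobs []B, computeOne func(B) (R, error)) ([]R, error) {
	results := make([]R, len(blobs))
	var g errgroup.Group
	for i, blob := range blobs {
		i, blob := i, blob // capture loop variables (needed for Go < 1.22)
		g.Go(func() error {
			r, err := computeOne(blob)
			if err != nil {
				return err
			}
			results[i] = r // each goroutine writes a distinct index, so no locking is needed
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}
```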

### Removed

- Removed an unnecessary copy from `Eth1DataHasEnoughSupport`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16118)

### Fixed

- Fixed an incorrect constructor return type ([#16084](https://github.com/OffchainLabs/prysm/pull/16084)). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16084)
- Fixed a possible race when validating two attestations at the same time. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16105)
- Fix missing return after version header check in SubmitAttesterSlashingsV2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16126)
- Fix deadlock in data column gossip KZG batch verification when a caller times out, preventing result delivery. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16141)
- Fixed a state replay issue in the REST API caused by the attester and sync committee duties endpoints. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16136)
- Do not error when the committee has been computed correctly but updating the cache failed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16142)
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs (see the sketch below). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16144)
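The last fix follows the standard Go pattern of guarding a blocking channel send with a `select` on the caller's context, so a canceled or timed-out caller never queues work it can no longer wait for. A minimal sketch with hypothetical types (not the verifier's actual queue):

```go
package verify

import "context"

// submit hands a task to a worker queue, but gives up if the caller's context
// is already canceled or becomes canceled while the queue is full.
func submit(ctx context.Context, queue chan<- func(), task func()) error {
	// Fast path: never queue work for a caller that has already gone away.
	if err := ctx.Err(); err != nil {
		return err
	}
	select {
	case queue <- task:
		return nil
	case <-ctx.Done():
		return ctx.Err() // caller timed out or was canceled; do not block forever
	}
}
```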

## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10

This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
WORKSPACE

@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)

consensus_spec_version = "v1.6.0"
consensus_spec_version = "v1.7.0-alpha.0"

load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")

consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-54hTaUNF9nLg+hRr3oHoq0yjZpW3MNiiUUuCQu6Rajk=",
"minimal": "sha256-1JHIGg3gVMjvcGYRHR5cwdDgOvX47oR/MWp6gyAeZfA=",
"mainnet": "sha256-292h3W2Ffts0YExgDTyxYe9Os7R0bZIXuAaMO8P6kl4=",
"general": "sha256-b+rJOuVqq+Dy53quPcNYcQwPFoMU7Wp7tdUVe7n0g8w=",
"minimal": "sha256-qxRIxtjPxVsVCY90WsBJKhk0027XDSmhjnRvRN14V1c=",
"mainnet": "sha256-NsuOQG3LzeiEE1TrWuvQ6vu6BboHv7h7f/RTS0pWkCs=",
},
version = consensus_spec_version,
)

@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-VzBgrEokvYSMIIXVnSA5XS9I3m9oxpvToQGxC1N5lzw=",
integrity = "sha256-hwNdUBgdBrkk6pWIpNYbzbwswUuOu6AMD2exN8uv+QQ=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -74,7 +74,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

@@ -174,7 +173,6 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -17,6 +17,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpbv1 "github.com/OffchainLabs/prysm/v7/proto/eth/v1"

@@ -130,12 +131,10 @@ func TestService_ReceiveBlock(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
// Hacky sleep, should use a better way to be able to resolve the race
// between event being sent out and processed.
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
},
},
{

@@ -222,10 +221,10 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
})
wg.Wait()
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
// Verify fork choice has processed the block. (Genesis block and the new block)
assert.Equal(t, 2, s.cfg.ForkChoiceStore.NodeCount())
}

@@ -265,10 +264,10 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
},
check: func(t *testing.T, s *Service) {
time.Sleep(100 * time.Millisecond)
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
t.Errorf("Received %d state notifications, expected at least 1", recvd)
}
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) >= 1
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
},
},
}
@@ -512,8 +511,9 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)

time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)

@@ -552,8 +552,9 @@ func Test_executePostFinalizationTasks(t *testing.T) {
s.cfg.StateNotifier = notifier
s.executePostFinalizationTasks(s.ctx, headState)

time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
require.Equal(t, 1, len(notifier.ReceivedEvents()))
require.Eventually(t, func() bool {
return len(notifier.ReceivedEvents()) == 1
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)

@@ -596,13 +597,13 @@ func TestProcessLightClientBootstrap(t *testing.T) {

s.executePostFinalizationTasks(s.ctx, l.AttestedState)

// wait for the goroutine to finish processing
time.Sleep(1 * time.Second)

// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
require.NoError(t, err)
require.NotNil(t, b)
// Wait for the light client bootstrap to be saved (runs in goroutine)
var b interfaces.LightClientBootstrap
require.Eventually(t, func() bool {
var err error
b, err = s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
return err == nil && b != nil
}, 5*time.Second, 50*time.Millisecond, "Light client bootstrap was not saved within timeout")

btst, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.FinalizedState.Slot(), l.FinalizedState, l.FinalizedBlock)
require.NoError(t, err)
@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"

@@ -30,7 +29,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"

@@ -291,19 +289,6 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "failed to initialize blockchain service")
}

if !params.FuluEnabled() {
return nil
}

earliestAvailableSlot, custodySubnetCount, err := s.updateCustodyInfoInDB(saved.Slot())
if err != nil {
return errors.Wrap(err, "could not get and save custody group count")
}

if _, _, err := s.cfg.P2P.UpdateCustodyInfo(earliestAvailableSlot, custodySubnetCount); err != nil {
return errors.Wrap(err, "update custody info")
}

return nil
}

@@ -468,73 +453,6 @@ func (s *Service) removeStartupState() {
s.cfg.FinalizedStateAtStartUp = nil
}

// UpdateCustodyInfoInDB updates the custody information in the database.
// It returns the (potentially updated) custody group count and the earliest available slot.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSupernode := flags.Get().Supernode
isSemiSupernode := flags.Get().SemiSupernode

cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement

// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
wasSupernode, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
if err != nil {
return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
}

// Compute the target custody group count based on current flag configuration.
targetCustodyGroupCount := custodyRequirement

// Supernode: custody all groups (either currently set or previously enabled)
if isSupernode {
targetCustodyGroupCount = cfg.NumberOfCustodyGroups
}

// Semi-supernode: custody minimum needed for reconstruction, or custody requirement if higher
if isSemiSupernode {
semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
if err != nil {
return 0, 0, errors.Wrap(err, "minimum custody group count")
}

targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
}

// Safely compute the fulu fork slot.
fuluForkSlot, err := fuluForkSlot()
if err != nil {
return 0, 0, errors.Wrap(err, "fulu fork slot")
}

// If slot is before the fulu fork slot, then use the earliest stored slot as the reference slot.
if slot < fuluForkSlot {
slot, err = s.cfg.BeaconDB.EarliestSlot(s.ctx)
if err != nil {
return 0, 0, errors.Wrap(err, "earliest slot")
}
}

earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}

if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,
"target": cfg.NumberOfCustodyGroups,
}).Info("Supernode mode enabled. Will custody all data columns going forward.")
}

if wasSupernode && !isSupernode {
log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
}

return earliestAvailableSlot, actualCustodyGroupCount, nil
}

func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {
currentTime := prysmTime.Now()
if currentTime.After(genesisTime) {

@@ -551,19 +469,3 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
go slots.CountdownToGenesis(ctx, genesisTime, uint64(gState.NumValidators()), gRoot)
}

func fuluForkSlot() (primitives.Slot, error) {
cfg := params.BeaconConfig()

fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}

forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
if err != nil {
return 0, errors.Wrap(err, "epoch start")
}

return forkFuluSlot, nil
}
@@ -23,11 +23,9 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"

@@ -596,218 +594,3 @@ func TestNotifyIndex(t *testing.T) {
t.Errorf("Notifier channel did not receive the index")
}
}

func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
)

params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
params.OverrideBeaconConfig(cfg)

ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)

roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)

t.Run("CGC increases before fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)

// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)

actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)

resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)

actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)

// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})

t.Run("CGC increases after fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)

// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)

actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)

// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)

slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)

actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})

t.Run("Supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)

// Enable supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)

slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)

// Try to downgrade by removing flag
gFlags.Supernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)

// Should still be supernode
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
})

t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)

// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)

slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups

// Try to downgrade by removing flag
gFlags.SemiSupernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)

// UpdateCustodyInfo should prevent downgrade - custody count should remain at 64
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, semiSupernodeCustody, actualCgc) // Still 64 due to downgrade prevention by UpdateCustodyInfo
})

t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)

// Start with semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)

slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups

// Upgrade to full supernode
gFlags.SemiSupernode = false
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)

// Should upgrade to full supernode
upgradeSlot := slot + 2
actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
require.NoError(t, err)
require.Equal(t, upgradeSlot, actualEas) // Earliest slot updates when upgrading
require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to 128
})

t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)

// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)

// Mock a high custody requirement (simulating many validators)
// We need to override the custody requirement calculation
// For this test, we'll verify the logic by checking if custodyRequirement > 64
// Since custodyRequirement in minimalTestService is 4, we can't test the high case here
// This would require a different test setup with actual validators
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
// With low validator requirements (4), should use semi-supernode minimum (64)
require.Equal(t, semiSupernodeCustody, actualCgc)
})
}
@@ -212,7 +212,8 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
if err != nil {
return nil, errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
bound := min(uint64(st.NumValidators()), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex += primitives.ValidatorIndex(bound)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
@@ -40,6 +40,7 @@ go_library(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"

@@ -538,6 +539,10 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}

if flags.Get().DisableGetBlobsV2 {
return []*pb.BlobAndProofV2{}, nil
}

result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
@@ -75,7 +75,6 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))

timeForGoroutinesToFinish := 20 * time.Microsecond
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)

@@ -87,8 +86,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test

// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast

@@ -102,7 +102,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test

@@ -117,8 +117,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test

// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast

@@ -132,7 +133,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")

// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast

@@ -146,8 +147,9 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
require.Eventually(t, func() bool {
return p2p.BroadcastCalled.Load()
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test

// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast

@@ -161,7 +163,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")

// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast

@@ -175,7 +177,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {

lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}
@@ -22,6 +22,7 @@ import (
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"

@@ -357,58 +358,67 @@ func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []bl
return nil
}

// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network, after ensuring
// there is at least one peer in each needed subnet. If not, it will attempt to find one before broadcasting.
// It returns when all broadcasts are complete, or the context is cancelled (whichever comes first).
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network.
// For sidecars with available peers, it uses batch publishing.
// For sidecars without peers, it finds peers first and then publishes individually.
// Both paths run in parallel. It returns when all broadcasts are complete, or the context is cancelled.
func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [fieldparams.VersionLength]byte, sidecars []blocks.VerifiedRODataColumn) {
type rootAndIndex struct {
root [fieldparams.RootLength]byte
index uint64
}

var (
wg sync.WaitGroup
timings sync.Map
)

var timings sync.Map
logLevel := logrus.GetLevel()

slotPerRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, 1)

topicFunc := func(sidecar blocks.VerifiedRODataColumn) (topic string, wrappedSubIdx uint64, subnet uint64) {
subnet = peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
topic = dataColumnSubnetToTopic(subnet, forkDigest)
wrappedSubIdx = subnet + dataColumnSubnetVal
return
}

sidecarsWithPeers := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
var sidecarsWithoutPeers []blocks.VerifiedRODataColumn

// Categorize sidecars by peer availability.
for _, sidecar := range sidecars {
slotPerRoot[sidecar.BlockRoot()] = sidecar.Slot()

wg.Go(func() {
// Add tracing to the function.
ctx, span := trace.StartSpan(s.ctx, "p2p.broadcastDataColumnSidecars")
topic, wrappedSubIdx, _ := topicFunc(sidecar)
// Check if we have a peer for this subnet (use RLock for read-only check).
mu := s.subnetLocker(wrappedSubIdx)
mu.RLock()
hasPeer := s.hasPeerWithSubnet(topic)
mu.RUnlock()

if hasPeer {
sidecarsWithPeers = append(sidecarsWithPeers, sidecar)
continue
}

sidecarsWithoutPeers = append(sidecarsWithoutPeers, sidecar)
}

var batchWg, individualWg sync.WaitGroup

// Batch publish sidecars that already have peers
var messageBatch pubsub.MessageBatch
for _, sidecar := range sidecarsWithPeers {
batchWg.Go(func() {
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
ctx := trace.NewContext(s.ctx, span)
defer span.End()

// Compute the subnet for this data column sidecar.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
topic, _, _ := topicFunc(sidecar)

// Build the topic corresponding to subnet column subnet and this fork digest.
topic := dataColumnSubnetToTopic(subnet, forkDigest)

// Compute the wrapped subnet index.
wrappedSubIdx := subnet + dataColumnSubnetVal

// Find peers if needed.
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
if err := s.batchObject(ctx, &messageBatch, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot find peers if needed")
log.WithError(err).Error("Cannot batch data column sidecar")
return
}

// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot broadcast data column sidecar")
return
}

// Increase the number of successful broadcasts.
dataColumnSidecarBroadcasts.Inc()

// Record the timing for log purposes.
if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())

@@ -416,8 +426,50 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
})
}

// Wait for all broadcasts to finish.
wg.Wait()
// For sidecars without peers, find peers and publish individually (no batching).
for _, sidecar := range sidecarsWithoutPeers {
individualWg.Go(func() {
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
ctx := trace.NewContext(s.ctx, span)
defer span.End()

topic, wrappedSubIdx, subnet := topicFunc(sidecar)

// Find peers for this sidecar's subnet.
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot find peers if needed")
return
}

// Publish individually (not batched) since we just found peers.
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot broadcast data column sidecar")
return
}

dataColumnSidecarBroadcasts.Inc()

if logLevel >= logrus.DebugLevel {
root := sidecar.BlockRoot()
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
}
})
}

// Wait for batch to be populated, then publish.
batchWg.Wait()
if len(sidecarsWithPeers) > 0 {
if err := s.pubsub.PublishBatch(&messageBatch); err != nil {
log.WithError(err).Error("Cannot publish batch for data column sidecars")
} else {
dataColumnSidecarBroadcasts.Add(float64(len(sidecarsWithPeers)))
}
}

// Wait for all individual publishes to complete.
individualWg.Wait()

// The rest of this function is only for debug logging purposes.
if logLevel < logrus.DebugLevel {

@@ -504,28 +556,68 @@ func (s *Service) findPeersIfNeeded(
return nil
}

// method to broadcast messages to other peers in our gossip mesh.
// encodeGossipMessage encodes an object for gossip transmission.
// It returns the encoded bytes and the full topic with protocol suffix.
func (s *Service) encodeGossipMessage(obj ssz.Marshaler, topic string) ([]byte, string, error) {
buf := new(bytes.Buffer)
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
return nil, "", fmt.Errorf("could not encode message: %w", err)
}
return buf.Bytes(), topic + s.Encoding().ProtocolSuffix(), nil
}

// broadcastObject broadcasts a message to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
defer span.End()

span.SetAttributes(trace.StringAttribute("topic", topic))

buf := new(bytes.Buffer)
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
err := errors.Wrap(err, "could not encode message")
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
if err != nil {
tracing.AnnotateError(span, err)
return err
}

if span.IsRecording() {
id := hash.FastSum64(buf.Bytes())
messageLen := int64(buf.Len())
id := hash.FastSum64(data)
messageLen := int64(len(data))
// lint:ignore uintcast -- It's safe to do this for tracing.
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}
if err := s.PublishToTopic(ctx, topic+s.Encoding().ProtocolSuffix(), buf.Bytes()); err != nil {

if err := s.PublishToTopic(ctx, fullTopic, data); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
return nil
}

// batchObject adds an object to a message batch for a future broadcast.
// The caller MUST publish the batch after all messages have been added.
func (s *Service) batchObject(ctx context.Context, batch *pubsub.MessageBatch, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.batchObject")
defer span.End()

span.SetAttributes(trace.StringAttribute("topic", topic))

data, fullTopic, err := s.encodeGossipMessage(obj, topic)
if err != nil {
tracing.AnnotateError(span, err)
return err
}

if span.IsRecording() {
id := hash.FastSum64(data)
messageLen := int64(len(data))
// lint:ignore uintcast -- It's safe to do this for tracing.
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}

if err := s.addToBatch(ctx, batch, fullTopic, data); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
@@ -32,6 +32,8 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/time/slots"
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
"github.com/libp2p/go-libp2p/core/host"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/libp2p/go-libp2p/core/protocol"
|
||||
"google.golang.org/protobuf/proto"
|
||||
)
|
||||
|
||||
@@ -70,7 +72,10 @@ func TestService_Broadcast(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -184,7 +189,10 @@ func TestService_BroadcastAttestation(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -373,7 +381,15 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
|
||||
_, err = tpHandle.Subscribe()
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(500 * time.Millisecond) // libp2p fails without this delay...
|
||||
// This test specifically tests discovery-based peer finding, which requires
|
||||
// time for nodes to discover each other. Using a fixed sleep here is intentional
|
||||
// as we're testing the discovery timing behavior.
|
||||
time.Sleep(500 * time.Millisecond)
|
||||
|
||||
// Verify mesh establishment after discovery
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0 && len(p2.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
nodePeers := p.pubsub.ListPeers(topic)
|
||||
nodePeers2 := p2.pubsub.ListPeers(topic)
|
||||
@@ -442,7 +458,10 @@ func TestService_BroadcastSyncCommittee(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -519,7 +538,10 @@ func TestService_BroadcastBlob(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -582,7 +604,10 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -658,7 +683,10 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -769,8 +797,10 @@ func TestService_BroadcastDataColumn(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// libp2p fails without this delay
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(service.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
|
||||
// Broadcast to peers and wait.
|
||||
err = service.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{verifiedRoSidecar})
|
||||
@@ -787,3 +817,190 @@ func TestService_BroadcastDataColumn(t *testing.T) {
|
||||
require.NoError(t, service.Encoding().DecodeGossip(msg.Data, &result))
|
||||
require.DeepEqual(t, &result, verifiedRoSidecar)
|
||||
}
|
||||
|
||||
type topicInvoked struct {
|
||||
topic string
|
||||
pid peer.ID
|
||||
}
|
||||
|
||||
// rpcOrderTracer is a RawTracer implementation that captures the order of SendRPC calls.
|
||||
// It records the topics of messages sent via pubsub to verify round-robin ordering.
|
||||
type rpcOrderTracer struct {
|
||||
mu sync.Mutex
|
||||
invoked []*topicInvoked
|
||||
byTopic map[string][]peer.ID
|
||||
}
|
||||
|
||||
func (t *rpcOrderTracer) SendRPC(rpc *pubsub.RPC, pid peer.ID) {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
for _, msg := range rpc.GetPublish() {
|
||||
invoked := &topicInvoked{topic: msg.GetTopic(), pid: pid}
|
||||
t.invoked = append(t.invoked, invoked)
|
||||
t.byTopic[invoked.topic] = append(t.byTopic[invoked.topic], invoked.pid)
|
||||
}
|
||||
}
|
||||
|
||||
func newRpcOrderTracer() *rpcOrderTracer {
|
||||
return &rpcOrderTracer{byTopic: make(map[string][]peer.ID)}
|
||||
}
|
||||
|
||||
func (t *rpcOrderTracer) getTopics() []string {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
result := make([]string, len(t.invoked))
|
||||
for i := range t.invoked {
|
||||
result[i] = t.invoked[i].topic
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// No-op implementations for other RawTracer methods.
|
||||
func (*rpcOrderTracer) AddPeer(peer.ID, protocol.ID) {}
|
||||
func (*rpcOrderTracer) RemovePeer(peer.ID) {}
|
||||
func (*rpcOrderTracer) Join(string) {}
|
||||
func (*rpcOrderTracer) Leave(string) {}
|
||||
func (*rpcOrderTracer) Graft(peer.ID, string) {}
|
||||
func (*rpcOrderTracer) Prune(peer.ID, string) {}
|
||||
func (*rpcOrderTracer) ValidateMessage(*pubsub.Message) {}
|
||||
func (*rpcOrderTracer) DeliverMessage(*pubsub.Message) {}
|
||||
func (*rpcOrderTracer) RejectMessage(*pubsub.Message, string) {}
|
||||
func (*rpcOrderTracer) DuplicateMessage(*pubsub.Message) {}
|
||||
func (*rpcOrderTracer) ThrottlePeer(peer.ID) {}
|
||||
func (*rpcOrderTracer) RecvRPC(*pubsub.RPC) {}
|
||||
func (*rpcOrderTracer) DropRPC(*pubsub.RPC, peer.ID) {}
|
||||
func (*rpcOrderTracer) UndeliverableMessage(*pubsub.Message) {}
|
||||
|
||||
// TestService_BroadcastDataColumnRoundRobin verifies that when broadcasting multiple
// data column sidecars, messages are interleaved in round-robin order by column index
// rather than sending all copies of one column before the next.
//
// Without batch publishing: A,A,A,A,B,B,B,B (all peers for column A, then all for column B)
// With batch publishing: A,B,A,B,A,B,A,B (interleaved by message ID)
func TestService_BroadcastDataColumnRoundRobin(t *testing.T) {
	const (
		port        = 2100
		topicFormat = DataColumnSubnetTopicFormat
	)

	ctx := t.Context()

	// Load the KZG trusted setup.
	err := kzg.Start()
	require.NoError(t, err)

	gFlags := new(flags.GlobalFlags)
	gFlags.MinimumPeersPerSubnet = 1
	flags.Init(gFlags)
	defer flags.Init(new(flags.GlobalFlags))

	// Create a tracer to capture the order of SendRPC calls.
	tracer := newRpcOrderTracer()

	// Create the publisher node with the tracer injected.
	p1 := p2ptest.NewTestP2PWithPubsubOptions(t, []pubsub.Option{pubsub.WithRawTracer(tracer)})

	// Create subscriber peers.
	expectedPeers := []*p2ptest.TestP2P{
		p2ptest.NewTestP2P(t),
		p2ptest.NewTestP2P(t),
	}

	// Connect peers.
	for _, p := range expectedPeers {
		p1.Connect(p)
	}
	require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")

	// Create a host for discovery.
	_, pkey, ipAddr := createHost(t, port)

	// Create a shared DB for the service.
	db := testDB.SetupDB(t)

	// Create and close the custody info channel immediately since custodyInfo is already set.
	custodyInfoSet := make(chan struct{})
	close(custodyInfoSet)

	service := &Service{
		ctx:                   ctx,
		host:                  p1.BHost,
		pubsub:                p1.PubSub(),
		joinedTopics:          map[string]*pubsub.Topic{},
		cfg:                   &Config{DB: db},
		genesisTime:           time.Now(),
		genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
		subnetsLock:           make(map[uint64]*sync.RWMutex),
		subnetsLockLock:       sync.Mutex{},
		peers:                 peers.NewStatus(ctx, &peers.StatusConfig{ScorerParams: &scorers.Config{}}),
		custodyInfo:           &custodyInfo{},
		custodyInfoSet:        custodyInfoSet,
	}

	// Create a listener for discovery.
	listener, err := service.startDiscoveryV5(ipAddr, pkey)
	require.NoError(t, err)
	service.dv5Listener = listener

	digest, err := service.currentForkDigest()
	require.NoError(t, err)

	// Create multiple data column sidecars with different column indices.
	// Use indices that map to different subnets: 0, 32, 64 (assuming 128 columns and 64 subnets).
	columnIndices := []uint64{0, 32, 64}
	params := make([]util.DataColumnParam, len(columnIndices))
	for i, idx := range columnIndices {
		params[i] = util.DataColumnParam{Index: idx}
	}
	_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)

	expectedTopics := make(map[string]bool)
	// Subscribe peers to the relevant topics.
	for _, idx := range columnIndices {
		subnet := peerdas.ComputeSubnetForDataColumnSidecar(idx)
		topic := fmt.Sprintf(topicFormat, digest, subnet) + service.Encoding().ProtocolSuffix()
		for _, p := range expectedPeers {
			_, err = p.SubscribeToTopic(topic)
			require.NoError(t, err)
		}
		expectedTopics[topic] = true
	}
	// libp2p needs some time to establish mesh connections.
	time.Sleep(100 * time.Millisecond)

	// Broadcast all sidecars.
	err = service.BroadcastDataColumnSidecars(ctx, verifiedRoSidecars)
	require.NoError(t, err)
	// Give some time for messages to be sent.
	time.Sleep(100 * time.Millisecond)

	topics := tracer.getTopics()
	if len(topics) == 0 {
		t.Fatal("Expected at least one message for each topic to be sent to each peer")
	}

	unseen := make(map[string]bool)
	for k := range expectedTopics {
		unseen[k] = true
	}
	// Verify the round-robin invariant: no topic may repeat before every expected topic has been seen once.
	// In round-robin order, we should see each topic once before any topic repeats.
	for _, topic := range topics {
		if !expectedTopics[topic] {
			continue
		}
		if !unseen[topic] {
			t.Errorf("Topic %s repeated before all topics were seen once. This violates round-robin ordering.", topic)
		}
		delete(unseen, topic)
		if len(unseen) == 0 {
			break // all have been seen
		}
	}
	require.Equal(t, 0, len(unseen))

	// Verify that we actually saw all expected topics.
	for topic := range expectedTopics {
		require.Equal(t, len(expectedPeers), len(tracer.byTopic[topic]))
	}
}

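To make the expected ordering concrete, here is a small illustrative helper (not part of Prysm) that flattens per-topic message queues in the round-robin order the tracer above expects to observe:

// interleaveRoundRobin emits one message per topic before any topic repeats,
// producing A,B,C,A,B,C,... (illustrative only; names are hypothetical).
func interleaveRoundRobin(topics []string, queues map[string][]string) []string {
	var out []string
	for i := 0; ; i++ {
		emitted := false
		for _, topic := range topics {
			if i < len(queues[topic]) {
				out = append(out, queues[topic][i])
				emitted = true
			}
		}
		if !emitted {
			return out
		}
	}
}
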
@@ -482,12 +482,12 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
		s.Start()
		<-exitRoutine
	}()
	time.Sleep(50 * time.Millisecond)
	time.Sleep(50 * time.Millisecond) // Wait for service initialization
	var vr [32]byte
	require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
	time.Sleep(4 * time.Second)
	ps := s.host.Network().Peers()
	assert.Equal(t, 5, len(ps), "Not all peers added to peerstore")
	require.Eventually(t, func() bool {
		return len(s.host.Network().Peers()) == 5
	}, 10*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
	require.NoError(t, s.Stop())
	exitRoutine <- true
}

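Several hunks in this changeset replace fixed time.Sleep waits with polling assertions. A minimal, self-contained testify example of that pattern (illustrative only, not Prysm code):

package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// TestEventuallyPattern re-evaluates the condition every tick until it holds
// or the overall timeout expires, avoiding brittle fixed sleeps.
func TestEventuallyPattern(t *testing.T) {
	done := make(chan struct{})
	go func() {
		time.Sleep(50 * time.Millisecond) // simulate async work
		close(done)
	}()

	require.Eventually(t, func() bool {
		select {
		case <-done:
			return true
		default:
			return false
		}
	}, 2*time.Second, 10*time.Millisecond, "async work did not finish in time")
}
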
@@ -99,6 +99,27 @@ func (s *Service) PublishToTopic(ctx context.Context, topic string, data []byte,
	}
}

// addToBatch joins (if necessary) a topic and adds the message to a message batch.
func (s *Service) addToBatch(ctx context.Context, batch *pubsub.MessageBatch, topic string, data []byte, opts ...pubsub.PubOpt) error {
	topicHandle, err := s.JoinTopic(topic)
	if err != nil {
		return fmt.Errorf("joining topic: %w", err)
	}

	// Wait for at least 1 peer to be available to receive the published message.
	for {
		if flags.Get().MinimumSyncPeers == 0 || len(topicHandle.ListPeers()) > 0 {
			return topicHandle.AddToBatch(ctx, batch, data, opts...)
		}
		select {
		case <-ctx.Done():
			return errors.Wrapf(ctx.Err(), "unable to find requisite number of peers for topic %s, 0 peers found to publish to", topic)
		case <-time.After(100 * time.Millisecond):
			// Re-enter the for loop after 100ms.
		}
	}
}

// SubscribeToTopic joins (if necessary) and subscribes to a PubSub topic.
func (s *Service) SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub.Subscription, error) {
	s.awaitStateInitialized() // Genesis time and genesis validators root are required to subscribe.

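A rough sketch of how a caller might assemble a batch with addToBatch. Only addToBatch itself appears in this diff; how pubsub.MessageBatch is constructed and finally flushed is an assumption for illustration, and broadcastSidecarsSketch is a hypothetical helper:

// Hypothetical sketch (not Prysm code): gather messages for several topics into
// one batch so they can be published together.
func (s *Service) broadcastSidecarsSketch(ctx context.Context, perTopicData map[string][]byte) error {
	batch := &pubsub.MessageBatch{} // assumed zero-value construction
	for topic, data := range perTopicData {
		if err := s.addToBatch(ctx, batch, topic, data); err != nil {
			return err
		}
	}
	// The batch would then be handed to the pubsub layer, which interleaves
	// messages round-robin by message ID (see the broadcast test above).
	return nil
}
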
@@ -80,8 +80,9 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
	}()
	var vr [32]byte
	require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
	time.Sleep(time.Second * 2)
	assert.Equal(t, true, s.started, "Expected service to be started")
	require.Eventually(t, func() bool {
		return s.started
	}, 5*time.Second, 100*time.Millisecond, "Expected service to be started")
	s.Start()
	require.LogsContain(t, hook, "Attempted to start p2p service when it was already started")
	require.NoError(t, s.Stop())
@@ -260,17 +261,9 @@ func TestListenForNewNodes(t *testing.T) {
	err = cs.SetClock(startup.NewClock(genesisTime, gvr))
	require.NoError(t, err, "Could not set clock in service")

	actualPeerCount := len(s.host.Network().Peers())
	for range 40 {
		if actualPeerCount == peerCount {
			break
		}

		time.Sleep(100 * time.Millisecond)
		actualPeerCount = len(s.host.Network().Peers())
	}

	assert.Equal(t, peerCount, actualPeerCount, "Not all peers added to peerstore")
	require.Eventually(t, func() bool {
		return len(s.host.Network().Peers()) == peerCount
	}, 5*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")

	err = s.Stop()
	require.NoError(t, err, "Failed to stop service")

@@ -70,6 +70,11 @@ type TestP2P struct {

// NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
	return NewTestP2PWithPubsubOptions(t, nil, userOptions...)
}

// NewTestP2PWithPubsubOptions initializes a new p2p test service with custom pubsub options.
func NewTestP2PWithPubsubOptions(t *testing.T, pubsubOpts []pubsub.Option, userOptions ...config.Option) *TestP2P {
	ctx := context.Background()
	options := []config.Option{
		libp2p.ResourceManager(&network.NullResourceManager{}),
@@ -84,10 +89,14 @@ func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {

	h, err := libp2p.New(options...)
	require.NoError(t, err)
	ps, err := pubsub.NewFloodSub(ctx, h,

	defaultPubsubOpts := []pubsub.Option{
		pubsub.WithMessageSigning(false),
		pubsub.WithStrictSignatureVerification(false),
	)
	}
	allPubsubOpts := append(defaultPubsubOpts, pubsubOpts...)

	ps, err := pubsub.NewGossipSub(ctx, h, allPubsubOpts...)
	if err != nil {
		t.Fatal(err)
	}

@@ -657,8 +657,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
		assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
		assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
		assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
		time.Sleep(100 * time.Millisecond) // Wait for async pool save
		assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
		require.Eventually(t, func() bool {
			return s.AttestationsPool.UnaggregatedAttestationCount() == 1
		}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
	})
	t.Run("multiple", func(t *testing.T) {
		broadcaster := &p2pMock.MockBroadcaster{}
@@ -677,8 +678,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
		assert.Equal(t, http.StatusOK, writer.Code)
		assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
		assert.Equal(t, 2, broadcaster.NumAttestations())
		time.Sleep(100 * time.Millisecond) // Wait for async pool save
		assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
		require.Eventually(t, func() bool {
			return s.AttestationsPool.UnaggregatedAttestationCount() == 2
		}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
	})
	t.Run("phase0 att post electra", func(t *testing.T) {
		params.SetupTestConfigCleanup(t)
@@ -798,8 +800,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
		assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
		assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
		assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
		time.Sleep(100 * time.Millisecond) // Wait for async pool save
		assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
		require.Eventually(t, func() bool {
			return s.AttestationsPool.UnaggregatedAttestationCount() == 1
		}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
	})
	t.Run("multiple", func(t *testing.T) {
		broadcaster := &p2pMock.MockBroadcaster{}
@@ -818,8 +821,9 @@ func TestSubmitAttestationsV2(t *testing.T) {
		assert.Equal(t, http.StatusOK, writer.Code)
		assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
		assert.Equal(t, 2, broadcaster.NumAttestations())
		time.Sleep(100 * time.Millisecond) // Wait for async pool save
		assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
		require.Eventually(t, func() bool {
			return s.AttestationsPool.UnaggregatedAttestationCount() == 2
		}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
	})
	t.Run("no body", func(t *testing.T) {
		request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
@@ -1375,9 +1379,9 @@ func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
	writer.Body = &bytes.Buffer{}
	s.SubmitBLSToExecutionChanges(writer, request)
	assert.Equal(t, http.StatusOK, writer.Code)
	time.Sleep(100 * time.Millisecond) // Delay to let the routine start
	assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
	assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages))
	require.Eventually(t, func() bool {
		return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages) == numValidators
	}, time.Second, 10*time.Millisecond, "Broadcast should be called with all messages")

	poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
	require.Equal(t, len(poolChanges), len(signedChanges))
@@ -1591,10 +1595,10 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {

	s.SubmitBLSToExecutionChanges(writer, request)
	assert.Equal(t, http.StatusBadRequest, writer.Code)
	time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
	require.StringContains(t, "One or more messages failed validation", writer.Body.String())
	assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
	assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)
	require.Eventually(t, func() bool {
		return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages)+1 == numValidators
	}, time.Second, 10*time.Millisecond, "Broadcast should be called with expected messages")

	poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
	require.Equal(t, len(poolChanges)+1, len(signedChanges))

@@ -102,6 +102,7 @@ go_test(
        "getters_test.go",
        "getters_validator_test.go",
        "getters_withdrawal_test.go",
        "gloas_test.go",
        "hasher_test.go",
        "mvslice_fuzz_test.go",
        "proofs_test.go",
@@ -156,6 +157,7 @@ go_test(
        "@com_github_google_go_cmp//cmp:go_default_library",
        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
        "@com_github_sirupsen_logrus//:go_default_library",
        "@com_github_stretchr_testify//require:go_default_library",
        "@org_golang_google_protobuf//proto:go_default_library",
        "@org_golang_google_protobuf//testing/protocmp:go_default_library",
    ],

@@ -72,11 +72,13 @@ type BeaconState struct {

	// Gloas fields
	latestExecutionPayloadBid   *ethpb.ExecutionPayloadBid
	builders                    []*ethpb.Builder
	nextWithdrawalBuilderIndex  primitives.BuilderIndex
	executionPayloadAvailability []byte
	builderPendingPayments      []*ethpb.BuilderPendingPayment
	builderPendingWithdrawals   []*ethpb.BuilderPendingWithdrawal
	latestBlockHash             []byte
	latestWithdrawalsRoot       []byte
	payloadExpectedWithdrawals  []*enginev1.Withdrawal

	id   uint64
	lock sync.RWMutex
@@ -134,11 +136,13 @@ type beaconStateMarshalable struct {
	PendingConsolidations        []*ethpb.PendingConsolidation      `json:"pending_consolidations" yaml:"pending_consolidations"`
	ProposerLookahead            []primitives.ValidatorIndex        `json:"proposer_look_ahead" yaml:"proposer_look_ahead"`
	LatestExecutionPayloadBid    *ethpb.ExecutionPayloadBid         `json:"latest_execution_payload_bid" yaml:"latest_execution_payload_bid"`
	Builders                     []*ethpb.Builder                   `json:"builders" yaml:"builders"`
	NextWithdrawalBuilderIndex   primitives.BuilderIndex            `json:"next_withdrawal_builder_index" yaml:"next_withdrawal_builder_index"`
	ExecutionPayloadAvailability []byte                             `json:"execution_payload_availability" yaml:"execution_payload_availability"`
	BuilderPendingPayments       []*ethpb.BuilderPendingPayment     `json:"builder_pending_payments" yaml:"builder_pending_payments"`
	BuilderPendingWithdrawals    []*ethpb.BuilderPendingWithdrawal  `json:"builder_pending_withdrawals" yaml:"builder_pending_withdrawals"`
	LatestBlockHash              []byte                             `json:"latest_block_hash" yaml:"latest_block_hash"`
	LatestWithdrawalsRoot        []byte                             `json:"latest_withdrawals_root" yaml:"latest_withdrawals_root"`
	PayloadExpectedWithdrawals   []*enginev1.Withdrawal             `json:"payload_expected_withdrawals" yaml:"payload_expected_withdrawals"`
}

func (b *BeaconState) MarshalJSON() ([]byte, error) {
@@ -194,11 +198,13 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
		PendingConsolidations:        b.pendingConsolidations,
		ProposerLookahead:            b.proposerLookahead,
		LatestExecutionPayloadBid:    b.latestExecutionPayloadBid,
		Builders:                     b.builders,
		NextWithdrawalBuilderIndex:   b.nextWithdrawalBuilderIndex,
		ExecutionPayloadAvailability: b.executionPayloadAvailability,
		BuilderPendingPayments:       b.builderPendingPayments,
		BuilderPendingWithdrawals:    b.builderPendingWithdrawals,
		LatestBlockHash:              b.latestBlockHash,
		LatestWithdrawalsRoot:        b.latestWithdrawalsRoot,
		PayloadExpectedWithdrawals:   b.payloadExpectedWithdrawals,
	}
	return json.Marshal(marshalable)
}

@@ -56,9 +56,7 @@ func (r StateRoots) MarshalSSZTo(dst []byte) ([]byte, error) {
func (r StateRoots) MarshalSSZ() ([]byte, error) {
	marshalled := make([]byte, fieldparams.StateRootsLength*32)
	for i, r32 := range r {
		for j, rr := range r32 {
			marshalled[i*32+j] = rr
		}
		copy(marshalled[i*32:(i+1)*32], r32[:])
	}
	return marshalled, nil
}

@@ -305,10 +305,12 @@ func (b *BeaconState) ToProtoUnsafe() any {
			PendingConsolidations:        b.pendingConsolidations,
			ProposerLookahead:            lookahead,
			ExecutionPayloadAvailability: b.executionPayloadAvailability,
			Builders:                     b.builders,
			NextWithdrawalBuilderIndex:   b.nextWithdrawalBuilderIndex,
			BuilderPendingPayments:       b.builderPendingPayments,
			BuilderPendingWithdrawals:    b.builderPendingWithdrawals,
			LatestBlockHash:              b.latestBlockHash,
			LatestWithdrawalsRoot:        b.latestWithdrawalsRoot,
			PayloadExpectedWithdrawals:   b.payloadExpectedWithdrawals,
		}
	default:
		return nil
@@ -607,10 +609,12 @@ func (b *BeaconState) ToProto() any {
			PendingConsolidations:        b.pendingConsolidationsVal(),
			ProposerLookahead:            lookahead,
			ExecutionPayloadAvailability: b.executionPayloadAvailabilityVal(),
			Builders:                     b.buildersVal(),
			NextWithdrawalBuilderIndex:   b.nextWithdrawalBuilderIndex,
			BuilderPendingPayments:       b.builderPendingPaymentsVal(),
			BuilderPendingWithdrawals:    b.builderPendingWithdrawalsVal(),
			LatestBlockHash:              b.latestBlockHashVal(),
			LatestWithdrawalsRoot:        b.latestWithdrawalsRootVal(),
			PayloadExpectedWithdrawals:   b.payloadExpectedWithdrawalsVal(),
		}
	default:
		return nil

@@ -1,6 +1,7 @@
package state_native

import (
	enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)

@@ -47,6 +48,22 @@ func (b *BeaconState) builderPendingWithdrawalsVal() []*ethpb.BuilderPendingWith
	return withdrawals
}

// buildersVal returns a copy of the builders registry.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) buildersVal() []*ethpb.Builder {
	if b.builders == nil {
		return nil
	}

	builders := make([]*ethpb.Builder, len(b.builders))
	for i := range builders {
		builder := b.builders[i]
		builders[i] = ethpb.CopyBuilder(builder)
	}

	return builders
}

// latestBlockHashVal returns a copy of the latest block hash.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) latestBlockHashVal() []byte {
@@ -60,15 +77,17 @@ func (b *BeaconState) latestBlockHashVal() []byte {
	return hash
}

// latestWithdrawalsRootVal returns a copy of the latest withdrawals root.
// payloadExpectedWithdrawalsVal returns a copy of the payload expected withdrawals.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) latestWithdrawalsRootVal() []byte {
	if b.latestWithdrawalsRoot == nil {
func (b *BeaconState) payloadExpectedWithdrawalsVal() []*enginev1.Withdrawal {
	if b.payloadExpectedWithdrawals == nil {
		return nil
	}

	root := make([]byte, len(b.latestWithdrawalsRoot))
	copy(root, b.latestWithdrawalsRoot)
	withdrawals := make([]*enginev1.Withdrawal, len(b.payloadExpectedWithdrawals))
	for i, withdrawal := range b.payloadExpectedWithdrawals {
		withdrawals[i] = withdrawal.Copy()
	}

	return root
	return withdrawals
}

43	beacon-chain/state/state-native/gloas_test.go	Normal file
@@ -0,0 +1,43 @@
package state_native

import (
	"testing"

	enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
	"github.com/stretchr/testify/require"
)

func TestBuildersVal(t *testing.T) {
	st := &BeaconState{}

	require.Nil(t, st.buildersVal())

	st.builders = []*ethpb.Builder{
		{Pubkey: []byte{0x01}, ExecutionAddress: []byte{0x02}, Balance: 3},
		nil,
	}

	got := st.buildersVal()
	require.Len(t, got, 2)
	require.Nil(t, got[1])
	require.Equal(t, st.builders[0], got[0])
	require.NotSame(t, st.builders[0], got[0])
}

func TestPayloadExpectedWithdrawalsVal(t *testing.T) {
	st := &BeaconState{}

	require.Nil(t, st.payloadExpectedWithdrawalsVal())

	st.payloadExpectedWithdrawals = []*enginev1.Withdrawal{
		{Index: 1, ValidatorIndex: 2, Address: []byte{0x03}, Amount: 4},
		nil,
	}

	got := st.payloadExpectedWithdrawalsVal()
	require.Len(t, got, 2)
	require.Nil(t, got[1])
	require.Equal(t, st.payloadExpectedWithdrawals[0], got[0])
	require.NotSame(t, st.payloadExpectedWithdrawals[0], got[0])
}

@@ -342,6 +342,15 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
	}

	if state.version >= version.Gloas {
		buildersRoot, err := stateutil.BuildersRoot(state.builders)
		if err != nil {
			return nil, errors.Wrap(err, "could not compute builders merkleization")
		}
		fieldRoots[types.Builders.RealPosition()] = buildersRoot[:]

		nextWithdrawalBuilderIndexRoot := ssz.Uint64Root(uint64(state.nextWithdrawalBuilderIndex))
		fieldRoots[types.NextWithdrawalBuilderIndex.RealPosition()] = nextWithdrawalBuilderIndexRoot[:]

		epaRoot, err := stateutil.ExecutionPayloadAvailabilityRoot(state.executionPayloadAvailability)
		if err != nil {
			return nil, errors.Wrap(err, "could not compute execution payload availability merkleization")
@@ -366,8 +375,12 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
		lbhRoot := bytesutil.ToBytes32(state.latestBlockHash)
		fieldRoots[types.LatestBlockHash.RealPosition()] = lbhRoot[:]

		lwrRoot := bytesutil.ToBytes32(state.latestWithdrawalsRoot)
		fieldRoots[types.LatestWithdrawalsRoot.RealPosition()] = lwrRoot[:]
		expectedWithdrawalsRoot, err := ssz.WithdrawalSliceRoot(state.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
		if err != nil {
			return nil, errors.Wrap(err, "could not compute payload expected withdrawals root")
		}

		fieldRoots[types.PayloadExpectedWithdrawals.RealPosition()] = expectedWithdrawalsRoot[:]
	}
	return fieldRoots, nil
}

@@ -120,11 +120,13 @@ var (
	)

	gloasAdditionalFields = []types.FieldIndex{
		types.Builders,
		types.NextWithdrawalBuilderIndex,
		types.ExecutionPayloadAvailability,
		types.BuilderPendingPayments,
		types.BuilderPendingWithdrawals,
		types.LatestBlockHash,
		types.LatestWithdrawalsRoot,
		types.PayloadExpectedWithdrawals,
	}

	gloasFields = slices.Concat(
@@ -145,7 +147,7 @@ const (
	denebSharedFieldRefCount   = 7
	electraSharedFieldRefCount = 10
	fuluSharedFieldRefCount    = 11
	gloasSharedFieldRefCount   = 12 // Adds PendingBuilderWithdrawal to the shared-ref set and LatestExecutionPayloadHeader is removed
	gloasSharedFieldRefCount   = 13 // Adds Builders + BuilderPendingWithdrawals to the shared-ref set and LatestExecutionPayloadHeader is removed
)

// InitializeFromProtoPhase0 initializes the beacon state from a protobuf representation.
@@ -817,11 +819,13 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
		pendingConsolidations:        st.PendingConsolidations,
		proposerLookahead:            proposerLookahead,
		latestExecutionPayloadBid:    st.LatestExecutionPayloadBid,
		builders:                     st.Builders,
		nextWithdrawalBuilderIndex:   st.NextWithdrawalBuilderIndex,
		executionPayloadAvailability: st.ExecutionPayloadAvailability,
		builderPendingPayments:       st.BuilderPendingPayments,
		builderPendingWithdrawals:    st.BuilderPendingWithdrawals,
		latestBlockHash:              st.LatestBlockHash,
		latestWithdrawalsRoot:        st.LatestWithdrawalsRoot,
		payloadExpectedWithdrawals:   st.PayloadExpectedWithdrawals,
		dirtyFields:                  make(map[types.FieldIndex]bool, fieldCount),
		dirtyIndices:                 make(map[types.FieldIndex][]uint64, fieldCount),
		stateFieldLeaves:             make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
@@ -861,6 +865,7 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
	b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
	b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
	b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
	b.sharedFieldReferences[types.Builders] = stateutil.NewRef(1)                  // New in Gloas.
	b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1) // New in Gloas.

	state.Count.Inc()
@@ -932,6 +937,7 @@ func (b *BeaconState) Copy() state.BeaconState {
		pendingDeposits:           b.pendingDeposits,
		pendingPartialWithdrawals: b.pendingPartialWithdrawals,
		pendingConsolidations:     b.pendingConsolidations,
		builders:                  b.builders,

		// Everything else, too small to be concerned about, constant size.
		genesisValidatorsRoot: b.genesisValidatorsRoot,
@@ -948,11 +954,12 @@ func (b *BeaconState) Copy() state.BeaconState {
		latestExecutionPayloadHeaderCapella: b.latestExecutionPayloadHeaderCapella.Copy(),
		latestExecutionPayloadHeaderDeneb:   b.latestExecutionPayloadHeaderDeneb.Copy(),
		latestExecutionPayloadBid:           b.latestExecutionPayloadBid.Copy(),
		nextWithdrawalBuilderIndex:          b.nextWithdrawalBuilderIndex,
		executionPayloadAvailability:        b.executionPayloadAvailabilityVal(),
		builderPendingPayments:              b.builderPendingPaymentsVal(),
		builderPendingWithdrawals:           b.builderPendingWithdrawalsVal(),
		latestBlockHash:                     b.latestBlockHashVal(),
		latestWithdrawalsRoot:               b.latestWithdrawalsRootVal(),
		payloadExpectedWithdrawals:          b.payloadExpectedWithdrawalsVal(),

		id: types.Enumerator.Inc(),

@@ -1328,6 +1335,10 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
		return stateutil.ProposerLookaheadRoot(b.proposerLookahead)
	case types.LatestExecutionPayloadBid:
		return b.latestExecutionPayloadBid.HashTreeRoot()
	case types.Builders:
		return stateutil.BuildersRoot(b.builders)
	case types.NextWithdrawalBuilderIndex:
		return ssz.Uint64Root(uint64(b.nextWithdrawalBuilderIndex)), nil
	case types.ExecutionPayloadAvailability:
		return stateutil.ExecutionPayloadAvailabilityRoot(b.executionPayloadAvailability)

@@ -1337,8 +1348,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
		return stateutil.BuilderPendingWithdrawalsRoot(b.builderPendingWithdrawals)
	case types.LatestBlockHash:
		return bytesutil.ToBytes32(b.latestBlockHash), nil
	case types.LatestWithdrawalsRoot:
		return bytesutil.ToBytes32(b.latestWithdrawalsRoot), nil
	case types.PayloadExpectedWithdrawals:
		return ssz.WithdrawalSliceRoot(b.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
	}
	return [32]byte{}, errors.New("invalid field index provided")
}

@@ -116,6 +116,10 @@ func (f FieldIndex) String() string {
		return "pendingConsolidations"
	case ProposerLookahead:
		return "proposerLookahead"
	case Builders:
		return "builders"
	case NextWithdrawalBuilderIndex:
		return "nextWithdrawalBuilderIndex"
	case ExecutionPayloadAvailability:
		return "executionPayloadAvailability"
	case BuilderPendingPayments:
@@ -124,8 +128,8 @@ func (f FieldIndex) String() string {
		return "builderPendingWithdrawals"
	case LatestBlockHash:
		return "latestBlockHash"
	case LatestWithdrawalsRoot:
		return "latestWithdrawalsRoot"
	case PayloadExpectedWithdrawals:
		return "payloadExpectedWithdrawals"
	default:
		return fmt.Sprintf("unknown field index number: %d", f)
	}
@@ -211,16 +215,20 @@ func (f FieldIndex) RealPosition() int {
		return 36
	case ProposerLookahead:
		return 37
	case ExecutionPayloadAvailability:
	case Builders:
		return 38
	case BuilderPendingPayments:
	case NextWithdrawalBuilderIndex:
		return 39
	case BuilderPendingWithdrawals:
	case ExecutionPayloadAvailability:
		return 40
	case LatestBlockHash:
	case BuilderPendingPayments:
		return 41
	case LatestWithdrawalsRoot:
	case BuilderPendingWithdrawals:
		return 42
	case LatestBlockHash:
		return 43
	case PayloadExpectedWithdrawals:
		return 44
	default:
		return -1
	}
@@ -287,11 +295,13 @@ const (
	PendingPartialWithdrawals    // Electra: EIP-7251
	PendingConsolidations        // Electra: EIP-7251
	ProposerLookahead            // Fulu: EIP-7917
	Builders                     // Gloas: EIP-7732
	NextWithdrawalBuilderIndex   // Gloas: EIP-7732
	ExecutionPayloadAvailability // Gloas: EIP-7732
	BuilderPendingPayments       // Gloas: EIP-7732
	BuilderPendingWithdrawals    // Gloas: EIP-7732
	LatestBlockHash              // Gloas: EIP-7732
	LatestWithdrawalsRoot        // Gloas: EIP-7732
	PayloadExpectedWithdrawals   // Gloas: EIP-7732
)

// Enumerator keeps track of the number of states created since the node's start.

@@ -6,6 +6,7 @@ go_library(
        "block_header_root.go",
        "builder_pending_payments_root.go",
        "builder_pending_withdrawals_root.go",
        "builders_root.go",
        "eth1_root.go",
        "execution_payload_availability_root.go",
        "field_root_attestation.go",

12	beacon-chain/state/stateutil/builders_root.go	Normal file
@@ -0,0 +1,12 @@
package stateutil

import (
	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
	"github.com/OffchainLabs/prysm/v7/encoding/ssz"
	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)

// BuildersRoot computes the SSZ root of a slice of Builder.
func BuildersRoot(slice []*ethpb.Builder) ([32]byte, error) {
	return ssz.SliceRoot(slice, uint64(fieldparams.BuilderRegistryLimit))
}

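For background (standard SSZ list hashing, assumed here rather than shown in this diff): ssz.SliceRoot is expected to hash each Builder to its own root, Merkleize those chunks against the registry-limit capacity, and mix in the actual list length, roughly root = mix_in_length(merkleize([hash_tree_root(b) for b in slice], limit = BuilderRegistryLimit), len(slice)).
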
@@ -10,20 +10,72 @@ import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
	"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/time/slots"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

var nilFinalizedStateError = errors.New("finalized state is nil")

func (s *Service) maintainCustodyInfo() {
func (s *Service) maintainCustodyInfo() error {
	// Rationale of slot choice:
	// - If syncing with an empty DB from genesis, then justifiedSlot = finalizedSlot = 0,
	//   and the node starts to sync from slot 0 ==> Using justifiedSlot is correct.
	// - If syncing with an empty DB from a checkpoint, then justifiedSlot = finalizedSlot = checkpointSlot,
	//   and the node starts to sync from checkpointSlot ==> Using justifiedSlot is correct.
	// - If syncing with a non-empty DB, then justifiedSlot > finalizedSlot,
	//   and the node starts to sync from justifiedSlot + 1 ==> Using justifiedSlot + 1 is correct.
	const interval = 1 * time.Minute

	finalizedCheckpoint, err := s.cfg.beaconDB.FinalizedCheckpoint(s.ctx)
	if err != nil {
		return errors.Wrap(err, "finalized checkpoint")
	}

	if finalizedCheckpoint == nil {
		return errors.New("finalized checkpoint is nil")
	}

	finalizedSlot, err := slots.EpochStart(finalizedCheckpoint.Epoch)
	if err != nil {
		return errors.Wrap(err, "epoch start for finalized slot")
	}

	justifiedCheckpoint, err := s.cfg.beaconDB.JustifiedCheckpoint(s.ctx)
	if err != nil {
		return errors.Wrap(err, "justified checkpoint")
	}

	if justifiedCheckpoint == nil {
		return errors.New("justified checkpoint is nil")
	}

	justifiedSlot, err := slots.EpochStart(justifiedCheckpoint.Epoch)
	if err != nil {
		return errors.Wrap(err, "epoch start for justified slot")
	}

	slot := justifiedSlot
	if justifiedSlot > finalizedSlot {
		slot++
	}

	earliestAvailableSlot, custodySubnetCount, err := s.updateCustodyInfoInDB(slot)
	if err != nil {
		return errors.Wrap(err, "could not get and save custody group count")
	}

	if _, _, err := s.cfg.p2p.UpdateCustodyInfo(earliestAvailableSlot, custodySubnetCount); err != nil {
		return errors.Wrap(err, "update custody info")
	}

	async.RunEvery(s.ctx, interval, func() {
		if err := s.updateCustodyInfoIfNeeded(); err != nil {
			log.WithError(err).Error("Failed to update custody info")
		}
	})

	return nil
}

func (s *Service) updateCustodyInfoIfNeeded() error {

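A worked example of the slot choice above, assuming mainnet's 32 slots per epoch: with finalized epoch 100 (slot 3200) and justified epoch 101 (slot 3232), justifiedSlot > finalizedSlot, so the reference slot becomes 3232 + 1 = 3233; with a fresh database both checkpoints share the same epoch, so the justified slot is used unchanged.
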
@@ -12,6 +12,7 @@ import (
	mock "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/db/kv"
	dbTest "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
	testingDB "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
@@ -934,6 +935,7 @@ func TestStatusRPCRequest_BadPeerHandshake(t *testing.T) {
	r := &Service{
		cfg: &config{
			p2p:           p1,
			beaconDB:      dbTest.SetupDB(t),
			chain:         chain,
			stateNotifier: chain.StateNotifier(),
			initialSync:   &mockSync.Sync{IsSyncing: false},

@@ -17,6 +17,7 @@ import (
	blockfeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/block"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/operation"
	statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
@@ -33,9 +34,11 @@ import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/sync/backfill/coverage"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
	lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
	"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	leakybucket "github.com/OffchainLabs/prysm/v7/container/leaky-bucket"
	"github.com/OffchainLabs/prysm/v7/crypto/rand"
	"github.com/OffchainLabs/prysm/v7/runtime"
@@ -275,11 +278,6 @@ func (s *Service) Start() {

	s.processPendingBlocksQueue()
	s.maintainPeerStatuses()

	if params.FuluEnabled() {
		s.maintainCustodyInfo()
	}

	s.resyncIfBehind()

	// Update sync metrics.
@@ -287,6 +285,15 @@ func (s *Service) Start() {

	// Prune data column cache periodically on finalization.
	async.RunEvery(s.ctx, 30*time.Second, s.pruneDataColumnCache)

	if !params.FuluEnabled() {
		return
	}

	if err := s.maintainCustodyInfo(); err != nil {
		log.WithError(err).Error("Failed to maintain custody info")
	}

}

// Stop the regular sync service.
@@ -452,6 +459,89 @@ func (s *Service) waitForInitialSync(ctx context.Context) error {
	}
}

// updateCustodyInfoInDB updates the custody information in the database.
// It returns the earliest available slot and the (potentially updated) custody group count.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
	isSupernode := flags.Get().Supernode
	isSemiSupernode := flags.Get().SemiSupernode

	cfg := params.BeaconConfig()
	custodyRequirement := cfg.CustodyRequirement

	// Check if the node was previously subscribed to all data subnets, and if so,
	// store the new status accordingly.
	wasSupernode, err := s.cfg.beaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
	if err != nil {
		return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
	}

	// Compute the target custody group count based on the current flag configuration.
	targetCustodyGroupCount := custodyRequirement

	// Supernode: custody all groups (either currently set or previously enabled).
	if isSupernode {
		targetCustodyGroupCount = cfg.NumberOfCustodyGroups
	}

	// Semi-supernode: custody the minimum needed for reconstruction, or the custody requirement if higher.
	if isSemiSupernode {
		semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
		if err != nil {
			return 0, 0, errors.Wrap(err, "minimum custody group count")
		}

		targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
	}

	// Safely compute the fulu fork slot.
	fuluForkSlot, err := fuluForkSlot()
	if err != nil {
		return 0, 0, errors.Wrap(err, "fulu fork slot")
	}

	// If slot is before the fulu fork slot, then use the earliest stored slot as the reference slot.
	if slot < fuluForkSlot {
		slot, err = s.cfg.beaconDB.EarliestSlot(s.ctx)
		if err != nil {
			return 0, 0, errors.Wrap(err, "earliest slot")
		}
	}

	earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.beaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
	if err != nil {
		return 0, 0, errors.Wrap(err, "update custody info")
	}

	if isSupernode {
		log.WithFields(logrus.Fields{
			"current": actualCustodyGroupCount,
			"target":  cfg.NumberOfCustodyGroups,
		}).Info("Supernode mode enabled. Will custody all data columns going forward.")
	}

	if wasSupernode && !isSupernode {
		log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
	}

	return earliestAvailableSlot, actualCustodyGroupCount, nil
}

func fuluForkSlot() (primitives.Slot, error) {
	cfg := params.BeaconConfig()

	fuluForkEpoch := cfg.FuluForkEpoch
	if fuluForkEpoch == cfg.FarFutureEpoch {
		return cfg.FarFutureSlot, nil
	}

	forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
	if err != nil {
		return 0, errors.Wrap(err, "epoch start")
	}

	return forkFuluSlot, nil
}

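For example, with the test configuration used further below (FuluForkEpoch = 10) and 32 slots per epoch, fuluForkSlot returns 10 * 32 = 320, while a FuluForkEpoch equal to FarFutureEpoch maps to FarFutureSlot; slots.EpochStart is assumed here to compute epoch * SlotsPerEpoch with overflow checking.
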
// Checker defines a struct which can verify whether a node is currently
// synchronizing a chain with the rest of peers in the network.
type Checker interface {

@@ -16,6 +16,9 @@ import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
	state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
	mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
	"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	leakybucket "github.com/OffchainLabs/prysm/v7/container/leaky-bucket"
	"github.com/OffchainLabs/prysm/v7/crypto/bls"
@@ -70,7 +73,6 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {

	topic := "/eth2/%x/beacon_block"
	go r.startDiscoveryAndSubscriptions()
	time.Sleep(100 * time.Millisecond)

	var vr [32]byte
	require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
@@ -83,9 +85,11 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
	msg.Block.ParentRoot = util.Random32Bytes(t)
	msg.Signature = sk.Sign([]byte("data")).Marshal()
	p2p.ReceivePubSub(topic, msg)
	// wait for chainstart to be sent
	time.Sleep(400 * time.Millisecond)
	require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")

	// Wait for chainstart event to be processed
	require.Eventually(t, func() bool {
		return r.chainStarted.IsSet()
	}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event.")
}

func TestSyncHandlers_WaitForChainStart(t *testing.T) {
@@ -217,20 +221,18 @@ func TestSyncService_StopCleanly(t *testing.T) {
	p2p.Digest, err = r.currentForkDigest()
	require.NoError(t, err)

	// wait for chainstart to be sent
	time.Sleep(2 * time.Second)
	require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")

	require.NotEqual(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
	require.NotEqual(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
	// Wait for chainstart and topics to be registered
	require.Eventually(t, func() bool {
		return r.chainStarted.IsSet() && len(r.cfg.p2p.PubSub().GetTopics()) > 0 && len(r.cfg.p2p.Host().Mux().Protocols()) > 0
	}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event or topics not registered.")

	// Both pubsub and rpc topics should be unsubscribed.
	require.NoError(t, r.Stop())

	// Sleep to allow pubsub topics to be deregistered.
	time.Sleep(1 * time.Second)
	require.Equal(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
	require.Equal(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
	// Wait for pubsub topics to be deregistered.
	require.Eventually(t, func() bool {
		return len(r.cfg.p2p.PubSub().GetTopics()) == 0 && len(r.cfg.p2p.Host().Mux().Protocols()) == 0
	}, 5*time.Second, 50*time.Millisecond, "Pubsub topics were not deregistered")
}

func TestService_Stop_SendsGoodbyeMessages(t *testing.T) {
@@ -441,3 +443,224 @@ func TestService_Stop_ConcurrentGoodbyeMessages(t *testing.T) {

	require.Equal(t, false, util.WaitTimeout(&wg, 2*time.Second))
}

func TestUpdateCustodyInfoInDB(t *testing.T) {
	const (
		fuluForkEpoch         = 10
		custodyRequirement    = uint64(4)
		earliestStoredSlot    = primitives.Slot(12)
		numberOfCustodyGroups = uint64(64)
	)

	params.SetupTestConfigCleanup(t)
	cfg := params.BeaconConfig()
	cfg.FuluForkEpoch = fuluForkEpoch
	cfg.CustodyRequirement = custodyRequirement
	cfg.NumberOfCustodyGroups = numberOfCustodyGroups
	params.OverrideBeaconConfig(cfg)

	ctx := t.Context()
	pbBlock := util.NewBeaconBlock()
	pbBlock.Block.Slot = 12
	signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
	require.NoError(t, err)

	roBlock, err := blocks.NewROBlock(signedBeaconBlock)
	require.NoError(t, err)

	t.Run("CGC increases before fulu", func(t *testing.T) {
		beaconDB := dbTest.SetupDB(t)
		service := Service{cfg: &config{beaconDB: beaconDB}}
		err = beaconDB.SaveBlock(ctx, roBlock)
		require.NoError(t, err)

		// Before Fulu
		// -----------
		actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
		require.NoError(t, err)
		require.Equal(t, earliestStoredSlot, actualEas)
		require.Equal(t, custodyRequirement, actualCgc)

		actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
		require.NoError(t, err)
		require.Equal(t, earliestStoredSlot, actualEas)
		require.Equal(t, custodyRequirement, actualCgc)

		resetFlags := flags.Get()
		gFlags := new(flags.GlobalFlags)
		gFlags.Supernode = true
		flags.Init(gFlags)
		defer flags.Init(resetFlags)

		actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
		require.NoError(t, err)
		require.Equal(t, earliestStoredSlot, actualEas)
		require.Equal(t, numberOfCustodyGroups, actualCgc)

		// After Fulu
		// ----------
		actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
		require.NoError(t, err)
		require.Equal(t, earliestStoredSlot, actualEas)
		require.Equal(t, numberOfCustodyGroups, actualCgc)
	})

	t.Run("CGC increases after fulu", func(t *testing.T) {
		beaconDB := dbTest.SetupDB(t)
		service := Service{cfg: &config{beaconDB: beaconDB}}
		err = beaconDB.SaveBlock(ctx, roBlock)
		require.NoError(t, err)

		// Before Fulu
		// -----------
		actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
		require.NoError(t, err)
		require.Equal(t, earliestStoredSlot, actualEas)
		require.Equal(t, custodyRequirement, actualCgc)

		actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
		require.NoError(t, err)
		require.Equal(t, earliestStoredSlot, actualEas)
		require.Equal(t, custodyRequirement, actualCgc)

		// After Fulu
		// ----------
		resetFlags := flags.Get()
		gFlags := new(flags.GlobalFlags)
		gFlags.Supernode = true
		flags.Init(gFlags)
		defer flags.Init(resetFlags)

		slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
		actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		require.Equal(t, numberOfCustodyGroups, actualCgc)

		actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		require.Equal(t, numberOfCustodyGroups, actualCgc)
	})

	t.Run("Supernode downgrade prevented", func(t *testing.T) {
		beaconDB := dbTest.SetupDB(t)
		service := Service{cfg: &config{beaconDB: beaconDB}}
		err = beaconDB.SaveBlock(ctx, roBlock)
		require.NoError(t, err)

		// Enable supernode
		resetFlags := flags.Get()
		gFlags := new(flags.GlobalFlags)
		gFlags.Supernode = true
		flags.Init(gFlags)

		slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
		actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		require.Equal(t, numberOfCustodyGroups, actualCgc)

		// Try to downgrade by removing the flag
		gFlags.Supernode = false
		flags.Init(gFlags)
		defer flags.Init(resetFlags)

		// Should still be a supernode
		actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
	})

	t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
		beaconDB := dbTest.SetupDB(t)
		service := Service{cfg: &config{beaconDB: beaconDB}}
		err = beaconDB.SaveBlock(ctx, roBlock)
		require.NoError(t, err)

		// Enable semi-supernode
		resetFlags := flags.Get()
		gFlags := new(flags.GlobalFlags)
		gFlags.SemiSupernode = true
		flags.Init(gFlags)

		slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
		actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		semiSupernodeCustody := numberOfCustodyGroups / 2 // 32 with this test's 64-group config
		require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies half of the groups

		// Try to downgrade by removing the flag
		gFlags.SemiSupernode = false
		flags.Init(gFlags)
		defer flags.Init(resetFlags)

		// UpdateCustodyInfo should prevent the downgrade - the custody count should not shrink
		actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		require.Equal(t, semiSupernodeCustody, actualCgc) // Unchanged due to downgrade prevention by UpdateCustodyInfo
	})

	t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
		beaconDB := dbTest.SetupDB(t)
		service := Service{cfg: &config{beaconDB: beaconDB}}
		err = beaconDB.SaveBlock(ctx, roBlock)
		require.NoError(t, err)

		// Start with semi-supernode
		resetFlags := flags.Get()
		gFlags := new(flags.GlobalFlags)
		gFlags.SemiSupernode = true
		flags.Init(gFlags)

		slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
		actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		semiSupernodeCustody := numberOfCustodyGroups / 2 // 32 with this test's 64-group config
		require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies half of the groups

		// Upgrade to full supernode
		gFlags.SemiSupernode = false
		gFlags.Supernode = true
		flags.Init(gFlags)
		defer flags.Init(resetFlags)

		// Should upgrade to full supernode
		upgradeSlot := slot + 2
		actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
		require.NoError(t, err)
		require.Equal(t, upgradeSlot, actualEas)           // Earliest slot updates when upgrading
		require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to all 64 groups
	})

	t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
		beaconDB := dbTest.SetupDB(t)
		service := Service{cfg: &config{beaconDB: beaconDB}}
		err = beaconDB.SaveBlock(ctx, roBlock)
		require.NoError(t, err)

		// Enable semi-supernode
		resetFlags := flags.Get()
		gFlags := new(flags.GlobalFlags)
		gFlags.SemiSupernode = true
		flags.Init(gFlags)
		defer flags.Init(resetFlags)

		// Mock a high custody requirement (simulating many validators)
		// We need to override the custody requirement calculation
		// For this test, we'll verify the logic by checking if custodyRequirement > 64
		// Since custodyRequirement in minimalTestService is 4, we can't test the high case here
		// This would require a different test setup with actual validators
		slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
		actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
		require.NoError(t, err)
		require.Equal(t, slot, actualEas)
		semiSupernodeCustody := numberOfCustodyGroups / 2 // 32 with this test's 64-group config
		// With low validator requirements (4), the semi-supernode minimum is used
		require.Equal(t, semiSupernodeCustody, actualCgc)
	})
}

@@ -48,7 +48,14 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
		return errors.Wrap(err, "new ro block with root")
	}

	go s.processSidecarsFromExecutionFromBlock(ctx, roBlock)
	go func() {
		if err := s.processSidecarsFromExecutionFromBlock(ctx, roBlock); err != nil {
			log.WithError(err).WithFields(logrus.Fields{
				"root": fmt.Sprintf("%#x", root),
				"slot": block.Slot(),
			}).Error("Failed to process sidecars from execution from block")
		}
	}()

	if err := s.cfg.chain.ReceiveBlock(ctx, signed, root, nil); err != nil {
		if blockchain.IsInvalidBlock(err) {
@@ -69,28 +76,37 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
		}
		return err
	}

	if err := s.processPendingAttsForBlock(ctx, root); err != nil {
		return errors.Wrap(err, "process pending atts for block")
	}

	return nil
}

// processSidecarsFromExecutionFromBlock retrieves (if available) sidecars data from the execution client,
// builds the corresponding sidecars, saves them to storage, and broadcasts them over P2P if necessary.
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) {
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) error {
	if roBlock.Version() >= version.Fulu {
		if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromBlock(roBlock)); err != nil {
			log.WithError(err).Error("Failed to process data column sidecars from execution")
			return
			// Do not log if the context was cancelled on purpose.
			// (Still log other context errors such as deadlines exceeded.)
			if errors.Is(err, context.Canceled) {
				return nil
			}

			return errors.Wrap(err, "process data column sidecars from execution")
		}

		return
		return nil
	}

	if roBlock.Version() >= version.Deneb {
		s.processBlobSidecarsFromExecution(ctx, roBlock)
		return
		return nil
	}

	return nil
}

// processBlobSidecarsFromExecution retrieves (if available) blob sidecars data from the execution client,
@@ -168,7 +184,6 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
key := fmt.Sprintf("%#x", source.Root())
|
||||
if _, err, _ := s.columnSidecarsExecSingleFlight.Do(key, func() (any, error) {
|
||||
const delay = 250 * time.Millisecond
|
||||
secondsPerHalfSlot := time.Duration(params.BeaconConfig().SecondsPerSlot/2) * time.Second
|
||||
|
||||
commitments, err := source.Commitments()
|
||||
if err != nil {
|
||||
@@ -186,9 +201,6 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, errors.Wrap(err, "column indices to sample")
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
|
||||
defer cancel()
|
||||
|
||||
log := log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
@@ -209,6 +221,11 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Return if the context is done.
|
||||
if ctx.Err() != nil {
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
|
||||
if iteration == 0 {
|
||||
dataColumnsRecoveredFromELAttempts.Inc()
|
||||
}
|
||||
@@ -220,20 +237,10 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
}
|
||||
|
||||
// No sidecars are retrieved from the EL, retry later
|
||||
constructedSidecarCount = uint64(len(constructedSidecars))
|
||||
if constructedSidecarCount == 0 {
|
||||
if ctx.Err() != nil {
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
|
||||
time.Sleep(delay)
|
||||
continue
|
||||
}
|
||||
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
constructedCount := uint64(len(constructedSidecars))
|
||||
|
||||
// Boundary check.
|
||||
if constructedSidecarCount != fieldparams.NumberOfColumns {
|
||||
if constructedSidecarCount > 0 && constructedSidecarCount != fieldparams.NumberOfColumns {
|
||||
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
|
||||
}
|
||||
|
||||
@@ -242,14 +249,24 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"count": len(unseenIndices),
|
||||
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
|
||||
}).Debug("Constructed data column sidecars from the execution client")
|
||||
if constructedCount > 0 {
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
|
||||
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
|
||||
log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
"proposerIndex": source.ProposerIndex(),
|
||||
"iteration": iteration,
|
||||
"type": source.Type(),
|
||||
"count": len(unseenIndices),
|
||||
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
|
||||
}).Debug("Constructed data column sidecars from the execution client")
|
||||
|
||||
return nil, nil
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Wait before retrying.
|
||||
time.Sleep(delay)
|
||||
}
|
||||
}); err != nil {
|
||||
return err
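The `columnSidecarsExecSingleFlight.Do` call wrapping this retry loop deduplicates concurrent retrievals by block root. A minimal sketch of that pattern, assuming the field is a `golang.org/x/sync/singleflight.Group` (names here are illustrative):

```go
package example

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// fetchColumnsOnce makes concurrent callers with the same block root share a
// single execution-client query instead of issuing duplicate requests.
func fetchColumnsOnce(root [32]byte) error {
	key := fmt.Sprintf("%#x", root)
	_, err, _ := group.Do(key, func() (any, error) {
		// ... query the execution client and construct the sidecars here ...
		return nil, nil
	})
	return err
}
```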
|
||||
@@ -284,6 +301,11 @@ func (s *Service) broadcastAndReceiveUnseenDataColumnSidecars(
|
||||
unseenIndices[sidecar.Index] = true
|
||||
}
|
||||
|
||||
// Exit early if there is nothing to broadcast or receive.
|
||||
if len(unseenSidecars) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Broadcast all the data column sidecars we reconstructed but did not see via gossip (non blocking).
|
||||
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, unseenSidecars); err != nil {
|
||||
return nil, errors.Wrap(err, "broadcast data column sidecars")
|
||||
|
||||
@@ -194,7 +194,8 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
|
||||
},
|
||||
seenBlobCache: lruwrpr.New(1),
|
||||
}
|
||||
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
err := s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expectedBlobCount, len(chainService.Blobs))
|
||||
})
|
||||
}
|
||||
@@ -293,7 +294,8 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
|
||||
roBlock, err := blocks.NewROBlock(sb)
|
||||
require.NoError(t, err)
|
||||
|
||||
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
err = s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expectedDataColumnCount, len(chainService.DataColumns))
|
||||
})
|
||||
}
|
||||
|
||||
@@ -25,12 +25,12 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
|
||||
}
|
||||
|
||||
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
|
||||
return errors.Wrap(err, "receive data column sidecar")
|
||||
return wrapDataColumnError(sidecar, "receive data column sidecar", err)
|
||||
}
|
||||
|
||||
wg.Go(func() error {
|
||||
if err := s.processDataColumnSidecarsFromReconstruction(ctx, sidecar); err != nil {
|
||||
return errors.Wrap(err, "process data column sidecars from reconstruction")
|
||||
return wrapDataColumnError(sidecar, "process data column sidecars from reconstruction", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -38,7 +38,13 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
|
||||
|
||||
wg.Go(func() error {
|
||||
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromSidecar(sidecar)); err != nil {
|
||||
return errors.Wrap(err, "process data column sidecars from execution")
|
||||
if errors.Is(err, context.Canceled) {
|
||||
// Do not log if the context was cancelled on purpose.
|
||||
// (Still log other context errors such as deadlines exceeded).
|
||||
return nil
|
||||
}
|
||||
|
||||
return wrapDataColumnError(sidecar, "process data column sidecars from execution", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -110,3 +116,7 @@ func (s *Service) allDataColumnSubnets(_ primitives.Slot) map[uint64]bool {
|
||||
|
||||
return allSubnets
|
||||
}
|
||||
|
||||
func wrapDataColumnError(sidecar blocks.VerifiedRODataColumn, message string, err error) error {
|
||||
return fmt.Errorf("%s - slot %d, root %s: %w", message, sidecar.SignedBlockHeader.Header.Slot, fmt.Sprintf("%#x", sidecar.BlockRoot()), err)
|
||||
}
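This helper keeps the slot and block root attached to every error that bubbles out of the subscriber, so a single log line is enough to identify the offending sidecar. With hypothetical values, a failure reads like `process data column sidecars from execution - slot 123456, root 0xabcd...: <underlying error>`.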
|
||||
|
||||
@@ -614,11 +614,10 @@ func TestVerifyIndexInCommittee_SeenAggregatorEpoch(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers.
|
||||
if res, err := r.validateAggregateAndProof(t.Context(), "", msg); res == pubsub.ValidationAccept {
|
||||
_ = err
|
||||
t.Fatal("Validated status is true")
|
||||
}
|
||||
require.Eventually(t, func() bool {
|
||||
res, _ := r.validateAggregateAndProof(t.Context(), "", msg)
|
||||
return res != pubsub.ValidationAccept
|
||||
}, time.Second, 10*time.Millisecond, "Expected validation to reject duplicate aggregate")
|
||||
}
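The test now polls instead of sleeping a fixed duration, so it passes as soon as the cached value propagates and cannot race a hard-coded delay. The general shape of the idiom (testify's `require.Eventually`; the condition shown is a placeholder):

```go
// Poll every 10ms, for up to one second, until the condition holds.
require.Eventually(t, func() bool {
	return conditionUnderTest() // placeholder for the real check
}, time.Second, 10*time.Millisecond, "condition never became true")
```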
|
||||
|
||||
func TestValidateAggregateAndProof_BadBlock(t *testing.T) {
|
||||
|
||||
@@ -992,7 +992,6 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
|
||||
|
||||
// Mark the proposer/slot as seen
|
||||
r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
|
||||
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers
|
||||
|
||||
// Prepare and validate the second message (clone)
|
||||
buf := new(bytes.Buffer)
|
||||
@@ -1010,9 +1009,11 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
|
||||
}
|
||||
|
||||
// Since this is not an equivocation (same signature), it should be ignored
|
||||
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, pubsub.ValidationIgnore, res, "block with same signature should be ignored")
|
||||
// Wait for the cached value to propagate through buffers
|
||||
require.Eventually(t, func() bool {
|
||||
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
|
||||
return err == nil && res == pubsub.ValidationIgnore
|
||||
}, time.Second, 10*time.Millisecond, "block with same signature should be ignored")
|
||||
|
||||
// Verify no slashings were created
|
||||
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")
|
||||
|
||||
@@ -1,3 +0,0 @@
|
||||
## Fixed
|
||||
|
||||
- Fix missing return after version header check in SubmitAttesterSlashingsV2.
|
||||
@@ -1,3 +0,0 @@
|
||||
## Fixed
|
||||
|
||||
- incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Reverts AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100 as the libp2p upgrade fails inter-op testing.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix the missing fork version object mapping for Fulu in light client p2p.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix a deadlock in data column gossip KZG batch verification when a caller times out, preventing result delivery.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- The /eth/v2/beacon/pool/attestations and /eth/v1/beacon/pool/sync_committees endpoints now return a 503 error if the node is still syncing; the REST API also now broadcasts immediately, matching the gRPC behavior.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fixed a state replay issue in the REST API caused by the attester and sync committee duties endpoints.
|
||||
changelog/james-prysm_is-ready.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Changed
|
||||
|
||||
- Changed the validator client's interpretation of /eth/v1/node/health from IsHealthy to IsReady; a 206 response now returns false because the node is still syncing.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- The e2e sync committee evaluator now skips the first slot after startup. The fork epoch is already skipped for these checks; this additional skip applies only at startup, because Altair is always active from epoch 0 and validators need time to warm up.
|
||||
changelog/jrhea_duplicate_tracer_provider_setting.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Don't call trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize) twice.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them.
|
||||
changelog/manu-eas.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- When adding the `--[semi-]supernode` flag, update the earliest available slot accordingly.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- `validateDataColumn`: Remove error logs.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Added test requirement to `PULL_REQUEST_TEMPLATE.md`
|
||||
changelog/manu_disable_get_blobs_v2.md (new file, 2 lines)
@@ -0,0 +1,2 @@
|
||||
### Added
|
||||
- `--disable-get-blobs-v2` flag.
|
||||
@@ -1,7 +0,0 @@
|
||||
### Added
|
||||
- prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs.
|
||||
- prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer.
|
||||
|
||||
### Changed
|
||||
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs.
|
||||
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs.
|
||||
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Batch publish data columns for faster data propagation.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fixed possible race when validating two attestations at the same time.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Track the dependent root of the latest finalized checkpoint in forkchoice.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Do not process slots and copy states for next epoch proposers after Fulu
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Do not error when committee has been computed correctly but updating the cache failed.
|
||||
changelog/potuz_update_spectests.md (new file, 2 lines)
@@ -0,0 +1,2 @@
|
||||
### Added
|
||||
- Update spectests to v1.7.0-alpha.0
|
||||
changelog/pvl-changelog-v7.1.2.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Updated changelog for v7.1.2
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Updated CHANGELOG.md for v7.0.1 patch release
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Changelog for v7.1.0
|
||||
changelog/pvl-v7.1.1.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Added changelog for v7.1.1
|
||||
changelog/pvl_fix-flaky-tests-polling.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Changed
|
||||
|
||||
- Replaced `time.Sleep` with `require.Eventually` polling in tests to fix flaky behavior caused by race conditions between goroutines and assertions.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Use `WriteStateFetchError` in API handlers whenever possible.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- Removed an unnecessary copy from Eth1DataHasEnoughSupport.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Proposal design document for implementing graffiti. It is currently empty by default, and the idea is to have it take the form GE168dPR63af.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Optimise migrateToCold by avoiding a brute-force for loop.
|
||||
changelog/satushh-opt.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Changed
|
||||
|
||||
- Performance improvement in state MarshalSSZTo: use copy() instead of an unnecessary byte-by-byte loop.
|
||||
changelog/t_gloas-builders-registry.md (new file, 3 lines)
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Added basic Gloas builder support (`Builder` message and `BeaconStateGloas` `builders`/`next_withdrawal_builder_index` fields)
|
||||
@@ -356,4 +356,9 @@ var (
|
||||
Usage: "A comma-separated list of exponents (of 2) in decreasing order, defining the state diff hierarchy levels. The last exponent must be greater than or equal to 5.",
|
||||
Value: cli.NewIntSlice(21, 18, 16, 13, 11, 9, 5),
|
||||
}
|
||||
// DisableGetBlobsV2 disables the engine_getBlobsV2 usage.
|
||||
DisableGetBlobsV2 = &cli.BoolFlag{
|
||||
Name: "disable-get-blobs-v2",
|
||||
Usage: "Disables the engine_getBlobsV2 usage.",
|
||||
}
|
||||
)
|
||||
|
||||
@@ -17,6 +17,7 @@ type GlobalFlags struct {
|
||||
SubscribeToAllSubnets bool
|
||||
Supernode bool
|
||||
SemiSupernode bool
|
||||
DisableGetBlobsV2 bool
|
||||
MinimumSyncPeers int
|
||||
MinimumPeersPerSubnet int
|
||||
MaxConcurrentDials int
|
||||
@@ -72,6 +73,11 @@ func ConfigureGlobalFlags(ctx *cli.Context) error {
|
||||
cfg.SemiSupernode = true
|
||||
}
|
||||
|
||||
if ctx.Bool(DisableGetBlobsV2.Name) {
|
||||
log.Warning("Disabling `engine_getBlobsV2` API")
|
||||
cfg.DisableGetBlobsV2 = true
|
||||
}
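Operators would enable this with `beacon-chain --disable-get-blobs-v2`. A hedged sketch of how the setting is consumed downstream, assuming the package's usual `Get()` accessor for the global flags (the call site is illustrative):

```go
// Skip the engine_getBlobsV2 round-trip entirely when the operator opted out.
if flags.Get().DisableGetBlobsV2 {
	return nil
}
// ... otherwise query engine_getBlobsV2 on the execution client ...
```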
|
||||
|
||||
// State-diff-exponents
|
||||
cfg.StateDiffExponents = ctx.IntSlice(StateDiffExponents.Name)
|
||||
if features.Get().EnableStateDiff {
|
||||
|
||||
@@ -148,6 +148,7 @@ var appFlags = []cli.Flag{
|
||||
flags.SlasherDirFlag,
|
||||
flags.SlasherFlag,
|
||||
flags.JwtId,
|
||||
flags.DisableGetBlobsV2,
|
||||
storage.BlobStoragePathFlag,
|
||||
storage.DataColumnStoragePathFlag,
|
||||
storage.BlobStorageLayout,
|
||||
|
||||
@@ -169,6 +169,7 @@ var appHelpFlagGroups = []flagGroup{
|
||||
flags.ExecutionJWTSecretFlag,
|
||||
flags.JwtId,
|
||||
flags.InteropMockEth1DataVotesFlag,
|
||||
flags.DisableGetBlobsV2,
|
||||
},
|
||||
},
|
||||
{ // Flags relevant to configuring beacon chain monitoring.
|
||||
|
||||
@@ -9,6 +9,7 @@ const (
|
||||
RandaoMixesLength = 65536 // EPOCHS_PER_HISTORICAL_VECTOR
|
||||
HistoricalRootsLength = 16777216 // HISTORICAL_ROOTS_LIMIT
|
||||
ValidatorRegistryLimit = 1099511627776 // VALIDATOR_REGISTRY_LIMIT
|
||||
BuilderRegistryLimit = 1099511627776 // BUILDER_REGISTRY_LIMIT
|
||||
Eth1DataVotesLength = 2048 // SLOTS_PER_ETH1_VOTING_PERIOD
|
||||
PreviousEpochAttestationsLength = 4096 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
|
||||
CurrentEpochAttestationsLength = 4096 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
|
||||
|
||||
@@ -9,6 +9,7 @@ const (
|
||||
RandaoMixesLength = 64 // EPOCHS_PER_HISTORICAL_VECTOR
|
||||
HistoricalRootsLength = 16777216 // HISTORICAL_ROOTS_LIMIT
|
||||
ValidatorRegistryLimit = 1099511627776 // VALIDATOR_REGISTRY_LIMIT
|
||||
BuilderRegistryLimit = 1099511627776 // BUILDER_REGISTRY_LIMIT
|
||||
Eth1DataVotesLength = 32 // SLOTS_PER_ETH1_VOTING_PERIOD
|
||||
PreviousEpochAttestationsLength = 1024 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
|
||||
CurrentEpochAttestationsLength = 1024 // MAX_ATTESTATIONS * SLOTS_PER_EPOCH
|
||||
|
||||
@@ -55,7 +55,8 @@ var placeholderFields = []string{
|
||||
"MAX_REQUEST_BLOB_SIDECARS_FULU",
|
||||
"MAX_REQUEST_INCLUSION_LIST",
|
||||
"MAX_REQUEST_PAYLOADS", // Compile time constant on BeaconBlockBody.ExecutionRequests
|
||||
"NUMBER_OF_COLUMNS", // Configured as a constant in config/fieldparams/mainnet.go
|
||||
"MIN_BUILDER_WITHDRAWABILITY_DELAY",
|
||||
"NUMBER_OF_COLUMNS", // Configured as a constant in config/fieldparams/mainnet.go
|
||||
"PAYLOAD_ATTESTATION_DUE_BPS",
|
||||
"PROPOSER_INCLUSION_LIST_CUTOFF",
|
||||
"PROPOSER_INCLUSION_LIST_CUTOFF_BPS",
|
||||
|
||||
@@ -206,7 +206,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
|
||||
BeaconStateDenebFieldCount: 28,
|
||||
BeaconStateElectraFieldCount: 37,
|
||||
BeaconStateFuluFieldCount: 38,
|
||||
BeaconStateGloasFieldCount: 43,
|
||||
BeaconStateGloasFieldCount: 45,
|
||||
|
||||
// Slasher related values.
|
||||
WeakSubjectivityPeriod: 54000,
|
||||
|
||||
@@ -669,6 +669,7 @@ func hydrateBeaconBlockBodyGloas() *eth.BeaconBlockBodyGloas {
|
||||
ParentBlockHash: make([]byte, fieldparams.RootLength),
|
||||
ParentBlockRoot: make([]byte, fieldparams.RootLength),
|
||||
BlockHash: make([]byte, fieldparams.RootLength),
|
||||
PrevRandao: make([]byte, fieldparams.RootLength),
|
||||
FeeRecipient: make([]byte, 20),
|
||||
BlobKzgCommitmentsRoot: make([]byte, fieldparams.RootLength),
|
||||
},
|
||||
|
||||
@@ -45,7 +45,6 @@ func Setup(ctx context.Context, serviceName, processName, endpoint string, sampl
|
||||
exporter,
|
||||
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
|
||||
trace.WithBatchTimeout(trace.DefaultScheduleDelay*time.Millisecond),
|
||||
trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize),
|
||||
),
|
||||
trace.WithResource(
|
||||
resource.NewWithAttributes(
|
||||
|
||||
@@ -35,6 +35,21 @@ func CopyValidator(val *Validator) *Validator {
|
||||
}
|
||||
}
|
||||
|
||||
// CopyBuilder copies the provided builder.
|
||||
func CopyBuilder(builder *Builder) *Builder {
|
||||
if builder == nil {
|
||||
return nil
|
||||
}
|
||||
return &Builder{
|
||||
Pubkey: bytesutil.SafeCopyBytes(builder.Pubkey),
|
||||
Version: bytesutil.SafeCopyBytes(builder.Version),
|
||||
ExecutionAddress: bytesutil.SafeCopyBytes(builder.ExecutionAddress),
|
||||
Balance: builder.Balance,
|
||||
DepositEpoch: builder.DepositEpoch,
|
||||
WithdrawableEpoch: builder.WithdrawableEpoch,
|
||||
}
|
||||
}
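Illustrative usage: because the byte slices are copied with `bytesutil.SafeCopyBytes`, mutating the copy does not write through to the original.

```go
orig := &Builder{
	Pubkey:           make([]byte, 48),
	Version:          []byte{0},
	ExecutionAddress: make([]byte, 20),
}
cp := CopyBuilder(orig)
cp.Pubkey[0] = 0xff // orig.Pubkey[0] is still 0x00
```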
|
||||
|
||||
// CopySyncCommitteeMessage copies the provided sync committee message object.
|
||||
func CopySyncCommitteeMessage(s *SyncCommitteeMessage) *SyncCommitteeMessage {
|
||||
if s == nil {
|
||||
|
||||
@@ -1220,7 +1220,7 @@ func genExecutionPayloadBidGloas() *v1alpha1.ExecutionPayloadBid {
|
||||
BlockHash: bytes(32),
|
||||
FeeRecipient: bytes(20),
|
||||
GasLimit: rand.Uint64(),
|
||||
BuilderIndex: primitives.ValidatorIndex(rand.Uint64()),
|
||||
BuilderIndex: primitives.BuilderIndex(rand.Uint64()),
|
||||
Slot: primitives.Slot(rand.Uint64()),
|
||||
Value: primitives.Gwei(rand.Uint64()),
|
||||
BlobKzgCommitmentsRoot: bytes(32),
|
||||
|
||||
@@ -13,11 +13,13 @@ func (header *ExecutionPayloadBid) Copy() *ExecutionPayloadBid {
|
||||
ParentBlockHash: bytesutil.SafeCopyBytes(header.ParentBlockHash),
|
||||
ParentBlockRoot: bytesutil.SafeCopyBytes(header.ParentBlockRoot),
|
||||
BlockHash: bytesutil.SafeCopyBytes(header.BlockHash),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(header.PrevRandao),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(header.FeeRecipient),
|
||||
GasLimit: header.GasLimit,
|
||||
BuilderIndex: header.BuilderIndex,
|
||||
Slot: header.Slot,
|
||||
Value: header.Value,
|
||||
ExecutionPayment: header.ExecutionPayment,
|
||||
BlobKzgCommitmentsRoot: bytesutil.SafeCopyBytes(header.BlobKzgCommitmentsRoot),
|
||||
}
|
||||
}
|
||||
@@ -28,10 +30,9 @@ func (withdrawal *BuilderPendingWithdrawal) Copy() *BuilderPendingWithdrawal {
|
||||
return nil
|
||||
}
|
||||
return &BuilderPendingWithdrawal{
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(withdrawal.FeeRecipient),
|
||||
Amount: withdrawal.Amount,
|
||||
BuilderIndex: withdrawal.BuilderIndex,
|
||||
WithdrawableEpoch: withdrawal.WithdrawableEpoch,
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(withdrawal.FeeRecipient),
|
||||
Amount: withdrawal.Amount,
|
||||
BuilderIndex: withdrawal.BuilderIndex,
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
proto/prysm/v1alpha1/gloas.pb.go (generated, 1367 lines): file diff suppressed because it is too large.
@@ -26,30 +26,37 @@ option go_package = "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1;eth";
|
||||
// parent_block_hash: Hash32
|
||||
// parent_block_root: Root
|
||||
// block_hash: Hash32
|
||||
// prev_randao: Bytes32
|
||||
// fee_recipient: ExecutionAddress
|
||||
// gas_limit: uint64
|
||||
// builder_index: ValidatorIndex
|
||||
// builder_index: BuilderIndex
|
||||
// slot: Slot
|
||||
// value: Gwei
|
||||
// execution_payment: Gwei
|
||||
// blob_kzg_commitments_root: Root
|
||||
message ExecutionPayloadBid {
|
||||
bytes parent_block_hash = 1 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
bytes parent_block_root = 2 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
bytes block_hash = 3 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
bytes fee_recipient = 4 [ (ethereum.eth.ext.ssz_size) = "20" ];
|
||||
uint64 gas_limit = 5;
|
||||
uint64 builder_index = 6 [ (ethereum.eth.ext.cast_type) =
|
||||
bytes prev_randao = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
bytes fee_recipient = 5 [ (ethereum.eth.ext.ssz_size) = "20" ];
|
||||
uint64 gas_limit = 6;
|
||||
uint64 builder_index = 7 [ (ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/"
|
||||
"consensus-types/primitives.ValidatorIndex" ];
|
||||
uint64 slot = 7 [
|
||||
"consensus-types/primitives.BuilderIndex" ];
|
||||
uint64 slot = 8 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Slot"
|
||||
];
|
||||
uint64 value = 8 [
|
||||
uint64 value = 9 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
|
||||
];
|
||||
bytes blob_kzg_commitments_root = 9 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
uint64 execution_payment = 10 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
|
||||
];
|
||||
bytes blob_kzg_commitments_root = 11 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
}
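For reference, the bid's new fixed SSZ size follows directly from this layout: four 32-byte roots/hashes (parent_block_hash, parent_block_root, block_hash, prev_randao), a 20-byte fee_recipient, five 8-byte uint64 fields (gas_limit, builder_index, slot, value, execution_payment), and the 32-byte blob_kzg_commitments_root, i.e. 128 + 20 + 40 + 32 = 220 bytes, which matches the regenerated `SizeSSZ` further down.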
|
||||
|
||||
// SignedExecutionPayloadBid wraps an execution payload bid with a signature.
|
||||
@@ -185,12 +192,15 @@ message SignedBeaconBlockGloas {
|
||||
// [All previous fields from earlier forks]
|
||||
// # Replaced existing latest execution header position
|
||||
// latest_execution_payload_bid: ExecutionPayloadBid
|
||||
// # New fields in Gloas:EIP7732
|
||||
// builders: List[Builder, BUILDER_REGISTRY_LIMIT]
|
||||
// # [New in Gloas:EIP7732]
|
||||
// next_withdrawal_builder_index: BuilderIndex
|
||||
// # [New in Gloas:EIP7732]
|
||||
// execution_payload_availability: Bitvector[SLOTS_PER_HISTORICAL_ROOT]
|
||||
// builder_pending_payments: Vector[BuilderPendingPayment, 2 * SLOTS_PER_EPOCH]
|
||||
// builder_pending_withdrawals: List[BuilderPendingWithdrawal, BUILDER_PENDING_WITHDRAWALS_LIMIT]
|
||||
// latest_block_hash: Hash32
|
||||
// latest_withdrawals_root: Root
|
||||
// payload_expected_withdrawals: List[Withdrawal, MAX_WITHDRAWALS_PER_PAYLOAD]
|
||||
message BeaconStateGloas {
|
||||
// Versioning [1001-2000]
|
||||
uint64 genesis_time = 1001;
|
||||
@@ -300,13 +310,20 @@ message BeaconStateGloas {
|
||||
[ (ethereum.eth.ext.ssz_size) = "proposer_lookahead_size" ];
|
||||
|
||||
// Fields introduced in Gloas fork [14001-15000]
|
||||
bytes execution_payload_availability = 14001 [
|
||||
repeated Builder builders = 14001
|
||||
[ (ethereum.eth.ext.ssz_max) = "builder_registry_limit" ];
|
||||
uint64 next_withdrawal_builder_index = 14002
|
||||
[ (ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/"
|
||||
"primitives.BuilderIndex" ];
|
||||
bytes execution_payload_availability = 14003 [
|
||||
(ethereum.eth.ext.ssz_size) = "execution_payload_availability.size"
|
||||
];
|
||||
repeated BuilderPendingPayment builder_pending_payments = 14002 [(ethereum.eth.ext.ssz_size) = "builder_pending_payments.size"];
|
||||
repeated BuilderPendingWithdrawal builder_pending_withdrawals = 14003 [(ethereum.eth.ext.ssz_max) = "1048576"];
|
||||
bytes latest_block_hash = 14004 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
bytes latest_withdrawals_root = 14005 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
repeated BuilderPendingPayment builder_pending_payments = 14004 [(ethereum.eth.ext.ssz_size) = "builder_pending_payments.size"];
|
||||
repeated BuilderPendingWithdrawal builder_pending_withdrawals = 14005 [(ethereum.eth.ext.ssz_max) = "1048576"];
|
||||
bytes latest_block_hash = 14006 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
repeated ethereum.engine.v1.Withdrawal payload_expected_withdrawals = 14007
|
||||
[ (ethereum.eth.ext.ssz_max) = "withdrawal.size" ];
|
||||
}
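Appending `builders` and `next_withdrawal_builder_index` at 14001 and 14002 shifts the remaining Gloas fields to 14003-14007, which is why every later offset in the regenerated SSZ code changes. A hedged sketch of populating the two new fields on the generated Go struct (import aliases assumed):

```go
st := &eth.BeaconStateGloas{
	Builders:                   []*eth.Builder{},           // field 14001
	NextWithdrawalBuilderIndex: primitives.BuilderIndex(0), // field 14002
}
_ = st
```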
|
||||
|
||||
// BuilderPendingPayment represents a pending payment to a builder.
|
||||
@@ -329,8 +346,7 @@ message BuilderPendingPayment {
|
||||
// class BuilderPendingWithdrawal(Container):
|
||||
// fee_recipient: ExecutionAddress
|
||||
// amount: Gwei
|
||||
// builder_index: ValidatorIndex
|
||||
// withdrawable_epoch: Epoch
|
||||
// builder_index: BuilderIndex
|
||||
message BuilderPendingWithdrawal {
|
||||
bytes fee_recipient = 1 [ (ethereum.eth.ext.ssz_size) = "20" ];
|
||||
uint64 amount = 2 [
|
||||
@@ -340,11 +356,7 @@ message BuilderPendingWithdrawal {
|
||||
uint64 builder_index = 3
|
||||
[ (ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/"
|
||||
"primitives.ValidatorIndex" ];
|
||||
uint64 withdrawable_epoch = 4 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"
|
||||
];
|
||||
"primitives.BuilderIndex" ];
|
||||
}
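Dropping withdrawable_epoch makes this a fixed 36-byte container (20-byte fee_recipient + 8-byte amount + 8-byte builder_index), and the enclosing BuilderPendingPayment shrinks from 52 to 44 bytes (its leading 8-byte field plus the embedded withdrawal), which the regenerated `SizeSSZ` values below reflect.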
|
||||
|
||||
// DataColumnSidecarGloas represents a data column sidecar in the Gloas fork.
|
||||
@@ -387,7 +399,7 @@ message DataColumnSidecarGloas {
|
||||
// class ExecutionPayloadEnvelope(Container):
|
||||
// payload: ExecutionPayload
|
||||
// execution_requests: ExecutionRequests
|
||||
// builder_index: ValidatorIndex
|
||||
// builder_index: BuilderIndex
|
||||
// beacon_block_root: Root
|
||||
// slot: Slot
|
||||
// blob_kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
|
||||
@@ -397,7 +409,7 @@ message ExecutionPayloadEnvelope {
|
||||
ethereum.engine.v1.ExecutionRequests execution_requests = 2;
|
||||
uint64 builder_index = 3 [ (ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/"
|
||||
"consensus-types/primitives.ValidatorIndex" ];
|
||||
"consensus-types/primitives.BuilderIndex" ];
|
||||
bytes beacon_block_root = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
|
||||
uint64 slot = 5 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
@@ -421,3 +433,31 @@ message SignedExecutionPayloadEnvelope {
|
||||
ExecutionPayloadEnvelope message = 1;
|
||||
bytes signature = 2 [ (ethereum.eth.ext.ssz_size) = "96" ];
|
||||
}
|
||||
|
||||
// Builder represents a builder in the Gloas fork.
|
||||
//
|
||||
// Spec:
|
||||
// class Builder(Container):
|
||||
// pubkey: BLSPubkey
|
||||
// version: uint8
|
||||
// execution_address: ExecutionAddress
|
||||
// balance: Gwei
|
||||
// deposit_epoch: Epoch
|
||||
// withdrawable_epoch: Epoch
|
||||
message Builder {
|
||||
bytes pubkey = 1 [ (ethereum.eth.ext.ssz_size) = "48" ];
|
||||
bytes version = 2 [ (ethereum.eth.ext.ssz_size) = "1" ];
|
||||
bytes execution_address = 3 [ (ethereum.eth.ext.ssz_size) = "20" ];
|
||||
uint64 balance = 4 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Gwei"
|
||||
];
|
||||
uint64 deposit_epoch = 5 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"
|
||||
];
|
||||
uint64 withdrawable_epoch = 6 [
|
||||
(ethereum.eth.ext.cast_type) =
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Epoch"
|
||||
];
|
||||
}
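Each Builder is a fixed 93-byte SSZ container: 48 (pubkey) + 1 (version) + 20 (execution_address) + 3 x 8 (balance, deposit_epoch, withdrawable_epoch) = 93, which is the stride used by the regenerated list marshalling below (`len(b.Builders) * 93`).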
|
||||
|
||||
@@ -37,26 +37,36 @@ func (e *ExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
}
|
||||
dst = append(dst, e.BlockHash...)
|
||||
|
||||
// Field (3) 'FeeRecipient'
|
||||
// Field (3) 'PrevRandao'
|
||||
if size := len(e.PrevRandao); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.PrevRandao", size, 32)
|
||||
return
|
||||
}
|
||||
dst = append(dst, e.PrevRandao...)
|
||||
|
||||
// Field (4) 'FeeRecipient'
|
||||
if size := len(e.FeeRecipient); size != 20 {
|
||||
err = ssz.ErrBytesLengthFn("--.FeeRecipient", size, 20)
|
||||
return
|
||||
}
|
||||
dst = append(dst, e.FeeRecipient...)
|
||||
|
||||
// Field (4) 'GasLimit'
|
||||
// Field (5) 'GasLimit'
|
||||
dst = ssz.MarshalUint64(dst, e.GasLimit)
|
||||
|
||||
// Field (5) 'BuilderIndex'
|
||||
// Field (6) 'BuilderIndex'
|
||||
dst = ssz.MarshalUint64(dst, uint64(e.BuilderIndex))
|
||||
|
||||
// Field (6) 'Slot'
|
||||
// Field (7) 'Slot'
|
||||
dst = ssz.MarshalUint64(dst, uint64(e.Slot))
|
||||
|
||||
// Field (7) 'Value'
|
||||
// Field (8) 'Value'
|
||||
dst = ssz.MarshalUint64(dst, uint64(e.Value))
|
||||
|
||||
// Field (8) 'BlobKzgCommitmentsRoot'
|
||||
// Field (9) 'ExecutionPayment'
|
||||
dst = ssz.MarshalUint64(dst, uint64(e.ExecutionPayment))
|
||||
|
||||
// Field (10) 'BlobKzgCommitmentsRoot'
|
||||
if size := len(e.BlobKzgCommitmentsRoot); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitmentsRoot", size, 32)
|
||||
return
|
||||
@@ -70,7 +80,7 @@ func (e *ExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
func (e *ExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size != 180 {
|
||||
if size != 220 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
@@ -92,36 +102,45 @@ func (e *ExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
|
||||
}
|
||||
e.BlockHash = append(e.BlockHash, buf[64:96]...)
|
||||
|
||||
// Field (3) 'FeeRecipient'
|
||||
// Field (3) 'PrevRandao'
|
||||
if cap(e.PrevRandao) == 0 {
|
||||
e.PrevRandao = make([]byte, 0, len(buf[96:128]))
|
||||
}
|
||||
e.PrevRandao = append(e.PrevRandao, buf[96:128]...)
|
||||
|
||||
// Field (4) 'FeeRecipient'
|
||||
if cap(e.FeeRecipient) == 0 {
|
||||
e.FeeRecipient = make([]byte, 0, len(buf[96:116]))
|
||||
e.FeeRecipient = make([]byte, 0, len(buf[128:148]))
|
||||
}
|
||||
e.FeeRecipient = append(e.FeeRecipient, buf[96:116]...)
|
||||
e.FeeRecipient = append(e.FeeRecipient, buf[128:148]...)
|
||||
|
||||
// Field (4) 'GasLimit'
|
||||
e.GasLimit = ssz.UnmarshallUint64(buf[116:124])
|
||||
// Field (5) 'GasLimit'
|
||||
e.GasLimit = ssz.UnmarshallUint64(buf[148:156])
|
||||
|
||||
// Field (5) 'BuilderIndex'
|
||||
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[124:132]))
|
||||
// Field (6) 'BuilderIndex'
|
||||
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[156:164]))
|
||||
|
||||
// Field (6) 'Slot'
|
||||
e.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[132:140]))
|
||||
// Field (7) 'Slot'
|
||||
e.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[164:172]))
|
||||
|
||||
// Field (7) 'Value'
|
||||
e.Value = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[140:148]))
|
||||
// Field (8) 'Value'
|
||||
e.Value = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[172:180]))
|
||||
|
||||
// Field (8) 'BlobKzgCommitmentsRoot'
|
||||
// Field (9) 'ExecutionPayment'
|
||||
e.ExecutionPayment = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[180:188]))
|
||||
|
||||
// Field (10) 'BlobKzgCommitmentsRoot'
|
||||
if cap(e.BlobKzgCommitmentsRoot) == 0 {
|
||||
e.BlobKzgCommitmentsRoot = make([]byte, 0, len(buf[148:180]))
|
||||
e.BlobKzgCommitmentsRoot = make([]byte, 0, len(buf[188:220]))
|
||||
}
|
||||
e.BlobKzgCommitmentsRoot = append(e.BlobKzgCommitmentsRoot, buf[148:180]...)
|
||||
e.BlobKzgCommitmentsRoot = append(e.BlobKzgCommitmentsRoot, buf[188:220]...)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the ExecutionPayloadBid object
|
||||
func (e *ExecutionPayloadBid) SizeSSZ() (size int) {
|
||||
size = 180
|
||||
size = 220
|
||||
return
|
||||
}
|
||||
|
||||
@@ -155,26 +174,36 @@ func (e *ExecutionPayloadBid) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
}
|
||||
hh.PutBytes(e.BlockHash)
|
||||
|
||||
// Field (3) 'FeeRecipient'
|
||||
// Field (3) 'PrevRandao'
|
||||
if size := len(e.PrevRandao); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.PrevRandao", size, 32)
|
||||
return
|
||||
}
|
||||
hh.PutBytes(e.PrevRandao)
|
||||
|
||||
// Field (4) 'FeeRecipient'
|
||||
if size := len(e.FeeRecipient); size != 20 {
|
||||
err = ssz.ErrBytesLengthFn("--.FeeRecipient", size, 20)
|
||||
return
|
||||
}
|
||||
hh.PutBytes(e.FeeRecipient)
|
||||
|
||||
// Field (4) 'GasLimit'
|
||||
// Field (5) 'GasLimit'
|
||||
hh.PutUint64(e.GasLimit)
|
||||
|
||||
// Field (5) 'BuilderIndex'
|
||||
// Field (6) 'BuilderIndex'
|
||||
hh.PutUint64(uint64(e.BuilderIndex))
|
||||
|
||||
// Field (6) 'Slot'
|
||||
// Field (7) 'Slot'
|
||||
hh.PutUint64(uint64(e.Slot))
|
||||
|
||||
// Field (7) 'Value'
|
||||
// Field (8) 'Value'
|
||||
hh.PutUint64(uint64(e.Value))
|
||||
|
||||
// Field (8) 'BlobKzgCommitmentsRoot'
|
||||
// Field (9) 'ExecutionPayment'
|
||||
hh.PutUint64(uint64(e.ExecutionPayment))
|
||||
|
||||
// Field (10) 'BlobKzgCommitmentsRoot'
|
||||
if size := len(e.BlobKzgCommitmentsRoot); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitmentsRoot", size, 32)
|
||||
return
|
||||
@@ -216,7 +245,7 @@ func (s *SignedExecutionPayloadBid) MarshalSSZTo(buf []byte) (dst []byte, err er
|
||||
func (s *SignedExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size != 276 {
|
||||
if size != 316 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
@@ -224,22 +253,22 @@ func (s *SignedExecutionPayloadBid) UnmarshalSSZ(buf []byte) error {
|
||||
if s.Message == nil {
|
||||
s.Message = new(ExecutionPayloadBid)
|
||||
}
|
||||
if err = s.Message.UnmarshalSSZ(buf[0:180]); err != nil {
|
||||
if err = s.Message.UnmarshalSSZ(buf[0:220]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Field (1) 'Signature'
|
||||
if cap(s.Signature) == 0 {
|
||||
s.Signature = make([]byte, 0, len(buf[180:276]))
|
||||
s.Signature = make([]byte, 0, len(buf[220:316]))
|
||||
}
|
||||
s.Signature = append(s.Signature, buf[180:276]...)
|
||||
s.Signature = append(s.Signature, buf[220:316]...)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the SignedExecutionPayloadBid object
|
||||
func (s *SignedExecutionPayloadBid) SizeSSZ() (size int) {
|
||||
size = 276
|
||||
size = 316
|
||||
return
|
||||
}
|
||||
|
||||
@@ -713,7 +742,7 @@ func (b *BeaconBlockBodyGloas) MarshalSSZ() ([]byte, error) {
|
||||
// MarshalSSZTo ssz marshals the BeaconBlockBodyGloas object to a target array
|
||||
func (b *BeaconBlockBodyGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
dst = buf
|
||||
offset := int(664)
|
||||
offset := int(704)
|
||||
|
||||
// Field (0) 'RandaoReveal'
|
||||
if size := len(b.RandaoReveal); size != 96 {
|
||||
@@ -885,7 +914,7 @@ func (b *BeaconBlockBodyGloas) MarshalSSZTo(buf []byte) (dst []byte, err error)
|
||||
func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size < 664 {
|
||||
if size < 704 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
@@ -917,7 +946,7 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
if o3 != 664 {
|
||||
if o3 != 704 {
|
||||
return ssz.ErrInvalidVariableOffset
|
||||
}
|
||||
|
||||
@@ -958,12 +987,12 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
|
||||
if b.SignedExecutionPayloadBid == nil {
|
||||
b.SignedExecutionPayloadBid = new(SignedExecutionPayloadBid)
|
||||
}
|
||||
if err = b.SignedExecutionPayloadBid.UnmarshalSSZ(buf[384:660]); err != nil {
|
||||
if err = b.SignedExecutionPayloadBid.UnmarshalSSZ(buf[384:700]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Offset (11) 'PayloadAttestations'
|
||||
if o11 = ssz.ReadOffset(buf[660:664]); o11 > size || o9 > o11 {
|
||||
if o11 = ssz.ReadOffset(buf[700:704]); o11 > size || o9 > o11 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
@@ -1105,7 +1134,7 @@ func (b *BeaconBlockBodyGloas) UnmarshalSSZ(buf []byte) error {
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the BeaconBlockBodyGloas object
|
||||
func (b *BeaconBlockBodyGloas) SizeSSZ() (size int) {
|
||||
size = 664
|
||||
size = 704
|
||||
|
||||
// Field (3) 'ProposerSlashings'
|
||||
size += len(b.ProposerSlashings) * 416
|
||||
@@ -1408,7 +1437,7 @@ func (b *BeaconStateGloas) MarshalSSZ() ([]byte, error) {
|
||||
// MarshalSSZTo ssz marshals the BeaconStateGloas object to a target array
|
||||
func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
dst = buf
|
||||
offset := int(2741821)
|
||||
offset := int(2741333)
|
||||
|
||||
// Field (0) 'GenesisTime'
|
||||
dst = ssz.MarshalUint64(dst, b.GenesisTime)
|
||||
@@ -1630,14 +1659,21 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
dst = ssz.MarshalUint64(dst, b.ProposerLookahead[ii])
|
||||
}
|
||||
|
||||
// Field (38) 'ExecutionPayloadAvailability'
|
||||
// Offset (38) 'Builders'
|
||||
dst = ssz.WriteOffset(dst, offset)
|
||||
offset += len(b.Builders) * 93
|
||||
|
||||
// Field (39) 'NextWithdrawalBuilderIndex'
|
||||
dst = ssz.MarshalUint64(dst, uint64(b.NextWithdrawalBuilderIndex))
|
||||
|
||||
// Field (40) 'ExecutionPayloadAvailability'
|
||||
if size := len(b.ExecutionPayloadAvailability); size != 1024 {
|
||||
err = ssz.ErrBytesLengthFn("--.ExecutionPayloadAvailability", size, 1024)
|
||||
return
|
||||
}
|
||||
dst = append(dst, b.ExecutionPayloadAvailability...)
|
||||
|
||||
// Field (39) 'BuilderPendingPayments'
|
||||
// Field (41) 'BuilderPendingPayments'
|
||||
if size := len(b.BuilderPendingPayments); size != 64 {
|
||||
err = ssz.ErrVectorLengthFn("--.BuilderPendingPayments", size, 64)
|
||||
return
|
||||
@@ -1648,23 +1684,20 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
}
|
||||
}
|
||||
|
||||
// Offset (40) 'BuilderPendingWithdrawals'
|
||||
// Offset (42) 'BuilderPendingWithdrawals'
|
||||
dst = ssz.WriteOffset(dst, offset)
|
||||
offset += len(b.BuilderPendingWithdrawals) * 44
|
||||
offset += len(b.BuilderPendingWithdrawals) * 36
|
||||
|
||||
// Field (41) 'LatestBlockHash'
|
||||
// Field (43) 'LatestBlockHash'
|
||||
if size := len(b.LatestBlockHash); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.LatestBlockHash", size, 32)
|
||||
return
|
||||
}
|
||||
dst = append(dst, b.LatestBlockHash...)
|
||||
|
||||
// Field (42) 'LatestWithdrawalsRoot'
|
||||
if size := len(b.LatestWithdrawalsRoot); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.LatestWithdrawalsRoot", size, 32)
|
||||
return
|
||||
}
|
||||
dst = append(dst, b.LatestWithdrawalsRoot...)
|
||||
// Offset (44) 'PayloadExpectedWithdrawals'
|
||||
dst = ssz.WriteOffset(dst, offset)
|
||||
offset += len(b.PayloadExpectedWithdrawals) * 44
|
||||
|
||||
// Field (7) 'HistoricalRoots'
|
||||
if size := len(b.HistoricalRoots); size > 16777216 {
|
||||
@@ -1777,7 +1810,18 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
}
|
||||
}
|
||||
|
||||
// Field (40) 'BuilderPendingWithdrawals'
|
||||
// Field (38) 'Builders'
|
||||
if size := len(b.Builders); size > 1099511627776 {
|
||||
err = ssz.ErrListTooBigFn("--.Builders", size, 1099511627776)
|
||||
return
|
||||
}
|
||||
for ii := 0; ii < len(b.Builders); ii++ {
|
||||
if dst, err = b.Builders[ii].MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// Field (42) 'BuilderPendingWithdrawals'
|
||||
if size := len(b.BuilderPendingWithdrawals); size > 1048576 {
|
||||
err = ssz.ErrListTooBigFn("--.BuilderPendingWithdrawals", size, 1048576)
|
||||
return
|
||||
@@ -1788,6 +1832,17 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
}
|
||||
}
|
||||
|
||||
// Field (44) 'PayloadExpectedWithdrawals'
|
||||
if size := len(b.PayloadExpectedWithdrawals); size > 16 {
|
||||
err = ssz.ErrListTooBigFn("--.PayloadExpectedWithdrawals", size, 16)
|
||||
return
|
||||
}
|
||||
for ii := 0; ii < len(b.PayloadExpectedWithdrawals); ii++ {
|
||||
if dst, err = b.PayloadExpectedWithdrawals[ii].MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
@@ -1795,12 +1850,12 @@ func (b *BeaconStateGloas) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size < 2741821 {
|
||||
if size < 2741333 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
tail := buf
|
||||
var o7, o9, o11, o12, o15, o16, o21, o27, o34, o35, o36, o40 uint64
|
||||
var o7, o9, o11, o12, o15, o16, o21, o27, o34, o35, o36, o38, o42, o44 uint64
|
||||
|
||||
// Field (0) 'GenesisTime'
|
||||
b.GenesisTime = ssz.UnmarshallUint64(buf[0:8])
|
||||
@@ -1853,7 +1908,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
if o7 != 2741821 {
|
||||
if o7 != 2741333 {
|
||||
return ssz.ErrInvalidVariableOffset
|
||||
}
|
||||
|
||||
@@ -1963,93 +2018,100 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
if b.LatestExecutionPayloadBid == nil {
|
||||
b.LatestExecutionPayloadBid = new(ExecutionPayloadBid)
|
||||
}
|
||||
if err = b.LatestExecutionPayloadBid.UnmarshalSSZ(buf[2736629:2736809]); err != nil {
|
||||
if err = b.LatestExecutionPayloadBid.UnmarshalSSZ(buf[2736629:2736849]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Field (25) 'NextWithdrawalIndex'
|
||||
b.NextWithdrawalIndex = ssz.UnmarshallUint64(buf[2736809:2736817])
|
||||
b.NextWithdrawalIndex = ssz.UnmarshallUint64(buf[2736849:2736857])
|
||||
|
||||
// Field (26) 'NextWithdrawalValidatorIndex'
|
||||
b.NextWithdrawalValidatorIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[2736817:2736825]))
|
||||
b.NextWithdrawalValidatorIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[2736857:2736865]))
|
||||
|
||||
// Offset (27) 'HistoricalSummaries'
|
||||
if o27 = ssz.ReadOffset(buf[2736825:2736829]); o27 > size || o21 > o27 {
|
||||
if o27 = ssz.ReadOffset(buf[2736865:2736869]); o27 > size || o21 > o27 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Field (28) 'DepositRequestsStartIndex'
|
||||
b.DepositRequestsStartIndex = ssz.UnmarshallUint64(buf[2736829:2736837])
|
||||
b.DepositRequestsStartIndex = ssz.UnmarshallUint64(buf[2736869:2736877])
|
||||
|
||||
// Field (29) 'DepositBalanceToConsume'
|
||||
b.DepositBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736837:2736845]))
|
||||
b.DepositBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736877:2736885]))
|
||||
|
||||
// Field (30) 'ExitBalanceToConsume'
|
||||
b.ExitBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736845:2736853]))
|
||||
b.ExitBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736885:2736893]))
|
||||
|
||||
// Field (31) 'EarliestExitEpoch'
|
||||
b.EarliestExitEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736853:2736861]))
|
||||
b.EarliestExitEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736893:2736901]))
|
||||
|
||||
// Field (32) 'ConsolidationBalanceToConsume'
|
||||
b.ConsolidationBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736861:2736869]))
|
||||
b.ConsolidationBalanceToConsume = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[2736901:2736909]))
|
||||
|
||||
// Field (33) 'EarliestConsolidationEpoch'
|
||||
b.EarliestConsolidationEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736869:2736877]))
|
||||
b.EarliestConsolidationEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[2736909:2736917]))
|
||||
|
||||
// Offset (34) 'PendingDeposits'
|
||||
if o34 = ssz.ReadOffset(buf[2736877:2736881]); o34 > size || o27 > o34 {
|
||||
if o34 = ssz.ReadOffset(buf[2736917:2736921]); o34 > size || o27 > o34 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Offset (35) 'PendingPartialWithdrawals'
|
||||
if o35 = ssz.ReadOffset(buf[2736881:2736885]); o35 > size || o34 > o35 {
|
||||
if o35 = ssz.ReadOffset(buf[2736921:2736925]); o35 > size || o34 > o35 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Offset (36) 'PendingConsolidations'
|
||||
if o36 = ssz.ReadOffset(buf[2736885:2736889]); o36 > size || o35 > o36 {
|
||||
if o36 = ssz.ReadOffset(buf[2736925:2736929]); o36 > size || o35 > o36 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Field (37) 'ProposerLookahead'
|
||||
b.ProposerLookahead = ssz.ExtendUint64(b.ProposerLookahead, 64)
|
||||
for ii := 0; ii < 64; ii++ {
|
||||
b.ProposerLookahead[ii] = ssz.UnmarshallUint64(buf[2736889:2737401][ii*8 : (ii+1)*8])
|
||||
b.ProposerLookahead[ii] = ssz.UnmarshallUint64(buf[2736929:2737441][ii*8 : (ii+1)*8])
|
||||
}
|
||||
|
||||
// Field (38) 'ExecutionPayloadAvailability'
|
||||
// Offset (38) 'Builders'
|
||||
if o38 = ssz.ReadOffset(buf[2737441:2737445]); o38 > size || o36 > o38 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Field (39) 'NextWithdrawalBuilderIndex'
|
||||
b.NextWithdrawalBuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[2737445:2737453]))
|
||||
|
||||
// Field (40) 'ExecutionPayloadAvailability'
|
||||
if cap(b.ExecutionPayloadAvailability) == 0 {
|
||||
b.ExecutionPayloadAvailability = make([]byte, 0, len(buf[2737401:2738425]))
|
||||
b.ExecutionPayloadAvailability = make([]byte, 0, len(buf[2737453:2738477]))
|
||||
}
|
||||
b.ExecutionPayloadAvailability = append(b.ExecutionPayloadAvailability, buf[2737401:2738425]...)
|
||||
b.ExecutionPayloadAvailability = append(b.ExecutionPayloadAvailability, buf[2737453:2738477]...)
|
||||
|
||||
// Field (39) 'BuilderPendingPayments'
|
||||
// Field (41) 'BuilderPendingPayments'
|
||||
b.BuilderPendingPayments = make([]*BuilderPendingPayment, 64)
|
||||
for ii := 0; ii < 64; ii++ {
|
||||
if b.BuilderPendingPayments[ii] == nil {
|
||||
b.BuilderPendingPayments[ii] = new(BuilderPendingPayment)
|
||||
}
|
||||
if err = b.BuilderPendingPayments[ii].UnmarshalSSZ(buf[2738425:2741753][ii*52 : (ii+1)*52]); err != nil {
|
||||
if err = b.BuilderPendingPayments[ii].UnmarshalSSZ(buf[2738477:2741293][ii*44 : (ii+1)*44]); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Offset (40) 'BuilderPendingWithdrawals'
|
||||
if o40 = ssz.ReadOffset(buf[2741753:2741757]); o40 > size || o36 > o40 {
|
||||
// Offset (42) 'BuilderPendingWithdrawals'
|
||||
if o42 = ssz.ReadOffset(buf[2741293:2741297]); o42 > size || o38 > o42 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Field (41) 'LatestBlockHash'
|
||||
// Field (43) 'LatestBlockHash'
|
||||
if cap(b.LatestBlockHash) == 0 {
|
||||
b.LatestBlockHash = make([]byte, 0, len(buf[2741757:2741789]))
|
||||
b.LatestBlockHash = make([]byte, 0, len(buf[2741297:2741329]))
|
||||
}
|
||||
b.LatestBlockHash = append(b.LatestBlockHash, buf[2741757:2741789]...)
|
||||
b.LatestBlockHash = append(b.LatestBlockHash, buf[2741297:2741329]...)
|
||||
|
||||
// Field (42) 'LatestWithdrawalsRoot'
|
||||
if cap(b.LatestWithdrawalsRoot) == 0 {
|
||||
b.LatestWithdrawalsRoot = make([]byte, 0, len(buf[2741789:2741821]))
|
||||
// Offset (44) 'PayloadExpectedWithdrawals'
|
||||
if o44 = ssz.ReadOffset(buf[2741329:2741333]); o44 > size || o42 > o44 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
b.LatestWithdrawalsRoot = append(b.LatestWithdrawalsRoot, buf[2741789:2741821]...)
|
||||
|
||||
// Field (7) 'HistoricalRoots'
|
||||
{
|
||||
@@ -2209,7 +2271,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
|
||||
// Field (36) 'PendingConsolidations'
|
||||
{
|
||||
buf = tail[o36:o40]
|
||||
buf = tail[o36:o38]
|
||||
num, err := ssz.DivideInt2(len(buf), 16, 262144)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -2225,10 +2287,28 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
}
|
||||
}
|
||||
|
||||
// Field (40) 'BuilderPendingWithdrawals'
|
||||
// Field (38) 'Builders'
|
||||
{
|
||||
buf = tail[o40:]
|
||||
num, err := ssz.DivideInt2(len(buf), 44, 1048576)
|
||||
buf = tail[o38:o42]
|
||||
num, err := ssz.DivideInt2(len(buf), 93, 1099511627776)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
b.Builders = make([]*Builder, num)
|
||||
for ii := 0; ii < num; ii++ {
|
||||
if b.Builders[ii] == nil {
|
||||
b.Builders[ii] = new(Builder)
|
||||
}
|
||||
if err = b.Builders[ii].UnmarshalSSZ(buf[ii*93 : (ii+1)*93]); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Field (42) 'BuilderPendingWithdrawals'
|
||||
{
|
||||
buf = tail[o42:o44]
|
||||
num, err := ssz.DivideInt2(len(buf), 36, 1048576)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -2237,7 +2317,25 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
if b.BuilderPendingWithdrawals[ii] == nil {
|
||||
b.BuilderPendingWithdrawals[ii] = new(BuilderPendingWithdrawal)
|
||||
}
|
||||
if err = b.BuilderPendingWithdrawals[ii].UnmarshalSSZ(buf[ii*44 : (ii+1)*44]); err != nil {
|
||||
if err = b.BuilderPendingWithdrawals[ii].UnmarshalSSZ(buf[ii*36 : (ii+1)*36]); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Field (44) 'PayloadExpectedWithdrawals'
|
||||
{
|
||||
buf = tail[o44:]
|
||||
num, err := ssz.DivideInt2(len(buf), 44, 16)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
b.PayloadExpectedWithdrawals = make([]*v1.Withdrawal, num)
|
||||
for ii := 0; ii < num; ii++ {
|
||||
if b.PayloadExpectedWithdrawals[ii] == nil {
|
||||
b.PayloadExpectedWithdrawals[ii] = new(v1.Withdrawal)
|
||||
}
|
||||
if err = b.PayloadExpectedWithdrawals[ii].UnmarshalSSZ(buf[ii*44 : (ii+1)*44]); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@@ -2247,7 +2345,7 @@ func (b *BeaconStateGloas) UnmarshalSSZ(buf []byte) error {
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the BeaconStateGloas object
|
||||
func (b *BeaconStateGloas) SizeSSZ() (size int) {
|
||||
size = 2741821
|
||||
size = 2741333
|
||||
|
||||
// Field (7) 'HistoricalRoots'
|
||||
size += len(b.HistoricalRoots) * 32
|
||||
@@ -2282,8 +2380,14 @@ func (b *BeaconStateGloas) SizeSSZ() (size int) {
|
||||
// Field (36) 'PendingConsolidations'
|
||||
size += len(b.PendingConsolidations) * 16
|
||||
|
||||
// Field (40) 'BuilderPendingWithdrawals'
|
||||
size += len(b.BuilderPendingWithdrawals) * 44
|
||||
// Field (38) 'Builders'
|
||||
size += len(b.Builders) * 93
|
||||
|
||||
// Field (42) 'BuilderPendingWithdrawals'
|
||||
size += len(b.BuilderPendingWithdrawals) * 36
|
||||
|
||||
// Field (44) 'PayloadExpectedWithdrawals'
|
||||
size += len(b.PayloadExpectedWithdrawals) * 44
|
||||
|
||||
return
|
||||
}
|
||||
@@ -2637,14 +2741,33 @@ func (b *BeaconStateGloas) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
hh.Merkleize(subIndx)
|
||||
}
|
||||
|
||||
// Field (38) 'ExecutionPayloadAvailability'
|
||||
// Field (38) 'Builders'
|
||||
{
|
||||
subIndx := hh.Index()
|
||||
num := uint64(len(b.Builders))
|
||||
if num > 1099511627776 {
|
||||
err = ssz.ErrIncorrectListSize
|
||||
return
|
||||
}
|
||||
for _, elem := range b.Builders {
|
||||
if err = elem.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
hh.MerkleizeWithMixin(subIndx, num, 1099511627776)
|
||||
}
|
||||
|
||||
// Field (39) 'NextWithdrawalBuilderIndex'
|
||||
hh.PutUint64(uint64(b.NextWithdrawalBuilderIndex))
|
||||
|
||||
// Field (40) 'ExecutionPayloadAvailability'
|
||||
if size := len(b.ExecutionPayloadAvailability); size != 1024 {
|
||||
err = ssz.ErrBytesLengthFn("--.ExecutionPayloadAvailability", size, 1024)
|
||||
return
|
||||
}
|
||||
hh.PutBytes(b.ExecutionPayloadAvailability)
|
||||
|
||||
// Field (39) 'BuilderPendingPayments'
|
||||
// Field (41) 'BuilderPendingPayments'
|
||||
{
|
||||
subIndx := hh.Index()
|
||||
for _, elem := range b.BuilderPendingPayments {
|
||||
@@ -2655,7 +2778,7 @@ func (b *BeaconStateGloas) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
hh.Merkleize(subIndx)
|
||||
}
|
||||
|
||||
// Field (40) 'BuilderPendingWithdrawals'
|
||||
// Field (42) 'BuilderPendingWithdrawals'
|
||||
{
|
||||
subIndx := hh.Index()
|
||||
num := uint64(len(b.BuilderPendingWithdrawals))
|
||||
@@ -2671,19 +2794,28 @@ func (b *BeaconStateGloas) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
hh.MerkleizeWithMixin(subIndx, num, 1048576)
|
||||
}
|
||||
|
||||
// Field (41) 'LatestBlockHash'
|
||||
// Field (43) 'LatestBlockHash'
|
||||
if size := len(b.LatestBlockHash); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.LatestBlockHash", size, 32)
|
||||
return
|
||||
}
|
||||
hh.PutBytes(b.LatestBlockHash)
|
||||
|
||||
// Field (42) 'LatestWithdrawalsRoot'
|
||||
if size := len(b.LatestWithdrawalsRoot); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.LatestWithdrawalsRoot", size, 32)
|
||||
return
|
||||
// Field (44) 'PayloadExpectedWithdrawals'
|
||||
{
|
||||
subIndx := hh.Index()
|
||||
num := uint64(len(b.PayloadExpectedWithdrawals))
|
||||
if num > 16 {
|
||||
err = ssz.ErrIncorrectListSize
|
||||
return
|
||||
}
|
||||
for _, elem := range b.PayloadExpectedWithdrawals {
|
||||
if err = elem.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
hh.MerkleizeWithMixin(subIndx, num, 16)
|
||||
}
|
||||
hh.PutBytes(b.LatestWithdrawalsRoot)
|
||||
|
||||
hh.Merkleize(indx)
|
||||
return
|
||||
@@ -2716,7 +2848,7 @@ func (b *BuilderPendingPayment) MarshalSSZTo(buf []byte) (dst []byte, err error)
|
||||
func (b *BuilderPendingPayment) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size != 52 {
|
||||
if size != 44 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
@@ -2727,7 +2859,7 @@ func (b *BuilderPendingPayment) UnmarshalSSZ(buf []byte) error {
|
||||
if b.Withdrawal == nil {
|
||||
b.Withdrawal = new(BuilderPendingWithdrawal)
|
||||
}
|
||||
if err = b.Withdrawal.UnmarshalSSZ(buf[8:52]); err != nil {
|
||||
if err = b.Withdrawal.UnmarshalSSZ(buf[8:44]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -2736,7 +2868,7 @@ func (b *BuilderPendingPayment) UnmarshalSSZ(buf []byte) error {
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the BuilderPendingPayment object
|
||||
func (b *BuilderPendingPayment) SizeSSZ() (size int) {
|
||||
size = 52
|
||||
size = 44
|
||||
return
|
||||
}
|
||||
|
||||
@@ -2783,9 +2915,6 @@ func (b *BuilderPendingWithdrawal) MarshalSSZTo(buf []byte) (dst []byte, err err
|
||||
// Field (2) 'BuilderIndex'
|
||||
dst = ssz.MarshalUint64(dst, uint64(b.BuilderIndex))
|
||||
|
||||
// Field (3) 'WithdrawableEpoch'
|
||||
dst = ssz.MarshalUint64(dst, uint64(b.WithdrawableEpoch))
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
@@ -2793,7 +2922,7 @@ func (b *BuilderPendingWithdrawal) MarshalSSZTo(buf []byte) (dst []byte, err err
|
||||
func (b *BuilderPendingWithdrawal) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size != 44 {
|
||||
if size != 36 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
@@ -2807,17 +2936,14 @@ func (b *BuilderPendingWithdrawal) UnmarshalSSZ(buf []byte) error {
|
||||
b.Amount = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[20:28]))
|
||||
|
||||
// Field (2) 'BuilderIndex'
|
||||
b.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[28:36]))
|
||||
|
||||
// Field (3) 'WithdrawableEpoch'
|
||||
b.WithdrawableEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[36:44]))
|
||||
b.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[28:36]))
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the BuilderPendingWithdrawal object
|
||||
func (b *BuilderPendingWithdrawal) SizeSSZ() (size int) {
|
||||
size = 44
|
||||
size = 36
|
||||
return
|
||||
}
|
||||
|
||||
@@ -2843,9 +2969,6 @@ func (b *BuilderPendingWithdrawal) HashTreeRootWith(hh *ssz.Hasher) (err error)
|
||||
// Field (2) 'BuilderIndex'
|
||||
hh.PutUint64(uint64(b.BuilderIndex))
|
||||
|
||||
// Field (3) 'WithdrawableEpoch'
|
||||
hh.PutUint64(uint64(b.WithdrawableEpoch))
|
||||
|
||||
hh.Merkleize(indx)
|
||||
return
|
||||
}
|
||||
@@ -3218,7 +3341,7 @@ func (e *ExecutionPayloadEnvelope) UnmarshalSSZ(buf []byte) error {
|
||||
}
|
||||
|
||||
// Field (2) 'BuilderIndex'
|
||||
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.ValidatorIndex(ssz.UnmarshallUint64(buf[8:16]))
|
||||
e.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[8:16]))
|
||||
|
||||
// Field (3) 'BeaconBlockRoot'
|
||||
if cap(e.BeaconBlockRoot) == 0 {
|
||||
@@ -3472,3 +3595,132 @@ func (s *SignedExecutionPayloadEnvelope) HashTreeRootWith(hh *ssz.Hasher) (err e
|
||||
hh.Merkleize(indx)
|
||||
return
|
||||
}
|
||||
|
||||
// MarshalSSZ ssz marshals the Builder object
func (b *Builder) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(b)
}

// MarshalSSZTo ssz marshals the Builder object to a target array
func (b *Builder) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf

// Field (0) 'Pubkey'
if size := len(b.Pubkey); size != 48 {
err = ssz.ErrBytesLengthFn("--.Pubkey", size, 48)
return
}
dst = append(dst, b.Pubkey...)

// Field (1) 'Version'
if size := len(b.Version); size != 1 {
err = ssz.ErrBytesLengthFn("--.Version", size, 1)
return
}
dst = append(dst, b.Version...)

// Field (2) 'ExecutionAddress'
if size := len(b.ExecutionAddress); size != 20 {
err = ssz.ErrBytesLengthFn("--.ExecutionAddress", size, 20)
return
}
dst = append(dst, b.ExecutionAddress...)

// Field (3) 'Balance'
dst = ssz.MarshalUint64(dst, uint64(b.Balance))

// Field (4) 'DepositEpoch'
dst = ssz.MarshalUint64(dst, uint64(b.DepositEpoch))

// Field (5) 'WithdrawableEpoch'
dst = ssz.MarshalUint64(dst, uint64(b.WithdrawableEpoch))

return
}

// UnmarshalSSZ ssz unmarshals the Builder object
func (b *Builder) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 93 {
return ssz.ErrSize
}

// Field (0) 'Pubkey'
if cap(b.Pubkey) == 0 {
b.Pubkey = make([]byte, 0, len(buf[0:48]))
}
b.Pubkey = append(b.Pubkey, buf[0:48]...)

// Field (1) 'Version'
if cap(b.Version) == 0 {
b.Version = make([]byte, 0, len(buf[48:49]))
}
b.Version = append(b.Version, buf[48:49]...)

// Field (2) 'ExecutionAddress'
if cap(b.ExecutionAddress) == 0 {
b.ExecutionAddress = make([]byte, 0, len(buf[49:69]))
}
b.ExecutionAddress = append(b.ExecutionAddress, buf[49:69]...)

// Field (3) 'Balance'
b.Balance = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Gwei(ssz.UnmarshallUint64(buf[69:77]))

// Field (4) 'DepositEpoch'
b.DepositEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[77:85]))

// Field (5) 'WithdrawableEpoch'
b.WithdrawableEpoch = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[85:93]))

return err
}

// SizeSSZ returns the ssz encoded size in bytes for the Builder object
func (b *Builder) SizeSSZ() (size int) {
size = 93
return
}

// HashTreeRoot ssz hashes the Builder object
func (b *Builder) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(b)
}

// HashTreeRootWith ssz hashes the Builder object with a hasher
func (b *Builder) HashTreeRootWith(hh *ssz.Hasher) (err error) {
indx := hh.Index()

// Field (0) 'Pubkey'
if size := len(b.Pubkey); size != 48 {
err = ssz.ErrBytesLengthFn("--.Pubkey", size, 48)
return
}
hh.PutBytes(b.Pubkey)

// Field (1) 'Version'
if size := len(b.Version); size != 1 {
err = ssz.ErrBytesLengthFn("--.Version", size, 1)
return
}
hh.PutBytes(b.Version)

// Field (2) 'ExecutionAddress'
if size := len(b.ExecutionAddress); size != 20 {
err = ssz.ErrBytesLengthFn("--.ExecutionAddress", size, 20)
return
}
hh.PutBytes(b.ExecutionAddress)

// Field (3) 'Balance'
hh.PutUint64(uint64(b.Balance))

// Field (4) 'DepositEpoch'
hh.PutUint64(uint64(b.DepositEpoch))

// Field (5) 'WithdrawableEpoch'
hh.PutUint64(uint64(b.WithdrawableEpoch))

hh.Merkleize(indx)
return
}

@@ -26,10 +26,12 @@ func TestExecutionPayloadBid_Copy(t *testing.T) {
ParentBlockHash: []byte("parent_block_hash_32_bytes_long!"),
ParentBlockRoot: []byte("parent_block_root_32_bytes_long!"),
BlockHash: []byte("block_hash_32_bytes_are_long!!"),
PrevRandao: []byte("prev_randao_32_bytes_long!!!"),
FeeRecipient: []byte("fee_recipient_20_byt"),
GasLimit: 15000000,
BuilderIndex: primitives.ValidatorIndex(42),
BuilderIndex: primitives.BuilderIndex(42),
Slot: primitives.Slot(12345),
ExecutionPayment: 5645654,
Value: 1000000000000000000,
BlobKzgCommitmentsRoot: []byte("blob_kzg_commitments_32_bytes!!"),
},
@@ -76,10 +78,9 @@ func TestBuilderPendingWithdrawal_Copy(t *testing.T) {
{
name: "fully populated withdrawal",
withdrawal: &BuilderPendingWithdrawal{
FeeRecipient: []byte("fee_recipient_20_byt"),
Amount: primitives.Gwei(5000000000),
BuilderIndex: primitives.ValidatorIndex(123),
WithdrawableEpoch: primitives.Epoch(456),
FeeRecipient: []byte("fee_recipient_20_byt"),
Amount: primitives.Gwei(5000000000),
BuilderIndex: primitives.BuilderIndex(123),
},
},
}
@@ -134,10 +135,9 @@ func TestBuilderPendingPayment_Copy(t *testing.T) {
payment: &BuilderPendingPayment{
Weight: primitives.Gwei(2500),
Withdrawal: &BuilderPendingWithdrawal{
FeeRecipient: []byte("test_recipient_20byt"),
Amount: primitives.Gwei(10000),
BuilderIndex: primitives.ValidatorIndex(789),
WithdrawableEpoch: primitives.Epoch(999),
FeeRecipient: []byte("test_recipient_20byt"),
Amount: primitives.Gwei(10000),
BuilderIndex: primitives.BuilderIndex(789),
},
},
},
@@ -166,3 +166,60 @@
})
}
}

func TestCopyBuilder(t *testing.T) {
tests := []struct {
name string
builder *Builder
}{
{
name: "nil builder",
builder: nil,
},
{
name: "empty builder",
builder: &Builder{},
},
{
name: "fully populated builder",
builder: &Builder{
Pubkey: []byte("pubkey_48_bytes_long_pubkey_48_bytes_long_pubkey_48!"),
Version: []byte{'a'},
ExecutionAddress: []byte("execution_address_20"),
Balance: primitives.Gwei(12345),
DepositEpoch: primitives.Epoch(10),
WithdrawableEpoch: primitives.Epoch(20),
},
},
}

for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
copied := CopyBuilder(tt.builder)
if tt.builder == nil {
if copied != nil {
t.Errorf("CopyBuilder() of nil should return nil, got %v", copied)
}
return
}

if !reflect.DeepEqual(tt.builder, copied) {
t.Errorf("CopyBuilder() = %v, want %v", copied, tt.builder)
}

if len(tt.builder.Pubkey) > 0 {
tt.builder.Pubkey[0] = 0xFF
if copied.Pubkey[0] == 0xFF {
t.Error("CopyBuilder() did not create deep copy of Pubkey")
}
}

if len(tt.builder.ExecutionAddress) > 0 {
tt.builder.ExecutionAddress[0] = 0xFF
if copied.ExecutionAddress[0] == 0xFF {
t.Error("CopyBuilder() did not create deep copy of ExecutionAddress")
}
}
})
}
}

@@ -48,6 +48,7 @@ mainnet = {
"payload_attestation.size": "4", # Gloas: MAX_PAYLOAD_ATTESTATIONS defined in block body
"execution_payload_availability.size": "1024", # Gloas: SLOTS_PER_HISTORICAL_ROOT
"builder_pending_payments.size": "64", # Gloas: vector length (2 * SLOTS_PER_EPOCH)
"builder_registry_limit": "1099511627776", # Gloas: BUILDER_REGISTRY_LIMIT (same for mainnet/minimal)
}

minimal = {
@@ -91,7 +92,8 @@ minimal = {
"ptc.type": "github.com/OffchainLabs/go-bitfield.Bitvector2",
"payload_attestation.size": "4", # Gloas: MAX_PAYLOAD_ATTESTATIONS defined in block body
"execution_payload_availability.size": "8", # Gloas: SLOTS_PER_HISTORICAL_ROOT
"builder_pending_payments.size": "16" # Gloas: vector length (2 * SLOTS_PER_EPOCH)
"builder_pending_payments.size": "16", # Gloas: vector length (2 * SLOTS_PER_EPOCH)
"builder_registry_limit": "1099511627776", # Gloas: BUILDER_REGISTRY_LIMIT (same for mainnet/minimal)
}

###### Rules definitions #######

@@ -1,4 +1,4 @@
|
||||
version: v1.6.0
|
||||
version: v1.7.0-alpha.0
|
||||
style: full
|
||||
|
||||
specrefs:
|
||||
@@ -57,6 +57,12 @@ exceptions:
|
||||
- PAYLOAD_STATUS_EMPTY#gloas
|
||||
- PAYLOAD_STATUS_FULL#gloas
|
||||
- PAYLOAD_STATUS_PENDING#gloas
|
||||
- ATTESTATION_TIMELINESS_INDEX#gloas
|
||||
- BUILDER_INDEX_FLAG#gloas
|
||||
- BUILDER_INDEX_SELF_BUILD#gloas
|
||||
- DOMAIN_PROPOSER_PREFERENCES#gloas
|
||||
- NUM_BLOCK_TIMELINESS_DEADLINES#gloas
|
||||
- PTC_TIMELINESS_INDEX#gloas
|
||||
|
||||
configs:
|
||||
# Not implemented (placeholders)
|
||||
@@ -76,6 +82,7 @@ exceptions:
|
||||
- MAX_REQUEST_PAYLOADS#gloas
|
||||
- PAYLOAD_ATTESTATION_DUE_BPS#gloas
|
||||
- SYNC_MESSAGE_DUE_BPS_GLOAS#gloas
|
||||
- MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
|
||||
|
||||
ssz_objects:
|
||||
# Not implemented
|
||||
@@ -103,6 +110,9 @@ exceptions:
|
||||
- PayloadAttestationMessage#gloas
|
||||
- SignedExecutionPayloadEnvelope#gloas
|
||||
- SignedExecutionPayloadBid#gloas
|
||||
- Builder#gloas
|
||||
- ProposerPreferences#gloas
|
||||
- SignedProposerPreferences#gloas
|
||||
|
||||
dataclasses:
|
||||
# Not implemented
|
||||
@@ -331,10 +341,8 @@ exceptions:
|
||||
- get_ptc#gloas
|
||||
- get_ptc_assignment#gloas
|
||||
- get_weight#gloas
|
||||
- has_builder_withdrawal_credential#gloas
|
||||
- has_compounding_withdrawal_credential#gloas
|
||||
- is_attestation_same_slot#gloas
|
||||
- is_builder_payment_withdrawable#gloas
|
||||
- is_builder_withdrawal_credential#gloas
|
||||
- is_merge_transition_complete#gloas
|
||||
- is_parent_block_full#gloas
|
||||
@@ -358,7 +366,6 @@ exceptions:
|
||||
- process_proposer_slashing#gloas
|
||||
- process_slot#gloas
|
||||
- process_withdrawals#gloas
|
||||
- remove_flag#gloas
|
||||
- should_extend_payload#gloas
|
||||
- update_latest_messages#gloas
|
||||
- upgrade_to_gloas#gloas
|
||||
@@ -368,3 +375,55 @@ exceptions:
|
||||
- verify_data_column_sidecar_inclusion_proof#gloas
|
||||
- verify_execution_payload_envelope_signature#gloas
|
||||
- verify_execution_payload_bid_signature#gloas
|
||||
- add_builder_to_registry#gloas
|
||||
- apply_deposit_for_builder#gloas
|
||||
- apply_withdrawals#capella
|
||||
- apply_withdrawals#gloas
|
||||
- can_builder_cover_bid#gloas
|
||||
- compute_proposer_score#phase0
|
||||
- convert_builder_index_to_validator_index#gloas
|
||||
- convert_validator_index_to_builder_index#gloas
|
||||
- get_attestation_score#gloas
|
||||
- get_attestation_score#phase0
|
||||
- get_balance_after_withdrawals#capella
|
||||
- get_builder_from_deposit#gloas
|
||||
- get_builder_withdrawals#gloas
|
||||
- get_builders_sweep_withdrawals#gloas
|
||||
- get_index_for_new_builder#gloas
|
||||
- get_pending_balance_to_withdraw_for_builder#gloas
|
||||
- get_pending_partial_withdrawals#electra
|
||||
- get_proposer_preferences_signature#gloas
|
||||
- get_upcoming_proposal_slots#gloas
|
||||
- get_validators_sweep_withdrawals#capella
|
||||
- get_validators_sweep_withdrawals#electra
|
||||
- initiate_builder_exit#gloas
|
||||
- is_active_builder#gloas
|
||||
- is_builder_index#gloas
|
||||
- is_eligible_for_partial_withdrawals#electra
|
||||
- is_head_late#gloas
|
||||
- is_head_weak#gloas
|
||||
- is_parent_strong#gloas
|
||||
- is_proposer_equivocation#phase0
|
||||
- is_valid_proposal_slot#gloas
|
||||
- process_deposit_request#gloas
|
||||
- process_voluntary_exit#gloas
|
||||
- record_block_timeliness#gloas
|
||||
- record_block_timeliness#phase0
|
||||
- should_apply_proposer_boost#gloas
|
||||
- update_builder_pending_withdrawals#gloas
|
||||
- update_next_withdrawal_builder_index#gloas
|
||||
- update_next_withdrawal_index#capella
|
||||
- update_next_withdrawal_validator_index#capella
|
||||
- update_payload_expected_withdrawals#gloas
|
||||
- update_pending_partial_withdrawals#electra
|
||||
- update_proposer_boost_root#gloas
|
||||
- update_proposer_boost_root#phase0
|
||||
|
||||
presets:
|
||||
- CELLS_PER_EXT_BLOB#fulu
|
||||
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
|
||||
- BUILDER_REGISTRY_LIMIT#gloas
|
||||
- MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas
|
||||
- MAX_PAYLOAD_ATTESTATIONS#gloas
|
||||
- PTC_SIZE#gloas
|
||||
- UPDATE_TIMEOUT#altair
|
||||
|
||||
@@ -304,16 +304,6 @@
|
||||
GENESIS_SLOT: Slot = 0
|
||||
</spec>
|
||||
|
||||
- name: INTERVALS_PER_SLOT
|
||||
sources:
|
||||
- file: config/params/config.go
|
||||
search: IntervalsPerSlot\s+.*yaml:"INTERVALS_PER_SLOT"
|
||||
regex: true
|
||||
spec: |
|
||||
<spec constant_var="INTERVALS_PER_SLOT" fork="phase0" hash="3352e419">
|
||||
INTERVALS_PER_SLOT: uint64 = 3
|
||||
</spec>
|
||||
|
||||
- name: JUSTIFICATION_BITS_LENGTH
|
||||
sources:
|
||||
- file: config/params/config.go
|
||||
|
||||
@@ -698,7 +698,7 @@
|
||||
- name: compute_matrix
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="compute_matrix" fork="fulu" hash="b39370ca">
|
||||
<spec fn="compute_matrix" fork="fulu" hash="0b88eac1">
|
||||
def compute_matrix(blobs: Sequence[Blob]) -> Sequence[MatrixEntry]:
|
||||
"""
|
||||
Return the full, flattened sequence of matrix entries.
|
||||
@@ -714,8 +714,8 @@
|
||||
MatrixEntry(
|
||||
cell=cell,
|
||||
kzg_proof=proof,
|
||||
row_index=blob_index,
|
||||
column_index=cell_index,
|
||||
row_index=blob_index,
|
||||
)
|
||||
)
|
||||
return matrix
|
||||
@@ -739,7 +739,7 @@
|
||||
- file: beacon-chain/rpc/prysm/v1alpha1/validator/proposer_attestations_electra.go
|
||||
search: func computeOnChainAggregate(
|
||||
spec: |
|
||||
<spec fn="compute_on_chain_aggregate" fork="electra" hash="128055d6">
|
||||
<spec fn="compute_on_chain_aggregate" fork="electra" hash="f020af4c">
|
||||
def compute_on_chain_aggregate(network_aggregates: Sequence[Attestation]) -> Attestation:
|
||||
aggregates = sorted(
|
||||
network_aggregates, key=lambda a: get_committee_indices(a.committee_bits)[0]
|
||||
@@ -760,8 +760,8 @@
|
||||
return Attestation(
|
||||
aggregation_bits=aggregation_bits,
|
||||
data=data,
|
||||
committee_bits=committee_bits,
|
||||
signature=signature,
|
||||
committee_bits=committee_bits,
|
||||
)
|
||||
</spec>
|
||||
|
||||
@@ -2366,40 +2366,18 @@
|
||||
- file: beacon-chain/state/state-native/getters_withdrawal.go
|
||||
search: func (b *BeaconState) ExpectedWithdrawals(
|
||||
spec: |
|
||||
<spec fn="get_expected_withdrawals" fork="capella" hash="09191977">
|
||||
def get_expected_withdrawals(state: BeaconState) -> Sequence[Withdrawal]:
|
||||
epoch = get_current_epoch(state)
|
||||
<spec fn="get_expected_withdrawals" fork="capella" hash="d6a98c14">
|
||||
def get_expected_withdrawals(state: BeaconState) -> Tuple[Sequence[Withdrawal], uint64]:
|
||||
withdrawal_index = state.next_withdrawal_index
|
||||
validator_index = state.next_withdrawal_validator_index
|
||||
withdrawals: List[Withdrawal] = []
|
||||
bound = min(len(state.validators), MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP)
|
||||
for _ in range(bound):
|
||||
validator = state.validators[validator_index]
|
||||
balance = state.balances[validator_index]
|
||||
if is_fully_withdrawable_validator(validator, balance, epoch):
|
||||
withdrawals.append(
|
||||
Withdrawal(
|
||||
index=withdrawal_index,
|
||||
validator_index=validator_index,
|
||||
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
|
||||
amount=balance,
|
||||
)
|
||||
)
|
||||
withdrawal_index += WithdrawalIndex(1)
|
||||
elif is_partially_withdrawable_validator(validator, balance):
|
||||
withdrawals.append(
|
||||
Withdrawal(
|
||||
index=withdrawal_index,
|
||||
validator_index=validator_index,
|
||||
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
|
||||
amount=balance - MAX_EFFECTIVE_BALANCE,
|
||||
)
|
||||
)
|
||||
withdrawal_index += WithdrawalIndex(1)
|
||||
if len(withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
|
||||
break
|
||||
validator_index = ValidatorIndex((validator_index + 1) % len(state.validators))
|
||||
return withdrawals
|
||||
|
||||
# Get validators sweep withdrawals
|
||||
validators_sweep_withdrawals, withdrawal_index, processed_validators_sweep_count = (
|
||||
get_validators_sweep_withdrawals(state, withdrawal_index, withdrawals)
|
||||
)
|
||||
withdrawals.extend(validators_sweep_withdrawals)
|
||||
|
||||
return withdrawals, processed_validators_sweep_count
|
||||
</spec>
|
||||
|
||||
- name: get_expected_withdrawals#electra
|
||||
@@ -2407,80 +2385,26 @@
|
||||
- file: beacon-chain/state/state-native/getters_withdrawal.go
|
||||
search: func (b *BeaconState) ExpectedWithdrawals(
|
||||
spec: |
|
||||
<spec fn="get_expected_withdrawals" fork="electra" hash="060932cd">
|
||||
def get_expected_withdrawals(state: BeaconState) -> Tuple[Sequence[Withdrawal], uint64]:
|
||||
epoch = get_current_epoch(state)
|
||||
<spec fn="get_expected_withdrawals" fork="electra" hash="cfce862b">
|
||||
def get_expected_withdrawals(state: BeaconState) -> Tuple[Sequence[Withdrawal], uint64, uint64]:
|
||||
withdrawal_index = state.next_withdrawal_index
|
||||
validator_index = state.next_withdrawal_validator_index
|
||||
withdrawals: List[Withdrawal] = []
|
||||
processed_partial_withdrawals_count = 0
|
||||
|
||||
# [New in Electra:EIP7251]
|
||||
# Consume pending partial withdrawals
|
||||
for withdrawal in state.pending_partial_withdrawals:
|
||||
if (
|
||||
withdrawal.withdrawable_epoch > epoch
|
||||
or len(withdrawals) == MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP
|
||||
):
|
||||
break
|
||||
# Get partial withdrawals
|
||||
partial_withdrawals, withdrawal_index, processed_partial_withdrawals_count = (
|
||||
get_pending_partial_withdrawals(state, withdrawal_index, withdrawals)
|
||||
)
|
||||
withdrawals.extend(partial_withdrawals)
|
||||
|
||||
validator = state.validators[withdrawal.validator_index]
|
||||
has_sufficient_effective_balance = validator.effective_balance >= MIN_ACTIVATION_BALANCE
|
||||
total_withdrawn = sum(
|
||||
w.amount for w in withdrawals if w.validator_index == withdrawal.validator_index
|
||||
)
|
||||
balance = state.balances[withdrawal.validator_index] - total_withdrawn
|
||||
has_excess_balance = balance > MIN_ACTIVATION_BALANCE
|
||||
if (
|
||||
validator.exit_epoch == FAR_FUTURE_EPOCH
|
||||
and has_sufficient_effective_balance
|
||||
and has_excess_balance
|
||||
):
|
||||
withdrawable_balance = min(balance - MIN_ACTIVATION_BALANCE, withdrawal.amount)
|
||||
withdrawals.append(
|
||||
Withdrawal(
|
||||
index=withdrawal_index,
|
||||
validator_index=withdrawal.validator_index,
|
||||
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
|
||||
amount=withdrawable_balance,
|
||||
)
|
||||
)
|
||||
withdrawal_index += WithdrawalIndex(1)
|
||||
# Get validators sweep withdrawals
|
||||
validators_sweep_withdrawals, withdrawal_index, processed_validators_sweep_count = (
|
||||
get_validators_sweep_withdrawals(state, withdrawal_index, withdrawals)
|
||||
)
|
||||
withdrawals.extend(validators_sweep_withdrawals)
|
||||
|
||||
processed_partial_withdrawals_count += 1
|
||||
|
||||
# Sweep for remaining.
|
||||
bound = min(len(state.validators), MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP)
|
||||
for _ in range(bound):
|
||||
validator = state.validators[validator_index]
|
||||
# [Modified in Electra:EIP7251]
|
||||
total_withdrawn = sum(w.amount for w in withdrawals if w.validator_index == validator_index)
|
||||
balance = state.balances[validator_index] - total_withdrawn
|
||||
if is_fully_withdrawable_validator(validator, balance, epoch):
|
||||
withdrawals.append(
|
||||
Withdrawal(
|
||||
index=withdrawal_index,
|
||||
validator_index=validator_index,
|
||||
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
|
||||
amount=balance,
|
||||
)
|
||||
)
|
||||
withdrawal_index += WithdrawalIndex(1)
|
||||
elif is_partially_withdrawable_validator(validator, balance):
|
||||
withdrawals.append(
|
||||
Withdrawal(
|
||||
index=withdrawal_index,
|
||||
validator_index=validator_index,
|
||||
address=ExecutionAddress(validator.withdrawal_credentials[12:]),
|
||||
# [Modified in Electra:EIP7251]
|
||||
amount=balance - get_max_effective_balance(validator),
|
||||
)
|
||||
)
|
||||
withdrawal_index += WithdrawalIndex(1)
|
||||
if len(withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
|
||||
break
|
||||
validator_index = ValidatorIndex((validator_index + 1) % len(state.validators))
|
||||
return withdrawals, processed_partial_withdrawals_count
|
||||
# [Modified in Electra:EIP7251]
|
||||
return withdrawals, processed_partial_withdrawals_count, processed_validators_sweep_count
|
||||
</spec>
|
||||
|
||||
- name: get_filtered_block_tree
|
||||
@@ -3053,7 +2977,7 @@
|
||||
- name: get_proposer_head
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="get_proposer_head" fork="phase0" hash="15d44290">
|
||||
<spec fn="get_proposer_head" fork="phase0" hash="99e8fc05">
|
||||
def get_proposer_head(store: Store, head_root: Root, slot: Slot) -> Root:
|
||||
head_block = store.blocks[head_root]
|
||||
parent_root = head_block.parent_root
|
||||
@@ -3084,7 +3008,10 @@
|
||||
head_weak = is_head_weak(store, head_root)
|
||||
|
||||
# Check that the missing votes are assigned to the parent and not being hoarded.
|
||||
parent_strong = is_parent_strong(store, parent_root)
|
||||
parent_strong = is_parent_strong(store, head_root)
|
||||
|
||||
# Re-org more aggressively if there is a proposer equivocation in the previous slot.
|
||||
proposer_equivocation = is_proposer_equivocation(store, head_root)
|
||||
|
||||
if all(
|
||||
[
|
||||
@@ -3100,6 +3027,8 @@
|
||||
):
|
||||
# We can re-org the current head by building upon its parent block.
|
||||
return parent_root
|
||||
elif all([head_weak, current_time_ok, proposer_equivocation]):
|
||||
return parent_root
|
||||
else:
|
||||
return head_root
|
||||
</spec>
|
||||
@@ -3117,11 +3046,10 @@
|
||||
- name: get_proposer_score
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="get_proposer_score" fork="phase0" hash="164b8de0">
|
||||
<spec fn="get_proposer_score" fork="phase0" hash="2c8d8a27">
|
||||
def get_proposer_score(store: Store) -> Gwei:
|
||||
justified_checkpoint_state = store.checkpoint_states[store.justified_checkpoint]
|
||||
committee_weight = get_total_active_balance(justified_checkpoint_state) // SLOTS_PER_EPOCH
|
||||
return (committee_weight * PROPOSER_SCORE_BOOST) // 100
|
||||
return compute_proposer_score(justified_checkpoint_state)
|
||||
</spec>
|
||||
|
||||
- name: get_randao_mix
|
||||
@@ -3509,26 +3437,10 @@
|
||||
- file: beacon-chain/forkchoice/doubly-linked-tree/forkchoice.go
|
||||
search: func (f *ForkChoice) Weight(
|
||||
spec: |
|
||||
<spec fn="get_weight" fork="phase0" hash="f2e4e8ef">
|
||||
<spec fn="get_weight" fork="phase0" hash="b18bf25c">
|
||||
def get_weight(store: Store, root: Root) -> Gwei:
|
||||
state = store.checkpoint_states[store.justified_checkpoint]
|
||||
unslashed_and_active_indices = [
|
||||
i
|
||||
for i in get_active_validator_indices(state, get_current_epoch(state))
|
||||
if not state.validators[i].slashed
|
||||
]
|
||||
attestation_score = Gwei(
|
||||
sum(
|
||||
state.validators[i].effective_balance
|
||||
for i in unslashed_and_active_indices
|
||||
if (
|
||||
i in store.latest_messages
|
||||
and i not in store.equivocating_indices
|
||||
and get_ancestor(store, store.latest_messages[i].root, store.blocks[root].slot)
|
||||
== root
|
||||
)
|
||||
)
|
||||
)
|
||||
attestation_score = get_attestation_score(store, root, state)
|
||||
if store.proposer_boost_root == Root():
|
||||
# Return only attestation score if ``proposer_boost_root`` is not set
|
||||
return attestation_score
|
||||
@@ -3615,7 +3527,7 @@
|
||||
- file: beacon-chain/core/transition/state.go
|
||||
search: func GenesisBeaconState(
|
||||
spec: |
|
||||
<spec fn="initialize_beacon_state_from_eth1" fork="phase0" hash="c69537d6">
|
||||
<spec fn="initialize_beacon_state_from_eth1" fork="phase0" hash="d3a0ddd4">
|
||||
def initialize_beacon_state_from_eth1(
|
||||
eth1_block_hash: Hash32, eth1_timestamp: uint64, deposits: Sequence[Deposit]
|
||||
) -> BeaconState:
|
||||
@@ -3627,7 +3539,7 @@
|
||||
state = BeaconState(
|
||||
genesis_time=eth1_timestamp + GENESIS_DELAY,
|
||||
fork=fork,
|
||||
eth1_data=Eth1Data(block_hash=eth1_block_hash, deposit_count=uint64(len(deposits))),
|
||||
eth1_data=Eth1Data(deposit_count=uint64(len(deposits)), block_hash=eth1_block_hash),
|
||||
latest_block_header=BeaconBlockHeader(body_root=hash_tree_root(BeaconBlockBody())),
|
||||
randao_mixes=[eth1_block_hash]
|
||||
* EPOCHS_PER_HISTORICAL_VECTOR, # Seed RANDAO with Eth1 entropy
|
||||
@@ -4162,10 +4074,11 @@
|
||||
- name: is_parent_strong
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="is_parent_strong" fork="phase0" hash="e06641a8">
|
||||
def is_parent_strong(store: Store, parent_root: Root) -> bool:
|
||||
<spec fn="is_parent_strong" fork="phase0" hash="02a3fd0b">
|
||||
def is_parent_strong(store: Store, root: Root) -> bool:
|
||||
justified_state = store.checkpoint_states[store.justified_checkpoint]
|
||||
parent_threshold = calculate_committee_fraction(justified_state, REORG_PARENT_WEIGHT_THRESHOLD)
|
||||
parent_root = store.blocks[root].parent_root
|
||||
parent_weight = get_weight(store, parent_root)
|
||||
return parent_weight > parent_threshold
|
||||
</spec>
|
||||
@@ -4683,7 +4596,7 @@
|
||||
- file: beacon-chain/blockchain/receive_block.go
|
||||
search: func (s *Service) ReceiveBlock(
|
||||
spec: |
|
||||
<spec fn="on_block" fork="phase0" hash="aff24b59">
|
||||
<spec fn="on_block" fork="phase0" hash="5f45947a">
|
||||
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
|
||||
block = signed_block.message
|
||||
# Parent block must be known
|
||||
@@ -4713,19 +4626,8 @@
|
||||
# Add new state for this block to the store
|
||||
store.block_states[block_root] = state
|
||||
|
||||
# Add block timeliness to the store
|
||||
seconds_since_genesis = store.time - store.genesis_time
|
||||
time_into_slot_ms = seconds_to_milliseconds(seconds_since_genesis) % SLOT_DURATION_MS
|
||||
epoch = get_current_store_epoch(store)
|
||||
attestation_threshold_ms = get_attestation_due_ms(epoch)
|
||||
is_before_attesting_interval = time_into_slot_ms < attestation_threshold_ms
|
||||
is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
|
||||
store.block_timeliness[hash_tree_root(block)] = is_timely
|
||||
|
||||
# Add proposer score boost if the block is timely and not conflicting with an existing block
|
||||
is_first_block = store.proposer_boost_root == Root()
|
||||
if is_timely and is_first_block:
|
||||
store.proposer_boost_root = hash_tree_root(block)
|
||||
record_block_timeliness(store, block_root)
|
||||
update_proposer_boost_root(store, block_root)
|
||||
|
||||
# Update checkpoints in store if necessary
|
||||
update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)
|
||||
@@ -4739,7 +4641,7 @@
|
||||
- file: beacon-chain/blockchain/receive_block.go
|
||||
search: func (s *Service) ReceiveBlock(
|
||||
spec: |
|
||||
<spec fn="on_block" fork="bellatrix" hash="a3193d92">
|
||||
<spec fn="on_block" fork="bellatrix" hash="e81d01c3">
|
||||
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
|
||||
"""
|
||||
Run ``on_block`` upon receiving a new block.
|
||||
@@ -4780,19 +4682,8 @@
|
||||
# Add new state for this block to the store
|
||||
store.block_states[block_root] = state
|
||||
|
||||
# Add block timeliness to the store
|
||||
seconds_since_genesis = store.time - store.genesis_time
|
||||
time_into_slot_ms = seconds_to_milliseconds(seconds_since_genesis) % SLOT_DURATION_MS
|
||||
epoch = get_current_store_epoch(store)
|
||||
attestation_threshold_ms = get_attestation_due_ms(epoch)
|
||||
is_before_attesting_interval = time_into_slot_ms < attestation_threshold_ms
|
||||
is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
|
||||
store.block_timeliness[hash_tree_root(block)] = is_timely
|
||||
|
||||
# Add proposer score boost if the block is timely and not conflicting with an existing block
|
||||
is_first_block = store.proposer_boost_root == Root()
|
||||
if is_timely and is_first_block:
|
||||
store.proposer_boost_root = hash_tree_root(block)
|
||||
record_block_timeliness(store, block_root)
|
||||
update_proposer_boost_root(store, block_root)
|
||||
|
||||
# Update checkpoints in store if necessary
|
||||
update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)
|
||||
@@ -4806,7 +4697,7 @@
|
||||
- file: beacon-chain/blockchain/receive_block.go
|
||||
search: func (s *Service) ReceiveBlock(
|
||||
spec: |
|
||||
<spec fn="on_block" fork="capella" hash="560056ad">
|
||||
<spec fn="on_block" fork="capella" hash="7450531c">
|
||||
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
|
||||
"""
|
||||
Run ``on_block`` upon receiving a new block.
|
||||
@@ -4839,19 +4730,8 @@
|
||||
# Add new state for this block to the store
|
||||
store.block_states[block_root] = state
|
||||
|
||||
# Add block timeliness to the store
|
||||
seconds_since_genesis = store.time - store.genesis_time
|
||||
time_into_slot_ms = seconds_to_milliseconds(seconds_since_genesis) % SLOT_DURATION_MS
|
||||
epoch = get_current_store_epoch(store)
|
||||
attestation_threshold_ms = get_attestation_due_ms(epoch)
|
||||
is_before_attesting_interval = time_into_slot_ms < attestation_threshold_ms
|
||||
is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
|
||||
store.block_timeliness[hash_tree_root(block)] = is_timely
|
||||
|
||||
# Add proposer score boost if the block is timely and not conflicting with an existing block
|
||||
is_first_block = store.proposer_boost_root == Root()
|
||||
if is_timely and is_first_block:
|
||||
store.proposer_boost_root = hash_tree_root(block)
|
||||
record_block_timeliness(store, block_root)
|
||||
update_proposer_boost_root(store, block_root)
|
||||
|
||||
# Update checkpoints in store if necessary
|
||||
update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)
|
||||
@@ -4865,7 +4745,7 @@
|
||||
- file: beacon-chain/blockchain/receive_block.go
|
||||
search: func (s *Service) ReceiveBlock(
|
||||
spec: |
|
||||
<spec fn="on_block" fork="deneb" hash="9565acee">
|
||||
<spec fn="on_block" fork="deneb" hash="bbad196e">
|
||||
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
|
||||
"""
|
||||
Run ``on_block`` upon receiving a new block.
|
||||
@@ -4903,19 +4783,8 @@
|
||||
# Add new state for this block to the store
|
||||
store.block_states[block_root] = state
|
||||
|
||||
# Add block timeliness to the store
|
||||
seconds_since_genesis = store.time - store.genesis_time
|
||||
time_into_slot_ms = seconds_to_milliseconds(seconds_since_genesis) % SLOT_DURATION_MS
|
||||
epoch = get_current_store_epoch(store)
|
||||
attestation_threshold_ms = get_attestation_due_ms(epoch)
|
||||
is_before_attesting_interval = time_into_slot_ms < attestation_threshold_ms
|
||||
is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
|
||||
store.block_timeliness[hash_tree_root(block)] = is_timely
|
||||
|
||||
# Add proposer score boost if the block is timely and not conflicting with an existing block
|
||||
is_first_block = store.proposer_boost_root == Root()
|
||||
if is_timely and is_first_block:
|
||||
store.proposer_boost_root = hash_tree_root(block)
|
||||
record_block_timeliness(store, block_root)
|
||||
update_proposer_boost_root(store, block_root)
|
||||
|
||||
# Update checkpoints in store if necessary
|
||||
update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)
|
||||
@@ -4929,7 +4798,7 @@
|
||||
- file: beacon-chain/blockchain/receive_block.go
|
||||
search: func (s *Service) ReceiveBlock(
|
||||
spec: |
|
||||
<spec fn="on_block" fork="fulu" hash="4f955de9">
|
||||
<spec fn="on_block" fork="fulu" hash="b8f279b9">
|
||||
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
|
||||
"""
|
||||
Run ``on_block`` upon receiving a new block.
|
||||
@@ -4967,19 +4836,8 @@
|
||||
# Add new state for this block to the store
|
||||
store.block_states[block_root] = state
|
||||
|
||||
# Add block timeliness to the store
|
||||
seconds_since_genesis = store.time - store.genesis_time
|
||||
time_into_slot_ms = seconds_to_milliseconds(seconds_since_genesis) % SLOT_DURATION_MS
|
||||
epoch = get_current_store_epoch(store)
|
||||
attestation_threshold_ms = get_attestation_due_ms(epoch)
|
||||
is_before_attesting_interval = time_into_slot_ms < attestation_threshold_ms
|
||||
is_timely = get_current_slot(store) == block.slot and is_before_attesting_interval
|
||||
store.block_timeliness[hash_tree_root(block)] = is_timely
|
||||
|
||||
# Add proposer score boost if the block is timely and not conflicting with an existing block
|
||||
is_first_block = store.proposer_boost_root == Root()
|
||||
if is_timely and is_first_block:
|
||||
store.proposer_boost_root = hash_tree_root(block)
|
||||
record_block_timeliness(store, block_root)
|
||||
update_proposer_boost_root(store, block_root)
|
||||
|
||||
# Update checkpoints in store if necessary
|
||||
update_checkpoints(store, state.current_justified_checkpoint, state.finalized_checkpoint)
|
||||
@@ -5074,7 +4932,7 @@
|
||||
- name: prepare_execution_payload#capella
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="prepare_execution_payload" fork="capella" hash="28db1590">
|
||||
<spec fn="prepare_execution_payload" fork="capella" hash="c258893e">
|
||||
def prepare_execution_payload(
|
||||
state: BeaconState,
|
||||
safe_block_hash: Hash32,
|
||||
@@ -5087,12 +4945,15 @@
|
||||
parent_hash = state.latest_execution_payload_header.block_hash
|
||||
|
||||
# Set the forkchoice head and initiate the payload build process
|
||||
# [New in Capella]
|
||||
withdrawals, _ = get_expected_withdrawals(state)
|
||||
|
||||
payload_attributes = PayloadAttributes(
|
||||
timestamp=compute_time_at_slot(state, state.slot),
|
||||
prev_randao=get_randao_mix(state, get_current_epoch(state)),
|
||||
suggested_fee_recipient=suggested_fee_recipient,
|
||||
# [New in Capella]
|
||||
withdrawals=get_expected_withdrawals(state),
|
||||
withdrawals=withdrawals,
|
||||
)
|
||||
return execution_engine.notify_forkchoice_updated(
|
||||
head_block_hash=parent_hash,
|
||||
@@ -5105,7 +4966,7 @@
|
||||
- name: prepare_execution_payload#deneb
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="prepare_execution_payload" fork="deneb" hash="f3387ec6">
|
||||
<spec fn="prepare_execution_payload" fork="deneb" hash="59f61f3a">
|
||||
def prepare_execution_payload(
|
||||
state: BeaconState,
|
||||
safe_block_hash: Hash32,
|
||||
@@ -5117,11 +4978,13 @@
|
||||
parent_hash = state.latest_execution_payload_header.block_hash
|
||||
|
||||
# Set the forkchoice head and initiate the payload build process
|
||||
withdrawals, _ = get_expected_withdrawals(state)
|
||||
|
||||
payload_attributes = PayloadAttributes(
|
||||
timestamp=compute_time_at_slot(state, state.slot),
|
||||
prev_randao=get_randao_mix(state, get_current_epoch(state)),
|
||||
suggested_fee_recipient=suggested_fee_recipient,
|
||||
withdrawals=get_expected_withdrawals(state),
|
||||
withdrawals=withdrawals,
|
||||
# [New in Deneb:EIP4788]
|
||||
parent_beacon_block_root=hash_tree_root(state.latest_block_header),
|
||||
)
|
||||
@@ -5136,7 +4999,7 @@
|
||||
- name: prepare_execution_payload#electra
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="prepare_execution_payload" fork="electra" hash="567b3739">
|
||||
<spec fn="prepare_execution_payload" fork="electra" hash="5414b883">
|
||||
def prepare_execution_payload(
|
||||
state: BeaconState,
|
||||
safe_block_hash: Hash32,
|
||||
@@ -5149,7 +5012,7 @@
|
||||
|
||||
# [Modified in EIP7251]
|
||||
# Set the forkchoice head and initiate the payload build process
|
||||
withdrawals, _ = get_expected_withdrawals(state)
|
||||
withdrawals, _, _ = get_expected_withdrawals(state)
|
||||
|
||||
payload_attributes = PayloadAttributes(
|
||||
timestamp=compute_time_at_slot(state, state.slot),
|
||||
@@ -5171,7 +5034,7 @@
|
||||
- file: beacon-chain/core/blocks/attestation.go
|
||||
search: func ProcessAttestationNoVerifySignature(
|
||||
spec: |
|
||||
<spec fn="process_attestation" fork="phase0" hash="6ac78cd0">
|
||||
<spec fn="process_attestation" fork="phase0" hash="d8e86aa9">
|
||||
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
|
||||
data = attestation.data
|
||||
assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
|
||||
@@ -5183,8 +5046,8 @@
|
||||
assert len(attestation.aggregation_bits) == len(committee)
|
||||
|
||||
pending_attestation = PendingAttestation(
|
||||
data=data,
|
||||
aggregation_bits=attestation.aggregation_bits,
|
||||
data=data,
|
||||
inclusion_delay=state.slot - data.slot,
|
||||
proposer_index=get_beacon_proposer_index(state),
|
||||
)
|
||||
@@ -7208,31 +7071,18 @@
|
||||
- file: beacon-chain/core/blocks/withdrawals.go
|
||||
search: func ProcessWithdrawals(
|
||||
spec: |
|
||||
<spec fn="process_withdrawals" fork="capella" hash="ed6a9c5a">
|
||||
<spec fn="process_withdrawals" fork="capella" hash="901f9fc4">
|
||||
def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
|
||||
expected_withdrawals = get_expected_withdrawals(state)
|
||||
assert payload.withdrawals == expected_withdrawals
|
||||
# Get expected withdrawals
|
||||
withdrawals, processed_validators_sweep_count = get_expected_withdrawals(state)
|
||||
assert payload.withdrawals == withdrawals
|
||||
|
||||
for withdrawal in expected_withdrawals:
|
||||
decrease_balance(state, withdrawal.validator_index, withdrawal.amount)
|
||||
# Apply expected withdrawals
|
||||
apply_withdrawals(state, withdrawals)
|
||||
|
||||
# Update the next withdrawal index if this block contained withdrawals
|
||||
if len(expected_withdrawals) != 0:
|
||||
latest_withdrawal = expected_withdrawals[-1]
|
||||
state.next_withdrawal_index = WithdrawalIndex(latest_withdrawal.index + 1)
|
||||
|
||||
# Update the next validator index to start the next withdrawal sweep
|
||||
if len(expected_withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
|
||||
# Next sweep starts after the latest withdrawal's validator index
|
||||
next_validator_index = ValidatorIndex(
|
||||
(expected_withdrawals[-1].validator_index + 1) % len(state.validators)
|
||||
)
|
||||
state.next_withdrawal_validator_index = next_validator_index
|
||||
else:
|
||||
# Advance sweep by the max length of the sweep if there was not a full set of withdrawals
|
||||
next_index = state.next_withdrawal_validator_index + MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP
|
||||
next_validator_index = ValidatorIndex(next_index % len(state.validators))
|
||||
state.next_withdrawal_validator_index = next_validator_index
|
||||
# Update withdrawals fields in the state
|
||||
update_next_withdrawal_index(state, withdrawals)
|
||||
update_next_withdrawal_validator_index(state, processed_validators_sweep_count)
|
||||
</spec>
|
||||
|
||||
- name: process_withdrawals#electra
|
||||
@@ -7240,39 +7090,23 @@
|
||||
- file: beacon-chain/core/blocks/withdrawals.go
|
||||
search: func ProcessWithdrawals(
|
||||
spec: |
|
||||
<spec fn="process_withdrawals" fork="electra" hash="dd99a91f">
|
||||
<spec fn="process_withdrawals" fork="electra" hash="67870972">
|
||||
def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
|
||||
# [Modified in Electra:EIP7251]
|
||||
expected_withdrawals, processed_partial_withdrawals_count = get_expected_withdrawals(state)
|
||||
# Get expected withdrawals
|
||||
withdrawals, processed_partial_withdrawals_count, processed_validators_sweep_count = (
|
||||
get_expected_withdrawals(state)
|
||||
)
|
||||
assert payload.withdrawals == withdrawals
|
||||
|
||||
assert payload.withdrawals == expected_withdrawals
|
||||
|
||||
for withdrawal in expected_withdrawals:
|
||||
decrease_balance(state, withdrawal.validator_index, withdrawal.amount)
|
||||
# Apply expected withdrawals
|
||||
apply_withdrawals(state, withdrawals)
|
||||
|
||||
# Update withdrawals fields in the state
|
||||
update_next_withdrawal_index(state, withdrawals)
|
||||
# [New in Electra:EIP7251]
|
||||
# Update pending partial withdrawals
|
||||
state.pending_partial_withdrawals = state.pending_partial_withdrawals[
|
||||
processed_partial_withdrawals_count:
|
||||
]
|
||||
|
||||
# Update the next withdrawal index if this block contained withdrawals
|
||||
if len(expected_withdrawals) != 0:
|
||||
latest_withdrawal = expected_withdrawals[-1]
|
||||
state.next_withdrawal_index = WithdrawalIndex(latest_withdrawal.index + 1)
|
||||
|
||||
# Update the next validator index to start the next withdrawal sweep
|
||||
if len(expected_withdrawals) == MAX_WITHDRAWALS_PER_PAYLOAD:
|
||||
# Next sweep starts after the latest withdrawal's validator index
|
||||
next_validator_index = ValidatorIndex(
|
||||
(expected_withdrawals[-1].validator_index + 1) % len(state.validators)
|
||||
)
|
||||
state.next_withdrawal_validator_index = next_validator_index
|
||||
else:
|
||||
# Advance sweep by the max length of the sweep if there was not a full set of withdrawals
|
||||
next_index = state.next_withdrawal_validator_index + MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP
|
||||
next_validator_index = ValidatorIndex(next_index % len(state.validators))
|
||||
state.next_withdrawal_validator_index = next_validator_index
|
||||
update_pending_partial_withdrawals(state, processed_partial_withdrawals_count)
|
||||
update_next_withdrawal_validator_index(state, processed_validators_sweep_count)
|
||||
</spec>
|
||||
|
||||
- name: queue_excess_active_balance
|
||||
@@ -7303,7 +7137,7 @@
|
||||
- name: recover_matrix
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="recover_matrix" fork="fulu" hash="9b01f005">
|
||||
<spec fn="recover_matrix" fork="fulu" hash="3db21f50">
|
||||
def recover_matrix(
|
||||
partial_matrix: Sequence[MatrixEntry], blob_count: uint64
|
||||
) -> Sequence[MatrixEntry]:
|
||||
@@ -7323,8 +7157,8 @@
|
||||
MatrixEntry(
|
||||
cell=cell,
|
||||
kzg_proof=proof,
|
||||
row_index=blob_index,
|
||||
column_index=cell_index,
|
||||
row_index=blob_index,
|
||||
)
|
||||
)
|
||||
return matrix
|
||||
@@ -7373,7 +7207,7 @@
|
||||
- file: beacon-chain/forkchoice/ro.go
|
||||
search: func (ro *ROForkChoice) ShouldOverrideFCU(
|
||||
spec: |
|
||||
<spec fn="should_override_forkchoice_update" fork="bellatrix" hash="9a8043af">
|
||||
<spec fn="should_override_forkchoice_update" fork="bellatrix" hash="c055d92a">
|
||||
def should_override_forkchoice_update(store: Store, head_root: Root) -> bool:
|
||||
head_block = store.blocks[head_root]
|
||||
parent_root = head_block.parent_root
|
||||
@@ -7414,7 +7248,7 @@
|
||||
# `store.time` early, or by counting queued attestations during the head block's slot.
|
||||
if current_slot > head_block.slot:
|
||||
head_weak = is_head_weak(store, head_root)
|
||||
parent_strong = is_parent_strong(store, parent_root)
|
||||
parent_strong = is_parent_strong(store, head_root)
|
||||
else:
|
||||
head_weak = True
|
||||
parent_strong = True
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
"runtime"
"sort"
"strings"
"time"

"github.com/OffchainLabs/prysm/v7/encoding/ssz/equality"
"github.com/d4l3k/messagediff"
@@ -138,12 +139,21 @@ func StringContains(loggerFn assertionLoggerFn, expected, actual string, flag bo

// NoError asserts that error is nil.
func NoError(loggerFn assertionLoggerFn, err error, msg ...any) {
// reflect.ValueOf is needed for nil instances of custom types implementing Error
if err != nil && !reflect.ValueOf(err).IsNil() {
errMsg := parseMsg("Unexpected error", msg...)
_, file, line, _ := runtime.Caller(2)
loggerFn("%s:%d %s: %v", filepath.Base(file), line, errMsg, err)
if err == nil {
return
}
// reflect.ValueOf is needed for nil instances of custom types implementing Error.
// Only check IsNil for types that support it to avoid panics on struct types.
v := reflect.ValueOf(err)
switch v.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice, reflect.UnsafePointer:
if v.IsNil() {
return
}
}
errMsg := parseMsg("Unexpected error", msg...)
_, file, line, _ := runtime.Caller(2)
loggerFn("%s:%d %s: %v", filepath.Base(file), line, errMsg, err)
}

// ErrorIs uses Errors.Is to recursively unwrap err looking for target in the chain.
@@ -341,3 +351,18 @@ func (tb *TBMock) Errorf(format string, args ...any) {
func (tb *TBMock) Fatalf(format string, args ...any) {
tb.FatalfMsg = fmt.Sprintf(format, args...)
}

// Eventually asserts that given condition will be met within waitFor time,
// periodically checking target function each tick.
func Eventually(loggerFn assertionLoggerFn, condition func() bool, waitFor, tick time.Duration, msg ...any) {
deadline := time.Now().Add(waitFor)
for time.Now().Before(deadline) {
if condition() {
return
}
time.Sleep(tick)
}
errMsg := parseMsg("Condition never satisfied", msg...)
_, file, line, _ := runtime.Caller(2)
loggerFn("%s:%d %s (waited %v)", filepath.Base(file), line, errMsg, waitFor)
}

@@ -1,6 +1,8 @@
package require

import (
"time"

"github.com/OffchainLabs/prysm/v7/testing/assertions"
"github.com/sirupsen/logrus/hooks/test"
)
@@ -87,3 +89,9 @@ func ErrorIs(tb assertions.AssertionTestingTB, err, target error, msg ...any) {
func StringContains(tb assertions.AssertionTestingTB, expected, actual string, msg ...any) {
assertions.StringContains(tb.Fatalf, expected, actual, true, msg)
}

// Eventually asserts that given condition will be met within waitFor time,
// periodically checking target function each tick.
func Eventually(tb assertions.AssertionTestingTB, condition func() bool, waitFor, tick time.Duration, msg ...any) {
assertions.Eventually(tb.Fatalf, condition, waitFor, tick, msg...)
}

@@ -62,8 +62,17 @@ func runTest(t *testing.T, config string, fork int, basePath string) { // nolint
if len(testFolders) == 0 {
t.Fatalf("No test folders found for %s/%s/%s", config, version.String(fork), folderPath)
}

var skipTests = map[string]bool{
// Skipping because of #4807 backporting issues
"voting_source_beyond_two_epoch": true,
"justified_update_always_if_better": true,
"justified_update_not_realized_finality": true,
}
for _, folder := range testFolders {
if skipTests[folder.Name()] {
t.Logf("Skipping test %s due to known issues", folder.Name())
continue
}
t.Run(folder.Name(), func(t *testing.T) {
helpers.ClearCache()
preStepsFile, err := util.BazelFileBytes(testsFolderPath, folder.Name(), "steps.yaml")

@@ -6,6 +6,7 @@ import (
"testing"

state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
// enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -56,6 +57,8 @@ func unmarshalledSSZ(t *testing.T, serializedBytes []byte, folderName string) (a
obj = &ethpb.BeaconBlockBodyGloas{}
case "BeaconState":
obj = &ethpb.BeaconStateGloas{}
case "Builder":
obj = &ethpb.Builder{}
case "BuilderPendingPayment":
obj = &ethpb.BuilderPendingPayment{}
case "BuilderPendingWithdrawal":
@@ -70,6 +73,8 @@ func unmarshalledSSZ(t *testing.T, serializedBytes []byte, folderName string) (a
t.Skip("Not a consensus type")
case "DataColumnSidecar":
obj = &ethpb.DataColumnSidecarGloas{}
case "SignedProposerPreferences", "ProposerPreferences":
t.Skip("p2p-only type; not part of the consensus state transition")

// Standard types that also exist in gloas
case "ExecutionPayload":

12 testing/validator-mock/node_client_mock.go (generated)
@@ -56,18 +56,18 @@ func (mr *MockNodeClientMockRecorder) Genesis(arg0, arg1 any) *gomock.Call {
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Genesis", reflect.TypeOf((*MockNodeClient)(nil).Genesis), arg0, arg1)
}

// IsHealthy mocks base method.
func (m *MockNodeClient) IsHealthy(arg0 context.Context) bool {
// IsReady mocks base method.
func (m *MockNodeClient) IsReady(arg0 context.Context) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret := m.ctrl.Call(m, "IsReady", arg0)
ret0, _ := ret[0].(bool)
return ret0
}

// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockNodeClientMockRecorder) IsHealthy(arg0 any) *gomock.Call {
// IsReady indicates an expected call of IsReady.
func (mr *MockNodeClientMockRecorder) IsReady(arg0 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockNodeClient)(nil).IsHealthy), arg0)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsReady", reflect.TypeOf((*MockNodeClient)(nil).IsReady), arg0)
}

// Peers mocks base method.