Mirror of https://github.com/OffchainLabs/prysm.git
Synced 2026-01-13 07:17:59 -05:00

Compare commits
28 commits between dependent_... and graffiti-i...
| Author | SHA1 | Date |
|---|---|---|
| | e0f925a673 | |
| | fcc1795497 | |
| | 433054d82a | |
| | 0be2dad39b | |
| | 43a66e25ef | |
| | e1a292a6a3 | |
| | e408dad694 | |
| | 60f8308145 | |
| | d7983c1558 | |
| | 6d49931a6d | |
| | fda3fb272a | |
| | 17a1b9668d | |
| | 6373d6017d | |
| | fd25e8a14d | |
| | 3a3f05987e | |
| | 6cb3677541 | |
| | 78235a4d91 | |
| | 57f011c4ab | |
| | cba493062c | |
| | b5bd8f6b12 | |
| | 2d624fb0e1 | |
| | 64d7d546ef | |
| | 2de004c3d5 | |
| | 23b5e2e174 | |
| | bfbd4d176e | |
| | 78995a2d8f | |
| | 51db2c2129 | |
| | 000c367a53 | |
.gitignore (vendored): 3 lines changed

@@ -44,6 +44,3 @@ tmp

# spectest coverage reports
report.txt

# execution client data
execution/
CHANGELOG.md: 60 lines changed

@@ -4,66 +4,6 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v7.1.2](https://github.com/prysmaticlabs/prysm/compare/v7.1.1...v7.1.2) - 2026-01-07

Happy new year! This patch release is very small. The main improvement is better management of pending attestation aggregation via [PR 16153](https://github.com/OffchainLabs/prysm/pull/16153).

### Added

- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16169)

### Changed

- The /eth/v2/beacon/pool/attestations and /eth/v1/beacon/pool/sync_committees endpoints now return a 503 error while the node is still syncing, and the REST API now broadcasts immediately, matching the gRPC behavior. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16152)
- `validateDataColumn`: Remove error logs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16157)
- Pending aggregates: When multiple aggregated attestations in the pending queue differ only by aggregator index, only process one of them. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16153)

### Fixed

- Fix the missing fork version object mapping for Fulu in light client p2p. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16151)
- Do not process slots and copy states for next epoch proposers after Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16168)

## [v7.1.1](https://github.com/prysmaticlabs/prysm/compare/v7.1.0...v7.1.1) - 2025-12-18

Release highlights:

- Fixed a potential deadlock scenario in data column batch verification
- Improved processing and metrics for cells and proofs

We are aware of [an issue](https://github.com/OffchainLabs/prysm/issues/16160) where Prysm struggles to sync from an out-of-sync state. We will have another release before the end of the year to address this issue.

Our postmortem document for the December 4th mainnet issue has been published on our [documentation site](https://prysm.offchainlabs.com/docs/misc/mainnet-postmortems/).

### Added

- Track the dependent root of the latest finalized checkpoint in forkchoice. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16103)
- Proposal design document to implement graffiti. Graffiti is currently empty by default; the idea is to have it take the form GE168dPR63af. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15983)
- Add support for detecting and logging per-address reachability via libp2p AutoNAT v2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16100)
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16134)
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)

### Changed

- Optimise migratetocold by removing the brute-force for loop. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16101)
- The e2e sync committee evaluator now skips the first slot after startup (the fork epoch is already skipped for these checks). The skip only applies at startup, because Altair is active from epoch 0 in e2e runs and validators need time to warm up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16145)
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16115)

### Removed

- Removed an unnecessary copy from `Eth1DataHasEnoughSupport`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16118)

### Fixed

- Incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16084)
- Fixed a possible race when validating two attestations at the same time. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16105)
- Fix missing return after version header check in SubmitAttesterSlashingsV2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16126)
- Fix deadlock in data column gossip KZG batch verification when a caller times out, preventing result delivery. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16141)
- Fixed a replay state issue in the REST API caused by the attester and sync committee duties endpoints. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16136)
- Do not error when the committee has been computed correctly but updating the cache failed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16142)
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16144)

## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10

This release includes several key features and fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
WORKSPACE: 10 lines changed

@@ -273,16 +273,16 @@ filegroup(
    url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)

consensus_spec_version = "v1.7.0-alpha.1"
consensus_spec_version = "v1.6.0"

load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")

consensus_spec_tests(
    name = "consensus_spec_tests",
    flavors = {
        "general": "sha256-j5R3jA7Oo4OSDMTvpMuD+8RomaCByeFSwtfkq6fL0Zg=",
        "minimal": "sha256-tdTqByoyswOS4r6OxFmo70y2BP7w1TgEok+gf4cbxB0=",
        "mainnet": "sha256-5gB4dt6SnSDKzdBc06VedId3NkgvSYyv9n9FRxWKwYI=",
        "general": "sha256-54hTaUNF9nLg+hRr3oHoq0yjZpW3MNiiUUuCQu6Rajk=",
        "minimal": "sha256-1JHIGg3gVMjvcGYRHR5cwdDgOvX47oR/MWp6gyAeZfA=",
        "mainnet": "sha256-292h3W2Ffts0YExgDTyxYe9Os7R0bZIXuAaMO8P6kl4=",
    },
    version = consensus_spec_version,
)

@@ -298,7 +298,7 @@ filegroup(
    visibility = ["//visibility:public"],
)
""",
    integrity = "sha256-J+43DrK1pF658kTXTwMS6zGf4KDjvas++m8w2a8swpg=",
    integrity = "sha256-VzBgrEokvYSMIIXVnSA5XS9I3m9oxpvToQGxC1N5lzw=",
    strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
    url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -17,7 +17,6 @@ import (
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
|
||||
ethpbv1 "github.com/OffchainLabs/prysm/v7/proto/eth/v1"
|
||||
@@ -131,10 +130,12 @@ func TestService_ReceiveBlock(t *testing.T) {
|
||||
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
|
||||
},
|
||||
check: func(t *testing.T, s *Service) {
|
||||
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
|
||||
require.Eventually(t, func() bool {
|
||||
return len(notifier.ReceivedEvents()) >= 1
|
||||
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
|
||||
// Hacky sleep, should use a better way to be able to resolve the race
|
||||
// between event being sent out and processed.
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
|
||||
t.Errorf("Received %d state notifications, expected at least 1", recvd)
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
@@ -221,10 +222,10 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
|
||||
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
|
||||
})
|
||||
wg.Wait()
|
||||
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
|
||||
require.Eventually(t, func() bool {
|
||||
return len(notifier.ReceivedEvents()) >= 1
|
||||
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
|
||||
t.Errorf("Received %d state notifications, expected at least 1", recvd)
|
||||
}
|
||||
// Verify fork choice has processed the block. (Genesis block and the new block)
|
||||
assert.Equal(t, 2, s.cfg.ForkChoiceStore.NodeCount())
|
||||
}
|
||||
@@ -264,10 +265,10 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
|
||||
block: genFullBlock(t, util.DefaultBlockGenConfig(), 1 /*slot*/),
|
||||
},
|
||||
check: func(t *testing.T, s *Service) {
|
||||
notifier := s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier)
|
||||
require.Eventually(t, func() bool {
|
||||
return len(notifier.ReceivedEvents()) >= 1
|
||||
}, 2*time.Second, 10*time.Millisecond, "Expected at least 1 state notification")
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
if recvd := len(s.cfg.StateNotifier.(*blockchainTesting.MockStateNotifier).ReceivedEvents()); recvd < 1 {
|
||||
t.Errorf("Received %d state notifications, expected at least 1", recvd)
|
||||
}
|
||||
},
|
||||
},
|
||||
}
|
||||
@@ -511,9 +512,8 @@ func Test_executePostFinalizationTasks(t *testing.T) {
|
||||
s.cfg.StateNotifier = notifier
|
||||
s.executePostFinalizationTasks(s.ctx, headState)
|
||||
|
||||
require.Eventually(t, func() bool {
|
||||
return len(notifier.ReceivedEvents()) == 1
|
||||
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
|
||||
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
|
||||
require.Equal(t, 1, len(notifier.ReceivedEvents()))
|
||||
e := notifier.ReceivedEvents()[0]
|
||||
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
|
||||
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
|
||||
@@ -552,9 +552,8 @@ func Test_executePostFinalizationTasks(t *testing.T) {
|
||||
s.cfg.StateNotifier = notifier
|
||||
s.executePostFinalizationTasks(s.ctx, headState)
|
||||
|
||||
require.Eventually(t, func() bool {
|
||||
return len(notifier.ReceivedEvents()) == 1
|
||||
}, 5*time.Second, 50*time.Millisecond, "Expected exactly 1 state notification")
|
||||
time.Sleep(1 * time.Second) // sleep for a second because event is in a separate go routine
|
||||
require.Equal(t, 1, len(notifier.ReceivedEvents()))
|
||||
e := notifier.ReceivedEvents()[0]
|
||||
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
|
||||
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
|
||||
@@ -597,13 +596,13 @@ func TestProcessLightClientBootstrap(t *testing.T) {
|
||||
|
||||
s.executePostFinalizationTasks(s.ctx, l.AttestedState)
|
||||
|
||||
// Wait for the light client bootstrap to be saved (runs in goroutine)
|
||||
var b interfaces.LightClientBootstrap
|
||||
require.Eventually(t, func() bool {
|
||||
var err error
|
||||
b, err = s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
|
||||
return err == nil && b != nil
|
||||
}, 5*time.Second, 50*time.Millisecond, "Light client bootstrap was not saved within timeout")
|
||||
// wait for the goroutine to finish processing
|
||||
time.Sleep(1 * time.Second)
|
||||
|
||||
// Check that the light client bootstrap is saved
|
||||
b, err := s.lcStore.LightClientBootstrap(ctx, [32]byte(cp.Root))
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, b)
|
||||
|
||||
btst, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.FinalizedState.Slot(), l.FinalizedState, l.FinalizedBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
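The test hunks above swap fixed time.Sleep calls for polling with require.Eventually, which passes as soon as the expected state notification arrives instead of always waiting a full interval. A minimal, self-contained sketch of that polling pattern follows; it uses testify's require.Eventually, whose signature matches the calls shown in the hunks, and the asynchronous counter is a stand-in invented for the example.

package example

import (
    "sync/atomic"
    "testing"
    "time"

    "github.com/stretchr/testify/require"
)

func TestEventuallyPattern(t *testing.T) {
    var received atomic.Int64

    // Simulate an event that arrives asynchronously, like the state feed
    // notifications in the hunks above.
    go func() {
        time.Sleep(50 * time.Millisecond)
        received.Add(1)
    }()

    // Poll instead of sleeping a fixed amount: the assertion passes as soon as
    // the condition holds and only fails after the 2s budget is exhausted.
    require.Eventually(t, func() bool {
        return received.Load() >= 1
    }, 2*time.Second, 10*time.Millisecond, "expected at least 1 notification")
}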
@@ -12,6 +12,7 @@ go_library(
        "log.go",
        "registry_updates.go",
        "transition.go",
        "transition_no_verify_sig.go",
        "upgrade.go",
        "validator.go",
        "withdrawals.go",
@@ -61,6 +62,7 @@ go_test(
        "error_test.go",
        "export_test.go",
        "registry_updates_test.go",
        "transition_no_verify_sig_test.go",
        "transition_test.go",
        "upgrade_test.go",
        "validator_test.go",
@@ -6,11 +6,6 @@ type execReqErr struct {
    error
}

// NewExecReqError creates a new execReqErr.
func NewExecReqError(msg string) error {
    return execReqErr{errors.New(msg)}
}

// IsExecutionRequestError returns true if the error has `execReqErr`.
func IsExecutionRequestError(e error) bool {
    if e == nil {
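This hunk removes the NewExecReqError constructor; the new electra ProcessOperations below wraps execution-request failures as execReqErr directly, and callers can still detect them with IsExecutionRequestError. A minimal sketch of such a caller, assuming IsExecutionRequestError remains exported from the electra package as the hunk suggests; applyElectraOperations and its wiring are invented for illustration and are not part of this change set.

package example // illustrative only

import (
    "context"

    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/state"
    "github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
    "github.com/pkg/errors"
)

// applyElectraOperations is a hypothetical caller showing how the two electra
// helpers from this diff could be combined.
func applyElectraOperations(ctx context.Context, st state.BeaconState, blk interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
    post, err := electra.ProcessOperations(ctx, st, blk)
    if err != nil {
        if electra.IsExecutionRequestError(err) {
            // The failure came from deposit/withdrawal/consolidation request processing.
            return nil, errors.Wrap(err, "execution requests rejected")
        }
        return nil, errors.Wrap(err, "block operations failed")
    }
    return post, nil
}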
@@ -1,10 +1,9 @@
package transition
package electra

import (
    "context"

    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
    v "github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/state"
@@ -48,7 +47,7 @@ var (
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)

func electraOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
func ProcessOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
    var err error

    // 6110 validations are in VerifyOperationLengths
@@ -64,60 +63,59 @@ func electraOperations(ctx context.Context, st state.BeaconState, block interfac
            return nil, errors.Wrap(err, "could not update total active balance cache")
        }
    }
    st, err = blocks.ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
    st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
    if err != nil {
        return nil, errors.Wrap(ErrProcessProposerSlashingsFailed, err.Error())
        return nil, errors.Wrap(err, "could not process altair proposer slashing")
    }
    st, err = blocks.ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
    st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
    if err != nil {
        return nil, errors.Wrap(ErrProcessAttesterSlashingsFailed, err.Error())
        return nil, errors.Wrap(err, "could not process altair attester slashing")
    }
    st, err = electra.ProcessAttestationsNoVerifySignature(ctx, st, block)
    st, err = ProcessAttestationsNoVerifySignature(ctx, st, block)
    if err != nil {
        return nil, errors.Wrap(ErrProcessAttestationsFailed, err.Error())
        return nil, errors.Wrap(err, "could not process altair attestation")
    }
    if _, err := electra.ProcessDeposits(ctx, st, bb.Deposits()); err != nil {
        return nil, errors.Wrap(ErrProcessDepositsFailed, err.Error())
    if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
        return nil, errors.Wrap(err, "could not process altair deposit")
    }
    st, err = blocks.ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
    st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
    if err != nil {
        return nil, errors.Wrap(ErrProcessVoluntaryExitsFailed, err.Error())
        return nil, errors.Wrap(err, "could not process voluntary exits")
    }
    st, err = blocks.ProcessBLSToExecutionChanges(st, block)
    st, err = ProcessBLSToExecutionChanges(st, block)
    if err != nil {
        return nil, errors.Wrap(ErrProcessBLSChangesFailed, err.Error())
        return nil, errors.Wrap(err, "could not process bls-to-execution changes")
    }
    // new in electra
    requests, err := bb.ExecutionRequests()
    if err != nil {
        return nil, electra.NewExecReqError(errors.Wrap(err, "could not get execution requests").Error())
        return nil, errors.Wrap(err, "could not get execution requests")
    }
    for _, d := range requests.Deposits {
        if d == nil {
            return nil, electra.NewExecReqError("nil deposit request")
            return nil, errors.New("nil deposit request")
        }
    }
    st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
    st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
    if err != nil {
        return nil, electra.NewExecReqError(errors.Wrap(err, "could not process deposit requests").Error())
        return nil, execReqErr{errors.Wrap(err, "could not process deposit requests")}
    }

    for _, w := range requests.Withdrawals {
        if w == nil {
            return nil, electra.NewExecReqError("nil withdrawal request")
            return nil, errors.New("nil withdrawal request")
        }
    }
    st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
    st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
    if err != nil {
        return nil, electra.NewExecReqError(errors.Wrap(err, "could not process withdrawal requests").Error())
        return nil, execReqErr{errors.Wrap(err, "could not process withdrawal requests")}
    }
    for _, c := range requests.Consolidations {
        if c == nil {
            return nil, electra.NewExecReqError("nil consolidation request")
            return nil, errors.New("nil consolidation request")
        }
    }
    if err := electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
        return nil, electra.NewExecReqError(errors.Wrap(err, "could not process consolidation requests").Error())
    if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
        return nil, execReqErr{errors.Wrap(err, "could not process consolidation requests")}
    }
    return st, nil
}
beacon-chain/core/electra/transition_no_verify_sig_test.go (new file): 60 lines

@@ -0,0 +1,60 @@
package electra_test

import (
    "testing"

    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
    "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
    enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
    ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v7/testing/require"
    "github.com/OffchainLabs/prysm/v7/testing/util"
)

func TestProcessOperationsWithNilRequests(t *testing.T) {
    tests := []struct {
        name      string
        modifyBlk func(blockElectra *ethpb.SignedBeaconBlockElectra)
        errMsg    string
    }{
        {
            name: "Nil deposit request",
            modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
                blk.Block.Body.ExecutionRequests.Deposits = []*enginev1.DepositRequest{nil}
            },
            errMsg: "nil deposit request",
        },
        {
            name: "Nil withdrawal request",
            modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
                blk.Block.Body.ExecutionRequests.Withdrawals = []*enginev1.WithdrawalRequest{nil}
            },
            errMsg: "nil withdrawal request",
        },
        {
            name: "Nil consolidation request",
            modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
                blk.Block.Body.ExecutionRequests.Consolidations = []*enginev1.ConsolidationRequest{nil}
            },
            errMsg: "nil consolidation request",
        },
    }

    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            st, ks := util.DeterministicGenesisStateElectra(t, 128)
            blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
            require.NoError(t, err)

            tc.modifyBlk(blk)

            b, err := blocks.NewSignedBeaconBlock(blk)
            require.NoError(t, err)

            require.NoError(t, st.SetSlot(1))

            _, err = electra.ProcessOperations(t.Context(), st, b.Block())
            require.ErrorContains(t, tc.errMsg, err)
        })
    }
}
@@ -3,8 +3,6 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
|
||||
go_library(
|
||||
name = "go_default_library",
|
||||
srcs = [
|
||||
"electra.go",
|
||||
"errors.go",
|
||||
"log.go",
|
||||
"skip_slot_cache.go",
|
||||
"state.go",
|
||||
@@ -64,8 +62,6 @@ go_test(
|
||||
"altair_transition_no_verify_sig_test.go",
|
||||
"bellatrix_transition_no_verify_sig_test.go",
|
||||
"benchmarks_test.go",
|
||||
"electra_test.go",
|
||||
"exports_test.go",
|
||||
"skip_slot_cache_test.go",
|
||||
"state_fuzz_test.go",
|
||||
"state_test.go",
|
||||
|
||||
@@ -1,216 +0,0 @@
|
||||
package transition_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"testing"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
|
||||
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
"github.com/OffchainLabs/prysm/v7/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v7/testing/util"
|
||||
)
|
||||
|
||||
func TestProcessOperationsWithNilRequests(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
modifyBlk func(blockElectra *ethpb.SignedBeaconBlockElectra)
|
||||
errMsg string
|
||||
}{
|
||||
{
|
||||
name: "Nil deposit request",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
blk.Block.Body.ExecutionRequests.Deposits = []*enginev1.DepositRequest{nil}
|
||||
},
|
||||
errMsg: "nil deposit request",
|
||||
},
|
||||
{
|
||||
name: "Nil withdrawal request",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
blk.Block.Body.ExecutionRequests.Withdrawals = []*enginev1.WithdrawalRequest{nil}
|
||||
},
|
||||
errMsg: "nil withdrawal request",
|
||||
},
|
||||
{
|
||||
name: "Nil consolidation request",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
blk.Block.Body.ExecutionRequests.Consolidations = []*enginev1.ConsolidationRequest{nil}
|
||||
},
|
||||
errMsg: "nil consolidation request",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
st, ks := util.DeterministicGenesisStateElectra(t, 128)
|
||||
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
|
||||
require.NoError(t, err)
|
||||
|
||||
tc.modifyBlk(blk)
|
||||
|
||||
b, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, st.SetSlot(1))
|
||||
|
||||
_, err = transition.ElectraOperations(t.Context(), st, b.Block())
|
||||
require.ErrorContains(t, tc.errMsg, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestElectraOperations_ProcessingErrors(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
modifyBlk func(blk *ethpb.SignedBeaconBlockElectra)
|
||||
errCheck func(t *testing.T, err error)
|
||||
}{
|
||||
{
|
||||
name: "ErrProcessProposerSlashingsFailed",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
// Create invalid proposer slashing with out-of-bounds proposer index
|
||||
blk.Block.Body.ProposerSlashings = []*ethpb.ProposerSlashing{
|
||||
{
|
||||
Header_1: ðpb.SignedBeaconBlockHeader{
|
||||
Header: ðpb.BeaconBlockHeader{
|
||||
Slot: 1,
|
||||
ProposerIndex: 999999, // Invalid index (out of bounds)
|
||||
ParentRoot: make([]byte, 32),
|
||||
StateRoot: make([]byte, 32),
|
||||
BodyRoot: make([]byte, 32),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Header_2: ðpb.SignedBeaconBlockHeader{
|
||||
Header: ðpb.BeaconBlockHeader{
|
||||
Slot: 1,
|
||||
ProposerIndex: 999999,
|
||||
ParentRoot: make([]byte, 32),
|
||||
StateRoot: make([]byte, 32),
|
||||
BodyRoot: make([]byte, 32),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
},
|
||||
}
|
||||
},
|
||||
errCheck: func(t *testing.T, err error) {
|
||||
require.ErrorContains(t, "process proposer slashings failed", err)
|
||||
require.Equal(t, true, errors.Is(err, transition.ErrProcessProposerSlashingsFailed))
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ErrProcessAttestationsFailed",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
// Create attestation with invalid committee index
|
||||
blk.Block.Body.Attestations = []*ethpb.AttestationElectra{
|
||||
{
|
||||
AggregationBits: []byte{0b00000001},
|
||||
Data: ðpb.AttestationData{
|
||||
Slot: 1,
|
||||
CommitteeIndex: 999999, // Invalid committee index
|
||||
BeaconBlockRoot: make([]byte, 32),
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: 0,
|
||||
Root: make([]byte, 32),
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: 0,
|
||||
Root: make([]byte, 32),
|
||||
},
|
||||
},
|
||||
CommitteeBits: []byte{0b00000001},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
},
|
||||
errCheck: func(t *testing.T, err error) {
|
||||
require.ErrorContains(t, "process attestations failed", err)
|
||||
require.Equal(t, true, errors.Is(err, transition.ErrProcessAttestationsFailed))
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ErrProcessDepositsFailed",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
// Create deposit with invalid proof length
|
||||
blk.Block.Body.Deposits = []*ethpb.Deposit{
|
||||
{
|
||||
Proof: [][]byte{}, // Invalid: empty proof
|
||||
Data: ðpb.Deposit_Data{
|
||||
PublicKey: make([]byte, 48),
|
||||
WithdrawalCredentials: make([]byte, 32),
|
||||
Amount: 32000000000, // 32 ETH in Gwei
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
},
|
||||
}
|
||||
},
|
||||
errCheck: func(t *testing.T, err error) {
|
||||
require.ErrorContains(t, "process deposits failed", err)
|
||||
require.Equal(t, true, errors.Is(err, transition.ErrProcessDepositsFailed))
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ErrProcessVoluntaryExitsFailed",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
// Create voluntary exit with invalid validator index
|
||||
blk.Block.Body.VoluntaryExits = []*ethpb.SignedVoluntaryExit{
|
||||
{
|
||||
Exit: ðpb.VoluntaryExit{
|
||||
Epoch: 0,
|
||||
ValidatorIndex: 999999, // Invalid index (out of bounds)
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
},
|
||||
errCheck: func(t *testing.T, err error) {
|
||||
require.ErrorContains(t, "process voluntary exits failed", err)
|
||||
require.Equal(t, true, errors.Is(err, transition.ErrProcessVoluntaryExitsFailed))
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ErrProcessBLSChangesFailed",
|
||||
modifyBlk: func(blk *ethpb.SignedBeaconBlockElectra) {
|
||||
// Create BLS to execution change with invalid validator index
|
||||
blk.Block.Body.BlsToExecutionChanges = []*ethpb.SignedBLSToExecutionChange{
|
||||
{
|
||||
Message: ðpb.BLSToExecutionChange{
|
||||
ValidatorIndex: 999999, // Invalid index (out of bounds)
|
||||
FromBlsPubkey: make([]byte, 48),
|
||||
ToExecutionAddress: make([]byte, 20),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
},
|
||||
errCheck: func(t *testing.T, err error) {
|
||||
require.ErrorContains(t, "process BLS to execution changes failed", err)
|
||||
require.Equal(t, true, errors.Is(err, transition.ErrProcessBLSChangesFailed))
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range tests {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
st, ks := util.DeterministicGenesisStateElectra(t, 128)
|
||||
blk, err := util.GenerateFullBlockElectra(st, ks, util.DefaultBlockGenConfig(), 1)
|
||||
require.NoError(t, err)
|
||||
|
||||
tc.modifyBlk(blk)
|
||||
|
||||
b, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, st.SetSlot(primitives.Slot(1)))
|
||||
|
||||
_, err = transition.ElectraOperations(ctx, st, b.Block())
|
||||
require.NotNil(t, err, "Expected an error but got nil")
|
||||
tc.errCheck(t, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1,19 +0,0 @@
|
||||
package transition
|
||||
|
||||
import "errors"
|
||||
|
||||
var (
|
||||
ErrAttestationsSignatureInvalid = errors.New("attestations signature invalid")
|
||||
ErrRandaoSignatureInvalid = errors.New("randao signature invalid")
|
||||
ErrBLSToExecutionChangesSignatureInvalid = errors.New("BLS to execution changes signature invalid")
|
||||
ErrProcessWithdrawalsFailed = errors.New("process withdrawals failed")
|
||||
ErrProcessRandaoFailed = errors.New("process randao failed")
|
||||
ErrProcessEth1DataFailed = errors.New("process eth1 data failed")
|
||||
ErrProcessProposerSlashingsFailed = errors.New("process proposer slashings failed")
|
||||
ErrProcessAttesterSlashingsFailed = errors.New("process attester slashings failed")
|
||||
ErrProcessAttestationsFailed = errors.New("process attestations failed")
|
||||
ErrProcessDepositsFailed = errors.New("process deposits failed")
|
||||
ErrProcessVoluntaryExitsFailed = errors.New("process voluntary exits failed")
|
||||
ErrProcessBLSChangesFailed = errors.New("process BLS to execution changes failed")
|
||||
ErrProcessSyncAggregateFailed = errors.New("process sync aggregate failed")
|
||||
)
|
||||
@@ -1,3 +0,0 @@
|
||||
package transition
|
||||
|
||||
var ElectraOperations = electraOperations
|
||||
@@ -7,11 +7,12 @@ import (
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/altair"
|
||||
b "github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/electra"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition/interop"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
|
||||
v "github.com/OffchainLabs/prysm/v7/beacon-chain/core/validators"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v7/config/features"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v7/crypto/bls"
|
||||
@@ -69,11 +70,10 @@ func ExecuteStateTransitionNoVerifyAnySig(
|
||||
}
|
||||
|
||||
// Execute per block transition.
|
||||
sigSlice, st, err := ProcessBlockNoVerifyAnySig(ctx, st, signed)
|
||||
set, st, err := ProcessBlockNoVerifyAnySig(ctx, st, signed)
|
||||
if err != nil {
|
||||
return nil, nil, errors.Wrap(err, "could not process block")
|
||||
}
|
||||
set := sigSlice.Batch()
|
||||
|
||||
// State root validation.
|
||||
postStateRoot, err := st.HashTreeRoot(ctx)
|
||||
@@ -113,7 +113,7 @@ func ExecuteStateTransitionNoVerifyAnySig(
|
||||
// assert block.state_root == hash_tree_root(state)
|
||||
func CalculateStateRoot(
|
||||
ctx context.Context,
|
||||
rollback state.BeaconState,
|
||||
state state.BeaconState,
|
||||
signed interfaces.ReadOnlySignedBeaconBlock,
|
||||
) ([32]byte, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "core.state.CalculateStateRoot")
|
||||
@@ -122,7 +122,7 @@ func CalculateStateRoot(
|
||||
tracing.AnnotateError(span, ctx.Err())
|
||||
return [32]byte{}, ctx.Err()
|
||||
}
|
||||
if rollback == nil || rollback.IsNil() {
|
||||
if state == nil || state.IsNil() {
|
||||
return [32]byte{}, errors.New("nil state")
|
||||
}
|
||||
if signed == nil || signed.IsNil() || signed.Block().IsNil() {
|
||||
@@ -130,7 +130,7 @@ func CalculateStateRoot(
|
||||
}
|
||||
|
||||
// Copy state to avoid mutating the state reference.
|
||||
state := rollback.Copy()
|
||||
state = state.Copy()
|
||||
|
||||
// Execute per slots transition.
|
||||
var err error
|
||||
@@ -141,103 +141,14 @@ func CalculateStateRoot(
|
||||
}
|
||||
|
||||
// Execute per block transition.
|
||||
if features.Get().EnableProposerPreprocessing {
|
||||
state, err = processBlockForProposing(ctx, state, signed)
|
||||
if err != nil {
|
||||
return [32]byte{}, errors.Wrap(err, "could not process block for proposing")
|
||||
}
|
||||
} else {
|
||||
state, err = ProcessBlockForStateRoot(ctx, state, signed)
|
||||
if err != nil {
|
||||
return [32]byte{}, errors.Wrap(err, "could not process block")
|
||||
}
|
||||
state, err = ProcessBlockForStateRoot(ctx, state, signed)
|
||||
if err != nil {
|
||||
return [32]byte{}, errors.Wrap(err, "could not process block")
|
||||
}
|
||||
|
||||
return state.HashTreeRoot(ctx)
|
||||
}
|
||||
|
||||
// processBlockVerifySigs processes the block and verifies the signatures within it. Block signatures are not verified as this block is not yet signed.
|
||||
func processBlockForProposing(ctx context.Context, st state.BeaconState, signed interfaces.ReadOnlySignedBeaconBlock) (state.BeaconState, error) {
|
||||
var err error
|
||||
var set BlockSignatureBatches
|
||||
set, st, err = ProcessBlockNoVerifyAnySig(ctx, st, signed)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// We first try to verify all signatures batched optimistically. We ignore the block proposer signature.
|
||||
sigSet := set.Batch()
|
||||
valid, err := sigSet.Verify()
|
||||
if err != nil || valid {
|
||||
return st, err
|
||||
}
|
||||
// Some signature failed to verify.
|
||||
// Verify Attestations signatures
|
||||
attSigs := set.AttestationSignatures
|
||||
if attSigs == nil {
|
||||
return nil, ErrAttestationsSignatureInvalid
|
||||
}
|
||||
valid, err = attSigs.Verify()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !valid {
|
||||
return nil, ErrAttestationsSignatureInvalid
|
||||
}
|
||||
|
||||
// Verify Randao signature
|
||||
randaoSigs := set.RandaoSignatures
|
||||
if randaoSigs == nil {
|
||||
return nil, ErrRandaoSignatureInvalid
|
||||
}
|
||||
valid, err = randaoSigs.Verify()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !valid {
|
||||
return nil, ErrRandaoSignatureInvalid
|
||||
}
|
||||
|
||||
if signed.Block().Version() < version.Capella {
|
||||
// This should not happen, as we must have failed one of the above signatures.
|
||||
return st, nil
|
||||
}
|
||||
// Verify BLS to execution changes signatures
|
||||
blsChangeSigs := set.BLSChangeSignatures
|
||||
if blsChangeSigs == nil {
|
||||
return nil, ErrBLSToExecutionChangesSignatureInvalid
|
||||
}
|
||||
valid, err = blsChangeSigs.Verify()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !valid {
|
||||
return nil, ErrBLSToExecutionChangesSignatureInvalid
|
||||
}
|
||||
// We should not reach this point as one of the above signatures must have failed.
|
||||
return st, nil
|
||||
}
|
||||
|
||||
// BlockSignatureBatches holds the signature batches for different parts of a beacon block.
|
||||
type BlockSignatureBatches struct {
|
||||
RandaoSignatures *bls.SignatureBatch
|
||||
AttestationSignatures *bls.SignatureBatch
|
||||
BLSChangeSignatures *bls.SignatureBatch
|
||||
}
|
||||
|
||||
// Batch returns the batch of signature batches in the BlockSignatureBatches.
|
||||
func (b BlockSignatureBatches) Batch() *bls.SignatureBatch {
|
||||
sigs := bls.NewSet()
|
||||
if b.RandaoSignatures != nil {
|
||||
sigs.Join(b.RandaoSignatures)
|
||||
}
|
||||
if b.AttestationSignatures != nil {
|
||||
sigs.Join(b.AttestationSignatures)
|
||||
}
|
||||
if b.BLSChangeSignatures != nil {
|
||||
sigs.Join(b.BLSChangeSignatures)
|
||||
}
|
||||
return sigs
|
||||
}
|
||||
|
||||
// ProcessBlockNoVerifyAnySig creates a new, modified beacon state by applying block operation
|
||||
// transformations as defined in the Ethereum Serenity specification. It does not validate
|
||||
// any block signature except for deposit and slashing signatures. It also returns the relevant
|
||||
@@ -254,48 +165,48 @@ func ProcessBlockNoVerifyAnySig(
|
||||
ctx context.Context,
|
||||
st state.BeaconState,
|
||||
signed interfaces.ReadOnlySignedBeaconBlock,
|
||||
) (BlockSignatureBatches, state.BeaconState, error) {
|
||||
) (*bls.SignatureBatch, state.BeaconState, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "core.state.ProcessBlockNoVerifyAnySig")
|
||||
defer span.End()
|
||||
set := BlockSignatureBatches{}
|
||||
if err := blocks.BeaconBlockIsNil(signed); err != nil {
|
||||
return set, nil, err
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
if st.Version() != signed.Block().Version() {
|
||||
return set, nil, fmt.Errorf("state and block are different version. %d != %d", st.Version(), signed.Block().Version())
|
||||
return nil, nil, fmt.Errorf("state and block are different version. %d != %d", st.Version(), signed.Block().Version())
|
||||
}
|
||||
|
||||
blk := signed.Block()
|
||||
st, err := ProcessBlockForStateRoot(ctx, st, signed)
|
||||
if err != nil {
|
||||
return set, nil, err
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
randaoReveal := signed.Block().Body().RandaoReveal()
|
||||
rSet, err := b.RandaoSignatureBatch(ctx, st, randaoReveal[:])
|
||||
if err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
return set, nil, errors.Wrap(err, "could not retrieve randao signature set")
|
||||
return nil, nil, errors.Wrap(err, "could not retrieve randao signature set")
|
||||
}
|
||||
set.RandaoSignatures = rSet
|
||||
aSet, err := b.AttestationSignatureBatch(ctx, st, signed.Block().Body().Attestations())
|
||||
if err != nil {
|
||||
return set, nil, errors.Wrap(err, "could not retrieve attestation signature set")
|
||||
return nil, nil, errors.Wrap(err, "could not retrieve attestation signature set")
|
||||
}
|
||||
set.AttestationSignatures = aSet
|
||||
|
||||
// Merge beacon block, randao and attestations signatures into a set.
|
||||
set := bls.NewSet()
|
||||
set.Join(rSet).Join(aSet)
|
||||
|
||||
if blk.Version() >= version.Capella {
|
||||
changes, err := signed.Block().Body().BLSToExecutionChanges()
|
||||
if err != nil {
|
||||
return set, nil, errors.Wrap(err, "could not get BLSToExecutionChanges")
|
||||
return nil, nil, errors.Wrap(err, "could not get BLSToExecutionChanges")
|
||||
}
|
||||
cSet, err := b.BLSChangesSignatureBatch(st, changes)
|
||||
if err != nil {
|
||||
return set, nil, errors.Wrap(err, "could not get BLSToExecutionChanges signatures")
|
||||
return nil, nil, errors.Wrap(err, "could not get BLSToExecutionChanges signatures")
|
||||
}
|
||||
set.BLSChangeSignatures = cSet
|
||||
set.Join(cSet)
|
||||
}
|
||||
return set, st, nil
|
||||
}
|
||||
@@ -357,7 +268,7 @@ func ProcessOperationsNoVerifyAttsSigs(
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
state, err = electraOperations(ctx, state, beaconBlock)
|
||||
state, err = electra.ProcessOperations(ctx, state, beaconBlock)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -415,7 +326,7 @@ func ProcessBlockForStateRoot(
|
||||
if state.Version() >= version.Capella {
|
||||
state, err = b.ProcessWithdrawals(state, executionData)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessWithdrawalsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process withdrawals")
|
||||
}
|
||||
}
|
||||
if err = b.ProcessPayload(state, blk.Body()); err != nil {
|
||||
@@ -427,13 +338,13 @@ func ProcessBlockForStateRoot(
|
||||
state, err = b.ProcessRandaoNoVerify(state, randaoReveal[:])
|
||||
if err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
return nil, errors.Wrap(ErrProcessRandaoFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not verify and process randao")
|
||||
}
|
||||
|
||||
state, err = b.ProcessEth1DataInBlock(ctx, state, signed.Block().Body().Eth1Data())
|
||||
if err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
return nil, errors.Wrap(ErrProcessEth1DataFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process eth1 data")
|
||||
}
|
||||
|
||||
state, err = ProcessOperationsNoVerifyAttsSigs(ctx, state, signed.Block())
|
||||
@@ -452,7 +363,7 @@ func ProcessBlockForStateRoot(
|
||||
}
|
||||
state, _, err = altair.ProcessSyncAggregate(ctx, state, sa)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessSyncAggregateFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "process_sync_aggregate failed")
|
||||
}
|
||||
|
||||
return state, nil
|
||||
@@ -468,35 +379,31 @@ func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock int
|
||||
exitInfo := &validators.ExitInfo{}
|
||||
if hasSlashings || hasExits {
|
||||
// ExitInformation is expensive to compute, only do it if we need it.
|
||||
exitInfo = validators.ExitInformation(st)
|
||||
exitInfo = v.ExitInformation(st)
|
||||
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
|
||||
return nil, errors.Wrap(err, "could not update total active balance cache")
|
||||
}
|
||||
}
|
||||
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessProposerSlashingsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process altair proposer slashing")
|
||||
}
|
||||
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessAttesterSlashingsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process altair attester slashing")
|
||||
}
|
||||
st, err = altair.ProcessAttestationsNoVerifySignature(ctx, st, beaconBlock)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessAttestationsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process altair attestation")
|
||||
}
|
||||
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
|
||||
return nil, errors.Wrap(ErrProcessDepositsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process altair deposit")
|
||||
}
|
||||
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessVoluntaryExitsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process voluntary exits")
|
||||
}
|
||||
st, err = b.ProcessBLSToExecutionChanges(st, beaconBlock)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessBLSChangesFailed, err.Error())
|
||||
}
|
||||
return st, nil
|
||||
return b.ProcessBLSToExecutionChanges(st, beaconBlock)
|
||||
}
|
||||
|
||||
// This calls phase 0 block operations.
|
||||
@@ -504,32 +411,32 @@ func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock int
|
||||
var err error
|
||||
hasSlashings := len(beaconBlock.Body().ProposerSlashings()) > 0 || len(beaconBlock.Body().AttesterSlashings()) > 0
|
||||
hasExits := len(beaconBlock.Body().VoluntaryExits()) > 0
|
||||
var exitInfo *validators.ExitInfo
|
||||
var exitInfo *v.ExitInfo
|
||||
if hasSlashings || hasExits {
|
||||
// ExitInformation is expensive to compute, only do it if we need it.
|
||||
exitInfo = validators.ExitInformation(st)
|
||||
exitInfo = v.ExitInformation(st)
|
||||
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
|
||||
return nil, errors.Wrap(err, "could not update total active balance cache")
|
||||
}
|
||||
}
|
||||
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessProposerSlashingsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process block proposer slashings")
|
||||
}
|
||||
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessAttesterSlashingsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process block attester slashings")
|
||||
}
|
||||
st, err = b.ProcessAttestationsNoVerifySignature(ctx, st, beaconBlock)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessAttestationsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process block attestations")
|
||||
}
|
||||
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
|
||||
return nil, errors.Wrap(ErrProcessDepositsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process deposits")
|
||||
}
|
||||
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(ErrProcessVoluntaryExitsFailed, err.Error())
|
||||
return nil, errors.Wrap(err, "could not process voluntary exits")
|
||||
}
|
||||
return st, nil
|
||||
}
|
||||
|
||||
@@ -132,8 +132,7 @@ func TestProcessBlockNoVerify_PassesProcessingConditions(t *testing.T) {
|
||||
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
|
||||
require.NoError(t, err)
|
||||
// Test Signature set verifies.
|
||||
sigSet := set.Batch()
|
||||
verified, err := sigSet.Verify()
|
||||
verified, err := set.Verify()
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, true, verified, "Could not verify signature set.")
|
||||
}
|
||||
@@ -146,8 +145,7 @@ func TestProcessBlockNoVerifyAnySigAltair_OK(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
|
||||
require.NoError(t, err)
|
||||
sigSet := set.Batch()
|
||||
verified, err := sigSet.Verify()
|
||||
verified, err := set.Verify()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, verified, "Could not verify signature set")
|
||||
}
|
||||
@@ -156,9 +154,8 @@ func TestProcessBlockNoVerify_SigSetContainsDescriptions(t *testing.T) {
|
||||
beaconState, block, _, _, _ := createFullBlockWithOperations(t)
|
||||
wsb, err := blocks.NewSignedBeaconBlock(block)
|
||||
require.NoError(t, err)
|
||||
signatures, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
|
||||
set, _, err := transition.ProcessBlockNoVerifyAnySig(t.Context(), beaconState, wsb)
|
||||
require.NoError(t, err)
|
||||
set := signatures.Batch()
|
||||
assert.Equal(t, len(set.Signatures), len(set.Descriptions), "Signatures and descriptions do not match up")
|
||||
assert.Equal(t, "randao signature", set.Descriptions[0])
|
||||
assert.Equal(t, "attestation signature", set.Descriptions[1])
|
||||
|
||||
@@ -8,6 +8,7 @@ go_library(
        "deposit.go",
        "engine_client.go",
        "errors.go",
        "graffiti_info.go",
        "log.go",
        "log_processing.go",
        "metrics.go",
@@ -40,7 +41,6 @@ go_library(
        "//beacon-chain/state/state-native:go_default_library",
        "//beacon-chain/state/stategen:go_default_library",
        "//beacon-chain/verification:go_default_library",
        "//cmd/beacon-chain/flags:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
@@ -89,6 +89,7 @@ go_test(
        "engine_client_fuzz_test.go",
        "engine_client_test.go",
        "execution_chain_test.go",
        "graffiti_info_test.go",
        "init_test.go",
        "log_processing_test.go",
        "mock_test.go",
@@ -11,7 +11,6 @@ import (
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
    "github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/config/params"
    "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -61,7 +60,17 @@ var (
    }
)

// ClientVersionV1 represents the response from engine_getClientVersionV1.
type ClientVersionV1 struct {
    Code    string `json:"code"`
    Name    string `json:"name"`
    Version string `json:"version"`
    Commit  string `json:"commit"`
}

const (
    // GetClientVersionMethod is the engine_getClientVersionV1 method for JSON-RPC.
    GetClientVersionMethod = "engine_getClientVersionV1"
    // NewPayloadMethod v1 request string for JSON-RPC.
    NewPayloadMethod = "engine_newPayloadV1"
    // NewPayloadMethodV2 v2 request string for JSON-RPC.
@@ -350,6 +359,27 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
    return elSupportedEndpointsSlice, nil
}

// GetClientVersion calls engine_getClientVersionV1 to retrieve EL client information.
func (s *Service) GetClientVersion(ctx context.Context) ([]ClientVersionV1, error) {
    ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetClientVersion")
    defer span.End()

    // Per spec, we send our own client info as the parameter
    clVersion := ClientVersionV1{
        Code:    CLCode,
        Name:    "Prysm",
        Version: version.SemanticVersion(),
        Commit:  version.GetCommitPrefix(),
    }

    var result []ClientVersionV1
    err := s.rpcClient.CallContext(ctx, &result, GetClientVersionMethod, clVersion)
    if err != nil {
        return nil, handleRPCError(err)
    }
    return result, nil
}

// GetTerminalBlockHash returns the valid terminal block hash based on total difficulty.
//
// Spec code:
@@ -539,10 +569,6 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
        return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
    }

    if flags.Get().DisableGetBlobsV2 {
        return []*pb.BlobAndProofV2{}, nil
    }

    result := make([]*pb.BlobAndProofV2, len(versionedHashes))
    err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
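GetClientVersion is what feeds the graffiti feature below: the EL's two-letter code and commit prefix reported by engine_getClientVersionV1 end up in GraffitiInfo via UpdateFromEngine. A rough sketch of that wiring follows, for orientation only; the refreshGraffitiInfo helper and its polling interval are assumptions and not part of this change set, while GetClientVersion, ClientVersionV1, GraffitiInfo and UpdateFromEngine come from the hunks above and the new file below.

package example // illustrative wiring only

import (
    "context"
    "time"

    "github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
)

// refreshGraffitiInfo periodically asks the execution client for its version
// info and stores the first reported code/commit pair in GraffitiInfo.
func refreshGraffitiInfo(ctx context.Context, svc *execution.Service, info *execution.GraffitiInfo) {
    ticker := time.NewTicker(10 * time.Minute) // assumed interval
    defer ticker.Stop()
    for {
        versions, err := svc.GetClientVersion(ctx)
        if err == nil && len(versions) > 0 {
            // Use the first reported EL client; the real wiring may differ.
            info.UpdateFromEngine(versions[0].Code, versions[0].Commit)
        }
        select {
        case <-ctx.Done():
            return
        case <-ticker.C:
        }
    }
}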
beacon-chain/execution/graffiti_info.go (new file): 107 lines

@@ -0,0 +1,107 @@
package execution

import (
    "sync"

    "github.com/OffchainLabs/prysm/v7/runtime/version"
)

const (
    // CLCode is the two-letter client code for Prysm.
    CLCode = "PR"
)

// GraffitiInfo holds version information for generating block graffiti.
// It is thread-safe and can be updated by the execution service and read by the validator server.
type GraffitiInfo struct {
    mu       sync.RWMutex
    elCode   string // From engine_getClientVersionV1
    elCommit string // From engine_getClientVersionV1
}

// NewGraffitiInfo creates a new GraffitiInfo.
func NewGraffitiInfo() *GraffitiInfo {
    return &GraffitiInfo{}
}

// UpdateFromEngine updates the EL client information.
func (g *GraffitiInfo) UpdateFromEngine(code, commit string) {
    g.mu.Lock()
    defer g.mu.Unlock()
    g.elCode = code
    g.elCommit = commit
}

// GenerateGraffiti generates graffiti using the flexible standard
// with the provided user graffiti from the validator client request.
// It packs as much client info as space allows, followed by a space and user graffiti.
//
// Available Space | Format (space added before user graffiti if present)
// ≥13 bytes       | EL(2)+commit(4)+CL(2)+commit(4)+space+user   e.g. "GEabcdPRxxxx Sushi"
// 9-12 bytes      | EL(2)+commit(2)+CL(2)+commit(2)+space+user   e.g. "GEabPRxx Sushi"
// 5-8 bytes       | EL(2)+CL(2)+space+user                       e.g. "GEPR Sushi"
// 3-4 bytes       | code(2)+space+user                           e.g. "GE Sushi" or "PR Sushi"
// <3 bytes        | user only                                    e.g. "Sushi"
func (g *GraffitiInfo) GenerateGraffiti(userGraffiti []byte) [32]byte {
    g.mu.RLock()
    defer g.mu.RUnlock()

    var result [32]byte
    userStr := string(userGraffiti)
    // Trim trailing null bytes
    for len(userStr) > 0 && userStr[len(userStr)-1] == 0 {
        userStr = userStr[:len(userStr)-1]
    }

    // Prepend space to user graffiti for readability
    if len(userStr) > 0 {
        userStr = " " + userStr
    }
    available := 32 - len(userStr)

    clCommit := version.GetCommitPrefix()
    clCommit4 := truncateCommit(clCommit, 4)
    clCommit2 := truncateCommit(clCommit, 2)

    // If no EL info, clear EL commits but still include CL info
    var elCommit4, elCommit2 string
    if g.elCode != "" {
        elCommit4 = truncateCommit(g.elCommit, 4)
        elCommit2 = truncateCommit(g.elCommit, 2)
    }

    var graffiti string
    switch {
    case available >= 12:
        // Full: EL(2)+commit(4)+CL(2)+commit(4)+space+user
        graffiti = g.elCode + elCommit4 + CLCode + clCommit4 + userStr
    case available >= 8:
        // Reduced commits: EL(2)+commit(2)+CL(2)+commit(2)+space+user
        graffiti = g.elCode + elCommit2 + CLCode + clCommit2 + userStr
    case available >= 4:
        // Codes only: EL(2)+CL(2)+space+user
        graffiti = g.elCode + CLCode + userStr
    case available >= 2:
        // EL code only (or CL code if no EL): code(2)+space+user
        if g.elCode != "" {
            graffiti = g.elCode + userStr
        } else {
            graffiti = CLCode + userStr
        }
    default:
        // User graffiti only (no space needed since no version prefix)
        // Remove the prepended space since we can't fit any version info
        graffiti = userStr[1:]
    }

    copy(result[:], graffiti)
    return result
}

// truncateCommit returns the first n characters of the commit string.
func truncateCommit(commit string, n int) string {
    if len(commit) <= n {
        return commit
    }
    return commit[:n]
}
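For a quick sense of the output, here is an illustrative caller of GraffitiInfo; the test file below exercises the same cases in full. The EL code and commit are sample inputs, and the CL commit portion comes from version.GetCommitPrefix() at build time, so the exact string varies between builds.

package example // illustrative only

import (
    "bytes"
    "fmt"

    "github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
)

func main() {
    info := execution.NewGraffitiInfo()
    // Sample values as they would be reported by engine_getClientVersionV1.
    info.UpdateFromEngine("GE", "abcd1234")

    // With a short user graffiti there is room for the full prefix,
    // e.g. "GEabcdPR<cl-commit> Sushi"; the CL commit depends on the build.
    g := info.GenerateGraffiti([]byte("Sushi"))
    fmt.Println(string(bytes.TrimRight(g[:], "\x00")))
}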
beacon-chain/execution/graffiti_info_test.go (new file): 194 lines
@@ -0,0 +1,194 @@
package execution

import (
	"testing"

	"github.com/OffchainLabs/prysm/v7/testing/require"
)

func TestGraffitiInfo_GenerateGraffiti(t *testing.T) {
	tests := []struct {
		name         string
		elCode       string
		elCommit     string
		userGraffiti []byte
		wantPrefix   string
		wantSuffix   string // for checking user graffiti is appended
	}{
		// No EL info cases (CL info "PR" + commit still included when space allows)
		{
			name:         "No EL - empty user graffiti",
			elCode:       "",
			elCommit:     "",
			userGraffiti: []byte{},
			wantPrefix:   "PR", // CL code, no trailing space since no user graffiti
		},
		{
			name:         "No EL - short user graffiti",
			elCode:       "",
			elCommit:     "",
			userGraffiti: []byte("my validator"),
			wantPrefix:   "PR",            // CL code + commit + space + user
			wantSuffix:   " my validator", // space before user graffiti
		},
		{
			name:         "No EL - 28 char user graffiti (3 bytes available after space)",
			elCode:       "",
			elCommit:     "",
			userGraffiti: []byte("1234567890123456789012345678"), // 28 chars
			wantPrefix:   "PR ",                                  // CL code + space (3 bytes)
		},
		{
			name:         "No EL - 29 char user graffiti (2 bytes available after space)",
			elCode:       "",
			elCommit:     "",
			userGraffiti: []byte("12345678901234567890123456789"), // 29 chars, 2 bytes available = fits PR
			wantPrefix:   "PR ",                                   // CL code (2 bytes) + space fits exactly
		},
		{
			name:         "No EL - 30 char user graffiti (1 byte available after space)",
			elCode:       "",
			elCommit:     "",
			userGraffiti: []byte("123456789012345678901234567890"), // 30 chars, 1 byte available = not enough for code+space
			wantPrefix:   "123456789012345678901234567890",         // User only
		},
		{
			name:         "No EL - 31 char user graffiti (0 bytes available after space)",
			elCode:       "",
			elCommit:     "",
			userGraffiti: []byte("1234567890123456789012345678901"),
			wantPrefix:   "1234567890123456789012345678901", // User only
		},
		// With EL info - flexible standard format cases
		{
			name:         "With EL - full format (empty user graffiti)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte{},
			wantPrefix:   "GEabcdPR", // No trailing space when no user graffiti
		},
		{
			name:         "With EL - full format (short user graffiti)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("Bob"),
			wantPrefix:   "GEabcdPR", // EL(2)+commit(4)+CL(2)+commit(4)
			wantSuffix:   " Bob",     // space before user graffiti
		},
		{
			name:         "With EL - full format (18 char user, 13 bytes available after space)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("123456789012345678"), // 18 chars, need 19 with space, leaves 13
			wantPrefix:   "GEabcdPR",                   // Full format fits (12 bytes)
		},
		{
			name:         "With EL - reduced commits (22 char user, 9 bytes available after space)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("1234567890123456789012"), // 22 chars, need 23 with space, leaves 9
			wantPrefix:   "GEabPR",                         // Reduced format (8 bytes)
		},
		{
			name:         "With EL - codes only (26 char user, 5 bytes available after space)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("12345678901234567890123456"), // 26 chars, need 27 with space, leaves 5
			wantPrefix:   "GEPR ",                              // Codes only (4 bytes) + space
		},
		{
			name:         "With EL - EL code only (28 char user, 3 bytes available after space)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("1234567890123456789012345678"), // 28 chars, need 29 with space, leaves 3
			wantPrefix:   "GE ",                                  // EL code (2 bytes) + space
		},
		{
			name:         "With EL - user only (30 char user, 1 byte available after space)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("123456789012345678901234567890"), // 30 chars, need 31 with space, leaves 1
			wantPrefix:   "123456789012345678901234567890",         // Not enough for code+space, user only
		},
		{
			name:         "With EL - user only (32 char user, 0 bytes available)",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: []byte("12345678901234567890123456789012"),
			wantPrefix:   "12345678901234567890123456789012",
		},
		// Null byte handling
		{
			name:         "Null bytes - input with trailing nulls",
			elCode:       "GE",
			elCommit:     "abcd1234",
			userGraffiti: append([]byte("test"), 0, 0, 0),
			wantPrefix:   "GEabcdPR",
			wantSuffix:   " test",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			g := NewGraffitiInfo()
			if tt.elCode != "" {
				g.UpdateFromEngine(tt.elCode, tt.elCommit)
			}

			result := g.GenerateGraffiti(tt.userGraffiti)
			resultStr := string(result[:])

			// Check prefix
			require.Equal(t, true, len(resultStr) >= len(tt.wantPrefix), "Result too short for prefix check")
			require.Equal(t, tt.wantPrefix, resultStr[:len(tt.wantPrefix)], "Prefix mismatch")

			// Check suffix if specified
			if tt.wantSuffix != "" {
				trimmed := trimNullBytes(resultStr)
				require.Equal(t, true, len(trimmed) >= len(tt.wantSuffix), "Result too short for suffix check")
				require.Equal(t, tt.wantSuffix, trimmed[len(trimmed)-len(tt.wantSuffix):], "Suffix mismatch")
			}
		})
	}
}

func TestGraffitiInfo_UpdateFromEngine(t *testing.T) {
	g := NewGraffitiInfo()

	// Initially no EL info - should still have CL info (PR + commit)
	result := g.GenerateGraffiti([]byte{})
	resultStr := string(result[:])
	require.Equal(t, "PR", resultStr[:2], "Expected CL info before update")

	// Update with EL info
	g.UpdateFromEngine("GE", "1234abcd")

	result = g.GenerateGraffiti([]byte{})
	resultStr = string(result[:])
	require.Equal(t, "GE1234PR", resultStr[:8], "Expected EL+CL info after update")
}

func TestTruncateCommit(t *testing.T) {
	tests := []struct {
		commit string
		n      int
		want   string
	}{
		{"abcd1234", 4, "abcd"},
		{"ab", 4, "ab"},
		{"", 4, ""},
		{"abcdef", 2, "ab"},
	}

	for _, tt := range tests {
		got := truncateCommit(tt.commit, tt.n)
		require.Equal(t, tt.want, got)
	}
}

func trimNullBytes(s string) string {
	for len(s) > 0 && s[len(s)-1] == 0 {
		s = s[:len(s)-1]
	}
	return s
}
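The boundary cases in the table above are easiest to sanity-check by computing the free budget directly. A throwaway sketch (not part of the test file; the thresholds mirror the switch in GenerateGraffiti) that prints which tier each user-graffiti length lands in:

package main

import "fmt"

func main() {
	for _, n := range []int{0, 3, 18, 22, 26, 28, 29, 30, 31, 32} {
		avail := 32 - n
		if n > 0 {
			avail-- // a separating space is reserved for non-empty user graffiti
		}
		tier := "user only"
		switch {
		case avail >= 12:
			tier = "full commits (12 bytes)"
		case avail >= 8:
			tier = "short commits (8 bytes)"
		case avail >= 4:
			tier = "codes only (4 bytes)"
		case avail >= 2:
			tier = "single code (2 bytes)"
		}
		fmt.Printf("user len %2d -> %2d bytes free -> %s\n", n, avail, tier)
	}
}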
@@ -124,3 +124,11 @@ func WithVerifierWaiter(v *verification.InitializerWaiter) Option {
		return nil
	}
}

// WithGraffitiInfo sets the GraffitiInfo for client version tracking.
func WithGraffitiInfo(g *GraffitiInfo) Option {
	return func(s *Service) error {
		s.graffitiInfo = g
		return nil
	}
}
@@ -162,6 +162,7 @@ type Service struct {
	verifierWaiter  *verification.InitializerWaiter
	blobVerifier    verification.NewBlobVerifier
	capabilityCache *capabilityCache
	graffitiInfo    *GraffitiInfo
}

// NewService sets up a new instance with an ethclient when given a web3 endpoint as a string in the config.
@@ -318,6 +319,28 @@ func (s *Service) updateConnectedETH1(state bool) {
	s.updateBeaconNodeStats()
}

// GraffitiInfo returns the GraffitiInfo struct for graffiti generation.
func (s *Service) GraffitiInfo() *GraffitiInfo {
	return s.graffitiInfo
}

// updateGraffitiInfo fetches the EL client version and updates the graffiti info.
func (s *Service) updateGraffitiInfo() {
	if s.graffitiInfo == nil {
		return
	}
	ctx, cancel := context.WithTimeout(s.ctx, time.Second)
	defer cancel()
	versions, err := s.GetClientVersion(ctx)
	if err != nil {
		log.WithError(err).Debug("Could not get execution client version for graffiti")
		return
	}
	if len(versions) >= 1 {
		s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
	}
}

// refers to the latest eth1 block which follows the condition: eth1_timestamp +
// SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE <= current_unix_time
func (s *Service) followedBlockHeight(ctx context.Context) (uint64, error) {
@@ -598,6 +621,12 @@ func (s *Service) run(done <-chan struct{}) {
	chainstartTicker := time.NewTicker(logPeriod)
	defer chainstartTicker.Stop()

	// Update graffiti info 4 times per epoch (~96 seconds with 12s slots and 32 slots/epoch)
	graffitiTicker := time.NewTicker(96 * time.Second)
	defer graffitiTicker.Stop()
	// Initial update
	s.updateGraffitiInfo()

	for {
		select {
		case <-done:
@@ -622,6 +651,8 @@ func (s *Service) run(done <-chan struct{}) {
				continue
			}
			s.logTillChainStart(context.Background())
		case <-graffitiTicker.C:
			s.updateGraffitiInfo()
		}
	}
}
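A rough, self-contained sketch of the refresh pattern introduced above (illustrative names, not the actual service loop): a ticker fires four times per 32-slot epoch (96s at 12s slots), each refresh is bounded by a one-second context timeout, and a failed fetch simply keeps the previously cached value.

package main

import (
	"context"
	"fmt"
	"time"
)

// clientVersion is a stand-in for the engine API client-version response
// (a two-letter code such as "GE" plus a commit string).
type clientVersion struct {
	Code   string
	Commit string
}

// fetchVersion simulates the engine call; in the real service this is the
// execution client version request.
func fetchVersion(ctx context.Context) ([]clientVersion, error) {
	select {
	case <-time.After(50 * time.Millisecond):
		return []clientVersion{{Code: "GE", Commit: "abcd1234"}}, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

// refreshOnce guards a single fetch with a short timeout; errors are
// non-fatal and leave the previously cached value in place.
func refreshOnce(parent context.Context, update func(code, commit string)) {
	ctx, cancel := context.WithTimeout(parent, time.Second)
	defer cancel()
	versions, err := fetchVersion(ctx)
	if err != nil || len(versions) == 0 {
		return
	}
	update(versions[0].Code, versions[0].Commit)
}

// runRefreshLoop refreshes immediately and then on every tick until done closes.
func runRefreshLoop(done <-chan struct{}, refresh func()) {
	ticker := time.NewTicker(96 * time.Second) // 4 times per 32-slot epoch at 12s slots
	defer ticker.Stop()
	refresh()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			refresh()
		}
	}
}

func main() {
	done := make(chan struct{})
	update := func(code, commit string) { fmt.Printf("graffiti info: %s@%s\n", code, commit) }
	go runRefreshLoop(done, func() { refreshOnce(context.Background(), update) })
	time.Sleep(200 * time.Millisecond)
	close(done)
}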
@@ -75,6 +75,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
p2p := p2pTesting.NewTestP2P(t)
|
||||
lcStore := NewLightClientStore(p2p, new(event.Feed), testDB.SetupDB(t))
|
||||
|
||||
timeForGoroutinesToFinish := 20 * time.Microsecond
|
||||
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
|
||||
l0 := util.NewTestLightClient(t, version.Altair)
|
||||
update0, err := NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
|
||||
@@ -86,9 +87,8 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update0, true)
|
||||
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
require.Eventually(t, func() bool {
|
||||
return p2p.BroadcastCalled.Load()
|
||||
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update when previous is nil")
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
|
||||
p2p.BroadcastCalled.Store(false) // Reset for next test
|
||||
|
||||
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
|
||||
@@ -102,7 +102,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update1, true)
|
||||
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
|
||||
p2p.BroadcastCalled.Store(false) // Reset for next test
|
||||
|
||||
@@ -117,9 +117,8 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update2, true)
|
||||
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
require.Eventually(t, func() bool {
|
||||
return p2p.BroadcastCalled.Load()
|
||||
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after setting a new last finality update with supermajority")
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
|
||||
p2p.BroadcastCalled.Store(false) // Reset for next test
|
||||
|
||||
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
|
||||
@@ -133,7 +132,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update3, true)
|
||||
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
|
||||
|
||||
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
|
||||
@@ -147,9 +146,8 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update4, true)
|
||||
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
require.Eventually(t, func() bool {
|
||||
return p2p.BroadcastCalled.Load()
|
||||
}, time.Second, 10*time.Millisecond, "Broadcast should have been called after a new finality update with increased finality slot")
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
|
||||
p2p.BroadcastCalled.Store(false) // Reset for next test
|
||||
|
||||
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
|
||||
@@ -163,7 +161,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update5, true)
|
||||
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
|
||||
|
||||
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
|
||||
@@ -177,7 +175,7 @@ func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
|
||||
|
||||
lcStore.SetLastFinalityUpdate(update6, true)
|
||||
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
|
||||
time.Sleep(50 * time.Millisecond) // Wait briefly to verify broadcast is not called
|
||||
time.Sleep(timeForGoroutinesToFinish) // give some time for the broadcast goroutine to finish
|
||||
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
|
||||
}
|
||||
|
||||
|
||||
@@ -775,6 +775,9 @@ func (b *BeaconNode) registerPOWChainService() error {
		return err
	}

	// Create GraffitiInfo for client version tracking in block graffiti
	graffitiInfo := execution.NewGraffitiInfo()

	// skipcq: CRT-D0001
	opts := append(
		b.serviceFlagOpts.executionChainFlagOpts,
@@ -787,6 +790,7 @@ func (b *BeaconNode) registerPOWChainService() error {
		execution.WithFinalizedStateAtStartup(b.finalizedStateAtStartUp),
		execution.WithJwtId(b.cliCtx.String(flags.JwtId.Name)),
		execution.WithVerifierWaiter(b.verifyInitWaiter),
		execution.WithGraffitiInfo(graffitiInfo),
	)
	web3Service, err := execution.NewService(b.ctx, opts...)
	if err != nil {
@@ -993,6 +997,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
		TrackedValidatorsCache: b.trackedValidatorsCache,
		PayloadIDCache:         b.payloadIDCache,
		LCStore:                b.lcStore,
		GraffitiInfo:           web3Service.GraffitiInfo(),
	})

	return b.services.RegisterService(rpcService)
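For readers unfamiliar with the wiring style used here, a small self-contained sketch of the same functional-options pattern (hypothetical types, not the Prysm ones): the node creates one shared value, passes it into the service through an Option, and later consumers read it back through a getter.

package main

import "fmt"

// graffitiInfo and service are illustrative stand-ins for the real types.
type graffitiInfo struct{ elCode, elCommit string }

type service struct{ graffiti *graffitiInfo }

type option func(*service) error

// withGraffitiInfo stores the shared pointer on the service.
func withGraffitiInfo(g *graffitiInfo) option {
	return func(s *service) error {
		s.graffiti = g
		return nil
	}
}

// newService applies each option in order and fails on the first error.
func newService(opts ...option) (*service, error) {
	s := &service{}
	for _, o := range opts {
		if err := o(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	shared := &graffitiInfo{}
	svc, err := newService(withGraffitiInfo(shared))
	if err != nil {
		panic(err)
	}
	fmt.Println("same pointer wired through:", svc.graffiti == shared)
}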
@@ -22,7 +22,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
"github.com/OffchainLabs/prysm/v7/time/slots"
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
"github.com/pkg/errors"
|
||||
ssz "github.com/prysmaticlabs/fastssz"
|
||||
"github.com/sirupsen/logrus"
|
||||
@@ -358,99 +357,58 @@ func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []bl
|
||||
return nil
|
||||
}
|
||||
|
||||
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network.
|
||||
// For sidecars with available peers, it uses batch publishing.
|
||||
// For sidecars without peers, it finds peers first and then publishes individually.
|
||||
// Both paths run in parallel. It returns when all broadcasts are complete, or the context is cancelled.
|
||||
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network, after ensuring
|
||||
// there is at least one peer in each needed subnet. If not, it will attempt to find one before broadcasting.
|
||||
// It returns when all broadcasts are complete, or the context is cancelled (whichever comes first).
|
||||
func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [fieldparams.VersionLength]byte, sidecars []blocks.VerifiedRODataColumn) {
|
||||
type rootAndIndex struct {
|
||||
root [fieldparams.RootLength]byte
|
||||
index uint64
|
||||
}
|
||||
|
||||
var timings sync.Map
|
||||
var (
|
||||
wg sync.WaitGroup
|
||||
timings sync.Map
|
||||
)
|
||||
|
||||
logLevel := logrus.GetLevel()
|
||||
|
||||
slotPerRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, 1)
|
||||
|
||||
topicFunc := func(sidecar blocks.VerifiedRODataColumn) (topic string, wrappedSubIdx uint64, subnet uint64) {
|
||||
subnet = peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
|
||||
topic = dataColumnSubnetToTopic(subnet, forkDigest)
|
||||
wrappedSubIdx = subnet + dataColumnSubnetVal
|
||||
return
|
||||
}
|
||||
|
||||
sidecarsWithPeers := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
|
||||
var sidecarsWithoutPeers []blocks.VerifiedRODataColumn
|
||||
|
||||
// Categorize sidecars by peer availability.
|
||||
for _, sidecar := range sidecars {
|
||||
slotPerRoot[sidecar.BlockRoot()] = sidecar.Slot()
|
||||
|
||||
topic, wrappedSubIdx, _ := topicFunc(sidecar)
|
||||
// Check if we have a peer for this subnet (use RLock for read-only check).
|
||||
mu := s.subnetLocker(wrappedSubIdx)
|
||||
mu.RLock()
|
||||
hasPeer := s.hasPeerWithSubnet(topic)
|
||||
mu.RUnlock()
|
||||
|
||||
if hasPeer {
|
||||
sidecarsWithPeers = append(sidecarsWithPeers, sidecar)
|
||||
continue
|
||||
}
|
||||
|
||||
sidecarsWithoutPeers = append(sidecarsWithoutPeers, sidecar)
|
||||
}
|
||||
|
||||
var batchWg, individualWg sync.WaitGroup
|
||||
|
||||
// Batch publish sidecars that already have peers
|
||||
var messageBatch pubsub.MessageBatch
|
||||
for _, sidecar := range sidecarsWithPeers {
|
||||
batchWg.Go(func() {
|
||||
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
|
||||
ctx := trace.NewContext(s.ctx, span)
|
||||
wg.Go(func() {
|
||||
// Add tracing to the function.
|
||||
ctx, span := trace.StartSpan(s.ctx, "p2p.broadcastDataColumnSidecars")
|
||||
defer span.End()
|
||||
|
||||
topic, _, _ := topicFunc(sidecar)
|
||||
// Compute the subnet for this data column sidecar.
|
||||
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
|
||||
|
||||
if err := s.batchObject(ctx, &messageBatch, sidecar, topic); err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
log.WithError(err).Error("Cannot batch data column sidecar")
|
||||
return
|
||||
}
|
||||
// Build the topic corresponding to subnet column subnet and this fork digest.
|
||||
topic := dataColumnSubnetToTopic(subnet, forkDigest)
|
||||
|
||||
if logLevel >= logrus.DebugLevel {
|
||||
root := sidecar.BlockRoot()
|
||||
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
|
||||
}
|
||||
})
|
||||
}
|
||||
// Compute the wrapped subnet index.
|
||||
wrappedSubIdx := subnet + dataColumnSubnetVal
|
||||
|
||||
// For sidecars without peers, find peers and publish individually (no batching).
|
||||
for _, sidecar := range sidecarsWithoutPeers {
|
||||
individualWg.Go(func() {
|
||||
_, span := trace.StartSpan(ctx, "p2p.broadcastDataColumnSidecars")
|
||||
ctx := trace.NewContext(s.ctx, span)
|
||||
defer span.End()
|
||||
|
||||
topic, wrappedSubIdx, subnet := topicFunc(sidecar)
|
||||
|
||||
// Find peers for this sidecar's subnet.
|
||||
// Find peers if needed.
|
||||
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
log.WithError(err).Error("Cannot find peers if needed")
|
||||
return
|
||||
}
|
||||
|
||||
// Publish individually (not batched) since we just found peers.
|
||||
// Broadcast the data column sidecar to the network.
|
||||
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
log.WithError(err).Error("Cannot broadcast data column sidecar")
|
||||
return
|
||||
}
|
||||
|
||||
// Increase the number of successful broadcasts.
|
||||
dataColumnSidecarBroadcasts.Inc()
|
||||
|
||||
// Record the timing for log purposes.
|
||||
if logLevel >= logrus.DebugLevel {
|
||||
root := sidecar.BlockRoot()
|
||||
timings.Store(rootAndIndex{root: root, index: sidecar.Index}, time.Now())
|
||||
@@ -458,18 +416,8 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
|
||||
})
|
||||
}
|
||||
|
||||
// Wait for batch to be populated, then publish.
|
||||
batchWg.Wait()
|
||||
if len(sidecarsWithPeers) > 0 {
|
||||
if err := s.pubsub.PublishBatch(&messageBatch); err != nil {
|
||||
log.WithError(err).Error("Cannot publish batch for data column sidecars")
|
||||
} else {
|
||||
dataColumnSidecarBroadcasts.Add(float64(len(sidecarsWithPeers)))
|
||||
}
|
||||
}
|
||||
|
||||
// Wait for all individual publishes to complete.
|
||||
individualWg.Wait()
|
||||
// Wait for all broadcasts to finish.
|
||||
wg.Wait()
|
||||
|
||||
// The rest of this function is only for debug logging purposes.
|
||||
if logLevel < logrus.DebugLevel {
|
||||
@@ -556,68 +504,28 @@ func (s *Service) findPeersIfNeeded(
|
||||
return nil
|
||||
}
|
||||
|
||||
// encodeGossipMessage encodes an object for gossip transmission.
|
||||
// It returns the encoded bytes and the full topic with protocol suffix.
|
||||
func (s *Service) encodeGossipMessage(obj ssz.Marshaler, topic string) ([]byte, string, error) {
|
||||
buf := new(bytes.Buffer)
|
||||
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
|
||||
return nil, "", fmt.Errorf("could not encode message: %w", err)
|
||||
}
|
||||
return buf.Bytes(), topic + s.Encoding().ProtocolSuffix(), nil
|
||||
}
|
||||
|
||||
// broadcastObject broadcasts a message to other peers in our gossip mesh.
|
||||
// method to broadcast messages to other peers in our gossip mesh.
|
||||
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
|
||||
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
|
||||
defer span.End()
|
||||
|
||||
span.SetAttributes(trace.StringAttribute("topic", topic))
|
||||
|
||||
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
|
||||
if err != nil {
|
||||
buf := new(bytes.Buffer)
|
||||
if _, err := s.Encoding().EncodeGossip(buf, obj); err != nil {
|
||||
err := errors.Wrap(err, "could not encode message")
|
||||
tracing.AnnotateError(span, err)
|
||||
return err
|
||||
}
|
||||
|
||||
if span.IsRecording() {
|
||||
id := hash.FastSum64(data)
|
||||
messageLen := int64(len(data))
|
||||
id := hash.FastSum64(buf.Bytes())
|
||||
messageLen := int64(buf.Len())
|
||||
// lint:ignore uintcast -- It's safe to do this for tracing.
|
||||
iid := int64(id)
|
||||
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
|
||||
}
|
||||
|
||||
if err := s.PublishToTopic(ctx, fullTopic, data); err != nil {
|
||||
err := errors.Wrap(err, "could not publish message")
|
||||
tracing.AnnotateError(span, err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// batchObject adds an object to a message batch for a future broadcast.
|
||||
// The caller MUST publish the batch after all messages have been added.
|
||||
func (s *Service) batchObject(ctx context.Context, batch *pubsub.MessageBatch, obj ssz.Marshaler, topic string) error {
|
||||
ctx, span := trace.StartSpan(ctx, "p2p.batchObject")
|
||||
defer span.End()
|
||||
|
||||
span.SetAttributes(trace.StringAttribute("topic", topic))
|
||||
|
||||
data, fullTopic, err := s.encodeGossipMessage(obj, topic)
|
||||
if err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
return err
|
||||
}
|
||||
|
||||
if span.IsRecording() {
|
||||
id := hash.FastSum64(data)
|
||||
messageLen := int64(len(data))
|
||||
// lint:ignore uintcast -- It's safe to do this for tracing.
|
||||
iid := int64(id)
|
||||
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
|
||||
}
|
||||
|
||||
if err := s.addToBatch(ctx, batch, fullTopic, data); err != nil {
|
||||
if err := s.PublishToTopic(ctx, topic+s.Encoding().ProtocolSuffix(), buf.Bytes()); err != nil {
|
||||
err := errors.Wrap(err, "could not publish message")
|
||||
tracing.AnnotateError(span, err)
|
||||
return err
|
||||
|
||||
@@ -32,8 +32,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/time/slots"
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
"github.com/libp2p/go-libp2p/core/host"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/libp2p/go-libp2p/core/protocol"
|
||||
"google.golang.org/protobuf/proto"
|
||||
)
|
||||
|
||||
@@ -72,10 +70,7 @@ func TestService_Broadcast(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -189,10 +184,7 @@ func TestService_BroadcastAttestation(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -381,15 +373,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
|
||||
_, err = tpHandle.Subscribe()
|
||||
require.NoError(t, err)
|
||||
|
||||
// This test specifically tests discovery-based peer finding, which requires
|
||||
// time for nodes to discover each other. Using a fixed sleep here is intentional
|
||||
// as we're testing the discovery timing behavior.
|
||||
time.Sleep(500 * time.Millisecond)
|
||||
|
||||
// Verify mesh establishment after discovery
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0 && len(p2.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(500 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
nodePeers := p.pubsub.ListPeers(topic)
|
||||
nodePeers2 := p2.pubsub.ListPeers(topic)
|
||||
@@ -458,10 +442,7 @@ func TestService_BroadcastSyncCommittee(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -538,10 +519,7 @@ func TestService_BroadcastBlob(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -604,10 +582,7 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -683,10 +658,7 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(p.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
|
||||
|
||||
// Async listen for the pubsub, must be before the broadcast.
|
||||
var wg sync.WaitGroup
|
||||
@@ -797,10 +769,8 @@ func TestService_BroadcastDataColumn(t *testing.T) {
|
||||
sub, err := p2.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for libp2p mesh to establish
|
||||
require.Eventually(t, func() bool {
|
||||
return len(service.pubsub.ListPeers(topic)) > 0
|
||||
}, 5*time.Second, 10*time.Millisecond, "libp2p mesh did not establish")
|
||||
// libp2p fails without this delay
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
|
||||
// Broadcast to peers and wait.
|
||||
err = service.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{verifiedRoSidecar})
|
||||
@@ -817,190 +787,3 @@ func TestService_BroadcastDataColumn(t *testing.T) {
|
||||
require.NoError(t, service.Encoding().DecodeGossip(msg.Data, &result))
|
||||
require.DeepEqual(t, &result, verifiedRoSidecar)
|
||||
}
|
||||
|
||||
type topicInvoked struct {
|
||||
topic string
|
||||
pid peer.ID
|
||||
}
|
||||
|
||||
// rpcOrderTracer is a RawTracer implementation that captures the order of SendRPC calls.
|
||||
// It records the topics of messages sent via pubsub to verify round-robin ordering.
|
||||
type rpcOrderTracer struct {
|
||||
mu sync.Mutex
|
||||
invoked []*topicInvoked
|
||||
byTopic map[string][]peer.ID
|
||||
}
|
||||
|
||||
func (t *rpcOrderTracer) SendRPC(rpc *pubsub.RPC, pid peer.ID) {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
for _, msg := range rpc.GetPublish() {
|
||||
invoked := &topicInvoked{topic: msg.GetTopic(), pid: pid}
|
||||
t.invoked = append(t.invoked, invoked)
|
||||
t.byTopic[invoked.topic] = append(t.byTopic[invoked.topic], invoked.pid)
|
||||
}
|
||||
}
|
||||
|
||||
func newRpcOrderTracer() *rpcOrderTracer {
|
||||
return &rpcOrderTracer{byTopic: make(map[string][]peer.ID)}
|
||||
}
|
||||
|
||||
func (t *rpcOrderTracer) getTopics() []string {
|
||||
t.mu.Lock()
|
||||
defer t.mu.Unlock()
|
||||
result := make([]string, len(t.invoked))
|
||||
for i := range t.invoked {
|
||||
result[i] = t.invoked[i].topic
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// No-op implementations for other RawTracer methods.
|
||||
func (*rpcOrderTracer) AddPeer(peer.ID, protocol.ID) {}
|
||||
func (*rpcOrderTracer) RemovePeer(peer.ID) {}
|
||||
func (*rpcOrderTracer) Join(string) {}
|
||||
func (*rpcOrderTracer) Leave(string) {}
|
||||
func (*rpcOrderTracer) Graft(peer.ID, string) {}
|
||||
func (*rpcOrderTracer) Prune(peer.ID, string) {}
|
||||
func (*rpcOrderTracer) ValidateMessage(*pubsub.Message) {}
|
||||
func (*rpcOrderTracer) DeliverMessage(*pubsub.Message) {}
|
||||
func (*rpcOrderTracer) RejectMessage(*pubsub.Message, string) {}
|
||||
func (*rpcOrderTracer) DuplicateMessage(*pubsub.Message) {}
|
||||
func (*rpcOrderTracer) ThrottlePeer(peer.ID) {}
|
||||
func (*rpcOrderTracer) RecvRPC(*pubsub.RPC) {}
|
||||
func (*rpcOrderTracer) DropRPC(*pubsub.RPC, peer.ID) {}
|
||||
func (*rpcOrderTracer) UndeliverableMessage(*pubsub.Message) {}
|
||||
|
||||
// TestService_BroadcastDataColumnRoundRobin verifies that when broadcasting multiple
|
||||
// data column sidecars, messages are interleaved in round-robin order by column index
|
||||
// rather than sending all copies of one column before the next.
|
||||
//
|
||||
// Without batch publishing: A,A,A,A,B,B,B,B (all peers for column A, then all for column B)
|
||||
// With batch publishing: A,B,A,B,A,B,A,B (interleaved by message ID)
|
||||
func TestService_BroadcastDataColumnRoundRobin(t *testing.T) {
|
||||
const (
|
||||
port = 2100
|
||||
topicFormat = DataColumnSubnetTopicFormat
|
||||
)
|
||||
|
||||
ctx := t.Context()
|
||||
|
||||
// Load the KZG trust setup.
|
||||
err := kzg.Start()
|
||||
require.NoError(t, err)
|
||||
|
||||
gFlags := new(flags.GlobalFlags)
|
||||
gFlags.MinimumPeersPerSubnet = 1
|
||||
flags.Init(gFlags)
|
||||
defer flags.Init(new(flags.GlobalFlags))
|
||||
|
||||
// Create a tracer to capture the order of SendRPC calls.
|
||||
tracer := newRpcOrderTracer()
|
||||
|
||||
// Create the publisher node with the tracer injected.
|
||||
p1 := p2ptest.NewTestP2PWithPubsubOptions(t, []pubsub.Option{pubsub.WithRawTracer(tracer)})
|
||||
|
||||
// Create subscriber peers.
|
||||
expectedPeers := []*p2ptest.TestP2P{
|
||||
p2ptest.NewTestP2P(t),
|
||||
p2ptest.NewTestP2P(t),
|
||||
}
|
||||
|
||||
// Connect peers.
|
||||
for _, p := range expectedPeers {
|
||||
p1.Connect(p)
|
||||
}
|
||||
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")
|
||||
|
||||
// Create a host for discovery.
|
||||
_, pkey, ipAddr := createHost(t, port)
|
||||
|
||||
// Create a shared DB for the service.
|
||||
db := testDB.SetupDB(t)
|
||||
|
||||
// Create and close the custody info channel immediately since custodyInfo is already set.
|
||||
custodyInfoSet := make(chan struct{})
|
||||
close(custodyInfoSet)
|
||||
|
||||
service := &Service{
|
||||
ctx: ctx,
|
||||
host: p1.BHost,
|
||||
pubsub: p1.PubSub(),
|
||||
joinedTopics: map[string]*pubsub.Topic{},
|
||||
cfg: &Config{DB: db},
|
||||
genesisTime: time.Now(),
|
||||
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
|
||||
subnetsLock: make(map[uint64]*sync.RWMutex),
|
||||
subnetsLockLock: sync.Mutex{},
|
||||
peers: peers.NewStatus(ctx, &peers.StatusConfig{ScorerParams: &scorers.Config{}}),
|
||||
custodyInfo: &custodyInfo{},
|
||||
custodyInfoSet: custodyInfoSet,
|
||||
}
|
||||
|
||||
// Create a listener for discovery.
|
||||
listener, err := service.startDiscoveryV5(ipAddr, pkey)
|
||||
require.NoError(t, err)
|
||||
service.dv5Listener = listener
|
||||
|
||||
digest, err := service.currentForkDigest()
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create multiple data column sidecars with different column indices.
|
||||
// Use indices that map to different subnets: 0, 32, 64 (assuming 128 columns and 64 subnets).
|
||||
columnIndices := []uint64{0, 32, 64}
|
||||
params := make([]util.DataColumnParam, len(columnIndices))
|
||||
for i, idx := range columnIndices {
|
||||
params[i] = util.DataColumnParam{Index: idx}
|
||||
}
|
||||
_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
|
||||
|
||||
expectedTopics := make(map[string]bool)
|
||||
// Subscribe peers to the relevant topics.
|
||||
for _, idx := range columnIndices {
|
||||
subnet := peerdas.ComputeSubnetForDataColumnSidecar(idx)
|
||||
topic := fmt.Sprintf(topicFormat, digest, subnet) + service.Encoding().ProtocolSuffix()
|
||||
for _, p := range expectedPeers {
|
||||
_, err = p.SubscribeToTopic(topic)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
expectedTopics[topic] = true
|
||||
}
|
||||
// libp2p needs some time to establish mesh connections.
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
// Broadcast all sidecars.
|
||||
err = service.BroadcastDataColumnSidecars(ctx, verifiedRoSidecars)
|
||||
require.NoError(t, err)
|
||||
// Give some time for messages to be sent.
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
topics := tracer.getTopics()
|
||||
if len(topics) == 0 {
|
||||
t.Fatal("Expected at least one message for each topic to be sent to each peer")
|
||||
}
|
||||
|
||||
unseen := make(map[string]bool)
|
||||
for k := range expectedTopics {
|
||||
unseen[k] = true
|
||||
}
|
||||
// Verify round-robin invariant: before all message IDs are seen, no message ID may be repeated.
|
||||
// In round-robin order, we should see each topic once before any topic repeats.
|
||||
for _, topic := range topics {
|
||||
if !expectedTopics[topic] {
|
||||
continue
|
||||
}
|
||||
if !unseen[topic] {
|
||||
t.Errorf("Topic %s repeated before all topics were seen once. This violates round-robin ordering.", topic)
|
||||
}
|
||||
delete(unseen, topic)
|
||||
if len(unseen) == 0 {
|
||||
break // all have been seen
|
||||
}
|
||||
}
|
||||
require.Equal(t, 0, len(unseen))
|
||||
|
||||
// Verify that we actually saw all expected topics.
|
||||
for topic := range expectedTopics {
|
||||
require.Equal(t, len(expectedPeers), len(tracer.byTopic[topic]))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -482,12 +482,12 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
|
||||
s.Start()
|
||||
<-exitRoutine
|
||||
}()
|
||||
time.Sleep(50 * time.Millisecond) // Wait for service initialization
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
var vr [32]byte
|
||||
require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
|
||||
require.Eventually(t, func() bool {
|
||||
return len(s.host.Network().Peers()) == 5
|
||||
}, 10*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
|
||||
time.Sleep(4 * time.Second)
|
||||
ps := s.host.Network().Peers()
|
||||
assert.Equal(t, 5, len(ps), "Not all peers added to peerstore")
|
||||
require.NoError(t, s.Stop())
|
||||
exitRoutine <- true
|
||||
}
|
||||
|
||||
@@ -99,27 +99,6 @@ func (s *Service) PublishToTopic(ctx context.Context, topic string, data []byte,
|
||||
}
|
||||
}
|
||||
|
||||
// addToBatch joins (if necessary) a topic and adds the message to a message batch.
|
||||
func (s *Service) addToBatch(ctx context.Context, batch *pubsub.MessageBatch, topic string, data []byte, opts ...pubsub.PubOpt) error {
|
||||
topicHandle, err := s.JoinTopic(topic)
|
||||
if err != nil {
|
||||
return fmt.Errorf("joining topic: %w", err)
|
||||
}
|
||||
|
||||
// Wait for at least 1 peer to be available to receive the published message.
|
||||
for {
|
||||
if flags.Get().MinimumSyncPeers == 0 || len(topicHandle.ListPeers()) > 0 {
|
||||
return topicHandle.AddToBatch(ctx, batch, data, opts...)
|
||||
}
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return errors.Wrapf(ctx.Err(), "unable to find requisite number of peers for topic %s, 0 peers found to publish to", topic)
|
||||
case <-time.After(100 * time.Millisecond):
|
||||
// reenter the for loop after 100ms
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// SubscribeToTopic joins (if necessary) and subscribes to PubSub topic.
|
||||
func (s *Service) SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub.Subscription, error) {
|
||||
s.awaitStateInitialized() // Genesis time and genesis validators root are required to subscribe.
|
||||
|
||||
@@ -80,9 +80,8 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
|
||||
}()
|
||||
var vr [32]byte
|
||||
require.NoError(t, cs.SetClock(startup.NewClock(time.Now(), vr)))
|
||||
require.Eventually(t, func() bool {
|
||||
return s.started
|
||||
}, 5*time.Second, 100*time.Millisecond, "Expected service to be started")
|
||||
time.Sleep(time.Second * 2)
|
||||
assert.Equal(t, true, s.started, "Expected service to be started")
|
||||
s.Start()
|
||||
require.LogsContain(t, hook, "Attempted to start p2p service when it was already started")
|
||||
require.NoError(t, s.Stop())
|
||||
@@ -261,9 +260,17 @@ func TestListenForNewNodes(t *testing.T) {
|
||||
err = cs.SetClock(startup.NewClock(genesisTime, gvr))
|
||||
require.NoError(t, err, "Could not set clock in service")
|
||||
|
||||
require.Eventually(t, func() bool {
|
||||
return len(s.host.Network().Peers()) == peerCount
|
||||
}, 5*time.Second, 100*time.Millisecond, "Not all peers added to peerstore")
|
||||
actualPeerCount := len(s.host.Network().Peers())
|
||||
for range 40 {
|
||||
if actualPeerCount == peerCount {
|
||||
break
|
||||
}
|
||||
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
actualPeerCount = len(s.host.Network().Peers())
|
||||
}
|
||||
|
||||
assert.Equal(t, peerCount, actualPeerCount, "Not all peers added to peerstore")
|
||||
|
||||
err = s.Stop()
|
||||
require.NoError(t, err, "Failed to stop service")
|
||||
|
||||
@@ -70,11 +70,6 @@ type TestP2P struct {
|
||||
|
||||
// NewTestP2P initializes a new p2p test service.
|
||||
func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
|
||||
return NewTestP2PWithPubsubOptions(t, nil, userOptions...)
|
||||
}
|
||||
|
||||
// NewTestP2PWithPubsubOptions initializes a new p2p test service with custom pubsub options.
|
||||
func NewTestP2PWithPubsubOptions(t *testing.T, pubsubOpts []pubsub.Option, userOptions ...config.Option) *TestP2P {
|
||||
ctx := context.Background()
|
||||
options := []config.Option{
|
||||
libp2p.ResourceManager(&network.NullResourceManager{}),
|
||||
@@ -89,14 +84,10 @@ func NewTestP2PWithPubsubOptions(t *testing.T, pubsubOpts []pubsub.Option, userO
|
||||
|
||||
h, err := libp2p.New(options...)
|
||||
require.NoError(t, err)
|
||||
|
||||
defaultPubsubOpts := []pubsub.Option{
|
||||
ps, err := pubsub.NewFloodSub(ctx, h,
|
||||
pubsub.WithMessageSigning(false),
|
||||
pubsub.WithStrictSignatureVerification(false),
|
||||
}
|
||||
allPubsubOpts := append(defaultPubsubOpts, pubsubOpts...)
|
||||
|
||||
ps, err := pubsub.NewGossipSub(ctx, h, allPubsubOpts...)
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
@@ -657,9 +657,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
|
||||
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
|
||||
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
|
||||
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
|
||||
require.Eventually(t, func() bool {
|
||||
return s.AttestationsPool.UnaggregatedAttestationCount() == 1
|
||||
}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
|
||||
time.Sleep(100 * time.Millisecond) // Wait for async pool save
|
||||
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
|
||||
})
|
||||
t.Run("multiple", func(t *testing.T) {
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
@@ -678,9 +677,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
assert.Equal(t, 2, broadcaster.NumAttestations())
|
||||
require.Eventually(t, func() bool {
|
||||
return s.AttestationsPool.UnaggregatedAttestationCount() == 2
|
||||
}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
|
||||
time.Sleep(100 * time.Millisecond) // Wait for async pool save
|
||||
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
|
||||
})
|
||||
t.Run("phase0 att post electra", func(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
@@ -800,9 +798,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
|
||||
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
|
||||
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
|
||||
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
|
||||
require.Eventually(t, func() bool {
|
||||
return s.AttestationsPool.UnaggregatedAttestationCount() == 1
|
||||
}, time.Second, 10*time.Millisecond, "Expected 1 attestation in pool")
|
||||
time.Sleep(100 * time.Millisecond) // Wait for async pool save
|
||||
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
|
||||
})
|
||||
t.Run("multiple", func(t *testing.T) {
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
@@ -821,9 +818,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
assert.Equal(t, 2, broadcaster.NumAttestations())
|
||||
require.Eventually(t, func() bool {
|
||||
return s.AttestationsPool.UnaggregatedAttestationCount() == 2
|
||||
}, time.Second, 10*time.Millisecond, "Expected 2 attestations in pool")
|
||||
time.Sleep(100 * time.Millisecond) // Wait for async pool save
|
||||
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
|
||||
})
|
||||
t.Run("no body", func(t *testing.T) {
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
|
||||
@@ -1379,9 +1375,9 @@ func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
|
||||
writer.Body = &bytes.Buffer{}
|
||||
s.SubmitBLSToExecutionChanges(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
require.Eventually(t, func() bool {
|
||||
return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages) == numValidators
|
||||
}, time.Second, 10*time.Millisecond, "Broadcast should be called with all messages")
|
||||
time.Sleep(100 * time.Millisecond) // Delay to let the routine start
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages))
|
||||
|
||||
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
|
||||
require.Equal(t, len(poolChanges), len(signedChanges))
|
||||
@@ -1595,10 +1591,10 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
|
||||
|
||||
s.SubmitBLSToExecutionChanges(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
|
||||
require.StringContains(t, "One or more messages failed validation", writer.Body.String())
|
||||
require.Eventually(t, func() bool {
|
||||
return broadcaster.BroadcastCalled.Load() && len(broadcaster.BroadcastMessages)+1 == numValidators
|
||||
}, time.Second, 10*time.Millisecond, "Broadcast should be called with expected messages")
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)
|
||||
|
||||
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
|
||||
require.Equal(t, len(poolChanges)+1, len(signedChanges))
|
||||
|
||||
@@ -89,7 +89,13 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
	}
	// Set slot, graffiti, randao reveal, and parent root.
	sBlk.SetSlot(req.Slot)
	// Generate graffiti with client version info using the flexible standard
	if vs.GraffitiInfo != nil {
		graffiti := vs.GraffitiInfo.GenerateGraffiti(req.Graffiti)
		sBlk.SetGraffiti(graffiti[:])
	} else {
		sBlk.SetGraffiti(req.Graffiti)
	}
	sBlk.SetRandaoReveal(req.RandaoReveal)
	sBlk.SetParentRoot(parentRoot[:])
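A compact sketch of the nil-guarded fallback applied above (types are illustrative): with no GraffitiInfo wired in, the user-supplied graffiti passes through untouched; otherwise the generated 32-byte value is used.

package main

import "fmt"

// generator abstracts the graffiti source; fakeGen stands in for the real one.
type generator interface {
	GenerateGraffiti(user []byte) [32]byte
}

// resolveGraffiti returns the user bytes when no generator is configured,
// otherwise the generated 32-byte graffiti.
func resolveGraffiti(g generator, user []byte) []byte {
	if g == nil {
		return user
	}
	out := g.GenerateGraffiti(user)
	return out[:]
}

type fakeGen struct{}

func (fakeGen) GenerateGraffiti(user []byte) [32]byte {
	var out [32]byte
	copy(out[:], append([]byte("GEabcdPR1234 "), user...))
	return out
}

func main() {
	fmt.Printf("%q\n", resolveGraffiti(nil, []byte("Sushi")))
	fmt.Printf("%q\n", resolveGraffiti(fakeGen{}, []byte("Sushi")))
}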
@@ -602,7 +608,7 @@ func (vs *Server) GetFeeRecipientByPubKey(ctx context.Context, request *ethpb.Fe
|
||||
|
||||
// computeStateRoot computes the state root after a block has been processed through a state transition and
|
||||
// returns it to the validator client.
|
||||
func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.SignedBeaconBlock) ([]byte, error) {
|
||||
func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
|
||||
beaconState, err := vs.StateGen.StateByRoot(ctx, block.Block().ParentRoot())
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not retrieve beacon state")
|
||||
@@ -613,72 +619,13 @@ func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.SignedB
|
||||
block,
|
||||
)
|
||||
if err != nil {
|
||||
return vs.handleStateRootError(ctx, block, err)
|
||||
return nil, errors.Wrapf(err, "could not calculate state root at slot %d", beaconState.Slot())
|
||||
}
|
||||
|
||||
log.WithField("beaconStateRoot", fmt.Sprintf("%#x", root)).Debugf("Computed state root")
|
||||
return root[:], nil
|
||||
}
|
||||
|
||||
type computeStateRootAttemptsKeyType string
|
||||
|
||||
const computeStateRootAttemptsKey = computeStateRootAttemptsKeyType("compute-state-root-attempts")
|
||||
const maxComputeStateRootAttempts = 3
|
||||
|
||||
// handleStateRootError retries block construction in some error cases.
|
||||
func (vs *Server) handleStateRootError(ctx context.Context, block interfaces.SignedBeaconBlock, err error) ([]byte, error) {
|
||||
if ctx.Err() != nil {
|
||||
return nil, status.Errorf(codes.Canceled, "context error: %v", ctx.Err())
|
||||
}
|
||||
switch {
|
||||
case errors.Is(err, transition.ErrAttestationsSignatureInvalid),
|
||||
errors.Is(err, transition.ErrProcessAttestationsFailed):
|
||||
log.WithError(err).Warn("Retrying block construction without attestations")
|
||||
if err := block.SetAttestations([]ethpb.Att{}); err != nil {
|
||||
return nil, errors.Wrap(err, "could not set attestations")
|
||||
}
|
||||
case errors.Is(err, transition.ErrProcessBLSChangesFailed), errors.Is(err, transition.ErrBLSToExecutionChangesSignatureInvalid):
|
||||
log.WithError(err).Warn("Retrying block construction without BLS to execution changes")
|
||||
if err := block.SetBLSToExecutionChanges([]*ethpb.SignedBLSToExecutionChange{}); err != nil {
|
||||
return nil, errors.Wrap(err, "could not set BLS to execution changes")
|
||||
}
|
||||
case errors.Is(err, transition.ErrProcessProposerSlashingsFailed):
|
||||
log.WithError(err).Warn("Retrying block construction without proposer slashings")
|
||||
block.SetProposerSlashings([]*ethpb.ProposerSlashing{})
|
||||
case errors.Is(err, transition.ErrProcessAttesterSlashingsFailed):
|
||||
log.WithError(err).Warn("Retrying block construction without attester slashings")
|
||||
if err := block.SetAttesterSlashings([]ethpb.AttSlashing{}); err != nil {
|
||||
return nil, errors.Wrap(err, "could not set attester slashings")
|
||||
}
|
||||
case errors.Is(err, transition.ErrProcessVoluntaryExitsFailed):
|
||||
log.WithError(err).Warn("Retrying block construction without voluntary exits")
|
||||
block.SetVoluntaryExits([]*ethpb.SignedVoluntaryExit{})
|
||||
case errors.Is(err, transition.ErrProcessSyncAggregateFailed):
|
||||
log.WithError(err).Warn("Retrying block construction without sync aggregate")
|
||||
emptySig := [96]byte{0xC0}
|
||||
emptyAggregate := ðpb.SyncAggregate{
|
||||
SyncCommitteeBits: make([]byte, params.BeaconConfig().SyncCommitteeSize/8),
|
||||
SyncCommitteeSignature: emptySig[:],
|
||||
}
|
||||
if err := block.SetSyncAggregate(emptyAggregate); err != nil {
|
||||
log.WithError(err).Error("Could not set sync aggregate")
|
||||
}
|
||||
|
||||
default:
|
||||
return nil, errors.Wrap(err, "could not compute state root")
|
||||
}
|
||||
// prevent deep recursion by limiting max attempts.
|
||||
if v, ok := ctx.Value(computeStateRootAttemptsKey).(int); !ok {
|
||||
ctx = context.WithValue(ctx, computeStateRootAttemptsKey, int(1))
|
||||
} else if v >= maxComputeStateRootAttempts {
|
||||
return nil, fmt.Errorf("attempted max compute state root attempts %d", maxComputeStateRootAttempts)
|
||||
} else {
|
||||
ctx = context.WithValue(ctx, computeStateRootAttemptsKey, v+1)
|
||||
}
|
||||
// recursive call to compute state root again
|
||||
return vs.computeStateRoot(ctx, block)
|
||||
}
|
||||
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// SubmitValidatorRegistrations submits validator registrations.
|
||||
|
||||
@@ -1313,59 +1313,6 @@ func TestProposer_ComputeStateRoot_OK(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestHandleStateRootError_MaxAttemptsReached(t *testing.T) {
|
||||
// Test that handleStateRootError returns an error when max attempts is reached
|
||||
// instead of recursing infinitely.
|
||||
ctx := t.Context()
|
||||
vs := &Server{}
|
||||
|
||||
// Create a minimal block for testing
|
||||
blk := util.NewBeaconBlock()
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Pre-seed the context with max attempts already reached
|
||||
ctx = context.WithValue(ctx, computeStateRootAttemptsKey, maxComputeStateRootAttempts)
|
||||
|
||||
// Call handleStateRootError with a retryable error
|
||||
_, err = vs.handleStateRootError(ctx, wsb, transition.ErrAttestationsSignatureInvalid)
|
||||
|
||||
// Should return an error about max attempts instead of recursing
|
||||
require.ErrorContains(t, "attempted max compute state root attempts", err)
|
||||
}
|
||||
|
||||
func TestHandleStateRootError_IncrementsAttempts(t *testing.T) {
|
||||
// Test that handleStateRootError properly increments the attempts counter
|
||||
// and eventually fails after max attempts.
|
||||
db := dbutil.SetupDB(t)
|
||||
ctx := t.Context()
|
||||
|
||||
beaconState, parentRoot, _ := util.DeterministicGenesisStateWithGenesisBlock(t, ctx, db, 100)
|
||||
|
||||
stateGen := stategen.New(db, doublylinkedtree.New())
|
||||
vs := &Server{
|
||||
StateGen: stateGen,
|
||||
}
|
||||
|
||||
// Create a block that will trigger retries
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.ParentRoot = parentRoot[:]
|
||||
blk.Block.Slot = 1
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Add a state for the parent root so StateByRoot succeeds
|
||||
require.NoError(t, stateGen.SaveState(ctx, parentRoot, beaconState))
|
||||
|
||||
// Call handleStateRootError with a retryable error - it will recurse
|
||||
// but eventually hit the max attempts limit since CalculateStateRoot
|
||||
// will keep failing (no valid attestations, randao, etc.)
|
||||
_, err = vs.handleStateRootError(ctx, wsb, transition.ErrAttestationsSignatureInvalid)
|
||||
|
||||
// Should eventually fail - either with max attempts or another error
|
||||
require.NotNil(t, err)
|
||||
}
|
||||
|
||||
func TestProposer_PendingDeposits_Eth1DataVoteOK(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
|
||||
|
||||
@@ -83,6 +83,7 @@ type Server struct {
	ClockWaiter             startup.ClockWaiter
	CoreService             *core.Service
	AttestationStateFetcher blockchain.AttestationStateFetcher
	GraffitiInfo            *execution.GraffitiInfo
}

// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.

@@ -125,6 +125,7 @@ type Config struct {
	TrackedValidatorsCache *cache.TrackedValidatorsCache
	PayloadIDCache         *cache.PayloadIDCache
	LCStore                *lightClient.Store
	GraffitiInfo           *execution.GraffitiInfo
}

// NewService instantiates a new RPC service instance that will
@@ -256,6 +257,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
		TrackedValidatorsCache:  s.cfg.TrackedValidatorsCache,
		PayloadIDCache:          s.cfg.PayloadIDCache,
		AttestationStateFetcher: s.cfg.AttestationReceiver,
		GraffitiInfo:            s.cfg.GraffitiInfo,
	}
	s.validatorServer = validatorServer
	nodeServer := &nodev1alpha1.Server{
||||
@@ -11,7 +11,6 @@ go_library(
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//beacon-chain/state/state-native/custom-types:go_default_library",
|
||||
"//beacon-chain/state/state-native/types:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//consensus-types/interfaces:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
|
||||
@@ -10,7 +10,6 @@ import (
|
||||
|
||||
"github.com/OffchainLabs/go-bitfield"
|
||||
customtypes "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/custom-types"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
|
||||
@@ -44,8 +43,6 @@ type Prover interface {
|
||||
FinalizedRootProof(ctx context.Context) ([][]byte, error)
|
||||
CurrentSyncCommitteeProof(ctx context.Context) ([][]byte, error)
|
||||
NextSyncCommitteeProof(ctx context.Context) ([][]byte, error)
|
||||
|
||||
ProofByFieldIndex(ctx context.Context, f types.FieldIndex) ([][]byte, error)
|
||||
}
|
||||
|
||||
// ReadOnlyBeaconState defines a struct which only has read access to beacon state methods.
|
||||
|
||||
@@ -102,7 +102,6 @@ go_test(
|
||||
"getters_test.go",
|
||||
"getters_validator_test.go",
|
||||
"getters_withdrawal_test.go",
|
||||
"gloas_test.go",
|
||||
"hasher_test.go",
|
||||
"mvslice_fuzz_test.go",
|
||||
"proofs_test.go",
|
||||
@@ -157,7 +156,6 @@ go_test(
|
||||
"@com_github_google_go_cmp//cmp:go_default_library",
|
||||
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@com_github_stretchr_testify//require:go_default_library",
|
||||
"@org_golang_google_protobuf//proto:go_default_library",
|
||||
"@org_golang_google_protobuf//testing/protocmp:go_default_library",
|
||||
],
|
||||
|
||||
@@ -72,13 +72,11 @@ type BeaconState struct {
|
||||
|
||||
// Gloas fields
|
||||
latestExecutionPayloadBid *ethpb.ExecutionPayloadBid
|
||||
builders []*ethpb.Builder
|
||||
nextWithdrawalBuilderIndex primitives.BuilderIndex
|
||||
executionPayloadAvailability []byte
|
||||
builderPendingPayments []*ethpb.BuilderPendingPayment
|
||||
builderPendingWithdrawals []*ethpb.BuilderPendingWithdrawal
|
||||
latestBlockHash []byte
|
||||
payloadExpectedWithdrawals []*enginev1.Withdrawal
|
||||
latestWithdrawalsRoot []byte
|
||||
|
||||
id uint64
|
||||
lock sync.RWMutex
|
||||
@@ -136,13 +134,11 @@ type beaconStateMarshalable struct {
|
||||
PendingConsolidations []*ethpb.PendingConsolidation `json:"pending_consolidations" yaml:"pending_consolidations"`
|
||||
ProposerLookahead []primitives.ValidatorIndex `json:"proposer_look_ahead" yaml:"proposer_look_ahead"`
|
||||
LatestExecutionPayloadBid *ethpb.ExecutionPayloadBid `json:"latest_execution_payload_bid" yaml:"latest_execution_payload_bid"`
|
||||
Builders []*ethpb.Builder `json:"builders" yaml:"builders"`
|
||||
NextWithdrawalBuilderIndex primitives.BuilderIndex `json:"next_withdrawal_builder_index" yaml:"next_withdrawal_builder_index"`
|
||||
ExecutionPayloadAvailability []byte `json:"execution_payload_availability" yaml:"execution_payload_availability"`
|
||||
BuilderPendingPayments []*ethpb.BuilderPendingPayment `json:"builder_pending_payments" yaml:"builder_pending_payments"`
|
||||
BuilderPendingWithdrawals []*ethpb.BuilderPendingWithdrawal `json:"builder_pending_withdrawals" yaml:"builder_pending_withdrawals"`
|
||||
LatestBlockHash []byte `json:"latest_block_hash" yaml:"latest_block_hash"`
|
||||
PayloadExpectedWithdrawals []*enginev1.Withdrawal `json:"payload_expected_withdrawals" yaml:"payload_expected_withdrawals"`
|
||||
LatestWithdrawalsRoot []byte `json:"latest_withdrawals_root" yaml:"latest_withdrawals_root"`
|
||||
}
|
||||
|
||||
func (b *BeaconState) MarshalJSON() ([]byte, error) {
|
||||
@@ -198,13 +194,11 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
|
||||
PendingConsolidations: b.pendingConsolidations,
|
||||
ProposerLookahead: b.proposerLookahead,
|
||||
LatestExecutionPayloadBid: b.latestExecutionPayloadBid,
|
||||
Builders: b.builders,
|
||||
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
|
||||
ExecutionPayloadAvailability: b.executionPayloadAvailability,
|
||||
BuilderPendingPayments: b.builderPendingPayments,
|
||||
BuilderPendingWithdrawals: b.builderPendingWithdrawals,
|
||||
LatestBlockHash: b.latestBlockHash,
|
||||
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawals,
|
||||
LatestWithdrawalsRoot: b.latestWithdrawalsRoot,
|
||||
}
|
||||
return json.Marshal(marshalable)
|
||||
}
|
||||
|
||||
@@ -56,7 +56,9 @@ func (r StateRoots) MarshalSSZTo(dst []byte) ([]byte, error) {
|
||||
func (r StateRoots) MarshalSSZ() ([]byte, error) {
|
||||
marshalled := make([]byte, fieldparams.StateRootsLength*32)
|
||||
for i, r32 := range r {
|
||||
copy(marshalled[i*32:(i+1)*32], r32[:])
|
||||
for j, rr := range r32 {
|
||||
marshalled[i*32+j] = rr
|
||||
}
|
||||
}
|
||||
return marshalled, nil
|
||||
}
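Both variants visible in the hunk above produce the same flat SSZ encoding. A minimal sketch of that layout, under the assumption of a plain slice of 32-byte roots (flattenRoots is a hypothetical name, not Prysm code):

package example

// flattenRoots illustrates the layout MarshalSSZ produces: an SSZ vector of 32-byte
// roots is simply the roots concatenated in order, so root i occupies bytes
// [i*32, (i+1)*32).
func flattenRoots(roots [][32]byte) []byte {
	out := make([]byte, len(roots)*32)
	for i := range roots {
		copy(out[i*32:(i+1)*32], roots[i][:])
	}
	return out
}
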
|
||||
|
||||
@@ -305,12 +305,10 @@ func (b *BeaconState) ToProtoUnsafe() any {
|
||||
PendingConsolidations: b.pendingConsolidations,
|
||||
ProposerLookahead: lookahead,
|
||||
ExecutionPayloadAvailability: b.executionPayloadAvailability,
|
||||
Builders: b.builders,
|
||||
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
|
||||
BuilderPendingPayments: b.builderPendingPayments,
|
||||
BuilderPendingWithdrawals: b.builderPendingWithdrawals,
|
||||
LatestBlockHash: b.latestBlockHash,
|
||||
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawals,
|
||||
LatestWithdrawalsRoot: b.latestWithdrawalsRoot,
|
||||
}
|
||||
default:
|
||||
return nil
|
||||
@@ -609,12 +607,10 @@ func (b *BeaconState) ToProto() any {
|
||||
PendingConsolidations: b.pendingConsolidationsVal(),
|
||||
ProposerLookahead: lookahead,
|
||||
ExecutionPayloadAvailability: b.executionPayloadAvailabilityVal(),
|
||||
Builders: b.buildersVal(),
|
||||
NextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
|
||||
BuilderPendingPayments: b.builderPendingPaymentsVal(),
|
||||
BuilderPendingWithdrawals: b.builderPendingWithdrawalsVal(),
|
||||
LatestBlockHash: b.latestBlockHashVal(),
|
||||
PayloadExpectedWithdrawals: b.payloadExpectedWithdrawalsVal(),
|
||||
LatestWithdrawalsRoot: b.latestWithdrawalsRootVal(),
|
||||
}
|
||||
default:
|
||||
return nil
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
package state_native
|
||||
|
||||
import (
|
||||
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
)
|
||||
|
||||
@@ -48,22 +47,6 @@ func (b *BeaconState) builderPendingWithdrawalsVal() []*ethpb.BuilderPendingWith
|
||||
return withdrawals
|
||||
}
|
||||
|
||||
// buildersVal returns a copy of the builders registry.
|
||||
// This assumes that a lock is already held on BeaconState.
|
||||
func (b *BeaconState) buildersVal() []*ethpb.Builder {
|
||||
if b.builders == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
builders := make([]*ethpb.Builder, len(b.builders))
|
||||
for i := range builders {
|
||||
builder := b.builders[i]
|
||||
builders[i] = ethpb.CopyBuilder(builder)
|
||||
}
|
||||
|
||||
return builders
|
||||
}
|
||||
|
||||
// latestBlockHashVal returns a copy of the latest block hash.
|
||||
// This assumes that a lock is already held on BeaconState.
|
||||
func (b *BeaconState) latestBlockHashVal() []byte {
|
||||
@@ -77,17 +60,15 @@ func (b *BeaconState) latestBlockHashVal() []byte {
|
||||
return hash
|
||||
}
|
||||
|
||||
// payloadExpectedWithdrawalsVal returns a copy of the payload expected withdrawals.
|
||||
// latestWithdrawalsRootVal returns a copy of the latest withdrawals root.
|
||||
// This assumes that a lock is already held on BeaconState.
|
||||
func (b *BeaconState) payloadExpectedWithdrawalsVal() []*enginev1.Withdrawal {
|
||||
if b.payloadExpectedWithdrawals == nil {
|
||||
func (b *BeaconState) latestWithdrawalsRootVal() []byte {
|
||||
if b.latestWithdrawalsRoot == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
withdrawals := make([]*enginev1.Withdrawal, len(b.payloadExpectedWithdrawals))
|
||||
for i, withdrawal := range b.payloadExpectedWithdrawals {
|
||||
withdrawals[i] = withdrawal.Copy()
|
||||
}
|
||||
root := make([]byte, len(b.latestWithdrawalsRoot))
|
||||
copy(root, b.latestWithdrawalsRoot)
|
||||
|
||||
return withdrawals
|
||||
return root
|
||||
}
|
||||
|
||||
@@ -1,43 +0,0 @@
|
||||
package state_native
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestBuildersVal(t *testing.T) {
|
||||
st := &BeaconState{}
|
||||
|
||||
require.Nil(t, st.buildersVal())
|
||||
|
||||
st.builders = []*ethpb.Builder{
|
||||
{Pubkey: []byte{0x01}, ExecutionAddress: []byte{0x02}, Balance: 3},
|
||||
nil,
|
||||
}
|
||||
|
||||
got := st.buildersVal()
|
||||
require.Len(t, got, 2)
|
||||
require.Nil(t, got[1])
|
||||
require.Equal(t, st.builders[0], got[0])
|
||||
require.NotSame(t, st.builders[0], got[0])
|
||||
}
|
||||
|
||||
func TestPayloadExpectedWithdrawalsVal(t *testing.T) {
|
||||
st := &BeaconState{}
|
||||
|
||||
require.Nil(t, st.payloadExpectedWithdrawalsVal())
|
||||
|
||||
st.payloadExpectedWithdrawals = []*enginev1.Withdrawal{
|
||||
{Index: 1, ValidatorIndex: 2, Address: []byte{0x03}, Amount: 4},
|
||||
nil,
|
||||
}
|
||||
|
||||
got := st.payloadExpectedWithdrawalsVal()
|
||||
require.Len(t, got, 2)
|
||||
require.Nil(t, got[1])
|
||||
require.Equal(t, st.payloadExpectedWithdrawals[0], got[0])
|
||||
require.NotSame(t, st.payloadExpectedWithdrawals[0], got[0])
|
||||
}
|
||||
@@ -342,15 +342,6 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
|
||||
}
|
||||
|
||||
if state.version >= version.Gloas {
|
||||
buildersRoot, err := stateutil.BuildersRoot(state.builders)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not compute builders merkleization")
|
||||
}
|
||||
fieldRoots[types.Builders.RealPosition()] = buildersRoot[:]
|
||||
|
||||
nextWithdrawalBuilderIndexRoot := ssz.Uint64Root(uint64(state.nextWithdrawalBuilderIndex))
|
||||
fieldRoots[types.NextWithdrawalBuilderIndex.RealPosition()] = nextWithdrawalBuilderIndexRoot[:]
|
||||
|
||||
epaRoot, err := stateutil.ExecutionPayloadAvailabilityRoot(state.executionPayloadAvailability)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not compute execution payload availability merkleization")
|
||||
@@ -375,12 +366,8 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
|
||||
lbhRoot := bytesutil.ToBytes32(state.latestBlockHash)
|
||||
fieldRoots[types.LatestBlockHash.RealPosition()] = lbhRoot[:]
|
||||
|
||||
expectedWithdrawalsRoot, err := ssz.WithdrawalSliceRoot(state.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not compute payload expected withdrawals root")
|
||||
}
|
||||
|
||||
fieldRoots[types.PayloadExpectedWithdrawals.RealPosition()] = expectedWithdrawalsRoot[:]
|
||||
lwrRoot := bytesutil.ToBytes32(state.latestWithdrawalsRoot)
|
||||
fieldRoots[types.LatestWithdrawalsRoot.RealPosition()] = lwrRoot[:]
|
||||
}
|
||||
return fieldRoots, nil
|
||||
}
|
||||
|
||||
@@ -5,7 +5,6 @@ import (
|
||||
"encoding/binary"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
"github.com/OffchainLabs/prysm/v7/container/trie"
|
||||
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v7/runtime/version"
|
||||
@@ -40,51 +39,33 @@ func (b *BeaconState) NextSyncCommitteeGeneralizedIndex() (uint64, error) {
|
||||
|
||||
// CurrentSyncCommitteeProof from the state's Merkle trie representation.
|
||||
func (b *BeaconState) CurrentSyncCommitteeProof(ctx context.Context) ([][]byte, error) {
|
||||
return b.ProofByFieldIndex(ctx, types.CurrentSyncCommittee)
|
||||
b.lock.Lock()
|
||||
defer b.lock.Unlock()
|
||||
|
||||
if b.version == version.Phase0 {
|
||||
return nil, errNotSupported("CurrentSyncCommitteeProof", b.version)
|
||||
}
|
||||
|
||||
// In case the Merkle layers of the trie are not populated, we need
|
||||
// to perform some initialization.
|
||||
if err := b.initializeMerkleLayers(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Our beacon state uses a "dirty" fields pattern which requires us to
|
||||
// recompute branches of the Merkle layers that are marked as dirty.
|
||||
if err := b.recomputeDirtyFields(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return trie.ProofFromMerkleLayers(b.merkleLayers, types.CurrentSyncCommittee.RealPosition()), nil
|
||||
}
|
||||
|
||||
// NextSyncCommitteeProof from the state's Merkle trie representation.
|
||||
func (b *BeaconState) NextSyncCommitteeProof(ctx context.Context) ([][]byte, error) {
|
||||
return b.ProofByFieldIndex(ctx, types.NextSyncCommittee)
|
||||
}
|
||||
|
||||
// FinalizedRootProof crafts a Merkle proof for the finalized root
|
||||
// contained within the finalized checkpoint of a beacon state.
|
||||
func (b *BeaconState) FinalizedRootProof(ctx context.Context) ([][]byte, error) {
|
||||
b.lock.Lock()
|
||||
defer b.lock.Unlock()
|
||||
|
||||
branchProof, err := b.proofByFieldIndex(ctx, types.FinalizedCheckpoint)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// The epoch field of a finalized checkpoint is the neighbor
|
||||
// index of the finalized root field in its Merkle tree representation
|
||||
// of the checkpoint. This neighbor is the first element added to the proof.
|
||||
epochBuf := make([]byte, 8)
|
||||
binary.LittleEndian.PutUint64(epochBuf, uint64(b.finalizedCheckpointVal().Epoch))
|
||||
epochRoot := bytesutil.ToBytes32(epochBuf)
|
||||
proof := make([][]byte, 0)
|
||||
proof = append(proof, epochRoot[:])
|
||||
proof = append(proof, branchProof...)
|
||||
return proof, nil
|
||||
}
|
||||
|
||||
// ProofByFieldIndex constructs proofs for given field index with lock acquisition.
|
||||
func (b *BeaconState) ProofByFieldIndex(ctx context.Context, f types.FieldIndex) ([][]byte, error) {
|
||||
b.lock.Lock()
|
||||
defer b.lock.Unlock()
|
||||
|
||||
return b.proofByFieldIndex(ctx, f)
|
||||
}
|
||||
|
||||
// proofByFieldIndex constructs proofs for given field index.
|
||||
// Important: it is assumed that beacon state mutex is locked when calling this method.
|
||||
func (b *BeaconState) proofByFieldIndex(ctx context.Context, f types.FieldIndex) ([][]byte, error) {
|
||||
err := b.validateFieldIndex(f)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
if b.version == version.Phase0 {
|
||||
return nil, errNotSupported("NextSyncCommitteeProof", b.version)
|
||||
}
|
||||
|
||||
if err := b.initializeMerkleLayers(ctx); err != nil {
|
||||
@@ -93,40 +74,35 @@ func (b *BeaconState) proofByFieldIndex(ctx context.Context, f types.FieldIndex)
|
||||
if err := b.recomputeDirtyFields(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return trie.ProofFromMerkleLayers(b.merkleLayers, f.RealPosition()), nil
|
||||
return trie.ProofFromMerkleLayers(b.merkleLayers, types.NextSyncCommittee.RealPosition()), nil
|
||||
}
|
||||
|
||||
func (b *BeaconState) validateFieldIndex(f types.FieldIndex) error {
|
||||
switch b.version {
|
||||
case version.Phase0:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
case version.Altair:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateAltairFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
case version.Bellatrix:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateBellatrixFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
case version.Capella:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateCapellaFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
case version.Deneb:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateDenebFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
case version.Electra:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateElectraFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
case version.Fulu:
|
||||
if f.RealPosition() > params.BeaconConfig().BeaconStateFuluFieldCount-1 {
|
||||
return errNotSupported(f.String(), b.version)
|
||||
}
|
||||
// FinalizedRootProof crafts a Merkle proof for the finalized root
|
||||
// contained within the finalized checkpoint of a beacon state.
|
||||
func (b *BeaconState) FinalizedRootProof(ctx context.Context) ([][]byte, error) {
|
||||
b.lock.Lock()
|
||||
defer b.lock.Unlock()
|
||||
|
||||
if b.version == version.Phase0 {
|
||||
return nil, errNotSupported("FinalizedRootProof", b.version)
|
||||
}
|
||||
|
||||
return nil
|
||||
if err := b.initializeMerkleLayers(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := b.recomputeDirtyFields(ctx); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cpt := b.finalizedCheckpointVal()
|
||||
// The epoch field of a finalized checkpoint is the neighbor
|
||||
// index of the finalized root field in its Merkle tree representation
|
||||
// of the checkpoint. This neighbor is the first element added to the proof.
|
||||
epochBuf := make([]byte, 8)
|
||||
binary.LittleEndian.PutUint64(epochBuf, uint64(cpt.Epoch))
|
||||
epochRoot := bytesutil.ToBytes32(epochBuf)
|
||||
proof := make([][]byte, 0)
|
||||
proof = append(proof, epochRoot[:])
|
||||
branch := trie.ProofFromMerkleLayers(b.merkleLayers, types.FinalizedCheckpoint.RealPosition())
|
||||
proof = append(proof, branch...)
|
||||
return proof, nil
|
||||
}
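To make the shape of the returned proof concrete: the first sibling is the checkpoint's epoch leaf, and the remaining siblings are the state-level branch at the FinalizedCheckpoint position. A hedged, generic sketch of how a consumer could fold such a proof back into the state root, assuming it knows the leaf's generalized index (this is not Prysm's verification code):

package example

import "crypto/sha256"

// verifyLeafProof folds a leaf and its Merkle siblings up to an expected root.
// For the finalized-root proof above, the leaf is the finalized root and the first
// sibling is the little-endian epoch padded to 32 bytes.
func verifyLeafProof(stateRoot, leaf [32]byte, siblings [][32]byte, gindex uint64) bool {
	node := leaf
	for _, sib := range siblings {
		if gindex%2 == 0 { // node is a left child
			node = sha256.Sum256(append(node[:], sib[:]...))
		} else { // node is a right child
			node = sha256.Sum256(append(sib[:], node[:]...))
		}
		gindex /= 2
	}
	return node == stateRoot
}
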
|
||||
|
||||
@@ -21,6 +21,10 @@ func TestBeaconStateMerkleProofs_phase0_notsupported(t *testing.T) {
|
||||
_, err := st.NextSyncCommitteeProof(ctx)
|
||||
require.ErrorContains(t, "not supported", err)
|
||||
})
|
||||
t.Run("finalized root", func(t *testing.T) {
|
||||
_, err := st.FinalizedRootProof(ctx)
|
||||
require.ErrorContains(t, "not supported", err)
|
||||
})
|
||||
}
|
||||
func TestBeaconStateMerkleProofs_altair(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
|
||||
@@ -120,13 +120,11 @@ var (
|
||||
)
|
||||
|
||||
gloasAdditionalFields = []types.FieldIndex{
|
||||
types.Builders,
|
||||
types.NextWithdrawalBuilderIndex,
|
||||
types.ExecutionPayloadAvailability,
|
||||
types.BuilderPendingPayments,
|
||||
types.BuilderPendingWithdrawals,
|
||||
types.LatestBlockHash,
|
||||
types.PayloadExpectedWithdrawals,
|
||||
types.LatestWithdrawalsRoot,
|
||||
}
|
||||
|
||||
gloasFields = slices.Concat(
|
||||
@@ -147,7 +145,7 @@ const (
|
||||
denebSharedFieldRefCount = 7
|
||||
electraSharedFieldRefCount = 10
|
||||
fuluSharedFieldRefCount = 11
|
||||
gloasSharedFieldRefCount = 13 // Adds Builders + BuilderPendingWithdrawals to the shared-ref set and LatestExecutionPayloadHeader is removed
|
||||
gloasSharedFieldRefCount = 12 // Adds PendingBuilderWithdrawal to the shared-ref set and LatestExecutionPayloadHeader is removed
|
||||
)
|
||||
|
||||
// InitializeFromProtoPhase0 the beacon state from a protobuf representation.
|
||||
@@ -819,13 +817,11 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
|
||||
pendingConsolidations: st.PendingConsolidations,
|
||||
proposerLookahead: proposerLookahead,
|
||||
latestExecutionPayloadBid: st.LatestExecutionPayloadBid,
|
||||
builders: st.Builders,
|
||||
nextWithdrawalBuilderIndex: st.NextWithdrawalBuilderIndex,
|
||||
executionPayloadAvailability: st.ExecutionPayloadAvailability,
|
||||
builderPendingPayments: st.BuilderPendingPayments,
|
||||
builderPendingWithdrawals: st.BuilderPendingWithdrawals,
|
||||
latestBlockHash: st.LatestBlockHash,
|
||||
payloadExpectedWithdrawals: st.PayloadExpectedWithdrawals,
|
||||
latestWithdrawalsRoot: st.LatestWithdrawalsRoot,
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
@@ -865,7 +861,6 @@ func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconSta
|
||||
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
|
||||
b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
|
||||
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
|
||||
b.sharedFieldReferences[types.Builders] = stateutil.NewRef(1) // New in Gloas.
|
||||
b.sharedFieldReferences[types.BuilderPendingWithdrawals] = stateutil.NewRef(1) // New in Gloas.
|
||||
|
||||
state.Count.Inc()
|
||||
@@ -937,7 +932,6 @@ func (b *BeaconState) Copy() state.BeaconState {
|
||||
pendingDeposits: b.pendingDeposits,
|
||||
pendingPartialWithdrawals: b.pendingPartialWithdrawals,
|
||||
pendingConsolidations: b.pendingConsolidations,
|
||||
builders: b.builders,
|
||||
|
||||
// Everything else, too small to be concerned about, constant size.
|
||||
genesisValidatorsRoot: b.genesisValidatorsRoot,
|
||||
@@ -954,12 +948,11 @@ func (b *BeaconState) Copy() state.BeaconState {
|
||||
latestExecutionPayloadHeaderCapella: b.latestExecutionPayloadHeaderCapella.Copy(),
|
||||
latestExecutionPayloadHeaderDeneb: b.latestExecutionPayloadHeaderDeneb.Copy(),
|
||||
latestExecutionPayloadBid: b.latestExecutionPayloadBid.Copy(),
|
||||
nextWithdrawalBuilderIndex: b.nextWithdrawalBuilderIndex,
|
||||
executionPayloadAvailability: b.executionPayloadAvailabilityVal(),
|
||||
builderPendingPayments: b.builderPendingPaymentsVal(),
|
||||
builderPendingWithdrawals: b.builderPendingWithdrawalsVal(),
|
||||
latestBlockHash: b.latestBlockHashVal(),
|
||||
payloadExpectedWithdrawals: b.payloadExpectedWithdrawalsVal(),
|
||||
latestWithdrawalsRoot: b.latestWithdrawalsRootVal(),
|
||||
|
||||
id: types.Enumerator.Inc(),
|
||||
|
||||
@@ -1335,10 +1328,6 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
|
||||
return stateutil.ProposerLookaheadRoot(b.proposerLookahead)
|
||||
case types.LatestExecutionPayloadBid:
|
||||
return b.latestExecutionPayloadBid.HashTreeRoot()
|
||||
case types.Builders:
|
||||
return stateutil.BuildersRoot(b.builders)
|
||||
case types.NextWithdrawalBuilderIndex:
|
||||
return ssz.Uint64Root(uint64(b.nextWithdrawalBuilderIndex)), nil
|
||||
case types.ExecutionPayloadAvailability:
|
||||
return stateutil.ExecutionPayloadAvailabilityRoot(b.executionPayloadAvailability)
|
||||
|
||||
@@ -1348,8 +1337,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
|
||||
return stateutil.BuilderPendingWithdrawalsRoot(b.builderPendingWithdrawals)
|
||||
case types.LatestBlockHash:
|
||||
return bytesutil.ToBytes32(b.latestBlockHash), nil
|
||||
case types.PayloadExpectedWithdrawals:
|
||||
return ssz.WithdrawalSliceRoot(b.payloadExpectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
|
||||
case types.LatestWithdrawalsRoot:
|
||||
return bytesutil.ToBytes32(b.latestWithdrawalsRoot), nil
|
||||
}
|
||||
return [32]byte{}, errors.New("invalid field index provided")
|
||||
}
|
||||
|
||||
@@ -116,10 +116,6 @@ func (f FieldIndex) String() string {
|
||||
return "pendingConsolidations"
|
||||
case ProposerLookahead:
|
||||
return "proposerLookahead"
|
||||
case Builders:
|
||||
return "builders"
|
||||
case NextWithdrawalBuilderIndex:
|
||||
return "nextWithdrawalBuilderIndex"
|
||||
case ExecutionPayloadAvailability:
|
||||
return "executionPayloadAvailability"
|
||||
case BuilderPendingPayments:
|
||||
@@ -128,8 +124,8 @@ func (f FieldIndex) String() string {
|
||||
return "builderPendingWithdrawals"
|
||||
case LatestBlockHash:
|
||||
return "latestBlockHash"
|
||||
case PayloadExpectedWithdrawals:
|
||||
return "payloadExpectedWithdrawals"
|
||||
case LatestWithdrawalsRoot:
|
||||
return "latestWithdrawalsRoot"
|
||||
default:
|
||||
return fmt.Sprintf("unknown field index number: %d", f)
|
||||
}
|
||||
@@ -215,20 +211,16 @@ func (f FieldIndex) RealPosition() int {
|
||||
return 36
|
||||
case ProposerLookahead:
|
||||
return 37
|
||||
case Builders:
|
||||
return 38
|
||||
case NextWithdrawalBuilderIndex:
|
||||
return 39
|
||||
case ExecutionPayloadAvailability:
|
||||
return 40
|
||||
return 38
|
||||
case BuilderPendingPayments:
|
||||
return 41
|
||||
return 39
|
||||
case BuilderPendingWithdrawals:
|
||||
return 42
|
||||
return 40
|
||||
case LatestBlockHash:
|
||||
return 43
|
||||
case PayloadExpectedWithdrawals:
|
||||
return 44
|
||||
return 41
|
||||
case LatestWithdrawalsRoot:
|
||||
return 42
|
||||
default:
|
||||
return -1
|
||||
}
|
||||
@@ -295,13 +287,11 @@ const (
|
||||
PendingPartialWithdrawals // Electra: EIP-7251
|
||||
PendingConsolidations // Electra: EIP-7251
|
||||
ProposerLookahead // Fulu: EIP-7917
|
||||
Builders // Gloas: EIP-7732
|
||||
NextWithdrawalBuilderIndex // Gloas: EIP-7732
|
||||
ExecutionPayloadAvailability // Gloas: EIP-7732
|
||||
BuilderPendingPayments // Gloas: EIP-7732
|
||||
BuilderPendingWithdrawals // Gloas: EIP-7732
|
||||
LatestBlockHash // Gloas: EIP-7732
|
||||
PayloadExpectedWithdrawals // Gloas: EIP-7732
|
||||
LatestWithdrawalsRoot // Gloas: EIP-7732
|
||||
)
|
||||
|
||||
// Enumerator keeps track of the number of states created since the node's start.
|
||||
|
||||
@@ -6,7 +6,6 @@ go_library(
|
||||
"block_header_root.go",
|
||||
"builder_pending_payments_root.go",
|
||||
"builder_pending_withdrawals_root.go",
|
||||
"builders_root.go",
|
||||
"eth1_root.go",
|
||||
"execution_payload_availability_root.go",
|
||||
"field_root_attestation.go",
|
||||
|
||||
@@ -1,12 +0,0 @@
|
||||
package stateutil
|
||||
|
||||
import (
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/encoding/ssz"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
)
|
||||
|
||||
// BuildersRoot computes the SSZ root of a slice of Builder.
|
||||
func BuildersRoot(slice []*ethpb.Builder) ([32]byte, error) {
|
||||
return ssz.SliceRoot(slice, uint64(fieldparams.BuilderRegistryLimit))
|
||||
}
|
||||
@@ -70,6 +70,7 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
|
||||
|
||||
topic := "/eth2/%x/beacon_block"
|
||||
go r.startDiscoveryAndSubscriptions()
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
var vr [32]byte
|
||||
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
|
||||
@@ -82,11 +83,9 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
|
||||
msg.Block.ParentRoot = util.Random32Bytes(t)
|
||||
msg.Signature = sk.Sign([]byte("data")).Marshal()
|
||||
p2p.ReceivePubSub(topic, msg)
|
||||
|
||||
// Wait for chainstart event to be processed
|
||||
require.Eventually(t, func() bool {
|
||||
return r.chainStarted.IsSet()
|
||||
}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event.")
|
||||
// wait for chainstart to be sent
|
||||
time.Sleep(400 * time.Millisecond)
|
||||
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
|
||||
}
|
||||
|
||||
func TestSyncHandlers_WaitForChainStart(t *testing.T) {
|
||||
@@ -218,18 +217,20 @@ func TestSyncService_StopCleanly(t *testing.T) {
|
||||
p2p.Digest, err = r.currentForkDigest()
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for chainstart and topics to be registered
|
||||
require.Eventually(t, func() bool {
|
||||
return r.chainStarted.IsSet() && len(r.cfg.p2p.PubSub().GetTopics()) > 0 && len(r.cfg.p2p.Host().Mux().Protocols()) > 0
|
||||
}, 5*time.Second, 50*time.Millisecond, "Did not receive chain start event or topics not registered.")
|
||||
// wait for chainstart to be sent
|
||||
time.Sleep(2 * time.Second)
|
||||
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
|
||||
|
||||
require.NotEqual(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
|
||||
require.NotEqual(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
|
||||
|
||||
// Both pubsub and rpc topics should be unsubscribed.
|
||||
require.NoError(t, r.Stop())
|
||||
|
||||
// Wait for pubsub topics to be deregistered.
|
||||
require.Eventually(t, func() bool {
|
||||
return len(r.cfg.p2p.PubSub().GetTopics()) == 0 && len(r.cfg.p2p.Host().Mux().Protocols()) == 0
|
||||
}, 5*time.Second, 50*time.Millisecond, "Pubsub topics were not deregistered")
|
||||
// Sleep to allow pubsub topics to be deregistered.
|
||||
time.Sleep(1 * time.Second)
|
||||
require.Equal(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
|
||||
require.Equal(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
|
||||
}
|
||||
|
||||
func TestService_Stop_SendsGoodbyeMessages(t *testing.T) {
|
||||
|
||||
@@ -48,14 +48,7 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
|
||||
return errors.Wrap(err, "new ro block with root")
|
||||
}
|
||||
|
||||
go func() {
|
||||
if err := s.processSidecarsFromExecutionFromBlock(ctx, roBlock); err != nil {
|
||||
log.WithError(err).WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", root),
|
||||
"slot": block.Slot(),
|
||||
}).Error("Failed to process sidecars from execution from block")
|
||||
}
|
||||
}()
|
||||
go s.processSidecarsFromExecutionFromBlock(ctx, roBlock)
|
||||
|
||||
if err := s.cfg.chain.ReceiveBlock(ctx, signed, root, nil); err != nil {
|
||||
if blockchain.IsInvalidBlock(err) {
|
||||
@@ -76,37 +69,28 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if err := s.processPendingAttsForBlock(ctx, root); err != nil {
|
||||
return errors.Wrap(err, "process pending atts for block")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// processSidecarsFromExecutionFromBlock retrieves (if available) sidecar data from the execution client,
// builds the corresponding sidecars, saves them to storage, and broadcasts them over P2P if necessary.
|
||||
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) error {
|
||||
func (s *Service) processSidecarsFromExecutionFromBlock(ctx context.Context, roBlock blocks.ROBlock) {
|
||||
if roBlock.Version() >= version.Fulu {
|
||||
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromBlock(roBlock)); err != nil {
|
||||
// Do not log if the context was cancelled on purpose.
|
||||
// (Still log other context errors such as deadlines exceeded).
|
||||
if errors.Is(err, context.Canceled) {
|
||||
return nil
|
||||
}
|
||||
|
||||
return errors.Wrap(err, "process data column sidecars from execution")
|
||||
log.WithError(err).Error("Failed to process data column sidecars from execution")
|
||||
return
|
||||
}
|
||||
|
||||
return nil
|
||||
return
|
||||
}
|
||||
|
||||
if roBlock.Version() >= version.Deneb {
|
||||
s.processBlobSidecarsFromExecution(ctx, roBlock)
|
||||
return nil
|
||||
return
|
||||
}
|
||||
|
||||
return nil
|
||||
}
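The hunk above turns this function into a fire-and-forget helper that logs its own failures, staying silent for deliberate cancellations as the surrounding comments describe. A hypothetical helper sketching that convention (illustrative names, standard-library logger):

package example

import (
	"context"
	"errors"
	"log"
)

// logUnlessCanceled is silent when the error is a deliberate cancellation and logs
// everything else (including deadline exceeded), since the caller fire-and-forgets
// the goroutine and will never see a returned error.
func logUnlessCanceled(err error, msg string) {
	if err == nil || errors.Is(err, context.Canceled) {
		return
	}
	log.Printf("%s: %v", msg, err)
}
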
|
||||
|
||||
// processBlobSidecarsFromExecution retrieves (if available) blob sidecars data from the execution client,
|
||||
@@ -184,6 +168,7 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
key := fmt.Sprintf("%#x", source.Root())
|
||||
if _, err, _ := s.columnSidecarsExecSingleFlight.Do(key, func() (any, error) {
|
||||
const delay = 250 * time.Millisecond
|
||||
secondsPerHalfSlot := time.Duration(params.BeaconConfig().SecondsPerSlot/2) * time.Second
|
||||
|
||||
commitments, err := source.Commitments()
|
||||
if err != nil {
|
||||
@@ -201,6 +186,9 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, errors.Wrap(err, "column indices to sample")
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
|
||||
defer cancel()
|
||||
|
||||
log := log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
@@ -221,11 +209,6 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Return if the context is done.
|
||||
if ctx.Err() != nil {
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
|
||||
if iteration == 0 {
|
||||
dataColumnsRecoveredFromELAttempts.Inc()
|
||||
}
|
||||
@@ -237,10 +220,20 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
}
|
||||
|
||||
// No sidecars are retrieved from the EL, retry later
|
||||
constructedCount := uint64(len(constructedSidecars))
|
||||
constructedSidecarCount = uint64(len(constructedSidecars))
|
||||
if constructedSidecarCount == 0 {
|
||||
if ctx.Err() != nil {
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
|
||||
time.Sleep(delay)
|
||||
continue
|
||||
}
|
||||
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
|
||||
// Boundary check.
|
||||
if constructedSidecarCount > 0 && constructedSidecarCount != fieldparams.NumberOfColumns {
|
||||
if constructedSidecarCount != fieldparams.NumberOfColumns {
|
||||
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
|
||||
}
|
||||
|
||||
@@ -249,24 +242,14 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
|
||||
}
|
||||
|
||||
if constructedCount > 0 {
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
log.WithFields(logrus.Fields{
|
||||
"count": len(unseenIndices),
|
||||
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
|
||||
}).Debug("Constructed data column sidecars from the execution client")
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
"proposerIndex": source.ProposerIndex(),
|
||||
"iteration": iteration,
|
||||
"type": source.Type(),
|
||||
"count": len(unseenIndices),
|
||||
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
|
||||
}).Debug("Constructed data column sidecars from the execution client")
|
||||
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Wait before retrying.
|
||||
time.Sleep(delay)
|
||||
return nil, nil
|
||||
}
|
||||
}); err != nil {
|
||||
return err
|
||||
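The retry loop above runs under columnSidecarsExecSingleFlight, keyed by the block root, so concurrent callers for the same block share a single reconstruction. A hedged, self-contained sketch of that singleflight pattern with illustrative names:

package example

import (
	"fmt"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// reconstructOnce collapses concurrent reconstruction attempts for the same block root
// into one execution; later callers receive the shared result.
func reconstructOnce(root [32]byte) (int, error) {
	key := fmt.Sprintf("%#x", root)
	v, err, shared := group.Do(key, func() (any, error) {
		time.Sleep(10 * time.Millisecond) // stand-in for fetching cells/proofs from the EL
		return 128, nil                   // e.g. the number of column sidecars reconstructed
	})
	if err != nil {
		return 0, err
	}
	_ = shared // true for callers that piggybacked on another caller's in-flight execution
	return v.(int), nil
}
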
@@ -301,11 +284,6 @@ func (s *Service) broadcastAndReceiveUnseenDataColumnSidecars(
|
||||
unseenIndices[sidecar.Index] = true
|
||||
}
|
||||
|
||||
// Exit early if there is nothing to broadcast or receive.
|
||||
if len(unseenSidecars) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Broadcast all the data column sidecars we reconstructed but did not see via gossip (non blocking).
|
||||
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, unseenSidecars); err != nil {
|
||||
return nil, errors.Wrap(err, "broadcast data column sidecars")
|
||||
|
||||
@@ -194,8 +194,7 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
|
||||
},
|
||||
seenBlobCache: lruwrpr.New(1),
|
||||
}
|
||||
err := s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
require.NoError(t, err)
|
||||
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
require.Equal(t, tt.expectedBlobCount, len(chainService.Blobs))
|
||||
})
|
||||
}
|
||||
@@ -294,8 +293,7 @@ func TestProcessSidecarsFromExecutionFromBlock(t *testing.T) {
|
||||
roBlock, err := blocks.NewROBlock(sb)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
require.NoError(t, err)
|
||||
s.processSidecarsFromExecutionFromBlock(t.Context(), roBlock)
|
||||
require.Equal(t, tt.expectedDataColumnCount, len(chainService.DataColumns))
|
||||
})
|
||||
}
|
||||
|
||||
@@ -25,12 +25,12 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
|
||||
}
|
||||
|
||||
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
|
||||
return wrapDataColumnError(sidecar, "receive data column sidecar", err)
|
||||
return errors.Wrap(err, "receive data column sidecar")
|
||||
}
|
||||
|
||||
wg.Go(func() error {
|
||||
if err := s.processDataColumnSidecarsFromReconstruction(ctx, sidecar); err != nil {
|
||||
return wrapDataColumnError(sidecar, "process data column sidecars from reconstruction", err)
|
||||
return errors.Wrap(err, "process data column sidecars from reconstruction")
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -38,13 +38,7 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
|
||||
|
||||
wg.Go(func() error {
|
||||
if err := s.processDataColumnSidecarsFromExecution(ctx, peerdas.PopulateFromSidecar(sidecar)); err != nil {
|
||||
if errors.Is(err, context.Canceled) {
|
||||
// Do not log if the context was cancelled on purpose.
|
||||
// (Still log other context errors such as deadlines exceeded).
|
||||
return nil
|
||||
}
|
||||
|
||||
return wrapDataColumnError(sidecar, "process data column sidecars from execution", err)
|
||||
return errors.Wrap(err, "process data column sidecars from execution")
|
||||
}
|
||||
|
||||
return nil
|
||||
@@ -116,7 +110,3 @@ func (s *Service) allDataColumnSubnets(_ primitives.Slot) map[uint64]bool {
|
||||
|
||||
return allSubnets
|
||||
}
|
||||
|
||||
func wrapDataColumnError(sidecar blocks.VerifiedRODataColumn, message string, err error) error {
|
||||
return fmt.Errorf("%s - slot %d, root %s: %w", message, sidecar.SignedBlockHeader.Header.Slot, fmt.Sprintf("%#x", sidecar.BlockRoot()), err)
|
||||
}
|
||||
|
||||
@@ -614,10 +614,11 @@ func TestVerifyIndexInCommittee_SeenAggregatorEpoch(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
require.Eventually(t, func() bool {
|
||||
res, _ := r.validateAggregateAndProof(t.Context(), "", msg)
|
||||
return res != pubsub.ValidationAccept
|
||||
}, time.Second, 10*time.Millisecond, "Expected validation to reject duplicate aggregate")
|
||||
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers.
|
||||
if res, err := r.validateAggregateAndProof(t.Context(), "", msg); res == pubsub.ValidationAccept {
|
||||
_ = err
|
||||
t.Fatal("Validated status is true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidateAggregateAndProof_BadBlock(t *testing.T) {
|
||||
|
||||
@@ -992,6 +992,7 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
|
||||
|
||||
// Mark the proposer/slot as seen
|
||||
r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
|
||||
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers
|
||||
|
||||
// Prepare and validate the second message (clone)
|
||||
buf := new(bytes.Buffer)
|
||||
@@ -1009,11 +1010,9 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
|
||||
}
|
||||
|
||||
// Since this is not an equivocation (same signature), it should be ignored
|
||||
// Wait for the cached value to propagate through buffers
|
||||
require.Eventually(t, func() bool {
|
||||
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
|
||||
return err == nil && res == pubsub.ValidationIgnore
|
||||
}, time.Second, 10*time.Millisecond, "block with same signature should be ignored")
|
||||
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, pubsub.ValidationIgnore, res, "block with same signature should be ignored")
|
||||
|
||||
// Verify no slashings were created
|
||||
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")
|
||||
|
||||
@@ -588,12 +588,6 @@ func fcReturnsTargetRoot(root [32]byte) func([32]byte, primitives.Epoch) ([32]by
|
||||
}
|
||||
}
|
||||
|
||||
func fcReturnsDependentRoot() func([32]byte, primitives.Epoch) ([32]byte, error) {
|
||||
return func(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
|
||||
return root, nil
|
||||
}
|
||||
}
|
||||
|
||||
type mockSignatureCache struct {
|
||||
svCalledForSig map[signatureData]bool
|
||||
svcb func(sig signatureData) (bool, error)
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
package verification
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/sha256"
|
||||
"fmt"
|
||||
@@ -18,7 +19,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/runtime/logging"
|
||||
"github.com/OffchainLabs/prysm/v7/time/slots"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
var (
|
||||
@@ -293,57 +293,55 @@ func (dv *RODataColumnsVerifier) ValidProposerSignature(ctx context.Context) (er
|
||||
// The returned state is guaranteed to be in the data column's epoch, and to have the same randao mix and active
// validator indices as the data column's parent state advanced to the data column's slot.
|
||||
func (dv *RODataColumnsVerifier) getVerifyingState(ctx context.Context, dataColumn blocks.RODataColumn) (state.ReadOnlyBeaconState, error) {
|
||||
dataColumnSlot := dataColumn.Slot()
|
||||
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
|
||||
if dataColumnEpoch == 0 {
|
||||
return dv.hsp.HeadStateReadOnly(ctx)
|
||||
}
|
||||
parentRoot := dataColumn.ParentRoot()
|
||||
dcDependentRoot, err := dv.fc.DependentRootForEpoch(parentRoot, dataColumnEpoch-1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
headRoot, err := dv.hsp.HeadRoot(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
headDependentRoot, err := dv.fc.DependentRootForEpoch(bytesutil.ToBytes32(headRoot), dataColumnEpoch-1)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if dcDependentRoot == headDependentRoot {
|
||||
headSlot := dv.hsp.HeadSlot()
|
||||
headEpoch := slots.ToEpoch(headSlot)
|
||||
if headEpoch == dataColumnEpoch || headEpoch == dataColumnEpoch-1 {
|
||||
parentRoot := dataColumn.ParentRoot()
|
||||
dataColumnSlot := dataColumn.Slot()
|
||||
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
|
||||
headSlot := dv.hsp.HeadSlot()
|
||||
headEpoch := slots.ToEpoch(headSlot)
|
||||
|
||||
// Use head if it's the parent
|
||||
if bytes.Equal(parentRoot[:], headRoot) {
|
||||
// If they are in the same epoch, then we can return the head state directly
|
||||
if dataColumnEpoch == headEpoch {
|
||||
return dv.hsp.HeadStateReadOnly(ctx)
|
||||
}
|
||||
if headEpoch+1 < dataColumnEpoch {
|
||||
headState, err := dv.hsp.HeadState(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, dataColumnSlot)
|
||||
// Otherwise, we need to process the head state to the data column's slot
|
||||
headState, err := dv.hsp.HeadState(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, dataColumnSlot)
|
||||
}
|
||||
|
||||
// If head and data column are in the same epoch and head is compatible with the parent's dependent root, then use head
|
||||
if dataColumnEpoch == headEpoch {
|
||||
headDependent, err := dv.fc.DependentRootForEpoch(bytesutil.ToBytes32(headRoot), dataColumnEpoch)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
parentDependent, err := dv.fc.DependentRootForEpoch(parentRoot, dataColumnEpoch)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if bytes.Equal(headDependent[:], parentDependent[:]) {
|
||||
return dv.hsp.HeadStateReadOnly(ctx)
|
||||
}
|
||||
}
|
||||
|
||||
logrus.WithFields(logrus.Fields{
|
||||
"slot": dataColumnSlot,
|
||||
"parentRoot": fmt.Sprintf("%#x", parentRoot),
|
||||
"headRoot": fmt.Sprintf("%#x", headRoot),
|
||||
}).Debug("Replying state for data column verification")
|
||||
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
|
||||
// Otherwise retrieve the parent state and advance it to the data column's slot
|
||||
parentState, err := dv.sr.StateByRoot(ctx, parentRoot)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
targetState, err := dv.sr.StateByRoot(ctx, targetRoot)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
parentEpoch := slots.ToEpoch(parentState.Slot())
|
||||
if dataColumnEpoch == parentEpoch {
|
||||
return parentState, nil
|
||||
}
|
||||
targetEpoch := slots.ToEpoch(targetState.Slot())
|
||||
if targetEpoch == dataColumnEpoch || targetEpoch == dataColumnEpoch-1 {
|
||||
return targetState, nil
|
||||
}
|
||||
return transition.ProcessSlotsUsingNextSlotCache(ctx, targetState, parentRoot[:], dataColumnSlot)
|
||||
return transition.ProcessSlotsUsingNextSlotCache(ctx, parentState, parentRoot[:], dataColumnSlot)
|
||||
}
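For readability, a hedged restatement of the selection order implemented above (the function body is authoritative):

// Selection order (restated):
//  1. Head is the column's parent and in the column's epoch      -> head state, read-only.
//  2. Head is the column's parent but in an earlier epoch        -> head state advanced to the column's slot.
//  3. Head and column share an epoch and the parent's dependent
//     root matches head's                                        -> head state, read-only.
//  4. Otherwise                                                  -> parent state via StateByRoot, advanced to the
//                                                                   column's slot if it is from an earlier epoch.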
|
||||
|
||||
func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) (err error) {
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
package verification
|
||||
|
||||
import (
|
||||
"context"
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
@@ -10,7 +9,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
|
||||
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
@@ -283,7 +281,7 @@ func TestColumnSlotAboveFinalized(t *testing.T) {
|
||||
|
||||
func TestValidProposerSignature(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97
|
||||
columnSlot = 0
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
@@ -296,83 +294,59 @@ func TestValidProposerSignature(t *testing.T) {
|
||||
// The signature data does not depend on the data column itself, so we can use the first one.
|
||||
expectedSignatureData := columnToSignatureData(firstColumn)
|
||||
|
||||
// Create a proper Fulu state for verification.
|
||||
// We need enough validators to cover the proposer index.
|
||||
numValidators := max(uint64(firstColumn.ProposerIndex()+1), 64)
|
||||
fuluState, _ := util.DeterministicGenesisStateFulu(t, numValidators)
|
||||
|
||||
// Head state provider that returns the fuluState via HeadStateReadOnly path.
|
||||
headStateWithState := &mockHeadStateProvider{
|
||||
headRoot: parentRoot[:],
|
||||
headSlot: columnSlot,
|
||||
headStateReadOnly: fuluState,
|
||||
}
|
||||
|
||||
// Head state provider that will fail (headStateReadOnly is nil).
|
||||
headStateNotFound := &mockHeadStateProvider{
|
||||
headRoot: parentRoot[:],
|
||||
headSlot: columnSlot,
|
||||
}
|
||||
|
||||
testCases := []struct {
|
||||
isError bool
|
||||
vscbShouldError bool
|
||||
svcbReturn bool
|
||||
stateByRooter StateByRooter
|
||||
headStateProvider *mockHeadStateProvider
|
||||
vscbError error
|
||||
svcbError error
|
||||
name string
|
||||
isError bool
|
||||
vscbShouldError bool
|
||||
svcbReturn bool
|
||||
stateByRooter StateByRooter
|
||||
vscbError error
|
||||
svcbError error
|
||||
name string
|
||||
}{
|
||||
{
|
||||
name: "cache hit - success",
|
||||
svcbReturn: true,
|
||||
svcbError: nil,
|
||||
vscbShouldError: true,
|
||||
vscbError: nil,
|
||||
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
|
||||
headStateProvider: headStateWithState,
|
||||
isError: false,
|
||||
name: "cache hit - success",
|
||||
svcbReturn: true,
|
||||
svcbError: nil,
|
||||
vscbShouldError: true,
|
||||
vscbError: nil,
|
||||
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
|
||||
isError: false,
|
||||
},
|
||||
{
|
||||
name: "cache hit - error",
|
||||
svcbReturn: true,
|
||||
svcbError: errors.New("derp"),
|
||||
vscbShouldError: true,
|
||||
vscbError: nil,
|
||||
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
|
||||
headStateProvider: headStateWithState,
|
||||
isError: true,
|
||||
name: "cache hit - error",
|
||||
svcbReturn: true,
|
||||
svcbError: errors.New("derp"),
|
||||
vscbShouldError: true,
|
||||
vscbError: nil,
|
||||
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
|
||||
isError: true,
|
||||
},
|
||||
{
|
||||
name: "cache miss - success",
|
||||
svcbReturn: false,
|
||||
svcbError: nil,
|
||||
vscbShouldError: false,
|
||||
vscbError: nil,
|
||||
stateByRooter: sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
|
||||
headStateProvider: headStateWithState,
|
||||
isError: false,
|
||||
name: "cache miss - success",
|
||||
svcbReturn: false,
|
||||
svcbError: nil,
|
||||
vscbShouldError: false,
|
||||
vscbError: nil,
|
||||
stateByRooter: sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
|
||||
isError: false,
|
||||
},
|
||||
{
|
||||
name: "cache miss - state not found",
|
||||
svcbReturn: false,
|
||||
svcbError: nil,
|
||||
vscbShouldError: false,
|
||||
vscbError: nil,
|
||||
stateByRooter: sbrNotFound(t, expectedSignatureData.Parent),
|
||||
headStateProvider: headStateNotFound,
|
||||
isError: true,
|
||||
name: "cache miss - state not found",
|
||||
svcbReturn: false,
|
||||
svcbError: nil,
|
||||
vscbShouldError: false,
|
||||
vscbError: nil,
|
||||
stateByRooter: sbrNotFound(t, expectedSignatureData.Parent),
|
||||
isError: true,
|
||||
},
|
||||
{
|
||||
name: "cache miss - signature failure",
|
||||
svcbReturn: false,
|
||||
svcbError: nil,
|
||||
vscbShouldError: false,
|
||||
vscbError: errors.New("signature, not so good!"),
|
||||
stateByRooter: sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
|
||||
headStateProvider: headStateWithState,
|
||||
isError: true,
|
||||
name: "cache miss - signature failure",
|
||||
svcbReturn: false,
|
||||
svcbError: nil,
|
||||
vscbShouldError: false,
|
||||
vscbError: errors.New("signature, not so good!"),
|
||||
stateByRooter: sbrForValOverrideWithT(t, firstColumn.ProposerIndex(), validator),
|
||||
isError: true,
|
||||
},
|
||||
}
|
||||
|
||||
@@ -403,10 +377,9 @@ func TestValidProposerSignature(t *testing.T) {
|
||||
shared: &sharedResources{
|
||||
sc: signatureCache,
|
||||
sr: tc.stateByRooter,
|
||||
hsp: tc.headStateProvider,
|
||||
hsp: &mockHeadStateProvider{},
|
||||
fc: &mockForkchoicer{
|
||||
DependentRootForEpochCB: fcReturnsDependentRoot(),
|
||||
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
|
||||
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
|
||||
},
|
||||
},
|
||||
}
|
||||
@@ -432,7 +405,7 @@ func TestValidProposerSignature(t *testing.T) {
|
||||
|
||||
func TestDataColumnsSidecarParentSeen(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97
|
||||
columnSlot = 0
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
@@ -536,7 +509,7 @@ func TestDataColumnsSidecarParentValid(t *testing.T) {
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97
|
||||
columnSlot = 0
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
@@ -657,7 +630,7 @@ func TestDataColumnsSidecarDescendsFromFinalized(t *testing.T) {
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97
|
||||
columnSlot = 0
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
@@ -720,7 +693,7 @@ func TestDataColumnsSidecarInclusionProven(t *testing.T) {
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97
|
||||
columnSlot = 0
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
@@ -775,7 +748,7 @@ func TestDataColumnsSidecarKzgProofVerified(t *testing.T) {
|
||||
for _, tc := range testCases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97
|
||||
columnSlot = 0
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
@@ -951,135 +924,3 @@ func TestColumnRequirementSatisfaction(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestGetVerifyingStateEdgeCases(t *testing.T) {
|
||||
const (
|
||||
columnSlot = 97 // epoch 3
|
||||
blobCount = 1
|
||||
)
|
||||
|
||||
parentRoot := [fieldparams.RootLength]byte{}
|
||||
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
|
||||
|
||||
// Create a proper Fulu state for verification.
|
||||
numValidators := max(uint64(columns[0].ProposerIndex()+1), 64)
|
||||
fuluState, _ := util.DeterministicGenesisStateFulu(t, numValidators)
|
||||
|
||||
t.Run("different dependent roots - uses StateByRoot path", func(t *testing.T) {
|
||||
// Parent and head are on different forks with different dependent roots.
|
||||
// This forces the code to use TargetRootForEpoch -> StateByRoot path.
|
||||
signatureCache := &mockSignatureCache{
|
||||
svcb: func(signatureData signatureData) (bool, error) {
|
||||
return false, nil // Cache miss
|
||||
},
|
||||
vscb: func(signatureData signatureData, _ validatorAtIndexer) (err error) {
|
||||
return nil // Signature valid
|
||||
},
|
||||
}
|
||||
|
||||
		// StateByRoot will be called because dependent roots differ
		stateByRootCalled := false
		stateByRooter := &mockStateByRooter{
			sbr: func(_ context.Context, root [32]byte) (state.BeaconState, error) {
				stateByRootCalled = true
				return fuluState, nil
			},
		}

		initializer := Initializer{
			shared: &sharedResources{
				sc: signatureCache,
				sr: stateByRooter,
				hsp: &mockHeadStateProvider{
					headRoot: []byte{0xff}, // Different from parentRoot
					headSlot: columnSlot,
				},
				fc: &mockForkchoicer{
					// Return different roots for parent vs head to simulate different forks
					DependentRootForEpochCB: func(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
						return root, nil // Returns input, so parent [0...] != head [0xff...]
					},
					TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
				},
			},
		}

		verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
		err := verifier.ValidProposerSignature(t.Context())
		require.NoError(t, err)
		require.Equal(t, true, stateByRootCalled, "StateByRoot should be called when dependent roots differ")
	})

	t.Run("same dependent root head far ahead - uses head state with ProcessSlots", func(t *testing.T) {
		// Parent is ancestor of head on same chain, but head is in epoch 1 while column is in epoch 3.
		// headEpoch (1) + 1 < dataColumnEpoch (3), so ProcessSlots is called on head state.
		signatureCache := &mockSignatureCache{
			svcb: func(signatureData signatureData) (bool, error) {
				return false, nil // Cache miss
			},
			vscb: func(signatureData signatureData, _ validatorAtIndexer) (err error) {
				return nil // Signature valid
			},
		}

		headStateCalled := false
		initializer := Initializer{
			shared: &sharedResources{
				sc: signatureCache,
				sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}, // Should not be called
				hsp: &mockHeadStateProvider{
					headRoot:          parentRoot[:],    // Same as parent
					headSlot:          32,               // Epoch 1
					headState:         fuluState.Copy(), // HeadState (not ReadOnly) for ProcessSlots
					headStateReadOnly: nil,              // Should not use ReadOnly path
				},
				fc: &mockForkchoicer{
					// Return same root for both to simulate same chain
					DependentRootForEpochCB: func(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
						return [32]byte{0xaa}, nil // Same for all inputs
					},
					TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
				},
			},
		}

		// Wrap to detect HeadState call
		originalHsp := initializer.shared.hsp.(*mockHeadStateProvider)
		wrappedHsp := &mockHeadStateProvider{
			headRoot:  originalHsp.headRoot,
			headSlot:  originalHsp.headSlot,
			headState: originalHsp.headState,
		}
		initializer.shared.hsp = &headStateCallTracker{
			mockHeadStateProvider: wrappedHsp,
			headStateCalled:       &headStateCalled,
		}

		verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
		err := verifier.ValidProposerSignature(t.Context())
		require.NoError(t, err)
		require.Equal(t, true, headStateCalled, "HeadState should be called when head is far ahead")
	})
}

// headStateCallTracker wraps mockHeadStateProvider to track HeadState calls.
type headStateCallTracker struct {
	*mockHeadStateProvider
	headStateCalled *bool
}

func (h *headStateCallTracker) HeadState(ctx context.Context) (state.BeaconState, error) {
	*h.headStateCalled = true
	return h.mockHeadStateProvider.HeadState(ctx)
}

func (h *headStateCallTracker) HeadRoot(ctx context.Context) ([]byte, error) {
	return h.mockHeadStateProvider.HeadRoot(ctx)
}

func (h *headStateCallTracker) HeadSlot() primitives.Slot {
	return h.mockHeadStateProvider.HeadSlot()
}

func (h *headStateCallTracker) HeadStateReadOnly(ctx context.Context) (state.ReadOnlyBeaconState, error) {
	return h.mockHeadStateProvider.HeadStateReadOnly(ctx)
}
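The two sub-tests above pin down which state the proposer-signature check loads once the dependent roots are compared. A minimal, self-contained sketch of the branching they exercise is below; the names and types are placeholders inferred from the test comments, not the verifier's real API.

```go
package main

import "fmt"

// Sketch of the state-selection branching the tests above exercise.
// All identifiers here are illustrative placeholders, not Prysm's real ones.
type stateSource int

const (
	byRoot            stateSource = iota // StateByRoot: head is on a different fork
	headReadOnly                         // read-only head state: same chain, head close to the column epoch
	headWithProcSlots                    // HeadState + ProcessSlots: same chain, head far behind the column epoch
)

func pickSource(parentDepRoot, headDepRoot [32]byte, headEpoch, columnEpoch uint64) stateSource {
	if parentDepRoot != headDepRoot {
		return byRoot
	}
	if headEpoch+1 >= columnEpoch {
		return headReadOnly
	}
	return headWithProcSlots
}

func main() {
	// Mirrors the second sub-test: same dependent root, head at epoch 1, column at epoch 3.
	fmt.Println(pickSource([32]byte{0xaa}, [32]byte{0xaa}, 1, 3) == headWithProcSlots) // true
}
```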
3  changelog/SashaMalysehko_fix-return-after-check.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Fix missing return after version header check in SubmitAttesterSlashingsV2.
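This is a common bug class: once an error response has been written, the handler must stop, otherwise it keeps executing against invalid input. A hedged illustration below uses a hypothetical handler and plain net/http, not the actual SubmitAttesterSlashingsV2 code.

```go
package main

import "net/http"

// Hypothetical handler illustrating the bug class fixed above: forgetting to
// return after writing an error response lets the handler keep processing.
func submitSlashings(w http.ResponseWriter, r *http.Request) {
	version := r.Header.Get("Eth-Consensus-Version")
	if version == "" {
		http.Error(w, "missing Eth-Consensus-Version header", http.StatusBadRequest)
		return // without this return, the decoding and processing below would still run
	}
	// ... decode the request body and process the slashing ...
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/eth/v2/beacon/pool/attester_slashings", submitSlashings)
	_ = http.ListenAndServe("localhost:8080", nil)
}
```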
3  changelog/Snezhkko_fix-type.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084).

2  changelog/aarsh-revert-autonatv2.md  Normal file
@@ -0,0 +1,2 @@
### Ignored
- Reverts AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100 as the libp2p upgrade fails inter-op testing.

3  changelog/avoid_kzg_send_after_context_cancel.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.
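The pattern behind this fix is generic Go: check the caller's context before enqueueing work, so a canceled caller never parks on a full channel. A minimal sketch under assumed names follows; `workCh` and `verifyRequest` are illustrative, not the batch verifier's real types.

```go
package main

import (
	"context"
	"errors"
)

// verifyRequest is a placeholder for a batch-verification work item.
type verifyRequest struct{ result chan error }

// enqueue refuses to block on a full work channel once the caller's context is done.
func enqueue(ctx context.Context, workCh chan<- verifyRequest, req verifyRequest) error {
	if err := ctx.Err(); err != nil {
		return err // caller already gave up; don't queue useless work
	}
	select {
	case workCh <- req:
		return nil
	case <-ctx.Done():
		return ctx.Err() // channel full and caller canceled while waiting
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	err := enqueue(ctx, make(chan verifyRequest), verifyRequest{result: make(chan error, 1)})
	if !errors.Is(err, context.Canceled) {
		panic("expected context.Canceled")
	}
}
```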
@@ -1,5 +0,0 @@
### Added

- Added an ephemeral debug logfile for beacon and validator nodes that captures debug-level logs for 24 hours. It
  also keeps one backup in case of size-based rotation. The logfiles are stored in `datadir/logs/`. This feature is
  enabled by default and can be disabled by setting the `--disable-ephemeral-log-file` flag.

3  changelog/bastin_fix-lcp2p-bug.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Fix the missing fork version object mapping for Fulu in light client p2p.

@@ -1,4 +0,0 @@
### Changed

- Moved verbosity settings to be configurable per hook, rather than just globally. This allows us to control the
  verbosity of individual output independently.

3  changelog/builder-index.md  Normal file
@@ -0,0 +1,3 @@
### Added

- `primitives.BuilderIndex`: SSZ `uint64` wrapper for builder registry indices.

3  changelog/fix_kzg_batch_verifier_timeout_deadlock.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Fix deadlock in data column gossip KZG batch verification when a caller times out and its result can no longer be delivered.
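On the verifier side, the symmetric guard is to never block on delivering a result to a caller that may have timed out: delivery races the caller's context instead of waiting forever on an unread channel. A self-contained sketch with illustrative names, not the batch verifier's real API:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// deliver sends a verification result without blocking forever if the caller
// has already timed out and will never read from resCh.
func deliver(callerCtx context.Context, resCh chan<- error, res error) {
	select {
	case resCh <- res:
	case <-callerCtx.Done():
		// Caller gave up; drop the result instead of deadlocking the verifier loop.
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	resCh := make(chan error) // unbuffered, and nobody reads from it
	time.Sleep(20 * time.Millisecond)
	deliver(ctx, resCh, nil) // returns promptly because ctx is already done
	fmt.Println("verifier loop not blocked")
}
```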
3  changelog/james-prysm_align-atter-pool-apis.md  Normal file
@@ -0,0 +1,3 @@
### Changed

- The /eth/v2/beacon/pool/attestations and /eth/v1/beacon/pool/sync_committees endpoints now return a 503 error if the node is still syncing, and the REST API now broadcasts immediately, matching the gRPC behavior.
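A hedged sketch of the syncing gate these endpoints now apply, using plain net/http and a placeholder `syncChecker` interface rather than Prysm's actual handler plumbing:

```go
package main

import "net/http"

// syncChecker is a placeholder for the node's sync status source.
type syncChecker interface {
	Syncing() bool
}

// withSyncGuard returns 503 while the node is still syncing, before the pool
// handler is allowed to accept and broadcast anything.
func withSyncGuard(sc syncChecker, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if sc.Syncing() {
			http.Error(w, "beacon node is currently syncing", http.StatusServiceUnavailable)
			return
		}
		next(w, r)
	}
}

type alwaysSyncing struct{}

func (alwaysSyncing) Syncing() bool { return true }

func main() {
	handler := withSyncGuard(alwaysSyncing{}, func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK) // accept and broadcast immediately once synced
	})
	http.Handle("/eth/v2/beacon/pool/attestations", handler)
	_ = http.ListenAndServe("localhost:8080", nil)
}
```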
3  changelog/james-prysm_fix-rest-replay-state.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Fixed a replay state issue in the REST API caused by the attester and sync committee duties endpoints.

@@ -1,3 +0,0 @@
### Changed

- Changed the IsHealthy check to IsReady for the validator client's interpretation of /eth/v1/node/health; a 206 response now evaluates to false since the node is still syncing.

3  changelog/james-prysm_skip-e2e-slot1-check.md  Normal file
@@ -0,0 +1,3 @@
### Changed

- The e2e sync committee evaluator now skips the first slot after startup. We already skip the fork epoch for these checks; this startup-only skip is needed because Altair is active from epoch 0 and validators need time to warm up.

@@ -1,3 +0,0 @@
### Fixed

- Don't call trace.WithMaxExportBatchSize(trace.DefaultMaxExportBatchSize) twice.

3  changelog/manu-agg.md  Normal file
@@ -0,0 +1,3 @@
### Changed

- Pending aggregates: When multiple aggregated attestations differing only by the aggregator index are in the pending queue, only process one of them.
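The pending-aggregate change amounts to deduplication on a key that ignores the aggregator index. A self-contained sketch of that idea; the `pendingAggregate` struct and hashing are placeholders, not Prysm's types.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// pendingAggregate is a stand-in for a queued aggregate attestation.
type pendingAggregate struct {
	aggregatorIndex uint64
	attestationData []byte // serialized attestation data
	aggregationBits []byte
}

// dedupKey intentionally leaves out the aggregator index, so two aggregates
// that differ only by who aggregated them collapse to one queue entry.
func dedupKey(a pendingAggregate) [32]byte {
	return sha256.Sum256(append(append([]byte{}, a.attestationData...), a.aggregationBits...))
}

func main() {
	seen := map[[32]byte]bool{}
	queue := []pendingAggregate{
		{aggregatorIndex: 1, attestationData: []byte("data"), aggregationBits: []byte{0b1010}},
		{aggregatorIndex: 7, attestationData: []byte("data"), aggregationBits: []byte{0b1010}}, // same work, different aggregator
	}
	processed := 0
	for _, a := range queue {
		if k := dedupKey(a); !seen[k] {
			seen[k] = true
			processed++
		}
	}
	fmt.Println("processed:", processed) // 1
}
```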
2  changelog/manu-remove-error-logs.md  Normal file
@@ -0,0 +1,2 @@
### Changed
- `validateDataColumn`: Remove error logs.

2  changelog/manu-test-pr.md  Normal file
@@ -0,0 +1,2 @@
### Ignored
- Added test requirement to `PULL_REQUEST_TEMPLATE.md`.

@@ -1,2 +0,0 @@
### Added
- `--disable-get-blobs-v2` flag.

7  changelog/manu_reconstruct-metrics.md  Normal file
@@ -0,0 +1,7 @@
### Added
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs.
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer.

### Changed
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs.
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs.
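Recording a millisecond histogram around a computation is a small amount of boilerplate. A hedged sketch using the standard Prometheus Go client; the metric name is reused from the entry above, while the buckets and the computation itself are stand-ins.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Histogram with millisecond buckets; the name mirrors the changelog entry,
// the bucket layout here is an arbitrary example.
var cellsAndProofsMs = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "cells_and_proofs_from_structured_computation_milliseconds",
	Help:    "Time taken to compute cells and proofs from structured blobs, in milliseconds.",
	Buckets: prometheus.ExponentialBuckets(10, 2, 10),
})

func computeCellsAndProofs() {
	time.Sleep(5 * time.Millisecond) // stand-in for the real KZG work
}

func main() {
	start := time.Now()
	computeCellsAndProofs()
	cellsAndProofsMs.Observe(float64(time.Since(start).Milliseconds()))
}
```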
@@ -1,3 +0,0 @@
### Added

- Batch publish data columns for faster data propagation.

3  changelog/potuz_check_twice_attseen.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Fixed a possible race when validating two attestations at the same time.

@@ -1,2 +0,0 @@
### Changed
- Use dependent root and target root to verify data column proposer index.
3  changelog/potuz_finalized_deproot.md  Normal file
@@ -0,0 +1,3 @@
### Added

- Track the dependent root of the latest finalized checkpoint in forkchoice.

@@ -1,2 +0,0 @@
### Added
- Add a feature flag to pass spectests with low validator count.

3  changelog/potuz_next_epoch_attributes.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Do not process slots and copy states for next epoch proposers after Fulu.

@@ -1,3 +0,0 @@
### Added

- Add feature flag `--enable-proposer-preprocessing` to process the block and verify signatures before proposing.
3  changelog/potuz_return_indices_updateerr.md  Normal file
@@ -0,0 +1,3 @@
### Fixed

- Do not error when the committee has been computed correctly but updating the cache failed.

@@ -1,2 +0,0 @@
### Added
- Update spectests to v1.7.0-alpha.0

@@ -1,2 +0,0 @@
### Added
- Update spectests to v1.7.0-alpha-1.

@@ -1,3 +0,0 @@
### Ignored

- Updated changelog for v7.1.2
3  changelog/pvl-v7.0.1.md  Normal file
@@ -0,0 +1,3 @@
### Ignored

- Updated CHANGELOG.md for v7.0.1 patch release

3  changelog/pvl-v7.1.0.md  Normal file
@@ -0,0 +1,3 @@
### Ignored

- Changelog for v7.1.0

@@ -1,3 +0,0 @@
### Ignored

- Added changelog for v7.1.1

@@ -1,3 +0,0 @@
### Changed

- Replaced `time.Sleep` with `require.Eventually` polling in tests to fix flaky behavior caused by race conditions between goroutines and assertions.
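For reference, the `require.Eventually` pattern polls a condition instead of sleeping for a fixed time before asserting. A minimal sketch with a made-up background task:

```go
package example

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestEventuallyInsteadOfSleep(t *testing.T) {
	var done atomic.Bool
	go func() {
		time.Sleep(20 * time.Millisecond) // background work finishing at an unpredictable time
		done.Store(true)
	}()

	// Polls every 5ms for up to 1s, instead of a fixed time.Sleep before the assertion.
	require.Eventually(t, func() bool { return done.Load() }, time.Second, 5*time.Millisecond)
}
```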
3  changelog/radek_httperror-analyzer.md  Normal file
@@ -0,0 +1,3 @@
### Added

- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement.

3  changelog/radek_use-statefetch-error.md  Normal file
@@ -0,0 +1,3 @@
### Ignored

- Use `WriteStateFetchError` in API handlers whenever possible.
3  changelog/satushh-eth1copy.md  Normal file
@@ -0,0 +1,3 @@
### Removed

- Unnecessary copy in Eth1DataHasEnoughSupport.

3  changelog/satushh-graffiti-impl.md  Normal file
@@ -0,0 +1,3 @@
### Added

- Graffiti implementation based on the design doc.

3  changelog/satushh-graffiti.md  Normal file
@@ -0,0 +1,3 @@
### Added

- Proposal design document to implement graffiti. Currently it is empty by default and the idea is to have it of the form GE168dPR63af.
3  changelog/satushh-migratetocold.md  Normal file
@@ -0,0 +1,3 @@
### Changed

- Optimise migratetocold by avoiding a brute-force for loop.

@@ -1,3 +0,0 @@
### Changed

- Performance improvement in state (MarshalSSZTo): use copy() instead of an unnecessary byte-by-byte loop.
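The MarshalSSZTo change is the standard Go idiom of replacing an element-wise loop with the built-in copy. A tiny illustration, not the actual state marshaling code:

```go
package main

import "fmt"

func main() {
	src := []byte{0x01, 0x02, 0x03, 0x04}
	dst := make([]byte, 8)

	// Instead of:
	//   for i := range src { dst[4+i] = src[i] }
	// copy does the same thing in one call and is typically optimized to a memmove.
	n := copy(dst[4:], src)

	fmt.Println(n, dst) // 4 [0 0 0 0 1 2 3 4]
}
```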
@@ -1,3 +0,0 @@
### Added

- Add `ProofByFieldIndex` to generalize merkle proof generation for `BeaconState`.
Some files were not shown because too many files have changed in this diff.