Compare commits

...

21 Commits

Author SHA1 Message Date
potuz
c4880a91f0 fix: use attestation slot epoch for fork digest in gossip topic validation
The attestation gossip validator was using currentForkDigest() to build
the expected topic prefix. This fails at fork boundaries because the
beacon node subscribes to the next fork's topics one epoch early: a
message arriving on the upcoming fork's topic would be compared against
the current fork digest, causing a spurious reject with the misleading
error "attestation's subnet does not match with pubsub topic".

Use the attestation's own slot epoch to derive the correct fork digest
instead, so the validation matches the topic the attestation was
actually published on.
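The shape of the fix can be sketched with simplified stand-in types (a hypothetical two-fork schedule; none of these names are Prysm's actual identifiers): keying the digest lookup on the attestation's own slot epoch, rather than the node's current epoch, selects the upcoming fork's digest for messages that arrive on its topic one epoch early.

```go
package main

import "fmt"

// Epoch and ForkDigest are simplified stand-ins for the real Prysm types,
// and the fork schedule below is hypothetical; this only sketches the shape
// of the fix, not Prysm's implementation.
type Epoch uint64
type ForkDigest [4]byte

// forkSchedule maps fork activation epochs to digests (hypothetical values),
// sorted by ascending activation epoch.
var forkSchedule = []struct {
	epoch  Epoch
	digest ForkDigest
}{
	{0, ForkDigest{0x01}},   // current fork
	{200, ForkDigest{0x02}}, // upcoming fork; topics are subscribed one epoch early
}

// forkDigestForEpoch returns the digest of the latest fork activated at or
// before the given epoch. Validating against the attestation's slot epoch
// picks the digest matching the topic the message was published on.
func forkDigestForEpoch(e Epoch) ForkDigest {
	digest := forkSchedule[0].digest
	for _, f := range forkSchedule {
		if e >= f.epoch {
			digest = f.digest
		}
	}
	return digest
}

func main() {
	// Node clock is at epoch 199, attestation targets epoch 200: deriving the
	// digest from the attestation's epoch accepts the message on the upcoming
	// fork's topic, where the node's current epoch would have rejected it.
	fmt.Printf("epoch 199 digest: %#x\n", forkDigestForEpoch(199))
	fmt.Printf("epoch 200 digest: %#x\n", forkDigestForEpoch(200))
}
```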

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 09:55:47 -03:00
potuz
0e8a4d0538 fix botched rebase 2026-03-27 13:57:56 -03:00
Potuz
f7cc6e91b3 align getLookupParentRoot with develop 2026-03-27 13:37:37 -03:00
Barnabas Busa
b0799980cd fix: add missing ePBS values to /eth/v1/config/spec (#16557)
Add PTC_SIZE, MAX_PAYLOAD_ATTESTATIONS, BUILDER_REGISTRY_LIMIT, and
BUILDER_PENDING_WITHDRAWALS_LIMIT as fieldparams-derived constants
exposed via the config spec endpoint. Also add
DOMAIN_PROPOSER_PREFERENCE (0x0d000000) to the BeaconChainConfig struct
and mainnet config.


**What type of PR is this?**


**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 13:29:10 -03:00
terence
c116f6452f Fix empty blob KZG commitments in self-build execution payload bids 2026-03-27 13:20:53 -03:00
potuz
d00ef108d5 Add block processing pipeline 2026-03-27 13:20:53 -03:00
potuz
0dd3468ed5 Fetch Payloads along side blocks on init sync
Adds envelopes to the fetchRequestResponse struct which is populated by
a call to fetchPayloads.

When requesting block batches, if the batch crosses the Fulu fork it is
truncated. For a batch that is purely Gloas, the corresponding payload
envelopes are requested by range.

The batch fetcher verifies payload-block consistency, that is, that the
full batch could be imported completely into the blockchain without gaps.
If the payloads and blocks are inconsistent and were delivered by the same
peer, we downscore that peer.

The batch fetcher verifies self-consistency of the payloads, that is, that
the payloads follow the parent-hash chain. If they don't, we downscore the
peer that served them.

It changes the signature of fetchSidecars and fetchBlocksFromPeer to
modify the response in place.

Add unit tests for the new fetch functions.
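The two consistency rules described above can be sketched with minimal stand-in types (the type and field names here are illustrative, not Prysm's): one check that each envelope belongs to the block at the same batch index, and one that the envelopes form an unbroken parent-hash chain.

```go
package main

import "fmt"

// Minimal stand-ins for blocks and payload envelopes; these sketch the batch
// fetcher's checks, not Prysm's actual types.
type batchBlock struct {
	root [32]byte
}
type batchEnvelope struct {
	beaconBlockRoot [32]byte
	blockHash       [32]byte
	parentHash      [32]byte
}

// payloadsMatchBlocks reports whether every envelope belongs to the block at
// the same index, i.e. the full batch could be imported without gaps.
func payloadsMatchBlocks(blocks []batchBlock, envs []batchEnvelope) bool {
	if len(blocks) != len(envs) {
		return false
	}
	for i := range envs {
		if envs[i].beaconBlockRoot != blocks[i].root {
			return false
		}
	}
	return true
}

// payloadsSelfConsistent reports whether the envelopes follow an unbroken
// parent-hash chain; a break means the serving peer gets downscored.
func payloadsSelfConsistent(envs []batchEnvelope) bool {
	for i := 1; i < len(envs); i++ {
		if envs[i].parentHash != envs[i-1].blockHash {
			return false
		}
	}
	return true
}

func main() {
	a := batchEnvelope{beaconBlockRoot: [32]byte{1}, blockHash: [32]byte{0xaa}}
	b := batchEnvelope{beaconBlockRoot: [32]byte{2}, blockHash: [32]byte{0xbb}, parentHash: [32]byte{0xaa}}
	blocks := []batchBlock{{root: [32]byte{1}}, {root: [32]byte{2}}}
	fmt.Println(payloadsMatchBlocks(blocks, []batchEnvelope{a, b}))
	fmt.Println(payloadsSelfConsistent([]batchEnvelope{a, b}))
}
```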
2026-03-27 13:20:53 -03:00
terence
7c0f87adfc Fix build 2026-03-27 13:20:53 -03:00
terence
c4da776ecd Clean up 2026-03-27 13:20:53 -03:00
Potuz
68f1a498c0 Handle missing Payloads
In the event of a late missing Payload, we add a helper that updates the
beacon block post-state in the NSC, handles the boundary caches, and calls
FCU if we are proposing in the next slot.
2026-03-27 13:20:53 -03:00
terence
9e8b0e1104 Run golangci-lint 2026-03-27 13:20:53 -03:00
Potuz
c020306ca8 Add payload weights to non-canonical block log
Add PayloadWeights method to forkchoice that returns the empty and full
payload node weights for a given root. Use it in logNonCanonicalBlockReceived
to help debug cases where head gets stuck due to empty vs full weight
distribution.
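A forkchoice store exposing per-root empty/full weights might look like the following sketch (the node layout and names are assumptions, not Prysm's actual forkchoice representation):

```go
package main

import "fmt"

// fcNode holds the two competing payload weights for one block root; the
// layout is an assumption for illustration only.
type fcNode struct {
	emptyWeight uint64 // weight voting for the block with an empty payload
	fullWeight  uint64 // weight voting for the block with the full payload
}

type fcStore struct {
	nodes map[[32]byte]*fcNode
}

// payloadWeights returns the empty and full payload weights for root, and
// false if the root is unknown to forkchoice. Logging both values helps
// debug a head stuck on the empty-vs-full weight split.
func (s *fcStore) payloadWeights(root [32]byte) (empty, full uint64, ok bool) {
	n, found := s.nodes[root]
	if !found {
		return 0, 0, false
	}
	return n.emptyWeight, n.fullWeight, true
}

func main() {
	s := &fcStore{nodes: map[[32]byte]*fcNode{
		{7}: {emptyWeight: 32, fullWeight: 96},
	}}
	e, f, _ := s.payloadWeights([32]byte{7})
	fmt.Println("empty:", e, "full:", f)
}
```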

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 13:20:53 -03:00
satushh
60e7ad8d3d Implementation of ExecutionPayloadEnvelopesByRoot v1 (#16394)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Implements the `ExecutionPayloadEnvelopesByRoot v1` RPC method as specified
in
https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/p2p-interface.md#executionpayloadenvelopesbyroot-v1.

This allows peers to request signed execution payload envelopes by their
beacon block root.

This is a cache-only initial implementation.
TODO in follow-up work: DB storage.
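A cache-only lookup like the one described can be sketched as follows: envelopes are served from an in-memory map keyed by beacon block root, and unknown roots are simply skipped until DB fallback lands. Type and field names here are illustrative, not Prysm's.

```go
package main

import "fmt"

type blockRoot [32]byte

// signedEnvelope is a stand-in for the signed execution payload envelope.
type signedEnvelope struct {
	slot uint64
}

// envelopeCache is an in-memory store keyed by beacon block root.
type envelopeCache struct {
	byRoot map[blockRoot]*signedEnvelope
}

// envelopesByRoot returns cached envelopes for the requested roots,
// preserving request order and omitting cache misses.
func (c *envelopeCache) envelopesByRoot(roots []blockRoot) []*signedEnvelope {
	out := make([]*signedEnvelope, 0, len(roots))
	for _, r := range roots {
		if env, ok := c.byRoot[r]; ok {
			out = append(out, env)
		}
	}
	return out
}

func main() {
	c := &envelopeCache{byRoot: map[blockRoot]*signedEnvelope{
		{1}: {slot: 10},
	}}
	got := c.envelopesByRoot([]blockRoot{{1}, {2}}) // {2} is a miss
	fmt.Println(len(got), got[0].slot)
}
```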

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-03-27 13:20:53 -03:00
Potuz
8de77c50e9 Remove extra checks 2026-03-27 13:20:53 -03:00
Potuz
003fff1a6e process pending payloads regularly 2026-03-27 13:20:53 -03:00
Potuz
f5e77f340b make Dora Happy
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 13:20:53 -03:00
Potuz
fade350a27 add execution payload envelope support to pcli
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 13:20:52 -03:00
Potuz
8de6a446a8 remove stale patch 2026-03-27 13:20:52 -03:00
terence
7ec8be8ce1 More bug fixes 2026-03-27 13:20:52 -03:00
Potuz
6226c86b94 Don't use NSC on gloas proposer 2026-03-27 13:20:52 -03:00
terence
06f46427e1 Various bug fixes 2026-03-27 13:20:52 -03:00
25 changed files with 170 additions and 158 deletions

View File

@@ -271,7 +271,7 @@ func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header in
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests, blk.Block().Slot())
if err == nil {
newPayloadValidNodeCount.Inc()
return true, nil

View File

@@ -145,50 +145,6 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
// applyPayloadIfNeeded applies the parent block's execution payload envelope to
// preState when the current block's bid indicates it built on a full parent.
func (s *Service) applyPayloadIfNeeded(ctx context.Context, b interfaces.ReadOnlyBeaconBlock, parentRoot [32]byte, preState state.BeaconState) error {
if b.Version() < version.Gloas || parentRoot == [32]byte{} {
return nil
}
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return errors.Wrapf(err, "could not get parent block with root %#x", parentRoot)
}
if parentBlock.Version() < version.Gloas {
return nil
}
sb, err := b.Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrap(err, "could not get execution payload bid for block")
}
if sb == nil || sb.Message == nil {
return fmt.Errorf("missing execution payload bid for block at slot %d", b.Slot())
}
parentBid, err := parentBlock.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return errors.Wrapf(err, "could not get execution payload bid for parent block with root %#x", parentRoot)
}
if parentBid == nil || parentBid.Message == nil {
return fmt.Errorf("missing execution payload bid for parent block with root %#x", parentRoot)
}
if !bytes.Equal(sb.Message.ParentBlockHash, parentBid.Message.BlockHash) {
return nil
}
signedEnvelope, err := s.cfg.BeaconDB.ExecutionPayloadEnvelope(ctx, parentRoot)
if err != nil {
return errors.Wrapf(err, "could not get execution payload envelope for parent block with root %#x", parentRoot)
}
if signedEnvelope == nil || signedEnvelope.Message == nil {
return nil
}
envelope, err := consensusblocks.WrappedROBlindedExecutionPayloadEnvelope(signedEnvelope.Message)
if err != nil {
return errors.Wrapf(err, "could not wrap blinded execution payload envelope for parent block with root %#x", parentRoot)
}
return gloas.ApplyBlindedExecutionPayloadEnvelopeForStateGen(ctx, preState, parentBlock.Block().StateRoot(), envelope)
}
// getBatchPrestate returns the pre-state to apply to the first beacon block in the batch and returns true if it applied the first envelope before
func (s *Service) getBatchPrestate(ctx context.Context, b consensusblocks.ROBlock, envelopes []interfaces.ROSignedExecutionPayloadEnvelope) (state.BeaconState, bool, error) {
if len(envelopes) == 0 || b.Version() < version.Gloas {
@@ -223,6 +179,10 @@ func (s *Service) getBatchPrestate(ctx context.Context, b consensusblocks.ROBloc
}
return blockPreState, false, nil
}
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent block")
}
env, err := envelopes[0].Envelope()
if err != nil {
return nil, false, err
@@ -235,10 +195,6 @@ func (s *Service) getBatchPrestate(ctx context.Context, b consensusblocks.ROBloc
if _, err := s.notifyNewEnvelope(ctx, blockPreState, env); err != nil {
return nil, false, err
}
parentBlock, err := s.cfg.BeaconDB.Block(ctx, parentRoot)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent block")
}
if err := gloas.ApplyBlindedExecutionPayloadEnvelopeForStateGen(ctx, blockPreState, parentBlock.Block().StateRoot(), env); err != nil {
return nil, false, err
}

View File

@@ -203,31 +203,6 @@ func (s *Service) getPayloadEnvelopePrestate(ctx context.Context, envelope inter
return preState, nil
}
func (s *Service) callNewPayload(
ctx context.Context,
payload interfaces.ExecutionData,
versionedHashes []common.Hash,
parentRoot common.Hash,
requests *enginev1.ExecutionRequests,
slot primitives.Slot,
) (bool, error) {
_, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests)
if err == nil {
return true, nil
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
log.WithFields(logrus.Fields{
"slot": slot,
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic envelope")
return false, nil
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
return false, invalidBlock{error: ErrInvalidPayload}
}
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
func (s *Service) notifyNewEnvelopeFromBlock(ctx context.Context, b blocks.ROBlock, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewEnvelopeFromBlock")
defer span.End()
@@ -236,15 +211,35 @@ func (s *Service) notifyNewEnvelopeFromBlock(ctx context.Context, b blocks.ROBlo
if err != nil {
return false, errors.Wrap(err, "could not get execution payload from envelope")
}
sbid, err := b.Block().Body().SignedExecutionPayloadBid()
if err != nil {
return false, errors.Wrap(err, "could not get signed execution payload bid from block")
}
versionedHashes := make([]common.Hash, len(sbid.Message.BlobKzgCommitments))
for i, c := range sbid.Message.BlobKzgCommitments {
commitments := sbid.Message.BlobKzgCommitments
versionedHashes := make([]common.Hash, len(commitments))
for i, c := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(c)
}
return s.callNewPayload(ctx, payload, versionedHashes, common.Hash(b.Block().ParentRoot()), envelope.ExecutionRequests(), envelope.Slot())
parentRoot := common.Hash(b.Block().ParentRoot())
requests := envelope.ExecutionRequests()
_, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests, envelope.Slot())
if err == nil {
return true, nil
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
log.WithFields(logrus.Fields{
"slot": envelope.Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic envelope")
return false, nil
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
return false, invalidBlock{error: ErrInvalidPayload}
}
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
// The returned boolean indicates whether the payload was valid or if it was accepted as syncing (optimistic).
@@ -265,7 +260,25 @@ func (s *Service) notifyNewEnvelope(ctx context.Context, st state.BeaconState, e
for i, c := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(c)
}
return s.callNewPayload(ctx, payload, versionedHashes, common.Hash(bytesutil.ToBytes32(st.LatestBlockHeader().ParentRoot)), envelope.ExecutionRequests(), envelope.Slot())
parentRoot := common.Hash(bytesutil.ToBytes32(st.LatestBlockHeader().ParentRoot))
requests := envelope.ExecutionRequests()
_, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests, envelope.Slot())
if err == nil {
return true, nil
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
log.WithFields(logrus.Fields{
"slot": envelope.Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic envelope")
return false, nil
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
return false, invalidBlock{error: ErrInvalidPayload}
}
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
func (s *Service) validateExecutionOnEnvelope(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {

View File

@@ -3,6 +3,7 @@ package execution
import (
"context"
"fmt"
"math"
"math/big"
"strings"
"time"
@@ -56,6 +57,11 @@ var (
GetPayloadMethodV4,
}
gloasEngineEndpoints = []string{
NewPayloadMethodV5,
GetPayloadMethodV5,
}
fuluEngineEndpoints = []string{
GetPayloadMethodV5,
GetBlobsV2,
@@ -80,6 +86,8 @@ const (
NewPayloadMethodV3 = "engine_newPayloadV3"
// NewPayloadMethodV4 is the engine_newPayloadVX method added at Electra.
NewPayloadMethodV4 = "engine_newPayloadV4"
// NewPayloadMethodV5 is the engine_newPayloadVX method added at Gloas.
NewPayloadMethodV5 = "engine_newPayloadV5"
// ForkchoiceUpdatedMethod v1 request string for JSON-RPC.
ForkchoiceUpdatedMethod = "engine_forkchoiceUpdatedV1"
// ForkchoiceUpdatedMethodV2 v2 request string for JSON-RPC.
@@ -148,7 +156,7 @@ type Reconstructor interface {
// EngineCaller defines a client that can interact with an Ethereum
// execution node's engine service via JSON-RPC.
type EngineCaller interface {
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error)
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests, slot primitives.Slot) ([]byte, error)
ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs payloadattribute.Attributer,
) (*pb.PayloadIDBytes, []byte, error)
@@ -161,7 +169,7 @@ type EngineCaller interface {
var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
// NewPayload request calls the engine_newPayloadVX method via JSON-RPC.
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error) {
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests, slot primitives.Slot) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
defer span.End()
defer func(start time.Time) {
@@ -195,7 +203,11 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
if err != nil {
return nil, errors.Wrap(err, "failed to encode execution requests")
}
err = s.rpcClient.CallContext(ctx, result, NewPayloadMethodV4, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
method := NewPayloadMethodV4
if slots.ToEpoch(slot) >= params.BeaconConfig().GloasForkEpoch {
method = NewPayloadMethodV5
}
err = s.rpcClient.CallContext(ctx, result, method, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
if err != nil {
return nil, handleRPCError(err)
}
@@ -261,7 +273,7 @@ func (s *Service) ForkchoiceUpdated(
if err != nil {
return nil, nil, handleRPCError(err)
}
case version.Deneb, version.Electra, version.Fulu:
case version.Deneb, version.Electra, version.Gloas, version.Fulu:
a, err := attrs.PbV3()
if err != nil {
return nil, nil, err
@@ -295,7 +307,7 @@ func (s *Service) ForkchoiceUpdated(
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
epoch := slots.ToEpoch(slot)
if epoch >= params.BeaconConfig().FuluForkEpoch {
if epoch >= params.BeaconConfig().FuluForkEpoch || epoch >= params.BeaconConfig().GloasForkEpoch {
return GetPayloadMethodV5, &pb.ExecutionBundleFulu{}
}
if epoch >= params.BeaconConfig().ElectraForkEpoch {
@@ -343,6 +355,10 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
supportedEngineEndpoints = append(supportedEngineEndpoints, electraEngineEndpoints...)
}
if params.BeaconConfig().GloasForkEpoch < math.MaxUint64 {
supportedEngineEndpoints = append(supportedEngineEndpoints, gloasEngineEndpoints...)
}
if params.FuluEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, fuluEngineEndpoints...)
}

View File

@@ -129,7 +129,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayload(req)
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
@@ -140,7 +140,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(req)
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
@@ -605,7 +605,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -619,7 +619,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -633,7 +633,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -672,7 +672,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -686,7 +686,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -700,7 +700,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -714,7 +714,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -753,7 +753,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -767,7 +767,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -781,7 +781,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -795,7 +795,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -833,7 +833,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -847,7 +847,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -861,7 +861,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -875,7 +875,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -914,7 +914,7 @@ func TestClient_HTTP(t *testing.T) {
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests, 0)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -928,7 +928,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil, 0)
require.ErrorIs(t, err, ErrUnknownPayloadStatus)
require.DeepEqual(t, []uint8(nil), resp)
})

View File

@@ -49,7 +49,7 @@ type EngineClient struct {
}
// NewPayload --
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash, _ *pb.ExecutionRequests) ([]byte, error) {
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash, _ *pb.ExecutionRequests, _ primitives.Slot) ([]byte, error) {
return e.NewPayloadResp, e.ErrNewPayload
}

View File

@@ -43,9 +43,9 @@ const (
blockHashCalled
dependentRootCalled
dependentRootForEpochCalled
canonicalNodeAtSlotCalled
payloadWeightsCalled
payloadContentLookupCalled
canonicalNodeAtSlotCalled
)
func _discard(t *testing.T, e error) {

View File

@@ -139,7 +139,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
savedAttestationBroadcasts.Inc()
return nil
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
log.WithError(err).Trace("Failed to find peers")
tracing.AnnotateError(span, err)
}
}
@@ -155,7 +155,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
}
if err := s.broadcastObject(ctx, att, attestationToTopic(subnet, forkDigest)); err != nil {
log.WithError(err).Error("Failed to broadcast attestation")
log.WithError(err).Trace("Failed to broadcast attestation")
tracing.AnnotateError(span, err)
}
}
@@ -195,14 +195,14 @@ func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMs
savedSyncCommitteeBroadcasts.Inc()
return nil
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
log.WithError(err).Trace("Failed to find peers")
tracing.AnnotateError(span, err)
}
}
// In the event our sync message is outdated and beyond the
// acceptable threshold, we exit early and do not broadcast it.
if err := altair.ValidateSyncMessageTime(sMsg.Slot, s.genesisTime, params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
log.WithError(err).Warn("Sync Committee Message is too old to broadcast, discarding it")
log.WithError(err).Trace("Sync Committee Message is too old to broadcast, discarding it")
return
}
@@ -260,7 +260,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
blobSidecarBroadcasts.Inc()
return nil
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
log.WithError(err).Trace("Failed to find peers")
tracing.AnnotateError(span, err)
}
}

View File

@@ -138,6 +138,10 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
case strings.Contains(topic, GossipPayloadAttestationMessageMessage):
return defaultBlockTopicParams(), nil
case strings.Contains(topic, GossipExecutionPayloadEnvelopeMessage):
return defaultBlockTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage), strings.Contains(topic, GossipDataColumnSidecarMessage):
// TODO(Deneb): Using the default block scoring. But this should be updated.
return defaultBlockTopicParams(), nil
@@ -217,13 +221,13 @@ func defaultAggregateTopicParams(activeValidators uint64) *pubsub.TopicScorePara
aggPerSlot := aggregatorsPerSlot(activeValidators)
firstMessageCap, err := decayLimit(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot*2/gossipSubD))
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
meshThreshold, err := decayThreshold(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot)/dampeningFactor)
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
meshWeight := -scoreByWeight(aggregateWeight, meshThreshold)
@@ -259,13 +263,13 @@ func defaultSyncContributionTopicParams() *pubsub.TopicScoreParams {
aggPerSlot := params.BeaconConfig().SyncCommitteeSubnetCount * params.BeaconConfig().TargetAggregatorsPerSyncSubcommittee
firstMessageCap, err := decayLimit(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot*2/gossipSubD))
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
meshThreshold, err := decayThreshold(scoreDecay(1*oneEpochDuration()), float64(aggPerSlot)/dampeningFactor)
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
meshWeight := -scoreByWeight(syncContributionWeight, meshThreshold)
@@ -302,13 +306,13 @@ func defaultAggregateSubnetTopicParams(activeValidators uint64) *pubsub.TopicSco
topicWeight := attestationTotalWeight / float64(subnetCount)
subnetWeight := activeValidators / subnetCount
if subnetWeight == 0 {
log.Warn("Subnet weight is 0, skipping initializing topic scoring")
log.Trace("Subnet weight is 0, skipping initializing topic scoring")
return nil
}
// Determine the amount of validators expected in a subnet in a single slot.
numPerSlot := time.Duration(subnetWeight / uint64(params.BeaconConfig().SlotsPerEpoch))
if numPerSlot == 0 {
log.Warn("Number per slot is 0, skipping initializing topic scoring")
log.Trace("Number per slot is 0, skipping initializing topic scoring")
return nil
}
comsPerSlot := committeeCountPerSlot(activeValidators)
@@ -327,14 +331,14 @@ func defaultAggregateSubnetTopicParams(activeValidators uint64) *pubsub.TopicSco
// Determine expected first deliveries based on the message rate.
firstMessageCap, err := decayLimit(scoreDecay(firstDecay*oneEpochDuration()), float64(rate))
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
// Determine expected mesh deliveries based on message rate applied with a dampening factor.
meshThreshold, err := decayThreshold(scoreDecay(meshDecay*oneEpochDuration()), float64(numPerSlot)/dampeningFactor)
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
meshWeight := -scoreByWeight(topicWeight, meshThreshold)
@@ -390,14 +394,14 @@ func defaultSyncSubnetTopicParams(activeValidators uint64) *pubsub.TopicScorePar
// Determine expected first deliveries based on the message rate.
firstMessageCap, err := decayLimit(scoreDecay(firstDecay*oneEpochDuration()), float64(rate))
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
firstMessageWeight := maxFirstDeliveryScore / firstMessageCap
// Determine expected mesh deliveries based on message rate applied with a dampening factor.
meshThreshold, err := decayThreshold(scoreDecay(meshDecay*oneEpochDuration()), float64(subnetWeight)/dampeningFactor)
if err != nil {
log.WithError(err).Warn("Skipping initializing topic scoring")
log.WithError(err).Trace("Skipping initializing topic scoring")
return nil
}
meshWeight := -scoreByWeight(topicWeight, meshThreshold)

@@ -22,6 +22,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//testing/assert:go_default_library",

@@ -191,6 +191,11 @@ func prepareConfigSpec() (map[string]any, error) {
data["KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH"] = convertValueForJSON(reflect.ValueOf(uint64(4)), "KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH")
// UPDATE_TIMEOUT is derived from SLOTS_PER_EPOCH * EPOCHS_PER_SYNC_COMMITTEE_PERIOD
data["UPDATE_TIMEOUT"] = convertValueForJSON(reflect.ValueOf(uint64(config.SlotsPerEpoch)*uint64(config.EpochsPerSyncCommitteePeriod)), "UPDATE_TIMEOUT")
// Add Gloas config values from fieldparams required by the /eth/v1/config/spec API.
data["PTC_SIZE"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.PTCSize)), "PTC_SIZE")
data["MAX_PAYLOAD_ATTESTATIONS"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.MaxPayloadAttestations)), "MAX_PAYLOAD_ATTESTATIONS")
data["BUILDER_REGISTRY_LIMIT"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.BuilderRegistryLimit)), "BUILDER_REGISTRY_LIMIT")
data["BUILDER_PENDING_WITHDRAWALS_LIMIT"] = convertValueForJSON(reflect.ValueOf(uint64(fieldparams.BuilderPendingWithdrawalsLimit)), "BUILDER_PENDING_WITHDRAWALS_LIMIT")
return data, nil
}

@@ -9,9 +9,11 @@ import (
"net/http"
"net/http/httptest"
"reflect"
"strconv"
"testing"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -229,7 +231,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 198, len(data))
assert.Equal(t, 203, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -651,6 +653,14 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "128", v) // From fieldparams.NumberOfColumns
case "UPDATE_TIMEOUT":
assert.Equal(t, "1782", v) // SlotsPerEpoch (27) * EpochsPerSyncCommitteePeriod (66)
case "PTC_SIZE":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.PTCSize), 10), v)
case "MAX_PAYLOAD_ATTESTATIONS":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.MaxPayloadAttestations), 10), v)
case "BUILDER_REGISTRY_LIMIT":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.BuilderRegistryLimit), 10), v)
case "BUILDER_PENDING_WITHDRAWALS_LIMIT":
assert.Equal(t, strconv.FormatUint(uint64(fieldparams.BuilderPendingWithdrawalsLimit), 10), v)
default:
t.Errorf("Incorrect key: %s", k)
}

@@ -112,7 +112,7 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
"topic": topic,
"peerCount": len(peers),
"minimumPeerCount": minimumPeerCount,
}).Debug("Insufficient peers for data column sidecar topic to maintain custody count")
}).Trace("Insufficient peers for data column sidecar topic to maintain custody count")
enoughPeers = false
}
}

@@ -332,17 +332,17 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
f.fetchBlocksFromPeer(ctx, response, peers)
if response.err != nil {
log.WithError(response.err).Debug("Failed to fetch blocks")
log.WithError(response.err).Error("Failed to fetch blocks")
return response
}
f.fetchSidecars(ctx, response, peers)
if response.err != nil {
log.WithError(response.err).Debug("Failed to fetch sidecars")
log.WithError(response.err).Error("Failed to fetch sidecars")
return response
}
f.fetchPayloads(ctx, response, peers)
if response.err != nil {
log.WithError(response.err).Debug("Failed to fetch payloads")
log.WithError(response.err).Error("Failed to fetch payloads")
}
return response
}

@@ -32,9 +32,9 @@ func makeEnvelope(t *testing.T, slot primitives.Slot, blockHash [32]byte, parent
env := &ethpb.SignedExecutionPayloadEnvelope{
Signature: make([]byte, fieldparams.BLSSignatureLength),
Message: &ethpb.ExecutionPayloadEnvelope{
Slot: slot,
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
ExecutionRequests: &enginev1.ExecutionRequests{},
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: parentHash[:],

@@ -478,37 +478,37 @@ func (s *Service) fetchAndQueuePayloadEnvelopesForRoots(
return
}
var envelopeRoots p2ptypes.ExecutionPayloadEnvelopesByRootReq
for _, root := range roots {
if s.cfg.beaconDB.HasExecutionPayloadEnvelope(ctx, root) {
continue
}
blk, found, err := s.pendingBlockByRoot(root)
if err != nil {
log.WithError(err).WithField("root", fmt.Sprintf("%#x", root)).Debug("Could not inspect pending block by root")
continue
}
// Only request payload envelopes for blocks strictly after the Gloas start slot.
if !found || blk.Block().Slot() <= gloasStartSlot {
continue
}
envelopeRoots = append(envelopeRoots, root)
}
if len(envelopeRoots) == 0 {
return
}
envelopes, err := SendExecutionPayloadEnvelopesByRootRequest(ctx, s.cfg.clock, s.cfg.p2p, pid, s.ctxMap, &envelopeRoots)
if err != nil {
log.WithError(err).Debug("Could not request execution payload envelopes by root")
return
}
for _, env := range envelopes {
if env == nil || env.Message == nil {
if s.cfg.beaconDB.HasExecutionPayloadEnvelope(ctx, root) {
continue
}
s.queuePendingPayloadEnvelopeFromRootRequest(env)
req := p2ptypes.ExecutionPayloadEnvelopesByRootReq{root}
envelopes, err := SendExecutionPayloadEnvelopesByRootRequest(ctx, s.cfg.clock, s.cfg.p2p, pid, s.ctxMap, &req)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peer": pid,
"root": fmt.Sprintf("%#x", root),
"slot": blk.Block().Slot(),
}).Debug("Could not request execution payload envelope by root")
continue
}
for _, env := range envelopes {
if env == nil || env.Message == nil {
continue
}
s.queuePendingPayloadEnvelopeFromRootRequest(env)
}
}
}

@@ -527,7 +527,7 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
return pubsub.ValidationIgnore
}
if currDigest != retDigest {
log.WithField("topic", topic).Debugf("Received message from outdated fork digest %#x", retDigest)
log.WithField("topic", topic).Tracef("Received message from outdated fork digest %#x", retDigest)
return pubsub.ValidationIgnore
}
b, err := v(ctx, pid, msg)

@@ -251,11 +251,7 @@ func (s *Service) validateUnaggregatedAttTopic(ctx context.Context, a eth.Att, b
}
subnet := helpers.ComputeSubnetForAttestation(valCount, a)
format := p2p.GossipTypeMapping[reflect.TypeFor[*eth.Attestation]()]
digest, err := s.currentForkDigest()
if err != nil {
tracing.AnnotateError(span, err)
return pubsub.ValidationIgnore, err
}
digest := params.ForkDigest(slots.ToEpoch(a.GetData().Slot))
if !strings.HasPrefix(t, fmt.Sprintf(format, digest, subnet)) {
return pubsub.ValidationReject, errors.New("attestation's subnet does not match with pubsub topic")
}

@@ -0,0 +1,4 @@
### Added
- Added the missing ePBS values `PTC_SIZE`, `MAX_PAYLOAD_ATTESTATIONS`, `BUILDER_REGISTRY_LIMIT`, and `BUILDER_PENDING_WITHDRAWALS_LIMIT` to `/eth/v1/config/spec`.
- Added the missing `DOMAIN_PROPOSER_PREFERENCE` (0x0d000000) domain type to `/eth/v1/config/spec`.

@@ -0,0 +1,2 @@
### Fixed
- Use the fork digest derived from the attestation's slot epoch, instead of the node's current fork digest, when validating attestation gossip topics.

@@ -52,5 +52,6 @@ const (
CellsPerBlob = 64 // CellsPerBlob refers to the number of cells in a (non-extended) blob.
// Introduced in Gloas network upgrade.
PTCSize = 512 // PTCSize is the size of the payload timeliness committee.
MaxPayloadAttestations = 4 // MaxPayloadAttestations is the maximum number of payload attestations in a block.
)

@@ -52,5 +52,6 @@ const (
CellsPerBlob = 64 // CellsPerBlob refers to the number of cells in a (non-extended) blob.
// Introduced in Gloas network upgrade.
PTCSize = 2 // PTCSize is the size of the payload timeliness committee.
MaxPayloadAttestations = 4 // MaxPayloadAttestations is the maximum number of payload attestations in a block.
)

@@ -110,7 +110,7 @@ func (m *engineMock) ForkchoiceUpdated(context.Context, *pb.ForkchoiceState, pay
return nil, m.latestValidHash, m.payloadStatus
}
func (m *engineMock) NewPayload(context.Context, interfaces.ExecutionData, []common.Hash, *common.Hash, *pb.ExecutionRequests) ([]byte, error) {
func (m *engineMock) NewPayload(context.Context, interfaces.ExecutionData, []common.Hash, *common.Hash, *pb.ExecutionRequests, primitives.Slot) ([]byte, error) {
return m.latestValidHash, m.payloadStatus
}

@@ -59,6 +59,7 @@ var prettyCommand = &cli.Command{
"signed_block_header|" +
"signed_voluntary_exit|" +
"voluntary_exit|" +
"signed_execution_payload_envelope|" +
"state_capella",
Required: true,
Destination: &sszType,
@@ -89,6 +90,8 @@ var prettyCommand = &cli.Command{
data = &ethpb.SignedVoluntaryExit{}
case "voluntary_exit":
data = &ethpb.VoluntaryExit{}
case "signed_execution_payload_envelope":
data = &ethpb.SignedExecutionPayloadEnvelope{}
case "state_capella":
data = &ethpb.BeaconStateCapella{}
default:

@@ -90,7 +90,7 @@ func (v *validator) SubmitSyncCommitteeMessage(ctx context.Context, slot primiti
"timeSinceSlotStart": time.Since(slotTime),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(msg.BlockRoot)),
"validatorIndex": msg.ValidatorIndex,
}).Info("Submitted new sync message")
}).Trace("Submitted new sync message")
v.syncCommitteeStats.totalMessagesSubmitted.Add(1)
}
@@ -200,7 +200,7 @@ func (v *validator) SubmitSignedContributionAndProof(ctx context.Context, slot p
"subcommitteeIndex": contributionAndProof.Contribution.SubcommitteeIndex,
"aggregatorIndex": contributionAndProof.AggregatorIndex,
"bitsCount": contributionAndProof.Contribution.AggregationBits.Count(),
}).Info("Submitted new sync contribution and proof")
}).Trace("Submitted new sync contribution and proof")
}
}