Compare commits

..

3 Commits

Author SHA1 Message Date
james-prysm
1897e69725 adding db functions for saving gloas block and payload 2026-01-29 11:15:23 -06:00
james-prysm
919bd5d6aa Update health endpoint to include sync and optimistic checks (#16294)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

A node that is syncing or optimistic isn't fully ready yet. We don't
have an "is ready" endpoint, but I think having the gRPC endpoint match
[/eth/v1/node/health](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Node/getHealth)
more closely would be good. This endpoint is only used internally as far
as I can tell.

This is a prerequisite for https://github.com/OffchainLabs/prysm/pull/16215.

Tested via grpcurl against a syncing Hoodi node.
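
A minimal client sketch of the new semantics (the connection setup here is an illustrative assumption, not part of this PR):

```
// Sketch only: how an internal gRPC caller observes the new health
// semantics. Assumes conn is a *grpc.ClientConn to the beacon node.
client := ethpb.NewNodeClient(conn)
// GetHealth succeeds only when the node is synced and not optimistic;
// otherwise it returns codes.Unavailable ("node is syncing" /
// "node is optimistic") and sets an x-http-code header for
// gRPC-gateway (REST) clients.
if _, err := client.GetHealth(context.Background(), &ethpb.HealthRequest{}); err != nil {
	log.Printf("node not ready: %v", err)
}
```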

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-28 21:13:30 +00:00
fernantho
0476eeda57 SSZ-QL: custom Generic Merkle Proofs building the tree and collecting the hashes in one sweep (#16177)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR replaces the previous PR
https://github.com/OffchainLabs/prysm/pull/16121, which built the entire
Merkle tree and generated proofs only after the tree was complete. In
this PR, the Merkle proof is produced by collecting hashes while the
Merkle tree is being built, which has proven to be more efficient.
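
The core trick, sketched here as a simplification of `addTarget` from `proof_collector.go` in this diff: registering a target leaf records every sibling gindex on the path to the root, and merkleization then stores hashes only for registered gindices.

```
// Simplified from addTarget() in the diff below.
func registerTarget(gindex uint64, leaves, siblings map[uint64]struct{}) {
	leaves[gindex] = struct{}{}
	// Walk from the target up to (but not including) the root (gindex 1);
	// a Merkle proof needs exactly one sibling hash per level.
	for g := gindex; g > 1; g /= 2 {
		siblings[g^1] = struct{}{} // flip the last bit: left<->right sibling
	}
}
```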

- **ProofCollector**:
  - New `ProofCollector` type in `encoding/ssz/query/proof_collector.go`:
collects sibling hashes and leaves needed for Merkle proofs during
merkleization.
  - Multiproof-ready design with `requiredSiblings`/`requiredLeaves` maps
for registering target gindices before merkleization.
  - Thread-safe: read-only required maps during merkleization,
mutex-protected writes to `siblings`/`leaves`.
  - `AddTarget(gindex)` registers a target leaf and computes all required
sibling gindices along the path to the root.
  - `toProof()` converts collected data into a `fastssz.Proof` structure.
  - Parallel execution in `merkleizeVectorBody` for composite elements
with a worker pool pattern.
- **Optimized container hashing**: generalizes the
`stateutil.OptimizedValidatorRoots` pattern to any SSZ container type:
  - `optimizedContainerRoots`: parallelized field root computation plus
level-by-level vectorized hashing via `VectorizedSha256`.
  - `hashContainerHelper`: worker goroutine for processing container
subsets.
  - `containerFieldRoots`: computes field roots for a single container
using reflection and SszInfo metadata.

- **`Prove(gindex)` method** in `encoding/ssz/query/merkle_proof.go`:
entry point for generating SSZ Merkle proofs for a given generalized
index (see the usage sketch below this list).

- **Testing**
  - Added `merkle_proof_test.go` and `proof_collector_test.go` to test
and benchmark this feature.

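For context, the end-to-end flow looks roughly like this (a sketch mirroring the `proveAndVerify` helper added in `merkle_proof_test.go`; `obj` is any value implementing `query.SSZObject`, and error handling is elided):

```
info, err := query.AnalyzeObject(obj)
path, err := query.ParsePath(".slot")
gi, err := query.GetGeneralizedIndexFromPath(info, path)
proof, err := info.Prove(gi) // *fastssz.Proof: Leaf, Index, Hashes
root, err := obj.HashTreeRoot()
ok, err := ssz.VerifyProof(root[:], proof) // ssz = prysmaticlabs/fastssz
```
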
The main outcomes of the optimizations are shown below:
```
❯ go test ./encoding/ssz/query -run=^$ -bench='Benchmark(OptimizedContainerRoots|OptimizedValidatorRoots|ProofCollectorMerkleize)$' -benchmem
goos: darwin
goarch: arm64
pkg: github.com/OffchainLabs/prysm/v7/encoding/ssz/query
cpu: Apple M2 Pro
BenchmarkOptimizedValidatorRoots-10         3237            361029 ns/op          956858 B/op       6024 allocs/op
BenchmarkOptimizedContainerRoots-10         1138            969002 ns/op         3245223 B/op      11024 allocs/op
BenchmarkProofCollectorMerkleize-10          522           2262066 ns/op         3216000 B/op      19000 allocs/op
PASS
ok      github.com/OffchainLabs/prysm/v7/encoding/ssz/query     4.619s
```
Since `OptimizedValidatorRoots` implements very effective
optimizations, `OptimizedContainerRoots` mimics them.
The benchmark shows that `OptimizedValidatorRoots` remains the most
performant and serves as the baseline here:
- `ProofCollectorMerkleize` is **~6.3× slower**, uses **~3.4× more
memory** (B/op), and performs **~3.2× more allocations**.
- `OptimizedContainerRoots` sits in between: it’s **~2.7× slower** than
`OptimizedValidatorRoots` (and **~3.4× higher B/op**, **~1.8× more
allocations**), but it is a clear win over `ProofCollectorMerkleize` for
lists/vectors: **~2.3× faster** with **~1.7× fewer allocations** (and
essentially the same memory footprint).

The main drawback is that `OptimizedContainerRoots` can only be applied
to vector/list subtrees where we don't need to collect any sibling/leaf
data (i.e., no proof targets within that subtree); integrating it into
the recursive `merkleize(...)` flow when targets are outside the subtree
is expected to land in a follow-up PR.

**Which issues(s) does this PR fix?**
Partially https://github.com/OffchainLabs/prysm/issues/15598

**Other notes for review**
In this [write-up](https://hackmd.io/@fernantho/BJbZ1xmmbg), I describe
the process of arriving at this solution.

Future improvements:
- Defensive check that the gindex is not too big, described
[here](https://github.com/OffchainLabs/prysm/pull/16177#discussion_r2671684100).
- Integrate `optimizedContainerRoots` into the recursive `merkleize(...)`
flow when proof targets are not within the subtree (skip full traversal
for container lists).
- Add multiproofs.
- Connect `proofCollector` to SSZ-QL endpoints (direct integration of
`proofCollector` for BeaconBlock endpoint and "hybrid" approach for
BeaconState endpoint).

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
2026-01-28 13:01:22 +00:00
47 changed files with 2699 additions and 588 deletions

View File

@@ -66,6 +66,10 @@ type ReadOnlyDatabase interface {
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillStatus(context.Context) (*dbval.BackfillStatus, error)
// Execution payload envelope operations (Gloas+).
ExecutionPayloadEnvelope(ctx context.Context, blockHash [32]byte) (*ethpb.SignedBlindedExecutionPayloadEnvelope, error)
HasExecutionPayloadEnvelope(ctx context.Context, blockHash [32]byte) bool
// P2P Metadata operations.
MetadataSeqNum(ctx context.Context) (uint64, error)
}
@@ -115,6 +119,10 @@ type NoHeadAccessDatabase interface {
SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error
SaveLightClientBootstrap(ctx context.Context, blockRoot []byte, bootstrap interfaces.LightClientBootstrap) error
// Execution payload envelope operations (Gloas+).
SaveExecutionPayloadEnvelope(ctx context.Context, envelope *ethpb.SignedExecutionPayloadEnvelope) error
DeleteExecutionPayloadEnvelope(ctx context.Context, blockHash [32]byte) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
DeleteHistoricalDataBeforeSlot(ctx context.Context, slot primitives.Slot, batchSize int) (int, error)

View File

@@ -13,6 +13,7 @@ go_library(
"encoding.go",
"error.go",
"execution_chain.go",
"execution_payload_envelope.go",
"finalized_block_roots.go",
"genesis.go",
"key.go",
@@ -96,6 +97,7 @@ go_test(
"deposit_contract_test.go",
"encoding_test.go",
"execution_chain_test.go",
"execution_payload_envelope_test.go",
"finalized_block_roots_test.go",
"genesis_test.go",
"init_test.go",

View File

@@ -517,6 +517,10 @@ func (s *Store) DeleteHistoricalDataBeforeSlot(ctx context.Context, cutoffSlot p
return errors.Wrap(err, "could not delete validators")
}
// TODO: execution payload envelopes (Gloas+) are keyed by execution payload
// block hash, not beacon block root, so they cannot be pruned in this loop.
// A separate pruning mechanism is needed (e.g. secondary index or cursor scan).
numSlotsDeleted++
}
@@ -1247,6 +1251,12 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(fuluBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Fulu block")
}
case hasGloasKey(enc):
// Post-Gloas we save the full beacon block, as EIP-7732 separates the beacon block and payload.
rawBlock = &ethpb.SignedBeaconBlockGloas{}
if err := rawBlock.UnmarshalSSZ(enc[len(gloasKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Gloas block")
}
default:
// Marshal block bytes to phase 0 beacon block.
rawBlock = &ethpb.SignedBeaconBlock{}
@@ -1277,6 +1287,11 @@ func encodeBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
v := blk.Version()
if v >= version.Gloas {
// Gloas blocks are never blinded (no execution payload in block body).
return gloasKey, nil
}
if v >= version.Fulu {
if blk.IsBlinded() {
return fuluBlindKey, nil

View File

@@ -151,6 +151,17 @@ var blockTests = []struct {
}
return blocks.NewSignedBeaconBlock(b)
}},
{
name: "gloas",
newBlock: func(slot primitives.Slot, root []byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
b := util.NewBeaconBlockGloas()
b.Block.Slot = slot
if root != nil {
b.Block.ParentRoot = root
}
return blocks.NewSignedBeaconBlock(b)
},
},
}
func TestStore_SaveBlock_NoDuplicates(t *testing.T) {
@@ -211,7 +222,7 @@ func TestStore_BlocksCRUD(t *testing.T) {
retrievedBlock, err = db.Block(ctx, blockRoot)
require.NoError(t, err)
wanted := retrievedBlock
if retrievedBlock.Version() >= version.Bellatrix {
if retrievedBlock.Version() >= version.Bellatrix && retrievedBlock.Version() < version.Gloas {
wanted, err = retrievedBlock.ToBlinded()
require.NoError(t, err)
}
@@ -643,7 +654,7 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
require.NoError(t, err)
wanted := blk
if blk.Version() >= version.Bellatrix {
if blk.Version() >= version.Bellatrix && blk.Version() < version.Gloas {
wanted, err = blk.ToBlinded()
require.NoError(t, err)
}
@@ -1014,7 +1025,7 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
b, err := db.Block(ctx, root)
require.NoError(t, err)
wanted := block1
if block1.Version() >= version.Bellatrix {
if block1.Version() >= version.Bellatrix && block1.Version() < version.Gloas {
wanted, err = wanted.ToBlinded()
require.NoError(t, err)
}
@@ -1032,7 +1043,7 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted2 := block2
if block2.Version() >= version.Bellatrix {
if block2.Version() >= version.Bellatrix && block2.Version() < version.Gloas {
wanted2, err = block2.ToBlinded()
require.NoError(t, err)
}
@@ -1050,7 +1061,7 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted = block3
if block3.Version() >= version.Bellatrix {
if block3.Version() >= version.Bellatrix && block3.Version() < version.Gloas {
wanted, err = wanted.ToBlinded()
require.NoError(t, err)
}
@@ -1086,7 +1097,7 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
b, err := db.Block(ctx, root)
require.NoError(t, err)
wanted := block1
if block1.Version() >= version.Bellatrix {
if block1.Version() >= version.Bellatrix && block1.Version() < version.Gloas {
wanted, err = block1.ToBlinded()
require.NoError(t, err)
}
@@ -1103,7 +1114,7 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted = genesisBlock
if genesisBlock.Version() >= version.Bellatrix {
if genesisBlock.Version() >= version.Bellatrix && genesisBlock.Version() < version.Gloas {
wanted, err = genesisBlock.ToBlinded()
require.NoError(t, err)
}
@@ -1120,7 +1131,7 @@ func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
b, err = db.Block(ctx, root)
require.NoError(t, err)
wanted = genesisBlock
if genesisBlock.Version() >= version.Bellatrix {
if genesisBlock.Version() >= version.Bellatrix && genesisBlock.Version() < version.Gloas {
wanted, err = genesisBlock.ToBlinded()
require.NoError(t, err)
}
@@ -1216,7 +1227,7 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
require.NoError(t, err)
wanted := b1
if b1.Version() >= version.Bellatrix {
if b1.Version() >= version.Bellatrix && b1.Version() < version.Gloas {
wanted, err = b1.ToBlinded()
require.NoError(t, err)
}
@@ -1232,7 +1243,7 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
t.Fatalf("Expected 2 blocks, received %d blocks", len(retrievedBlocks))
}
wanted = b2
if b2.Version() >= version.Bellatrix {
if b2.Version() >= version.Bellatrix && b2.Version() < version.Gloas {
wanted, err = b2.ToBlinded()
require.NoError(t, err)
}
@@ -1242,7 +1253,7 @@ func TestStore_BlocksBySlot_BlockRootsBySlot(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(wantedPb, retrieved0Pb), "Wanted: %v, received: %v", retrievedBlocks[0], wanted)
wanted = b3
if b3.Version() >= version.Bellatrix {
if b3.Version() >= version.Bellatrix && b3.Version() < version.Gloas {
wanted, err = b3.ToBlinded()
require.NoError(t, err)
}

View File

@@ -67,6 +67,7 @@ func getSubscriptionStatusFromDB(t *testing.T, db *Store) bool {
return subscribed
}
func TestUpdateCustodyInfo(t *testing.T) {
ctx := t.Context()

View File

@@ -0,0 +1,128 @@
package kv
import (
"context"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/golang/snappy"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
)
// SaveExecutionPayloadEnvelope blinds and saves a signed execution payload envelope.
// The envelope is always stored in blinded form (payload replaced with its hash tree root).
// The key is the execution payload's BlockHash extracted from the envelope.
func (s *Store) SaveExecutionPayloadEnvelope(ctx context.Context, env *ethpb.SignedExecutionPayloadEnvelope) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveExecutionPayloadEnvelope")
defer span.End()
if env == nil || env.Message == nil || env.Message.Payload == nil {
return errors.New("cannot save nil execution payload envelope")
}
blockHash := env.Message.Payload.BlockHash
blinded, err := blindEnvelope(env)
if err != nil {
return err
}
enc, err := encodeBlindedEnvelope(blinded)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(executionPayloadEnvelopesBucket)
return bkt.Put(blockHash, enc)
})
}
// ExecutionPayloadEnvelope retrieves the blinded signed execution payload envelope by block hash.
func (s *Store) ExecutionPayloadEnvelope(ctx context.Context, blockHash [32]byte) (*ethpb.SignedBlindedExecutionPayloadEnvelope, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.ExecutionPayloadEnvelope")
defer span.End()
var enc []byte
if err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(executionPayloadEnvelopesBucket)
enc = bkt.Get(blockHash[:])
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, errors.Wrap(ErrNotFound, "execution payload envelope not found")
}
return decodeBlindedEnvelope(enc)
}
// HasExecutionPayloadEnvelope checks whether an execution payload envelope exists for the given block hash.
func (s *Store) HasExecutionPayloadEnvelope(ctx context.Context, blockHash [32]byte) bool {
_, span := trace.StartSpan(ctx, "BeaconDB.HasExecutionPayloadEnvelope")
defer span.End()
var exists bool
if err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(executionPayloadEnvelopesBucket)
exists = bkt.Get(blockHash[:]) != nil
return nil
}); err != nil {
return false
}
return exists
}
// DeleteExecutionPayloadEnvelope removes a signed execution payload envelope by block hash.
func (s *Store) DeleteExecutionPayloadEnvelope(ctx context.Context, blockHash [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.DeleteExecutionPayloadEnvelope")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(executionPayloadEnvelopesBucket)
return bkt.Delete(blockHash[:])
})
}
// blindEnvelope converts a full signed envelope to its blinded form by replacing
// the execution payload with its hash tree root.
func blindEnvelope(env *ethpb.SignedExecutionPayloadEnvelope) (*ethpb.SignedBlindedExecutionPayloadEnvelope, error) {
payloadRoot, err := env.Message.Payload.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not compute payload hash tree root")
}
return &ethpb.SignedBlindedExecutionPayloadEnvelope{
Message: &ethpb.BlindedExecutionPayloadEnvelope{
PayloadRoot: payloadRoot[:],
ExecutionRequests: env.Message.ExecutionRequests,
BuilderIndex: env.Message.BuilderIndex,
BeaconBlockRoot: env.Message.BeaconBlockRoot,
Slot: env.Message.Slot,
BlobKzgCommitments: env.Message.BlobKzgCommitments,
StateRoot: env.Message.StateRoot,
},
Signature: env.Signature,
}, nil
}
// encodeBlindedEnvelope SSZ-encodes and snappy-compresses a blinded envelope for storage.
func encodeBlindedEnvelope(env *ethpb.SignedBlindedExecutionPayloadEnvelope) ([]byte, error) {
sszBytes, err := env.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal blinded envelope")
}
return snappy.Encode(nil, sszBytes), nil
}
// decodeBlindedEnvelope snappy-decompresses and SSZ-decodes a blinded envelope from storage.
func decodeBlindedEnvelope(enc []byte) (*ethpb.SignedBlindedExecutionPayloadEnvelope, error) {
dec, err := snappy.Decode(nil, enc)
if err != nil {
return nil, errors.Wrap(err, "could not snappy decode envelope")
}
blinded := &ethpb.SignedBlindedExecutionPayloadEnvelope{}
if err := blinded.UnmarshalSSZ(dec); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded envelope")
}
return blinded, nil
}

View File

@@ -0,0 +1,129 @@
package kv
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func testEnvelope(t *testing.T) *ethpb.SignedExecutionPayloadEnvelope {
t.Helper()
return &ethpb.SignedExecutionPayloadEnvelope{
Message: &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: bytesutil.PadTo([]byte("parent"), 32),
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
StateRoot: bytesutil.PadTo([]byte("stateroot"), 32),
ReceiptsRoot: bytesutil.PadTo([]byte("receipts"), 32),
LogsBloom: bytesutil.PadTo([]byte{}, 256),
PrevRandao: bytesutil.PadTo([]byte("randao"), 32),
BlockNumber: 100,
GasLimit: 30000000,
GasUsed: 21000,
Timestamp: 1000,
ExtraData: []byte("extra"),
BaseFeePerGas: bytesutil.PadTo([]byte{1}, 32),
BlockHash: bytesutil.PadTo([]byte("blockhash"), 32),
Transactions: [][]byte{[]byte("tx1"), []byte("tx2")},
Withdrawals: []*enginev1.Withdrawal{{Index: 1, ValidatorIndex: 2, Address: bytesutil.PadTo([]byte("addr"), 20), Amount: 100}},
BlobGasUsed: 131072,
ExcessBlobGas: 0,
},
ExecutionRequests: &enginev1.ExecutionRequests{},
BuilderIndex: primitives.BuilderIndex(42),
BeaconBlockRoot: bytesutil.PadTo([]byte("beaconroot"), 32),
Slot: primitives.Slot(99),
BlobKzgCommitments: [][]byte{
bytesutil.PadTo([]byte("commitment1"), 48),
},
StateRoot: bytesutil.PadTo([]byte("envelopestateroot"), 32),
},
Signature: bytesutil.PadTo([]byte("sig"), 96),
}
}
func TestStore_SaveAndRetrieveExecutionPayloadEnvelope(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
env := testEnvelope(t)
// Use the block hash as lookup key (matches what save extracts internally).
blockHash := bytesutil.ToBytes32(env.Message.Payload.BlockHash)
// Initially should not exist.
assert.Equal(t, false, db.HasExecutionPayloadEnvelope(ctx, blockHash))
// Save (always blinds internally).
require.NoError(t, db.SaveExecutionPayloadEnvelope(ctx, env))
// Should exist now.
assert.Equal(t, true, db.HasExecutionPayloadEnvelope(ctx, blockHash))
// Load and verify it's always blinded.
loaded, err := db.ExecutionPayloadEnvelope(ctx, blockHash)
require.NoError(t, err)
// Verify metadata is preserved.
assert.Equal(t, env.Message.Slot, loaded.Message.Slot)
assert.Equal(t, env.Message.BuilderIndex, loaded.Message.BuilderIndex)
assert.DeepEqual(t, env.Message.BeaconBlockRoot, loaded.Message.BeaconBlockRoot)
assert.DeepEqual(t, env.Message.StateRoot, loaded.Message.StateRoot)
assert.DeepEqual(t, env.Signature, loaded.Signature)
// PayloadRoot should be 32 bytes (the hash tree root of the full payload).
assert.Equal(t, 32, len(loaded.Message.PayloadRoot))
assert.Equal(t, false, bytesutil.ToBytes32(loaded.Message.PayloadRoot) == [32]byte{})
}
func TestStore_DeleteExecutionPayloadEnvelope(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
env := testEnvelope(t)
blockHash := bytesutil.ToBytes32(env.Message.Payload.BlockHash)
require.NoError(t, db.SaveExecutionPayloadEnvelope(ctx, env))
assert.Equal(t, true, db.HasExecutionPayloadEnvelope(ctx, blockHash))
require.NoError(t, db.DeleteExecutionPayloadEnvelope(ctx, blockHash))
assert.Equal(t, false, db.HasExecutionPayloadEnvelope(ctx, blockHash))
}
func TestStore_ExecutionPayloadEnvelope_NotFound(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
nonExistent := bytesutil.ToBytes32([]byte("nonexistent"))
_, err := db.ExecutionPayloadEnvelope(ctx, nonExistent)
require.ErrorContains(t, "not found", err)
}
func TestStore_SaveExecutionPayloadEnvelope_NilRejected(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
err := db.SaveExecutionPayloadEnvelope(ctx, nil)
require.ErrorContains(t, "nil", err)
}
func TestBlindEnvelope_PreservesPayloadRoot(t *testing.T) {
env := testEnvelope(t)
blinded, err := blindEnvelope(env)
require.NoError(t, err)
// Compute expected payload root.
expectedRoot, err := env.Message.Payload.HashTreeRoot()
require.NoError(t, err)
assert.DeepEqual(t, expectedRoot[:], blinded.Message.PayloadRoot)
assert.Equal(t, env.Message.BuilderIndex, blinded.Message.BuilderIndex)
assert.Equal(t, env.Message.Slot, blinded.Message.Slot)
assert.DeepEqual(t, env.Message.BeaconBlockRoot, blinded.Message.BeaconBlockRoot)
assert.DeepEqual(t, env.Signature, blinded.Signature)
}

View File

@@ -87,3 +87,10 @@ func hasFuluBlindKey(enc []byte) bool {
}
return bytes.Equal(enc[:len(fuluBlindKey)], fuluBlindKey)
}
func hasGloasKey(enc []byte) bool {
if len(gloasKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(gloasKey)], gloasKey)
}

View File

@@ -126,6 +126,7 @@ var Buckets = [][]byte{
feeRecipientBucket,
registrationBucket,
custodyBucket,
executionPayloadEnvelopesBucket,
}
// KVStoreOption is a functional option that modifies a kv.Store.

View File

@@ -7,16 +7,17 @@ package kv
// it easy to scan for keys that have a certain shard number as a prefix and return those
// corresponding attestations.
var (
blocksBucket = []byte("blocks")
stateBucket = []byte("state")
stateSummaryBucket = []byte("state-summary")
chainMetadataBucket = []byte("chain-metadata")
checkpointBucket = []byte("check-point")
powchainBucket = []byte("powchain")
stateValidatorsBucket = []byte("state-validators")
feeRecipientBucket = []byte("fee-recipient")
registrationBucket = []byte("registration")
stateDiffBucket = []byte("state-diff")
blocksBucket = []byte("blocks")
stateBucket = []byte("state")
stateSummaryBucket = []byte("state-summary")
chainMetadataBucket = []byte("chain-metadata")
checkpointBucket = []byte("check-point")
powchainBucket = []byte("powchain")
stateValidatorsBucket = []byte("state-validators")
feeRecipientBucket = []byte("fee-recipient")
registrationBucket = []byte("registration")
stateDiffBucket = []byte("state-diff")
executionPayloadEnvelopesBucket = []byte("execution-payload-envelopes")
// Light Client Updates Bucket
lightClientUpdatesBucket = []byte("light-client-updates")
@@ -60,6 +61,8 @@ var (
electraBlindKey = []byte("blind-electra")
fuluKey = []byte("fulu")
fuluBlindKey = []byte("blind-fulu")
gloasKey = []byte("gloas")
// No gloasBlindKey needed - Gloas blocks are never blinded (no execution payload in block body).
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")

View File

@@ -575,7 +575,7 @@ func (s *Service) beaconEndpoints(
name: namespace + ".PublishBlockV2",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.PublishBlockV2,
@@ -586,7 +586,7 @@ func (s *Service) beaconEndpoints(
name: namespace + ".PublishBlindedBlockV2",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.PublishBlindedBlockV2,

View File

@@ -26,8 +26,8 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/voluntaryexits/mock"
p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"

View File

@@ -48,6 +48,7 @@ go_test(
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
"@org_golang_google_grpc//reflection:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
"@org_golang_google_protobuf//types/known/timestamppb:go_default_library",

View File

@@ -35,18 +35,19 @@ import (
// providing RPC endpoints for verifying a beacon node's sync status, genesis and
// version information, and services the node implements and runs.
type Server struct {
LogsStreamer logs.Streamer
StreamLogsBufferSize int
SyncChecker sync.Checker
Server *grpc.Server
BeaconDB db.ReadOnlyDatabase
PeersFetcher p2p.PeersProvider
PeerManager p2p.PeerManager
GenesisTimeFetcher blockchain.TimeFetcher
GenesisFetcher blockchain.GenesisFetcher
POWChainInfoFetcher execution.ChainInfoFetcher
BeaconMonitoringHost string
BeaconMonitoringPort int
LogsStreamer logs.Streamer
StreamLogsBufferSize int
SyncChecker sync.Checker
Server *grpc.Server
BeaconDB db.ReadOnlyDatabase
PeersFetcher p2p.PeersProvider
PeerManager p2p.PeerManager
GenesisTimeFetcher blockchain.TimeFetcher
GenesisFetcher blockchain.GenesisFetcher
POWChainInfoFetcher execution.ChainInfoFetcher
BeaconMonitoringHost string
BeaconMonitoringPort int
OptimisticModeFetcher blockchain.OptimisticModeFetcher
}
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
@@ -61,21 +62,28 @@ func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (
ctx, cancel := context.WithTimeout(ctx, timeoutDuration)
defer cancel() // Important to avoid a context leak
if ns.SyncChecker.Synced() {
// Check optimistic status - validators should not participate when optimistic
isOptimistic, err := ns.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not check optimistic status: %v", err)
}
if ns.SyncChecker.Synced() && !isOptimistic {
return &empty.Empty{}, nil
}
if ns.SyncChecker.Syncing() || ns.SyncChecker.Initialized() {
if request.SyncingStatus != 0 {
// override the 200 success with the provided request status
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(request.SyncingStatus, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set custom success code header: %v", err)
}
return &empty.Empty{}, nil
}
// Set header for REST API clients (via gRPC-gateway)
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set custom success code header: %v", err)
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set status code header: %v", err)
}
return &empty.Empty{}, nil
return &empty.Empty{}, status.Error(codes.Unavailable, "node is syncing")
}
if isOptimistic {
// Set header for REST API clients (via gRPC-gateway)
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set status code header: %v", err)
}
return &empty.Empty{}, status.Error(codes.Unavailable, "node is optimistic")
}
return &empty.Empty{}, status.Errorf(codes.Unavailable, "service unavailable")
}

View File

@@ -2,6 +2,7 @@ package node
import (
"errors"
"maps"
"testing"
"time"
@@ -21,6 +22,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/reflection"
"google.golang.org/protobuf/types/known/emptypb"
"google.golang.org/protobuf/types/known/timestamppb"
@@ -187,32 +189,71 @@ func TestNodeServer_GetETH1ConnectionStatus(t *testing.T) {
assert.Equal(t, errStr, res.CurrentConnectionError)
}
// mockServerTransportStream implements grpc.ServerTransportStream for testing
type mockServerTransportStream struct {
headers map[string][]string
}
func (m *mockServerTransportStream) Method() string { return "" }
func (m *mockServerTransportStream) SetHeader(md metadata.MD) error {
maps.Copy(m.headers, md)
return nil
}
func (m *mockServerTransportStream) SendHeader(metadata.MD) error { return nil }
func (m *mockServerTransportStream) SetTrailer(metadata.MD) error { return nil }
func TestNodeServer_GetHealth(t *testing.T) {
tests := []struct {
name string
input *mockSync.Sync
customStatus uint64
isOptimistic bool
wantedErr string
}{
{
name: "happy path",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
name: "happy path - synced and not optimistic",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
isOptimistic: false,
},
{
name: "syncing",
input: &mockSync.Sync{IsSyncing: false},
wantedErr: "service unavailable",
name: "returns error when not synced and not syncing",
input: &mockSync.Sync{IsSyncing: false, IsSynced: false},
isOptimistic: false,
wantedErr: "service unavailable",
},
{
name: "returns error when syncing",
input: &mockSync.Sync{IsSyncing: true, IsSynced: false},
isOptimistic: false,
wantedErr: "node is syncing",
},
{
name: "returns error when synced but optimistic",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
isOptimistic: true,
wantedErr: "node is optimistic",
},
{
name: "returns error when syncing and optimistic",
input: &mockSync.Sync{IsSyncing: true, IsSynced: false},
isOptimistic: true,
wantedErr: "node is syncing",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
server := grpc.NewServer()
ns := &Server{
SyncChecker: tt.input,
SyncChecker: tt.input,
OptimisticModeFetcher: &mock.ChainService{Optimistic: tt.isOptimistic},
}
ethpb.RegisterNodeServer(server, ns)
reflection.Register(server)
_, err := ns.GetHealth(t.Context(), &ethpb.HealthRequest{SyncingStatus: tt.customStatus})
// Create context with mock transport stream so grpc.SetHeader works
stream := &mockServerTransportStream{headers: make(map[string][]string)}
ctx := grpc.NewContextWithServerTransportStream(t.Context(), stream)
_, err := ns.GetHealth(ctx, &ethpb.HealthRequest{})
if tt.wantedErr == "" {
require.NoError(t, err)
return

View File

@@ -259,18 +259,19 @@ func NewService(ctx context.Context, cfg *Config) *Service {
}
s.validatorServer = validatorServer
nodeServer := &nodev1alpha1.Server{
LogsStreamer: logs.NewStreamServer(),
StreamLogsBufferSize: 1000, // Enough to handle bursts of beacon node logs for gRPC streaming.
BeaconDB: s.cfg.BeaconDB,
Server: s.grpcServer,
SyncChecker: s.cfg.SyncService,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
PeersFetcher: s.cfg.PeersFetcher,
PeerManager: s.cfg.PeerManager,
GenesisFetcher: s.cfg.GenesisFetcher,
POWChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
BeaconMonitoringHost: s.cfg.BeaconMonitoringHost,
BeaconMonitoringPort: s.cfg.BeaconMonitoringPort,
LogsStreamer: logs.NewStreamServer(),
StreamLogsBufferSize: 1000, // Enough to handle bursts of beacon node logs for gRPC streaming.
BeaconDB: s.cfg.BeaconDB,
Server: s.grpcServer,
SyncChecker: s.cfg.SyncService,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
PeersFetcher: s.cfg.PeersFetcher,
PeerManager: s.cfg.PeerManager,
GenesisFetcher: s.cfg.GenesisFetcher,
POWChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
BeaconMonitoringHost: s.cfg.BeaconMonitoringHost,
BeaconMonitoringPort: s.cfg.BeaconMonitoringPort,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
}
beaconChainServer := &beaconv1alpha1.Server{
Ctx: s.ctx,

View File

@@ -1027,10 +1027,10 @@ func TestGetVerifyingStateEdgeCases(t *testing.T) {
sc: signatureCache,
sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}, // Should not be called
hsp: &mockHeadStateProvider{
headRoot: parentRoot[:], // Same as parent
headSlot: 32, // Epoch 1
headState: fuluState.Copy(), // HeadState (not ReadOnly) for ProcessSlots
headStateReadOnly: nil, // Should not use ReadOnly path
headRoot: parentRoot[:], // Same as parent
headSlot: 32, // Epoch 1
headState: fuluState.Copy(), // HeadState (not ReadOnly) for ProcessSlots
headStateReadOnly: nil, // Should not use ReadOnly path
},
fc: &mockForkchoicer{
// Return same root for both to simulate same chain
@@ -1045,8 +1045,8 @@ func TestGetVerifyingStateEdgeCases(t *testing.T) {
// Wrap to detect HeadState call
originalHsp := initializer.shared.hsp.(*mockHeadStateProvider)
wrappedHsp := &mockHeadStateProvider{
headRoot: originalHsp.headRoot,
headSlot: originalHsp.headSlot,
headRoot: originalHsp.headRoot,
headSlot: originalHsp.headSlot,
headState: originalHsp.headState,
}
initializer.shared.hsp = &headStateCallTracker{

View File

@@ -0,0 +1,6 @@
### Added
- Added a new proofCollector type to ssz-query.
### Ignored
- Added tests covering the production of a Merkle proof from a Phase0 beacon state, benchmarked against a real Hoodi beacon state (Fulu version).

View File

@@ -0,0 +1,3 @@
### Added
- Gloas DB save functions for the Gloas block, payload envelope, and blinded payload envelope.

View File

@@ -0,0 +1,3 @@
### Changed
- The gRPC health endpoint now returns an unavailable error when the node is syncing or optimistic.

View File

@@ -163,3 +163,18 @@ func Uint256ToSSZBytes(num string) ([]byte, error) {
}
return PadTo(ReverseByteOrder(uint256.Bytes()), 32), nil
}
// PutLittleEndian writes an unsigned integer value in little-endian format.
// Supports sizes 1, 2, 4, or 8 bytes for uint8/16/32/64 respectively.
func PutLittleEndian(dst []byte, val uint64, size int) {
switch size {
case 1:
dst[0] = byte(val)
case 2:
binary.LittleEndian.PutUint16(dst, uint16(val))
case 4:
binary.LittleEndian.PutUint32(dst, uint32(val))
case 8:
binary.LittleEndian.PutUint64(dst, val)
}
}

View File

@@ -9,7 +9,9 @@ go_library(
"container.go",
"generalized_index.go",
"list.go",
"merkle_proof.go",
"path.go",
"proof_collector.go",
"query.go",
"ssz_info.go",
"ssz_object.go",
@@ -20,7 +22,12 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v7/encoding/ssz/query",
visibility = ["//visibility:public"],
deps = [
"//container/trie:go_default_library",
"//crypto/hash/htr:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)
@@ -29,15 +36,24 @@ go_test(
name = "go_default_test",
srcs = [
"generalized_index_test.go",
"merkle_proof_test.go",
"path_test.go",
"proof_collector_test.go",
"query_test.go",
"tag_parser_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//encoding/ssz/query/testutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query/testing:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -0,0 +1,34 @@
package query
import (
"fmt"
"reflect"
fastssz "github.com/prysmaticlabs/fastssz"
)
// Prove is the entrypoint to generate an SSZ Merkle proof for the given generalized index.
// Parameters:
// - gindex: the generalized index of the node to prove inclusion for.
// Returns:
// - fastssz.Proof: the Merkle proof containing the leaf, index, and sibling hashes.
// - error: any error encountered during proof generation.
func (info *SszInfo) Prove(gindex uint64) (*fastssz.Proof, error) {
if info == nil {
return nil, fmt.Errorf("nil SszInfo")
}
collector := newProofCollector()
collector.addTarget(gindex)
// info.source is guaranteed to be valid and dereferenced by AnalyzeObject
v := reflect.ValueOf(info.source).Elem()
// Start the merkleization and proof collection process.
// In SSZ generalized indices, the root is always at index 1.
if _, err := collector.merkleize(info, v, 1); err != nil {
return nil, err
}
return collector.toProof()
}

View File

@@ -0,0 +1,163 @@
package query_test
import (
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/ssz/query"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
ssz "github.com/prysmaticlabs/fastssz"
)
func TestProve_FixedTestContainer(t *testing.T) {
obj := createFixedTestContainer()
tests := []string{
".field_uint32",
".nested.value2",
".vector_field[3]",
".bitvector64_field",
".trailing_field",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_VariableTestContainer(t *testing.T) {
obj := createVariableTestContainer()
tests := []string{
".leading_field",
".field_list_uint64[2]",
"len(field_list_uint64)",
".nested.nested_list_field[1]",
".variable_container_list[0].inner_1.field_list_uint64[1]",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_BeaconBlock(t *testing.T) {
randaoReveal := make([]byte, 96)
for i := range randaoReveal {
randaoReveal[i] = 0x42
}
root32 := make([]byte, 32)
for i := range root32 {
root32[i] = 0x24
}
sig := make([]byte, 96)
for i := range sig {
sig[i] = 0x99
}
att := &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root32,
Source: &eth.Checkpoint{
Epoch: 1,
Root: root32,
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: root32,
},
},
Signature: sig,
}
b := util.NewBeaconBlock()
b.Block.Slot = 123
b.Block.Body.RandaoReveal = randaoReveal
b.Block.Body.Attestations = []*eth.Attestation{att}
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
protoBlock, err := sb.Block().Proto()
require.NoError(t, err)
obj, ok := protoBlock.(query.SSZObject)
require.Equal(t, true, ok, "block proto does not implement query.SSZObject")
tests := []string{
".slot",
".body.randao_reveal",
".body.attestations[0].data.slot",
"len(body.attestations)",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_BeaconState(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
sszObj, ok := st.ToProtoUnsafe().(query.SSZObject)
require.Equal(t, true, ok, "state proto does not implement query.SSZObject")
tests := []string{
".slot",
".latest_block_header",
".validators[0].effective_balance",
"len(validators)",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, sszObj, tc)
})
}
}
// proveAndVerify analyzes an object, generates a Merkle proof for the given path,
// and verifies the proof against the object's root.
func proveAndVerify(t *testing.T, obj query.SSZObject, pathStr string) {
t.Helper()
info, err := query.AnalyzeObject(obj)
require.NoError(t, err)
path, err := query.ParsePath(pathStr)
require.NoError(t, err)
gi, err := query.GetGeneralizedIndexFromPath(info, path)
require.NoError(t, err)
proof, err := info.Prove(gi)
require.NoError(t, err)
require.Equal(t, int(gi), proof.Index)
root, err := obj.HashTreeRoot()
require.NoError(t, err)
ok, err := ssz.VerifyProof(root[:], proof)
require.NoError(t, err)
require.Equal(t, true, ok, "merkle proof verification failed")
require.Equal(t, 32, len(proof.Leaf))
for i, h := range proof.Hashes {
require.Equal(t, 32, len(h), "proof hash %d is not 32 bytes", i)
}
}

View File

@@ -0,0 +1,672 @@
package query
import (
"encoding/binary"
"errors"
"fmt"
"math/bits"
"reflect"
"runtime"
"slices"
"sync"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/container/trie"
"github.com/OffchainLabs/prysm/v7/crypto/hash/htr"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ssz "github.com/OffchainLabs/prysm/v7/encoding/ssz"
"github.com/OffchainLabs/prysm/v7/math"
fastssz "github.com/prysmaticlabs/fastssz"
)
// proofCollector collects sibling hashes and leaves needed for Merkle proofs.
//
// Multiproof-ready design:
// - requiredSiblings/requiredLeaves store which gindices we want to collect (registered before merkleization).
// - siblings/leaves store the actual collected hashes.
//
// Concurrency:
// - required* maps are read-only during merkleization.
// - siblings/leaves writes are protected by mutex.
type proofCollector struct {
sync.Mutex
// Required gindices (registered before merkleization)
requiredSiblings map[uint64]struct{}
requiredLeaves map[uint64]struct{}
// Collected hashes
siblings map[uint64][32]byte
leaves map[uint64][32]byte
}
func newProofCollector() *proofCollector {
return &proofCollector{
requiredSiblings: make(map[uint64]struct{}),
requiredLeaves: make(map[uint64]struct{}),
siblings: make(map[uint64][32]byte),
leaves: make(map[uint64][32]byte),
}
}
func (pc *proofCollector) reset() {
pc.Lock()
defer pc.Unlock()
pc.requiredSiblings = make(map[uint64]struct{})
pc.requiredLeaves = make(map[uint64]struct{})
pc.siblings = make(map[uint64][32]byte)
pc.leaves = make(map[uint64][32]byte)
}
// addTarget registers the target leaf and its required sibling nodes for proof construction.
// Registration should happen before merkleization begins.
func (pc *proofCollector) addTarget(gindex uint64) {
pc.Lock()
defer pc.Unlock()
pc.requiredLeaves[gindex] = struct{}{}
// Walk from the target leaf up to (but not including) the root (gindex=1).
// At each step, register the sibling node required to prove inclusion.
nodeGindex := gindex
for nodeGindex > 1 {
siblingGindex := nodeGindex ^ 1 // flip the last bit: left<->right sibling
pc.requiredSiblings[siblingGindex] = struct{}{}
// Move to parent
nodeGindex /= 2
}
}
// toProof converts the collected siblings and leaves into a fastssz.Proof structure.
// Current behavior expects a single target leaf (single proof).
func (pc *proofCollector) toProof() (*fastssz.Proof, error) {
pc.Lock()
defer pc.Unlock()
proof := &fastssz.Proof{}
if len(pc.leaves) == 0 {
return nil, errors.New("no leaves collected: add target leaves before merkleization")
}
leafGindices := make([]uint64, 0, len(pc.leaves))
for g := range pc.leaves {
leafGindices = append(leafGindices, g)
}
slices.Sort(leafGindices)
// For a single proof, the target gindex is leafGindices[0].
targetGindex := leafGindices[0]
proofIndex, err := math.Int(targetGindex)
if err != nil {
return nil, fmt.Errorf("gindex %d overflows int: %w", targetGindex, err)
}
proof.Index = proofIndex
// store the leaf
leaf := pc.leaves[targetGindex]
leafBuf := make([]byte, 32)
copy(leafBuf, leaf[:])
proof.Leaf = leafBuf
// Walk from target up to root, collecting siblings.
steps := bits.Len64(targetGindex) - 1
proof.Hashes = make([][]byte, 0, steps)
for targetGindex > 1 {
sib := targetGindex ^ 1
h, ok := pc.siblings[sib]
if !ok {
return nil, fmt.Errorf("missing sibling hash for gindex %d", sib)
}
proof.Hashes = append(proof.Hashes, h[:])
targetGindex /= 2
}
return proof, nil
}
// collectLeaf checks if the given gindex is a required leaf for the proof,
// and if so, stores the provided leaf hash in the collector.
func (pc *proofCollector) collectLeaf(gindex uint64, leaf [32]byte) {
if _, ok := pc.requiredLeaves[gindex]; !ok {
return
}
pc.Lock()
pc.leaves[gindex] = leaf
pc.Unlock()
}
// collectSibling stores the hash for a sibling node identified by gindex.
// It only stores the hash if gindex was pre-registered via addTarget (present in requiredSiblings).
// Writes to the collected siblings map are protected by the collector mutex.
func (pc *proofCollector) collectSibling(gindex uint64, hash [32]byte) {
if _, ok := pc.requiredSiblings[gindex]; !ok {
return
}
pc.Lock()
pc.siblings[gindex] = hash
pc.Unlock()
}
// Merkleizers and proof collection methods
// merkleize recursively traverses an SSZ info and computes the Merkle root of the subtree.
//
// Proof collection:
// - During traversal it calls collectLeaf/collectSibling with the SSZ generalized indices (gindices)
// of visited nodes.
// - The collector only stores hashes for gindices that were pre-registered via addTarget
// (requiredLeaves/requiredSiblings). This makes the traversal multiproof-ready: you can register
// multiple targets before calling merkleize.
//
// SSZ types handled: basic types, containers, lists, vectors, bitlists, and bitvectors.
//
// Parameters:
// - info: SSZ type metadata for the current value.
// - v: reflect.Value of the current value.
// - currentGindex: generalized index of the current subtree root.
//
// Returns:
// - [32]byte: Merkle root of the current subtree.
// - error: any error encountered during traversal/merkleization.
func (pc *proofCollector) merkleize(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
if info.sszType.isBasic() {
return pc.merkleizeBasicType(info.sszType, v, currentGindex)
}
switch info.sszType {
case Container:
return pc.merkleizeContainer(info, v, currentGindex)
case List:
return pc.merkleizeList(info, v, currentGindex)
case Vector:
return pc.merkleizeVector(info, v, currentGindex)
case Bitlist:
return pc.merkleizeBitlist(info, v, currentGindex)
case Bitvector:
return pc.merkleizeBitvector(info, v, currentGindex)
default:
return [32]byte{}, fmt.Errorf("unsupported SSZ type: %v", info.sszType)
}
}
// merkleizeBasicType serializes a basic SSZ value into a 32-byte leaf chunk (little-endian, zero-padded).
//
// Proof collection:
// - It calls collectLeaf(currentGindex, leaf) and stores the leaf if currentGindex was pre-registered via addTarget.
//
// Parameters:
// - t: the SSZType (basic).
// - v: the reflect.Value of the basic value.
// - currentGindex: the generalized index (gindex) of this leaf.
//
// Returns:
// - [32]byte: the 32-byte SSZ leaf chunk.
// - error: if the SSZType is not a supported basic type.
func (pc *proofCollector) merkleizeBasicType(t SSZType, v reflect.Value, currentGindex uint64) ([32]byte, error) {
var leaf [32]byte
// Serialize the value into a 32-byte chunk (little-endian, zero-padded)
switch t {
case Uint8:
leaf[0] = uint8(v.Uint())
case Uint16:
binary.LittleEndian.PutUint16(leaf[:2], uint16(v.Uint()))
case Uint32:
binary.LittleEndian.PutUint32(leaf[:4], uint32(v.Uint()))
case Uint64:
binary.LittleEndian.PutUint64(leaf[:8], v.Uint())
case Boolean:
if v.Bool() {
leaf[0] = 1
}
default:
return [32]byte{}, fmt.Errorf("unexpected basic type: %v", t)
}
pc.collectLeaf(currentGindex, leaf)
return leaf, nil
}
// merkleizeContainer computes the Merkle root of an SSZ container by:
// 1. Merkleizing each field into a 32-byte subtree root
// 2. Merkleizing the field roots into the container root (padding to the next power-of-2)
//
// Generalized indices (gindices): depth = ssz.Depth(uint64(N)) and field i has gindex = (currentGindex << depth) + uint64(i).
// Proof collection: merkleize() computes each field root, merkleizeVectorAndCollect collects required siblings, and collectLeaf stores the container root if registered.
//
// Parameters:
// - info: SSZ type metadata for the container.
// - v: reflect.Value of the container value.
// - currentGindex: generalized index (gindex) of the container root.
//
// Returns:
// - [32]byte: Merkle root of the container.
// - error: any error encountered while merkleizing fields.
func (pc *proofCollector) merkleizeContainer(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
// If the container root itself is the target, compute directly and return early.
// This avoids full subtree merkleization when we only need the root.
if _, ok := pc.requiredLeaves[currentGindex]; ok {
root, err := info.HashTreeRoot()
if err != nil {
return [32]byte{}, err
}
pc.collectLeaf(currentGindex, root)
return root, nil
}
ci, err := info.ContainerInfo()
if err != nil {
return [32]byte{}, err
}
v = dereferencePointer(v)
// Calculate depth: how many levels from container root to field leaves
numFields := len(ci.order)
depth := ssz.Depth(uint64(numFields))
// Step 1: Compute HTR for each subtree (field)
fieldRoots := make([][32]byte, numFields)
for i, name := range ci.order {
fieldInfo := ci.fields[name]
fieldVal := v.FieldByName(fieldInfo.goFieldName)
// Field i's gindex: shift currentGindex left by depth, then add the field index
fieldGindex := currentGindex<<depth + uint64(i)
htr, err := pc.merkleize(fieldInfo.sszInfo, fieldVal, fieldGindex)
if err != nil {
return [32]byte{}, fmt.Errorf("field %s: %w", name, err)
}
fieldRoots[i] = htr
}
// Step 2: Merkleize the field hashes into the container root,
// collecting sibling hashes if target is within this subtree
root := pc.merkleizeVectorAndCollect(fieldRoots, currentGindex, uint64(depth))
return root, nil
}
// merkleizeVectorBody computes the Merkle root of the "data" subtree for vector-like SSZ types
// (vectors and the data-part of lists/bitlists).
//
// Generalized indices (gindices): depth = ssz.Depth(limit); leafBase = subtreeRootGindex << depth; element/chunk i gindex = leafBase + uint64(i).
// Proof collection: merkleize() is called for composite elements; merkleizeVectorAndCollect collects required siblings at this layer.
// Padding: merkleizeVectorAndCollect uses trie.ZeroHashes as needed.
//
// Parameters:
// - elemInfo: SSZ type metadata for the element.
// - v: reflect.Value of the vector/list data.
// - length: number of actual elements present.
// - limit: virtual leaf capacity used for padding/Depth (fixed length for vectors, limit for lists).
// - subtreeRootGindex: gindex of the data subtree root.
//
// Returns:
// - [32]byte: Merkle root of the data subtree.
// - error: any error encountered while merkleizing composite elements.
func (pc *proofCollector) merkleizeVectorBody(elemInfo *SszInfo, v reflect.Value, length int, limit uint64, subtreeRootGindex uint64) ([32]byte, error) {
depth := uint64(ssz.Depth(limit))
var chunks [][32]byte
if elemInfo.sszType.isBasic() {
// Serialize basic elements and pack into 32-byte chunks using ssz.PackByChunk.
elemSize, err := math.Int(itemLength(elemInfo))
if err != nil {
return [32]byte{}, fmt.Errorf("element size %d overflows int: %w", itemLength(elemInfo), err)
}
serialized := make([][]byte, length)
// Single contiguous allocation for all element data
allData := make([]byte, length*elemSize)
for i := range length {
buf := allData[i*elemSize : (i+1)*elemSize]
elem := v.Index(i)
if elemInfo.sszType == Boolean && elem.Bool() {
buf[0] = 1
} else {
bytesutil.PutLittleEndian(buf, elem.Uint(), elemSize)
}
serialized[i] = buf
}
chunks, err = ssz.PackByChunk(serialized)
if err != nil {
return [32]byte{}, err
}
} else {
// Composite elements: compute each element root (no padding here; merkleizeVectorAndCollect pads).
chunks = make([][32]byte, length)
// Fall back to per-element merkleization with proper gindices for proof collection.
// Parallel execution
workerCount := min(runtime.GOMAXPROCS(0), length)
jobs := make(chan int, workerCount*16)
errCh := make(chan error, 1) // only need the first error
stopCh := make(chan struct{})
var stopOnce sync.Once
var wg sync.WaitGroup
worker := func() {
defer wg.Done()
for idx := range jobs {
select {
case <-stopCh:
return
default:
}
elemGindex := subtreeRootGindex<<depth + uint64(idx)
htr, err := pc.merkleize(elemInfo, v.Index(idx), elemGindex)
if err != nil {
stopOnce.Do(func() { close(stopCh) })
select {
case errCh <- fmt.Errorf("index %d: %w", idx, err):
default:
}
return
}
chunks[idx] = htr
}
}
wg.Add(workerCount)
for range workerCount {
go worker()
}
// Enqueue jobs; stop early if any worker reports an error.
enqueue:
for i := range length {
select {
case <-stopCh:
break enqueue
case jobs <- i:
}
}
close(jobs)
wg.Wait()
select {
case err := <-errCh:
return [32]byte{}, err
default:
}
}
root := pc.merkleizeVectorAndCollect(chunks, subtreeRootGindex, depth)
return root, nil
}
// merkleizeVector computes the Merkle root of an SSZ vector (fixed-length).
//
// Generalized indices (gindices): currentGindex is the gindex of the vector root; element/chunk gindices are derived
// inside merkleizeVectorBody using leafBase = currentGindex << ssz.Depth(leaves).
//
// Proof collection: merkleizeVectorBody performs element/chunk merkleization and collects required siblings at the
// vector layer; collectLeaf stores the vector root if currentGindex was registered via addTarget.
//
// Parameters:
// - info: SSZ type metadata for the vector.
// - v: reflect.Value of the vector value.
// - currentGindex: generalized index (gindex) of the vector root.
//
// Returns:
// - [32]byte: Merkle root of the vector.
// - error: any error encountered while merkleizing composite elements.
func (pc *proofCollector) merkleizeVector(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
vi, err := info.VectorInfo()
if err != nil {
return [32]byte{}, err
}
length, err := math.Int(vi.Length())
if err != nil {
return [32]byte{}, fmt.Errorf("vector length %d overflows int: %w", vi.Length(), err)
}
elemInfo := vi.element
// Determine the virtual leaf capacity for the vector.
leaves, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
root, err := pc.merkleizeVectorBody(elemInfo, v, length, leaves, currentGindex)
if err != nil {
return [32]byte{}, err
}
// If the vector root itself is the target
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeList computes the Merkle root of an SSZ list by merkleizing its data subtree and mixing in the length.
//
// Generalized indices (gindices): dataRoot is the left child of the list root (dataRootGindex = currentGindex*2); the length mixin is the right child (currentGindex*2+1).
// Proof collection: merkleizeVectorBody computes the data root (collecting required siblings in the data subtree), and mixinLengthAndCollect collects required siblings at the length-mixin level; collectLeaf stores the list root if registered.
//
// Parameters:
// - info: SSZ type metadata for the list.
// - v: reflect.Value of the list value.
// - currentGindex: generalized index (gindex) of the list root.
//
// Returns:
// - [32]byte: Merkle root of the list.
// - error: any error encountered while merkleizing the data subtree.
func (pc *proofCollector) merkleizeList(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
li, err := info.ListInfo()
if err != nil {
return [32]byte{}, err
}
length := v.Len()
elemInfo := li.element
chunks := make([][32]byte, 2)
// Compute the length hash (little-endian uint256)
binary.LittleEndian.PutUint64(chunks[1][:8], uint64(length))
// Data subtree root is the left child of the list root.
dataRootGindex := currentGindex * 2
// Compute virtual leaf capacity for the data subtree.
leaves, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
chunks[0], err = pc.merkleizeVectorBody(elemInfo, v, length, leaves, dataRootGindex)
if err != nil {
return [32]byte{}, err
}
// Handle the length mixin level (and proof bookkeeping at this level).
// Compute the final list root: hash(dataRoot || lengthHash)
root := pc.mixinLengthAndCollect(currentGindex, chunks)
// If the list root itself is the target
pc.collectLeaf(currentGindex, root)
return root, nil
}
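// Illustration (not part of the original source): a list root always splits into a data
// subtree and a length mixin. For a list rooted at gindex g:
//
//	dataRootGindex, lengthGindex := g*2, g*2+1 // left child holds the data, right child the length
//
// so a list at the tree root (g = 1) has its data root at gindex 2 and its length chunk at gindex 3.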
// merkleizeBitvectorBody computes the Merkle root of a bitvector-like byte sequence by packing it into 32-byte chunks
// and merkleizing those chunks as a fixed-capacity vector (padding with trie.ZeroHashes as needed).
//
// Generalized indices (gindices): depth = ssz.Depth(chunkLimit); leafBase = subtreeRootGindex << depth; chunk i uses gindex = leafBase + uint64(i).
// Proof collection: merkleizeVectorAndCollect collects required sibling hashes at the chunk-merkleization layer.
//
// Parameters:
// - data: raw byte sequence representing the bitvector payload.
// - chunkLimit: fixed/limit number of 32-byte chunks (used for padding/Depth).
// - subtreeRootGindex: gindex of the bitvector data subtree root.
//
// Returns:
// - [32]byte: Merkle root of the bitvector data subtree.
// - error: any error encountered while packing data into chunks.
func (pc *proofCollector) merkleizeBitvectorBody(data []byte, chunkLimit uint64, subtreeRootGindex uint64) ([32]byte, error) {
depth := ssz.Depth(chunkLimit)
chunks, err := ssz.PackByChunk([][]byte{data})
if err != nil {
return [32]byte{}, err
}
root := pc.merkleizeVectorAndCollect(chunks, subtreeRootGindex, uint64(depth))
return root, nil
}
// merkleizeBitvector computes the Merkle root of a fixed-length SSZ bitvector and collects proof nodes for targets.
//
// Parameters:
// - info: SSZ type metadata for the bitvector.
// - v: reflect.Value of the bitvector value.
// - currentGindex: generalized index (gindex) of the bitvector root.
//
// Returns:
// - [32]byte: Merkle root of the bitvector.
// - error: any error encountered during packing or merkleization.
func (pc *proofCollector) merkleizeBitvector(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
bitvectorBytes := v.Bytes()
if len(bitvectorBytes) == 0 {
return [32]byte{}, fmt.Errorf("bitvector field is uninitialized (nil or empty slice)")
}
// Compute virtual leaf capacity for the bitvector.
numChunks, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
root, err := pc.merkleizeBitvectorBody(bitvectorBytes, numChunks, currentGindex)
if err != nil {
return [32]byte{}, err
}
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeBitlist computes the Merkle root of an SSZ bitlist by merkleizing its data chunks and mixing in the bit length.
//
// Generalized indices (gindices): dataRoot is the left child (dataRootGindex = currentGindex*2) and the length mixin is the right child (currentGindex*2+1).
// Proof collection: merkleizeBitvectorBody computes the data root (collecting required siblings under dataRootGindex), and mixinLengthAndCollect collects required siblings at the length-mixin level; collectLeaf stores the bitlist root if registered.
//
// Parameters:
// - info: SSZ type metadata for the bitlist.
// - v: reflect.Value of the bitlist value.
// - currentGindex: generalized index (gindex) of the bitlist root.
//
// Returns:
// - [32]byte: Merkle root of the bitlist.
// - error: any error encountered while merkleizing the data subtree.
func (pc *proofCollector) merkleizeBitlist(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
bi, err := info.BitlistInfo()
if err != nil {
return [32]byte{}, err
}
bitlistBytes := v.Bytes()
// Use go-bitfield to get bytes with termination bit cleared
bl := bitfield.Bitlist(bitlistBytes)
data := bl.BytesNoTrim()
// Get the bit length from bitlistInfo
bitLength := bi.Length()
// Get the chunk limit from getChunkCount
limitChunks, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
chunks := make([][32]byte, 2)
// Compute the length hash (little-endian uint256)
binary.LittleEndian.PutUint64(chunks[1][:8], uint64(bitLength))
dataRootGindex := currentGindex * 2
chunks[0], err = pc.merkleizeBitvectorBody(data, limitChunks, dataRootGindex)
if err != nil {
return [32]byte{}, err
}
// Handle the length mixin level (and proof bookkeeping at this level).
root := pc.mixinLengthAndCollect(currentGindex, chunks)
pc.collectLeaf(currentGindex, root)
return root, nil
}
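// Illustration (an assumption about go-bitfield semantics, not from the original source):
// BytesNoTrim clears the termination bit while keeping the byte length, so the length
// marker never leaks into the hashed data:
//
//	bl := bitfield.Bitlist{0b00001001} // data bits 0..2, termination bit at index 3
//	_ = bl.Len()                       // 3
//	_ = bl.BytesNoTrim()               // []byte{0b00000001}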
// merkleizeVectorAndCollect merkleizes a slice of 32-byte leaf nodes into a subtree root, padding to a virtual size of 2^depth.
//
// Generalized indices (gindices): at layer i (0-based from the leaves), levelBase = subtreeGeneralizedIndex << (depth-i) and each node's gindex is levelBase + idx.
// Proof collection: for each layer it calls collectSibling(nodeGindex, nodeHash) and stores only those gindices registered via addTarget.
//
// Parameters:
// - elements: leaf-level hashes (may be shorter than 2^depth; padding is applied with trie.ZeroHashes).
// - subtreeGeneralizedIndex: gindex of the subtree root.
// - depth: number of merkleization layers from subtree root to leaves.
//
// Returns:
// - [32]byte: Merkle root of the subtree.
func (pc *proofCollector) merkleizeVectorAndCollect(elements [][32]byte, subtreeGeneralizedIndex uint64, depth uint64) [32]byte {
// Empty input: the subtree root is the zero hash at this depth.
if len(elements) == 0 {
return trie.ZeroHashes[depth]
}
for i := range depth {
layerLen := len(elements)
oddNodeLength := layerLen%2 == 1
if oddNodeLength {
zerohash := trie.ZeroHashes[i]
elements = append(elements, zerohash)
}
levelBaseGindex := subtreeGeneralizedIndex << (depth - i)
for idx := range elements {
gindex := levelBaseGindex + uint64(idx)
pc.collectSibling(gindex, elements[idx])
pc.collectLeaf(gindex, elements[idx])
}
elements = htr.VectorizedSha256(elements)
}
return elements[0]
}
// mixinLengthAndCollect computes the final mix-in root for list/bitlist values:
//
// root = hash(dataRoot, lengthHash)
//
// where chunks[0] is dataRoot and chunks[1] is the 32-byte length hash.
//
// Generalized indices (gindices): dataRoot is the left child (dataRootGindex = currentGindex*2) and lengthHash is the right child (lengthHashGindex = currentGindex*2+1).
// Proof collection: it calls collectSibling/collectLeaf for both child gindices; the collector stores them only if they were registered via addTarget.
//
// Parameters:
// - currentGindex: gindex of the parent node (list/bitlist root).
// - chunks: two 32-byte nodes: [dataRoot, lengthHash].
//
// Returns:
// - [32]byte: mixed-in Merkle root.
func (pc *proofCollector) mixinLengthAndCollect(currentGindex uint64, chunks [][32]byte) [32]byte {
dataRoot, lengthHash := chunks[0], chunks[1]
dataRootGindex, lengthHashGindex := currentGindex*2, currentGindex*2+1
pc.collectSibling(dataRootGindex, dataRoot)
pc.collectSibling(lengthHashGindex, lengthHash)
pc.collectLeaf(dataRootGindex, dataRoot)
pc.collectLeaf(lengthHashGindex, lengthHash)
return ssz.MixInLength(dataRoot, lengthHash[:])
}
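// Standalone sketch (illustrative, not part of the original source): ssz.MixInLength reduces to
// a single SHA-256 over dataRoot || lengthChunk, where lengthChunk is the little-endian uint64
// length zero-padded to 32 bytes:
//
//	package main
//
//	import (
//		"crypto/sha256"
//		"encoding/binary"
//		"fmt"
//	)
//
//	func main() {
//		var dataRoot, lengthChunk [32]byte
//		binary.LittleEndian.PutUint64(lengthChunk[:8], 5) // list length = 5
//		root := sha256.Sum256(append(dataRoot[:], lengthChunk[:]...))
//		fmt.Printf("mixed-in root: %x\n", root)
//	}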

View File

@@ -0,0 +1,531 @@
package query
import (
"crypto/sha256"
"encoding/binary"
"reflect"
"slices"
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ssz "github.com/OffchainLabs/prysm/v7/encoding/ssz"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
sszquerypb "github.com/OffchainLabs/prysm/v7/proto/ssz_query/testing"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestProofCollector_New(t *testing.T) {
pc := newProofCollector()
require.NotNil(t, pc)
require.Equal(t, 0, len(pc.requiredSiblings))
require.Equal(t, 0, len(pc.requiredLeaves))
require.Equal(t, 0, len(pc.siblings))
require.Equal(t, 0, len(pc.leaves))
}
func TestProofCollector_Reset(t *testing.T) {
pc := newProofCollector()
pc.requiredSiblings[3] = struct{}{}
pc.requiredLeaves[5] = struct{}{}
pc.siblings[3] = [32]byte{1}
pc.leaves[5] = [32]byte{2}
pc.reset()
require.Equal(t, 0, len(pc.requiredSiblings))
require.Equal(t, 0, len(pc.requiredLeaves))
require.Equal(t, 0, len(pc.siblings))
require.Equal(t, 0, len(pc.leaves))
}
func TestProofCollector_AddTarget(t *testing.T) {
pc := newProofCollector()
pc.addTarget(5)
_, hasLeaf := pc.requiredLeaves[5]
_, hasSibling4 := pc.requiredSiblings[4]
_, hasSibling3 := pc.requiredSiblings[3]
_, hasSibling1 := pc.requiredSiblings[1] // GI 1 is the root
require.Equal(t, true, hasLeaf)
require.Equal(t, true, hasSibling4)
require.Equal(t, true, hasSibling3)
require.Equal(t, false, hasSibling1)
}
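// Sketch of the walk this test exercises (an assumption about addTarget's internals, inferred
// from the assertions above): starting at the target, register the sibling at every level up
// to, but excluding, the root:
//
//	for g := gindex; g > 1; g >>= 1 {
//		requiredSiblings[g^1] = struct{}{} // g^1 (XOR) flips the last bit: the sibling of g
//	}
//
// For gindex 5 this registers siblings 4 and 3; gindex 1 (the root) is never registered.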
func TestProofCollector_ToProof(t *testing.T) {
pc := newProofCollector()
pc.addTarget(5)
leaf := [32]byte{9}
sibling4 := [32]byte{4}
sibling3 := [32]byte{3}
pc.collectLeaf(5, leaf)
pc.collectSibling(4, sibling4)
pc.collectSibling(3, sibling3)
proof, err := pc.toProof()
require.NoError(t, err)
require.Equal(t, 5, proof.Index)
require.DeepEqual(t, leaf[:], proof.Leaf)
require.Equal(t, 2, len(proof.Hashes))
require.DeepEqual(t, sibling4[:], proof.Hashes[0])
require.DeepEqual(t, sibling3[:], proof.Hashes[1])
}
func TestProofCollector_ToProof_NoLeaves(t *testing.T) {
pc := newProofCollector()
_, err := pc.toProof()
require.NotNil(t, err)
}
func TestProofCollector_CollectLeaf(t *testing.T) {
pc := newProofCollector()
leaf := [32]byte{7}
pc.collectLeaf(10, leaf)
require.Equal(t, 0, len(pc.leaves))
pc.addTarget(10)
pc.collectLeaf(10, leaf)
stored, ok := pc.leaves[10]
require.Equal(t, true, ok)
require.Equal(t, leaf, stored)
}
func TestProofCollector_CollectSibling(t *testing.T) {
pc := newProofCollector()
hash := [32]byte{5}
pc.collectSibling(4, hash)
require.Equal(t, 0, len(pc.siblings))
pc.addTarget(5)
pc.collectSibling(4, hash)
stored, ok := pc.siblings[4]
require.Equal(t, true, ok)
require.Equal(t, hash, stored)
}
func TestProofCollector_Merkleize_BasicTypes(t *testing.T) {
testCases := []struct {
name string
sszType SSZType
value any
expected [32]byte
}{
{
name: "uint8",
sszType: Uint8,
value: uint8(0x11),
expected: func() [32]byte {
var leaf [32]byte
leaf[0] = 0x11
return leaf
}(),
},
{
name: "uint16",
sszType: Uint16,
value: uint16(0x2211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint16(leaf[:2], 0x2211)
return leaf
}(),
},
{
name: "uint32",
sszType: Uint32,
value: uint32(0x44332211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint32(leaf[:4], 0x44332211)
return leaf
}(),
},
{
name: "uint64",
sszType: Uint64,
value: uint64(0x8877665544332211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint64(leaf[:8], 0x8877665544332211)
return leaf
}(),
},
{
name: "bool",
sszType: Boolean,
value: true,
expected: func() [32]byte {
var leaf [32]byte
leaf[0] = 1
return leaf
}(),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
pc := newProofCollector()
gindex := uint64(3)
pc.addTarget(gindex)
leaf, err := pc.merkleizeBasicType(tc.sszType, reflect.ValueOf(tc.value), gindex)
require.NoError(t, err)
require.Equal(t, tc.expected, leaf)
stored, ok := pc.leaves[gindex]
require.Equal(t, true, ok)
require.Equal(t, tc.expected, stored)
})
}
}
func TestProofCollector_Merkleize_Container(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
pc := newProofCollector()
pc.addTarget(1)
root, err := pc.merkleize(info, reflect.ValueOf(container), 1)
require.NoError(t, err)
expected, err := container.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, expected, root)
stored, ok := pc.leaves[1]
require.Equal(t, true, ok)
require.Equal(t, expected, stored)
}
func TestProofCollector_Merkleize_Vector(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["vector_field"]
pc := newProofCollector()
root, err := pc.merkleizeVector(field.sszInfo, reflect.ValueOf(container.VectorField), 1)
require.NoError(t, err)
serialized := make([][]byte, len(container.VectorField))
for i, v := range container.VectorField {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, v)
serialized[i] = buf
}
chunks, err := ssz.PackByChunk(serialized)
require.NoError(t, err)
limit, err := getChunkCount(field.sszInfo)
require.NoError(t, err)
expected := ssz.MerkleizeVector(chunks, limit)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_List(t *testing.T) {
list := []*sszquerypb.FixedNestedContainer{
makeFixedNestedContainer(1),
makeFixedNestedContainer(2),
}
container := makeVariableTestContainer(list, bitfield.NewBitlist(1))
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["field_list_container"]
pc := newProofCollector()
root, err := pc.merkleizeList(field.sszInfo, reflect.ValueOf(list), 1)
require.NoError(t, err)
listInfo, err := field.sszInfo.ListInfo()
require.NoError(t, err)
expected, err := ssz.MerkleizeListSSZ(list, listInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_Bitvector(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["bitvector64_field"]
pc := newProofCollector()
root, err := pc.merkleizeBitvector(field.sszInfo, reflect.ValueOf(container.Bitvector64Field), 1)
require.NoError(t, err)
expected, err := ssz.MerkleizeByteSliceSSZ([]byte(container.Bitvector64Field))
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_Bitlist(t *testing.T) {
bitlist := bitfield.NewBitlist(16)
bitlist.SetBitAt(3, true)
bitlist.SetBitAt(8, true)
container := makeVariableTestContainer(nil, bitlist)
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["bitlist_field"]
pc := newProofCollector()
root, err := pc.merkleizeBitlist(field.sszInfo, reflect.ValueOf(container.BitlistField), 1)
require.NoError(t, err)
bitlistInfo, err := field.sszInfo.BitlistInfo()
require.NoError(t, err)
expected, err := ssz.BitlistRoot(bitfield.Bitlist(bitlist), bitlistInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_MerkleizeVectorBody_Basic(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["vector_field"]
vectorInfo, err := field.sszInfo.VectorInfo()
require.NoError(t, err)
length := len(container.VectorField)
limit, err := getChunkCount(field.sszInfo)
require.NoError(t, err)
pc := newProofCollector()
root, err := pc.merkleizeVectorBody(vectorInfo.element, reflect.ValueOf(container.VectorField), length, limit, 2)
require.NoError(t, err)
serialized := make([][]byte, len(container.VectorField))
for i, v := range container.VectorField {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, v)
serialized[i] = buf
}
chunks, err := ssz.PackByChunk(serialized)
require.NoError(t, err)
expected := ssz.MerkleizeVector(chunks, limit)
require.Equal(t, expected, root)
}
func TestProofCollector_MerkleizeVectorAndCollect(t *testing.T) {
pc := newProofCollector()
pc.addTarget(6)
elements := [][32]byte{{1}, {2}}
expected := ssz.MerkleizeVector(slices.Clone(elements), 2)
root := pc.merkleizeVectorAndCollect(elements, 3, 1)
storedLeaf, hasLeaf := pc.leaves[6]
storedSibling, hasSibling := pc.siblings[7]
require.Equal(t, true, hasLeaf)
require.Equal(t, true, hasSibling)
require.Equal(t, elements[0], storedLeaf)
require.Equal(t, elements[1], storedSibling)
require.Equal(t, expected, root)
}
func TestProofCollector_MixinLengthAndCollect(t *testing.T) {
list := []*sszquerypb.FixedNestedContainer{
makeFixedNestedContainer(1),
makeFixedNestedContainer(2),
}
container := makeVariableTestContainer(list, bitfield.NewBitlist(1))
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["field_list_container"]
// Target gindex 2 (data root) - sibling at gindex 3 (length hash) should be collected
pc := newProofCollector()
pc.addTarget(2)
root, err := pc.merkleizeList(field.sszInfo, reflect.ValueOf(list), 1)
require.NoError(t, err)
listInfo, err := field.sszInfo.ListInfo()
require.NoError(t, err)
expected, err := ssz.MerkleizeListSSZ(list, listInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
// Verify data root is collected as leaf at gindex 2
storedLeaf, hasLeaf := pc.leaves[2]
require.Equal(t, true, hasLeaf)
// Verify length hash is collected as sibling at gindex 3
storedSibling, hasSibling := pc.siblings[3]
require.Equal(t, true, hasSibling)
// Verify the root is hash(dataRoot || lengthHash)
expectedBuf := append(storedLeaf[:], storedSibling[:]...)
expectedRoot := sha256.Sum256(expectedBuf)
require.Equal(t, expectedRoot, root)
}
func BenchmarkOptimizedValidatorRoots(b *testing.B) {
validators := make([]*ethpb.Validator, 1000)
for i := range validators {
validators[i] = makeTestValidator(i)
}
b.ResetTimer()
for b.Loop() {
_, err := stateutil.OptimizedValidatorRoots(validators)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkProofCollectorMerkleize(b *testing.B) {
validators := make([]*ethpb.Validator, 1000)
for i := range validators {
validators[i] = makeTestValidator(i)
}
info, err := AnalyzeObject(validators[0])
require.NoError(b, err)
b.ResetTimer()
for b.Loop() {
for _, val := range validators {
pc := newProofCollector()
v := reflect.ValueOf(val)
_, err := pc.merkleize(info, v, 1)
if err != nil {
b.Fatal(err)
}
}
}
}
func makeTestValidator(i int) *ethpb.Validator {
pubkey := make([]byte, 48)
for j := range pubkey {
pubkey[j] = byte(i + j)
}
withdrawalCredentials := make([]byte, 32)
for j := range withdrawalCredentials {
withdrawalCredentials[j] = byte(255 - ((i + j) % 256))
}
return &ethpb.Validator{
PublicKey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
EffectiveBalance: uint64(32000000000 + i),
Slashed: i%2 == 0,
ActivationEligibilityEpoch: primitives.Epoch(i),
ActivationEpoch: primitives.Epoch(i + 1),
ExitEpoch: primitives.Epoch(i + 2),
WithdrawableEpoch: primitives.Epoch(i + 3),
}
}
func makeFixedNestedContainer(value uint64) *sszquerypb.FixedNestedContainer {
value2 := make([]byte, 32)
for i := range value2 {
value2[i] = byte(i)
}
return &sszquerypb.FixedNestedContainer{
Value1: value,
Value2: value2,
}
}
func makeFixedTestContainer() *sszquerypb.FixedTestContainer {
fieldBytes32 := make([]byte, 32)
for i := range fieldBytes32 {
fieldBytes32[i] = byte(i)
}
vectorField := make([]uint64, 24)
for i := range vectorField {
vectorField[i] = uint64(i)
}
rows := make([][]byte, 5)
for i := range rows {
row := make([]byte, 32)
for j := range row {
row[j] = byte(i) + byte(j)
}
rows[i] = row
}
bitvector64 := bitfield.NewBitvector64()
bitvector64.SetBitAt(1, true)
bitvector512 := bitfield.NewBitvector512()
bitvector512.SetBitAt(10, true)
trailing := make([]byte, 56)
for i := range trailing {
trailing[i] = byte(i)
}
return &sszquerypb.FixedTestContainer{
FieldUint32: 1,
FieldUint64: 2,
FieldBool: true,
FieldBytes32: fieldBytes32,
Nested: makeFixedNestedContainer(3),
VectorField: vectorField,
TwoDimensionBytesField: rows,
Bitvector64Field: bitvector64,
Bitvector512Field: bitvector512,
TrailingField: trailing,
}
}
func makeVariableTestContainer(list []*sszquerypb.FixedNestedContainer, bitlist bitfield.Bitlist) *sszquerypb.VariableTestContainer {
leading := make([]byte, 32)
for i := range leading {
leading[i] = byte(i)
}
trailing := make([]byte, 56)
for i := range trailing {
trailing[i] = byte(255 - i)
}
if bitlist == nil {
bitlist = bitfield.NewBitlist(0)
}
return &sszquerypb.VariableTestContainer{
LeadingField: leading,
FieldListContainer: list,
BitlistField: bitlist,
TrailingField: trailing,
}
}

View File

@@ -389,6 +389,7 @@ func TestHashTreeRoot(t *testing.T) {
require.NoError(t, err, "HashTreeRoot should not return an error")
expectedHashTreeRoot, err := tt.obj.HashTreeRoot()
require.NoError(t, err, "HashTreeRoot on original object should not return an error")
// Verify the Merkle tree root matches the SSZ-generated HashTreeRoot
require.Equal(t, expectedHashTreeRoot, hashTreeRoot, "HashTreeRoot from sszInfo should match original object's HashTreeRoot")
})
}

View File

@@ -26,21 +26,21 @@ func TestLifecycle(t *testing.T) {
port := 1000 + rand.Intn(1000)
prometheusService := NewService(t.Context(), fmt.Sprintf(":%d", port), nil)
prometheusService.Start()
// Actively wait until the service responds on /metrics (faster and less flaky than a fixed sleep)
deadline := time.Now().Add(3 * time.Second)
for {
if time.Now().After(deadline) {
t.Fatalf("metrics endpoint not ready within timeout")
}
resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
if err == nil {
_ = resp.Body.Close()
if resp.StatusCode == http.StatusOK {
break
}
}
time.Sleep(50 * time.Millisecond)
}
// Query the service to ensure it really started.
resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
@@ -49,18 +49,18 @@ func TestLifecycle(t *testing.T) {
err = prometheusService.Stop()
require.NoError(t, err)
// Actively wait until the service stops responding on /metrics
deadline = time.Now().Add(3 * time.Second)
for {
if time.Now().After(deadline) {
t.Fatalf("metrics endpoint still reachable after timeout")
}
_, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
if err != nil {
break
}
time.Sleep(50 * time.Millisecond)
}
// Query the service to ensure it really stopped.
_, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))

View File

@@ -195,6 +195,7 @@ ssz_fulu_objs = [
]
ssz_gloas_objs = [
"BlindedExecutionPayloadEnvelope",
"BuilderPendingPayment",
"BuilderPendingWithdrawal",
"DataColumnSidecarGloas",
@@ -204,6 +205,7 @@ ssz_gloas_objs = [
"PayloadAttestationMessage",
"ExecutionPayloadBid",
"SignedExecutionPayloadBid",
"SignedBlindedExecutionPayloadEnvelope",
"SignedExecutionPayloadEnvelope",
"BeaconBlockGloas",
"SignedBeaconBlockGloas",

View File

@@ -190,6 +190,60 @@ func copyBeaconBlockBodyGloas(body *BeaconBlockBodyGloas) *BeaconBlockBodyGloas
return copied
}
// CopySignedExecutionPayloadEnvelope copies the provided signed execution payload envelope.
func CopySignedExecutionPayloadEnvelope(env *SignedExecutionPayloadEnvelope) *SignedExecutionPayloadEnvelope {
if env == nil {
return nil
}
return &SignedExecutionPayloadEnvelope{
Message: copyExecutionPayloadEnvelope(env.Message),
Signature: bytesutil.SafeCopyBytes(env.Signature),
}
}
// copyExecutionPayloadEnvelope copies the provided execution payload envelope.
func copyExecutionPayloadEnvelope(env *ExecutionPayloadEnvelope) *ExecutionPayloadEnvelope {
if env == nil {
return nil
}
return &ExecutionPayloadEnvelope{
Payload: env.Payload, // engine proto, not deep copied here
ExecutionRequests: env.ExecutionRequests, // engine proto, not deep copied here
BuilderIndex: env.BuilderIndex,
BeaconBlockRoot: bytesutil.SafeCopyBytes(env.BeaconBlockRoot),
Slot: env.Slot,
BlobKzgCommitments: bytesutil.SafeCopy2dBytes(env.BlobKzgCommitments),
StateRoot: bytesutil.SafeCopyBytes(env.StateRoot),
}
}
// CopySignedBlindedExecutionPayloadEnvelope copies the provided signed blinded execution payload envelope.
func CopySignedBlindedExecutionPayloadEnvelope(env *SignedBlindedExecutionPayloadEnvelope) *SignedBlindedExecutionPayloadEnvelope {
if env == nil {
return nil
}
return &SignedBlindedExecutionPayloadEnvelope{
Message: copyBlindedExecutionPayloadEnvelope(env.Message),
Signature: bytesutil.SafeCopyBytes(env.Signature),
}
}
// copyBlindedExecutionPayloadEnvelope copies the provided blinded execution payload envelope.
func copyBlindedExecutionPayloadEnvelope(env *BlindedExecutionPayloadEnvelope) *BlindedExecutionPayloadEnvelope {
if env == nil {
return nil
}
return &BlindedExecutionPayloadEnvelope{
PayloadRoot: bytesutil.SafeCopyBytes(env.PayloadRoot),
ExecutionRequests: env.ExecutionRequests, // engine proto, not deep copied here
BuilderIndex: env.BuilderIndex,
BeaconBlockRoot: bytesutil.SafeCopyBytes(env.BeaconBlockRoot),
Slot: env.Slot,
BlobKzgCommitments: bytesutil.SafeCopy2dBytes(env.BlobKzgCommitments),
StateRoot: bytesutil.SafeCopyBytes(env.StateRoot),
}
}
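// Usage sketch (illustrative, not part of the original source): the Copy* helpers return values
// whose byte slices are independent of the input, so mutating a copy is not visible through
// the original:
//
//	cp := CopySignedBlindedExecutionPayloadEnvelope(env)
//	cp.Message.StateRoot[0] ^= 0xff // env.Message.StateRoot is unchanged
//
// ExecutionRequests is the exception: it is shared rather than deep copied, as flagged above.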
// CopyBuilderPendingPayment creates a deep copy of a builder pending payment.
func CopyBuilderPendingPayment(original *BuilderPendingPayment) *BuilderPendingPayment {
if original == nil {

View File

@@ -1385,6 +1385,150 @@ func (x *SignedExecutionPayloadEnvelope) GetSignature() []byte {
return nil
}
type BlindedExecutionPayloadEnvelope struct {
state protoimpl.MessageState `protogen:"open.v1"`
PayloadRoot []byte `protobuf:"bytes,1,opt,name=payload_root,json=payloadRoot,proto3" json:"payload_root,omitempty" ssz-size:"32"`
ExecutionRequests *v1.ExecutionRequests `protobuf:"bytes,2,opt,name=execution_requests,json=executionRequests,proto3" json:"execution_requests,omitempty"`
BuilderIndex github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex `protobuf:"varint,3,opt,name=builder_index,json=builderIndex,proto3" json:"builder_index,omitempty" cast-type:"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.BuilderIndex"`
BeaconBlockRoot []byte `protobuf:"bytes,4,opt,name=beacon_block_root,json=beaconBlockRoot,proto3" json:"beacon_block_root,omitempty" ssz-size:"32"`
Slot github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot `protobuf:"varint,5,opt,name=slot,proto3" json:"slot,omitempty" cast-type:"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Slot"`
BlobKzgCommitments [][]byte `protobuf:"bytes,6,rep,name=blob_kzg_commitments,json=blobKzgCommitments,proto3" json:"blob_kzg_commitments,omitempty" ssz-max:"4096" ssz-size:"?,48"`
StateRoot []byte `protobuf:"bytes,7,opt,name=state_root,json=stateRoot,proto3" json:"state_root,omitempty" ssz-size:"32"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *BlindedExecutionPayloadEnvelope) Reset() {
*x = BlindedExecutionPayloadEnvelope{}
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[14]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *BlindedExecutionPayloadEnvelope) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*BlindedExecutionPayloadEnvelope) ProtoMessage() {}
func (x *BlindedExecutionPayloadEnvelope) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[14]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use BlindedExecutionPayloadEnvelope.ProtoReflect.Descriptor instead.
func (*BlindedExecutionPayloadEnvelope) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_gloas_proto_rawDescGZIP(), []int{14}
}
func (x *BlindedExecutionPayloadEnvelope) GetPayloadRoot() []byte {
if x != nil {
return x.PayloadRoot
}
return nil
}
func (x *BlindedExecutionPayloadEnvelope) GetExecutionRequests() *v1.ExecutionRequests {
if x != nil {
return x.ExecutionRequests
}
return nil
}
func (x *BlindedExecutionPayloadEnvelope) GetBuilderIndex() github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex {
if x != nil {
return x.BuilderIndex
}
return github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(0)
}
func (x *BlindedExecutionPayloadEnvelope) GetBeaconBlockRoot() []byte {
if x != nil {
return x.BeaconBlockRoot
}
return nil
}
func (x *BlindedExecutionPayloadEnvelope) GetSlot() github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot {
if x != nil {
return x.Slot
}
return github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(0)
}
func (x *BlindedExecutionPayloadEnvelope) GetBlobKzgCommitments() [][]byte {
if x != nil {
return x.BlobKzgCommitments
}
return nil
}
func (x *BlindedExecutionPayloadEnvelope) GetStateRoot() []byte {
if x != nil {
return x.StateRoot
}
return nil
}
type SignedBlindedExecutionPayloadEnvelope struct {
state protoimpl.MessageState `protogen:"open.v1"`
Message *BlindedExecutionPayloadEnvelope `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=signature,proto3" json:"signature,omitempty" ssz-size:"96"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SignedBlindedExecutionPayloadEnvelope) Reset() {
*x = SignedBlindedExecutionPayloadEnvelope{}
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[15]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SignedBlindedExecutionPayloadEnvelope) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SignedBlindedExecutionPayloadEnvelope) ProtoMessage() {}
func (x *SignedBlindedExecutionPayloadEnvelope) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[15]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SignedBlindedExecutionPayloadEnvelope.ProtoReflect.Descriptor instead.
func (*SignedBlindedExecutionPayloadEnvelope) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_gloas_proto_rawDescGZIP(), []int{15}
}
func (x *SignedBlindedExecutionPayloadEnvelope) GetMessage() *BlindedExecutionPayloadEnvelope {
if x != nil {
return x.Message
}
return nil
}
func (x *SignedBlindedExecutionPayloadEnvelope) GetSignature() []byte {
if x != nil {
return x.Signature
}
return nil
}
type Builder struct {
state protoimpl.MessageState `protogen:"open.v1"`
Pubkey []byte `protobuf:"bytes,1,opt,name=pubkey,proto3" json:"pubkey,omitempty" ssz-size:"48"`
@@ -1399,7 +1543,7 @@ type Builder struct {
func (x *Builder) Reset() {
*x = Builder{}
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[14]
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[16]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1411,7 +1555,7 @@ func (x *Builder) String() string {
func (*Builder) ProtoMessage() {}
func (x *Builder) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[14]
mi := &file_proto_prysm_v1alpha1_gloas_proto_msgTypes[16]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1424,7 +1568,7 @@ func (x *Builder) ProtoReflect() protoreflect.Message {
// Deprecated: Use Builder.ProtoReflect.Descriptor instead.
func (*Builder) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_gloas_proto_rawDescGZIP(), []int{14}
return file_proto_prysm_v1alpha1_gloas_proto_rawDescGZIP(), []int{16}
}
func (x *Builder) GetPubkey() []byte {
@@ -2035,39 +2179,83 @@ var file_proto_prysm_v1alpha1_gloas_proto_rawDesc = []byte{
0x6e, 0x76, 0x65, 0x6c, 0x6f, 0x70, 0x65, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
0x12, 0x24, 0x0a, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x39, 0x36, 0x52, 0x09, 0x73, 0x69, 0x67,
0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0xc1, 0x03, 0x0a, 0x07, 0x42, 0x75, 0x69, 0x6c, 0x64,
0x65, 0x72, 0x12, 0x1e, 0x0a, 0x06, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x06, 0x70, 0x75, 0x62, 0x6b,
0x65, 0x79, 0x12, 0x1f, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0c, 0x42, 0x05, 0x8a, 0xb5, 0x18, 0x01, 0x31, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73,
0x69, 0x6f, 0x6e, 0x12, 0x33, 0x0a, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e,
0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06,
0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x10, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f,
0x6e, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x5e, 0x0a, 0x07, 0x62, 0x61, 0x6c, 0x61,
0x6e, 0x63, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61,
0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f,
0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f,
0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x47, 0x77, 0x65, 0x69, 0x52,
0x07, 0x62, 0x61, 0x6c, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x6a, 0x0a, 0x0d, 0x64, 0x65, 0x70, 0x6f,
0x73, 0x69, 0x74, 0x5f, 0x65, 0x70, 0x6f, 0x63, 0x68, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x42,
0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79,
0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d,
0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73,
0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52, 0x0c, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x45,
0x70, 0x6f, 0x63, 0x68, 0x12, 0x74, 0x0a, 0x12, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77,
0x61, 0x62, 0x6c, 0x65, 0x5f, 0x65, 0x70, 0x6f, 0x63, 0x68, 0x18, 0x06, 0x20, 0x01, 0x28, 0x04,
0x42, 0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x8e, 0x04, 0x0a, 0x1f, 0x42, 0x6c, 0x69, 0x6e, 0x64,
0x65, 0x64, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c, 0x6f,
0x61, 0x64, 0x45, 0x6e, 0x76, 0x65, 0x6c, 0x6f, 0x70, 0x65, 0x12, 0x29, 0x0a, 0x0c, 0x70, 0x61,
0x79, 0x6c, 0x6f, 0x61, 0x64, 0x5f, 0x72, 0x6f, 0x6f, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c,
0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x0b, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61,
0x64, 0x52, 0x6f, 0x6f, 0x74, 0x12, 0x54, 0x0a, 0x12, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69,
0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x25, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x52, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74,
0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x12, 0x71, 0x0a, 0x0d, 0x62,
0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x5f, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x18, 0x03, 0x20, 0x01,
0x28, 0x04, 0x42, 0x4c, 0x82, 0xb5, 0x18, 0x48, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f,
0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73,
0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69,
0x76, 0x65, 0x73, 0x2e, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x49, 0x6e, 0x64, 0x65, 0x78,
0x52, 0x0c, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x49, 0x6e, 0x64, 0x65, 0x78, 0x12, 0x32,
0x0a, 0x11, 0x62, 0x65, 0x61, 0x63, 0x6f, 0x6e, 0x5f, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x5f, 0x72,
0x6f, 0x6f, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33,
0x32, 0x52, 0x0f, 0x62, 0x65, 0x61, 0x63, 0x6f, 0x6e, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x52, 0x6f,
0x6f, 0x74, 0x12, 0x58, 0x0a, 0x04, 0x73, 0x6c, 0x6f, 0x74, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04,
0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72,
0x79, 0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73,
0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65,
0x73, 0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52, 0x11, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61,
0x77, 0x61, 0x62, 0x6c, 0x65, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x42, 0x3b, 0x5a, 0x39, 0x67, 0x69,
0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69,
0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70,
0x68, 0x61, 0x31, 0x3b, 0x65, 0x74, 0x68, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x04, 0x73, 0x6c, 0x6f, 0x74, 0x12, 0x42, 0x0a, 0x14,
0x62, 0x6c, 0x6f, 0x62, 0x5f, 0x6b, 0x7a, 0x67, 0x5f, 0x63, 0x6f, 0x6d, 0x6d, 0x69, 0x74, 0x6d,
0x65, 0x6e, 0x74, 0x73, 0x18, 0x06, 0x20, 0x03, 0x28, 0x0c, 0x42, 0x10, 0x8a, 0xb5, 0x18, 0x04,
0x3f, 0x2c, 0x34, 0x38, 0x92, 0xb5, 0x18, 0x04, 0x34, 0x30, 0x39, 0x36, 0x52, 0x12, 0x62, 0x6c,
0x6f, 0x62, 0x4b, 0x7a, 0x67, 0x43, 0x6f, 0x6d, 0x6d, 0x69, 0x74, 0x6d, 0x65, 0x6e, 0x74, 0x73,
0x12, 0x25, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x74, 0x65, 0x5f, 0x72, 0x6f, 0x6f, 0x74, 0x18, 0x07,
0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x09, 0x73, 0x74,
0x61, 0x74, 0x65, 0x52, 0x6f, 0x6f, 0x74, 0x22, 0x9f, 0x01, 0x0a, 0x25, 0x53, 0x69, 0x67, 0x6e,
0x65, 0x64, 0x42, 0x6c, 0x69, 0x6e, 0x64, 0x65, 0x64, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69,
0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x45, 0x6e, 0x76, 0x65, 0x6c, 0x6f, 0x70,
0x65, 0x12, 0x50, 0x0a, 0x07, 0x6d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0b, 0x32, 0x36, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x74,
0x68, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2e, 0x42, 0x6c, 0x69, 0x6e, 0x64,
0x65, 0x64, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c, 0x6f,
0x61, 0x64, 0x45, 0x6e, 0x76, 0x65, 0x6c, 0x6f, 0x70, 0x65, 0x52, 0x07, 0x6d, 0x65, 0x73, 0x73,
0x61, 0x67, 0x65, 0x12, 0x24, 0x0a, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65,
0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x39, 0x36, 0x52, 0x09,
0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0xc1, 0x03, 0x0a, 0x07, 0x42, 0x75,
0x69, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x1e, 0x0a, 0x06, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18,
0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x06, 0x70,
0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x1f, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e,
0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x05, 0x8a, 0xb5, 0x18, 0x01, 0x31, 0x52, 0x07, 0x76,
0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x33, 0x0a, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74,
0x69, 0x6f, 0x6e, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28,
0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x10, 0x65, 0x78, 0x65, 0x63, 0x75,
0x74, 0x69, 0x6f, 0x6e, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x5e, 0x0a, 0x07, 0x62,
0x61, 0x6c, 0x61, 0x6e, 0x63, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5,
0x18, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66,
0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f,
0x76, 0x37, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70,
0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x47, 0x77,
0x65, 0x69, 0x52, 0x07, 0x62, 0x61, 0x6c, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x6a, 0x0a, 0x0d, 0x64,
0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x5f, 0x65, 0x70, 0x6f, 0x63, 0x68, 0x18, 0x05, 0x20, 0x01,
0x28, 0x04, 0x42, 0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f,
0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73,
0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69,
0x76, 0x65, 0x73, 0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52, 0x0c, 0x64, 0x65, 0x70, 0x6f, 0x73,
0x69, 0x74, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x12, 0x74, 0x0a, 0x12, 0x77, 0x69, 0x74, 0x68, 0x64,
0x72, 0x61, 0x77, 0x61, 0x62, 0x6c, 0x65, 0x5f, 0x65, 0x70, 0x6f, 0x63, 0x68, 0x18, 0x06, 0x20,
0x01, 0x28, 0x04, 0x42, 0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e,
0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73,
0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x37, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e,
0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74,
0x69, 0x76, 0x65, 0x73, 0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52, 0x11, 0x77, 0x69, 0x74, 0x68,
0x64, 0x72, 0x61, 0x77, 0x61, 0x62, 0x6c, 0x65, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x42, 0x3b, 0x5a,
0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63,
0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76,
0x37, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31,
0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x3b, 0x65, 0x74, 0x68, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var (
@@ -2082,88 +2270,92 @@ func file_proto_prysm_v1alpha1_gloas_proto_rawDescGZIP() []byte {
return file_proto_prysm_v1alpha1_gloas_proto_rawDescData
}
var file_proto_prysm_v1alpha1_gloas_proto_msgTypes = make([]protoimpl.MessageInfo, 15)
var file_proto_prysm_v1alpha1_gloas_proto_msgTypes = make([]protoimpl.MessageInfo, 17)
var file_proto_prysm_v1alpha1_gloas_proto_goTypes = []any{
(*ExecutionPayloadBid)(nil), // 0: ethereum.eth.v1alpha1.ExecutionPayloadBid
(*SignedExecutionPayloadBid)(nil), // 1: ethereum.eth.v1alpha1.SignedExecutionPayloadBid
(*PayloadAttestationData)(nil), // 2: ethereum.eth.v1alpha1.PayloadAttestationData
(*PayloadAttestation)(nil), // 3: ethereum.eth.v1alpha1.PayloadAttestation
(*PayloadAttestationMessage)(nil), // 4: ethereum.eth.v1alpha1.PayloadAttestationMessage
(*BeaconBlockGloas)(nil), // 5: ethereum.eth.v1alpha1.BeaconBlockGloas
(*BeaconBlockBodyGloas)(nil), // 6: ethereum.eth.v1alpha1.BeaconBlockBodyGloas
(*SignedBeaconBlockGloas)(nil), // 7: ethereum.eth.v1alpha1.SignedBeaconBlockGloas
(*BeaconStateGloas)(nil), // 8: ethereum.eth.v1alpha1.BeaconStateGloas
(*BuilderPendingPayment)(nil), // 9: ethereum.eth.v1alpha1.BuilderPendingPayment
(*BuilderPendingWithdrawal)(nil), // 10: ethereum.eth.v1alpha1.BuilderPendingWithdrawal
(*DataColumnSidecarGloas)(nil), // 11: ethereum.eth.v1alpha1.DataColumnSidecarGloas
(*ExecutionPayloadEnvelope)(nil), // 12: ethereum.eth.v1alpha1.ExecutionPayloadEnvelope
(*SignedExecutionPayloadEnvelope)(nil), // 13: ethereum.eth.v1alpha1.SignedExecutionPayloadEnvelope
(*Builder)(nil), // 14: ethereum.eth.v1alpha1.Builder
(*Eth1Data)(nil), // 15: ethereum.eth.v1alpha1.Eth1Data
(*ProposerSlashing)(nil), // 16: ethereum.eth.v1alpha1.ProposerSlashing
(*AttesterSlashingElectra)(nil), // 17: ethereum.eth.v1alpha1.AttesterSlashingElectra
(*AttestationElectra)(nil), // 18: ethereum.eth.v1alpha1.AttestationElectra
(*Deposit)(nil), // 19: ethereum.eth.v1alpha1.Deposit
(*SignedVoluntaryExit)(nil), // 20: ethereum.eth.v1alpha1.SignedVoluntaryExit
(*SyncAggregate)(nil), // 21: ethereum.eth.v1alpha1.SyncAggregate
(*SignedBLSToExecutionChange)(nil), // 22: ethereum.eth.v1alpha1.SignedBLSToExecutionChange
(*Fork)(nil), // 23: ethereum.eth.v1alpha1.Fork
(*BeaconBlockHeader)(nil), // 24: ethereum.eth.v1alpha1.BeaconBlockHeader
(*Validator)(nil), // 25: ethereum.eth.v1alpha1.Validator
(*Checkpoint)(nil), // 26: ethereum.eth.v1alpha1.Checkpoint
(*SyncCommittee)(nil), // 27: ethereum.eth.v1alpha1.SyncCommittee
(*HistoricalSummary)(nil), // 28: ethereum.eth.v1alpha1.HistoricalSummary
(*PendingDeposit)(nil), // 29: ethereum.eth.v1alpha1.PendingDeposit
(*PendingPartialWithdrawal)(nil), // 30: ethereum.eth.v1alpha1.PendingPartialWithdrawal
(*PendingConsolidation)(nil), // 31: ethereum.eth.v1alpha1.PendingConsolidation
(*v1.Withdrawal)(nil), // 32: ethereum.engine.v1.Withdrawal
(*v1.ExecutionPayloadDeneb)(nil), // 33: ethereum.engine.v1.ExecutionPayloadDeneb
(*v1.ExecutionRequests)(nil), // 34: ethereum.engine.v1.ExecutionRequests
(*ExecutionPayloadBid)(nil), // 0: ethereum.eth.v1alpha1.ExecutionPayloadBid
(*SignedExecutionPayloadBid)(nil), // 1: ethereum.eth.v1alpha1.SignedExecutionPayloadBid
(*PayloadAttestationData)(nil), // 2: ethereum.eth.v1alpha1.PayloadAttestationData
(*PayloadAttestation)(nil), // 3: ethereum.eth.v1alpha1.PayloadAttestation
(*PayloadAttestationMessage)(nil), // 4: ethereum.eth.v1alpha1.PayloadAttestationMessage
(*BeaconBlockGloas)(nil), // 5: ethereum.eth.v1alpha1.BeaconBlockGloas
(*BeaconBlockBodyGloas)(nil), // 6: ethereum.eth.v1alpha1.BeaconBlockBodyGloas
(*SignedBeaconBlockGloas)(nil), // 7: ethereum.eth.v1alpha1.SignedBeaconBlockGloas
(*BeaconStateGloas)(nil), // 8: ethereum.eth.v1alpha1.BeaconStateGloas
(*BuilderPendingPayment)(nil), // 9: ethereum.eth.v1alpha1.BuilderPendingPayment
(*BuilderPendingWithdrawal)(nil), // 10: ethereum.eth.v1alpha1.BuilderPendingWithdrawal
(*DataColumnSidecarGloas)(nil), // 11: ethereum.eth.v1alpha1.DataColumnSidecarGloas
(*ExecutionPayloadEnvelope)(nil), // 12: ethereum.eth.v1alpha1.ExecutionPayloadEnvelope
(*SignedExecutionPayloadEnvelope)(nil), // 13: ethereum.eth.v1alpha1.SignedExecutionPayloadEnvelope
(*BlindedExecutionPayloadEnvelope)(nil), // 14: ethereum.eth.v1alpha1.BlindedExecutionPayloadEnvelope
(*SignedBlindedExecutionPayloadEnvelope)(nil), // 15: ethereum.eth.v1alpha1.SignedBlindedExecutionPayloadEnvelope
(*Builder)(nil), // 16: ethereum.eth.v1alpha1.Builder
(*Eth1Data)(nil), // 17: ethereum.eth.v1alpha1.Eth1Data
(*ProposerSlashing)(nil), // 18: ethereum.eth.v1alpha1.ProposerSlashing
(*AttesterSlashingElectra)(nil), // 19: ethereum.eth.v1alpha1.AttesterSlashingElectra
(*AttestationElectra)(nil), // 20: ethereum.eth.v1alpha1.AttestationElectra
(*Deposit)(nil), // 21: ethereum.eth.v1alpha1.Deposit
(*SignedVoluntaryExit)(nil), // 22: ethereum.eth.v1alpha1.SignedVoluntaryExit
(*SyncAggregate)(nil), // 23: ethereum.eth.v1alpha1.SyncAggregate
(*SignedBLSToExecutionChange)(nil), // 24: ethereum.eth.v1alpha1.SignedBLSToExecutionChange
(*Fork)(nil), // 25: ethereum.eth.v1alpha1.Fork
(*BeaconBlockHeader)(nil), // 26: ethereum.eth.v1alpha1.BeaconBlockHeader
(*Validator)(nil), // 27: ethereum.eth.v1alpha1.Validator
(*Checkpoint)(nil), // 28: ethereum.eth.v1alpha1.Checkpoint
(*SyncCommittee)(nil), // 29: ethereum.eth.v1alpha1.SyncCommittee
(*HistoricalSummary)(nil), // 30: ethereum.eth.v1alpha1.HistoricalSummary
(*PendingDeposit)(nil), // 31: ethereum.eth.v1alpha1.PendingDeposit
(*PendingPartialWithdrawal)(nil), // 32: ethereum.eth.v1alpha1.PendingPartialWithdrawal
(*PendingConsolidation)(nil), // 33: ethereum.eth.v1alpha1.PendingConsolidation
(*v1.Withdrawal)(nil), // 34: ethereum.engine.v1.Withdrawal
(*v1.ExecutionPayloadDeneb)(nil), // 35: ethereum.engine.v1.ExecutionPayloadDeneb
(*v1.ExecutionRequests)(nil), // 36: ethereum.engine.v1.ExecutionRequests
}
var file_proto_prysm_v1alpha1_gloas_proto_depIdxs = []int32{
0, // 0: ethereum.eth.v1alpha1.SignedExecutionPayloadBid.message:type_name -> ethereum.eth.v1alpha1.ExecutionPayloadBid
2, // 1: ethereum.eth.v1alpha1.PayloadAttestation.data:type_name -> ethereum.eth.v1alpha1.PayloadAttestationData
2, // 2: ethereum.eth.v1alpha1.PayloadAttestationMessage.data:type_name -> ethereum.eth.v1alpha1.PayloadAttestationData
6, // 3: ethereum.eth.v1alpha1.BeaconBlockGloas.body:type_name -> ethereum.eth.v1alpha1.BeaconBlockBodyGloas
15, // 4: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.eth1_data:type_name -> ethereum.eth.v1alpha1.Eth1Data
16, // 5: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.proposer_slashings:type_name -> ethereum.eth.v1alpha1.ProposerSlashing
17, // 6: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.attester_slashings:type_name -> ethereum.eth.v1alpha1.AttesterSlashingElectra
18, // 7: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.attestations:type_name -> ethereum.eth.v1alpha1.AttestationElectra
19, // 8: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.deposits:type_name -> ethereum.eth.v1alpha1.Deposit
20, // 9: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.voluntary_exits:type_name -> ethereum.eth.v1alpha1.SignedVoluntaryExit
21, // 10: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.sync_aggregate:type_name -> ethereum.eth.v1alpha1.SyncAggregate
22, // 11: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.bls_to_execution_changes:type_name -> ethereum.eth.v1alpha1.SignedBLSToExecutionChange
17, // 4: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.eth1_data:type_name -> ethereum.eth.v1alpha1.Eth1Data
18, // 5: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.proposer_slashings:type_name -> ethereum.eth.v1alpha1.ProposerSlashing
19, // 6: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.attester_slashings:type_name -> ethereum.eth.v1alpha1.AttesterSlashingElectra
20, // 7: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.attestations:type_name -> ethereum.eth.v1alpha1.AttestationElectra
21, // 8: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.deposits:type_name -> ethereum.eth.v1alpha1.Deposit
22, // 9: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.voluntary_exits:type_name -> ethereum.eth.v1alpha1.SignedVoluntaryExit
23, // 10: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.sync_aggregate:type_name -> ethereum.eth.v1alpha1.SyncAggregate
24, // 11: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.bls_to_execution_changes:type_name -> ethereum.eth.v1alpha1.SignedBLSToExecutionChange
1, // 12: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.signed_execution_payload_bid:type_name -> ethereum.eth.v1alpha1.SignedExecutionPayloadBid
3, // 13: ethereum.eth.v1alpha1.BeaconBlockBodyGloas.payload_attestations:type_name -> ethereum.eth.v1alpha1.PayloadAttestation
5, // 14: ethereum.eth.v1alpha1.SignedBeaconBlockGloas.block:type_name -> ethereum.eth.v1alpha1.BeaconBlockGloas
23, // 15: ethereum.eth.v1alpha1.BeaconStateGloas.fork:type_name -> ethereum.eth.v1alpha1.Fork
24, // 16: ethereum.eth.v1alpha1.BeaconStateGloas.latest_block_header:type_name -> ethereum.eth.v1alpha1.BeaconBlockHeader
15, // 17: ethereum.eth.v1alpha1.BeaconStateGloas.eth1_data:type_name -> ethereum.eth.v1alpha1.Eth1Data
15, // 18: ethereum.eth.v1alpha1.BeaconStateGloas.eth1_data_votes:type_name -> ethereum.eth.v1alpha1.Eth1Data
25, // 19: ethereum.eth.v1alpha1.BeaconStateGloas.validators:type_name -> ethereum.eth.v1alpha1.Validator
26, // 20: ethereum.eth.v1alpha1.BeaconStateGloas.previous_justified_checkpoint:type_name -> ethereum.eth.v1alpha1.Checkpoint
26, // 21: ethereum.eth.v1alpha1.BeaconStateGloas.current_justified_checkpoint:type_name -> ethereum.eth.v1alpha1.Checkpoint
26, // 22: ethereum.eth.v1alpha1.BeaconStateGloas.finalized_checkpoint:type_name -> ethereum.eth.v1alpha1.Checkpoint
27, // 23: ethereum.eth.v1alpha1.BeaconStateGloas.current_sync_committee:type_name -> ethereum.eth.v1alpha1.SyncCommittee
27, // 24: ethereum.eth.v1alpha1.BeaconStateGloas.next_sync_committee:type_name -> ethereum.eth.v1alpha1.SyncCommittee
25, // 15: ethereum.eth.v1alpha1.BeaconStateGloas.fork:type_name -> ethereum.eth.v1alpha1.Fork
26, // 16: ethereum.eth.v1alpha1.BeaconStateGloas.latest_block_header:type_name -> ethereum.eth.v1alpha1.BeaconBlockHeader
17, // 17: ethereum.eth.v1alpha1.BeaconStateGloas.eth1_data:type_name -> ethereum.eth.v1alpha1.Eth1Data
17, // 18: ethereum.eth.v1alpha1.BeaconStateGloas.eth1_data_votes:type_name -> ethereum.eth.v1alpha1.Eth1Data
27, // 19: ethereum.eth.v1alpha1.BeaconStateGloas.validators:type_name -> ethereum.eth.v1alpha1.Validator
28, // 20: ethereum.eth.v1alpha1.BeaconStateGloas.previous_justified_checkpoint:type_name -> ethereum.eth.v1alpha1.Checkpoint
28, // 21: ethereum.eth.v1alpha1.BeaconStateGloas.current_justified_checkpoint:type_name -> ethereum.eth.v1alpha1.Checkpoint
28, // 22: ethereum.eth.v1alpha1.BeaconStateGloas.finalized_checkpoint:type_name -> ethereum.eth.v1alpha1.Checkpoint
29, // 23: ethereum.eth.v1alpha1.BeaconStateGloas.current_sync_committee:type_name -> ethereum.eth.v1alpha1.SyncCommittee
29, // 24: ethereum.eth.v1alpha1.BeaconStateGloas.next_sync_committee:type_name -> ethereum.eth.v1alpha1.SyncCommittee
0, // 25: ethereum.eth.v1alpha1.BeaconStateGloas.latest_execution_payload_bid:type_name -> ethereum.eth.v1alpha1.ExecutionPayloadBid
28, // 26: ethereum.eth.v1alpha1.BeaconStateGloas.historical_summaries:type_name -> ethereum.eth.v1alpha1.HistoricalSummary
29, // 27: ethereum.eth.v1alpha1.BeaconStateGloas.pending_deposits:type_name -> ethereum.eth.v1alpha1.PendingDeposit
30, // 28: ethereum.eth.v1alpha1.BeaconStateGloas.pending_partial_withdrawals:type_name -> ethereum.eth.v1alpha1.PendingPartialWithdrawal
31, // 29: ethereum.eth.v1alpha1.BeaconStateGloas.pending_consolidations:type_name -> ethereum.eth.v1alpha1.PendingConsolidation
14, // 30: ethereum.eth.v1alpha1.BeaconStateGloas.builders:type_name -> ethereum.eth.v1alpha1.Builder
30, // 26: ethereum.eth.v1alpha1.BeaconStateGloas.historical_summaries:type_name -> ethereum.eth.v1alpha1.HistoricalSummary
31, // 27: ethereum.eth.v1alpha1.BeaconStateGloas.pending_deposits:type_name -> ethereum.eth.v1alpha1.PendingDeposit
32, // 28: ethereum.eth.v1alpha1.BeaconStateGloas.pending_partial_withdrawals:type_name -> ethereum.eth.v1alpha1.PendingPartialWithdrawal
33, // 29: ethereum.eth.v1alpha1.BeaconStateGloas.pending_consolidations:type_name -> ethereum.eth.v1alpha1.PendingConsolidation
16, // 30: ethereum.eth.v1alpha1.BeaconStateGloas.builders:type_name -> ethereum.eth.v1alpha1.Builder
9, // 31: ethereum.eth.v1alpha1.BeaconStateGloas.builder_pending_payments:type_name -> ethereum.eth.v1alpha1.BuilderPendingPayment
10, // 32: ethereum.eth.v1alpha1.BeaconStateGloas.builder_pending_withdrawals:type_name -> ethereum.eth.v1alpha1.BuilderPendingWithdrawal
32, // 33: ethereum.eth.v1alpha1.BeaconStateGloas.payload_expected_withdrawals:type_name -> ethereum.engine.v1.Withdrawal
34, // 33: ethereum.eth.v1alpha1.BeaconStateGloas.payload_expected_withdrawals:type_name -> ethereum.engine.v1.Withdrawal
10, // 34: ethereum.eth.v1alpha1.BuilderPendingPayment.withdrawal:type_name -> ethereum.eth.v1alpha1.BuilderPendingWithdrawal
33, // 35: ethereum.eth.v1alpha1.ExecutionPayloadEnvelope.payload:type_name -> ethereum.engine.v1.ExecutionPayloadDeneb
34, // 36: ethereum.eth.v1alpha1.ExecutionPayloadEnvelope.execution_requests:type_name -> ethereum.engine.v1.ExecutionRequests
35, // 35: ethereum.eth.v1alpha1.ExecutionPayloadEnvelope.payload:type_name -> ethereum.engine.v1.ExecutionPayloadDeneb
36, // 36: ethereum.eth.v1alpha1.ExecutionPayloadEnvelope.execution_requests:type_name -> ethereum.engine.v1.ExecutionRequests
12, // 37: ethereum.eth.v1alpha1.SignedExecutionPayloadEnvelope.message:type_name -> ethereum.eth.v1alpha1.ExecutionPayloadEnvelope
38, // [38:38] is the sub-list for method output_type
38, // [38:38] is the sub-list for method input_type
38, // [38:38] is the sub-list for extension type_name
38, // [38:38] is the sub-list for extension extendee
0, // [0:38] is the sub-list for field type_name
36, // 38: ethereum.eth.v1alpha1.BlindedExecutionPayloadEnvelope.execution_requests:type_name -> ethereum.engine.v1.ExecutionRequests
14, // 39: ethereum.eth.v1alpha1.SignedBlindedExecutionPayloadEnvelope.message:type_name -> ethereum.eth.v1alpha1.BlindedExecutionPayloadEnvelope
40, // [40:40] is the sub-list for method output_type
40, // [40:40] is the sub-list for method input_type
40, // [40:40] is the sub-list for extension type_name
40, // [40:40] is the sub-list for extension extendee
0, // [0:40] is the sub-list for field type_name
}
func init() { file_proto_prysm_v1alpha1_gloas_proto_init() }
@@ -2181,7 +2373,7 @@ func file_proto_prysm_v1alpha1_gloas_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_proto_prysm_v1alpha1_gloas_proto_rawDesc,
NumEnums: 0,
NumMessages: 15,
NumMessages: 17,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -434,6 +434,33 @@ message SignedExecutionPayloadEnvelope {
bytes signature = 2 [ (ethereum.eth.ext.ssz_size) = "96" ];
}
// BlindedExecutionPayloadEnvelope is the blinded version of ExecutionPayloadEnvelope.
// It replaces the full ExecutionPayload with its hash tree root,
// minimizing storage while preserving all other metadata.
message BlindedExecutionPayloadEnvelope {
bytes payload_root = 1 [ (ethereum.eth.ext.ssz_size) = "32" ];
ethereum.engine.v1.ExecutionRequests execution_requests = 2;
uint64 builder_index = 3 [ (ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/"
"consensus-types/primitives.BuilderIndex" ];
bytes beacon_block_root = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
uint64 slot = 5 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives.Slot"
];
repeated bytes blob_kzg_commitments = 6 [
(ethereum.eth.ext.ssz_size) = "?,48",
(ethereum.eth.ext.ssz_max) = "max_blob_commitments.size"
];
bytes state_root = 7 [ (ethereum.eth.ext.ssz_size) = "32" ];
}
// SignedBlindedExecutionPayloadEnvelope wraps a blinded execution payload envelope with a signature.
message SignedBlindedExecutionPayloadEnvelope {
BlindedExecutionPayloadEnvelope message = 1;
bytes signature = 2 [ (ethereum.eth.ext.ssz_size) = "96" ];
}
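
For orientation, here is a minimal Go sketch (not part of this diff) of how a full envelope could be reduced to the blinded form defined above: the payload is swapped for its hash tree root and every other field is carried over. The helper name blindEnvelope is hypothetical, the package alias eth stands for proto/prysm/v1alpha1, and it assumes ExecutionPayloadEnvelope carries the same metadata fields.

func blindEnvelope(e *eth.ExecutionPayloadEnvelope) (*eth.BlindedExecutionPayloadEnvelope, error) {
	// Replace the full execution payload with its 32-byte hash tree root.
	root, err := e.Payload.HashTreeRoot()
	if err != nil {
		return nil, err
	}
	return &eth.BlindedExecutionPayloadEnvelope{
		PayloadRoot:        root[:],
		ExecutionRequests:  e.ExecutionRequests,
		BuilderIndex:       e.BuilderIndex,
		BeaconBlockRoot:    e.BeaconBlockRoot,
		Slot:               e.Slot,
		BlobKzgCommitments: e.BlobKzgCommitments,
		StateRoot:          e.StateRoot,
	}, nil
}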
// Builder represents a builder in the Gloas fork.
//
// Spec:

View File

@@ -3596,6 +3596,342 @@ func (s *SignedExecutionPayloadEnvelope) HashTreeRootWith(hh *ssz.Hasher) (err e
return
}
// MarshalSSZ ssz marshals the BlindedExecutionPayloadEnvelope object
func (b *BlindedExecutionPayloadEnvelope) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(b)
}
// MarshalSSZTo ssz marshals the BlindedExecutionPayloadEnvelope object to a target array
func (b *BlindedExecutionPayloadEnvelope) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
offset := int(120)
// Field (0) 'PayloadRoot'
if size := len(b.PayloadRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.PayloadRoot", size, 32)
return
}
dst = append(dst, b.PayloadRoot...)
// Offset (1) 'ExecutionRequests'
dst = ssz.WriteOffset(dst, offset)
if b.ExecutionRequests == nil {
b.ExecutionRequests = new(v1.ExecutionRequests)
}
offset += b.ExecutionRequests.SizeSSZ()
// Field (2) 'BuilderIndex'
dst = ssz.MarshalUint64(dst, uint64(b.BuilderIndex))
// Field (3) 'BeaconBlockRoot'
if size := len(b.BeaconBlockRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BeaconBlockRoot", size, 32)
return
}
dst = append(dst, b.BeaconBlockRoot...)
// Field (4) 'Slot'
dst = ssz.MarshalUint64(dst, uint64(b.Slot))
// Offset (5) 'BlobKzgCommitments'
dst = ssz.WriteOffset(dst, offset)
offset += len(b.BlobKzgCommitments) * 48
// Field (6) 'StateRoot'
if size := len(b.StateRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.StateRoot", size, 32)
return
}
dst = append(dst, b.StateRoot...)
// Field (1) 'ExecutionRequests'
if dst, err = b.ExecutionRequests.MarshalSSZTo(dst); err != nil {
return
}
// Field (5) 'BlobKzgCommitments'
if size := len(b.BlobKzgCommitments); size > 4096 {
err = ssz.ErrListTooBigFn("--.BlobKzgCommitments", size, 4096)
return
}
for ii := 0; ii < len(b.BlobKzgCommitments); ii++ {
if size := len(b.BlobKzgCommitments[ii]); size != 48 {
err = ssz.ErrBytesLengthFn("--.BlobKzgCommitments[ii]", size, 48)
return
}
dst = append(dst, b.BlobKzgCommitments[ii]...)
}
return
}
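
The literal 120 above is the container's fixed-region size: every fixed-width field contributes its byte width, and every variable-length field contributes a 4-byte offset slot. A sketch of the arithmetic, with names invented for clarity:

const (
	payloadRootSize  = 32 // Field (0), Bytes32
	execRequestsSlot = 4  // Offset (1), variable-length field
	builderIndexSize = 8  // Field (2), uint64
	beaconRootSize   = 32 // Field (3), Bytes32
	slotSize         = 8  // Field (4), uint64
	blobKzgSlot      = 4  // Offset (5), variable-length field
	stateRootSize    = 32 // Field (6), Bytes32

	// 32 + 4 + 8 + 32 + 8 + 4 + 32 = 120
	fixedSize = payloadRootSize + execRequestsSlot + builderIndexSize +
		beaconRootSize + slotSize + blobKzgSlot + stateRootSize
)

The variable-length tails are then appended in field order, which is why ExecutionRequests is written at offset 120 and BlobKzgCommitments immediately after it.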
// UnmarshalSSZ ssz unmarshals the BlindedExecutionPayloadEnvelope object
func (b *BlindedExecutionPayloadEnvelope) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size < 120 {
return ssz.ErrSize
}
tail := buf
var o1, o5 uint64
// Field (0) 'PayloadRoot'
if cap(b.PayloadRoot) == 0 {
b.PayloadRoot = make([]byte, 0, len(buf[0:32]))
}
b.PayloadRoot = append(b.PayloadRoot, buf[0:32]...)
// Offset (1) 'ExecutionRequests'
if o1 = ssz.ReadOffset(buf[32:36]); o1 > size {
return ssz.ErrOffset
}
if o1 != 120 {
return ssz.ErrInvalidVariableOffset
}
// Field (2) 'BuilderIndex'
b.BuilderIndex = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.BuilderIndex(ssz.UnmarshallUint64(buf[36:44]))
// Field (3) 'BeaconBlockRoot'
if cap(b.BeaconBlockRoot) == 0 {
b.BeaconBlockRoot = make([]byte, 0, len(buf[44:76]))
}
b.BeaconBlockRoot = append(b.BeaconBlockRoot, buf[44:76]...)
// Field (4) 'Slot'
b.Slot = github_com_OffchainLabs_prysm_v7_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[76:84]))
// Offset (5) 'BlobKzgCommitments'
if o5 = ssz.ReadOffset(buf[84:88]); o5 > size || o1 > o5 {
return ssz.ErrOffset
}
// Field (6) 'StateRoot'
if cap(b.StateRoot) == 0 {
b.StateRoot = make([]byte, 0, len(buf[88:120]))
}
b.StateRoot = append(b.StateRoot, buf[88:120]...)
// Field (1) 'ExecutionRequests'
{
buf = tail[o1:o5]
if b.ExecutionRequests == nil {
b.ExecutionRequests = new(v1.ExecutionRequests)
}
if err = b.ExecutionRequests.UnmarshalSSZ(buf); err != nil {
return err
}
}
// Field (5) 'BlobKzgCommitments'
{
buf = tail[o5:]
num, err := ssz.DivideInt2(len(buf), 48, 4096)
if err != nil {
return err
}
b.BlobKzgCommitments = make([][]byte, num)
for ii := 0; ii < num; ii++ {
if cap(b.BlobKzgCommitments[ii]) == 0 {
b.BlobKzgCommitments[ii] = make([]byte, 0, len(buf[ii*48:(ii+1)*48]))
}
b.BlobKzgCommitments[ii] = append(b.BlobKzgCommitments[ii], buf[ii*48:(ii+1)*48]...)
}
}
return err
}
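
The decoder enforces three invariants on the offsets it reads: the first variable field must start exactly where the fixed region ends (120), offsets must be non-decreasing, and none may point past the buffer. Restated as a standalone predicate, a sketch mirroring the checks above:

// validOffsets reports whether o1 and o5 describe a well-formed encoding
// of a buffer of the given size, per the checks in UnmarshalSSZ.
func validOffsets(size, o1, o5 uint64) bool {
	return o1 == 120 && o1 <= o5 && o5 <= size
}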
// SizeSSZ returns the ssz encoded size in bytes for the BlindedExecutionPayloadEnvelope object
func (b *BlindedExecutionPayloadEnvelope) SizeSSZ() (size int) {
size = 120
// Field (1) 'ExecutionRequests'
if b.ExecutionRequests == nil {
b.ExecutionRequests = new(v1.ExecutionRequests)
}
size += b.ExecutionRequests.SizeSSZ()
// Field (5) 'BlobKzgCommitments'
size += len(b.BlobKzgCommitments) * 48
return
}
// HashTreeRoot ssz hashes the BlindedExecutionPayloadEnvelope object
func (b *BlindedExecutionPayloadEnvelope) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(b)
}
// HashTreeRootWith ssz hashes the BlindedExecutionPayloadEnvelope object with a hasher
func (b *BlindedExecutionPayloadEnvelope) HashTreeRootWith(hh *ssz.Hasher) (err error) {
indx := hh.Index()
// Field (0) 'PayloadRoot'
if size := len(b.PayloadRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.PayloadRoot", size, 32)
return
}
hh.PutBytes(b.PayloadRoot)
// Field (1) 'ExecutionRequests'
if err = b.ExecutionRequests.HashTreeRootWith(hh); err != nil {
return
}
// Field (2) 'BuilderIndex'
hh.PutUint64(uint64(b.BuilderIndex))
// Field (3) 'BeaconBlockRoot'
if size := len(b.BeaconBlockRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BeaconBlockRoot", size, 32)
return
}
hh.PutBytes(b.BeaconBlockRoot)
// Field (4) 'Slot'
hh.PutUint64(uint64(b.Slot))
// Field (5) 'BlobKzgCommitments'
{
if size := len(b.BlobKzgCommitments); size > 4096 {
err = ssz.ErrListTooBigFn("--.BlobKzgCommitments", size, 4096)
return
}
subIndx := hh.Index()
for _, i := range b.BlobKzgCommitments {
if len(i) != 48 {
err = ssz.ErrBytesLength
return
}
hh.PutBytes(i)
}
numItems := uint64(len(b.BlobKzgCommitments))
hh.MerkleizeWithMixin(subIndx, numItems, 4096)
}
// Field (6) 'StateRoot'
if size := len(b.StateRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.StateRoot", size, 32)
return
}
hh.PutBytes(b.StateRoot)
hh.Merkleize(indx)
return
}
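
A hedged usage sketch tying the generated methods together: serialize a minimally populated envelope, decode it back, and confirm the hash tree root survives the round trip. The aliases eth (proto/prysm/v1alpha1) and enginev1 (proto/engine/v1) are assumptions.

env := &eth.BlindedExecutionPayloadEnvelope{
	PayloadRoot:       make([]byte, 32),
	ExecutionRequests: &enginev1.ExecutionRequests{},
	BeaconBlockRoot:   make([]byte, 32),
	StateRoot:         make([]byte, 32),
}
enc, err := env.MarshalSSZ()
if err != nil {
	log.Fatal(err)
}
decoded := &eth.BlindedExecutionPayloadEnvelope{}
if err := decoded.UnmarshalSSZ(enc); err != nil {
	log.Fatal(err)
}
r1, _ := env.HashTreeRoot()
r2, _ := decoded.HashTreeRoot()
fmt.Println(r1 == r2) // true: the root is stable across the round trip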
// MarshalSSZ ssz marshals the SignedBlindedExecutionPayloadEnvelope object
func (s *SignedBlindedExecutionPayloadEnvelope) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(s)
}
// MarshalSSZTo ssz marshals the SignedBlindedExecutionPayloadEnvelope object to a target array
func (s *SignedBlindedExecutionPayloadEnvelope) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
offset := int(100)
// Offset (0) 'Message'
dst = ssz.WriteOffset(dst, offset)
if s.Message == nil {
s.Message = new(BlindedExecutionPayloadEnvelope)
}
offset += s.Message.SizeSSZ()
// Field (1) 'Signature'
if size := len(s.Signature); size != 96 {
err = ssz.ErrBytesLengthFn("--.Signature", size, 96)
return
}
dst = append(dst, s.Signature...)
// Field (0) 'Message'
if dst, err = s.Message.MarshalSSZTo(dst); err != nil {
return
}
return
}
// UnmarshalSSZ ssz unmarshals the SignedBlindedExecutionPayloadEnvelope object
func (s *SignedBlindedExecutionPayloadEnvelope) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size < 100 {
return ssz.ErrSize
}
tail := buf
var o0 uint64
// Offset (0) 'Message'
if o0 = ssz.ReadOffset(buf[0:4]); o0 > size {
return ssz.ErrOffset
}
if o0 != 100 {
return ssz.ErrInvalidVariableOffset
}
// Field (1) 'Signature'
if cap(s.Signature) == 0 {
s.Signature = make([]byte, 0, len(buf[4:100]))
}
s.Signature = append(s.Signature, buf[4:100]...)
// Field (0) 'Message'
{
buf = tail[o0:]
if s.Message == nil {
s.Message = new(BlindedExecutionPayloadEnvelope)
}
if err = s.Message.UnmarshalSSZ(buf); err != nil {
return err
}
}
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the SignedBlindedExecutionPayloadEnvelope object
func (s *SignedBlindedExecutionPayloadEnvelope) SizeSSZ() (size int) {
size = 100
// Field (0) 'Message'
if s.Message == nil {
s.Message = new(BlindedExecutionPayloadEnvelope)
}
size += s.Message.SizeSSZ()
return
}
// HashTreeRoot ssz hashes the SignedBlindedExecutionPayloadEnvelope object
func (s *SignedBlindedExecutionPayloadEnvelope) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(s)
}
// HashTreeRootWith ssz hashes the SignedBlindedExecutionPayloadEnvelope object with a hasher
func (s *SignedBlindedExecutionPayloadEnvelope) HashTreeRootWith(hh *ssz.Hasher) (err error) {
indx := hh.Index()
// Field (0) 'Message'
if err = s.Message.HashTreeRootWith(hh); err != nil {
return
}
// Field (1) 'Signature'
if size := len(s.Signature); size != 96 {
err = ssz.ErrBytesLengthFn("--.Signature", size, 96)
return
}
hh.PutBytes(s.Signature)
hh.Merkleize(indx)
return
}
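
And a deliberately simplified sketch of producing the signature this wrapper carries. Real consensus signing mixes the message root with a fork-specific domain into a signing root first; that step is elided here, and the bls package usage (crypto/bls) is an assumption.

sk, err := bls.RandKey()
if err != nil {
	log.Fatal(err)
}
root, err := env.HashTreeRoot()
if err != nil {
	log.Fatal(err)
}
// Production code would sign a domain-separated signing root, not the raw root.
sig := sk.Sign(root[:])
signed := &eth.SignedBlindedExecutionPayloadEnvelope{
	Message:   env,
	Signature: sig.Marshal(), // 96 bytes, matching the ssz_size annotation
}
_ = signed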
// MarshalSSZ ssz marshals the Builder object
func (b *Builder) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(b)

View File

@@ -47,7 +47,6 @@ go_library(
"@in_gopkg_yaml_v2//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
"@org_golang_x_sys//unix:go_default_library",
],
)

View File

@@ -11,7 +11,6 @@ import (
"strconv"
"strings"
"syscall"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
cmdshared "github.com/OffchainLabs/prysm/v7/cmd"
@@ -36,12 +35,11 @@ var _ e2etypes.BeaconNodeSet = (*BeaconNodeSet)(nil)
// BeaconNodeSet represents a set of beacon nodes.
type BeaconNodeSet struct {
e2etypes.ComponentRunner
config *e2etypes.E2EConfig
nodes []e2etypes.ComponentRunner
enr string
ids []string
multiAddrs []string
started chan struct{}
config *e2etypes.E2EConfig
nodes []e2etypes.ComponentRunner
enr string
ids []string
started chan struct{}
}
// SetENR assigns ENR to the set of beacon nodes.
@@ -76,10 +74,8 @@ func (s *BeaconNodeSet) Start(ctx context.Context) error {
if s.config.UseFixedPeerIDs {
for i := range nodes {
s.ids = append(s.ids, nodes[i].(*BeaconNode).peerID)
s.multiAddrs = append(s.multiAddrs, nodes[i].(*BeaconNode).multiAddr)
}
s.config.PeerIDs = s.ids
s.config.PeerMultiAddrs = s.multiAddrs
}
// All nodes have started; close the channel so that any services waiting on the set can proceed.
close(s.started)
@@ -145,14 +141,6 @@ func (s *BeaconNodeSet) StopAtIndex(i int) error {
return s.nodes[i].Stop()
}
// RestartAtIndex restarts the component at the desired index.
func (s *BeaconNodeSet) RestartAtIndex(ctx context.Context, i int) error {
if i >= len(s.nodes) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.nodes))
}
return s.nodes[i].(*BeaconNode).Restart(ctx)
}
// ComponentAtIndex returns the component at the provided index.
func (s *BeaconNodeSet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error) {
if i >= len(s.nodes) {
@@ -164,14 +152,12 @@ func (s *BeaconNodeSet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error
// BeaconNode represents a beacon node.
type BeaconNode struct {
e2etypes.ComponentRunner
config *e2etypes.E2EConfig
started chan struct{}
index int
enr string
peerID string
multiAddr string
cmd *exec.Cmd
args []string
config *e2etypes.E2EConfig
started chan struct{}
index int
enr string
peerID string
cmd *exec.Cmd
}
// NewBeaconNode creates and returns a beacon node.
@@ -304,7 +290,6 @@ func (node *BeaconNode) Start(ctx context.Context) error {
args = append(args, fmt.Sprintf("--%s=%s:%d", flags.MevRelayEndpoint.Name, "http://127.0.0.1", e2e.TestParams.Ports.Eth1ProxyPort+index))
}
args = append(args, config.BeaconFlags...)
node.args = args
cmd := exec.CommandContext(ctx, binaryPath, args...) // #nosec G204 -- Safe
// Write stderr to log files.
@@ -333,18 +318,6 @@ func (node *BeaconNode) Start(ctx context.Context) error {
return fmt.Errorf("could not find peer id: %w", err)
}
node.peerID = peerId
// Extract QUIC multiaddr for Lighthouse to connect to this node.
// Prysm logs: msg="Node started p2p server" multiAddr="/ip4/192.168.0.14/udp/4250/quic-v1/p2p/16Uiu2..."
// We prefer QUIC over TCP as it's more reliable in E2E tests.
multiAddr, err := helpers.FindFollowingTextInFile(stdOutFile, "multiAddr=\"/ip4/192.168.0.14/udp/")
if err != nil {
return fmt.Errorf("could not find QUIC multiaddr: %w", err)
}
// The extracted text will be like: 4250/quic-v1/p2p/16Uiu2..."
// We need to prepend "/ip4/192.168.0.14/udp/" and strip the trailing quote
multiAddr = strings.TrimSuffix(multiAddr, "\"")
node.multiAddr = "/ip4/192.168.0.14/udp/" + multiAddr
}
// Mark node as ready.
@@ -374,96 +347,6 @@ func (node *BeaconNode) Stop() error {
return node.cmd.Process.Kill()
}
// Restart gracefully stops the beacon node and starts a new process.
// This is useful for testing resilience as it allows the P2P layer to reinitialize
// and discover peers again (unlike SIGSTOP/SIGCONT, which breaks QUIC connections permanently).
func (node *BeaconNode) Restart(ctx context.Context) error {
binaryPath, found := bazel.FindBinary("cmd/beacon-chain", "beacon-chain")
if !found {
return errors.New("beacon chain binary not found")
}
// First, continue the process if it's stopped (from PauseAtIndex).
// A stopped process (SIGSTOP) cannot receive SIGTERM until continued.
_ = node.cmd.Process.Signal(syscall.SIGCONT)
if err := node.cmd.Process.Signal(syscall.SIGTERM); err != nil {
return fmt.Errorf("failed to send SIGTERM: %w", err)
}
// Wait for the process to exit by polling. We can't call cmd.Wait() here because
// the Start() method's goroutine is already waiting on the command, and calling
// Wait() twice on the same process causes a "waitid: no child processes" error.
// Instead, poll using Signal(0), which returns an error once the process no longer exists.
processExited := false
for range 100 {
if err := node.cmd.Process.Signal(syscall.Signal(0)); err != nil {
processExited = true
break
}
time.Sleep(100 * time.Millisecond)
}
if !processExited {
log.Warnf("Beacon node %d did not exit within 10 seconds after SIGTERM, proceeding with restart anyway", node.index)
}
restartArgs := make([]string, 0, len(node.args))
for _, arg := range node.args {
if !strings.Contains(arg, cmdshared.ForceClearDB.Name) {
restartArgs = append(restartArgs, arg)
}
}
stdOutFile, err := os.OpenFile(
path.Join(e2e.TestParams.LogPath, fmt.Sprintf(e2e.BeaconNodeLogFileName, node.index)),
os.O_APPEND|os.O_WRONLY,
0644,
)
if err != nil {
return fmt.Errorf("failed to open log file: %w", err)
}
defer func() {
if err := stdOutFile.Close(); err != nil {
log.WithError(err).Error("Failed to close stdout file")
}
}()
cmd := exec.CommandContext(ctx, binaryPath, restartArgs...)
stderr, err := os.OpenFile(
path.Join(e2e.TestParams.LogPath, fmt.Sprintf("beacon_node_%d_stderr.log", node.index)),
os.O_APPEND|os.O_WRONLY|os.O_CREATE,
0644,
)
if err != nil {
return fmt.Errorf("failed to open stderr file: %w", err)
}
cmd.Stderr = stderr
log.Infof("Restarting beacon chain %d with flags: %s", node.index, strings.Join(restartArgs, " "))
if err = cmd.Start(); err != nil {
if closeErr := stderr.Close(); closeErr != nil {
log.WithError(closeErr).Error("Failed to close stderr file")
}
return fmt.Errorf("failed to restart beacon node: %w", err)
}
// Close the parent's file handle after Start(). The child process has its own
// copy of the file descriptor via fork/exec, so this won't affect its ability to write.
if err := stderr.Close(); err != nil {
log.WithError(err).Error("Failed to close stderr file")
}
if err = helpers.WaitForTextInFile(stdOutFile, "Beacon chain gRPC server listening"); err != nil {
return fmt.Errorf("beacon node %d failed to restart properly: %w", node.index, err)
}
node.cmd = cmd
go func() {
_ = cmd.Wait()
}()
return nil
}
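
The poll-for-exit trick above generalizes: signal 0 delivers nothing but reports whether the process can still be signaled, so it detects exit even when another goroutine owns cmd.Wait(). A standalone sketch (imports: os, syscall, time):

// waitForExit polls until the process disappears or the timeout elapses,
// returning true if the process exited in time.
func waitForExit(p *os.Process, timeout, interval time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := p.Signal(syscall.Signal(0)); err != nil {
			return true // signal delivery failed: the process is gone
		}
		time.Sleep(interval)
	}
	return false
}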
func (node *BeaconNode) UnderlyingProcess() *os.Process {
return node.cmd.Process
}

View File

@@ -108,17 +108,6 @@ func (s *BuilderSet) StopAtIndex(i int) error {
return s.builders[i].Stop()
}
// RestartAtIndex for builders just does pause/resume.
func (s *BuilderSet) RestartAtIndex(_ context.Context, i int) error {
if i >= len(s.builders) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.builders))
}
if err := s.builders[i].Pause(); err != nil {
return err
}
return s.builders[i].Resume()
}
// ComponentAtIndex returns the component at the provided index.
func (s *BuilderSet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error) {
if i >= len(s.builders) {

View File

@@ -111,17 +111,6 @@ func (s *NodeSet) StopAtIndex(i int) error {
return s.nodes[i].Stop()
}
// RestartAtIndex for eth1 nodes just does pause/resume.
func (s *NodeSet) RestartAtIndex(_ context.Context, i int) error {
if i >= len(s.nodes) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.nodes))
}
if err := s.nodes[i].Pause(); err != nil {
return err
}
return s.nodes[i].Resume()
}
// ComponentAtIndex returns the component at the provided index.
func (s *NodeSet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error) {
if i >= len(s.nodes) {

View File

@@ -108,17 +108,6 @@ func (s *ProxySet) StopAtIndex(i int) error {
return s.proxies[i].Stop()
}
// RestartAtIndex for proxies just does pause/resume.
func (s *ProxySet) RestartAtIndex(_ context.Context, i int) error {
if i >= len(s.proxies) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.proxies))
}
if err := s.proxies[i].Pause(); err != nil {
return err
}
return s.proxies[i].Resume()
}
// ComponentAtIndex returns the component at the provided index.
func (s *ProxySet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error) {
if i >= len(s.proxies) {

View File

@@ -127,17 +127,6 @@ func (s *LighthouseBeaconNodeSet) StopAtIndex(i int) error {
return s.nodes[i].Stop()
}
// RestartAtIndex for Lighthouse just does pause/resume.
func (s *LighthouseBeaconNodeSet) RestartAtIndex(_ context.Context, i int) error {
if i >= len(s.nodes) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.nodes))
}
if err := s.nodes[i].Pause(); err != nil {
return err
}
return s.nodes[i].Resume()
}
// ComponentAtIndex returns the component at the provided index.
func (s *LighthouseBeaconNodeSet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error) {
if i >= len(s.nodes) {
@@ -205,10 +194,9 @@ func (node *LighthouseBeaconNode) Start(ctx context.Context) error {
"--suggested-fee-recipient=0x878705ba3f8bc32fcf7f4caa1a35e72af65cf766",
}
if node.config.UseFixedPeerIDs {
// Use libp2p-addresses with full multiaddrs instead of trusted-peers with just peer IDs.
// This allows Lighthouse to connect directly to Prysm nodes without relying on DHT discovery.
flagVal := strings.Join(node.config.PeerMultiAddrs, ",")
args = append(args, fmt.Sprintf("--libp2p-addresses=%s", flagVal))
flagVal := strings.Join(node.config.PeerIDs, ",")
args = append(args,
fmt.Sprintf("--trusted-peers=%s", flagVal))
}
if node.config.UseBuilder {
args = append(args, fmt.Sprintf("--builder=%s:%d", "http://127.0.0.1", e2e.TestParams.Ports.Eth1ProxyPort+prysmNodeCount+index))

View File

@@ -132,17 +132,6 @@ func (s *LighthouseValidatorNodeSet) StopAtIndex(i int) error {
return s.nodes[i].Stop()
}
// RestartAtIndex for Lighthouse validators just does pause/resume.
func (s *LighthouseValidatorNodeSet) RestartAtIndex(_ context.Context, i int) error {
if i >= len(s.nodes) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.nodes))
}
if err := s.nodes[i].Pause(); err != nil {
return err
}
return s.nodes[i].Resume()
}
// ComponentAtIndex returns the component at the provided index.
func (s *LighthouseValidatorNodeSet) ComponentAtIndex(i int) (types.ComponentRunner, error) {
if i >= len(s.nodes) {

View File

@@ -5,7 +5,6 @@ import (
"context"
"encoding/base64"
"io"
"net"
"net/http"
"os"
"os/signal"
@@ -16,7 +15,6 @@ import (
e2e "github.com/OffchainLabs/prysm/v7/testing/endtoend/params"
"github.com/OffchainLabs/prysm/v7/testing/endtoend/types"
"github.com/pkg/errors"
"golang.org/x/sys/unix"
)
var _ types.ComponentRunner = &TracingSink{}
@@ -34,7 +32,6 @@ var _ types.ComponentRunner = &TracingSink{}
type TracingSink struct {
cancel context.CancelFunc
started chan struct{}
stopped chan struct{}
endpoint string
server *http.Server
}
@@ -43,7 +40,6 @@ type TracingSink struct {
func NewTracingSink(endpoint string) *TracingSink {
return &TracingSink{
started: make(chan struct{}, 1),
stopped: make(chan struct{}),
endpoint: endpoint,
}
}
@@ -77,99 +73,62 @@ func (ts *TracingSink) Resume() error {
// Stop stops the component and its underlying process.
func (ts *TracingSink) Stop() error {
if ts.cancel != nil {
ts.cancel()
}
// Wait for server to actually shut down before returning
<-ts.stopped
ts.cancel()
return nil
}
// reusePortListener creates a TCP listener with SO_REUSEADDR set, allowing
// the port to be reused immediately after the previous listener closes.
// This is essential for sequential E2E tests that reuse the same port.
func reusePortListener(addr string) (net.Listener, error) {
lc := net.ListenConfig{
Control: func(network, address string, c syscall.RawConn) error {
var setSockOptErr error
err := c.Control(func(fd uintptr) {
setSockOptErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEADDR, 1)
})
if err != nil {
return err
}
return setSockOptErr
},
}
return lc.Listen(context.Background(), "tcp", addr)
}
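
A hedged usage sketch pairing the listener with http.Server.Serve rather than ListenAndServe, which is exactly what initializeSink does below; the address and mux are illustrative.

ln, err := reusePortListener("127.0.0.1:9390")
if err != nil {
	log.WithError(err).Error("Failed to create listener")
	return
}
srv := &http.Server{Handler: mux, ReadHeaderTimeout: time.Second}
// Serving on the SO_REUSEADDR listener lets a follow-up test rebind promptly.
if err := srv.Serve(ln); err != nil && err != http.ErrServerClosed {
	log.WithError(err).Error("Failed to serve http")
}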
// initializeSink installs an HTTP handler that writes all incoming requests to a file.
func (ts *TracingSink) initializeSink(ctx context.Context) {
defer close(ts.stopped)
mux := &http.ServeMux{}
ts.server = &http.Server{
Addr: ts.endpoint,
Handler: mux,
ReadHeaderTimeout: time.Second,
}
// Disable keep-alives to ensure connections close immediately
ts.server.SetKeepAlivesEnabled(false)
// Create listener with SO_REUSEADDR to allow port reuse between tests
listener, err := reusePortListener(ts.endpoint)
if err != nil {
log.WithError(err).Error("Failed to create listener")
return
}
defer func() {
if err := ts.server.Close(); err != nil {
log.WithError(err).Error("Failed to close http server")
return
}
}()
stdOutFile, err := helpers.DeleteAndCreateFile(e2e.TestParams.LogPath, e2e.TracingRequestSinkFileName)
if err != nil {
log.WithError(err).Error("Failed to create stdout file")
if closeErr := listener.Close(); closeErr != nil {
log.WithError(closeErr).Error("Failed to close listener after file creation error")
}
return
}
cleanup := func() {
if err := stdOutFile.Close(); err != nil {
log.WithError(err).Error("Could not close stdout file")
}
// Use Shutdown for graceful shutdown that releases the port
shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := ts.server.Shutdown(shutdownCtx); err != nil {
log.WithError(err).Error("Could not gracefully shutdown http server")
// Force close if shutdown times out
if err := ts.server.Close(); err != nil {
log.WithError(err).Error("Could not close http server")
}
if err := ts.server.Close(); err != nil {
log.WithError(err).Error("Could not close http server")
}
}
defer cleanup()
mux.HandleFunc("/", func(_ http.ResponseWriter, r *http.Request) {
if err := captureRequest(stdOutFile, r); err != nil {
log.WithError(err).Error("Failed to capture http request")
return
}
})
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
go func() {
select {
case <-ctx.Done():
return
case <-sigs:
return
for {
select {
case <-ctx.Done():
cleanup()
return
case <-sigs:
cleanup()
return
default:
// Sleep for 100ms and do nothing while waiting for
// cancellation.
time.Sleep(100 * time.Millisecond)
}
}
}()
// Use Serve with our custom listener instead of ListenAndServe
if err := ts.server.Serve(listener); err != nil && err != http.ErrServerClosed {
if err := ts.server.ListenAndServe(); err != http.ErrServerClosed {
log.WithError(err).Error("Failed to serve http")
}
}

View File

@@ -134,17 +134,6 @@ func (s *ValidatorNodeSet) StopAtIndex(i int) error {
return s.nodes[i].Stop()
}
// RestartAtIndex for validators just does pause/resume since they don't have P2P issues.
func (s *ValidatorNodeSet) RestartAtIndex(_ context.Context, i int) error {
if i >= len(s.nodes) {
return errors.Errorf("provided index exceeds slice size: %d >= %d", i, len(s.nodes))
}
if err := s.nodes[i].Pause(); err != nil {
return err
}
return s.nodes[i].Resume()
}
// ComponentAtIndex returns the component at the provided index.
func (s *ValidatorNodeSet) ComponentAtIndex(i int) (e2etypes.ComponentRunner, error) {
if i >= len(s.nodes) {

View File

@@ -48,6 +48,7 @@ const (
// allNodesStartTimeout defines the period after which nodes are considered
// stalled (safety measure for nodes stuck at startup, shouldn't normally happen).
allNodesStartTimeout = 5 * time.Minute
// errGeneralCode is used to represent the string value for all general process errors.
errGeneralCode = "exit status 1"
)
@@ -194,20 +195,12 @@ func (r *testRunner) runEvaluators(ec *e2etypes.EvaluationContext, conns []*grpc
secondsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
ticker := helpers.NewEpochTicker(tickingStartTime, secondsPerEpoch)
for currentEpoch := range ticker.C() {
log.WithField("epoch", currentEpoch).Info("Processing epoch")
if config.EvalInterceptor(ec, currentEpoch, conns) {
log.WithField("epoch", currentEpoch).Info("Interceptor returned true, skipping evaluators")
continue
}
r.executeProvidedEvaluators(ec, currentEpoch, conns, config.Evaluators)
if t.Failed() || currentEpoch >= config.EpochsToRun-1 {
log.WithFields(log.Fields{
"currentEpoch": currentEpoch,
"EpochsToRun": config.EpochsToRun,
"testFailed": t.Failed(),
"epochLimitHit": currentEpoch >= config.EpochsToRun-1,
}).Info("Stopping evaluator loop")
ticker.Done()
if t.Failed() {
return errors.New("test failed")
@@ -232,9 +225,9 @@ func (r *testRunner) testDepositsAndTx(ctx context.Context, g *errgroup.Group,
if err := helpers.ComponentsStarted(ctx, []e2etypes.ComponentRunner{r.depositor}); err != nil {
return errors.Wrap(err, "testDepositsAndTx unable to run, depositor did not Start")
}
go func() {
if r.config.TestDeposits {
log.Info("Running deposit tests")
go func() {
if r.config.TestDeposits {
log.Info("Running deposit tests")
// The validators with an index < minGenesisActiveCount all have deposits already from the chain start.
// Skip all of those chain start validators by seeking to minGenesisActiveCount in the validator list
// for further deposit testing.
@@ -245,13 +238,12 @@ func (r *testRunner) testDepositsAndTx(ctx context.Context, g *errgroup.Group,
r.t.Error(errors.Wrap(err, "depositor.SendAndMine failed"))
}
}
}
// Only generate background transactions when relevant for the test.
// Checkpoint sync and REST API tests need EL blocks to advance, so include them.
if r.config.TestDeposits || r.config.TestFeature || r.config.UseBuilder || r.config.TestCheckpointSync || r.config.UseBeaconRestApi {
r.testTxGeneration(ctx, g, keystorePath, []e2etypes.ComponentRunner{})
}
}()
}
// Only generate background transactions when relevant for the test.
if r.config.TestDeposits || r.config.TestFeature || r.config.UseBuilder {
r.testTxGeneration(ctx, g, keystorePath, []e2etypes.ComponentRunner{})
}
}()
if r.config.TestDeposits {
return depositCheckValidator.Start(ctx)
}
@@ -630,7 +622,7 @@ func (r *testRunner) scenarioRun() error {
tickingStartTime := helpers.EpochTickerStartTime(genesis)
ec := e2etypes.NewEvaluationContext(r.depositor.History())
log.WithField("EpochsToRun", r.config.EpochsToRun).Info("Starting evaluators")
// Run assigned evaluators.
return r.runEvaluators(ec, conns, tickingStartTime)
}
@@ -676,9 +668,9 @@ func (r *testRunner) multiScenarioMulticlient(ec *e2etypes.EvaluationContext, ep
freezeStartEpoch := lastForkEpoch + 1
freezeEndEpoch := lastForkEpoch + 2
optimisticStartEpoch := lastForkEpoch + 6
optimisticEndEpoch := lastForkEpoch + 8
optimisticEndEpoch := lastForkEpoch + 7
recoveryEpochStart, recoveryEpochEnd := lastForkEpoch+3, lastForkEpoch+4
secondRecoveryEpochStart, secondRecoveryEpochMid, secondRecoveryEpochEnd := lastForkEpoch+9, lastForkEpoch+10, lastForkEpoch+11
secondRecoveryEpochStart, secondRecoveryEpochEnd := lastForkEpoch+8, lastForkEpoch+9
newPayloadMethod := "engine_newPayloadV4"
forkChoiceUpdatedMethod := "engine_forkchoiceUpdatedV3"
@@ -688,18 +680,13 @@ func (r *testRunner) multiScenarioMulticlient(ec *e2etypes.EvaluationContext, ep
forkChoiceUpdatedMethod = "engine_forkchoiceUpdatedV3"
}
// Skip evaluators during optimistic sync window (between start and end, exclusive)
if primitives.Epoch(epoch) > optimisticStartEpoch && primitives.Epoch(epoch) < optimisticEndEpoch {
return true
}
switch primitives.Epoch(epoch) {
case freezeStartEpoch:
require.NoError(r.t, r.comHandler.beaconNodes.PauseAtIndex(0))
require.NoError(r.t, r.comHandler.validatorNodes.PauseAtIndex(0))
return true
case freezeEndEpoch:
require.NoError(r.t, r.comHandler.beaconNodes.RestartAtIndex(r.comHandler.ctx, 0))
require.NoError(r.t, r.comHandler.beaconNodes.ResumeAtIndex(0))
require.NoError(r.t, r.comHandler.validatorNodes.ResumeAtIndex(0))
return true
case optimisticStartEpoch:
@@ -714,19 +701,6 @@ func (r *testRunner) multiScenarioMulticlient(ec *e2etypes.EvaluationContext, ep
}, func() bool {
return true
})
// Also intercept forkchoiceUpdated for prysm beacon node to prevent
// SetOptimisticToValid from clearing the optimistic status.
component.(e2etypes.EngineProxy).AddRequestInterceptor(forkChoiceUpdatedMethod, func() any {
return &ForkchoiceUpdatedResponse{
Status: &enginev1.PayloadStatus{
Status: enginev1.PayloadStatus_SYNCING,
LatestValidHash: nil,
},
PayloadId: nil,
}
}, func() bool {
return true
})
// Set it for lighthouse beacon node.
component, err = r.comHandler.eth1Proxy.ComponentAtIndex(2)
require.NoError(r.t, err)
@@ -760,7 +734,6 @@ func (r *testRunner) multiScenarioMulticlient(ec *e2etypes.EvaluationContext, ep
engineProxy, ok := component.(e2etypes.EngineProxy)
require.Equal(r.t, true, ok)
engineProxy.RemoveRequestInterceptor(newPayloadMethod)
engineProxy.RemoveRequestInterceptor(forkChoiceUpdatedMethod)
engineProxy.ReleaseBackedUpRequests(newPayloadMethod)
// Remove for lighthouse too
@@ -774,8 +747,8 @@ func (r *testRunner) multiScenarioMulticlient(ec *e2etypes.EvaluationContext, ep
return true
case recoveryEpochStart, recoveryEpochEnd,
secondRecoveryEpochStart, secondRecoveryEpochMid, secondRecoveryEpochEnd:
// Allow epochs for the network to finalize again after optimistic sync test.
secondRecoveryEpochStart, secondRecoveryEpochEnd:
// Allow 2 epochs for the network to finalize again.
return true
}
return false
@@ -809,39 +782,31 @@ func (r *testRunner) eeOffline(_ *e2etypes.EvaluationContext, epoch uint64, _ []
// as expected.
func (r *testRunner) multiScenario(ec *e2etypes.EvaluationContext, epoch uint64, conns []*grpc.ClientConn) bool {
lastForkEpoch := params.LastForkEpoch()
// Freeze/restart scenario is skipped in minimal test: With only 2 beacon nodes,
// when one node restarts it enters initial sync mode. During initial sync, the
// restarting node doesn't subscribe to gossip topics, leaving the other node with
// 0 gossip peers. This causes a deadlock where the network can't produce blocks
// consistently (no gossip mesh) and the restarting node can't complete initial sync
// (no blocks being produced). This scenario works in multiclient test (4 nodes)
// where 3 healthy nodes maintain the gossip mesh while 1 node syncs.
freezeStartEpoch := lastForkEpoch + 1
freezeEndEpoch := lastForkEpoch + 2
valOfflineStartEpoch := lastForkEpoch + 6
valOfflineEndEpoch := lastForkEpoch + 7
optimisticStartEpoch := lastForkEpoch + 11
optimisticEndEpoch := lastForkEpoch + 13
optimisticEndEpoch := lastForkEpoch + 12
recoveryEpochStart, recoveryEpochEnd := lastForkEpoch+3, lastForkEpoch+4
secondRecoveryEpochStart, secondRecoveryEpochEnd := lastForkEpoch+8, lastForkEpoch+9
thirdRecoveryEpochStart, thirdRecoveryEpochEnd := lastForkEpoch+14, lastForkEpoch+15
type ForkchoiceUpdatedResponse struct {
Status *enginev1.PayloadStatus `json:"payloadStatus"`
PayloadId *enginev1.PayloadIDBytes `json:"payloadId"`
}
thirdRecoveryEpochStart, thirdRecoveryEpochEnd := lastForkEpoch+13, lastForkEpoch+14
newPayloadMethod := "engine_newPayloadV4"
forkChoiceUpdatedMethod := "engine_forkchoiceUpdatedV3"
// Fallback if Electra is not set.
if params.BeaconConfig().ElectraForkEpoch == math.MaxUint64 {
newPayloadMethod = "engine_newPayloadV3"
}
// Skip evaluators during optimistic sync window (between start and end, exclusive)
if primitives.Epoch(epoch) > optimisticStartEpoch && primitives.Epoch(epoch) < optimisticEndEpoch {
return true
}
switch primitives.Epoch(epoch) {
case freezeStartEpoch:
require.NoError(r.t, r.comHandler.beaconNodes.PauseAtIndex(0))
require.NoError(r.t, r.comHandler.validatorNodes.PauseAtIndex(0))
return true
case freezeEndEpoch:
require.NoError(r.t, r.comHandler.beaconNodes.ResumeAtIndex(0))
require.NoError(r.t, r.comHandler.validatorNodes.ResumeAtIndex(0))
return true
case valOfflineStartEpoch:
require.NoError(r.t, r.comHandler.validatorNodes.PauseAtIndex(0))
require.NoError(r.t, r.comHandler.validatorNodes.PauseAtIndex(1))
@@ -861,36 +826,23 @@ func (r *testRunner) multiScenario(ec *e2etypes.EvaluationContext, epoch uint64,
}, func() bool {
return true
})
// Also intercept forkchoiceUpdated to prevent SetOptimisticToValid from
// clearing the optimistic status when the beacon node receives VALID.
component.(e2etypes.EngineProxy).AddRequestInterceptor(forkChoiceUpdatedMethod, func() any {
return &ForkchoiceUpdatedResponse{
Status: &enginev1.PayloadStatus{
Status: enginev1.PayloadStatus_SYNCING,
LatestValidHash: nil,
},
PayloadId: nil,
}
}, func() bool {
return true
})
return true
case optimisticEndEpoch:
evs := []e2etypes.Evaluator{ev.OptimisticSyncEnabled}
r.executeProvidedEvaluators(ec, epoch, []*grpc.ClientConn{conns[0]}, evs)
// Disable Interceptors
// Disable Interceptor
component, err := r.comHandler.eth1Proxy.ComponentAtIndex(0)
require.NoError(r.t, err)
engineProxy, ok := component.(e2etypes.EngineProxy)
require.Equal(r.t, true, ok)
engineProxy.RemoveRequestInterceptor(newPayloadMethod)
engineProxy.ReleaseBackedUpRequests(newPayloadMethod)
engineProxy.RemoveRequestInterceptor(forkChoiceUpdatedMethod)
engineProxy.ReleaseBackedUpRequests(forkChoiceUpdatedMethod)
return true
case secondRecoveryEpochStart, secondRecoveryEpochEnd,
case recoveryEpochStart, recoveryEpochEnd,
secondRecoveryEpochStart, secondRecoveryEpochEnd,
thirdRecoveryEpochStart, thirdRecoveryEpochEnd:
// Allow 2 epochs for the network to finalize again.
return true
}
return false

View File

@@ -82,7 +82,7 @@ var metricComparisonTests = []comparisonTest{
name: "hot state cache",
topic1: "hot_state_cache_miss",
topic2: "hot_state_cache_hit",
expectedComparison: 0.02,
expectedComparison: 0.01,
},
}
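
How the threshold is applied, per metricCheckComparison below: the test fails once topic1/topic2 (here, hot-state-cache misses over hits) reaches expectedComparison, so a threshold of 0.01 tolerates just under a 1% miss-to-hit ratio, and 0.02 just under 2%. A sketch with hypothetical names:

// exceeds reports whether misses breach the allowed miss/hit ratio,
// mirroring the >= comparison in metricCheckComparison.
func exceeds(misses, hits int, threshold float64) bool {
	return float64(misses)/float64(hits) >= threshold
}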
@@ -168,15 +168,20 @@ func metricCheckLessThan(pageContent, topic string, value int) error {
func metricCheckComparison(pageContent, topic1, topic2 string, comparison float64) error {
topic2Value, err := valueOfTopic(pageContent, topic2)
if err != nil || topic2Value == -1 {
// If we can't find the denominator (hits/received total), assume test passes
// If we can't find the first topic (error metrics), then assume the test passes.
if topic2Value != -1 {
return nil
}
if err != nil {
return err
}
topic1Value, err := valueOfTopic(pageContent, topic1)
if err != nil || topic1Value == -1 {
// If we can't find the numerator (misses/failures), assume test passes (no errors)
if topic1Value != -1 {
return nil
}
if err != nil {
return err
}
topicComparison := float64(topic1Value) / float64(topic2Value)
if topicComparison >= comparison {
return fmt.Errorf(

View File

@@ -101,44 +101,16 @@ func peersConnect(_ *e2etypes.EvaluationContext, conns ...*grpc.ClientConn) erro
return nil
}
ctx := context.Background()
expectedPeers := len(conns) - 1 + e2e.TestParams.LighthouseBeaconNodeCount
// Wait up to 60 seconds for all nodes to discover peers.
// Peer discovery via DHT can take time, especially for nodes that start later.
timeout := 60 * time.Second
pollInterval := 1 * time.Second
for _, conn := range conns {
nodeClient := eth.NewNodeClient(conn)
deadline := time.Now().Add(timeout)
var peersResp *eth.Peers
var err error
for time.Now().Before(deadline) {
peersResp, err = nodeClient.ListPeers(ctx, &emptypb.Empty{})
if err != nil {
time.Sleep(pollInterval)
continue
}
if len(peersResp.Peers) >= expectedPeers {
break
}
time.Sleep(pollInterval)
}
peersResp, err := nodeClient.ListPeers(ctx, &emptypb.Empty{})
if err != nil {
return fmt.Errorf("failed to list peers after %v: %w", timeout, err)
return err
}
expectedPeers := len(conns) - 1 + e2e.TestParams.LighthouseBeaconNodeCount
if expectedPeers != len(peersResp.Peers) {
peerIDs := make([]string, 0, len(peersResp.Peers))
for _, p := range peersResp.Peers {
peerIDs = append(peerIDs, p.Address[len(p.Address)-10:])
}
return fmt.Errorf("unexpected amount of peers after %v timeout, expected %d, received %d (connected to: %v)", timeout, expectedPeers, len(peersResp.Peers), peerIDs)
return fmt.Errorf("unexpected amount of peers, expected %d, received %d", expectedPeers, len(peersResp.Peers))
}
time.Sleep(connTimeDelay)
}
return nil

View File

@@ -38,8 +38,8 @@ func TestEndToEnd_MinimalConfig(t *testing.T) {
r := e2eMinimal(t, cfg,
types.WithCheckpointSync(),
types.WithEpochs(10),
types.WithExitEpoch(4), // Minimum due to ShardCommitteePeriod=4
types.WithLargeBlobs(), // Use large blob transactions for BPO testing
types.WithExitEpoch(4), // Minimum due to ShardCommitteePeriod=4
types.WithLargeBlobs(), // Use large blob transactions for BPO testing
)
r.run()
}
}

View File

@@ -117,7 +117,6 @@ type E2EConfig struct {
BeaconFlags []string
ValidatorFlags []string
PeerIDs []string
PeerMultiAddrs []string
ExtraEpochs uint64
}
@@ -223,8 +222,6 @@ type MultipleComponentRunners interface {
ResumeAtIndex(i int) error
// StopAtIndex stops the grouped component element at the desired index.
StopAtIndex(i int) error
// RestartAtIndex restarts the grouped component element at the desired index.
RestartAtIndex(ctx context.Context, i int) error
}
type EngineProxy interface {