Compare commits

...

24 Commits

Author SHA1 Message Date
terence tsao
6544f23e8f Sort by descending 2025-05-21 14:21:14 -07:00
terence tsao
b78839f6fa Log blob limit increase 2025-05-21 13:05:49 -07:00
terence tsao
df6e09d8a0 Uncomment blob schedule from placeholder fields for test 2025-05-21 09:43:21 -07:00
terence tsao
da62dc682c Add fallback 2025-05-21 08:44:01 -07:00
james-prysm
6a01fb22ec adding in temporary fix until web3signer is supported for blob schedule in config 2025-05-21 08:44:01 -07:00
terence tsao
643a5f5965 Fix e2e test 2025-05-21 08:44:01 -07:00
terence tsao
26e199faac Feedback 2025-05-21 08:44:01 -07:00
terence tsao
1a02d0592f Add blob schedule support 2025-05-21 08:44:01 -07:00
Bastin
b20821dd8e Fix wording/typo in lc spec tests (#15307)
* fix wording

* changelog
2025-05-21 11:02:45 +00:00
Manu NALEPA
e2f0b057b0 Implement data columns filesystem. (#15257)
* Logging: Add `DataColumnFields`.

* `RODataColumn`: Implement `Slot`, `ParentRoot` and `ProposerIndex`.

* Implement verification for data column sidecars.

* Add changelog.

* Fix Terence's comment.

* Implement data columns filesystem.

* Reduce const verbosity (Kasey's comment).

* `prune`: Acquire `mu` only when needed.

* `Save`: Call `dcs.DataColumnFeed.Send` in a goroutine.

* Fix Terence's comment.

* Fix Terence's comment.

* Fix Terence's comment.

* Fix Terence's comment.

* Fix Terence's comment.

* Implement a new solution for subscriber.

* Fix Kasey's comment.
2025-05-21 09:35:48 +00:00
Manu NALEPA
3d4e2c5568 Implement data column sidecars verifications. (#15232)
* Logging: Add `DataColumnFields`.

* `RODataColumn`: Implement `Slot`, `ParentRoot` and `ProposerIndex`.

* Implement verification for data column sidecars.

* Add changelog.

* Fix Terence's comment.

* Fix Terence's comment.

* `SidecarProposerExpected`: Stop returning "sidecar was not proposed by the expected proposer_index" when there is any error in the function.

* `SidecarProposerExpected` & `ValidProposerSignature`: Cache the parent state.

* `VerifyDataColumnsSidecarKZGProofs`: Add benchmarks.

* Fix Kasey's comment.

* Add additional benchmark.

* Fix Kasey's comment.

* Fix Kasey's comment.

* Fix Kasey's comment.

* Fix Preston's comment.

* Fix Preston's comment.

* Fix Preston's comment.
2025-05-20 21:15:29 +00:00
terence
fa744ff78f Update spec tests to v1.6.0-alpha.0 (#15306) 2025-05-20 20:52:09 +00:00
Radosław Kapka
bb5807fd08 Update Beacon API evaluator (#15304)
* Update Beacon API evaluator

* cleanup and changelog

* changelog message fix
2025-05-20 17:09:01 +00:00
Nilav Prajapati
d6bbfff8b7 Fix cyclical dependency (#15248)
* fix cyclical dependency

* fix bazel file

* fix comments

* fix: Break cyclical dependency by importing GetRandBlob directly from crypto/random

* add changelog
2025-05-20 16:19:36 +00:00
terence
a8ce85f8de Add fulu spec tests for operation and epoch processing (#15284)
* Add fulu spec tests for operation and epoch processing

* Fix operation test version

* Fix RunDepositRequestsTest
2025-05-20 16:18:04 +00:00
Bastin
00bb3ff2b8 Add Light Client Minimal Spec Tests Part 1 (#15297)
* single merkle proof - altair

* single merkle proof - bellatrix - mainnet

* single merkle proof - capella - mainnet

* single merkle proof - deneb - mainnet

* single merkle proof - electra - mainnet

* changelog entry

* typo

* merge all into common

* update ranking

* single merkle proof

* changelog entry

* update exclusions.txt
2025-05-20 16:08:41 +00:00
Bastin
edab145001 endpoint registration behind flag (#15303) 2025-05-20 12:29:07 +00:00
Preston Van Loon
7fd3902b75 Added prysm build data to otel tracing spans (#15302) 2025-05-19 17:04:30 +00:00
Joe Clapis
6b6370bc59 Added a symlink to the Docker image from /bin/sh -> /bin/bash (#15294)
* Added a symlink to the Docker image from /bin/sh -> /bin/bash

* Added changelog fragment

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-05-19 16:29:27 +00:00
Potuz
17204ca817 Use independent context when updating domain data (#15268)
* Use independent context when updating domain data

* Terence's review

* don't cancel immediately

* fix e2e panic

* add new context for roles
2025-05-19 16:14:03 +00:00
Bastin
5bbcfe5237 Enable Light Client req/resp domain (#15281)
* add rpc topic mappings and types

* placeholder funcs

* bootstrap write chunk

* add rpc topic mappings and types

* placeholder funcs

* bootstrap write chunk

* deps

* add messageMapping entries

* add handlers and register RPC

* deps

* tests

* read context in tests

* add tests

* add flag and changelog entry

* fix linter

* deps

* increase topic count in ratelimiter test

* handle flag
2025-05-19 15:15:13 +00:00
terence
c1b99b74c7 Use current slot helper whenever possible (#15301) 2025-05-18 16:06:54 +00:00
terence
f02955676b Remove unused protobuf import (#15300) 2025-05-18 16:05:00 +00:00
Bastin
1dea6857d5 Add Light Client Mainnet Spec Tests (#15295)
* single merkle proof - altair

* single merkle proof - bellatrix - mainnet

* single merkle proof - capella - mainnet

* single merkle proof - deneb - mainnet

* single merkle proof - electra - mainnet

* changelog entry

* typo

* merge all into common

* fail for unsupported test types
2025-05-18 12:22:45 +00:00
237 changed files with 8727 additions and 905 deletions

View File

@@ -255,7 +255,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0"
consensus_spec_version = "v1.6.0-alpha.0"
bls_test_version = "v0.1.1"
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-cI+DJe3BXlZ0lr28w3USi2lnYOUUfdi/YZ3nJuRiiYU=",
integrity = "sha256-W7oKvoM0nAkyitykRxAw6kmCvjYC01IqiNJy0AmCnMM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -287,7 +287,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-eBLWqO/RdcqsANmA/rwkJ4kI+LCL+Q0RmIDq6z85lYQ=",
integrity = "sha256-ig7/zxomjv6buBWMom4IxAJh3lFJ9+JnY44E7c8ZNP8=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -303,7 +303,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-ab0H0WTzhSwYJ2a+GHVbUMoNRActJw18EmX3o5hhDi0",
integrity = "sha256-mjx+MkXtPhCNv4c4knLYLIkvIdpF7WTjx/ElvGPQzSo=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -318,7 +318,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Wy3YcJxoXiKQwrGgJecrtjtdokc4X/VUNBmyQXJf0Oc=",
integrity = "sha256-u0RkIZIeGttb3sInR31mO64aBSwxALqO5SYIPlqEvPo=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -241,7 +241,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
return nil, errors.Wrap(err, "error getting header from builder server")
}
bid, err := c.parseHeaderResponse(data, header)
bid, err := c.parseHeaderResponse(data, header, slot)
if err != nil {
return nil, errors.Wrapf(
err,
@@ -254,7 +254,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
return bid, nil
}
func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid, error) {
func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primitives.Slot) (SignedBid, error) {
var versionHeader string
if c.sszEnabled || header.Get(api.VersionHeader) != "" {
versionHeader = header.Get(api.VersionHeader)
@@ -276,7 +276,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid
}
if ver >= version.Electra {
return c.parseHeaderElectra(data)
return c.parseHeaderElectra(data, slot)
}
if ver >= version.Deneb {
return c.parseHeaderDeneb(data)
@@ -291,7 +291,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid
return nil, fmt.Errorf("unsupported header version %s", versionHeader)
}
func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
func (c *Client) parseHeaderElectra(data []byte, slot primitives.Slot) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBidElectra{}
if err := sb.UnmarshalSSZ(data); err != nil {
@@ -303,7 +303,7 @@ func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseElectra JSON")
}
p, err := hr.ToProto()
p, err := hr.ToProto(slot)
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponseElectra to proto")
}

View File

@@ -532,7 +532,7 @@ func TestClient_GetHeader(t *testing.T) {
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseElectra{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
pro, err := epr.ToProto()
pro, err := epr.ToProto(100)
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)

View File

@@ -1284,8 +1284,8 @@ type ExecHeaderResponseElectra struct {
}
// ToProto creates a SignedBuilderBidElectra Proto from ExecHeaderResponseElectra.
func (ehr *ExecHeaderResponseElectra) ToProto() (*eth.SignedBuilderBidElectra, error) {
bb, err := ehr.Data.Message.ToProto()
func (ehr *ExecHeaderResponseElectra) ToProto(slot types.Slot) (*eth.SignedBuilderBidElectra, error) {
bb, err := ehr.Data.Message.ToProto(slot)
if err != nil {
return nil, err
}
@@ -1296,13 +1296,14 @@ func (ehr *ExecHeaderResponseElectra) ToProto() (*eth.SignedBuilderBidElectra, e
}
// ToProto creates a BuilderBidElectra Proto from BuilderBidElectra.
func (bb *BuilderBidElectra) ToProto() (*eth.BuilderBidElectra, error) {
func (bb *BuilderBidElectra) ToProto(slot types.Slot) (*eth.BuilderBidElectra, error) {
header, err := bb.Header.ToProto()
if err != nil {
return nil, err
}
if len(bb.BlobKzgCommitments) > params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra) {
return nil, fmt.Errorf("blob commitment count %d exceeds the maximum %d", len(bb.BlobKzgCommitments), params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra))
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(bb.BlobKzgCommitments) > maxBlobsPerBlock {
return nil, fmt.Errorf("blob commitment count %d exceeds the maximum %d", len(bb.BlobKzgCommitments), maxBlobsPerBlock)
}
kzgCommitments := make([][]byte, len(bb.BlobKzgCommitments))
for i, commit := range bb.BlobKzgCommitments {

View File

@@ -29,9 +29,8 @@ go_test(
embed = [":go_default_library"],
deps = [
"//consensus-types/blocks:go_default_library",
"//crypto/random:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,16 +1,12 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/crypto/random"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/sirupsen/logrus"
)
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
@@ -41,7 +37,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := getRandBlob(123)
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
@@ -49,36 +45,3 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}

View File

@@ -187,9 +187,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
// VerifyCheckpointEpoch checks whether the checkpoint's epoch is within the current or previous epoch
// with respect to the current time. Returns true if it is, false otherwise.
func VerifyCheckpointEpoch(c *ethpb.Checkpoint, genesis time.Time) bool {
now := uint64(prysmTime.Now().Unix())
genesisTime := uint64(genesis.Unix())
currentSlot := primitives.Slot((now - genesisTime) / params.BeaconConfig().SecondsPerSlot)
currentSlot := slots.CurrentSlot(uint64(genesis.Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch primitives.Epoch
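The removed lines above computed the current slot inline from the genesis time; slots.CurrentSlot wraps the same arithmetic. A minimal standalone sketch of that computation, assuming 12-second slots and no special handling for a clock that sits before genesis (both assumptions; the real helper lives in Prysm's time/slots package):

package main

import (
	"fmt"
	"time"
)

// currentSlot is a hypothetical stand-in for slots.CurrentSlot: it derives the
// current slot from the genesis Unix time, assuming 12-second slots.
func currentSlot(genesisUnix uint64) uint64 {
	const secondsPerSlot = 12
	now := uint64(time.Now().Unix())
	if now < genesisUnix {
		return 0
	}
	return (now - genesisUnix) / secondsPerSlot
}

func main() {
	genesis := uint64(time.Now().Add(-25 * time.Second).Unix())
	fmt.Println(currentSlot(genesis)) // prints 2 with 12-second slots
}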

View File

@@ -2,6 +2,7 @@ package peerdas_test
import (
"crypto/rand"
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
@@ -52,51 +53,15 @@ func TestVerifyDataColumnSidecar(t *testing.T) {
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
const (
blobCount = 6
seed = 0
)
err := kzg.Start()
require.NoError(t, err)
generateSidecars := func(t *testing.T) []*ethpb.DataColumnSidecar {
const blobCount = int64(6)
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
blobs := make([]kzg.Blob, 0, blobCount)
for i := range blobCount {
blob := getRandBlob(i)
commitment, _, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
commitments = append(commitments, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = commitments
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
require.NoError(t, err)
return sidecars
}
generateRODataColumnSidecars := func(t *testing.T, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateSidecars(t)
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
@@ -105,7 +70,7 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
})
t.Run("nominal", func(t *testing.T) {
sidecars := generateSidecars(t)
sidecars := generateRandomSidecars(t, seed, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
@@ -281,6 +246,96 @@ func TestCustodyGroupCountFromRecord(t *testing.T) {
})
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testing.B) {
const blobCount = 12
err := kzg.Start()
require.NoError(b, err)
b.StopTimer()
b.ResetTimer()
for i := range int64(b.N) {
// Generate new random sidecars to ensure the KZG backend does not cache anything.
sidecars := generateRandomSidecars(b, i, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars)
for _, sidecar := range roDataColumnSidecars {
sidecars := []blocks.RODataColumn{sidecar}
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
b.StopTimer()
require.NoError(b, err)
}
}
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.B) {
const blobCount = 12
numberOfColumns := int64(params.BeaconConfig().NumberOfColumns)
err := kzg.Start()
require.NoError(b, err)
columnsCounts := []int64{1, 2, 4, 8, 16, 32, 64, 128}
for i, columnsCount := range columnsCounts {
b.Run(fmt.Sprintf("columnsCount_%d", columnsCount), func(b *testing.B) {
b.StopTimer()
b.ResetTimer()
for j := range int64(b.N) {
allSidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
for k := int64(0); k < numberOfColumns; k += columnsCount {
// Use different seeds to generate different blobs/commitments
seed := int64(b.N*i) + numberOfColumns*j + blobCount*k
sidecars := generateRandomSidecars(b, seed, blobCount)
// Pick sidecars.
allSidecars = append(allSidecars, sidecars[k:k+columnsCount]...)
}
roDataColumnSidecars := generateRODataColumnSidecars(b, allSidecars)
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
b.StopTimer()
require.NoError(b, err)
}
})
}
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing.B) {
const (
blobCount = 12
// columnsCount*batchCount = 128
columnsCount = 4
batchCount = 32
)
err := kzg.Start()
require.NoError(b, err)
b.StopTimer()
b.ResetTimer()
for i := range int64(b.N) {
allSidecars := make([][]blocks.RODataColumn, 0, batchCount)
for j := range int64(batchCount) {
// Use different seeds to generate different blobs/commitments
sidecars := generateRandomSidecars(b, int64(batchCount)*i+j*blobCount, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars[:columnsCount])
allSidecars = append(allSidecars, roDataColumnSidecars)
}
for _, sidecars := range allSidecars {
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
b.StopTimer()
require.NoError(b, err)
}
}
}
func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgProofs [][]byte) blocks.RODataColumn {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
@@ -302,3 +357,42 @@ func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgPr
return roSidecar
}
func generateRandomSidecars(t testing.TB, seed, blobCount int64) []*ethpb.DataColumnSidecar {
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
blobs := make([]kzg.Blob, 0, blobCount)
for i := range blobCount {
subSeed := seed + i
blob := getRandBlob(subSeed)
commitment, err := generateCommitment(&blob)
require.NoError(t, err)
commitments = append(commitments, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = commitments
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
require.NoError(t, err)
return sidecars
}
func generateRODataColumnSidecars(t testing.TB, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}

View File

@@ -8,18 +8,30 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func generateCommitment(blob *kzg.Blob) (*kzg.Commitment, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, errors.Wrap(err, "blob to kzg commitment")
}
return &commitment, nil
}
func generateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}

View File

@@ -46,6 +46,7 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",

View File

@@ -27,7 +27,9 @@ import (
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
prysmTrace "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/otel/trace"
)
@@ -291,6 +293,8 @@ func ProcessSlotsCore(ctx context.Context, span trace.Span, state state.BeaconSt
tracing.AnnotateError(span, err)
return nil, errors.Wrap(err, "failed to upgrade state")
}
logBlobLimitIncrease(state.Slot())
}
return state, nil
}
@@ -507,3 +511,19 @@ func ProcessEpochPrecompute(ctx context.Context, state state.BeaconState) (state
}
return state, nil
}
func logBlobLimitIncrease(slot primitives.Slot) {
if !slots.IsEpochStart(slot) {
return
}
epoch := slots.ToEpoch(slot)
for _, entry := range params.BeaconConfig().BlobSchedule {
if entry.Epoch == epoch {
log.WithFields(logrus.Fields{
"epoch": epoch,
"blobLimit": entry.MaxBlobsPerBlock,
}).Info("Blob limit updated")
}
}
}
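The helper above only logs when a processed slot opens an epoch that appears in the blob schedule. As a rough illustration of how a slot-dependent limit such as MaxBlobsPerBlock(slot) can be resolved from such a schedule, with entries sorted by epoch in descending order and a fallback when no entry applies (details inferred from the "Sort by descending" and "Add fallback" commits in this compare, not Prysm's actual implementation):

package main

import "fmt"

// blobScheduleEntry mirrors the fields read by logBlobLimitIncrease; the type
// name and the lookup below are illustrative only.
type blobScheduleEntry struct {
	Epoch            uint64
	MaxBlobsPerBlock int
}

// maxBlobsPerBlock returns the limit of the first schedule entry whose epoch
// is <= the given epoch, assuming the schedule is sorted by epoch in
// descending order; fallback is returned when no entry applies.
func maxBlobsPerBlock(schedule []blobScheduleEntry, epoch uint64, fallback int) int {
	for _, entry := range schedule {
		if entry.Epoch <= epoch {
			return entry.MaxBlobsPerBlock
		}
	}
	return fallback
}

func main() {
	schedule := []blobScheduleEntry{
		{Epoch: 200, MaxBlobsPerBlock: 12},
		{Epoch: 100, MaxBlobsPerBlock: 9},
	}
	fmt.Println(maxBlobsPerBlock(schedule, 150, 6)) // 9
	fmt.Println(maxBlobsPerBlock(schedule, 50, 6))  // 6 (fallback)
}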

View File

@@ -5,6 +5,9 @@ go_library(
srcs = [
"blob.go",
"cache.go",
"data_column.go",
"data_column_cache.go",
"doc.go",
"iteration.go",
"layout.go",
"layout_by_epoch.go",
@@ -17,6 +20,8 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//async:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -41,6 +46,8 @@ go_test(
srcs = [
"blob_test.go",
"cache_test.go",
"data_column_cache_test.go",
"data_column_test.go",
"iteration_test.go",
"layout_test.go",
"migration_test.go",
@@ -50,6 +57,7 @@ go_test(
deps = [
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",

File diff suppressed because it is too large

View File

@@ -0,0 +1,220 @@
package filesystem
import (
"sync"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/pkg/errors"
)
var errDataColumnIndexOutOfBounds = errors.New("data column index too high")
// DataColumnStorageSummary represents cached information about the DataColumnSidecars on disk for each root the cache knows about.
type DataColumnStorageSummary struct {
epoch primitives.Epoch
mask [fieldparams.NumberOfColumns]bool
}
// NewDataColumnStorageSummary creates a new DataColumnStorageSummary for a given epoch and mask.
func NewDataColumnStorageSummary(epoch primitives.Epoch, mask [fieldparams.NumberOfColumns]bool) DataColumnStorageSummary {
return DataColumnStorageSummary{
epoch: epoch,
mask: mask,
}
}
// HasIndex returns true if the DataColumnSidecar at the given index is available in the filesystem.
func (s DataColumnStorageSummary) HasIndex(index uint64) bool {
if index >= uint64(fieldparams.NumberOfColumns) {
return false
}
return s.mask[index]
}
// Count returns the number of available data columns.
func (s DataColumnStorageSummary) Count() uint64 {
count := uint64(0)
for _, available := range s.mask {
if available {
count++
}
}
return count
}
// AllAvailable returns true if we have all data columns for corresponding indices.
func (s DataColumnStorageSummary) AllAvailable(indices map[uint64]bool) bool {
if len(indices) > len(s.mask) {
return false
}
for index := range indices {
if !s.mask[index] {
return false
}
}
return true
}
// DataColumnStorageSummarizer can be used to receive a summary of metadata about data columns on disk for a given root.
// The DataColumnStorageSummary can be used to check which indices (if any) are available for a given block by root.
type DataColumnStorageSummarizer interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
}
type dataColumnStorageSummaryCache struct {
mu sync.RWMutex
dataColumnCount float64
lowestCachedEpoch primitives.Epoch
highestCachedEpoch primitives.Epoch
cache map[[fieldparams.RootLength]byte]DataColumnStorageSummary
}
var _ DataColumnStorageSummarizer = &dataColumnStorageSummaryCache{}
func newDataColumnStorageSummaryCache() *dataColumnStorageSummaryCache {
return &dataColumnStorageSummaryCache{
cache: make(map[[fieldparams.RootLength]byte]DataColumnStorageSummary),
lowestCachedEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Summary returns the DataColumnStorageSummary for `root`.
// The DataColumnStorageSummary can be used to check for the presence of DataColumnSidecars based on Index.
func (sc *dataColumnStorageSummaryCache) Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary {
sc.mu.RLock()
defer sc.mu.RUnlock()
return sc.cache[root]
}
func (sc *dataColumnStorageSummaryCache) HighestEpoch() primitives.Epoch {
sc.mu.RLock()
defer sc.mu.RUnlock()
return sc.highestCachedEpoch
}
// set updates the cache.
func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent) error {
numberOfColumns := params.BeaconConfig().NumberOfColumns
sc.mu.Lock()
defer sc.mu.Unlock()
summary := sc.cache[dataColumnsIdent.Root]
summary.epoch = dataColumnsIdent.Epoch
count := uint64(0)
for _, index := range dataColumnsIdent.Indices {
if index >= numberOfColumns {
return errDataColumnIndexOutOfBounds
}
if summary.mask[index] {
continue
}
count++
summary.mask[index] = true
sc.lowestCachedEpoch = min(sc.lowestCachedEpoch, dataColumnsIdent.Epoch)
sc.highestCachedEpoch = max(sc.highestCachedEpoch, dataColumnsIdent.Epoch)
}
sc.cache[dataColumnsIdent.Root] = summary
countFloat := float64(count)
sc.dataColumnCount += countFloat
dataColumnDiskCount.Set(sc.dataColumnCount)
dataColumnWrittenCounter.Add(countFloat)
return nil
}
// get returns the DataColumnStorageSummary for the given block root.
// If the root is not in the cache, the second return value will be false.
func (sc *dataColumnStorageSummaryCache) get(blockRoot [fieldparams.RootLength]byte) (DataColumnStorageSummary, bool) {
sc.mu.RLock()
defer sc.mu.RUnlock()
v, ok := sc.cache[blockRoot]
return v, ok
}
// evict removes the DataColumnStorageSummary for the given block root from the cache.
func (s *dataColumnStorageSummaryCache) evict(blockRoot [fieldparams.RootLength]byte) int {
deleted := 0
s.mu.Lock()
defer s.mu.Unlock()
summary, ok := s.cache[blockRoot]
if !ok {
return 0
}
for i := range summary.mask {
if summary.mask[i] {
deleted += 1
}
}
delete(s.cache, blockRoot)
if deleted > 0 {
s.dataColumnCount -= float64(deleted)
dataColumnDiskCount.Set(s.dataColumnCount)
}
// The lowest and highest cached epochs may no longer be valid here,
// but recalculating them is not worth the effort.
return deleted
}
// pruneUpTo removes all entries from the cache up to and including the given target epoch.
func (sc *dataColumnStorageSummaryCache) pruneUpTo(targetEpoch primitives.Epoch) uint64 {
sc.mu.Lock()
defer sc.mu.Unlock()
prunedCount := uint64(0)
newLowestCachedEpoch := params.BeaconConfig().FarFutureEpoch
newHighestCachedEpoch := primitives.Epoch(0)
for blockRoot, summary := range sc.cache {
epoch := summary.epoch
if epoch > targetEpoch {
newLowestCachedEpoch = min(newLowestCachedEpoch, epoch)
newHighestCachedEpoch = max(newHighestCachedEpoch, epoch)
}
if epoch <= targetEpoch {
for i := range summary.mask {
if summary.mask[i] {
prunedCount += 1
}
}
delete(sc.cache, blockRoot)
}
}
if prunedCount > 0 {
sc.lowestCachedEpoch = newLowestCachedEpoch
sc.highestCachedEpoch = newHighestCachedEpoch
sc.dataColumnCount -= float64(prunedCount)
dataColumnDiskCount.Set(sc.dataColumnCount)
}
return prunedCount
}
// clear removes all entries from the cache.
func (sc *dataColumnStorageSummaryCache) clear() uint64 {
return sc.pruneUpTo(params.BeaconConfig().FarFutureEpoch)
}
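For orientation, a toy caller-side sketch of the summarizer API defined above. The types are shrunk to 8 columns so the example is self-contained; the real code uses fieldparams.NumberOfColumns and the dataColumnStorageSummaryCache shown in the diff:

package main

import "fmt"

// summary and summarizer mirror the shape of DataColumnStorageSummary and
// DataColumnStorageSummarizer from the diff, reduced to 8 columns.
type summary struct{ mask [8]bool }

func (s summary) HasIndex(i uint64) bool { return i < 8 && s.mask[i] }

func (s summary) AllAvailable(indices map[uint64]bool) bool {
	for i := range indices {
		if !s.HasIndex(i) {
			return false
		}
	}
	return true
}

type summarizer interface {
	Summary(root [32]byte) summary
}

// mapSummarizer is a toy in-memory implementation standing in for the cache.
type mapSummarizer map[[32]byte]summary

func (m mapSummarizer) Summary(root [32]byte) summary { return m[root] }

func main() {
	root := [32]byte{1}
	var s summarizer = mapSummarizer{root: {mask: [8]bool{false, true, false, true}}}

	// Decide whether every custodied column for this block is already on disk.
	custody := map[uint64]bool{1: true, 3: true}
	fmt.Println(s.Summary(root).AllAvailable(custody)) // true
	fmt.Println(s.Summary(root).HasIndex(0))           // false
}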

View File

@@ -0,0 +1,235 @@
package filesystem
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestHasIndex(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})
hasIndex := summary.HasIndex(1_000_000)
require.Equal(t, false, hasIndex)
hasIndex = summary.HasIndex(0)
require.Equal(t, false, hasIndex)
hasIndex = summary.HasIndex(1)
require.Equal(t, true, hasIndex)
}
func TestCount(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true, false, true})
count := summary.Count()
require.Equal(t, uint64(2), count)
}
func TestAllAvailableDataColumns(t *testing.T) {
const count = uint64(1_000)
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true, false, true})
indices := make(map[uint64]bool, count)
for i := range count {
indices[i] = true
}
allAvailable := summary.AllAvailable(indices)
require.Equal(t, false, allAvailable)
indices = map[uint64]bool{1: true, 2: true}
allAvailable = summary.AllAvailable(indices)
require.Equal(t, false, allAvailable)
indices = map[uint64]bool{1: true, 3: true}
allAvailable = summary.AllAvailable(indices)
require.Equal(t, true, allAvailable)
}
func TestSummary(t *testing.T) {
root := [fieldparams.RootLength]byte{}
summaryCache := newDataColumnStorageSummaryCache()
expected := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{})
actual := summaryCache.Summary(root)
require.DeepEqual(t, expected, actual)
summaryCache = newDataColumnStorageSummaryCache()
expected = NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{true, false, true, false})
summaryCache.cache[root] = expected
actual = summaryCache.Summary(root)
require.DeepEqual(t, expected, actual)
}
func TestHighestEpoch(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
root3 := [fieldparams.RootLength]byte{3}
summaryCache := newDataColumnStorageSummaryCache()
actual := summaryCache.HighestEpoch()
require.Equal(t, primitives.Epoch(0), actual)
err := summaryCache.set(DataColumnsIdent{Root: root1, Epoch: 42, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.HighestEpoch())
err = summaryCache.set(DataColumnsIdent{Root: root2, Epoch: 43, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(43), summaryCache.HighestEpoch())
err = summaryCache.set(DataColumnsIdent{Root: root3, Epoch: 40, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(43), summaryCache.HighestEpoch())
}
func TestSet(t *testing.T) {
t.Run("Index out of bounds", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Indices: []uint64{1_000_000}})
require.ErrorIs(t, err, errDataColumnIndexOutOfBounds)
require.Equal(t, params.BeaconConfig().FarFutureEpoch, summaryCache.lowestCachedEpoch)
require.Equal(t, 0, len(summaryCache.cache))
})
t.Run("Nominal", func(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: root1, Epoch: 42, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch)
require.Equal(t, 1, len(summaryCache.cache))
expected := DataColumnStorageSummary{epoch: 42, mask: [fieldparams.NumberOfColumns]bool{false, true, false, true}}
actual := summaryCache.cache[root1]
require.DeepEqual(t, expected, actual)
err = summaryCache.set(DataColumnsIdent{Root: root1, Epoch: 42, Indices: []uint64{0, 1}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch)
require.Equal(t, 1, len(summaryCache.cache))
expected = DataColumnStorageSummary{epoch: 42, mask: [fieldparams.NumberOfColumns]bool{true, true, false, true}}
actual = summaryCache.cache[root1]
require.DeepEqual(t, expected, actual)
err = summaryCache.set(DataColumnsIdent{Root: root2, Epoch: 43, Indices: []uint64{1}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch) // Epoch 42 is still the lowest
require.Equal(t, 2, len(summaryCache.cache))
expected = DataColumnStorageSummary{epoch: 43, mask: [fieldparams.NumberOfColumns]bool{false, true}}
actual = summaryCache.cache[root2]
require.DeepEqual(t, expected, actual)
})
}
func TestGet(t *testing.T) {
t.Run("Not in cache", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
root := [fieldparams.RootLength]byte{}
_, ok := summaryCache.get(root)
require.Equal(t, false, ok)
})
t.Run("In cache", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
summaryCache := newDataColumnStorageSummaryCache()
summaryCache.cache[root] = NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true, false, true, false})
actual, ok := summaryCache.get(root)
require.Equal(t, true, ok)
expected := NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true, false, true, false})
require.DeepEqual(t, expected, actual)
})
}
func TestEvict(t *testing.T) {
t.Run("No eviction", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
summaryCache := newDataColumnStorageSummaryCache()
evicted := summaryCache.evict(root)
require.Equal(t, 0, evicted)
})
t.Run("Eviction", func(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
summaryCache := newDataColumnStorageSummaryCache()
summaryCache.cache[root1] = NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true, false, true, false})
summaryCache.cache[root2] = NewDataColumnStorageSummary(43, [fieldparams.NumberOfColumns]bool{false, true, false, true})
evicted := summaryCache.evict(root1)
require.Equal(t, 2, evicted)
require.Equal(t, 1, len(summaryCache.cache))
_, ok := summaryCache.cache[root1]
require.Equal(t, false, ok)
_, ok = summaryCache.cache[root2]
require.Equal(t, true, ok)
})
}
func TestPruneUpTo(t *testing.T) {
t.Run("No pruning", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{1}, Epoch: 42, Indices: []uint64{1}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{2}, Epoch: 43, Indices: []uint64{2, 4}})
require.NoError(t, err)
count := summaryCache.pruneUpTo(41)
require.Equal(t, uint64(0), count)
require.Equal(t, 2, len(summaryCache.cache))
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch)
})
t.Run("Pruning", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{1}, Epoch: 42, Indices: []uint64{1}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{2}, Epoch: 44, Indices: []uint64{2, 4}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{3}, Epoch: 45, Indices: []uint64{2, 4}})
require.NoError(t, err)
count := summaryCache.pruneUpTo(42)
require.Equal(t, uint64(1), count)
require.Equal(t, 2, len(summaryCache.cache))
require.Equal(t, primitives.Epoch(44), summaryCache.lowestCachedEpoch)
count = summaryCache.pruneUpTo(45)
require.Equal(t, uint64(4), count)
require.Equal(t, 0, len(summaryCache.cache))
require.Equal(t, params.BeaconConfig().FarFutureEpoch, summaryCache.lowestCachedEpoch)
require.Equal(t, primitives.Epoch(0), summaryCache.highestCachedEpoch)
})
t.Run("Clear", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{1}, Epoch: 42, Indices: []uint64{1}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{2}, Epoch: 44, Indices: []uint64{2, 4}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{3}, Epoch: 45, Indices: []uint64{2, 4}})
require.NoError(t, err)
count := summaryCache.clear()
require.Equal(t, uint64(5), count)
require.Equal(t, 0, len(summaryCache.cache))
require.Equal(t, params.BeaconConfig().FarFutureEpoch, summaryCache.lowestCachedEpoch)
require.Equal(t, primitives.Epoch(0), summaryCache.highestCachedEpoch)
})
}

View File

@@ -0,0 +1,742 @@
package filesystem
import (
"context"
"encoding/binary"
"os"
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/spf13/afero"
)
func TestNewDataColumnStorage(t *testing.T) {
ctx := context.Background()
t.Run("No base path", func(t *testing.T) {
_, err := NewDataColumnStorage(ctx)
require.ErrorIs(t, err, errNoBasePath)
})
t.Run("Nominal", func(t *testing.T) {
dir := t.TempDir()
storage, err := NewDataColumnStorage(ctx, WithDataColumnBasePath(dir))
require.NoError(t, err)
require.Equal(t, dir, storage.base)
})
}
func TestWarmCache(t *testing.T) {
storage, err := NewDataColumnStorage(
context.Background(),
WithDataColumnBasePath(t.TempDir()),
WithDataColumnRetentionEpochs(10_000),
)
require.NoError(t, err)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{0}: {
{Slot: 33, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 1
{Slot: 33, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 1
},
{1}: {
{Slot: 128_002, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_002, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{2}: {
{Slot: 128_003, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_003, ColumnIndex: 3, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{3}: {
{Slot: 128_034, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4001
{Slot: 128_034, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4001
},
{4}: {
{Slot: 131_138, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{5}: {
{Slot: 131_138, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{6}: {
{Slot: 131_168, ColumnIndex: 0, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
},
},
)
err = storage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
storage.retentionEpochs = 4_096
storage.WarmCache()
require.Equal(t, primitives.Epoch(4_000), storage.cache.lowestCachedEpoch)
require.Equal(t, 6, len(storage.cache.cache))
summary, ok := storage.cache.get([fieldparams.RootLength]byte{1})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_000, mask: [fieldparams.NumberOfColumns]bool{false, false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{2})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_000, mask: [fieldparams.NumberOfColumns]bool{false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{3})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_001, mask: [fieldparams.NumberOfColumns]bool{false, false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{4})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{5})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{6})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_099, mask: [fieldparams.NumberOfColumns]bool{true}}, summary)
}
func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("wrong numbers of columns", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = 0
params.OverrideBeaconConfig(cfg)
params.SetupTestConfigCleanup(t)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {{ColumnIndex: 12}, {ColumnIndex: 1_000_000}, {ColumnIndex: 48}},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("one of the column index is too large", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{}: {{ColumnIndex: 12}, {ColumnIndex: 1_000_000}, {ColumnIndex: 48}}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errDataColumnIndexTooLarge)
})
t.Run("different slots", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {
{Slot: 1, ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{Slot: 2, ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errDataColumnSidecarsFromDifferentSlots)
})
t.Run("new file - no data columns to save", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{}: {}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
})
t.Run("new file - different data column size", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{1, 2, 3, 4}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongSszEncodedDataColumnSidecarSize)
})
t.Run("existing file - wrong incoming SSZ encoded size", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}}},
)
// Save data columns into a file.
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
// Build a data column sidecar for the same block but with a different
// column index and a different SSZ encoded size.
_, verifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 13, DataColumn: []byte{1, 2, 3, 4}}}},
)
// Try to rewrite the file.
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongSszEncodedDataColumnSidecarSize)
})
t.Run("nominal", func(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}, // OK if duplicate
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
{2}: {
{ColumnIndex: 12, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
_, inputVerifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}, // OK if duplicate
{ColumnIndex: 15, DataColumn: []byte{2, 3, 4}},
{ColumnIndex: 1, DataColumn: []byte{2, 3, 4}},
},
{3}: {
{ColumnIndex: 6, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 2, DataColumn: []byte{6, 7, 8}},
},
},
)
err = dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
type fixture struct {
fileName string
blockRoot [fieldparams.RootLength]byte
expectedIndices [mandatoryNumberOfColumns]byte
dataColumnParams []util.DataColumnParams
}
fixtures := []fixture{
{
fileName: "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{1},
expectedIndices: [mandatoryNumberOfColumns]byte{
0, nonZeroOffset + 4, 0, 0, 0, 0, 0, 0,
0, 0, 0, nonZeroOffset + 1, nonZeroOffset, nonZeroOffset + 2, 0, nonZeroOffset + 3,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
{ColumnIndex: 15, DataColumn: []byte{2, 3, 4}},
{ColumnIndex: 1, DataColumn: []byte{2, 3, 4}},
},
},
{
fileName: "0/0/0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{2},
expectedIndices: [mandatoryNumberOfColumns]byte{
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, nonZeroOffset, nonZeroOffset + 1, 0, 0,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 12, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
},
{
fileName: "0/0/0x0300000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{3},
expectedIndices: [mandatoryNumberOfColumns]byte{
0, 0, nonZeroOffset + 1, 0, 0, 0, nonZeroOffset, 0,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 6, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 2, DataColumn: []byte{6, 7, 8}},
},
},
}
for _, fixture := range fixtures {
// Build expected data column sidecars.
_, expectedDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{fixture.blockRoot: fixture.dataColumnParams},
)
// Build expected bytes.
firstSszEncodedDataColumnSidecar, err := expectedDataColumnSidecars[0].MarshalSSZ()
require.NoError(t, err)
dataColumnSidecarsCount := len(expectedDataColumnSidecars)
sszEncodedDataColumnSidecarSize := len(firstSszEncodedDataColumnSidecar)
sszEncodedDataColumnSidecars := make([]byte, 0, dataColumnSidecarsCount*sszEncodedDataColumnSidecarSize)
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, firstSszEncodedDataColumnSidecar...)
for _, dataColumnSidecar := range expectedDataColumnSidecars[1:] {
sszEncodedDataColumnSidecar, err := dataColumnSidecar.MarshalSSZ()
require.NoError(t, err)
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
var encodedSszEncodedDataColumnSidecarSize [sidecarByteLenSize]byte
binary.BigEndian.PutUint32(encodedSszEncodedDataColumnSidecarSize[:], uint32(sszEncodedDataColumnSidecarSize))
expectedBytes := make([]byte, 0, headerSize+dataColumnSidecarsCount*sszEncodedDataColumnSidecarSize)
expectedBytes = append(expectedBytes, []byte{0x01}...)
expectedBytes = append(expectedBytes, encodedSszEncodedDataColumnSidecarSize[:]...)
expectedBytes = append(expectedBytes, fixture.expectedIndices[:]...)
expectedBytes = append(expectedBytes, sszEncodedDataColumnSidecars...)
// Check the actual content of the file.
actualBytes, err := afero.ReadFile(dataColumnStorage.fs, fixture.fileName)
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBytes, actualBytes)
// Check the summary.
indices := map[uint64]bool{}
for _, dataColumnParam := range fixture.dataColumnParams {
indices[dataColumnParam.ColumnIndex] = true
}
summary := dataColumnStorage.Summary(fixture.blockRoot)
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, indices[index], summary.HasIndex(index))
}
err = dataColumnStorage.Remove(fixture.blockRoot)
require.NoError(t, err)
summary = dataColumnStorage.Summary(fixture.blockRoot)
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, false, summary.HasIndex(index))
}
_, err = afero.ReadFile(dataColumnStorage.fs, fixture.fileName)
require.ErrorIs(t, err, os.ErrNotExist)
}
})
}
func TestGetDataColumnSidecars(t *testing.T) {
t.Run("not found", func(t *testing.T) {
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
verifiedRODataColumnSidecars, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, []uint64{12, 13, 14})
require.NoError(t, err)
require.Equal(t, 0, len(verifiedRODataColumnSidecars))
})
t.Run("nominal", func(t *testing.T) {
_, expectedVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 14, DataColumn: []byte{2, 3, 4}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(expectedVerifiedRoDataColumnSidecars)
require.NoError(t, err)
verifiedRODataColumnSidecars, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, nil)
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
verifiedRODataColumnSidecars, err = dataColumnStorage.Get([fieldparams.RootLength]byte{1}, []uint64{12, 13, 14})
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
})
}
func TestRemove(t *testing.T) {
t.Run("not found", func(t *testing.T) {
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Remove([fieldparams.RootLength]byte{1})
require.NoError(t, err)
})
t.Run("nominal", func(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{Slot: 32, ColumnIndex: 10, DataColumn: []byte{1, 2, 3}},
{Slot: 32, ColumnIndex: 11, DataColumn: []byte{2, 3, 4}},
},
{2}: {
{Slot: 33, ColumnIndex: 10, DataColumn: []byte{1, 2, 3}},
{Slot: 33, ColumnIndex: 11, DataColumn: []byte{2, 3, 4}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
err = dataColumnStorage.Remove([fieldparams.RootLength]byte{1})
require.NoError(t, err)
summary := dataColumnStorage.Summary([fieldparams.RootLength]byte{1})
require.Equal(t, primitives.Epoch(0), summary.epoch)
require.Equal(t, uint64(0), summary.Count())
summary = dataColumnStorage.Summary([fieldparams.RootLength]byte{2})
require.Equal(t, primitives.Epoch(1), summary.epoch)
require.Equal(t, uint64(2), summary.Count())
actual, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, nil)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
actual, err = dataColumnStorage.Get([fieldparams.RootLength]byte{2}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(actual))
})
}
func TestClear(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
{2}: {{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}}},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
filePaths := []string{
"0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
"0/0/0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
}
for _, filePath := range filePaths {
_, err = afero.ReadFile(dataColumnStorage.fs, filePath)
require.NoError(t, err)
}
err = dataColumnStorage.Clear()
require.NoError(t, err)
summary := dataColumnStorage.Summary([fieldparams.RootLength]byte{1})
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, false, summary.HasIndex(index))
}
for _, filePath := range filePaths {
_, err = afero.ReadFile(dataColumnStorage.fs, filePath)
require.ErrorIs(t, err, os.ErrNotExist)
}
}
func TestMetadata(t *testing.T) {
t.Run("wrong version", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
},
)
// Save data columns into a file.
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
// Alter the version.
const filePath = "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs"
file, err := dataColumnStorage.fs.OpenFile(filePath, os.O_WRONLY, os.FileMode(0600))
require.NoError(t, err)
count, err := file.Write([]byte{42})
require.NoError(t, err)
require.Equal(t, 1, count)
// Try to read the metadata.
_, err = dataColumnStorage.metadata(file)
require.ErrorIs(t, err, errWrongVersion)
err = file.Close()
require.NoError(t, err)
})
}
func TestNewStorageIndices(t *testing.T) {
t.Run("wrong number of columns", func(t *testing.T) {
_, err := newStorageIndices(nil)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("nominal", func(t *testing.T) {
var indices [mandatoryNumberOfColumns]byte
indices[0] = 1
storageIndices, err := newStorageIndices(indices[:])
require.NoError(t, err)
require.Equal(t, indices, storageIndices.indices)
})
}
func TestStorageIndicesGet(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
var indices storageIndices
_, _, err := indices.get(1_000_000)
require.ErrorIs(t, errDataColumnIndexTooLarge, err)
})
t.Run("index not set", func(t *testing.T) {
const expected = false
var indices storageIndices
actual, _, err := indices.get(0)
require.NoError(t, err)
require.Equal(t, expected, actual)
})
t.Run("index set", func(t *testing.T) {
const (
expectedOk = true
expectedPosition = int64(3)
)
indices := storageIndices{indices: [mandatoryNumberOfColumns]byte{0, 131}}
actualOk, actualPosition, err := indices.get(1)
require.NoError(t, err)
require.Equal(t, expectedOk, actualOk)
require.Equal(t, expectedPosition, actualPosition)
})
}
func TestStorageIndicesLen(t *testing.T) {
const expected = int64(2)
indices := storageIndices{count: 2}
actual := indices.len()
require.Equal(t, expected, actual)
}
func TestStorageIndicesAll(t *testing.T) {
expectedIndices := []uint64{1, 3}
indices := storageIndices{indices: [mandatoryNumberOfColumns]byte{0, 131, 0, 128}}
actualIndices := indices.all()
require.DeepEqual(t, expectedIndices, actualIndices)
}
func TestStorageIndicesSet(t *testing.T) {
t.Run("data column index too large", func(t *testing.T) {
var indices storageIndices
err := indices.set(1_000_000, 0)
require.ErrorIs(t, errDataColumnIndexTooLarge, err)
})
t.Run("position too large", func(t *testing.T) {
var indices storageIndices
err := indices.set(0, 255)
require.ErrorIs(t, errDataColumnIndexTooLarge, err)
})
t.Run("nominal", func(t *testing.T) {
expected := [mandatoryNumberOfColumns]byte{0, 0, 128, 0, 131}
var storageIndices storageIndices
require.Equal(t, int64(0), storageIndices.len())
err := storageIndices.set(2, 1)
require.NoError(t, err)
require.Equal(t, int64(1), storageIndices.len())
err = storageIndices.set(4, 3)
require.NoError(t, err)
require.Equal(t, int64(2), storageIndices.len())
err = storageIndices.set(2, 0)
require.NoError(t, err)
require.Equal(t, int64(2), storageIndices.len())
actual := storageIndices.indices
require.Equal(t, expected, actual)
})
}
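A note on the encoding these storageIndices tests exercise: the fixtures (131 decodes to position 3, 128 to position 0, and setting a position of 255 fails) suggest each byte stores nonZeroOffset (128) plus the column's position within the file when the column is present, and 0 when it is absent. A minimal decode sketch under that assumption, not the authoritative definition:

package main

import "fmt"

// decodeIndex interprets one storageIndices byte under the assumed encoding:
// 0 means the column is not stored; any value >= 128 means it is stored at
// position value-128 within the file.
func decodeIndex(b byte) (ok bool, position int64) {
	const nonZeroOffset = 128
	if b < nonZeroOffset {
		return false, 0
	}
	return true, int64(b - nonZeroOffset)
}

func main() {
	fmt.Println(decodeIndex(0))   // false 0
	fmt.Println(decodeIndex(128)) // true 0
	fmt.Println(decodeIndex(131)) // true 3
}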
func TestPrune(t *testing.T) {
t.Run(("nothing to prune"), func(t *testing.T) {
dir := t.TempDir()
dataColumnStorage, err := NewDataColumnStorage(context.Background(), WithDataColumnBasePath(dir))
require.NoError(t, err)
dataColumnStorage.prune()
})
t.Run("nominal", func(t *testing.T) {
var compareSlices = func(left, right []string) bool {
if len(left) != len(right) {
return false
}
leftMap := make(map[string]bool, len(left))
for _, leftItem := range left {
leftMap[leftItem] = true
}
for _, rightItem := range right {
if _, ok := leftMap[rightItem]; !ok {
return false
}
}
return true
}
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{0}: {
{Slot: 33, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 1
{Slot: 33, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 1
},
{1}: {
{Slot: 128_002, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_002, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{2}: {
{Slot: 128_003, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_003, ColumnIndex: 3, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{3}: {
{Slot: 131_138, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
{Slot: 131_138, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{4}: {
{Slot: 131_169, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
{Slot: 131_169, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
},
{5}: {
{Slot: 262_144, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 2 - Epoch 8192
{Slot: 262_144, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 2 - Epoch 8192
},
},
)
dir := t.TempDir()
dataColumnStorage, err := NewDataColumnStorage(context.Background(), WithDataColumnBasePath(dir), WithDataColumnRetentionEpochs(10_000))
require.NoError(t, err)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
dirs, err := listDir(dataColumnStorage.fs, ".")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0", "1", "2"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"1", "4000"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"4099", "4098"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"8192"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0/1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0000000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0/4000")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{
"0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
"0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4098")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0300000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4099")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0400000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2/8192")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0500000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
_, verifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{6}: {{Slot: 451_141, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}}, // Period 3 - Epoch 14_098
},
)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
dataColumnStorage.prune()
dirs, err = listDir(dataColumnStorage.fs, ".")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"1", "2", "3"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"4099"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"8192"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "3")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"14098"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4099")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0400000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2/8192")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0500000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "3/14098")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0600000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
})
}
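As a worked example of the directory layout this test asserts on, the path of a sidecar file can be derived from its slot and block root as sketched below. The helper is illustrative and not part of this PR; it assumes mainnet's 32 slots per epoch and the documented rule that a period is the epoch divided by 4096.

package filesystem

import (
	"fmt"
	"path"
)

const (
	slotsPerEpochExample   = 32   // mainnet SLOTS_PER_EPOCH, assumed for illustration
	epochsPerPeriodExample = 4096 // as documented: period = epoch / 4096
)

// dataColumnFilePathExample maps a slot and block root to the
// "<period>/<epoch>/0x<root>.sszs" layout exercised by TestPrune above.
func dataColumnFilePathExample(slot uint64, blockRoot [32]byte) string {
	epoch := slot / slotsPerEpochExample
	period := epoch / epochsPerPeriodExample
	return path.Join(
		fmt.Sprintf("%d", period),
		fmt.Sprintf("%d", epoch),
		fmt.Sprintf("%#x.sszs", blockRoot[:]),
	)
}

// For instance, slot 451_141 with root {6} yields "3/14098/0x06...sszs",
// matching the expectations at the end of the test.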

View File

@@ -0,0 +1,104 @@
package filesystem
// nolint:dupword
/*
Data column sidecars storage documentation
==========================================
File organisation
-----------------
- The first byte represents the version of the file structure (up to 0xff = 255).
We set it to 0x01.
Note: This is not strictly needed, but it will help a lot if, in the future,
we want to modify the file structure.
- The next 4 bytes represent the size of an SSZ encoded data column sidecar.
(See the `Computation of the maximum size of a DataColumnSidecar` section for a description
of how this value is computed.)
- The next 128 bytes form an index table: byte `i` describes the column with index `i`.
The first (most significant) bit of each byte is set to 0 if the column is absent from the file,
and set to 1 if the column is present.
The remaining 7 bits (values 0 to 127) give the position of the column's data in the file.
This sentinel bit is needed to distinguish between a column stored at position 0 and a missing column.
Example: If the column with index 5 is stored at the 3rd position in the file, then indices[5] = 0x80 + 0x03 = 0x83.
- The rest of the file is the concatenation of the SSZ encoded data column sidecars.
|-------------------------------------------|---------------------------------------------------------------------|
| Byte offset                               | Description                                                          |
|-------------------------------------------|---------------------------------------------------------------------|
| 0                                         | version (1 byte)                                                     |
| 1                                         | sszEncodedDataColumnSidecarSize (4 bytes)                            |
| 5                                         | indices (128 bytes)                                                  |
| 133 + 0*sszEncodedDataColumnSidecarSize   | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes)  |
| 133 + 1*sszEncodedDataColumnSidecarSize   | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes)  |
| 133 + 2*sszEncodedDataColumnSidecarSize   | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes)  |
| ...                                       | ...                                                                  |
| 133 + 127*sszEncodedDataColumnSidecarSize | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes)  |
|-------------------------------------------|---------------------------------------------------------------------|
Each file is named after the root of the block to which the data columns are committed.
Example: `0x259c6d2f6a0bb75e2405cea7cb248e5663dc26b9404fd3bcd777afc20de91c1e.sszs`
Database organisation
---------------------
SSZ encoded data column sidecars are stored following the `by-epoch` layout.
- The first layer is a directory corresponding to the `period`, which is the epoch divided by 4096.
- The second layer is a directory corresponding to the epoch.
- Then all files are stored in the epoch directory.
Example:
data-columns
├── 0
│   ├── 3638
│   │   ├── 0x259c6d2f6a0bb75e2405cea7cb248e5663dc26b9404fd3bcd777afc20de91c1e.sszs
│   │   ├── 0x2a855b1f6e9a2f04f8383e336325bf7d5ba02d1eab3ef90ef183736f8c768533.sszs
│   │   ├── ...
│   │   ├── 0xeb78e2b2350a71c640f1e96fea9e42f38e65705ab7e6e100c8bc9c589f2c5f2b.sszs
│   │   └── 0xeb7ee68da988fd20d773d45aad01dd62527734367a146e2b048715bd68a4e370.sszs
│   └── 3639
│      ├── 0x0fd231fe95e57936fa44f6c712c490b9e337a481b661dfd46768901e90444330.sszs
│      ├── 0x1bf5edff6b6ba2b65b1db325ff3312bbb57da461ef2ae651bd741af851aada3a.sszs
│      ├── ...
│      ├── 0xa156a527e631f858fee79fab7ef1fde3f6117a2e1201d47c09fbab0c6780c937.sszs
│      └── 0xcd80bc535ddc467dea1d19e0c39c1160875ccd1989061bcd8ce206e3c1261c87.sszs
└── 1
├── 4096
│   ├── 0x0d244009093e2bedb72eb265280290199e8c7bf1d90d7583c41af40d9f662269.sszs
│   ├── 0x11f420928d8de41c50e735caab0369996824a5299c5f054e097965855925697d.sszs
│   ├── ...
│   ├── 0xbe91fc782877ed400d95c02c61aebfdd592635d11f8e64c94b46abd84f45c967.sszs
│   └── 0xf246189f078f02d30173ff74605cf31c9e65b5e463275ebdbeb40476638135ff.sszs
└── 4097
   ├── 0x454d000674793c479e90504c0fe9827b50bb176ae022dab4e37d6a21471ab570.sszs
   ├── 0xac5eb7437d7190c48cfa863e3c45f96a7f8af371d47ac12ccda07129a06af763.sszs
   ├── ...
   ├── 0xb7df30561d9d92ab5fafdd96bca8b44526497c8debf0fc425c7a0770b2abeb83.sszs
   └── 0xc1dd0b1ae847b6ec62303a36d08c6a4a2e9e3ec4be3ff70551972a0ee3de9c14.sszs
Computation of the maximum size of a DataColumnSidecar
------------------------------------------------------
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#datacolumnsidecar
class DataColumnSidecar(Container):
index: ColumnIndex # Index of column in extended matrix
column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
signed_block_header: SignedBeaconBlockHeader
kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
- index: 8 bytes (ColumnIndex)
- `column`: 4,096 (MAX_BLOB_COMMITMENTS_PER_BLOCK) * 64 (FIELD_ELEMENTS_PER_CELL) * 32 bytes (BYTES_PER_FIELD_ELEMENT) = 8,388,608 bytes
- kzg_commitments: 4,096 (MAX_BLOB_COMMITMENTS_PER_BLOCK) * 48 bytes (KZGCommitment) = 196,608 bytes
- kzg_proofs: 4,096 (MAX_BLOB_COMMITMENTS_PER_BLOCK) * 48 bytes (KZGProof) = 196,608 bytes
- signed_block_header: 8 bytes (Slot) + 8 bytes (ValidatorIndex) + 3 * 32 bytes (Root) + 96 bytes (BLSSignature) = 208 bytes
- kzg_commitments_inclusion_proof: 4 (KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH) * 32 bytes = 128 bytes
TOTAL: 8,782,168 bytes = 70,257,344 bits
log(70,257,344) / log(2) ~= 26.07
==> 32 bits (4 bytes) are enough to store the maximum size of a data column sidecar.
The maximum size that the 4-byte field can represent is 2**32 bits = 536,870,912 bytes,
which leaves 536,870,912 bytes - 8,782,168 bytes ~= 503 megabytes of room for the extra data needed by SSZ encoding (which is more than enough).
*/
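To make the fixed-size prefix described above concrete, here is a small illustrative decoder for the 133-byte header (1 version byte, 4-byte size, 128-byte index table). The function name and the little-endian byte order of the size field are assumptions made for this sketch, not something stated in the diff.

package filesystem

import (
	"encoding/binary"
	"errors"
	"io"
)

const (
	exampleVersion      = 0x01
	exampleHeaderLength = 1 + 4 + 128 // version + size + index table
)

// readDataColumnFileHeaderExample parses the fixed prefix of a .sszs file as
// documented above and returns the SSZ-encoded sidecar size plus the raw index table.
func readDataColumnFileHeaderExample(r io.Reader) (sidecarSize uint32, indices [128]byte, err error) {
	var header [exampleHeaderLength]byte
	if _, err = io.ReadFull(r, header[:]); err != nil {
		return 0, indices, err
	}
	if header[0] != exampleVersion {
		return 0, indices, errors.New("wrong file structure version")
	}
	sidecarSize = binary.LittleEndian.Uint32(header[1:5]) // byte order assumed for this sketch
	copy(indices[:], header[5:])
	// A sidecar stored at position p then starts at offset 133 + p*int64(sidecarSize).
	return sidecarSize, indices, nil
}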

View File

@@ -6,6 +6,7 @@ import (
)
var (
// Blobs
blobBuckets = []float64{3, 5, 7, 9, 11, 13}
blobSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_save_latency",
@@ -33,4 +34,29 @@ var (
Name: "blob_disk_bytes",
Help: "Approximate number of bytes occupied by blobs in storage",
})
// Data columns
dataColumnBuckets = []float64{3, 5, 7, 9, 11, 13}
dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_save_latency",
Help: "Latency of DataColumnSidecar storage save operations in milliseconds",
Buckets: dataColumnBuckets,
})
dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_get_latency",
Help: "Latency of DataColumnSidecar storage get operations in milliseconds",
Buckets: dataColumnBuckets,
})
dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_pruned",
Help: "Number of DataColumnSidecar pruned.",
})
dataColumnWrittenCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_written",
Help: "Number of DataColumnSidecar written",
})
dataColumnDiskCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "data_column_disk_count",
Help: "Approximate number of data columns in storage",
})
)
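These data column metrics mirror the blob metrics above. For orientation, a hedged sketch of how a save path would feed them; the wrapper below is illustrative and not part of this diff, but it records the latency in milliseconds as the Help text states.

package filesystem

import "time"

// observeDataColumnSave times an arbitrary save closure, records the latency
// histogram in milliseconds, and bumps the written counter on success.
func observeDataColumnSave(save func() error, sidecarCount int) error {
	start := time.Now()
	err := save()
	dataColumnSaveLatency.Observe(float64(time.Since(start).Milliseconds()))
	if err == nil {
		dataColumnWrittenCounter.Add(float64(sidecarCount))
	}
	return err
}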

View File

@@ -1,6 +1,7 @@
package filesystem
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -9,6 +10,9 @@ import (
"github.com/spf13/afero"
)
// Blobs
// -----
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
@@ -75,3 +79,41 @@ func NewMockBlobStorageSummarizer(t *testing.T, set map[[32]byte][]int) BlobStor
}
return c
}
// Data columns
// ------------
// NewEphemeralDataColumnStorage should only be used for tests.
// The instance of DataColumnStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralDataColumnStorage(t testing.TB, opts ...DataColumnStorageOption) *DataColumnStorage {
return NewWarmedEphemeralDataColumnStorageUsingFs(t, afero.NewMemMapFs(), opts...)
}
// NewEphemeralDataColumnStorageAndFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the DataColumnStorage API.
func NewEphemeralDataColumnStorageAndFs(t testing.TB, opts ...DataColumnStorageOption) (afero.Fs, *DataColumnStorage) {
fs := afero.NewMemMapFs()
dcs := NewWarmedEphemeralDataColumnStorageUsingFs(t, fs, opts...)
return fs, dcs
}
func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
bs := NewEphemeralDataColumnStorageUsingFs(t, fs, opts...)
bs.WarmCache()
return bs
}
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithDataColumnFs(fs),
)
bs, err := NewDataColumnStorage(context.Background(), opts...)
if err != nil {
t.Fatal(err)
}
return bs
}
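A typical test-side use of these helpers, sketched here for review: the test name is made up, while the sidecar construction and the expected path mirror TestMetadata earlier in this diff (the default slot is 0, so the file lands under "0/0/0x<root>.sszs").

package filesystem

import (
	"testing"

	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
	"github.com/spf13/afero"
)

func TestExampleDataColumnRoundTrip(t *testing.T) {
	// In-memory storage plus direct access to the virtual filesystem for assertions.
	fs, storage := NewEphemeralDataColumnStorageAndFs(t)
	_, sidecars := util.CreateTestVerifiedRoDataColumnSidecars(
		t,
		util.DataColumnsParamsByRoot{
			{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
		},
	)
	require.NoError(t, storage.Save(sidecars))
	_, err := afero.ReadFile(fs, "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs")
	require.NoError(t, err)
}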

View File

@@ -267,7 +267,7 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockSlotDelay returns the number of slots that the highest
// HighestReceivedBlockDelay returns the number of slots that the highest
// received block was late when receiving it
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode

View File

@@ -55,6 +55,7 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -4,6 +4,7 @@ import (
"reflect"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -43,6 +44,18 @@ const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
// LightClientBootstrapName is the name for the LightClientBootstrap message topic,
const LightClientBootstrapName = "/light_client_bootstrap"
// LightClientUpdatesByRangeName is the name for the LightClientUpdatesByRange topic.
const LightClientUpdatesByRangeName = "/light_client_updates_by_range"
// LightClientFinalityUpdateName is the name for the LightClientFinalityUpdate topic.
const LightClientFinalityUpdateName = "/light_client_finality_update"
// LightClientOptimisticUpdateName is the name for the LightClientOptimisticUpdate topic.
const LightClientOptimisticUpdateName = "/light_client_optimistic_update"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -66,6 +79,15 @@ const (
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCLightClientBootstrapTopicV1 is a topic for requesting a light client bootstrap.
RPCLightClientBootstrapTopicV1 = protocolPrefix + LightClientBootstrapName + SchemaVersionV1
// RPCLightClientUpdatesByRangeTopicV1 is a topic for requesting light client updates by range.
RPCLightClientUpdatesByRangeTopicV1 = protocolPrefix + LightClientUpdatesByRangeName + SchemaVersionV1
// RPCLightClientFinalityUpdateTopicV1 is a topic for requesting a light client finality update.
RPCLightClientFinalityUpdateTopicV1 = protocolPrefix + LightClientFinalityUpdateName + SchemaVersionV1
// RPCLightClientOptimisticUpdateTopicV1 is a topic for requesting a light client Optimistic update.
RPCLightClientOptimisticUpdateTopicV1 = protocolPrefix + LightClientOptimisticUpdateName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
RPCBlocksByRangeTopicV2 = protocolPrefix + BeaconBlocksByRangeMessageName + SchemaVersionV2
@@ -101,6 +123,12 @@ var RPCTopicMappings = map[string]interface{}{
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
// Light client
RPCLightClientBootstrapTopicV1: new([fieldparams.RootLength]byte),
RPCLightClientUpdatesByRangeTopicV1: new(pb.LightClientUpdatesByRangeRequest),
RPCLightClientFinalityUpdateTopicV1: new(interface{}),
RPCLightClientOptimisticUpdateTopicV1: new(interface{}),
}
// Maps all registered protocol prefixes.
@@ -111,14 +139,18 @@ var protocolMapping = map[string]bool{
// Maps all the protocol message names for the different rpc
// topics.
var messageMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
LightClientBootstrapName: true,
LightClientUpdatesByRangeName: true,
LightClientFinalityUpdateName: true,
LightClientOptimisticUpdateName: true,
}
// Maps all the RPC messages which are to be updated in Altair.
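For reference, the full protocol id a peer dials for these new topics is the constant above plus the encoding suffix appended at registration time. A small sketch follows; the "/ssz_snappy" suffix is the usual default encoder and is an assumption here, not something this hunk shows.

package p2p

// Illustrative composition of the wire protocol id for the bootstrap request:
//
//	protocolPrefix           = "/eth2/beacon_chain/req"
//	LightClientBootstrapName = "/light_client_bootstrap"
//	SchemaVersionV1          = "/1"
//
// so RPCLightClientBootstrapTopicV1 plus the encoding suffix gives
// "/eth2/beacon_chain/req/light_client_bootstrap/1/ssz_snappy".
func exampleBootstrapProtocolID() string {
	const assumedEncodingSuffix = "/ssz_snappy"
	return RPCLightClientBootstrapTopicV1 + assumedEncodingSuffix
}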

View File

@@ -87,6 +87,7 @@ go_test(
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/features:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -22,6 +22,7 @@ import (
validatorv1alpha1 "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/v1alpha1/validator"
validatorprysm "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/validator"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
@@ -95,14 +96,19 @@ func (s *Service) endpoints(
endpoints = append(endpoints, s.nodeEndpoints()...)
endpoints = append(endpoints, s.beaconEndpoints(ch, stater, blocker, validatorServer, coreService)...)
endpoints = append(endpoints, s.configEndpoints()...)
endpoints = append(endpoints, s.lightClientEndpoints(blocker, stater)...)
endpoints = append(endpoints, s.eventsEndpoints()...)
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
if features.Get().EnableLightClient {
endpoints = append(endpoints, s.lightClientEndpoints(blocker, stater)...)
}
if enableDebug {
endpoints = append(endpoints, s.debugEndpoints(stater)...)
}
return endpoints
}

View File

@@ -6,6 +6,7 @@ import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/testing/assert"
)
@@ -139,27 +140,56 @@ func Test_endpoints(t *testing.T) {
"/prysm/v1/validators/{state_id}/active_set_changes": {http.MethodGet},
}
s := &Service{cfg: &Config{}}
endpoints := s.endpoints(true, nil, nil, nil, nil, nil, nil)
actualRoutes := make(map[string][]string, len(endpoints))
for _, e := range endpoints {
if _, ok := actualRoutes[e.template]; ok {
actualRoutes[e.template] = append(actualRoutes[e.template], e.methods...)
} else {
actualRoutes[e.template] = e.methods
}
}
expectedRoutes := make(map[string][]string)
for _, m := range []map[string][]string{
beaconRoutes, builderRoutes, configRoutes, debugRoutes, eventsRoutes,
nodeRoutes, validatorRoutes, rewardsRoutes, lightClientRoutes, blobRoutes,
prysmValidatorRoutes, prysmNodeRoutes, prysmBeaconRoutes,
} {
maps.Copy(expectedRoutes, m)
testCases := []struct {
name string
flag *features.Flags
additionalExpectedRoutes []map[string][]string
}{
{
name: "no flags",
},
{
name: "light client enabled",
flag: &features.Flags{
EnableLightClient: true,
},
additionalExpectedRoutes: []map[string][]string{
lightClientRoutes,
},
},
}
assert.Equal(t, true, maps.EqualFunc(expectedRoutes, actualRoutes, func(actualMethods []string, expectedMethods []string) bool {
return slices.Equal(expectedMethods, actualMethods)
}))
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
resetFn := features.InitWithReset(tc.flag)
defer resetFn()
s := &Service{cfg: &Config{}}
endpoints := s.endpoints(true, nil, nil, nil, nil, nil, nil)
actualRoutes := make(map[string][]string, len(endpoints))
for _, e := range endpoints {
if _, ok := actualRoutes[e.template]; ok {
actualRoutes[e.template] = append(actualRoutes[e.template], e.methods...)
} else {
actualRoutes[e.template] = e.methods
}
}
expectedRoutes := make(map[string][]string)
for _, m := range []map[string][]string{
beaconRoutes, builderRoutes, configRoutes, debugRoutes, eventsRoutes,
nodeRoutes, validatorRoutes, rewardsRoutes, blobRoutes,
prysmValidatorRoutes, prysmNodeRoutes, prysmBeaconRoutes,
} {
maps.Copy(expectedRoutes, m)
}
for _, m := range tc.additionalExpectedRoutes {
maps.Copy(expectedRoutes, m)
}
assert.Equal(t, true, maps.EqualFunc(expectedRoutes, actualRoutes, func(actualMethods []string, expectedMethods []string) bool {
return slices.Equal(expectedMethods, actualMethods)
}))
})
}
}

View File

@@ -265,7 +265,7 @@ func TestBlobs(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("blob index over max", func(t *testing.T) {
overLimit := params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Deneb)
overLimit := maxBlobsPerBlockByVersion(version.Deneb)
u := fmt.Sprintf("http://foo.example/123?indices=%d", overLimit)
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
@@ -412,10 +412,14 @@ func TestBlobs_Electra(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 0
cfg.ElectraForkEpoch = 1
cfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 0, MaxBlobsPerBlock: 6},
{Epoch: 1, MaxBlobsPerBlock: 9},
}
params.OverrideBeaconConfig(cfg)
db := testDB.SetupDB(t)
electraBlock, blobs := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, 123, params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra))
electraBlock, blobs := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, 123, maxBlobsPerBlockByVersion(version.Electra))
require.NoError(t, db.SaveBlock(context.Background(), electraBlock))
bs := filesystem.NewEphemeralBlobStorage(t)
testSidecars := verification.FakeVerifySliceForTest(t, blobs)
@@ -451,7 +455,7 @@ func TestBlobs_Electra(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra), len(resp.Data))
require.Equal(t, maxBlobsPerBlockByVersion(version.Electra), len(resp.Data))
sidecar := resp.Data[0]
require.NotNil(t, sidecar)
assert.Equal(t, "0", sidecar.Index)
@@ -464,7 +468,7 @@ func TestBlobs_Electra(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("requested blob index at max", func(t *testing.T) {
limit := params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra) - 1
limit := maxBlobsPerBlockByVersion(version.Electra) - 1
u := fmt.Sprintf("http://foo.example/123?indices=%d", limit)
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
@@ -496,7 +500,7 @@ func TestBlobs_Electra(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("blob index over max", func(t *testing.T) {
overLimit := params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra)
overLimit := maxBlobsPerBlockByVersion(version.Electra)
u := fmt.Sprintf("http://foo.example/123?indices=%d", overLimit)
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
@@ -554,3 +558,11 @@ func Test_parseIndices(t *testing.T) {
})
}
}
func maxBlobsPerBlockByVersion(v int) int {
if v >= version.Electra {
return params.BeaconConfig().DeprecatedMaxBlobsPerBlockElectra
}
return params.BeaconConfig().DeprecatedMaxBlobsPerBlock
}

View File

@@ -17,7 +17,6 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
@@ -44,7 +43,6 @@ go_test(
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -8,7 +8,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
@@ -22,11 +21,6 @@ import (
// GetLightClientBootstrap - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/bootstrap.yaml
func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
// Prepare
ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientBootstrap")
defer span.End()
@@ -74,11 +68,6 @@ func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Reques
// GetLightClientUpdatesByRange - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/updates.yaml
func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientUpdatesByRange")
defer span.End()
@@ -187,11 +176,6 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
// GetLightClientFinalityUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/finality_update.yaml
func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
_, span := trace.StartSpan(req.Context(), "beacon.GetLightClientFinalityUpdate")
defer span.End()
@@ -225,11 +209,6 @@ func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.R
// GetLightClientOptimisticUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/optimistic_update.yaml
func (s *Server) GetLightClientOptimisticUpdate(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
_, span := trace.StartSpan(req.Context(), "beacon.GetLightClientOptimisticUpdate")
defer span.End()

View File

@@ -17,7 +17,6 @@ import (
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
dbtesting "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -34,11 +33,6 @@ import (
)
func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
@@ -454,11 +448,6 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
}
func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
helpers.ClearCache()
ctx := context.Background()
@@ -1501,10 +1490,6 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
}
func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
helpers.ClearCache()
t.Run("no update", func(t *testing.T) {
@@ -1589,10 +1574,6 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
}
func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
helpers.ClearCache()
t.Run("no update", func(t *testing.T) {

View File

@@ -26,6 +26,7 @@ go_library(
"rpc_blob_sidecars_by_root.go",
"rpc_chunked_response.go",
"rpc_goodbye.go",
"rpc_light_client.go",
"rpc_metadata.go",
"rpc_ping.go",
"rpc_send_request.go",
@@ -114,6 +115,7 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/equality:go_default_library",
"//io/file:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
@@ -168,6 +170,7 @@ go_test(
"rpc_blob_sidecars_by_root_test.go",
"rpc_goodbye_test.go",
"rpc_handler_test.go",
"rpc_light_client_test.go",
"rpc_metadata_test.go",
"rpc_ping_test.go",
"rpc_send_request_test.go",
@@ -231,6 +234,7 @@ go_test(
"//beacon-chain/verification:go_default_library",
"//cache/lru:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -80,6 +80,12 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
// BlobSidecarsByRangeV1
topicMap[addEncoding(p2p.RPCBlobSidecarsByRangeTopicV1)] = blobCollector
// Light client requests
topicMap[addEncoding(p2p.RPCLightClientBootstrapTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientUpdatesByRangeTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientOptimisticUpdateTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientFinalityUpdateTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// General topic for all rpc requests.
topicMap[rpcLimiterTopic] = leakybucket.NewCollector(5, defaultBurstLimit*2, leakyBucketPeriod, false /* deleteEmptyBuckets */)

View File

@@ -18,7 +18,7 @@ import (
func TestNewRateLimiter(t *testing.T) {
rlimiter := newRateLimiter(mockp2p.NewTestP2P(t))
assert.Equal(t, len(rlimiter.limiterMap), 12, "correct number of topics not registered")
assert.Equal(t, len(rlimiter.limiterMap), 16, "correct number of topics not registered")
}
func TestNewRateLimiter_FreeCorrectly(t *testing.T) {

View File

@@ -9,6 +9,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
@@ -70,14 +71,23 @@ func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandle
// Bellatrix: https://github.com/ethereum/consensus-specs/blob/dev/specs/bellatrix/p2p-interface.md#messages
// Altair: https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/p2p-interface.md#messages
if forkIndex >= version.Altair {
return map[string]rpcHandler{
handler := map[string]rpcHandler{
p2p.RPCStatusTopicV1: s.statusRPCHandler,
p2p.RPCGoodByeTopicV1: s.goodbyeRPCHandler,
p2p.RPCBlocksByRangeTopicV2: s.beaconBlocksByRangeRPCHandler, // Updated in Altair and modified in Capella
p2p.RPCBlocksByRootTopicV2: s.beaconBlocksRootRPCHandler, // Updated in Altair and modified in Capella
p2p.RPCPingTopicV1: s.pingHandler,
p2p.RPCMetaDataTopicV2: s.metaDataHandler, // Updated in Altair
}, nil
}
if features.Get().EnableLightClient {
handler[p2p.RPCLightClientBootstrapTopicV1] = s.lightClientBootstrapRPCHandler
handler[p2p.RPCLightClientUpdatesByRangeTopicV1] = s.lightClientUpdatesByRangeRPCHandler
handler[p2p.RPCLightClientFinalityUpdateTopicV1] = s.lightClientFinalityUpdateRPCHandler
handler[p2p.RPCLightClientOptimisticUpdateTopicV1] = s.lightClientOptimisticUpdateRPCHandler
}
return handler, nil
}
// PhaseO: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#messages
@@ -246,9 +256,11 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
// Increment message received counter.
messageReceivedCounter.WithLabelValues(topic).Inc()
// since metadata requests do not have any data in the payload, we
// since some requests do not have any data in the payload, we
// do not decode anything.
if baseTopic == p2p.RPCMetaDataTopicV1 || baseTopic == p2p.RPCMetaDataTopicV2 {
if baseTopic == p2p.RPCMetaDataTopicV1 || baseTopic == p2p.RPCMetaDataTopicV2 ||
baseTopic == p2p.RPCLightClientOptimisticUpdateTopicV1 ||
baseTopic == p2p.RPCLightClientFinalityUpdateTopicV1 {
if err := handle(ctx, base, stream); err != nil {
messageFailedProcessingCounter.WithLabelValues(topic).Inc()
if !errors.Is(err, p2ptypes.ErrWrongForkDigestVersion) {

View File

@@ -161,3 +161,80 @@ func WriteBlobSidecarChunk(stream libp2pcore.Stream, tor blockchain.TemporalOrac
_, err = encoding.EncodeWithMaxLength(stream, sidecar)
return err
}
func WriteLightClientBootstrapChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, bootstrap interfaces.LightClientBootstrap) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(bootstrap.Header().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, bootstrap)
return err
}
func WriteLightClientUpdateChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, update interfaces.LightClientUpdate) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
func WriteLightClientOptimisticUpdateChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, update interfaces.LightClientOptimisticUpdate) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
func WriteLightClientFinalityUpdateChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, update interfaces.LightClientFinalityUpdate) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
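The four write helpers above all emit the same chunk shape on the wire: one result byte, a four-byte fork-digest context, then the length-prefixed SSZ payload. Below is a reader-side sketch of the fixed part using only the standard library; the zero value for the success code is an assumption of this sketch, and real readers go through readContextFromStream and the configured NetworkEncoding.

package sync

import (
	"errors"
	"io"
)

// readLightClientChunkHeaderExample consumes the fixed prefix written by the
// WriteLightClient*Chunk helpers: <result byte><4-byte fork digest>. The bytes
// that follow on the stream are the encoder-framed SSZ payload.
func readLightClientChunkHeaderExample(r io.Reader) (forkDigest [4]byte, err error) {
	var result [1]byte
	if _, err = io.ReadFull(r, result[:]); err != nil {
		return forkDigest, err
	}
	if result[0] != 0x00 { // assumed value of responseCodeSuccess
		return forkDigest, errors.New("non-success response code")
	}
	_, err = io.ReadFull(r, forkDigest[:])
	return forkDigest, err
}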

View File

@@ -0,0 +1,222 @@
package sync
import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
libp2pcore "github.com/libp2p/go-libp2p/core"
)
// lightClientBootstrapRPCHandler handles the /eth2/beacon_chain/req/light_client_bootstrap/1/ RPC request.
func (s *Service) lightClientBootstrapRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientBootstrapRPCHandler")
defer span.End()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientBootstrapName[1:])
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
rawMsg, ok := msg.(*[fieldparams.RootLength]byte)
if !ok {
logger.Error("Message is not *types.LightClientBootstrapReq")
return fmt.Errorf("message is not type %T", &[fieldparams.RootLength]byte{})
}
blkRoot := *rawMsg
bootstrap, err := s.cfg.beaconDB.LightClientBootstrap(ctx, blkRoot[:])
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("s.cfg.beaconDB.LightClientBootstrap")
return err
}
if bootstrap == nil {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Errorf("nil bootstrap for root %#x", blkRoot)
return nil
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err = WriteLightClientBootstrapChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), bootstrap); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientBootstrapChunk")
return err
}
logger.Info("lightClientBootstrapRPCHandler completed")
closeStream(stream, logger)
return nil
}
// lightClientUpdatesByRangeRPCHandler handles the /eth2/beacon_chain/req/light_client_updates_by_range/1/ RPC request.
func (s *Service) lightClientUpdatesByRangeRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientUpdatesByRangeRPCHandler")
defer span.End()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientUpdatesByRangeName[1:])
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
r, ok := msg.(*eth.LightClientUpdatesByRangeRequest)
if !ok {
logger.Error("Message is not *eth.LightClientUpdatesByRangeReq")
return fmt.Errorf("message is not type %T", &eth.LightClientUpdatesByRangeRequest{})
}
if r.Count == 0 {
s.writeErrorResponseToStream(responseCodeInvalidRequest, "count is 0", stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
logger.Error("Count is 0")
return nil
}
if r.Count > params.BeaconConfig().MaxRequestLightClientUpdates {
r.Count = params.BeaconConfig().MaxRequestLightClientUpdates
}
endPeriod, err := math.Add64(r.StartPeriod, r.Count-1)
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
tracing.AnnotateError(span, err)
logger.WithError(err).Error("End period overflows")
return err
}
logger.Infof("LC: requesting updates by range (StartPeriod: %d, EndPeriod: %d)", r.StartPeriod, r.StartPeriod+r.Count-1)
updates, err := s.cfg.beaconDB.LightClientUpdates(ctx, r.StartPeriod, endPeriod)
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("s.cfg.beaconDB.LightClientUpdates")
return err
}
if len(updates) == 0 {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Debugf("No update available for start period %d", r.StartPeriod)
return nil
}
for i := r.StartPeriod; i <= endPeriod; i++ {
u, ok := updates[i]
if !ok {
break
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err = WriteLightClientUpdateChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), u); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientUpdateChunk")
return err
}
s.rateLimiter.add(stream, 1)
}
logger.Info("lightClientUpdatesByRangeRPCHandler completed")
closeStream(stream, logger)
return nil
}
// lightClientFinalityUpdateRPCHandler handles the /eth2/beacon_chain/req/light_client_finality_update/1/ RPC request.
func (s *Service) lightClientFinalityUpdateRPCHandler(ctx context.Context, _ interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientFinalityUpdateRPCHandler")
defer span.End()
_, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientFinalityUpdateName[1:])
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
if s.lcStore.LastFinalityUpdate() == nil {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Error("No finality update available")
return nil
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err := WriteLightClientFinalityUpdateChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), s.lcStore.LastFinalityUpdate()); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientFinalityUpdateChunk")
return err
}
logger.Info("lightClientFinalityUpdateRPCHandler completed")
closeStream(stream, logger)
return nil
}
// lightClientOptimisticUpdateRPCHandler handles the /eth2/beacon_chain/req/light_client_optimistic_update/1/ RPC request.
func (s *Service) lightClientOptimisticUpdateRPCHandler(ctx context.Context, _ interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientOptimisticUpdateRPCHandler")
defer span.End()
_, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientOptimisticUpdateName[1:])
logger.Info("lightClientOptimisticUpdateRPCHandler invoked")
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
if s.lcStore.LastOptimisticUpdate() == nil {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Error("No optimistic update available")
return nil
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err := WriteLightClientOptimisticUpdateChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), s.lcStore.LastOptimisticUpdate()); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientOptimisticUpdateChunk")
return err
}
logger.Info("lightClientOptimisticUpdateRPCHandler completed")
closeStream(stream, logger)
return nil
}

View File

@@ -0,0 +1,521 @@
package sync
import (
"context"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
db "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/network/forks"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
)
func TestRPC_LightClientBootstrap(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientBootstrapTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
l := util.NewTestLightClient(t, i)
bootstrap, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.State.Slot(), l.State, l.Block)
require.NoError(t, err)
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, r.cfg.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap))
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
var resSSZ []byte
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientBootstrapAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientBootstrapAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientBootstrapCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientBootstrapDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientBootstrapElectra
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
bootstrapSSZ, err := bootstrap.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, bootstrapSSZ)
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
err = r.lightClientBootstrapRPCHandler(ctx, &blockRoot, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}
func TestRPC_LightClientOptimisticUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientOptimisticUpdateTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
l := util.NewTestLightClient(t, i)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
r.lcStore.SetLastOptimisticUpdate(update)
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
var resSSZ []byte
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
updateSSZ, err := update.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, updateSSZ)
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
err = r.lightClientOptimisticUpdateRPCHandler(ctx, nil, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}
func TestRPC_LightClientFinalityUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientFinalityUpdateTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
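// Run the sub-test for each fork from Altair through Electra.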
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
l := util.NewTestLightClient(t, i)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
r.lcStore.SetLastFinalityUpdate(update)
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
var resSSZ []byte
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateElectra
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
updateSSZ, err := update.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, updateSSZ)
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
err = r.lightClientFinalityUpdateRPCHandler(ctx, nil, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}
func TestRPC_LightClientUpdatesByRange(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientUpdatesByRangeTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
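// Run the sub-test for each fork from Altair through Electra.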
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
for j := 0; j < 5; j++ {
l := util.NewTestLightClient(t, i, util.WithIncreasedAttestedSlot(uint64(j)))
update, err := lightClient.NewLightClientUpdateFromBeaconState(ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, r.cfg.beaconDB.SaveLightClientUpdate(ctx, uint64(j), update))
}
var wg sync.WaitGroup
wg.Add(1)
responseCounter := 0
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
var resSSZ []byte
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientUpdateCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientUpdateElectra
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
updates, err := r.cfg.beaconDB.LightClientUpdates(ctx, 0, 4)
require.NoError(t, err)
expectedUpdateSSZ, err := updates[uint64(responseCounter)].MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, expectedUpdateSSZ)
responseCounter++
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
msg := pb.LightClientUpdatesByRangeRequest{
StartPeriod: 0,
Count: 5,
}
err = r.lightClientUpdatesByRangeRPCHandler(ctx, &msg, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}

View File

@@ -6,6 +6,7 @@ go_library(
"batch.go",
"blob.go",
"cache.go",
"data_column.go",
"error.go",
"fake.go",
"filesystem.go",
@@ -21,12 +22,14 @@ go_library(
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//cache/lru:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -51,16 +54,21 @@ go_test(
"batch_test.go",
"blob_test.go",
"cache_test.go",
"data_column_test.go",
"initializer_test.go",
"result_test.go",
"verification_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -32,7 +32,7 @@ type BlobBatchVerifier struct {
}
// VerifiedROBlobs satisfies the das.BlobBatchVerifier interface, used by das.AvailabilityStore.
func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
func (batch *BlobBatchVerifier) VerifiedROBlobs(_ context.Context, blk blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
if len(scs) == 0 {
return nil, nil
}

View File

@@ -25,6 +25,10 @@ const (
RequireSidecarInclusionProven
RequireSidecarKzgProofVerified
RequireSidecarProposerExpected
// Data columns specific.
RequireValidFields
RequireCorrectSubnet
)
var allBlobSidecarRequirements = []Requirement{

View File

@@ -21,7 +21,8 @@ import (
)
const (
DefaultSignatureCacheSize = 256
DefaultInclusionProofCacheSize = 2
)
// ValidatorAtIndexer defines the method needed to retrieve a validator by its index.
@@ -73,6 +74,14 @@ type sigCache struct {
getFork forkLookup
}
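// inclusionProofCache caches keys of KZG commitments inclusion proofs that have already been verified, so repeated sidecars can skip the Merkle proof check.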
type inclusionProofCache struct {
*lru.Cache
}
func newInclusionProofCache(size int) *inclusionProofCache {
return &inclusionProofCache{Cache: lruwrpr.New(size)}
}
// VerifySignature verifies the given signature data against the key obtained via ValidatorAtIndexer.
func (c *sigCache) VerifySignature(sig SignatureData, v ValidatorAtIndexer) (err error) {
defer func() {

View File

@@ -0,0 +1,535 @@
package verification
import (
"context"
"fmt"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/runtime/logging"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
var (
// GossipDataColumnSidecarRequirements defines the set of requirements that DataColumnSidecars received on gossip
// must satisfy in order to upgrade an RODataColumn to a VerifiedRODataColumn.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#data_column_sidecar_subnet_id
GossipDataColumnSidecarRequirements = []Requirement{
RequireValidFields,
RequireCorrectSubnet,
RequireNotFromFutureSlot,
RequireSlotAboveFinalized,
RequireValidProposerSignature,
RequireSidecarParentSeen,
RequireSidecarParentValid,
RequireSidecarParentSlotLower,
RequireSidecarDescendsFromFinalized,
RequireSidecarInclusionProven,
RequireSidecarKzgProofVerified,
RequireSidecarProposerExpected,
}
errColumnsInvalid = errors.New("data columns failed verification")
errBadTopicLength = errors.New("topic length is invalid")
errBadTopic = errors.New("topic is not the one expected")
)
type (
RODataColumnsVerifier struct {
*sharedResources
results *results
dataColumns []blocks.RODataColumn
verifyDataColumnsCommitment rodataColumnsCommitmentVerifier
stateByRoot map[[fieldparams.RootLength]byte]state.BeaconState
}
rodataColumnsCommitmentVerifier func([]blocks.RODataColumn) error
)
var _ DataColumnsVerifier = &RODataColumnsVerifier{}
// VerifiedRODataColumns "upgrades" wrapped RODataColumns to VerifiedRODataColumns.
// If any of the verifications run against the data columns failed, or some required verifications
// were not run, an error will be returned.
func (dv *RODataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
if !dv.results.allSatisfied() {
return nil, dv.results.errors(errColumnsInvalid)
}
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(dv.dataColumns))
for _, dataColumn := range dv.dataColumns {
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(dataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
}
return verifiedRODataColumns, nil
}
// SatisfyRequirement allows the caller to assert that a requirement has been satisfied.
// This gives us a way to tick the box for a requirement where the usual method would be impractical.
// For example, when batch syncing, forkchoice is only updated at the end of the batch. So the checks that use
// forkchoice, like descends from finalized or parent seen, would necessarily fail. Allowing the caller to
// assert the requirement has been satisfied ensures we have an easy way to audit which piece of code is satisfying
// a requirement outside of this package.
func (dv *RODataColumnsVerifier) SatisfyRequirement(req Requirement) {
dv.recordResult(req, nil)
}
func (dv *RODataColumnsVerifier) recordResult(req Requirement, err *error) {
if err == nil || *err == nil {
dv.results.record(req, nil)
return
}
dv.results.record(req, *err)
}
func (dv *RODataColumnsVerifier) ValidFields() (err error) {
if ok, err := dv.results.cached(RequireValidFields); ok {
return err
}
defer dv.recordResult(RequireValidFields, &err)
for _, dataColumn := range dv.dataColumns {
if err := peerdas.VerifyDataColumnSidecar(dataColumn); err != nil {
return columnErrBuilder(errors.Wrap(err, "verify data column sidecar"))
}
}
return nil
}
func (dv *RODataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) (err error) {
if ok, err := dv.results.cached(RequireCorrectSubnet); ok {
return err
}
defer dv.recordResult(RequireCorrectSubnet, &err)
if len(expectedTopics) != len(dv.dataColumns) {
return columnErrBuilder(errBadTopicLength)
}
for i := range dv.dataColumns {
// We append a trailing slash so that, for example, an actual topic
// /eth2/9dc47cc6/data_column_sidecar_1 does not accidentally match
// /eth2/9dc47cc6/data_column_sidecar_120.
expectedTopic := expectedTopics[i] + "/"
actualSubnet := peerdas.ComputeSubnetForDataColumnSidecar(dv.dataColumns[i].Index)
actualSubTopic := fmt.Sprintf(dataColumnSidecarSubTopic, actualSubnet)
if !strings.Contains(expectedTopic, actualSubTopic) {
return columnErrBuilder(errBadTopic)
}
}
return nil
}
func (dv *RODataColumnsVerifier) NotFromFutureSlot() (err error) {
if ok, err := dv.results.cached(RequireNotFromFutureSlot); ok {
return err
}
defer dv.recordResult(RequireNotFromFutureSlot, &err)
// Retrieve the current slot.
currentSlot := dv.clock.CurrentSlot()
// Get the current time.
now := dv.clock.Now()
// Retrieve the maximum gossip clock disparity.
maximumGossipClockDisparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
for _, dataColumn := range dv.dataColumns {
// Extract the data column slot.
dataColumnSlot := dataColumn.Slot()
// Skip if the data column slot is the same as the current slot.
if currentSlot == dataColumnSlot {
continue
}
// earliestStart represents the time the slot starts, lowered by MAXIMUM_GOSSIP_CLOCK_DISPARITY.
// We lower the time by MAXIMUM_GOSSIP_CLOCK_DISPARITY in case system time is running slightly behind real time.
earliestStart := dv.clock.SlotStart(dataColumnSlot).Add(-maximumGossipClockDisparity)
// If the system time is still before earliestStart, we consider the column from a future slot and return an error.
if now.Before(earliestStart) {
return columnErrBuilder(ErrFromFutureSlot)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SlotAboveFinalized() (err error) {
if ok, err := dv.results.cached(RequireSlotAboveFinalized); ok {
return err
}
defer dv.recordResult(RequireSlotAboveFinalized, &err)
// Retrieve the finalized checkpoint.
finalizedCheckpoint := dv.fc.FinalizedCheckpoint()
// Compute the first slot of the finalized checkpoint epoch.
startSlot, err := slots.EpochStart(finalizedCheckpoint.Epoch)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "epoch start"))
}
for _, dataColumn := range dv.dataColumns {
// Extract the data column slot.
dataColumnSlot := dataColumn.Slot()
// Check if the data column slot is after the first slot of the epoch corresponding to the finalized checkpoint.
if dataColumnSlot <= startSlot {
return columnErrBuilder(ErrSlotNotAfterFinalized)
}
}
return nil
}
func (dv *RODataColumnsVerifier) ValidProposerSignature(ctx context.Context) (err error) {
if ok, err := dv.results.cached(RequireValidProposerSignature); ok {
return err
}
defer dv.recordResult(RequireValidProposerSignature, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the signature data from the data column.
signatureData := columnToSignatureData(dataColumn)
// Get logging fields.
fields := logging.DataColumnFields(dataColumn)
log := log.WithFields(fields)
// First check if there is a cached verification that can be reused.
seen, err := dv.sc.SignatureVerified(signatureData)
if err != nil {
log.WithError(err).Debug("Reusing failed proposer signature validation from cache")
columnVerificationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
return columnErrBuilder(ErrInvalidProposerSignature)
}
// If yes, we can skip the full verification.
if seen {
columnVerificationProposerSignatureCache.WithLabelValues("hit-valid").Inc()
continue
}
columnVerificationProposerSignatureCache.WithLabelValues("miss").Inc()
// Retrieve the parent state.
parentState, err := dv.parentState(ctx, dataColumn)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "parent state"))
}
// Full verification, which will subsequently be cached for anything sharing the signature cache.
if err = dv.sc.VerifySignature(signatureData, parentState); err != nil {
return columnErrBuilder(errors.Wrap(err, "verify signature"))
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) (err error) {
if ok, err := dv.results.cached(RequireSidecarParentSeen); ok {
return err
}
defer dv.recordResult(RequireSidecarParentSeen, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Skip if the parent root has been seen.
if parentSeen != nil && parentSeen(parentRoot) {
continue
}
if !dv.fc.HasNode(parentRoot) {
return columnErrBuilder(ErrSidecarParentNotSeen)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) (err error) {
if ok, err := dv.results.cached(RequireSidecarParentValid); ok {
return err
}
defer dv.recordResult(RequireSidecarParentValid, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
if badParent != nil && badParent(parentRoot) {
return columnErrBuilder(ErrSidecarParentInvalid)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarParentSlotLower() (err error) {
if ok, err := dv.results.cached(RequireSidecarParentSlotLower); ok {
return err
}
defer dv.recordResult(RequireSidecarParentSlotLower, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the slot of the parent block.
parentSlot, err := dv.fc.Slot(parentRoot)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "slot"))
}
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Check if the data column slot is after the parent slot.
if parentSlot >= dataColumnSlot {
return ErrSlotNotAfterParent
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarDescendsFromFinalized() (err error) {
if ok, err := dv.results.cached(RequireSidecarDescendsFromFinalized); ok {
return err
}
defer dv.recordResult(RequireSidecarDescendsFromFinalized, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
if !dv.fc.HasNode(parentRoot) {
return columnErrBuilder(ErrSidecarNotFinalizedDescendent)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarInclusionProven() (err error) {
if ok, err := dv.results.cached(RequireSidecarInclusionProven); ok {
return err
}
defer dv.recordResult(RequireSidecarInclusionProven, &err)
startTime := time.Now()
for _, dataColumn := range dv.dataColumns {
k, keyErr := inclusionProofKey(dataColumn)
if keyErr == nil {
if _, ok := dv.ic.Get(k); ok {
continue
}
} else {
log.WithError(keyErr).Error("Failed to get inclusion proof key")
}
if err = peerdas.VerifyDataColumnSidecarInclusionProof(dataColumn); err != nil {
return columnErrBuilder(ErrSidecarInclusionProofInvalid)
}
if keyErr == nil {
dv.ic.Add(k, struct{}{})
}
}
dataColumnSidecarInclusionProofVerificationHistogram.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
func (dv *RODataColumnsVerifier) SidecarKzgProofVerified() (err error) {
if ok, err := dv.results.cached(RequireSidecarKzgProofVerified); ok {
return err
}
defer dv.recordResult(RequireSidecarKzgProofVerified, &err)
startTime := time.Now()
err = dv.verifyDataColumnsCommitment(dv.dataColumns)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "verify data column commitment"))
}
dataColumnBatchKZGVerificationHistogram.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (err error) {
if ok, err := dv.results.cached(RequireSidecarProposerExpected); ok {
return err
}
defer dv.recordResult(RequireSidecarProposerExpected, &err)
type slotParentRoot struct {
slot primitives.Slot
parentRoot [fieldparams.RootLength]byte
}
targetRootBySlotParentRoot := make(map[slotParentRoot][fieldparams.RootLength]byte)
var targetRootFromCache = func(slot primitives.Slot, parentRoot [fieldparams.RootLength]byte) ([fieldparams.RootLength]byte, error) {
// Use cached values if available.
slotParentRoot := slotParentRoot{slot: slot, parentRoot: parentRoot}
if root, ok := targetRootBySlotParentRoot[slotParentRoot]; ok {
return root, nil
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(slot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Compute the target root for the epoch.
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
if err != nil {
return [fieldparams.RootLength]byte{}, errors.Wrap(err, "target root from epoch")
}
// Store the target root in the cache.
targetRootBySlotParentRoot[slotParentRoot] = targetRoot
return targetRoot, nil
}
for _, dataColumn := range dv.dataColumns {
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the target root for the data column.
targetRoot, err := targetRootFromCache(dataColumnSlot, parentRoot)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "target root"))
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Create a checkpoint for the target root.
checkpoint := &forkchoicetypes.Checkpoint{Root: targetRoot, Epoch: dataColumnEpoch}
// Try to retrieve the expected proposer index for this slot from the proposer cache.
idx, cached := dv.pc.Proposer(checkpoint, dataColumnSlot)
if !cached {
// Retrieve the parent state.
parentState, err := dv.parentState(ctx, dataColumn)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "parent state"))
}
idx, err = dv.pc.ComputeProposer(ctx, parentRoot, dataColumnSlot, parentState)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "compute proposer"))
}
}
if idx != dataColumn.ProposerIndex() {
return columnErrBuilder(ErrSidecarUnexpectedProposer)
}
}
return nil
}
// parentState retrieves the parent state of the data column from the cache if possible, otherwise it fetches the state via the StateByRooter.
func (dv *RODataColumnsVerifier) parentState(ctx context.Context, dataColumn blocks.RODataColumn) (state.BeaconState, error) {
parentRoot := dataColumn.ParentRoot()
// If the parent root is already in the cache, return it.
if st, ok := dv.stateByRoot[parentRoot]; ok {
return st, nil
}
// Retrieve the parent state from the state by rooter.
st, err := dv.sr.StateByRoot(ctx, parentRoot)
if err != nil {
return nil, errors.Wrap(err, "state by root")
}
// Store the parent state in the cache.
dv.stateByRoot[parentRoot] = st
return st, nil
}
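// columnToSignatureData extracts the block header signature fields from a data column sidecar for signature cache lookups and verification.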
func columnToSignatureData(d blocks.RODataColumn) SignatureData {
return SignatureData{
Root: d.BlockRoot(),
Parent: d.ParentRoot(),
Signature: bytesutil.ToBytes96(d.SignedBlockHeader.Signature),
Proposer: d.ProposerIndex(),
Slot: d.Slot(),
}
}
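// columnErrBuilder prefixes the given error with the errColumnsInvalid message, keeping the original error as the cause.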
func columnErrBuilder(baseErr error) error {
return errors.Wrap(baseErr, errColumnsInvalid.Error())
}
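// inclusionProofKey builds a 160-byte cache key: the four 32-byte KZG commitments inclusion proof nodes followed by the 32-byte signed block header root.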
func inclusionProofKey(c blocks.RODataColumn) ([160]byte, error) {
var key [160]byte
if len(c.KzgCommitmentsInclusionProof) != 4 {
// This should already be enforced by SSZ unmarshaling; still check so we don't panic on array bounds.
return key, columnErrBuilder(ErrSidecarInclusionProofInvalid)
}
root, err := c.SignedBlockHeader.HashTreeRoot()
if err != nil {
return [160]byte{}, errors.Wrap(err, "hash tree root")
}
for i := range c.KzgCommitmentsInclusionProof {
if copy(key[32*i:32*i+32], c.KzgCommitmentsInclusionProof[i]) != 32 {
return key, columnErrBuilder(ErrSidecarInclusionProofInvalid)
}
}
copy(key[128:], root[:])
return key, nil
}
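
Taken together, the gossip requirements above map one-to-one onto the verifier methods. The following sketch is illustrative only: the helper name, and the assumption that the caller already holds an *Initializer, the sub-topic format string, and the expected topics, are not part of this change. It shows one way a gossip handler could drive every check before upgrading the columns.

// verifyGossipColumns is a hypothetical helper (not part of this change) showing the
// intended call order for GossipDataColumnSidecarRequirements.
func verifyGossipColumns(ctx context.Context, ini *Initializer, columns []blocks.RODataColumn, subTopicFormat string, expectedTopics []string) ([]blocks.VerifiedRODataColumn, error) {
	v := ini.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
	if err := v.ValidFields(); err != nil {
		return nil, err
	}
	if err := v.CorrectSubnet(subTopicFormat, expectedTopics); err != nil {
		return nil, err
	}
	if err := v.NotFromFutureSlot(); err != nil {
		return nil, err
	}
	if err := v.SlotAboveFinalized(); err != nil {
		return nil, err
	}
	if err := v.ValidProposerSignature(ctx); err != nil {
		return nil, err
	}
	// Nil callbacks are placeholders: a real handler would pass its own seen-block and
	// bad-block checks; with nil, SidecarParentSeen falls back to forkchoice only.
	if err := v.SidecarParentSeen(nil); err != nil {
		return nil, err
	}
	if err := v.SidecarParentValid(nil); err != nil {
		return nil, err
	}
	if err := v.SidecarParentSlotLower(); err != nil {
		return nil, err
	}
	if err := v.SidecarDescendsFromFinalized(); err != nil {
		return nil, err
	}
	if err := v.SidecarInclusionProven(); err != nil {
		return nil, err
	}
	if err := v.SidecarKzgProofVerified(); err != nil {
		return nil, err
	}
	if err := v.SidecarProposerExpected(ctx); err != nil {
		return nil, err
	}
	// VerifiedRODataColumns only succeeds once every listed requirement has been recorded.
	return v.VerifiedRODataColumns()
}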

View File

@@ -0,0 +1,978 @@
package verification
import (
"context"
"reflect"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
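// GenerateTestDataColumns builds a Deneb test block with the given number of blobs, computes cells and proofs for those blobs, and returns all resulting data column sidecars wrapped as RODataColumns.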
func GenerateTestDataColumns(t *testing.T, parent [fieldparams.RootLength]byte, slot primitives.Slot, blobCount int) []blocks.RODataColumn {
roBlock, roBlobs := util.GenerateTestDenebBlockWithSidecar(t, parent, slot, blobCount)
blobs := make([]kzg.Blob, 0, len(roBlobs))
for i := range roBlobs {
blobs = append(blobs, kzg.Blob(roBlobs[i].Blob))
}
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
dataColumnSidecars, err := peerdas.DataColumnSidecars(roBlock, cellsAndProofs)
require.NoError(t, err)
roDataColumns := make([]blocks.RODataColumn, 0, len(dataColumnSidecars))
for i := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumn(dataColumnSidecars[i])
require.NoError(t, err)
roDataColumns = append(roDataColumns, roDataColumn)
}
return roDataColumns
}
func TestColumnSatisfyRequirement(t *testing.T) {
const (
columnSlot = 1
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
initializer := Initializer{}
v := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
require.Equal(t, false, v.results.executed(RequireValidProposerSignature))
v.SatisfyRequirement(RequireValidProposerSignature)
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
}
func TestValid(t *testing.T) {
var initializer Initializer
t.Run("one invalid column", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
columns[0].KzgCommitments = [][]byte{}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidFields()
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireValidFields))
})
t.Run("nominal", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidFields()
require.NoError(t, err)
require.IsNil(t, verifier.results.result(RequireValidFields))
err = verifier.ValidFields()
require.NoError(t, err)
})
}
func TestCorrectSubnet(t *testing.T) {
const dataColumnSidecarSubTopic = "/data_column_sidecar_%d/"
var initializer Initializer
t.Run("lengths mismatch", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.CorrectSubnet(dataColumnSidecarSubTopic, []string{})
require.ErrorIs(t, err, errBadTopicLength)
require.NotNil(t, verifier.results.result(RequireCorrectSubnet))
})
t.Run("wrong topic", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns[:2], GossipDataColumnSidecarRequirements)
err := verifier.CorrectSubnet(
dataColumnSidecarSubTopic,
[]string{
"/eth2/9dc47cc6/data_column_sidecar_1/ssz_snappy",
"/eth2/9dc47cc6/data_column_sidecar_0/ssz_snappy",
})
require.ErrorIs(t, err, errBadTopic)
require.NotNil(t, verifier.results.result(RequireCorrectSubnet))
})
t.Run("nominal", func(t *testing.T) {
subnets := []string{
"/eth2/9dc47cc6/data_column_sidecar_0/ssz_snappy",
"/eth2/9dc47cc6/data_column_sidecar_1",
}
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns[:2], GossipDataColumnSidecarRequirements)
err := verifier.CorrectSubnet(dataColumnSidecarSubTopic, subnets)
require.NoError(t, err)
require.IsNil(t, verifier.results.result(RequireCorrectSubnet))
err = verifier.CorrectSubnet(dataColumnSidecarSubTopic, subnets)
require.NoError(t, err)
})
}
func TestNotFromFutureSlot(t *testing.T) {
maximumGossipClockDisparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
testCases := []struct {
name string
currentSlot, columnSlot primitives.Slot
timeBeforeCurrentSlot time.Duration
isError bool
}{
{
name: "column slot == current slot",
currentSlot: 42,
columnSlot: 42,
timeBeforeCurrentSlot: 0,
isError: false,
},
{
name: "within maximum gossip clock disparity",
currentSlot: 42,
columnSlot: 42,
timeBeforeCurrentSlot: maximumGossipClockDisparity / 2,
isError: false,
},
{
name: "outside maximum gossip clock disparity",
currentSlot: 42,
columnSlot: 42,
timeBeforeCurrentSlot: maximumGossipClockDisparity * 2,
isError: true,
},
{
name: "too far in the future",
currentSlot: 10,
columnSlot: 42,
timeBeforeCurrentSlot: 0,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const blobCount = 1
now := time.Now()
secondsPerSlot := time.Duration(params.BeaconConfig().SecondsPerSlot)
genesis := now.Add(-time.Duration(tc.currentSlot) * secondsPerSlot * time.Second)
clock := startup.NewClock(
genesis,
[fieldparams.RootLength]byte{},
startup.WithNower(func() time.Time {
return now.Add(-tc.timeBeforeCurrentSlot)
}),
)
parentRoot := [fieldparams.RootLength]byte{}
initializer := Initializer{shared: &sharedResources{clock: clock}}
columns := GenerateTestDataColumns(t, parentRoot, tc.columnSlot, blobCount)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.NotFromFutureSlot()
require.Equal(t, true, verifier.results.executed(RequireNotFromFutureSlot))
if tc.isError {
require.ErrorIs(t, err, ErrFromFutureSlot)
require.NotNil(t, verifier.results.result(RequireNotFromFutureSlot))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireNotFromFutureSlot))
err = verifier.NotFromFutureSlot()
require.NoError(t, err)
})
}
}
func TestColumnSlotAboveFinalized(t *testing.T) {
testCases := []struct {
name string
finalizedSlot, columnSlot primitives.Slot
isErr bool
}{
{
name: "finalized epoch < column epoch",
finalizedSlot: 10,
columnSlot: 96,
isErr: false,
},
{
name: "finalized slot < column slot (same epoch)",
finalizedSlot: 32,
columnSlot: 33,
isErr: false,
},
{
name: "finalized slot == column slot",
finalizedSlot: 64,
columnSlot: 64,
isErr: true,
},
{
name: "finalized epoch > column epoch",
finalizedSlot: 32,
columnSlot: 31,
isErr: true,
},
}
for _, tc := range testCases {
const blobCount = 1
t.Run(tc.name, func(t *testing.T) {
finalizedCheckpoint := func() *forkchoicetypes.Checkpoint {
return &forkchoicetypes.Checkpoint{
Epoch: slots.ToEpoch(tc.finalizedSlot),
Root: [fieldparams.RootLength]byte{},
}
}
parentRoot := [fieldparams.RootLength]byte{}
initializer := &Initializer{shared: &sharedResources{
fc: &mockForkchoicer{FinalizedCheckpointCB: finalizedCheckpoint},
}}
columns := GenerateTestDataColumns(t, parentRoot, tc.columnSlot, blobCount)
v := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := v.SlotAboveFinalized()
require.Equal(t, true, v.results.executed(RequireSlotAboveFinalized))
if tc.isErr {
require.ErrorIs(t, err, ErrSlotNotAfterFinalized)
require.NotNil(t, v.results.result(RequireSlotAboveFinalized))
return
}
require.NoError(t, err)
require.NoError(t, v.results.result(RequireSlotAboveFinalized))
err = v.SlotAboveFinalized()
require.NoError(t, err)
})
}
}
func TestValidProposerSignature(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
validator := &ethpb.Validator{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
// The signature data does not depend on the data column itself, so we can use the first one.
expectedSignatureData := columnToSignatureData(firstColumn)
testCases := []struct {
isError bool
vscbShouldError bool
svcbReturn bool
stateByRooter StateByRooter
vscbError error
svcbError error
name string
}{
{
name: "cache hit - success",
svcbReturn: true,
svcbError: nil,
vscbShouldError: true,
vscbError: nil,
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
isError: false,
},
{
name: "cache hit - error",
svcbReturn: true,
svcbError: errors.New("derp"),
vscbShouldError: true,
vscbError: nil,
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
isError: true,
},
{
name: "cache miss - success",
svcbReturn: false,
svcbError: nil,
vscbShouldError: false,
vscbError: nil,
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
isError: false,
},
{
name: "cache miss - state not found",
svcbReturn: false,
svcbError: nil,
vscbShouldError: false,
vscbError: nil,
stateByRooter: sbrNotFound(t, expectedSignatureData.Parent),
isError: true,
},
{
name: "cache miss - signature failure",
svcbReturn: false,
svcbError: nil,
vscbShouldError: false,
vscbError: errors.New("signature, not so good!"),
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
signatureCache := &mockSignatureCache{
svcb: func(signatureData SignatureData) (bool, error) {
if signatureData != expectedSignatureData {
t.Error("Did not see expected SignatureData")
}
return tc.svcbReturn, tc.svcbError
},
vscb: func(signatureData SignatureData, _ ValidatorAtIndexer) (err error) {
if tc.vscbShouldError {
t.Error("VerifySignature should not be called if the result is cached")
return nil
}
if expectedSignatureData != signatureData {
t.Error("unexpected signature data")
}
return tc.vscbError
},
}
initializer := Initializer{
shared: &sharedResources{
sc: signatureCache,
sr: tc.stateByRooter,
},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidProposerSignature(context.Background())
require.Equal(t, true, verifier.results.executed(RequireValidProposerSignature))
if tc.isError {
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireValidProposerSignature))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireValidProposerSignature))
err = verifier.ValidProposerSignature(context.Background())
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarParentSeen(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
fcHas := &mockForkchoicer{
HasNodeCB: func(parent [fieldparams.RootLength]byte) bool {
if parent != firstColumn.ParentRoot() {
t.Error("forkchoice.HasNode called with unexpected parent root")
}
return true
},
}
fcLacks := &mockForkchoicer{
HasNodeCB: func(parent [fieldparams.RootLength]byte) bool {
if parent != firstColumn.ParentRoot() {
t.Error("forkchoice.HasNode called with unexpected parent root")
}
return false
},
}
testCases := []struct {
name string
forkChoicer Forkchoicer
parentSeen func([fieldparams.RootLength]byte) bool
isError bool
}{
{
name: "happy path",
forkChoicer: fcHas,
parentSeen: nil,
isError: false,
},
{
name: "HasNode false, no badParent cb, expected error",
forkChoicer: fcLacks,
parentSeen: nil,
isError: true,
},
{
name: "HasNode false, badParent true",
forkChoicer: fcLacks,
parentSeen: badParentCb(t, firstColumn.ParentRoot(), true),
isError: false,
},
{
name: "HasNode false, badParent false",
forkChoicer: fcLacks,
parentSeen: badParentCb(t, firstColumn.ParentRoot(), false),
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
initializer := Initializer{shared: &sharedResources{fc: tc.forkChoicer}}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarParentSeen(tc.parentSeen)
require.Equal(t, true, verifier.results.executed(RequireSidecarParentSeen))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarParentNotSeen)
require.NotNil(t, verifier.results.result(RequireSidecarParentSeen))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarParentSeen))
err = verifier.SidecarParentSeen(tc.parentSeen)
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarParentValid(t *testing.T) {
testCases := []struct {
name string
badParentCbReturn bool
isError bool
}{
{
name: "parent valid",
badParentCbReturn: false,
isError: false,
},
{
name: "parent not valid",
badParentCbReturn: true,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
initializer := Initializer{shared: &sharedResources{}}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarParentValid(badParentCb(t, firstColumn.ParentRoot(), tc.badParentCbReturn))
require.Equal(t, true, verifier.results.executed(RequireSidecarParentValid))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarParentInvalid)
require.NotNil(t, verifier.results.result(RequireSidecarParentValid))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarParentValid))
err = verifier.SidecarParentValid(badParentCb(t, firstColumn.ParentRoot(), tc.badParentCbReturn))
require.NoError(t, err)
})
}
}
func TestColumnSidecarParentSlotLower(t *testing.T) {
columns := GenerateTestDataColumns(t, [32]byte{}, 1, 1)
firstColumn := columns[0]
cases := []struct {
name string
forkChoiceSlot primitives.Slot
forkChoiceError, err error
errCheckValue bool
}{
{
name: "Not in forkchoice",
forkChoiceError: errors.New("not in forkchoice"),
err: ErrSlotNotAfterParent,
},
{
name: "In forkchoice, slot lower",
forkChoiceSlot: firstColumn.Slot() - 1,
},
{
name: "In forkchoice, slot equal",
forkChoiceSlot: firstColumn.Slot(),
err: ErrSlotNotAfterParent,
errCheckValue: true,
},
{
name: "In forkchoice, slot higher",
forkChoiceSlot: firstColumn.Slot() + 1,
err: ErrSlotNotAfterParent,
errCheckValue: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{fc: &mockForkchoicer{
SlotCB: func(r [32]byte) (primitives.Slot, error) {
if firstColumn.ParentRoot() != r {
t.Error("forkchoice.Slot called with unexpected parent root")
}
return c.forkChoiceSlot, c.forkChoiceError
},
}},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarParentSlotLower()
require.Equal(t, true, verifier.results.executed(RequireSidecarParentSlotLower))
if c.err == nil {
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarParentSlotLower))
err = verifier.SidecarParentSlotLower()
require.NoError(t, err)
return
}
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireSidecarParentSlotLower))
if c.errCheckValue {
require.ErrorIs(t, err, c.err)
}
})
}
}
func TestDataColumnsSidecarDescendsFromFinalized(t *testing.T) {
testCases := []struct {
name string
hasNodeCBReturn bool
isError bool
}{
{
name: "Not canonical",
hasNodeCBReturn: false,
isError: true,
},
{
name: "Canonical",
hasNodeCBReturn: true,
isError: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
initializer := Initializer{
shared: &sharedResources{
fc: &mockForkchoicer{
HasNodeCB: func(r [fieldparams.RootLength]byte) bool {
if firstColumn.ParentRoot() != r {
t.Error("forkchoice.Slot called with unexpected parent root")
}
return tc.hasNodeCBReturn
},
},
},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarDescendsFromFinalized()
require.Equal(t, true, verifier.results.executed(RequireSidecarDescendsFromFinalized))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarNotFinalizedDescendent)
require.NotNil(t, verifier.results.result(RequireSidecarDescendsFromFinalized))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarDescendsFromFinalized))
err = verifier.SidecarDescendsFromFinalized()
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarInclusionProven(t *testing.T) {
testCases := []struct {
name string
alterBodyRoot bool
isError bool
}{
{
name: "Inclusion proven",
alterBodyRoot: false,
isError: false,
},
{
name: "Inclusion not proven",
alterBodyRoot: true,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
if tc.alterBodyRoot {
firstColumn := columns[0]
byte0 := firstColumn.SignedBlockHeader.Header.BodyRoot[0]
firstColumn.SignedBlockHeader.Header.BodyRoot[0] = byte0 ^ 255
}
initializer := Initializer{
shared: &sharedResources{ic: newInclusionProofCache(1)},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarInclusionProven()
require.Equal(t, true, verifier.results.executed(RequireSidecarInclusionProven))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarInclusionProofInvalid)
require.NotNil(t, verifier.results.result(RequireSidecarInclusionProven))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarInclusionProven))
err = verifier.SidecarInclusionProven()
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarKzgProofVerified(t *testing.T) {
testCases := []struct {
isError bool
verifyDataColumnsCommitmentError error
name string
}{
{
name: "KZG proof verified",
verifyDataColumnsCommitmentError: nil,
isError: false,
},
{
name: "KZG proof not verified",
verifyDataColumnsCommitmentError: errors.New("KZG proof error"),
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
verifyDataColumnsCommitment := func(roDataColumns []blocks.RODataColumn) error {
for _, roDataColumn := range roDataColumns {
require.Equal(t, true, reflect.DeepEqual(firstColumn.KzgCommitments, roDataColumn.KzgCommitments))
}
return tc.verifyDataColumnsCommitmentError
}
verifier := &RODataColumnsVerifier{
results: newResults(),
dataColumns: columns,
verifyDataColumnsCommitment: verifyDataColumnsCommitment,
}
err := verifier.SidecarKzgProofVerified()
require.Equal(t, true, verifier.results.executed(RequireSidecarKzgProofVerified))
if tc.isError {
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireSidecarKzgProofVerified))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarKzgProofVerified))
err = verifier.SidecarKzgProofVerified()
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarProposerExpected(t *testing.T) {
const (
columnSlot = 1
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
newColumns := GenerateTestDataColumns(t, parentRoot, 2*params.BeaconConfig().SlotsPerEpoch, blobCount)
firstNewColumn := newColumns[0]
validator := &ethpb.Validator{}
commonComputeProposerCB := func(_ context.Context, root [fieldparams.RootLength]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstColumn.ParentRoot(), root)
require.Equal(t, firstColumn.Slot(), slot)
return firstColumn.ProposerIndex(), nil
}
testCases := []struct {
name string
stateByRooter StateByRooter
proposerCache ProposerCache
columns []blocks.RODataColumn
error string
}{
{
name: "Cached, matches",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex()),
},
columns: columns,
},
{
name: "Cached, does not match",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex() + 1),
},
columns: columns,
error: ErrSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, state lookup failure",
stateByRooter: sbrNotFound(t, firstColumn.ParentRoot()),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
},
columns: columns,
error: "state by root",
},
{
name: "Not cached, proposer matches",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: commonComputeProposerCB,
},
columns: columns,
},
{
name: "Not cached, proposer matches",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: commonComputeProposerCB,
},
columns: columns,
},
{
name: "Not cached, proposer matches for next epoch",
stateByRooter: sbrForValOverride(firstNewColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstNewColumn.ParentRoot(), root)
require.Equal(t, firstNewColumn.Slot(), slot)
return firstColumn.ProposerIndex(), nil
},
},
columns: newColumns,
},
{
name: "Not cached, proposer does not match",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstColumn.ParentRoot(), root)
require.Equal(t, firstColumn.Slot(), slot)
return firstColumn.ProposerIndex() + 1, nil
},
},
columns: columns,
error: ErrSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, ComputeProposer fails",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstColumn.ParentRoot(), root)
require.Equal(t, firstColumn.Slot(), slot)
return 0, errors.New("ComputeProposer failed")
},
},
columns: columns,
error: "compute proposer",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: tc.stateByRooter,
pc: tc.proposerCache,
fc: &mockForkchoicer{
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
},
},
}
verifier := initializer.NewDataColumnsVerifier(tc.columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(context.Background())
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
if len(tc.error) > 0 {
require.ErrorContains(t, tc.error, err)
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarProposerExpected))
err = verifier.SidecarProposerExpected(context.Background())
require.NoError(t, err)
})
}
}
func TestColumnRequirementSatisfaction(t *testing.T) {
const (
columnSlot = 1
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
initializer := Initializer{}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
// We haven't performed any verification, VerifiedRODataColumns should error.
_, err := verifier.VerifiedRODataColumns()
require.ErrorIs(t, err, errColumnsInvalid)
var me VerificationMultiError
ok := errors.As(err, &me)
require.Equal(t, true, ok)
fails := me.Failures()
// We haven't performed any verification, so all the results should be this type.
for _, v := range fails {
require.ErrorIs(t, v, ErrMissingVerification)
}
// Satisfy everything but the first requirement through the backdoor.
for _, r := range GossipDataColumnSidecarRequirements[1:] {
verifier.results.record(r, nil)
}
// One requirement is missing, VerifiedRODataColumns should still error.
_, err = verifier.VerifiedRODataColumns()
require.ErrorIs(t, err, errColumnsInvalid)
// Now, satisfy the first requirement.
verifier.results.record(GossipDataColumnSidecarRequirements[0], nil)
// VerifiedRODataColumns should now succeed.
require.Equal(t, true, verifier.results.allSatisfied())
_, err = verifier.VerifiedRODataColumns()
require.NoError(t, err)
}

View File

@@ -114,3 +114,12 @@ func VerifiedROBlobError(err error) (blocks.VerifiedROBlob, error) {
}
return blocks.VerifiedROBlob{}, err
}
// VerifiedRODataColumnError can be used by methods that have a VerifiedRODataColumn return type but do not have permission to
// create a value of that type in order to generate an error return value.
func VerifiedRODataColumnError(err error) (blocks.VerifiedRODataColumn, error) {
if err == nil {
return blocks.VerifiedRODataColumn{}, errVerificationImplementationFault
}
return blocks.VerifiedRODataColumn{}, err
}

View File

@@ -1,11 +1,14 @@
package verification
import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/spf13/afero"
)
// VerifiedROBlobFromDisk creates a verified read-only blob sidecar from a file on disk.
func VerifiedROBlobFromDisk(fs afero.Fs, root [32]byte, path string) (blocks.VerifiedROBlob, error) {
encoded, err := afero.ReadFile(fs, path)
if err != nil {
@@ -21,3 +24,33 @@ func VerifiedROBlobFromDisk(fs afero.Fs, root [32]byte, path string) (blocks.Ver
}
return blocks.NewVerifiedROBlob(ro), nil
}
// VerifiedRODataColumnFromDisk creates a verified read-only data column sidecar from disk.
func VerifiedRODataColumnFromDisk(file afero.File, root [fieldparams.RootLength]byte, sszEncodedDataColumnSidecarSize uint32) (blocks.VerifiedRODataColumn, error) {
// Read the ssz encoded data column sidecar from the file
sszEncodedDataColumnSidecar := make([]byte, sszEncodedDataColumnSidecarSize)
count, err := file.Read(sszEncodedDataColumnSidecar)
if err != nil {
return VerifiedRODataColumnError(err)
}
if uint32(count) != sszEncodedDataColumnSidecarSize {
// err is nil at this point, so construct an explicit error describing the short read.
return VerifiedRODataColumnError(errors.Errorf("read %d bytes, expected %d", count, sszEncodedDataColumnSidecarSize))
}
// Unmarshal the SSZ encoded data column sidecar.
dataColumnSidecar := &ethpb.DataColumnSidecar{}
if err := dataColumnSidecar.UnmarshalSSZ(sszEncodedDataColumnSidecar); err != nil {
return VerifiedRODataColumnError(err)
}
// Create a RO data column.
roDataColumnSidecar, err := blocks.NewRODataColumnWithRoot(dataColumnSidecar, root)
if err != nil {
return VerifiedRODataColumnError(err)
}
// Create a verified RO data column.
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumnSidecar)
return verifiedRODataColumn, nil
}
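
As a hedged illustration, a caller with a one-sidecar-per-file layout (an assumption here, not something this diff establishes) could recover a verified column as sketched below; the helper name and path handling are hypothetical.

// readColumnFromFile is a hypothetical helper: it assumes the file at path holds
// exactly one SSZ-encoded DataColumnSidecar whose root is already known.
func readColumnFromFile(fs afero.Fs, path string, root [fieldparams.RootLength]byte) (blocks.VerifiedRODataColumn, error) {
	f, err := fs.Open(path)
	if err != nil {
		return VerifiedRODataColumnError(err)
	}
	defer func() { _ = f.Close() }()

	info, err := f.Stat()
	if err != nil {
		return VerifiedRODataColumnError(err)
	}
	// The encoded size comes from the file size under the one-sidecar-per-file assumption.
	return VerifiedRODataColumnFromDisk(f, root, uint32(info.Size()))
}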

View File

@@ -5,9 +5,11 @@ import (
"sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/network/forks"
@@ -38,6 +40,7 @@ type sharedResources struct {
sc SignatureCache
pc ProposerCache
sr StateByRooter
ic *inclusionProofCache
}
// Initializer is used to create different Verifiers.
@@ -57,6 +60,18 @@ func (ini *Initializer) NewBlobVerifier(b blocks.ROBlob, reqs []Requirement) *RO
}
}
// NewDataColumnsVerifier creates a DataColumnsVerifier for a slice of data columns, with the given set of requirements.
// WARNING: The returned verifier is not thread-safe, and should not be used concurrently.
func (ini *Initializer) NewDataColumnsVerifier(roDataColumns []blocks.RODataColumn, reqs []Requirement) *RODataColumnsVerifier {
return &RODataColumnsVerifier{
sharedResources: ini.shared,
dataColumns: roDataColumns,
results: newResults(reqs...),
verifyDataColumnsCommitment: peerdas.VerifyDataColumnsSidecarKZGProofs,
stateByRoot: make(map[[fieldparams.RootLength]byte]state.BeaconState),
}
}
// InitializerWaiter provides an Initializer once all dependent resources are ready
// via the WaitForInitializer method.
type InitializerWaiter struct {
@@ -86,6 +101,7 @@ func NewInitializerWaiter(cw startup.ClockWaiter, fc Forkchoicer, sr StateByRoot
fc: fc,
pc: pc,
sr: sr,
ic: newInclusionProofCache(DefaultInclusionProofCacheSize),
}
iw := &InitializerWaiter{cw: cw, ini: &Initializer{shared: shared}}
for _, o := range opts {
@@ -107,6 +123,7 @@ func (w *InitializerWaiter) WaitForInitializer(ctx context.Context) (*Initialize
vr := w.ini.shared.clock.GenesisValidatorsRoot()
sc := newSigCache(vr[:], DefaultSignatureCacheSize, w.getFork)
w.ini.shared.sc = sc
w.ini.shared.ic = newInclusionProofCache(DefaultInclusionProofCacheSize)
return w.ini, nil
}

View File

@@ -3,6 +3,7 @@ package verification
import (
"context"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
)
@@ -29,3 +30,27 @@ type BlobVerifier interface {
// NewBlobVerifier is a function signature that can be used by code that needs to be
// able to mock Initializer.NewBlobVerifier without complex setup.
type NewBlobVerifier func(b blocks.ROBlob, reqs []Requirement) BlobVerifier
// DataColumnsVerifier defines the methods implemented by RODataColumnsVerifier.
// It serves the same purpose for data columns that the BlobVerifier interface serves for blobs.
type DataColumnsVerifier interface {
VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error)
SatisfyRequirement(Requirement)
ValidFields() error
CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error
NotFromFutureSlot() error
SlotAboveFinalized() error
ValidProposerSignature(ctx context.Context) error
SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error
SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error
SidecarParentSlotLower() error
SidecarDescendsFromFinalized() error
SidecarInclusionProven() error
SidecarKzgProofVerified() error
SidecarProposerExpected(ctx context.Context) error
}
// NewDataColumnsVerifier is a function signature that can be used to mock
// Initializer.NewDataColumnsVerifier without complex setup.
type NewDataColumnsVerifier func(dataColumns []blocks.RODataColumn, reqs []Requirement) DataColumnsVerifier
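A minimal usage sketch of the new verifier (names such as ini, columns, and reqs are illustrative; it assumes an already-built *verification.Initializer, a slice of blocks.RODataColumn, and a requirement set that matches the checks run here):
dcVerifier := ini.NewDataColumnsVerifier(columns, reqs)
// Each call records its result against the corresponding requirement.
if err := dcVerifier.ValidFields(); err != nil {
    return err
}
if err := dcVerifier.SidecarInclusionProven(); err != nil {
    return err
}
if err := dcVerifier.SidecarKzgProofVerified(); err != nil {
    return err
}
// VerifiedRODataColumns is expected to succeed only once the registered requirements are satisfied.
verifiedColumns, err := dcVerifier.VerifiedRODataColumns()
if err != nil {
    return err
}
_ = verifiedColumns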

View File

@@ -13,4 +13,25 @@ var (
},
[]string{"result"},
)
columnVerificationProposerSignatureCache = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "data_column_verification_proposer_signature_cache",
Help: "DataColumnSidecar proposer signature cache result.",
},
[]string{"result"},
)
dataColumnSidecarInclusionProofVerificationHistogram = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_inclusion_proof_verification_milliseconds",
Help: "Captures the time taken to verify data column sidecar inclusion proof.",
Buckets: []float64{5, 10, 50, 100, 150, 250, 500, 1000, 2000},
},
)
dataColumnBatchKZGVerificationHistogram = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_kzg_verification_data_column_batch_milliseconds",
Help: "Captures the time taken for batched data column kzg verification.",
Buckets: []float64{5, 10, 50, 100, 150, 250, 500, 1000, 2000},
},
)
)
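These histograms are typically fed by timing the work and recording the elapsed milliseconds; a hedged sketch in the same package as the metric variables above (requires the time import; the wrapper and callback names are illustrative):
func observeBatchKZGVerification(verify func() error) error {
    start := time.Now()
    err := verify()
    // Record the elapsed time in milliseconds, matching the bucket boundaries above.
    dataColumnBatchKZGVerificationHistogram.Observe(float64(time.Since(start).Milliseconds()))
    return err
}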

View File

@@ -90,6 +90,11 @@ func (r *results) result(req Requirement) error {
return r.done[req]
}
func (r *results) cached(req Requirement) (bool, error) {
result, ok := r.done[req]
return ok, result
}
func (r *results) errors(err error) error {
return newVerificationMultiError(r, err)
}

View File

@@ -0,0 +1,15 @@
package verification
import (
"os"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
)
func TestMain(t *testing.M) {
if err := kzg.Start(); err != nil {
os.Exit(1)
}
t.Run()
}

View File

@@ -0,0 +1,3 @@
### Added
- Add support for light client req/resp domain.

View File

@@ -0,0 +1,3 @@
### Added
- Add light client mainnet spec test.

View File

@@ -0,0 +1,3 @@
### Added
- Add light client minimal spec test support for `update_ranking` tests.

View File

@@ -0,0 +1,3 @@
### Ignored
- Rename the `runLightClientUpdateRankingProofTest` function to `runLightClientUpdateRankingTest` in lc spec tests.

View File

@@ -0,0 +1,3 @@
### Ignored
- Put the light client beacon API endpoints behind a flag

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix cyclical dependency issue when using the testing/util package

View File

@@ -0,0 +1,2 @@
### Added
- Implement data column sidecars filesystem.

View File

@@ -0,0 +1,2 @@
### Added
- Data column sidecars verification methods.

View File

@@ -0,0 +1,3 @@
### Ignored
- Use independent context for domain data calculation.

3
changelog/pvl-build.md Normal file
View File

@@ -0,0 +1,3 @@
### Added
- Added Prysm build data to otel tracing spans.

View File

@@ -0,0 +1,3 @@
### Added
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra.

3
changelog/symlink.md Normal file
View File

@@ -0,0 +1,3 @@
### Added
- Added /bin/sh symlink to Docker images

3
changelog/tt_corn.md Normal file
View File

@@ -0,0 +1,3 @@
### Added
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277

3
changelog/tt_donut.md Normal file
View File

@@ -0,0 +1,3 @@
### Added
- Add fulu operation and epoch processing spec tests

3
changelog/tt_toro.md Normal file
View File

@@ -0,0 +1,3 @@
### Ignored
- Use current slot helper whenever possible

3
changelog/tt_uni.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Update spec tests to v1.6.0-alpha.0

3
changelog/tt_wagyu.md Normal file
View File

@@ -0,0 +1,3 @@
### Ignored
- Remove unused protobuf import

View File

@@ -31,6 +31,7 @@ const (
BlobLength = 131072 // BlobLength defines the byte length of a blob.
BlobSize = 131072 // defined to match blob.size in bazel ssz codegen
BlobSidecarSize = 131928 // defined to match blob sidecar size in bazel ssz codegen
KzgCommitmentSize = 48 // KzgCommitmentSize defines the byte length of a KZG commitment.
KzgCommitmentInclusionProofDepth = 17 // Merkle proof depth for blob_kzg_commitments list item
ExecutionBranchDepth = 4 // ExecutionBranchDepth defines the number of leaves in a merkle proof of the execution payload header.
SyncCommitteeBranchDepth = 5 // SyncCommitteeBranchDepth defines the number of leaves in a merkle proof of a sync committee.
@@ -43,4 +44,7 @@ const (
MaxAttesterSlashingsElectra = 1 // Maximum number of attester slashings in a block.
MaxRandomByte = uint64(1<<8 - 1) // MaxRandomByte defines the max for a random byte used for proposer and sync committee sampling.
MaxRandomValueElectra = uint64(1<<16 - 1) // MaxRandomValueElectra defines the max for a random value used for proposer and sync committee sampling.
// Introduced in Fulu network upgrade.
NumberOfColumns = 128 // NumberOfColumns refers to the specified number of data columns that can exist in a network.
)

View File

@@ -31,6 +31,7 @@ const (
BlobLength = 131072 // BlobLength defines the byte length of a blob.
BlobSize = 131072 // defined to match blob.size in bazel ssz codegen
BlobSidecarSize = 131928 // defined to match blob sidecar size in bazel ssz codegen
KzgCommitmentSize = 48 // KzgCommitmentSize defines the byte length of a KZG commitment.
KzgCommitmentInclusionProofDepth = 10 // Merkle proof depth for blob_kzg_commitments list item
ExecutionBranchDepth = 4 // ExecutionBranchDepth defines the number of leaves in a merkle proof of the execution payload header.
SyncCommitteeBranchDepth = 5 // SyncCommitteeBranchDepth defines the number of leaves in a merkle proof of a sync committee.
@@ -43,4 +44,7 @@ const (
MaxAttesterSlashingsElectra = 1 // Maximum number of attester slashings in a block.
MaxRandomByte = uint64(1<<8 - 1) // Maximum value for a random byte used for proposer and sync committee sampling.
MaxRandomValueElectra = uint64(1<<16 - 1) // Maximum value for a random value used for proposer and sync committee sampling.
// Introduced in Fulu network upgrade.
NumberOfColumns = 128 // NumberOfColumns refers to the specified number of data columns that can exist in a network.
)

View File

@@ -73,7 +73,6 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//io/file:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -3,6 +3,7 @@ package params
import (
"math"
"slices"
"time"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -295,6 +296,7 @@ type BeaconChainConfig struct {
NodeIdBits uint64 `yaml:"NODE_ID_BITS" spec:"true"` // NodeIdBits defines the bit length of a node id.
// Blobs Values
BlobSchedule []BlobScheduleEntry `yaml:"BLOB_SCHEDULE"`
// Deprecated_MaxBlobsPerBlock defines the max blobs that could exist in a block.
// Deprecated: This field is no longer supported. Avoid using it.
@@ -313,6 +315,11 @@ type BeaconChainConfig struct {
DeprecatedMaxBlobsPerBlockFulu int `yaml:"MAX_BLOBS_PER_BLOCK_FULU" spec:"true"`
}
type BlobScheduleEntry struct {
Epoch primitives.Epoch `yaml:"EPOCH"`
MaxBlobsPerBlock uint64 `yaml:"MAX_BLOBS_PER_BLOCK"`
}
// InitializeForkSchedule initializes the schedules forks baked into the config.
func (b *BeaconChainConfig) InitializeForkSchedule() {
// Reset Fork Version Schedule.
@@ -400,14 +407,22 @@ func (b *BeaconChainConfig) TargetBlobsPerBlock(slot primitives.Slot) int {
return b.DeprecatedMaxBlobsPerBlock / 2
}
// MaxBlobsPerBlock returns the maximum number of blobs per block for the given slot.
func (b *BeaconChainConfig) MaxBlobsPerBlock(slot primitives.Slot) int {
epoch := primitives.Epoch(slot.DivSlot(b.SlotsPerEpoch))
if epoch >= b.FuluForkEpoch {
return b.DeprecatedMaxBlobsPerBlockFulu
if len(b.BlobSchedule) > 0 {
slices.SortFunc(b.BlobSchedule, func(a, b BlobScheduleEntry) int {
return int(b.Epoch - a.Epoch)
})
for i := len(b.BlobSchedule) - 1; i >= 0; i-- {
if epoch >= b.BlobSchedule[i].Epoch {
return int(b.BlobSchedule[i].MaxBlobsPerBlock)
}
}
}
// If the blob schedule is empty, we fall back to the deprecated value.
if epoch >= b.ElectraForkEpoch {
return b.DeprecatedMaxBlobsPerBlockElectra
}
@@ -415,26 +430,21 @@ func (b *BeaconChainConfig) MaxBlobsPerBlock(slot primitives.Slot) int {
return b.DeprecatedMaxBlobsPerBlock
}
// MaxBlobsPerBlockByVersion returns the maximum number of blobs per block for the given fork version
func (b *BeaconChainConfig) MaxBlobsPerBlockByVersion(v int) int {
if v >= version.Fulu {
return b.DeprecatedMaxBlobsPerBlockFulu
}
if v >= version.Electra {
return b.DeprecatedMaxBlobsPerBlockElectra
}
return b.DeprecatedMaxBlobsPerBlock
}
// MaxBlobsPerBlockByEpoch returns the maximum number of blobs per block for the given epoch,
// adjusting for the Electra fork.
// MaxBlobsPerBlockAtEpoch returns the maximum number of blobs per block for the given epoch
func (b *BeaconChainConfig) MaxBlobsPerBlockAtEpoch(epoch primitives.Epoch) int {
if epoch >= b.FuluForkEpoch {
return b.DeprecatedMaxBlobsPerBlockFulu
if len(b.BlobSchedule) > 0 {
slices.SortFunc(b.BlobSchedule, func(a, b BlobScheduleEntry) int {
return int(b.Epoch - a.Epoch)
})
for i := len(b.BlobSchedule) - 1; i >= 0; i-- {
if epoch >= b.BlobSchedule[i].Epoch {
return int(b.BlobSchedule[i].MaxBlobsPerBlock)
}
}
}
// If the blob schedule is empty, we fall back to the deprecated value.
if epoch >= b.ElectraForkEpoch {
return b.DeprecatedMaxBlobsPerBlockElectra
}
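The lookup rule being implemented here is: use the highest-epoch schedule entry whose epoch is not after the requested epoch, and fall back to the fork-based defaults when no entry applies. A small self-contained sketch of that rule, with simplified types rather than the exact Prysm implementation:
package main
import "fmt"
type scheduleEntry struct {
    Epoch            uint64
    MaxBlobsPerBlock uint64
}
// maxBlobsAt returns the MaxBlobsPerBlock of the latest schedule entry at or before epoch,
// or fallback when no entry applies.
func maxBlobsAt(schedule []scheduleEntry, epoch, fallback uint64) uint64 {
    best, bestEpoch, found := fallback, uint64(0), false
    for _, e := range schedule {
        if epoch >= e.Epoch && (!found || e.Epoch >= bestEpoch) {
            best, bestEpoch, found = e.MaxBlobsPerBlock, e.Epoch, true
        }
    }
    return best
}
func main() {
    // Mainnet-style schedule: Deneb keeps 6 blobs, Electra raises the limit to 9.
    schedule := []scheduleEntry{{Epoch: 269568, MaxBlobsPerBlock: 6}, {Epoch: 364032, MaxBlobsPerBlock: 9}}
    fmt.Println(maxBlobsAt(schedule, 300000, 6)) // 6
    fmt.Println(maxBlobsAt(schedule, 400000, 6)) // 9
    fmt.Println(maxBlobsAt(schedule, 100, 6))    // 6 (fallback)
}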

View File

@@ -9,7 +9,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/genesis"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
@@ -108,12 +107,44 @@ func TestConfigGenesisValidatorRoot(t *testing.T) {
}
}
func Test_MaxBlobCount(t *testing.T) {
cfg := params.MainnetConfig()
cfg.ElectraForkEpoch = 10
require.Equal(t, cfg.MaxBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch-1), 6)
require.Equal(t, cfg.MaxBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch), 9)
cfg.ElectraForkEpoch = math.MaxUint64
func TestMaxBlobsPerBlock(t *testing.T) {
t.Run("Before all forks and no BlobSchedule", func(t *testing.T) {
cfg := params.MainnetConfig()
cfg.BlobSchedule = nil
cfg.ElectraForkEpoch = 100
cfg.FuluForkEpoch = 200
require.Equal(t, cfg.MaxBlobsPerBlock(0), cfg.DeprecatedMaxBlobsPerBlock)
})
t.Run("Uses latest matching BlobSchedule entry", func(t *testing.T) {
cfg := params.MainnetConfig()
cfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 5, MaxBlobsPerBlock: 7},
{Epoch: 10, MaxBlobsPerBlock: 11},
}
slot := 11 * cfg.SlotsPerEpoch
require.Equal(t, cfg.MaxBlobsPerBlock(slot), 11)
})
t.Run("Uses earlier matching BlobSchedule entry", func(t *testing.T) {
cfg := params.MainnetConfig()
cfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 5, MaxBlobsPerBlock: 7},
{Epoch: 10, MaxBlobsPerBlock: 11},
}
slot := 6 * cfg.SlotsPerEpoch
require.Equal(t, cfg.MaxBlobsPerBlock(slot), 7)
})
t.Run("Before first BlobSchedule entry falls back to fork logic", func(t *testing.T) {
cfg := params.MainnetConfig()
cfg.FuluForkEpoch = 1
cfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 5, MaxBlobsPerBlock: 7},
}
slot := primitives.Slot(2) // Epoch 0
require.Equal(t, cfg.MaxBlobsPerBlock(slot), cfg.DeprecatedMaxBlobsPerBlock)
})
}
func Test_TargetBlobCount(t *testing.T) {
@@ -123,41 +154,3 @@ func Test_TargetBlobCount(t *testing.T) {
require.Equal(t, cfg.TargetBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch), 6)
cfg.ElectraForkEpoch = math.MaxUint64
}
func TestMaxBlobsPerBlockByVersion(t *testing.T) {
tests := []struct {
name string
v int
want int
}{
{
name: "Version below Electra",
v: version.Electra - 1,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlock,
},
{
name: "Version equal to Electra",
v: version.Electra,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockElectra,
},
{
name: "Version equal to Fulu",
v: version.Fulu,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockFulu,
},
{
name: "Version above Fulu",
v: version.Fulu + 1,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockFulu,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := params.BeaconConfig().MaxBlobsPerBlockByVersion(tt.v)
if got != tt.want {
t.Errorf("MaxBlobsPerBlockByVersion(%d) = %d, want %d", tt.v, got, tt.want)
}
})
}
}

View File

@@ -4,6 +4,7 @@ import (
"encoding/hex"
"fmt"
"os"
"strconv"
"strings"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -244,6 +245,16 @@ func ConfigToYaml(cfg *BeaconChainConfig) []byte {
fmt.Sprintf("MAX_BLOBS_PER_BLOCK_FULU: %d", cfg.DeprecatedMaxBlobsPerBlockFulu),
}
if len(cfg.BlobSchedule) > 0 {
lines = append(lines, "BLOB_SCHEDULE:")
for _, entry := range cfg.BlobSchedule {
lines = append(lines,
" - EPOCH: "+strconv.FormatUint(uint64(entry.Epoch), 10),
" MAX_BLOBS_PER_BLOCK: "+strconv.FormatUint(entry.MaxBlobsPerBlock, 10),
)
}
}
yamlFile := []byte(strings.Join(lines, "\n"))
return yamlFile
}

View File

@@ -339,6 +339,11 @@ var mainnetBeaconConfig = &BeaconChainConfig{
AttestationSubnetPrefixBits: 6,
SubnetsPerNode: 2,
NodeIdBits: 256,
BlobSchedule: []BlobScheduleEntry{
{Epoch: 269568, MaxBlobsPerBlock: 6},
{Epoch: 364032, MaxBlobsPerBlock: 9},
},
}
// MainnetTestConfig provides a version of the mainnet config that has a different name

View File

@@ -127,6 +127,11 @@ func MinimalSpecConfig() *BeaconChainConfig {
minimalConfig.ConfigName = MinimalName
minimalConfig.PresetBase = "minimal"
minimalConfig.BlobSchedule = []BlobScheduleEntry{
{Epoch: 18446744073709551615, MaxBlobsPerBlock: 6},
{Epoch: 18446744073709551615, MaxBlobsPerBlock: 9},
}
minimalConfig.InitializeForkSchedule()
return minimalConfig
}

View File

@@ -124,4 +124,13 @@ SLOTS_PER_EPOCH: 6
EPOCHS_PER_ETH1_VOTING_PERIOD: 2
MAX_SEED_LOOKAHEAD: 1
# Blob Scheduling
# ---------------------------------------------------------------
BLOB_SCHEDULE:
# Deneb
- EPOCH: 12
MAX_BLOBS_PER_BLOCK: 6
# Electra
- EPOCH: 14
MAX_BLOBS_PER_BLOCK: 9

View File

@@ -60,6 +60,11 @@ func E2ETestConfig() *BeaconChainConfig {
e2eConfig.ElectraForkVersion = []byte{5, 0, 0, 253}
e2eConfig.FuluForkVersion = []byte{6, 0, 0, 253}
e2eConfig.BlobSchedule = []BlobScheduleEntry{
{Epoch: 12, MaxBlobsPerBlock: 6},
{Epoch: 14, MaxBlobsPerBlock: 9},
}
e2eConfig.InitializeForkSchedule()
return e2eConfig
}
@@ -109,6 +114,11 @@ func E2EMainnetTestConfig() *BeaconChainConfig {
// Deneb changes.
e2eConfig.MinPerEpochChurnLimit = 2
e2eConfig.BlobSchedule = []BlobScheduleEntry{
{Epoch: 12, MaxBlobsPerBlock: 6},
{Epoch: 14, MaxBlobsPerBlock: 9},
}
e2eConfig.InitializeForkSchedule()
return e2eConfig
}

View File

@@ -46,6 +46,10 @@ func HoleskyConfig() *BeaconChainConfig {
cfg.TerminalTotalDifficulty = "0"
cfg.DepositContractAddress = "0x4242424242424242424242424242424242424242"
cfg.EjectionBalance = 28000000000
cfg.BlobSchedule = []BlobScheduleEntry{
{Epoch: 29696, MaxBlobsPerBlock: 6},
{Epoch: 115968, MaxBlobsPerBlock: 9},
}
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -53,6 +53,10 @@ func HoodiConfig() *BeaconChainConfig {
cfg.FuluForkVersion = []byte{0x70, 0x00, 0x09, 0x10}
cfg.TerminalTotalDifficulty = "0"
cfg.DepositContractAddress = "0x00000000219ab540356cBB839Cbe05303d7705Fa"
cfg.BlobSchedule = []BlobScheduleEntry{
{Epoch: 0, MaxBlobsPerBlock: 6},
{Epoch: 2048, MaxBlobsPerBlock: 9},
}
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -51,6 +51,10 @@ func SepoliaConfig() *BeaconChainConfig {
cfg.TerminalTotalDifficulty = "17000000000000000"
cfg.DepositContractAddress = "0x7f02C3E3c98b133055B8B348B2Ac625669Ed295D"
cfg.DefaultBuilderGasLimit = uint64(60000000)
cfg.BlobSchedule = []BlobScheduleEntry{
{Epoch: 132608, MaxBlobsPerBlock: 6},
{Epoch: 222464, MaxBlobsPerBlock: 9},
}
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -2,6 +2,8 @@ package blocks
import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)
@@ -57,6 +59,21 @@ func (dc *RODataColumn) BlockRoot() [fieldparams.RootLength]byte {
return dc.root
}
// Slot returns the slot of the data column sidecar.
func (dc *RODataColumn) Slot() primitives.Slot {
return dc.SignedBlockHeader.Header.Slot
}
// ParentRoot returns the parent root of the data column sidecar.
func (dc *RODataColumn) ParentRoot() [fieldparams.RootLength]byte {
return bytesutil.ToBytes32(dc.SignedBlockHeader.Header.ParentRoot)
}
// ProposerIndex returns the proposer index of the data column sidecar.
func (dc *RODataColumn) ProposerIndex() primitives.ValidatorIndex {
return dc.SignedBlockHeader.Header.ProposerIndex
}
// VerifiedRODataColumn represents an RODataColumn that has undergone full verification (eg block sig, inclusion proof, commitment check).
type VerifiedRODataColumn struct {
RODataColumn

View File

@@ -4,6 +4,7 @@ import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -118,8 +119,52 @@ func TestNewRODataColumnWithAndWithoutRoot(t *testing.T) {
func TestDataColumn_BlockRoot(t *testing.T) {
root := [fieldparams.RootLength]byte{1}
dataColumn := &RODataColumn{
root: root,
}
dataColumn := &RODataColumn{root: root}
assert.Equal(t, root, dataColumn.BlockRoot())
}
func TestDataColumn_Slot(t *testing.T) {
slot := primitives.Slot(1)
dataColumn := &RODataColumn{
DataColumnSidecar: &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: slot,
},
},
},
}
assert.Equal(t, slot, dataColumn.Slot())
}
func TestDataColumn_ParentRoot(t *testing.T) {
root := [fieldparams.RootLength]byte{1}
dataColumn := &RODataColumn{
DataColumnSidecar: &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: root[:],
},
},
},
}
assert.Equal(t, root, dataColumn.ParentRoot())
}
func TestDataColumn_ProposerIndex(t *testing.T) {
proposerIndex := primitives.ValidatorIndex(1)
dataColumn := &RODataColumn{
DataColumnSidecar: &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ProposerIndex: proposerIndex,
},
},
},
}
assert.Equal(t, proposerIndex, dataColumn.ProposerIndex())
}

24
crypto/random/BUILD.bazel Normal file
View File

@@ -0,0 +1,24 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["random.go"],
importpath = "github.com/OffchainLabs/prysm/v6/crypto/random",
visibility = ["//visibility:public"],
deps = [
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["random_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
],
)

45
crypto/random/random.go Normal file
View File

@@ -0,0 +1,45 @@
package random
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/sirupsen/logrus"
)
// DeterministicRandomness creates a deterministic 32 byte array from a seed
func DeterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// GetRandFieldElement returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := DeterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// GetRandBlob returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
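A hedged consumer-side sketch: because these helpers now live in crypto/random, test code can import them directly instead of routing through testing/util (the package and function names below are illustrative):
package anytest
import "github.com/OffchainLabs/prysm/v6/crypto/random"
func deterministicFixture() {
    blob := random.GetRandBlob(42)
    same := random.GetRandBlob(42) // identical bytes for the same seed
    _ = blob
    _ = same
}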

View File

@@ -0,0 +1,61 @@
package random
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
GoKZG "github.com/crate-crypto/go-kzg-4844"
)
func TestDeterministicRandomness(t *testing.T) {
seed := int64(123)
r1 := DeterministicRandomness(seed)
r2 := DeterministicRandomness(seed)
assert.DeepEqual(t, r1, r2, "Same seed should produce same output")
// Test different seeds produce different outputs
r3 := DeterministicRandomness(seed + 1)
assert.NotEqual(t, r1, r3, "Different seeds should produce different outputs")
}
func TestGetRandFieldElement(t *testing.T) {
seed := int64(123)
r1 := GetRandFieldElement(seed)
r2 := GetRandFieldElement(seed)
assert.DeepEqual(t, r1, r2, "Same seed should produce same output")
// Test different seeds produce different outputs
r3 := GetRandFieldElement(seed + 1)
assert.NotEqual(t, r1, r3, "Different seeds should produce different outputs")
}
func TestGetRandBlob(t *testing.T) {
seed := int64(123)
r1 := GetRandBlob(seed)
r2 := GetRandBlob(seed)
assert.DeepEqual(t, r1, r2, "Same seed should produce same blob")
expectedSize := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
assert.Equal(t, expectedSize, len(r1), "Blob should have correct size")
r3 := GetRandBlob(seed + 1)
assert.NotEqual(t, r1, r3, "Different seeds should produce different blobs")
}
func TestGetRandBlobElements(t *testing.T) {
seed := int64(123)
blob := GetRandBlob(seed)
// Check that each field element in the blob matches what we'd get from GetRandFieldElement
for i := 0; i < GoKZG.ScalarsPerBlob; i++ {
start := i * GoKZG.SerializedScalarSize
end := start + GoKZG.SerializedScalarSize
blobElement := [32]byte{}
copy(blobElement[:], blob[start:end])
expectedElement := GetRandFieldElement(seed + int64(i*GoKZG.SerializedScalarSize))
require.DeepEqual(t, expectedElement, blobElement, "Field element in blob doesn't match expected value")
}
}

View File

@@ -11,6 +11,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//monitoring/tracing/trace:go_default_library",
"//runtime/version:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_otel//:go_default_library",
"@io_opentelemetry_go_otel//attribute:go_default_library",

View File

@@ -8,6 +8,7 @@ import (
"time"
prysmTrace "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
@@ -51,6 +52,7 @@ func Setup(ctx context.Context, serviceName, processName, endpoint string, sampl
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
attribute.String("process_name", processName),
attribute.String("build", version.BuildData()),
),
),
)

View File

@@ -10,7 +10,6 @@ import (
reflect "reflect"
sync "sync"
_ "github.com/OffchainLabs/prysm/v6/proto/eth/ext"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
)
@@ -107,38 +106,37 @@ var file_proto_engine_v1_fulu_proto_rawDesc = []byte{
0x0a, 0x1a, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76,
0x31, 0x2f, 0x66, 0x75, 0x6c, 0x75, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12, 0x65, 0x74,
0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31,
0x1a, 0x1b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x74, 0x68, 0x2f, 0x65, 0x78, 0x74, 0x2f,
0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x26, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x65,
0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x9d, 0x02, 0x0a, 0x13, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74,
0x69, 0x6f, 0x6e, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x46, 0x75, 0x6c, 0x75, 0x12, 0x43, 0x0a,
0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x29,
0x1a, 0x26, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76,
0x31, 0x2f, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x65, 0x6e, 0x67, 0x69,
0x6e, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x9d, 0x02, 0x0a, 0x13, 0x45, 0x78, 0x65,
0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x46, 0x75, 0x6c, 0x75,
0x12, 0x43, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x29, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e,
0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x44, 0x65, 0x6e, 0x65, 0x62, 0x52, 0x07, 0x70, 0x61,
0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02,
0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x44, 0x0a, 0x0c, 0x62,
0x6c, 0x6f, 0x62, 0x73, 0x5f, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x21, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6f, 0x62, 0x73, 0x42, 0x75, 0x6e, 0x64,
0x6c, 0x65, 0x56, 0x32, 0x52, 0x0b, 0x62, 0x6c, 0x6f, 0x62, 0x73, 0x42, 0x75, 0x6e, 0x64, 0x6c,
0x65, 0x12, 0x36, 0x0a, 0x17, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x5f, 0x6f, 0x76, 0x65, 0x72,
0x72, 0x69, 0x64, 0x65, 0x5f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x18, 0x04, 0x20, 0x01,
0x28, 0x08, 0x52, 0x15, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x4f, 0x76, 0x65, 0x72, 0x72, 0x69,
0x64, 0x65, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x2d, 0x0a, 0x12, 0x65, 0x78, 0x65,
0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x18,
0x05, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x42, 0x8e, 0x01, 0x0a, 0x16, 0x6f, 0x72, 0x67,
0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x44, 0x65, 0x6e, 0x65, 0x62, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f,
0x61, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x44, 0x0a, 0x0c, 0x62, 0x6c, 0x6f, 0x62,
0x73, 0x5f, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x21,
0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6f, 0x62, 0x73, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x56,
0x32, 0x52, 0x0b, 0x62, 0x6c, 0x6f, 0x62, 0x73, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x12, 0x36,
0x0a, 0x17, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x5f, 0x6f, 0x76, 0x65, 0x72, 0x72, 0x69, 0x64,
0x65, 0x5f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x18, 0x04, 0x20, 0x01, 0x28, 0x08, 0x52,
0x15, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x4f, 0x76, 0x65, 0x72, 0x72, 0x69, 0x64, 0x65, 0x42,
0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x2d, 0x0a, 0x12, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74,
0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x18, 0x05, 0x20, 0x03,
0x28, 0x0c, 0x52, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x73, 0x42, 0x8e, 0x01, 0x0a, 0x16, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74,
0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31,
0x42, 0x0c, 0x45, 0x6c, 0x65, 0x63, 0x74, 0x72, 0x61, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01,
0x5a, 0x3a, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79,
0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d,
0x2f, 0x76, 0x35, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
0x2f, 0x76, 0x31, 0x3b, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x76, 0x31, 0xaa, 0x02, 0x12, 0x45,
0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x56,
0x31, 0xca, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x5c, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x2e, 0x76, 0x31, 0x42, 0x0c, 0x45, 0x6c, 0x65, 0x63, 0x74, 0x72, 0x61, 0x50, 0x72, 0x6f, 0x74,
0x6f, 0x50, 0x01, 0x5a, 0x3a, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72,
0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x76, 0x31, 0xaa,
0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x6e, 0x67, 0x69, 0x6e,
0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c,
0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x5c, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x33,
}
var (

View File

@@ -2,7 +2,6 @@ syntax = "proto3";
package ethereum.engine.v1;
import "proto/eth/ext/options.proto";
import "proto/engine/v1/execution_engine.proto";
option csharp_namespace = "Ethereum.Engine.V1";

View File

@@ -537,6 +537,61 @@ func (x *DataColumnSidecarsByRangeRequest) GetColumns() []uint64 {
return nil
}
type LightClientUpdatesByRangeRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
StartPeriod uint64 `protobuf:"varint,1,opt,name=start_period,json=startPeriod,proto3" json:"start_period,omitempty"`
Count uint64 `protobuf:"varint,2,opt,name=count,proto3" json:"count,omitempty"`
}
func (x *LightClientUpdatesByRangeRequest) Reset() {
*x = LightClientUpdatesByRangeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *LightClientUpdatesByRangeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*LightClientUpdatesByRangeRequest) ProtoMessage() {}
func (x *LightClientUpdatesByRangeRequest) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use LightClientUpdatesByRangeRequest.ProtoReflect.Descriptor instead.
func (*LightClientUpdatesByRangeRequest) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{8}
}
func (x *LightClientUpdatesByRangeRequest) GetStartPeriod() uint64 {
if x != nil {
return x.StartPeriod
}
return 0
}
func (x *LightClientUpdatesByRangeRequest) GetCount() uint64 {
if x != nil {
return x.Count
}
return 0
}
var File_proto_prysm_v1alpha1_p2p_messages_proto protoreflect.FileDescriptor
var file_proto_prysm_v1alpha1_p2p_messages_proto_rawDesc = []byte{
@@ -655,17 +710,23 @@ var file_proto_prysm_v1alpha1_p2p_messages_proto_rawDesc = []byte{
0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x21, 0x0a,
0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x04, 0x42, 0x07,
0x92, 0xb5, 0x18, 0x03, 0x31, 0x32, 0x38, 0x52, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73,
0x42, 0x9a, 0x01, 0x0a, 0x19, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
0x6d, 0x2e, 0x65, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x10,
0x50, 0x32, 0x50, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f,
0x50, 0x01, 0x5a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f,
0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73,
0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d,
0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x3b, 0x65, 0x74, 0x68, 0xaa, 0x02, 0x15,
0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x74, 0x68, 0x2e, 0x56, 0x31, 0x61,
0x6c, 0x70, 0x68, 0x61, 0x31, 0xca, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d,
0x5c, 0x45, 0x74, 0x68, 0x5c, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x62, 0x06, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x33,
0x22, 0x5b, 0x0a, 0x20, 0x4c, 0x69, 0x67, 0x68, 0x74, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x55,
0x70, 0x64, 0x61, 0x74, 0x65, 0x73, 0x42, 0x79, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x70, 0x65,
0x72, 0x69, 0x6f, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0b, 0x73, 0x74, 0x61, 0x72,
0x74, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x42, 0x9a, 0x01,
0x0a, 0x19, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65,
0x74, 0x68, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x10, 0x50, 0x32, 0x50,
0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a,
0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63,
0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76,
0x36, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31,
0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x3b, 0x65, 0x74, 0x68, 0xaa, 0x02, 0x15, 0x45, 0x74, 0x68,
0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x74, 0x68, 0x2e, 0x56, 0x31, 0x61, 0x6c, 0x70, 0x68,
0x61, 0x31, 0xca, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x74,
0x68, 0x5c, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var (
@@ -680,7 +741,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP() []byte {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescData
}
var file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes = make([]protoimpl.MessageInfo, 8)
var file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_proto_prysm_v1alpha1_p2p_messages_proto_goTypes = []interface{}{
(*Status)(nil), // 0: ethereum.eth.v1alpha1.Status
(*BeaconBlocksByRangeRequest)(nil), // 1: ethereum.eth.v1alpha1.BeaconBlocksByRangeRequest
@@ -690,6 +751,7 @@ var file_proto_prysm_v1alpha1_p2p_messages_proto_goTypes = []interface{}{
(*MetaDataV2)(nil), // 5: ethereum.eth.v1alpha1.MetaDataV2
(*BlobSidecarsByRangeRequest)(nil), // 6: ethereum.eth.v1alpha1.BlobSidecarsByRangeRequest
(*DataColumnSidecarsByRangeRequest)(nil), // 7: ethereum.eth.v1alpha1.DataColumnSidecarsByRangeRequest
(*LightClientUpdatesByRangeRequest)(nil), // 8: ethereum.eth.v1alpha1.LightClientUpdatesByRangeRequest
}
var file_proto_prysm_v1alpha1_p2p_messages_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
@@ -801,6 +863,18 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
return nil
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*LightClientUpdatesByRangeRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
@@ -808,7 +882,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_proto_prysm_v1alpha1_p2p_messages_proto_rawDesc,
NumEnums: 0,
NumMessages: 8,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -13,13 +13,13 @@ option java_package = "org.ethereum.eth.v1alpha1";
option php_namespace = "Ethereum\\Eth\\v1alpha1";
message Status {
bytes fork_digest = 1 [ (ethereum.eth.ext.ssz_size) = "4" ];
bytes finalized_root = 2 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes fork_digest = 1 [(ethereum.eth.ext.ssz_size) = "4"];
bytes finalized_root = 2 [(ethereum.eth.ext.ssz_size) = "32"];
uint64 finalized_epoch = 3 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Epoch"
];
bytes head_root = 4 [ (ethereum.eth.ext.ssz_size) = "32" ];
bytes head_root = 4 [(ethereum.eth.ext.ssz_size) = "32"];
uint64 head_slot = 5 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Slot"
@@ -36,8 +36,8 @@ message BeaconBlocksByRangeRequest {
}
message ENRForkID {
bytes current_fork_digest = 1 [ (ethereum.eth.ext.ssz_size) = "4" ];
bytes next_fork_version = 2 [ (ethereum.eth.ext.ssz_size) = "4" ];
bytes current_fork_digest = 1 [(ethereum.eth.ext.ssz_size) = "4"];
bytes next_fork_version = 2 [(ethereum.eth.ext.ssz_size) = "4"];
uint64 next_fork_epoch = 3 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Epoch"
@@ -138,5 +138,17 @@ message DataColumnSidecarsByRangeRequest {
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Slot"
];
uint64 count = 2;
repeated uint64 columns = 3 [ (ethereum.eth.ext.ssz_max) = "128" ];
repeated uint64 columns = 3 [(ethereum.eth.ext.ssz_max) = "128"];
}
/*
Spec Definition:
(
start_period: uint64
count: uint64
)
*/
message LightClientUpdatesByRangeRequest {
uint64 start_period = 1;
uint64 count = 2;
}
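A hedged sketch of building this request from Go using the generated type shown earlier in this diff (the ethpb alias matches how the generated v1alpha1 package is imported elsewhere here; the values are illustrative):
func buildLCUpdatesByRangeRequest() *ethpb.LightClientUpdatesByRangeRequest {
    // Ask for four sync-committee periods of light client updates, starting at period 10.
    return &ethpb.LightClientUpdatesByRangeRequest{
        StartPeriod: 10,
        Count:       4,
    }
}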

View File

@@ -1,7 +1,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.31.0
// protoc v4.25.1
// protoc-gen-go v1.33.0
// protoc v3.21.7
// source: proto/testing/test.proto
package testing

View File

@@ -2,7 +2,10 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["blob.go"],
srcs = [
"blob.go",
"data_column.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/runtime/logging",
visibility = ["//visibility:public"],
deps = [

View File

@@ -0,0 +1,23 @@
package logging
import (
"fmt"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/sirupsen/logrus"
)
// DataColumnFields extracts a standard set of fields from a DataColumnSidecar into a logrus.Fields struct
// which can be passed to log.WithFields.
func DataColumnFields(column blocks.RODataColumn) logrus.Fields {
kzgCommitmentCount := len(column.KzgCommitments)
return logrus.Fields{
"slot": column.Slot(),
"propIdx": column.ProposerIndex(),
"blockRoot": fmt.Sprintf("%#x", column.BlockRoot())[:8],
"parentRoot": fmt.Sprintf("%#x", column.ParentRoot())[:8],
"kzgCommitmentCount": kzgCommitmentCount,
"colIdx": column.Index,
}
}
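A hedged usage sketch from within the same package (the caller and log message are illustrative; code outside this package would import runtime/logging and call logging.DataColumnFields):
func logColumnReceived(column blocks.RODataColumn) {
    // Attach the standard sidecar fields to a structured log line.
    logrus.WithFields(DataColumnFields(column)).Debug("Received data column sidecar")
}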

View File

@@ -256,7 +256,13 @@ func (w *Web3RemoteSigner) UnderlyingProcess() *os.Process {
func createTestnetDir() (string, error) {
testNetDir := e2e.TestParams.TestPath + "/web3signer-testnet"
configPath := filepath.Join(testNetDir, "config.yaml")
rawYaml := params.ConfigToYaml(params.BeaconConfig())
// TODO: add blob schedule back in as soon as web3signer supports it!
configCopy := params.BeaconConfig().Copy()
configCopy.BlobSchedule = nil
// ---
rawYaml := params.ConfigToYaml(configCopy)
// Add in deposit contract in yaml
depContractStr := fmt.Sprintf("\nDEPOSIT_CONTRACT_ADDRESS: %s\n", params.BeaconConfig().DepositContractAddress)

View File

@@ -28,26 +28,34 @@ var getRequests = map[string]endpoint{
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
})),
// we want to test comma-separated query params
"/beacon/states/{param1}/validators?id=0,1": newMetadata[structs.GetValidatorsResponse](
"/beacon/states/{param1}/validators": newMetadata[structs.GetValidatorsResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withQueryParams(func(_ primitives.Epoch) []string {
return []string{"id=0,1"}
})),
"/beacon/states/{param1}/validators/{param2}": newMetadata[structs.GetValidatorResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head", "0"}
})),
"/beacon/states/{param1}/validator_balances?id=0,1": newMetadata[structs.GetValidatorBalancesResponse](
"/beacon/states/{param1}/validator_balances": newMetadata[structs.GetValidatorBalancesResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withQueryParams(func(_ primitives.Epoch) []string {
return []string{"id=0,1"}
})),
"/beacon/states/{param1}/committees?index=0": newMetadata[structs.GetCommitteesResponse](
"/beacon/states/{param1}/committees": newMetadata[structs.GetCommitteesResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withQueryParams(func(_ primitives.Epoch) []string {
return []string{"index=0"}
})),
"/beacon/states/{param1}/sync_committees": newMetadata[structs.GetSyncCommitteeResponse](
v1PathTemplate,
@@ -60,6 +68,27 @@ var getRequests = map[string]endpoint{
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
})),
"/beacon/states/{param1}/pending_consolidations": newMetadata[structs.GetPendingConsolidationsResponse](
v1PathTemplate,
withStart(params.BeaconConfig().ElectraForkEpoch),
withSsz(),
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
})),
"/beacon/states/{param1}/pending_deposits": newMetadata[structs.GetPendingDepositsResponse](
v1PathTemplate,
withStart(params.BeaconConfig().ElectraForkEpoch),
withSsz(),
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
})),
"/beacon/states/{param1}/pending_partial_withdrawals": newMetadata[structs.GetPendingPartialWithdrawalsResponse](
v1PathTemplate,
withStart(params.BeaconConfig().ElectraForkEpoch),
withSsz(),
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
})),
"/beacon/headers": newMetadata[structs.GetBlockHeadersResponse](v1PathTemplate),
"/beacon/headers/{param1}": newMetadata[structs.GetBlockHeaderResponse](
v1PathTemplate,
@@ -93,6 +122,10 @@ var getRequests = map[string]endpoint{
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
})),
"/beacon/rewards/block/{param1}": newMetadata[structs.BlockRewardsResponse](
v1PathTemplate,
withStart(params.BeaconConfig().AltairForkEpoch),
withParams(func(_ primitives.Epoch) []string { return []string{"head"} })),
"/beacon/blinded_blocks/{param1}": newMetadata[structs.GetBlockV2Response](
v1PathTemplate,
withSsz(),
@@ -125,11 +158,11 @@ var getRequests = map[string]endpoint{
withCustomEval(func(p interface{}, lh interface{}) error {
pResp, ok := p.(*structs.GetForkScheduleResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.GetForkScheduleResponse{}, p)
return fmt.Errorf(msgWrongJSON, &structs.GetForkScheduleResponse{}, p)
}
lhResp, ok := lh.(*structs.GetForkScheduleResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.GetForkScheduleResponse{}, lh)
return fmt.Errorf(msgWrongJSON, &structs.GetForkScheduleResponse{}, lh)
}
// remove all forks with far-future epoch
for i := len(pResp.Data) - 1; i >= 0; i-- {
@@ -175,7 +208,7 @@ var getRequests = map[string]endpoint{
withCustomEval(func(p interface{}, _ interface{}) error {
pResp, ok := p.(*structs.GetVersionResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.ListAttestationsResponse{}, p)
return fmt.Errorf(msgWrongJSON, &structs.ListAttestationsResponse{}, p)
}
if pResp.Data == nil {
return errEmptyPrysmData
@@ -194,11 +227,11 @@ var getRequests = map[string]endpoint{
withCustomEval(func(p interface{}, lh interface{}) error {
pResp, ok := p.(*structs.GetProposerDutiesResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.GetProposerDutiesResponse{}, p)
return fmt.Errorf(msgWrongJSON, &structs.GetProposerDutiesResponse{}, p)
}
lhResp, ok := lh.(*structs.GetProposerDutiesResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.GetProposerDutiesResponse{}, lh)
return fmt.Errorf(msgWrongJSON, &structs.GetProposerDutiesResponse{}, lh)
}
if pResp.Data == nil {
return errEmptyPrysmData
@@ -212,57 +245,86 @@ var getRequests = map[string]endpoint{
}
return compareJSON(pResp, lhResp)
})),
"/validator/blocks/{param1}?randao_reveal=0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505": newMetadata[structs.ProduceBlockV3Response](
"/validator/blocks/{param1}": newMetadata[structs.ProduceBlockV3Response](
v3PathTemplate,
withSanityCheckOnly(),
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{strconv.FormatUint(uint64(currentEpoch)*uint64(params.BeaconConfig().SlotsPerEpoch)+uint64(params.BeaconConfig().SlotsPerEpoch)/2+1, 10)}
}),
withQueryParams(func(_ primitives.Epoch) []string {
return []string{
"randao_reveal=0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
}
})),
}
var postRequests = map[string]endpoint{
"/beacon/states/{param1}/validators": newMetadata[structs.GetValidatorsResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withPOSTObj(func() interface{} {
return struct {
Ids []string `json:"ids"`
Statuses []string `json:"statuses"`
}{Ids: []string{"0", "1"}, Statuses: nil}
}())),
"/beacon/states/{param1}/validator_balances": newMetadata[structs.GetValidatorBalancesResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withPOSTObj(func() []string {
return []string{"0", "1"}
}())),
"/validator/duties/attester/{param1}": newMetadata[structs.GetAttesterDutiesResponse](
v1PathTemplate,
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{fmt.Sprintf("%v", currentEpoch)}
}),
withPOSTObj(func() []string {
validatorIndices := make([]string, 64)
for i := range validatorIndices {
validatorIndices[i] = fmt.Sprintf("%d", i)
}
return validatorIndices
}())),
"/validator/duties/sync/{param1}": newMetadata[structs.GetSyncCommitteeDutiesResponse](
v1PathTemplate,
withStart(params.AltairE2EForkEpoch),
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{fmt.Sprintf("%v", currentEpoch)}
}),
withPOSTObj(func() []string {
validatorIndices := make([]string, 64)
for i := range validatorIndices {
validatorIndices[i] = fmt.Sprintf("%d", i)
}
return validatorIndices
}())),
}
var (
postRequests = map[string]endpoint{
"/beacon/states/{param1}/validators": newMetadata[structs.GetValidatorsResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withPOSTObj(func() interface{} {
return struct {
Ids []string `json:"ids"`
Statuses []string `json:"statuses"`
}{Ids: []string{"0", "1"}, Statuses: nil}
}())),
"/beacon/states/{param1}/validator_balances": newMetadata[structs.GetValidatorBalancesResponse](
v1PathTemplate,
withParams(func(_ primitives.Epoch) []string {
return []string{"head"}
}),
withPOSTObj(func() []string {
return []string{"0", "1"}
}())),
"/beacon/states/{param1}/validator_identities": newMetadata[structs.GetValidatorIdentitiesResponse](
v1PathTemplate,
withSanityCheckOnly(), // LH doesn't support the endpoint
withSsz(),
withParams(func(_ primitives.Epoch) []string { return []string{"head"} }),
withPOSTObj([]string{"0", "1"})),
"/beacon/rewards/sync_committee/{param1}": newMetadata[structs.SyncCommitteeRewardsResponse](
v1PathTemplate,
withStart(params.BeaconConfig().AltairForkEpoch),
withParams(func(_ primitives.Epoch) []string { return []string{"head"} })),
"/beacon/rewards/attestations/{param1}": newMetadata[structs.AttestationRewardsResponse](
v1PathTemplate,
withStart(params.BeaconConfig().AltairForkEpoch),
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{fmt.Sprintf("%v", currentEpoch-2)}
})),
"/validator/duties/attester/{param1}": newMetadata[structs.GetAttesterDutiesResponse](
v1PathTemplate,
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{fmt.Sprintf("%v", currentEpoch)}
}),
withPOSTObj(func() []string {
validatorIndices := make([]string, 64)
for i := range validatorIndices {
validatorIndices[i] = fmt.Sprintf("%d", i)
}
return validatorIndices
}())),
"/validator/duties/sync/{param1}": newMetadata[structs.GetSyncCommitteeDutiesResponse](
v1PathTemplate,
withStart(params.AltairE2EForkEpoch),
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{fmt.Sprintf("%v", currentEpoch)}
}),
withPOSTObj(func() []string {
validatorIndices := make([]string, 64)
for i := range validatorIndices {
validatorIndices[i] = fmt.Sprintf("%d", i)
}
return validatorIndices
}())),
"/validator/liveness/{param1}": newMetadata[structs.GetLivenessResponse](
v1PathTemplate,
withParams(func(currentEpoch primitives.Epoch) []string {
return []string{fmt.Sprintf("%v", currentEpoch)}
}),
withPOSTObj([]string{"0", "1"})),
}
)
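Based on the pathFromParams call sites later in this diff (pathFromParams(path, params, queryParams)), the assembled request path for the validators entry above should be the same request the old hard-coded map key encoded; a sketch under that assumption:
func validatorsPath() string {
    // Assumption: pathFromParams substitutes the {paramN} placeholders, then appends the query string after "?".
    // Expected result: "/beacon/states/head/validators?id=0,1".
    return pathFromParams("/beacon/states/{param1}/validators", []string{"head"}, []string{"id=0,1"})
}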

View File

@@ -18,23 +18,26 @@ type endpoint interface {
setPOSTObj(obj interface{})
getPResp() interface{} // retrieves the Prysm JSON response
getLHResp() interface{} // retrieves the Lighthouse JSON response
getParams(epoch primitives.Epoch) []string
setParams(f func(primitives.Epoch) []string)
getParams(currentEpoch primitives.Epoch) []string
setParams(f func(currentEpoch primitives.Epoch) []string)
getQueryParams(currentEpoch primitives.Epoch) []string
setQueryParams(f func(currentEpoch primitives.Epoch) []string)
getCustomEval() func(interface{}, interface{}) error
setCustomEval(f func(interface{}, interface{}) error)
}
type apiEndpoint[Resp any] struct {
basePath string
sanity bool
ssz bool
start primitives.Epoch
postObj interface{}
pResp *Resp // Prysm JSON response
lhResp *Resp // Lighthouse JSON response
sszResp []byte // Prysm SSZ response
params func(currentEpoch primitives.Epoch) []string
customEval func(interface{}, interface{}) error
basePath string
sanity bool
ssz bool
start primitives.Epoch
postObj interface{}
pResp *Resp // Prysm JSON response
lhResp *Resp // Lighthouse JSON response
sszResp []byte // Prysm SSZ response
params func(currentEpoch primitives.Epoch) []string
queryParams func(currentEpoch primitives.Epoch) []string
customEval func(interface{}, interface{}) error
}
func (e *apiEndpoint[Resp]) getBasePath() string {
@@ -89,17 +92,28 @@ func (e *apiEndpoint[Resp]) getLHResp() interface{} {
return e.lhResp
}
func (e *apiEndpoint[Resp]) getParams(epoch primitives.Epoch) []string {
func (e *apiEndpoint[Resp]) getParams(currentEpoch primitives.Epoch) []string {
if e.params == nil {
return nil
}
return e.params(epoch)
return e.params(currentEpoch)
}
func (e *apiEndpoint[Resp]) setParams(f func(currentEpoch primitives.Epoch) []string) {
e.params = f
}
func (e *apiEndpoint[Resp]) getQueryParams(currentEpoch primitives.Epoch) []string {
if e.queryParams == nil {
return nil
}
return e.queryParams(currentEpoch)
}
func (e *apiEndpoint[Resp]) setQueryParams(f func(currentEpoch primitives.Epoch) []string) {
e.queryParams = f
}
func (e *apiEndpoint[Resp]) getCustomEval() func(interface{}, interface{}) error {
return e.customEval
}
@@ -157,6 +171,13 @@ func withParams(f func(currentEpoch primitives.Epoch) []string) endpointOpt {
}
}
// We specify query parameters.
func withQueryParams(f func(currentEpoch primitives.Epoch) []string) endpointOpt {
return func(e endpoint) {
e.setQueryParams(f)
}
}
// We perform custom evaluation on responses.
func withCustomEval(f func(interface{}, interface{}) error) endpointOpt {
return func(e endpoint) {

View File

@@ -19,13 +19,13 @@ var (
)
const (
msgWrongJson = "JSON response has wrong structure, expected %T, got %T"
msgWrongJSON = "JSON response has wrong structure, expected %T, got %T"
msgRequestFailed = "%s request failed with response code %d with response body %s"
msgUnknownNode = "unknown node type %s"
msgSSZUnmarshalFailed = "failed to unmarshal SSZ"
)
func doJSONGetRequest(template, requestPath string, beaconNodeIdx int, resp interface{}, bnType ...string) error {
func doJSONGETRequest(template, requestPath string, beaconNodeIdx int, resp interface{}, bnType ...string) error {
if len(bnType) == 0 {
bnType = []string{"Prysm"}
}
@@ -41,32 +41,34 @@ func doJSONGetRequest(template, requestPath string, beaconNodeIdx int, resp inte
}
basePath := fmt.Sprintf(template, port+beaconNodeIdx)
httpResp, err := http.Get(
basePath + requestPath,
)
httpResp, err := http.Get(basePath + requestPath)
if err != nil {
return err
return errors.Wrap(err, "request failed")
}
var body interface{}
if httpResp.StatusCode != http.StatusOK {
if httpResp.Header.Get("Content-Type") == api.JsonMediaType {
if err = json.NewDecoder(httpResp.Body).Decode(&body); err != nil {
return err
return errors.Wrap(err, "failed to decode response body")
}
} else {
defer closeBody(httpResp.Body)
body, err = io.ReadAll(httpResp.Body)
if err != nil {
return err
return errors.Wrap(err, "failed to read response body")
}
}
return fmt.Errorf(msgRequestFailed, bnType[0], httpResp.StatusCode, body)
}
return json.NewDecoder(httpResp.Body).Decode(&resp)
if err := json.NewDecoder(httpResp.Body).Decode(&resp); err != nil {
return errors.Wrap(err, "failed to decode response body")
}
return nil
}
func doSSZGetRequest(template, requestPath string, beaconNodeIdx int, bnType ...string) ([]byte, error) {
func doSSZGETRequest(template, requestPath string, beaconNodeIdx int, bnType ...string) ([]byte, error) {
if len(bnType) == 0 {
bnType = []string{"Prysm"}
}
@@ -85,30 +87,30 @@ func doSSZGetRequest(template, requestPath string, beaconNodeIdx int, bnType ...
req, err := http.NewRequest(http.MethodGet, basePath+requestPath, http.NoBody)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "failed to build request")
}
req.Header.Set("Accept", "application/octet-stream")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "request failed")
}
if resp.StatusCode != http.StatusOK {
var body interface{}
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
return nil, err
return nil, errors.Wrap(err, "failed to decode response body")
}
return nil, fmt.Errorf(msgRequestFailed, bnType[0], resp.StatusCode, body)
}
defer closeBody(resp.Body)
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "failed to read response body")
}
return body, nil
}
func doJSONPostRequest(template, requestPath string, beaconNodeIdx int, postObj, resp interface{}, bnType ...string) error {
func doJSONPOSTRequest(template, requestPath string, beaconNodeIdx int, postObj, resp interface{}, bnType ...string) error {
if len(bnType) == 0 {
bnType = []string{"Prysm"}
}
@@ -126,7 +128,7 @@ func doJSONPostRequest(template, requestPath string, beaconNodeIdx int, postObj,
basePath := fmt.Sprintf(template, port+beaconNodeIdx)
b, err := json.Marshal(postObj)
if err != nil {
return err
return errors.Wrap(err, "failed to marshal POST object")
}
httpResp, err := http.Post(
basePath+requestPath,
@@ -134,25 +136,76 @@ func doJSONPostRequest(template, requestPath string, beaconNodeIdx int, postObj,
bytes.NewBuffer(b),
)
if err != nil {
return err
return errors.Wrap(err, "request failed")
}
var body interface{}
if httpResp.StatusCode != http.StatusOK {
if httpResp.Header.Get("Content-Type") == api.JsonMediaType {
if err = json.NewDecoder(httpResp.Body).Decode(&body); err != nil {
return err
return errors.Wrap(err, "failed to decode response body")
}
} else {
defer closeBody(httpResp.Body)
body, err = io.ReadAll(httpResp.Body)
if err != nil {
return err
return errors.Wrap(err, "failed to read response body")
}
}
return fmt.Errorf(msgRequestFailed, bnType[0], httpResp.StatusCode, body)
}
return json.NewDecoder(httpResp.Body).Decode(&resp)
if err := json.NewDecoder(httpResp.Body).Decode(&resp); err != nil {
return errors.Wrap(err, "failed to decode response body")
}
return nil
}
func doSSZPOSTRequest(template, requestPath string, beaconNodeIdx int, postObj interface{}, bnType ...string) ([]byte, error) {
if len(bnType) == 0 {
bnType = []string{"Prysm"}
}
var port int
switch bnType[0] {
case "Prysm":
port = params.TestParams.Ports.PrysmBeaconNodeHTTPPort
case "Lighthouse":
port = params.TestParams.Ports.LighthouseBeaconNodeHTTPPort
default:
return nil, fmt.Errorf(msgUnknownNode, bnType[0])
}
basePath := fmt.Sprintf(template, port+beaconNodeIdx)
b, err := json.Marshal(postObj)
if err != nil {
return nil, errors.Wrap(err, "failed to marshal POST object")
}
req, err := http.NewRequest(http.MethodPost, basePath+requestPath, bytes.NewBuffer(b))
if err != nil {
return nil, errors.Wrap(err, "failed to build request")
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Accept", "application/octet-stream")
resp, err := http.DefaultClient.Do(req)
if err != nil {
return nil, errors.Wrap(err, "request failed")
}
if resp.StatusCode != http.StatusOK {
var body interface{}
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
return nil, errors.Wrap(err, "failed to decode response body")
}
return nil, fmt.Errorf(msgRequestFailed, bnType[0], resp.StatusCode, body)
}
defer closeBody(resp.Body)
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, errors.Wrap(err, "failed to read response body")
}
return body, nil
}
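// --- Illustrative sketch, not part of this diff ---
// A hedged, self-contained restatement of the SSZ POST pattern introduced by
// doSSZPOSTRequest above: JSON request body in, raw SSZ bytes out. The URL is
// a placeholder.
// Assumed imports: bytes, encoding/json, fmt, io, net/http, github.com/pkg/errors.
func postForSSZ(url string, postObj interface{}) ([]byte, error) {
	b, err := json.Marshal(postObj)
	if err != nil {
		return nil, errors.Wrap(err, "failed to marshal POST object")
	}
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(b))
	if err != nil {
		return nil, errors.Wrap(err, "failed to build request")
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/octet-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, errors.Wrap(err, "request failed")
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status code %d", resp.StatusCode)
	}
	return io.ReadAll(resp.Body)
}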
func closeBody(body io.Closer) {


@@ -48,7 +48,7 @@ func verify(_ *e2etypes.EvaluationContext, conns ...*grpc.ClientConn) error {
func run(nodeIdx int) error {
genesisResp := &structs.GetGenesisResponse{}
if err := doJSONGetRequest(v1PathTemplate, "/beacon/genesis", nodeIdx, genesisResp); err != nil {
if err := doJSONGETRequest(v1PathTemplate, "/beacon/genesis", nodeIdx, genesisResp); err != nil {
return errors.Wrap(err, "error getting genesis data")
}
genesisTime, err := strconv.ParseInt(genesisResp.Data.GenesisTime, 10, 64)
@@ -61,21 +61,19 @@ func run(nodeIdx int) error {
if currentEpoch < m.getStart() {
continue
}
apiPath := path
if m.getParams(currentEpoch) != nil {
apiPath = pathFromParams(path, m.getParams(currentEpoch))
}
apiPath := pathFromParams(path, m.getParams(currentEpoch), m.getQueryParams(currentEpoch))
if m.sanityCheckOnlyEnabled() {
resp := m.getPResp()
if err = doJSONGetRequest(m.getBasePath(), apiPath, nodeIdx, resp); err != nil {
if err = doJSONGETRequest(m.getBasePath(), apiPath, nodeIdx, resp); err != nil {
return errors.Wrapf(err, "issue during Prysm JSON GET request for path %s", apiPath)
}
if resp == nil {
return fmt.Errorf("nil response from Prysm JSON GET request for path %s", apiPath)
}
if m.sszEnabled() {
sszResp, err := doSSZGetRequest(m.getBasePath(), apiPath, nodeIdx)
sszResp, err := doSSZGETRequest(m.getBasePath(), apiPath, nodeIdx)
if err != nil {
return errors.Wrapf(err, "issue during Prysm SSZ GET request for path %s", apiPath)
}
@@ -101,12 +99,37 @@ func run(nodeIdx int) error {
if currentEpoch < m.getStart() {
continue
}
apiPath := path
if m.getParams(currentEpoch) != nil {
apiPath = pathFromParams(path, m.getParams(currentEpoch))
}
if err = comparePOSTJSON(nodeIdx, m.getBasePath(), apiPath, m.getPOSTObj(), m.getPResp(), m.getLHResp(), m.getCustomEval()); err != nil {
return err
apiPath := pathFromParams(path, m.getParams(currentEpoch), m.getQueryParams(currentEpoch))
if m.sanityCheckOnlyEnabled() {
resp := m.getPResp()
if err = doJSONPOSTRequest(m.getBasePath(), apiPath, nodeIdx, m.getPOSTObj(), resp); err != nil {
return errors.Wrapf(err, "issue during Prysm JSON POST request for path %s", apiPath)
}
if resp == nil {
return fmt.Errorf("nil response from Prysm JSON POST request for path %s", apiPath)
}
if m.sszEnabled() {
sszResp, err := doSSZPOSTRequest(m.getBasePath(), apiPath, nodeIdx, m.getPOSTObj())
if err != nil {
return errors.Wrapf(err, "issue during Prysm SSZ POST request for path %s", apiPath)
}
if sszResp == nil {
return fmt.Errorf("nil response from Prysm SSZ POST request for path %s", apiPath)
}
}
} else {
if err = comparePOSTJSON(nodeIdx, m.getBasePath(), apiPath, m.getPOSTObj(), m.getPResp(), m.getLHResp(), m.getCustomEval()); err != nil {
return err
}
if m.sszEnabled() {
b, err := comparePOSTSSZ(nodeIdx, m.getBasePath(), apiPath, m.getPOSTObj())
if err != nil {
return err
}
m.setSszResp(b)
}
}
}
@@ -179,12 +202,12 @@ func postEvaluation(nodeIdx int, requests map[string]endpoint, epoch primitives.
blockHeaderData := requests["/beacon/headers/{param1}"]
header, ok := blockHeaderData.getPResp().(*structs.GetBlockHeaderResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.GetBlockHeaderResponse{}, blockHeaderData.getPResp())
return fmt.Errorf(msgWrongJSON, &structs.GetBlockHeaderResponse{}, blockHeaderData.getPResp())
}
dutiesData := requests["/validator/duties/proposer/{param1}"]
duties, ok := dutiesData.getPResp().(*structs.GetProposerDutiesResponse)
if !ok {
return fmt.Errorf(msgWrongJson, &structs.GetProposerDutiesResponse{}, dutiesData.getPResp())
return fmt.Errorf(msgWrongJSON, &structs.GetProposerDutiesResponse{}, dutiesData.getPResp())
}
if header.Data.Root != duties.DependentRoot {
return fmt.Errorf("header root %s does not match duties root %s ", header.Data.Root, duties.DependentRoot)
@@ -200,14 +223,32 @@ func postEvaluation(nodeIdx int, requests map[string]endpoint, epoch primitives.
return fmt.Errorf("health check response's status code is %d", resp.StatusCode)
}
syncingData := requests["/node/syncing"]
sync, ok := syncingData.getPResp().(*structs.SyncStatusResponse)
if !ok {
return fmt.Errorf(msgWrongJSON, &structs.SyncStatusResponse{}, syncingData.getPResp())
}
headSlot := sync.Data.HeadSlot
// get attestation data (it needs the current slot)
if err = compareGETJSON(
nodeIdx,
v1PathTemplate,
fmt.Sprintf("/validator/attestation_data?slot=%s&committee_index=0", headSlot),
&structs.AttestationData{},
&structs.AttestationData{},
nil); err != nil {
return err
}
return nil
}
func compareGETJSON(nodeIdx int, base, path string, pResp, lhResp interface{}, customEval func(interface{}, interface{}) error) error {
if err := doJSONGetRequest(base, path, nodeIdx, pResp); err != nil {
if err := doJSONGETRequest(base, path, nodeIdx, pResp); err != nil {
return errors.Wrapf(err, "issue during Prysm JSON GET request for path %s", path)
}
if err := doJSONGetRequest(base, path, nodeIdx, lhResp, "Lighthouse"); err != nil {
if err := doJSONGETRequest(base, path, nodeIdx, lhResp, "Lighthouse"); err != nil {
return errors.Wrapf(err, "issue during Lighthouse JSON GET request for path %s", path)
}
if pResp == nil {
@@ -224,10 +265,10 @@ func compareGETJSON(nodeIdx int, base, path string, pResp, lhResp interface{}, c
}
func comparePOSTJSON(nodeIdx int, base, path string, postObj, pResp, lhResp interface{}, customEval func(interface{}, interface{}) error) error {
if err := doJSONPostRequest(base, path, nodeIdx, postObj, pResp); err != nil {
if err := doJSONPOSTRequest(base, path, nodeIdx, postObj, pResp); err != nil {
return errors.Wrapf(err, "issue during Prysm JSON POST request for path %s", path)
}
if err := doJSONPostRequest(base, path, nodeIdx, postObj, lhResp, "Lighthouse"); err != nil {
if err := doJSONPOSTRequest(base, path, nodeIdx, postObj, lhResp, "Lighthouse"); err != nil {
return errors.Wrapf(err, "issue during Lighthouse JSON POST request for path %s", path)
}
if pResp == nil {
@@ -244,11 +285,11 @@ func comparePOSTJSON(nodeIdx int, base, path string, postObj, pResp, lhResp inte
}
func compareGETSSZ(nodeIdx int, base, path string) ([]byte, error) {
pResp, err := doSSZGetRequest(base, path, nodeIdx)
pResp, err := doSSZGETRequest(base, path, nodeIdx)
if err != nil {
return nil, errors.Wrapf(err, "issue during Prysm SSZ GET request for path %s", path)
}
lhResp, err := doSSZGetRequest(base, path, nodeIdx, "Lighthouse")
lhResp, err := doSSZGETRequest(base, path, nodeIdx, "Lighthouse")
if err != nil {
return nil, errors.Wrapf(err, "issue during Lighthouse SSZ GET request for path %s", path)
}
@@ -258,6 +299,21 @@ func compareGETSSZ(nodeIdx int, base, path string) ([]byte, error) {
return pResp, nil
}
func comparePOSTSSZ(nodeIdx int, base, path string, postObj interface{}) ([]byte, error) {
pResp, err := doSSZPOSTRequest(base, path, nodeIdx, postObj)
if err != nil {
return nil, errors.Wrapf(err, "issue during Prysm SSZ POST request for path %s", path)
}
lhResp, err := doSSZPOSTRequest(base, path, nodeIdx, postObj, "Lighthouse")
if err != nil {
return nil, errors.Wrapf(err, "issue during Lighthouse SSZ POST request for path %s", path)
}
if !bytes.Equal(pResp, lhResp) {
return nil, errors.New("Prysm SSZ response does not match Lighthouse SSZ response")
}
return pResp, nil
}
func compareJSON(pResp, lhResp interface{}) error {
if !reflect.DeepEqual(pResp, lhResp) {
p, err := json.Marshal(pResp)
@@ -273,10 +329,17 @@ func compareJSON(pResp, lhResp interface{}) error {
return nil
}
func pathFromParams(path string, params []string) string {
func pathFromParams(path string, params []string, queryParams []string) string {
apiPath := path
for i := range params {
apiPath = strings.Replace(apiPath, fmt.Sprintf("{param%d}", i+1), params[i], 1)
}
for i := range queryParams {
if i == 0 {
apiPath = apiPath + "?" + queryParams[i]
} else {
apiPath = apiPath + "&" + queryParams[i]
}
}
return apiPath
}
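// --- Illustrative usage sketch, not part of this diff ---
// Shows how the extended pathFromParams composes path and query parameters;
// the concrete values are made up, and the sketch assumes it sits in the same
// package as the helper above.
func examplePathFromParams() string {
	// Yields "/beacon/headers/head?slot=32&committee_index=0".
	return pathFromParams(
		"/beacon/headers/{param1}",
		[]string{"head"},
		[]string{"slot=32", "committee_index=0"},
	)
}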


@@ -72,7 +72,11 @@ func feeRecipientIsPresent(_ *types.EvaluationContext, conns ...*grpc.ClientConn
if err != nil {
return errors.Wrap(err, "failed to get chain head")
}
req := &ethpb.ListBlocksRequest{QueryFilter: &ethpb.ListBlocksRequest_Epoch{Epoch: chainHead.HeadEpoch.Sub(1)}}
epoch := chainHead.HeadEpoch
if epoch > 0 {
epoch--
}
req := &ethpb.ListBlocksRequest{QueryFilter: &ethpb.ListBlocksRequest_Epoch{Epoch: epoch}}
blks, err := client.ListBeaconBlocks(context.Background(), req)
if err != nil {
return errors.Wrap(err, "failed to list blocks")
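// --- Illustrative sketch, not part of this diff ---
// The guard above keeps the evaluator from underflowing when the chain is
// still in epoch 0; a minimal restatement with a plain uint64 (the real code
// uses the consensus Epoch type).
func previousEpochOrGenesis(head uint64) uint64 {
	if head > 0 {
		return head - 1
	}
	return 0
}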


@@ -1,20 +1,14 @@
# LightClient
tests/mainnet/altair/light_client/single_merkle_proof
tests/mainnet/bellatrix/light_client/single_merkle_proof
tests/mainnet/capella/light_client/single_merkle_proof
tests/mainnet/deneb/light_client/single_merkle_proof
tests/minimal/altair/light_client/single_merkle_proof
tests/minimal/altair/light_client/sync
tests/minimal/altair/light_client/update_ranking
tests/minimal/bellatrix/light_client/single_merkle_proof
tests/minimal/bellatrix/light_client/sync
tests/minimal/bellatrix/light_client/update_ranking
tests/minimal/capella/light_client/single_merkle_proof
tests/minimal/capella/light_client/sync
tests/minimal/capella/light_client/update_ranking
tests/minimal/deneb/light_client/single_merkle_proof
tests/minimal/deneb/light_client/sync
tests/minimal/deneb/light_client/update_ranking
tests/minimal/electra/light_client/sync
tests/minimal/altair/light_client/data_collection
tests/minimal/bellatrix/light_client/data_collection
tests/minimal/capella/light_client/data_collection
tests/minimal/deneb/light_client/data_collection
tests/minimal/electra/light_client/data_collection
# SSZ Generic
tests/general/phase0/ssz_generic/basic_vector

Some files were not shown because too many files have changed in this diff.