Compare commits

...

35 Commits

Author SHA1 Message Date
terence tsao
6544f23e8f Sort by descending 2025-05-21 14:21:14 -07:00
terence tsao
b78839f6fa Log blob limit increase 2025-05-21 13:05:49 -07:00
terence tsao
df6e09d8a0 Uncomment blob schedule from placeholder fields for test 2025-05-21 09:43:21 -07:00
terence tsao
da62dc682c Add fallback 2025-05-21 08:44:01 -07:00
james-prysm
6a01fb22ec Adding a temporary fix until web3signer supports the blob schedule in config 2025-05-21 08:44:01 -07:00
terence tsao
643a5f5965 Fix e2e test 2025-05-21 08:44:01 -07:00
terence tsao
26e199faac Feedback 2025-05-21 08:44:01 -07:00
terence tsao
1a02d0592f Add blob schedule support 2025-05-21 08:44:01 -07:00
Bastin
b20821dd8e Fix wording/typo in lc spec tests (#15307)
* fix wording

* changelog
2025-05-21 11:02:45 +00:00
Manu NALEPA
e2f0b057b0 Implement data columns filesystem. (#15257)
* Logging: Add `DataColumnFields`.

* `RODataColumn`: Implement `Slot`, `ParentRoot` and `ProposerIndex`.

* Implement verification for data column sidecars.

* Add changelog.

* Fix Terence's comment.

* Implement data columns filesystem.

* Reduce const verbosity (Kasey's comment).

* `prune`: Acquire `mu` only when needed.

* `Save`: Call `dcs.DataColumnFeed.Send` in a goroutine.

* Fix Terence's comment.

* Fix Terence's comment.

* Fix Terence's comment.

* Fix Terence's comment.

* Fix Terence's comment.

* Implement a new solution for subscriber.

* Fix Kasey's comment.
2025-05-21 09:35:48 +00:00
Manu NALEPA
3d4e2c5568 Implement data column sidecars verifications. (#15232)
* Logging: Add `DataColumnFields`.

* `RODataColumn`: Implement `Slot`, `ParentRoot` and `ProposerIndex`.

* Implement verification for data column sidecars.

* Add changelog.

* Fix Terence's comment.

* Fix Terence's comment.

* `SidecarProposerExpected`: Stop returning "sidecar was not proposed by the expected proposer_index" when there is any error in the function.

* `SidecarProposerExpected` & `ValidProposerSignature`: Cache the parent state.

* `VerifyDataColumnsSidecarKZGProofs`: Add benchmarks.

* Fix Kasey's comment.

* Add additional benchmark.

* Fix Kasey's comment.

* Fix Kasey's comment.

* Fix Kasey's comment.

* Fix Preston's comment.

* Fix Preston's comment.

* Fix Preston's comment.
2025-05-20 21:15:29 +00:00
terence
fa744ff78f Update spec tests to v1.6.0-alpha.0 (#15306) 2025-05-20 20:52:09 +00:00
Radosław Kapka
bb5807fd08 Update Beacon API evaluator (#15304)
* Update Beacon API evaluator

* cleanup and changelog

* changelog message fix
2025-05-20 17:09:01 +00:00
Nilav Prajapati
d6bbfff8b7 Fix cyclical dependency (#15248)
* fix cyclical dependency

* fix bazel file

* fix comments

* fix: Break cyclical dependency by importing GetRandBlob directly from crypto/random

* add changelog
2025-05-20 16:19:36 +00:00
terence
a8ce85f8de Add fulu spec tests for operation and epoch processing (#15284)
* Add fulu spec tests for operation and epoch processing

* Fix operation test version

* Fix RunDepositRequestsTest
2025-05-20 16:18:04 +00:00
Bastin
00bb3ff2b8 Add Light Client Minimal Spec Tests Part 1 (#15297)
* single merkle proof - altair

* single merkle proof - bellatrix - mainnet

* single merkle proof - capella - mainnet

* single merkle proof - deneb - mainnet

* single merkle proof - electra - mainnet

* changelog entry

* typo

* merge all into common

* update ranking

* single merkle proof

* changelog entry

* update exclusions.txt
2025-05-20 16:08:41 +00:00
Bastin
edab145001 endpoint registration behind flag (#15303) 2025-05-20 12:29:07 +00:00
Preston Van Loon
7fd3902b75 Added prysm build data to otel tracing spans (#15302) 2025-05-19 17:04:30 +00:00
Joe Clapis
6b6370bc59 Added a symlink to the Docker image from /bin/sh -> /bin/bash (#15294)
* Added a symlink to the Docker image from /bin/sh -> /bin/bash

* Added changelog fragment

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-05-19 16:29:27 +00:00
Potuz
17204ca817 Use independent context when updating domain data (#15268)
* Use independent context when updating domain data

* Terence's review

* don't cancel immediately

* fix e2e panic

* add new context for roles
2025-05-19 16:14:03 +00:00
Bastin
5bbcfe5237 Enable Light Client req/resp domain (#15281)
* add rpc topic mappings and types

* placeholder funcs

* bootstrap write chunk

* add rpc topic mappings and types

* placeholder funcs

* bootstrap write chunk

* deps

* add messageMapping entries

* add handlers and register RPC

* deps

* tests

* read context in tests

* add tests

* add flag and changelog entry

* fix linter

* deps

* increase topic count in ratelimiter test

* handle flag
2025-05-19 15:15:13 +00:00
terence
c1b99b74c7 Use current slot helper whenever possible (#15301) 2025-05-18 16:06:54 +00:00
terence
f02955676b Remove unused protobuf import (#15300) 2025-05-18 16:05:00 +00:00
Bastin
1dea6857d5 Add Light Client Mainnet Spec Tests (#15295)
* single merkle proof - altair

* single merkle proof - bellatrix - mainnet

* single merkle proof - capella - mainnet

* single merkle proof - deneb - mainnet

* single merkle proof - electra - mainnet

* changelog entry

* typo

* merge all into common

* fail for unsupported test types
2025-05-18 12:22:45 +00:00
terence
eafef8c7c8 Add fulu spec tests for sanity and reward (#15285)
* Add fulu spec tests for sanity and reward

* Delete setting electra fork epoch to 0

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-05-16 14:03:06 +00:00
terence
0b3289361c Add fulu spec tests for finality and merkle proof (#15286) 2025-05-15 23:59:11 +00:00
Radosław Kapka
c4abdef874 Add {state_id} to Prysm endpoints (#15245)
* Add `{state_id}` to Prysm endpoints

* Revert "Add `{state_id}` to Prysm endpoints"

This reverts commit 88670e9cc1.

* update changelog

* update bastin feedback

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-05-15 14:25:05 +00:00
james-prysm
64cbaec326 fixing attester slashing v2 endpoint (#15291) 2025-05-15 14:03:35 +00:00
james-prysm
63a0641957 fixing wrong pending consolidations handler for api (#15290) 2025-05-15 13:13:56 +00:00
Nishant Das
2737ace5a8 Disable Deposit Log processing after Deposit Requests are Activated (#15274)
* Disable Log Processing

* Changelog

* Kasey's review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-05-14 18:23:45 +00:00
Preston Van Loon
9e3d73c1c2 Changelog v6.0.1 (#15269)
* Run unclog for v6.0.1

* Changelog narrative

* Changelog fragment
2025-05-14 15:14:09 +00:00
terence
325ec97355 Add fulu spec tests for ssz static (#15279) 2025-05-14 14:00:38 +00:00
Preston Van Loon
eea53eb6dc tracing: Add spans to various methods related to GetDuties (#15271)
* Adding spans for troubleshooting GetDuties latency

* Changelog fragment
2025-05-13 21:50:12 +00:00
Preston Van Loon
6f9a93ac89 spectests: Fix sha256 (#15278)
* spectests: Fix sha256

* Changelog fragment
2025-05-13 19:01:20 +00:00
terence
93a5fdd8f3 Remove unused variables, functions and more (#15264) 2025-05-12 13:25:26 +00:00
330 changed files with 10139 additions and 1594 deletions


@@ -4,6 +4,54 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.0.1](https://github.com/prysmaticlabs/prysm/compare/v6.0.0...v6.0.1) - 2025-05-02
This release fixes two bugs related to the `payload_attributes` [event stream](https://ethereum.github.io/beacon-APIs/#/Events/eventstream). If you are using or planning to use this endpoint, upgrading to version 6.0.1 is mandatory.
Also, a reminder: like other Beacon API endpoints, a syncing node may return historical data labeled `finalized` or `head`. Depending on your application's needs, you should avoid relying on this data until the node is fully synced to the head of the chain.
We currently recommend against using the `--enable-beacon-rest-api` flag on Mainnet. As you may have noticed, we have put a deprecation notice on our gRPC code, in particular on gRPC-related flags, because we eventually want to remove gRPC and make REST HTTP the standard way for the validator client and the beacon node to communicate. That said, the REST option is still unstable and is therefore marked as experimental in the flag's description. We encourage everyone to keep using gRPC, which is currently the default. It is fine to test the REST option on testnets, but using it on Mainnet can lead to missed attestations and even missed blocks.
**Updating to v6.0.0 or later is required for Pectra Mainnet!**
This patch release contains a few important fixes on top of v6.0.0. Updating to this release is encouraged.
### Added
- `UpgradeToFulu` spectests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15190)
- PeerDAS related KZG wrappers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15186)
- Add light client p2p broadcaster functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15175)
- Added immediate broadcasting of proposer slashings when equivocating blocks are detected during block processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14693)
- Added 2 new errors: `HeadStateErr` and `ErrCouldNotVerifyBlockHeader`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14693)
- Implement pending consolidations Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15219)
- PeerDAS: Add needed proto files and corresponding generated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15187)
- Add light client p2p validator and subscriber functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15214)
### Changed
- Refactored internal function `reValidateSubscriptions` to `pruneSubscriptions` in `beacon-chain/sync/subscriber.go` for improved clarity, addressing a TODO comment. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15160)
- Updated geth to v1.15.9. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15216)
- Removed the slot from `UpdateDuties`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15223)
- Update hoodie bootnodes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15240)
### Fixed
- Avoid nondeterministic default fork value when generating genesis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15151)
- `UpgradeToFulu`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15190)
- Fixed a balance underflow in a leaking edge case with expected withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15191)
- Fixes our generated SSZ files to have the correct package name. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15199)
- Fixes our blob-sidecar-by-root request lists for Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15209)
- Ensure that the `payload_attributes` event has a consistent view of the head state by passing the head block in the event and using stategen to retrieve the corresponding state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15213)
- Pass parent context to update duties when dependent roots change. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15221)
- Process slots across epoch for payload attribute event. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15228)
- Extend the payload attribute computation deadline to the beginning of the proposal slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15230)
### Security
- Fix CVE-2025-22869. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-22870. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-22872. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
- Fix CVE-2025-30204. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15204)
## [v6.0.0](https://github.com/prysmaticlabs/prysm/compare/v5.3.2...v6.0.0) - 2025-04-21
This release introduces Mainnet support for the upcoming Electra + Prague (Pectra) fork. The fork is scheduled for mainnet epoch 364032 (May 7, 2025, 10:05:11 UTC). You MUST update Prysm Beacon Node, Prysm Validator Client, and your execution layer client to the Pectra ready release prior to the fork to stay on the correct chain.
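
The v6.0.1 notes above reference the `payload_attributes` topic of the standard Beacon API event stream. For readers unfamiliar with that endpoint, here is a minimal sketch of consuming it as server-sent events in Go. The path and topic name come from the Beacon API spec linked in the notes; the node address and port are assumptions (3500 is a common default for Prysm's HTTP API), and the line-oriented parsing is deliberately simplistic.

package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Subscribe to payload_attributes over the Beacon API event stream
	// (server-sent events). Adjust the address to your beacon node.
	resp, err := http.Get("http://localhost:3500/eth/v1/events?topics=payload_attributes")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 0, 1<<20), 1<<20) // individual events can be large
	for scanner.Scan() {
		// SSE frames arrive as "event: <topic>" / "data: <json>" line pairs.
		if data, ok := strings.CutPrefix(scanner.Text(), "data: "); ok {
			fmt.Println(data) // JSON payload attributes for the upcoming slot
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}

As the changelog fix descriptions note, consumers of this topic rely on a consistent view of the head state, which is why the two v6.0.1 bugs made upgrading mandatory for users of this endpoint.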


@@ -255,7 +255,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0"
consensus_spec_version = "v1.6.0-alpha.0"
bls_test_version = "v0.1.1"
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-JljxS/if/t0qvGWcf5CgsX+72fj90yGTg/uEgC56y7U=",
integrity = "sha256-W7oKvoM0nAkyitykRxAw6kmCvjYC01IqiNJy0AmCnMM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -287,7 +287,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NRba2h4zqb2LAXyDPglHTtkT4gVyuwpY708XmwXKXV8=",
integrity = "sha256-ig7/zxomjv6buBWMom4IxAJh3lFJ9+JnY44E7c8ZNP8=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -303,7 +303,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-hpbtKUbc3NHtVcUPk/Zm+Hn57G2ijI9qvXJwl9hc/tM=",
integrity = "sha256-mjx+MkXtPhCNv4c4knLYLIkvIdpF7WTjx/ElvGPQzSo=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -318,7 +318,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Wy3YcJxoXiKQwrGgJecrtjtdokc4X/VUNBmyQXJf0Oc=",
integrity = "sha256-u0RkIZIeGttb3sInR31mO64aBSwxALqO5SYIPlqEvPo=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)


@@ -241,7 +241,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
return nil, errors.Wrap(err, "error getting header from builder server")
}
bid, err := c.parseHeaderResponse(data, header)
bid, err := c.parseHeaderResponse(data, header, slot)
if err != nil {
return nil, errors.Wrapf(
err,
@@ -254,7 +254,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
return bid, nil
}
func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid, error) {
func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primitives.Slot) (SignedBid, error) {
var versionHeader string
if c.sszEnabled || header.Get(api.VersionHeader) != "" {
versionHeader = header.Get(api.VersionHeader)
@@ -276,7 +276,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid
}
if ver >= version.Electra {
return c.parseHeaderElectra(data)
return c.parseHeaderElectra(data, slot)
}
if ver >= version.Deneb {
return c.parseHeaderDeneb(data)
@@ -291,7 +291,7 @@ func (c *Client) parseHeaderResponse(data []byte, header http.Header) (SignedBid
return nil, fmt.Errorf("unsupported header version %s", versionHeader)
}
func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
func (c *Client) parseHeaderElectra(data []byte, slot primitives.Slot) (SignedBid, error) {
if c.sszEnabled {
sb := &ethpb.SignedBuilderBidElectra{}
if err := sb.UnmarshalSSZ(data); err != nil {
@@ -303,7 +303,7 @@ func (c *Client) parseHeaderElectra(data []byte) (SignedBid, error) {
if err := json.Unmarshal(data, hr); err != nil {
return nil, errors.Wrap(err, "could not unmarshal ExecHeaderResponseElectra JSON")
}
p, err := hr.ToProto()
p, err := hr.ToProto(slot)
if err != nil {
return nil, errors.Wrap(err, "could not convert ExecHeaderResponseElectra to proto")
}


@@ -532,7 +532,7 @@ func TestClient_GetHeader(t *testing.T) {
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseElectra{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
pro, err := epr.ToProto()
pro, err := epr.ToProto(100)
require.NoError(t, err)
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)


@@ -1284,8 +1284,8 @@ type ExecHeaderResponseElectra struct {
}
// ToProto creates a SignedBuilderBidElectra Proto from ExecHeaderResponseElectra.
func (ehr *ExecHeaderResponseElectra) ToProto() (*eth.SignedBuilderBidElectra, error) {
bb, err := ehr.Data.Message.ToProto()
func (ehr *ExecHeaderResponseElectra) ToProto(slot types.Slot) (*eth.SignedBuilderBidElectra, error) {
bb, err := ehr.Data.Message.ToProto(slot)
if err != nil {
return nil, err
}
@@ -1296,13 +1296,14 @@ func (ehr *ExecHeaderResponseElectra) ToProto() (*eth.SignedBuilderBidElectra, e
}
// ToProto creates a BuilderBidElectra Proto from BuilderBidElectra.
func (bb *BuilderBidElectra) ToProto() (*eth.BuilderBidElectra, error) {
func (bb *BuilderBidElectra) ToProto(slot types.Slot) (*eth.BuilderBidElectra, error) {
header, err := bb.Header.ToProto()
if err != nil {
return nil, err
}
if len(bb.BlobKzgCommitments) > params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra) {
return nil, fmt.Errorf("blob commitment count %d exceeds the maximum %d", len(bb.BlobKzgCommitments), params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra))
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(bb.BlobKzgCommitments) > maxBlobsPerBlock {
return nil, fmt.Errorf("blob commitment count %d exceeds the maximum %d", len(bb.BlobKzgCommitments), maxBlobsPerBlock)
}
kzgCommitments := make([][]byte, len(bb.BlobKzgCommitments))
for i, commit := range bb.BlobKzgCommitments {


@@ -14,22 +14,10 @@ import (
)
const (
EventHead = "head"
EventBlock = "block"
EventAttestation = "attestation"
EventVoluntaryExit = "voluntary_exit"
EventBlsToExecutionChange = "bls_to_execution_change"
EventProposerSlashing = "proposer_slashing"
EventAttesterSlashing = "attester_slashing"
EventFinalizedCheckpoint = "finalized_checkpoint"
EventChainReorg = "chain_reorg"
EventContributionAndProof = "contribution_and_proof"
EventLightClientFinalityUpdate = "light_client_finality_update"
EventLightClientOptimisticUpdate = "light_client_optimistic_update"
EventPayloadAttributes = "payload_attributes"
EventBlobSidecar = "blob_sidecar"
EventError = "error"
EventConnectionError = "connection_error"
EventHead = "head"
EventError = "error"
EventConnectionError = "connection_error"
)
var (


@@ -29,9 +29,8 @@ go_test(
embed = [":go_default_library"],
deps = [
"//consensus-types/blocks:go_default_library",
"//crypto/random:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -1,16 +1,12 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/crypto/random"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/sirupsen/logrus"
)
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
@@ -41,7 +37,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := getRandBlob(123)
blob := random.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
@@ -49,36 +45,3 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}


@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
mathutil "github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -104,11 +105,16 @@ func (c *CommitteeCache) CompressCommitteeCache() {
// Committee fetches the shuffled indices by slot and committee index. Every list of indices
// represents one committee. Returns true if the list exists for the slot and committee index. Otherwise returns false, nil.
func (c *CommitteeCache) Committee(ctx context.Context, slot primitives.Slot, seed [32]byte, index primitives.CommitteeIndex) ([]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "committeeCache.Committee")
defer span.End()
span.SetAttributes(trace.Int64Attribute("slot", int64(slot)), trace.Int64Attribute("index", int64(index))) // lint:ignore uintcast -- OK for tracing.
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
span.SetAttributes(trace.BoolAttribute("cache_hit", exists))
if exists {
CommitteeCacheHit.Inc()
} else {
@@ -157,11 +163,14 @@ func (c *CommitteeCache) AddCommitteeShuffledList(ctx context.Context, committee
// ActiveIndices returns the active indices of a given seed stored in cache.
func (c *CommitteeCache) ActiveIndices(ctx context.Context, seed [32]byte) ([]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "committeeCache.ActiveIndices")
defer span.End()
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.CommitteeCache.Get(key(seed))
span.SetAttributes(trace.BoolAttribute("cache_hit", exists))
if exists {
CommitteeCacheHit.Inc()
} else {


@@ -12,7 +12,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -240,16 +239,3 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
}
return nil
}
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte
if IsPreBellatrixVersion(blk.Version()) {
return payloadHash, nil
}
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash()), nil
}


@@ -187,9 +187,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
// VerifyCheckpointEpoch checks whether the checkpoint's epoch is within the current or previous epoch
// with respect to the current time. Returns true if it is, false if it's not.
func VerifyCheckpointEpoch(c *ethpb.Checkpoint, genesis time.Time) bool {
now := uint64(prysmTime.Now().Unix())
genesisTime := uint64(genesis.Unix())
currentSlot := primitives.Slot((now - genesisTime) / params.BeaconConfig().SecondsPerSlot)
currentSlot := slots.CurrentSlot(uint64(genesis.Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch primitives.Epoch
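
The refactor above (from the "Use current slot helper whenever possible" commit) replaces inline slot arithmetic with `slots.CurrentSlot`. As a sanity check on why the helper is a drop-in replacement, here is a sketch of the arithmetic the removed lines performed; the function name and clamping behavior are illustrative, and the real helper is assumed to encapsulate the same computation.

// currentSlotFromGenesis mirrors the arithmetic removed above: the
// current slot is the number of whole SecondsPerSlot intervals that
// have elapsed since genesis.
func currentSlotFromGenesis(genesisUnix, nowUnix, secondsPerSlot uint64) uint64 {
	if nowUnix < genesisUnix || secondsPerSlot == 0 {
		return 0 // before genesis (or misconfigured): clamp to slot 0
	}
	return (nowUnix - genesisUnix) / secondsPerSlot
}

With mainnet's 12-second slots, for example, a timestamp 25 seconds after genesis falls in slot 2, and `slots.ToEpoch` then maps that slot to its epoch as in the code above.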


@@ -119,6 +119,9 @@ func attestationCommittees(
// BeaconCommittees returns the list of all beacon committees for a given state at a given slot.
func BeaconCommittees(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) ([][]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "helpers.BeaconCommittees")
defer span.End()
epoch := slots.ToEpoch(slot)
activeCount, err := ActiveValidatorCount(ctx, state, epoch)
if err != nil {
@@ -245,6 +248,9 @@ func BeaconCommittee(
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) ([]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "helpers.BeaconCommittee")
defer span.End()
committee, err := committeeCache.Committee(ctx, slot, seed, committeeIndex)
if err != nil {
return nil, errors.Wrap(err, "could not interface with committee cache")
@@ -439,6 +445,9 @@ func CommitteeIndices(committeeBits bitfield.Bitfield) []primitives.CommitteeInd
// UpdateCommitteeCache gets called at the beginning of every epoch to cache the committee shuffled indices
// list with committee index and epoch number. It caches the shuffled indices for the input epoch.
func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState, e primitives.Epoch) error {
ctx, span := trace.StartSpan(ctx, "committeeCache.UpdateCommitteeCache")
defer span.End()
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err


@@ -102,6 +102,9 @@ func checkValidatorSlashable(activationEpoch, withdrawableEpoch primitives.Epoch
// """
// return [ValidatorIndex(i) for i, v in enumerate(state.validators) if is_active_validator(v, epoch)]
func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]primitives.ValidatorIndex, error) {
ctx, span := trace.StartSpan(ctx, "helpers.ActiveValidatorIndices")
defer span.End()
seed, err := Seed(s, epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return nil, errors.Wrap(err, "could not get seed")


@@ -2,6 +2,7 @@ package peerdas_test
import (
"crypto/rand"
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
@@ -52,51 +53,15 @@ func TestVerifyDataColumnSidecar(t *testing.T) {
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
const (
blobCount = 6
seed = 0
)
err := kzg.Start()
require.NoError(t, err)
generateSidecars := func(t *testing.T) []*ethpb.DataColumnSidecar {
const blobCount = int64(6)
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
blobs := make([]kzg.Blob, 0, blobCount)
for i := range blobCount {
blob := getRandBlob(i)
commitment, _, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
commitments = append(commitments, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = commitments
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
require.NoError(t, err)
return sidecars
}
generateRODataColumnSidecars := func(t *testing.T, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateSidecars(t)
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
@@ -105,7 +70,7 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
})
t.Run("nominal", func(t *testing.T) {
sidecars := generateSidecars(t)
sidecars := generateRandomSidecars(t, seed, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
@@ -281,6 +246,96 @@ func TestCustodyGroupCountFromRecord(t *testing.T) {
})
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testing.B) {
const blobCount = 12
err := kzg.Start()
require.NoError(b, err)
b.StopTimer()
b.ResetTimer()
for i := range int64(b.N) {
// Generate new random sidecars to ensure the KZG backend does not cache anything.
sidecars := generateRandomSidecars(b, i, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars)
for _, sidecar := range roDataColumnSidecars {
sidecars := []blocks.RODataColumn{sidecar}
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
b.StopTimer()
require.NoError(b, err)
}
}
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.B) {
const blobCount = 12
numberOfColumns := int64(params.BeaconConfig().NumberOfColumns)
err := kzg.Start()
require.NoError(b, err)
columnsCounts := []int64{1, 2, 4, 8, 16, 32, 64, 128}
for i, columnsCount := range columnsCounts {
b.Run(fmt.Sprintf("columnsCount_%d", columnsCount), func(b *testing.B) {
b.StopTimer()
b.ResetTimer()
for j := range int64(b.N) {
allSidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
for k := int64(0); k < numberOfColumns; k += columnsCount {
// Use different seeds to generate different blobs/commitments
seed := int64(b.N*i) + numberOfColumns*j + blobCount*k
sidecars := generateRandomSidecars(b, seed, blobCount)
// Pick sidecars.
allSidecars = append(allSidecars, sidecars[k:k+columnsCount]...)
}
roDataColumnSidecars := generateRODataColumnSidecars(b, allSidecars)
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
b.StopTimer()
require.NoError(b, err)
}
})
}
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing.B) {
const (
blobCount = 12
// columnsCount*batchCount = 128
columnsCount = 4
batchCount = 32
)
err := kzg.Start()
require.NoError(b, err)
b.StopTimer()
b.ResetTimer()
for i := range int64(b.N) {
allSidecars := make([][]blocks.RODataColumn, 0, batchCount)
for j := range int64(batchCount) {
// Use different seeds to generate different blobs/commitments
sidecars := generateRandomSidecars(b, int64(batchCount)*i+j*blobCount, blobCount)
roDataColumnSidecars := generateRODataColumnSidecars(b, sidecars[:columnsCount])
allSidecars = append(allSidecars, roDataColumnSidecars)
}
for _, sidecars := range allSidecars {
b.StartTimer()
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
b.StopTimer()
require.NoError(b, err)
}
}
}
func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgProofs [][]byte) blocks.RODataColumn {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
@@ -302,3 +357,42 @@ func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgPr
return roSidecar
}
func generateRandomSidecars(t testing.TB, seed, blobCount int64) []*ethpb.DataColumnSidecar {
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
blobs := make([]kzg.Blob, 0, blobCount)
for i := range blobCount {
subSeed := seed + i
blob := getRandBlob(subSeed)
commitment, err := generateCommitment(&blob)
require.NoError(t, err)
commitments = append(commitments, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = commitments
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
require.NoError(t, err)
return sidecars
}
func generateRODataColumnSidecars(t testing.TB, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}


@@ -8,18 +8,30 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func generateCommitment(blob *kzg.Blob) (*kzg.Commitment, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, errors.Wrap(err, "blob to kzg commitment")
}
return &commitment, nil
}
func generateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}


@@ -46,6 +46,7 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",


@@ -27,7 +27,9 @@ import (
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
prysmTrace "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/otel/trace"
)
@@ -291,6 +293,8 @@ func ProcessSlotsCore(ctx context.Context, span trace.Span, state state.BeaconSt
tracing.AnnotateError(span, err)
return nil, errors.Wrap(err, "failed to upgrade state")
}
logBlobLimitIncrease(state.Slot())
}
return state, nil
}
@@ -507,3 +511,19 @@ func ProcessEpochPrecompute(ctx context.Context, state state.BeaconState) (state
}
return state, nil
}
func logBlobLimitIncrease(slot primitives.Slot) {
if !slots.IsEpochStart(slot) {
return
}
epoch := slots.ToEpoch(slot)
for _, entry := range params.BeaconConfig().BlobSchedule {
if entry.Epoch == epoch {
log.WithFields(logrus.Fields{
"epoch": epoch,
"blobLimit": entry.MaxBlobsPerBlock,
}).Info("Blob limit updated")
}
}
}
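
The `logBlobLimitIncrease` hook above scans `params.BeaconConfig().BlobSchedule` at each epoch start, and two commits at the top of this range ("Sort by descending", "Add fallback") describe how that schedule is ordered and defaulted. A plausible sketch of how an epoch-dependent blob limit resolves against such a schedule is shown below. The `Epoch` and `MaxBlobsPerBlock` fields mirror what the diff uses; the lookup function and the fallback parameter are illustrative assumptions, not Prysm's actual implementation.

// blobScheduleEntry mirrors the fields referenced by logBlobLimitIncrease.
type blobScheduleEntry struct {
	Epoch            uint64
	MaxBlobsPerBlock int
}

// maxBlobsAt assumes the schedule is sorted by Epoch in descending
// order: the first entry whose activation epoch is at or before the
// queried epoch is the most recently activated one, so a single
// forward scan suffices. The fallback covers epochs before any entry
// activates (an assumed default, not Prysm's).
func maxBlobsAt(schedule []blobScheduleEntry, epoch uint64, fallback int) int {
	for _, entry := range schedule {
		if entry.Epoch <= epoch {
			return entry.MaxBlobsPerBlock
		}
	}
	return fallback
}

Descending order makes this lookup a single scan that stops at the newest activated entry, which would explain why the schedule is sorted that way.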


@@ -5,6 +5,9 @@ go_library(
srcs = [
"blob.go",
"cache.go",
"data_column.go",
"data_column_cache.go",
"doc.go",
"iteration.go",
"layout.go",
"layout_by_epoch.go",
@@ -17,6 +20,8 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//async:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -41,6 +46,8 @@ go_test(
srcs = [
"blob_test.go",
"cache_test.go",
"data_column_cache_test.go",
"data_column_test.go",
"iteration_test.go",
"layout_test.go",
"migration_test.go",
@@ -50,6 +57,7 @@ go_test(
deps = [
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",

File diff suppressed because it is too large.


@@ -0,0 +1,220 @@
package filesystem
import (
"sync"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/pkg/errors"
)
var errDataColumnIndexOutOfBounds = errors.New("data column index too high")
// DataColumnStorageSummary represents cached information about the DataColumnSidecars on disk for each root the cache knows about.
type DataColumnStorageSummary struct {
epoch primitives.Epoch
mask [fieldparams.NumberOfColumns]bool
}
// NewDataColumnStorageSummary creates a new DataColumnStorageSummary for a given epoch and mask.
func NewDataColumnStorageSummary(epoch primitives.Epoch, mask [fieldparams.NumberOfColumns]bool) DataColumnStorageSummary {
return DataColumnStorageSummary{
epoch: epoch,
mask: mask,
}
}
// HasIndex returns true if the DataColumnSidecar at the given index is available in the filesystem.
func (s DataColumnStorageSummary) HasIndex(index uint64) bool {
if index >= uint64(fieldparams.NumberOfColumns) {
return false
}
return s.mask[index]
}
// Count returns the number of available data columns.
func (s DataColumnStorageSummary) Count() uint64 {
count := uint64(0)
for _, available := range s.mask {
if available {
count++
}
}
return count
}
// AllAvailable returns true if we have all data columns for corresponding indices.
func (s DataColumnStorageSummary) AllAvailable(indices map[uint64]bool) bool {
if len(indices) > len(s.mask) {
return false
}
for index := range indices {
if !s.mask[index] {
return false
}
}
return true
}
// DataColumnStorageSummarizer can be used to receive a summary of metadata about data columns on disk for a given root.
// The DataColumnStorageSummary can be used to check which indices (if any) are available for a given block by root.
type DataColumnStorageSummarizer interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
}
type dataColumnStorageSummaryCache struct {
mu sync.RWMutex
dataColumnCount float64
lowestCachedEpoch primitives.Epoch
highestCachedEpoch primitives.Epoch
cache map[[fieldparams.RootLength]byte]DataColumnStorageSummary
}
var _ DataColumnStorageSummarizer = &dataColumnStorageSummaryCache{}
func newDataColumnStorageSummaryCache() *dataColumnStorageSummaryCache {
return &dataColumnStorageSummaryCache{
cache: make(map[[fieldparams.RootLength]byte]DataColumnStorageSummary),
lowestCachedEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Summary returns the DataColumnStorageSummary for `root`.
// The DataColumnStorageSummary can be used to check for the presence of DataColumnSidecars based on Index.
func (sc *dataColumnStorageSummaryCache) Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary {
sc.mu.RLock()
defer sc.mu.RUnlock()
return sc.cache[root]
}
func (sc *dataColumnStorageSummaryCache) HighestEpoch() primitives.Epoch {
sc.mu.RLock()
defer sc.mu.RUnlock()
return sc.highestCachedEpoch
}
// set updates the cache.
func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent) error {
numberOfColumns := params.BeaconConfig().NumberOfColumns
sc.mu.Lock()
defer sc.mu.Unlock()
summary := sc.cache[dataColumnsIdent.Root]
summary.epoch = dataColumnsIdent.Epoch
count := uint64(0)
for _, index := range dataColumnsIdent.Indices {
if index >= numberOfColumns {
return errDataColumnIndexOutOfBounds
}
if summary.mask[index] {
continue
}
count++
summary.mask[index] = true
sc.lowestCachedEpoch = min(sc.lowestCachedEpoch, dataColumnsIdent.Epoch)
sc.highestCachedEpoch = max(sc.highestCachedEpoch, dataColumnsIdent.Epoch)
}
sc.cache[dataColumnsIdent.Root] = summary
countFloat := float64(count)
sc.dataColumnCount += countFloat
dataColumnDiskCount.Set(sc.dataColumnCount)
dataColumnWrittenCounter.Add(countFloat)
return nil
}
// get returns the DataColumnStorageSummary for the given block root.
// If the root is not in the cache, the second return value will be false.
func (sc *dataColumnStorageSummaryCache) get(blockRoot [fieldparams.RootLength]byte) (DataColumnStorageSummary, bool) {
sc.mu.RLock()
defer sc.mu.RUnlock()
v, ok := sc.cache[blockRoot]
return v, ok
}
// evict removes the DataColumnStorageSummary for the given block root from the cache.
func (s *dataColumnStorageSummaryCache) evict(blockRoot [fieldparams.RootLength]byte) int {
deleted := 0
s.mu.Lock()
defer s.mu.Unlock()
summary, ok := s.cache[blockRoot]
if !ok {
return 0
}
for i := range summary.mask {
if summary.mask[i] {
deleted += 1
}
}
delete(s.cache, blockRoot)
if deleted > 0 {
s.dataColumnCount -= float64(deleted)
dataColumnDiskCount.Set(s.dataColumnCount)
}
// The lowest and highest cached epochs may no longer be valid here,
// but it is not worth the effort to recalculate them.
return deleted
}
// pruneUpTo removes all entries from the cache up to and including the given target epoch.
func (sc *dataColumnStorageSummaryCache) pruneUpTo(targetEpoch primitives.Epoch) uint64 {
sc.mu.Lock()
defer sc.mu.Unlock()
prunedCount := uint64(0)
newLowestCachedEpoch := params.BeaconConfig().FarFutureEpoch
newHighestCachedEpoch := primitives.Epoch(0)
for blockRoot, summary := range sc.cache {
epoch := summary.epoch
if epoch > targetEpoch {
newLowestCachedEpoch = min(newLowestCachedEpoch, epoch)
newHighestCachedEpoch = max(newHighestCachedEpoch, epoch)
}
if epoch <= targetEpoch {
for i := range summary.mask {
if summary.mask[i] {
prunedCount += 1
}
}
delete(sc.cache, blockRoot)
}
}
if prunedCount > 0 {
sc.lowestCachedEpoch = newLowestCachedEpoch
sc.highestCachedEpoch = newHighestCachedEpoch
sc.dataColumnCount -= float64(prunedCount)
dataColumnDiskCount.Set(sc.dataColumnCount)
}
return prunedCount
}
// clear removes all entries from the cache.
func (sc *dataColumnStorageSummaryCache) clear() uint64 {
return sc.pruneUpTo(params.BeaconConfig().FarFutureEpoch)
}
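
As the comments in the new file explain, `DataColumnStorageSummarizer` lets callers ask which column indices are on disk for a root without touching the filesystem. Below is a short usage sketch, written as if inside the same `filesystem` package (with `fmt` imported): it checks whether all of a node's custody columns are available for a block root. The function name and the custody map are illustrative; custody indices are assumed to be below `fieldparams.NumberOfColumns`, since `AllAvailable` indexes the mask directly.

// allCustodyColumnsOnDisk is a sketch of consuming the
// DataColumnStorageSummarizer interface added above: given a block
// root and the set of column indices this node custodies, report
// whether every custodied column has been persisted.
func allCustodyColumnsOnDisk(sum DataColumnStorageSummarizer, root [fieldparams.RootLength]byte, custody map[uint64]bool) bool {
	summary := sum.Summary(root) // zero-value summary if the root is unknown
	if summary.AllAvailable(custody) {
		return true
	}
	// Otherwise report which indices are missing (illustrative logging).
	for index := range custody {
		if !summary.HasIndex(index) {
			fmt.Printf("missing data column %d for root %#x\n", index, root)
		}
	}
	return false
}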


@@ -0,0 +1,235 @@
package filesystem
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestHasIndex(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})
hasIndex := summary.HasIndex(1_000_000)
require.Equal(t, false, hasIndex)
hasIndex = summary.HasIndex(0)
require.Equal(t, false, hasIndex)
hasIndex = summary.HasIndex(1)
require.Equal(t, true, hasIndex)
}
func TestCount(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true, false, true})
count := summary.Count()
require.Equal(t, uint64(2), count)
}
func TestAllAvailableDataColumns(t *testing.T) {
const count = uint64(1_000)
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true, false, true})
indices := make(map[uint64]bool, count)
for i := range count {
indices[i] = true
}
allAvailable := summary.AllAvailable(indices)
require.Equal(t, false, allAvailable)
indices = map[uint64]bool{1: true, 2: true}
allAvailable = summary.AllAvailable(indices)
require.Equal(t, false, allAvailable)
indices = map[uint64]bool{1: true, 3: true}
allAvailable = summary.AllAvailable(indices)
require.Equal(t, true, allAvailable)
}
func TestSummary(t *testing.T) {
root := [fieldparams.RootLength]byte{}
summaryCache := newDataColumnStorageSummaryCache()
expected := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{})
actual := summaryCache.Summary(root)
require.DeepEqual(t, expected, actual)
summaryCache = newDataColumnStorageSummaryCache()
expected = NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{true, false, true, false})
summaryCache.cache[root] = expected
actual = summaryCache.Summary(root)
require.DeepEqual(t, expected, actual)
}
func TestHighestEpoch(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
root3 := [fieldparams.RootLength]byte{3}
summaryCache := newDataColumnStorageSummaryCache()
actual := summaryCache.HighestEpoch()
require.Equal(t, primitives.Epoch(0), actual)
err := summaryCache.set(DataColumnsIdent{Root: root1, Epoch: 42, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.HighestEpoch())
err = summaryCache.set(DataColumnsIdent{Root: root2, Epoch: 43, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(43), summaryCache.HighestEpoch())
err = summaryCache.set(DataColumnsIdent{Root: root3, Epoch: 40, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(43), summaryCache.HighestEpoch())
}
func TestSet(t *testing.T) {
t.Run("Index out of bounds", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Indices: []uint64{1_000_000}})
require.ErrorIs(t, err, errDataColumnIndexOutOfBounds)
require.Equal(t, params.BeaconConfig().FarFutureEpoch, summaryCache.lowestCachedEpoch)
require.Equal(t, 0, len(summaryCache.cache))
})
t.Run("Nominal", func(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: root1, Epoch: 42, Indices: []uint64{1, 3}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch)
require.Equal(t, 1, len(summaryCache.cache))
expected := DataColumnStorageSummary{epoch: 42, mask: [fieldparams.NumberOfColumns]bool{false, true, false, true}}
actual := summaryCache.cache[root1]
require.DeepEqual(t, expected, actual)
err = summaryCache.set(DataColumnsIdent{Root: root1, Epoch: 42, Indices: []uint64{0, 1}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch)
require.Equal(t, 1, len(summaryCache.cache))
expected = DataColumnStorageSummary{epoch: 42, mask: [fieldparams.NumberOfColumns]bool{true, true, false, true}}
actual = summaryCache.cache[root1]
require.DeepEqual(t, expected, actual)
err = summaryCache.set(DataColumnsIdent{Root: root2, Epoch: 43, Indices: []uint64{1}})
require.NoError(t, err)
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch) // Epoch 42 is still the lowest
require.Equal(t, 2, len(summaryCache.cache))
expected = DataColumnStorageSummary{epoch: 43, mask: [fieldparams.NumberOfColumns]bool{false, true}}
actual = summaryCache.cache[root2]
require.DeepEqual(t, expected, actual)
})
}
func TestGet(t *testing.T) {
t.Run("Not in cache", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
root := [fieldparams.RootLength]byte{}
_, ok := summaryCache.get(root)
require.Equal(t, false, ok)
})
t.Run("In cache", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
summaryCache := newDataColumnStorageSummaryCache()
summaryCache.cache[root] = NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true, false, true, false})
actual, ok := summaryCache.get(root)
require.Equal(t, true, ok)
expected := NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true, false, true, false})
require.DeepEqual(t, expected, actual)
})
}
func TestEvict(t *testing.T) {
t.Run("No eviction", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
summaryCache := newDataColumnStorageSummaryCache()
evicted := summaryCache.evict(root)
require.Equal(t, 0, evicted)
})
t.Run("Eviction", func(t *testing.T) {
root1 := [fieldparams.RootLength]byte{1}
root2 := [fieldparams.RootLength]byte{2}
summaryCache := newDataColumnStorageSummaryCache()
summaryCache.cache[root1] = NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true, false, true, false})
summaryCache.cache[root2] = NewDataColumnStorageSummary(43, [fieldparams.NumberOfColumns]bool{false, true, false, true})
evicted := summaryCache.evict(root1)
require.Equal(t, 2, evicted)
require.Equal(t, 1, len(summaryCache.cache))
_, ok := summaryCache.cache[root1]
require.Equal(t, false, ok)
_, ok = summaryCache.cache[root2]
require.Equal(t, true, ok)
})
}
func TestPruneUpTo(t *testing.T) {
t.Run("No pruning", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{1}, Epoch: 42, Indices: []uint64{1}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{2}, Epoch: 43, Indices: []uint64{2, 4}})
require.NoError(t, err)
count := summaryCache.pruneUpTo(41)
require.Equal(t, uint64(0), count)
require.Equal(t, 2, len(summaryCache.cache))
require.Equal(t, primitives.Epoch(42), summaryCache.lowestCachedEpoch)
})
t.Run("Pruning", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{1}, Epoch: 42, Indices: []uint64{1}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{2}, Epoch: 44, Indices: []uint64{2, 4}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{3}, Epoch: 45, Indices: []uint64{2, 4}})
require.NoError(t, err)
count := summaryCache.pruneUpTo(42)
require.Equal(t, uint64(1), count)
require.Equal(t, 2, len(summaryCache.cache))
require.Equal(t, primitives.Epoch(44), summaryCache.lowestCachedEpoch)
count = summaryCache.pruneUpTo(45)
require.Equal(t, uint64(4), count)
require.Equal(t, 0, len(summaryCache.cache))
require.Equal(t, params.BeaconConfig().FarFutureEpoch, summaryCache.lowestCachedEpoch)
require.Equal(t, primitives.Epoch(0), summaryCache.highestCachedEpoch)
})
t.Run("Clear", func(t *testing.T) {
summaryCache := newDataColumnStorageSummaryCache()
err := summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{1}, Epoch: 42, Indices: []uint64{1}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{2}, Epoch: 44, Indices: []uint64{2, 4}})
require.NoError(t, err)
err = summaryCache.set(DataColumnsIdent{Root: [fieldparams.RootLength]byte{3}, Epoch: 45, Indices: []uint64{2, 4}})
require.NoError(t, err)
count := summaryCache.clear()
require.Equal(t, uint64(5), count)
require.Equal(t, 0, len(summaryCache.cache))
require.Equal(t, params.BeaconConfig().FarFutureEpoch, summaryCache.lowestCachedEpoch)
require.Equal(t, primitives.Epoch(0), summaryCache.highestCachedEpoch)
})
}


@@ -0,0 +1,742 @@
package filesystem
import (
"context"
"encoding/binary"
"os"
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/spf13/afero"
)
func TestNewDataColumnStorage(t *testing.T) {
ctx := context.Background()
t.Run("No base path", func(t *testing.T) {
_, err := NewDataColumnStorage(ctx)
require.ErrorIs(t, err, errNoBasePath)
})
t.Run("Nominal", func(t *testing.T) {
dir := t.TempDir()
storage, err := NewDataColumnStorage(ctx, WithDataColumnBasePath(dir))
require.NoError(t, err)
require.Equal(t, dir, storage.base)
})
}
func TestWarmCache(t *testing.T) {
storage, err := NewDataColumnStorage(
context.Background(),
WithDataColumnBasePath(t.TempDir()),
WithDataColumnRetentionEpochs(10_000),
)
require.NoError(t, err)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{0}: {
{Slot: 33, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 1
{Slot: 33, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 1
},
{1}: {
{Slot: 128_002, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_002, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{2}: {
{Slot: 128_003, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_003, ColumnIndex: 3, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{3}: {
{Slot: 128_034, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4001
{Slot: 128_034, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4001
},
{4}: {
{Slot: 131_138, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{5}: {
{Slot: 131_138, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{6}: {
{Slot: 131_168, ColumnIndex: 0, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
},
},
)
err = storage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
storage.retentionEpochs = 4_096
storage.WarmCache()
require.Equal(t, primitives.Epoch(4_000), storage.cache.lowestCachedEpoch)
require.Equal(t, 6, len(storage.cache.cache))
summary, ok := storage.cache.get([fieldparams.RootLength]byte{1})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_000, mask: [fieldparams.NumberOfColumns]bool{false, false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{2})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_000, mask: [fieldparams.NumberOfColumns]bool{false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{3})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_001, mask: [fieldparams.NumberOfColumns]bool{false, false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{4})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{5})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{6})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_099, mask: [fieldparams.NumberOfColumns]bool{true}}, summary)
}
func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("wrong numbers of columns", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = 0
params.OverrideBeaconConfig(cfg)
params.SetupTestConfigCleanup(t)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {{ColumnIndex: 12}, {ColumnIndex: 1_000_000}, {ColumnIndex: 48}},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("one of the column index is too large", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{}: {{ColumnIndex: 12}, {ColumnIndex: 1_000_000}, {ColumnIndex: 48}}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errDataColumnIndexTooLarge)
})
t.Run("different slots", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {
{Slot: 1, ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{Slot: 2, ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errDataColumnSidecarsFromDifferentSlots)
})
t.Run("new file - no data columns to save", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{}: {}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
})
t.Run("new file - different data column size", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{1, 2, 3, 4}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongSszEncodedDataColumnSidecarSize)
})
t.Run("existing file - wrong incoming SSZ encoded size", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}}},
)
// Save data columns into a file.
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
// Build a data column sidecar for the same block but with a different
// column index and a different SSZ-encoded size.
_, verifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 13, DataColumn: []byte{1, 2, 3, 4}}}},
)
// Try to rewrite the file.
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongSszEncodedDataColumnSidecarSize)
})
t.Run("nominal", func(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}, // OK if duplicate
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
{2}: {
{ColumnIndex: 12, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
_, inputVerifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}, // OK if duplicate
{ColumnIndex: 15, DataColumn: []byte{2, 3, 4}},
{ColumnIndex: 1, DataColumn: []byte{2, 3, 4}},
},
{3}: {
{ColumnIndex: 6, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 2, DataColumn: []byte{6, 7, 8}},
},
},
)
err = dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
type fixture struct {
fileName string
blockRoot [fieldparams.RootLength]byte
expectedIndices [mandatoryNumberOfColumns]byte
dataColumnParams []util.DataColumnParams
}
fixtures := []fixture{
{
fileName: "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{1},
expectedIndices: [mandatoryNumberOfColumns]byte{
0, nonZeroOffset + 4, 0, 0, 0, 0, 0, 0,
0, 0, 0, nonZeroOffset + 1, nonZeroOffset, nonZeroOffset + 2, 0, nonZeroOffset + 3,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
{ColumnIndex: 15, DataColumn: []byte{2, 3, 4}},
{ColumnIndex: 1, DataColumn: []byte{2, 3, 4}},
},
},
{
fileName: "0/0/0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{2},
expectedIndices: [mandatoryNumberOfColumns]byte{
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, nonZeroOffset, nonZeroOffset + 1, 0, 0,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 12, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
},
{
fileName: "0/0/0x0300000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{3},
expectedIndices: [mandatoryNumberOfColumns]byte{
0, 0, nonZeroOffset + 1, 0, 0, 0, nonZeroOffset, 0,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 6, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 2, DataColumn: []byte{6, 7, 8}},
},
},
}
for _, fixture := range fixtures {
// Build expected data column sidecars.
_, expectedDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{fixture.blockRoot: fixture.dataColumnParams},
)
// Build expected bytes.
firstSszEncodedDataColumnSidecar, err := expectedDataColumnSidecars[0].MarshalSSZ()
require.NoError(t, err)
dataColumnSidecarsCount := len(expectedDataColumnSidecars)
sszEncodedDataColumnSidecarSize := len(firstSszEncodedDataColumnSidecar)
sszEncodedDataColumnSidecars := make([]byte, 0, dataColumnSidecarsCount*sszEncodedDataColumnSidecarSize)
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, firstSszEncodedDataColumnSidecar...)
for _, dataColumnSidecar := range expectedDataColumnSidecars[1:] {
sszEncodedDataColumnSidecar, err := dataColumnSidecar.MarshalSSZ()
require.NoError(t, err)
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
var encodedSszEncodedDataColumnSidecarSize [sidecarByteLenSize]byte
binary.BigEndian.PutUint32(encodedSszEncodedDataColumnSidecarSize[:], uint32(sszEncodedDataColumnSidecarSize))
expectedBytes := make([]byte, 0, headerSize+dataColumnSidecarsCount*sszEncodedDataColumnSidecarSize)
expectedBytes = append(expectedBytes, []byte{0x01}...)
expectedBytes = append(expectedBytes, encodedSszEncodedDataColumnSidecarSize[:]...)
expectedBytes = append(expectedBytes, fixture.expectedIndices[:]...)
expectedBytes = append(expectedBytes, sszEncodedDataColumnSidecars...)
// Check the actual content of the file.
actualBytes, err := afero.ReadFile(dataColumnStorage.fs, fixture.fileName)
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBytes, actualBytes)
// Check the summary.
indices := map[uint64]bool{}
for _, dataColumnParam := range fixture.dataColumnParams {
indices[dataColumnParam.ColumnIndex] = true
}
summary := dataColumnStorage.Summary(fixture.blockRoot)
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, indices[index], summary.HasIndex(index))
}
err = dataColumnStorage.Remove(fixture.blockRoot)
require.NoError(t, err)
summary = dataColumnStorage.Summary(fixture.blockRoot)
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, false, summary.HasIndex(index))
}
_, err = afero.ReadFile(dataColumnStorage.fs, fixture.fileName)
require.ErrorIs(t, err, os.ErrNotExist)
}
})
}
func TestGetDataColumnSidecars(t *testing.T) {
t.Run("not found", func(t *testing.T) {
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
verifiedRODataColumnSidecars, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, []uint64{12, 13, 14})
require.NoError(t, err)
require.Equal(t, 0, len(verifiedRODataColumnSidecars))
})
t.Run("nominal", func(t *testing.T) {
_, expectedVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 14, DataColumn: []byte{2, 3, 4}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(expectedVerifiedRoDataColumnSidecars)
require.NoError(t, err)
verifiedRODataColumnSidecars, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, nil)
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
verifiedRODataColumnSidecars, err = dataColumnStorage.Get([fieldparams.RootLength]byte{1}, []uint64{12, 13, 14})
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
})
}
func TestRemove(t *testing.T) {
t.Run("not found", func(t *testing.T) {
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Remove([fieldparams.RootLength]byte{1})
require.NoError(t, err)
})
t.Run("nominal", func(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{Slot: 32, ColumnIndex: 10, DataColumn: []byte{1, 2, 3}},
{Slot: 32, ColumnIndex: 11, DataColumn: []byte{2, 3, 4}},
},
{2}: {
{Slot: 33, ColumnIndex: 10, DataColumn: []byte{1, 2, 3}},
{Slot: 33, ColumnIndex: 11, DataColumn: []byte{2, 3, 4}},
},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
err = dataColumnStorage.Remove([fieldparams.RootLength]byte{1})
require.NoError(t, err)
summary := dataColumnStorage.Summary([fieldparams.RootLength]byte{1})
require.Equal(t, primitives.Epoch(0), summary.epoch)
require.Equal(t, uint64(0), summary.Count())
summary = dataColumnStorage.Summary([fieldparams.RootLength]byte{2})
require.Equal(t, primitives.Epoch(1), summary.epoch)
require.Equal(t, uint64(2), summary.Count())
actual, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, nil)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
actual, err = dataColumnStorage.Get([fieldparams.RootLength]byte{2}, nil)
require.NoError(t, err)
require.Equal(t, 2, len(actual))
})
}
func TestClear(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
{2}: {{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}}},
},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
filePaths := []string{
"0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
"0/0/0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
}
for _, filePath := range filePaths {
_, err = afero.ReadFile(dataColumnStorage.fs, filePath)
require.NoError(t, err)
}
err = dataColumnStorage.Clear()
require.NoError(t, err)
summary := dataColumnStorage.Summary([fieldparams.RootLength]byte{1})
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, false, summary.HasIndex(index))
}
for _, filePath := range filePaths {
_, err = afero.ReadFile(dataColumnStorage.fs, filePath)
require.ErrorIs(t, err, os.ErrNotExist)
}
}
func TestMetadata(t *testing.T) {
t.Run("wrong version", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
},
)
// Save data columns into a file.
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
// Alter the version.
const filePath = "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs"
file, err := dataColumnStorage.fs.OpenFile(filePath, os.O_WRONLY, os.FileMode(0600))
require.NoError(t, err)
count, err := file.Write([]byte{42})
require.NoError(t, err)
require.Equal(t, 1, count)
// Try to read the metadata.
_, err = dataColumnStorage.metadata(file)
require.ErrorIs(t, err, errWrongVersion)
err = file.Close()
require.NoError(t, err)
})
}
func TestNewStorageIndices(t *testing.T) {
t.Run("wrong number of columns", func(t *testing.T) {
_, err := newStorageIndices(nil)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("nominal", func(t *testing.T) {
var indices [mandatoryNumberOfColumns]byte
indices[0] = 1
storageIndices, err := newStorageIndices(indices[:])
require.NoError(t, err)
require.Equal(t, indices, storageIndices.indices)
})
}
func TestStorageIndicesGet(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
var indices storageIndices
_, _, err := indices.get(1_000_000)
		require.ErrorIs(t, err, errDataColumnIndexTooLarge)
})
t.Run("index not set", func(t *testing.T) {
const expected = false
var indices storageIndices
actual, _, err := indices.get(0)
require.NoError(t, err)
require.Equal(t, expected, actual)
})
t.Run("index set", func(t *testing.T) {
const (
expectedOk = true
expectedPosition = int64(3)
)
indices := storageIndices{indices: [mandatoryNumberOfColumns]byte{0, 131}}
actualOk, actualPosition, err := indices.get(1)
require.NoError(t, err)
require.Equal(t, expectedOk, actualOk)
require.Equal(t, expectedPosition, actualPosition)
})
}
func TestStorageIndicesLen(t *testing.T) {
const expected = int64(2)
indices := storageIndices{count: 2}
actual := indices.len()
require.Equal(t, expected, actual)
}
func TestStorageIndicesAll(t *testing.T) {
expectedIndices := []uint64{1, 3}
indices := storageIndices{indices: [mandatoryNumberOfColumns]byte{0, 131, 0, 128}}
actualIndices := indices.all()
require.DeepEqual(t, expectedIndices, actualIndices)
}
func TestStorageIndicesSet(t *testing.T) {
t.Run("data column index too large", func(t *testing.T) {
var indices storageIndices
err := indices.set(1_000_000, 0)
		require.ErrorIs(t, err, errDataColumnIndexTooLarge)
})
t.Run("position too large", func(t *testing.T) {
var indices storageIndices
err := indices.set(0, 255)
		require.ErrorIs(t, err, errDataColumnIndexTooLarge)
})
t.Run("nominal", func(t *testing.T) {
expected := [mandatoryNumberOfColumns]byte{0, 0, 128, 0, 131}
var storageIndices storageIndices
require.Equal(t, int64(0), storageIndices.len())
err := storageIndices.set(2, 1)
require.NoError(t, err)
require.Equal(t, int64(1), storageIndices.len())
err = storageIndices.set(4, 3)
require.NoError(t, err)
require.Equal(t, int64(2), storageIndices.len())
err = storageIndices.set(2, 0)
require.NoError(t, err)
require.Equal(t, int64(2), storageIndices.len())
actual := storageIndices.indices
require.Equal(t, expected, actual)
})
}
func TestPrune(t *testing.T) {
t.Run(("nothing to prune"), func(t *testing.T) {
dir := t.TempDir()
dataColumnStorage, err := NewDataColumnStorage(context.Background(), WithDataColumnBasePath(dir))
require.NoError(t, err)
dataColumnStorage.prune()
})
t.Run("nominal", func(t *testing.T) {
var compareSlices = func(left, right []string) bool {
if len(left) != len(right) {
return false
}
leftMap := make(map[string]bool, len(left))
for _, leftItem := range left {
leftMap[leftItem] = true
}
for _, rightItem := range right {
if _, ok := leftMap[rightItem]; !ok {
return false
}
}
return true
}
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{0}: {
{Slot: 33, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 1
{Slot: 33, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 1
},
{1}: {
{Slot: 128_002, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_002, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{2}: {
{Slot: 128_003, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_003, ColumnIndex: 3, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{3}: {
{Slot: 131_138, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
{Slot: 131_138, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{4}: {
{Slot: 131_169, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
{Slot: 131_169, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
},
{5}: {
{Slot: 262_144, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 2 - Epoch 8192
				{Slot: 262_144, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 2 - Epoch 8192
},
},
)
dir := t.TempDir()
dataColumnStorage, err := NewDataColumnStorage(context.Background(), WithDataColumnBasePath(dir), WithDataColumnRetentionEpochs(10_000))
require.NoError(t, err)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
dirs, err := listDir(dataColumnStorage.fs, ".")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0", "1", "2"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"1", "4000"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"4099", "4098"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"8192"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0/1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0000000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0/4000")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{
"0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
"0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4098")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0300000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4099")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0400000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2/8192")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0500000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
_, verifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{6}: {{Slot: 451_141, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}}, // Period 3 - Epoch 14_098
},
)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
dataColumnStorage.prune()
dirs, err = listDir(dataColumnStorage.fs, ".")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"1", "2", "3"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"4099"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"8192"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "3")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"14098"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4099")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0400000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2/8192")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0500000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "3/14098")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0600000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
})
}

View File

@@ -0,0 +1,104 @@
package filesystem
// nolint:dupword
/*
Data column sidecars storage documentation
==========================================
File organisation
-----------------
- The first byte represents the version of the file structure (up to 0xff = 255).
We set it to 0x01.
Note: This is not strictly needed, but it will help a lot if, in the future,
we want to modify the file structure.
- The next 4 bytes represent the size of an SSZ encoded data column sidecar.
(See the `Computation of the maximum size of a DataColumnSidecar` section for a description
of how this value is computed.)
- The next 128 bytes map each of the 128 possible column indices to a position in the file.
The first (most significant) bit of each byte is set to 0 if the column is absent,
and set to 1 if the column is present.
The remaining 7 bits represent the position (from 0 to 127) of the data column sidecar in the file.
This sentinel bit is needed to distinguish between a column stored at position 0 and an absent column.
Example: If the column with index 5 is in the 3rd position in the file, then indices[5] = 0x80 + 0x03 = 0x83.
- The rest of the file is the concatenation of the SSZ encoded data column sidecars.
|------------------------------------------|------------------------------------------------------------------------------------|
| Byte offset | Description |
|------------------------------------------|------------------------------------------------------------------------------------|
| 0                                        | version (1 byte) + sszEncodedDataColumnSidecarSize (4 bytes) + indices (128 bytes)|
|133 + 0*sszEncodedDataColumnSidecarSize | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes) |
|133 + 1*sszEncodedDataColumnSidecarSize | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes) |
|133 + 2*sszEncodedDataColumnSidecarSize | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes) |
| ... | ... |
|133 + 127*sszEncodedDataColumnSidecarSize | sszEncodedDataColumnSidecar (sszEncodedDataColumnSidecarSize bytes) |
|------------------------------------------|------------------------------------------------------------------------------------|
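Worked example (illustrative values, not from a real file): a file holding columns 5 and 12,
saved in that order, with sszEncodedDataColumnSidecarSize = S:
- byte 0: 0x01 (version)
- bytes 1 to 4: S, big-endian
- bytes 5 to 132: indices, with indices[5] = 0x80 (present, position 0),
indices[12] = 0x81 (present, position 1), and every other byte set to 0x00
- bytes 133 to 133+S-1: SSZ encoded sidecar of column 5
- bytes 133+S to 133+2S-1: SSZ encoded sidecar of column 12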
Each file is named after the root of the block whose data column sidecars it stores.
Example: `0x259c6d2f6a0bb75e2405cea7cb248e5663dc26b9404fd3bcd777afc20de91c1e.sszs`
Database organisation
---------------------
SSZ encoded data column sidecars are stored following the `by-epoch` layout.
- The first layer is a directory corresponding to the `period`, which is the epoch divided by 4096.
- The second layer is a directory corresponding to the epoch.
- Then all files are stored in the epoch directory.
Example:
data-columns
├── 0
│   ├── 3638
│   │   ├── 0x259c6d2f6a0bb75e2405cea7cb248e5663dc26b9404fd3bcd777afc20de91c1e.sszs
│   │   ├── 0x2a855b1f6e9a2f04f8383e336325bf7d5ba02d1eab3ef90ef183736f8c768533.sszs
│   │   ├── ...
│   │   ├── 0xeb78e2b2350a71c640f1e96fea9e42f38e65705ab7e6e100c8bc9c589f2c5f2b.sszs
│   │   └── 0xeb7ee68da988fd20d773d45aad01dd62527734367a146e2b048715bd68a4e370.sszs
│   └── 3639
│      ├── 0x0fd231fe95e57936fa44f6c712c490b9e337a481b661dfd46768901e90444330.sszs
│      ├── 0x1bf5edff6b6ba2b65b1db325ff3312bbb57da461ef2ae651bd741af851aada3a.sszs
│      ├── ...
│      ├── 0xa156a527e631f858fee79fab7ef1fde3f6117a2e1201d47c09fbab0c6780c937.sszs
│      └── 0xcd80bc535ddc467dea1d19e0c39c1160875ccd1989061bcd8ce206e3c1261c87.sszs
└── 1
├── 4096
│   ├── 0x0d244009093e2bedb72eb265280290199e8c7bf1d90d7583c41af40d9f662269.sszs
│   ├── 0x11f420928d8de41c50e735caab0369996824a5299c5f054e097965855925697d.sszs
│   ├── ...
│   ├── 0xbe91fc782877ed400d95c02c61aebfdd592635d11f8e64c94b46abd84f45c967.sszs
│   └── 0xf246189f078f02d30173ff74605cf31c9e65b5e463275ebdbeb40476638135ff.sszs
└── 4097
   ├── 0x454d000674793c479e90504c0fe9827b50bb176ae022dab4e37d6a21471ab570.sszs
   ├── 0xac5eb7437d7190c48cfa863e3c45f96a7f8af371d47ac12ccda07129a06af763.sszs
   ├── ...
   ├── 0xb7df30561d9d92ab5fafdd96bca8b44526497c8debf0fc425c7a0770b2abeb83.sszs
   └── 0xc1dd0b1ae847b6ec62303a36d08c6a4a2e9e3ec4be3ff70551972a0ee3de9c14.sszs
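Worked example: a sidecar for a block at slot 131,138 belongs to epoch 131,138 / 32 = 4,098
and period 4,098 / 4,096 = 1, so its file is stored at 1/4098/<block_root>.sszs.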
Computation of the maximum size of a DataColumnSidecar
------------------------------------------------------
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#datacolumnsidecar
class DataColumnSidecar(Container):
index: ColumnIndex # Index of column in extended matrix
column: List[Cell, MAX_BLOB_COMMITMENTS_PER_BLOCK]
kzg_commitments: List[KZGCommitment, MAX_BLOB_COMMITMENTS_PER_BLOCK]
kzg_proofs: List[KZGProof, MAX_BLOB_COMMITMENTS_PER_BLOCK]
signed_block_header: SignedBeaconBlockHeader
kzg_commitments_inclusion_proof: Vector[Bytes32, KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH]
- index: 8 bytes (ColumnIndex)
- `column`: 4,096 (MAX_BLOB_COMMITMENTS_PER_BLOCK) * 64 (FIELD_ELEMENTS_PER_CELL) * 32 bytes (BYTES_PER_FIELD_ELEMENT) = 8,388,608 bytes
- kzg_commitments: 4,096 (MAX_BLOB_COMMITMENTS_PER_BLOCK) * 48 bytes (KZGCommitment) = 196,608 bytes
- kzg_proofs: 4,096 (MAX_BLOB_COMMITMENTS_PER_BLOCK) * 48 bytes (KZGProof) = 196,608 bytes
- signed_block_header: 8 bytes (Slot) + 8 bytes (ValidatorIndex) + 3 * 32 bytes (Root) + 96 bytes (BLSSignature) = 208 bytes
- kzg_commitments_inclusion_proof: 4 (KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH) * 32 bytes = 128 bytes
TOTAL: 8,782,168 bytes
log(8,782,168) / log(2) ~= 23.07
==> 32 bits (4 bytes) are enough to store the size of any SSZ encoded data column sidecar.
A 4-byte size field can represent sizes up to 2**32 - 1 bytes (~4 GiB), leaving ample headroom
for the few extra offset bytes added by SSZ encoding.
*/
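Below is a minimal decoding sketch in Go, assuming only the layout documented above (one version byte, a 4-byte big-endian sidecar size, then a 128-byte sentinel index table). The parseHeader function, the header struct, and the constants are illustrative names, not the package's actual API.

package main

import (
	"encoding/binary"
	"fmt"
)

const (
	headerSize      = 133 // 1 (version) + 4 (sidecar size) + 128 (index table)
	numberOfColumns = 128
)

// header mirrors the documented file header layout.
type header struct {
	version     byte
	sidecarSize uint32
	// positions maps a column index to its position in the file,
	// for columns whose sentinel bit is set.
	positions map[uint64]int64
}

func parseHeader(raw []byte) (*header, error) {
	if len(raw) < headerSize {
		return nil, fmt.Errorf("file too short: %d bytes", len(raw))
	}
	h := &header{
		version:     raw[0],
		sidecarSize: binary.BigEndian.Uint32(raw[1:5]),
		positions:   make(map[uint64]int64),
	}
	for column, b := range raw[5 : 5+numberOfColumns] {
		if b&0x80 != 0 { // Sentinel bit set: the column is present.
			h.positions[uint64(column)] = int64(b & 0x7f)
		}
	}
	return h, nil
}

func main() {
	// Fake header: version 0x01, sidecar size 300 bytes,
	// column 5 at position 0 and column 12 at position 1.
	raw := make([]byte, headerSize)
	raw[0] = 0x01
	binary.BigEndian.PutUint32(raw[1:5], 300)
	raw[5+5] = 0x80  // indices[5]  = present, position 0
	raw[5+12] = 0x81 // indices[12] = present, position 1
	h, _ := parseHeader(raw)
	for column, position := range h.positions {
		offset := headerSize + position*int64(h.sidecarSize)
		fmt.Printf("column %d at position %d, byte offset %d\n", column, position, offset)
	}
}

Against the TestSaveDataColumnsSidecars fixture for block root 0x02..., this decoding would yield positions {12: 0, 13: 1}, matching the expectedIndices array in that test.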

View File

@@ -6,6 +6,7 @@ import (
)
var (
// Blobs
blobBuckets = []float64{3, 5, 7, 9, 11, 13}
blobSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_save_latency",
@@ -33,4 +34,29 @@ var (
Name: "blob_disk_bytes",
Help: "Approximate number of bytes occupied by blobs in storage",
})
// Data columns
dataColumnBuckets = []float64{3, 5, 7, 9, 11, 13}
dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_save_latency",
Help: "Latency of DataColumnSidecar storage save operations in milliseconds",
Buckets: dataColumnBuckets,
})
dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_get_latency",
Help: "Latency of DataColumnSidecar storage get operations in milliseconds",
Buckets: dataColumnBuckets,
})
dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_pruned",
Help: "Number of DataColumnSidecar pruned.",
})
dataColumnWrittenCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_written",
Help: "Number of DataColumnSidecar written",
})
dataColumnDiskCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "data_column_disk_count",
Help: "Approximate number of data columns in storage",
})
)

View File

@@ -1,6 +1,7 @@
package filesystem
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -9,6 +10,9 @@ import (
"github.com/spf13/afero"
)
// Blobs
// -----
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
@@ -75,3 +79,41 @@ func NewMockBlobStorageSummarizer(t *testing.T, set map[[32]byte][]int) BlobStor
}
return c
}
// Data columns
// ------------
// NewEphemeralDataColumnStorage should only be used for tests.
// The instance of DataColumnStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralDataColumnStorage(t testing.TB, opts ...DataColumnStorageOption) *DataColumnStorage {
return NewWarmedEphemeralDataColumnStorageUsingFs(t, afero.NewMemMapFs(), opts...)
}
// NewEphemeralDataColumnStorageAndFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the DataColumnStorage API.
func NewEphemeralDataColumnStorageAndFs(t testing.TB, opts ...DataColumnStorageOption) (afero.Fs, *DataColumnStorage) {
fs := afero.NewMemMapFs()
dcs := NewWarmedEphemeralDataColumnStorageUsingFs(t, fs, opts...)
return fs, dcs
}
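// NewWarmedEphemeralDataColumnStorageUsingFs returns an ephemeral DataColumnStorage backed by the
// given filesystem, with its cache warmed before use.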
func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
bs := NewEphemeralDataColumnStorageUsingFs(t, fs, opts...)
bs.WarmCache()
return bs
}
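// NewEphemeralDataColumnStorageUsingFs returns an ephemeral DataColumnStorage backed by the given
// filesystem, configured with the default retention period used in tests.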
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithDataColumnFs(fs),
)
bs, err := NewDataColumnStorage(context.Background(), opts...)
if err != nil {
t.Fatal(err)
}
return bs
}

View File

@@ -16,9 +16,6 @@ var ErrNotFoundOriginBlockRoot = errors.Wrap(ErrNotFound, "OriginBlockRoot")
// ErrNotFoundGenesisBlockRoot means no genesis block root was found, indicating the db was not initialized with genesis
var ErrNotFoundGenesisBlockRoot = errors.Wrap(ErrNotFound, "OriginGenesisRoot")
// ErrNotFoundBackfillBlockRoot is an error specifically for the origin block root getter
var ErrNotFoundBackfillBlockRoot = errors.Wrap(ErrNotFound, "BackfillBlockRoot")
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")

View File

@@ -479,6 +479,93 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
hook.Reset()
}
func TestProcessLogs_DepositRequestsStarted(t *testing.T) {
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
kvStore := testDB.SetupDB(t)
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
t.Cleanup(func() {
server.Stop()
})
web3Service, err := NewService(context.Background(),
WithHttpEndpoint(endpoint),
WithDepositContractAddress(testAcc.ContractAddr),
WithDatabase(kvStore),
WithDepositCache(depositCache),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.httpLogger = testAcc.Backend.Client()
web3Service.latestEth1Data.LastRequestedBlock = 0
block, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = block.NumberU64()
web3Service.latestEth1Data.BlockTime = block.Time()
bConfig := params.MinimalSpecConfig().Copy()
bConfig.MinGenesisTime = 0
bConfig.SecondsPerETH1Block = 1
params.OverrideBeaconConfig(bConfig)
nConfig := params.BeaconNetworkConfig()
nConfig.ContractDeploymentBlock = 0
params.OverrideBeaconNetworkConfig(nConfig)
testAcc.Backend.Commit()
totalNumOfDeposits := depositsReqForChainStart + 30
deposits, _, err := util.DeterministicDepositsAndKeys(uint64(totalNumOfDeposits))
require.NoError(t, err)
_, depositRoots, err := util.DeterministicDepositTrie(len(deposits))
require.NoError(t, err)
depositOffset := 5
	// 64 validators are used as the size required for the beacon chain to start. This number
	// is defined in the deposit contract as the number required for the testnet. The actual
	// mainnet number is 2**14.
for i := 0; i < totalNumOfDeposits; i++ {
data := deposits[i].Data
testAcc.TxOpts.Value = mock.Amount32Eth()
testAcc.TxOpts.GasLimit = 1000000
_, err = testAcc.Contract.Deposit(testAcc.TxOpts, data.PublicKey, data.WithdrawalCredentials, data.Signature, depositRoots[i])
require.NoError(t, err, "Could not deposit to deposit contract")
		// Pack 8 deposits into a block, with an offset of 5.
if (i+1)%8 == depositOffset {
testAcc.Backend.Commit()
}
}
// Forward the chain to account for the follow distance
for i := uint64(0); i < params.BeaconConfig().Eth1FollowDistance; i++ {
testAcc.Backend.Commit()
}
b, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = b.NumberU64()
web3Service.latestEth1Data.BlockTime = b.Time()
// Set up our subscriber now to listen for the chain started event.
stateChannel := make(chan *feed.Event, 1)
stateSub := web3Service.cfg.stateNotifier.StateFeed().Subscribe(stateChannel)
defer stateSub.Unsubscribe()
web3Service.depositRequestsStarted = true
web3Service.initPOWService()
require.NoError(t, err)
require.Equal(t, int64(-1), web3Service.lastReceivedMerkleIndex, "Processed deposit logs even when requests are active")
hook.Reset()
}
func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()

View File

@@ -16,6 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution/types"
@@ -141,6 +142,7 @@ type config struct {
type Service struct {
connectedETH1 bool
isRunning bool
depositRequestsStarted bool
processingLock sync.RWMutex
latestEth1DataLock sync.RWMutex
cfg *config
@@ -205,7 +207,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
return nil, err
}
}
s.initDepositRequests()
eth1Data, err := s.validPowchainData(ctx)
if err != nil {
return nil, errors.Wrap(err, "unable to validate powchain data")
@@ -463,7 +465,9 @@ func safelyHandlePanic() {
func (s *Service) handleETH1FollowDistance() {
defer safelyHandlePanic()
ctx := s.ctx
if s.depositRequestsStarted {
return
}
// use a 5 minutes timeout for block time, because the max mining time is 278 sec (block 7208027)
// (analyzed the time of the block from 2018-09-01 to 2019-02-13)
fiveMinutesTimeout := prysmTime.Now().Add(-5 * time.Minute)
@@ -530,25 +534,27 @@ func (s *Service) initPOWService() {
s.latestEth1Data.BlockTime = header.Time
s.latestEth1DataLock.Unlock()
if err := s.processPastLogs(ctx); err != nil {
err = errors.Wrap(err, "processPastLogs")
s.retryExecutionClientConnection(ctx, err)
errorLogger(
err,
"Unable to process past deposit contract logs, perhaps your execution client is not fully synced",
)
continue
}
// Cache eth1 headers from our voting period.
if err := s.cacheHeadersForEth1DataVote(ctx); err != nil {
err = errors.Wrap(err, "cacheHeadersForEth1DataVote")
s.retryExecutionClientConnection(ctx, err)
if errors.Is(err, errBlockTimeTooLate) {
log.WithError(err).Debug("Unable to cache headers for execution client votes")
} else {
errorLogger(err, "Unable to cache headers for execution client votes")
if !s.depositRequestsStarted {
if err := s.processPastLogs(ctx); err != nil {
err = errors.Wrap(err, "processPastLogs")
s.retryExecutionClientConnection(ctx, err)
errorLogger(
err,
"Unable to process past deposit contract logs, perhaps your execution client is not fully synced",
)
continue
}
// Cache eth1 headers from our voting period.
if err := s.cacheHeadersForEth1DataVote(ctx); err != nil {
err = errors.Wrap(err, "cacheHeadersForEth1DataVote")
s.retryExecutionClientConnection(ctx, err)
if errors.Is(err, errBlockTimeTooLate) {
log.WithError(err).Debug("Unable to cache headers for execution client votes")
} else {
errorLogger(err, "Unable to cache headers for execution client votes")
}
continue
}
continue
}
// Handle edge case with embedded genesis state by fetching genesis header to determine
// its height.
@@ -824,7 +830,7 @@ func (s *Service) validPowchainData(ctx context.Context) (*ethpb.ETH1ChainData,
if genState == nil || genState.IsNil() {
return eth1Data, nil
}
if eth1Data == nil || !eth1Data.ChainstartData.Chainstarted || !validateDepositContainers(eth1Data.DepositContainers) {
if s.depositRequestsStarted || eth1Data == nil || !eth1Data.ChainstartData.Chainstarted || !validateDepositContainers(eth1Data.DepositContainers) {
pbState, err := native.ProtobufBeaconStatePhase0(s.preGenesisState.ToProtoUnsafe())
if err != nil {
return nil, err
@@ -900,6 +906,15 @@ func (s *Service) removeStartupState() {
s.cfg.finalizedStateAtStartup = nil
}
func (s *Service) initDepositRequests() {
fState := s.cfg.finalizedStateAtStartup
isNil := fState == nil || fState.IsNil()
if isNil {
return
}
s.depositRequestsStarted = helpers.DepositRequestsStarted(fState)
}
func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return ini.NewBlobVerifier(b, reqs)

View File

@@ -267,7 +267,7 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockSlotDelay returns the number of slots that the highest
// HighestReceivedBlockDelay returns the number of slots that the highest
// received block was late when receiving it
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode

View File

@@ -55,6 +55,7 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -4,6 +4,7 @@ import (
"reflect"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -43,6 +44,18 @@ const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
// LightClientBootstrapName is the name for the LightClientBootstrap message topic.
const LightClientBootstrapName = "/light_client_bootstrap"
// LightClientUpdatesByRangeName is the name for the LightClientUpdatesByRange topic.
const LightClientUpdatesByRangeName = "/light_client_updates_by_range"
// LightClientFinalityUpdateName is the name for the LightClientFinalityUpdate topic.
const LightClientFinalityUpdateName = "/light_client_finality_update"
// LightClientOptimisticUpdateName is the name for the LightClientOptimisticUpdate topic.
const LightClientOptimisticUpdateName = "/light_client_optimistic_update"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -66,6 +79,15 @@ const (
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCLightClientBootstrapTopicV1 is a topic for requesting a light client bootstrap.
RPCLightClientBootstrapTopicV1 = protocolPrefix + LightClientBootstrapName + SchemaVersionV1
// RPCLightClientUpdatesByRangeTopicV1 is a topic for requesting light client updates by range.
RPCLightClientUpdatesByRangeTopicV1 = protocolPrefix + LightClientUpdatesByRangeName + SchemaVersionV1
// RPCLightClientFinalityUpdateTopicV1 is a topic for requesting a light client finality update.
RPCLightClientFinalityUpdateTopicV1 = protocolPrefix + LightClientFinalityUpdateName + SchemaVersionV1
	// RPCLightClientOptimisticUpdateTopicV1 is a topic for requesting a light client optimistic update.
RPCLightClientOptimisticUpdateTopicV1 = protocolPrefix + LightClientOptimisticUpdateName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
RPCBlocksByRangeTopicV2 = protocolPrefix + BeaconBlocksByRangeMessageName + SchemaVersionV2
@@ -101,6 +123,12 @@ var RPCTopicMappings = map[string]interface{}{
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
// Light client
RPCLightClientBootstrapTopicV1: new([fieldparams.RootLength]byte),
RPCLightClientUpdatesByRangeTopicV1: new(pb.LightClientUpdatesByRangeRequest),
RPCLightClientFinalityUpdateTopicV1: new(interface{}),
RPCLightClientOptimisticUpdateTopicV1: new(interface{}),
}
// Maps all registered protocol prefixes.
@@ -111,14 +139,18 @@ var protocolMapping = map[string]bool{
// Maps all the protocol message names for the different rpc
// topics.
var messageMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
LightClientBootstrapName: true,
LightClientUpdatesByRangeName: true,
LightClientFinalityUpdateName: true,
LightClientOptimisticUpdateName: true,
}
// Maps all the RPC messages which are to updated in altair.

View File

@@ -87,6 +87,7 @@ go_test(
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/features:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -22,6 +22,7 @@ import (
validatorv1alpha1 "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/v1alpha1/validator"
validatorprysm "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/validator"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
@@ -95,14 +96,19 @@ func (s *Service) endpoints(
endpoints = append(endpoints, s.nodeEndpoints()...)
endpoints = append(endpoints, s.beaconEndpoints(ch, stater, blocker, validatorServer, coreService)...)
endpoints = append(endpoints, s.configEndpoints()...)
endpoints = append(endpoints, s.lightClientEndpoints(blocker, stater)...)
endpoints = append(endpoints, s.eventsEndpoints()...)
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
if features.Get().EnableLightClient {
endpoints = append(endpoints, s.lightClientEndpoints(blocker, stater)...)
}
if enableDebug {
endpoints = append(endpoints, s.debugEndpoints(stater)...)
}
return endpoints
}
@@ -900,7 +906,7 @@ func (s *Service) beaconEndpoints(
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetPendingDeposits,
handler: server.GetPendingConsolidations,
methods: []string{http.MethodGet},
},
{
@@ -1253,7 +1259,7 @@ func (s *Service) prysmValidatorEndpoints(stater lookup.Stater, coreService *cor
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/validators/participation",
template: "/prysm/v1/validators/{state_id}/participation",
name: namespace + ".GetParticipation",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
@@ -1262,7 +1268,7 @@ func (s *Service) prysmValidatorEndpoints(stater lookup.Stater, coreService *cor
methods: []string{http.MethodGet},
},
{
template: "/prysm/v1/validators/active_set_changes",
template: "/prysm/v1/validators/{state_id}/active_set_changes",
name: namespace + ".GetActiveSetChanges",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),

View File

@@ -6,6 +6,7 @@ import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/testing/assert"
)
@@ -133,33 +134,62 @@ func Test_endpoints(t *testing.T) {
}
prysmValidatorRoutes := map[string][]string{
"/prysm/validators/performance": {http.MethodPost},
"/prysm/v1/validators/performance": {http.MethodPost},
"/prysm/v1/validators/participation": {http.MethodGet},
"/prysm/v1/validators/active_set_changes": {http.MethodGet},
"/prysm/validators/performance": {http.MethodPost},
"/prysm/v1/validators/performance": {http.MethodPost},
"/prysm/v1/validators/{state_id}/participation": {http.MethodGet},
"/prysm/v1/validators/{state_id}/active_set_changes": {http.MethodGet},
}
s := &Service{cfg: &Config{}}
endpoints := s.endpoints(true, nil, nil, nil, nil, nil, nil)
actualRoutes := make(map[string][]string, len(endpoints))
for _, e := range endpoints {
if _, ok := actualRoutes[e.template]; ok {
actualRoutes[e.template] = append(actualRoutes[e.template], e.methods...)
} else {
actualRoutes[e.template] = e.methods
}
}
expectedRoutes := make(map[string][]string)
for _, m := range []map[string][]string{
beaconRoutes, builderRoutes, configRoutes, debugRoutes, eventsRoutes,
nodeRoutes, validatorRoutes, rewardsRoutes, lightClientRoutes, blobRoutes,
prysmValidatorRoutes, prysmNodeRoutes, prysmBeaconRoutes,
} {
maps.Copy(expectedRoutes, m)
testCases := []struct {
name string
flag *features.Flags
additionalExpectedRoutes []map[string][]string
}{
{
name: "no flags",
},
{
name: "light client enabled",
flag: &features.Flags{
EnableLightClient: true,
},
additionalExpectedRoutes: []map[string][]string{
lightClientRoutes,
},
},
}
assert.Equal(t, true, maps.EqualFunc(expectedRoutes, actualRoutes, func(actualMethods []string, expectedMethods []string) bool {
return slices.Equal(expectedMethods, actualMethods)
}))
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
resetFn := features.InitWithReset(tc.flag)
defer resetFn()
s := &Service{cfg: &Config{}}
endpoints := s.endpoints(true, nil, nil, nil, nil, nil, nil)
actualRoutes := make(map[string][]string, len(endpoints))
for _, e := range endpoints {
if _, ok := actualRoutes[e.template]; ok {
actualRoutes[e.template] = append(actualRoutes[e.template], e.methods...)
} else {
actualRoutes[e.template] = e.methods
}
}
expectedRoutes := make(map[string][]string)
for _, m := range []map[string][]string{
beaconRoutes, builderRoutes, configRoutes, debugRoutes, eventsRoutes,
nodeRoutes, validatorRoutes, rewardsRoutes, blobRoutes,
prysmValidatorRoutes, prysmNodeRoutes, prysmBeaconRoutes,
} {
maps.Copy(expectedRoutes, m)
}
for _, m := range tc.additionalExpectedRoutes {
maps.Copy(expectedRoutes, m)
}
assert.Equal(t, true, maps.EqualFunc(expectedRoutes, actualRoutes, func(actualMethods []string, expectedMethods []string) bool {
return slices.Equal(expectedMethods, actualMethods)
}))
})
}
}

View File

@@ -751,9 +751,9 @@ func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request)
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
var attStructs []interface{}
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
attStructs := make([]interface{}, 0, len(sourceSlashings))
for _, slashing := range sourceSlashings {
var attStruct interface{}
if v >= version.Electra && slashing.Version() >= version.Electra {

View File

@@ -1858,6 +1858,7 @@ func TestGetAttesterSlashings(t *testing.T) {
// Unmarshal resp.Data into a slice of slashings
var slashings []*structs.AttesterSlashingElectra
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
require.NotNil(t, slashings)
require.Equal(t, 0, len(slashings))
})
})

View File

@@ -265,7 +265,7 @@ func TestBlobs(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("blob index over max", func(t *testing.T) {
overLimit := params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Deneb)
overLimit := maxBlobsPerBlockByVersion(version.Deneb)
u := fmt.Sprintf("http://foo.example/123?indices=%d", overLimit)
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
@@ -412,10 +412,14 @@ func TestBlobs_Electra(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 0
cfg.ElectraForkEpoch = 1
cfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 0, MaxBlobsPerBlock: 6},
{Epoch: 1, MaxBlobsPerBlock: 9},
}
params.OverrideBeaconConfig(cfg)
db := testDB.SetupDB(t)
electraBlock, blobs := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, 123, params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra))
electraBlock, blobs := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, 123, maxBlobsPerBlockByVersion(version.Electra))
require.NoError(t, db.SaveBlock(context.Background(), electraBlock))
bs := filesystem.NewEphemeralBlobStorage(t)
testSidecars := verification.FakeVerifySliceForTest(t, blobs)
@@ -451,7 +455,7 @@ func TestBlobs_Electra(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra), len(resp.Data))
require.Equal(t, maxBlobsPerBlockByVersion(version.Electra), len(resp.Data))
sidecar := resp.Data[0]
require.NotNil(t, sidecar)
assert.Equal(t, "0", sidecar.Index)
@@ -464,7 +468,7 @@ func TestBlobs_Electra(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("requested blob index at max", func(t *testing.T) {
limit := params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra) - 1
limit := maxBlobsPerBlockByVersion(version.Electra) - 1
u := fmt.Sprintf("http://foo.example/123?indices=%d", limit)
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
@@ -496,7 +500,7 @@ func TestBlobs_Electra(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("blob index over max", func(t *testing.T) {
overLimit := params.BeaconConfig().MaxBlobsPerBlockByVersion(version.Electra)
overLimit := maxBlobsPerBlockByVersion(version.Electra)
u := fmt.Sprintf("http://foo.example/123?indices=%d", overLimit)
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
@@ -554,3 +558,11 @@ func Test_parseIndices(t *testing.T) {
})
}
}
func maxBlobsPerBlockByVersion(v int) int {
if v >= version.Electra {
return params.BeaconConfig().DeprecatedMaxBlobsPerBlockElectra
}
return params.BeaconConfig().DeprecatedMaxBlobsPerBlock
}

View File

@@ -17,7 +17,6 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
@@ -44,7 +43,6 @@ go_test(
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -8,7 +8,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
@@ -22,11 +21,6 @@ import (
// GetLightClientBootstrap - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/bootstrap.yaml
func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
// Prepare
ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientBootstrap")
defer span.End()
@@ -74,11 +68,6 @@ func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Reques
// GetLightClientUpdatesByRange - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/updates.yaml
func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientUpdatesByRange")
defer span.End()
@@ -187,11 +176,6 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
// GetLightClientFinalityUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/finality_update.yaml
func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
_, span := trace.StartSpan(req.Context(), "beacon.GetLightClientFinalityUpdate")
defer span.End()
@@ -225,11 +209,6 @@ func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.R
// GetLightClientOptimisticUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/optimistic_update.yaml
func (s *Server) GetLightClientOptimisticUpdate(w http.ResponseWriter, req *http.Request) {
if !features.Get().EnableLightClient {
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
return
}
_, span := trace.StartSpan(req.Context(), "beacon.GetLightClientOptimisticUpdate")
defer span.End()

View File

@@ -17,7 +17,6 @@ import (
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
dbtesting "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -34,11 +33,6 @@ import (
)
func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
@@ -454,11 +448,6 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
}
func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
helpers.ClearCache()
ctx := context.Background()
@@ -1501,10 +1490,6 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
}
func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
helpers.ClearCache()
t.Run("no update", func(t *testing.T) {
@@ -1589,10 +1574,6 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
}
func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
helpers.ClearCache()
t.Run("no update", func(t *testing.T) {

View File

@@ -678,187 +678,6 @@ const (
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}`
// BadBlindedBellatrixBlock contains wrong data to create a block that does not pass ToConsensus conversion
// "parent_root" length too short
// "block_hash" length too short
// "state_root" length too short
BadBlindedBellatrixBlock = `{
"message": {
"slot": "1",
"proposer_index": "1",
"parent_root": "0xcf8e0d4e95872",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"body": {
"randao_reveal": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"eth1_data": {
"deposit_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"deposit_count": "1",
"block_hash": "0xcf8e0d4e95872"
},
"graffiti": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"proposer_slashings": [
{
"signed_header_1": {
"message": {
"slot": "1",
"proposer_index": "1",
"parent_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"state_root": "0xcf8e0d4e9580f2",
"body_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"signed_header_2": {
"message": {
"slot": "1",
"proposer_index": "1",
"parent_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"body_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}
],
"attester_slashings": [
{
"attestation_1": {
"attesting_indices": [
"1"
],
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"attestation_2": {
"attesting_indices": [
"1"
],
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
}
],
"attestations": [
{
"aggregation_bits": "0xffffffffffffffffffffffffffffffffff3f",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
],
"deposits": [
{
"proof": [
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
],
"data": {
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
"withdrawal_credentials": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"amount": "1",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}
],
"voluntary_exits": [
{
"message": {
"epoch": "1",
"validator_index": "1"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
],
"sync_aggregate": {
"sync_committee_bits": "0x6451e9f951ebf05edc01de67e593484b672877054f055903ff0df1a1a945cf30ca26bb4d4b154f94a1bc776bcf5d0efb3603e1f9b8ee2499ccdcfe2a18cef458",
"sync_committee_signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"execution_payload_header": {
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "1",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}`
CapellaBlock = `{
"message": {
"slot": "1",
@@ -1056,208 +875,7 @@ const (
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}`
// BadCapellaBlock contains wrong data to create a block that does not pass ToConsensus conversion
// "state_root" length too short (in the block message and in the first proposer slashing header)
// "randao_reveal" length too short
// "block_hash" length too short
// "graffiti" length too short
BadCapellaBlock = `{
"message": {
"slot": "1",
"proposer_index": "1",
"parent_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"state_root": "0xcf8e0d4e957e8208d920f2",
"body": {
"randao_reveal": "0x1b66ac1fb663c9baf888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"eth1_data": {
"deposit_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"deposit_count": "1",
"block_hash": "0xcf8e0d4e95873691884560367e8208d920f2"
},
"graffiti": "0xcf8e0d4e9587369b230120f2",
"proposer_slashings": [
{
"signed_header_1": {
"message": {
"slot": "1",
"proposer_index": "1",
"parent_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"state_root": "0xcf8e0d4e9587369b2301d208d920f2",
"body_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"signed_header_2": {
"message": {
"slot": "1",
"proposer_index": "1",
"parent_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"body_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}
],
"attester_slashings": [
{
"attestation_1": {
"attesting_indices": [
"1"
],
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"attestation_2": {
"attesting_indices": [
"1"
],
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
}
],
"attestations": [
{
"aggregation_bits": "0xffffffffffffffffffffffffffffffffff3f",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
],
"deposits": [
{
"proof": [
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
],
"data": {
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
"withdrawal_credentials": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"amount": "1",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}
],
"voluntary_exits": [
{
"message": {
"epoch": "1",
"validator_index": "1"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
],
"sync_aggregate": {
"sync_committee_bits": "0x6451e9f951ebf05edc01de67e593484b672877054f055903ff0df1a1a945cf30ca26bb4d4b154f94a1bc776bcf5d0efb3603e1f9b8ee2499ccdcfe2a18cef458",
"sync_committee_signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"execution_payload": {
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "14074904626401341155369551180448584754667373453244490859944217516317499064576",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
],
"withdrawals": [
{
"index": "1",
"validator_index": "1",
"address": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"amount": "1"
}
]
},
"bls_to_execution_changes": [
{
"message": {
"validator_index": "1",
"from_bls_pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a",
"to_execution_address": "0xabcf8e0d4e9587369b2301d0790347320302cc09"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
]
}
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}`
BlindedCapellaBlock = `{
"message": {
"slot": "1",


@@ -26,6 +26,7 @@ go_library(
"rpc_blob_sidecars_by_root.go",
"rpc_chunked_response.go",
"rpc_goodbye.go",
"rpc_light_client.go",
"rpc_metadata.go",
"rpc_ping.go",
"rpc_send_request.go",
@@ -114,6 +115,7 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/equality:go_default_library",
"//io/file:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
@@ -168,6 +170,7 @@ go_test(
"rpc_blob_sidecars_by_root_test.go",
"rpc_goodbye_test.go",
"rpc_handler_test.go",
"rpc_light_client_test.go",
"rpc_metadata_test.go",
"rpc_ping_test.go",
"rpc_send_request_test.go",
@@ -231,6 +234,7 @@ go_test(
"//beacon-chain/verification:go_default_library",
"//cache/lru:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",


@@ -80,6 +80,12 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
// BlobSidecarsByRangeV1
topicMap[addEncoding(p2p.RPCBlobSidecarsByRangeTopicV1)] = blobCollector
// Light client requests
topicMap[addEncoding(p2p.RPCLightClientBootstrapTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientUpdatesByRangeTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientOptimisticUpdateTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientFinalityUpdateTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// General topic for all rpc requests.
topicMap[rpcLimiterTopic] = leakybucket.NewCollector(5, defaultBurstLimit*2, leakyBucketPeriod, false /* deleteEmptyBuckets */)
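For orientation, the handlers added later in this change consume these collectors one unit per request via the validateRequest/add pair; a minimal sketch of that pattern, using the same names as the handlers below:

// Each light client topic above grants a small per-period budget; a handler
// first checks the remaining capacity, then records the spend.
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
	return err // peer exceeded its budget for this topic
}
s.rateLimiter.add(stream, 1)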


@@ -18,7 +18,7 @@ import (
func TestNewRateLimiter(t *testing.T) {
rlimiter := newRateLimiter(mockp2p.NewTestP2P(t))
assert.Equal(t, len(rlimiter.limiterMap), 12, "correct number of topics not registered")
assert.Equal(t, len(rlimiter.limiterMap), 16, "correct number of topics not registered")
}
func TestNewRateLimiter_FreeCorrectly(t *testing.T) {


@@ -9,6 +9,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
@@ -70,14 +71,23 @@ func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandle
// Bellatrix: https://github.com/ethereum/consensus-specs/blob/dev/specs/bellatrix/p2p-interface.md#messages
// Altair: https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/p2p-interface.md#messages
if forkIndex >= version.Altair {
return map[string]rpcHandler{
handler := map[string]rpcHandler{
p2p.RPCStatusTopicV1: s.statusRPCHandler,
p2p.RPCGoodByeTopicV1: s.goodbyeRPCHandler,
p2p.RPCBlocksByRangeTopicV2: s.beaconBlocksByRangeRPCHandler, // Updated in Altair and modified in Capella
p2p.RPCBlocksByRootTopicV2: s.beaconBlocksRootRPCHandler, // Updated in Altair and modified in Capella
p2p.RPCPingTopicV1: s.pingHandler,
p2p.RPCMetaDataTopicV2: s.metaDataHandler, // Updated in Altair
}, nil
}
if features.Get().EnableLightClient {
handler[p2p.RPCLightClientBootstrapTopicV1] = s.lightClientBootstrapRPCHandler
handler[p2p.RPCLightClientUpdatesByRangeTopicV1] = s.lightClientUpdatesByRangeRPCHandler
handler[p2p.RPCLightClientFinalityUpdateTopicV1] = s.lightClientFinalityUpdateRPCHandler
handler[p2p.RPCLightClientOptimisticUpdateTopicV1] = s.lightClientOptimisticUpdateRPCHandler
}
return handler, nil
}
// Phase0: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#messages
@@ -246,9 +256,11 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
// Increment message received counter.
messageReceivedCounter.WithLabelValues(topic).Inc()
// since metadata requests do not have any data in the payload, we
// since some requests do not have any data in the payload, we
// do not decode anything.
if baseTopic == p2p.RPCMetaDataTopicV1 || baseTopic == p2p.RPCMetaDataTopicV2 {
if baseTopic == p2p.RPCMetaDataTopicV1 || baseTopic == p2p.RPCMetaDataTopicV2 ||
baseTopic == p2p.RPCLightClientOptimisticUpdateTopicV1 ||
baseTopic == p2p.RPCLightClientFinalityUpdateTopicV1 {
if err := handle(ctx, base, stream); err != nil {
messageFailedProcessingCounter.WithLabelValues(topic).Inc()
if !errors.Is(err, p2ptypes.ErrWrongForkDigestVersion) {


@@ -161,3 +161,80 @@ func WriteBlobSidecarChunk(stream libp2pcore.Stream, tor blockchain.TemporalOrac
_, err = encoding.EncodeWithMaxLength(stream, sidecar)
return err
}
func WriteLightClientBootstrapChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, bootstrap interfaces.LightClientBootstrap) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(bootstrap.Header().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, bootstrap)
return err
}
func WriteLightClientUpdateChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, update interfaces.LightClientUpdate) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
func WriteLightClientOptimisticUpdateChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, update interfaces.LightClientOptimisticUpdate) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
func WriteLightClientFinalityUpdateChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, update interfaces.LightClientFinalityUpdate) error {
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
}
valRoot := tor.GenesisValidatorsRoot()
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), valRoot[:])
if err != nil {
return err
}
obtainedCtx := digest[:]
if err = writeContextToStream(obtainedCtx, stream); err != nil {
return err
}
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
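The four writers above are identical except for the header that supplies the slot used to compute the fork digest. As a possible consolidation, here is a minimal sketch; the explicit slot parameter and the ssz.Marshaler constraint are assumptions for illustration, not part of this change:

// Hypothetical shared helper. Callers would pass
// bootstrap.Header().Beacon().Slot for bootstraps and
// update.AttestedHeader().Beacon().Slot for the three update kinds.
func writeLightClientChunk(
	stream libp2pcore.Stream,
	tor blockchain.TemporalOracle,
	encoding encoder.NetworkEncoding,
	slot primitives.Slot,
	msg ssz.Marshaler, // assumed to satisfy EncodeWithMaxLength
) error {
	if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
		return err
	}
	valRoot := tor.GenesisValidatorsRoot()
	digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(slot), valRoot[:])
	if err != nil {
		return err
	}
	if err := writeContextToStream(digest[:], stream); err != nil {
		return err
	}
	_, err = encoding.EncodeWithMaxLength(stream, msg)
	return err
}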


@@ -0,0 +1,222 @@
package sync
import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
libp2pcore "github.com/libp2p/go-libp2p/core"
)
// lightClientBootstrapRPCHandler handles the /eth2/beacon_chain/req/light_client_bootstrap/1/ RPC request.
func (s *Service) lightClientBootstrapRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientBootstrapRPCHandler")
defer span.End()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientBootstrapName[1:])
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
rawMsg, ok := msg.(*[fieldparams.RootLength]byte)
if !ok {
logger.Error("Message is not *types.LightClientBootstrapReq")
return fmt.Errorf("message is not type %T", &[fieldparams.RootLength]byte{})
}
blkRoot := *rawMsg
bootstrap, err := s.cfg.beaconDB.LightClientBootstrap(ctx, blkRoot[:])
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("s.cfg.beaconDB.LightClientBootstrap")
return err
}
if bootstrap == nil {
err = fmt.Errorf("no light client bootstrap for root %#x", blkRoot)
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
tracing.AnnotateError(span, err)
logger.Error(err)
return err
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err = WriteLightClientBootstrapChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), bootstrap); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientBootstrapChunk")
return err
}
logger.Info("lightClientBootstrapRPCHandler completed")
closeStream(stream, logger)
return nil
}
// lightClientUpdatesByRangeRPCHandler handles the /eth2/beacon_chain/req/light_client_updates_by_range/1/ RPC request.
func (s *Service) lightClientUpdatesByRangeRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientUpdatesByRangeRPCHandler")
defer span.End()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientUpdatesByRangeName[1:])
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
r, ok := msg.(*eth.LightClientUpdatesByRangeRequest)
if !ok {
logger.Error("Message is not *eth.LightClientUpdatesByRangeReq")
return fmt.Errorf("message is not type %T", &eth.LightClientUpdatesByRangeRequest{})
}
if r.Count == 0 {
s.writeErrorResponseToStream(responseCodeInvalidRequest, "count is 0", stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
logger.Error("Count is 0")
return nil
}
if r.Count > params.BeaconConfig().MaxRequestLightClientUpdates {
r.Count = params.BeaconConfig().MaxRequestLightClientUpdates
}
endPeriod, err := math.Add64(r.StartPeriod, r.Count-1)
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
tracing.AnnotateError(span, err)
logger.WithError(err).Error("End period overflows")
return err
}
logger.Infof("LC: requesting updates by range (StartPeriod: %d, EndPeriod: %d)", r.StartPeriod, r.StartPeriod+r.Count-1)
updates, err := s.cfg.beaconDB.LightClientUpdates(ctx, r.StartPeriod, endPeriod)
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("s.cfg.beaconDB.LightClientUpdates")
return err
}
if len(updates) == 0 {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Debugf("No light client update available for start period %d", r.StartPeriod)
return nil
}
for i := r.StartPeriod; i <= endPeriod; i++ {
u, ok := updates[i]
if !ok {
break
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err = WriteLightClientUpdateChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), u); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientUpdateChunk")
return err
}
s.rateLimiter.add(stream, 1)
}
logger.Info("lightClientUpdatesByRangeRPCHandler completed")
closeStream(stream, logger)
return nil
}
// lightClientFinalityUpdateRPCHandler handles the /eth2/beacon_chain/req/light_client_finality_update/1/ RPC request.
func (s *Service) lightClientFinalityUpdateRPCHandler(ctx context.Context, _ interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientFinalityUpdateRPCHandler")
defer span.End()
_, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientFinalityUpdateName[1:])
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
if s.lcStore.LastFinalityUpdate() == nil {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Error("No finality update available")
return nil
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err := WriteLightClientFinalityUpdateChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), s.lcStore.LastFinalityUpdate()); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientFinalityUpdateChunk")
return err
}
logger.Info("lightClientFinalityUpdateRPCHandler completed")
closeStream(stream, logger)
return nil
}
// lightClientOptimisticUpdateRPCHandler handles the /eth2/beacon_chain/req/light_client_optimistic_update/1/ RPC request.
func (s *Service) lightClientOptimisticUpdateRPCHandler(ctx context.Context, _ interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.lightClientOptimisticUpdateRPCHandler")
defer span.End()
_, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
logger := log.WithField("handler", p2p.LightClientOptimisticUpdateName[1:])
logger.Info("lightClientOptimisticUpdateRPCHandler invoked")
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
return err
}
s.rateLimiter.add(stream, 1)
if s.lcStore.LastOptimisticUpdate() == nil {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrResourceUnavailable.Error(), stream)
logger.Error("No optimistic update available")
return nil
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err := WriteLightClientOptimisticUpdateChunk(stream, s.cfg.clock, s.cfg.p2p.Encoding(), s.lcStore.LastOptimisticUpdate()); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("WriteLightClientOptimisticUpdateChunk")
return err
}
logger.Info("lightClientOptimisticUpdateRPCHandler completed")
closeStream(stream, logger)
return nil
}
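As a worked example of the count clamping and overflow guard in lightClientUpdatesByRangeRPCHandler, assuming the spec value MAX_REQUEST_LIGHT_CLIENT_UPDATES = 128 (the request values here are hypothetical):

req := &eth.LightClientUpdatesByRangeRequest{StartPeriod: 800, Count: 1 << 20}
if req.Count > params.BeaconConfig().MaxRequestLightClientUpdates {
	req.Count = params.BeaconConfig().MaxRequestLightClientUpdates // clamped to 128
}
endPeriod, err := math.Add64(req.StartPeriod, req.Count-1) // 800 + 127 = 927, err == nil
// math.Add64 only fails when StartPeriod is close to the uint64 maximum, so a
// hostile StartPeriod cannot wrap endPeriod around below StartPeriod.
_, _ = endPeriod, err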


@@ -0,0 +1,521 @@
package sync
import (
"context"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
db "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/network/forks"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
)
func TestRPC_LightClientBootstrap(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientBootstrapTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
l := util.NewTestLightClient(t, i)
bootstrap, err := lightClient.NewLightClientBootstrapFromBeaconState(ctx, l.State.Slot(), l.State, l.Block)
require.NoError(t, err)
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, r.cfg.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap))
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
var resSSZ []byte
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientBootstrapAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientBootstrapAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientBootstrapCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientBootstrapDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientBootstrapElectra
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
bootstrapSSZ, err := bootstrap.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, bootstrapSSZ)
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
err = r.lightClientBootstrapRPCHandler(ctx, &blockRoot, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}
func TestRPC_LightClientOptimisticUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientOptimisticUpdateTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
l := util.NewTestLightClient(t, i)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
r.lcStore.SetLastOptimisticUpdate(update)
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
var resSSZ []byte
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientOptimisticUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
updateSSZ, err := update.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, updateSSZ)
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
err = r.lightClientOptimisticUpdateRPCHandler(ctx, nil, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}
func TestRPC_LightClientFinalityUpdate(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientFinalityUpdateTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
l := util.NewTestLightClient(t, i)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
r.lcStore.SetLastFinalityUpdate(update)
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
var resSSZ []byte
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientFinalityUpdateElectra
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
updateSSZ, err := update.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, updateSSZ)
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
err = r.lightClientFinalityUpdateRPCHandler(ctx, nil, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}
func TestRPC_LightClientUpdatesByRange(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableLightClient: true,
})
defer resetFn()
ctx := context.Background()
p2pService := p2ptest.NewTestP2P(t)
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
chainService := &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Unix(time.Now().Unix(), 0),
}
d := db.SetupDB(t)
r := Service{
ctx: ctx,
cfg: &config{
p2p: p2pService,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
beaconDB: d,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
pcl := protocol.ID(p2p.RPCLightClientUpdatesByRangeTopicV1)
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(10000, 10000, time.Second, false)
altairDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
bellatrixDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
capellaDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
denebDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
electraDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, chainService.ValidatorsRoot[:])
require.NoError(t, err)
for i := 1; i <= 5; i++ {
t.Run(version.String(i), func(t *testing.T) {
for j := 0; j < 5; j++ {
l := util.NewTestLightClient(t, i, util.WithIncreasedAttestedSlot(uint64(j)))
update, err := lightClient.NewLightClientUpdateFromBeaconState(ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NoError(t, r.cfg.beaconDB.SaveLightClientUpdate(ctx, uint64(j), update))
}
var wg sync.WaitGroup
wg.Add(1)
responseCounter := 0
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
expectSuccess(t, stream)
rpcCtx, err := readContextFromStream(stream)
require.NoError(t, err)
require.Equal(t, 4, len(rpcCtx))
var resSSZ []byte
switch i {
case version.Altair:
require.DeepSSZEqual(t, altairDigest[:], rpcCtx)
var res pb.LightClientUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Bellatrix:
require.DeepSSZEqual(t, bellatrixDigest[:], rpcCtx)
var res pb.LightClientUpdateAltair
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Capella:
require.DeepSSZEqual(t, capellaDigest[:], rpcCtx)
var res pb.LightClientUpdateCapella
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Deneb:
require.DeepSSZEqual(t, denebDigest[:], rpcCtx)
var res pb.LightClientUpdateDeneb
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
case version.Electra:
require.DeepSSZEqual(t, electraDigest[:], rpcCtx)
var res pb.LightClientUpdateElectra
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, &res))
resSSZ, err = res.MarshalSSZ()
require.NoError(t, err)
default:
t.Fatalf("unsupported version %d", i)
}
updates, err := r.cfg.beaconDB.LightClientUpdates(ctx, 0, 4)
require.NoError(t, err)
updateSSZ, err := updates[uint64(responseCounter)].MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, resSSZ, updateSSZ)
responseCounter++
})
stream1, err := p1.BHost.NewStream(context.Background(), p2.BHost.ID(), pcl)
require.NoError(t, err)
msg := pb.LightClientUpdatesByRangeRequest{
StartPeriod: 0,
Count: 5,
}
err = r.lightClientUpdatesByRangeRPCHandler(ctx, &msg, stream1)
require.NoError(t, err)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
}


@@ -6,6 +6,7 @@ go_library(
"batch.go",
"blob.go",
"cache.go",
"data_column.go",
"error.go",
"fake.go",
"filesystem.go",
@@ -21,12 +22,14 @@ go_library(
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//cache/lru:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -51,16 +54,21 @@ go_test(
"batch_test.go",
"blob_test.go",
"cache_test.go",
"data_column_test.go",
"initializer_test.go",
"result_test.go",
"verification_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",


@@ -32,7 +32,7 @@ type BlobBatchVerifier struct {
}
// VerifiedROBlobs satisfies the das.BlobBatchVerifier interface, used by das.AvailabilityStore.
func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
func (batch *BlobBatchVerifier) VerifiedROBlobs(_ context.Context, blk blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
if len(scs) == 0 {
return nil, nil
}


@@ -25,6 +25,10 @@ const (
RequireSidecarInclusionProven
RequireSidecarKzgProofVerified
RequireSidecarProposerExpected
// Data columns specific.
RequireValidFields
RequireCorrectSubnet
)
var allBlobSidecarRequirements = []Requirement{


@@ -21,7 +21,8 @@ import (
)
const (
DefaultSignatureCacheSize = 256
DefaultSignatureCacheSize = 256
DefaultInclusionProofCacheSize = 2
)
// ValidatorAtIndexer defines the method needed to retrieve a validator by its index.
@@ -73,6 +74,14 @@ type sigCache struct {
getFork forkLookup
}
type inclusionProofCache struct {
*lru.Cache
}
func newInclusionProofCache(size int) *inclusionProofCache {
return &inclusionProofCache{Cache: lruwrpr.New(size)}
}
// VerifySignature verifies the given signature data against the key obtained via ValidatorAtIndexer.
func (c *sigCache) VerifySignature(sig SignatureData, v ValidatorAtIndexer) (err error) {
defer func() {


@@ -0,0 +1,535 @@
package verification
import (
"context"
"fmt"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/runtime/logging"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
var (
// GossipDataColumnSidecarRequirements defines the set of requirements that DataColumnSidecars received on gossip
// must satisfy in order to upgrade an RODataColumn to a VerifiedRODataColumn.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#data_column_sidecar_subnet_id
GossipDataColumnSidecarRequirements = []Requirement{
RequireValidFields,
RequireCorrectSubnet,
RequireNotFromFutureSlot,
RequireSlotAboveFinalized,
RequireValidProposerSignature,
RequireSidecarParentSeen,
RequireSidecarParentValid,
RequireSidecarParentSlotLower,
RequireSidecarDescendsFromFinalized,
RequireSidecarInclusionProven,
RequireSidecarKzgProofVerified,
RequireSidecarProposerExpected,
}
errColumnsInvalid = errors.New("data columns failed verification")
errBadTopicLength = errors.New("topic length is invalid")
errBadTopic = errors.New("topic is not of the one expected")
)
type (
RODataColumnsVerifier struct {
*sharedResources
results *results
dataColumns []blocks.RODataColumn
verifyDataColumnsCommitment rodataColumnsCommitmentVerifier
stateByRoot map[[fieldparams.RootLength]byte]state.BeaconState
}
rodataColumnsCommitmentVerifier func([]blocks.RODataColumn) error
)
var _ DataColumnsVerifier = &RODataColumnsVerifier{}
// VerifiedRODataColumns "upgrades" wrapped RODataColumns to VerifiedRODataColumns.
// If any of the verifications ran against the data columns failed, or some required verifications
// were not run, an error will be returned.
func (dv *RODataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
if !dv.results.allSatisfied() {
return nil, dv.results.errors(errColumnsInvalid)
}
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(dv.dataColumns))
for _, dataColumn := range dv.dataColumns {
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(dataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
}
return verifiedRODataColumns, nil
}
// SatisfyRequirement allows the caller to assert that a requirement has been satisfied.
// This gives us a way to tick the box for a requirement where the usual method would be impractical.
// For example, when batch syncing, forkchoice is only updated at the end of the batch. So the checks that use
// forkchoice, like descends from finalized or parent seen, would necessarily fail. Allowing the caller to
// assert the requirement has been satisfied ensures we have an easy way to audit which piece of code is satisfying
// a requirement outside of this package.
func (dv *RODataColumnsVerifier) SatisfyRequirement(req Requirement) {
dv.recordResult(req, nil)
}
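A hypothetical call site for the batch-sync scenario described above (the verifier variable is assumed):

// Forkchoice is only updated at the end of a batch, so the batch-sync caller
// vouches for the forkchoice-dependent checks itself.
verifier.SatisfyRequirement(RequireSidecarParentSeen)
verifier.SatisfyRequirement(RequireSidecarDescendsFromFinalized)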
func (dv *RODataColumnsVerifier) recordResult(req Requirement, err *error) {
if err == nil || *err == nil {
dv.results.record(req, nil)
return
}
dv.results.record(req, *err)
}
func (dv *RODataColumnsVerifier) ValidFields() (err error) {
if ok, err := dv.results.cached(RequireValidFields); ok {
return err
}
defer dv.recordResult(RequireValidFields, &err)
for _, dataColumn := range dv.dataColumns {
if err := peerdas.VerifyDataColumnSidecar(dataColumn); err != nil {
return columnErrBuilder(errors.Wrap(err, "verify data column sidecar"))
}
}
return nil
}
func (dv *RODataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) (err error) {
if ok, err := dv.results.cached(RequireCorrectSubnet); ok {
return err
}
defer dv.recordResult(RequireCorrectSubnet, &err)
if len(expectedTopics) != len(dv.dataColumns) {
return columnErrBuilder(errBadTopicLength)
}
for i := range dv.dataColumns {
// We add a trailing slash to prevent, for example,
// an actual topic /eth2/9dc47cc6/data_column_sidecar_1
// from matching /eth2/9dc47cc6/data_column_sidecar_120.
expectedTopic := expectedTopics[i] + "/"
actualSubnet := peerdas.ComputeSubnetForDataColumnSidecar(dv.dataColumns[i].Index)
actualSubTopic := fmt.Sprintf(dataColumnSidecarSubTopic, actualSubnet)
if !strings.Contains(expectedTopic, actualSubTopic) {
return columnErrBuilder(errBadTopic)
}
}
return nil
}
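// exampleTrailingSlashDisambiguation shows, in isolation, why the trailing
// slash matters: without it, the subtopic for subnet 1 would be a prefix of
// the one for subnet 120. Illustrative only; not called by the verifier.
func exampleTrailingSlashDisambiguation() bool {
expected := "/eth2/9dc47cc6/data_column_sidecar_120" + "/"
actual := fmt.Sprintf("/data_column_sidecar_%d/", 1)
// false: "..._1/" is not a substring of "..._120/".
return strings.Contains(expected, actual)
}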
func (dv *RODataColumnsVerifier) NotFromFutureSlot() (err error) {
if ok, err := dv.results.cached(RequireNotFromFutureSlot); ok {
return err
}
defer dv.recordResult(RequireNotFromFutureSlot, &err)
// Retrieve the current slot.
currentSlot := dv.clock.CurrentSlot()
// Get the current time.
now := dv.clock.Now()
// Retrieve the maximum gossip clock disparity.
maximumGossipClockDisparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
for _, dataColumn := range dv.dataColumns {
// Extract the data column slot.
dataColumnSlot := dataColumn.Slot()
// Skip if the data column slot is the same as the current slot.
if currentSlot == dataColumnSlot {
continue
}
// earliestStart represents the time the slot starts, lowered by MAXIMUM_GOSSIP_CLOCK_DISPARITY.
// We lower the time by MAXIMUM_GOSSIP_CLOCK_DISPARITY in case system time is running slightly behind real time.
earliestStart := dv.clock.SlotStart(dataColumnSlot).Add(-maximumGossipClockDisparity)
// If the system time is still before earliestStart, we consider the column from a future slot and return an error.
if now.Before(earliestStart) {
return columnErrBuilder(ErrFromFutureSlot)
}
}
return nil
}
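// exampleEarliestStart shows the arithmetic above with illustrative numbers
// (a hypothetical 500ms disparity; the real value comes from the beacon
// config): a slot starting at t=120s is acceptable from t=119.5s onward, so a
// clock running up to 500ms behind real time does not reject the column.
func exampleEarliestStart() time.Time {
slotStart := time.Unix(120, 0) // hypothetical slot start time
disparity := 500 * time.Millisecond // hypothetical clock disparity
return slotStart.Add(-disparity) // 119.5s
}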
func (dv *RODataColumnsVerifier) SlotAboveFinalized() (err error) {
if ok, err := dv.results.cached(RequireSlotAboveFinalized); ok {
return err
}
defer dv.recordResult(RequireSlotAboveFinalized, &err)
// Retrieve the finalized checkpoint.
finalizedCheckpoint := dv.fc.FinalizedCheckpoint()
// Compute the first slot of the finalized checkpoint epoch.
startSlot, err := slots.EpochStart(finalizedCheckpoint.Epoch)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "epoch start"))
}
for _, dataColumn := range dv.dataColumns {
// Extract the data column slot.
dataColumnSlot := dataColumn.Slot()
// Check if the data column slot is after the first slot of the epoch corresponding to the finalized checkpoint.
if dataColumnSlot <= startSlot {
return columnErrBuilder(ErrSlotNotAfterFinalized)
}
}
return nil
}
func (dv *RODataColumnsVerifier) ValidProposerSignature(ctx context.Context) (err error) {
if ok, err := dv.results.cached(RequireValidProposerSignature); ok {
return err
}
defer dv.recordResult(RequireValidProposerSignature, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the signature data from the data column.
signatureData := columnToSignatureData(dataColumn)
// Get logging fields.
fields := logging.DataColumnFields(dataColumn)
log := log.WithFields(fields)
// First check if there is a cached verification that can be reused.
seen, err := dv.sc.SignatureVerified(signatureData)
if err != nil {
log.WithError(err).Debug("Reusing failed proposer signature validation from cache")
columnVerificationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
return columnErrBuilder(ErrInvalidProposerSignature)
}
// If yes, we can skip the full verification.
if seen {
columnVerificationProposerSignatureCache.WithLabelValues("hit-valid").Inc()
continue
}
columnVerificationProposerSignatureCache.WithLabelValues("miss").Inc()
// Retrieve the parent state.
parentState, err := dv.parentState(ctx, dataColumn)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "parent state"))
}
// Full verification, which will subsequently be cached for anything sharing the signature cache.
if err = dv.sc.VerifySignature(signatureData, parentState); err != nil {
return columnErrBuilder(errors.Wrap(err, "verify signature"))
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) (err error) {
if ok, err := dv.results.cached(RequireSidecarParentSeen); ok {
return err
}
defer dv.recordResult(RequireSidecarParentSeen, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Skip if the parent root has been seen.
if parentSeen != nil && parentSeen(parentRoot) {
continue
}
if !dv.fc.HasNode(parentRoot) {
return columnErrBuilder(ErrSidecarParentNotSeen)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) (err error) {
if ok, err := dv.results.cached(RequireSidecarParentValid); ok {
return err
}
defer dv.recordResult(RequireSidecarParentValid, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
if badParent != nil && badParent(parentRoot) {
return columnErrBuilder(ErrSidecarParentInvalid)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarParentSlotLower() (err error) {
if ok, err := dv.results.cached(RequireSidecarParentSlotLower); ok {
return err
}
defer dv.recordResult(RequireSidecarParentSlotLower, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the slot of the parent block.
parentSlot, err := dv.fc.Slot(parentRoot)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "slot"))
}
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Check if the data column slot is after the parent slot.
if parentSlot >= dataColumnSlot {
return columnErrBuilder(ErrSlotNotAfterParent)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarDescendsFromFinalized() (err error) {
if ok, err := dv.results.cached(RequireSidecarDescendsFromFinalized); ok {
return err
}
defer dv.recordResult(RequireSidecarDescendsFromFinalized, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
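// Forkchoice only tracks blocks descending from the latest finalized
// checkpoint, so parent membership in forkchoice implies the sidecar
// descends from it.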
if !dv.fc.HasNode(parentRoot) {
return columnErrBuilder(ErrSidecarNotFinalizedDescendent)
}
}
return nil
}
func (dv *RODataColumnsVerifier) SidecarInclusionProven() (err error) {
if ok, err := dv.results.cached(RequireSidecarInclusionProven); ok {
return err
}
defer dv.recordResult(RequireSidecarInclusionProven, &err)
startTime := time.Now()
for _, dataColumn := range dv.dataColumns {
k, keyErr := inclusionProofKey(dataColumn)
if keyErr == nil {
if _, ok := dv.ic.Get(k); ok {
continue
}
} else {
log.WithError(keyErr).Error("Failed to get inclusion proof key")
}
if err = peerdas.VerifyDataColumnSidecarInclusionProof(dataColumn); err != nil {
return columnErrBuilder(ErrSidecarInclusionProofInvalid)
}
if keyErr == nil {
dv.ic.Add(k, struct{}{})
}
}
dataColumnSidecarInclusionProofVerificationHistogram.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
func (dv *RODataColumnsVerifier) SidecarKzgProofVerified() (err error) {
if ok, err := dv.results.cached(RequireSidecarKzgProofVerified); ok {
return err
}
defer dv.recordResult(RequireSidecarKzgProofVerified, &err)
startTime := time.Now()
err = dv.verifyDataColumnsCommitment(dv.dataColumns)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "verify data column commitment"))
}
dataColumnBatchKZGVerificationHistogram.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (err error) {
if ok, err := dv.results.cached(RequireSidecarProposerExpected); ok {
return err
}
defer dv.recordResult(RequireSidecarProposerExpected, &err)
type slotParentRoot struct {
slot primitives.Slot
parentRoot [fieldparams.RootLength]byte
}
targetRootBySlotParentRoot := make(map[slotParentRoot][fieldparams.RootLength]byte)
var targetRootFromCache = func(slot primitives.Slot, parentRoot [fieldparams.RootLength]byte) ([fieldparams.RootLength]byte, error) {
// Use cached values if available.
slotParentRoot := slotParentRoot{slot: slot, parentRoot: parentRoot}
if root, ok := targetRootBySlotParentRoot[slotParentRoot]; ok {
return root, nil
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(slot)
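// The lookup below is keyed by the previous epoch's target checkpoint, so
// step the epoch back by one (except at genesis).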
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Compute the target root for the epoch.
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
if err != nil {
return [fieldparams.RootLength]byte{}, errors.Wrap(err, "target root from epoch")
}
// Store the target root in the cache.
targetRootBySlotParentRoot[slotParentRoot] = targetRoot
return targetRoot, nil
}
for _, dataColumn := range dv.dataColumns {
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the target root for the data column.
targetRoot, err := targetRootFromCache(dataColumnSlot, parentRoot)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "target root"))
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Create a checkpoint for the target root.
checkpoint := &forkchoicetypes.Checkpoint{Root: targetRoot, Epoch: dataColumnEpoch}
// Try to extract the proposer index from the data column in the cache.
idx, cached := dv.pc.Proposer(checkpoint, dataColumnSlot)
if !cached {
// Retrieve the parent state.
parentState, err := dv.parentState(ctx, dataColumn)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "parent state"))
}
idx, err = dv.pc.ComputeProposer(ctx, parentRoot, dataColumnSlot, parentState)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "compute proposer"))
}
}
if idx != dataColumn.ProposerIndex() {
return columnErrBuilder(ErrSidecarUnexpectedProposer)
}
}
return nil
}
// parentState retrieves the parent state of the data column, from the cache if possible, otherwise via the StateByRooter.
func (dv *RODataColumnsVerifier) parentState(ctx context.Context, dataColumn blocks.RODataColumn) (state.BeaconState, error) {
parentRoot := dataColumn.ParentRoot()
// If the parent root is already in the cache, return it.
if st, ok := dv.stateByRoot[parentRoot]; ok {
return st, nil
}
// Retrieve the parent state from the state by rooter.
st, err := dv.sr.StateByRoot(ctx, parentRoot)
if err != nil {
return nil, errors.Wrap(err, "state by root")
}
// Store the parent state in the cache.
dv.stateByRoot[parentRoot] = st
return st, nil
}
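// Illustrative effect of the memoization above: when verifying all columns of
// a single block, every sidecar shares the same parent root, so at most one
// StateByRoot lookup is performed per batch.
//
//	for _, col := range dv.dataColumns {
//		st, err := dv.parentState(ctx, col) // served from dv.stateByRoot after the first call
//		...
//	}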
func columnToSignatureData(d blocks.RODataColumn) SignatureData {
return SignatureData{
Root: d.BlockRoot(),
Parent: d.ParentRoot(),
Signature: bytesutil.ToBytes96(d.SignedBlockHeader.Signature),
Proposer: d.ProposerIndex(),
Slot: d.Slot(),
}
}
func columnErrBuilder(baseErr error) error {
return errors.Wrap(baseErr, errColumnsInvalid.Error())
}
func inclusionProofKey(c blocks.RODataColumn) ([160]byte, error) {
var key [160]byte
if len(c.KzgCommitmentsInclusionProof) != 4 {
// This should be already enforced by ssz unmarshaling; still check so we don't panic on array bounds.
return key, columnErrBuilder(ErrSidecarInclusionProofInvalid)
}
root, err := c.SignedBlockHeader.HashTreeRoot()
if err != nil {
return [160]byte{}, errors.Wrap(err, "hash tree root")
}
for i := range c.KzgCommitmentsInclusionProof {
if copy(key[32*i:32*i+32], c.KzgCommitmentsInclusionProof[i]) != 32 {
return key, columnErrBuilder(ErrSidecarInclusionProofInvalid)
}
}
copy(key[128:], root[:])
return key, nil
}
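// Layout of the 160-byte key built above:
//
//	[  0: 32] inclusion proof node 0
//	[ 32: 64] inclusion proof node 1
//	[ 64: 96] inclusion proof node 2
//	[ 96:128] inclusion proof node 3
//	[128:160] signed block header hash tree root
//
// Every column of a block shares these five values, so once one column's proof
// is verified the remaining columns hit the cache in SidecarInclusionProven.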

View File

@@ -0,0 +1,978 @@
package verification
import (
"context"
"reflect"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
func GenerateTestDataColumns(t *testing.T, parent [fieldparams.RootLength]byte, slot primitives.Slot, blobCount int) []blocks.RODataColumn {
roBlock, roBlobs := util.GenerateTestDenebBlockWithSidecar(t, parent, slot, blobCount)
blobs := make([]kzg.Blob, 0, len(roBlobs))
for i := range roBlobs {
blobs = append(blobs, kzg.Blob(roBlobs[i].Blob))
}
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
dataColumnSidecars, err := peerdas.DataColumnSidecars(roBlock, cellsAndProofs)
require.NoError(t, err)
roDataColumns := make([]blocks.RODataColumn, 0, len(dataColumnSidecars))
for i := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumn(dataColumnSidecars[i])
require.NoError(t, err)
roDataColumns = append(roDataColumns, roDataColumn)
}
return roDataColumns
}
func TestColumnSatisfyRequirement(t *testing.T) {
const (
columnSlot = 1
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
initializer := Initializer{}
v := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
require.Equal(t, false, v.results.executed(RequireValidProposerSignature))
v.SatisfyRequirement(RequireValidProposerSignature)
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
}
func TestValid(t *testing.T) {
var initializer Initializer
t.Run("one invalid column", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
columns[0].KzgCommitments = [][]byte{}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidFields()
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireValidFields))
})
t.Run("nominal", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidFields()
require.NoError(t, err)
require.IsNil(t, verifier.results.result(RequireValidFields))
err = verifier.ValidFields()
require.NoError(t, err)
})
}
func TestCorrectSubnet(t *testing.T) {
const dataColumnSidecarSubTopic = "/data_column_sidecar_%d/"
var initializer Initializer
t.Run("lengths mismatch", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.CorrectSubnet(dataColumnSidecarSubTopic, []string{})
require.ErrorIs(t, err, errBadTopicLength)
require.NotNil(t, verifier.results.result(RequireCorrectSubnet))
})
t.Run("wrong topic", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns[:2], GossipDataColumnSidecarRequirements)
err := verifier.CorrectSubnet(
dataColumnSidecarSubTopic,
[]string{
"/eth2/9dc47cc6/data_column_sidecar_1/ssz_snappy",
"/eth2/9dc47cc6/data_column_sidecar_0/ssz_snappy",
})
require.ErrorIs(t, err, errBadTopic)
require.NotNil(t, verifier.results.result(RequireCorrectSubnet))
})
t.Run("nominal", func(t *testing.T) {
subnets := []string{
"/eth2/9dc47cc6/data_column_sidecar_0/ssz_snappy",
"/eth2/9dc47cc6/data_column_sidecar_1",
}
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns[:2], GossipDataColumnSidecarRequirements)
err := verifier.CorrectSubnet(dataColumnSidecarSubTopic, subnets)
require.NoError(t, err)
require.IsNil(t, verifier.results.result(RequireCorrectSubnet))
err = verifier.CorrectSubnet(dataColumnSidecarSubTopic, subnets)
require.NoError(t, err)
})
}
func TestNotFromFutureSlot(t *testing.T) {
maximumGossipClockDisparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
testCases := []struct {
name string
currentSlot, columnSlot primitives.Slot
timeBeforeCurrentSlot time.Duration
isError bool
}{
{
name: "column slot == current slot",
currentSlot: 42,
columnSlot: 42,
timeBeforeCurrentSlot: 0,
isError: false,
},
{
name: "within maximum gossip clock disparity",
currentSlot: 42,
columnSlot: 42,
timeBeforeCurrentSlot: maximumGossipClockDisparity / 2,
isError: false,
},
{
name: "outside maximum gossip clock disparity",
currentSlot: 42,
columnSlot: 42,
timeBeforeCurrentSlot: maximumGossipClockDisparity * 2,
isError: true,
},
{
name: "too far in the future",
currentSlot: 10,
columnSlot: 42,
timeBeforeCurrentSlot: 0,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const blobCount = 1
now := time.Now()
secondsPerSlot := time.Duration(params.BeaconConfig().SecondsPerSlot)
genesis := now.Add(-time.Duration(tc.currentSlot) * secondsPerSlot * time.Second)
clock := startup.NewClock(
genesis,
[fieldparams.RootLength]byte{},
startup.WithNower(func() time.Time {
return now.Add(-tc.timeBeforeCurrentSlot)
}),
)
parentRoot := [fieldparams.RootLength]byte{}
initializer := Initializer{shared: &sharedResources{clock: clock}}
columns := GenerateTestDataColumns(t, parentRoot, tc.columnSlot, blobCount)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.NotFromFutureSlot()
require.Equal(t, true, verifier.results.executed(RequireNotFromFutureSlot))
if tc.isError {
require.ErrorIs(t, err, ErrFromFutureSlot)
require.NotNil(t, verifier.results.result(RequireNotFromFutureSlot))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireNotFromFutureSlot))
err = verifier.NotFromFutureSlot()
require.NoError(t, err)
})
}
}
func TestColumnSlotAboveFinalized(t *testing.T) {
testCases := []struct {
name string
finalizedSlot, columnSlot primitives.Slot
isErr bool
}{
{
name: "finalized epoch < column epoch",
finalizedSlot: 10,
columnSlot: 96,
isErr: false,
},
{
name: "finalized slot < column slot (same epoch)",
finalizedSlot: 32,
columnSlot: 33,
isErr: false,
},
{
name: "finalized slot == column slot",
finalizedSlot: 64,
columnSlot: 64,
isErr: true,
},
{
name: "finalized epoch > column epoch",
finalizedSlot: 32,
columnSlot: 31,
isErr: true,
},
}
for _, tc := range testCases {
const blobCount = 1
t.Run(tc.name, func(t *testing.T) {
finalizedCheckpoint := func() *forkchoicetypes.Checkpoint {
return &forkchoicetypes.Checkpoint{
Epoch: slots.ToEpoch(tc.finalizedSlot),
Root: [fieldparams.RootLength]byte{},
}
}
parentRoot := [fieldparams.RootLength]byte{}
initializer := &Initializer{shared: &sharedResources{
fc: &mockForkchoicer{FinalizedCheckpointCB: finalizedCheckpoint},
}}
columns := GenerateTestDataColumns(t, parentRoot, tc.columnSlot, blobCount)
v := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := v.SlotAboveFinalized()
require.Equal(t, true, v.results.executed(RequireSlotAboveFinalized))
if tc.isErr {
require.ErrorIs(t, err, ErrSlotNotAfterFinalized)
require.NotNil(t, v.results.result(RequireSlotAboveFinalized))
return
}
require.NoError(t, err)
require.NoError(t, v.results.result(RequireSlotAboveFinalized))
err = v.SlotAboveFinalized()
require.NoError(t, err)
})
}
}
func TestValidProposerSignature(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
validator := &ethpb.Validator{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
// The signature data is identical for every data column of the same block, so we can use the first one.
expectedSignatureData := columnToSignatureData(firstColumn)
testCases := []struct {
isError bool
vscbShouldError bool
svcbReturn bool
stateByRooter StateByRooter
vscbError error
svcbError error
name string
}{
{
name: "cache hit - success",
svcbReturn: true,
svcbError: nil,
vscbShouldError: true,
vscbError: nil,
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
isError: false,
},
{
name: "cache hit - error",
svcbReturn: true,
svcbError: errors.New("derp"),
vscbShouldError: true,
vscbError: nil,
stateByRooter: &mockStateByRooter{sbr: sbrErrorIfCalled(t)},
isError: true,
},
{
name: "cache miss - success",
svcbReturn: false,
svcbError: nil,
vscbShouldError: false,
vscbError: nil,
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
isError: false,
},
{
name: "cache miss - state not found",
svcbReturn: false,
svcbError: nil,
vscbShouldError: false,
vscbError: nil,
stateByRooter: sbrNotFound(t, expectedSignatureData.Parent),
isError: true,
},
{
name: "cache miss - signature failure",
svcbReturn: false,
svcbError: nil,
vscbShouldError: false,
vscbError: errors.New("signature, not so good!"),
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
signatureCache := &mockSignatureCache{
svcb: func(signatureData SignatureData) (bool, error) {
if signatureData != expectedSignatureData {
t.Error("Did not see expected SignatureData")
}
return tc.svcbReturn, tc.svcbError
},
vscb: func(signatureData SignatureData, _ ValidatorAtIndexer) (err error) {
if tc.vscbShouldError {
t.Error("VerifySignature should not be called if the result is cached")
return nil
}
if expectedSignatureData != signatureData {
t.Error("unexpected signature data")
}
return tc.vscbError
},
}
initializer := Initializer{
shared: &sharedResources{
sc: signatureCache,
sr: tc.stateByRooter,
},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidProposerSignature(context.Background())
require.Equal(t, true, verifier.results.executed(RequireValidProposerSignature))
if tc.isError {
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireValidProposerSignature))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireValidProposerSignature))
err = verifier.ValidProposerSignature(context.Background())
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarParentSeen(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
fcHas := &mockForkchoicer{
HasNodeCB: func(parent [fieldparams.RootLength]byte) bool {
if parent != firstColumn.ParentRoot() {
t.Error("forkchoice.HasNode called with unexpected parent root")
}
return true
},
}
fcLacks := &mockForkchoicer{
HasNodeCB: func(parent [fieldparams.RootLength]byte) bool {
if parent != firstColumn.ParentRoot() {
t.Error("forkchoice.HasNode called with unexpected parent root")
}
return false
},
}
testCases := []struct {
name string
forkChoicer Forkchoicer
parentSeen func([fieldparams.RootLength]byte) bool
isError bool
}{
{
name: "happy path",
forkChoicer: fcHas,
parentSeen: nil,
isError: false,
},
{
name: "HasNode false, no badParent cb, expected error",
forkChoicer: fcLacks,
parentSeen: nil,
isError: true,
},
{
name: "HasNode false, badParent true",
forkChoicer: fcLacks,
parentSeen: badParentCb(t, firstColumn.ParentRoot(), true),
isError: false,
},
{
name: "HasNode false, badParent false",
forkChoicer: fcLacks,
parentSeen: badParentCb(t, firstColumn.ParentRoot(), false),
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
initializer := Initializer{shared: &sharedResources{fc: tc.forkChoicer}}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarParentSeen(tc.parentSeen)
require.Equal(t, true, verifier.results.executed(RequireSidecarParentSeen))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarParentNotSeen)
require.NotNil(t, verifier.results.result(RequireSidecarParentSeen))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarParentSeen))
err = verifier.SidecarParentSeen(tc.parentSeen)
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarParentValid(t *testing.T) {
testCases := []struct {
name string
badParentCbReturn bool
isError bool
}{
{
name: "parent valid",
badParentCbReturn: false,
isError: false,
},
{
name: "parent not valid",
badParentCbReturn: true,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
initializer := Initializer{shared: &sharedResources{}}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarParentValid(badParentCb(t, firstColumn.ParentRoot(), tc.badParentCbReturn))
require.Equal(t, true, verifier.results.executed(RequireSidecarParentValid))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarParentInvalid)
require.NotNil(t, verifier.results.result(RequireSidecarParentValid))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarParentValid))
err = verifier.SidecarParentValid(badParentCb(t, firstColumn.ParentRoot(), tc.badParentCbReturn))
require.NoError(t, err)
})
}
}
func TestColumnSidecarParentSlotLower(t *testing.T) {
columns := GenerateTestDataColumns(t, [32]byte{}, 1, 1)
firstColumn := columns[0]
cases := []struct {
name string
forkChoiceSlot primitives.Slot
forkChoiceError, err error
errCheckValue bool
}{
{
name: "Not in forkchoice",
forkChoiceError: errors.New("not in forkchoice"),
err: ErrSlotNotAfterParent,
},
{
name: "In forkchoice, slot lower",
forkChoiceSlot: firstColumn.Slot() - 1,
},
{
name: "In forkchoice, slot equal",
forkChoiceSlot: firstColumn.Slot(),
err: ErrSlotNotAfterParent,
errCheckValue: true,
},
{
name: "In forkchoice, slot higher",
forkChoiceSlot: firstColumn.Slot() + 1,
err: ErrSlotNotAfterParent,
errCheckValue: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{fc: &mockForkchoicer{
SlotCB: func(r [32]byte) (primitives.Slot, error) {
if firstColumn.ParentRoot() != r {
t.Error("forkchoice.Slot called with unexpected parent root")
}
return c.forkChoiceSlot, c.forkChoiceError
},
}},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarParentSlotLower()
require.Equal(t, true, verifier.results.executed(RequireSidecarParentSlotLower))
if c.err == nil {
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarParentSlotLower))
err = verifier.SidecarParentSlotLower()
require.NoError(t, err)
return
}
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireSidecarParentSlotLower))
if c.errCheckValue {
require.ErrorIs(t, err, c.err)
}
})
}
}
func TestDataColumnsSidecarDescendsFromFinalized(t *testing.T) {
testCases := []struct {
name string
hasNodeCBReturn bool
isError bool
}{
{
name: "Not canonical",
hasNodeCBReturn: false,
isError: true,
},
{
name: "Canonical",
hasNodeCBReturn: true,
isError: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
initializer := Initializer{
shared: &sharedResources{
fc: &mockForkchoicer{
HasNodeCB: func(r [fieldparams.RootLength]byte) bool {
if firstColumn.ParentRoot() != r {
t.Error("forkchoice.Slot called with unexpected parent root")
}
return tc.hasNodeCBReturn
},
},
},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarDescendsFromFinalized()
require.Equal(t, true, verifier.results.executed(RequireSidecarDescendsFromFinalized))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarNotFinalizedDescendent)
require.NotNil(t, verifier.results.result(RequireSidecarDescendsFromFinalized))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarDescendsFromFinalized))
err = verifier.SidecarDescendsFromFinalized()
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarInclusionProven(t *testing.T) {
testCases := []struct {
name string
alter bool
isError bool
}{
{
name: "Inclusion proven",
alter: false,
isError: false,
},
{
name: "Inclusion not proven",
alter: true,
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
if tc.alter {
firstColumn := columns[0]
byte0 := firstColumn.SignedBlockHeader.Header.BodyRoot[0]
firstColumn.SignedBlockHeader.Header.BodyRoot[0] = byte0 ^ 255
}
initializer := Initializer{
shared: &sharedResources{ic: newInclusionProofCache(1)},
}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarInclusionProven()
require.Equal(t, true, verifier.results.executed(RequireSidecarInclusionProven))
if tc.isError {
require.ErrorIs(t, err, ErrSidecarInclusionProofInvalid)
require.NotNil(t, verifier.results.result(RequireSidecarInclusionProven))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarInclusionProven))
err = verifier.SidecarInclusionProven()
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarKzgProofVerified(t *testing.T) {
testCases := []struct {
isError bool
verifyDataColumnsCommitmentError error
name string
}{
{
name: "KZG proof verified",
verifyDataColumnsCommitmentError: nil,
isError: false,
},
{
name: "KZG proof not verified",
verifyDataColumnsCommitmentError: errors.New("KZG proof error"),
isError: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const (
columnSlot = 0
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
verifyDataColumnsCommitment := func(roDataColumns []blocks.RODataColumn) error {
for _, roDataColumn := range roDataColumns {
require.Equal(t, true, reflect.DeepEqual(firstColumn.KzgCommitments, roDataColumn.KzgCommitments))
}
return tc.verifyDataColumnsCommitmentError
}
verifier := &RODataColumnsVerifier{
results: newResults(),
dataColumns: columns,
verifyDataColumnsCommitment: verifyDataColumnsCommitment,
}
err := verifier.SidecarKzgProofVerified()
require.Equal(t, true, verifier.results.executed(RequireSidecarKzgProofVerified))
if tc.isError {
require.NotNil(t, err)
require.NotNil(t, verifier.results.result(RequireSidecarKzgProofVerified))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarKzgProofVerified))
err = verifier.SidecarKzgProofVerified()
require.NoError(t, err)
})
}
}
func TestDataColumnsSidecarProposerExpected(t *testing.T) {
const (
columnSlot = 1
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
newColumns := GenerateTestDataColumns(t, parentRoot, 2*params.BeaconConfig().SlotsPerEpoch, blobCount)
firstNewColumn := newColumns[0]
validator := &ethpb.Validator{}
commonComputeProposerCB := func(_ context.Context, root [fieldparams.RootLength]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstColumn.ParentRoot(), root)
require.Equal(t, firstColumn.Slot(), slot)
return firstColumn.ProposerIndex(), nil
}
testCases := []struct {
name string
stateByRooter StateByRooter
proposerCache ProposerCache
columns []blocks.RODataColumn
error string
}{
{
name: "Cached, matches",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex()),
},
columns: columns,
},
{
name: "Cached, does not match",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex() + 1),
},
columns: columns,
error: ErrSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, state lookup failure",
stateByRooter: sbrNotFound(t, firstColumn.ParentRoot()),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
},
columns: columns,
error: "state by root",
},
{
name: "Not cached, proposer matches",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: commonComputeProposerCB,
},
columns: columns,
},
{
name: "Not cached, proposer matches for next epoch",
stateByRooter: sbrForValOverride(firstNewColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstNewColumn.ParentRoot(), root)
require.Equal(t, firstNewColumn.Slot(), slot)
return firstColumn.ProposerIndex(), nil
},
},
columns: newColumns,
},
{
name: "Not cached, proposer does not match",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstColumn.ParentRoot(), root)
require.Equal(t, firstColumn.Slot(), slot)
return firstColumn.ProposerIndex() + 1, nil
},
},
columns: columns,
error: ErrSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, ComputeProposer fails",
stateByRooter: sbrForValOverride(firstColumn.ProposerIndex(), validator),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, firstColumn.ParentRoot(), root)
require.Equal(t, firstColumn.Slot(), slot)
return 0, errors.New("ComputeProposer failed")
},
},
columns: columns,
error: "compute proposer",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: tc.stateByRooter,
pc: tc.proposerCache,
fc: &mockForkchoicer{
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
},
},
}
verifier := initializer.NewDataColumnsVerifier(tc.columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(context.Background())
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
if len(tc.error) > 0 {
require.ErrorContains(t, tc.error, err)
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
return
}
require.NoError(t, err)
require.NoError(t, verifier.results.result(RequireSidecarProposerExpected))
err = verifier.SidecarProposerExpected(context.Background())
require.NoError(t, err)
})
}
}
func TestColumnRequirementSatisfaction(t *testing.T) {
const (
columnSlot = 1
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
initializer := Initializer{}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
// We haven't performed any verification, VerifiedRODataColumns should error.
_, err := verifier.VerifiedRODataColumns()
require.ErrorIs(t, err, errColumnsInvalid)
var me VerificationMultiError
ok := errors.As(err, &me)
require.Equal(t, true, ok)
fails := me.Failures()
// We haven't performed any verification, so every failure should be ErrMissingVerification.
for _, v := range fails {
require.ErrorIs(t, v, ErrMissingVerification)
}
// Satisfy everything but the first requirement through the backdoor.
for _, r := range GossipDataColumnSidecarRequirements[1:] {
verifier.results.record(r, nil)
}
// One requirement is missing, VerifiedRODataColumns should still error.
_, err = verifier.VerifiedRODataColumns()
require.ErrorIs(t, err, errColumnsInvalid)
// Now, satisfy the first requirement.
verifier.results.record(GossipDataColumnSidecarRequirements[0], nil)
// VerifiedRODataColumns should now succeed.
require.Equal(t, true, verifier.results.allSatisfied())
_, err = verifier.VerifiedRODataColumns()
require.NoError(t, err)
}

View File

@@ -114,3 +114,12 @@ func VerifiedROBlobError(err error) (blocks.VerifiedROBlob, error) {
}
return blocks.VerifiedROBlob{}, err
}
// VerifiedRODataColumnError can be used by methods that have a VerifiedRODataColumn return type but do not have permission to
// create a value of that type in order to generate an error return value.
func VerifiedRODataColumnError(err error) (blocks.VerifiedRODataColumn, error) {
if err == nil {
return blocks.VerifiedRODataColumn{}, errVerificationImplementationFault
}
return blocks.VerifiedRODataColumn{}, err
}

View File

@@ -1,11 +1,14 @@
package verification
import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/spf13/afero"
)
// VerifiedROBlobFromDisk creates a verified read-only blob sidecar from data previously written to disk.
func VerifiedROBlobFromDisk(fs afero.Fs, root [32]byte, path string) (blocks.VerifiedROBlob, error) {
encoded, err := afero.ReadFile(fs, path)
if err != nil {
@@ -21,3 +24,33 @@ func VerifiedROBlobFromDisk(fs afero.Fs, root [32]byte, path string) (blocks.Ver
}
return blocks.NewVerifiedROBlob(ro), nil
}
// VerifiedRODataColumnFromDisk creates a verified read-only data column sidecar from its on-disk representation.
func VerifiedRODataColumnFromDisk(file afero.File, root [fieldparams.RootLength]byte, sszEncodedDataColumnSidecarSize uint32) (blocks.VerifiedRODataColumn, error) {
// Read the SSZ-encoded data column sidecar from the file. io.ReadFull is used
// because a bare Read may legally return fewer bytes than requested.
sszEncodedDataColumnSidecar := make([]byte, sszEncodedDataColumnSidecarSize)
if _, err := io.ReadFull(file, sszEncodedDataColumnSidecar); err != nil {
return VerifiedRODataColumnError(err)
}
// Unmarshal the SSZ encoded data column sidecar.
dataColumnSidecar := &ethpb.DataColumnSidecar{}
if err := dataColumnSidecar.UnmarshalSSZ(sszEncodedDataColumnSidecar); err != nil {
return VerifiedRODataColumnError(err)
}
// Create a RO data column.
roDataColumnSidecar, err := blocks.NewRODataColumnWithRoot(dataColumnSidecar, root)
if err != nil {
return VerifiedRODataColumnError(err)
}
// Create a verified RO data column.
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumnSidecar)
return verifiedRODataColumn, nil
}
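// exampleReadColumn is a hypothetical usage sketch, not part of this change:
// the filesystem, path and recorded sidecar size are assumed to come from the
// caller's own bookkeeping.
func exampleReadColumn(fs afero.Fs, path string, root [fieldparams.RootLength]byte, size uint32) (blocks.VerifiedRODataColumn, error) {
f, err := fs.Open(path)
if err != nil {
return VerifiedRODataColumnError(err)
}
defer func() {
_ = f.Close()
}()
return VerifiedRODataColumnFromDisk(f, root, size)
}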

View File

@@ -5,9 +5,11 @@ import (
"sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/network/forks"
@@ -38,6 +40,7 @@ type sharedResources struct {
sc SignatureCache
pc ProposerCache
sr StateByRooter
ic *inclusionProofCache
}
// Initializer is used to create different Verifiers.
@@ -57,6 +60,18 @@ func (ini *Initializer) NewBlobVerifier(b blocks.ROBlob, reqs []Requirement) *RO
}
}
// NewDataColumnsVerifier creates a DataColumnsVerifier for a slice of data columns, with the given set of requirements.
// WARNING: The returned verifier is not thread-safe, and should not be used concurrently.
func (ini *Initializer) NewDataColumnsVerifier(roDataColumns []blocks.RODataColumn, reqs []Requirement) *RODataColumnsVerifier {
return &RODataColumnsVerifier{
sharedResources: ini.shared,
dataColumns: roDataColumns,
results: newResults(reqs...),
verifyDataColumnsCommitment: peerdas.VerifyDataColumnsSidecarKZGProofs,
stateByRoot: make(map[[fieldparams.RootLength]byte]state.BeaconState),
}
}
// InitializerWaiter provides an Initializer once all dependent resources are ready
// via the WaitForInitializer method.
type InitializerWaiter struct {
@@ -86,6 +101,7 @@ func NewInitializerWaiter(cw startup.ClockWaiter, fc Forkchoicer, sr StateByRoot
fc: fc,
pc: pc,
sr: sr,
ic: newInclusionProofCache(DefaultInclusionProofCacheSize),
}
iw := &InitializerWaiter{cw: cw, ini: &Initializer{shared: shared}}
for _, o := range opts {
@@ -107,6 +123,7 @@ func (w *InitializerWaiter) WaitForInitializer(ctx context.Context) (*Initialize
vr := w.ini.shared.clock.GenesisValidatorsRoot()
sc := newSigCache(vr[:], DefaultSignatureCacheSize, w.getFork)
w.ini.shared.sc = sc
w.ini.shared.ic = newInclusionProofCache(DefaultInclusionProofCacheSize)
return w.ini, nil
}

View File

@@ -3,6 +3,7 @@ package verification
import (
"context"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
)
@@ -29,3 +30,27 @@ type BlobVerifier interface {
// NewBlobVerifier is a function signature that can be used by code that needs to be
// able to mock Initializer.NewBlobVerifier without complex setup.
type NewBlobVerifier func(b blocks.ROBlob, reqs []Requirement) BlobVerifier
// DataColumnsVerifier defines the methods implemented by the RODataColumnsVerifier.
// It serves the same purpose for data columns as the BlobVerifier interface serves for blobs.
type DataColumnsVerifier interface {
VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error)
SatisfyRequirement(Requirement)
ValidFields() error
CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error
NotFromFutureSlot() error
SlotAboveFinalized() error
ValidProposerSignature(ctx context.Context) error
SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error
SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error
SidecarParentSlotLower() error
SidecarDescendsFromFinalized() error
SidecarInclusionProven() error
SidecarKzgProofVerified() error
SidecarProposerExpected(ctx context.Context) error
}
// NewDataColumnsVerifier is a function signature that can be used to mock a setup where a
// column verifier can be easily initialized.
type NewDataColumnsVerifier func(dataColumns []blocks.RODataColumn, reqs []Requirement) DataColumnsVerifier
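// A minimal sketch of injecting a test double through this type (the
// mockColumnsVerifier type is hypothetical):
//
//	var newVerifier NewDataColumnsVerifier = func(cols []blocks.RODataColumn, reqs []Requirement) DataColumnsVerifier {
//		return &mockColumnsVerifier{columns: cols}
//	}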

View File

@@ -13,4 +13,25 @@ var (
},
[]string{"result"},
)
columnVerificationProposerSignatureCache = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "data_column_verification_proposer_signature_cache",
Help: "DataColumnSidecar proposer signature cache result.",
},
[]string{"result"},
)
dataColumnSidecarInclusionProofVerificationHistogram = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_inclusion_proof_verification_milliseconds",
Help: "Captures the time taken to verify data column sidecar inclusion proof.",
Buckets: []float64{5, 10, 50, 100, 150, 250, 500, 1000, 2000},
},
)
dataColumnBatchKZGVerificationHistogram = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_kzg_verification_data_column_batch_milliseconds",
Help: "Captures the time taken for batched data column kzg verification.",
Buckets: []float64{5, 10, 50, 100, 150, 250, 500, 1000, 2000},
},
)
)

View File

@@ -90,6 +90,11 @@ func (r *results) result(req Requirement) error {
return r.done[req]
}
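// cached reports whether req has already been executed, together with the
// result recorded for it, letting requirement methods short-circuit repeated
// calls, e.g.:
//
//	if ok, err := dv.results.cached(RequireValidFields); ok {
//		return err // nil if the earlier run succeeded
//	}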
func (r *results) cached(req Requirement) (bool, error) {
result, ok := r.done[req]
return ok, result
}
func (r *results) errors(err error) error {
return newVerificationMultiError(r, err)
}

View File

@@ -0,0 +1,15 @@
package verification
import (
"os"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
)
func TestMain(m *testing.M) {
if err := kzg.Start(); err != nil {
os.Exit(1)
}
os.Exit(m.Run())
}

View File

@@ -1,3 +0,0 @@
### Added
- Add light client p2p broadcaster functions.

View File

@@ -1,3 +0,0 @@
### Added
- Add light client p2p validator and subscriber functions.

View File

@@ -0,0 +1,3 @@
### Added
- Add support for light client req/resp domain.

View File

@@ -0,0 +1,3 @@
### Added
- Add light client mainnet spec test.

View File

@@ -0,0 +1,3 @@
### Added
- Add light client minimal spec test support for `update_ranking` tests.

View File

@@ -0,0 +1,3 @@
### Ignored
- Rename the `runLightClientUpdateRankingProofTest` function to `runLightClientUpdateRankingTest` in lc spec tests.

View File

@@ -0,0 +1,3 @@
### Ignored
- Put the light client beacon api endpoints behind a flag

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix cyclical dependencies issue when using testing/util package

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed `/eth/v2/beacon/pool/attester_slashings` to return an empty array instead of nil when there are no slashings.

View File

@@ -1,3 +0,0 @@
### Ignored
- update deprecation message for gRPC to be less scary.

View File

@@ -0,0 +1,3 @@
### Fixed
- fixed wrong handler for get pending consolidations endpoint.

View File

@@ -1,2 +0,0 @@
### Fixed
- Ensure that the `payload_attributes` event has a consistent view of the head state by passing the head block in the event and using stategen to retrieve the corresponding state.

View File

@@ -1,2 +0,0 @@
### Fixed
- extend the payload attribute computation deadline to the beginning of the proposal slot.

View File

@@ -1,3 +0,0 @@
### Added
- Added immediate broadcasting of proposer slashings when equivocating blocks are detected during block processing.
- Added 2 new errors: `HeadStateErr` and `ErrCouldNotVerifyBlockHeader`

View File

@@ -1,3 +0,0 @@
### Added
- PeerDAS related KZG wrappers.

View File

@@ -0,0 +1,2 @@
### Added
- Implement data column sidecars filesystem.

View File

@@ -1,3 +0,0 @@
### Added
- PeerDAS: Add needed proto files and corresponding generated code.

View File

@@ -0,0 +1,2 @@
### Added
- Data column sidecars verification methods.

View File

@@ -1,5 +0,0 @@
### Added
- `UpgradeToFulu` spectests.
### Fixed
- `UpgradeToFulu`

View File

@@ -1,3 +0,0 @@
### Fixed
- avoid nondeterministic default fork value when generating genesis. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15151)

View File

@@ -0,0 +1,3 @@
### Changed
- Disable log processing after deposit requests are activated.

View File

@@ -1,3 +0,0 @@
### Fixed
- Fixes our generated ssz files to have the correct package name.

View File

@@ -1,3 +0,0 @@
### Fixed
- Fixes our blob sidecar by root request lists for electra.

View File

@@ -1,6 +0,0 @@
### Security
- Fix CVE-2025-22869
- Fix CVE-2025-22870
- Fix CVE-2025-22872
- Fix CVE-2025-30204

View File

@@ -0,0 +1,3 @@
### Ignored
- Use independent context for domain data calculation.

View File

@@ -1,3 +0,0 @@
### Ignored
- Check for uninitialized duties on `checkDependentRoot`

View File

@@ -1,3 +0,0 @@
### Fixed
- Pass parent context to update duties when dependent roots change

View File

@@ -1,3 +0,0 @@
### Ignored
- Add dependent roots in block events.

View File

@@ -1,3 +0,0 @@
### Changed
- Removed the slot from `UpdateDuties`.

View File

@@ -1,2 +0,0 @@
### Changed
- Refactored internal function `reValidateSubscriptions` to `pruneSubscriptions` in `beacon-chain/sync/subscriber.go` for improved clarity, addressing a TODO comment.

changelog/pvl-build.md Normal file
View File

@@ -0,0 +1,3 @@
### Added
- Added Prysm build data to otel tracing spans.

View File

@@ -0,0 +1,3 @@
### Ignored
- Release notes for v6.0.1

changelog/pvl-spans.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Added more tracing spans to various helpers related to GetDuties

View File

@@ -0,0 +1,3 @@
### Ignored
- Updated sha256 hashes for spectests at version v1.5.0. This was due to maintainers re-issuing the tar files to omit various empty directories.

View File

@@ -1,3 +0,0 @@
### Changed
- Updated geth to v1.15.9

View File

@@ -1,3 +0,0 @@
### Ignored
- Updated changelog for v6.0.0 release

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix Prysm endpoints `/prysm/v1/validators/{state_id}/participation` and `/prysm/v1/validators/{state_id}/active_set_changes` to properly handle `{state_id}`.

View File

@@ -0,0 +1,3 @@
### Added
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra.

View File

@@ -1,3 +0,0 @@
### Added
- Implement pending consolidations Beacon API endpoint.

Some files were not shown because too many files have changed in this diff Show More