Mirror of https://github.com/OffchainLabs/prysm.git
Synced 2026-01-27 14:18:13 -05:00

Compare commits: 49 commits, `eas-bug...docs/docum`
| SHA1 |
|---|
| 7c5ab84f78 |
| e7c99085cf |
| c76be28115 |
| 5de8b0f660 |
| a2da060021 |
| db0c78a025 |
| bc275e5a76 |
| 0924ba8bc8 |
| 5d0f4c76ad |
| 370d34a68a |
| 84e592f45c |
| 592b260950 |
| 7e12f45d3a |
| a7be4e6eb4 |
| ef7021132e |
| d39ba8801c |
| 4282ce2277 |
| 8c33a1d54b |
| d01a2c97aa |
| b5cec39191 |
| dbda80919e |
| ce421f0534 |
| eafbcdcbd7 |
| 090cbbdb94 |
| d8bc2aa754 |
| 27ab90eed2 |
| e3ed9dc79d |
| 1d2f0a7e0a |
| 6caa29da1f |
| 18eca953c1 |
| 8191bb5711 |
| d4613aee0c |
| 9fcc1a7a77 |
| 75dea214ac |
| 4374e709cb |
| be300f80bd |
| 096cba5b2d |
| d5127233e4 |
| 3d35cc20ec |
| 1e658530a7 |
| b360794c9c |
| 0fc9ab925a |
| dda5ee3334 |
| 14c67376c3 |
| 9c8b68a66d |
| a3210157e2 |
| 1536d59e30 |
| 11e46a4560 |
| 5a2e51b894 |
**`.github/PULL_REQUEST_TEMPLATE.md`** (vendored, 3 changes)

```diff
@@ -34,4 +34,5 @@ Fixes #
 - [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
 - [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
-- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
+- [ ] I have added a description with sufficient context for reviewers to understand this PR.
+- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).
```
```diff
@@ -193,6 +193,7 @@ nogo(
         "//tools/analyzers/featureconfig:go_default_library",
         "//tools/analyzers/gocognit:go_default_library",
         "//tools/analyzers/ineffassign:go_default_library",
+        "//tools/analyzers/httperror:go_default_library",
         "//tools/analyzers/interfacechecker:go_default_library",
         "//tools/analyzers/logcapitalization:go_default_library",
         "//tools/analyzers/logruswitherror:go_default_library",
```
**`CHANGELOG.md`** (85 additions)

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10

This release includes several key features and fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.

Release highlights:

- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration that custodies enough data columns to reconstruct blobs. Use the `--semi-supernode` flag to custody at least 50% of the data columns.
- Critical fixes in attestation processing.

A post-mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
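The 50% threshold is not arbitrary: PeerDAS extends blobs with a 2x erasure code, so any half of the columns suffices to reconstruct every blob. A minimal sanity check of that arithmetic, assuming the 128-column Fulu preset that appears elsewhere in this diff (this snippet is illustrative and not part of the release):

```go
package main

import "fmt"

// PeerDAS extends each blob with a 2x erasure code, so reconstruction needs
// any 50% of the columns. A --semi-supernode therefore custodies at least
// half of the full 128-column set that a supernode would custody.
func main() {
	const numberOfColumns = 128 // Fulu preset used elsewhere in this diff
	custody := numberOfColumns / 2
	fmt.Printf("semi-supernode custodies >= %d of %d columns\n", custody, numberOfColumns)
}
```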
### Added

- Add Fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- Prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: `backfill_data_column_sidecar_downloaded`, `backfill_data_column_sidecar_downloaded_bytes`, `backfill_batch_columns_download_ms`, `backfill_batch_columns_verify_ms`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)

### Changed

- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Changed the logging behaviour of updated fee recipients to log only the count of validators at DEBUG level and all validator indices at TRACE level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in the gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant `WithHttpEndpoint` when a JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from `blobCacheEntry.filter`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for the log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Backfill metrics that changed name and/or histogram buckets: `backfill_batch_time_verify` -> `backfill_batch_verify_ms`, `backfill_batch_time_waiting` -> `backfill_batch_waiting_ms`, `backfill_batch_time_roundtrip` -> `backfill_batch_roundtrip_ms`, `backfill_blocks_bytes_downloaded` -> `backfill_blocks_downloaded_bytes`, `backfill_batch_blocks_time_download` -> `backfill_batch_blocks_download_ms`, `backfill_batch_blobs_time_download` -> `backfill_batch_blobs_download_ms`, `backfill_blobs_bytes_downloaded` -> `backfill_blobs_downloaded_bytes`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; the default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)

### Removed

- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (no longer in the specification; replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (no longer in the specification). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)

### Fixed

- Nil check for the block if it doesn't exist in the DB in `fetchOriginSidecars`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move the `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state-diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check that the JWT secret length is exactly 256 bits (32 bytes) per the Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- `http_error_count` now matches the other cases by listing the endpoint name rather than the actual URL requested. This improves metrics cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- Fixes E2E tests to be able to start from the Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)

## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08

This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.

A post-mortem doc with full details is expected to be published later this week.

### Changed

- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; the default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)

### Fixed

- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)

## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10

This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.
```diff
@@ -110,7 +110,7 @@ func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, c
 	ckzgCells := make([]ckzg4844.Cell, len(cells))
 
 	for i := range cells {
-		copy(ckzgCells[i][:], cells[i][:])
+		ckzgCells[i] = ckzg4844.Cell(cells[i])
 	}
 	return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
 }
```
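For context on the one-line change above: Go permits direct conversion between distinct named types that share the same underlying array type, so the conversion copies the whole cell in one assignment instead of an element-wise `copy`. A toy illustration, with 4-byte cells standing in for the real KZG cell types:

```go
package main

import "fmt"

// srcCell and dstCell share the underlying type [4]byte, so dstCell(x) is a
// legal conversion that copies the array value in one step.
type srcCell [4]byte
type dstCell [4]byte

func main() {
	cells := []srcCell{{1, 2, 3, 4}, {5, 6, 7, 8}}
	out := make([]dstCell, len(cells))
	for i := range cells {
		out[i] = dstCell(cells[i]) // value copy via type conversion
	}
	fmt.Println(out)
}
```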
```diff
@@ -89,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
 	return nil
 }
 
-func (mb *mockBroadcaster) BroadcastDataColumnSidecars(_ context.Context, _ []blocks.VerifiedRODataColumn) error {
+func (mb *mockBroadcaster) BroadcastDataColumnSidecars(_ context.Context, _ []blocks.VerifiedRODataColumn, _ []blocks.PartialDataColumn) error {
 	mb.broadcastCalled = true
 	return nil
 }
```
```diff
@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
 	voteCount := uint64(0)
 
 	for _, vote := range beaconState.Eth1DataVotes() {
-		if AreEth1DataEqual(vote, data.Copy()) {
+		if AreEth1DataEqual(vote, data) {
 			voteCount++
 		}
 	}
```
```diff
@@ -152,7 +152,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
 	}
 
 	if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
-		return nil, errors.Wrap(err, "could not update committee cache")
+		log.WithError(err).Error("Could not update committee cache")
 	}
 
 	return indices, nil
```
```diff
@@ -33,6 +33,7 @@ go_library(
         "@com_github_pkg_errors//:go_default_library",
         "@com_github_prometheus_client_golang//prometheus:go_default_library",
         "@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
         "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
+        "@org_golang_x_sync//errgroup:go_default_library",
    ],
)
```
```diff
@@ -5,10 +5,20 @@ import (
 	"github.com/prometheus/client_golang/prometheus/promauto"
 )
 
-var dataColumnComputationTime = promauto.NewHistogram(
-	prometheus.HistogramOpts{
-		Name:    "beacon_data_column_sidecar_computation_milliseconds",
-		Help:    "Captures the time taken to compute data column sidecars from blobs.",
-		Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
-	},
+var (
+	dataColumnComputationTime = promauto.NewHistogram(
+		prometheus.HistogramOpts{
+			Name:    "beacon_data_column_sidecar_computation_milliseconds",
+			Help:    "Captures the time taken to compute data column sidecars from blobs.",
+			Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
+		},
+	)
+
+	cellsAndProofsFromStructuredComputationTime = promauto.NewHistogram(
+		prometheus.HistogramOpts{
+			Name:    "cells_and_proofs_from_structured_computation_milliseconds",
+			Help:    "Captures the time taken to compute cells and proofs from structured computation.",
+			Buckets: []float64{10, 20, 30, 40, 50, 100, 200},
+		},
+	)
 )
```
```diff
@@ -1,11 +1,15 @@
 package peerdas
 
 import (
+	stderrors "errors"
+	"iter"
+
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
 	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
 	"github.com/OffchainLabs/prysm/v7/config/params"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
 	"github.com/OffchainLabs/prysm/v7/container/trie"
+	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
 	"github.com/ethereum/go-ethereum/p2p/enr"
 	"github.com/pkg/errors"
 )
@@ -16,6 +20,7 @@ var (
 	ErrIndexTooLarge         = errors.New("column index is larger than the specified columns count")
 	ErrNoKzgCommitments      = errors.New("no KZG commitments found")
 	ErrMismatchLength        = errors.New("mismatch in the length of the column, commitments or proofs")
+	ErrEmptySegment          = errors.New("empty segment in batch")
 	ErrInvalidKZGProof       = errors.New("invalid KZG proof")
 	ErrBadRootLength         = errors.New("bad root length")
 	ErrInvalidInclusionProof = errors.New("invalid inclusion proof")
```
```diff
@@ -57,80 +62,122 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
 	return nil
 }
 
-// VerifyDataColumnsSidecarKZGProofs verifies if the KZG proofs are correct.
+// CellProofBundleSegment is returned when a batch fails. The caller can call
+// the `.Verify` method to verify just this segment.
+type CellProofBundleSegment struct {
+	indices     []uint64      // column index, i.e. in [0-127], for each cell
+	commitments []kzg.Bytes48 // KZG commitment for the blob that contains the cell that needs to be verified
+	cells       []kzg.Cell    // actual cell data for each cell (2KB each)
+	proofs      []kzg.Bytes48 // Cell proof for each cell
+	// Note: All the above arrays are of the same length.
+}
+
+// Verify verifies this segment without batching.
+func (s CellProofBundleSegment) Verify() error {
+	if len(s.cells) == 0 {
+		return ErrEmptySegment
+	}
+	verified, err := kzg.VerifyCellKZGProofBatch(s.commitments, s.indices, s.cells, s.proofs)
+	if err != nil {
+		return stderrors.Join(err, ErrInvalidKZGProof)
+	}
+	if !verified {
+		return ErrInvalidKZGProof
+	}
+	return nil
+}
+
+func VerifyDataColumnsCellsKZGProofs(sizeHint int, cellProofsIter iter.Seq[blocks.CellProofBundle]) error {
+	// The BatchVerifyDataColumnsCellsKZGProofs call also returns the failed segments if batch verification fails,
+	// so we can verify each segment separately to identify the failed cell ("BatchVerifyDataColumnsCellsKZGProofs" is all or nothing).
+	// But we ignore the failed segment list since we are just passing in one segment.
+	_, err := BatchVerifyDataColumnsCellsKZGProofs(sizeHint, []iter.Seq[blocks.CellProofBundle]{cellProofsIter})
+	return err
+}
+
+// BatchVerifyDataColumnsCellsKZGProofs verifies if the KZG proofs are correct.
 // Note: We are slightly deviating from the specification here:
 // The specification verifies the KZG proofs for each sidecar separately,
 // while we are verifying all the KZG proofs from multiple sidecars in a batch.
 // This is done to improve performance since the internal KZG library is way more
-// efficient when verifying in batch.
+// efficient when verifying in batch. If the batch fails, the failed segments
+// are returned to the caller so that they may try segment by segment without
+// batching. On success the failed segment list is empty.
 //
 // https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
-func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
-	// Compute the total count.
-	count := 0
-	for _, sidecar := range sidecars {
-		count += len(sidecar.Column)
-	}
-
-	commitments := make([]kzg.Bytes48, 0, count)
-	indices := make([]uint64, 0, count)
-	cells := make([]kzg.Cell, 0, count)
-	proofs := make([]kzg.Bytes48, 0, count)
-
-	for _, sidecar := range sidecars {
-		for i := range sidecar.Column {
+// sizeHint is the total number of cells to verify across all sidecars.
+func BatchVerifyDataColumnsCellsKZGProofs(sizeHint int, cellProofsIters []iter.Seq[blocks.CellProofBundle]) ( /* failed segment list */ []CellProofBundleSegment, error) {
+	commitments := make([]kzg.Bytes48, 0, sizeHint)
+	indices := make([]uint64, 0, sizeHint)
+	cells := make([]kzg.Cell, 0, sizeHint)
+	proofs := make([]kzg.Bytes48, 0, sizeHint)
+
+	var anySegmentEmpty bool
+	var segments []CellProofBundleSegment
+	for _, cellProofsIter := range cellProofsIters {
+		startIdx := len(cells) // starts at 0, and increments by the number of cells in the current segment
+		for bundle := range cellProofsIter {
 			var (
 				commitment kzg.Bytes48
 				cell       kzg.Cell
 				proof      kzg.Bytes48
 			)
 
-			commitmentBytes := sidecar.KzgCommitments[i]
-			cellBytes := sidecar.Column[i]
-			proofBytes := sidecar.KzgProofs[i]
-
-			if len(commitmentBytes) != len(commitment) ||
-				len(cellBytes) != len(cell) ||
-				len(proofBytes) != len(proof) {
-				return ErrMismatchLength
+			// sanity check that each commitment is 48 bytes, each cell is 2KB and each proof is 48 bytes.
+			if len(bundle.Commitment) != len(commitment) ||
+				len(bundle.Cell) != len(cell) ||
+				len(bundle.Proof) != len(proof) {
+				return nil, ErrMismatchLength
 			}
 
-			copy(commitment[:], commitmentBytes)
-			copy(cell[:], cellBytes)
-			copy(proof[:], proofBytes)
+			copy(commitment[:], bundle.Commitment)
+			copy(cell[:], bundle.Cell)
+			copy(proof[:], bundle.Proof)
 
 			commitments = append(commitments, commitment)
-			indices = append(indices, sidecar.Index)
+			indices = append(indices, bundle.ColumnIndex)
 			cells = append(cells, cell)
 			proofs = append(proofs, proof)
 		}
+		if len(cells[startIdx:]) == 0 {
+			// this basically means that the current segment, i.e. the current "bundle" we're
+			// iterating over, is empty because "startIdx" did not increment after the last iteration.
+			anySegmentEmpty = true
+		}
+		segments = append(segments, CellProofBundleSegment{
+			indices:     indices[startIdx:],
+			commitments: commitments[startIdx:],
+			cells:       cells[startIdx:],
+			proofs:      proofs[startIdx:],
+		})
 	}
 
+	if anySegmentEmpty {
+		// if any segment is empty, we don't verify anything as it's a malformed batch, and we return all
+		// the segments here so the caller can verify each segment separately.
+		return segments, ErrEmptySegment
+	}
+
 	// Batch verify that the cells match the corresponding commitments and proofs.
 	verified, err := kzg.VerifyCellKZGProofBatch(commitments, indices, cells, proofs)
 	if err != nil {
-		return errors.Wrap(err, "verify cell KZG proof batch")
+		return segments, stderrors.Join(err, ErrInvalidKZGProof)
 	}
 
 	if !verified {
-		return ErrInvalidKZGProof
+		return segments, ErrInvalidKZGProof
 	}
 
-	return nil
+	return nil, nil
 }
 
-// VerifyDataColumnSidecarInclusionProof verifies if the given KZG commitments are included in the given beacon block.
-// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
-func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
-	if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
-		return ErrNilBlockHeader
-	}
-
-	root := sidecar.SignedBlockHeader.Header.BodyRoot
-	if len(root) != fieldparams.RootLength {
+// verifyKzgCommitmentsInclusionProof is the shared implementation for inclusion proof verification.
+func verifyKzgCommitmentsInclusionProof(bodyRoot []byte, kzgCommitments [][]byte, inclusionProof [][]byte) error {
+	if len(bodyRoot) != fieldparams.RootLength {
 		return ErrBadRootLength
 	}
 
-	leaves := blocks.LeavesFromCommitments(sidecar.KzgCommitments)
+	leaves := blocks.LeavesFromCommitments(kzgCommitments)
 
 	sparse, err := trie.GenerateTrieFromItems(leaves, fieldparams.LogMaxBlobCommitments)
 	if err != nil {
```
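The failed-segment return value enables a two-tier verification strategy: batch-verify everything first, and only on failure fall back to per-segment verification to locate the offending column. A sketch of a plausible caller, not taken from the diff — the import paths and type names follow the hunks above, and the logging is illustrative:

```go
package example

import (
	"iter"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/sirupsen/logrus"
)

// verifyWithFallback batch-verifies all segments in one KZG call and, if the
// all-or-nothing batch fails, re-verifies each returned segment individually
// to pinpoint which one carried the invalid proof.
func verifyWithFallback(sizeHint int, iters []iter.Seq[blocks.CellProofBundle]) error {
	failedSegments, err := peerdas.BatchVerifyDataColumnsCellsKZGProofs(sizeHint, iters)
	if err != nil {
		for i, seg := range failedSegments {
			if segErr := seg.Verify(); segErr != nil {
				logrus.WithError(segErr).Errorf("segment %d failed KZG verification", i)
			}
		}
	}
	return err
}
```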
```diff
@@ -142,7 +189,7 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
 		return errors.Wrap(err, "hash tree root")
 	}
 
-	verified := trie.VerifyMerkleProof(root, hashTreeRoot[:], kzgPosition, sidecar.KzgCommitmentsInclusionProof)
+	verified := trie.VerifyMerkleProof(bodyRoot, hashTreeRoot[:], kzgPosition, inclusionProof)
 	if !verified {
 		return ErrInvalidInclusionProof
 	}
@@ -150,6 +197,31 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
 	return nil
 }
 
+// VerifyDataColumnSidecarInclusionProof verifies if the given KZG commitments are included in the given beacon block.
+// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
+func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
+	if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
+		return ErrNilBlockHeader
+	}
+	return verifyKzgCommitmentsInclusionProof(
+		sidecar.SignedBlockHeader.Header.BodyRoot,
+		sidecar.KzgCommitments,
+		sidecar.KzgCommitmentsInclusionProof,
+	)
+}
+
+// VerifyPartialDataColumnHeaderInclusionProof verifies if the KZG commitments are included in the beacon block.
+func VerifyPartialDataColumnHeaderInclusionProof(header *ethpb.PartialDataColumnHeader) error {
+	if header.SignedBlockHeader == nil || header.SignedBlockHeader.Header == nil {
+		return ErrNilBlockHeader
+	}
+	return verifyKzgCommitmentsInclusionProof(
+		header.SignedBlockHeader.Header.BodyRoot,
+		header.KzgCommitments,
+		header.KzgCommitmentsInclusionProof,
+	)
+}
+
 // ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
 // https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
 func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
```
```diff
@@ -3,6 +3,7 @@ package peerdas_test
 
 import (
 	"crypto/rand"
 	"fmt"
+	"iter"
 	"testing"
 
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
```
```diff
@@ -72,7 +73,7 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
 		sidecars := generateRandomSidecars(t, seed, blobCount)
 		sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
 
-		err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
+		err := peerdas.VerifyDataColumnsCellsKZGProofs(0, blocks.RODataColumnsToCellProofBundles(sidecars))
 		require.ErrorIs(t, err, peerdas.ErrMismatchLength)
 	})
 
@@ -80,14 +81,15 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
 		sidecars := generateRandomSidecars(t, seed, blobCount)
 		sidecars[0].Column[0][0]++ // It is OK to overflow
 
-		err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
+		err := peerdas.VerifyDataColumnsCellsKZGProofs(0, blocks.RODataColumnsToCellProofBundles(sidecars))
 		require.ErrorIs(t, err, peerdas.ErrInvalidKZGProof)
 	})
 
 	t.Run("nominal", func(t *testing.T) {
 		sidecars := generateRandomSidecars(t, seed, blobCount)
-		err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
+		failedSegments, err := peerdas.BatchVerifyDataColumnsCellsKZGProofs(blobCount, []iter.Seq[blocks.CellProofBundle]{blocks.RODataColumnsToCellProofBundles(sidecars)})
 		require.NoError(t, err)
+		require.Equal(t, 0, len(failedSegments))
 	})
 }
```
```diff
@@ -273,7 +275,7 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
 	for _, sidecar := range sidecars {
 		sidecars := []blocks.RODataColumn{sidecar}
 		b.StartTimer()
-		err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
+		err := peerdas.VerifyDataColumnsCellsKZGProofs(0, blocks.RODataColumnsToCellProofBundles(sidecars))
 		b.StopTimer()
 		require.NoError(b, err)
 	}
@@ -308,7 +310,7 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.
 	}
 
 	b.StartTimer()
-	err := peerdas.VerifyDataColumnsSidecarKZGProofs(allSidecars)
+	err := peerdas.VerifyDataColumnsCellsKZGProofs(0, blocks.RODataColumnsToCellProofBundles(allSidecars))
 	b.StopTimer()
 	require.NoError(b, err)
 }
@@ -341,7 +343,7 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch4(b *testing
 
 	for _, sidecars := range allSidecars {
 		b.StartTimer()
-		err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
+		err := peerdas.VerifyDataColumnsCellsKZGProofs(len(allSidecars), blocks.RODataColumnsToCellProofBundles(sidecars))
 		b.StopTimer()
 		require.NoError(b, err)
 	}
```
```diff
@@ -3,7 +3,9 @@ package peerdas
 
 import (
 	"sort"
 	"sync"
+	"time"
 
+	"github.com/OffchainLabs/go-bitfield"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
 	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
 	"github.com/OffchainLabs/prysm/v7/config/params"
```
```diff
@@ -296,76 +298,116 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
 		return nil, nil, ErrBlobsCellsProofsMismatch
 	}
 
-	cellsPerBlob := make([][]kzg.Cell, 0, blobCount)
-	proofsPerBlob := make([][]kzg.Proof, 0, blobCount)
+	var wg errgroup.Group
+
+	cellsPerBlob := make([][]kzg.Cell, blobCount)
+	proofsPerBlob := make([][]kzg.Proof, blobCount)
 
 	for i, blob := range blobs {
-		var kzgBlob kzg.Blob
-		if copy(kzgBlob[:], blob) != len(kzgBlob) {
-			return nil, nil, errors.New("wrong blob size - should never happen")
-		}
-
-		// Compute the extended cells from the (non-extended) blob.
-		cells, err := kzg.ComputeCells(&kzgBlob)
-		if err != nil {
-			return nil, nil, errors.Wrap(err, "compute cells")
-		}
-
-		var proofs []kzg.Proof
-		for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
-			var kzgProof kzg.Proof
-			if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
-				return nil, nil, errors.New("wrong KZG proof size - should never happen")
-			}
-
-			proofs = append(proofs, kzgProof)
-		}
-
-		cellsPerBlob = append(cellsPerBlob, cells)
-		proofsPerBlob = append(proofsPerBlob, proofs)
+		wg.Go(func() error {
+			var kzgBlob kzg.Blob
+			if copy(kzgBlob[:], blob) != len(kzgBlob) {
+				return errors.New("wrong blob size - should never happen")
+			}
+
+			// Compute the extended cells from the (non-extended) blob.
+			cells, err := kzg.ComputeCells(&kzgBlob)
+			if err != nil {
+				return errors.Wrap(err, "compute cells")
+			}
+
+			proofs := make([]kzg.Proof, 0, numberOfColumns)
+			for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
+				var kzgProof kzg.Proof
+				if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
+					return errors.New("wrong KZG proof size - should never happen")
+				}
+
+				proofs = append(proofs, kzgProof)
+			}
+
+			cellsPerBlob[i] = cells
+			proofsPerBlob[i] = proofs
+			return nil
+		})
 	}
 
+	if err := wg.Wait(); err != nil {
+		return nil, nil, err
+	}
+
 	return cellsPerBlob, proofsPerBlob, nil
 }
 
 // ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
-func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
-	cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
-	proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
-
-	for i, blobAndProof := range blobsAndProofs {
-		if blobAndProof == nil {
-			return nil, nil, ErrNilBlobAndProof
-		}
-
-		var kzgBlob kzg.Blob
-		if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
-			return nil, nil, errors.New("wrong blob size - should never happen")
-		}
-
-		// Compute the extended cells from the (non-extended) blob.
-		cells, err := kzg.ComputeCells(&kzgBlob)
-		if err != nil {
-			return nil, nil, errors.Wrap(err, "compute cells")
-		}
-
-		kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
-		for _, kzgProofBytes := range blobAndProof.KzgProofs {
-			if len(kzgProofBytes) != kzg.BytesPerProof {
-				return nil, nil, errors.New("wrong KZG proof size - should never happen")
-			}
-
-			var kzgProof kzg.Proof
-			if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
-				return nil, nil, errors.New("wrong copied KZG proof size - should never happen")
-			}
-
-			kzgProofs = append(kzgProofs, kzgProof)
-		}
-
-		cellsPerBlob = append(cellsPerBlob, cells)
-		proofsPerBlob = append(proofsPerBlob, kzgProofs)
+// commitmentCount (i.e. number of blobs) is required to return the correctly sized bitlist even if we see a nil slice of blobsAndProofs.
+// Returns the bitlist of which blobs are present, the cells for each blob that is present, and the proofs for each cell in each blob that is present.
+// The returned cells and proofs are compacted and will not contain entries for missing blobs.
+func ComputeCellsAndProofsFromStructured(commitmentCount uint64, blobsAndProofs []*pb.BlobAndProofV2) (bitfield.Bitlist /* parts included */, [][]kzg.Cell, [][]kzg.Proof, error) {
+	start := time.Now()
+	defer func() {
+		cellsAndProofsFromStructuredComputationTime.Observe(float64(time.Since(start).Milliseconds()))
+	}()
+
+	var wg errgroup.Group
+
+	var blobsPresent int
+	for _, blobAndProof := range blobsAndProofs {
+		if blobAndProof != nil {
+			blobsPresent++
+		}
+	}
+	cellsPerBlob := make([][]kzg.Cell, blobsPresent)   // array of cells for each blob
+	proofsPerBlob := make([][]kzg.Proof, blobsPresent) // array of proofs for each blob
+	included := bitfield.NewBitlist(commitmentCount)   // bitlist of which blobs are present
+
+	var j int
+	for i, blobAndProof := range blobsAndProofs {
+		if blobAndProof == nil {
+			continue
+		}
+		included.SetBitAt(uint64(i), true) // blob at index i is present
+
+		compactIndex := j // the index of the blob in the returned arrays after accounting for nil/missing blobs.
+		wg.Go(func() error {
+			var kzgBlob kzg.Blob
+			// the number of copied bytes should be equal to the expected size of a blob.
+			if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
+				return errors.New("wrong blob size - should never happen")
+			}
+
+			// Compute the extended cells from the (non-extended) blob.
+			cells, err := kzg.ComputeCells(&kzgBlob)
+			if err != nil {
+				return errors.Wrap(err, "compute cells")
+			}
+
+			kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
+			for _, kzgProofBytes := range blobAndProof.KzgProofs {
+				if len(kzgProofBytes) != kzg.BytesPerProof {
+					return errors.New("wrong KZG proof size - should never happen")
+				}
+
+				var kzgProof kzg.Proof
+				if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
+					return errors.New("wrong copied KZG proof size - should never happen")
+				}
+
+				kzgProofs = append(kzgProofs, kzgProof)
+			}
+
+			cellsPerBlob[compactIndex] = cells
+			proofsPerBlob[compactIndex] = kzgProofs
+			return nil
+		})
+		j++
 	}
 
-	return cellsPerBlob, proofsPerBlob, nil
+	if err := wg.Wait(); err != nil {
+		return nil, nil, nil, err
+	}
+
+	return included, cellsPerBlob, proofsPerBlob, nil
 }
 
 // ReconstructBlobs reconstructs blobs from data column sidecars without computing KZG proofs or creating sidecars.
```
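The refactor above swaps append-based accumulation for index-addressed writes so each blob can be processed on its own goroutine: because every goroutine writes only to its own preallocated slot, no mutex is needed, and `errgroup.Wait` surfaces the first error. A self-contained toy version of the same pattern (it assumes Go 1.22+ per-iteration loop variables, which the diff's closures also rely on):

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// squareAll demonstrates the fan-out idiom from the diff: preallocate the
// result slice to its final length, then let each goroutine write only to
// its own index. Disjoint indices make the concurrent writes race-free.
func squareAll(inputs []int) ([]int, error) {
	results := make([]int, len(inputs)) // fixed length, not append
	var wg errgroup.Group
	for i, v := range inputs {
		wg.Go(func() error {
			results[i] = v * v // each goroutine owns exactly one slot
			return nil
		})
	}
	if err := wg.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}

func main() {
	out, _ := squareAll([]int{1, 2, 3})
	fmt.Println(out) // [1 4 9]
}
```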
```diff
@@ -479,8 +479,9 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
 
 func TestComputeCellsAndProofsFromStructured(t *testing.T) {
 	t.Run("nil blob and proof", func(t *testing.T) {
-		_, _, err := peerdas.ComputeCellsAndProofsFromStructured([]*pb.BlobAndProofV2{nil})
-		require.ErrorIs(t, err, peerdas.ErrNilBlobAndProof)
+		included, _, _, err := peerdas.ComputeCellsAndProofsFromStructured(0, []*pb.BlobAndProofV2{nil})
+		require.NoError(t, err)
+		require.Equal(t, uint64(0), included.Count())
 	})
 
 	t.Run("nominal", func(t *testing.T) {
@@ -533,7 +534,8 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
 	require.NoError(t, err)
 
 	// Test ComputeCellsAndProofs
-	actualCellsPerBlob, actualProofsPerBlob, err := peerdas.ComputeCellsAndProofsFromStructured(blobsAndProofs)
+	included, actualCellsPerBlob, actualProofsPerBlob, err := peerdas.ComputeCellsAndProofsFromStructured(uint64(len(blobsAndProofs)), blobsAndProofs)
+	require.Equal(t, included.Count(), uint64(len(actualCellsPerBlob)))
 	require.NoError(t, err)
 	require.Equal(t, blobCount, len(actualCellsPerBlob))
```
```diff
@@ -3,6 +3,7 @@ package peerdas
 
 import (
 	"time"
 
+	"github.com/OffchainLabs/go-bitfield"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
 	beaconState "github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -143,6 +144,51 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
 	return roSidecars, nil
 }
 
+// Note: cellsPerBlob
+// We should also assert that "included.Count() (i.e. number of bits set) == uint64(len(cellsPerBlob))", otherwise "cells[idx] = cells[idx][1:]" below will be out of bounds.
+func PartialColumns(included bitfield.Bitlist, cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, src ConstructionPopulator) ([]blocks.PartialDataColumn, error) {
+	start := time.Now()
+	const numberOfColumns = uint64(fieldparams.NumberOfColumns)
+	// rotate the cells and proofs from being per blob (i.e. "row indexed") to being per column ("column indexed").
+	// The returned arrays are "numberOfColumns" in length, and the "included" bitfield is used to filter out cells that are not present in each column.
+	cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
+	if err != nil {
+		return nil, errors.Wrap(err, "rotate cells and proofs")
+	}
+	info, err := src.extract()
+	if err != nil {
+		return nil, errors.Wrap(err, "extract block info")
+	}
+
+	dataColumns := make([]blocks.PartialDataColumn, 0, numberOfColumns)
+	for idx := range numberOfColumns {
+		dc, err := blocks.NewPartialDataColumn(info.signedBlockHeader, idx, info.kzgCommitments, info.kzgInclusionProof)
+		if err != nil {
+			return nil, errors.Wrap(err, "new ro data column")
+		}
+
+		// info.kzgCommitments is the array of KZG commitments for each blob in the block, so its length is the number of blobs in the block.
+		// The included bitlist is the bitlist of which blobs are present, i.e. which are the blobs we have the blob data for.
+		// We're now iterating over each row and extending the column one cell at a time for each blob that is present.
+		for i := range len(info.kzgCommitments) {
+			if !included.BitAt(uint64(i)) {
+				continue
+			}
+			// TODO: Check for "out of bounds" here. How do we know that cells always have up to "numberOfColumns" entries?
+			// Okay, this probably works because "rotateRowsToCols" above allocates "numberOfColumns" entries for each cell and proof.
+			// The "included" bitlist is used to filter out cells that are not present in each column, so we only set the cells that are present in
+			// this call to "ExtendFromVerfifiedCell".
+			dc.ExtendFromVerfifiedCell(uint64(i), cells[idx][0], proofs[idx][0])
+			cells[idx] = cells[idx][1:]
+			proofs[idx] = proofs[idx][1:]
+		}
+		dataColumns = append(dataColumns, dc)
+	}
+
+	dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
+	return dataColumns, nil
+}
+
 // Slot returns the slot of the source
 func (s *BlockReconstructionSource) Slot() primitives.Slot {
 	return s.Block().Slot()
```
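`rotateRowsToCols` itself is referenced but not shown in the diff. A hedged, generic sketch of what such a helper plausibly does — turning blob-major rows into column-major slices so that `PartialColumns` can drain one cell per present blob from each column (names and the exact error handling are assumptions):

```go
package example

// rotateRowsToCols rotates blob-major rows ("row indexed") into column-major
// slices ("column indexed"): output slot c collects the c-th cell of every
// row. Rows for missing blobs are simply absent from the input, which is why
// the caller walks the `included` bitlist when consuming the columns.
func rotateRowsToCols[T any](rows [][]T, numberOfColumns int) [][]T {
	cols := make([][]T, numberOfColumns)
	for _, row := range rows {
		for c := 0; c < numberOfColumns && c < len(row); c++ {
			cols[c] = append(cols[c], row[c])
		}
	}
	return cols
}
```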
```diff
@@ -515,6 +515,11 @@ func (dcs *DataColumnStorage) Clear() error {
 
 // prune cleans the cache, the filesystem and mutexes.
 func (dcs *DataColumnStorage) prune() {
+	startTime := time.Now()
+	defer func() {
+		dataColumnPruneLatency.Observe(float64(time.Since(startTime).Milliseconds()))
+	}()
+
 	highestStoredEpoch := dcs.cache.HighestEpoch()
 
 	// Check if we need to prune.
@@ -622,6 +627,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
 	// Create the SSZ encoded data column sidecars.
 	var sszEncodedDataColumnSidecars []byte
 
+	// Initialize the count of the saved SSZ encoded data column sidecars.
+	storedCount := uint8(0)
+
 	for {
 		dataColumnSidecars := pullChan(inputDataColumnSidecars)
 		if len(dataColumnSidecars) == 0 {
@@ -668,6 +676,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
 			return errors.Wrap(err, "set index")
 		}
 
+		// Increment the count of the saved SSZ encoded data column sidecars.
+		storedCount++
+
 		// Append the SSZ encoded data column sidecar to the SSZ encoded data column sidecars.
 		sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
 	}
@@ -692,9 +703,12 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
 		return errWrongBytesWritten
 	}
 
+	syncStart := time.Now()
 	if err := file.Sync(); err != nil {
 		return errors.Wrap(err, "sync")
 	}
+	dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
+	dataColumnBatchStoreCount.Observe(float64(storedCount))
 
 	return nil
 }
@@ -808,10 +822,14 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
 		return errWrongBytesWritten
 	}
 
+	syncStart := time.Now()
 	if err := file.Sync(); err != nil {
 		return errors.Wrap(err, "sync")
 	}
+
+	dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
+	dataColumnBatchStoreCount.Observe(float64(storedCount))
 
 	return nil
 }
```
```diff
@@ -36,16 +36,15 @@ var (
 	})
 
 	// Data columns
-	dataColumnBuckets     = []float64{3, 5, 7, 9, 11, 13}
 	dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
 		Name:    "data_column_storage_save_latency",
 		Help:    "Latency of DataColumnSidecar storage save operations in milliseconds",
-		Buckets: dataColumnBuckets,
+		Buckets: []float64{10, 20, 30, 50, 100, 200, 500},
 	})
 	dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
 		Name:    "data_column_storage_get_latency",
 		Help:    "Latency of DataColumnSidecar storage get operations in milliseconds",
-		Buckets: dataColumnBuckets,
+		Buckets: []float64{3, 5, 7, 9, 11, 13},
 	})
 	dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
 		Name: "data_column_pruned",
@@ -59,4 +58,16 @@ var (
 		Name: "data_column_disk_count",
 		Help: "Approximate number of data columns in storage",
 	})
+	dataColumnFileSyncLatency = promauto.NewSummary(prometheus.SummaryOpts{
+		Name: "data_column_file_sync_latency",
+		Help: "Latency of sync operations when saving data columns in milliseconds",
+	})
+	dataColumnBatchStoreCount = promauto.NewSummary(prometheus.SummaryOpts{
+		Name: "data_column_batch_store_count",
+		Help: "Number of data columns stored in a batch",
+	})
+	dataColumnPruneLatency = promauto.NewSummary(prometheus.SummaryOpts{
+		Name: "data_column_prune_latency",
+		Help: "Latency of data column prune operations in milliseconds",
+	})
 )
```
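The storage hunks above all use the same timing idiom: capture a start time and observe the elapsed milliseconds, either in a `defer` (so every return path is counted, as in `prune`) or immediately after the timed call (as with `file.Sync`). A minimal self-contained version of that pattern, with a hypothetical metric name:

```go
package example

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// opLatency is a stand-in metric; the real code registers one summary or
// histogram per operation (save, fetch, sync, prune, ...).
var opLatency = promauto.NewSummary(prometheus.SummaryOpts{
	Name: "example_op_latency",
	Help: "Latency of the example operation in milliseconds",
})

// timedOp records how long work() takes, regardless of how it returns.
func timedOp(work func() error) error {
	start := time.Now()
	defer func() {
		opLatency.Observe(float64(time.Since(start).Milliseconds()))
	}()
	return work()
}
```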
```diff
@@ -72,6 +72,7 @@ go_library(
         "@com_github_pkg_errors//:go_default_library",
         "@com_github_prometheus_client_golang//prometheus:go_default_library",
         "@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
+        "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
         "@com_github_sirupsen_logrus//:go_default_library",
         "@io_k8s_client_go//tools/cache:go_default_library",
         "@org_golang_google_protobuf//proto:go_default_library",
```
```diff
@@ -7,6 +7,7 @@ import (
 	"strings"
 	"time"
 
+	"github.com/OffchainLabs/go-bitfield"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
@@ -57,6 +58,7 @@ var (
 	fuluEngineEndpoints = []string{
 		GetPayloadMethodV5,
 		GetBlobsV2,
+		GetBlobsV3,
 	}
 )
@@ -98,6 +100,8 @@ const (
 	GetBlobsV1 = "engine_getBlobsV1"
 	// GetBlobsV2 request string for JSON-RPC.
 	GetBlobsV2 = "engine_getBlobsV2"
+	// GetBlobsV3 request string for JSON-RPC.
+	GetBlobsV3 = "engine_getBlobsV3"
 	// Defines the seconds before timing out engine endpoints with non-block execution semantics.
 	defaultEngineTimeout = time.Second
 )
@@ -121,7 +125,7 @@ type Reconstructor interface {
 		ctx context.Context, blindedBlocks []interfaces.ReadOnlySignedBeaconBlock,
 	) ([]interfaces.SignedBeaconBlock, error)
 	ReconstructBlobSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte, hi func(uint64) bool) ([]blocks.VerifiedROBlob, error)
-	ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error)
+	ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, []blocks.PartialDataColumn, error)
 }
 
 // EngineCaller defines a client that can interact with an Ethereum
```
```diff
@@ -532,12 +536,35 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
 	ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
 	defer span.End()
 
+	start := time.Now()
+
 	if !s.capabilityCache.has(GetBlobsV2) {
 		return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
 	}
 
 	result := make([]*pb.BlobAndProofV2, len(versionedHashes))
 	err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
+
+	if len(result) != 0 {
+		getBlobsV2Latency.Observe(float64(time.Since(start).Milliseconds()))
+	}
+
 	return result, handleRPCError(err)
 }
 
+func (s *Service) GetBlobsV3(ctx context.Context, versionedHashes []common.Hash) ([]*pb.BlobAndProofV2, error) {
+	ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV3")
+	defer span.End()
+	start := time.Now()
+
+	if !s.capabilityCache.has(GetBlobsV3) {
+		return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV3))
+	}
+
+	getBlobsV3RequestsTotal.Inc()
+	result := make([]*pb.BlobAndProofV2, len(versionedHashes))
+	err := s.rpcClient.CallContext(ctx, &result, GetBlobsV3, versionedHashes)
+	getBlobsV3Latency.Observe(time.Since(start).Seconds())
+	return result, handleRPCError(err)
+}
```
```diff
@@ -651,40 +678,51 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
 	return verifiedBlobs, nil
 }
 
-func (s *Service) ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error) {
+func (s *Service) ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, []blocks.PartialDataColumn, error) {
 	root := populator.Root()
 
 	// Fetch cells and proofs from the execution client using the KZG commitments from the sidecar.
 	commitments, err := populator.Commitments()
 	if err != nil {
-		return nil, wrapWithBlockRoot(err, root, "commitments")
+		return nil, nil, wrapWithBlockRoot(err, root, "commitments")
 	}
 
-	cellsPerBlob, proofsPerBlob, err := s.fetchCellsAndProofsFromExecution(ctx, commitments)
+	included, cellsPerBlob, proofsPerBlob, err := s.fetchCellsAndProofsFromExecution(ctx, commitments)
+	log.Info("Received cells and proofs from execution client", "included", included, "cells count", len(cellsPerBlob), "err", err)
 	if err != nil {
-		return nil, wrapWithBlockRoot(err, root, "fetch cells and proofs from execution client")
+		return nil, nil, wrapWithBlockRoot(err, root, "fetch cells and proofs from execution client")
 	}
 
-	// Return early if nothing is returned from the EL.
-	if len(cellsPerBlob) == 0 {
-		return nil, nil
+	partialColumns, err := peerdas.PartialColumns(included, cellsPerBlob, proofsPerBlob, populator)
+	haveAllBlobs := included.Count() == uint64(len(commitments))
+	log.Info("Constructed partial columns", "haveAllBlobs", haveAllBlobs)
+
+	// TODO: Check for err==nil before we go here.
+	if haveAllBlobs {
+		// Construct data column sidecars from the signed block and cells and proofs.
+		roSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, populator)
+		if err != nil {
+			return nil, nil, wrapWithBlockRoot(err, populator.Root(), "data column sidcars from column sidecar")
+		}
+
+		// Upgrade the sidecars to verified sidecars.
+		// We trust the execution layer we are connected to, so we can upgrade the sidecar into a verified one.
+		verifiedROSidecars := upgradeSidecarsToVerifiedSidecars(roSidecars)
+
+		return verifiedROSidecars, partialColumns, nil
 	}
 
-	// Construct data column sidecars from the signed block and cells and proofs.
-	roSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, populator)
-	if err != nil {
-		return nil, wrapWithBlockRoot(err, populator.Root(), "data column sidcars from column sidecar")
+	if err != nil {
+		return nil, nil, wrapWithBlockRoot(err, populator.Root(), "partial columns from column sidecar")
 	}
 
-	// Upgrade the sidecars to verified sidecars.
-	// We trust the execution layer we are connected to, so we can upgrade the sidecar into a verified one.
-	verifiedROSidecars := upgradeSidecarsToVerifiedSidecars(roSidecars)
-
-	return verifiedROSidecars, nil
+	return nil, partialColumns, nil
 }
 
 // fetchCellsAndProofsFromExecution fetches cells and proofs from the execution client (using engine_getBlobsV2 execution API method)
-func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommitments [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
+// The returned cells and proofs are compacted and will not contain entries for missing blobs.
+// The returned bitlist is the bitlist of which blobs are present, i.e. which are the blobs we have the blob data for. This
+// will help index into the cells and proofs arrays.
+func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommitments [][]byte) (bitfield.Bitlist /* included parts */, [][]kzg.Cell, [][]kzg.Proof, error) {
 	// Collect KZG hashes for all blobs.
 	versionedHashes := make([]common.Hash, 0, len(kzgCommitments))
 	for _, commitment := range kzgCommitments {
@@ -692,24 +730,44 @@ func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommi
 		versionedHashes = append(versionedHashes, versionedHash)
 	}
 
+	var blobAndProofs []*pb.BlobAndProofV2
+
 	// Fetch all blobsAndCellsProofs from the execution client.
-	blobAndProofV2s, err := s.GetBlobsV2(ctx, versionedHashes)
-	if err != nil {
-		return nil, nil, errors.Wrapf(err, "get blobs V2")
+	var err error
+	useV3 := s.capabilityCache.has(GetBlobsV3)
+	if useV3 {
+		// v3 can return a partial response. V2 is all or nothing.
+		blobAndProofs, err = s.GetBlobsV3(ctx, versionedHashes)
+	} else {
+		blobAndProofs, err = s.GetBlobsV2(ctx, versionedHashes)
 	}
 
-	// Return early if nothing is returned from the EL.
-	if len(blobAndProofV2s) == 0 {
-		return nil, nil, nil
+	if err != nil {
+		return nil, nil, nil, errors.Wrapf(err, "get blobs V2/3")
 	}
 
+	// Track complete vs partial responses for V3
+	if useV3 && len(blobAndProofs) > 0 {
+		nonNilCount := 0
+		for _, bp := range blobAndProofs {
+			if bp != nil {
+				nonNilCount++
+			}
+		}
+		if nonNilCount == len(versionedHashes) {
+			getBlobsV3CompleteResponsesTotal.Inc()
+		} else if nonNilCount > 0 {
+			getBlobsV3PartialResponsesTotal.Inc()
+		}
+	}
+
 	// Compute cells and proofs from the blobs and cell proofs.
-	cellsPerBlob, proofsPerBlob, err := peerdas.ComputeCellsAndProofsFromStructured(blobAndProofV2s)
+	included, cellsPerBlob, proofsPerBlob, err := peerdas.ComputeCellsAndProofsFromStructured(uint64(len(kzgCommitments)), blobAndProofs)
 	if err != nil {
-		return nil, nil, errors.Wrap(err, "compute cells and proofs")
+		return nil, nil, nil, errors.Wrap(err, "compute cells and proofs")
 	}
 
-	return cellsPerBlob, proofsPerBlob, nil
+	return included, cellsPerBlob, proofsPerBlob, nil
 }
 
 // upgradeSidecarsToVerifiedSidecars upgrades a list of data column sidecars into verified data column sidecars.
```
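Indexing into the compacted cells/proofs via the `included` bitlist works by counting set bits: bit i says whether blob i is present, and the j-th set bit maps to index j in the compacted slices. A small helper sketch — hypothetical, not in the diff — assuming the go-bitfield API the hunks use (`NewBitlist`, `SetBitAt`, `BitAt`):

```go
package example

import "github.com/OffchainLabs/go-bitfield"

// compactIndex maps a blob index to its position in the compacted
// cells/proofs slices returned by ComputeCellsAndProofsFromStructured.
// It returns false when the blob is missing and has no compacted entry.
func compactIndex(included bitfield.Bitlist, blobIndex uint64) (int, bool) {
	if !included.BitAt(blobIndex) {
		return 0, false // blob is missing; it has no entry in the compacted slices
	}
	j := 0
	for i := uint64(0); i < blobIndex; i++ {
		if included.BitAt(i) {
			j++ // count the present blobs that come before this one
		}
	}
	return j, true
}
```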
```diff
@@ -2587,7 +2587,7 @@ func TestConstructDataColumnSidecars(t *testing.T) {
 	ctx := context.Background()
 
 	t.Run("GetBlobsV2 is not supported", func(t *testing.T) {
-		_, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
+		_, _, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
 		require.ErrorContains(t, "engine_getBlobsV2 is not supported", err)
 	})
 
@@ -2598,7 +2598,7 @@ func TestConstructDataColumnSidecars(t *testing.T) {
 		rpcClient, client := setupRpcClientV2(t, srv.URL, client)
 		defer rpcClient.Close()
 
-		dataColumns, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
+		dataColumns, _, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
 		require.NoError(t, err)
 		require.Equal(t, 0, len(dataColumns))
 	})
 
@@ -2611,7 +2611,7 @@ func TestConstructDataColumnSidecars(t *testing.T) {
 		rpcClient, client := setupRpcClientV2(t, srv.URL, client)
 		defer rpcClient.Close()
 
-		dataColumns, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
+		dataColumns, _, err := client.ConstructDataColumnSidecars(ctx, peerdas.PopulateFromBlock(roBlock))
 		require.NoError(t, err)
 		require.Equal(t, 128, len(dataColumns))
 	})
```
```diff
@@ -27,6 +27,32 @@ var (
 			Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
 		},
 	)
+	getBlobsV2Latency = promauto.NewHistogram(
+		prometheus.HistogramOpts{
+			Name:    "get_blobs_v2_latency_milliseconds",
+			Help:    "Captures RPC latency for getBlobsV2 in milliseconds",
+			Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
+		},
+	)
+	getBlobsV3RequestsTotal = promauto.NewCounter(prometheus.CounterOpts{
+		Name: "beacon_engine_getBlobsV3_requests_total",
+		Help: "Total number of engine_getBlobsV3 requests sent",
+	})
+	getBlobsV3CompleteResponsesTotal = promauto.NewCounter(prometheus.CounterOpts{
+		Name: "beacon_engine_getBlobsV3_complete_responses_total",
+		Help: "Total number of complete engine_getBlobsV3 successful responses received",
+	})
+	getBlobsV3PartialResponsesTotal = promauto.NewCounter(prometheus.CounterOpts{
+		Name: "beacon_engine_getBlobsV3_partial_responses_total",
+		Help: "Total number of engine_getBlobsV3 partial responses received",
+	})
+	getBlobsV3Latency = promauto.NewHistogram(
+		prometheus.HistogramOpts{
+			Name:    "beacon_engine_getBlobsV3_request_duration_seconds",
+			Help:    "Duration of engine_getBlobsV3 requests in seconds",
+			Buckets: []float64{0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 4},
+		},
+	)
 	errParseCount = promauto.NewCounter(prometheus.CounterOpts{
 		Name: "execution_parse_error_count",
 		Help: "The number of errors that occurred while parsing execution payload",
```
@@ -118,8 +118,8 @@ func (e *EngineClient) ReconstructBlobSidecars(context.Context, interfaces.ReadO
}

// ConstructDataColumnSidecars is a mock implementation of the ConstructDataColumnSidecars method.
func (e *EngineClient) ConstructDataColumnSidecars(context.Context, peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error) {
return e.DataColumnSidecars, e.ErrorDataColumnSidecars
func (e *EngineClient) ConstructDataColumnSidecars(context.Context, peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, []blocks.PartialDataColumn, error) {
return e.DataColumnSidecars, nil, e.ErrorDataColumnSidecars
}

// GetTerminalBlockHash --

95
beacon-chain/graffiti/graffiti-proposal-brief.md
Normal file
@@ -0,0 +1,95 @@
# Graffiti Version Info Implementation

## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.

More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md

## Implementation

### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"

type GraffitiInfo struct {
	mu           sync.RWMutex
	userGraffiti string // From --graffiti flag (set once at startup)
	clCommit     string // From version.GetCommitPrefix() helper function
	elCode       string // From engine_getClientVersionV1
	elCommit     string // From engine_getClientVersionV1
}
```

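The brief does not spell out the struct's methods. A minimal sketch of the locking (only `UpdateFromEngine` is referenced elsewhere in this brief; the read helper and its name are illustrative):

```go
// UpdateFromEngine records the latest EL client code and commit reported by
// engine_getClientVersionV1. Safe for concurrent use.
func (g *GraffitiInfo) UpdateFromEngine(elCode, elCommit string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.elCode = elCode
	g.elCommit = elCommit
}

// snapshot returns a consistent copy of the fields used when building graffiti.
func (g *GraffitiInfo) snapshot() (userGraffiti, clCommit, elCode, elCommit string) {
	g.mu.RLock()
	defer g.mu.RUnlock()
	return g.userGraffiti, g.clCommit, g.elCode, g.elCommit
}
```
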
### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass struct to both execution service and RPC validator server
3. **Runtime**: Execution service goroutine periodically calls `engine_getClientVersionV1` and updates EL fields
4. **Block Proposal**: RPC validator server calls `GenerateGraffiti()` to get formatted graffiti

### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):

| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |

```go
// Sketch: field reads happen under g.mu; the packed string is copied into the
// fixed-size [32]byte graffiti value before returning (see the helper below).
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
	available := 32 - len(userGraffiti)

	if elCode == "" {
		elCommit2, elCommit4 = "", ""
	}

	switch {
	case available >= 12:
		return elCode + elCommit4 + clCode + clCommit4 + userGraffiti
	case available >= 8:
		return elCode + elCommit2 + clCode + clCommit2 + userGraffiti
	case available >= 4:
		return elCode + clCode + userGraffiti
	case available >= 2:
		return elCode + userGraffiti
	default:
		return userGraffiti
	}
}
```

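The sketch above builds packed strings even though the declared return type is `[32]byte`. A small conversion helper along these lines would close that gap (the name `toGraffitiBytes` is illustrative and not part of the proposal):

```go
// toGraffitiBytes copies the packed string into the fixed-size graffiti field,
// zero-padding on the right and truncating anything beyond 32 bytes.
func toGraffitiBytes(s string) [32]byte {
	var out [32]byte
	copy(out[:], s)
	return out
}
```
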
### Update Logic
Single testable function in the execution service:
```go
func (s *Service) updateGraffitiInfo() {
	versions, err := s.GetClientVersion(ctx)
	if err != nil {
		return // Keep last good value
	}
	if len(versions) == 1 {
		s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
	}
}
```

A goroutine calls this on `slot % 8 == 4` timing (4 times per epoch, avoiding slot boundaries); a sketch of that loop follows below.

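A minimal sketch of that trigger loop, assuming the service has access to a slot ticker channel (`runGraffitiUpdater` and the `slotTicker` parameter are illustrative; the real wiring lives in `beacon-chain/execution/service.go`):

```go
// runGraffitiUpdater refreshes the EL version info four times per epoch,
// at slots where slot % 8 == 4, to stay away from slot boundaries.
func (s *Service) runGraffitiUpdater(ctx context.Context, slotTicker <-chan primitives.Slot) {
	for {
		select {
		case <-ctx.Done():
			return
		case slot := <-slotTicker:
			if slot%8 == 4 {
				s.updateGraffitiInfo()
			}
		}
	}
}
```
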
### File Changes Required

**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts the first 4 hex chars from the git commit injected via Bazel ldflags at build time

**Modified:**
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
- `beacon-chain/execution/engine_client.go` - Add GetClientVersion() method that performs the engine call
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services

### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety)
- Unit test updateGraffitiInfo() with a mocked engine client
@@ -668,6 +668,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
|
||||
DB: b.db,
|
||||
StateGen: b.stateGen,
|
||||
ClockWaiter: b.ClockWaiter,
|
||||
PartialDataColumns: b.cliCtx.Bool(flags.PartialDataColumns.Name),
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
|
||||
@@ -51,6 +51,7 @@ go_library(
|
||||
"//beacon-chain/db:go_default_library",
|
||||
"//beacon-chain/db/kv:go_default_library",
|
||||
"//beacon-chain/p2p/encoder:go_default_library",
|
||||
"//beacon-chain/p2p/partialdatacolumnbroadcaster:go_default_library",
|
||||
"//beacon-chain/p2p/peers:go_default_library",
|
||||
"//beacon-chain/p2p/peers/peerdata:go_default_library",
|
||||
"//beacon-chain/p2p/peers/scorers:go_default_library",
|
||||
|
||||
@@ -342,7 +342,7 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
|
||||
// there is at least one peer in each needed subnet. If not, it will attempt to find one before broadcasting.
|
||||
// This function is non-blocking. It stops trying to broadcast a given sidecar when more than one slot has passed, or the context is
|
||||
// cancelled (whichever comes first).
|
||||
func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []blocks.VerifiedRODataColumn) error {
|
||||
func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []blocks.VerifiedRODataColumn, partialColumns []blocks.PartialDataColumn) error {
|
||||
// Increase the number of broadcast attempts.
|
||||
dataColumnSidecarBroadcastAttempts.Add(float64(len(sidecars)))
|
||||
|
||||
@@ -352,7 +352,7 @@ func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []bl
|
||||
return errors.Wrap(err, "current fork digest")
|
||||
}
|
||||
|
||||
go s.broadcastDataColumnSidecars(ctx, forkDigest, sidecars)
|
||||
go s.broadcastDataColumnSidecars(ctx, forkDigest, sidecars, partialColumns)
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -360,7 +360,7 @@ func (s *Service) BroadcastDataColumnSidecars(ctx context.Context, sidecars []bl
|
||||
// broadcastDataColumnSidecars broadcasts multiple data column sidecars to the p2p network, after ensuring
|
||||
// there is at least one peer in each needed subnet. If not, it will attempt to find one before broadcasting.
|
||||
// It returns when all broadcasts are complete, or the context is cancelled (whichever comes first).
|
||||
func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [fieldparams.VersionLength]byte, sidecars []blocks.VerifiedRODataColumn) {
|
||||
func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [fieldparams.VersionLength]byte, sidecars []blocks.VerifiedRODataColumn, partialColumns []blocks.PartialDataColumn) {
|
||||
type rootAndIndex struct {
|
||||
root [fieldparams.RootLength]byte
|
||||
index uint64
|
||||
@@ -374,7 +374,7 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
|
||||
logLevel := logrus.GetLevel()
|
||||
|
||||
slotPerRoot := make(map[[fieldparams.RootLength]byte]primitives.Slot, 1)
|
||||
for _, sidecar := range sidecars {
|
||||
for i, sidecar := range sidecars {
|
||||
slotPerRoot[sidecar.BlockRoot()] = sidecar.Slot()
|
||||
|
||||
wg.Go(func() {
|
||||
@@ -398,6 +398,14 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
|
||||
return
|
||||
}
|
||||
|
||||
if s.partialColumnBroadcaster != nil && i < len(partialColumns) {
|
||||
fullTopicStr := topic + s.Encoding().ProtocolSuffix()
|
||||
if err := s.partialColumnBroadcaster.Publish(fullTopicStr, partialColumns[i]); err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
log.WithError(err).Error("Cannot partial broadcast data column sidecar")
|
||||
}
|
||||
}
|
||||
|
||||
// Broadcast the data column sidecar to the network.
|
||||
if err := s.broadcastObject(ctx, sidecar, topic); err != nil {
|
||||
tracing.AnnotateError(span, err)
|
||||
|
||||
@@ -773,7 +773,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
|
||||
time.Sleep(50 * time.Millisecond)
|
||||
|
||||
// Broadcast to peers and wait.
|
||||
err = service.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{verifiedRoSidecar})
|
||||
err = service.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{verifiedRoSidecar}, nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Receive the message.
|
||||
|
||||
@@ -26,6 +26,7 @@ const (
|
||||
// Config for the p2p service. These parameters are set from application level flags
|
||||
// to initialize the p2p service.
|
||||
type Config struct {
|
||||
PartialDataColumns bool
|
||||
NoDiscovery bool
|
||||
EnableUPnP bool
|
||||
StaticPeerID bool
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
@@ -28,6 +29,7 @@ type (
|
||||
Broadcaster
|
||||
SetStreamHandler
|
||||
PubSubProvider
|
||||
PartialColumnBroadcasterProvider
|
||||
PubSubTopicUser
|
||||
SenderEncoder
|
||||
PeerManager
|
||||
@@ -52,7 +54,7 @@ type (
|
||||
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
|
||||
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
|
||||
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
|
||||
BroadcastDataColumnSidecars(ctx context.Context, sidecars []blocks.VerifiedRODataColumn) error
|
||||
BroadcastDataColumnSidecars(ctx context.Context, sidecars []blocks.VerifiedRODataColumn, partialColumns []blocks.PartialDataColumn) error
|
||||
}
|
||||
|
||||
// SetStreamHandler configures p2p to handle streams of a certain topic ID.
|
||||
@@ -92,6 +94,11 @@ type (
|
||||
PubSub() *pubsub.PubSub
|
||||
}
|
||||
|
||||
// PubSubProvider provides the p2p pubsub protocol.
|
||||
PartialColumnBroadcasterProvider interface {
|
||||
PartialColumnBroadcaster() *partialdatacolumnbroadcaster.PartialColumnBroadcaster
|
||||
}
|
||||
|
||||
// PeerManager abstracts some peer management methods from libp2p.
|
||||
PeerManager interface {
|
||||
Disconnect(peer.ID) error
|
||||
|
||||
@@ -157,6 +157,11 @@ var (
|
||||
Help: "The number of publish messages received via rpc for a particular topic",
|
||||
},
|
||||
[]string{"topic"})
|
||||
pubsubRPCPubRecvSize = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "p2p_pubsub_rpc_recv_pub_size_total",
|
||||
Help: "The total size of publish messages received via rpc for a particular topic",
|
||||
},
|
||||
[]string{"topic", "is_partial"})
|
||||
pubsubRPCDrop = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "p2p_pubsub_rpc_drop_total",
|
||||
Help: "The number of messages dropped via rpc for a particular control message",
|
||||
@@ -171,6 +176,11 @@ var (
|
||||
Help: "The number of publish messages dropped via rpc for a particular topic",
|
||||
},
|
||||
[]string{"topic"})
|
||||
pubsubRPCPubDropSize = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "p2p_pubsub_rpc_drop_pub_size_total",
|
||||
Help: "The total size of publish messages dropped via rpc for a particular topic",
|
||||
},
|
||||
[]string{"topic", "is_partial"})
|
||||
pubsubRPCSent = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "p2p_pubsub_rpc_sent_total",
|
||||
Help: "The number of messages sent via rpc for a particular control message",
|
||||
@@ -185,6 +195,11 @@ var (
|
||||
Help: "The number of publish messages sent via rpc for a particular topic",
|
||||
},
|
||||
[]string{"topic"})
|
||||
pubsubRPCPubSentSize = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "gossipsub_pubsub_rpc_sent_pub_size_total",
|
||||
Help: "The total size of publish messages sent via rpc for a particular topic",
|
||||
},
|
||||
[]string{"topic", "is_partial"})
|
||||
)
|
||||
|
||||
func (s *Service) updateMetrics() {
|
||||
|
||||
27
beacon-chain/p2p/partialdatacolumnbroadcaster/BUILD.bazel
Normal file
@@ -0,0 +1,27 @@
|
||||
load("@prysm//tools/go:def.bzl", "go_library")
|
||||
|
||||
go_library(
|
||||
name = "go_default_library",
|
||||
srcs = [
|
||||
"metrics.go",
|
||||
"partial.go",
|
||||
],
|
||||
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//internal/logrusadapter:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p_pubsub//partialmessages:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p_pubsub//partialmessages/bitmap:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_prometheus_client_golang//prometheus:go_default_library",
|
||||
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
|
||||
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
],
|
||||
)
|
||||
@@ -0,0 +1,25 @@
|
||||
load("@prysm//tools/go:def.bzl", "go_test")
|
||||
|
||||
go_test(
|
||||
name = "go_default_test",
|
||||
size = "medium",
|
||||
srcs = ["two_node_test.go"],
|
||||
deps = [
|
||||
"//beacon-chain/core/peerdas:go_default_library",
|
||||
"//beacon-chain/p2p:go_default_library",
|
||||
"//beacon-chain/p2p/encoder:go_default_library",
|
||||
"//beacon-chain/p2p/partialdatacolumnbroadcaster:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p//x/simlibp2p:go_default_library",
|
||||
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
|
||||
"@com_github_marcopolo_simnet//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
],
|
||||
)
|
||||
@@ -0,0 +1,238 @@
|
||||
package integrationtest
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"fmt"
|
||||
"testing"
|
||||
"testing/synctest"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
"github.com/OffchainLabs/prysm/v7/testing/assert"
|
||||
"github.com/OffchainLabs/prysm/v7/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v7/testing/util"
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
simlibp2p "github.com/libp2p/go-libp2p/x/simlibp2p"
|
||||
"github.com/marcopolo/simnet"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
// TestTwoNodePartialColumnExchange tests that two nodes can exchange partial columns
|
||||
// and reconstruct the complete column. Node 1 has cells 0-2, Node 2 has cells 3-5.
|
||||
// After exchange, both should have all cells.
|
||||
func TestTwoNodePartialColumnExchange(t *testing.T) {
|
||||
synctest.Test(t, func(t *testing.T) {
|
||||
// Create a simulated libp2p network
|
||||
latency := time.Millisecond * 10
|
||||
network, meta, err := simlibp2p.SimpleLibp2pNetwork([]simlibp2p.NodeLinkSettingsAndCount{
|
||||
{LinkSettings: simnet.NodeBiDiLinkSettings{
|
||||
Downlink: simnet.LinkSettings{BitsPerSecond: 20 * simlibp2p.OneMbps, Latency: latency / 2},
|
||||
Uplink: simnet.LinkSettings{BitsPerSecond: 20 * simlibp2p.OneMbps, Latency: latency / 2},
|
||||
}, Count: 2},
|
||||
}, simlibp2p.NetworkSettings{UseBlankHost: true})
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, network.Start())
|
||||
defer func() {
|
||||
require.NoError(t, network.Close())
|
||||
}()
|
||||
defer func() {
|
||||
for _, node := range meta.Nodes {
|
||||
err := node.Close()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
h1 := meta.Nodes[0]
|
||||
h2 := meta.Nodes[1]
|
||||
|
||||
logger := logrus.New()
|
||||
logger.SetLevel(logrus.DebugLevel)
|
||||
broadcaster1 := partialdatacolumnbroadcaster.NewBroadcaster(logger)
|
||||
broadcaster2 := partialdatacolumnbroadcaster.NewBroadcaster(logger)
|
||||
|
||||
opts1 := broadcaster1.AppendPubSubOpts([]pubsub.Option{
|
||||
pubsub.WithMessageSigning(false),
|
||||
pubsub.WithStrictSignatureVerification(false),
|
||||
})
|
||||
opts2 := broadcaster2.AppendPubSubOpts([]pubsub.Option{
|
||||
pubsub.WithMessageSigning(false),
|
||||
pubsub.WithStrictSignatureVerification(false),
|
||||
})
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
ps1, err := pubsub.NewGossipSub(ctx, h1, opts1...)
|
||||
require.NoError(t, err)
|
||||
ps2, err := pubsub.NewGossipSub(ctx, h2, opts2...)
|
||||
require.NoError(t, err)
|
||||
|
||||
go broadcaster1.Start()
|
||||
go broadcaster2.Start()
|
||||
defer func() {
|
||||
broadcaster1.Stop()
|
||||
broadcaster2.Stop()
|
||||
}()
|
||||
|
||||
// Generate Test Data
|
||||
var blockRoot [fieldparams.RootLength]byte
|
||||
copy(blockRoot[:], []byte("test-block-root"))
|
||||
|
||||
numCells := 6
|
||||
commitments := make([][]byte, numCells)
|
||||
cells := make([][]byte, numCells)
|
||||
proofs := make([][]byte, numCells)
|
||||
|
||||
for i := range numCells {
|
||||
commitments[i] = make([]byte, 48)
|
||||
|
||||
cells[i] = make([]byte, 2048)
|
||||
rand.Read(cells[i])
|
||||
proofs[i] = make([]byte, 48)
|
||||
fmt.Appendf(proofs[i][:0], "proof %d", i)
|
||||
}
|
||||
|
||||
roDC, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
|
||||
{
|
||||
BodyRoot: blockRoot[:],
|
||||
KzgCommitments: commitments,
|
||||
Column: cells,
|
||||
KzgProofs: proofs,
|
||||
},
|
||||
})
|
||||
|
||||
pc1, err := blocks.NewPartialDataColumn(roDC[0].DataColumnSidecar.SignedBlockHeader, roDC[0].Index, roDC[0].KzgCommitments, roDC[0].KzgCommitmentsInclusionProof)
|
||||
require.NoError(t, err)
|
||||
pc2, err := blocks.NewPartialDataColumn(roDC[0].DataColumnSidecar.SignedBlockHeader, roDC[0].Index, roDC[0].KzgCommitments, roDC[0].KzgCommitmentsInclusionProof)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Split data
|
||||
for i := range numCells {
|
||||
if i%2 == 0 {
|
||||
pc1.ExtendFromVerfifiedCell(uint64(i), roDC[0].Column[i], roDC[0].KzgProofs[i])
|
||||
} else {
|
||||
pc2.ExtendFromVerfifiedCell(uint64(i), roDC[0].Column[i], roDC[0].KzgProofs[i])
|
||||
}
|
||||
}
|
||||
|
||||
// Setup Topic and Subscriptions
|
||||
digest := params.ForkDigest(0)
|
||||
columnIndex := uint64(12)
|
||||
subnet := peerdas.ComputeSubnetForDataColumnSidecar(columnIndex)
|
||||
topicStr := fmt.Sprintf(p2p.DataColumnSubnetTopicFormat, digest, subnet) +
|
||||
encoder.SszNetworkEncoder{}.ProtocolSuffix()
|
||||
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
topic1, err := ps1.Join(topicStr, pubsub.RequestPartialMessages())
|
||||
require.NoError(t, err)
|
||||
topic2, err := ps2.Join(topicStr, pubsub.RequestPartialMessages())
|
||||
require.NoError(t, err)
|
||||
|
||||
// Header validator that verifies the inclusion proof
|
||||
headerValidator := func(header *ethpb.PartialDataColumnHeader) (reject bool, err error) {
|
||||
if header == nil {
|
||||
return false, fmt.Errorf("nil header")
|
||||
}
|
||||
if header.SignedBlockHeader == nil || header.SignedBlockHeader.Header == nil {
|
||||
return true, fmt.Errorf("nil signed block header")
|
||||
}
|
||||
if len(header.KzgCommitments) == 0 {
|
||||
return true, fmt.Errorf("empty kzg commitments")
|
||||
}
|
||||
// Verify inclusion proof
|
||||
if err := peerdas.VerifyPartialDataColumnHeaderInclusionProof(header); err != nil {
|
||||
return true, fmt.Errorf("invalid inclusion proof: %w", err)
|
||||
}
|
||||
t.Log("Header validation passed")
|
||||
return false, nil
|
||||
}
|
||||
|
||||
cellValidator := func(_ []blocks.CellProofBundle) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
node1Complete := make(chan blocks.VerifiedRODataColumn, 1)
|
||||
node2Complete := make(chan blocks.VerifiedRODataColumn, 1)
|
||||
|
||||
handler1 := func(topic string, col blocks.VerifiedRODataColumn) {
|
||||
t.Logf("Node 1: Completed! Column has %d cells", len(col.Column))
|
||||
node1Complete <- col
|
||||
}
|
||||
|
||||
handler2 := func(topic string, col blocks.VerifiedRODataColumn) {
|
||||
t.Logf("Node 2: Completed! Column has %d cells", len(col.Column))
|
||||
node2Complete <- col
|
||||
}
|
||||
|
||||
// Connect hosts
|
||||
err = h1.Connect(context.Background(), peer.AddrInfo{
|
||||
ID: h2.ID(),
|
||||
Addrs: h2.Addrs(),
|
||||
})
|
||||
require.NoError(t, err)
|
||||
time.Sleep(300 * time.Millisecond)
|
||||
|
||||
// Subscribe to regular GossipSub (critical for partial message RPC exchange!)
|
||||
sub1, err := topic1.Subscribe()
|
||||
require.NoError(t, err)
|
||||
defer sub1.Cancel()
|
||||
|
||||
sub2, err := topic2.Subscribe()
|
||||
require.NoError(t, err)
|
||||
defer sub2.Cancel()
|
||||
|
||||
err = broadcaster1.Subscribe(topic1, headerValidator, cellValidator, handler1)
|
||||
require.NoError(t, err)
|
||||
err = broadcaster2.Subscribe(topic2, headerValidator, cellValidator, handler2)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for mesh to form
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
// Publish
|
||||
t.Log("Publishing from Node 1")
|
||||
err = broadcaster1.Publish(topicStr, pc1)
|
||||
require.NoError(t, err)
|
||||
|
||||
time.Sleep(200 * time.Millisecond)
|
||||
|
||||
t.Log("Publishing from Node 2")
|
||||
err = broadcaster2.Publish(topicStr, pc2)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for Completion
|
||||
timeout := time.After(10 * time.Second)
|
||||
var col1, col2 blocks.VerifiedRODataColumn
|
||||
receivedCount := 0
|
||||
|
||||
for receivedCount < 2 {
|
||||
select {
|
||||
case col1 = <-node1Complete:
|
||||
t.Log("Node 1 completed reconstruction")
|
||||
receivedCount++
|
||||
case col2 = <-node2Complete:
|
||||
t.Log("Node 2 completed reconstruction")
|
||||
receivedCount++
|
||||
case <-timeout:
|
||||
t.Fatalf("Timeout: Only %d/2 nodes completed", receivedCount)
|
||||
}
|
||||
}
|
||||
|
||||
// Verify both columns have all cells
|
||||
assert.Equal(t, numCells, len(col1.Column), "Node 1 should have all cells")
|
||||
assert.Equal(t, numCells, len(col2.Column), "Node 2 should have all cells")
|
||||
assert.DeepSSZEqual(t, cells, col1.Column, "Node 1 cell mismatch")
|
||||
assert.DeepSSZEqual(t, cells, col2.Column, "Node 2 cell mismatch")
|
||||
})
|
||||
}
|
||||
18
beacon-chain/p2p/partialdatacolumnbroadcaster/metrics.go
Normal file
@@ -0,0 +1,18 @@
|
||||
package partialdatacolumnbroadcaster
|
||||
|
||||
import (
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
"github.com/prometheus/client_golang/prometheus/promauto"
|
||||
)
|
||||
|
||||
var (
|
||||
partialMessageUsefulCellsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "beacon_partial_message_useful_cells_total",
|
||||
Help: "Number of useful cells received via a partial message",
|
||||
}, []string{"column_index"})
|
||||
|
||||
partialMessageCellsReceivedTotal = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Name: "beacon_partial_message_cells_received_total",
|
||||
Help: "Number of total cells received via a partial message",
|
||||
}, []string{"column_index"})
|
||||
)
|
||||
544
beacon-chain/p2p/partialdatacolumnbroadcaster/partial.go
Normal file
@@ -0,0 +1,544 @@
|
||||
package partialdatacolumnbroadcaster
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"log/slog"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/go-bitfield"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v7/internal/logrusadapter"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
"github.com/libp2p/go-libp2p-pubsub/partialmessages"
|
||||
"github.com/libp2p/go-libp2p-pubsub/partialmessages/bitmap"
|
||||
pubsub_pb "github.com/libp2p/go-libp2p-pubsub/pb"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
// TODOs:
|
||||
// different eager push strategies:
|
||||
// - no eager push
|
||||
// - full column eager push
|
||||
// - With debouncing - some factor of RTT
|
||||
// - eager push missing cells
|
||||
|
||||
const TTLInSlots = 3
|
||||
const maxConcurrentValidators = 128
|
||||
|
||||
var dataColumnTopicRegex = regexp.MustCompile(`data_column_sidecar_(\d+)`)
|
||||
|
||||
func extractColumnIndexFromTopic(topic string) (uint64, error) {
|
||||
matches := dataColumnTopicRegex.FindStringSubmatch(topic)
|
||||
if len(matches) < 2 {
|
||||
return 0, errors.New("could not extract column index from topic")
|
||||
}
|
||||
return strconv.ParseUint(matches[1], 10, 64)
|
||||
}
|
||||
|
||||
// HeaderValidator validates a PartialDataColumnHeader.
|
||||
// Returns (reject, err) where:
|
||||
// - reject=true, err!=nil: REJECT - peer should be penalized
|
||||
// - reject=false, err!=nil: IGNORE - don't penalize, just ignore
|
||||
// - reject=false, err=nil: valid header
|
||||
type HeaderValidator func(header *ethpb.PartialDataColumnHeader) (reject bool, err error)
|
||||
type ColumnValidator func(cells []blocks.CellProofBundle) error
|
||||
|
||||
type PartialColumnBroadcaster struct {
|
||||
logger *logrus.Logger
|
||||
|
||||
ps *pubsub.PubSub
|
||||
stop chan struct{}
|
||||
|
||||
// map topic -> headerValidators
|
||||
headerValidators map[string]HeaderValidator
|
||||
// map topic -> Validator
|
||||
validators map[string]ColumnValidator
|
||||
|
||||
// map topic -> handler
|
||||
handlers map[string]SubHandler
|
||||
|
||||
// map topic -> *pubsub.Topic
|
||||
topics map[string]*pubsub.Topic
|
||||
|
||||
concurrentValidatorSemaphore chan struct{}
|
||||
|
||||
// map topic -> map[groupID]PartialColumn
|
||||
partialMsgStore map[string]map[string]*blocks.PartialDataColumn
|
||||
|
||||
groupTTL map[string]int8
|
||||
|
||||
// validHeaderCache caches validated headers by group ID (works across topics)
|
||||
validHeaderCache map[string]*ethpb.PartialDataColumnHeader
|
||||
|
||||
incomingReq chan request
|
||||
}
|
||||
|
||||
type requestKind uint8
|
||||
|
||||
const (
|
||||
requestKindPublish requestKind = iota
|
||||
requestKindSubscribe
|
||||
requestKindUnsubscribe
|
||||
requestKindHandleIncomingRPC
|
||||
requestKindCellsValidated
|
||||
)
|
||||
|
||||
type request struct {
|
||||
kind requestKind
|
||||
response chan error
|
||||
sub subscribe
|
||||
unsub unsubscribe
|
||||
publish publish
|
||||
incomingRPC rpcWithFrom
|
||||
cellsValidated *cellsValidated
|
||||
}
|
||||
|
||||
type publish struct {
|
||||
topic string
|
||||
c blocks.PartialDataColumn
|
||||
}
|
||||
|
||||
type subscribe struct {
|
||||
t *pubsub.Topic
|
||||
headerValidator HeaderValidator
|
||||
validator ColumnValidator
|
||||
handler SubHandler
|
||||
}
|
||||
|
||||
type unsubscribe struct {
|
||||
topic string
|
||||
}
|
||||
|
||||
type rpcWithFrom struct {
|
||||
*pubsub_pb.PartialMessagesExtension
|
||||
from peer.ID
|
||||
}
|
||||
|
||||
type cellsValidated struct {
|
||||
validationTook time.Duration
|
||||
topic string
|
||||
group []byte
|
||||
cellIndices []uint64
|
||||
cells []blocks.CellProofBundle
|
||||
}
|
||||
|
||||
func NewBroadcaster(logger *logrus.Logger) *PartialColumnBroadcaster {
|
||||
return &PartialColumnBroadcaster{
|
||||
validators: make(map[string]ColumnValidator),
|
||||
headerValidators: make(map[string]HeaderValidator),
|
||||
handlers: make(map[string]SubHandler),
|
||||
topics: make(map[string]*pubsub.Topic),
|
||||
partialMsgStore: make(map[string]map[string]*blocks.PartialDataColumn),
|
||||
groupTTL: make(map[string]int8),
|
||||
validHeaderCache: make(map[string]*ethpb.PartialDataColumnHeader),
|
||||
// GossipSub sends the messages to this channel. The buffer should be
|
||||
// big enough to avoid dropping messages. We don't want to block the gossipsub event loop for this.
|
||||
incomingReq: make(chan request, 128*16),
|
||||
logger: logger,
|
||||
|
||||
concurrentValidatorSemaphore: make(chan struct{}, maxConcurrentValidators),
|
||||
}
|
||||
}
|
||||
|
||||
// AppendPubSubOpts adds the necessary pubsub options to enable partial messages.
|
||||
func (p *PartialColumnBroadcaster) AppendPubSubOpts(opts []pubsub.Option) []pubsub.Option {
|
||||
slogger := slog.New(logrusadapter.Handler{Logger: p.logger})
|
||||
opts = append(opts,
|
||||
pubsub.WithPartialMessagesExtension(&partialmessages.PartialMessagesExtension{
|
||||
Logger: slogger,
|
||||
MergePartsMetadata: func(topic string, left, right partialmessages.PartsMetadata) partialmessages.PartsMetadata {
|
||||
if len(left) == 0 {
|
||||
return right
|
||||
}
|
||||
merged, err := bitfield.Bitlist(left).Or(bitfield.Bitlist(right))
|
||||
if err != nil {
|
||||
p.logger.Warn("Failed to merge bitfields", "err", err, "left", left, "right", right)
|
||||
return left
|
||||
}
|
||||
return partialmessages.PartsMetadata(merged)
|
||||
},
|
||||
ValidateRPC: func(from peer.ID, rpc *pubsub_pb.PartialMessagesExtension) error {
|
||||
// TODO. Add some basic and fast sanity checks
|
||||
return nil
|
||||
},
|
||||
OnIncomingRPC: func(from peer.ID, rpc *pubsub_pb.PartialMessagesExtension) error {
|
||||
select {
|
||||
case p.incomingReq <- request{
|
||||
kind: requestKindHandleIncomingRPC,
|
||||
incomingRPC: rpcWithFrom{rpc, from},
|
||||
}:
|
||||
default:
|
||||
p.logger.Warn("Dropping incoming partial RPC", "rpc", rpc)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
}),
|
||||
func(ps *pubsub.PubSub) error {
|
||||
p.ps = ps
|
||||
return nil
|
||||
},
|
||||
)
|
||||
return opts
|
||||
}
|
||||
|
||||
// Start starts the event loop of the PartialColumnBroadcaster. Should be called
|
||||
// within a goroutine (go p.Start())
|
||||
func (p *PartialColumnBroadcaster) Start() {
|
||||
if p.stop != nil {
|
||||
return
|
||||
}
|
||||
p.stop = make(chan struct{})
|
||||
p.loop()
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) loop() {
|
||||
cleanup := time.NewTicker(time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot))
|
||||
defer cleanup.Stop()
|
||||
for {
|
||||
select {
|
||||
case <-p.stop:
|
||||
return
|
||||
case <-cleanup.C:
|
||||
for groupID, ttl := range p.groupTTL {
|
||||
if ttl > 0 {
|
||||
p.groupTTL[groupID] = ttl - 1
|
||||
continue
|
||||
}
|
||||
|
||||
delete(p.groupTTL, groupID)
|
||||
delete(p.validHeaderCache, groupID)
|
||||
for topic, msgStore := range p.partialMsgStore {
|
||||
delete(msgStore, groupID)
|
||||
if len(msgStore) == 0 {
|
||||
delete(p.partialMsgStore, topic)
|
||||
}
|
||||
}
|
||||
}
|
||||
case req := <-p.incomingReq:
|
||||
switch req.kind {
|
||||
case requestKindPublish:
|
||||
req.response <- p.publish(req.publish.topic, req.publish.c)
|
||||
case requestKindSubscribe:
|
||||
req.response <- p.subscribe(req.sub.t, req.sub.headerValidator, req.sub.validator, req.sub.handler)
|
||||
case requestKindUnsubscribe:
|
||||
req.response <- p.unsubscribe(req.unsub.topic)
|
||||
case requestKindHandleIncomingRPC:
|
||||
err := p.handleIncomingRPC(req.incomingRPC)
|
||||
if err != nil {
|
||||
p.logger.Error("Failed to handle incoming partial RPC", "err", err)
|
||||
}
|
||||
case requestKindCellsValidated:
|
||||
err := p.handleCellsValidated(req.cellsValidated)
|
||||
if err != nil {
|
||||
p.logger.Error("Failed to handle cells validated", "err", err)
|
||||
}
|
||||
default:
|
||||
p.logger.Error("Unknown request kind", "kind", req.kind)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) getDataColumn(topic string, group []byte) *blocks.PartialDataColumn {
|
||||
topicStore, ok := p.partialMsgStore[topic]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
msg, ok := topicStore[string(group)]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return msg
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) handleIncomingRPC(rpcWithFrom rpcWithFrom) error {
|
||||
if p.ps == nil {
|
||||
return errors.New("pubsub not initialized")
|
||||
}
|
||||
|
||||
hasMessage := len(rpcWithFrom.PartialMessage) > 0
|
||||
|
||||
var message ethpb.PartialDataColumnSidecar
|
||||
if hasMessage {
|
||||
err := message.UnmarshalSSZ(rpcWithFrom.PartialMessage)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to unmarshal partial message data")
|
||||
}
|
||||
}
|
||||
|
||||
topicID := rpcWithFrom.GetTopicID()
|
||||
groupID := rpcWithFrom.GroupID
|
||||
ourDataColumn := p.getDataColumn(topicID, groupID)
|
||||
var shouldRepublish bool
|
||||
|
||||
if ourDataColumn == nil && hasMessage {
|
||||
var header *ethpb.PartialDataColumnHeader
|
||||
// Check cache first for this group
|
||||
if cachedHeader, ok := p.validHeaderCache[string(groupID)]; ok {
|
||||
header = cachedHeader
|
||||
} else {
|
||||
// We haven't seen this group before. Check if we have a valid header.
|
||||
if len(message.Header) == 0 {
|
||||
p.logger.Debug("No partial column found and no header in message, ignoring")
|
||||
return nil
|
||||
}
|
||||
|
||||
header = message.Header[0]
|
||||
headerValidator, ok := p.headerValidators[topicID]
|
||||
if !ok || headerValidator == nil {
|
||||
p.logger.Debug("No header validator registered for topic")
|
||||
return nil
|
||||
}
|
||||
|
||||
reject, err := headerValidator(header)
|
||||
if err != nil {
|
||||
p.logger.Debug("Header validation failed", "err", err, "reject", reject)
|
||||
if reject {
|
||||
// REJECT case: penalize the peer
|
||||
_ = p.ps.PeerFeedback(topicID, rpcWithFrom.from, pubsub.PeerFeedbackInvalidMessage)
|
||||
}
|
||||
// Both REJECT and IGNORE: don't process further
|
||||
return nil
|
||||
}
|
||||
// Cache the valid header
|
||||
p.validHeaderCache[string(groupID)] = header
|
||||
|
||||
// TODO: We now have the information we need to call GetBlobsV3, we should do that to see what we have locally.
|
||||
}
|
||||
|
||||
columnIndex, err := extractColumnIndexFromTopic(topicID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
newColumn, err := blocks.NewPartialDataColumn(
|
||||
header.SignedBlockHeader,
|
||||
columnIndex,
|
||||
header.KzgCommitments,
|
||||
header.KzgCommitmentsInclusionProof,
|
||||
)
|
||||
if err != nil {
|
||||
p.logger.WithError(err).WithFields(logrus.Fields{
|
||||
"topic": topicID,
|
||||
"columnIndex": columnIndex,
|
||||
"numCommitments": len(header.KzgCommitments),
|
||||
}).Error("Failed to create partial data column from header")
|
||||
return err
|
||||
}
|
||||
|
||||
// Save to store
|
||||
topicStore, ok := p.partialMsgStore[topicID]
|
||||
if !ok {
|
||||
topicStore = make(map[string]*blocks.PartialDataColumn)
|
||||
p.partialMsgStore[topicID] = topicStore
|
||||
}
|
||||
topicStore[string(newColumn.GroupID())] = &newColumn
|
||||
p.groupTTL[string(newColumn.GroupID())] = TTLInSlots
|
||||
|
||||
ourDataColumn = &newColumn
|
||||
shouldRepublish = true
|
||||
}
|
||||
|
||||
if ourDataColumn == nil {
|
||||
// We don't have a partial column for this. Can happen if we got cells
|
||||
// without a header.
|
||||
return nil
|
||||
}
|
||||
|
||||
logger := p.logger.WithFields(logrus.Fields{
|
||||
"from": rpcWithFrom.from,
|
||||
"topic": topicID,
|
||||
"group": groupID,
|
||||
})
|
||||
|
||||
validator, validatorOK := p.validators[topicID]
|
||||
if len(rpcWithFrom.PartialMessage) > 0 && validatorOK {
|
||||
// TODO: is there any penalty we want to consider for giving us data we didn't request?
|
||||
// Note that we need to be careful around race conditions and eager data.
|
||||
// Also note that protobufs by design allow extra data that we don't parse.
|
||||
// Marco's thoughts. No, we don't need to do anything else here.
|
||||
cellIndices, cellsToVerify, err := ourDataColumn.CellsToVerifyFromPartialMessage(&message)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Track cells received via partial message
|
||||
if len(cellIndices) > 0 {
|
||||
columnIndexStr := strconv.FormatUint(ourDataColumn.Index, 10)
|
||||
partialMessageCellsReceivedTotal.WithLabelValues(columnIndexStr).Add(float64(len(cellIndices)))
|
||||
}
|
||||
if len(cellsToVerify) > 0 {
|
||||
p.concurrentValidatorSemaphore <- struct{}{}
|
||||
go func() {
|
||||
defer func() {
|
||||
<-p.concurrentValidatorSemaphore
|
||||
}()
|
||||
start := time.Now()
|
||||
err := validator(cellsToVerify)
|
||||
if err != nil {
|
||||
logger.Error("failed to validate cells", "err", err)
|
||||
_ = p.ps.PeerFeedback(topicID, rpcWithFrom.from, pubsub.PeerFeedbackInvalidMessage)
|
||||
return
|
||||
}
|
||||
_ = p.ps.PeerFeedback(topicID, rpcWithFrom.from, pubsub.PeerFeedbackUsefulMessage)
|
||||
p.incomingReq <- request{
|
||||
kind: requestKindCellsValidated,
|
||||
cellsValidated: &cellsValidated{
|
||||
validationTook: time.Since(start),
|
||||
topic: topicID,
|
||||
group: groupID,
|
||||
cells: cellsToVerify,
|
||||
cellIndices: cellIndices,
|
||||
},
|
||||
}
|
||||
}()
|
||||
}
|
||||
}
|
||||
|
||||
peerHas := bitmap.Bitmap(rpcWithFrom.PartsMetadata)
|
||||
iHave := bitmap.Bitmap(ourDataColumn.PartsMetadata())
|
||||
if !shouldRepublish && len(peerHas) > 0 && !bytes.Equal(peerHas, iHave) {
|
||||
// Either we have something they don't or vice versa
|
||||
shouldRepublish = true
|
||||
logger.Debug("republishing due to parts metadata difference")
|
||||
}
|
||||
|
||||
if shouldRepublish {
|
||||
err := p.ps.PublishPartialMessage(topicID, ourDataColumn, partialmessages.PublishOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) handleCellsValidated(cells *cellsValidated) error {
|
||||
ourDataColumn := p.getDataColumn(cells.topic, cells.group)
|
||||
if ourDataColumn == nil {
|
||||
return errors.New("data column not found for verified cells")
|
||||
}
|
||||
extended := ourDataColumn.ExtendFromVerfifiedCells(cells.cellIndices, cells.cells)
|
||||
p.logger.Debug("Extended partial message", "duration", cells.validationTook, "extended", extended)
|
||||
|
||||
columnIndexStr := strconv.FormatUint(ourDataColumn.Index, 10)
|
||||
if extended {
|
||||
// Track useful cells (cells that extended our data)
|
||||
partialMessageUsefulCellsTotal.WithLabelValues(columnIndexStr).Add(float64(len(cells.cells)))
|
||||
|
||||
// TODO: we could use the heuristic here that if this data was
|
||||
// useful to us, it's likely useful to our peers and we should
|
||||
// republish eagerly
|
||||
|
||||
if col, ok := ourDataColumn.Complete(p.logger); ok {
|
||||
p.logger.Info("Completed partial column", "topic", cells.topic, "group", cells.group)
|
||||
handler, handlerOK := p.handlers[cells.topic]
|
||||
|
||||
if handlerOK {
|
||||
go handler(cells.topic, col)
|
||||
}
|
||||
} else {
|
||||
p.logger.Info("Extended partial column", "topic", cells.topic, "group", cells.group)
|
||||
}
|
||||
|
||||
err := p.ps.PublishPartialMessage(cells.topic, ourDataColumn, partialmessages.PublishOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) Stop() {
|
||||
if p.stop != nil {
|
||||
close(p.stop)
|
||||
p.stop = nil
|
||||
}
|
||||
}
|
||||
|
||||
// Publish publishes the partial column.
|
||||
func (p *PartialColumnBroadcaster) Publish(topic string, c blocks.PartialDataColumn) error {
|
||||
if p.ps == nil {
|
||||
return errors.New("pubsub not initialized")
|
||||
}
|
||||
respCh := make(chan error)
|
||||
p.incomingReq <- request{
|
||||
kind: requestKindPublish,
|
||||
response: respCh,
|
||||
publish: publish{
|
||||
topic: topic,
|
||||
c: c,
|
||||
},
|
||||
}
|
||||
return <-respCh
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) publish(topic string, c blocks.PartialDataColumn) error {
|
||||
topicStore, ok := p.partialMsgStore[topic]
|
||||
if !ok {
|
||||
topicStore = make(map[string]*blocks.PartialDataColumn)
|
||||
p.partialMsgStore[topic] = topicStore
|
||||
}
|
||||
topicStore[string(c.GroupID())] = &c
|
||||
p.groupTTL[string(c.GroupID())] = TTLInSlots
|
||||
|
||||
return p.ps.PublishPartialMessage(topic, &c, partialmessages.PublishOptions{})
|
||||
}
|
||||
|
||||
type SubHandler func(topic string, col blocks.VerifiedRODataColumn)
|
||||
|
||||
func (p *PartialColumnBroadcaster) Subscribe(t *pubsub.Topic, headerValidator HeaderValidator, validator ColumnValidator, handler SubHandler) error {
|
||||
respCh := make(chan error)
|
||||
p.incomingReq <- request{
|
||||
kind: requestKindSubscribe,
|
||||
sub: subscribe{
|
||||
t: t,
|
||||
headerValidator: headerValidator,
|
||||
validator: validator,
|
||||
handler: handler,
|
||||
},
|
||||
response: respCh,
|
||||
}
|
||||
return <-respCh
|
||||
}
|
||||
func (p *PartialColumnBroadcaster) subscribe(t *pubsub.Topic, headerValidator HeaderValidator, validator ColumnValidator, handler SubHandler) error {
|
||||
topic := t.String()
|
||||
if _, ok := p.topics[topic]; ok {
|
||||
return errors.New("already subscribed")
|
||||
}
|
||||
|
||||
p.topics[topic] = t
|
||||
p.headerValidators[topic] = headerValidator
|
||||
p.validators[topic] = validator
|
||||
p.handlers[topic] = handler
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *PartialColumnBroadcaster) Unsubscribe(topic string) error {
|
||||
respCh := make(chan error)
|
||||
p.incomingReq <- request{
|
||||
kind: requestKindUnsubscribe,
|
||||
unsub: unsubscribe{
|
||||
topic: topic,
|
||||
},
|
||||
response: respCh,
|
||||
}
|
||||
return <-respCh
|
||||
}
|
||||
func (p *PartialColumnBroadcaster) unsubscribe(topic string) error {
|
||||
t, ok := p.topics[topic]
|
||||
if !ok {
|
||||
return errors.New("topic not found")
|
||||
}
|
||||
delete(p.topics, topic)
|
||||
delete(p.partialMsgStore, topic)
|
||||
delete(p.headerValidators, topic)
|
||||
delete(p.validators, topic)
|
||||
delete(p.handlers, topic)
|
||||
|
||||
return t.Close()
|
||||
}
|
||||
@@ -58,7 +58,7 @@ func TestPeerExplicitAdd(t *testing.T) {
|
||||
|
||||
resAddress, err := p.Address(id)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, address, resAddress, "Unexpected address")
|
||||
assert.Equal(t, address.Equal(resAddress), true, "Unexpected address")
|
||||
|
||||
resDirection, err := p.Direction(id)
|
||||
require.NoError(t, err)
|
||||
@@ -72,7 +72,7 @@ func TestPeerExplicitAdd(t *testing.T) {
|
||||
|
||||
resAddress2, err := p.Address(id)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, address2, resAddress2, "Unexpected address")
|
||||
assert.Equal(t, address2.Equal(resAddress2), true, "Unexpected address")
|
||||
|
||||
resDirection2, err := p.Direction(id)
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -160,6 +160,9 @@ func (s *Service) pubsubOptions() []pubsub.Option {
|
||||
}
|
||||
psOpts = append(psOpts, pubsub.WithDirectPeers(directPeersAddrInfos))
|
||||
}
|
||||
if s.partialColumnBroadcaster != nil {
|
||||
psOpts = s.partialColumnBroadcaster.AppendPubSubOpts(psOpts)
|
||||
}
|
||||
|
||||
return psOpts
|
||||
}
|
||||
|
||||
@@ -88,20 +88,20 @@ func (g gossipTracer) ThrottlePeer(p peer.ID) {
|
||||
|
||||
// RecvRPC .
|
||||
func (g gossipTracer) RecvRPC(rpc *pubsub.RPC) {
|
||||
g.setMetricFromRPC(recv, pubsubRPCSubRecv, pubsubRPCPubRecv, pubsubRPCRecv, rpc)
|
||||
g.setMetricFromRPC(recv, pubsubRPCSubRecv, pubsubRPCPubRecv, pubsubRPCPubRecvSize, pubsubRPCRecv, rpc)
|
||||
}
|
||||
|
||||
// SendRPC .
|
||||
func (g gossipTracer) SendRPC(rpc *pubsub.RPC, p peer.ID) {
|
||||
g.setMetricFromRPC(send, pubsubRPCSubSent, pubsubRPCPubSent, pubsubRPCSent, rpc)
|
||||
g.setMetricFromRPC(send, pubsubRPCSubSent, pubsubRPCPubSent, pubsubRPCPubSentSize, pubsubRPCSent, rpc)
|
||||
}
|
||||
|
||||
// DropRPC .
|
||||
func (g gossipTracer) DropRPC(rpc *pubsub.RPC, p peer.ID) {
|
||||
g.setMetricFromRPC(drop, pubsubRPCSubDrop, pubsubRPCPubDrop, pubsubRPCDrop, rpc)
|
||||
g.setMetricFromRPC(drop, pubsubRPCSubDrop, pubsubRPCPubDrop, pubsubRPCPubDropSize, pubsubRPCDrop, rpc)
|
||||
}
|
||||
|
||||
func (g gossipTracer) setMetricFromRPC(act action, subCtr prometheus.Counter, pubCtr, ctrlCtr *prometheus.CounterVec, rpc *pubsub.RPC) {
|
||||
func (g gossipTracer) setMetricFromRPC(act action, subCtr prometheus.Counter, pubCtr, pubSizeCtr, ctrlCtr *prometheus.CounterVec, rpc *pubsub.RPC) {
|
||||
subCtr.Add(float64(len(rpc.Subscriptions)))
|
||||
if rpc.Control != nil {
|
||||
ctrlCtr.WithLabelValues("graft").Add(float64(len(rpc.Control.Graft)))
|
||||
@@ -110,12 +110,17 @@ func (g gossipTracer) setMetricFromRPC(act action, subCtr prometheus.Counter, pu
|
||||
ctrlCtr.WithLabelValues("iwant").Add(float64(len(rpc.Control.Iwant)))
|
||||
ctrlCtr.WithLabelValues("idontwant").Add(float64(len(rpc.Control.Idontwant)))
|
||||
}
|
||||
// For incoming messages from pubsub, we do not record metrics for them as these values
|
||||
// could be junk.
|
||||
if act == recv {
|
||||
return
|
||||
}
|
||||
for _, msg := range rpc.Publish {
|
||||
// For incoming messages from pubsub, we do not record metrics for them as these values
|
||||
// could be junk.
|
||||
if act == recv {
|
||||
continue
|
||||
}
|
||||
pubCtr.WithLabelValues(*msg.Topic).Inc()
|
||||
pubCtr.WithLabelValues(msg.GetTopic()).Inc()
|
||||
pubSizeCtr.WithLabelValues(msg.GetTopic(), "false").Add(float64(msg.Size()))
|
||||
}
|
||||
if rpc.Partial != nil {
|
||||
pubCtr.WithLabelValues(rpc.Partial.GetTopicID()).Inc()
|
||||
pubSizeCtr.WithLabelValues(rpc.Partial.GetTopicID(), "true").Add(float64(rpc.Partial.Size()))
|
||||
}
|
||||
}
|
||||
|
||||
@@ -11,6 +11,7 @@ import (
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/async"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers/scorers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/types"
|
||||
@@ -77,6 +78,7 @@ type Service struct {
|
||||
privKey *ecdsa.PrivateKey
|
||||
metaData metadata.Metadata
|
||||
pubsub *pubsub.PubSub
|
||||
partialColumnBroadcaster *partialdatacolumnbroadcaster.PartialColumnBroadcaster
|
||||
joinedTopics map[string]*pubsub.Topic
|
||||
joinedTopicsLock sync.RWMutex
|
||||
subnetsLock map[uint64]*sync.RWMutex
|
||||
@@ -147,6 +149,10 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
|
||||
custodyInfoSet: make(chan struct{}),
|
||||
}
|
||||
|
||||
if cfg.PartialDataColumns {
|
||||
s.partialColumnBroadcaster = partialdatacolumnbroadcaster.NewBroadcaster(log.Logger)
|
||||
}
|
||||
|
||||
ipAddr := prysmnetwork.IPAddr()
|
||||
|
||||
opts, err := s.buildOptions(ipAddr, s.privKey)
|
||||
@@ -305,6 +311,10 @@ func (s *Service) Start() {
|
||||
logExternalDNSAddr(s.host.ID(), p2pHostDNS, p2pTCPPort)
|
||||
}
|
||||
go s.forkWatcher()
|
||||
|
||||
if s.partialColumnBroadcaster != nil {
|
||||
go s.partialColumnBroadcaster.Start()
|
||||
}
|
||||
}
|
||||
|
||||
// Stop the p2p service and terminate all peer connections.
|
||||
@@ -314,6 +324,10 @@ func (s *Service) Stop() error {
|
||||
if s.dv5Listener != nil {
|
||||
s.dv5Listener.Close()
|
||||
}
|
||||
|
||||
if s.partialColumnBroadcaster != nil {
|
||||
s.partialColumnBroadcaster.Stop()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -350,6 +364,10 @@ func (s *Service) PubSub() *pubsub.PubSub {
|
||||
return s.pubsub
|
||||
}
|
||||
|
||||
func (s *Service) PartialColumnBroadcaster() *partialdatacolumnbroadcaster.PartialColumnBroadcaster {
|
||||
return s.partialColumnBroadcaster
|
||||
}
|
||||
|
||||
// Host returns the currently running libp2p
|
||||
// host of the service.
|
||||
func (s *Service) Host() host.Host {
|
||||
|
||||
@@ -21,6 +21,7 @@ go_library(
|
||||
deps = [
|
||||
"//beacon-chain/core/peerdas:go_default_library",
|
||||
"//beacon-chain/p2p/encoder:go_default_library",
|
||||
"//beacon-chain/p2p/partialdatacolumnbroadcaster:go_default_library",
|
||||
"//beacon-chain/p2p/peers:go_default_library",
|
||||
"//beacon-chain/p2p/peers/scorers:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
@@ -108,6 +109,10 @@ func (*FakeP2P) PubSub() *pubsub.PubSub {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (*FakeP2P) PartialColumnBroadcaster() *partialdatacolumnbroadcaster.PartialColumnBroadcaster {
|
||||
return nil
|
||||
}
|
||||
|
||||
// MetadataSeq -- fake.
|
||||
func (*FakeP2P) MetadataSeq() uint64 {
|
||||
return 0
|
||||
@@ -169,7 +174,7 @@ func (*FakeP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfac
|
||||
}
|
||||
|
||||
// BroadcastDataColumnSidecar -- fake.
|
||||
func (*FakeP2P) BroadcastDataColumnSidecars(_ context.Context, _ []blocks.VerifiedRODataColumn) error {
|
||||
func (*FakeP2P) BroadcastDataColumnSidecars(_ context.Context, _ []blocks.VerifiedRODataColumn, _ []blocks.PartialDataColumn) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@@ -63,7 +63,7 @@ func (m *MockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
|
||||
}
|
||||
|
||||
// BroadcastDataColumnSidecar broadcasts a data column for mock.
|
||||
func (m *MockBroadcaster) BroadcastDataColumnSidecars(context.Context, []blocks.VerifiedRODataColumn) error {
|
||||
func (m *MockBroadcaster) BroadcastDataColumnSidecars(context.Context, []blocks.VerifiedRODataColumn, []blocks.PartialDataColumn) error {
|
||||
m.BroadcastCalled.Store(true)
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -12,6 +12,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers/scorers"
|
||||
@@ -233,7 +234,7 @@ func (p *TestP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interf
|
||||
}
|
||||
|
||||
// BroadcastDataColumnSidecar broadcasts a data column for mock.
|
||||
func (p *TestP2P) BroadcastDataColumnSidecars(context.Context, []blocks.VerifiedRODataColumn) error {
|
||||
func (p *TestP2P) BroadcastDataColumnSidecars(context.Context, []blocks.VerifiedRODataColumn, []blocks.PartialDataColumn) error {
|
||||
p.BroadcastCalled.Store(true)
|
||||
return nil
|
||||
}
|
||||
@@ -299,6 +300,10 @@ func (p *TestP2P) PubSub() *pubsub.PubSub {
|
||||
return p.pubsub
|
||||
}
|
||||
|
||||
func (p *TestP2P) PartialColumnBroadcaster() *partialdatacolumnbroadcaster.PartialColumnBroadcaster {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Disconnect from a peer.
|
||||
func (p *TestP2P) Disconnect(pid peer.ID) error {
|
||||
return p.BHost.Network().ClosePeer(pid)
|
||||
|
||||
@@ -204,6 +204,9 @@ func InitializeDataMaps() {
|
||||
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
|
||||
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
|
||||
},
|
||||
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
|
||||
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
|
||||
},
|
||||
}
|
||||
|
||||
// Reset our light client finality update map.
|
||||
@@ -223,5 +226,8 @@ func InitializeDataMaps() {
|
||||
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
|
||||
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
|
||||
},
|
||||
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
|
||||
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
@@ -711,6 +711,7 @@ func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Reques
|
||||
versionHeader := r.Header.Get(api.VersionHeader)
|
||||
if versionHeader == "" {
|
||||
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
v, err := version.FromString(versionHeader)
|
||||
if err != nil {
|
||||
|
||||
@@ -2112,6 +2112,33 @@ func TestSubmitAttesterSlashingsV2(t *testing.T) {
		assert.Equal(t, http.StatusBadRequest, e.Code)
		assert.StringContains(t, "Invalid attester slashing", e.Message)
	})

	t.Run("missing-version-header", func(t *testing.T) {
		bs, err := util.NewBeaconStateElectra()
		require.NoError(t, err)

		broadcaster := &p2pMock.MockBroadcaster{}
		s := &Server{
			ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
			SlashingsPool:    &slashingsmock.PoolMock{},
			Broadcaster:      broadcaster,
		}

		var body bytes.Buffer
		_, err = body.WriteString(invalidAttesterSlashing)
		require.NoError(t, err)
		request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
		// Intentionally do not set api.VersionHeader to verify missing header handling.
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.SubmitAttesterSlashingsV2(writer, request)
		require.Equal(t, http.StatusBadRequest, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusBadRequest, e.Code)
		assert.StringContains(t, api.VersionHeader+" header is required", e.Message)
	})
}

func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {

@@ -654,6 +654,10 @@ func (m *futureSyncMockFetcher) StateBySlot(context.Context, primitives.Slot) (s
	return m.BeaconState, nil
}

func (m *futureSyncMockFetcher) StateByEpoch(context.Context, primitives.Epoch) (state.BeaconState, error) {
	return m.BeaconState, nil
}

func TestGetSyncCommittees_Future(t *testing.T) {
	st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().SyncCommitteeSize)
	syncCommittee := make([][]byte, params.BeaconConfig().SyncCommitteeSize)

@@ -116,6 +116,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
	for _, update := range updates {
		if ctx.Err() != nil {
			httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
			return
		}

		updateSlot := update.AttestedHeader().Beacon().Slot
@@ -131,12 +132,15 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
			chunkLength = ssz.MarshalUint64(chunkLength, uint64(len(updateSSZ)+4))
			if _, err := w.Write(chunkLength); err != nil {
				httputil.HandleError(w, "Could not write chunk length: "+err.Error(), http.StatusInternalServerError)
				return
			}
			if _, err := w.Write(updateEntry.ForkDigest[:]); err != nil {
				httputil.HandleError(w, "Could not write fork digest: "+err.Error(), http.StatusInternalServerError)
				return
			}
			if _, err := w.Write(updateSSZ); err != nil {
				httputil.HandleError(w, "Could not write update SSZ: "+err.Error(), http.StatusInternalServerError)
				return
			}
		}
	} else {
@@ -145,6 +149,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
		for _, update := range updates {
			if ctx.Err() != nil {
				httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
				return
			}

			updateJson, err := structs.LightClientUpdateFromConsensus(update)
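The SSZ branch above frames each update as an 8-byte length prefix (covering the 4-byte fork digest plus the SSZ payload) followed by the digest and the payload; assuming ssz.MarshalUint64 appends the value as 8 little-endian bytes, as SSZ encodes uint64. A minimal reader sketch for that framing:

package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
)

// readChunk consumes one frame of the form:
//   8-byte little-endian length | 4-byte fork digest | SSZ payload (length-4 bytes)
func readChunk(r io.Reader) (digest [4]byte, payload []byte, err error) {
	var lenBuf [8]byte
	if _, err = io.ReadFull(r, lenBuf[:]); err != nil {
		return digest, nil, err
	}
	length := binary.LittleEndian.Uint64(lenBuf[:])
	if length < 4 {
		return digest, nil, errors.New("chunk shorter than fork digest")
	}
	if _, err = io.ReadFull(r, digest[:]); err != nil {
		return digest, nil, err
	}
	payload = make([]byte, length-4)
	_, err = io.ReadFull(r, payload)
	return digest, payload, err
}

func main() {
	// Build one frame the way the handler does: length covers digest + payload.
	payload := []byte("ssz-bytes")
	digest := [4]byte{0xde, 0xad, 0xbe, 0xef}
	var buf bytes.Buffer
	var lenBuf [8]byte
	binary.LittleEndian.PutUint64(lenBuf[:], uint64(len(payload)+4))
	buf.Write(lenBuf[:])
	buf.Write(digest[:])
	buf.Write(payload)

	d, p, err := readChunk(&buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("digest=%x payload=%q\n", d, p)
}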
@@ -132,6 +132,7 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
	optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
	if err != nil {
		httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
		return
	}
	if s.SyncChecker.Synced() && !optimistic {
		return

@@ -228,7 +228,7 @@ func (s *Server) attRewardsState(w http.ResponseWriter, r *http.Request) (state.
	}
	st, err := s.Stater.StateBySlot(r.Context(), nextEpochEnd)
	if err != nil {
		httputil.HandleError(w, "Could not get state for epoch's starting slot: "+err.Error(), http.StatusInternalServerError)
		shared.WriteStateFetchError(w, err)
		return nil, false
	}
	return st, true
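Several handlers in this change swap a hard-coded 500 for shared.WriteStateFetchError, letting the error type pick the status code. A hedged sketch of that idea; the helper and error type below are illustrative stand-ins, since the real shared.WriteStateFetchError has its own set of cases:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// StateNotFoundError mirrors the lookup error type exercised in the tests below.
type StateNotFoundError struct{ msg string }

func (e *StateNotFoundError) Error() string { return e.msg }

// writeStateFetchError maps state fetch failures onto HTTP statuses:
// not-found errors become 404, anything else stays a 500.
func writeStateFetchError(w http.ResponseWriter, err error) {
	var notFound *StateNotFoundError
	if errors.As(err, &notFound) {
		http.Error(w, "State not found: "+err.Error(), http.StatusNotFound)
		return
	}
	http.Error(w, "Could not fetch state: "+err.Error(), http.StatusInternalServerError)
}

func main() {
	rec := httptest.NewRecorder()
	writeStateFetchError(rec, &StateNotFoundError{msg: "state not found in the last 8192 state roots"})
	fmt.Println(rec.Code) // 404

	rec = httptest.NewRecorder()
	writeStateFetchError(rec, errors.New("internal error"))
	fmt.Println(rec.Code) // 500
}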
@@ -19,7 +19,6 @@ go_library(
        "//beacon-chain/cache:go_default_library",
        "//beacon-chain/core/feed/operation:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
        "//beacon-chain/db:go_default_library",
        "//beacon-chain/operations/attestations:go_default_library",
        "//beacon-chain/operations/synccommittee:go_default_library",
@@ -78,6 +77,7 @@ go_test(
        "//beacon-chain/rpc/core:go_default_library",
        "//beacon-chain/rpc/eth/rewards/testing:go_default_library",
        "//beacon-chain/rpc/eth/shared/testing:go_default_library",
        "//beacon-chain/rpc/lookup:go_default_library",
        "//beacon-chain/rpc/testutil:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//beacon-chain/state/stategen:go_default_library",

@@ -19,7 +19,6 @@ import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
	rpchelpers "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
@@ -898,20 +897,15 @@ func (s *Server) GetAttesterDuties(w http.ResponseWriter, r *http.Request) {
		return
	}

	var startSlot primitives.Slot
	// For next epoch requests, we use the current epoch's state since committee
	// assignments for next epoch can be computed from current epoch's state.
	epochForState := requestedEpoch
	if requestedEpoch == nextEpoch {
		startSlot, err = slots.EpochStart(currentEpoch)
	} else {
		startSlot, err = slots.EpochStart(requestedEpoch)
		epochForState = currentEpoch
	}
	st, err := s.Stater.StateByEpoch(ctx, epochForState)
	if err != nil {
		httputil.HandleError(w, fmt.Sprintf("Could not get start slot from epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
		return
	}

	st, err := s.Stater.StateBySlot(ctx, startSlot)
	if err != nil {
		httputil.HandleError(w, "Could not get state: "+err.Error(), http.StatusInternalServerError)
		shared.WriteStateFetchError(w, err)
		return
	}

@@ -1020,39 +1014,11 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
		nextEpochLookahead = true
	}

	epochStartSlot, err := slots.EpochStart(requestedEpoch)
	st, err := s.Stater.StateByEpoch(ctx, requestedEpoch)
	if err != nil {
		httputil.HandleError(w, fmt.Sprintf("Could not get start slot of epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
		shared.WriteStateFetchError(w, err)
		return
	}
	var st state.BeaconState
	// if the requested epoch is new, use the head state and the next slot cache
	if requestedEpoch < currentEpoch {
		st, err = s.Stater.StateBySlot(ctx, epochStartSlot)
		if err != nil {
			httputil.HandleError(w, fmt.Sprintf("Could not get state for slot %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
			return
		}
	} else {
		st, err = s.HeadFetcher.HeadState(ctx)
		if err != nil {
			httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v ", err), http.StatusInternalServerError)
			return
		}
		// Notice that even for Fulu requests for the next epoch, we are only advancing the state to the start of the current epoch.
		if st.Slot() < epochStartSlot {
			headRoot, err := s.HeadFetcher.HeadRoot(ctx)
			if err != nil {
				httputil.HandleError(w, fmt.Sprintf("Could not get head root: %v ", err), http.StatusInternalServerError)
				return
			}
			st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, epochStartSlot)
			if err != nil {
				httputil.HandleError(w, fmt.Sprintf("Could not process slots up to %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
				return
			}
		}
	}

	var assignments map[primitives.ValidatorIndex][]primitives.Slot
	if nextEpochLookahead {
@@ -1103,7 +1069,8 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
		httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
		return
	}
	if !sortProposerDuties(w, duties) {
	if err = sortProposerDuties(duties); err != nil {
		httputil.HandleError(w, "Could not sort proposer duties: "+err.Error(), http.StatusInternalServerError)
		return
	}

@@ -1174,14 +1141,10 @@ func (s *Server) GetSyncCommitteeDuties(w http.ResponseWriter, r *http.Request)
	}

	startingEpoch := min(requestedEpoch, currentEpoch)
	slot, err := slots.EpochStart(startingEpoch)

	st, err := s.Stater.StateByEpoch(ctx, startingEpoch)
	if err != nil {
		httputil.HandleError(w, "Could not get sync committee slot: "+err.Error(), http.StatusInternalServerError)
		return
	}
	st, err := s.Stater.State(ctx, []byte(strconv.FormatUint(uint64(slot), 10)))
	if err != nil {
		httputil.HandleError(w, "Could not get sync committee state: "+err.Error(), http.StatusInternalServerError)
		shared.WriteStateFetchError(w, err)
		return
	}

@@ -1327,7 +1290,7 @@ func (s *Server) GetLiveness(w http.ResponseWriter, r *http.Request) {
	}
	st, err = s.Stater.StateBySlot(ctx, epochEnd)
	if err != nil {
		httputil.HandleError(w, "Could not get slot for requested epoch: "+err.Error(), http.StatusInternalServerError)
		shared.WriteStateFetchError(w, err)
		return
	}
	participation, err = st.CurrentEpochParticipation()
@@ -1447,22 +1410,20 @@ func syncCommitteeDutiesAndVals(
	return duties, vals, nil
}

func sortProposerDuties(w http.ResponseWriter, duties []*structs.ProposerDuty) bool {
	ok := true
func sortProposerDuties(duties []*structs.ProposerDuty) error {
	var err error
	sort.Slice(duties, func(i, j int) bool {
		si, err := strconv.ParseUint(duties[i].Slot, 10, 64)
		if err != nil {
			httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
			ok = false
		si, parseErr := strconv.ParseUint(duties[i].Slot, 10, 64)
		if parseErr != nil {
			err = errors.Wrap(parseErr, "could not parse slot")
			return false
		}
		sj, err := strconv.ParseUint(duties[j].Slot, 10, 64)
		if err != nil {
			httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
			ok = false
		sj, parseErr := strconv.ParseUint(duties[j].Slot, 10, 64)
		if parseErr != nil {
			err = errors.Wrap(parseErr, "could not parse slot")
			return false
		}
		return si < sj
	})
	return ok
	return err
}
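sort.Slice comparators cannot return errors, so the refactor captures the first parse failure in a closed-over variable the caller checks afterwards, instead of writing an HTTP error from inside the comparator. A standalone sketch of the pattern:

package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sortNumericStrings sorts decimal strings numerically, recording the first
// value that fails to parse instead of handling the error mid-comparison.
func sortNumericStrings(vals []string) error {
	var err error
	sort.Slice(vals, func(i, j int) bool {
		a, parseErr := strconv.ParseUint(vals[i], 10, 64)
		if parseErr != nil {
			err = fmt.Errorf("could not parse %q: %w", vals[i], parseErr)
			return false
		}
		b, parseErr := strconv.ParseUint(vals[j], 10, 64)
		if parseErr != nil {
			err = fmt.Errorf("could not parse %q: %w", vals[j], parseErr)
			return false
		}
		return a < b
	})
	return err
}

func main() {
	vals := []string{"10", "2", "33"}
	if err := sortNumericStrings(vals); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(vals) // [2 10 33]
}

Note the comparator may keep running after the first failure, so the slice order is only meaningful when the returned error is nil.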
@@ -25,6 +25,7 @@ import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/synccommittee"
	p2pmock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/lookup"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
@@ -2006,6 +2007,7 @@ func TestGetAttesterDuties(t *testing.T) {
		TimeFetcher:           chain,
		SyncChecker:           &mockSync.Sync{IsSyncing: false},
		OptimisticModeFetcher: chain,
		HeadFetcher:           chain,
		BeaconDB:              db,
	}

@@ -2184,6 +2186,7 @@ func TestGetAttesterDuties(t *testing.T) {
		Stater:                &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{0: bs}},
		TimeFetcher:           chain,
		OptimisticModeFetcher: chain,
		HeadFetcher:           chain,
		SyncChecker:           &mockSync.Sync{IsSyncing: false},
		BeaconDB:              db,
	}
@@ -2224,6 +2227,62 @@ func TestGetAttesterDuties(t *testing.T) {
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusServiceUnavailable, e.Code)
	})
	t.Run("state not found returns 404", func(t *testing.T) {
		chainSlot := primitives.Slot(0)
		chain := &mockChain.ChainService{
			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
		}
		stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
		s := &Server{
			Stater:                &testutil.MockStater{CustomError: &stateNotFoundErr},
			TimeFetcher:           chain,
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			OptimisticModeFetcher: chain,
			HeadFetcher:           chain,
		}

		var body bytes.Buffer
		_, err = body.WriteString("[\"0\"]")
		require.NoError(t, err)
		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
		request.SetPathValue("epoch", "0")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetAttesterDuties(writer, request)
		assert.Equal(t, http.StatusNotFound, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusNotFound, e.Code)
		assert.StringContains(t, "State not found", e.Message)
	})
	t.Run("state fetch error returns 500", func(t *testing.T) {
		chainSlot := primitives.Slot(0)
		chain := &mockChain.ChainService{
			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
		}
		s := &Server{
			Stater:                &testutil.MockStater{CustomError: errors.New("internal error")},
			TimeFetcher:           chain,
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			OptimisticModeFetcher: chain,
			HeadFetcher:           chain,
		}

		var body bytes.Buffer
		_, err = body.WriteString("[\"0\"]")
		require.NoError(t, err)
		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
		request.SetPathValue("epoch", "0")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetAttesterDuties(writer, request)
		assert.Equal(t, http.StatusInternalServerError, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusInternalServerError, e.Code)
	})
}

func TestGetProposerDuties(t *testing.T) {
@@ -2427,6 +2486,60 @@ func TestGetProposerDuties(t *testing.T) {
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusServiceUnavailable, e.Code)
	})
	t.Run("state not found returns 404", func(t *testing.T) {
		bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
		require.NoError(t, err)
		chainSlot := primitives.Slot(0)
		chain := &mockChain.ChainService{
			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
		}
		stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
		s := &Server{
			Stater:                &testutil.MockStater{CustomError: &stateNotFoundErr},
			TimeFetcher:           chain,
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			OptimisticModeFetcher: chain,
			HeadFetcher:           chain,
		}

		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
		request.SetPathValue("epoch", "0")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetProposerDuties(writer, request)
		assert.Equal(t, http.StatusNotFound, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusNotFound, e.Code)
		assert.StringContains(t, "State not found", e.Message)
	})
	t.Run("state fetch error returns 500", func(t *testing.T) {
		bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
		require.NoError(t, err)
		chainSlot := primitives.Slot(0)
		chain := &mockChain.ChainService{
			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
		}
		s := &Server{
			Stater:                &testutil.MockStater{CustomError: errors.New("internal error")},
			TimeFetcher:           chain,
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			OptimisticModeFetcher: chain,
			HeadFetcher:           chain,
		}

		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
		request.SetPathValue("epoch", "0")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetProposerDuties(writer, request)
		assert.Equal(t, http.StatusInternalServerError, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusInternalServerError, e.Code)
	})
}
func TestGetSyncCommitteeDuties(t *testing.T) {
@@ -2457,7 +2570,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
	}
	require.NoError(t, st.SetNextSyncCommittee(nextCommittee))

	mockChainService := &mockChain.ChainService{Genesis: genesisTime}
	mockChainService := &mockChain.ChainService{Genesis: genesisTime, State: st}
	s := &Server{
		Stater:      &testutil.MockStater{BeaconState: st},
		SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -2648,7 +2761,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
			return newSyncPeriodSt
		}
	}
	mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot}
	mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot, State: newSyncPeriodSt}
	s := &Server{
		Stater:      &testutil.MockStater{BeaconState: stateFetchFn(newSyncPeriodStartSlot)},
		SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -2729,8 +2842,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
	slot, err := slots.EpochStart(1)
	require.NoError(t, err)

	st2, err := util.NewBeaconStateBellatrix()
	require.NoError(t, err)
	st2 := st.Copy()
	require.NoError(t, st2.SetSlot(slot))

	mockChainService := &mockChain.ChainService{
@@ -2744,7 +2856,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
		State: st2,
	}
	s := &Server{
		Stater:      &testutil.MockStater{BeaconState: st},
		Stater:      &testutil.MockStater{BeaconState: st2},
		SyncChecker: &mockSync.Sync{IsSyncing: false},
		TimeFetcher: mockChainService,
		HeadFetcher: mockChainService,
@@ -2789,6 +2901,62 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusServiceUnavailable, e.Code)
	})
	t.Run("state not found returns 404", func(t *testing.T) {
		slot := 2 * params.BeaconConfig().SlotsPerEpoch
		chainService := &mockChain.ChainService{
			Slot: &slot,
		}
		stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
		s := &Server{
			Stater:                &testutil.MockStater{CustomError: &stateNotFoundErr},
			TimeFetcher:           chainService,
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			OptimisticModeFetcher: chainService,
			HeadFetcher:           chainService,
		}

		var body bytes.Buffer
		_, err := body.WriteString("[\"1\"]")
		require.NoError(t, err)
		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
		request.SetPathValue("epoch", "1")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetSyncCommitteeDuties(writer, request)
		assert.Equal(t, http.StatusNotFound, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusNotFound, e.Code)
		assert.StringContains(t, "State not found", e.Message)
	})
	t.Run("state fetch error returns 500", func(t *testing.T) {
		slot := 2 * params.BeaconConfig().SlotsPerEpoch
		chainService := &mockChain.ChainService{
			Slot: &slot,
		}
		s := &Server{
			Stater:                &testutil.MockStater{CustomError: errors.New("internal error")},
			TimeFetcher:           chainService,
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			OptimisticModeFetcher: chainService,
			HeadFetcher:           chainService,
		}

		var body bytes.Buffer
		_, err := body.WriteString("[\"1\"]")
		require.NoError(t, err)
		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
		request.SetPathValue("epoch", "1")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetSyncCommitteeDuties(writer, request)
		assert.Equal(t, http.StatusInternalServerError, writer.Code)
		e := &httputil.DefaultJsonError{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
		assert.Equal(t, http.StatusInternalServerError, e.Code)
	})
}

func TestPrepareBeaconProposer(t *testing.T) {
@@ -11,6 +11,7 @@ go_library(
    deps = [
        "//beacon-chain/blockchain:go_default_library",
        "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
        "//beacon-chain/db:go_default_library",
        "//beacon-chain/db/filesystem:go_default_library",
        "//beacon-chain/rpc/core:go_default_library",

@@ -8,6 +8,7 @@ import (
	"strings"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
@@ -82,8 +83,8 @@ type StateRootNotFoundError struct {
}

// NewStateRootNotFoundError creates a new error instance.
func NewStateRootNotFoundError(stateRootsSize int) StateNotFoundError {
	return StateNotFoundError{
func NewStateRootNotFoundError(stateRootsSize int) StateRootNotFoundError {
	return StateRootNotFoundError{
		message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
	}
}
@@ -98,6 +99,7 @@ type Stater interface {
	State(ctx context.Context, id []byte) (state.BeaconState, error)
	StateRoot(ctx context.Context, id []byte) ([]byte, error)
	StateBySlot(ctx context.Context, slot primitives.Slot) (state.BeaconState, error)
	StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error)
}

// BeaconDbStater is an implementation of Stater. It retrieves states from the beacon chain database.
@@ -267,6 +269,46 @@ func (p *BeaconDbStater) StateBySlot(ctx context.Context, target primitives.Slot
	return st, nil
}

// StateByEpoch returns the state for the start of the requested epoch.
// For current or next epoch, it uses the head state and next slot cache for efficiency.
// For past epochs, it replays blocks from the most recent canonical state.
func (p *BeaconDbStater) StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error) {
	ctx, span := trace.StartSpan(ctx, "statefetcher.StateByEpoch")
	defer span.End()

	targetSlot, err := slots.EpochStart(epoch)
	if err != nil {
		return nil, errors.Wrap(err, "could not get epoch start slot")
	}

	currentSlot := p.GenesisTimeFetcher.CurrentSlot()
	currentEpoch := slots.ToEpoch(currentSlot)

	// For past epochs, use the replay mechanism
	if epoch < currentEpoch {
		return p.StateBySlot(ctx, targetSlot)
	}

	// For current or next epoch, use head state + next slot cache (much faster)
	headState, err := p.ChainInfoFetcher.HeadState(ctx)
	if err != nil {
		return nil, errors.Wrap(err, "could not get head state")
	}

	// If head state is already at or past the target slot, return it
	if headState.Slot() >= targetSlot {
		return headState, nil
	}

	// Process slots using the next slot cache
	headRoot := p.ChainInfoFetcher.CachedHeadRoot()
	st, err := transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot[:], targetSlot)
	if err != nil {
		return nil, errors.Wrapf(err, "could not process slots up to %d", targetSlot)
	}
	return st, nil
}

func (p *BeaconDbStater) headStateRoot(ctx context.Context) ([]byte, error) {
	b, err := p.ChainInfoFetcher.HeadBlock(ctx)
	if err != nil {
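For callers, StateByEpoch folds the old compute-epoch-start-then-fetch dance into one call that picks the cheap path automatically. A toy mirror of the dispatch logic, under the simplifying assumption of 32 slots per epoch and string-typed "states" (the real method consults the genesis time fetcher and the next slot cache):

package main

import (
	"context"
	"fmt"
)

const slotsPerEpoch = 32

type epochStater struct{ currentEpoch uint64 }

// stateBySlot stands in for the replay-based fetch used for past epochs.
func (s epochStater) stateBySlot(_ context.Context, slot uint64) string {
	return fmt.Sprintf("replayed state at slot %d", slot)
}

// stateByEpoch mirrors BeaconDbStater.StateByEpoch's dispatch: past epochs
// replay, current/next epochs reuse the head state (advancing it if behind).
func (s epochStater) stateByEpoch(ctx context.Context, epoch uint64) string {
	targetSlot := epoch * slotsPerEpoch
	if epoch < s.currentEpoch {
		return s.stateBySlot(ctx, targetSlot) // past: replay mechanism
	}
	return "head state, advanced via next slot cache if behind the epoch start"
}

func main() {
	s := epochStater{currentEpoch: 10}
	fmt.Println(s.stateByEpoch(context.Background(), 3))  // replay path
	fmt.Println(s.stateByEpoch(context.Background(), 10)) // head-state path
	fmt.Println(s.stateByEpoch(context.Background(), 11)) // next epoch also uses head state
}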
@@ -444,3 +444,111 @@ func TestStateBySlot_AfterHeadSlot(t *testing.T) {
	require.NoError(t, err)
	assert.Equal(t, primitives.Slot(101), st.Slot())
}

func TestStateByEpoch(t *testing.T) {
	ctx := t.Context()
	slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch

	t.Run("current epoch uses head state", func(t *testing.T) {
		// Head is at slot 5 (epoch 0), requesting epoch 0
		headSlot := primitives.Slot(5)
		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
		require.NoError(t, err)

		currentSlot := headSlot
		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}

		st, err := p.StateByEpoch(ctx, 0)
		require.NoError(t, err)
		// Should return head state since it's already past epoch start
		assert.Equal(t, headSlot, st.Slot())
	})

	t.Run("current epoch processes slots to epoch start", func(t *testing.T) {
		// Head is at slot 5 (epoch 0), requesting epoch 1
		// Current slot is 32 (epoch 1), so epoch 1 is current epoch
		headSlot := primitives.Slot(5)
		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
		require.NoError(t, err)

		currentSlot := slotsPerEpoch // slot 32, epoch 1
		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}

		// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
		// In real usage, the transition package handles this properly
		_, err = p.StateByEpoch(ctx, 1)
		// The error is expected since we don't have a fully initialized beacon state
		// that can process slots (missing committees, etc.)
		assert.NotNil(t, err)
	})

	t.Run("past epoch uses replay", func(t *testing.T) {
		// Head is at epoch 2, requesting epoch 0 (past)
		headSlot := slotsPerEpoch * 2 // slot 64, epoch 2
		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
		require.NoError(t, err)

		pastEpochSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: 0})
		require.NoError(t, err)

		currentSlot := headSlot
		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
		mockReplayer := mockstategen.NewReplayerBuilder()
		mockReplayer.SetMockStateForSlot(pastEpochSt, 0)
		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock, ReplayerBuilder: mockReplayer}

		st, err := p.StateByEpoch(ctx, 0)
		require.NoError(t, err)
		assert.Equal(t, primitives.Slot(0), st.Slot())
	})

	t.Run("next epoch uses head state path", func(t *testing.T) {
		// Head is at slot 30 (epoch 0), requesting epoch 1 (next)
		// Current slot is 30 (epoch 0), so epoch 1 is next epoch
		headSlot := primitives.Slot(30)
		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
		require.NoError(t, err)

		currentSlot := headSlot
		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}

		// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
		_, err = p.StateByEpoch(ctx, 1)
		// The error is expected since we don't have a fully initialized beacon state
		assert.NotNil(t, err)
	})

	t.Run("head state already at target slot returns immediately", func(t *testing.T) {
		// Head is at slot 32 (epoch 1 start), requesting epoch 1
		headSlot := slotsPerEpoch // slot 32
		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
		require.NoError(t, err)

		currentSlot := headSlot
		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}

		st, err := p.StateByEpoch(ctx, 1)
		require.NoError(t, err)
		assert.Equal(t, headSlot, st.Slot())
	})

	t.Run("head state past target slot returns head state", func(t *testing.T) {
		// Head is at slot 40, requesting epoch 1 (starts at slot 32)
		headSlot := primitives.Slot(40)
		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
		require.NoError(t, err)

		currentSlot := headSlot
		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}

		st, err := p.StateByEpoch(ctx, 1)
		require.NoError(t, err)
		// Returns head state since it's already >= epoch start
		assert.Equal(t, headSlot, st.Slot())
	})
}
@@ -7,6 +7,7 @@ import (
	"sync"
	"time"

	"github.com/OffchainLabs/go-bitfield"
	builderapi "github.com/OffchainLabs/prysm/v7/api/client/builder"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
@@ -308,6 +309,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
	}

	rob, err := blocks.NewROBlockWithRoot(block, root)
	var partialColumns []blocks.PartialDataColumn
	if block.IsBlinded() {
		block, blobSidecars, err = vs.handleBlindedBlock(ctx, block)
		if errors.Is(err, builderapi.ErrBadGateway) {
@@ -315,7 +317,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
			return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
		}
	} else if block.Version() >= version.Deneb {
		blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
		blobSidecars, dataColumnSidecars, partialColumns, err = vs.handleUnblindedBlock(rob, req)
	}
	if err != nil {
		return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
@@ -335,7 +337,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign

	wg.Wait()

	if err := vs.broadcastAndReceiveSidecars(ctx, block, root, blobSidecars, dataColumnSidecars); err != nil {
	if err := vs.broadcastAndReceiveSidecars(ctx, block, root, blobSidecars, dataColumnSidecars, partialColumns); err != nil {
		return nil, status.Errorf(codes.Internal, "Could not broadcast/receive sidecars: %v", err)
	}
	if err := <-errChan; err != nil {
@@ -352,9 +354,10 @@ func (vs *Server) broadcastAndReceiveSidecars(
	root [fieldparams.RootLength]byte,
	blobSidecars []*ethpb.BlobSidecar,
	dataColumnSidecars []blocks.RODataColumn,
	partialColumns []blocks.PartialDataColumn,
) error {
	if block.Version() >= version.Fulu {
		if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSidecars); err != nil {
		if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSidecars, partialColumns); err != nil {
			return errors.Wrap(err, "broadcast and receive data columns")
		}
		return nil
@@ -403,34 +406,41 @@ func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.Signe
func (vs *Server) handleUnblindedBlock(
	block blocks.ROBlock,
	req *ethpb.GenericSignedBeaconBlock,
) ([]*ethpb.BlobSidecar, []blocks.RODataColumn, error) {
) ([]*ethpb.BlobSidecar, []blocks.RODataColumn, []blocks.PartialDataColumn, error) {
	rawBlobs, proofs, err := blobsAndProofs(req)
	if err != nil {
		return nil, nil, err
		return nil, nil, nil, err
	}

	if block.Version() >= version.Fulu {
		// Compute cells and proofs from the blobs and cell proofs.
		cellsPerBlob, proofsPerBlob, err := peerdas.ComputeCellsAndProofsFromFlat(rawBlobs, proofs)
		if err != nil {
			return nil, nil, errors.Wrap(err, "compute cells and proofs")
			return nil, nil, nil, errors.Wrap(err, "compute cells and proofs")
		}

		// Construct data column sidecars from the signed block and cells and proofs.
		roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(block))
		if err != nil {
			return nil, nil, errors.Wrap(err, "data column sidcars")
			return nil, nil, nil, errors.Wrap(err, "data column sidcars")
		}

		return nil, roDataColumnSidecars, nil
		included := bitfield.NewBitlist(uint64(len(cellsPerBlob)))
		included = included.Not() // all bits set to 1
		partialColumns, err := peerdas.PartialColumns(included, cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(block))
		if err != nil {
			return nil, nil, nil, errors.Wrap(err, "data column sidcars")
		}

		return nil, roDataColumnSidecars, partialColumns, nil
	}

	blobSidecars, err := BuildBlobSidecars(block, rawBlobs, proofs)
	if err != nil {
		return nil, nil, errors.Wrap(err, "build blob sidecars")
		return nil, nil, nil, errors.Wrap(err, "build blob sidecars")
	}

	return blobSidecars, nil, nil
	return blobSidecars, nil, nil, nil
}

// broadcastReceiveBlock broadcasts a block and handles its reception.
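The inclusion bitlist above is created empty and inverted so every blob is marked as included, since the proposer built all cells itself. A small hedged sketch of the equivalent explicit loop, assuming go-bitfield's Bitlist API (NewBitlist/SetBitAt/BitAt, as imported in the hunk above):

package main

import (
	"fmt"

	"github.com/OffchainLabs/go-bitfield"
)

func main() {
	const numBlobs = 6

	// Equivalent to bitfield.NewBitlist(numBlobs).Not(): set every bit explicitly.
	included := bitfield.NewBitlist(numBlobs)
	for i := uint64(0); i < numBlobs; i++ {
		included.SetBitAt(i, true)
	}

	for i := uint64(0); i < numBlobs; i++ {
		fmt.Printf("blob %d included: %v\n", i, included.BitAt(i))
	}
}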
@@ -497,7 +507,7 @@ func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethp
}

// broadcastAndReceiveDataColumns handles the broadcasting and reception of data columns sidecars.
func (vs *Server) broadcastAndReceiveDataColumns(ctx context.Context, roSidecars []blocks.RODataColumn) error {
func (vs *Server) broadcastAndReceiveDataColumns(ctx context.Context, roSidecars []blocks.RODataColumn, partialColumns []blocks.PartialDataColumn) error {
	// We built this block ourselves, so we can upgrade the read only data column sidecar into a verified one.
	verifiedSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roSidecars))
	for _, sidecar := range roSidecars {
@@ -506,7 +516,7 @@ func (vs *Server) broadcastAndReceiveDataColumns(ctx context.Context, roSidecars
	}

	// Broadcast sidecars (non blocking).
	if err := vs.P2P.BroadcastDataColumnSidecars(ctx, verifiedSidecars); err != nil {
	if err := vs.P2P.BroadcastDataColumnSidecars(ctx, verifiedSidecars, partialColumns); err != nil {
		return errors.Wrap(err, "broadcast data column sidecars")
	}

@@ -26,5 +26,6 @@ go_library(
        "//proto/prysm/v1alpha1:go_default_library",
        "//testing/require:go_default_library",
        "//testing/util:go_default_library",
        "//time/slots:go_default_library",
    ],
)

@@ -6,6 +6,7 @@ import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v7/time/slots"
)

// MockStater is a fake implementation of lookup.Stater.
@@ -14,6 +15,7 @@ type MockStater struct {
	StateProviderFunc func(ctx context.Context, stateId []byte) (state.BeaconState, error)
	BeaconStateRoot   []byte
	StatesBySlot      map[primitives.Slot]state.BeaconState
	StatesByEpoch     map[primitives.Epoch]state.BeaconState
	StatesByRoot      map[[32]byte]state.BeaconState
	CustomError       error
}
@@ -43,3 +45,22 @@ func (m *MockStater) StateRoot(context.Context, []byte) ([]byte, error) {
func (m *MockStater) StateBySlot(_ context.Context, s primitives.Slot) (state.BeaconState, error) {
	return m.StatesBySlot[s], nil
}

// StateByEpoch --
func (m *MockStater) StateByEpoch(_ context.Context, e primitives.Epoch) (state.BeaconState, error) {
	if m.CustomError != nil {
		return nil, m.CustomError
	}
	if m.StatesByEpoch != nil {
		return m.StatesByEpoch[e], nil
	}
	// Fall back to StatesBySlot if StatesByEpoch is not set
	slot, err := slots.EpochStart(e)
	if err != nil {
		return nil, err
	}
	if m.StatesBySlot != nil {
		return m.StatesBySlot[slot], nil
	}
	return m.BeaconState, nil
}

@@ -6,6 +6,7 @@ import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
	"github.com/sirupsen/logrus"
@@ -37,76 +38,84 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
		return nil
	}

	// Start at previous finalized slot, stop at current finalized slot (it will be handled in the next migration).
	// If the slot is on archived point, save the state of that slot to the DB.
	for slot := oldFSlot; slot < fSlot; slot++ {
	// Calculate the first archived point slot >= oldFSlot (but > 0).
	// This avoids iterating through every slot and only visits archived points directly.
	var startSlot primitives.Slot
	if oldFSlot == 0 {
		startSlot = s.slotsPerArchivedPoint
	} else {
		// Round up to the next archived point
		startSlot = (oldFSlot + s.slotsPerArchivedPoint - 1) / s.slotsPerArchivedPoint * s.slotsPerArchivedPoint
	}

	// Start at the first archived point after old finalized slot, stop before current finalized slot.
	// Jump directly between archived points.
	for slot := startSlot; slot < fSlot; slot += s.slotsPerArchivedPoint {
		if ctx.Err() != nil {
			return ctx.Err()
		}

		if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
			cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
		cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
		if err != nil {
			return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
		}

		var aRoot [32]byte
		var aState state.BeaconState

		// When the epoch boundary state is not in cache due to skip slot scenario,
		// we have to regenerate the state which will represent epoch boundary.
		// By finding the highest available block below epoch boundary slot, we
		// generate the state for that block root.
		if exists {
			aRoot = cached.root
			aState = cached.state
		} else {
			_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
			if err != nil {
				return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
				return err
			}

			var aRoot [32]byte
			var aState state.BeaconState

			// When the epoch boundary state is not in cache due to skip slot scenario,
			// we have to regenerate the state which will represent epoch boundary.
			// By finding the highest available block below epoch boundary slot, we
			// generate the state for that block root.
			if exists {
				aRoot = cached.root
				aState = cached.state
			} else {
				_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
			// Given the block has been finalized, the db should not have more than one block in a given slot.
			// We should error out when this happens.
			if len(roots) != 1 {
				return errUnknownBlock
			}
			aRoot = roots[0]
			// There's no need to generate the state if the state already exists in the DB.
			// We can skip saving the state.
			if !s.beaconDB.HasState(ctx, aRoot) {
				aState, err = s.StateByRoot(ctx, aRoot)
				if err != nil {
					return err
				}
				// Given the block has been finalized, the db should not have more than one block in a given slot.
				// We should error out when this happens.
				if len(roots) != 1 {
					return errUnknownBlock
				}
				aRoot = roots[0]
				// There's no need to generate the state if the state already exists in the DB.
				// We can skip saving the state.
				if !s.beaconDB.HasState(ctx, aRoot) {
					aState, err = s.StateByRoot(ctx, aRoot)
					if err != nil {
						return err
					}
				}
			}

			if s.beaconDB.HasState(ctx, aRoot) {
				// If you are migrating a state and its already part of the hot state cache saved to the db,
				// you can just remove it from the hot state cache as it becomes redundant.
				s.saveHotStateDB.lock.Lock()
				roots := s.saveHotStateDB.blockRootsOfSavedStates
				for i := range roots {
					if aRoot == roots[i] {
						s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
						// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
						// Break here is ok.
						break
					}
				}
				s.saveHotStateDB.lock.Unlock()
				continue
			}

			if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
				return err
			}
			log.WithFields(
				logrus.Fields{
					"slot": aState.Slot(),
					"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
				}).Info("Saved state in DB")
		}

		if s.beaconDB.HasState(ctx, aRoot) {
			// If you are migrating a state and its already part of the hot state cache saved to the db,
			// you can just remove it from the hot state cache as it becomes redundant.
			s.saveHotStateDB.lock.Lock()
			roots := s.saveHotStateDB.blockRootsOfSavedStates
			for i := range roots {
				if aRoot == roots[i] {
					s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
					// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
					// Break here is ok.
					break
				}
			}
			s.saveHotStateDB.lock.Unlock()
			continue
		}

		if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
			return err
		}
		log.WithFields(
			logrus.Fields{
				"slot": aState.Slot(),
				"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
			}).Info("Saved state in DB")
	}

	// Update finalized info in memory.
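The replacement loop depends on rounding the old finalized slot up to the next multiple of slotsPerArchivedPoint with integer arithmetic, i.e. ceil(x/n)*n. A small self-contained check of that formula, with an example spacing of 2048 slots:

package main

import "fmt"

// nextArchivedPoint rounds slot up to the next multiple of n (n > 0),
// matching the (x + n - 1) / n * n expression in MigrateToCold.
func nextArchivedPoint(slot, n uint64) uint64 {
	return (slot + n - 1) / n * n
}

func main() {
	const n = 2048 // example archived-point spacing
	for _, slot := range []uint64{1, 2047, 2048, 2049, 6144} {
		fmt.Printf("slot %5d -> next archived point %5d\n", slot, nextArchivedPoint(slot, n))
	}
	// slot 1 and 2047 round up to 2048; 2048 stays at 2048 (a boundary is not
	// skipped); 2049 rounds up to 4096; 6144 is already a boundary.
}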
@@ -58,6 +58,7 @@ go_library(
        "validate_bls_to_execution_change.go",
        "validate_data_column.go",
        "validate_light_client.go",
        "validate_partial_header.go",
        "validate_proposer_slashing.go",
        "validate_sync_committee_message.go",
        "validate_sync_contribution_proof.go",
@@ -98,6 +99,7 @@ go_library(
        "//beacon-chain/operations/voluntaryexits:go_default_library",
        "//beacon-chain/p2p:go_default_library",
        "//beacon-chain/p2p/encoder:go_default_library",
        "//beacon-chain/p2p/partialdatacolumnbroadcaster:go_default_library",
        "//beacon-chain/p2p/peers:go_default_library",
        "//beacon-chain/p2p/types:go_default_library",
        "//beacon-chain/slasher/types:go_default_library",

@@ -2,6 +2,7 @@ package sync

import (
	"context"
	"iter"
	"time"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
@@ -22,9 +23,16 @@ type signatureVerifier struct {
	resChan chan error
}

type errorWithSegment struct {
	err error
	// segment is only available if the batched verification failed
	segment peerdas.CellProofBundleSegment
}

type kzgVerifier struct {
	dataColumns []blocks.RODataColumn
	resChan     chan error
	sizeHint   int
	cellProofs iter.Seq[blocks.CellProofBundle]
	resChan    chan errorWithSegment
}

// A routine that runs in the background to perform batch
@@ -59,8 +67,9 @@ func (s *Service) verifierRoutine() {
// A routine that runs in the background to perform batch
// KZG verifications by draining the channel and processing all pending requests.
func (s *Service) kzgVerifierRoutine() {
	kzgBatch := make([]*kzgVerifier, 0, 1)
	for {
		kzgBatch := make([]*kzgVerifier, 0, 1)
		kzgBatch = kzgBatch[:0]
		select {
		case <-s.ctx.Done():
			return
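Hoisting the slice out of the loop and truncating with kzgBatch[:0] reuses the backing array across iterations instead of reallocating each time. A standalone sketch of the pattern:

package main

import "fmt"

func main() {
	// Allocate once; the capacity survives the per-iteration reset.
	batch := make([]int, 0, 4)
	for round := 0; round < 3; round++ {
		batch = batch[:0] // drop contents, keep capacity
		for i := 0; i <= round; i++ {
			batch = append(batch, i)
		}
		fmt.Printf("round %d: len=%d cap=%d %v\n", round, len(batch), cap(batch), batch)
	}
}

This is only safe because each batch is fully consumed (results sent to the waiting verifiers) before the next truncation; holding references into the slice across iterations would alias reused memory.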
@@ -156,42 +165,74 @@ func performBatchAggregation(aggSet *bls.SignatureBatch) (*bls.SignatureBatch, e
}

func (s *Service) validateWithKzgBatchVerifier(ctx context.Context, dataColumns []blocks.RODataColumn) (pubsub.ValidationResult, error) {
	_, span := trace.StartSpan(ctx, "sync.validateWithKzgBatchVerifier")
	defer span.End()
	if len(dataColumns) == 0 {
		return pubsub.ValidationReject, errors.New("no data columns provided")
	}

	timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second

	resChan := make(chan error)
	verificationSet := &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
	s.kzgChan <- verificationSet

	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	select {
	case <-ctx.Done():
		return pubsub.ValidationIgnore, ctx.Err() // parent context canceled, give up
	case err := <-resChan:
		if err != nil {
			log.WithError(err).Trace("Could not perform batch verification")
			tracing.AnnotateError(span, err)
			return s.validateUnbatchedColumnsKzg(ctx, dataColumns)
	// If there are no cells in this column, we can return early since there's nothing to validate.
	allEmpty := true
	for _, column := range dataColumns {
		if len(column.Column) != 0 {
			allEmpty = false
			break
		}
	}
	if allEmpty {
		return pubsub.ValidationAccept, nil
	}
	sizeHint := len(dataColumns[0].Column) * len(dataColumns)
	err := s.validateKZGProofs(ctx, sizeHint, blocks.RODataColumnsToCellProofBundles(dataColumns))
	if ctx.Err() != nil {
		return pubsub.ValidationIgnore, ctx.Err()
	} else if err != nil {
		return pubsub.ValidationReject, err
	}
	return pubsub.ValidationAccept, nil
}

func (s *Service) validateUnbatchedColumnsKzg(ctx context.Context, columns []blocks.RODataColumn) (pubsub.ValidationResult, error) {
func (s *Service) validateKZGProofs(ctx context.Context, sizeHint int, cellProofs iter.Seq[blocks.CellProofBundle]) error {
	_, span := trace.StartSpan(ctx, "sync.validateKZGProofs")
	defer span.End()

	timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second

	resChan := make(chan errorWithSegment, 1)
	verificationSet := &kzgVerifier{sizeHint: sizeHint, cellProofs: cellProofs, resChan: resChan}
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	select {
	case s.kzgChan <- verificationSet:
	case <-ctx.Done():
		return ctx.Err()
	}

	select {
	case <-ctx.Done():
		return ctx.Err() // parent context canceled, give up
	case errWithSegment := <-resChan:
		if errWithSegment.err != nil {
			err := errWithSegment.err
			log.WithError(err).Trace("Could not perform batch verification of cells")
			tracing.AnnotateError(span, err)
			// We failed batch verification. Try again in this goroutine without batching
			return validateUnbatchedKZGProofs(ctx, errWithSegment.segment)
		}
	}
	return nil
}

func validateUnbatchedKZGProofs(ctx context.Context, segment peerdas.CellProofBundleSegment) error {
	_, span := trace.StartSpan(ctx, "sync.validateUnbatchedColumnsKzg")
	defer span.End()
	start := time.Now()
	if err := peerdas.VerifyDataColumnsSidecarKZGProofs(columns); err != nil {
	if err := segment.Verify(); err != nil {
		err = errors.Wrap(err, "could not verify")
		tracing.AnnotateError(span, err)
		return pubsub.ValidationReject, err
		return err
	}
	verification.DataColumnBatchKZGVerificationHistogram.WithLabelValues("fallback").Observe(float64(time.Since(start).Milliseconds()))
	return pubsub.ValidationAccept, nil
	return nil
}

func verifyKzgBatch(kzgBatch []*kzgVerifier) {
@@ -199,14 +240,16 @@ func verifyKzgBatch(kzgBatch []*kzgVerifier) {
		return
	}

	allDataColumns := make([]blocks.RODataColumn, 0, len(kzgBatch))
	cellProofIters := make([]iter.Seq[blocks.CellProofBundle], 0, len(kzgBatch))
	var sizeHint int
	for _, kzgVerifier := range kzgBatch {
		allDataColumns = append(allDataColumns, kzgVerifier.dataColumns...)
		sizeHint += kzgVerifier.sizeHint
		cellProofIters = append(cellProofIters, kzgVerifier.cellProofs)
	}

	var verificationErr error
	start := time.Now()
	err := peerdas.VerifyDataColumnsSidecarKZGProofs(allDataColumns)
	segments, err := peerdas.BatchVerifyDataColumnsCellsKZGProofs(sizeHint, cellProofIters)
	if err != nil {
		verificationErr = errors.Wrap(err, "batch KZG verification failed")
	} else {
@@ -214,7 +257,14 @@ func verifyKzgBatch(kzgBatch []*kzgVerifier) {
	}

	// Send the same result to all verifiers in the batch
	for _, verifier := range kzgBatch {
		verifier.resChan <- verificationErr
	for i, verifier := range kzgBatch {
		var segment peerdas.CellProofBundleSegment
		if verificationErr != nil {
			segment = segments[i]
		}
		verifier.resChan <- errorWithSegment{
			err:     verificationErr,
			segment: segment,
		}
	}
}
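The rewritten validateKZGProofs guards both the send into the worker channel and the wait for the result with the same context, which is exactly what the deadlock and cancellation tests below exercise. A minimal standalone sketch of that pattern:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// submitAndWait sends a request to workCh and waits for the reply, giving up
// on either step if ctx expires, so a busy or stopped worker cannot block the caller.
func submitAndWait(ctx context.Context, workCh chan<- chan error) error {
	resCh := make(chan error, 1) // buffered: the worker never blocks replying

	select {
	case workCh <- resCh:
	case <-ctx.Done():
		return ctx.Err() // worker busy or gone; nothing was enqueued
	}

	select {
	case err := <-resCh:
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	workCh := make(chan chan error) // unbuffered, like s.kzgChan

	// No worker is draining workCh, so the send must time out instead of hanging.
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	err := submitAndWait(ctx, workCh)
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}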
@@ -7,6 +7,7 @@ import (
	"time"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v7/testing/assert"
@@ -87,12 +88,12 @@ func TestVerifierRoutine(t *testing.T) {
	go service.kzgVerifierRoutine()

	dataColumns := createValidTestDataColumns(t, 1)
	resChan := make(chan error, 1)
	service.kzgChan <- &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
	resChan := make(chan errorWithSegment, 1)
	service.kzgChan <- &kzgVerifier{sizeHint: 1, cellProofs: blocks.RODataColumnsToCellProofBundles(dataColumns), resChan: resChan}

	select {
	case err := <-resChan:
		require.NoError(t, err)
	case errWithSegment := <-resChan:
		require.NoError(t, errWithSegment.err)
	case <-time.After(time.Second):
		t.Fatal("timeout waiting for verification result")
	}
@@ -108,19 +109,19 @@ func TestVerifierRoutine(t *testing.T) {
	go service.kzgVerifierRoutine()

	const numRequests = 5
	resChans := make([]chan error, numRequests)
	resChans := make([]chan errorWithSegment, numRequests)

	for i := range numRequests {
		dataColumns := createValidTestDataColumns(t, 1)
		resChan := make(chan error, 1)
		resChan := make(chan errorWithSegment, 1)
		resChans[i] = resChan
		service.kzgChan <- &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
		service.kzgChan <- &kzgVerifier{sizeHint: 1, cellProofs: blocks.RODataColumnsToCellProofBundles(dataColumns), resChan: resChan}
	}

	for i := range numRequests {
		select {
		case err := <-resChans[i]:
			require.NoError(t, err)
		case errWithSegment := <-resChans[i]:
			require.NoError(t, errWithSegment.err)
		case <-time.After(time.Second):
			t.Fatalf("timeout waiting for verification result %d", i)
		}
@@ -157,14 +158,14 @@ func TestVerifyKzgBatch(t *testing.T) {

	t.Run("all valid data columns succeed", func(t *testing.T) {
		dataColumns := createValidTestDataColumns(t, 3)
		resChan := make(chan error, 1)
		kzgVerifiers := []*kzgVerifier{{dataColumns: dataColumns, resChan: resChan}}
		resChan := make(chan errorWithSegment, 1)
		kzgVerifiers := []*kzgVerifier{{sizeHint: 3, cellProofs: blocks.RODataColumnsToCellProofBundles(dataColumns), resChan: resChan}}

		verifyKzgBatch(kzgVerifiers)

		select {
		case err := <-resChan:
			require.NoError(t, err)
		case errWithSegment := <-resChan:
			require.NoError(t, errWithSegment.err)
		case <-time.After(time.Second):
			t.Fatal("timeout waiting for batch verification")
		}
@@ -175,14 +176,14 @@ func TestVerifyKzgBatch(t *testing.T) {
		invalidColumns := createInvalidTestDataColumns(t, 1)
		allColumns := append(validColumns, invalidColumns...)

		resChan := make(chan error, 1)
		kzgVerifiers := []*kzgVerifier{{dataColumns: allColumns, resChan: resChan}}
		resChan := make(chan errorWithSegment, 1)
		kzgVerifiers := []*kzgVerifier{{sizeHint: 2, cellProofs: blocks.RODataColumnsToCellProofBundles(allColumns), resChan: resChan}}

		verifyKzgBatch(kzgVerifiers)

		select {
		case err := <-resChan:
			assert.NotNil(t, err)
		case errWithSegment := <-resChan:
			assert.NotNil(t, errWithSegment.err)
		case <-time.After(time.Second):
			t.Fatal("timeout waiting for batch verification")
		}
@@ -268,6 +269,71 @@ func TestKzgBatchVerifierFallback(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestValidateWithKzgBatchVerifier_DeadlockOnTimeout(t *testing.T) {
|
||||
err := kzg.Start()
|
||||
require.NoError(t, err)
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.SecondsPerSlot = 0
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
ctx, cancel := context.WithCancel(t.Context())
|
||||
defer cancel()
|
||||
|
||||
service := &Service{
|
||||
ctx: ctx,
|
||||
kzgChan: make(chan *kzgVerifier),
|
||||
}
|
||||
go service.kzgVerifierRoutine()
|
||||
|
||||
result, err := service.validateWithKzgBatchVerifier(context.Background(), nil)
|
||||
require.Equal(t, pubsub.ValidationIgnore, result)
|
||||
require.ErrorIs(t, err, context.DeadlineExceeded)
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
_, _ = service.validateWithKzgBatchVerifier(context.Background(), nil)
|
||||
close(done)
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-done:
|
||||
case <-time.After(500 * time.Millisecond):
|
||||
t.Fatal("validateWithKzgBatchVerifier blocked")
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidateWithKzgBatchVerifier_ContextCanceledBeforeSend(t *testing.T) {
|
||||
cancelledCtx, cancel := context.WithCancel(t.Context())
|
||||
cancel()
|
||||
|
||||
service := &Service{
|
||||
ctx: context.Background(),
|
||||
kzgChan: make(chan *kzgVerifier),
|
||||
}
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
result, err := service.validateWithKzgBatchVerifier(cancelledCtx, nil)
|
||||
require.Equal(t, pubsub.ValidationIgnore, result)
|
||||
require.ErrorIs(t, err, context.Canceled)
|
||||
close(done)
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-done:
|
||||
case <-time.After(500 * time.Millisecond):
|
||||
t.Fatal("validateWithKzgBatchVerifier did not return after context cancellation")
|
||||
}
|
||||
|
||||
select {
|
||||
case <-service.kzgChan:
|
||||
t.Fatal("verificationSet was sent to kzgChan despite canceled context")
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
func createValidTestDataColumns(t *testing.T, count int) []blocks.RODataColumn {
|
||||
_, roSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, count)
|
||||
if len(roSidecars) >= count {
|
||||
|
||||
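Aside: the tests above all rely on the same pattern — a buffered result channel plus a select with a timeout guard, so a slow or stuck verifier can never hang the test. A minimal, self-contained sketch of that pattern (errorWithSegment here is a stand-in for the real sync-package type, not Prysm's definition):

package main

import (
	"fmt"
	"time"
)

// errorWithSegment stands in for the result type used above: an error plus
// which segment of the batch it refers to.
type errorWithSegment struct {
	err     error
	segment int
}

func main() {
	// Buffered channel (capacity 1) so the verifier goroutine can always
	// deliver its result, even if the consumer has already timed out.
	resChan := make(chan errorWithSegment, 1)

	go func() {
		time.Sleep(10 * time.Millisecond) // simulated verification work
		resChan <- errorWithSegment{err: nil, segment: 0}
	}()

	select {
	case res := <-resChan:
		fmt.Println("segment", res.segment, "err:", res.err)
	case <-time.After(time.Second):
		fmt.Println("timeout waiting for verification result")
	}
}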
@@ -204,6 +204,13 @@ var (
},
)

dataColumnsRecoveredFromELAttempts = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_attempts",
Help: "Count the number of data column recovery attempts from the execution layer.",
},
)

dataColumnsRecoveredFromELTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_total",

@@ -242,6 +249,23 @@ var (
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)

dataColumnSidecarsObtainedViaELCount = promauto.NewSummary(
prometheus.SummaryOpts{
Name: "data_column_obtained_via_el_count",
Help: "Count the number of data column sidecars obtained via the execution layer.",
},
)

usefulFullColumnsReceivedTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Name: "beacon_useful_full_columns_received_total",
Help: "Number of useful full columns (any cell being useful) received",
}, []string{"column_index"})

partialMessageColumnCompletionsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
Name: "beacon_partial_message_column_completions_total",
Help: "How often the partial message first completed the column",
}, []string{"column_index"})
)

func (s *Service) updateMetrics() {
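For readers unfamiliar with the metrics pattern in this hunk: promauto registers each collector with the default Prometheus registry at package initialization, so the counters are live as soon as the package loads. A minimal sketch (the metric names below are placeholders, not the ones added above):

package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// Plain counter: registered once, incremented anywhere.
	recoveryAttempts = promauto.NewCounter(prometheus.CounterOpts{
		Name: "example_recovery_attempts_total",
		Help: "Number of recovery attempts.",
	})

	// Labeled counter vector, keyed by column index like the hunk above.
	usefulColumns = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "example_useful_columns_total",
		Help: "Useful columns received, labeled by column index.",
	}, []string{"column_index"})
)

func main() {
	recoveryAttempts.Inc()
	usefulColumns.WithLabelValues("42").Inc()
}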
@@ -265,7 +265,7 @@ func (s *Service) processVerifiedAttestation(
if key, err := generateUnaggregatedAttCacheKey(broadcastAtt); err != nil {
log.WithError(err).Error("Failed to generate cache key for attestation tracking")
} else {
s.setSeenUnaggregatedAtt(key)
_ = s.setSeenUnaggregatedAtt(key)
}

valCount, err := helpers.ActiveValidatorCount(ctx, preState, slots.ToEpoch(data.Slot))

@@ -320,7 +320,7 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
return
}

s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())

if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
log.WithError(err).Debug("Could not broadcast aggregated attestation")
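The change above reflects that the setSeen* helpers now return whether the entry was newly added; call sites that only want the side effect discard the result. A minimal sketch of the "mark seen, report first sighting" shape these helpers now follow (a plain map guarded by a mutex stands in for Prysm's LRU-backed caches):

package main

import (
	"fmt"
	"sync"
)

// seenSet records keys and reports whether a key was seen for the first
// time, so callers can record and deduplicate in one atomic step.
type seenSet struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func (s *seenSet) markSeen(key string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.seen[key]; ok {
		return false // already seen
	}
	s.seen[key] = struct{}{}
	return true // first sighting
}

func main() {
	s := &seenSet{seen: make(map[string]struct{})}
	fmt.Println(s.markSeen("att-key")) // true
	fmt.Println(s.markSeen("att-key")) // false
}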
@@ -5,6 +5,8 @@ import (
"fmt"
"reflect"
"runtime/debug"
"slices"
"strconv"
"strings"
"sync"
"time"
@@ -14,11 +16,13 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/partialdatacolumnbroadcaster"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
@@ -61,6 +65,15 @@ type subscribeParameters struct {
// getSubnetsRequiringPeers is a function that returns all subnets that require peers to be found
// but for which no subscriptions are needed.
getSubnetsRequiringPeers func(currentSlot primitives.Slot) map[uint64]bool

partial *partialSubscribeParameters
}

type partialSubscribeParameters struct {
broadcaster *partialdatacolumnbroadcaster.PartialColumnBroadcaster
validateHeader partialdatacolumnbroadcaster.HeaderValidator
validate partialdatacolumnbroadcaster.ColumnValidator
handle partialdatacolumnbroadcaster.SubHandler
}

// shortTopic is a less verbose version of topic strings used for logging.
@@ -320,6 +333,35 @@ func (s *Service) registerSubscribers(nse params.NetworkScheduleEntry) bool {
// New gossip topic in Fulu.
if params.BeaconConfig().FuluForkEpoch <= nse.Epoch {
s.spawn(func() {
var ps *partialSubscribeParameters
broadcaster := s.cfg.p2p.PartialColumnBroadcaster()
if broadcaster != nil {
ps = &partialSubscribeParameters{
broadcaster: broadcaster,
validateHeader: func(header *ethpb.PartialDataColumnHeader) (bool, error) {
return s.validatePartialDataColumnHeader(context.TODO(), header)
},
validate: func(cellsToVerify []blocks.CellProofBundle) error {
return s.validateKZGProofs(context.TODO(), len(cellsToVerify), slices.Values(cellsToVerify))
},
handle: func(topic string, col blocks.VerifiedRODataColumn) {
ctx, cancel := context.WithTimeout(s.ctx, pubsubMessageTimeout)
defer cancel()

slot := col.SignedBlockHeader.Header.Slot
proposerIndex := col.SignedBlockHeader.Header.ProposerIndex
if !s.hasSeenDataColumnIndex(slot, proposerIndex, col.Index) {
s.setSeenDataColumnIndex(slot, proposerIndex, col.Index)
// This column was completed from a partial message.
partialMessageColumnCompletionsTotal.WithLabelValues(strconv.FormatUint(col.Index, 10)).Inc()
}
err := s.verifiedRODataColumnSubscriber(ctx, col)
if err != nil {
log.WithError(err).Error("Failed to handle verified RO data column subscriber")
}
},
}
}
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.DataColumnSubnetTopicFormat,
validate: s.validateDataColumn,
@@ -327,6 +369,7 @@ func (s *Service) registerSubscribers(nse params.NetworkScheduleEntry) bool {
nse: nse,
getSubnetsToJoin: s.dataColumnSubnetIndices,
getSubnetsRequiringPeers: s.allDataColumnSubnets,
partial: ps,
})
})
}
@@ -365,11 +408,10 @@ func (s *Service) subscribe(topic string, validator wrappedVal, handle subHandle
// Impossible condition as it would mean topic does not exist.
panic(fmt.Sprintf("%s is not mapped to any message in GossipTopicMappings", topic)) // lint:nopanic -- Impossible condition.
}
s.subscribeWithBase(s.addDigestToTopic(topic, nse.ForkDigest), validator, handle)
s.subscribeWithBase(s.addDigestToTopic(topic, nse.ForkDigest)+s.cfg.p2p.Encoding().ProtocolSuffix(), validator, handle)
}

func (s *Service) subscribeWithBase(topic string, validator wrappedVal, handle subHandler) *pubsub.Subscription {
topic += s.cfg.p2p.Encoding().ProtocolSuffix()
log := log.WithField("topic", topic)

// Do not resubscribe already seen subscriptions.
@@ -532,7 +574,11 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
func (s *Service) pruneNotWanted(t *subnetTracker, wantedSubnets map[uint64]bool) {
for _, subnet := range t.unwanted(wantedSubnets) {
t.cancelSubscription(subnet)
s.unSubscribeFromTopic(t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix()))
topic := t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix())
if t.partial != nil {
_ = t.partial.broadcaster.Unsubscribe(topic)
}
s.unSubscribeFromTopic(topic)
}
}

@@ -579,9 +625,34 @@ func (s *Service) trySubscribeSubnets(t *subnetTracker) {
subnetsToJoin := t.getSubnetsToJoin(s.cfg.clock.CurrentSlot())
s.pruneNotWanted(t, subnetsToJoin)
for _, subnet := range t.missing(subnetsToJoin) {
// TODO: subscribeWithBase appends the protocol suffix, other methods don't. Make this consistent.
topic := t.fullTopic(subnet, "")
t.track(subnet, s.subscribeWithBase(topic, t.validate, t.handle))
topicStr := t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix())
topicOpts := make([]pubsub.TopicOpt, 0, 2)

requestPartial := t.partial != nil

if requestPartial {
// TODO: do we want the ability to support partial messages without requesting them?
topicOpts = append(topicOpts, pubsub.RequestPartialMessages())
}

topic, err := s.cfg.p2p.JoinTopic(topicStr, topicOpts...)
if err != nil {
log.WithError(err).Error("Failed to join topic")
return
}

if requestPartial {
log.Info("Subscribing to partial columns on", topicStr)
err = t.partial.broadcaster.Subscribe(topic, t.partial.validateHeader, t.partial.validate, t.partial.handle)

if err != nil {
log.WithError(err).Error("Failed to subscribe to partial column")
}
}

// We still need to subscribe to the full columns as well as partial in
// case our peers don't support partial messages.
t.track(subnet, s.subscribeWithBase(topicStr, t.validate, t.handle))
}
}
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition/interop"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
|
||||
"github.com/OffchainLabs/prysm/v7/config/features"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
@@ -189,21 +190,62 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
|
||||
defer cancel()
|
||||
|
||||
digest, err := s.currentForkDigest()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log := log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
"proposerIndex": source.ProposerIndex(),
|
||||
"type": source.Type(),
|
||||
})
|
||||
|
||||
var constructedSidecarCount uint64
|
||||
for iteration := uint64(0); ; /*no stop condition*/ iteration++ {
|
||||
log = log.WithField("iteration", iteration)
|
||||
|
||||
// Exit early if all sidecars to sample have been seen.
|
||||
if s.haveAllSidecarsBeenSeen(source.Slot(), source.ProposerIndex(), columnIndicesToSample) {
|
||||
if iteration > 0 && constructedSidecarCount == 0 {
|
||||
log.Debug("No data column sidecars constructed from the execution client")
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
if iteration == 0 {
|
||||
dataColumnsRecoveredFromELAttempts.Inc()
|
||||
}
|
||||
|
||||
// Try to reconstruct data column constructedSidecars from the execution client.
|
||||
constructedSidecars, err := s.cfg.executionReconstructor.ConstructDataColumnSidecars(ctx, source)
|
||||
constructedSidecars, partialColumns, err := s.cfg.executionReconstructor.ConstructDataColumnSidecars(ctx, source)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "reconstruct data column sidecars")
|
||||
}
|
||||
|
||||
partialBroadcaster := s.cfg.p2p.PartialColumnBroadcaster()
|
||||
if partialBroadcaster != nil {
|
||||
log.WithField("len(partialColumns)", len(partialColumns)).Debug("Publishing partial columns")
|
||||
for i := range uint64(len(partialColumns)) {
|
||||
if !columnIndicesToSample[i] {
|
||||
continue
|
||||
}
|
||||
subnet := peerdas.ComputeSubnetForDataColumnSidecar(i)
|
||||
topic := fmt.Sprintf(p2p.DataColumnSubnetTopicFormat, digest, subnet) + s.cfg.p2p.Encoding().ProtocolSuffix()
|
||||
// Publish the partial column. This is idempotent if we republish the same data twice.
|
||||
// Note, the "partial column" may indeed be complete. We still
|
||||
// should publish to help our peers.
|
||||
err = partialBroadcaster.Publish(topic, partialColumns[i])
|
||||
if err != nil {
|
||||
log.WithError(err).Warn("Failed to publish partial column")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// No sidecars are retrieved from the EL, retry later
|
||||
sidecarCount := uint64(len(constructedSidecars))
|
||||
if sidecarCount == 0 {
|
||||
constructedSidecarCount = uint64(len(constructedSidecars))
|
||||
if constructedSidecarCount == 0 {
|
||||
if ctx.Err() != nil {
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
@@ -212,9 +254,11 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
continue
|
||||
}
|
||||
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
|
||||
// Boundary check.
|
||||
if sidecarCount != fieldparams.NumberOfColumns {
|
||||
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", sidecarCount, fieldparams.NumberOfColumns)
|
||||
if constructedSidecarCount != fieldparams.NumberOfColumns {
|
||||
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
|
||||
}
|
||||
|
||||
unseenIndices, err := s.broadcastAndReceiveUnseenDataColumnSidecars(ctx, source.Slot(), source.ProposerIndex(), columnIndicesToSample, constructedSidecars)
|
||||
@@ -222,19 +266,12 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
|
||||
}
|
||||
|
||||
if len(unseenIndices) > 0 {
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
log.WithFields(logrus.Fields{
|
||||
"count": len(unseenIndices),
|
||||
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
|
||||
}).Debug("Constructed data column sidecars from the execution client")
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
"proposerIndex": source.ProposerIndex(),
|
||||
"iteration": iteration,
|
||||
"type": source.Type(),
|
||||
"count": len(unseenIndices),
|
||||
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
|
||||
}).Debug("Constructed data column sidecars from the execution client")
|
||||
}
|
||||
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
@@ -272,7 +309,7 @@ func (s *Service) broadcastAndReceiveUnseenDataColumnSidecars(
|
||||
}
|
||||
|
||||
// Broadcast all the data column sidecars we reconstructed but did not see via gossip (non blocking).
|
||||
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, unseenSidecars); err != nil {
|
||||
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, unseenSidecars, nil); err != nil {
|
||||
return nil, errors.Wrap(err, "broadcast data column sidecars")
|
||||
}
|
||||
|
||||
|
||||
@@ -3,6 +3,7 @@ package sync
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"strconv"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed"
|
||||
opfeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/operation"
|
||||
@@ -24,6 +25,13 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
|
||||
return fmt.Errorf("message was not type blocks.VerifiedRODataColumn, type=%T", msg)
|
||||
}
|
||||
|
||||
// Track useful full columns received via gossip (not previously seen)
|
||||
slot := sidecar.SignedBlockHeader.Header.Slot
|
||||
proposerIndex := sidecar.SignedBlockHeader.Header.ProposerIndex
|
||||
if !s.hasSeenDataColumnIndex(slot, proposerIndex, sidecar.Index) {
|
||||
usefulFullColumnsReceivedTotal.WithLabelValues(strconv.FormatUint(sidecar.Index, 10)).Inc()
|
||||
}
|
||||
|
||||
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
|
||||
return errors.Wrap(err, "receive data column sidecar")
|
||||
}
|
||||
@@ -51,6 +59,38 @@ func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) e
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *Service) verifiedRODataColumnSubscriber(ctx context.Context, sidecar blocks.VerifiedRODataColumn) error {
|
||||
log.WithField("slot", sidecar.Slot()).WithField("column", sidecar.Index).Info("Received data column sidecar")
|
||||
|
||||
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
|
||||
return errors.Wrap(err, "receive data column sidecar")
|
||||
}
|
||||
|
||||
var wg errgroup.Group
|
||||
wg.Go(func() error {
|
||||
if err := s.processDataColumnSidecarsFromReconstruction(ctx, sidecar); err != nil {
|
||||
return errors.Wrap(err, "process data column sidecars from reconstruction")
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
|
||||
wg.Go(func() error {
|
||||
// Broadcast our complete column for peers that don't use partial messages
|
||||
if err := s.cfg.p2p.BroadcastDataColumnSidecars(ctx, []blocks.VerifiedRODataColumn{sidecar}, nil); err != nil {
|
||||
return errors.Wrap(err, "process data column sidecars from execution")
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
|
||||
if err := wg.Wait(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// receiveDataColumnSidecar receives a single data column sidecar: marks it as seen and saves it to the chain.
|
||||
// Do not loop over this function to receive multiple sidecars, use receiveDataColumnSidecars instead.
|
||||
func (s *Service) receiveDataColumnSidecar(ctx context.Context, sidecar blocks.VerifiedRODataColumn) error {
|
||||
|
||||
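The new subscriber above fans its two follow-up tasks (reconstruction and rebroadcast) out concurrently with errgroup, whose zero value is ready to use and whose Wait returns the first non-nil error. A minimal sketch of that pattern:

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	// Fan out two independent follow-up tasks, as in
	// verifiedRODataColumnSubscriber above.
	var g errgroup.Group

	g.Go(func() error {
		// e.g. trigger reconstruction of the remaining columns
		return nil
	})
	g.Go(func() error {
		// e.g. rebroadcast the full column for non-partial peers
		return nil
	})

	// Wait blocks until both goroutines finish and reports the first error.
	if err := g.Wait(); err != nil {
		fmt.Println("subscriber task failed:", err)
	}
}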
@@ -137,7 +137,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
return validationRes, err
}

s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex())
if first := s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex()); !first {
return pubsub.ValidationIgnore, nil
}

msg.ValidatorData = m

@@ -265,13 +267,19 @@ func (s *Service) hasSeenAggregatorIndexEpoch(epoch primitives.Epoch, aggregator
}

// Set aggregate's aggregator index target epoch as seen.
func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) {
// Returns true if this is the first time seeing this aggregator index and epoch.
func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) bool {
b := append(bytesutil.Bytes32(uint64(epoch)), bytesutil.Bytes32(uint64(aggregatorIndex))...)

s.seenAggregatedAttestationLock.Lock()
defer s.seenAggregatedAttestationLock.Unlock()

_, seen := s.seenAggregatedAttestationCache.Get(string(b))
if seen {
return false
}
s.seenAggregatedAttestationCache.Add(string(b), true)
return true
}

// This validates the bitfield is correct and aggregator's index in state is within the beacon committee.
@@ -801,3 +801,27 @@ func TestValidateAggregateAndProof_RejectWhenAttEpochDoesntEqualTargetEpoch(t *t
assert.NotNil(t, err)
assert.Equal(t, pubsub.ValidationReject, res)
}

func Test_SetAggregatorIndexEpochSeen(t *testing.T) {
db := dbtest.SetupDB(t)
p := p2ptest.NewTestP2P(t)

r := &Service{
cfg: &config{
p2p: p,
beaconDB: db,
},
seenAggregatedAttestationCache: lruwrpr.New(10),
}

aggIndex := primitives.ValidatorIndex(42)
epoch := primitives.Epoch(7)

require.Equal(t, false, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
first := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
require.Equal(t, true, first)
require.Equal(t, true, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))

second := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
require.Equal(t, false, second)
}
@@ -104,7 +104,8 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
}

if !s.slasherEnabled {
// Verify this is the first attestation received for the participating validator for the slot.
// Verify this is the first attestation received for the participating validator for the slot. This check is here to return early if we've already seen this attestation.
// The check is carried out again after all other validations to avoid TOCTOU issues.
if s.hasSeenUnaggregatedAtt(attKey) {
return pubsub.ValidationIgnore, nil
}
@@ -228,7 +229,10 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
Data: eventData,
})

s.setSeenUnaggregatedAtt(attKey)
if first := s.setSeenUnaggregatedAtt(attKey); !first {
// Another concurrent validation processed the same attestation in the meantime.
return pubsub.ValidationIgnore, nil
}

// Attach final validated attestation to the message for further pipeline use
msg.ValidatorData = attForValidation
@@ -385,11 +389,16 @@ func (s *Service) hasSeenUnaggregatedAtt(key string) bool {
}

// Set an incoming attestation as seen for the participating validator for the slot.
func (s *Service) setSeenUnaggregatedAtt(key string) {
// Returns false if the attestation was already seen.
func (s *Service) setSeenUnaggregatedAtt(key string) bool {
s.seenUnAggregatedAttestationLock.Lock()
defer s.seenUnAggregatedAttestationLock.Unlock()

_, seen := s.seenUnAggregatedAttestationCache.Get(key)
if seen {
return false
}
s.seenUnAggregatedAttestationCache.Add(key, true)
return true
}

// hasBlockAndState returns true if the beacon node knows about a block and associated state in the
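The hunk above is the heart of the attestation-processing fix: a cheap "seen?" check up front for fast duplicate rejection, then an atomic set-and-check after the expensive validations so a concurrent validation that finished in between (the TOCTOU window) still results in exactly one accept. A minimal, self-contained sketch of that shape (types and result strings here are stand-ins, not Prysm's):

package main

import (
	"fmt"
	"sync"
)

type dedup struct {
	mu   sync.Mutex
	seen map[string]bool
}

// setSeen atomically records the key and reports whether it was first.
func (d *dedup) setSeen(key string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[key] {
		return false
	}
	d.seen[key] = true
	return true
}

func (d *dedup) validate(key string, expensiveChecks func() bool) string {
	d.mu.Lock()
	already := d.seen[key]
	d.mu.Unlock()
	if already {
		return "ignore" // cheap early exit for duplicates
	}
	if !expensiveChecks() {
		return "reject"
	}
	// Re-check atomically: a concurrent validation may have won the race.
	if first := d.setSeen(key); !first {
		return "ignore"
	}
	return "accept"
}

func main() {
	d := &dedup{seen: make(map[string]bool)}
	fmt.Println(d.validate("att", func() bool { return true })) // accept
	fmt.Println(d.validate("att", func() bool { return true })) // ignore
}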
@@ -499,6 +499,10 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 0},
AggregationBits: bitfield.Bitlist{0b1001},
}
s3c0a0 := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 3, CommitteeIndex: 0},
AggregationBits: bitfield.Bitlist{0b1001},
}

t.Run("empty cache", func(t *testing.T) {
key := generateKey(t, s0c0a0)
@@ -506,26 +510,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
})
t.Run("ok", func(t *testing.T) {
key := generateKey(t, s0c0a0)
s.setSeenUnaggregatedAtt(key)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
})
t.Run("already seen", func(t *testing.T) {
key := generateKey(t, s3c0a0)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
first = s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, false, first)
})
t.Run("different slot", func(t *testing.T) {
key1 := generateKey(t, s1c0a0)
key2 := generateKey(t, s2c0a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("different committee index", func(t *testing.T) {
key1 := generateKey(t, s0c1a0)
key2 := generateKey(t, s0c2a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("different bit", func(t *testing.T) {
key1 := generateKey(t, s0c0a1)
key2 := generateKey(t, s0c0a2)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("0 bits set is considered not seen", func(t *testing.T) {
a := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1000}}
@@ -576,6 +593,11 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
CommitteeId: 0,
AttesterIndex: 0,
}
s3c0a0 := &ethpb.SingleAttestation{
Data: &ethpb.AttestationData{Slot: 3},
CommitteeId: 0,
AttesterIndex: 0,
}

t.Run("empty cache", func(t *testing.T) {
key := generateKey(t, s0c0a0)
@@ -583,26 +605,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
})
t.Run("ok", func(t *testing.T) {
key := generateKey(t, s0c0a0)
s.setSeenUnaggregatedAtt(key)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
})
t.Run("different slot", func(t *testing.T) {
key1 := generateKey(t, s1c0a0)
key2 := generateKey(t, s2c0a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("already seen", func(t *testing.T) {
key := generateKey(t, s3c0a0)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
first = s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, false, first)
})
t.Run("different committee index", func(t *testing.T) {
key1 := generateKey(t, s0c1a0)
key2 := generateKey(t, s0c2a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("different attester", func(t *testing.T) {
key1 := generateKey(t, s0c0a1)
key2 := generateKey(t, s0c0a2)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("single attestation is considered not seen", func(t *testing.T) {
a := &ethpb.AttestationElectra{}
@@ -72,6 +72,7 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
roDataColumns := []blocks.RODataColumn{roDataColumn}

// Create the verifier.
// Question(marco): Do we want the multiple columns verifier? Is batching used only for kzg proofs?
verifier := s.newColumnsVerifier(roDataColumns, verification.GossipDataColumnSidecarRequirements)

// Start the verification process.
@@ -54,11 +54,13 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
cfg.FuluForkEpoch = 6
cfg.ForkVersionSchedule[[4]byte{1, 0, 0, 0}] = 1
cfg.ForkVersionSchedule[[4]byte{2, 0, 0, 0}] = 2
cfg.ForkVersionSchedule[[4]byte{3, 0, 0, 0}] = 3
cfg.ForkVersionSchedule[[4]byte{4, 0, 0, 0}] = 4
cfg.ForkVersionSchedule[[4]byte{5, 0, 0, 0}] = 5
cfg.ForkVersionSchedule[[4]byte{6, 0, 0, 0}] = 6
params.OverrideBeaconConfig(cfg)

secondsPerSlot := int(params.BeaconConfig().SecondsPerSlot)
@@ -101,7 +103,10 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
}

for _, test := range tests {
for v := 1; v < 6; v++ {
for v := range version.All() {
if v == version.Phase0 {
continue
}
t.Run(test.name+"_"+version.String(v), func(t *testing.T) {
ctx := t.Context()
p := p2ptest.NewTestP2P(t)
@@ -180,11 +185,13 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
cfg.FuluForkEpoch = 6
cfg.ForkVersionSchedule[[4]byte{1, 0, 0, 0}] = 1
cfg.ForkVersionSchedule[[4]byte{2, 0, 0, 0}] = 2
cfg.ForkVersionSchedule[[4]byte{3, 0, 0, 0}] = 3
cfg.ForkVersionSchedule[[4]byte{4, 0, 0, 0}] = 4
cfg.ForkVersionSchedule[[4]byte{5, 0, 0, 0}] = 5
cfg.ForkVersionSchedule[[4]byte{6, 0, 0, 0}] = 6
params.OverrideBeaconConfig(cfg)

secondsPerSlot := int(params.BeaconConfig().SecondsPerSlot)
@@ -227,7 +234,10 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
}

for _, test := range tests {
for v := 1; v < 6; v++ {
for v := range version.All() {
if v == version.Phase0 {
continue
}
t.Run(test.name+"_"+version.String(v), func(t *testing.T) {
ctx := t.Context()
p := p2ptest.NewTestP2P(t)
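The change above replaces the magic loop bound `for v := 1; v < 6` with iteration over the version package's fork list, so new forks such as Fulu are picked up automatically. A small illustrative sketch of the same idea with a named fork list standing in for version.All():

package forks_test

import "testing"

// Stand-in fork list; the real code iterates the version package's All().
var forks = []string{"phase0", "altair", "bellatrix", "capella", "deneb", "electra", "fulu"}

func TestPerFork(t *testing.T) {
	for _, fork := range forks {
		if fork == "phase0" {
			continue // light client updates do not exist pre-Altair
		}
		t.Run("update_"+fork, func(t *testing.T) {
			t.Log("running against", fork)
		})
	}
}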
142 beacon-chain/sync/validate_partial_header.go Normal file
@@ -0,0 +1,142 @@
package sync

import (
"context"

"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)

var (
// REJECT errors - peer should be penalized
errHeaderEmptyCommitments = errors.New("header has no kzg commitments")
errHeaderParentInvalid = errors.New("header parent invalid")
errHeaderSlotNotAfterParent = errors.New("header slot not after parent")
errHeaderNotFinalizedDescendant = errors.New("header not finalized descendant")
errHeaderInvalidInclusionProof = errors.New("invalid inclusion proof")
errHeaderInvalidSignature = errors.New("invalid proposer signature")
errHeaderUnexpectedProposer = errors.New("unexpected proposer index")

// IGNORE errors - don't penalize peer
errHeaderNil = errors.New("nil header")
errHeaderFromFuture = errors.New("header is from future slot")
errHeaderNotAboveFinalized = errors.New("header slot not above finalized")
errHeaderParentNotSeen = errors.New("header parent not seen")
)

// validatePartialDataColumnHeader validates a PartialDataColumnHeader per the consensus spec.
// Returns (reject, err) where reject=true means the peer should be penalized.
func (s *Service) validatePartialDataColumnHeader(ctx context.Context, header *ethpb.PartialDataColumnHeader) (reject bool, err error) {
if header == nil || header.SignedBlockHeader == nil || header.SignedBlockHeader.Header == nil {
return false, errHeaderNil // IGNORE
}

blockHeader := header.SignedBlockHeader.Header
headerSlot := blockHeader.Slot
parentRoot := bytesutil.ToBytes32(blockHeader.ParentRoot)

// [REJECT] kzg_commitments list is non-empty
if len(header.KzgCommitments) == 0 {
return true, errHeaderEmptyCommitments
}

// [IGNORE] Not from future slot (with MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance)
currentSlot := s.cfg.clock.CurrentSlot()
if headerSlot > currentSlot {
maxDisparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
slotStart, err := s.cfg.clock.SlotStart(headerSlot)
if err != nil {
return false, err
}
if s.cfg.clock.Now().Before(slotStart.Add(-maxDisparity)) {
return false, errHeaderFromFuture // IGNORE
}
}

// [IGNORE] Slot above finalized
finalizedCheckpoint := s.cfg.chain.FinalizedCheckpt()
startSlot, err := slots.EpochStart(finalizedCheckpoint.Epoch)
if err != nil {
return false, err
}
if headerSlot <= startSlot {
return false, errHeaderNotAboveFinalized // IGNORE
}

// [IGNORE] Parent has been seen
if !s.cfg.chain.HasBlock(ctx, parentRoot) {
return false, errHeaderParentNotSeen // IGNORE
}

// [REJECT] Parent passes validation (not a bad block)
if s.hasBadBlock(parentRoot) {
return true, errHeaderParentInvalid
}

// [REJECT] Header slot > parent slot
parentSlot, err := s.cfg.chain.RecentBlockSlot(parentRoot)
if err != nil {
return false, errors.Wrap(err, "get parent slot")
}
if headerSlot <= parentSlot {
return true, errHeaderSlotNotAfterParent
}

// [REJECT] Finalized checkpoint is ancestor (parent is in forkchoice)
if !s.cfg.chain.InForkchoice(parentRoot) {
return true, errHeaderNotFinalizedDescendant
}

// [REJECT] Inclusion proof valid
if err := peerdas.VerifyPartialDataColumnHeaderInclusionProof(header); err != nil {
return true, errHeaderInvalidInclusionProof
}

// [REJECT] Valid proposer signature
parentState, err := s.cfg.stateGen.StateByRoot(ctx, parentRoot)
if err != nil {
return false, errors.Wrap(err, "get parent state")
}

proposerIdx := blockHeader.ProposerIndex
proposer, err := parentState.ValidatorAtIndex(proposerIdx)
if err != nil {
return false, errors.Wrap(err, "get proposer")
}

domain, err := signing.Domain(
parentState.Fork(),
slots.ToEpoch(headerSlot),
params.BeaconConfig().DomainBeaconProposer,
parentState.GenesisValidatorsRoot(),
)
if err != nil {
return false, errors.Wrap(err, "get domain")
}

if err := signing.VerifyBlockHeaderSigningRoot(
blockHeader,
proposer.PublicKey,
header.SignedBlockHeader.Signature,
domain,
); err != nil {
return true, errHeaderInvalidSignature
}

// [REJECT] Expected proposer for slot
expectedProposer, err := helpers.BeaconProposerIndexAtSlot(ctx, parentState, headerSlot)
if err != nil {
return false, errors.Wrap(err, "compute expected proposer")
}
if expectedProposer != proposerIdx {
return true, errHeaderUnexpectedProposer
}

return false, nil // Valid header
}
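The validator above returns a (reject, err) pair: reject=true means the peer sent something provably invalid and should be penalized, while any other error means the message should be dropped without penalty. A minimal sketch of how a gossip handler might map that pair onto the usual accept/ignore/reject outcomes (the result type and constants here are local stand-ins for libp2p pubsub's validation results):

package main

import (
	"errors"
	"fmt"
)

type validationResult int

const (
	validationAccept validationResult = iota
	validationIgnore
	validationReject
)

// toValidationResult maps the (reject, err) pair onto a gossip outcome:
// reject=true penalizes the peer; any other error is ignored silently.
func toValidationResult(reject bool, err error) validationResult {
	switch {
	case err == nil:
		return validationAccept
	case reject:
		return validationReject
	default:
		return validationIgnore
	}
}

func main() {
	fmt.Println(toValidationResult(true, errors.New("invalid proposer signature"))) // reject
	fmt.Println(toValidationResult(false, errors.New("header parent not seen")))    // ignore
	fmt.Println(toValidationResult(false, nil))                                     // accept
}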
@@ -78,11 +78,21 @@ func (ini *Initializer) NewBlobVerifier(b blocks.ROBlob, reqs []Requirement) *RO
// WARNING: The returned verifier is not thread-safe, and should not be used concurrently.
func (ini *Initializer) NewDataColumnsVerifier(roDataColumns []blocks.RODataColumn, reqs []Requirement) *RODataColumnsVerifier {
return &RODataColumnsVerifier{
sharedResources: ini.shared,
dataColumns: roDataColumns,
results: newResults(reqs...),
verifyDataColumnsCommitment: peerdas.VerifyDataColumnsSidecarKZGProofs,
stateByRoot: make(map[[fieldparams.RootLength]byte]state.BeaconState),
sharedResources: ini.shared,
dataColumns: roDataColumns,
results: newResults(reqs...),
verifyDataColumnsCommitment: func(rc []blocks.RODataColumn) error {
if len(rc) == 0 {
return nil
}
var sizeHint int
if len(rc) > 0 {
sizeHint = len(rc[0].Column)
}
sizeHint *= len(rc)
return peerdas.VerifyDataColumnsCellsKZGProofs(sizeHint, blocks.RODataColumnsToCellProofBundles(rc))
},
stateByRoot: make(map[[fieldparams.RootLength]byte]state.BeaconState),
}
}
|
||||
### Changed
|
||||
|
||||
- Removed dead slot parameter from blobCacheEntry.filter
|
||||
@@ -1,3 +0,0 @@
|
||||
## Changed
|
||||
|
||||
- Avoid redundant WithHttpEndpoint when JWT is provided
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020)
|
||||
3
changelog/SashaMalysehko_fix-return-after-check.md
Normal file
3
changelog/SashaMalysehko_fix-return-after-check.md
Normal file
@@ -0,0 +1,3 @@
|
||||
## Fixed
|
||||
|
||||
- Fix missing return after version header check in SubmitAttesterSlashingsV2.
|
||||
3
changelog/Snezhkko_fix-type.md
Normal file
3
changelog/Snezhkko_fix-type.md
Normal file
@@ -0,0 +1,3 @@
|
||||
## Fixed
|
||||
|
||||
- incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)
|
||||
2
changelog/aarsh-revert-autonatv2.md
Normal file
2
changelog/aarsh-revert-autonatv2.md
Normal file
@@ -0,0 +1,2 @@
|
||||
### Ignored
|
||||
- Reverts AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100 as the libp2p upgrade fails inter-op testing.
|
||||
3
changelog/avoid_kzg_send_after_context_cancel.md
Normal file
3
changelog/avoid_kzg_send_after_context_cancel.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.
|
||||
3
changelog/bastin_fix-lcp2p-bug.md
Normal file
3
changelog/bastin_fix-lcp2p-bug.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Fix the missing fork version object mapping for Fulu in light client p2p.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Integrate state-diff into `HasState()`.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Refactor finding slot by block root using state summary and block to its own function.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix state diff repetitive anchor slot bug.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Add initial configs for the state-diff feature.
|
||||
- Add kv functions for the state-diff feature.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Integrate state-diff into `State()`.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- add fulu support to light client processing.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Added
|
||||
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients.
|
||||
- Change the logging behaviour of Updated fee recipients to only log count of validators at Debug level and all validator indices at Trace level.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Add osaka fork timestamp derivation to interop genesis
|
||||
3
changelog/fix_kzg_batch_verifier_timeout_deadlock.md
Normal file
3
changelog/fix_kzg_batch_verifier_timeout_deadlock.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Fix deadlock in data column gossip KZG batch verification when a caller times out preventing result delivery.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- fixes E2E tests to be able to start from Electra genesis fork or future forks
|
||||
3
changelog/james-prysm_fix-rest-replay-state.md
Normal file
3
changelog/james-prysm_fix-rest-replay-state.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- fixed replay state issue in rest api caused by attester and sync committee duties endpoints
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- optimization to remove cell and blob proof computation on blob rest api.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Added
|
||||
- Added `--semi-supernode` flag to custody half of a super node's datacolumn requirements but allowing for reconstruction for blob retrieval
|
||||
3
changelog/james-prysm_skip-e2e-slot1-check.md
Normal file
3
changelog/james-prysm_skip-e2e-slot1-check.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Changed
|
||||
|
||||
- e2e sync committee evaluator now skips the first slot after startup, we already skip the fork epoch for checks here, this skip only applies on startup, due to altair always from 0 and validators need to warm up.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users.
|
||||
@@ -1,7 +0,0 @@
|
||||
### Added
|
||||
- Data column backfill.
|
||||
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms.
|
||||
|
||||
### Changed
|
||||
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes,
|
||||
|
||||
Some files were not shown because too many files have changed in this diff Show More