Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 05:47:59 -05:00)

Compare commits: `log-data-t` ... `grace-peri` (7 commits)
Commits:

- `94e5b3e805`
- `c0b8d9ca52`
- `b1eeb1b1f1`
- `b94904b784`
- `1af12d841d`
- `e1b98a4ca1`
- `eae15697da`
CHANGELOG.md (83 lines changed)
@@ -4,6 +4,87 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10

This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.

Other than the mainnet Fulu fork schedule, there are a few callouts in this release:

- The `by-epoch` blob storage format is the default for new installations. Users who haven't migrated will see a warning to migrate to the new format. Existing deployments may set `--blob-storage-layout=by-epoch` to perform the migration.
- Several deprecated flags have been deleted! Please review the "Removed" section of this changelog carefully. If you are referencing a removed flag, Prysm will not start! All of these flags had no effect for at least one release.
- Several deprecated API endpoints have been deleted. Please review the "Removed" section of this changelog carefully.
- Backfill is not supported in Fulu. This is expected to be fixed in the next release and should be delivered prior to the mainnet fork activation.
- The builder default gas limit is raised from `45000000` (45 MGas) to `60000000` (60 MGas).
- Several bug fixes and improvements.
### Added

- Allow custom headers in validator client HTTP requests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15884)
- Metric to track data columns recovered from the execution layer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15924)
- Metrics: add counts of peers per direction (inbound/outbound) and transport type (TCP/QUIC). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15922)
- `p2p_subscribed_topic_peer_total`: reset to avoid dangling values. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15922)
- Add `p2p_minimum_peers_per_subnet` metric. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15922)
- Added a `GeneralizedIndicesFromPath` function to calculate the generalized indices for a given sszInfo object and a PathElement. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15873)
- Add Gloas protobuf definitions with spec tests and SSZ serialization support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15601)
- Fulu fork epoch for mainnet configurations set for December 3, 2025, 09:49:11pm UTC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15975)
- Added BPO schedules for December 9, 2025, 02:21:11pm UTC and January 7, 2026, 01:01:11am UTC. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15975)
### Changed

- Updated consensus spec tests to v1.6.0-beta.1 with new hashes and URL template. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15918)
- Use the `by-epoch` blob storage layout by default and log a warning to users who continue to use the flat layout, encouraging them to switch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15904)
- Update go-netroute to `v0.3.0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15934)
- Introduced a Path type for SSZ-QL queries and updated PathElement (removed the Length field, kept Index), enforcing that len queries are terminal (at most one per path). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15935)
- Changed the length query syntax from `block.payload.len(transactions)` to `len(block.payload.transactions)`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15935)
- Update `go-netroute` to `v0.4.0`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15949)
- Updated consensus spec tests to v1.6.0-beta.2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15960)
- Updated go bitfield from prysmaticlabs to offchainlabs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15968)
- Bump the builder default gas limit from `45000000` (45 MGas) to `60000000` (60 MGas). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15979)
- Use head state for block pubsub validation when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15972)
- Updated consensus spec to v1.6.0 from v1.6.0-beta.2. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15975)
- Upgrade Prysm v6 to v7. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15989)
- Use head state read-only when possible to validate data column sidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15977)
### Removed

- Log mentioning the removed flag `--show-deposit-data`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15926)
- Remove Beacon API endpoints that were deprecated in Electra: `GET /eth/v1/beacon/deposit_snapshot`, `GET /eth/v1/beacon/blocks/{block_id}/attestations`, `GET /eth/v1/beacon/pool/attestations`, `POST /eth/v1/beacon/pool/attestations`, `GET /eth/v1/beacon/pool/attester_slashings`, `POST /eth/v1/beacon/pool/attester_slashings`, `GET /eth/v1/validator/aggregate_attestation`, `POST /eth/v1/validator/aggregate_and_proofs`, `POST /eth/v1/beacon/blocks`, `POST /eth/v1/beacon/blinded_blocks`, `GET /eth/v1/builder/states/{state_id}/expected_withdrawals`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15962)
- Deprecated flag `--enable-optional-engine-methods` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-build-block-parallel` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-reorg-late-blocks` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-optional-engine-methods` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-aggregate-parallel` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-eip-4881` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-eip-4881` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-verbose-sig-verification` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-debug-rpc-endpoints` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--beacon-rpc-gateway-provider` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-grpc-gateway` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-experimental-state` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-committee-aware-packing` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--interop-genesis-time` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--interop-num-validators` has been removed (from beacon-chain only; still available in the validator client). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--enable-quic` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--attest-timely` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--disable-experimental-state` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
- Deprecated flag `--p2p-metadata` has been removed. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15986)
### Fixed

- Remove the `Reading static P2P private key from a file.` log if Fulu is enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15913)
- `blobSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15933)
- `dataColumnSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15933)
- Fix incorrect version used when sending the attestation version in Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15950)
- Changed the behavior of topic subscriptions such that only topics that require the active validator count will compute that value. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15955)
- Added a mutex to the computation of the active validator count during topic subscription to avoid a race condition where multiple goroutines compute the same work. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15955)
- `RODataColumnsVerifier.ValidProposerSignature`: Ensure the expensive signature verification is only performed once for concurrent requests for the same signature data. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15954)
- Use `filepath` for path operations (clean, join, etc.) to ensure correct behavior on Windows. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15953)
- Fix #15969: Handle addition overflow in `/eth/v1/beacon/rewards/attestations/{epoch}`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15970)
- `SidecarProposerExpected`: Add the slot to the single flight key. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15976)
- Ensure the rate limit is respected for by-root blob and data column sidecar requests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15981)
- Use head only if it is compatible with the target for attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15965)
- Backfill is disabled if the checkpoint sync origin is after the Fulu fork, due to the lack of DataColumnSidecar support in backfill. To track the availability of Fulu-compatible backfill, please watch https://github.com/OffchainLabs/prysm/issues/15982. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15987)
- `SidecarProposerExpected`: Use the correct value of the proposer index in the singleflight group. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15993)
## [v6.1.4](https://github.com/prysmaticlabs/prysm/compare/v6.1.3...v6.1.4) - 2025-10-24

This release includes a bug fix affecting block proposals in rare cases, along with an important update for Windows users running post-Fusaka fork.

@@ -3820,4 +3901,4 @@ There are no security updates in this release.

# Older than v2.0.0

For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
```diff
@@ -23,6 +23,7 @@ go_library(
 go_test(
     name = "go_default_test",
     srcs = [
+        "kzg_test.go",
         "trusted_setup_test.go",
         "validation_test.go",
     ],
```
```diff
@@ -34,12 +34,6 @@ type Bytes48 = ckzg4844.Bytes48
 // Bytes32 is a 32-byte array.
 type Bytes32 = ckzg4844.Bytes32
 
-// CellsAndProofs represents the Cells and Proofs corresponding to a single blob.
-type CellsAndProofs struct {
-	Cells  []Cell
-	Proofs []Proof
-}
-
 // BlobToKZGCommitment computes a KZG commitment from a given blob.
 func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
 	var kzgBlob kzg4844.Blob
@@ -65,7 +59,7 @@ func ComputeCells(blob *Blob) ([]Cell, error) {
 
 	cells := make([]Cell, len(ckzgCells))
 	for i := range ckzgCells {
-		cells[i] = Cell(ckzgCells[i])
+		copy(cells[i][:], ckzgCells[i][:])
 	}
 
 	return cells, nil
@@ -78,22 +72,35 @@ func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
 
 	proof, err := kzg4844.ComputeBlobProof(&kzgBlob, kzg4844.Commitment(commitment))
 	if err != nil {
-		return [48]byte{}, err
+		return Proof{}, err
 	}
-	return Proof(proof), nil
+	var result Proof
+	copy(result[:], proof[:])
+	return result, nil
 }
 
 // ComputeCellsAndKZGProofs computes the cells and cells KZG proofs from a given blob.
-func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
+func ComputeCellsAndKZGProofs(blob *Blob) ([]Cell, []Proof, error) {
 	var ckzgBlob ckzg4844.Blob
 	copy(ckzgBlob[:], blob[:])
 
 	ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(&ckzgBlob)
 	if err != nil {
-		return CellsAndProofs{}, err
+		return nil, nil, err
 	}
 
-	return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
+	if len(ckzgCells) != len(ckzgProofs) {
+		return nil, nil, errors.New("mismatched cells and proofs length")
+	}
+
+	cells := make([]Cell, len(ckzgCells))
+	proofs := make([]Proof, len(ckzgProofs))
+	for i := range ckzgCells {
+		copy(cells[i][:], ckzgCells[i][:])
+		copy(proofs[i][:], ckzgProofs[i][:])
+	}
+
+	return cells, proofs, nil
 }
 
 // VerifyCellKZGProofBatch verifies the KZG proofs for a given slice of commitments, cells indices, cells and proofs.
@@ -103,44 +110,57 @@ func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, c
 	ckzgCells := make([]ckzg4844.Cell, len(cells))
 
 	for i := range cells {
-		ckzgCells[i] = ckzg4844.Cell(cells[i])
+		copy(ckzgCells[i][:], cells[i][:])
 	}
 	return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
 }
 
-// RecoverCellsAndKZGProofs recovers the complete cells and KZG proofs from a given set of cell indices and partial cells.
+// RecoverCells recovers the complete cells from a given set of cell indices and partial cells.
 // Note: `len(cellIndices)` must be equal to `len(partialCells)` and `cellIndices` must be sorted in ascending order.
-func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
+func RecoverCells(cellIndices []uint64, partialCells []Cell) ([]Cell, error) {
 	// Convert `Cell` type to `ckzg4844.Cell`
 	ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
 	for i := range partialCells {
-		ckzgPartialCells[i] = ckzg4844.Cell(partialCells[i])
+		copy(ckzgPartialCells[i][:], partialCells[i][:])
 	}
 
+	ckzgCells, err := ckzg4844.RecoverCells(cellIndices, ckzgPartialCells)
+	if err != nil {
+		return nil, errors.Wrap(err, "recover cells")
+	}
+
+	cells := make([]Cell, len(ckzgCells))
+	for i := range ckzgCells {
+		copy(cells[i][:], ckzgCells[i][:])
+	}
+
+	return cells, nil
+}
+
+// RecoverCellsAndKZGProofs recovers the complete cells and KZG proofs from a given set of cell indices and partial cells.
+// Note: `len(cellIndices)` must be equal to `len(partialCells)` and `cellIndices` must be sorted in ascending order.
+func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) ([]Cell, []Proof, error) {
+	// Convert `Cell` type to `ckzg4844.Cell`
+	ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
+	for i := range partialCells {
+		copy(ckzgPartialCells[i][:], partialCells[i][:])
+	}
+
 	ckzgCells, ckzgProofs, err := ckzg4844.RecoverCellsAndKZGProofs(cellIndices, ckzgPartialCells)
 	if err != nil {
-		return CellsAndProofs{}, errors.Wrap(err, "recover cells and KZG proofs")
+		return nil, nil, errors.Wrap(err, "recover cells and KZG proofs")
 	}
 
-	return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
-}
-
-// makeCellsAndProofs converts cells/proofs to the CellsAndProofs type defined in this package.
-func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
 	if len(ckzgCells) != len(ckzgProofs) {
-		return CellsAndProofs{}, errors.New("different number of cells/proofs")
+		return nil, nil, errors.New("mismatched cells and proofs length")
 	}
 
-	cells := make([]Cell, 0, len(ckzgCells))
-	proofs := make([]Proof, 0, len(ckzgProofs))
-
+	cells := make([]Cell, len(ckzgCells))
+	proofs := make([]Proof, len(ckzgProofs))
 	for i := range ckzgCells {
-		cells = append(cells, Cell(ckzgCells[i]))
-		proofs = append(proofs, Proof(ckzgProofs[i]))
+		copy(cells[i][:], ckzgCells[i][:])
+		copy(proofs[i][:], ckzgProofs[i][:])
 	}
 
-	return CellsAndProofs{
-		Cells:  cells,
-		Proofs: proofs,
-	}, nil
+	return cells, proofs, nil
 }
```
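To make the API change concrete, here is a minimal sketch of calling the new slice-returning functions. It assumes the trusted setup has already been loaded (e.g., via `kzg.Start()`), that `blob` holds valid field elements, and that this repository's import path is used; the `demo` function itself is hypothetical.

```go
package main

import (
	"fmt"
	"log"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
)

// demo sketches the new slice-based API that replaces the CellsAndProofs struct.
func demo(blob *kzg.Blob) {
	// New signature: parallel cell and proof slices instead of one struct.
	cells, proofs, err := kzg.ComputeCellsAndKZGProofs(blob)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(cells), len(proofs)) // 128 128

	// Recover the full extension from any half of the cells.
	// Indices must be sorted in ascending order.
	indices := make([]uint64, len(cells)/2)
	partial := make([]kzg.Cell, len(cells)/2)
	for i := range indices {
		indices[i] = uint64(i)
		partial[i] = cells[i]
	}
	recovered, err := kzg.RecoverCells(indices, partial)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(recovered)) // 128
}
```

The cells-only path is cheaper than `RecoverCellsAndKZGProofs` when the caller does not need proofs, which is exactly what the peerdas changes below exploit.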
beacon-chain/blockchain/kzg/kzg_test.go (new file, 236 lines)
@@ -0,0 +1,236 @@
```go
package kzg

import (
	"testing"

	"github.com/OffchainLabs/prysm/v7/crypto/random"
	"github.com/OffchainLabs/prysm/v7/testing/require"
)

func TestComputeCells(t *testing.T) {
	require.NoError(t, Start())

	t.Run("valid blob", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		cells, err := ComputeCells(&blob)
		require.NoError(t, err)
		require.Equal(t, 128, len(cells))
	})
}

func TestComputeBlobKZGProof(t *testing.T) {
	require.NoError(t, Start())

	t.Run("valid blob and commitment", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		commitment, err := BlobToKZGCommitment(&blob)
		require.NoError(t, err)

		proof, err := ComputeBlobKZGProof(&blob, commitment)
		require.NoError(t, err)
		require.Equal(t, BytesPerProof, len(proof))
		require.NotEqual(t, Proof{}, proof, "proof should not be empty")
	})
}

func TestComputeCellsAndKZGProofs(t *testing.T) {
	require.NoError(t, Start())

	t.Run("valid blob returns matching cells and proofs", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		cells, proofs, err := ComputeCellsAndKZGProofs(&blob)
		require.NoError(t, err)
		require.Equal(t, 128, len(cells))
		require.Equal(t, 128, len(proofs))
		require.Equal(t, len(cells), len(proofs), "cells and proofs should have matching lengths")
	})
}

func TestVerifyCellKZGProofBatch(t *testing.T) {
	require.NoError(t, Start())

	t.Run("valid proof batch", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		commitment, err := BlobToKZGCommitment(&blob)
		require.NoError(t, err)

		cells, proofs, err := ComputeCellsAndKZGProofs(&blob)
		require.NoError(t, err)

		// Verify a subset of cells
		cellIndices := []uint64{0, 1, 2, 3, 4}
		selectedCells := make([]Cell, len(cellIndices))
		commitmentsBytes := make([]Bytes48, len(cellIndices))
		proofsBytes := make([]Bytes48, len(cellIndices))

		for i, idx := range cellIndices {
			selectedCells[i] = cells[idx]
			copy(commitmentsBytes[i][:], commitment[:])
			copy(proofsBytes[i][:], proofs[idx][:])
		}

		valid, err := VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, selectedCells, proofsBytes)
		require.NoError(t, err)
		require.Equal(t, true, valid)
	})

	t.Run("invalid proof should fail", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		commitment, err := BlobToKZGCommitment(&blob)
		require.NoError(t, err)

		cells, _, err := ComputeCellsAndKZGProofs(&blob)
		require.NoError(t, err)

		// Use invalid proofs
		cellIndices := []uint64{0}
		selectedCells := []Cell{cells[0]}
		commitmentsBytes := make([]Bytes48, 1)
		copy(commitmentsBytes[0][:], commitment[:])

		// Create an invalid proof
		invalidProof := Bytes48{}
		proofsBytes := []Bytes48{invalidProof}

		valid, err := VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, selectedCells, proofsBytes)
		require.NotNil(t, err)
		require.Equal(t, false, valid)
	})
}

func TestRecoverCells(t *testing.T) {
	require.NoError(t, Start())

	t.Run("recover from partial cells", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		cells, err := ComputeCells(&blob)
		require.NoError(t, err)

		// Use half of the cells
		partialIndices := make([]uint64, 64)
		partialCells := make([]Cell, 64)
		for i := range 64 {
			partialIndices[i] = uint64(i)
			partialCells[i] = cells[i]
		}

		recoveredCells, err := RecoverCells(partialIndices, partialCells)
		require.NoError(t, err)
		require.Equal(t, 128, len(recoveredCells))

		// Verify recovered cells match original
		for i := range cells {
			require.Equal(t, cells[i], recoveredCells[i])
		}
	})

	t.Run("insufficient cells should fail", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		cells, err := ComputeCells(&blob)
		require.NoError(t, err)

		// Use only 32 cells (less than 50% required)
		partialIndices := make([]uint64, 32)
		partialCells := make([]Cell, 32)
		for i := range 32 {
			partialIndices[i] = uint64(i)
			partialCells[i] = cells[i]
		}

		_, err = RecoverCells(partialIndices, partialCells)
		require.NotNil(t, err)
	})
}

func TestRecoverCellsAndKZGProofs(t *testing.T) {
	require.NoError(t, Start())

	t.Run("recover cells and proofs from partial cells", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		cells, proofs, err := ComputeCellsAndKZGProofs(&blob)
		require.NoError(t, err)

		// Use half of the cells
		partialIndices := make([]uint64, 64)
		partialCells := make([]Cell, 64)
		for i := range 64 {
			partialIndices[i] = uint64(i)
			partialCells[i] = cells[i]
		}

		recoveredCells, recoveredProofs, err := RecoverCellsAndKZGProofs(partialIndices, partialCells)
		require.NoError(t, err)
		require.Equal(t, 128, len(recoveredCells))
		require.Equal(t, 128, len(recoveredProofs))
		require.Equal(t, len(recoveredCells), len(recoveredProofs), "recovered cells and proofs should have matching lengths")

		// Verify recovered cells match original
		for i := range cells {
			require.Equal(t, cells[i], recoveredCells[i])
			require.Equal(t, proofs[i], recoveredProofs[i])
		}
	})

	t.Run("insufficient cells should fail", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		cells, err := ComputeCells(&blob)
		require.NoError(t, err)

		// Use only 32 cells (less than 50% required)
		partialIndices := make([]uint64, 32)
		partialCells := make([]Cell, 32)
		for i := range 32 {
			partialIndices[i] = uint64(i)
			partialCells[i] = cells[i]
		}

		_, _, err = RecoverCellsAndKZGProofs(partialIndices, partialCells)
		require.NotNil(t, err)
	})
}

func TestBlobToKZGCommitment(t *testing.T) {
	require.NoError(t, Start())

	t.Run("valid blob", func(t *testing.T) {
		randBlob := random.GetRandBlob(123)
		var blob Blob
		copy(blob[:], randBlob[:])

		commitment, err := BlobToKZGCommitment(&blob)
		require.NoError(t, err)
		require.Equal(t, 48, len(commitment))

		// Verify commitment is deterministic
		commitment2, err := BlobToKZGCommitment(&blob)
		require.NoError(t, err)
		require.Equal(t, commitment, commitment2)
	})
}
```
```diff
@@ -203,13 +203,13 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
 	require.NoError(t, err)
 
 	// Compute cells and proofs
-	cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
+	_, proofs, err := ComputeCellsAndKZGProofs(&blob)
 	require.NoError(t, err)
 
 	// Create flattened cell proofs (like execution client format)
 	cellProofs := make([][]byte, numberOfColumns)
 	for i := range numberOfColumns {
-		cellProofs[i] = cellsAndProofs.Proofs[i][:]
+		cellProofs[i] = proofs[i][:]
 	}
 
 	blobs := [][]byte{blob[:]}
@@ -236,7 +236,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
 	require.NoError(t, err)
 
 	// Compute cells and proofs
-	cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
+	_, proofs, err := ComputeCellsAndKZGProofs(&blob)
 	require.NoError(t, err)
 
 	blobs[i] = blob[:]
@@ -244,7 +244,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
 
 	// Add cell proofs for this blob
 	for j := range numberOfColumns {
-		allCellProofs = append(allCellProofs, cellsAndProofs.Proofs[j][:])
+		allCellProofs = append(allCellProofs, proofs[j][:])
 	}
 }
@@ -319,7 +319,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
 	randBlob := random.GetRandBlob(123)
 	var blob Blob
 	copy(blob[:], randBlob[:])
-	cellsAndProofs, err := ComputeCellsAndKZGProofs(&blob)
+	_, proofs, err := ComputeCellsAndKZGProofs(&blob)
 	require.NoError(t, err)
 
 	// Generate wrong commitment from different blob
@@ -331,7 +331,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
 
 	cellProofs := make([][]byte, numberOfColumns)
 	for i := range numberOfColumns {
-		cellProofs[i] = cellsAndProofs.Proofs[i][:]
+		cellProofs[i] = proofs[i][:]
 	}
 
 	blobs := [][]byte{blob[:]}
```
```diff
@@ -43,6 +43,7 @@ go_test(
         "das_core_test.go",
         "info_test.go",
         "p2p_interface_test.go",
+        "reconstruction_helpers_test.go",
         "reconstruction_test.go",
         "utils_test.go",
         "validator_test.go",
```
```diff
@@ -387,10 +387,10 @@ func generateRandomSidecars(t testing.TB, seed, blobCount int64) []blocks.ROData
 	sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
 	require.NoError(t, err)
 
-	cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
+	cellsPerBlob, proofsPerBlob := util.GenerateCellsAndProofs(t, blobs)
 	rob, err := blocks.NewROBlock(sBlock)
 	require.NoError(t, err)
-	sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+	sidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
 	require.NoError(t, err)
 
 	return sidecars
```
```diff
@@ -2,6 +2,7 @@ package peerdas
 
 import (
 	"sort"
+	"sync"
 
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
 	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -28,6 +29,80 @@ func MinimumColumnCountToReconstruct() uint64 {
 	return (params.BeaconConfig().NumberOfColumns + 1) / 2
 }
```
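With the mainnet column count of 128, any 64 of the extended columns are enough to rebuild the rest. A quick illustration; the constant 128 is assumed here rather than read from the beacon config:

```go
// Illustration only: mirrors MinimumColumnCountToReconstruct for NumberOfColumns = 128.
const numberOfColumns = 128
const minimumToReconstruct = (numberOfColumns + 1) / 2 // == 64
```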
```go
// recoverCellsForBlobs reconstructs cells for specified blobs from the given data column sidecars.
// This is optimized to only recover cells without computing proofs.
// Returns a map from blob index to recovered cells.
func recoverCellsForBlobs(verifiedRoSidecars []blocks.VerifiedRODataColumn, blobIndices []int) (map[int][]kzg.Cell, error) {
	sidecarCount := len(verifiedRoSidecars)
	var wg errgroup.Group

	cellsPerBlob := make(map[int][]kzg.Cell, len(blobIndices))
	var mu sync.Mutex

	for _, blobIndex := range blobIndices {
		wg.Go(func() error {
			cellsIndices := make([]uint64, 0, sidecarCount)
			cells := make([]kzg.Cell, 0, sidecarCount)

			for _, sidecar := range verifiedRoSidecars {
				cell := sidecar.Column[blobIndex]
				cells = append(cells, kzg.Cell(cell))
				cellsIndices = append(cellsIndices, sidecar.Index)
			}

			recoveredCells, err := kzg.RecoverCells(cellsIndices, cells)
			if err != nil {
				return errors.Wrapf(err, "recover cells for blob %d", blobIndex)
			}

			mu.Lock()
			cellsPerBlob[blobIndex] = recoveredCells
			mu.Unlock()
			return nil
		})
	}

	if err := wg.Wait(); err != nil {
		return nil, errors.Wrap(err, "wait for RecoverCells")
	}
	return cellsPerBlob, nil
}
```
```go
// recoverCellsAndProofsForBlobs reconstructs both cells and proofs for specified blobs from the given data column sidecars.
func recoverCellsAndProofsForBlobs(verifiedRoSidecars []blocks.VerifiedRODataColumn, blobIndices []int) ([][]kzg.Cell, [][]kzg.Proof, error) {
	sidecarCount := len(verifiedRoSidecars)
	var wg errgroup.Group

	cellsPerBlob := make([][]kzg.Cell, len(blobIndices))
	proofsPerBlob := make([][]kzg.Proof, len(blobIndices))

	for i, blobIndex := range blobIndices {
		wg.Go(func() error {
			cellsIndices := make([]uint64, 0, sidecarCount)
			cells := make([]kzg.Cell, 0, sidecarCount)

			for _, sidecar := range verifiedRoSidecars {
				cell := sidecar.Column[blobIndex]
				cells = append(cells, kzg.Cell(cell))
				cellsIndices = append(cellsIndices, sidecar.Index)
			}

			recoveredCells, recoveredProofs, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
			if err != nil {
				return errors.Wrapf(err, "recover cells and KZG proofs for blob %d", blobIndex)
			}
			cellsPerBlob[i] = recoveredCells
			proofsPerBlob[i] = recoveredProofs
			return nil
		})
	}

	if err := wg.Wait(); err != nil {
		return nil, nil, errors.Wrap(err, "wait for RecoverCellsAndKZGProofs")
	}
	return cellsPerBlob, proofsPerBlob, nil
}
```
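A short sketch contrasting the two helpers, assuming verified sidecars for a single block are already in hand; this is illustrative glue code, not part of the diff:

```go
// recoverSketch shows both recovery paths; each helper runs one recovery per
// requested blob in parallel via errgroup.
func recoverSketch(sidecars []blocks.VerifiedRODataColumn) error {
	// Cells only: enough to reassemble raw blob bytes; skips proof computation.
	cellsByBlob, err := recoverCellsForBlobs(sidecars, []int{0, 2})
	if err != nil {
		return err
	}
	_ = cellsByBlob

	// Cells and proofs: needed when full data column sidecars must be rebuilt.
	cells, proofs, err := recoverCellsAndProofsForBlobs(sidecars, []int{0, 1, 2})
	if err != nil {
		return err
	}
	_, _ = cells, proofs
	return nil
}
```

Note the design difference: the cells-only path returns a mutex-guarded map keyed by the caller's (possibly sparse) blob indices, while the proofs path writes into index-aligned slices, which is safe without a lock because each goroutine writes a distinct index.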
```go
// ReconstructDataColumnSidecars reconstructs all the data column sidecars from the given input data column sidecars.
// All input sidecars must be committed to the same block.
// `inVerifiedRoSidecars` should contain enough sidecars to reconstruct the missing columns, and should not contain any duplicate.
```

```diff
@@ -66,38 +141,16 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
 	})
 
-	// Recover cells and compute proofs in parallel.
-	var wg errgroup.Group
-	cellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
-	for blobIndex := range uint64(blobCount) {
-		wg.Go(func() error {
-			cellsIndices := make([]uint64, 0, sidecarCount)
-			cells := make([]kzg.Cell, 0, sidecarCount)
-
-			for _, sidecar := range verifiedRoSidecars {
-				cell := sidecar.Column[blobIndex]
-				cells = append(cells, kzg.Cell(cell))
-				cellsIndices = append(cellsIndices, sidecar.Index)
-			}
-
-			// Recover the cells and proofs for the corresponding blob
-			cellsAndProofsForBlob, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
-
-			if err != nil {
-				return errors.Wrapf(err, "recover cells and KZG proofs for blob %d", blobIndex)
-			}
-
-			// It is safe for multiple goroutines to concurrently write to the same slice,
-			// as long as they are writing to different indices, which is the case here.
-			cellsAndProofs[blobIndex] = cellsAndProofsForBlob
-			return nil
-		})
+	blobIndices := make([]int, blobCount)
+	for i := range blobIndices {
+		blobIndices[i] = i
 	}
-
-	if err := wg.Wait(); err != nil {
-		return nil, errors.Wrap(err, "wait for RecoverCellsAndKZGProofs")
+	cellsPerBlob, proofsPerBlob, err := recoverCellsAndProofsForBlobs(verifiedRoSidecars, blobIndices)
+	if err != nil {
+		return nil, errors.Wrap(err, "recover cells and proofs for blobs")
 	}
 
-	outSidecars, err := DataColumnSidecars(cellsAndProofs, PopulateFromSidecar(referenceSidecar))
+	outSidecars, err := DataColumnSidecars(cellsPerBlob, proofsPerBlob, PopulateFromSidecar(referenceSidecar))
 	if err != nil {
 		return nil, errors.Wrap(err, "data column sidecars from items")
 	}
```
```diff
@@ -113,18 +166,192 @@ func ReconstructDataColumnSidecars(verifiedRoSidecars []blocks.VerifiedRODataCol
 	return reconstructedVerifiedRoSidecars, nil
 }
 
-// ReconstructBlobs constructs verified read only blobs sidecars from verified read only blob sidecars.
+// reconstructIfNeeded validates the input data column sidecars and returns the prepared sidecars
+// (reconstructed if necessary). This function performs common validation and reconstruction logic used by
+// both ReconstructBlobs and ReconstructBlobSidecars.
+func reconstructIfNeeded(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn) ([]blocks.VerifiedRODataColumn, error) {
+	if len(verifiedDataColumnSidecars) == 0 {
+		return nil, ErrNotEnoughDataColumnSidecars
+	}
+
+	// Check if the sidecars are sorted by index and do not contain duplicates.
+	previousColumnIndex := verifiedDataColumnSidecars[0].Index
+	for _, dataColumnSidecar := range verifiedDataColumnSidecars[1:] {
+		columnIndex := dataColumnSidecar.Index
+		if columnIndex <= previousColumnIndex {
+			return nil, ErrDataColumnSidecarsNotSortedByIndex
+		}
+
+		previousColumnIndex = columnIndex
+	}
+
+	// Check if we have enough columns.
+	cellsPerBlob := fieldparams.CellsPerBlob
+	if len(verifiedDataColumnSidecars) < cellsPerBlob {
+		return nil, ErrNotEnoughDataColumnSidecars
+	}
+
+	// If all column sidecars corresponding to (non-extended) blobs are present, no need to reconstruct.
+	if verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1) {
+		return verifiedDataColumnSidecars, nil
+	}
+
+	// We need to reconstruct the data column sidecars.
+	return ReconstructDataColumnSidecars(verifiedDataColumnSidecars)
+}
+
+// ReconstructBlobSidecars constructs verified read only blob sidecars from verified read only data column sidecars.
 // The following constraints must be satisfied:
 // - All `dataColumnSidecars` has to be committed to the same block, and
 // - `dataColumnSidecars` must be sorted by index and should not contain duplicates.
 // - `dataColumnSidecars` must contain either all sidecars corresponding to (non-extended) blobs,
-//   or either enough sidecars to reconstruct the blobs.
-func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.VerifiedRODataColumn, indices []int) ([]*blocks.VerifiedROBlob, error) {
+//   - either enough sidecars to reconstruct the blobs.
+func ReconstructBlobSidecars(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.VerifiedRODataColumn, indices []int) ([]*blocks.VerifiedROBlob, error) {
+	// Return early if no blobs are requested.
+	if len(indices) == 0 {
+		return nil, nil
+	}
+
+	// Validate and prepare data columns (reconstruct if necessary).
+	// This also checks if input is empty.
+	preparedDataColumnSidecars, err := reconstructIfNeeded(verifiedDataColumnSidecars)
+	if err != nil {
+		return nil, err
+	}
+
+	// Check if the blob index is too high.
+	commitments, err := block.Block().Body().BlobKzgCommitments()
+	if err != nil {
+		return nil, errors.Wrap(err, "blob KZG commitments")
+	}
+
+	for _, blobIndex := range indices {
+		if blobIndex >= len(commitments) {
+			return nil, ErrBlobIndexTooHigh
+		}
+	}
+
+	// Check if the data column sidecars are aligned with the block.
+	dataColumnSidecars := make([]blocks.RODataColumn, 0, len(preparedDataColumnSidecars))
+	for _, verifiedDataColumnSidecar := range preparedDataColumnSidecars {
+		dataColumnSidecar := verifiedDataColumnSidecar.RODataColumn
+		dataColumnSidecars = append(dataColumnSidecars, dataColumnSidecar)
+	}
+
+	if err := DataColumnsAlignWithBlock(block, dataColumnSidecars); err != nil {
+		return nil, errors.Wrap(err, "data columns align with block")
+	}
+
+	// Convert verified data column sidecars to verified blob sidecars.
+	blobSidecars, err := blobSidecarsFromDataColumnSidecars(block, preparedDataColumnSidecars, indices)
+	if err != nil {
+		return nil, errors.Wrap(err, "blob sidecars from data column sidecars")
+	}
+
+	return blobSidecars, nil
+}
```
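A hedged usage sketch of the renamed entry point, assuming the caller already holds a verified block and its sorted, verified column sidecars; `fetchBlobSidecars` is hypothetical:

```go
// Sketch only: `block` and `columns` come from the caller's storage layer.
func fetchBlobSidecars(block blocks.ROBlock, columns []blocks.VerifiedRODataColumn) ([]*blocks.VerifiedROBlob, error) {
	// Request blob sidecars 0 and 1; reconstruction happens transparently when
	// the first 64 (non-extended) columns are not all present.
	return peerdas.ReconstructBlobSidecars(block, columns, []int{0, 1})
}
```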
```go
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
	numberOfColumns := params.BeaconConfig().NumberOfColumns
	blobCount := uint64(len(blobs))
	cellProofsCount := uint64(len(cellProofs))

	cellsCount := blobCount * numberOfColumns
	if cellsCount != cellProofsCount {
		return nil, nil, ErrBlobsCellsProofsMismatch
	}

	cellsPerBlob := make([][]kzg.Cell, 0, blobCount)
	proofsPerBlob := make([][]kzg.Proof, 0, blobCount)
	for i, blob := range blobs {
		var kzgBlob kzg.Blob
		if copy(kzgBlob[:], blob) != len(kzgBlob) {
			return nil, nil, errors.New("wrong blob size - should never happen")
		}

		// Compute the extended cells from the (non-extended) blob.
		cells, err := kzg.ComputeCells(&kzgBlob)
		if err != nil {
			return nil, nil, errors.Wrap(err, "compute cells")
		}

		var proofs []kzg.Proof
		for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
			var kzgProof kzg.Proof
			if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
				return nil, nil, errors.New("wrong KZG proof size - should never happen")
			}

			proofs = append(proofs, kzgProof)
		}

		cellsPerBlob = append(cellsPerBlob, cells)
		proofsPerBlob = append(proofsPerBlob, proofs)
	}

	return cellsPerBlob, proofsPerBlob, nil
}

// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
	numberOfColumns := params.BeaconConfig().NumberOfColumns

	cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
	proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
	for _, blobAndProof := range blobsAndProofs {
		if blobAndProof == nil {
			return nil, nil, ErrNilBlobAndProof
		}

		var kzgBlob kzg.Blob
		if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
			return nil, nil, errors.New("wrong blob size - should never happen")
		}

		// Compute the extended cells from the (non-extended) blob.
		cells, err := kzg.ComputeCells(&kzgBlob)
		if err != nil {
			return nil, nil, errors.Wrap(err, "compute cells")
		}

		kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
		for _, kzgProofBytes := range blobAndProof.KzgProofs {
			if len(kzgProofBytes) != kzg.BytesPerProof {
				return nil, nil, errors.New("wrong KZG proof size - should never happen")
			}

			var kzgProof kzg.Proof
			if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
				return nil, nil, errors.New("wrong copied KZG proof size - should never happen")
			}

			kzgProofs = append(kzgProofs, kzgProof)
		}

		cellsPerBlob = append(cellsPerBlob, cells)
		proofsPerBlob = append(proofsPerBlob, kzgProofs)
	}

	return cellsPerBlob, proofsPerBlob, nil
}
```
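The flat format concatenates all cell proofs blob by blob, so the proof for blob `i`, column `j` lives at index `i*numberOfColumns + j`. A small sketch of that indexing; the helper name is hypothetical and 128 columns is assumed for the worked value:

```go
// flatProofIndex illustrates the flat cell-proof layout consumed by
// ComputeCellsAndProofsFromFlat. With numberOfColumns = 128, blob 2's proof
// for column 5 sits at 2*128 + 5 = 261.
func flatProofIndex(blobIndex, columnIndex, numberOfColumns uint64) uint64 {
	return blobIndex*numberOfColumns + columnIndex
}
```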
```go
// ReconstructBlobs reconstructs blobs from data column sidecars without computing KZG proofs or creating sidecars.
// This is an optimized version for when only the blob data is needed (e.g., for the GetBlobs endpoint).
// The following constraints must be satisfied:
// - All `dataColumnSidecars` must be committed to the same block, and
// - `dataColumnSidecars` must be sorted by index and should not contain duplicates.
// - `dataColumnSidecars` must contain either all sidecars corresponding to (non-extended) blobs,
//   - or enough sidecars to reconstruct the blobs.
func ReconstructBlobs(verifiedDataColumnSidecars []blocks.VerifiedRODataColumn, indices []int, blobCount int) ([][]byte, error) {
	// If no specific indices are requested, populate with all blob indices.
	if len(indices) == 0 {
		indices = make([]int, blobCount)
		for i := range indices {
			indices[i] = i
		}
	}

	if len(verifiedDataColumnSidecars) == 0 {
		return nil, ErrNotEnoughDataColumnSidecars
	}
```

```diff
@@ -146,136 +373,70 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
 		return nil, ErrNotEnoughDataColumnSidecars
 	}
 
-	// Check if the blob index is too high.
-	commitments, err := block.Block().Body().BlobKzgCommitments()
-	if err != nil {
-		return nil, errors.Wrap(err, "blob KZG commitments")
+	// Verify that the actual blob count from the first sidecar matches the expected count
+	referenceSidecar := verifiedDataColumnSidecars[0]
+	actualBlobCount := len(referenceSidecar.Column)
+	if actualBlobCount != blobCount {
+		return nil, errors.Errorf("blob count mismatch: expected %d, got %d", blobCount, actualBlobCount)
 	}
 
+	// Check if the blob index is too high.
 	for _, blobIndex := range indices {
-		if blobIndex >= len(commitments) {
+		if blobIndex >= blobCount {
 			return nil, ErrBlobIndexTooHigh
 		}
 	}
 
-	// Check if the data column sidecars are aligned with the block.
-	dataColumnSidecars := make([]blocks.RODataColumn, 0, len(verifiedDataColumnSidecars))
-	for _, verifiedDataColumnSidecar := range verifiedDataColumnSidecars {
-		dataColumnSidecar := verifiedDataColumnSidecar.RODataColumn
-		dataColumnSidecars = append(dataColumnSidecars, dataColumnSidecar)
+	// Check if all columns have the same length and are committed to the same block.
+	blockRoot := referenceSidecar.BlockRoot()
+	for _, sidecar := range verifiedDataColumnSidecars[1:] {
+		if len(sidecar.Column) != blobCount {
+			return nil, ErrColumnLengthsDiffer
+		}
+
+		if sidecar.BlockRoot() != blockRoot {
+			return nil, ErrBlockRootMismatch
+		}
 	}
 
-	if err := DataColumnsAlignWithBlock(block, dataColumnSidecars); err != nil {
-		return nil, errors.Wrap(err, "data columns align with block")
-	}
+	// Check if we have all non-extended columns (0..63) - if so, no reconstruction needed.
+	hasAllNonExtendedColumns := verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1)
 
-	// If all column sidecars corresponding to (non-extended) blobs are present, no need to reconstruct.
-	if verifiedDataColumnSidecars[cellsPerBlob-1].Index == uint64(cellsPerBlob-1) {
-		// Convert verified data column sidecars to verified blob sidecars.
-		blobSidecars, err := blobSidecarsFromDataColumnSidecars(block, verifiedDataColumnSidecars, indices)
+	var reconstructedCells map[int][]kzg.Cell
+	if !hasAllNonExtendedColumns {
+		// Need to reconstruct cells (but NOT proofs) for the requested blobs only.
+		var err error
+		reconstructedCells, err = recoverCellsForBlobs(verifiedDataColumnSidecars, indices)
 		if err != nil {
-			return nil, errors.Wrap(err, "blob sidecars from data column sidecars")
+			return nil, errors.Wrap(err, "recover cells")
 		}
-
-		return blobSidecars, nil
-	}
-
-	// We need to reconstruct the data column sidecars.
-	reconstructedDataColumnSidecars, err := ReconstructDataColumnSidecars(verifiedDataColumnSidecars)
-	if err != nil {
-		return nil, errors.Wrap(err, "reconstruct data column sidecars")
 	}
 
-	// Convert verified data column sidecars to verified blob sidecars.
-	blobSidecars, err := blobSidecarsFromDataColumnSidecars(block, reconstructedDataColumnSidecars, indices)
-	if err != nil {
-		return nil, errors.Wrap(err, "blob sidecars from data column sidecars")
-	}
-
-	return blobSidecars, nil
-}
-
-// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
-func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([]kzg.CellsAndProofs, error) {
-	numberOfColumns := params.BeaconConfig().NumberOfColumns
-	blobCount := uint64(len(blobs))
-	cellProofsCount := uint64(len(cellProofs))
-
-	cellsCount := blobCount * numberOfColumns
-	if cellsCount != cellProofsCount {
-		return nil, ErrBlobsCellsProofsMismatch
-	}
-
-	cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobCount)
-	for i, blob := range blobs {
-		var kzgBlob kzg.Blob
-		if copy(kzgBlob[:], blob) != len(kzgBlob) {
-			return nil, errors.New("wrong blob size - should never happen")
-		}
-
-		// Compute the extended cells from the (non-extended) blob.
-		cells, err := kzg.ComputeCells(&kzgBlob)
-		if err != nil {
-			return nil, errors.Wrap(err, "compute cells")
-		}
-
-		var proofs []kzg.Proof
-		for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
-			var kzgProof kzg.Proof
-			if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
-				return nil, errors.New("wrong KZG proof size - should never happen")
+	// Extract blob data without computing proofs.
+	blobs := make([][]byte, 0, len(indices))
+	for _, blobIndex := range indices {
+		var blob kzg.Blob
+
+		// Compute the content of the blob.
+		for columnIndex := range cellsPerBlob {
+			var cell []byte
+			if hasAllNonExtendedColumns {
+				// Use existing cells from sidecars
+				cell = verifiedDataColumnSidecars[columnIndex].Column[blobIndex]
+			} else {
+				// Use reconstructed cells
+				cell = reconstructedCells[blobIndex][columnIndex][:]
 			}
 
-			proofs = append(proofs, kzgProof)
+			if copy(blob[kzg.BytesPerCell*columnIndex:], cell) != kzg.BytesPerCell {
+				return nil, errors.New("wrong cell size - should never happen")
+			}
 		}
 
-		cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: proofs}
-		cellsAndProofs = append(cellsAndProofs, cellsProofs)
+		blobs = append(blobs, blob[:])
 	}
 
-	return cellsAndProofs, nil
-}
-
-// ComputeCellsAndProofs computes the cells and proofs from blobs and cell proofs.
-func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([]kzg.CellsAndProofs, error) {
-	numberOfColumns := params.BeaconConfig().NumberOfColumns
-
-	cellsAndProofs := make([]kzg.CellsAndProofs, 0, len(blobsAndProofs))
-	for _, blobAndProof := range blobsAndProofs {
-		if blobAndProof == nil {
-			return nil, ErrNilBlobAndProof
-		}
-
-		var kzgBlob kzg.Blob
-		if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
-			return nil, errors.New("wrong blob size - should never happen")
-		}
-
-		// Compute the extended cells from the (non-extended) blob.
-		cells, err := kzg.ComputeCells(&kzgBlob)
-		if err != nil {
-			return nil, errors.Wrap(err, "compute cells")
-		}
-
-		kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
-		for _, kzgProofBytes := range blobAndProof.KzgProofs {
-			if len(kzgProofBytes) != kzg.BytesPerProof {
-				return nil, errors.New("wrong KZG proof size - should never happen")
-			}
-
-			var kzgProof kzg.Proof
-			if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
-				return nil, errors.New("wrong copied KZG proof size - should never happen")
-			}
-
-			kzgProofs = append(kzgProofs, kzgProof)
-		}
-
-		cellsProofs := kzg.CellsAndProofs{Cells: cells, Proofs: kzgProofs}
-		cellsAndProofs = append(cellsAndProofs, cellsProofs)
-	}
-
-	return cellsAndProofs, nil
+	return blobs, nil
 }
 
 // blobSidecarsFromDataColumnSidecars converts verified data column sidecars to verified blob sidecars.
```
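A hedged sketch of the proof-free path, assuming the caller already knows the block's blob count (for example from its KZG commitments); `fetchRawBlobs` is hypothetical:

```go
// Sketch only: returns raw blob bytes for blobs 0 and 1 without recomputing proofs.
func fetchRawBlobs(columns []blocks.VerifiedRODataColumn, blobCount int) ([][]byte, error) {
	return peerdas.ReconstructBlobs(columns, []int{0, 1}, blobCount)
}
```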
beacon-chain/core/peerdas/reconstruction_helpers_test.go (new file, 79 lines)
@@ -0,0 +1,79 @@
```go
package peerdas_test

// Test helpers for reconstruction tests

import (
	"testing"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v7/testing/require"
	"github.com/OffchainLabs/prysm/v7/testing/util"
)

// testBlobSetup holds common test data for blob reconstruction tests.
type testBlobSetup struct {
	blobCount                    int
	blobs                        []kzg.Blob
	roBlock                      blocks.ROBlock
	roDataColumnSidecars         []blocks.RODataColumn
	verifiedRoDataColumnSidecars []blocks.VerifiedRODataColumn
}

// setupTestBlobs creates a complete test setup with blobs, cells, proofs, and data column sidecars.
func setupTestBlobs(t *testing.T, blobCount int) *testBlobSetup {
	_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, 42, blobCount)

	blobs := make([]kzg.Blob, blobCount)
	for i := range blobCount {
		copy(blobs[i][:], roBlobSidecars[i].Blob)
	}

	cellsPerBlob, proofsPerBlob := util.GenerateCellsAndProofs(t, blobs)

	fs := util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)
	roBlock, _, _ := util.GenerateTestFuluBlockWithSidecars(t, blobCount, util.WithSlot(fs))

	roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(roBlock))
	require.NoError(t, err)

	verifiedRoSidecars := toVerifiedSidecars(roDataColumnSidecars)

	return &testBlobSetup{
		blobCount:                    blobCount,
		blobs:                        blobs,
		roBlock:                      roBlock,
		roDataColumnSidecars:         roDataColumnSidecars,
		verifiedRoDataColumnSidecars: verifiedRoSidecars,
	}
}

// toVerifiedSidecars converts a slice of RODataColumn to VerifiedRODataColumn.
func toVerifiedSidecars(roDataColumnSidecars []blocks.RODataColumn) []blocks.VerifiedRODataColumn {
	verifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumnSidecars))
	for _, roDataColumnSidecar := range roDataColumnSidecars {
		verifiedRoSidecar := blocks.NewVerifiedRODataColumn(roDataColumnSidecar)
		verifiedRoSidecars = append(verifiedRoSidecars, verifiedRoSidecar)
	}
	return verifiedRoSidecars
}

// filterEvenIndexedSidecars returns only the even-indexed sidecars (0, 2, 4, ...).
// This is useful for forcing reconstruction in tests.
func filterEvenIndexedSidecars(sidecars []blocks.VerifiedRODataColumn) []blocks.VerifiedRODataColumn {
	filtered := make([]blocks.VerifiedRODataColumn, 0, len(sidecars)/2)
	for i := 0; i < len(sidecars); i += 2 {
		filtered = append(filtered, sidecars[i])
	}
	return filtered
}

// setupFuluForkEpoch sets up the test configuration with Fulu fork after Electra.
func setupFuluForkEpoch(t *testing.T) primitives.Slot {
	params.SetupTestConfigCleanup(t)
	params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
	return util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)
}
```
|
||||
@@ -124,7 +124,7 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
    })
}

-func TestReconstructBlobs(t *testing.T) {
+func TestReconstructBlobSidecars(t *testing.T) {
    params.SetupTestConfigCleanup(t)
    params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2

@@ -133,13 +133,13 @@ func TestReconstructBlobs(t *testing.T) {
    fs := util.SlotAtEpoch(t, params.BeaconConfig().FuluForkEpoch)

    t.Run("no index", func(t *testing.T) {
-       actual, err := peerdas.ReconstructBlobs(emptyBlock, nil, nil)
+       actual, err := peerdas.ReconstructBlobSidecars(emptyBlock, nil, nil)
        require.NoError(t, err)
        require.IsNil(t, actual)
    })

    t.Run("empty input", func(t *testing.T) {
-       _, err := peerdas.ReconstructBlobs(emptyBlock, nil, []int{0})
+       _, err := peerdas.ReconstructBlobSidecars(emptyBlock, nil, []int{0})
        require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
    })

@@ -149,7 +149,7 @@ func TestReconstructBlobs(t *testing.T) {
        // Arbitrarily change the order of the sidecars.
        verifiedRoSidecars[3], verifiedRoSidecars[2] = verifiedRoSidecars[2], verifiedRoSidecars[3]

-       _, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
+       _, err := peerdas.ReconstructBlobSidecars(emptyBlock, verifiedRoSidecars, []int{0})
        require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
    })

@@ -159,7 +159,7 @@ func TestReconstructBlobs(t *testing.T) {
        // [0, 1, 1, 3, 4, ...]
        verifiedRoSidecars[2] = verifiedRoSidecars[1]

-       _, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
+       _, err := peerdas.ReconstructBlobSidecars(emptyBlock, verifiedRoSidecars, []int{0})
        require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
    })

@@ -169,7 +169,7 @@ func TestReconstructBlobs(t *testing.T) {
        // [0, 1, 2, 1, 4, ...]
        verifiedRoSidecars[3] = verifiedRoSidecars[1]

-       _, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
+       _, err := peerdas.ReconstructBlobSidecars(emptyBlock, verifiedRoSidecars, []int{0})
        require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
    })

@@ -177,7 +177,7 @@ func TestReconstructBlobs(t *testing.T) {
        _, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)

        inputSidecars := verifiedRoSidecars[:fieldparams.CellsPerBlob-1]
-       _, err := peerdas.ReconstructBlobs(emptyBlock, inputSidecars, []int{0})
+       _, err := peerdas.ReconstructBlobSidecars(emptyBlock, inputSidecars, []int{0})
        require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
    })

@@ -186,7 +186,7 @@ func TestReconstructBlobs(t *testing.T) {

        roBlock, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, blobCount)

-       _, err := peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, []int{1, blobCount})
+       _, err := peerdas.ReconstructBlobSidecars(roBlock, verifiedRoSidecars, []int{1, blobCount})
        require.ErrorIs(t, err, peerdas.ErrBlobIndexTooHigh)
    })

@@ -194,7 +194,7 @@ func TestReconstructBlobs(t *testing.T) {
        _, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{1}), util.WithSlot(fs))
        roBlock, _, _ := util.GenerateTestFuluBlockWithSidecars(t, 3, util.WithParentRoot([fieldparams.RootLength]byte{2}), util.WithSlot(fs))

-       _, err := peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, []int{0})
+       _, err := peerdas.ReconstructBlobSidecars(roBlock, verifiedRoSidecars, []int{0})
        require.ErrorContains(t, peerdas.ErrRootMismatch.Error(), err)
    })

@@ -207,7 +207,8 @@ func TestReconstructBlobs(t *testing.T) {
        // Compute cells and proofs from blob sidecars.
        var wg errgroup.Group
        blobs := make([][]byte, blobCount)
-       inputCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
+       inputCellsPerBlob := make([][]kzg.Cell, blobCount)
+       inputProofsPerBlob := make([][]kzg.Proof, blobCount)
        for i := range blobCount {
            blob := roBlobSidecars[i].Blob
            blobs[i] = blob
@@ -217,14 +218,15 @@ func TestReconstructBlobs(t *testing.T) {
            count := copy(kzgBlob[:], blob)
            require.Equal(t, len(kzgBlob), count)

-           cp, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
+           cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
            if err != nil {
                return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
            }

            // It is safe for multiple goroutines to concurrently write to the same slice,
            // as long as they are writing to different indices, which is the case here.
-           inputCellsAndProofs[i] = cp
+           inputCellsPerBlob[i] = cells
+           inputProofsPerBlob[i] = proofs

            return nil
        })
@@ -235,18 +237,18 @@ func TestReconstructBlobs(t *testing.T) {

        // Flatten proofs.
        cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
-       for _, cp := range inputCellsAndProofs {
-           for _, proof := range cp.Proofs {
+       for _, proofs := range inputProofsPerBlob {
+           for _, proof := range proofs {
                cellProofs = append(cellProofs, proof[:])
            }
        }

        // Compute cells and proofs from the blobs and cell proofs.
-       cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
+       cellsPerBlob, proofsPerBlob, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
        require.NoError(t, err)

        // Construct data column sidecars from the signed block and cells and proofs.
-       roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(roBlock))
+       roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(roBlock))
        require.NoError(t, err)

        // Convert to verified data column sidecars.
@@ -260,7 +262,7 @@ func TestReconstructBlobs(t *testing.T) {

    t.Run("no reconstruction needed", func(t *testing.T) {
        // Reconstruct blobs.
-       reconstructedVerifiedRoBlobSidecars, err := peerdas.ReconstructBlobs(roBlock, verifiedRoSidecars, indices)
+       reconstructedVerifiedRoBlobSidecars, err := peerdas.ReconstructBlobSidecars(roBlock, verifiedRoSidecars, indices)
        require.NoError(t, err)

        // Compare blobs.
@@ -280,7 +282,7 @@ func TestReconstructBlobs(t *testing.T) {
        }

        // Reconstruct blobs.
-       reconstructedVerifiedRoBlobSidecars, err := peerdas.ReconstructBlobs(roBlock, filteredSidecars, indices)
+       reconstructedVerifiedRoBlobSidecars, err := peerdas.ReconstructBlobSidecars(roBlock, filteredSidecars, indices)
        require.NoError(t, err)

        // Compare blobs.
@@ -296,6 +298,135 @@ func TestReconstructBlobs(t *testing.T) {

}

+func TestReconstructBlobs(t *testing.T) {
+   setupFuluForkEpoch(t)
+   require.NoError(t, kzg.Start())
+
+   t.Run("empty indices with blobCount > 0", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Call with empty indices - should return all blobs
+       reconstructedBlobs, err := peerdas.ReconstructBlobs(setup.verifiedRoDataColumnSidecars, []int{}, setup.blobCount)
+       require.NoError(t, err)
+       require.Equal(t, setup.blobCount, len(reconstructedBlobs))
+
+       // Verify each blob matches
+       for i := 0; i < setup.blobCount; i++ {
+           require.DeepEqual(t, setup.blobs[i][:], reconstructedBlobs[i])
+       }
+   })
+
+   t.Run("specific indices", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Request only blobs at indices 0 and 2
+       indices := []int{0, 2}
+       reconstructedBlobs, err := peerdas.ReconstructBlobs(setup.verifiedRoDataColumnSidecars, indices, setup.blobCount)
+       require.NoError(t, err)
+       require.Equal(t, len(indices), len(reconstructedBlobs))
+
+       // Verify requested blobs match
+       for i, blobIndex := range indices {
+           require.DeepEqual(t, setup.blobs[blobIndex][:], reconstructedBlobs[i])
+       }
+   })
+
+   t.Run("blob count mismatch", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Pass wrong blob count
+       wrongBlobCount := 5
+       _, err := peerdas.ReconstructBlobs(setup.verifiedRoDataColumnSidecars, []int{0}, wrongBlobCount)
+       require.ErrorContains(t, "blob count mismatch", err)
+   })
+
+   t.Run("empty data columns", func(t *testing.T) {
+       _, err := peerdas.ReconstructBlobs([]blocks.VerifiedRODataColumn{}, []int{0}, 1)
+       require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
+   })
+
+   t.Run("index too high", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Request blob index that's too high
+       _, err := peerdas.ReconstructBlobs(setup.verifiedRoDataColumnSidecars, []int{setup.blobCount}, setup.blobCount)
+       require.ErrorIs(t, err, peerdas.ErrBlobIndexTooHigh)
+   })
+
+   t.Run("not enough columns", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Only provide 63 columns (need at least 64)
+       inputSidecars := setup.verifiedRoDataColumnSidecars[:fieldparams.CellsPerBlob-1]
+       _, err := peerdas.ReconstructBlobs(inputSidecars, []int{0}, setup.blobCount)
+       require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
+   })
+
+   t.Run("not sorted", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Swap two sidecars to make them unsorted
+       setup.verifiedRoDataColumnSidecars[3], setup.verifiedRoDataColumnSidecars[2] = setup.verifiedRoDataColumnSidecars[2], setup.verifiedRoDataColumnSidecars[3]
+
+       _, err := peerdas.ReconstructBlobs(setup.verifiedRoDataColumnSidecars, []int{0}, setup.blobCount)
+       require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
+   })
+
+   t.Run("with reconstruction needed", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Keep only even-indexed columns (will need reconstruction)
+       filteredSidecars := filterEvenIndexedSidecars(setup.verifiedRoDataColumnSidecars)
+
+       // Reconstruct all blobs
+       reconstructedBlobs, err := peerdas.ReconstructBlobs(filteredSidecars, []int{}, setup.blobCount)
+       require.NoError(t, err)
+       require.Equal(t, setup.blobCount, len(reconstructedBlobs))
+
+       // Verify all blobs match
+       for i := range setup.blobCount {
+           require.DeepEqual(t, setup.blobs[i][:], reconstructedBlobs[i])
+       }
+   })
+
+   t.Run("no reconstruction needed - all non-extended columns present", func(t *testing.T) {
+       setup := setupTestBlobs(t, 3)
+
+       // Use all columns (no reconstruction needed since we have all non-extended columns 0-63)
+       reconstructedBlobs, err := peerdas.ReconstructBlobs(setup.verifiedRoDataColumnSidecars, []int{1}, setup.blobCount)
+       require.NoError(t, err)
+       require.Equal(t, 1, len(reconstructedBlobs))
+
+       // Verify blob matches
+       require.DeepEqual(t, setup.blobs[1][:], reconstructedBlobs[0])
+   })
+
+   t.Run("reconstruct only requested blob indices", func(t *testing.T) {
+       // This test verifies the optimization: when reconstruction is needed and specific
+       // blob indices are requested, we only reconstruct those blobs, not all of them.
+       setup := setupTestBlobs(t, 6)
+
+       // Keep only even-indexed columns (will need reconstruction)
+       // This ensures we don't have all non-extended columns (0-63)
+       filteredSidecars := filterEvenIndexedSidecars(setup.verifiedRoDataColumnSidecars)
+
+       // Request only specific blob indices (not all of them)
+       requestedIndices := []int{1, 3, 5}
+       reconstructedBlobs, err := peerdas.ReconstructBlobs(filteredSidecars, requestedIndices, setup.blobCount)
+       require.NoError(t, err)
+
+       // Should only get the requested blobs back (not all 6)
+       require.Equal(t, len(requestedIndices), len(reconstructedBlobs),
+           "should only reconstruct requested blobs, not all blobs")
+
+       // Verify each requested blob matches the original
+       for i, blobIndex := range requestedIndices {
+           require.DeepEqual(t, setup.blobs[blobIndex][:], reconstructedBlobs[i],
+               "blob at index %d should match", blobIndex)
+       }
+   })
+}
+
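For readers following the API change: the new `peerdas.ReconstructBlobs` above returns raw blob bytes and skips KZG proof recomputation entirely, which is what makes it cheaper than `ReconstructBlobSidecars`. A hedged caller sketch (the wrapper name is illustrative, not part of this PR):

```go
// rawBlobsForBlock is a hypothetical wrapper around the new three-argument API.
// The sidecars must be sorted by column index, an empty indices slice means
// "all blobs", and blobCount must match the block's commitment count or the
// call fails with a blob count mismatch.
func rawBlobsForBlock(sidecars []blocks.VerifiedRODataColumn, blobCount int) ([][]byte, error) {
	return peerdas.ReconstructBlobs(sidecars, []int{}, blobCount)
}
```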
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
    // Start the trusted setup.
    err := kzg.Start()
@@ -310,7 +441,7 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
        // Create proofs for 2 blobs worth of columns
        cellProofs := make([][]byte, 2*numberOfColumns)

-       _, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
+       _, _, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
        require.ErrorIs(t, err, peerdas.ErrBlobsCellsProofsMismatch)
    })

@@ -323,7 +454,8 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {

        // Extract blobs and compute expected cells and proofs
        blobs := make([][]byte, blobCount)
-       expectedCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
+       expectedCellsPerBlob := make([][]kzg.Cell, blobCount)
+       expectedProofsPerBlob := make([][]kzg.Proof, blobCount)
        var wg errgroup.Group

        for i := range blobCount {
@@ -335,12 +467,13 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
            count := copy(kzgBlob[:], blob)
            require.Equal(t, len(kzgBlob), count)

-           cp, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
+           cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
            if err != nil {
                return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
            }

-           expectedCellsAndProofs[i] = cp
+           expectedCellsPerBlob[i] = cells
+           expectedProofsPerBlob[i] = proofs
            return nil
        })
    }
@@ -350,30 +483,30 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {

        // Flatten proofs
        cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
-       for _, cp := range expectedCellsAndProofs {
-           for _, proof := range cp.Proofs {
+       for _, proofs := range expectedProofsPerBlob {
+           for _, proof := range proofs {
                cellProofs = append(cellProofs, proof[:])
            }
        }

        // Test ComputeCellsAndProofs
-       actualCellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
+       actualCellsPerBlob, actualProofsPerBlob, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, cellProofs)
        require.NoError(t, err)
-       require.Equal(t, blobCount, len(actualCellsAndProofs))
+       require.Equal(t, blobCount, len(actualCellsPerBlob))

        // Verify the results match expected
        for i := range blobCount {
-           require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
-           require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
+           require.Equal(t, len(expectedCellsPerBlob[i]), len(actualCellsPerBlob[i]))
+           require.Equal(t, len(expectedProofsPerBlob[i]), len(actualProofsPerBlob[i]))

            // Compare cells
-           for j, expectedCell := range expectedCellsAndProofs[i].Cells {
-               require.Equal(t, expectedCell, actualCellsAndProofs[i].Cells[j])
+           for j, expectedCell := range expectedCellsPerBlob[i] {
+               require.Equal(t, expectedCell, actualCellsPerBlob[i][j])
            }

            // Compare proofs
-           for j, expectedProof := range expectedCellsAndProofs[i].Proofs {
-               require.Equal(t, expectedProof, actualCellsAndProofs[i].Proofs[j])
+           for j, expectedProof := range expectedProofsPerBlob[i] {
+               require.Equal(t, expectedProof, actualProofsPerBlob[i][j])
            }
        }
    })
@@ -381,7 +514,7 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {

func TestComputeCellsAndProofsFromStructured(t *testing.T) {
    t.Run("nil blob and proof", func(t *testing.T) {
-       _, err := peerdas.ComputeCellsAndProofsFromStructured([]*pb.BlobAndProofV2{nil})
+       _, _, err := peerdas.ComputeCellsAndProofsFromStructured([]*pb.BlobAndProofV2{nil})
        require.ErrorIs(t, err, peerdas.ErrNilBlobAndProof)
    })

@@ -397,7 +530,8 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {

        // Extract blobs and compute expected cells and proofs
        blobsAndProofs := make([]*pb.BlobAndProofV2, blobCount)
-       expectedCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
+       expectedCellsPerBlob := make([][]kzg.Cell, blobCount)
+       expectedProofsPerBlob := make([][]kzg.Proof, blobCount)

        var wg errgroup.Group
        for i := range blobCount {
@@ -408,14 +542,15 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
            count := copy(kzgBlob[:], blob)
            require.Equal(t, len(kzgBlob), count)

-           cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
+           cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
            if err != nil {
                return errors.Wrapf(err, "compute cells and kzg proofs for blob %d", i)
            }
-           expectedCellsAndProofs[i] = cellsAndProofs
+           expectedCellsPerBlob[i] = cells
+           expectedProofsPerBlob[i] = proofs

-           kzgProofs := make([][]byte, 0, len(cellsAndProofs.Proofs))
-           for _, proof := range cellsAndProofs.Proofs {
+           kzgProofs := make([][]byte, 0, len(proofs))
+           for _, proof := range proofs {
                kzgProofs = append(kzgProofs, proof[:])
            }

@@ -433,24 +568,24 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
        require.NoError(t, err)

        // Test ComputeCellsAndProofs
-       actualCellsAndProofs, err := peerdas.ComputeCellsAndProofsFromStructured(blobsAndProofs)
+       actualCellsPerBlob, actualProofsPerBlob, err := peerdas.ComputeCellsAndProofsFromStructured(blobsAndProofs)
        require.NoError(t, err)
-       require.Equal(t, blobCount, len(actualCellsAndProofs))
+       require.Equal(t, blobCount, len(actualCellsPerBlob))

        // Verify the results match expected
        for i := range blobCount {
-           require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
-           require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
-           require.Equal(t, len(expectedCellsAndProofs[i].Proofs), cap(actualCellsAndProofs[i].Proofs))
+           require.Equal(t, len(expectedCellsPerBlob[i]), len(actualCellsPerBlob[i]))
+           require.Equal(t, len(expectedProofsPerBlob[i]), len(actualProofsPerBlob[i]))
+           require.Equal(t, len(expectedProofsPerBlob[i]), cap(actualProofsPerBlob[i]))

            // Compare cells
-           for j, expectedCell := range expectedCellsAndProofs[i].Cells {
-               require.Equal(t, expectedCell, actualCellsAndProofs[i].Cells[j])
+           for j, expectedCell := range expectedCellsPerBlob[i] {
+               require.Equal(t, expectedCell, actualCellsPerBlob[i][j])
            }

            // Compare proofs
-           for j, expectedProof := range expectedCellsAndProofs[i].Proofs {
-               require.Equal(t, expectedProof, actualCellsAndProofs[i].Proofs[j])
+           for j, expectedProof := range expectedProofsPerBlob[i] {
+               require.Equal(t, expectedProof, actualProofsPerBlob[i][j])
            }
        }
    })

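Both compute helpers now return parallel `[][]kzg.Cell` and `[][]kzg.Proof` slices instead of `[]kzg.CellsAndProofs`; they differ only in input shape. A hedged sketch of the two call sites, with the input values assumed to be prepared as in the tests above:

```go
// computeBoth is illustrative only: it shows that the two helpers agree on output shape.
func computeBoth(blobs [][]byte, flatProofs [][]byte, structured []*pb.BlobAndProofV2) error {
	// Flat input: raw blob bytes plus one proof list of blobCount*numberOfColumns entries.
	cellsA, proofsA, err := peerdas.ComputeCellsAndProofsFromFlat(blobs, flatProofs)
	if err != nil {
		return err
	}
	// Structured input: the engine_getBlobsV2 response objects.
	cellsB, proofsB, err := peerdas.ComputeCellsAndProofsFromStructured(structured)
	if err != nil {
		return err
	}
	// In both cases cells[i] and proofs[i] describe blob i, one entry per column.
	_, _, _, _ = cellsA, proofsA, cellsB, proofsB
	return nil
}
```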
@@ -93,19 +93,20 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
    return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
}

-// DataColumnSidecars, given ConstructionPopulator and the cells/proofs associated with each blob in the
+// DataColumnSidecars given ConstructionPopulator and the cells/proofs associated with each blob in the
// block, assembles sidecars which can be distributed to peers.
+// cellsPerBlob and proofsPerBlob are parallel slices where each index represents a blob sidecar.
// This is an adapted version of
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars,
// which is designed to be used both when constructing sidecars from a block and from a sidecar, replacing
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
-func DataColumnSidecars(rows []kzg.CellsAndProofs, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
-   if len(rows) == 0 {
+func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
+   if len(cellsPerBlob) == 0 {
        return nil, nil
    }
    start := time.Now()
-   cells, proofs, err := rotateRowsToCols(rows, params.BeaconConfig().NumberOfColumns)
+   cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, params.BeaconConfig().NumberOfColumns)
    if err != nil {
        return nil, errors.Wrap(err, "rotate cells and proofs")
    }
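The parallel-slice contract is the main thing callers must respect: `cellsPerBlob[i]` and `proofsPerBlob[i]` both describe blob `i`, and each must have `NumberOfColumns` entries. A hedged preparation sketch (the helper name is illustrative; the loop mirrors the test code later in this diff):

```go
// buildInputs turns kzg.Blob values into the two parallel slices that
// DataColumnSidecars now expects.
func buildInputs(kzgBlobs []kzg.Blob) ([][]kzg.Cell, [][]kzg.Proof, error) {
	cellsPerBlob := make([][]kzg.Cell, len(kzgBlobs))
	proofsPerBlob := make([][]kzg.Proof, len(kzgBlobs))
	for i := range kzgBlobs {
		cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlobs[i])
		if err != nil {
			return nil, nil, err
		}
		cellsPerBlob[i], proofsPerBlob[i] = cells, proofs
	}
	return cellsPerBlob, proofsPerBlob, nil
}
```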
@@ -197,26 +198,31 @@ func (b *BlockReconstructionSource) extract() (*blockInfo, error) {

// rotateRowsToCols takes a 2D slice of cells and proofs, where the x is rows (blobs) and y is columns,
// and returns a 2D slice where x is columns and y is rows.
-func rotateRowsToCols(rows []kzg.CellsAndProofs, numCols uint64) ([][][]byte, [][][]byte, error) {
-   if len(rows) == 0 {
+func rotateRowsToCols(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, numCols uint64) ([][][]byte, [][][]byte, error) {
+   if len(cellsPerBlob) == 0 {
        return nil, nil, nil
    }
+   if len(cellsPerBlob) != len(proofsPerBlob) {
+       return nil, nil, errors.New("cells and proofs length mismatch")
+   }
    cellCols := make([][][]byte, numCols)
    proofCols := make([][][]byte, numCols)
-   for i, cp := range rows {
-       if uint64(len(cp.Cells)) != numCols {
+   for i := range cellsPerBlob {
+       cells := cellsPerBlob[i]
+       proofs := proofsPerBlob[i]
+       if uint64(len(cells)) != numCols {
            return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough cells")
        }
-       if len(cp.Cells) != len(cp.Proofs) {
+       if len(cells) != len(proofs) {
            return nil, nil, errors.Wrap(ErrNotEnoughDataColumnSidecars, "not enough proofs")
        }
        for j := uint64(0); j < numCols; j++ {
            if i == 0 {
-               cellCols[j] = make([][]byte, len(rows))
-               proofCols[j] = make([][]byte, len(rows))
+               cellCols[j] = make([][]byte, len(cellsPerBlob))
+               proofCols[j] = make([][]byte, len(cellsPerBlob))
            }
-           cellCols[j][i] = cp.Cells[j][:]
-           proofCols[j][i] = cp.Proofs[j][:]
+           cellCols[j][i] = cells[j][:]
+           proofCols[j][i] = proofs[j][:]
        }
    }
    return cellCols, proofCols, nil

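In other words, the rotation is a plain transpose from [blob][column] to [column][blob]. A toy, self-contained illustration with 2 blobs and 3 columns (hypothetical values, not the real 128-column matrix):

```go
rows := [][]string{
	{"b0c0", "b0c1", "b0c2"}, // blob 0's cells
	{"b1c0", "b1c1", "b1c2"}, // blob 1's cells
}
cols := make([][]string, 3)
for j := range cols {
	cols[j] = make([]string, len(rows))
	for i := range rows {
		cols[j][i] = rows[i][j]
	}
}
// cols[1] == {"b0c1", "b1c1"}: everything column sidecar 1 carries, one cell per blob.
```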
@@ -68,16 +68,16 @@ func TestDataColumnSidecars(t *testing.T) {
        require.NoError(t, err)

        // Create cells and proofs.
-       cellsAndProofs := []kzg.CellsAndProofs{
-           {
-               Cells: make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
-               Proofs: make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
-           },
+       cellsPerBlob := [][]kzg.Cell{
+           make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
        }
+       proofsPerBlob := [][]kzg.Proof{
+           make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
+       }

        rob, err := blocks.NewROBlock(signedBeaconBlock)
        require.NoError(t, err)
-       _, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+       _, err = peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
        require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
    })

@@ -92,18 +92,18 @@ func TestDataColumnSidecars(t *testing.T) {

        // Create cells and proofs with insufficient cells for the number of columns.
        // This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
-       cellsAndProofs := []kzg.CellsAndProofs{
-           {
-               Cells: make([]kzg.Cell, 10), // Only 10 cells
-               Proofs: make([]kzg.Proof, 10), // Only 10 proofs
-           },
+       cellsPerBlob := [][]kzg.Cell{
+           make([]kzg.Cell, 10), // Only 10 cells
        }
+       proofsPerBlob := [][]kzg.Proof{
+           make([]kzg.Proof, 10), // Only 10 proofs
+       }

        // This should fail because the function will try to access columns up to NumberOfColumns
        // but we only have 10 cells/proofs.
        rob, err := blocks.NewROBlock(signedBeaconBlock)
        require.NoError(t, err)
-       _, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+       _, err = peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
        require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
    })

@@ -118,17 +118,17 @@ func TestDataColumnSidecars(t *testing.T) {

        // Create cells and proofs with sufficient cells but insufficient proofs.
        numberOfColumns := params.BeaconConfig().NumberOfColumns
-       cellsAndProofs := []kzg.CellsAndProofs{
-           {
-               Cells: make([]kzg.Cell, numberOfColumns),
-               Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
-           },
+       cellsPerBlob := [][]kzg.Cell{
+           make([]kzg.Cell, numberOfColumns),
        }
+       proofsPerBlob := [][]kzg.Proof{
+           make([]kzg.Proof, 5), // Only 5 proofs, less than columns
+       }

        // This should fail when trying to access proof beyond index 4.
        rob, err := blocks.NewROBlock(signedBeaconBlock)
        require.NoError(t, err)
-       _, err = peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+       _, err = peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
        require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
        require.ErrorContains(t, "not enough proofs", err)
    })
@@ -150,28 +150,26 @@ func TestDataColumnSidecars(t *testing.T) {

        // Create cells and proofs with correct dimensions.
        numberOfColumns := params.BeaconConfig().NumberOfColumns
-       cellsAndProofs := []kzg.CellsAndProofs{
-           {
-               Cells: make([]kzg.Cell, numberOfColumns),
-               Proofs: make([]kzg.Proof, numberOfColumns),
-           },
-           {
-               Cells: make([]kzg.Cell, numberOfColumns),
-               Proofs: make([]kzg.Proof, numberOfColumns),
-           },
+       cellsPerBlob := [][]kzg.Cell{
+           make([]kzg.Cell, numberOfColumns),
+           make([]kzg.Cell, numberOfColumns),
        }
+       proofsPerBlob := [][]kzg.Proof{
+           make([]kzg.Proof, numberOfColumns),
+           make([]kzg.Proof, numberOfColumns),
+       }

        // Set distinct values in cells and proofs for testing
        for i := range numberOfColumns {
-           cellsAndProofs[0].Cells[i][0] = byte(i)
-           cellsAndProofs[0].Proofs[i][0] = byte(i)
-           cellsAndProofs[1].Cells[i][0] = byte(i + 128)
-           cellsAndProofs[1].Proofs[i][0] = byte(i + 128)
+           cellsPerBlob[0][i][0] = byte(i)
+           proofsPerBlob[0][i][0] = byte(i)
+           cellsPerBlob[1][i][0] = byte(i + 128)
+           proofsPerBlob[1][i][0] = byte(i + 128)
        }

        rob, err := blocks.NewROBlock(signedBeaconBlock)
        require.NoError(t, err)
-       sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+       sidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
        require.NoError(t, err)
        require.NotNil(t, sidecars)
        require.Equal(t, int(numberOfColumns), len(sidecars))
@@ -215,28 +213,26 @@ func TestReconstructionSource(t *testing.T) {

        // Create cells and proofs with correct dimensions.
        numberOfColumns := params.BeaconConfig().NumberOfColumns
-       cellsAndProofs := []kzg.CellsAndProofs{
-           {
-               Cells: make([]kzg.Cell, numberOfColumns),
-               Proofs: make([]kzg.Proof, numberOfColumns),
-           },
-           {
-               Cells: make([]kzg.Cell, numberOfColumns),
-               Proofs: make([]kzg.Proof, numberOfColumns),
-           },
+       cellsPerBlob := [][]kzg.Cell{
+           make([]kzg.Cell, numberOfColumns),
+           make([]kzg.Cell, numberOfColumns),
        }
+       proofsPerBlob := [][]kzg.Proof{
+           make([]kzg.Proof, numberOfColumns),
+           make([]kzg.Proof, numberOfColumns),
+       }

        // Set distinct values in cells and proofs for testing
        for i := range numberOfColumns {
-           cellsAndProofs[0].Cells[i][0] = byte(i)
-           cellsAndProofs[0].Proofs[i][0] = byte(i)
-           cellsAndProofs[1].Cells[i][0] = byte(i + 128)
-           cellsAndProofs[1].Proofs[i][0] = byte(i + 128)
+           cellsPerBlob[0][i][0] = byte(i)
+           proofsPerBlob[0][i][0] = byte(i)
+           cellsPerBlob[1][i][0] = byte(i + 128)
+           proofsPerBlob[1][i][0] = byte(i + 128)
        }

        rob, err := blocks.NewROBlock(signedBeaconBlock)
        require.NoError(t, err)
-       sidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+       sidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
        require.NoError(t, err)
        require.NotNil(t, sidecars)
        require.Equal(t, int(numberOfColumns), len(sidecars))

@@ -660,18 +660,18 @@ func (s *Service) ConstructDataColumnSidecars(ctx context.Context, populator pee
        return nil, wrapWithBlockRoot(err, root, "commitments")
    }

-   cellsAndProofs, err := s.fetchCellsAndProofsFromExecution(ctx, commitments)
+   cellsPerBlob, proofsPerBlob, err := s.fetchCellsAndProofsFromExecution(ctx, commitments)
    if err != nil {
        return nil, wrapWithBlockRoot(err, root, "fetch cells and proofs from execution client")
    }

    // Return early if nothing is returned from the EL.
-   if len(cellsAndProofs) == 0 {
+   if len(cellsPerBlob) == 0 {
        return nil, nil
    }

    // Construct data column sidecars from the signed block and cells and proofs.
-   roSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, populator)
+   roSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, populator)
    if err != nil {
        return nil, wrapWithBlockRoot(err, populator.Root(), "data column sidecars from column sidecar")
    }
@@ -684,7 +684,7 @@ func (s *Service) ConstructDataColumnSidecars(ctx context.Context, populator pee
}

// fetchCellsAndProofsFromExecution fetches cells and proofs from the execution client (using the engine_getBlobsV2 execution API method)
-func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommitments [][]byte) ([]kzg.CellsAndProofs, error) {
+func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommitments [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
    // Collect KZG hashes for all blobs.
    versionedHashes := make([]common.Hash, 0, len(kzgCommitments))
    for _, commitment := range kzgCommitments {
@@ -695,21 +695,21 @@ func (s *Service) fetchCellsAndProofsFromExecution(ctx context.Context, kzgCommi
    // Fetch all blobsAndCellsProofs from the execution client.
    blobAndProofV2s, err := s.GetBlobsV2(ctx, versionedHashes)
    if err != nil {
-       return nil, errors.Wrapf(err, "get blobs V2")
+       return nil, nil, errors.Wrapf(err, "get blobs V2")
    }

    // Return early if nothing is returned from the EL.
    if len(blobAndProofV2s) == 0 {
-       return nil, nil
+       return nil, nil, nil
    }

    // Compute cells and proofs from the blobs and cell proofs.
-   cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromStructured(blobAndProofV2s)
+   cellsPerBlob, proofsPerBlob, err := peerdas.ComputeCellsAndProofsFromStructured(blobAndProofV2s)
    if err != nil {
-       return nil, errors.Wrap(err, "compute cells and proofs")
+       return nil, nil, errors.Wrap(err, "compute cells and proofs")
    }

-   return cellsAndProofs, nil
+   return cellsPerBlob, proofsPerBlob, nil
}

// upgradeSidecarsToVerifiedSidecars upgrades a list of data column sidecars into verified data column sidecars.

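For background, the versioned hashes handed to `engine_getBlobsV2` follow EIP-4844's `kzg_to_versioned_hash`: the SHA-256 of the 48-byte commitment with the first byte overwritten by the version `0x01`. A standalone sketch (not Prysm's actual helper):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const blobCommitmentVersionKZG = 0x01

// kzgToVersionedHash maps a KZG commitment to its EIP-4844 versioned hash.
func kzgToVersionedHash(commitment []byte) [32]byte {
	h := sha256.Sum256(commitment)
	h[0] = blobCommitmentVersionKZG
	return h
}

func main() {
	commitment := make([]byte, 48) // all-zero commitment, illustrative only
	fmt.Printf("versioned hash: %x\n", kzgToVersionedHash(commitment))
}
```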
@@ -3786,12 +3786,12 @@ func Test_validateBlobs(t *testing.T) {
    numberOfColumns := params.BeaconConfig().NumberOfColumns
    cellProofs := make([][]byte, uint64(blobCount)*numberOfColumns)
    for blobIdx := 0; blobIdx < blobCount; blobIdx++ {
-       cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlobs[blobIdx])
+       _, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlobs[blobIdx])
        require.NoError(t, err)

        for colIdx := uint64(0); colIdx < numberOfColumns; colIdx++ {
            cellProofIdx := uint64(blobIdx)*numberOfColumns + colIdx
-           cellProofs[cellProofIdx] = cellsAndProofs.Proofs[colIdx][:]
+           cellProofs[cellProofIdx] = proofs[colIdx][:]
        }
    }

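The flattening above is row-major: the proof for blob `b`, column `c` lands at flat index `b*numberOfColumns + c`, i.e. all of blob 0's proofs first, then blob 1's, and so on. A tiny sketch of the indexing, with the 128-column mainnet value assumed:

```go
// flatProofIndex mirrors the cellProofIdx computation in the test above.
func flatProofIndex(blobIdx, colIdx, numberOfColumns uint64) uint64 {
	return blobIdx*numberOfColumns + colIdx
}

// With numberOfColumns = 128: blob 2, column 5 -> 2*128 + 5 = 261.
```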
@@ -38,7 +38,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
    segments := strings.Split(r.URL.Path, "/")
    blockId := segments[len(segments)-1]

-   verifiedBlobs, rpcErr := s.Blocker.Blobs(ctx, blockId, options.WithIndices(indices))
+   verifiedBlobs, rpcErr := s.Blocker.BlobSidecars(ctx, blockId, options.WithIndices(indices))
    if rpcErr != nil {
        code := core.ErrorReasonToHTTP(rpcErr.Reason)
        switch code {
@@ -134,9 +134,6 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
    segments := strings.Split(r.URL.Path, "/")
    blockId := segments[len(segments)-1]

-   var verifiedBlobs []*blocks.VerifiedROBlob
-   var rpcErr *core.RpcError
-
    // Check if versioned_hashes parameter is provided
    versionedHashesStr := r.URL.Query()["versioned_hashes"]
    versionedHashes := make([][]byte, len(versionedHashesStr))
@@ -149,7 +146,7 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
            versionedHashes[i] = hash
        }
    }
-   verifiedBlobs, rpcErr = s.Blocker.Blobs(ctx, blockId, options.WithVersionedHashes(versionedHashes))
+   blobsData, rpcErr := s.Blocker.Blobs(ctx, blockId, options.WithVersionedHashes(versionedHashes))
    if rpcErr != nil {
        code := core.ErrorReasonToHTTP(rpcErr.Reason)
        switch code {
@@ -175,9 +172,9 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {

    if httputil.RespondWithSsz(r) {
        sszLen := fieldparams.BlobSize
-       sszData := make([]byte, len(verifiedBlobs)*sszLen)
-       for i := range verifiedBlobs {
-           copy(sszData[i*sszLen:(i+1)*sszLen], verifiedBlobs[i].Blob)
+       sszData := make([]byte, len(blobsData)*sszLen)
+       for i := range blobsData {
+           copy(sszData[i*sszLen:(i+1)*sszLen], blobsData[i])
        }

        w.Header().Set(api.VersionHeader, version.String(blk.Version()))
@@ -196,9 +193,9 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
        return
    }

-   data := make([]string, len(verifiedBlobs))
-   for i, v := range verifiedBlobs {
-       data[i] = hexutil.Encode(v.Blob)
+   data := make([]string, len(blobsData))
+   for i, blob := range blobsData {
+       data[i] = hexutil.Encode(blob)
    }
    resp := &structs.GetBlobsResponse{
        Data: data,

@@ -60,7 +60,8 @@ func (e BlockIdParseError) Error() string {
// Blocker is responsible for retrieving blocks.
type Blocker interface {
    Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
-   Blobs(ctx context.Context, id string, opts ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError)
+   BlobSidecars(ctx context.Context, id string, opts ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError)
+   Blobs(ctx context.Context, id string, opts ...options.BlobsOption) ([][]byte, *core.RpcError)
    DataColumns(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError)
}

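The interface now exposes both retrieval shapes side by side: `BlobSidecars` preserves the old behavior (verified sidecars, proofs included) for the blob-sidecars endpoint, while the slimmed-down `Blobs` returns bare blob bytes for `GetBlobs`. A hedged sketch of picking between them (wrapper names are illustrative):

```go
// cheapBlobs skips proof work entirely; use it when only the bytes matter.
func cheapBlobs(ctx context.Context, b Blocker, id string) ([][]byte, *core.RpcError) {
	return b.Blobs(ctx, id)
}

// fullSidecars pays for proof reconstruction on the post-Fulu path.
func fullSidecars(ctx context.Context, b Blocker, id string) ([]*blocks.VerifiedROBlob, *core.RpcError) {
	return b.BlobSidecars(ctx, id)
}
```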
@@ -224,23 +225,18 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
    return blk, nil
}

-// Blobs returns the fetched blobs for a given block ID with configurable options.
-// Options can specify either blob indices or versioned hashes for retrieval.
-// The identifier can be one of:
-//   - "head" (canonical head in node's view)
-//   - "genesis"
-//   - "finalized"
-//   - "justified"
-//   - <slot>
-//   - <hex encoded block root with '0x' prefix>
-//   - <block root>
-//
-// cases:
-//   - no block, 404
-//   - block exists, has commitments, inside retention period (greater of protocol- or user-specified) serve them w/ 200 unless we hit an error reading them.
-//     we are technically not supposed to import a block to forkchoice unless we have the blobs, so the nuance here is if we can't find the file and we are inside the protocol-defined retention period, then it's actually a 500.
-//   - block exists, has commitments, outside retention period (greater of protocol- or user-specified) - ie just like block exists, no commitment
-func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+// blobsContext holds common information needed for blob retrieval
+type blobsContext struct {
+   root        [fieldparams.RootLength]byte
+   roBlock     blocks.ROBlock
+   commitments [][]byte
+   indices     []int
+   postFulu    bool
+}
+
+// resolveBlobsContext extracts common blob retrieval logic including block resolution,
+// validation, and index conversion from versioned hashes.
+func (p *BeaconDbBlocker) resolveBlobsContext(ctx context.Context, id string, opts ...options.BlobsOption) (*blobsContext, *core.RpcError) {
    // Apply options
    cfg := &options.BlobsConfig{}
    for _, opt := range opts {
@@ -279,11 +275,6 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
        return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve kzg commitments from block %#x", root), Reason: core.Internal}
    }

-   // If there are no commitments return 200 w/ empty list
-   if len(commitments) == 0 {
-       return make([]*blocks.VerifiedROBlob, 0), nil
-   }
-
    // Compute the first Fulu slot.
    fuluForkSlot := primitives.Slot(math.MaxUint64)
    if fuluForkEpoch := params.BeaconConfig().FuluForkEpoch; fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
@@ -333,16 +324,156 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
        }
    }

+   isPostFulu := false
+   // Create ROBlock with root for post-Fulu blocks
+   var roBlockWithRoot blocks.ROBlock
    if roBlock.Slot() >= fuluForkSlot {
-       roBlock, err := blocks.NewROBlockWithRoot(roSignedBlock, root)
+       roBlockWithRoot, err = blocks.NewROBlockWithRoot(roSignedBlock, root)
        if err != nil {
            return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to create roBlock with root %#x", root), Reason: core.Internal}
        }

-       return p.blobsFromStoredDataColumns(roBlock, indices)
+       isPostFulu = true
    }

-   return p.blobsFromStoredBlobs(commitments, root, indices)
+   return &blobsContext{
+       root:        root,
+       roBlock:     roBlockWithRoot,
+       commitments: commitments,
+       indices:     indices,
+       postFulu:    isPostFulu,
+   }, nil
}

+// BlobSidecars returns the fetched blob sidecars (with full KZG proofs) for a given block ID.
+// Options can specify either blob indices or versioned hashes for retrieval.
+// The identifier can be one of:
+//   - "head" (canonical head in node's view)
+//   - "genesis"
+//   - "finalized"
+//   - "justified"
+//   - <slot>
+//   - <hex encoded block root with '0x' prefix>
+func (p *BeaconDbBlocker) BlobSidecars(ctx context.Context, id string, opts ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+   bctx, rpcErr := p.resolveBlobsContext(ctx, id, opts...)
+   if rpcErr != nil {
+       return nil, rpcErr
+   }
+
+   // If there are no commitments return 200 w/ empty list
+   if len(bctx.commitments) == 0 {
+       return make([]*blocks.VerifiedROBlob, 0), nil
+   }
+
+   // Check if this is a post-Fulu block (uses data columns)
+   if bctx.postFulu {
+       return p.blobSidecarsFromStoredDataColumns(bctx.roBlock, bctx.indices)
+   }
+
+   // Pre-Fulu block (uses blob sidecars)
+   return p.blobsFromStoredBlobs(bctx.commitments, bctx.root, bctx.indices)
+}
+
+// Blobs returns just the blob data without computing KZG proofs or creating full sidecars.
+// This is an optimized endpoint for when only blob data is needed (e.g., GetBlobs endpoint).
+// The identifier can be one of:
+//   - "head" (canonical head in node's view)
+//   - "genesis"
+//   - "finalized"
+//   - "justified"
+//   - <slot>
+//   - <hex encoded block root with '0x' prefix>
+func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.BlobsOption) ([][]byte, *core.RpcError) {
+   bctx, rpcErr := p.resolveBlobsContext(ctx, id, opts...)
+   if rpcErr != nil {
+       return nil, rpcErr
+   }
+
+   // If there are no commitments return 200 w/ empty list
+   if len(bctx.commitments) == 0 {
+       return make([][]byte, 0), nil
+   }
+
+   // Check if this is a post-Fulu block (uses data columns)
+   if bctx.postFulu {
+       return p.blobsDataFromStoredDataColumns(bctx.root, bctx.indices, len(bctx.commitments))
+   }
+
+   // Pre-Fulu block (uses blob sidecars)
+   return p.blobsDataFromStoredBlobs(bctx.root, bctx.indices)
+}
+
+// blobsDataFromStoredBlobs retrieves just blob data (without proofs) from stored blob sidecars.
+func (p *BeaconDbBlocker) blobsDataFromStoredBlobs(root [fieldparams.RootLength]byte, indices []int) ([][]byte, *core.RpcError) {
+   summary := p.BlobStorage.Summary(root)
+
+   // If no indices are provided, use all indices that are available in the summary.
+   if len(indices) == 0 {
+       maxBlobCount := summary.MaxBlobsForEpoch()
+       for index := 0; uint64(index) < maxBlobCount; index++ { // needed for safe conversion
+           if summary.HasIndex(uint64(index)) {
+               indices = append(indices, index)
+           }
+       }
+   }
+
+   // Retrieve blob sidecars from the store and extract just the blob data.
+   blobsData := make([][]byte, 0, len(indices))
+   for _, index := range indices {
+       if !summary.HasIndex(uint64(index)) {
+           return nil, &core.RpcError{
+               Err:    fmt.Errorf("requested index %d not found", index),
+               Reason: core.NotFound,
+           }
+       }
+
+       blobSidecar, err := p.BlobStorage.Get(root, uint64(index))
+       if err != nil {
+           return nil, &core.RpcError{
+               Err:    fmt.Errorf("could not retrieve blob for block root %#x at index %d", root, index),
+               Reason: core.Internal,
+           }
+       }
+
+       blobsData = append(blobsData, blobSidecar.Blob)
+   }
+
+   return blobsData, nil
+}
+
+// blobsDataFromStoredDataColumns retrieves blob data from stored data columns without computing KZG proofs.
+func (p *BeaconDbBlocker) blobsDataFromStoredDataColumns(root [fieldparams.RootLength]byte, indices []int, blobCount int) ([][]byte, *core.RpcError) {
+   // Count how many columns we have in the store.
+   summary := p.DataColumnStorage.Summary(root)
+   stored := summary.Stored()
+   count := uint64(len(stored))
+
+   if count < peerdas.MinimumColumnCountToReconstruct() {
+       // There is no way to reconstruct the data columns.
+       return nil, &core.RpcError{
+           Err:    errors.Errorf("the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--%s` flag to ensure this call succeeds, or retry later if it is already the case", flags.SubscribeAllDataSubnets.Name),
+           Reason: core.NotFound,
+       }
+   }
+
+   // Retrieve the needed data columns from the database.
+   verifiedRoDataColumnSidecars, err := p.neededDataColumnSidecars(root, stored)
+   if err != nil {
+       return nil, &core.RpcError{
+           Err:    errors.Wrap(err, "needed data column sidecars"),
+           Reason: core.Internal,
+       }
+   }
+
+   // Use optimized path to get just blob data without computing proofs.
+   blobsData, err := peerdas.ReconstructBlobs(verifiedRoDataColumnSidecars, indices, blobCount)
+   if err != nil {
+       return nil, &core.RpcError{
+           Err:    errors.Wrap(err, "reconstruct blobs data"),
+           Reason: core.Internal,
+       }
+   }
+
+   return blobsData, nil
+}

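The guard above is the usual erasure-coding bound: half of the columns (64 of 128 on mainnet) suffices to rebuild every blob, which is presumably what `MinimumColumnCountToReconstruct` returns. A toy sketch of the same check, with the column count assumed rather than read from the config:

```go
// canReconstruct mirrors the guard: with 128 columns total, 64 stored is the floor.
const assumedNumberOfColumns = 128

func canReconstruct(storedColumns uint64) bool {
	return storedColumns >= assumedNumberOfColumns/2
}
```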
// blobsFromStoredBlobs retrieves blob sidecars corresponding to `indices` and `root` from the store.
@@ -393,13 +524,12 @@ func (p *BeaconDbBlocker) blobsFromStoredBlobs(commitments [][]byte, root [field
    return blobs, nil
}

-// blobsFromStoredDataColumns retrieves data column sidecars from the store,
-// reconstructs the whole matrix if needed, converts the matrix to blobs,
-// and then returns converted blobs corresponding to `indices` and `root`.
+// blobSidecarsFromStoredDataColumns retrieves data column sidecars from the store,
+// reconstructs the whole matrix if needed, and converts the matrix to blob sidecars with full KZG proofs.
// This function expects data column sidecars to be stored (aka. no blob sidecars).
// If not enough data column sidecars are available to convert blobs from them
// (either directly or after reconstruction), an error is returned.
-func (p *BeaconDbBlocker) blobsFromStoredDataColumns(block blocks.ROBlock, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+func (p *BeaconDbBlocker) blobSidecarsFromStoredDataColumns(block blocks.ROBlock, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
    root := block.Root()

    // Use all indices if none are provided.
@@ -439,8 +569,8 @@
        }
    }

-   // Reconstruct blob sidecars from data column sidecars.
-   verifiedRoBlobSidecars, err := peerdas.ReconstructBlobs(block, verifiedRoDataColumnSidecars, indices)
+   // Reconstruct blob sidecars with full KZG proofs.
+   verifiedRoBlobSidecars, err := peerdas.ReconstructBlobSidecars(block, verifiedRoDataColumnSidecars, indices)
    if err != nil {
        return nil, &core.RpcError{
            Err: errors.Wrap(err, "blobs from data columns"),

@@ -182,7 +182,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
BeaconDB: db,
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
require.StringContains(t, "not found", rpcErr.Err.Error())
|
||||
@@ -194,7 +194,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
ChainInfoFetcher: &mockChain.ChainService{},
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, "999999")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "999999")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
require.StringContains(t, "no blocks found at slot", rpcErr.Err.Error())
|
||||
@@ -206,7 +206,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
}
|
||||
|
||||
// Note: genesis blocks don't support blobs, so this returns BadRequest
|
||||
_, rpcErr := blocker.Blobs(ctx, "genesis")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "genesis")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
|
||||
require.StringContains(t, "not supported for Phase 0", rpcErr.Err.Error())
|
||||
@@ -222,7 +222,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, "finalized")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "finalized")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
require.StringContains(t, "finalized block", rpcErr.Err.Error())
|
||||
@@ -239,7 +239,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
},
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, "justified")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "justified")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
require.StringContains(t, "justified block", rpcErr.Err.Error())
|
||||
@@ -251,7 +251,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
BeaconDB: db,
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, "invalid-hex")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "invalid-hex")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
|
||||
require.StringContains(t, "could not parse block ID", rpcErr.Err.Error())
|
||||
@@ -268,7 +268,7 @@ func TestBlobsErrorHandling(t *testing.T) {
|
||||
BeaconDB: db,
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, "100")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "100")
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.Internal), rpcErr.Reason)
|
||||
})
|
||||
@@ -306,16 +306,18 @@ func TestGetBlob(t *testing.T) {
|
||||
fuluBlock, fuluBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, fs, blobCount)
|
||||
fuluBlockRoot := fuluBlock.Root()
|
||||
|
||||
cellsAndProofsList := make([]kzg.CellsAndProofs, 0, len(fuluBlobSidecars))
|
||||
cellsPerBlobList := make([][]kzg.Cell, 0, len(fuluBlobSidecars))
|
||||
proofsPerBlobList := make([][]kzg.Proof, 0, len(fuluBlobSidecars))
|
||||
for _, blob := range fuluBlobSidecars {
|
||||
var kzgBlob kzg.Blob
|
||||
copy(kzgBlob[:], blob.Blob)
|
||||
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
|
||||
cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
|
||||
require.NoError(t, err)
|
||||
cellsAndProofsList = append(cellsAndProofsList, cellsAndProofs)
|
||||
cellsPerBlobList = append(cellsPerBlobList, cells)
|
||||
proofsPerBlobList = append(proofsPerBlobList, proofs)
|
||||
}
|
||||
|
||||
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofsList, peerdas.PopulateFromBlock(fuluBlock))
|
||||
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlobList, proofsPerBlobList, peerdas.PopulateFromBlock(fuluBlock))
|
||||
require.NoError(t, err)
|
||||
|
||||
verifiedRoDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumnSidecars))
|
||||
@@ -329,7 +331,7 @@ func TestGetBlob(t *testing.T) {
|
||||
|
||||
t.Run("genesis", func(t *testing.T) {
|
||||
blocker := &BeaconDbBlocker{}
|
||||
_, rpcErr := blocker.Blobs(ctx, "genesis")
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, "genesis")
|
||||
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
|
||||
require.StringContains(t, "not supported for Phase 0 fork", rpcErr.Err.Error())
|
||||
})
|
||||
@@ -347,7 +349,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
|
||||
retrievedVerifiedSidecars, rpcErr := blocker.Blobs(ctx, "head")
|
||||
retrievedVerifiedSidecars, rpcErr := blocker.BlobSidecars(ctx, "head")
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, blobCount, len(retrievedVerifiedSidecars))
|
||||
|
||||
@@ -374,7 +376,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
|
||||
verifiedSidecars, rpcErr := blocker.Blobs(ctx, "finalized")
|
||||
verifiedSidecars, rpcErr := blocker.BlobSidecars(ctx, "finalized")
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, blobCount, len(verifiedSidecars))
|
||||
})
|
||||
@@ -389,7 +391,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
|
||||
verifiedSidecars, rpcErr := blocker.Blobs(ctx, "justified")
|
||||
verifiedSidecars, rpcErr := blocker.BlobSidecars(ctx, "justified")
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, blobCount, len(verifiedSidecars))
|
||||
})
|
||||
@@ -403,7 +405,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
|
||||
verifiedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]))
|
||||
verifiedBlobs, rpcErr := blocker.BlobSidecars(ctx, hexutil.Encode(denebBlockRoot[:]))
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, blobCount, len(verifiedBlobs))
|
||||
})
|
||||
@@ -418,7 +420,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
|
||||
verifiedBlobs, rpcErr := blocker.Blobs(ctx, dsStr)
|
||||
verifiedBlobs, rpcErr := blocker.BlobSidecars(ctx, dsStr)
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, blobCount, len(verifiedBlobs))
|
||||
})
|
||||
@@ -435,7 +437,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
|
||||
retrievedVerifiedSidecars, rpcErr := blocker.Blobs(ctx, dsStr, options.WithIndices([]int{index}))
|
||||
retrievedVerifiedSidecars, rpcErr := blocker.BlobSidecars(ctx, dsStr, options.WithIndices([]int{index}))
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, 1, len(retrievedVerifiedSidecars))
|
||||
|
||||
@@ -459,7 +461,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BlobStorage: filesystem.NewEphemeralBlobStorage(t),
|
||||
}
|
||||
|
||||
verifiedBlobs, rpcErr := blocker.Blobs(ctx, dsStr)
|
||||
verifiedBlobs, rpcErr := blocker.BlobSidecars(ctx, dsStr)
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, 0, len(verifiedBlobs))
|
||||
})
|
||||
@@ -475,7 +477,7 @@ func TestGetBlob(t *testing.T) {
|
||||
}
|
||||
|
||||
noBlobIndex := len(storedBlobSidecars) + 1
|
||||
_, rpcErr := blocker.Blobs(ctx, dsStr, options.WithIndices([]int{0, noBlobIndex}))
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, dsStr, options.WithIndices([]int{0, noBlobIndex}))
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
})
|
||||
@@ -489,7 +491,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BeaconDB: db,
|
||||
BlobStorage: blobStorage,
|
||||
}
|
||||
_, rpcErr := blocker.Blobs(ctx, dsStr, options.WithIndices([]int{0, math.MaxInt}))
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, dsStr, options.WithIndices([]int{0, math.MaxInt}))
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
|
||||
})
|
||||
@@ -508,7 +510,7 @@ func TestGetBlob(t *testing.T) {
|
||||
DataColumnStorage: dataColumnStorage,
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, hexutil.Encode(fuluBlockRoot[:]))
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, hexutil.Encode(fuluBlockRoot[:]))
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
})
|
||||
@@ -527,7 +529,7 @@ func TestGetBlob(t *testing.T) {
|
||||
DataColumnStorage: dataColumnStorage,
|
||||
}
|
||||
|
||||
retrievedVerifiedRoBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(fuluBlockRoot[:]))
|
||||
retrievedVerifiedRoBlobs, rpcErr := blocker.BlobSidecars(ctx, hexutil.Encode(fuluBlockRoot[:]))
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, len(fuluBlobSidecars), len(retrievedVerifiedRoBlobs))
|
||||
|
||||
@@ -552,7 +554,7 @@ func TestGetBlob(t *testing.T) {
|
||||
DataColumnStorage: dataColumnStorage,
|
||||
}
|
||||
|
||||
retrievedVerifiedRoBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(fuluBlockRoot[:]))
|
||||
retrievedVerifiedRoBlobs, rpcErr := blocker.BlobSidecars(ctx, hexutil.Encode(fuluBlockRoot[:]))
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, len(fuluBlobSidecars), len(retrievedVerifiedRoBlobs))
|
||||
|
||||
@@ -581,7 +583,7 @@ func TestGetBlob(t *testing.T) {
|
||||
BeaconDB: db,
|
||||
}
|
||||
|
||||
_, rpcErr := blocker.Blobs(ctx, hexutil.Encode(predenebBlockRoot[:]))
|
||||
_, rpcErr := blocker.BlobSidecars(ctx, hexutil.Encode(predenebBlockRoot[:]))
|
||||
require.NotNil(t, rpcErr)
|
||||
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
|
||||
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
|
||||
@@ -621,7 +623,7 @@ func TestGetBlob(t *testing.T) {
|
||||
}
|
||||
|
||||
// Should successfully retrieve blobs even when FuluForkEpoch is not set
|
||||
retrievedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]))
|
||||
retrievedBlobs, rpcErr := blocker.BlobSidecars(ctx, hexutil.Encode(denebBlockRoot[:]))
|
||||
require.IsNil(t, rpcErr)
|
||||
require.Equal(t, 2, len(retrievedBlobs))
|
||||
|
||||
@@ -665,16 +667,18 @@ func TestBlobs_CommitmentOrdering(t *testing.T) {
     require.Equal(t, 3, len(commitments))

     // Convert blob sidecars to data column sidecars for Fulu
-    cellsAndProofsList := make([]kzg.CellsAndProofs, 0, len(fuluBlobs))
+    cellsPerBlobList := make([][]kzg.Cell, 0, len(fuluBlobs))
+    proofsPerBlobList := make([][]kzg.Proof, 0, len(fuluBlobs))
     for _, blob := range fuluBlobs {
         var kzgBlob kzg.Blob
         copy(kzgBlob[:], blob.Blob)
-        cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
+        cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
         require.NoError(t, err)
-        cellsAndProofsList = append(cellsAndProofsList, cellsAndProofs)
+        cellsPerBlobList = append(cellsPerBlobList, cells)
+        proofsPerBlobList = append(proofsPerBlobList, proofs)
     }

-    dataColumnSidecarPb, err := peerdas.DataColumnSidecars(cellsAndProofsList, peerdas.PopulateFromBlock(fuluBlock))
+    dataColumnSidecarPb, err := peerdas.DataColumnSidecars(cellsPerBlobList, proofsPerBlobList, peerdas.PopulateFromBlock(fuluBlock))
     require.NoError(t, err)

     verifiedRoDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecarPb))
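The pairs above reflect the new kzg.ComputeCellsAndKZGProofs signature, which returns the cells and proofs as two values instead of one kzg.CellsAndProofs struct. A minimal sketch of the new call shape, using only types that appear in this diff:

    var kzgBlob kzg.Blob
    copy(kzgBlob[:], rawBlob) // rawBlob: the blob bytes, as in the tests above

    cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
    if err != nil {
        // handle the error
    }
    // cells ([]kzg.Cell) and proofs ([]kzg.Proof) are index-aligned:
    // proofs[i] attests to cells[i] of the extended blob.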
@@ -713,7 +717,7 @@ func TestBlobs_CommitmentOrdering(t *testing.T) {
     // Request versioned hashes in reverse order: 2, 1, 0
     requestedHashes := [][]byte{hash2[:], hash1[:], hash0[:]}

-    verifiedBlobs, rpcErr := blocker.Blobs(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
+    verifiedBlobs, rpcErr := blocker.BlobSidecars(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
     if rpcErr != nil {
         t.Errorf("RPC Error: %v (reason: %v)", rpcErr.Err, rpcErr.Reason)
         return
@@ -738,7 +742,7 @@ func TestBlobs_CommitmentOrdering(t *testing.T) {
     // Request hashes for indices 1 and 0 (out of order)
    requestedHashes := [][]byte{hash1[:], hash0[:]}

-    verifiedBlobs, rpcErr := blocker.Blobs(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
+    verifiedBlobs, rpcErr := blocker.BlobSidecars(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
     if rpcErr != nil {
         t.Errorf("RPC Error: %v (reason: %v)", rpcErr.Err, rpcErr.Reason)
         return
@@ -764,7 +768,7 @@ func TestBlobs_CommitmentOrdering(t *testing.T) {
     // Request only the fake hash
     requestedHashes := [][]byte{fakeHash}

-    _, rpcErr := blocker.Blobs(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
+    _, rpcErr := blocker.BlobSidecars(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
     require.NotNil(t, rpcErr)
     require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
     require.StringContains(t, "versioned hash(es) not found in block", rpcErr.Err.Error())
@@ -784,7 +788,7 @@ func TestBlobs_CommitmentOrdering(t *testing.T) {
     // Request valid hash with two fake hashes
     requestedHashes := [][]byte{fakeHash1, hash0[:], fakeHash2}

-    _, rpcErr := blocker.Blobs(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
+    _, rpcErr := blocker.BlobSidecars(ctx, "finalized", options.WithVersionedHashes(requestedHashes))
     require.NotNil(t, rpcErr)
     require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
     require.StringContains(t, "versioned hash(es) not found in block", rpcErr.Err.Error())
@@ -829,16 +833,18 @@ func TestGetDataColumns(t *testing.T) {
     fuluBlock, fuluBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, fuluForkSlot, blobCount)
     fuluBlockRoot := fuluBlock.Root()

-    cellsAndProofsList := make([]kzg.CellsAndProofs, 0, len(fuluBlobSidecars))
+    cellsPerBlobList := make([][]kzg.Cell, 0, len(fuluBlobSidecars))
+    proofsPerBlobList := make([][]kzg.Proof, 0, len(fuluBlobSidecars))
     for _, blob := range fuluBlobSidecars {
         var kzgBlob kzg.Blob
         copy(kzgBlob[:], blob.Blob)
-        cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
+        cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
         require.NoError(t, err)
-        cellsAndProofsList = append(cellsAndProofsList, cellsAndProofs)
+        cellsPerBlobList = append(cellsPerBlobList, cells)
+        proofsPerBlobList = append(proofsPerBlobList, proofs)
     }

-    roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofsList, peerdas.PopulateFromBlock(fuluBlock))
+    roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlobList, proofsPerBlobList, peerdas.PopulateFromBlock(fuluBlock))
     require.NoError(t, err)

     verifiedRoDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumnSidecars))
@@ -413,13 +413,13 @@ func (vs *Server) handleUnblindedBlock(

     if block.Version() >= version.Fulu {
         // Compute cells and proofs from the blobs and cell proofs.
-        cellsAndProofs, err := peerdas.ComputeCellsAndProofsFromFlat(rawBlobs, proofs)
+        cellsPerBlob, proofsPerBlob, err := peerdas.ComputeCellsAndProofsFromFlat(rawBlobs, proofs)
         if err != nil {
             return nil, nil, errors.Wrap(err, "compute cells and proofs")
         }

         // Construct data column sidecars from the signed block and the cells and proofs.
-        roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(block))
+        roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(block))
         if err != nil {
             return nil, nil, errors.Wrap(err, "data column sidecars")
         }
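Since DataColumnSidecars now pairs cellsPerBlob[i] with proofsPerBlob[i], a caller-side length check is a cheap guard; a hypothetical addition, not present in the diff:

    // Hypothetical sanity check: the two per-blob slices must stay index-aligned.
    if len(cellsPerBlob) != len(proofsPerBlob) {
        return nil, nil, errors.New("cells/proofs length mismatch")
    }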
@@ -39,8 +39,13 @@ func (m *MockBlocker) Block(_ context.Context, b []byte) (interfaces.ReadOnlySig
     return m.SlotBlockMap[primitives.Slot(slotNumber)], nil
 }

+// BlobSidecars --
+func (*MockBlocker) BlobSidecars(_ context.Context, _ string, _ ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+    return nil, &core.RpcError{}
+}
+
 // Blobs --
-func (*MockBlocker) Blobs(_ context.Context, _ string, _ ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+func (*MockBlocker) Blobs(_ context.Context, _ string, _ ...options.BlobsOption) ([][]byte, *core.RpcError) {
     return nil, &core.RpcError{}
 }

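The mock implies the reshaped Blocker contract: BlobSidecars keeps the verified-sidecar return, while Blobs now yields raw blob bytes. A sketch of the interface as reconstructed from the mock's signatures (not quoted from the source):

    type Blocker interface {
        Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
        // BlobSidecars returns verified blob sidecars for the given block ID.
        BlobSidecars(ctx context.Context, id string, opts ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError)
        // Blobs returns the raw blob bytes for the given block ID.
        Blobs(ctx context.Context, id string, opts ...options.BlobsOption) ([][]byte, *core.RpcError)
    }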
@@ -28,6 +28,7 @@ func (s *Service) maintainCustodyInfo() {

 func (s *Service) updateCustodyInfoIfNeeded() error {
     const minimumPeerCount = 1
+    const gracePeriodSeconds = 300 // Grace period, in seconds, before a CGC increase takes effect.

     // Get our actual custody group count.
     actualCustodyGroupCount, err := s.cfg.p2p.CustodyGroupCount(s.ctx)
@@ -35,12 +36,101 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
         return errors.Wrap(err, "p2p custody group count")
     }

+    // Update the P2P custody group count metric.
+    custodyGroupCountP2P.Set(float64(actualCustodyGroupCount))
+
     // Get our target custody group count.
     targetCustodyGroupCount, err := s.custodyGroupCount(s.ctx)
     if err != nil {
         return errors.Wrap(err, "custody group count")
     }

+    // Handle pending CGC changes with the grace period.
+    s.pendingCGCLock.Lock()
+    now := time.Now()
+
+    switch {
+    case s.pendingCGC > 0 && now.After(s.pendingCGCDeadline):
+        // Grace period expired: check whether the pending change is still valid.
+        targetToApply := s.pendingCGC
+        s.pendingCGC = 0 // Clear the pending change.
+        s.pendingCGCDeadline = time.Time{}
+        s.pendingCGCLock.Unlock()
+
+        // Only apply the pending change if the current target still justifies it.
+        // This prevents applying stale increases when validators have been removed
+        // or the configuration has changed during the grace period.
+        if targetToApply <= targetCustodyGroupCount {
+            // The pending value is still valid (at or below the current target):
+            // the network still wants at least that many groups.
+            // Use the current target to account for any increases that happened during the grace period.
+            if targetCustodyGroupCount > actualCustodyGroupCount {
+                log.WithFields(logrus.Fields{
+                    "previousCGC": actualCustodyGroupCount,
+                    "newCGC":      targetCustodyGroupCount,
+                    "pendingCGC":  targetToApply,
+                }).Info("Applying custody group count increase after grace period")
+            }
+        } else {
+            // The pending value is higher than the current target: drop it as stale.
+            log.WithFields(logrus.Fields{
+                "currentCGC":      actualCustodyGroupCount,
+                "targetCGC":       targetCustodyGroupCount,
+                "stalePendingCGC": targetToApply,
+            }).Info("Dropping stale pending CGC increase as target has decreased")
+
+            // Still check whether the current target needs an increase (with a new grace period).
+            if targetCustodyGroupCount > actualCustodyGroupCount {
+                // Re-schedule with the current target and a new grace period.
+                s.pendingCGCLock.Lock()
+                s.pendingCGC = targetCustodyGroupCount
+                s.pendingCGCDeadline = now.Add(time.Duration(gracePeriodSeconds) * time.Second)
+                s.pendingCGCLock.Unlock()
+
+                log.WithFields(logrus.Fields{
+                    "currentCGC":  actualCustodyGroupCount,
+                    "targetCGC":   targetCustodyGroupCount,
+                    "gracePeriod": gracePeriodSeconds,
+                }).Info("Re-scheduling CGC increase with updated target")
+
+                return nil
+            }
+        }
+
+    case s.pendingCGC > 0 && !now.After(s.pendingCGCDeadline):
+        // A pending change exists but the grace period has not expired: do nothing.
+        pending := s.pendingCGC
+        timeRemaining := s.pendingCGCDeadline.Sub(now).Seconds()
+        s.pendingCGCLock.Unlock()
+
+        log.WithFields(logrus.Fields{
+            "pendingCGC":    pending,
+            "timeRemaining": timeRemaining,
+        }).Debug("Grace period still active, skipping CGC update")
+
+        return nil
+
+    default:
+        // No pending change: check whether we need to schedule one.
+        if targetCustodyGroupCount > actualCustodyGroupCount {
+            // Schedule the increase with a grace period.
+            s.pendingCGC = targetCustodyGroupCount
+            s.pendingCGCDeadline = now.Add(time.Duration(gracePeriodSeconds) * time.Second)
+            s.pendingCGCLock.Unlock()
+
+            log.WithFields(logrus.Fields{
+                "currentCGC":    actualCustodyGroupCount,
+                "targetCGC":     targetCustodyGroupCount,
+                "gracePeriod":   gracePeriodSeconds,
+                "effectiveTime": s.pendingCGCDeadline.Format(time.RFC3339),
+            }).Info("Scheduling custody group count increase with grace period")
+
+            return nil
+        }
+        // No change needed.
+        s.pendingCGCLock.Unlock()
+    }
+
     // If the actual custody group count already meets or exceeds the target, skip the update.
     if actualCustodyGroupCount >= targetCustodyGroupCount {
         return nil
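A minimal, self-contained sketch of the grace-period state machine above; the names (tracker, observe) are hypothetical and not part of Prysm:

    package main

    import (
        "fmt"
        "time"
    )

    type tracker struct {
        pending  uint64
        deadline time.Time
    }

    // observe mirrors the three-way switch above: schedule a new increase,
    // wait out an active grace period, or apply/drop the pending value once
    // the deadline passes.
    func (tr *tracker) observe(now time.Time, actual, target uint64, grace time.Duration) (uint64, bool) {
        switch {
        case tr.pending > 0 && now.After(tr.deadline):
            p := tr.pending
            tr.pending, tr.deadline = 0, time.Time{}
            if p <= target && target > actual {
                return target, true // still justified: apply the current target
            }
            return 0, false // stale or no longer needed; may be re-scheduled on a later tick
        case tr.pending > 0:
            return 0, false // grace period still running
        default:
            if target > actual {
                tr.pending, tr.deadline = target, now.Add(grace) // schedule with grace period
            }
            return 0, false
        }
    }

    func main() {
        tr := &tracker{}
        start := time.Now()
        tr.observe(start, 4, 8, 5*time.Minute)                  // schedules pending=8
        tr.observe(start.Add(time.Minute), 4, 8, 5*time.Minute) // still inside the grace period
        cgc, ok := tr.observe(start.Add(6*time.Minute), 4, 8, 5*time.Minute)
        fmt.Println(cgc, ok) // 8 true: the increase is applied after the grace period
    }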
@@ -80,10 +170,21 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
         return errors.Wrap(err, "p2p update custody info")
     }

-    if _, _, err := s.cfg.beaconDB.UpdateCustodyInfo(s.ctx, storedEarliestSlot, storedGroupCount); err != nil {
+    // Update the p2p earliest available slot metric.
+    earliestAvailableSlotP2P.Set(float64(storedEarliestSlot))
+
+    dbEarliestSlot, dbStoredGroupCount, err := s.cfg.beaconDB.UpdateCustodyInfo(s.ctx, storedEarliestSlot, storedGroupCount)
+    if err != nil {
         return errors.Wrap(err, "beacon db update custody info")
     }

+    // Update the DB earliest available slot metric.
+    earliestAvailableSlotDB.Set(float64(dbEarliestSlot))
+
+    // Update both custody group count metrics with their respective values.
+    custodyGroupCountP2P.Set(float64(storedGroupCount))
+    custodyGroupCountDB.Set(float64(dbStoredGroupCount))
+
     return nil
 }

@@ -217,10 +217,17 @@ func (s *Service) fetchOriginSidecars(peers []peer.ID) error {
         return nil
     }

     if err != nil {
         return errors.Wrap(err, "error fetching origin checkpoint blockroot")
     }

+    block, err := s.cfg.DB.Block(s.ctx, blockRoot)
+    if err != nil {
+        return errors.Wrap(err, "block")
+    }
+    if block.IsNil() {
+        return errors.Errorf("origin block for root %#x not found in database", blockRoot)
+    }

     currentSlot, blockSlot := s.clock.CurrentSlot(), block.Block().Slot()
     currentEpoch, blockEpoch := slots.ToEpoch(currentSlot), slots.ToEpoch(blockSlot)
@@ -230,6 +230,27 @@ var (
             Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
         },
     )

+    // Custody earliest available slot metrics.
+    earliestAvailableSlotP2P = promauto.NewGauge(prometheus.GaugeOpts{
+        Name: "custody_earliest_available_slot_p2p",
+        Help: "The earliest available slot tracked by the p2p service for custody purposes",
+    })
+
+    earliestAvailableSlotDB = promauto.NewGauge(prometheus.GaugeOpts{
+        Name: "custody_earliest_available_slot_db",
+        Help: "The earliest available slot tracked by the database for custody purposes",
+    })
+
+    // Custody group count metrics: separate P2P and DB views.
+    custodyGroupCountP2P = promauto.NewGauge(prometheus.GaugeOpts{
+        Name: "beacon_custody_group_count_p2p",
+        Help: "Current custody group count (CGC) from P2P layer",
+    })
+    custodyGroupCountDB = promauto.NewGauge(prometheus.GaugeOpts{
+        Name: "beacon_custody_group_count_db",
+        Help: "Current custody group count (CGC) stored in database",
+    })
 )

 func (s *Service) updateMetrics() {
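With both P2P and DB gauges exported, drift between the two views is easy to alert on; a hedged PromQL sketch using only the metric names defined above:

    # Fire when the P2P and DB views of the custody group count disagree
    beacon_custody_group_count_p2p != beacon_custody_group_count_db

    # Gap between the two earliest-available-slot views, in slots
    custody_earliest_available_slot_p2p - custody_earliest_available_slot_db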
@@ -182,6 +182,10 @@ type Service struct {
     dataColumnLogCh     chan dataColumnLogEntry
     digestActions       perDigestSet
     subscriptionSpawner func(func()) // see Service.spawn for details
+    // Grace period fields for CGC changes.
+    pendingCGC         uint64
+    pendingCGCDeadline time.Time
+    pendingCGCLock     sync.RWMutex
 }

 // NewService initializes new regular sync service.
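Because the new fields are guarded by an RWMutex, read-only callers could take just the read lock; a hypothetical accessor (not part of this diff) for illustration:

    // pendingCGCSnapshot reads the pending CGC value and its deadline under the
    // read lock. Hypothetical helper, shown only to illustrate the locking model.
    func (s *Service) pendingCGCSnapshot() (uint64, time.Time) {
        s.pendingCGCLock.RLock()
        defer s.pendingCGCLock.RUnlock()
        return s.pendingCGC, s.pendingCGCDeadline
    }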
@@ -28,8 +28,8 @@ func GenerateTestDataColumns(t *testing.T, parent [fieldparams.RootLength]byte,
         blobs = append(blobs, kzg.Blob(roBlobs[i].Blob))
     }

-    cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
-    roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(roBlock))
+    cellsPerBlob, proofsPerBlob := util.GenerateCellsAndProofs(t, blobs)
+    roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(roBlock))
     require.NoError(t, err)

     return roDataColumnSidecars
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Add log prefix to the light-client package.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Upgrade Prysm v6 to v7.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Added GeneralizedIndicesFromPath function to calculate the GIs for a given sszInfo object and a PathElement
|
||||
@@ -1,3 +0,0 @@
|
||||
## Changed
|
||||
- Introduced Path type for SSZ-QL queries and updated PathElement (removed Length field, kept Index) enforcing that len queries are terminal (at most one per path).
|
||||
- Changed length query syntax from `block.payload.len(transactions)` to `len(block.payload.transactions)`
|
||||
@@ -1,8 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Fulu fork epoch for mainnet configurations set for December 3, 2025, 09:49:11pm UTC
|
||||
- Added BPO schedules for December 9, 2025, 02:21:11pm UTC and January 7, 2026, 01:01:11am UTC
|
||||
|
||||
### Changed
|
||||
|
||||
- updated consensus spec to 1.6.0 from 1.6.0-beta.2
|
||||
changelog/james-prysm_optimize-get-blobs.md (new file)
@@ -0,0 +1,3 @@
+### Ignored
+
+- Optimization to remove cell and blob proof computation in the blob REST API.
@@ -1,3 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- log mentioning removed flag `--show-deposit-data`
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Changelog entries for v6.1.4 through v6.1.3
|
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- Use the `by-epoch' blob storage layout by default and log a warning to users who continue to use the flat layout, encouraging them to switch.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Backfill disabled if checkpoint sync origin is after fulu fork due to lack of DataColumnSidecar support in backfill. To track the availability of fulu-compatible backfill please watch https://github.com/OffchainLabs/prysm/issues/15982
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Fix bug with layout detection when readdirnames returns io.EOF.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
- `blobSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.
|
||||
- `dataColumnSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `SidecarProposerExpected`: Add the slot in the single flight key.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- `BeaconBlockContainerToSignedBeaconBlock`: Add Fulu.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `SidecarProposerExpected`: Use the correct value of proposer index in the singleflight group.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Remove `Reading static P2P private key from a file.` log if Fulu is enabled.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- Update `go-netroute` to `v0.4.0`
|
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- Update go-netroute to `v0.3.0`
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- `beacon_data_column_sidecar_gossip_verification_milliseconds`: Divide by 10.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Added
|
||||
- Metrics: Add count of peers per direction and type (inbound/outbound), (TCP/QUIC).
|
||||
- `p2p_subscribed_topic_peer_total`: Reset to avoid dangling values.
|
||||
- Add `p2p_minimum_peers_per_subnet` metric.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Ensures the rate limitation is respected for by root blob and data column sidecars requests.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `RODataColumnsVerifier.ValidProposerSignature`: Ensure the expensive signature verification is only performed once for concurrent requests for the same signature data.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- Bump builder default gas limit from `45000000` (45 MGas) to `60000000` (60 MGas)
|
||||
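Operators who register with a builder and want a target other than the new default can override it; assuming the validator client's `--suggested-gas-limit` flag (verify against your Prysm version), for example:

    validator --suggested-gas-limit=60000000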
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Fix incorrect version used when sending attestation version in Fulu
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- use filepath for path operations (clean, join, etc.) to ensure correct behavior on Windows
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Use head only if its compatible with target for attestation validation.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Use head state for block pubsub validation when possible.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Use head state readonly when possible to validate data column sidecars.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Changed the behavior of topic subscriptions such that only topics that require the active validator count will compute that value.
|
||||
- Added a Mutex to the computation of active validator count during topic subscription to avoid a race condition where multiple goroutines are computing the same work.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Fix test setup to properly reference electra rather than unset the fulu epoch
|
||||
changelog/pvl-v7-notes.md (new file)
@@ -0,0 +1,3 @@
+### Ignored
+
+- Updated CHANGELOG.md with release notes from v7.0.0
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Allow custom headers in validator client HTTP requests.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- Remove Beacon API endpoints that were deprecated in Electra: `GET /eth/v1/beacon/deposit_snapshot`, `GET /eth/v1/beacon/blocks/{block_id}/attestations`, `GET /eth/v1/beacon/pool/attestations`, `POST /eth/v1/beacon/pool/attestations`, `GET /eth/v1/beacon/pool/attester_slashings`, `POST /eth/v1/beacon/pool/attester_slashings`, `GET /eth/v1/validator/aggregate_attestation`, `POST /eth/v1/validator/aggregate_and_proofs`, `POST /eth/v1/beacon/blocks`, `POST /eth/v1/beacon/blinded_blocks`, `GET /eth/v1/builder/states/{state_id}/expected_withdrawals`.
|
||||
@@ -1,21 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- Deprecated flag `--enable-optional-engine-methods` has been removed.
|
||||
- Deprecated flag `--disable-build-block-parallel` has been removed.
|
||||
- Deprecated flag `--disable-reorg-late-blocks` has been removed.
|
||||
- Deprecated flag `--disable-optional-engine-methods` has been removed.
|
||||
- Deprecated flag `--disable-aggregate-parallel` has been removed.
|
||||
- Deprecated flag `--enable-eip-4881` has been removed.
|
||||
- Deprecated flag `--disable-eip-4881` has been removed.
|
||||
- Deprecated flag `--enable-verbose-sig-verification` has been removed.
|
||||
- Deprecated flag `--enable-debug-rpc-endpoints` has been removed.
|
||||
- Deprecated flag `--beacon-rpc-gateway-provider` has been removed.
|
||||
- Deprecated flag `--disable-grpc-gateway` has been removed.
|
||||
- Deprecated flag `--enable-experimental-state` has been removed.
|
||||
- Deprecated flag `--enable-committee-aware-packing` has been removed.
|
||||
- Deprecated flag `--interop-genesis-time` has been removed.
|
||||
- Deprecated flag `--interop-num-validators` has been removed (from beacon-chain only; still available in validator client).
|
||||
- Deprecated flag `--enable-quic` has been removed.
|
||||
- Deprecated flag `--attest-timely` has been removed.
|
||||
- Deprecated flag `--disable-experimental-state` has been removed.
|
||||
- Deprecated flag `--p2p-metadata` has been removed.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Use slices.Contains to simplify code
|
||||
changelog/satushh-cgc-grace.md (new file)
@@ -0,0 +1,3 @@
+### Added
+
+- Grace period before applying an increase to the custody group count (CGC).
changelog/satushh-eas-metric.md (new file)
@@ -0,0 +1,3 @@
+### Added
+
+- Metrics to track the earliest available slot
changelog/satushh-fetchoriginsidecars-bug.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Nil check in fetchOriginSidecars for the case where the origin block does not exist in the DB.
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Fix #15969: Handle addition overflow in `/eth/v1/beacon/rewards/attestations/{epoch}`.
|
||||
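The deleted entry records the fix for #15969. The underlying technique is a checked uint64 addition before accumulating rewards; a generic sketch, not the actual handler code:

    // addUint64 returns a+b, or an error if the sum would overflow uint64.
    // Generic illustration of the overflow guard behind the #15969 fix.
    func addUint64(a, b uint64) (uint64, error) {
        if a > math.MaxUint64-b {
            return 0, errors.New("uint64 addition overflow")
        }
        return a + b, nil
    }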
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Updated go bitfield from prysmaticlabs to offchainlabs
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Updated consensus spec tests to v1.6.0-beta.2
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Metric to track data columns recovered from execution layer
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Add Gloas protobuf definitions with spec tests and SSZ serialization support
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Return optimistic response only when handling blinded blocks in proposer
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Updated consensus spec tests to v1.6.0-beta.1 with new hashes and URL template
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Use SlotTicker with offset instead of time.Ticker for attestation pool pruning to avoid conflicts with slot boundary operations
|
||||
@@ -42,18 +42,16 @@ func TestComputeCellsAndKzgProofs(t *testing.T) {
     }
     b := kzgPrysm.Blob(blob)

-    cellsAndProofsForBlob, err := kzgPrysm.ComputeCellsAndKZGProofs(&b)
+    cells, proofs, err := kzgPrysm.ComputeCellsAndKZGProofs(&b)
     if test.Output != nil {
         require.NoError(t, err)
         var combined [][]string
-        cs := cellsAndProofsForBlob.Cells
-        csRaw := make([]string, 0, len(cs))
-        for _, c := range cs {
+        csRaw := make([]string, 0, len(cells))
+        for _, c := range cells {
             csRaw = append(csRaw, hexutil.Encode(c[:]))
         }
-        ps := cellsAndProofsForBlob.Proofs
-        psRaw := make([]string, 0, len(ps))
-        for _, p := range ps {
+        psRaw := make([]string, 0, len(proofs))
+        for _, p := range proofs {
             psRaw = append(psRaw, hexutil.Encode(p[:]))
         }
         combined = append(combined, csRaw)
@@ -69,18 +69,16 @@ func TestRecoverCellsAndKzgProofs(t *testing.T) {
     }

     // Recover the cells and proofs for the corresponding blob
-    cellsAndProofsForBlob, err := kzgPrysm.RecoverCellsAndKZGProofs(cellIndices, cells)
+    recoveredCells, recoveredProofs, err := kzgPrysm.RecoverCellsAndKZGProofs(cellIndices, cells)
     if test.Output != nil {
         require.NoError(t, err)
        var combined [][]string
-        cs := cellsAndProofsForBlob.Cells
-        csRaw := make([]string, 0, len(cs))
-        for _, c := range cs {
+        csRaw := make([]string, 0, len(recoveredCells))
+        for _, c := range recoveredCells {
             csRaw = append(csRaw, hexutil.Encode(c[:]))
         }
-        ps := cellsAndProofsForBlob.Proofs
-        psRaw := make([]string, 0, len(ps))
-        for _, p := range ps {
+        psRaw := make([]string, 0, len(recoveredProofs))
+        for _, p := range recoveredProofs {
             psRaw = append(psRaw, hexutil.Encode(p[:]))
         }
         combined = append(combined, csRaw)
@@ -146,11 +146,11 @@ func GenerateTestFuluBlockWithSidecars(t *testing.T, blobCount int, options ...F
     signedBeaconBlock, err := blocks.NewSignedBeaconBlock(block)
     require.NoError(t, err)

-    cellsAndProofs := GenerateCellsAndProofs(t, blobs)
+    cellsPerBlob, proofsPerBlob := GenerateCellsAndProofs(t, blobs)

     rob, err := blocks.NewROBlockWithRoot(signedBeaconBlock, root)
     require.NoError(t, err)
-    roSidecars, err := peerdas.DataColumnSidecars(cellsAndProofs, peerdas.PopulateFromBlock(rob))
+    roSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(rob))
     require.NoError(t, err)

     verifiedRoSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roSidecars))
@@ -167,12 +167,14 @@ func GenerateTestFuluBlockWithSidecars(t *testing.T, blobCount int, options ...F
     return roBlock, roSidecars, verifiedRoSidecars
 }

-func GenerateCellsAndProofs(t testing.TB, blobs []kzg.Blob) []kzg.CellsAndProofs {
-    cellsAndProofs := make([]kzg.CellsAndProofs, len(blobs))
+func GenerateCellsAndProofs(t testing.TB, blobs []kzg.Blob) ([][]kzg.Cell, [][]kzg.Proof) {
+    cellsPerBlob := make([][]kzg.Cell, len(blobs))
+    proofsPerBlob := make([][]kzg.Proof, len(blobs))
     for i := range blobs {
-        cp, err := kzg.ComputeCellsAndKZGProofs(&blobs[i])
+        cells, proofs, err := kzg.ComputeCellsAndKZGProofs(&blobs[i])
         require.NoError(t, err)
-        cellsAndProofs[i] = cp
+        cellsPerBlob[i] = cells
+        proofsPerBlob[i] = proofs
     }
-    return cellsAndProofs
+    return cellsPerBlob, proofsPerBlob
 }
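A short usage sketch of the updated helper, mirroring the call sites changed elsewhere in this diff (assumes blobs and roBlock as in GenerateTestDataColumns above):

    cellsPerBlob, proofsPerBlob := util.GenerateCellsAndProofs(t, blobs)
    require.Equal(t, len(cellsPerBlob), len(proofsPerBlob)) // index-aligned per blob
    roSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(roBlock))
    require.NoError(t, err)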