Compare commits

...

5 Commits

Author SHA1 Message Date
Manu NALEPA
6397093627 foobar 2026-01-04 23:44:38 +01:00
Manu NALEPA
0db74365e0 Summarize "Accepted data column sidecars summary" log. (#16210)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**

**Before:**
```
[2026-01-02 13:29:50.13] DEBUG sync: Accepted data column sidecars summary columnIndices=[0 1 6 7 8 9 10 11 12 13 14 15 16 18 23 28 29 31 32 35 37 38 39 40 41 42 43 45 47 48 49 50 51 52 55 58 59 60 62 65 66 68 70 73 74 75 76 78 79 81 83 84 88 89 90 93 94 95 96 98 99 103 105 106 107 108 109 110 111 113 114 115 117 118 119 121 122] gossipScores=[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] peers=[rjzcRC oxj6o4 HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE HCT2LE aZAzfp HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE aZAzfp oxj6o4 oxj6o4 YdJQCg oxj6o4 oxj6o4 oxj6o4 HCT2LE oxj6o4 HCT2LE HCT2LE 5jMhEK HCT2LE oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 oxj6o4 HCT2LE rjzcRC oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE oxj6o4 oxj6o4 HCT2LE HCT2LE HCT2LE oxj6o4] receivedCount=77 sinceStartTimes=[869.00ms 845.00ms 797.00ms 795.00ms 805.00ms 906.00ms 844.00ms 849.00ms 843.00ms 844.00ms 821.00ms 796.00ms 794.00ms 796.00ms 838.00ms 842.00ms 843.00ms 848.00ms 795.00ms 820.00ms 797.00ms 830.00ms 801.00ms 794.00ms 925.00ms 924.00ms 935.00ms 843.00ms 802.00ms 796.00ms 802.00ms 798.00ms 794.00ms 796.00ms 796.00ms 843.00ms 802.00ms 830.00ms 826.00ms 796.00ms 819.00ms 801.00ms 852.00ms 877.00ms 876.00ms 843.00ms 843.00ms 844.00ms 1138.00ms 843.00ms 886.00ms 805.00ms 794.00ms 844.00ms 909.00ms 845.00ms 889.00ms 798.00ms 792.00ms 843.00ms 878.00ms 802.00ms 798.00ms 849.00ms 826.00ms 815.00ms 844.00ms 797.00ms 795.00ms 798.00ms 843.00ms 844.00ms 845.00ms 845.00ms 867.00ms 805.00ms 800.00ms] slot=2095599 validationTimes=[399.00ms 423.00ms 470.00ms 472.00ms 463.00ms 362.00ms 423.00ms 419.00ms 425.00ms 423.00ms 446.00ms 471.00ms 473.00ms 471.00ms 429.00ms 425.00ms 424.00ms 419.00ms 471.00ms 448.00ms 470.00ms 437.00ms 467.00ms 472.00ms 342.00ms 343.00ms 332.00ms 424.00ms 465.00ms 471.00ms 465.00ms 469.00ms 473.00ms 470.00ms 470.00ms 424.00ms 466.00ms 438.00ms 442.00ms 471.00ms 448.00ms 467.00ms 416.00ms 390.00ms 392.00ms 424.00ms 425.00ms 423.00ms 140.00ms 424.00ms 381.00ms 462.00ms 473.00ms 423.00ms 359.00ms 423.00ms 378.00ms 469.00ms 475.00ms 425.00ms 390.00ms 465.00ms 469.00ms 419.00ms 442.00ms 452.00ms 423.00ms 470.00ms 473.00ms 469.00ms 424.00ms 423.00ms 423.00ms 423.00ms 400.00ms 462.00ms 467.00ms]
```


**After:**
```
[2026-01-02 16:48:48.61] DEBUG sync: Accepted data column sidecars summary count=31 indices=0-1,3-5,7,21,24,27,29,36-37,46,48,55,57,66,70,76,82,89,93-94,97,99-101,113,120,124,126 root=0x409a4eac4761a3199f60dec0dfe50b6eed91e29d6c3671bb61704401906d2b69 sinceStartTime=[min: 512.181127ms, avg: 541.358688ms, max: 557.074707ms] slot=2096594 validationTime=[min: 13.357515ms, avg: 55.1343ms, max: 73.909889ms]
```
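The compact `indices` value above collapses consecutive column indices into ranges. A minimal illustrative sketch of that rendering (not Prysm's actual `helpers.PrettySlice`; `prettyRanges` is a hypothetical name):

```
package main

import (
	"fmt"
	"strings"
)

// prettyRanges renders a sorted index list as compact ranges,
// e.g. [0 1 3 4 5 7] -> "0-1,3-5,7".
func prettyRanges(sorted []uint64) string {
	if len(sorted) == 0 {
		return ""
	}
	var b strings.Builder
	start, prev := sorted[0], sorted[0]
	flush := func() {
		if b.Len() > 0 {
			b.WriteByte(',')
		}
		if start == prev {
			fmt.Fprintf(&b, "%d", start)
		} else {
			fmt.Fprintf(&b, "%d-%d", start, prev)
		}
	}
	for _, v := range sorted[1:] {
		if v == prev+1 {
			prev = v
			continue
		}
		flush()
		start, prev = v, v
	}
	flush()
	return b.String()
}

func main() {
	fmt.Println(prettyRanges([]uint64{0, 1, 3, 4, 5, 7})) // "0-1,3-5,7"
}
```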

Distributions are still available via metrics:
<img width="792" height="309" alt="image"
src="https://github.com/user-attachments/assets/15128283-6740-4387-b205-41fb18205f54"
/>

<img width="799" height="322" alt="image"
src="https://github.com/user-attachments/assets/e0d602fa-db06-4cd3-8ec7-1ee2671c9921"
/>
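
For reference, a hedged sketch of how such a distribution can stay available as a Prometheus histogram; the metric name, help text, and buckets here are illustrative, not Prysm's actual identifiers:

```
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Hypothetical metric: per-observation validation times keep their full
// distribution in a histogram even though the log only shows min/avg/max.
var validationTimeHist = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "data_column_sidecar_validation_milliseconds",
	Help:    "Validation time of accepted data column sidecars.",
	Buckets: prometheus.LinearBuckets(50, 50, 10), // 50ms .. 500ms
})

func main() {
	// Each accepted sidecar contributes one observation.
	for _, ms := range []float64{13.4, 55.1, 73.9} {
		validationTimeHist.Observe(ms)
	}
}
```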


**Which issue(s) does this PR fix?**

Fixes:
- https://github.com/OffchainLabs/prysm/issues/16208

**Other notes for review**

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-02 17:09:30 +00:00
Potuz
6f90101364 Use proposer lookahead for data column verification (#16202)
Replace the proposer indices cache usage in data column sidecar
verification with direct state lookahead access. Since data column
sidecars require the Fulu fork, the state always has a ProposerLookahead
field that provides O(1) proposer index lookups for the current and next
epoch.

This simplifies SidecarProposerExpected() by removing:
- Checkpoint-based proposer cache lookup
- Singleflight wrapper (not needed for O(1) access)
- Target root computation for cache keys
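
A minimal sketch of the lookahead idea, assuming the lookahead is a flat array of proposer indices covering the current and next epoch, indexed by slot offset (illustrative layout, not Prysm's exact `ProposerLookahead` API):

```
package main

import "fmt"

// proposerAtSlot is an O(1) lookup into a proposer lookahead that covers
// the current and next epoch. lookahead[i] holds the proposer index of
// the i-th slot since epochStartSlot.
func proposerAtSlot(lookahead []uint64, epochStartSlot, slot uint64) (uint64, error) {
	offset := slot - epochStartSlot
	if slot < epochStartSlot || offset >= uint64(len(lookahead)) {
		return 0, fmt.Errorf("slot %d outside lookahead window", slot)
	}
	return lookahead[offset], nil
}

func main() {
	la := []uint64{11, 42, 7} // proposers for three consecutive slots
	idx, _ := proposerAtSlot(la, 100, 101)
	fmt.Println(idx) // 42
}
```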

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 17:01:53 +00:00
Manu NALEPA
49e1763ec2 Data columns cache warmup: Parallelize computation of all files for a given epoch. (#16207)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
Before this PR, all `.sszs` files containing the data column sidecars
were read and processed sequentially, which took some time.
After this PR, all `.sszs` files of a given epoch (so, up to 32 files
with the current `SLOTS_PER_EPOCH` value) are processed in parallel.
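
A simplified sketch of the fan-out pattern (not the exact Prysm code): one goroutine per file via `errgroup`, with a mutex guarding the shared accumulator.

```
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func main() {
	files := []string{"0.sszs", "1.sszs", "2.sszs"}
	var (
		eg      errgroup.Group
		mu      sync.Mutex
		highest int
	)
	for i, file := range files {
		i, file := i, file // capture loop variables (needed before Go 1.22)
		eg.Go(func() error {
			// A real implementation would parse `file` here; we only
			// simulate a per-file result with the index.
			_ = file
			mu.Lock()
			defer mu.Unlock()
			highest = max(highest, i)
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		fmt.Println("warm-up error:", err)
	}
	fmt.Println("highest epoch seen:", highest)
}
```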

**Which issue(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16204

Tested on a [Netcup VPS 4000 G11](https://www.netcup.com/en/server/vps).
**Before this PR (3 trials)**:
```
[2026-01-02 08:55:12.71]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=1m22.894007534s
[2026-01-02 12:59:33.62]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=42.346732863s
[2026-01-02 13:03:13.65]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=56.143565960s
```

**After this PR (3 trials)**:
```
[2026-01-02 12:50:07.53]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.019424193s
[2026-01-02 12:52:01.34]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=1.960671225s
[2026-01-02 12:53:34.66]  INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.549555363s
```


**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-02 16:59:55 +00:00
Potuz
c2527c82cd Use a separate context when updating the NSC (#16209)
#16149 introduced a race condition in which the update to the NSC
happens with a context that may already be cancelled by the time the
routine is called. This PR starts a new context with a deadline and
calls the routine in the background.
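
A self-contained sketch of the bug and the fix, assuming `s.ctx` stands for the service's long-lived context and `slotDeadline` for a per-slot `time.Duration` bound:

```
package main

import (
	"context"
	"fmt"
	"time"
)

type service struct{ ctx context.Context } // long-lived service context

const slotDeadline = 4 * time.Second // assumed per-slot bound

func (s *service) updateCaches(ctx context.Context) {
	select {
	case <-ctx.Done():
		fmt.Println("update aborted:", ctx.Err())
	case <-time.After(10 * time.Millisecond):
		fmt.Println("update done")
	}
}

func main() {
	s := &service{ctx: context.Background()}
	callerCtx, cancel := context.WithCancel(context.Background())
	cancel() // the caller has already returned: callerCtx is cancelled

	// Buggy pattern: inherits the caller's (already cancelled) context.
	s.updateCaches(callerCtx) // prints "update aborted: context canceled"

	// Fixed pattern: derive a bounded context from the service context.
	ctx, cancelBg := context.WithTimeout(s.ctx, slotDeadline)
	defer cancelBg()
	s.updateCaches(ctx) // prints "update done"
}
```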

fixes #16205
2026-01-02 16:43:34 +00:00
12 changed files with 320 additions and 288 deletions


@@ -106,7 +106,12 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
// Pre-Fulu the caches are updated when computing the payload attributes
if cfg.postState.Version() >= version.Fulu {
go s.updateCachesPostBlockProcessing(cfg)
go func() {
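// Derive from the long-lived service context (s.ctx), not the request
// context, so the caller returning does not cancel this update.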
ctx, cancel := context.WithTimeout(s.ctx, slotDeadline)
defer cancel()
cfg.ctx = ctx
s.updateCachesPostBlockProcessing(cfg)
}()
}
return nil
}
@@ -929,6 +934,8 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
// After Fulu, we can update the caches asynchronously after sending FCU to the engine
defer func() {
go func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDeadline)
defer cancel()
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("Could not update next slot state cache")


@@ -38,6 +38,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)


@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/spf13/afero"
"golang.org/x/sync/errgroup"
)
const (
@@ -185,73 +186,162 @@ func (dcs *DataColumnStorage) WarmCache() {
highestStoredEpoch := primitives.Epoch(0)
// Walk the data column filesystem to warm up the cache.
if err := afero.Walk(dcs.fs, ".", func(path string, info os.FileInfo, fileErr error) (err error) {
if fileErr != nil {
return fileErr
}
// If not a leaf, skip.
if info.IsDir() {
return nil
}
// Extract metadata from the file path.
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
f, err := dcs.fs.Open(path)
if err != nil {
log.WithError(err).Error("Error encountered while opening data column filesystem file")
return nil
}
// Close the file.
defer func() {
// Overwrite the existing error only if it is nil, since the close error is less important.
closeErr := f.Close()
if closeErr != nil && err == nil {
err = closeErr
}
}()
// Read the metadata of the file.
metadata, err := dcs.metadata(f)
if err != nil {
log.WithError(err).Error("Error encountered while reading metadata from data column filesystem file")
return nil
}
// Check the indices.
indices := metadata.indices.all()
if len(indices) == 0 {
return nil
}
// Build the ident.
dataColumnsIdent := DataColumnsIdent{Root: fileMetadata.blockRoot, Epoch: fileMetadata.epoch, Indices: indices}
// Update the highest stored epoch.
highestStoredEpoch = max(highestStoredEpoch, fileMetadata.epoch)
// Set the ident in the cache.
if err := dcs.cache.set(dataColumnsIdent); err != nil {
log.WithError(err).Error("Error encountered while ensuring data column filesystem cache")
}
return nil
}); err != nil {
log.WithError(err).Error("Error encountered while walking data column filesystem.")
// List all period directories
periodFileInfos, err := afero.ReadDir(dcs.fs, ".")
if err != nil {
log.WithError(err).Error("Error reading top directory during warm cache")
return
}
// Prune the cache and the filesystem.
// Iterate through periods
for _, periodFileInfo := range periodFileInfos {
if !periodFileInfo.IsDir() {
continue
}
periodPath := periodFileInfo.Name()
// List all epoch directories in this period
epochFileInfos, err := afero.ReadDir(dcs.fs, periodPath)
if err != nil {
log.WithError(err).WithField("period", periodPath).Error("Error reading period directory during warm cache")
continue
}
// Iterate through epochs
for _, epochFileInfo := range epochFileInfos {
if !epochFileInfo.IsDir() {
continue
}
epochPath := path.Join(periodPath, epochFileInfo.Name())
// List all .sszs files in this epoch
files, err := listEpochFiles(dcs.fs, epochPath)
if err != nil {
log.WithError(err).WithField("epoch", epochPath).Error("Error listing epoch files during warm cache")
continue
}
if len(files) == 0 {
continue
}
// Process all files in this epoch in parallel
epochHighest, err := dcs.processEpochFiles(files)
if err != nil {
log.WithError(err).WithField("epoch", epochPath).Error("Error processing epoch files during warm cache")
}
highestStoredEpoch = max(highestStoredEpoch, epochHighest)
}
}
// Prune the cache and the filesystem
dcs.prune()
log.WithField("elapsed", time.Since(start)).Info("Data column filesystem cache warm-up complete")
totalElapsed := time.Since(start)
// Log summary
log.WithField("elapsed", totalElapsed).Info("Data column filesystem cache warm-up complete")
}
// listEpochFiles lists all .sszs files in an epoch directory.
func listEpochFiles(fs afero.Fs, epochPath string) ([]string, error) {
fileInfos, err := afero.ReadDir(fs, epochPath)
if err != nil {
return nil, errors.Wrap(err, "read epoch directory")
}
files := make([]string, 0, len(fileInfos))
for _, fileInfo := range fileInfos {
if fileInfo.IsDir() {
continue
}
fileName := fileInfo.Name()
if strings.HasSuffix(fileName, "."+dataColumnsFileExtension) {
files = append(files, path.Join(epochPath, fileName))
}
}
return files, nil
}
// processEpochFiles processes all .sszs files in an epoch directory in parallel.
func (dcs *DataColumnStorage) processEpochFiles(files []string) (primitives.Epoch, error) {
var (
eg errgroup.Group
mu sync.Mutex
)
highestEpoch := primitives.Epoch(0)
for _, filePath := range files {
eg.Go(func() error {
epoch, err := dcs.processFile(filePath)
if err != nil {
log.WithError(err).WithField("file", filePath).Error("Error processing file during warm cache")
return nil
}
mu.Lock()
defer mu.Unlock()
highestEpoch = max(highestEpoch, epoch)
return nil
})
}
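// Per-file errors are logged and swallowed above, so Wait is not
// expected to return an error; the check is kept for safety.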
if err := eg.Wait(); err != nil {
return highestEpoch, err
}
return highestEpoch, nil
}
// processFile processes a single .sszs file.
func (dcs *DataColumnStorage) processFile(filePath string) (primitives.Epoch, error) {
// Extract metadata from the file path
fileMetadata, err := extractFileMetadata(filePath)
if err != nil {
return 0, errors.Wrap(err, "extract file metadata")
}
// Open the file (each goroutine gets its own FD)
f, err := dcs.fs.Open(filePath)
if err != nil {
return 0, errors.Wrap(err, "open file")
}
defer func() {
if closeErr := f.Close(); closeErr != nil {
log.WithError(closeErr).WithField("file", filePath).Error("Error closing file during warm cache")
}
}()
// Read metadata
metadata, err := dcs.metadata(f)
if err != nil {
return 0, errors.Wrap(err, "read metadata")
}
// Extract indices
indices := metadata.indices.all()
if len(indices) == 0 {
return fileMetadata.epoch, nil // No indices, skip
}
// Build ident and set in cache (thread-safe)
dataColumnsIdent := DataColumnsIdent{
Root: fileMetadata.blockRoot,
Epoch: fileMetadata.epoch,
Indices: indices,
}
if err := dcs.cache.set(dataColumnsIdent); err != nil {
return 0, errors.Wrap(err, "cache set")
}
return fileMetadata.epoch, nil
}
// Summary returns the DataColumnStorageSummary.


@@ -3,11 +3,12 @@ package sync
import (
"context"
"fmt"
"math"
"slices"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -143,12 +144,9 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
}
// [REJECT] The sidecar's column data is valid as verified by `verify_data_column_sidecar_kzg_proofs(sidecar)`.
validationResult, err := s.validateWithKzgBatchVerifier(ctx, roDataColumns)
if validationResult != pubsub.ValidationAccept {
return validationResult, err
if err := verifier.SidecarKzgProofVerified(); err != nil {
return pubsub.ValidationReject, err
}
// Mark KZG verification as satisfied since we did it via batch verifier
verifier.SatisfyRequirement(verification.RequireSidecarKzgProofVerified)
// [IGNORE] The sidecar is the first sidecar for the tuple `(block_header.slot, block_header.proposer_index, sidecar.index)`
// with valid header signature, sidecar inclusion proof, and kzg proof.
@@ -192,19 +190,13 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
dataColumnSidecarArrivalGossipSummary.Observe(float64(sinceSlotStartTime.Milliseconds()))
dataColumnSidecarVerificationGossipHistogram.Observe(float64(validationTime.Milliseconds()))
peerGossipScore := s.cfg.p2p.Peers().Scorers().GossipScorer().Score(pid)
select {
case s.dataColumnLogCh <- dataColumnLogEntry{
Slot: roDataColumn.Slot(),
ColIdx: roDataColumn.Index,
PropIdx: roDataColumn.ProposerIndex(),
BlockRoot: roDataColumn.BlockRoot(),
ParentRoot: roDataColumn.ParentRoot(),
PeerSuffix: pid.String()[len(pid.String())-6:],
PeerGossipScore: peerGossipScore,
validationTime: validationTime,
sinceStartTime: sinceSlotStartTime,
slot: roDataColumn.Slot(),
index: roDataColumn.Index,
root: roDataColumn.BlockRoot(),
validationTime: validationTime,
sinceStartTime: sinceSlotStartTime,
}:
default:
log.WithField("slot", roDataColumn.Slot()).Warn("Failed to send data column log entry")
@@ -249,68 +241,69 @@ func computeCacheKey(slot primitives.Slot, proposerIndex primitives.ValidatorInd
}
type dataColumnLogEntry struct {
Slot primitives.Slot
ColIdx uint64
PropIdx primitives.ValidatorIndex
BlockRoot [32]byte
ParentRoot [32]byte
PeerSuffix string
PeerGossipScore float64
validationTime time.Duration
sinceStartTime time.Duration
slot primitives.Slot
index uint64
root [32]byte
validationTime time.Duration
sinceStartTime time.Duration
}
func (s *Service) processDataColumnLogs() {
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
slotStats := make(map[primitives.Slot][fieldparams.NumberOfColumns]dataColumnLogEntry)
slotStats := make(map[[fieldparams.RootLength]byte][]dataColumnLogEntry)
for {
select {
case entry := <-s.dataColumnLogCh:
cols := slotStats[entry.Slot]
cols[entry.ColIdx] = entry
slotStats[entry.Slot] = cols
case col := <-s.dataColumnLogCh:
cols := slotStats[col.root]
cols = append(cols, col)
slotStats[col.root] = cols
case <-ticker.C:
for slot, columns := range slotStats {
var (
colIndices = make([]uint64, 0, fieldparams.NumberOfColumns)
peers = make([]string, 0, fieldparams.NumberOfColumns)
gossipScores = make([]float64, 0, fieldparams.NumberOfColumns)
validationTimes = make([]string, 0, fieldparams.NumberOfColumns)
sinceStartTimes = make([]string, 0, fieldparams.NumberOfColumns)
)
for root, columns := range slotStats {
indices := make([]uint64, 0, fieldparams.NumberOfColumns)
minValidationTime, maxValidationTime, sumValidationTime := time.Duration(0), time.Duration(0), time.Duration(0)
minSinceStartTime, maxSinceStartTime, sumSinceStartTime := time.Duration(0), time.Duration(0), time.Duration(0)
totalReceived := 0
for _, entry := range columns {
if entry.PeerSuffix == "" {
for _, column := range columns {
indices = append(indices, column.index)
sumValidationTime += column.validationTime
sumSinceStartTime += column.sinceStartTime
if totalReceived == 0 {
minValidationTime, maxValidationTime = column.validationTime, column.validationTime
minSinceStartTime, maxSinceStartTime = column.sinceStartTime, column.sinceStartTime
totalReceived++
continue
}
colIndices = append(colIndices, entry.ColIdx)
peers = append(peers, entry.PeerSuffix)
gossipScores = append(gossipScores, roundFloat(entry.PeerGossipScore, 2))
validationTimes = append(validationTimes, fmt.Sprintf("%.2fms", float64(entry.validationTime.Milliseconds())))
sinceStartTimes = append(sinceStartTimes, fmt.Sprintf("%.2fms", float64(entry.sinceStartTime.Milliseconds())))
minValidationTime, maxValidationTime = min(minValidationTime, column.validationTime), max(maxValidationTime, column.validationTime)
minSinceStartTime, maxSinceStartTime = min(minSinceStartTime, column.sinceStartTime), max(maxSinceStartTime, column.sinceStartTime)
totalReceived++
}
log.WithFields(logrus.Fields{
"slot": slot,
"receivedCount": totalReceived,
"columnIndices": colIndices,
"peers": peers,
"gossipScores": gossipScores,
"validationTimes": validationTimes,
"sinceStartTimes": sinceStartTimes,
}).Debug("Accepted data column sidecars summary")
if totalReceived > 0 {
slices.Sort(indices)
avgValidationTime := sumValidationTime / time.Duration(totalReceived)
avgSinceStartTime := sumSinceStartTime / time.Duration(totalReceived)
log.WithFields(logrus.Fields{
"slot": columns[0].slot,
"root": fmt.Sprintf("%#x", root),
"count": totalReceived,
"indices": helpers.PrettySlice(indices),
"validationTime": prettyMinMaxAverage(minValidationTime, maxValidationTime, avgValidationTime),
"sinceStartTime": prettyMinMaxAverage(minSinceStartTime, maxSinceStartTime, avgSinceStartTime),
}).Debug("Accepted data column sidecars summary")
}
}
slotStats = make(map[primitives.Slot][fieldparams.NumberOfColumns]dataColumnLogEntry)
slotStats = make(map[[fieldparams.RootLength]byte][]dataColumnLogEntry)
}
}
}
func roundFloat(f float64, decimals int) float64 {
mult := math.Pow(10, float64(decimals))
return math.Round(f*mult) / mult
func prettyMinMaxAverage(min, max, average time.Duration) string {
return fmt.Sprintf("[min: %v, avg: %v, max: %v]", min, average, max)
}


@@ -687,6 +687,12 @@ func sbrNotFound(t *testing.T, expectedRoot [32]byte) *mockStateByRooter {
}}
}
func sbrReturnsState(st state.BeaconState) *mockStateByRooter {
return &mockStateByRooter{sbr: func(_ context.Context, _ [32]byte) (state.BeaconState, error) {
return st, nil
}}
}
func sbrForValOverride(idx primitives.ValidatorIndex, val *ethpb.Validator) *mockStateByRooter {
return sbrForValOverrideWithT(nil, idx, val)
}


@@ -11,12 +11,10 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/runtime/logging"
"github.com/OffchainLabs/prysm/v7/time/slots"
@@ -484,88 +482,19 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e
defer dv.recordResult(RequireSidecarProposerExpected, &err)
type slotParentRoot struct {
slot primitives.Slot
parentRoot [fieldparams.RootLength]byte
}
targetRootBySlotParentRoot := make(map[slotParentRoot][fieldparams.RootLength]byte)
var targetRootFromCache = func(slot primitives.Slot, parentRoot [fieldparams.RootLength]byte) ([fieldparams.RootLength]byte, error) {
// Use cached values if available.
slotParentRoot := slotParentRoot{slot: slot, parentRoot: parentRoot}
if root, ok := targetRootBySlotParentRoot[slotParentRoot]; ok {
return root, nil
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(slot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Compute the target root for the epoch.
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
if err != nil {
return [fieldparams.RootLength]byte{}, columnErrBuilder(errors.Wrap(err, "target root from epoch"))
}
// Store the target root in the cache.
targetRootBySlotParentRoot[slotParentRoot] = targetRoot
return targetRoot, nil
}
for _, dataColumn := range dv.dataColumns {
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the target root for the data column.
targetRoot, err := targetRootFromCache(dataColumnSlot, parentRoot)
// Get the verifying state, it is guaranteed to have the correct proposer in the lookahead.
verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "target root"))
return columnErrBuilder(errors.Wrap(err, "verifying state"))
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Create a checkpoint for the target root.
checkpoint := &forkchoicetypes.Checkpoint{Root: targetRoot, Epoch: dataColumnEpoch}
// Try to extract the proposer index from the data column in the cache.
idx, cached := dv.pc.Proposer(checkpoint, dataColumnSlot)
if !cached {
parentRoot := dataColumn.ParentRoot()
// Ensure the expensive index computation is only performed once for
// concurrent requests for the same signature data.
idxAny, err, _ := dv.sg.Do(concatRootSlot(parentRoot, dataColumnSlot), func() (any, error) {
verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "verifying state"))
}
idx, err = helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "compute proposer"))
}
return idx, nil
})
if err != nil {
return err
}
var ok bool
if idx, ok = idxAny.(primitives.ValidatorIndex); !ok {
return columnErrBuilder(errors.New("type assertion to ValidatorIndex failed"))
}
// Use proposer lookahead directly
idx, err := helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
if err != nil {
return columnErrBuilder(errors.Wrap(err, "proposer from lookahead"))
}
if idx != dataColumn.ProposerIndex() {
@@ -626,7 +555,3 @@ func inclusionProofKey(c blocks.RODataColumn) ([32]byte, error) {
return sha256.Sum256(unhashedKey), nil
}
func concatRootSlot(root [fieldparams.RootLength]byte, slot primitives.Slot) string {
return string(root[:]) + fmt.Sprintf("%d", slot)
}


@@ -2,7 +2,6 @@ package verification
import (
"reflect"
"sync"
"testing"
"time"
@@ -795,87 +794,90 @@ func TestDataColumnsSidecarProposerExpected(t *testing.T) {
blobCount = 1
)
parentRoot := [fieldparams.RootLength]byte{}
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
ctx := t.Context()
testCases := []struct {
name string
stateByRooter StateByRooter
proposerCache proposerCache
columns []blocks.RODataColumn
error string
}{
{
name: "Cached, matches",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex()),
},
columns: columns,
},
{
name: "Cached, does not match",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex() + 1),
},
columns: columns,
error: errSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, state lookup failure",
stateByRooter: sbrNotFound(t, firstColumn.ParentRoot()),
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
},
columns: columns,
error: "verifying state",
},
}
parentRoot := [fieldparams.RootLength]byte{}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: tc.stateByRooter,
pc: tc.proposerCache,
hsp: &mockHeadStateProvider{},
fc: &mockForkchoicer{
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
},
// Create a Fulu state to get the expected proposer from the lookahead.
fuluState, _ := util.DeterministicGenesisStateFulu(t, 32)
expectedProposer, err := fuluState.ProposerLookahead()
require.NoError(t, err)
expectedProposerIdx := primitives.ValidatorIndex(expectedProposer[columnSlot])
// Generate data columns with the expected proposer index.
matchingColumns := generateTestDataColumnsWithProposer(t, parentRoot, columnSlot, blobCount, expectedProposerIdx)
// Generate data columns with wrong proposer index.
wrongColumns := generateTestDataColumnsWithProposer(t, parentRoot, columnSlot, blobCount, expectedProposerIdx+1)
t.Run("Proposer matches", func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: sbrReturnsState(fuluState),
hsp: &mockHeadStateProvider{
headRoot: parentRoot[:],
headSlot: columnSlot, // Same epoch so HeadStateReadOnly is used
headStateReadOnly: fuluState,
},
}
fc: &mockForkchoicer{},
},
}
verifier := initializer.NewDataColumnsVerifier(tc.columns, GossipDataColumnSidecarRequirements)
var wg sync.WaitGroup
verifier := initializer.NewDataColumnsVerifier(matchingColumns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(ctx)
require.NoError(t, err)
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
require.NoError(t, verifier.results.result(RequireSidecarProposerExpected))
})
var err1, err2 error
wg.Go(func() {
err1 = verifier.SidecarProposerExpected(ctx)
})
wg.Go(func() {
err2 = verifier.SidecarProposerExpected(ctx)
})
wg.Wait()
t.Run("Proposer does not match", func(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: sbrReturnsState(fuluState),
hsp: &mockHeadStateProvider{
headRoot: parentRoot[:],
headSlot: columnSlot, // Same epoch so HeadStateReadOnly is used
headStateReadOnly: fuluState,
},
fc: &mockForkchoicer{},
},
}
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
verifier := initializer.NewDataColumnsVerifier(wrongColumns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(ctx)
require.ErrorContains(t, errSidecarUnexpectedProposer.Error(), err)
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
})
if len(tc.error) > 0 {
require.ErrorContains(t, tc.error, err1)
require.ErrorContains(t, tc.error, err2)
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
return
}
t.Run("State lookup failure", func(t *testing.T) {
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
initializer := Initializer{
shared: &sharedResources{
sr: sbrNotFound(t, columns[0].ParentRoot()),
hsp: &mockHeadStateProvider{},
fc: &mockForkchoicer{},
},
}
require.NoError(t, err1)
require.NoError(t, err2)
require.NoError(t, verifier.results.result(RequireSidecarProposerExpected))
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.SidecarProposerExpected(ctx)
require.ErrorContains(t, "verifying state", err)
require.Equal(t, true, verifier.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, verifier.results.result(RequireSidecarProposerExpected))
})
}
err := verifier.SidecarProposerExpected(ctx)
require.NoError(t, err)
})
func generateTestDataColumnsWithProposer(t *testing.T, parent [fieldparams.RootLength]byte, slot primitives.Slot, blobCount int, proposer primitives.ValidatorIndex) []blocks.RODataColumn {
roBlock, roBlobs := util.GenerateTestDenebBlockWithSidecar(t, parent, slot, blobCount, util.WithProposer(proposer))
blobs := make([]kzg.Blob, 0, len(roBlobs))
for i := range roBlobs {
blobs = append(blobs, kzg.Blob(roBlobs[i].Blob))
}
cellsPerBlob, proofsPerBlob := util.GenerateCellsAndProofs(t, blobs)
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsPerBlob, proofsPerBlob, peerdas.PopulateFromBlock(roBlock))
require.NoError(t, err)
return roDataColumnSidecars
}
func TestColumnRequirementSatisfaction(t *testing.T) {
@@ -922,12 +924,3 @@ func TestColumnRequirementSatisfaction(t *testing.T) {
require.NoError(t, err)
}
func TestConcatRootSlot(t *testing.T) {
root := [fieldparams.RootLength]byte{1, 2, 3}
const slot = primitives.Slot(3210)
const expected = "\x01\x02\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x003210"
actual := concatRootSlot(root, slot)
require.Equal(t, expected, actual)
}


@@ -0,0 +1,3 @@
### Changed
- Data column sidecars cache warmup: Process all sidecars for a given epoch in parallel.

changelog/manu-log.md

@@ -0,0 +1,3 @@
### Changed
- Summarize the DEBUG log for data column sidecars received via gossip.


@@ -0,0 +1,2 @@
### Changed
- Use lookahead to validate data column sidecar proposer index.


@@ -0,0 +1,2 @@
### Changed
- Use a separate context when updating the next slot cache.


@@ -44,6 +44,13 @@ func WithProposerSigning(idx primitives.ValidatorIndex, sk bls.SecretKey, valRoo
}
}
// WithProposer sets the proposer index for the generated block without signing.
func WithProposer(idx primitives.ValidatorIndex) DenebBlockGeneratorOption {
return func(g *denebBlockGenerator) {
g.proposer = idx
}
}
func WithPayloadSetter(p *enginev1.ExecutionPayloadDeneb) DenebBlockGeneratorOption {
return func(g *denebBlockGenerator) {
g.payload = p