Compare commits

8 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| james-prysm | 93514608b0 | changelog | 2025-11-14 11:27:20 -06:00 |
| james-prysm | f999173ccd | Merge branch 'develop' into lite-supernode | 2025-11-14 07:20:31 -08:00 |
| james-prysm | 9c074acf59 | fixing test | 2025-11-13 15:05:27 -06:00 |
| james-prysm | 430bb09a74 | Merge branch 'develop' into lite-supernode | 2025-11-13 12:58:33 -08:00 |
| james-prysm | d72ce136b8 | Merge branch 'develop' into lite-supernode | 2025-11-13 07:00:47 -08:00 |
| james-prysm | 2937b62a10 | Rename semi-super-node to lite-supernode | 2025-11-12 21:27:36 -06:00 |
| james-prysm | 2fb2fecd92 | changing attempt | 2025-11-12 20:58:20 -06:00 |
| james-prysm | f93a77aef9 | wip | 2025-11-12 20:15:58 -06:00 |
338 changed files with 2242 additions and 37472 deletions

View File

@@ -34,5 +34,4 @@ Fixes #
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.

View File

@@ -193,7 +193,6 @@ nogo(
"//tools/analyzers/featureconfig:go_default_library",
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/httperror:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",

View File

@@ -4,91 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features/fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
Release highlights:
- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration that custodies enough data columns to reconstruct blobs. Use the `--semi-supernode` flag to custody at least 50% of the data columns.
- Critical fixes in attestation processing.
A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
### Added
- add fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)
### Changed
- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Change the logging behaviour of `Updated fee recipients` to only log the count of validators at Debug level and all validator indices at Trace level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Removed
- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
### Fixed
- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check the JWT secret length is exactly 256 bits (32 bytes) as per Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This improves metrics cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- fixes E2E tests to be able to start from Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08
This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.
A post mortem doc with full details is expected to be published later this week.
### Changed
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Fixed
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10
This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.
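
To make the custody arithmetic behind the `--supernode` and `--semi-supernode` highlights above concrete, here is a minimal, hedged sketch. It assumes a mainnet-style 1:1 mapping of 128 data columns to 128 custody groups, and the helper name is illustrative rather than the repository's API (note that this compare view later renames the flag to `--lite-supernode` and adjusts its custody behaviour):

```go
package main

import "fmt"

// custodyTarget is a hedged sketch of the custody targets described in the
// release notes above. It assumes 128 data columns mapped 1:1 onto 128
// custody groups; the helper is illustrative, not the repository's API.
func custodyTarget(supernode, semiSupernode bool, custodyRequirement, numberOfCustodyGroups uint64) uint64 {
	switch {
	case supernode:
		// Supernode: custody every group, i.e. all data columns.
		return numberOfCustodyGroups
	case semiSupernode:
		// Semi-supernode: enough groups to reconstruct (at least half of
		// the columns), but never below the validator-driven requirement.
		minToReconstruct := (numberOfCustodyGroups + 1) / 2
		return max(custodyRequirement, minToReconstruct)
	default:
		return custodyRequirement
	}
}

func main() {
	fmt.Println(custodyTarget(false, false, 4, 128)) // 4   (default)
	fmt.Println(custodyTarget(false, true, 4, 128))  // 64  (--semi-supernode)
	fmt.Println(custodyTarget(true, false, 4, 128))  // 128 (--supernode)
}
```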

View File

@@ -22,7 +22,10 @@ import (
// The caller of this function must have a lock on forkchoice.
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch || c.Epoch == 0 {
if c.Epoch < headEpoch {
return nil
}
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
// Only use head state if the head state is compatible with the target checkpoint.
@@ -30,11 +33,11 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
if err != nil {
return nil
}
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch-1)
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch)
if err != nil {
return nil
}
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch-1)
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch)
if err != nil {
return nil
}
@@ -50,11 +53,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
}
return st
}
// At this point we can only have c.Epoch > headEpoch.
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
// Advance the head state to the start of the target epoch.
// Otherwise we need to advance the head state to the start of the target epoch.
// This point can only be reached if c.Root == headRoot and c.Epoch > headEpoch.
slot, err := slots.EpochStart(c.Epoch)
if err != nil {
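
As a reading aid for the hunk above, a hedged sketch of the compatibility rule it introduces: the head state is reused for a target checkpoint only when head and target resolve to the same dependent root at the checkpoint epoch. The narrowed interface and its return types are assumptions made only for this illustration:

```go
package sketch

// DependentRooter is a narrowed, hypothetical view of the forkchoice store,
// defined only for this sketch; the exact return shape is an assumption.
type DependentRooter interface {
	DependentRootForEpoch(root [32]byte, epoch uint64) ([32]byte, error)
}

// headStateCompatible mirrors the comparison in the hunk above: the head
// state is considered compatible with target checkpoint (targetRoot, epoch)
// only when both roots resolve to the same dependent root at that epoch.
func headStateCompatible(fc DependentRooter, headRoot, targetRoot [32]byte, epoch uint64) (bool, error) {
	headDependent, err := fc.DependentRootForEpoch(headRoot, epoch)
	if err != nil {
		return false, err
	}
	targetDependent, err := fc.DependentRootForEpoch(targetRoot, epoch)
	if err != nil {
		return false, err
	}
	return headDependent == targetDependent, nil
}
```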

View File

@@ -181,123 +181,6 @@ func TestService_GetRecentPreState(t *testing.T) {
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
}
func TestService_GetRecentPreState_Epoch_0(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Old_Checkpoint(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'U'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 30 <-- 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 30, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 31, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'U'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'V'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()

View File

@@ -134,7 +134,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityChecker) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -306,7 +306,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityChecker, roBlock consensusblocks.ROBlock) error {
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
@@ -460,9 +460,6 @@ func (s *Service) pruneAttsFromPool(ctx context.Context, headState state.BeaconS
func (s *Service) pruneCoveredAttsFromPool(ctx context.Context, headState state.BeaconState, att ethpb.Att) error {
switch {
case !att.IsAggregated():
if features.Get().EnableExperimentalAttestationPool {
return errors.Wrap(s.cfg.AttestationCache.DeleteCovered(att), "could not delete covered attestation")
}
return s.cfg.AttPool.DeleteUnaggregatedAttestation(att)
case att.Version() == version.Phase0:
if features.Get().EnableExperimentalAttestationPool {
@@ -531,12 +528,12 @@ func (s *Service) pruneCoveredElectraAttsFromPool(ctx context.Context, headState
if err = s.cfg.AttestationCache.DeleteCovered(a); err != nil {
return errors.Wrap(err, "could not delete covered attestation")
}
} else if a.IsAggregated() {
if err = s.cfg.AttPool.DeleteAggregatedAttestation(a); err != nil {
return errors.Wrap(err, "could not delete aggregated attestation")
} else if !a.IsAggregated() {
if err = s.cfg.AttPool.DeleteUnaggregatedAttestation(a); err != nil {
return errors.Wrap(err, "could not delete unaggregated attestation")
}
} else if err = s.cfg.AttPool.DeleteUnaggregatedAttestation(a); err != nil {
return errors.Wrap(err, "could not delete unaggregated attestation")
} else if err = s.cfg.AttPool.DeleteAggregatedAttestation(a); err != nil {
return errors.Wrap(err, "could not delete aggregated attestation")
}
offset += uint64(len(c))
@@ -637,7 +634,9 @@ func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldpa
return nil, nil
}
if len(expected) > fieldparams.NumberOfColumns {
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(expected)) > numberOfColumns {
return nil, errMaxDataColumnsExceeded
}
@@ -819,9 +818,10 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices any = "all"
missingIndicesCount := len(missing)
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missing))
if missingIndicesCount < fieldparams.NumberOfColumns {
if missingIndicesCount < numberOfColumns {
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}
@@ -948,6 +948,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).WithField("head_root", headRoot).Error("Unable to retrieve head block to fire payload attributes event")
}
// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
// call notifyForkchoiceUpdate, so the event is fired here.
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), headBlock, headRoot, s.CurrentSlot()+1)
return
}

View File

@@ -2495,8 +2495,7 @@ func TestMissingBlobIndices(t *testing.T) {
}
func TestMissingDataColumnIndices(t *testing.T) {
const countPlusOne = fieldparams.NumberOfColumns + 1
countPlusOne := params.BeaconConfig().NumberOfColumns + 1
tooManyColumns := make(map[uint64]bool, countPlusOne)
for i := range countPlusOne {
tooManyColumns[uint64(i)] = true
@@ -2806,10 +2805,6 @@ func TestProcessLightClientUpdate(t *testing.T) {
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)

View File

@@ -39,8 +39,8 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -69,7 +69,7 @@ type SlashingReceiver interface {
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error {
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
// Return early if the block is blacklisted
@@ -242,7 +242,7 @@ func (s *Service) validateExecutionAndConsensus(
return postState, isValidPayload, nil
}
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityChecker, block blocks.ROBlock) (time.Duration, error) {
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block blocks.ROBlock) (time.Duration, error) {
var err error
start := time.Now()
if avs != nil {
@@ -332,7 +332,7 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
@@ -471,37 +470,62 @@ func (s *Service) removeStartupState() {
// UpdateCustodyInfoInDB updates the custody information in the database.
// It returns the (potentially updated) custody group count and the earliest available slot.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSupernode := flags.Get().Supernode
isSemiSupernode := flags.Get().SemiSupernode
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
isLiteSupernode := flags.Get().LiteSupernode
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Warn if both flags are set (supernode takes precedence).
if isSubscribedToAllDataSubnets && isLiteSupernode {
log.Warnf(
"Both `--%s` and `--%s` flags are set. The supernode flag takes precedence.",
flags.SubscribeAllDataSubnets.Name,
flags.LiteSupernode.Name,
)
}
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
wasSupernode, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
wasSubscribedToAllDataSubnets, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSubscribedToAllDataSubnets)
if err != nil {
return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
log.WithError(err).Error("Could not update subscription status to all data subnets")
}
// Compute the target custody group count based on current flag configuration.
targetCustodyGroupCount := custodyRequirement
// Supernode: custody all groups (either currently set or previously enabled)
if isSupernode {
targetCustodyGroupCount = cfg.NumberOfCustodyGroups
// Warn the user if the node was previously subscribed to all data subnets and is not any more.
if wasSubscribedToAllDataSubnets && !isSubscribedToAllDataSubnets {
log.Warnf(
"Because the flag `--%s` was previously used, the node will still subscribe to all data subnets.",
flags.SubscribeAllDataSubnets.Name,
)
}
// Semi-supernode: custody minimum needed for reconstruction, or custody requirement if higher
if isSemiSupernode {
semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
if err != nil {
return 0, 0, errors.Wrap(err, "minimum custody group count")
}
targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
// Check if the node was previously in lite-supernode mode, and if so,
// store the new status accordingly.
wasLiteSupernode, err := s.cfg.BeaconDB.UpdateLiteSupernode(s.ctx, isLiteSupernode)
if err != nil {
log.WithError(err).Error("Could not update lite-supernode status")
}
// Warn the user if the node was previously in lite-supernode mode and is not any more.
if wasLiteSupernode && !isLiteSupernode {
log.Warnf(
"Because the flag `--%s` was previously used, the node will remain in lite-supernode mode (subscribe to 64 subnets).",
flags.LiteSupernode.Name,
)
}
// Compute the custody group count.
// Note: Lite-supernode subscribes to 64 subnets (enough to reconstruct) but only custodies the minimum groups (4).
// Only supernode increases custody to all 128 groups.
// Persistence (preventing downgrades) is handled by UpdateCustodyInfo() which
// refuses to decrease the stored custody count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = cfg.NumberOfCustodyGroups
}
// Lite-supernode does NOT increase custody count - it keeps the minimum (4 or validator requirement).
// Safely compute the fulu fork slot.
fuluForkSlot, err := fuluForkSlot()
if err != nil {
@@ -516,23 +540,12 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
}
}
earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
earliestAvailableSlot, custodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, custodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}
if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,
"target": cfg.NumberOfCustodyGroups,
}).Info("Supernode mode enabled. Will custody all data columns going forward.")
}
if wasSupernode && !isSupernode {
log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
}
return earliestAvailableSlot, actualCustodyGroupCount, nil
return earliestAvailableSlot, custodyGroupCount, nil
}
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {
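
A hedged summary of the subscription/custody split described in the comments above. The numbers (128 subnets and custody groups, 64 subnets needed for reconstruction, a minimum requirement of 4) come from those comments; the struct and helper here are illustrative only:

```go
package sketch

// nodeMode is an illustrative pairing of how many data column subnets a node
// joins versus how many custody groups it persists.
type nodeMode struct {
	subscribedSubnets uint64 // gossip data column subnets joined
	custodyGroups     uint64 // custody groups recorded in the DB
}

// modeFor is a hedged sketch, not the repository's API. It assumes subnets
// and custody groups are counted on the same 128-wide scale.
func modeFor(subscribeAllDataSubnets, liteSupernode bool, custodyRequirement, numberOfCustodyGroups uint64) nodeMode {
	switch {
	case subscribeAllDataSubnets:
		// Supernode: subscribe to and custody everything.
		return nodeMode{subscribedSubnets: numberOfCustodyGroups, custodyGroups: numberOfCustodyGroups}
	case liteSupernode:
		// Lite-supernode: subscribe to enough subnets to reconstruct (64)
		// while keeping custody at the minimum requirement (e.g. 4).
		return nodeMode{subscribedSubnets: (numberOfCustodyGroups + 1) / 2, custodyGroups: custodyRequirement}
	default:
		// Default: subscribe roughly in line with the custody requirement
		// (an assumption made for this sketch).
		return nodeMode{subscribedSubnets: custodyRequirement, custodyGroups: custodyRequirement}
	}
}
```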

View File

@@ -603,6 +603,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
@@ -610,6 +611,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
@@ -640,7 +642,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
@@ -678,7 +680,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
@@ -693,121 +695,4 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("Supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// Try to downgrade by removing flag
gFlags.Supernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should still be supernode
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
})
t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Try to downgrade by removing flag
gFlags.SemiSupernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// UpdateCustodyInfo should prevent downgrade - custody count should remain at 64
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, semiSupernodeCustody, actualCgc) // Still 64 due to downgrade prevention by UpdateCustodyInfo
})
t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Start with semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Upgrade to full supernode
gFlags.SemiSupernode = false
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should upgrade to full supernode
upgradeSlot := slot + 2
actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
require.NoError(t, err)
require.Equal(t, upgradeSlot, actualEas) // Earliest slot updates when upgrading
require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to 128
})
t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Mock a high custody requirement (simulating many validators)
// We need to override the custody requirement calculation
// For this test, we'll verify the logic by checking if custodyRequirement > 64
// Since custodyRequirement in minimalTestService is 4, we can't test the high case here
// This would require a different test setup with actual validators
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
// With low validator requirements (4), should use semi-supernode minimum (64)
require.Equal(t, semiSupernodeCustody, actualCgc)
})
}

View File

@@ -275,7 +275,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityChecker) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityStore) error {
if s.State == nil {
return ErrNilState
}
@@ -305,7 +305,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityChecker) error {
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityStore) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}

View File

@@ -76,22 +76,13 @@ func (c *AttestationCache) Add(att ethpb.Att) error {
if group == nil {
group = &attGroup{
slot: att.GetData().Slot,
atts: []ethpb.Att{att.Clone()},
atts: []ethpb.Att{att},
}
c.atts[id] = group
return nil
}
if att.IsAggregated() {
// Ensure that this attestation is not already fully contained in an existing attestation.
for _, a := range group.atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return err
} else if c {
return nil
}
}
group.atts = append(group.atts, att.Clone())
return nil
}
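
For context on the containment check removed above, a small hedged example of the underlying bitfield rule: an incoming aggregate is already covered when every bit set in its aggregation bits is also set in an attestation the cache holds.

```go
package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

// Hedged illustration (not the repository's code): Bitlist.Contains reports
// whether every set bit of its argument is also set in the receiver, which
// is the "fully contained" condition the removed check relied on.
func main() {
	existing := bitfield.NewBitlist(8)
	existing.SetBitAt(0, true)
	existing.SetBitAt(1, true)
	existing.SetBitAt(3, true)

	incoming := bitfield.NewBitlist(8)
	incoming.SetBitAt(0, true)
	incoming.SetBitAt(1, true)

	covered, err := existing.Contains(incoming)
	if err != nil {
		panic(err)
	}
	fmt.Println(covered) // true: the incoming aggregate adds no new bits.
}
```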

View File

@@ -151,32 +151,6 @@ func TestAdd(t *testing.T) {
require.Equal(t, 1, len(group.atts))
assert.DeepEqual(t, []int{0, 1}, group.atts[0].GetAggregationBits().BitIndices())
})
t.Run("added attestation is copied", func(t *testing.T) {
// We want to make sure that the running aggregate is based on a copy of the original attestation,
// to avoid mutating the original attestation.
c := NewAttestationCache()
ab := bitfield.NewBitlist(8)
ab.SetBitAt(0, true)
original := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: ab,
Signature: sig.Marshal(),
}
require.NoError(t, c.Add(original))
ab = bitfield.NewBitlist(8)
ab.SetBitAt(1, true)
a := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: ab,
Signature: sig.Marshal(),
}
require.NoError(t, c.Add(a))
// Assert that the bit of the second attestation was not added to the original attestation.
assert.Equal(t, uint64(1), original.AggregationBits.Count())
})
}
func TestGetAll(t *testing.T) {

View File

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
voteCount := uint64(0)
for _, vote := range beaconState.Eth1DataVotes() {
if AreEth1DataEqual(vote, data) {
if AreEth1DataEqual(vote, data.Copy()) {
voteCount++
}
}

View File

@@ -90,9 +90,6 @@ func IsExecutionEnabled(st state.ReadOnlyBeaconState, body interfaces.ReadOnlyBe
if st == nil || body == nil {
return false, errors.New("nil state or block body")
}
if st.Version() >= version.Capella {
return true, nil
}
if IsPreBellatrixVersion(st.Version()) {
return false, nil
}

View File

@@ -260,12 +260,11 @@ func Test_IsExecutionBlockCapella(t *testing.T) {
func Test_IsExecutionEnabled(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header interfaces.ExecutionData
useAltairSt bool
useCapellaSt bool
want bool
name string
payload *enginev1.ExecutionPayload
header interfaces.ExecutionData
useAltairSt bool
want bool
}{
{
name: "use older than bellatrix state",
@@ -332,17 +331,6 @@ func Test_IsExecutionEnabled(t *testing.T) {
}(),
want: true,
},
{
name: "capella state always enabled",
payload: emptyPayload(),
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
useCapellaSt: true,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -354,8 +342,6 @@ func Test_IsExecutionEnabled(t *testing.T) {
require.NoError(t, err)
if tt.useAltairSt {
st, _ = util.DeterministicGenesisStateAltair(t, 1)
} else if tt.useCapellaSt {
st, _ = util.DeterministicGenesisStateCapella(t, 1)
}
got, err := blocks.IsExecutionEnabled(st, body)
require.NoError(t, err)

View File

@@ -152,7 +152,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
}
if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
log.WithError(err).Error("Could not update committee cache")
return nil, errors.Wrap(err, "could not update committee cache")
}
return indices, nil

View File

@@ -45,13 +45,12 @@ go_test(
"p2p_interface_test.go",
"reconstruction_helpers_test.go",
"reconstruction_test.go",
"semi_supernode_test.go",
"utils_test.go",
"validator_test.go",
"verification_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -5,7 +5,6 @@ import (
"math"
"slices"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -97,7 +96,8 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := uint64(fieldparams.NumberOfColumns)
numberOfColumns := cfg.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
columns := make([]uint64, 0, columnsPerGroup)
@@ -112,9 +112,8 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {

View File

@@ -30,6 +30,7 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.NumberOfColumns = 128
config.NumberOfCustodyGroups = 64
params.OverrideBeaconConfig(config)

View File

@@ -2,7 +2,6 @@ package peerdas
import (
"encoding/binary"
"maps"
"sync"
"github.com/ethereum/go-ethereum/p2p/enode"
@@ -108,102 +107,3 @@ func computeInfoCacheKey(nodeID enode.ID, custodyGroupCount uint64) [nodeInfoCac
return key
}
// ColumnIndices represents a set of column indices. This could be the set of indices that a node is required to custody,
// the set that a peer custodies, missing indices for a given block, indices that are present on disk, etc.
type ColumnIndices map[uint64]struct{}
// Has returns true if the index is present in the ColumnIndices.
func (ci ColumnIndices) Has(index uint64) bool {
_, ok := ci[index]
return ok
}
// Count returns the number of indices present in the ColumnIndices.
func (ci ColumnIndices) Count() int {
return len(ci)
}
// Set sets the index in the ColumnIndices.
func (ci ColumnIndices) Set(index uint64) {
ci[index] = struct{}{}
}
// Unset removes the index from the ColumnIndices.
func (ci ColumnIndices) Unset(index uint64) {
delete(ci, index)
}
// Copy creates a copy of the ColumnIndices.
func (ci ColumnIndices) Copy() ColumnIndices {
newCi := make(ColumnIndices, len(ci))
maps.Copy(newCi, ci)
return newCi
}
// Intersection returns a new ColumnIndices that contains only the indices that are present in both ColumnIndices.
func (ci ColumnIndices) Intersection(other ColumnIndices) ColumnIndices {
result := make(ColumnIndices)
for index := range ci {
if other.Has(index) {
result.Set(index)
}
}
return result
}
// Merge mutates the receiver so that any index that is set in either of
// the two ColumnIndices is set in the receiver after the function finishes.
// It does not mutate the other ColumnIndices given as a function argument.
func (ci ColumnIndices) Merge(other ColumnIndices) {
for index := range other {
ci.Set(index)
}
}
// ToMap converts a ColumnIndices into a map[uint64]struct{}.
// In the future ColumnIndices may be changed to a bit map, so using
// ToMap will ensure forwards-compatibility.
func (ci ColumnIndices) ToMap() map[uint64]struct{} {
return ci.Copy()
}
// ToSlice converts a ColumnIndices into a slice of uint64 indices.
func (ci ColumnIndices) ToSlice() []uint64 {
indices := make([]uint64, 0, len(ci))
for index := range ci {
indices = append(indices, index)
}
return indices
}
// NewColumnIndicesFromSlice creates a ColumnIndices from a slice of uint64.
func NewColumnIndicesFromSlice(indices []uint64) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for _, index := range indices {
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndicesFromMap creates a ColumnIndices from a map[uint64]bool. This kind of map
// is used in several places in peerdas code. Converting from this map type to ColumnIndices
// will allow us to move ColumnIndices underlying type to a bitmap in the future and avoid
// lots of loops for things like intersections/unions or copies.
func NewColumnIndicesFromMap(indices map[uint64]bool) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for index, set := range indices {
if !set {
continue
}
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndices creates an empty ColumnIndices.
// In the future ColumnIndices may change from a reference type to a value type,
// so using this constructor will ensure forwards-compatibility.
func NewColumnIndices() ColumnIndices {
return make(ColumnIndices)
}

View File

@@ -25,10 +25,3 @@ func TestInfo(t *testing.T) {
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestNewColumnIndicesFromMap(t *testing.T) {
t.Run("nil map", func(t *testing.T) {
ci := peerdas.NewColumnIndicesFromMap(nil)
require.Equal(t, 0, ci.Count())
})
}

View File

@@ -5,20 +5,10 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
)
cellsAndProofsFromStructuredComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "cells_and_proofs_from_structured_computation_milliseconds",
Help: "Captures the time taken to compute cells and proofs from structured computation.",
Buckets: []float64{10, 20, 30, 40, 50, 100, 200},
},
)
var dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
)
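
For reference, a hedged sketch of how the remaining histogram is typically fed; the metric name, buckets, and timing wrapper below are illustrative and mirror the Observe-in-milliseconds pattern used elsewhere in this diff:

```go
package sketch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// exampleComputationTime is an illustrative histogram; the real metric above
// is beacon_data_column_sidecar_computation_milliseconds.
var exampleComputationTime = promauto.NewHistogram(
	prometheus.HistogramOpts{
		Name:    "example_data_column_computation_milliseconds",
		Help:    "Illustrative histogram timing a computation in milliseconds.",
		Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
	},
)

// timedComputation records how long work takes, in milliseconds.
func timedComputation(work func()) {
	start := time.Now()
	defer func() {
		exampleComputationTime.Observe(float64(time.Since(start).Milliseconds()))
	}()
	work()
}
```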

View File

@@ -33,7 +33,8 @@ func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCou
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
if sidecar.Index >= fieldparams.NumberOfColumns {
numberOfColumns := params.BeaconConfig().NumberOfColumns
if sidecar.Index >= numberOfColumns {
return ErrIndexTooLarge
}

View File

@@ -281,11 +281,8 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.B) {
const (
blobCount = 12
numberOfColumns = fieldparams.NumberOfColumns
)
const blobCount = 12
numberOfColumns := int64(params.BeaconConfig().NumberOfColumns)
err := kzg.Start()
require.NoError(b, err)

View File

@@ -3,7 +3,6 @@ package peerdas
import (
"sort"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -27,40 +26,7 @@ var (
func MinimumColumnCountToReconstruct() uint64 {
// If the number of columns is odd, then we need total / 2 + 1 columns to reconstruct.
// If the number of columns is even, then we need total / 2 columns to reconstruct.
return (fieldparams.NumberOfColumns + 1) / 2
}
// MinimumCustodyGroupCountToReconstruct returns the minimum number of custody groups needed to
// custody enough data columns for reconstruction. This accounts for the relationship between
// custody groups and columns, making it future-proof if these values change.
// Returns an error if the configuration values are invalid (zero or would cause division by zero).
func MinimumCustodyGroupCountToReconstruct() (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
// Validate configuration values
if numberOfColumns == 0 {
return 0, errors.New("NumberOfColumns cannot be zero")
}
if cfg.NumberOfCustodyGroups == 0 {
return 0, errors.New("NumberOfCustodyGroups cannot be zero")
}
minimumColumnCount := MinimumColumnCountToReconstruct()
// Calculate how many columns each custody group represents
columnsPerGroup := numberOfColumns / cfg.NumberOfCustodyGroups
// If there are more groups than columns (columnsPerGroup = 0), this is an invalid configuration
// for reconstruction purposes as we cannot determine a meaningful custody group count
if columnsPerGroup == 0 {
return 0, errors.Errorf("invalid configuration: NumberOfCustodyGroups (%d) exceeds NumberOfColumns (%d)",
cfg.NumberOfCustodyGroups, numberOfColumns)
}
// Use ceiling division to ensure we have enough groups to cover the minimum columns
// ceiling(a/b) = (a + b - 1) / b
return (minimumColumnCount + columnsPerGroup - 1) / columnsPerGroup, nil
return (params.BeaconConfig().NumberOfColumns + 1) / 2
}
// recoverCellsForBlobs reconstructs cells for specified blobs from the given data column sidecars.
@@ -287,8 +253,7 @@ func ReconstructBlobSidecars(block blocks.ROBlock, verifiedDataColumnSidecars []
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
const numberOfColumns = fieldparams.NumberOfColumns
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
@@ -297,42 +262,32 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
return nil, nil, ErrBlobsCellsProofsMismatch
}
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, blobCount)
proofsPerBlob := make([][]kzg.Proof, blobCount)
cellsPerBlob := make([][]kzg.Cell, 0, blobCount)
proofsPerBlob := make([][]kzg.Proof, 0, blobCount)
for i, blob := range blobs {
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
proofs = append(proofs, kzgProof)
}
proofs := make([]kzg.Proof, 0, numberOfColumns)
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = proofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, proofs)
}
return cellsPerBlob, proofsPerBlob, nil
@@ -340,55 +295,42 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
start := time.Now()
defer func() {
cellsAndProofsFromStructuredComputationTime.Observe(float64(time.Since(start).Milliseconds()))
}()
numberOfColumns := params.BeaconConfig().NumberOfColumns
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, len(blobsAndProofs))
for i, blobAndProof := range blobsAndProofs {
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
if blobAndProof == nil {
return nil, nil, ErrNilBlobAndProof
}
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return nil, nil, errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return errors.New("wrong KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = kzgProofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, kzgProofs)
}
return cellsPerBlob, proofsPerBlob, nil
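
A hedged, self-contained illustration of the reconstruction threshold now computed from the runtime configuration above: at least half of the columns, rounded up, are required, i.e. `(n + 1) / 2` in integer arithmetic. The expected outputs match the test cases added elsewhere in this compare view:

```go
package main

import "fmt"

// minimumColumnCountToReconstruct is a sketch of the threshold shown above:
// with a 2x extended column set, half of the columns (rounded up) suffice.
func minimumColumnCountToReconstruct(numberOfColumns uint64) uint64 {
	return (numberOfColumns + 1) / 2
}

func main() {
	for _, n := range []uint64{128, 129, 130} {
		fmt.Printf("columns=%d -> minimum=%d\n", n, minimumColumnCountToReconstruct(n))
	}
	// columns=128 -> minimum=64
	// columns=129 -> minimum=65
	// columns=130 -> minimum=65
}
```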

View File

@@ -17,9 +17,41 @@ import (
)
func TestMinimumColumnsCountToReconstruct(t *testing.T) {
const expected = uint64(64)
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, expected, actual)
testCases := []struct {
name string
numberOfColumns uint64
expected uint64
}{
{
name: "numberOfColumns=128",
numberOfColumns: 128,
expected: 64,
},
{
name: "numberOfColumns=129",
numberOfColumns: 129,
expected: 65,
},
{
name: "numberOfColumns=130",
numberOfColumns: 130,
expected: 65,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set the total number of columns.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = tc.numberOfColumns
params.OverrideBeaconConfig(cfg)
// Compute the minimum number of columns needed to reconstruct.
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, tc.expected, actual)
})
}
}
func TestReconstructDataColumnSidecars(t *testing.T) {
@@ -168,6 +200,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 3
numberOfColumns := params.BeaconConfig().NumberOfColumns
roBlock, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
@@ -203,7 +236,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
require.NoError(t, err)
// Flatten proofs.
cellProofs := make([][]byte, 0, blobCount*fieldparams.NumberOfColumns)
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
for _, proofs := range inputProofsPerBlob {
for _, proof := range proofs {
cellProofs = append(cellProofs, proof[:])
@@ -395,12 +428,13 @@ func TestReconstructBlobs(t *testing.T) {
}
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
t.Run("mismatched blob and proof counts", func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Create one blob but proofs for two blobs
blobs := [][]byte{{}}
@@ -413,6 +447,7 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 2
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)

View File

@@ -1,137 +0,0 @@
package peerdas
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
func TestSemiSupernodeCustody(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
params.OverrideBeaconConfig(cfg)
// Create a test node ID
nodeID := enode.ID([32]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32})
t.Run("semi-supernode custodies exactly 64 columns", func(t *testing.T) {
// Semi-supernode uses 64 custody groups (half of 128)
const semiSupernodeCustodyGroupCount = 64
// Get custody groups for semi-supernode
custodyGroups, err := CustodyGroups(nodeID, semiSupernodeCustodyGroupCount)
require.NoError(t, err)
require.Equal(t, semiSupernodeCustodyGroupCount, len(custodyGroups))
// Verify we get exactly 64 custody columns
custodyColumns, err := CustodyColumns(custodyGroups)
require.NoError(t, err)
require.Equal(t, semiSupernodeCustodyGroupCount, len(custodyColumns))
// Verify the columns are valid (within 0-127 range)
for columnIndex := range custodyColumns {
if columnIndex >= numberOfColumns {
t.Fatalf("Invalid column index %d, should be less than %d", columnIndex, numberOfColumns)
}
}
})
t.Run("64 columns is exactly the minimum for reconstruction", func(t *testing.T) {
minimumCount := MinimumColumnCountToReconstruct()
require.Equal(t, uint64(64), minimumCount)
})
t.Run("semi-supernode vs supernode custody", func(t *testing.T) {
// Semi-supernode (64 custody groups)
semiSupernodeGroups, err := CustodyGroups(nodeID, 64)
require.NoError(t, err)
semiSupernodeColumns, err := CustodyColumns(semiSupernodeGroups)
require.NoError(t, err)
// Supernode (128 custody groups = all groups)
supernodeGroups, err := CustodyGroups(nodeID, 128)
require.NoError(t, err)
supernodeColumns, err := CustodyColumns(supernodeGroups)
require.NoError(t, err)
// Verify semi-supernode has exactly half the columns of supernode
require.Equal(t, 64, len(semiSupernodeColumns))
require.Equal(t, 128, len(supernodeColumns))
require.Equal(t, len(supernodeColumns)/2, len(semiSupernodeColumns))
// Verify all semi-supernode columns are a subset of supernode columns
for columnIndex := range semiSupernodeColumns {
if !supernodeColumns[columnIndex] {
t.Fatalf("Semi-supernode column %d not found in supernode columns", columnIndex)
}
}
})
}
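// Illustrative sketch (hypothetical, not part of this changeset): the property the test above
// relies on. With a 1:1 group-to-column mapping, custodying half of the custody groups yields
// exactly the minimum column count needed for reconstruction. The helper is for illustration only.
func halfCustodyCoversReconstructionSketch(numberOfCustodyGroups, numberOfColumns uint64) bool {
    columnsPerGroup := numberOfColumns / numberOfCustodyGroups // 1 when groups == columns == 128
    custodiedColumns := (numberOfCustodyGroups / 2) * columnsPerGroup
    return custodiedColumns >= (numberOfColumns+1)/2 // at least the reconstruction minimum
}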
func TestMinimumCustodyGroupCountToReconstruct(t *testing.T) {
tests := []struct {
name string
numberOfGroups uint64
expectedResult uint64
}{
{
name: "Standard 1:1 ratio (128 columns, 128 groups)",
numberOfGroups: 128,
expectedResult: 64, // Need half of 128 groups
},
{
name: "2 columns per group (128 columns, 64 groups)",
numberOfGroups: 64,
expectedResult: 32, // Need 64 columns, which is 32 groups (64/2)
},
{
name: "4 columns per group (128 columns, 32 groups)",
numberOfGroups: 32,
expectedResult: 16, // Need 64 columns, which is 16 groups (64/4)
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = tt.numberOfGroups
params.OverrideBeaconConfig(cfg)
result, err := MinimumCustodyGroupCountToReconstruct()
require.NoError(t, err)
require.Equal(t, tt.expectedResult, result)
})
}
}
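// Illustrative sketch (hypothetical, not part of this changeset): the arithmetic behind the
// expected results above, assuming 128 columns spread evenly across the custody groups.
func minimumCustodyGroupCountSketch(numberOfColumns, numberOfCustodyGroups uint64) uint64 {
    columnsPerGroup := numberOfColumns / numberOfCustodyGroups      // e.g. 128/64 = 2
    minimumColumns := (numberOfColumns + 1) / 2                     // e.g. 64
    return (minimumColumns + columnsPerGroup - 1) / columnsPerGroup // ceil(64/2) = 32
}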
func TestMinimumCustodyGroupCountToReconstruct_ErrorCases(t *testing.T) {
t.Run("Returns error when NumberOfCustodyGroups is zero", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 0
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
require.Equal(t, true, err.Error() == "NumberOfCustodyGroups cannot be zero")
})
t.Run("Returns error when NumberOfCustodyGroups exceeds NumberOfColumns", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 256
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
// Just check that we got an error about the configuration
require.Equal(t, true, len(err.Error()) > 0)
})
}

View File

@@ -102,13 +102,11 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
const numberOfColumns = uint64(fieldparams.NumberOfColumns)
if len(cellsPerBlob) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, params.BeaconConfig().NumberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
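// Illustrative sketch (hypothetical, not part of this changeset): the kind of row-to-column
// rotation rotateRowsToCols performs; its real signature and error handling may differ. Each
// blob (row) contributes one cell per column index, and the sidecar for column idx gathers the
// idx-th cell of every blob.
func rotateRowsToColsSketch[T any](rows [][]T, numberOfColumns uint64) [][]T {
    cols := make([][]T, numberOfColumns)
    for idx := uint64(0); idx < numberOfColumns; idx++ {
        column := make([]T, 0, len(rows))
        for _, row := range rows {
            column = append(column, row[idx]) // assumes each row has numberOfColumns entries
        }
        cols[idx] = column
    }
    return cols
}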
@@ -117,8 +115,9 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
return nil, errors.Wrap(err, "extract block info")
}
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
maxIdx := params.BeaconConfig().NumberOfColumns
roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
for idx := range maxIdx {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],

View File

@@ -6,7 +6,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -59,8 +59,6 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
}
func TestDataColumnSidecars(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
@@ -71,10 +69,10 @@ func TestDataColumnSidecars(t *testing.T) {
// Create cells and proofs.
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
}
proofsPerBlob := [][]kzg.Proof{
make([]kzg.Proof, numberOfColumns),
make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
@@ -119,6 +117,7 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
}
@@ -150,6 +149,7 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),
@@ -197,7 +197,6 @@ func TestDataColumnSidecars(t *testing.T) {
}
func TestReconstructionSource(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
@@ -213,6 +212,7 @@ func TestReconstructionSource(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),

View File

@@ -4,19 +4,14 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"bisect.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
"log.go",
"mock.go",
"needs.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -26,7 +21,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -36,14 +30,11 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
"needs_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -54,7 +45,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -11,8 +11,9 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/runtime/logging"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
log "github.com/sirupsen/logrus"
)
var (
@@ -23,13 +24,12 @@ var (
// This implementation will hold any blobs passed to Persist until the IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreBlob struct {
store *filesystem.BlobStorage
cache *blobCache
verifier BlobBatchVerifier
shouldRetain RetentionChecker
store *filesystem.BlobStorage
cache *blobCache
verifier BlobBatchVerifier
}
var _ AvailabilityChecker = &LazilyPersistentStoreBlob{}
var _ AvailabilityStore = &LazilyPersistentStoreBlob{}
// BlobBatchVerifier enables LazyAvailabilityStore to manage the verification process
// going from ROBlob->VerifiedROBlob, while avoiding the decision of which individual verifications
@@ -42,12 +42,11 @@ type BlobBatchVerifier interface {
// NewLazilyPersistentStore creates a new LazilyPersistentStore. This constructor should always be used
// when creating a LazilyPersistentStore because it needs to initialize the cache under the hood.
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier, shouldRetain RetentionChecker) *LazilyPersistentStoreBlob {
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
return &LazilyPersistentStoreBlob{
store: store,
cache: newBlobCache(),
verifier: verifier,
shouldRetain: shouldRetain,
store: store,
cache: newBlobCache(),
verifier: verifier,
}
}
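// Illustrative sketch (hypothetical, not part of this changeset): constructing the store after
// this change. The RetentionChecker argument is gone; the DA-window decision now lives inside
// commitmentsToCheck. The blobStorage and batchVerifier values are assumed to exist.
func newBlobAvailabilityStoreSketch(blobStorage *filesystem.BlobStorage, batchVerifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
    return NewLazilyPersistentStore(blobStorage, batchVerifier)
}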
@@ -67,6 +66,9 @@ func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ..
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range sidecars {
@@ -79,17 +81,8 @@ func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ..
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// BlobSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, blks ...blocks.ROBlock) error {
for _, b := range blks {
if err := s.checkOne(ctx, current, b); err != nil {
return err
}
}
return nil
}
func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, s.shouldRetain)
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
}
@@ -107,7 +100,7 @@ func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primit
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
sidecars, err := entry.filter(root, blockCommitments)
sidecars, err := entry.filter(root, blockCommitments, b.Block().Slot())
if err != nil {
return errors.Wrap(err, "incomplete BlobSidecar batch")
}
@@ -119,7 +112,7 @@ func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primit
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(logrus.Fields, len(fails))
lf := make(log.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
@@ -138,12 +131,13 @@ func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primit
return nil
}
func commitmentsToCheck(b blocks.ROBlock, shouldRetain RetentionChecker) ([][]byte, error) {
func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) ([][]byte, error) {
if b.Version() < version.Deneb {
return nil, nil
}
if !shouldRetain(b.Block().Slot()) {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return nil, nil
}
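// Illustrative sketch (hypothetical, not part of this changeset): the shape of the DA-window
// check referenced above. A block's blob commitments only need checking while its epoch is
// within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST of the current epoch; the exact comparison here
// is an assumption, not a copy of params.WithinDAPeriod.
func withinDAPeriodSketch(blockEpoch, currentEpoch, minEpochsForBlobRequests uint64) bool {
    // A block is inside the window when it is no more than the window size behind the head.
    return blockEpoch+minEpochsForBlobRequests >= currentEpoch
}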

View File

@@ -17,10 +17,6 @@ import (
errors "github.com/pkg/errors"
)
func testShouldRetainAlways(s primitives.Slot) bool {
return true
}
func Test_commitmentsToCheck(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
@@ -34,12 +30,11 @@ func Test_commitmentsToCheck(t *testing.T) {
commits[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
shouldRetain RetentionChecker
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
}{
{
name: "pre deneb",
@@ -65,7 +60,6 @@ func Test_commitmentsToCheck(t *testing.T) {
require.NoError(t, err)
return rb
},
shouldRetain: testShouldRetainAlways,
commits: func() [][]byte {
mb := params.GetNetworkScheduleEntry(slots.ToEpoch(fulu + 100)).MaxBlobsPerBlock
return commits[:mb]
@@ -85,8 +79,7 @@ func Test_commitmentsToCheck(t *testing.T) {
require.NoError(t, err)
return rb
},
shouldRetain: func(s primitives.Slot) bool { return false },
slot: fulu + windowSlots + 1,
slot: fulu + windowSlots + 1,
},
{
name: "excessive commitments",
@@ -104,15 +97,14 @@ func Test_commitmentsToCheck(t *testing.T) {
require.Equal(t, true, len(c) > params.BeaconConfig().MaxBlobsPerBlock(sb.Block().Slot()))
return rb
},
shouldRetain: testShouldRetainAlways,
slot: windowSlots + 1,
err: errIndexOutOfBounds,
slot: windowSlots + 1,
err: errIndexOutOfBounds,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co, err := commitmentsToCheck(b, c.shouldRetain)
co, err := commitmentsToCheck(b, c.slot)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
@@ -134,7 +126,7 @@ func TestLazilyPersistent_Missing(t *testing.T) {
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv, testShouldRetainAlways)
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(ds, blobSidecars[2]))
@@ -161,7 +153,7 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv, testShouldRetainAlways)
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(ds, blobSidecars[0]))
@@ -174,11 +166,11 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 6)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{}, testShouldRetainAlways)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(ds, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(ds, blobSidecars...), errDuplicateSidecar)
require.ErrorIs(t, as.Persist(ds, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
@@ -191,7 +183,7 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
require.NoError(t, as.Persist(slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(ds, moreBlobSidecars[1:]...))
require.NoError(t, as.Persist(ds, moreBlobSidecars...))
}
type mockBlobBatchVerifier struct {

View File

@@ -1,244 +0,0 @@
package das
import (
"context"
"io"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until the IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
cache *dataColumnCache
newDataColumnsVerifier verification.NewDataColumnsVerifier
custody *custodyRequirement
bisector Bisector
shouldRetain RetentionChecker
}
var _ AvailabilityChecker = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
nodeID enode.ID,
cgc uint64,
bisector Bisector,
shouldRetain RetentionChecker,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newDataColumnCache(),
newDataColumnsVerifier: newDataColumnsVerifier,
custody: &custodyRequirement{nodeID: nodeID, cgc: cgc},
bisector: bisector,
shouldRetain: shouldRetain,
}
}
// Persist adds columns to the working column cache. Columns stored in this cache are retained in memory
// for as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(_ primitives.Slot, sidecars ...blocks.RODataColumn) error {
for _, sidecar := range sidecars {
if err := s.cache.stash(sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnsSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, _ primitives.Slot, blks ...blocks.ROBlock) error {
toVerify := make([]blocks.RODataColumn, 0)
for _, block := range blks {
indices, err := s.required(block)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x`", block.Root())
}
if indices.Count() == 0 {
continue
}
key := keyFromBlock(block)
entry := s.cache.entry(key)
toVerify, err = entry.append(toVerify, IndicesNotStored(s.store.Summary(block.Root()), indices))
if err != nil {
return errors.Wrap(err, "entry filter")
}
}
if err := s.verifyAndSave(toVerify); err != nil {
log.Warn("Batch verification failed, bisecting columns by peer")
if err := s.bisectVerification(toVerify); err != nil {
return errors.Wrap(err, "bisect verification")
}
}
s.cache.cleanup(blks)
return nil
}
// required returns the set of column indices to check for a given block.
func (s *LazilyPersistentStoreColumn) required(block blocks.ROBlock) (peerdas.ColumnIndices, error) {
if !s.shouldRetain(block.Block().Slot()) {
return peerdas.NewColumnIndices(), nil
}
// If there are any commitments in the block, there are blobs,
// and if there are blobs, we need the custody columns derived from those blobs.
commitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// No DA check needed if the block has no blobs.
if len(commitments) == 0 {
return peerdas.NewColumnIndices(), nil
}
return s.custody.required()
}
// verifyAndSave calls Save on the column store if the columns pass verification.
func (s *LazilyPersistentStoreColumn) verifyAndSave(columns []blocks.RODataColumn) error {
if len(columns) == 0 {
return nil
}
verified, err := s.verifyColumns(columns)
if err != nil {
return errors.Wrap(err, "verify columns")
}
if err := s.store.Save(verified); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
func (s *LazilyPersistentStoreColumn) verifyColumns(columns []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error) {
if len(columns) == 0 {
return nil, nil
}
verifier := s.newDataColumnsVerifier(columns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return nil, errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return nil, errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return nil, errors.Wrap(err, "sidecar KZG proof verified")
}
return verifier.VerifiedRODataColumns()
}
// bisectVerification is used when verification of a batch of columns fails. Since the batch could
// span multiple blocks or have been fetched from multiple peers, this pattern enables code using the
// store to break the verification into smaller units and learn the results, in order to plan to retry
// retrieval of the unusable columns.
func (s *LazilyPersistentStoreColumn) bisectVerification(columns []blocks.RODataColumn) error {
if len(columns) == 0 {
return nil
}
if s.bisector == nil {
return errors.New("bisector not initialized")
}
iter, err := s.bisector.Bisect(columns)
if err != nil {
return errors.Wrap(err, "Bisector.Bisect")
}
// It's up to the bisector how to chunk up columns for verification,
// which could be by block, or by peer, or any other strategy.
// For the purposes of range syncing or backfill this will be by peer,
// so that the node can learn which peer is giving us bad data and downscore them.
for columns, err := iter.Next(); columns != nil; columns, err = iter.Next() {
if err != nil {
if !errors.Is(err, io.EOF) {
return errors.Wrap(err, "Bisector.Next")
}
break // io.EOF signals end of iteration
}
// We save the parts of the batch that have been verified successfully even though we don't know
// if all columns for the block will be available until the block is imported.
if err := s.verifyAndSave(s.columnsNotStored(columns)); err != nil {
iter.OnError(err)
continue
}
}
// This should give us a single error representing any unresolved errors seen via onError.
return iter.Error()
}
// columnsNotStored filters the list of ROColumnSidecars to only include those that are not found in the storage summary.
func (s *LazilyPersistentStoreColumn) columnsNotStored(sidecars []blocks.RODataColumn) []blocks.RODataColumn {
// We use this method to filter a set of sidecars that were previously seen to be unavailable on disk. So our base assumption
// is that they are still not stored and we don't need to copy the list. Instead we track any indices that are unexpectedly
// stored, and only when we find that the storage view has changed do we need to create a new slice.
stored := make(map[int]struct{}, 0)
lastRoot := [32]byte{}
var sum filesystem.DataColumnStorageSummary
for i, sc := range sidecars {
if sc.BlockRoot() != lastRoot {
sum = s.store.Summary(sc.BlockRoot())
lastRoot = sc.BlockRoot()
}
if sum.HasIndex(sc.Index) {
stored[i] = struct{}{}
}
}
// If the view on storage hasn't changed, return the original list.
if len(stored) == 0 {
return sidecars
}
shift := 0
for i := range sidecars {
if _, ok := stored[i]; ok {
// If the index is stored, skip it; it will be overwritten when later sidecars are shifted down.
// Track how many positions down to shift the remaining sidecars (overwriting the skipped or already-shifted ones).
shift++
continue
}
if shift > 0 {
// If the index is not stored and we have seen stored indices,
// we need to shift the current index down.
sidecars[i-shift] = sidecars[i]
}
}
return sidecars[:len(sidecars)-shift]
}
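// Illustrative sketch (hypothetical, not part of this changeset): the in-place "shift down"
// compaction used above, shown on a plain slice. Elements flagged for dropping are overwritten
// by shifting later elements left, preserving order without allocating a new slice.
func compactInPlaceSketch[T any](items []T, drop func(i int) bool) []T {
    shift := 0
    for i := range items {
        if drop(i) {
            shift++
            continue
        }
        if shift > 0 {
            items[i-shift] = items[i]
        }
    }
    return items[:len(items)-shift]
}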
type custodyRequirement struct {
nodeID enode.ID
cgc uint64 // custody group count
indices peerdas.ColumnIndices
}
func (c *custodyRequirement) required() (peerdas.ColumnIndices, error) {
peerInfo, _, err := peerdas.Info(c.nodeID, c.cgc)
if err != nil {
return peerdas.NewColumnIndices(), errors.Wrap(err, "peer info")
}
return peerdas.NewColumnIndicesFromMap(peerInfo.CustodyColumns), nil
}

View File

@@ -1,908 +0,0 @@
package das
import (
"context"
"fmt"
"io"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/pkg/errors"
)
func mockShouldRetain(current primitives.Epoch) RetentionChecker {
return func(slot primitives.Slot) bool {
return params.WithinDAPeriod(slots.ToEpoch(slot), current)
}
}
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, nil, enode.ID{}, 0, nil, mockShouldRetain(0))
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
var current primitives.Slot = 1_000_000
sr := mockShouldRetain(slots.ToEpoch(current))
roSidecars, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, nil, enode.ID{}, 0, nil, sr)
err := lazilyPersistentStoreColumns.Persist(current, roSidecars...)
require.NoError(t, err)
require.Equal(t, len(roSidecars), len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
store := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
avs := NewLazilyPersistentStoreColumn(store, nil, enode.ID{}, 0, nil, mockShouldRetain(slots.ToEpoch(slot)))
err := avs.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(avs.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := avs.cache.entries[key]
require.Equal(t, true, ok)
summary := store.Summary(key.root)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), summary.Count())
require.Equal(t, len(roSidecars), len(entry.scs))
idx1 := entry.scs[1]
require.NotNil(t, idx1)
require.DeepSSZEqual(t, roDataColumns[0].BlockRoot(), idx1.BlockRoot())
idx5 := entry.scs[5]
require.NotNil(t, idx5)
require.DeepSSZEqual(t, roDataColumns[1].BlockRoot(), idx5.BlockRoot())
for i, roDataColumn := range entry.scs {
if map[uint64]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, newDataColumnsVerifier, enode.ID{}, 0, nil, mockShouldRetain(0))
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Slot = primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
storage := filesystem.NewEphemeralDataColumnStorage(t)
indices := []uint64{1, 17, 19, 42, 75, 87, 102, 117}
avs := NewLazilyPersistentStoreColumn(storage, newDataColumnsVerifier, enode.ID{}, uint64(len(indices)), nil, mockShouldRetain(slots.ToEpoch(slot)))
dcparams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dcparams = append(dcparams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dcparams)
key := keyFromBlock(signedRoBlock)
entry := avs.cache.entry(key)
defer avs.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = avs.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := storage.Get(root, indices)
require.NoError(t, err)
//summary := storage.Summary(root)
require.Equal(t, len(verifiedRoDataColumns), len(actual))
//require.Equal(t, uint64(len(indices)), summary.Count())
//require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestRetentionWindow(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
fuluSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
require.NoError(t, err)
numberOfColumns := fieldparams.NumberOfColumns
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
wantedCols int
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: fuluSlot + windowSlots,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = fuluSlot + windowSlots - 1
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: fuluSlot + windowSlots,
wantedCols: numberOfColumns,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, nil, enode.ID{}, uint64(numberOfColumns), nil, mockShouldRetain(slots.ToEpoch(tc.slot)))
indices, err := s.required(b)
require.NoError(t, err)
require.Equal(t, tc.wantedCols, len(indices))
})
}
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock any) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }
// Mock implementations for bisectVerification tests
// mockBisectionIterator simulates a BisectionIterator for testing.
type mockBisectionIterator struct {
chunks [][]blocks.RODataColumn
chunkErrors []error
finalError error
chunkIndex int
nextCallCount int
onErrorCallCount int
onErrorErrors []error
}
func (m *mockBisectionIterator) Next() ([]blocks.RODataColumn, error) {
if m.chunkIndex >= len(m.chunks) {
return nil, io.EOF
}
chunk := m.chunks[m.chunkIndex]
var err error
if m.chunkIndex < len(m.chunkErrors) {
err = m.chunkErrors[m.chunkIndex]
}
m.chunkIndex++
m.nextCallCount++
if err != nil {
return chunk, err
}
return chunk, nil
}
func (m *mockBisectionIterator) OnError(err error) {
m.onErrorCallCount++
m.onErrorErrors = append(m.onErrorErrors, err)
}
func (m *mockBisectionIterator) Error() error {
return m.finalError
}
// mockBisector simulates a Bisector for testing.
type mockBisector struct {
shouldError bool
bisectErr error
iterator *mockBisectionIterator
}
func (m *mockBisector) Bisect(columns []blocks.RODataColumn) (BisectionIterator, error) {
if m.shouldError {
return nil, m.bisectErr
}
return m.iterator, nil
}
// testDataColumnsVerifier implements verification.DataColumnsVerifier for testing.
type testDataColumnsVerifier struct {
t *testing.T
shouldFail bool
columns []blocks.RODataColumn
}
func (v *testDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
verified := make([]blocks.VerifiedRODataColumn, len(v.columns))
for i, col := range v.columns {
verified[i] = blocks.NewVerifiedRODataColumn(col)
}
return verified, nil
}
func (v *testDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (v *testDataColumnsVerifier) ValidFields() error {
if v.shouldFail {
return errors.New("verification failed")
}
return nil
}
func (v *testDataColumnsVerifier) CorrectSubnet(string, []string) error { return nil }
func (v *testDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (v *testDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (v *testDataColumnsVerifier) ValidProposerSignature(context.Context) error { return nil }
func (v *testDataColumnsVerifier) SidecarParentSeen(func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (v *testDataColumnsVerifier) SidecarParentValid(func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (v *testDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (v *testDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (v *testDataColumnsVerifier) SidecarInclusionProven() error { return nil }
func (v *testDataColumnsVerifier) SidecarKzgProofVerified() error { return nil }
func (v *testDataColumnsVerifier) SidecarProposerExpected(context.Context) error { return nil }
// Helper function to create test data columns
func makeTestDataColumns(t *testing.T, count int, blockRoot [32]byte, startIndex uint64) []blocks.RODataColumn {
columns := make([]blocks.RODataColumn, 0, count)
for i := range count {
params := util.DataColumnParam{
Index: startIndex + uint64(i),
KzgCommitments: commitments,
Slot: primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
}
_, verifiedCols := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{params})
if len(verifiedCols) > 0 {
columns = append(columns, verifiedCols[0].RODataColumn)
}
}
return columns
}
// Helper function to create test verifier factory with failure pattern
func makeTestVerifierFactory(failurePattern []bool) verification.NewDataColumnsVerifier {
callIndex := 0
return func(cols []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
shouldFail := callIndex < len(failurePattern) && failurePattern[callIndex]
callIndex++
return &testDataColumnsVerifier{
shouldFail: shouldFail,
columns: cols,
}
}
}
// TestBisectVerification tests the bisectVerification method with comprehensive table-driven test cases.
func TestBisectVerification(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
cases := []struct {
expectedError bool
bisectorNil bool
expectedOnErrorCallCount int
expectedNextCallCount int
inputCount int
iteratorFinalError error
bisectorError error
name string
storedColumnIndices []uint64
verificationFailurePattern []bool
chunkErrors []error
chunks [][]blocks.RODataColumn
}{
{
name: "EmptyColumns",
inputCount: 0,
expectedError: false,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "NilBisector",
inputCount: 3,
bisectorNil: true,
expectedError: true,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "BisectError",
inputCount: 5,
bisectorError: errors.New("bisect failed"),
expectedError: true,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "SingleChunkSuccess",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "SingleChunkFails",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{true},
iteratorFinalError: errors.New("chunk failed"),
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_BothPass",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{false, false},
expectedError: false,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 0,
},
{
name: "TwoChunks_FirstFails",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, false},
iteratorFinalError: errors.New("first failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_SecondFails",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{false, true},
iteratorFinalError: errors.New("second failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_BothFail",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, true},
iteratorFinalError: errors.New("both failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 2,
},
{
name: "ManyChunks_AllPass",
inputCount: 16,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}},
verificationFailurePattern: []bool{false, false, false, false},
expectedError: false,
expectedNextCallCount: 5,
expectedOnErrorCallCount: 0,
},
{
name: "ManyChunks_MixedFail",
inputCount: 16,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}},
verificationFailurePattern: []bool{false, true, false, true},
iteratorFinalError: errors.New("mixed failures"),
expectedError: true,
expectedNextCallCount: 5,
expectedOnErrorCallCount: 2,
},
{
name: "FilterStoredColumns_PartialFilter",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{1, 3},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "FilterStoredColumns_AllStored",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{0, 1, 2, 3, 4, 5},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "FilterStoredColumns_MixedAccess",
inputCount: 10,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{1, 5, 9},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "IteratorNextError",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}, {}},
chunkErrors: []error{nil, errors.New("next error")},
verificationFailurePattern: []bool{false},
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "IteratorNextEOF",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "LargeChunkSize",
inputCount: 128,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "ManySmallChunks",
inputCount: 32,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}, {}, {}, {}, {}},
verificationFailurePattern: []bool{false, false, false, false, false, false, false, false},
expectedError: false,
expectedNextCallCount: 9,
expectedOnErrorCallCount: 0,
},
{
name: "ChunkWithSomeStoredColumns",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{0, 2, 4},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "OnErrorDoesNotStopIteration",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, false},
iteratorFinalError: errors.New("first failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "VerificationErrorWrapping",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{true},
iteratorFinalError: errors.New("verification failed"),
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Setup storage
var store *filesystem.DataColumnStorage
if len(tc.storedColumnIndices) > 0 {
mocker, s := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
blockRoot := [32]byte{1, 2, 3}
slot := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, mocker.CreateFakeIndices(blockRoot, slot, tc.storedColumnIndices...))
store = s
} else {
store = filesystem.NewEphemeralDataColumnStorage(t)
}
// Create test columns
blockRoot := [32]byte{1, 2, 3}
columns := makeTestDataColumns(t, tc.inputCount, blockRoot, 0)
// Setup iterator with chunks
iterator := &mockBisectionIterator{
chunks: tc.chunks,
chunkErrors: tc.chunkErrors,
finalError: tc.iteratorFinalError,
}
// Setup bisector
var bisector Bisector
if tc.bisectorNil || tc.inputCount == 0 {
bisector = nil
} else if tc.bisectorError != nil {
bisector = &mockBisector{
shouldError: true,
bisectErr: tc.bisectorError,
}
} else {
bisector = &mockBisector{
shouldError: false,
iterator: iterator,
}
}
// Create store with verifier
verifierFactory := makeTestVerifierFactory(tc.verificationFailurePattern)
lazilyPersistentStore := &LazilyPersistentStoreColumn{
store: store,
cache: newDataColumnCache(),
newDataColumnsVerifier: verifierFactory,
custody: &custodyRequirement{},
bisector: bisector,
}
// Execute
err := lazilyPersistentStore.bisectVerification(columns)
// Assert
if tc.expectedError {
require.NotNil(t, err)
} else {
require.NoError(t, err)
}
// Verify iterator interactions for non-error cases
if tc.inputCount > 0 && bisector != nil && tc.bisectorError == nil && !tc.expectedError {
require.NotEqual(t, 0, iterator.nextCallCount, "iterator Next() should have been called")
require.Equal(t, tc.expectedOnErrorCallCount, iterator.onErrorCallCount, "OnError() call count mismatch")
}
})
}
}
func allIndicesExcept(total int, excluded []uint64) []uint64 {
excludeMap := make(map[uint64]bool)
for _, idx := range excluded {
excludeMap[idx] = true
}
var result []uint64
for i := range total {
if !excludeMap[uint64(i)] {
result = append(result, uint64(i))
}
}
return result
}
// TestColumnsNotStored tests the columnsNotStored method.
func TestColumnsNotStored(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
cases := []struct {
name string
count int
stored []uint64 // Column indices marked as stored
expected []uint64 // Expected column indices in returned result
}{
// Empty cases
{
name: "EmptyInput",
count: 0,
stored: []uint64{},
expected: []uint64{},
},
// Single element cases
{
name: "SingleElement_NotStored",
count: 1,
stored: []uint64{},
expected: []uint64{0},
},
{
name: "SingleElement_Stored",
count: 1,
stored: []uint64{0},
expected: []uint64{},
},
// All not stored cases
{
name: "AllNotStored_FiveElements",
count: 5,
stored: []uint64{},
expected: []uint64{0, 1, 2, 3, 4},
},
// All stored cases
{
name: "AllStored",
count: 5,
stored: []uint64{0, 1, 2, 3, 4},
expected: []uint64{},
},
// Partial storage - beginning
{
name: "StoredAtBeginning",
count: 5,
stored: []uint64{0, 1},
expected: []uint64{2, 3, 4},
},
// Partial storage - end
{
name: "StoredAtEnd",
count: 5,
stored: []uint64{3, 4},
expected: []uint64{0, 1, 2},
},
// Partial storage - middle
{
name: "StoredInMiddle",
count: 5,
stored: []uint64{2},
expected: []uint64{0, 1, 3, 4},
},
// Partial storage - scattered
{
name: "StoredScattered",
count: 8,
stored: []uint64{1, 3, 5},
expected: []uint64{0, 2, 4, 6, 7},
},
// Alternating pattern
{
name: "AlternatingPattern",
count: 8,
stored: []uint64{0, 2, 4, 6},
expected: []uint64{1, 3, 5, 7},
},
// Consecutive stored
{
name: "ConsecutiveStored",
count: 10,
stored: []uint64{3, 4, 5, 6},
expected: []uint64{0, 1, 2, 7, 8, 9},
},
// Large slice cases
{
name: "LargeSlice_NoStored",
count: 64,
stored: []uint64{},
expected: allIndicesExcept(64, []uint64{}),
},
{
name: "LargeSlice_SingleStored",
count: 64,
stored: []uint64{32},
expected: allIndicesExcept(64, []uint64{32}),
},
}
slot := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Create test columns first to get the actual block root
var columns []blocks.RODataColumn
if tc.count > 0 {
columns = makeTestDataColumns(t, tc.count, [32]byte{}, 0)
}
// Get the actual block root from the first column (if any)
var blockRoot [32]byte
if len(columns) > 0 {
blockRoot = columns[0].BlockRoot()
}
// Setup storage
var store *filesystem.DataColumnStorage
if len(tc.stored) > 0 {
mocker, s := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
require.NoError(t, mocker.CreateFakeIndices(blockRoot, slot, tc.stored...))
store = s
} else {
store = filesystem.NewEphemeralDataColumnStorage(t)
}
// Create store instance
lazilyPersistentStore := &LazilyPersistentStoreColumn{
store: store,
}
// Execute
result := lazilyPersistentStore.columnsNotStored(columns)
// Assert count
require.Equal(t, len(tc.expected), len(result),
fmt.Sprintf("expected %d columns, got %d", len(tc.expected), len(result)))
// Verify that no stored columns are in the result
if len(tc.stored) > 0 {
resultIndices := make(map[uint64]bool)
for _, col := range result {
resultIndices[col.Index] = true
}
for _, storedIdx := range tc.stored {
require.Equal(t, false, resultIndices[storedIdx],
fmt.Sprintf("stored column index %d should not be in result", storedIdx))
}
}
// If expectedIndices is specified, verify the exact column indices in order
if len(tc.expected) > 0 && len(tc.stored) == 0 {
// Only check exact order for non-stored cases (where we know they stay in same order)
for i, expectedIdx := range tc.expected {
require.Equal(t, columns[expectedIdx].Index, result[i].Index,
fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index, result[i].Index))
}
}
// Verify optimization: if nothing stored, should return original slice
if len(tc.stored) == 0 && tc.count > 0 {
require.Equal(t, &columns[0], &result[0],
"when no columns stored, should return original slice (same pointer)")
}
// Verify optimization: if some stored, result should use in-place shifting
if len(tc.stored) > 0 && len(tc.expected) > 0 && tc.count > 0 {
require.Equal(t, cap(columns), cap(result),
"result should be in-place shifted from original (same capacity)")
}
})
}
}

View File

@@ -1,40 +0,0 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
)
// Bisector describes a type that takes a set of RODataColumns via the Bisect method
// and returns a BisectionIterator that returns batches of those columns to be
// verified together.
type Bisector interface {
// Bisect initializes the BisectionIterator and returns the result.
Bisect([]blocks.RODataColumn) (BisectionIterator, error)
}
// BisectionIterator describes an iterator that returns groups of columns to verify.
// It is up to the bisector implementation to decide how to chunk up the columns,
// whether by block, by peer, or any other strategy. For example, backfill implements
// a bisector that keeps track of the source of each sidecar by peer, and groups
// sidecars by peer in the Next method, enabling it to track which peers, out of all
// the peers contributing to a batch, gave us bad data.
// When a batch fails, the OnError method should be used so that the bisector can
// keep track of the failed groups of columns and, e.g., apply that knowledge in peer scoring.
// The same column may be returned multiple times by Next: first as part of a larger batch,
// and again as part of a finer-grained batch if there was an error in the larger batch.
// For example, first as part of a batch of all columns spanning peers, and then again
// as part of a batch of columns from a single peer if some column in the larger batch
// failed verification.
type BisectionIterator interface {
// Next returns the next group of columns to verify.
// When the iteration is complete, Next should return (nil, io.EOF).
Next() ([]blocks.RODataColumn, error)
// OnError should be called when verification of a group of columns obtained via Next() fails.
OnError(error)
// Error can be used at the end of the iteration to get a single error result. It will return
// nil if OnError was never called, or an error of the implementers choosing representing the set
// of errors seen during iteration. For instance when bisecting from columns spanning peers to columns
// from a single peer, the broader error could be dropped, and then the more specific error
// (for a single peer's response) returned after bisecting to it.
Error() error
}
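// Illustrative sketch (hypothetical, not part of this changeset): a minimal Bisector that
// groups columns by block root and reports a single aggregate error. Backfill's real
// implementation groups by peer instead, so it can downscore the peer that served bad data.
// Assumes the io and github.com/pkg/errors imports in addition to the blocks package above.
type byRootBisector struct{}

func (byRootBisector) Bisect(cols []blocks.RODataColumn) (BisectionIterator, error) {
    groups := make(map[[32]byte][]blocks.RODataColumn)
    order := make([][32]byte, 0, len(cols))
    for _, c := range cols {
        root := c.BlockRoot()
        if _, seen := groups[root]; !seen {
            order = append(order, root)
        }
        groups[root] = append(groups[root], c)
    }
    it := &byRootIterator{chunks: make([][]blocks.RODataColumn, 0, len(order))}
    for _, root := range order {
        it.chunks = append(it.chunks, groups[root])
    }
    return it, nil
}

type byRootIterator struct {
    chunks [][]blocks.RODataColumn
    next   int
    errs   []error
}

// Next returns one block's worth of columns per call and io.EOF once exhausted.
func (it *byRootIterator) Next() ([]blocks.RODataColumn, error) {
    if it.next >= len(it.chunks) {
        return nil, io.EOF
    }
    chunk := it.chunks[it.next]
    it.next++
    return chunk, nil
}

// OnError records a failed group so Error can summarize the iteration.
func (it *byRootIterator) OnError(err error) { it.errs = append(it.errs, err) }

func (it *byRootIterator) Error() error {
    if len(it.errs) == 0 {
        return nil
    }
    return errors.Errorf("%d column groups failed verification", len(it.errs))
}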

View File

@@ -76,7 +76,7 @@ func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
e.scs = make([]*blocks.ROBlob, maxBlobsPerBlock)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(errDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
}
e.scs[sc.Index] = sc
return nil
@@ -88,7 +88,7 @@ func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte) ([]blocks.ROBlob, error) {
func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
count := len(kc)
if e.diskSummary.AllAvailable(count) {
return nil, nil

View File

@@ -34,8 +34,7 @@ type filterTestCaseSetupFunc func(t *testing.T) (*blobCacheEntry, [][]byte, []bl
func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
return func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
shouldRetain := func(s primitives.Slot) bool { return true }
commits, err := commitmentsToCheck(blk, shouldRetain)
commits, err := commitmentsToCheck(blk, blk.Block().Slot())
require.NoError(t, err)
entry := &blobCacheEntry{}
if len(onDisk) > 0 {
@@ -114,7 +113,7 @@ func TestFilterDiskSummary(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
got, err := entry.filter([32]byte{}, commits, 100)
require.NoError(t, err)
require.Equal(t, len(expected), len(got))
})
@@ -196,7 +195,7 @@ func TestFilter(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
got, err := entry.filter([32]byte{}, commits, 100)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return

View File

@@ -1,7 +1,9 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"bytes"
"slices"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -9,9 +11,9 @@ import (
)
var (
errDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
ErrDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errColumnIndexTooHigh = errors.New("column index too high")
errCommitmentMismatch = errors.New("commitment of sidecar in cache did not match block commitment")
errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
errMissingSidecar = errors.New("no sidecar in cache for block commitment")
)
@@ -23,80 +25,107 @@ func newDataColumnCache() *dataColumnCache {
return &dataColumnCache{entries: make(map[cacheKey]*dataColumnCacheEntry)}
}
// entry returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) entry(key cacheKey) *dataColumnCacheEntry {
// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) ensure(key cacheKey) *dataColumnCacheEntry {
entry, ok := c.entries[key]
if !ok {
entry = newDataColumnCacheEntry(key.root)
entry = &dataColumnCacheEntry{}
c.entries[key] = entry
}
return entry
}
func (c *dataColumnCache) cleanup(blks []blocks.ROBlock) {
for _, block := range blks {
key := cacheKey{slot: block.Block().Slot(), root: block.Root()}
c.delete(key)
}
}
// delete removes the cache entry from the cache.
func (c *dataColumnCache) delete(key cacheKey) {
delete(c.entries, key)
}
func (c *dataColumnCache) stash(sc blocks.RODataColumn) error {
key := cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
entry := c.entry(key)
return entry.stash(sc)
}
func newDataColumnCacheEntry(root [32]byte) *dataColumnCacheEntry {
return &dataColumnCacheEntry{scs: make(map[uint64]blocks.RODataColumn), root: &root}
}
// dataColumnCacheEntry is the set of RODataColumns for a given block.
// dataColumnCacheEntry holds a fixed-length cache of DataColumnSidecars.
type dataColumnCacheEntry struct {
root *[32]byte
scs map[uint64]blocks.RODataColumn
scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.DataColumnStorageSummary
}
func (e *dataColumnCacheEntry) setDiskSummary(sum filesystem.DataColumnStorageSummary) {
e.diskSummary = sum
}
// stash adds an item to the in-memory cache of DataColumnSidecars.
// stash will return an error if the given data column Index is out of bounds.
// It will overwrite any existing entry for the same index.
func (e *dataColumnCacheEntry) stash(sc blocks.RODataColumn) error {
// Only the first DataColumnSidecar of a given Index will be kept in the cache.
// stash will return an error if the given data column is already in the cache, or if the Index is out of bounds.
func (e *dataColumnCacheEntry) stash(sc *blocks.RODataColumn) error {
if sc.Index >= fieldparams.NumberOfColumns {
return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitments)
}
e.scs[sc.Index] = sc
return nil
}
// append appends the sidecars for the requested indices from the cache to the given sidecars slice and returns the result.
// If any of the given indices are missing, an error will be returned and the sidecars slice will be unchanged.
func (e *dataColumnCacheEntry) append(sidecars []blocks.RODataColumn, indices peerdas.ColumnIndices) ([]blocks.RODataColumn, error) {
needed := indices.ToMap()
for col := range needed {
_, ok := e.scs[col]
if !ok {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", e.root, col)
func (e *dataColumnCacheEntry) filter(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
nonEmptyIndices := commitmentsArray.nonEmptyIndices()
if e.diskSummary.AllAvailable(nonEmptyIndices) {
return nil, nil
}
commitmentsCount := commitmentsArray.count()
sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)
for i := range nonEmptyIndices {
if e.diskSummary.HasIndex(i) {
continue
}
if e.scs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !sliceBytesEqual(commitmentsArray[i], e.scs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitments, commitmentsArray[i])
}
sidecars = append(sidecars, *e.scs[i])
}
// Loop twice so we can avoid touching the slice if any of the blobs are missing.
for col := range needed {
sidecars = append(sidecars, e.scs[col])
}
return sidecars, nil
}
// IndicesNotStored filters the list of indices to only include those that are not found in the storage summary.
func IndicesNotStored(sum filesystem.DataColumnStorageSummary, indices peerdas.ColumnIndices) peerdas.ColumnIndices {
indices = indices.Copy()
for col := range indices {
if sum.HasIndex(col) {
indices.Unset(col)
// safeCommitmentsArray is a fixed-size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
count := 0
for i := range s {
if s[i] != nil {
count++
}
}
return indices
return count
}
// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
columns := make(map[uint64]bool)
for i := range s {
if s[i] != nil {
columns[uint64(i)] = true
}
}
return columns
}
func sliceBytesEqual(a, b [][]byte) bool {
return slices.EqualFunc(a, b, bytes.Equal)
}
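
A minimal sketch of the stash-then-filter flow implied by the cache code above; exampleFilter is a hypothetical in-package helper and the single-sidecar setup is only illustrative.

// Illustrative only: stash a sidecar, then ask the entry which cached sidecars
// still need a DA check against the block's KZG commitments.
func exampleFilter(c *dataColumnCache, block blocks.ROBlock, sc blocks.RODataColumn) ([]blocks.RODataColumn, error) {
	entry := c.ensure(cacheKey{slot: sc.Slot(), root: sc.BlockRoot()})
	if err := entry.stash(&sc); err != nil {
		return nil, err // e.g. ErrDuplicateSidecar or errColumnIndexTooHigh
	}
	// commitmentsArray holds, per expected column index, the block's commitments;
	// here it is mirrored from the sidecar purely for illustration.
	var commitmentsArray safeCommitmentsArray
	commitmentsArray[sc.Index] = sc.KzgCommitments
	// filter skips indices already on disk, errors on missing or mismatched
	// sidecars, and returns the remainder for verification.
	return entry.filter(block.Root(), &commitmentsArray)
}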

View File

@@ -1,10 +1,8 @@
package das
import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -15,105 +13,124 @@ import (
func TestEnsureDeleteSetDiskSummary(t *testing.T) {
c := newDataColumnCache()
key := cacheKey{}
entry := c.entry(key)
require.Equal(t, 0, len(entry.scs))
entry := c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
nonDupe := c.entry(key)
require.Equal(t, entry, nonDupe) // same pointer
expect, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
require.NoError(t, entry.stash(expect[0]))
require.Equal(t, 1, len(entry.scs))
cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index}))
require.NoError(t, err)
require.DeepEqual(t, expect[0], cols[0])
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true})
entry.setDiskSummary(diskSummary)
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{diskSummary: diskSummary}, *entry)
c.delete(key)
entry = c.entry(key)
require.Equal(t, 0, len(entry.scs))
require.NotEqual(t, entry, nonDupe) // different pointer
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
}
func TestStash(t *testing.T) {
t.Run("Index too high", func(t *testing.T) {
columns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
var entry dataColumnCacheEntry
err := entry.stash(columns[0])
err := entry.stash(&roDataColumns[0])
require.NotNil(t, err)
})
t.Run("Nominal and already existing", func(t *testing.T) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
entry := newDataColumnCacheEntry(roDataColumns[0].BlockRoot())
err := entry.stash(roDataColumns[0])
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
require.NoError(t, err)
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
require.NoError(t, entry.stash(roDataColumns[0]))
// stash simply replaces duplicate values now
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
err = entry.stash(&roDataColumns[0])
require.NotNil(t, err)
})
}
func TestAppendDataColumns(t *testing.T) {
func TestFilterDataColumns(t *testing.T) {
t.Run("All available", func(t *testing.T) {
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
notStored := IndicesNotStored(sum, peerdas.NewColumnIndicesFromSlice([]uint64{1, 3}))
actual, err := newDataColumnCacheEntry([32]byte{}).append([]blocks.RODataColumn{}, notStored)
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
require.IsNil(t, actual)
})
t.Run("Some scs missing", func(t *testing.T) {
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
notStored := IndicesNotStored(sum, peerdas.NewColumnIndicesFromSlice([]uint64{1}))
actual, err := newDataColumnCacheEntry([32]byte{}).append([]blocks.RODataColumn{}, notStored)
require.Equal(t, 0, len(actual))
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
_, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Commitments not equal", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[1] = &roDataColumns[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}
_, err := dataColumnCacheEntry.filter(roDataColumns[0].BlockRoot(), &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Nominal", func(t *testing.T) {
indices := peerdas.NewColumnIndicesFromSlice([]uint64{1, 3})
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 3, KzgCommitments: [][]byte{[]byte{3}}}})
scs := map[uint64]blocks.RODataColumn{
3: expected[0],
}
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
entry := dataColumnCacheEntry{scs: scs}
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[3] = &expected[0]
actual, err := entry.append([]blocks.RODataColumn{}, IndicesNotStored(sum, indices))
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter(expected[0].BlockRoot(), &commitmentsArray)
require.NoError(t, err)
require.DeepEqual(t, expected, actual)
})
}
t.Run("Append does not mutate the input", func(t *testing.T) {
indices := peerdas.NewColumnIndicesFromSlice([]uint64{1, 2})
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 0, KzgCommitments: [][]byte{[]byte{1}}},
{Index: 1, KzgCommitments: [][]byte{[]byte{2}}},
{Index: 2, KzgCommitments: [][]byte{[]byte{3}}},
})
func TestCount(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
require.Equal(t, 2, s.count())
}
scs := map[uint64]blocks.RODataColumn{
1: expected[1],
2: expected[2],
}
entry := dataColumnCacheEntry{scs: scs}
func TestNonEmptyIndices(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{10}}, nil, [][]byte{[]byte{20}}}
actual := s.nonEmptyIndices()
require.DeepEqual(t, map[uint64]bool{1: true, 3: true}, actual)
}
original := []blocks.RODataColumn{expected[0]}
actual, err := entry.append(original, indices)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
slices.SortFunc(actual, func(i, j blocks.RODataColumn) int {
return int(i.Index) - int(j.Index)
})
for i := range expected {
require.Equal(t, expected[i].Index, actual[i].Index)
}
require.Equal(t, 1, len(original))
func TestSliceBytesEqual(t *testing.T) {
t.Run("Different lengths", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
require.Equal(t, false, sliceBytesEqual(a, b))
})
t.Run("Same length but different content", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 7}}
require.Equal(t, false, sliceBytesEqual(a, b))
})
t.Run("Equal slices", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
require.Equal(t, true, sliceBytesEqual(a, b))
})
}

View File

@@ -7,13 +7,13 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// AvailabilityChecker is the minimum interface needed to check if data is available for a block.
// By convention there is a concept of an AvailabilityStore that implements a method to persist
// blobs or data columns to prepare for availability checking, but since those methods are different
// for different forms of blob data, they are not included in the interface.
type AvailabilityChecker interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error
// AvailabilityStore describes a component that can verify and save sidecars for a given block, and confirm previously
// verified and saved sidecars.
// Persist guarantees that the sidecar will be available to perform a DA check
// for the life of the beacon node process.
// IsDataAvailable guarantees that all blobs committed to in the block have been
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
// RetentionChecker is a callback that determines whether blobs at the given slot are within the retention period.
type RetentionChecker func(primitives.Slot) bool
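
A minimal sketch of the Persist-then-check contract, written against the AvailabilityStore variant shown above; processAvailability and its arguments are placeholders.

// Illustrative only: persist sidecars first so they are guaranteed to survive
// for the DA check, then block on IsDataAvailable before importing the block.
func processAvailability(ctx context.Context, store AvailabilityStore, current primitives.Slot, blk blocks.ROBlock, sidecars []blocks.ROBlob) error {
	if err := store.Persist(current, sidecars...); err != nil {
		return err
	}
	// Returns nil only once every blob committed to by the block is durably stored.
	return store.IsDataAvailable(ctx, current, blk)
}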

View File

@@ -1,5 +0,0 @@
package das
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "das")

View File

@@ -9,20 +9,16 @@ import (
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error
ErrIsDataAvailable error
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityChecker = &MockAvailabilityStore{}
var _ AvailabilityStore = &MockAvailabilityStore{}
// IsDataAvailable satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error {
if m.ErrIsDataAvailable != nil {
return m.ErrIsDataAvailable
}
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
if m.VerifyAvailabilityCallback != nil {
return m.VerifyAvailabilityCallback(ctx, current, b...)
return m.VerifyAvailabilityCallback(ctx, current, b)
}
return nil
}

View File

@@ -1,135 +0,0 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// NeedSpan represents the need for a resource over a span of slots.
type NeedSpan struct {
Begin primitives.Slot
End primitives.Slot
}
// At returns whether blocks/blobs/columns are needed at the given slot.
func (n NeedSpan) At(slot primitives.Slot) bool {
return slot >= n.Begin && slot < n.End
}
// CurrentNeeds fields can be used to check whether the given resource type is needed
// at a given slot. The values are based on the current slot, so this value shouldn't
// be retained / reused across slots.
type CurrentNeeds struct {
Block NeedSpan
Blob NeedSpan
Col NeedSpan
}
// SyncNeeds holds configuration and state for determining what data is needed
// at any given slot during backfill based on the current slot.
type SyncNeeds struct {
current func() primitives.Slot
deneb primitives.Slot
fulu primitives.Slot
oldestSlotFlagPtr *primitives.Slot
validOldestSlotPtr *primitives.Slot
blockRetention primitives.Epoch
blobRetentionFlag primitives.Epoch
blobRetention primitives.Epoch
colRetention primitives.Epoch
}
type CurrentSlotter func() primitives.Slot
func NewSyncNeeds(current CurrentSlotter, oldestSlotFlagPtr *primitives.Slot, blobRetentionFlag primitives.Epoch) (SyncNeeds, error) {
deneb, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
if err != nil {
return SyncNeeds{}, errors.Wrap(err, "deneb fork slot")
}
fuluBoundary := min(params.BeaconConfig().FuluForkEpoch, slots.MaxSafeEpoch())
fulu, err := slots.EpochStart(fuluBoundary)
if err != nil {
return SyncNeeds{}, errors.Wrap(err, "fulu fork slot")
}
sn := SyncNeeds{
current: func() primitives.Slot { return current() },
deneb: deneb,
fulu: fulu,
blobRetentionFlag: blobRetentionFlag,
}
// We apply the --blob-retention-epochs flag to both blob and column retention.
sn.blobRetention = max(sn.blobRetentionFlag, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
sn.colRetention = max(sn.blobRetentionFlag, params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
// Override spec minimum block retention with user-provided flag only if it is lower than the spec minimum.
sn.blockRetention = primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oldestSlotFlagPtr != nil {
oldestEpoch := slots.ToEpoch(*oldestSlotFlagPtr)
if oldestEpoch < sn.blockRetention {
sn.validOldestSlotPtr = oldestSlotFlagPtr
} else {
log.WithField("backfill-oldest-slot", *oldestSlotFlagPtr).
WithField("specMinSlot", syncEpochOffset(current(), sn.blockRetention)).
Warn("Ignoring user-specified slot > MIN_EPOCHS_FOR_BLOCK_REQUESTS.")
}
}
return sn, nil
}
// Currently is the main callback given to the different parts of backfill to determine
// what resources are needed at a given slot. It assumes the current instance of SyncNeeds
// is the result of calling initialize.
func (n SyncNeeds) Currently() CurrentNeeds {
current := n.current()
c := CurrentNeeds{
Block: n.blockSpan(current),
Blob: NeedSpan{Begin: syncEpochOffset(current, n.blobRetention), End: n.fulu},
Col: NeedSpan{Begin: syncEpochOffset(current, n.colRetention), End: current},
}
// Adjust the minimums forward to the slots where the sidecar types were introduced
c.Blob.Begin = max(c.Blob.Begin, n.deneb)
c.Col.Begin = max(c.Col.Begin, n.fulu)
return c
}
func (n SyncNeeds) blockSpan(current primitives.Slot) NeedSpan {
if n.validOldestSlotPtr != nil { // assumes validation done in initialize()
return NeedSpan{Begin: *n.validOldestSlotPtr, End: current}
}
return NeedSpan{Begin: syncEpochOffset(current, n.blockRetention), End: current}
}
func (n SyncNeeds) BlobRetentionChecker() RetentionChecker {
return func(slot primitives.Slot) bool {
current := n.Currently()
return current.Blob.At(slot)
}
}
func (n SyncNeeds) DataColumnRetentionChecker() RetentionChecker {
return func(slot primitives.Slot) bool {
current := n.Currently()
return current.Col.At(slot)
}
}
// syncEpochOffset subtracts a number of epochs as slots from the current slot, with underflow checks.
// It returns slot 1 if the result would be 0 or underflow. It doesn't return slot 0 because the
// genesis block needs to be specially synced (it doesn't have a valid signature).
func syncEpochOffset(current primitives.Slot, subtract primitives.Epoch) primitives.Slot {
minEpoch := min(subtract, slots.MaxSafeEpoch())
// compute slot offset - offset is a number of slots to go back from current (not an absolute slot).
offset := slots.UnsafeEpochStart(minEpoch)
// Underflow protection: slot 0 is the genesis block, therefore its signature is invalid.
// To prevent us from rejecting a batch, we restrict the minimum backfill batch to slot 1.
if offset >= current {
return 1
}
return current - offset
}
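
A minimal sketch of how the SyncNeeds helpers above are meant to be consumed; needsAt is a hypothetical in-package helper, and passing a zero blob-retention flag simply falls back to the spec minimums.

// Illustrative only: resolve whether blobs and data columns are needed at a slot.
func needsAt(currentSlot, slot primitives.Slot) (blobNeeded, colNeeded bool, err error) {
	sn, err := NewSyncNeeds(func() primitives.Slot { return currentSlot }, nil, 0)
	if err != nil {
		return false, false, err
	}
	needs := sn.Currently()
	// Spans are half-open: Begin is inclusive, End is exclusive.
	return needs.Blob.At(slot), needs.Col.At(slot), nil
}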

View File

@@ -1,675 +0,0 @@
package das
import (
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// TestNeedSpanAt tests the needSpan.at() method for range checking.
func TestNeedSpanAt(t *testing.T) {
cases := []struct {
name string
span NeedSpan
slots []primitives.Slot
expected bool
}{
{
name: "within bounds",
span: NeedSpan{Begin: 100, End: 200},
slots: []primitives.Slot{101, 150, 199},
expected: true,
},
{
name: "before begin / at end boundary (exclusive)",
span: NeedSpan{Begin: 100, End: 200},
slots: []primitives.Slot{99, 200, 201},
expected: false,
},
{
name: "empty span (begin == end)",
span: NeedSpan{Begin: 100, End: 100},
slots: []primitives.Slot{100},
expected: false,
},
{
name: "slot 0 with span starting at 0",
span: NeedSpan{Begin: 0, End: 100},
slots: []primitives.Slot{0},
expected: true,
},
}
for _, tc := range cases {
for _, sl := range tc.slots {
t.Run(fmt.Sprintf("%s at slot %d, ", tc.name, sl), func(t *testing.T) {
result := tc.span.At(sl)
require.Equal(t, tc.expected, result)
})
}
}
}
// TestSyncEpochOffset tests the syncEpochOffset helper function.
func TestSyncEpochOffset(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
cases := []struct {
name string
current primitives.Slot
subtract primitives.Epoch
expected primitives.Slot
}{
{
name: "typical offset - 5 epochs back",
current: primitives.Slot(10000),
subtract: 5,
expected: primitives.Slot(10000 - 5*slotsPerEpoch),
},
{
name: "zero subtract returns current",
current: primitives.Slot(5000),
subtract: 0,
expected: primitives.Slot(5000),
},
{
name: "subtract 1 epoch from mid-range slot",
current: primitives.Slot(1000),
subtract: 1,
expected: primitives.Slot(1000 - slotsPerEpoch),
},
{
name: "offset equals current - underflow protection",
current: primitives.Slot(slotsPerEpoch),
subtract: 1,
expected: 1,
},
{
name: "offset exceeds current - underflow protection",
current: primitives.Slot(50),
subtract: 1000,
expected: 1,
},
{
name: "current very close to 0",
current: primitives.Slot(10),
subtract: 1,
expected: 1,
},
{
name: "subtract MaxSafeEpoch",
current: primitives.Slot(1000000),
subtract: slots.MaxSafeEpoch(),
expected: 1, // underflow protection
},
{
name: "result exactly at slot 1",
current: primitives.Slot(1 + slotsPerEpoch),
subtract: 1,
expected: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result := syncEpochOffset(tc.current, tc.subtract)
require.Equal(t, tc.expected, result)
})
}
}
// TestSyncNeedsInitialize tests the syncNeeds.initialize() method.
func TestSyncNeedsInitialize(t *testing.T) {
params.SetupTestConfigCleanup(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
minBlobEpochs := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
minColEpochs := params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest
currentSlot := primitives.Slot(10000)
currentFunc := func() primitives.Slot { return currentSlot }
cases := []struct {
invalidOldestFlag bool
expectValidOldest bool
oldestSlotFlagPtr *primitives.Slot
blobRetentionFlag primitives.Epoch
expectedBlob primitives.Epoch
expectedCol primitives.Epoch
name string
input SyncNeeds
}{
{
name: "basic initialization with no flags",
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
blobRetentionFlag: 0,
},
{
name: "blob retention flag less than spec minimum",
blobRetentionFlag: minBlobEpochs - 1,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "blob retention flag greater than spec minimum",
blobRetentionFlag: minBlobEpochs + 10,
expectValidOldest: false,
expectedBlob: minBlobEpochs + 10,
expectedCol: minBlobEpochs + 10,
},
{
name: "oldestSlotFlagPtr is nil",
blobRetentionFlag: 0,
oldestSlotFlagPtr: nil,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "valid oldestSlotFlagPtr (earlier than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(10)
return &slot
}(),
expectValidOldest: true,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "invalid oldestSlotFlagPtr (later than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
// Make it way past the spec minimum
slot := currentSlot - primitives.Slot(params.BeaconConfig().MinEpochsForBlockRequests-1)*slotsPerEpoch
return &slot
}(),
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
invalidOldestFlag: true,
},
{
name: "oldestSlotFlagPtr at boundary (exactly at spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := currentSlot - primitives.Slot(params.BeaconConfig().MinEpochsForBlockRequests)*slotsPerEpoch
return &slot
}(),
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
invalidOldestFlag: true,
},
{
name: "both blob retention flag and oldest slot set",
blobRetentionFlag: minBlobEpochs + 5,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(100)
return &slot
}(),
expectValidOldest: true,
expectedBlob: minBlobEpochs + 5,
expectedCol: minBlobEpochs + 5,
},
{
name: "zero blob retention uses spec minimum",
blobRetentionFlag: 0,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "large blob retention value",
blobRetentionFlag: 5000,
expectValidOldest: false,
expectedBlob: 5000,
expectedCol: 5000,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result, err := NewSyncNeeds(currentFunc, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
require.NoError(t, err)
// Check that current, deneb, fulu are set correctly
require.Equal(t, currentSlot, result.current())
// Check retention calculations
require.Equal(t, tc.expectedBlob, result.blobRetention)
require.Equal(t, tc.expectedCol, result.colRetention)
if tc.invalidOldestFlag {
require.IsNil(t, result.validOldestSlotPtr)
} else {
require.Equal(t, tc.oldestSlotFlagPtr, result.validOldestSlotPtr)
}
// Check blockRetention is always spec minimum
require.Equal(t, primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests), result.blockRetention)
})
}
}
// TestSyncNeedsBlockSpan tests the syncNeeds.blockSpan() method.
func TestSyncNeedsBlockSpan(t *testing.T) {
params.SetupTestConfigCleanup(t)
minBlockEpochs := params.BeaconConfig().MinEpochsForBlockRequests
cases := []struct {
name string
validOldest *primitives.Slot
blockRetention primitives.Epoch
current primitives.Slot
expectedBegin primitives.Slot
expectedEnd primitives.Slot
}{
{
name: "with validOldestSlotPtr set",
validOldest: func() *primitives.Slot { s := primitives.Slot(500); return &s }(),
blockRetention: primitives.Epoch(minBlockEpochs),
current: 10000,
expectedBegin: 500,
expectedEnd: 10000,
},
{
name: "without validOldestSlotPtr (nil)",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 10000,
expectedBegin: syncEpochOffset(10000, primitives.Epoch(minBlockEpochs)),
expectedEnd: 10000,
},
{
name: "very low current slot",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 100,
expectedBegin: 1, // underflow protection
expectedEnd: 100,
},
{
name: "very high current slot",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 1000000,
expectedBegin: syncEpochOffset(1000000, primitives.Epoch(minBlockEpochs)),
expectedEnd: 1000000,
},
{
name: "validOldestSlotPtr at boundary value",
validOldest: func() *primitives.Slot { s := primitives.Slot(1); return &s }(),
blockRetention: primitives.Epoch(minBlockEpochs),
current: 5000,
expectedBegin: 1,
expectedEnd: 5000,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
validOldestSlotPtr: tc.validOldest,
blockRetention: tc.blockRetention,
}
result := sn.blockSpan(tc.current)
require.Equal(t, tc.expectedBegin, result.Begin)
require.Equal(t, tc.expectedEnd, result.End)
})
}
}
// TestSyncNeedsCurrently tests the syncNeeds.currently() method.
func TestSyncNeedsCurrently(t *testing.T) {
params.SetupTestConfigCleanup(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
denebSlot := primitives.Slot(1000)
fuluSlot := primitives.Slot(2000)
cases := []struct {
name string
current primitives.Slot
blobRetention primitives.Epoch
colRetention primitives.Epoch
blockRetention primitives.Epoch
validOldest *primitives.Slot
// Expected block span
expectBlockBegin primitives.Slot
expectBlockEnd primitives.Slot
// Expected blob span
expectBlobBegin primitives.Slot
expectBlobEnd primitives.Slot
// Expected column span
expectColBegin primitives.Slot
expectColEnd primitives.Slot
}{
{
name: "pre-Deneb - only blocks needed",
current: 500,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(500, 5),
expectBlockEnd: 500,
expectBlobBegin: denebSlot, // adjusted to deneb
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // adjusted to fulu
expectColEnd: 500,
},
{
name: "between Deneb and Fulu - blocks and blobs needed",
current: 1500,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(1500, 5),
expectBlockEnd: 1500,
expectBlobBegin: max(syncEpochOffset(1500, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // adjusted to fulu
expectColEnd: 1500,
},
{
name: "post-Fulu - all resources needed",
current: 3000,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(3000, 5),
expectBlockEnd: 3000,
expectBlobBegin: max(syncEpochOffset(3000, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(3000, 10), fuluSlot),
expectColEnd: 3000,
},
{
name: "exactly at Deneb boundary",
current: denebSlot,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot, 5),
expectBlockEnd: denebSlot,
expectBlobBegin: denebSlot,
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot,
},
{
name: "exactly at Fulu boundary",
current: fuluSlot,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot, 5),
expectBlockEnd: fuluSlot,
expectBlobBegin: max(syncEpochOffset(fuluSlot, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: fuluSlot,
},
{
name: "small retention periods",
current: 5000,
blobRetention: 1,
colRetention: 2,
blockRetention: 1,
validOldest: nil,
expectBlockBegin: syncEpochOffset(5000, 1),
expectBlockEnd: 5000,
expectBlobBegin: max(syncEpochOffset(5000, 1), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(5000, 2), fuluSlot),
expectColEnd: 5000,
},
{
name: "large retention periods",
current: 10000,
blobRetention: 100,
colRetention: 100,
blockRetention: 50,
validOldest: nil,
expectBlockBegin: syncEpochOffset(10000, 50),
expectBlockEnd: 10000,
expectBlobBegin: max(syncEpochOffset(10000, 100), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(10000, 100), fuluSlot),
expectColEnd: 10000,
},
{
name: "with validOldestSlotPtr for blocks",
current: 8000,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: func() *primitives.Slot { s := primitives.Slot(100); return &s }(),
expectBlockBegin: 100,
expectBlockEnd: 8000,
expectBlobBegin: max(syncEpochOffset(8000, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(8000, 10), fuluSlot),
expectColEnd: 8000,
},
{
name: "retention approaching current slot",
current: primitives.Slot(2000 + 5*slotsPerEpoch),
blobRetention: 5,
colRetention: 5,
blockRetention: 3,
validOldest: nil,
expectBlockBegin: syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 3),
expectBlockEnd: primitives.Slot(2000 + 5*slotsPerEpoch),
expectBlobBegin: max(syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 5), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 5), fuluSlot),
expectColEnd: primitives.Slot(2000 + 5*slotsPerEpoch),
},
{
name: "current just after Deneb",
current: denebSlot + 10,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot+10, 5),
expectBlockEnd: denebSlot + 10,
expectBlobBegin: denebSlot,
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot + 10,
},
{
name: "current just after Fulu",
current: fuluSlot + 10,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot+10, 5),
expectBlockEnd: fuluSlot + 10,
expectBlobBegin: max(syncEpochOffset(fuluSlot+10, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: fuluSlot + 10,
},
{
name: "blob retention would start before Deneb",
current: denebSlot + primitives.Slot(5*slotsPerEpoch),
blobRetention: 100, // very large retention
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot+primitives.Slot(5*slotsPerEpoch), 5),
expectBlockEnd: denebSlot + primitives.Slot(5*slotsPerEpoch),
expectBlobBegin: denebSlot, // clamped to deneb
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot + primitives.Slot(5*slotsPerEpoch),
},
{
name: "column retention would start before Fulu",
current: fuluSlot + primitives.Slot(5*slotsPerEpoch),
blobRetention: 10,
colRetention: 100, // very large retention
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot+primitives.Slot(5*slotsPerEpoch), 5),
expectBlockEnd: fuluSlot + primitives.Slot(5*slotsPerEpoch),
expectBlobBegin: max(syncEpochOffset(fuluSlot+primitives.Slot(5*slotsPerEpoch), 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // clamped to fulu
expectColEnd: fuluSlot + primitives.Slot(5*slotsPerEpoch),
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
current: func() primitives.Slot { return tc.current },
deneb: denebSlot,
fulu: fuluSlot,
validOldestSlotPtr: tc.validOldest,
blockRetention: tc.blockRetention,
blobRetention: tc.blobRetention,
colRetention: tc.colRetention,
}
result := sn.Currently()
// Verify block span
require.Equal(t, tc.expectBlockBegin, result.Block.Begin,
"block.begin mismatch")
require.Equal(t, tc.expectBlockEnd, result.Block.End,
"block.end mismatch")
// Verify blob span
require.Equal(t, tc.expectBlobBegin, result.Blob.Begin,
"blob.begin mismatch")
require.Equal(t, tc.expectBlobEnd, result.Blob.End,
"blob.end mismatch")
// Verify column span
require.Equal(t, tc.expectColBegin, result.Col.Begin,
"col.begin mismatch")
require.Equal(t, tc.expectColEnd, result.Col.End,
"col.end mismatch")
})
}
}
// TestCurrentNeedsIntegration verifies the complete currentNeeds workflow.
func TestCurrentNeedsIntegration(t *testing.T) {
params.SetupTestConfigCleanup(t)
denebSlot := primitives.Slot(1000)
fuluSlot := primitives.Slot(2000)
cases := []struct {
name string
current primitives.Slot
blobRetention primitives.Epoch
colRetention primitives.Epoch
testSlots []primitives.Slot
expectBlockAt []bool
expectBlobAt []bool
expectColAt []bool
}{
{
name: "pre-Deneb slot - only blocks",
current: 500,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{100, 250, 499, 500, 1000, 2000},
expectBlockAt: []bool{true, true, true, false, false, false},
expectBlobAt: []bool{false, false, false, false, true, false},
expectColAt: []bool{false, false, false, false, false, false},
},
{
name: "between Deneb and Fulu - blocks and blobs",
current: 1500,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{500, 1000, 1200, 1499, 1500, 2000},
expectBlockAt: []bool{true, true, true, true, false, false},
expectBlobAt: []bool{false, false, true, true, true, false},
expectColAt: []bool{false, false, false, false, false, false},
},
{
name: "post-Fulu - all resources",
current: 3000,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{1000, 1500, 2000, 2500, 2999, 3000},
expectBlockAt: []bool{true, true, true, true, true, false},
expectBlobAt: []bool{false, false, false, false, false, false},
expectColAt: []bool{false, false, false, false, true, false},
},
{
name: "at Deneb boundary",
current: denebSlot,
blobRetention: 5,
colRetention: 5,
testSlots: []primitives.Slot{500, 999, 1000, 1500, 2000},
expectBlockAt: []bool{true, true, false, false, false},
expectBlobAt: []bool{false, false, true, true, false},
expectColAt: []bool{false, false, false, false, false},
},
{
name: "at Fulu boundary",
current: fuluSlot,
blobRetention: 5,
colRetention: 5,
testSlots: []primitives.Slot{1000, 1500, 1999, 2000, 2001},
expectBlockAt: []bool{true, true, true, false, false},
expectBlobAt: []bool{false, false, true, false, false},
expectColAt: []bool{false, false, false, false, false},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
current: func() primitives.Slot { return tc.current },
deneb: denebSlot,
fulu: fuluSlot,
blockRetention: 100,
blobRetention: tc.blobRetention,
colRetention: tc.colRetention,
}
cn := sn.Currently()
// Verify block.end == current
require.Equal(t, tc.current, cn.Block.End, "block.end should equal current")
// Verify blob.end == fulu
require.Equal(t, fuluSlot, cn.Blob.End, "blob.end should equal fulu")
// Verify col.end == current
require.Equal(t, tc.current, cn.Col.End, "col.end should equal current")
// Test each slot
for i, slot := range tc.testSlots {
require.Equal(t, tc.expectBlockAt[i], cn.Block.At(slot),
"block.at(%d) mismatch at index %d", slot, i)
require.Equal(t, tc.expectBlobAt[i], cn.Blob.At(slot),
"blob.at(%d) mismatch at index %d", slot, i)
require.Equal(t, tc.expectColAt[i], cn.Col.At(slot),
"col.at(%d) mismatch at index %d", slot, i)
}
})
}
}

View File

@@ -270,7 +270,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Check that the number of columns is the expected one.
// While implementing this, we expect the number of columns won't change.
// If it does, we will need to create a new version of the data column sidecar file.
if fieldparams.NumberOfColumns != mandatoryNumberOfColumns {
if params.BeaconConfig().NumberOfColumns != mandatoryNumberOfColumns {
return errWrongNumberOfColumns
}
@@ -515,11 +515,6 @@ func (dcs *DataColumnStorage) Clear() error {
// prune cleans the cache, the filesystem, and the mutexes.
func (dcs *DataColumnStorage) prune() {
startTime := time.Now()
defer func() {
dataColumnPruneLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
highestStoredEpoch := dcs.cache.HighestEpoch()
// Check if we need to prune.
@@ -627,9 +622,6 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
// Create the SSZ encoded data column sidecars.
var sszEncodedDataColumnSidecars []byte
// Initialize the count of the saved SSZ encoded data column sidecar.
storedCount := uint8(0)
for {
dataColumnSidecars := pullChan(inputDataColumnSidecars)
if len(dataColumnSidecars) == 0 {
@@ -676,9 +668,6 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errors.Wrap(err, "set index")
}
// Increment the count of the saved SSZ encoded data column sidecar.
storedCount++
// Append the SSZ encoded data column sidecar to the SSZ encoded data column sidecars.
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
@@ -703,12 +692,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}
@@ -822,14 +808,10 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}
@@ -982,7 +964,8 @@ func (si *storageIndices) set(dataColumnIndex uint64, position uint8) error {
// pullChan pulls data column sidecars from the input channel until it is empty.
func pullChan(inputRoDataColumns chan []blocks.VerifiedRODataColumn) []blocks.VerifiedRODataColumn {
dataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, fieldparams.NumberOfColumns)
numberOfColumns := params.BeaconConfig().NumberOfColumns
dataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, numberOfColumns)
for {
select {

View File

@@ -117,6 +117,8 @@ func (sc *dataColumnStorageSummaryCache) HighestEpoch() primitives.Epoch {
// set updates the cache.
func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent) error {
numberOfColumns := params.BeaconConfig().NumberOfColumns
sc.mu.Lock()
defer sc.mu.Unlock()
@@ -125,7 +127,7 @@ func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent)
count := uint64(0)
for _, index := range dataColumnsIdent.Indices {
if index >= fieldparams.NumberOfColumns {
if index >= numberOfColumns {
return errDataColumnIndexOutOfBounds
}

View File

@@ -7,6 +7,7 @@ import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -87,6 +88,22 @@ func TestWarmCache(t *testing.T) {
}
func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("wrong numbers of columns", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = 0
params.OverrideBeaconConfig(cfg)
params.SetupTestConfigCleanup(t)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{{Index: 12}, {Index: 1_000_000}, {Index: 48}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("one of the column index is too large", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,

View File

@@ -36,15 +36,16 @@ var (
})
// Data columns
dataColumnBuckets = []float64{3, 5, 7, 9, 11, 13}
dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_save_latency",
Help: "Latency of DataColumnSidecar storage save operations in milliseconds",
Buckets: []float64{10, 20, 30, 50, 100, 200, 500},
Buckets: dataColumnBuckets,
})
dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_get_latency",
Help: "Latency of DataColumnSidecar storage get operations in milliseconds",
Buckets: []float64{3, 5, 7, 9, 11, 13},
Buckets: dataColumnBuckets,
})
dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_pruned",
@@ -58,16 +59,4 @@ var (
Name: "data_column_disk_count",
Help: "Approximate number of data columns in storage",
})
dataColumnFileSyncLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_file_sync_latency",
Help: "Latency of sync operations when saving data columns in milliseconds",
})
dataColumnBatchStoreCount = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_batch_store_count",
Help: "Number of data columns stored in a batch",
})
dataColumnPruneLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_prune_latency",
Help: "Latency of data column prune operations in milliseconds",
})
)

View File

@@ -128,9 +128,10 @@ type NoHeadAccessDatabase interface {
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateLiteSupernode(ctx context.Context, enabled bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

View File

@@ -27,9 +27,6 @@ go_library(
"p2p.go",
"schema.go",
"state.go",
"state_diff.go",
"state_diff_cache.go",
"state_diff_helpers.go",
"state_summary.go",
"state_summary_cache.go",
"utils.go",
@@ -44,12 +41,10 @@ go_library(
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/hdiff:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -58,7 +53,6 @@ go_library(
"//encoding/ssz/detect:go_default_library",
"//genesis:go_default_library",
"//io/file:go_default_library",
"//math:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
@@ -104,7 +98,6 @@ go_test(
"migration_block_slot_index_test.go",
"migration_state_validators_test.go",
"p2p_test.go",
"state_diff_test.go",
"state_summary_test.go",
"state_test.go",
"utils_test.go",
@@ -118,7 +111,6 @@ go_test(
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -128,7 +120,6 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//math:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -142,7 +133,6 @@ go_test(
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",

View File

@@ -146,9 +146,9 @@ func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailab
return nil
}
// UpdateSubscribedToAllDataSubnets updates whether the node is subscribed to all data subnets (supernode mode).
// This is a one-way flag - once set to true, it cannot be reverted to false.
// Returns the previous state.
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.
func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateSubscribedToAllDataSubnets")
defer span.End()
@@ -156,11 +156,13 @@ func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed
result := false
if !subscribed {
if err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
// Retrieve the subscribe all data subnets flag.
bytes := bucket.Get(subscribeAllDataSubnetsKey)
if len(bytes) == 0 {
return nil
@@ -179,6 +181,7 @@ func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed
}
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
@@ -200,3 +203,61 @@ func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed
return result, nil
}
// UpdateLiteSupernode updates the "lite-supernode" status in the database
// only if `enabled` is `true`.
// It returns the previous lite-supernode status.
func (s *Store) UpdateLiteSupernode(ctx context.Context, enabled bool) (bool, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateLiteSupernode")
defer span.End()
result := false
if !enabled {
if err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
// Retrieve the lite-supernode flag.
bytes := bucket.Get(liteSupernodeKey)
if len(bytes) == 0 {
return nil
}
if bytes[0] == 1 {
result = true
}
return nil
}); err != nil {
return false, err
}
return result, nil
}
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
bytes := bucket.Get(liteSupernodeKey)
if len(bytes) != 0 && bytes[0] == 1 {
result = true
}
if err := bucket.Put(liteSupernodeKey, []byte{1}); err != nil {
return errors.Wrap(err, "put lite-supernode")
}
return nil
}); err != nil {
return false, err
}
return result, nil
}
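
A minimal sketch of the one-way flag semantics documented above; exampleLiteSupernodeFlag is hypothetical, and UpdateSubscribedToAllDataSubnets follows the same pattern.

// Illustrative only: once the flag is persisted as true it stays true; passing
// false only reads and returns the stored value without clearing it.
func exampleLiteSupernodeFlag(ctx context.Context, db *Store) error {
	prev, err := db.UpdateLiteSupernode(ctx, true) // fresh DB: persists true, prev == false
	if err != nil {
		return err
	}
	_ = prev
	still, err := db.UpdateLiteSupernode(ctx, false) // does not clear the flag
	if err != nil {
		return err
	}
	_ = still // true: the earlier call already set the flag
	return nil
}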

View File

@@ -67,6 +67,28 @@ func getSubscriptionStatusFromDB(t *testing.T, db *Store) bool {
return subscribed
}
// getLiteSupernodeStatusFromDB reads the lite-supernode status directly from the database for testing purposes.
func getLiteSupernodeStatusFromDB(t *testing.T, db *Store) bool {
t.Helper()
var enabled bool
err := db.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
bytes := bucket.Get(liteSupernodeKey)
if len(bytes) != 0 && bytes[0] == 1 {
enabled = true
}
return nil
})
require.NoError(t, err)
return enabled
}
func TestUpdateCustodyInfo(t *testing.T) {
ctx := t.Context()
@@ -275,17 +297,6 @@ func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
require.Equal(t, false, stored)
})
t.Run("initial update with empty database - set to true", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getSubscriptionStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
@@ -300,7 +311,7 @@ func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
require.Equal(t, true, stored)
})
t.Run("update from true to true (no change)", func(t *testing.T) {
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
@@ -314,3 +325,57 @@ func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
require.Equal(t, true, stored)
})
}
func TestUpdateLiteSupernode(t *testing.T) {
ctx := context.Background()
t.Run("initial update with empty database - set to false", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateLiteSupernode(ctx, false)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, false, stored)
})
t.Run("initial update with empty database - set to true", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
prev, err := db.UpdateLiteSupernode(ctx, false)
require.NoError(t, err)
require.Equal(t, true, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("update from true to true (no change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
prev, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
require.Equal(t, true, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, true, stored)
})
}

View File

@@ -2,13 +2,6 @@ package kv
import "bytes"
func hasPhase0Key(enc []byte) bool {
if len(phase0Key) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(phase0Key)], phase0Key)
}
// In order for an encoding to be Altair compatible, it must be prefixed with altair key.
func hasAltairKey(enc []byte) bool {
if len(altairKey) >= len(enc) {

View File

@@ -91,7 +91,6 @@ type Store struct {
blockCache *ristretto.Cache[string, interfaces.ReadOnlySignedBeaconBlock]
validatorEntryCache *ristretto.Cache[[]byte, *ethpb.Validator]
stateSummaryCache *stateSummaryCache
stateDiffCache *stateDiffCache
ctx context.Context
}
@@ -113,7 +112,6 @@ var Buckets = [][]byte{
lightClientUpdatesBucket,
lightClientBootstrapBucket,
lightClientSyncCommitteeBucket,
stateDiffBucket,
// Indices buckets.
blockSlotIndicesBucket,
stateSlotIndicesBucket,
@@ -203,14 +201,6 @@ func NewKVStore(ctx context.Context, dirPath string, opts ...KVStoreOption) (*St
return nil, err
}
if features.Get().EnableStateDiff {
sdCache, err := newStateDiffCache(kv)
if err != nil {
return nil, err
}
kv.stateDiffCache = sdCache
}
return kv, nil
}

View File

@@ -216,10 +216,6 @@ func TestStore_LightClientUpdate_CanSaveRetrieve(t *testing.T) {
db := setupDB(t)
ctx := t.Context()
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
update, err := createUpdate(t, testVersion)
require.NoError(t, err)
@@ -576,10 +572,6 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
require.IsNil(t, retrievedBootstrap)
})
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().VersionToForkEpochMap()[testVersion]) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)

View File

@@ -16,7 +16,6 @@ var (
stateValidatorsBucket = []byte("state-validators")
feeRecipientBucket = []byte("fee-recipient")
registrationBucket = []byte("registration")
stateDiffBucket = []byte("state-diff")
// Light Client Updates Bucket
lightClientUpdatesBucket = []byte("light-client-updates")
@@ -47,7 +46,6 @@ var (
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.
phase0Key = []byte("phase0")
altairKey = []byte("altair")
bellatrixKey = []byte("merge")
bellatrixBlindKey = []byte("blind-bellatrix")
@@ -79,4 +77,5 @@ var (
groupCountKey = []byte("group-count")
earliestAvailableSlotKey = []byte("earliest-available-slot")
subscribeAllDataSubnetsKey = []byte("subscribe-all-data-subnets")
liteSupernodeKey = []byte("lite-supernode")
)

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
statenative "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/genesis"
@@ -27,17 +28,6 @@ func (s *Store) State(ctx context.Context, blockRoot [32]byte) (state.BeaconStat
ctx, span := trace.StartSpan(ctx, "BeaconDB.State")
defer span.End()
startTime := time.Now()
// If state diff is enabled, we get the state from the state-diff db.
if features.Get().EnableStateDiff {
st, err := s.getStateUsingStateDiff(ctx, blockRoot)
if err != nil {
return nil, err
}
stateReadingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return st, nil
}
enc, err := s.stateBytes(ctx, blockRoot)
if err != nil {
return nil, err
@@ -427,16 +417,6 @@ func (s *Store) storeValidatorEntriesSeparately(ctx context.Context, tx *bolt.Tx
func (s *Store) HasState(ctx context.Context, blockRoot [32]byte) bool {
_, span := trace.StartSpan(ctx, "BeaconDB.HasState")
defer span.End()
if features.Get().EnableStateDiff {
hasState, err := s.hasStateUsingStateDiff(ctx, blockRoot)
if err != nil {
log.WithError(err).Error(fmt.Sprintf("error checking state existence using state-diff"))
return false
}
return hasState
}
hasState := false
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(stateBucket)
@@ -490,7 +470,7 @@ func (s *Store) DeleteState(ctx context.Context, blockRoot [32]byte) error {
return nil
}
slot, err := s.SlotByBlockRoot(ctx, blockRoot)
slot, err := s.slotByBlockRoot(ctx, tx, blockRoot[:])
if err != nil {
return err
}
@@ -832,45 +812,50 @@ func (s *Store) stateBytes(ctx context.Context, blockRoot [32]byte) ([]byte, err
return dst, err
}
// SlotByBlockRoot returns the slot of the input block root, based on state summary, block, or state.
// Check for state is only done if state diff feature is not enabled.
func (s *Store) SlotByBlockRoot(ctx context.Context, blockRoot [32]byte) (primitives.Slot, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SlotByBlockRoot")
// slotByBlockRoot retrieves the corresponding slot of the input block root.
func (s *Store) slotByBlockRoot(ctx context.Context, tx *bolt.Tx, blockRoot []byte) (primitives.Slot, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.slotByBlockRoot")
defer span.End()
// check state summary first
stateSummary, err := s.StateSummary(ctx, blockRoot)
if err != nil {
return 0, err
}
if stateSummary != nil {
return stateSummary.Slot, nil
}
bkt := tx.Bucket(stateSummaryBucket)
enc := bkt.Get(blockRoot)
// fall back to block if state summary is not found
blk, err := s.Block(ctx, blockRoot)
if err != nil {
return 0, err
}
if blk != nil && !blk.IsNil() {
return blk.Block().Slot(), nil
}
if enc == nil {
// Fall back to check the block.
bkt := tx.Bucket(blocksBucket)
enc := bkt.Get(blockRoot)
// fall back to state, only if state diff feature is not enabled
if features.Get().EnableStateDiff {
return 0, errors.New("neither state summary nor block found")
if enc == nil {
// Fall back and check the state.
bkt = tx.Bucket(stateBucket)
enc = bkt.Get(blockRoot)
if enc == nil {
return 0, errors.New("state enc can't be nil")
}
// no need to construct the validator entries as they are not used here.
s, err := s.unmarshalState(ctx, enc, nil)
if err != nil {
return 0, errors.Wrap(err, "could not unmarshal state")
}
if s == nil || s.IsNil() {
return 0, errors.New("state can't be nil")
}
return s.Slot(), nil
}
b, err := unmarshalBlock(ctx, enc)
if err != nil {
return 0, errors.Wrap(err, "could not unmarshal block")
}
if err := blocks.BeaconBlockIsNil(b); err != nil {
return 0, err
}
return b.Block().Slot(), nil
}
st, err := s.State(ctx, blockRoot)
if err != nil {
return 0, err
stateSummary := &ethpb.StateSummary{}
if err := decode(ctx, enc, stateSummary); err != nil {
return 0, errors.Wrap(err, "could not unmarshal state summary")
}
if st != nil && !st.IsNil() {
return st.Slot(), nil
}
// neither state summary, block nor state found
return 0, errors.New("neither state summary, block nor state found")
return stateSummary.Slot, nil
}
// HighestSlotStatesBelow returns the states with the highest slot below the input slot
@@ -1046,30 +1031,3 @@ func (s *Store) isStateValidatorMigrationOver() (bool, error) {
}
return returnFlag, nil
}
func (s *Store) getStateUsingStateDiff(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error) {
slot, err := s.SlotByBlockRoot(ctx, blockRoot)
if err != nil {
return nil, err
}
st, err := s.stateByDiff(ctx, slot)
if err != nil {
return nil, err
}
if st == nil || st.IsNil() {
return nil, errors.New("state not found")
}
return st, nil
}
func (s *Store) hasStateUsingStateDiff(ctx context.Context, blockRoot [32]byte) (bool, error) {
slot, err := s.SlotByBlockRoot(ctx, blockRoot)
if err != nil {
return false, err
}
stateLvl := computeLevel(s.getOffset(), slot)
return stateLvl != -1, nil
}

View File

@@ -1,232 +0,0 @@
package kv
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/consensus-types/hdiff"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
)
const (
stateSuffix = "_s"
validatorSuffix = "_v"
balancesSuffix = "_b"
)
/*
We use a level-based approach to save state diffs. Each level corresponds to a power-of-two span of slots, 2**exponents[lvl].
Level 0 data is saved every 2**exponents[0] slots and always contains a full state snapshot, which serves as the base for the deltas saved at the other levels.
*/
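// Worked example (illustrative; assumes exponents {21, 18, 16, 13, 11, 9, 5}, the values
// used by the tests in this package, and an offset of 0):
//   - slot 0 is a multiple of 2**21, so it lands on level 0 and is stored as a full snapshot.
//   - slot 2**18 is not a multiple of 2**21 but is a multiple of 2**18, so it lands on level 1
//     and is stored as a diff against the level-0 snapshot at slot 0.
//   - slot 2**18 + 2**9 first matches the 2**9 boundary, so it lands on level 5 and is stored
//     as a diff against its anchor at slot 2**18.
//   - slot 33 is not on any boundary, so computeLevel returns -1 and nothing is stored.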
// saveStateByDiff takes a state and decides whether to save a full state snapshot or a diff.
func (s *Store) saveStateByDiff(ctx context.Context, st state.ReadOnlyBeaconState) error {
_, span := trace.StartSpan(ctx, "BeaconDB.saveStateByDiff")
defer span.End()
if st == nil {
return errors.New("state is nil")
}
slot := st.Slot()
offset := s.getOffset()
if uint64(slot) < offset {
return ErrSlotBeforeOffset
}
// Find the level to save the state.
lvl := computeLevel(offset, slot)
if lvl == -1 {
return nil
}
// Save full state if level is 0.
if lvl == 0 {
return s.saveFullSnapshot(st)
}
// Get anchor state to compute the diff from.
anchorState, err := s.getAnchorState(offset, lvl, slot)
if err != nil {
return err
}
return s.saveHdiff(lvl, anchorState, st)
}
// stateByDiff retrieves the full state for a given slot.
func (s *Store) stateByDiff(ctx context.Context, slot primitives.Slot) (state.BeaconState, error) {
offset := s.getOffset()
if uint64(slot) < offset {
return nil, ErrSlotBeforeOffset
}
snapshot, diffChain, err := s.getBaseAndDiffChain(offset, slot)
if err != nil {
return nil, err
}
for _, diff := range diffChain {
if err := ctx.Err(); err != nil {
return nil, err
}
snapshot, err = hdiff.ApplyDiff(ctx, snapshot, diff)
if err != nil {
return nil, err
}
}
return snapshot, nil
}
// saveHdiff computes the diff between the anchor state and the current state and saves it to the database.
// This function must only be called with the latest finalized state, and in strictly increasing slot order.
func (s *Store) saveHdiff(lvl int, anchor, st state.ReadOnlyBeaconState) error {
slot := uint64(st.Slot())
key := makeKeyForStateDiffTree(lvl, slot)
diff, err := hdiff.Diff(anchor, st)
if err != nil {
return err
}
err = s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
buf := append(key, stateSuffix...)
if err := bucket.Put(buf, diff.StateDiff); err != nil {
return err
}
buf = append(key, validatorSuffix...)
if err := bucket.Put(buf, diff.ValidatorDiffs); err != nil {
return err
}
buf = append(key, balancesSuffix...)
if err := bucket.Put(buf, diff.BalancesDiff); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
// Save the full state to the cache (if not the last level).
if lvl != len(flags.Get().StateDiffExponents)-1 {
err = s.stateDiffCache.setAnchor(lvl, st)
if err != nil {
return err
}
}
return nil
}
// saveFullSnapshot saves the full level-0 state snapshot to the database.
func (s *Store) saveFullSnapshot(st state.ReadOnlyBeaconState) error {
slot := uint64(st.Slot())
key := makeKeyForStateDiffTree(0, slot)
stateBytes, err := st.MarshalSSZ()
if err != nil {
return err
}
// add version key to value
enc, err := addKey(st.Version(), stateBytes)
if err != nil {
return err
}
err = s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
if err := bucket.Put(key, enc); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
// Save the full state to the cache, and invalidate other levels.
s.stateDiffCache.clearAnchors()
err = s.stateDiffCache.setAnchor(0, st)
if err != nil {
return err
}
return nil
}
func (s *Store) getDiff(lvl int, slot uint64) (hdiff.HdiffBytes, error) {
key := makeKeyForStateDiffTree(lvl, slot)
var stateDiff []byte
var validatorDiff []byte
var balancesDiff []byte
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
buf := append(key, stateSuffix...)
stateDiff = bucket.Get(buf)
if stateDiff == nil {
return errors.New("state diff not found")
}
buf = append(key, validatorSuffix...)
validatorDiff = bucket.Get(buf)
if validatorDiff == nil {
return errors.New("validator diff not found")
}
buf = append(key, balancesSuffix...)
balancesDiff = bucket.Get(buf)
if balancesDiff == nil {
return errors.New("balances diff not found")
}
return nil
})
if err != nil {
return hdiff.HdiffBytes{}, err
}
return hdiff.HdiffBytes{
StateDiff: stateDiff,
ValidatorDiffs: validatorDiff,
BalancesDiff: balancesDiff,
}, nil
}
func (s *Store) getFullSnapshot(slot uint64) (state.BeaconState, error) {
key := makeKeyForStateDiffTree(0, slot)
var enc []byte
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
enc = bucket.Get(key)
if enc == nil {
return errors.New("state not found")
}
return nil
})
if err != nil {
return nil, err
}
return decodeStateSnapshot(enc)
}

View File

@@ -1,77 +0,0 @@
package kv
import (
"encoding/binary"
"errors"
"sync"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"go.etcd.io/bbolt"
)
type stateDiffCache struct {
sync.RWMutex
anchors []state.ReadOnlyBeaconState
offset uint64
}
func newStateDiffCache(s *Store) (*stateDiffCache, error) {
var offset uint64
err := s.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get([]byte("offset"))
if offsetBytes == nil {
return errors.New("state diff cache: offset not found")
}
offset = binary.LittleEndian.Uint64(offsetBytes)
return nil
})
if err != nil {
return nil, err
}
return &stateDiffCache{
anchors: make([]state.ReadOnlyBeaconState, len(flags.Get().StateDiffExponents)-1), // -1 because last level doesn't need to be cached
offset: offset,
}, nil
}
func (c *stateDiffCache) getAnchor(level int) state.ReadOnlyBeaconState {
c.RLock()
defer c.RUnlock()
return c.anchors[level]
}
func (c *stateDiffCache) setAnchor(level int, anchor state.ReadOnlyBeaconState) error {
c.Lock()
defer c.Unlock()
if level >= len(c.anchors) || level < 0 {
return errors.New("state diff cache: anchor level out of range")
}
c.anchors[level] = anchor
return nil
}
func (c *stateDiffCache) getOffset() uint64 {
c.RLock()
defer c.RUnlock()
return c.offset
}
func (c *stateDiffCache) setOffset(offset uint64) {
c.Lock()
defer c.Unlock()
c.offset = offset
}
func (c *stateDiffCache) clearAnchors() {
c.Lock()
defer c.Unlock()
c.anchors = make([]state.ReadOnlyBeaconState, len(flags.Get().StateDiffExponents)-1) // -1 because last level doesn't need to be cached
}

View File

@@ -1,250 +0,0 @@
package kv
import (
"context"
"encoding/binary"
"errors"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
statenative "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/consensus-types/hdiff"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/math"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"go.etcd.io/bbolt"
)
var (
offsetKey = []byte("offset")
ErrSlotBeforeOffset = errors.New("slot is before root offset")
)
func makeKeyForStateDiffTree(level int, slot uint64) []byte {
buf := make([]byte, 16)
buf[0] = byte(level)
binary.LittleEndian.PutUint64(buf[1:], slot)
return buf
}
func (s *Store) getAnchorState(offset uint64, lvl int, slot primitives.Slot) (anchor state.ReadOnlyBeaconState, err error) {
if lvl <= 0 || lvl > len(flags.Get().StateDiffExponents) {
return nil, errors.New("invalid value for level")
}
if uint64(slot) < offset {
return nil, ErrSlotBeforeOffset
}
relSlot := uint64(slot) - offset
prevExp := flags.Get().StateDiffExponents[lvl-1]
if prevExp < 2 || prevExp >= 64 {
return nil, fmt.Errorf("state diff exponent %d out of range for uint64", prevExp)
}
span := math.PowerOf2(uint64(prevExp))
anchorSlot := primitives.Slot(uint64(slot) - relSlot%span)
// anchorLvl can be [0, lvl-1]
anchorLvl := computeLevel(offset, anchorSlot)
if anchorLvl == -1 {
return nil, errors.New("could not compute anchor level")
}
// Check if we have the anchor in cache.
anchor = s.stateDiffCache.getAnchor(anchorLvl)
if anchor != nil {
return anchor, nil
}
// If not, load it from the database.
anchor, err = s.stateByDiff(context.Background(), anchorSlot)
if err != nil {
return nil, err
}
// Save it in the cache.
err = s.stateDiffCache.setAnchor(anchorLvl, anchor)
if err != nil {
return nil, err
}
return anchor, nil
}
// computeLevel computes the level in the diff tree. It returns -1 if the slot should not be stored in the tree.
func computeLevel(offset uint64, slot primitives.Slot) int {
rel := uint64(slot) - offset
for i, exp := range flags.Get().StateDiffExponents {
if exp < 2 || exp >= 64 {
return -1
}
span := math.PowerOf2(uint64(exp))
if rel%span == 0 {
return i
}
}
// If rel isn't on any of the boundaries, we skip saving it.
return -1
}
func (s *Store) setOffset(slot primitives.Slot) error {
err := s.db.Update(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get(offsetKey)
if offsetBytes != nil {
return fmt.Errorf("offset already set to %d", binary.LittleEndian.Uint64(offsetBytes))
}
offsetBytes = make([]byte, 8)
binary.LittleEndian.PutUint64(offsetBytes, uint64(slot))
if err := bucket.Put(offsetKey, offsetBytes); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
// Save the offset in the cache.
s.stateDiffCache.setOffset(uint64(slot))
return nil
}
func (s *Store) getOffset() uint64 {
return s.stateDiffCache.getOffset()
}
func keyForSnapshot(v int) ([]byte, error) {
switch v {
case version.Fulu:
return fuluKey, nil
case version.Electra:
return ElectraKey, nil
case version.Deneb:
return denebKey, nil
case version.Capella:
return capellaKey, nil
case version.Bellatrix:
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
case version.Phase0:
return phase0Key, nil
default:
return nil, errors.New("unsupported fork")
}
}
func addKey(v int, bytes []byte) ([]byte, error) {
key, err := keyForSnapshot(v)
if err != nil {
return nil, err
}
enc := make([]byte, len(key)+len(bytes))
copy(enc, key)
copy(enc[len(key):], bytes)
return enc, nil
}
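// For example (illustrative): addKey(version.Fulu, ssz) returns fuluKey followed by the SSZ
// bytes, and decodeStateSnapshot later detects that prefix via hasFuluKey and strips
// len(fuluKey) bytes before unmarshaling the remainder into a BeaconStateFulu.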
func decodeStateSnapshot(enc []byte) (state.BeaconState, error) {
switch {
case hasFuluKey(enc):
var fuluState ethpb.BeaconStateFulu
if err := fuluState.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeFulu(&fuluState)
case HasElectraKey(enc):
var electraState ethpb.BeaconStateElectra
if err := electraState.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeElectra(&electraState)
case hasDenebKey(enc):
var denebState ethpb.BeaconStateDeneb
if err := denebState.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeDeneb(&denebState)
case hasCapellaKey(enc):
var capellaState ethpb.BeaconStateCapella
if err := capellaState.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeCapella(&capellaState)
case hasBellatrixKey(enc):
var bellatrixState ethpb.BeaconStateBellatrix
if err := bellatrixState.UnmarshalSSZ(enc[len(bellatrixKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeBellatrix(&bellatrixState)
case hasAltairKey(enc):
var altairState ethpb.BeaconStateAltair
if err := altairState.UnmarshalSSZ(enc[len(altairKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeAltair(&altairState)
case hasPhase0Key(enc):
var phase0State ethpb.BeaconState
if err := phase0State.UnmarshalSSZ(enc[len(phase0Key):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafePhase0(&phase0State)
default:
return nil, errors.New("unsupported fork")
}
}
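// getBaseAndDiffChain returns the level-0 snapshot that anchors the given slot together with
// the ordered chain of diffs that must be applied on top of it to reach that slot.
// Worked example (illustrative; exponents {21, 18, 16, 13, 11, 9, 5}, offset 0): for slot
// 2**11 + 2**9 + 2**5 the base is the snapshot at slot 0 and the diff chain is the level-4
// diff at 2**11, the level-5 diff at 2**11 + 2**9, and the level-6 diff at the requested slot.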
func (s *Store) getBaseAndDiffChain(offset uint64, slot primitives.Slot) (state.BeaconState, []hdiff.HdiffBytes, error) {
if uint64(slot) < offset {
return nil, nil, ErrSlotBeforeOffset
}
rel := uint64(slot) - offset
lvl := computeLevel(offset, slot)
if lvl == -1 {
return nil, nil, errors.New("slot not in tree")
}
exponents := flags.Get().StateDiffExponents
baseSpan := math.PowerOf2(uint64(exponents[0]))
baseAnchorSlot := uint64(slot) - rel%baseSpan
type diffItem struct {
level int
slot uint64
}
var diffChainItems []diffItem
lastSeenAnchorSlot := baseAnchorSlot
for i, exp := range exponents[1 : lvl+1] {
span := math.PowerOf2(uint64(exp))
diffSlot := rel / span * span
if diffSlot == lastSeenAnchorSlot {
continue
}
diffChainItems = append(diffChainItems, diffItem{level: i + 1, slot: diffSlot + offset})
lastSeenAnchorSlot = diffSlot
}
baseSnapshot, err := s.getFullSnapshot(baseAnchorSlot)
if err != nil {
return nil, nil, err
}
diffChain := make([]hdiff.HdiffBytes, 0, len(diffChainItems))
for _, item := range diffChainItems {
diff, err := s.getDiff(item.level, item.slot)
if err != nil {
return nil, nil, err
}
diffChain = append(diffChain, diff)
}
return baseSnapshot, diffChain, nil
}

View File

@@ -1,662 +0,0 @@
package kv
import (
"context"
"encoding/binary"
"fmt"
"math/rand"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/math"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"go.etcd.io/bbolt"
)
func TestStateDiff_LoadOrInitOffset(t *testing.T) {
setDefaultStateDiffExponents()
db := setupDB(t)
err := setOffsetInDB(db, 10)
require.NoError(t, err)
offset := db.getOffset()
require.Equal(t, uint64(10), offset)
err = db.setOffset(10)
require.ErrorContains(t, "offset already set", err)
offset = db.getOffset()
require.Equal(t, uint64(10), offset)
}
func TestStateDiff_ComputeLevel(t *testing.T) {
db := setupDB(t)
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
offset := db.getOffset()
// 2 ** 21
lvl := computeLevel(offset, primitives.Slot(math.PowerOf2(21)))
require.Equal(t, 0, lvl)
// 2 ** 21 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(21)*3))
require.Equal(t, 0, lvl)
// 2 ** 18
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(18)))
require.Equal(t, 1, lvl)
// 2 ** 18 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(18)*3))
require.Equal(t, 1, lvl)
// 2 ** 16
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(16)))
require.Equal(t, 2, lvl)
// 2 ** 16 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(16)*3))
require.Equal(t, 2, lvl)
// 2 ** 13
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(13)))
require.Equal(t, 3, lvl)
// 2 ** 13 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(13)*3))
require.Equal(t, 3, lvl)
// 2 ** 11
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(11)))
require.Equal(t, 4, lvl)
// 2 ** 11 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(11)*3))
require.Equal(t, 4, lvl)
// 2 ** 9
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(9)))
require.Equal(t, 5, lvl)
// 2 ** 9 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(9)*3))
require.Equal(t, 5, lvl)
// 2 ** 5
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)))
require.Equal(t, 6, lvl)
// 2 ** 5 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)*3))
require.Equal(t, 6, lvl)
// 2 ** 7
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(7)))
require.Equal(t, 6, lvl)
// 2 ** 5 + 1
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)+1))
require.Equal(t, -1, lvl)
// 2 ** 5 + 16
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)+16))
require.Equal(t, -1, lvl)
// 2 ** 5 + 32
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)+32))
require.Equal(t, 6, lvl)
}
func TestStateDiff_SaveFullSnapshot(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
// Create state with slot 0
st, enc := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
err = db.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
s := bucket.Get(makeKeyForStateDiffTree(0, uint64(0)))
if s == nil {
return bbolt.ErrIncompatibleValue
}
require.DeepSSZEqual(t, enc, s)
return nil
})
require.NoError(t, err)
})
}
}
func TestStateDiff_SaveAndReadFullSnapshot(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), 0)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveDiff(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
// Create state with slot 2**21
slot := primitives.Slot(math.PowerOf2(21))
st, enc := createState(t, slot, v)
err := setOffsetInDB(db, uint64(slot))
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
err = db.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
s := bucket.Get(makeKeyForStateDiffTree(0, uint64(slot)))
if s == nil {
return bbolt.ErrIncompatibleValue
}
require.DeepSSZEqual(t, enc, s)
return nil
})
require.NoError(t, err)
// create state with slot 2**18 (+2**21)
slot = primitives.Slot(math.PowerOf2(18) + math.PowerOf2(21))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
key := makeKeyForStateDiffTree(1, uint64(slot))
err = db.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
buf := append(key, "_s"...)
s := bucket.Get(buf)
if s == nil {
return bbolt.ErrIncompatibleValue
}
buf = append(key, "_v"...)
v := bucket.Get(buf)
if v == nil {
return bbolt.ErrIncompatibleValue
}
buf = append(key, "_b"...)
b := bucket.Get(buf)
if b == nil {
return bbolt.ErrIncompatibleValue
}
return nil
})
require.NoError(t, err)
})
}
}
func TestStateDiff_SaveAndReadDiff(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(5))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveAndReadDiff_WithRepetitiveAnchorSlots(t *testing.T) {
globalFlags := flags.GlobalFlags{
StateDiffExponents: []int{20, 14, 10, 7, 5},
}
flags.Init(&globalFlags)
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
err := setOffsetInDB(db, 0)
st, _ := createState(t, 0, v)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(11))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot = primitives.Slot(math.PowerOf2(11) + math.PowerOf2(5))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveAndReadDiff_MultipleLevels(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(11))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
slot = primitives.Slot(math.PowerOf2(11) + math.PowerOf2(9))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err = db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err = st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err = readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
slot = primitives.Slot(math.PowerOf2(11) + math.PowerOf2(9) + math.PowerOf2(5))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err = db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err = st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err = readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveAndReadDiffForkTransition(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All()[:len(version.All())-1] {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(5))
st, _ = createState(t, slot, v+1)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_OffsetCache(t *testing.T) {
setDefaultStateDiffExponents()
// test for slot numbers 0 and 1 for every version
for slotNum := range 2 {
for v := range version.All() {
t.Run(fmt.Sprintf("slotNum=%d,%s", slotNum, version.String(v)), func(t *testing.T) {
db := setupDB(t)
slot := primitives.Slot(slotNum)
err := setOffsetInDB(db, uint64(slot))
require.NoError(t, err)
st, _ := createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
offset := db.stateDiffCache.getOffset()
require.Equal(t, uint64(slotNum), offset)
slot2 := primitives.Slot(uint64(slotNum) + math.PowerOf2(uint64(flags.Get().StateDiffExponents[0])))
st2, _ := createState(t, slot2, v)
err = db.saveStateByDiff(context.Background(), st2)
require.NoError(t, err)
offset = db.stateDiffCache.getOffset()
require.Equal(t, uint64(slot), offset)
})
}
}
}
func TestStateDiff_AnchorCache(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
exponents := flags.Get().StateDiffExponents
localCache := make([]state.ReadOnlyBeaconState, len(exponents)-1)
db := setupDB(t)
err := setOffsetInDB(db, 0) // lvl 0
require.NoError(t, err)
// at first the cache should be empty
for i := 0; i < len(flags.Get().StateDiffExponents)-1; i++ {
anchor := db.stateDiffCache.getAnchor(i)
require.IsNil(t, anchor)
}
// add level 0
slot := primitives.Slot(0) // offset 0 is already set
st, _ := createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
localCache[0] = st
// level 0 should be the same
require.DeepEqual(t, localCache[0], db.stateDiffCache.getAnchor(0))
// rest of the cache should be nil
for i := 1; i < len(exponents)-1; i++ {
require.IsNil(t, db.stateDiffCache.getAnchor(i))
}
// skip last level as it does not get cached
for i := len(exponents) - 2; i > 0; i-- {
slot = primitives.Slot(math.PowerOf2(uint64(exponents[i])))
st, _ := createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
localCache[i] = st
// anchor cache must match local cache
for i := 0; i < len(exponents)-1; i++ {
if localCache[i] == nil {
require.IsNil(t, db.stateDiffCache.getAnchor(i))
continue
}
localSSZ, err := localCache[i].MarshalSSZ()
require.NoError(t, err)
anchorSSZ, err := db.stateDiffCache.getAnchor(i).MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, localSSZ, anchorSSZ)
}
}
// moving to a new tree should invalidate the cache except for level 0
twoTo21 := math.PowerOf2(21)
slot = primitives.Slot(twoTo21)
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
localCache = make([]state.ReadOnlyBeaconState, len(exponents)-1)
localCache[0] = st
// level 0 should be the same
require.DeepEqual(t, localCache[0], db.stateDiffCache.getAnchor(0))
// rest of the cache should be nil
for i := 1; i < len(exponents)-1; i++ {
require.IsNil(t, db.stateDiffCache.getAnchor(i))
}
})
}
}
func TestStateDiff_EncodingAndDecoding(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
st, enc := createState(t, 0, v) // this has addKey called inside
stDecoded, err := decodeStateSnapshot(enc)
require.NoError(t, err)
st1ssz, err := st.MarshalSSZ()
require.NoError(t, err)
st2ssz, err := stDecoded.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, st1ssz, st2ssz)
})
}
}
func createState(t *testing.T, slot primitives.Slot, v int) (state.ReadOnlyBeaconState, []byte) {
p := params.BeaconConfig()
var st state.BeaconState
var err error
switch v {
case version.Phase0:
st, err = util.NewBeaconState()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.GenesisForkVersion,
CurrentVersion: p.GenesisForkVersion,
Epoch: 0,
})
require.NoError(t, err)
case version.Altair:
st, err = util.NewBeaconStateAltair()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.GenesisForkVersion,
CurrentVersion: p.AltairForkVersion,
Epoch: p.AltairForkEpoch,
})
require.NoError(t, err)
case version.Bellatrix:
st, err = util.NewBeaconStateBellatrix()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.AltairForkVersion,
CurrentVersion: p.BellatrixForkVersion,
Epoch: p.BellatrixForkEpoch,
})
require.NoError(t, err)
case version.Capella:
st, err = util.NewBeaconStateCapella()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.BellatrixForkVersion,
CurrentVersion: p.CapellaForkVersion,
Epoch: p.CapellaForkEpoch,
})
require.NoError(t, err)
case version.Deneb:
st, err = util.NewBeaconStateDeneb()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.CapellaForkVersion,
CurrentVersion: p.DenebForkVersion,
Epoch: p.DenebForkEpoch,
})
require.NoError(t, err)
case version.Electra:
st, err = util.NewBeaconStateElectra()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.DenebForkVersion,
CurrentVersion: p.ElectraForkVersion,
Epoch: p.ElectraForkEpoch,
})
require.NoError(t, err)
case version.Fulu:
st, err = util.NewBeaconStateFulu()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.ElectraForkVersion,
CurrentVersion: p.FuluForkVersion,
Epoch: p.FuluForkEpoch,
})
require.NoError(t, err)
default:
t.Fatalf("unsupported version: %d", v)
}
err = st.SetSlot(slot)
require.NoError(t, err)
slashings := make([]uint64, 8192)
slashings[0] = uint64(rand.Intn(10))
err = st.SetSlashings(slashings)
require.NoError(t, err)
stssz, err := st.MarshalSSZ()
require.NoError(t, err)
enc, err := addKey(v, stssz)
require.NoError(t, err)
return st, enc
}
func setOffsetInDB(s *Store, offset uint64) error {
err := s.db.Update(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get(offsetKey)
if offsetBytes != nil {
return fmt.Errorf("offset already set to %d", binary.LittleEndian.Uint64(offsetBytes))
}
offsetBytes = make([]byte, 8)
binary.LittleEndian.PutUint64(offsetBytes, offset)
if err := bucket.Put(offsetKey, offsetBytes); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
sdCache, err := newStateDiffCache(s)
if err != nil {
return err
}
s.stateDiffCache = sdCache
return nil
}
func setDefaultStateDiffExponents() {
globalFlags := flags.GlobalFlags{
StateDiffExponents: []int{21, 18, 16, 13, 11, 9, 5},
}
flags.Init(&globalFlags)
}

View File

@@ -1,7 +1,6 @@
package kv
import (
"context"
"crypto/rand"
"encoding/binary"
mathRand "math/rand"
@@ -10,7 +9,6 @@ import (
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -19,14 +17,11 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/genesis"
"github.com/OffchainLabs/prysm/v7/math"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
bolt "go.etcd.io/bbolt"
)
@@ -1334,297 +1329,3 @@ func TestStore_CleanUpDirtyStates_NoOriginRoot(t *testing.T) {
}
}
}
func TestStore_CanSaveRetrieveStateUsingStateDiff(t *testing.T) {
t.Run("No state summary or block", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
readSt, err := db.State(context.Background(), [32]byte{'A'})
require.IsNil(t, readSt)
require.ErrorContains(t, "neither state summary nor block found", err)
})
t.Run("Slot not in tree", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 1, Root: r[:]} // slot 1 not in tree
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.ErrorContains(t, "slot not in tree", err)
require.IsNil(t, readSt)
})
t.Run("State not found", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 32, Root: r[:]} // slot 32 is in tree
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.ErrorContains(t, "state not found", err)
require.IsNil(t, readSt)
})
t.Run("Full state snapshot", func(t *testing.T) {
t.Run("using state summary", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 0, Root: r[:]}
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
t.Run("using block", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
blk := util.NewBeaconBlock()
blk.Block.Slot = 0
signedBlk, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = db.SaveBlock(context.Background(), signedBlk)
require.NoError(t, err)
r, err := signedBlk.Block().HashTreeRoot()
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
})
t.Run("Diffed state", func(t *testing.T) {
t.Run("using state summary", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
exponents := flags.Get().StateDiffExponents
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot = primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])) + math.PowerOf2(uint64(exponents[len(exponents)-1])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: slot, Root: r[:]}
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
t.Run("using block", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
exponents := flags.Get().StateDiffExponents
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot = primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])) + math.PowerOf2(uint64(exponents[len(exponents)-1])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
blk := util.NewBeaconBlock()
blk.Block.Slot = slot
signedBlk, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = db.SaveBlock(context.Background(), signedBlk)
require.NoError(t, err)
r, err := signedBlk.Block().HashTreeRoot()
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
})
}
func TestStore_HasStateUsingStateDiff(t *testing.T) {
t.Run("No state summary or block", func(t *testing.T) {
hook := logTest.NewGlobal()
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
hasSt := db.HasState(t.Context(), [32]byte{'A'})
require.Equal(t, false, hasSt)
require.LogsContain(t, hook, "neither state summary nor block found")
})
t.Run("slot in tree or not", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
testCases := []struct {
slot primitives.Slot
expected bool
}{
{slot: 1, expected: false}, // slot 1 not in tree
{slot: 32, expected: true}, // slot 32 in tree
{slot: 0, expected: true}, // slot 0 in tree
{slot: primitives.Slot(math.PowerOf2(21)), expected: true}, // slot in tree
{slot: primitives.Slot(math.PowerOf2(21) - 1), expected: false}, // slot not in tree
{slot: primitives.Slot(math.PowerOf2(22)), expected: true}, // slot in tree
}
for _, tc := range testCases {
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: tc.slot, Root: r[:]}
err = db.SaveStateSummary(t.Context(), ss)
require.NoError(t, err)
hasSt := db.HasState(t.Context(), r)
require.Equal(t, tc.expected, hasSt)
}
})
}

View File

@@ -532,19 +532,12 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
defer span.End()
start := time.Now()
if !s.capabilityCache.has(GetBlobsV2) {
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
if len(result) != 0 {
getBlobsV2Latency.Observe(float64(time.Since(start).Milliseconds()))
}
return result, handleRPCError(err)
}

View File

@@ -2683,7 +2683,7 @@ func createBlobServerV2(t *testing.T, numBlobs int, blobMasks []bool) *httptest.
Blob: []byte("0xblob"),
KzgProofs: []hexutil.Bytes{},
}
for range fieldparams.NumberOfColumns {
for j := 0; j < int(params.BeaconConfig().NumberOfColumns); j++ {
cellProof := make([]byte, 48)
blobAndCellProofs[i].KzgProofs = append(blobAndCellProofs[i].KzgProofs, cellProof)
}

View File

@@ -27,13 +27,6 @@ var (
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
getBlobsV2Latency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_blobs_v2_latency_milliseconds",
Help: "Captures RPC latency for getBlobsV2 in milliseconds",
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
errParseCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "execution_parse_error_count",
Help: "The number of errors that occurred while parsing execution payload",

View File

@@ -240,7 +240,7 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
if node.slot == epochStart {
return true, nil
}
if !features.Get().IgnoreUnviableAttestations {
if !features.Get().DisableLastEpochTargets {
// Allow any node from the checkpoint epoch - 1 to be viable.
nodeEpoch := slots.ToEpoch(node.slot)
if nodeEpoch+1 == cp.Epoch {
@@ -642,12 +642,8 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
if !ok || node == nil {
return [32]byte{}, ErrNilNode
}
if slots.ToEpoch(node.slot) >= epoch {
if node.parent != nil {
node = node.parent
} else {
return f.store.finalizedDependentRoot, nil
}
if slots.ToEpoch(node.slot) >= epoch && node.parent != nil {
node = node.parent
}
return node.root, nil
}

View File

@@ -212,9 +212,6 @@ func (s *Store) prune(ctx context.Context) error {
return nil
}
// Save the new finalized dependent root because it will be pruned
s.finalizedDependentRoot = finalizedNode.parent.root
// Prune nodeByRoot starting from root
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
return err

View File

@@ -465,7 +465,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
ctx := t.Context()
f := setup(1, 1)
// Insert a block in slot 32
state, blk, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
@@ -476,7 +475,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, [32]byte{})
// Insert a block in slot 33
state, blk1, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'b'}, blk.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk1))
@@ -490,7 +488,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, [32]byte{})
// Insert a block for the next epoch (missed slot 0), slot 65
// Insert a block for the next epoch (missed slot 0)
state, blk2, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'c'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
@@ -511,7 +509,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, blk1.Root())
// Insert a block at slot 66
state, blk3, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+2, [32]byte{'d'}, blk2.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk3))
@@ -536,11 +533,8 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
dependent, err = f.DependentRoot(1)
require.NoError(t, err)
require.Equal(t, [32]byte{}, dependent)
dependent, err = f.DependentRoot(2)
require.NoError(t, err)
require.Equal(t, blk1.Root(), dependent)
// Insert a block for the next epoch, slot 96 (descends from finalized at slot 33)
// Insert a block for the next epoch (slot 0 present)
state, blk4, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch, [32]byte{'e'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk4))
@@ -557,7 +551,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, blk1.Root())
// Insert a block at slot 97
state, blk5, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'f'}, blk4.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk5))
@@ -607,16 +600,12 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, target, blk1.Root())
// Prune finalization, finalize the block at slot 96
// Prune finalization
s.finalizedCheckpoint.Root = blk4.Root()
require.NoError(t, s.prune(ctx))
target, err = f.TargetRootForEpoch(blk4.Root(), 3)
require.NoError(t, err)
require.Equal(t, blk4.Root(), target)
// Dependent root for the finalized block should be the root of the pruned block at slot 33
dependent, err = f.DependentRootForEpoch(blk4.Root(), 3)
require.NoError(t, err)
require.Equal(t, blk1.Root(), dependent)
}
func TestStore_DependentRootForEpoch(t *testing.T) {

View File

@@ -31,7 +31,6 @@ type Store struct {
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node

View File

@@ -1,95 +0,0 @@
# Graffiti Version Info Implementation
## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.
More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md
## Implementation
### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"
type GraffitiInfo struct {
mu sync.RWMutex
userGraffiti string // From --graffiti flag (set once at startup)
clCommit string // From version.GetCommitPrefix() helper function
elCode string // From engine_getClientVersionV1
elCommit string // From engine_getClientVersionV1
}
```
### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server (see the sketch after this list).
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields.
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
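A rough wiring sketch for steps 1 and 2 is below. The constructor, option, and config field names (`NewGraffitiInfo`, `WithGraffitiInfo`, `GraffitiInfo` on the RPC config) are illustrative assumptions, not the final API:
```go
// beacon-chain/node/node.go (sketch): build one GraffitiInfo and hand it to both consumers.
gi := execution.NewGraffitiInfo(
	cliCtx.String("graffiti"),  // user graffiti, set once at startup
	version.GetCommitPrefix(),  // CL commit prefix baked in at build time
)

// The execution service keeps the EL fields fresh from engine_getClientVersionV1 ...
execOpts = append(execOpts, execution.WithGraffitiInfo(gi))

// ... and the validator RPC server reads it at proposal time via GenerateGraffiti().
rpcConfig.GraffitiInfo = gi
```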
### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):
| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |
```go
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
available := 32 - len(userGraffiti)
if elCode == "" {
elCommit2, elCommit4 = "", ""
}
switch {
case available >= 12:
return elCode + elCommit4 + clCode + clCommit4 + userGraffiti
case available >= 8:
return elCode + elCommit2 + clCode + clCommit2 + userGraffiti
case available >= 4:
return elCode + clCode + userGraffiti
case available >= 2:
return elCode + userGraffiti
default:
return userGraffiti
}
}
```
### Update Logic
Single testable function in execution service:
```go
func (s *Service) updateGraffitiInfo() {
versions, err := s.GetClientVersion(ctx)
if err != nil {
return // Keep last good value
}
if len(versions) == 1 {
s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
}
}
```
A goroutine calls this when `slot % 8 == 4` (four times per epoch, avoiding slot boundaries).
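A minimal sketch of that loop, assuming the service keeps a genesis time and uses Prysm's slot ticker (the field and helper names are illustrative):
```go
func (s *Service) runGraffitiInfoUpdater(ctx context.Context) {
	ticker := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
	defer ticker.Done()
	for {
		select {
		case slot := <-ticker.C():
			if slot%8 == 4 { // four times per epoch, away from slot boundaries
				s.updateGraffitiInfo()
			}
		case <-ctx.Done():
			return
		}
	}
}
```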
### File Changes Required
**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts first 4 hex chars from the git commit injected via Bazel ldflags at build time
**Modified:**
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
- `beacon-chain/execution/engine_client.go` - Add GetClientVersion() method that does engine call
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety)
- Unit test updateGraffitiInfo() with mocked engine client

View File

@@ -33,10 +33,6 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
params.OverrideBeaconConfig(cfg)
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)

View File

@@ -23,7 +23,6 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",

View File

@@ -26,7 +26,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache/depositsnapshot"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/kv"
@@ -117,7 +116,7 @@ type BeaconNode struct {
GenesisProviders []genesis.Provider
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
ClockWaiter startup.ClockWaiter
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
@@ -130,7 +129,6 @@ type BeaconNode struct {
slasherEnabled bool
lcStore *lightclient.Store
ConfigOptions []params.Option
SyncNeedsWaiter func() (das.SyncNeeds, error)
}
// New creates a new node instance, sets up configuration options, and registers
@@ -195,7 +193,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
params.LogDigests(params.BeaconConfig())
synchronizer := startup.NewClockSynchronizer()
beacon.ClockWaiter = synchronizer
beacon.clockWaiter = synchronizer
beacon.forkChoicer = doublylinkedtree.New()
depositAddress, err := execution.DepositContractAddress()
@@ -235,13 +233,12 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.lhsp = &verification.LazyHeadStateProvider{}
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.ClockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen, beacon.lhsp)
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen, beacon.lhsp)
beacon.BackfillOpts = append(
beacon.BackfillOpts,
backfill.WithVerifierWaiter(beacon.verifyInitWaiter),
backfill.WithInitSyncWaiter(initSyncWaiter(ctx, beacon.initialSyncComplete)),
backfill.WithSyncNeedsWaiter(beacon.SyncNeedsWaiter),
)
if err := registerServices(cliCtx, beacon, synchronizer, bfs); err != nil {
@@ -283,10 +280,7 @@ func configureBeacon(cliCtx *cli.Context) error {
return errors.Wrap(err, "could not configure beacon chain")
}
err := flags.ConfigureGlobalFlags(cliCtx)
if err != nil {
return errors.Wrap(err, "could not configure global flags")
}
flags.ConfigureGlobalFlags(cliCtx)
if err := configureChainConfig(cliCtx); err != nil {
return errors.Wrap(err, "could not configure chain config")
@@ -666,8 +660,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
EnableUPnP: cliCtx.Bool(cmd.EnableUPnPFlag.Name),
StateNotifier: b,
DB: b.db,
StateGen: b.stateGen,
ClockWaiter: b.ClockWaiter,
ClockWaiter: b.clockWaiter,
})
if err != nil {
return err
@@ -709,7 +702,7 @@ func (b *BeaconNode) registerSlashingPoolService() error {
return err
}
s := slashings.NewPoolService(b.ctx, b.slashingsPool, slashings.WithElectraTimer(b.ClockWaiter, chainService.CurrentSlot))
s := slashings.NewPoolService(b.ctx, b.slashingsPool, slashings.WithElectraTimer(b.clockWaiter, chainService.CurrentSlot))
return b.services.RegisterService(s)
}
@@ -831,7 +824,7 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
regularsync.WithSlasherBlockHeadersFeed(b.slasherBlockHeadersFeed),
regularsync.WithReconstructor(web3Service),
regularsync.WithClockWaiter(b.ClockWaiter),
regularsync.WithClockWaiter(b.clockWaiter),
regularsync.WithInitialSyncComplete(initialSyncComplete),
regularsync.WithStateNotifier(b),
regularsync.WithBlobStorage(b.BlobStorage),
@@ -862,8 +855,7 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
P2P: b.fetchP2P(),
StateNotifier: b,
BlockNotifier: b,
ClockWaiter: b.ClockWaiter,
SyncNeedsWaiter: b.SyncNeedsWaiter,
ClockWaiter: b.clockWaiter,
InitialSyncComplete: complete,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
@@ -894,7 +886,7 @@ func (b *BeaconNode) registerSlasherService() error {
SlashingPoolInserter: b.slashingsPool,
SyncChecker: syncService,
HeadStateFetcher: chainService,
ClockWaiter: b.ClockWaiter,
ClockWaiter: b.clockWaiter,
})
if err != nil {
return err
@@ -987,7 +979,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
MaxMsgSize: maxMsgSize,
BlockBuilder: b.fetchBuilderService(),
Router: router,
ClockWaiter: b.ClockWaiter,
ClockWaiter: b.clockWaiter,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
@@ -1132,7 +1124,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.DataColumnStorage, b.ClockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")
}

View File

@@ -120,9 +120,6 @@ func (s *Service) aggregateAndSaveForkChoiceAtts(atts []ethpb.Att) error {
return err
}
if features.Get().EnableExperimentalAttestationPool {
return s.cfg.Cache.SaveForkchoiceAttestations(aggregatedAtts)
}
return s.cfg.Pool.SaveForkchoiceAttestations(aggregatedAtts)
}

View File

@@ -20,7 +20,7 @@ func (s *Service) pruneExpired() {
s.pruneExpiredAtts()
s.updateMetrics()
case <-s.ctx.Done():
log.Debug("Context closed, exiting att pruning routine")
log.Debug("Context closed, exiting routine")
return
}
}
@@ -28,13 +28,11 @@ func (s *Service) pruneExpired() {
// pruneExpiredExperimental prunes attestations on every prune interval.
func (s *Service) pruneExpiredExperimental() {
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
offset := time.Duration(secondsPerSlot-1) * time.Second
slotTicker := slots.NewSlotTickerWithOffset(s.genesisTime, offset, secondsPerSlot)
defer slotTicker.Done()
ticker := time.NewTicker(s.cfg.pruneInterval)
defer ticker.Stop()
for {
select {
case <-slotTicker.C():
case <-ticker.C:
expirySlot, err := s.expirySlot()
if err != nil {
log.WithError(err).Error("Could not get expiry slot")
@@ -43,7 +41,7 @@ func (s *Service) pruneExpiredExperimental() {
numExpired := s.cfg.Cache.PruneBefore(expirySlot)
s.updateMetricsExperimental(numExpired)
case <-s.ctx.Done():
log.Debug("Context closed, exiting att pruning routine")
log.Debug("Context closed, exiting routine")
return
}
}
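
The hunk above interleaves two pruning triggers: a plain fixed-interval `time.Ticker` driven by `cfg.pruneInterval`, and a slot ticker with an offset of `SecondsPerSlot-1`, i.e. one second before the next slot boundary. The self-contained sketch below contrasts the two; `nextTickAfter`, the hard-coded slot length, and the genesis time are placeholders for illustration only.

```go
// Minimal sketch of fixed-interval pruning versus slot-aligned pruning with an
// offset near the end of each slot, as seen in the hunk above.
package main

import (
	"fmt"
	"time"
)

const secondsPerSlot = 12

// nextTickAfter returns the next wall-clock time that is `offset` into a slot.
func nextTickAfter(genesis time.Time, offset time.Duration, now time.Time) time.Time {
	slotLen := secondsPerSlot * time.Second
	elapsed := now.Sub(genesis)
	slotStart := genesis.Add(elapsed.Truncate(slotLen))
	tick := slotStart.Add(offset)
	if !tick.After(now) {
		tick = tick.Add(slotLen)
	}
	return tick
}

func main() {
	genesis := time.Now().Add(-90 * time.Second)
	offset := (secondsPerSlot - 1) * time.Second // one second before the next slot

	// Variant A: fixed interval, drifts relative to slot boundaries.
	tA := time.NewTicker(secondsPerSlot * time.Second)
	tA.Stop()

	// Variant B: slot-aligned with an offset, as NewSlotTickerWithOffset provides.
	next := nextTickAfter(genesis, offset, time.Now())
	fmt.Println("next prune at", next)
}
```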

View File

@@ -23,7 +23,8 @@ func TestPruneExpired_Ticker(t *testing.T) {
defer cancel()
s, err := NewService(ctx, &Config{
Pool: NewPool(),
Pool: NewPool(),
pruneInterval: 250 * time.Millisecond,
})
require.NoError(t, err)

View File

@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/config/params"
lru "github.com/hashicorp/golang-lru"
)
@@ -30,6 +31,7 @@ type Service struct {
type Config struct {
Cache *cache.AttestationCache
Pool Pool
pruneInterval time.Duration
InitialSyncComplete chan struct{}
}
@@ -38,6 +40,11 @@ type Config struct {
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
cache := lruwrpr.New(forkChoiceProcessedAttsSize)
if cfg.pruneInterval == 0 {
// Prune expired attestations from the pool every slot interval.
cfg.pruneInterval = time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
}
ctx, cancel := context.WithCancel(ctx)
return &Service{
cfg: cfg,

View File

@@ -56,7 +56,6 @@ go_library(
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -154,7 +153,6 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
@@ -163,7 +161,6 @@ go_test(
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -7,7 +7,6 @@ import (
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/sirupsen/logrus"
)
@@ -50,7 +49,6 @@ type Config struct {
IPColocationWhitelist []*net.IPNet
StateNotifier statefeed.Notifier
DB db.ReadOnlyDatabaseWithSeqNum
StateGen stategen.StateManager
ClockWaiter startup.ClockWaiter
}

View File

@@ -156,13 +156,29 @@ func (s *Service) retrieveActiveValidators() (uint64, error) {
if s.activeValidatorCount != 0 {
return s.activeValidatorCount, nil
}
finalizedCheckpoint, err := s.cfg.DB.FinalizedCheckpoint(s.ctx)
rt := s.cfg.DB.LastArchivedRoot(s.ctx)
if rt == params.BeaconConfig().ZeroHash {
genState, err := s.cfg.DB.GenesisState(s.ctx)
if err != nil {
return 0, err
}
if genState == nil || genState.IsNil() {
return 0, errors.New("no genesis state exists")
}
activeVals, err := helpers.ActiveValidatorCount(context.Background(), genState, coreTime.CurrentEpoch(genState))
if err != nil {
return 0, err
}
// Cache active validator count
s.activeValidatorCount = activeVals
return activeVals, nil
}
bState, err := s.cfg.DB.State(s.ctx, rt)
if err != nil {
return 0, err
}
bState, err := s.cfg.StateGen.StateByRoot(s.ctx, [32]byte(finalizedCheckpoint.Root))
if err != nil {
return 0, err
if bState == nil || bState.IsNil() {
return 0, errors.Errorf("no state with root %#x exists", rt)
}
activeVals, err := helpers.ActiveValidatorCount(context.Background(), bState, coreTime.CurrentEpoch(bState))
if err != nil {

View File

@@ -1,14 +1,10 @@
package p2p
import (
"context"
"testing"
iface "github.com/OffchainLabs/prysm/v7/beacon-chain/db/iface"
dbutil "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
mockstategen "github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen/mock"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -24,11 +20,9 @@ func TestCorrect_ActiveValidatorsCount(t *testing.T) {
params.OverrideBeaconConfig(cfg)
db := dbutil.SetupDB(t)
wrappedDB := &finalizedCheckpointDB{ReadOnlyDatabaseWithSeqNum: db}
stateGen := mockstategen.NewService()
s := &Service{
ctx: t.Context(),
cfg: &Config{DB: wrappedDB, StateGen: stateGen},
cfg: &Config{DB: db},
}
bState, err := util.NewBeaconState(func(state *ethpb.BeaconState) error {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
@@ -45,10 +39,6 @@ func TestCorrect_ActiveValidatorsCount(t *testing.T) {
})
require.NoError(t, err)
require.NoError(t, db.SaveGenesisData(s.ctx, bState))
checkpoint, err := db.FinalizedCheckpoint(s.ctx)
require.NoError(t, err)
wrappedDB.finalized = checkpoint
stateGen.AddStateForRoot(bState, bytesutil.ToBytes32(checkpoint.Root))
vals, err := s.retrieveActiveValidators()
assert.NoError(t, err, "genesis state not retrieved")
@@ -62,10 +52,7 @@ func TestCorrect_ActiveValidatorsCount(t *testing.T) {
}))
}
require.NoError(t, bState.SetSlot(10000))
rootA := [32]byte{'a'}
require.NoError(t, db.SaveState(s.ctx, bState, rootA))
wrappedDB.finalized = &ethpb.Checkpoint{Root: rootA[:]}
stateGen.AddStateForRoot(bState, rootA)
require.NoError(t, db.SaveState(s.ctx, bState, [32]byte{'a'}))
// Reset count
s.activeValidatorCount = 0
@@ -90,15 +77,3 @@ func TestLoggingParameters(_ *testing.T) {
logGossipParameters("testing", defaultLightClientOptimisticUpdateTopicParams())
logGossipParameters("testing", defaultLightClientFinalityUpdateTopicParams())
}
type finalizedCheckpointDB struct {
iface.ReadOnlyDatabaseWithSeqNum
finalized *ethpb.Checkpoint
}
func (f *finalizedCheckpointDB) FinalizedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error) {
if f.finalized != nil {
return f.finalized, nil
}
return f.ReadOnlyDatabaseWithSeqNum.FinalizedCheckpoint(ctx)
}

View File

@@ -47,7 +47,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",

View File

@@ -4,18 +4,11 @@ import (
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// StatusProvider describes the minimum capability that Assigner needs from peer status tracking.
// That is, the ability to retrieve the best peers by finalized checkpoint.
type StatusProvider interface {
BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID)
}
// FinalizedCheckpointer describes the minimum capability that Assigner needs from forkchoice.
// That is, the ability to retrieve the latest finalized checkpoint to help with peer evaluation.
type FinalizedCheckpointer interface {
@@ -24,9 +17,9 @@ type FinalizedCheckpointer interface {
// NewAssigner assists in the correct construction of an Assigner by code in other packages,
// assuring all the important private member fields are given values.
// The StatusProvider is used to retrieve best peers, and FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// The FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// Peers that report an older finalized checkpoint are filtered out.
func NewAssigner(s StatusProvider, fc FinalizedCheckpointer) *Assigner {
func NewAssigner(s *Status, fc FinalizedCheckpointer) *Assigner {
return &Assigner{
ps: s,
fc: fc,
@@ -35,7 +28,7 @@ func NewAssigner(s StatusProvider, fc FinalizedCheckpointer) *Assigner {
// Assigner uses the "BestFinalized" peer scoring method to pick the next-best peer to receive rpc requests.
type Assigner struct {
ps StatusProvider
ps *Status
fc FinalizedCheckpointer
}
@@ -45,42 +38,38 @@ type Assigner struct {
var ErrInsufficientSuitable = errors.New("no suitable peers")
func (a *Assigner) freshPeers() ([]peer.ID, error) {
required := min(flags.Get().MinimumSyncPeers, min(flags.Get().MinimumSyncPeers, params.BeaconConfig().MaxPeersToSync))
_, peers := a.ps.BestFinalized(a.fc.FinalizedCheckpoint().Epoch)
required := min(flags.Get().MinimumSyncPeers, params.BeaconConfig().MaxPeersToSync)
_, peers := a.ps.BestFinalized(params.BeaconConfig().MaxPeersToSync, a.fc.FinalizedCheckpoint().Epoch)
if len(peers) < required {
log.WithFields(logrus.Fields{
"suitable": len(peers),
"required": required}).Trace("Unable to assign peer while suitable peers < required")
"required": required}).Warn("Unable to assign peer while suitable peers < required ")
return nil, ErrInsufficientSuitable
}
return peers, nil
}
// AssignmentFilter describes a function that takes a list of peer.IDs and returns a filtered subset.
// An example is the NotBusy filter.
type AssignmentFilter func([]peer.ID) []peer.ID
// Assign uses the "BestFinalized" method to select the best peers that agree on a canonical block
// for the configured finalized epoch. At most `n` peers will be returned. The `busy` param can be used
// to filter out peers that we know we don't want to connect to, for instance if we are trying to limit
// the number of outbound requests to each peer from a given component.
func (a *Assigner) Assign(filter AssignmentFilter) ([]peer.ID, error) {
func (a *Assigner) Assign(busy map[peer.ID]bool, n int) ([]peer.ID, error) {
best, err := a.freshPeers()
if err != nil {
return nil, err
}
return filter(best), nil
return pickBest(busy, n, best), nil
}
// NotBusy is a filter that returns the list of peer.IDs that are not in the `busy` map.
func NotBusy(busy map[peer.ID]bool) AssignmentFilter {
return func(peers []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, len(peers))
for _, p := range peers {
if !busy[p] {
ps = append(ps, p)
}
func pickBest(busy map[peer.ID]bool, n int, best []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, n)
for _, p := range best {
if len(ps) == n {
return ps
}
if !busy[p] {
ps = append(ps, p)
}
return ps
}
return ps
}
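
The two selection shapes interleaved in the assigner hunk above are a composable `NotBusy`-style filter, which returns every non-busy peer, and a `pickBest`-style helper, which also caps the result at `n`. The sketch below mirrors both; peer IDs are plain strings purely so the example is self-contained.

```go
// Self-contained illustration of the two peer-selection shapes above.
package main

import "fmt"

type peerID = string

// notBusy mirrors the filter variant: keep peers absent from the busy map.
func notBusy(busy map[peerID]bool) func([]peerID) []peerID {
	return func(peers []peerID) []peerID {
		out := make([]peerID, 0, len(peers))
		for _, p := range peers {
			if !busy[p] {
				out = append(out, p)
			}
		}
		return out
	}
}

// pickBest mirrors the capped variant: the first n non-busy peers, in order.
func pickBest(busy map[peerID]bool, n int, best []peerID) []peerID {
	out := make([]peerID, 0, n)
	for _, p := range best {
		if len(out) == n {
			return out
		}
		if !busy[p] {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	best := []peerID{"a", "b", "c", "d"}
	busy := map[peerID]bool{"b": true}
	fmt.Println(notBusy(busy)(best))     // [a c d]
	fmt.Println(pickBest(busy, 2, best)) // [a c]
}
```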

View File

@@ -5,8 +5,6 @@ import (
"slices"
"testing"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/libp2p/go-libp2p/core/peer"
)
@@ -16,68 +14,82 @@ func TestPickBest(t *testing.T) {
cases := []struct {
name string
busy map[peer.ID]bool
n int
best []peer.ID
expected []peer.ID
}{
{
name: "don't limit",
expected: best,
name: "",
n: 0,
},
{
name: "none busy",
expected: best,
n: 1,
expected: best[0:1],
},
{
name: "all busy except last",
n: 1,
busy: testBusyMap(best[0 : len(best)-1]),
expected: best[len(best)-1:],
},
{
name: "all busy except i=5",
n: 1,
busy: testBusyMap(slices.Concat(best[0:5], best[6:])),
expected: []peer.ID{best[5]},
},
{
name: "all busy - 0 results",
n: 1,
busy: testBusyMap(best),
},
{
name: "first half busy",
n: 5,
busy: testBusyMap(best[0:5]),
expected: best[5:],
},
{
name: "back half busy",
n: 5,
busy: testBusyMap(best[5:]),
expected: best[0:5],
},
{
name: "pick all ",
n: 10,
expected: best,
},
{
name: "none available",
n: 10,
best: []peer.ID{},
},
{
name: "not enough",
n: 10,
best: best[0:1],
expected: best[0:1],
},
{
name: "not enough, some busy",
n: 10,
best: best[0:6],
busy: testBusyMap(best[0:5]),
expected: best[5:6],
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
name := fmt.Sprintf("n=%d", c.n)
if c.name != "" {
name += " " + c.name
}
t.Run(name, func(t *testing.T) {
if c.best == nil {
c.best = best
}
filt := NotBusy(c.busy)
pb := filt(c.best)
pb := pickBest(c.busy, c.n, c.best)
require.Equal(t, len(c.expected), len(pb))
for i := range c.expected {
require.Equal(t, c.expected[i], pb[i])
@@ -101,310 +113,3 @@ func testPeerIds(n int) []peer.ID {
}
return pids
}
// MockStatus is a test mock for the Status interface used in Assigner.
type MockStatus struct {
bestFinalizedEpoch primitives.Epoch
bestPeers []peer.ID
}
func (m *MockStatus) BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID) {
return m.bestFinalizedEpoch, m.bestPeers
}
// MockFinalizedCheckpointer is a test mock for FinalizedCheckpointer interface.
type MockFinalizedCheckpointer struct {
checkpoint *forkchoicetypes.Checkpoint
}
func (m *MockFinalizedCheckpointer) FinalizedCheckpoint() *forkchoicetypes.Checkpoint {
return m.checkpoint
}
// TestAssign_HappyPath tests the Assign method with sufficient peers and various filters.
func TestAssign_HappyPath(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
bestPeers []peer.ID
finalizedEpoch primitives.Epoch
filter AssignmentFilter
expectedCount int
}{
{
name: "sufficient peers with identity filter",
bestPeers: peers,
finalizedEpoch: 10,
filter: func(p []peer.ID) []peer.ID { return p },
expectedCount: 10,
},
{
name: "sufficient peers with NotBusy filter (no busy)",
bestPeers: peers,
finalizedEpoch: 10,
filter: NotBusy(make(map[peer.ID]bool)),
expectedCount: 10,
},
{
name: "sufficient peers with NotBusy filter (some busy)",
bestPeers: peers,
finalizedEpoch: 10,
filter: NotBusy(testBusyMap(peers[0:5])),
expectedCount: 5,
},
{
name: "minimum threshold exactly met",
bestPeers: peers[0:5],
finalizedEpoch: 10,
filter: func(p []peer.ID) []peer.ID { return p },
expectedCount: 5,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: tc.finalizedEpoch,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: tc.finalizedEpoch},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filter)
require.NoError(t, err)
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("expected %d peers, got %d", tc.expectedCount, len(result)))
})
}
}
// TestAssign_InsufficientPeers tests error handling when not enough suitable peers are available.
// Note: The actual peer threshold depends on config values MaxPeersToSync and MinimumSyncPeers.
func TestAssign_InsufficientPeers(t *testing.T) {
cases := []struct {
name string
bestPeers []peer.ID
expectedErr error
description string
}{
{
name: "exactly at minimum threshold",
bestPeers: testPeerIds(5),
expectedErr: nil,
description: "5 peers should meet the minimum threshold",
},
{
name: "well above minimum threshold",
bestPeers: testPeerIds(50),
expectedErr: nil,
description: "50 peers should easily meet requirements",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(NotBusy(make(map[peer.ID]bool)))
if tc.expectedErr != nil {
require.NotNil(t, err, tc.description)
require.Equal(t, tc.expectedErr, err)
} else {
require.NoError(t, err, tc.description)
require.Equal(t, len(tc.bestPeers), len(result))
}
})
}
}
// TestAssign_FilterApplication verifies that filters are correctly applied to peer lists.
func TestAssign_FilterApplication(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
bestPeers []peer.ID
filterToApply AssignmentFilter
expectedCount int
description string
}{
{
name: "identity filter returns all peers",
bestPeers: peers,
filterToApply: func(p []peer.ID) []peer.ID { return p },
expectedCount: 10,
description: "identity filter should not change peer list",
},
{
name: "filter removes all peers (all busy)",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers)),
expectedCount: 0,
description: "all peers busy should return empty list",
},
{
name: "filter removes first 5 peers",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers[0:5])),
expectedCount: 5,
description: "should only return non-busy peers",
},
{
name: "filter removes last 5 peers",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers[5:])),
expectedCount: 5,
description: "should only return non-busy peers from beginning",
},
{
name: "custom filter selects every other peer",
bestPeers: peers,
filterToApply: func(p []peer.ID) []peer.ID {
result := make([]peer.ID, 0)
for i := 0; i < len(p); i += 2 {
result = append(result, p[i])
}
return result
},
expectedCount: 5,
description: "custom filter selecting every other peer",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filterToApply)
require.NoError(t, err, fmt.Sprintf("unexpected error: %v", err))
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}
// TestAssign_FinalizedCheckpointUsage verifies that the finalized checkpoint is correctly used.
func TestAssign_FinalizedCheckpointUsage(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
finalizedEpoch primitives.Epoch
bestPeers []peer.ID
expectedCount int
description string
}{
{
name: "epoch 0",
finalizedEpoch: 0,
bestPeers: peers,
expectedCount: 10,
description: "epoch 0 should work",
},
{
name: "epoch 100",
finalizedEpoch: 100,
bestPeers: peers,
expectedCount: 10,
description: "high epoch number should work",
},
{
name: "epoch changes between calls",
finalizedEpoch: 50,
bestPeers: testPeerIds(5),
expectedCount: 5,
description: "epoch value should be used in checkpoint",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: tc.finalizedEpoch,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: tc.finalizedEpoch},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(NotBusy(make(map[peer.ID]bool)))
require.NoError(t, err)
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}
// TestAssign_EdgeCases tests boundary conditions and edge cases.
func TestAssign_EdgeCases(t *testing.T) {
cases := []struct {
name string
bestPeers []peer.ID
filter AssignmentFilter
expectedCount int
description string
}{
{
name: "filter returns empty from sufficient peers",
bestPeers: testPeerIds(10),
filter: func(p []peer.ID) []peer.ID { return []peer.ID{} },
expectedCount: 0,
description: "filter can return empty list even if sufficient peers available",
},
{
name: "filter selects subset from sufficient peers",
bestPeers: testPeerIds(10),
filter: func(p []peer.ID) []peer.ID { return p[0:2] },
expectedCount: 2,
description: "filter can return subset of available peers",
},
{
name: "filter selects single peer from many",
bestPeers: testPeerIds(20),
filter: func(p []peer.ID) []peer.ID { return p[0:1] },
expectedCount: 1,
description: "filter can select single peer from many available",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filter)
require.NoError(t, err, fmt.Sprintf("%s: unexpected error: %v", tc.description, err))
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}

View File

@@ -704,54 +704,76 @@ func (p *Status) deprecatedPrune() {
p.tallyIPTracker()
}
// BestFinalized groups all peers by their last known finalized epoch
// and selects the epoch of the largest group as best.
// Any peer with a finalized epoch < ourFinalized is excluded from consideration.
// In the event of a tie in largest group size, the higher epoch is the tie breaker.
// The selected epoch is returned, along with a list of peers with a finalized epoch >= the selected epoch.
func (p *Status) BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID) {
// BestFinalized returns the highest finalized epoch equal to or higher than `ourFinalizedEpoch`
// that is agreed upon by the majority of peers, and the peers agreeing on this finalized epoch.
// This method may not return the absolute highest finalized epoch, but the finalized epoch in which
// most peers can serve blocks (plurality voting). Ideally, all peers would be reporting the same
// finalized epoch but some may be behind due to their own latency, or because of their finalized
// epoch at the time we queried them.
func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
connected := p.Connected()
pids := make([]peer.ID, 0, len(connected))
views := make(map[peer.ID]*pb.StatusV2, len(connected))
votes := make(map[primitives.Epoch]uint64)
winner := primitives.Epoch(0)
// key: finalized epoch, value: number of peers that support this finalized epoch.
finalizedEpochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: finalized epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
for _, pid := range connected {
view, err := p.ChainState(pid)
if err != nil || view == nil || view.FinalizedEpoch < ourFinalized {
continue
}
pids = append(pids, pid)
views[pid] = view
peerChainState, err := p.ChainState(pid)
votes[view.FinalizedEpoch]++
if winner == 0 {
winner = view.FinalizedEpoch
// Skip if the peer's finalized epoch is not defined, or if the peer's finalized epoch is
// lower than ours.
if err != nil || peerChainState == nil || peerChainState.FinalizedEpoch < ourFinalizedEpoch {
continue
}
e, v := view.FinalizedEpoch, votes[view.FinalizedEpoch]
if v > votes[winner] || v == votes[winner] && e > winner {
winner = e
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which is the epoch most peers agree upon.
// If there is a tie, select the highest epoch.
targetEpoch, mostVotes := primitives.Epoch(0), uint64(0)
for epoch, count := range finalizedEpochVotes {
if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
mostVotes = count
targetEpoch = epoch
}
}
// Descending sort by (finalized, head).
sort.Slice(pids, func(i, j int) bool {
iv, jv := views[pids[i]], views[pids[j]]
if iv.FinalizedEpoch == jv.FinalizedEpoch {
return iv.HeadSlot > jv.HeadSlot
// Sort PIDs by finalized (epoch, head), in decreasing order.
sort.Slice(potentialPIDs, func(i, j int) bool {
if pidEpoch[potentialPIDs[i]] == pidEpoch[potentialPIDs[j]] {
return pidHead[potentialPIDs[i]] > pidHead[potentialPIDs[j]]
}
return iv.FinalizedEpoch > jv.FinalizedEpoch
return pidEpoch[potentialPIDs[i]] > pidEpoch[potentialPIDs[j]]
})
// Find the first peer with finalized epoch < winner, trim and all following (lower) peers.
trim := sort.Search(len(pids), func(i int) bool {
return views[pids[i]].FinalizedEpoch < winner
})
pids = pids[:trim]
// Trim potential peers to those on or after target epoch.
for i, pid := range potentialPIDs {
if pidEpoch[pid] < targetEpoch {
potentialPIDs = potentialPIDs[:i]
break
}
}
return winner, pids
// Trim potential peers to at most maxPeers.
if len(potentialPIDs) > maxPeers {
potentialPIDs = potentialPIDs[:maxPeers]
}
return targetEpoch, potentialPIDs
}
// BestNonFinalized returns the highest known epoch, higher than ours,
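
The comments in the `BestFinalized` hunk above describe a plurality vote: peers reporting a finalized epoch behind ours are ignored, the epoch with the most votes wins, and ties go to the higher epoch. The sketch below captures just that selection rule; the map-based peer bookkeeping is a simplification for illustration.

```go
// Sketch of the plurality-vote target-epoch selection described above.
package main

import "fmt"

func targetEpoch(ourFinalized uint64, peerFinalized map[string]uint64) uint64 {
	votes := make(map[uint64]int)
	for _, e := range peerFinalized {
		if e < ourFinalized {
			continue // peer is behind us; not useful for sync
		}
		votes[e]++
	}
	var best uint64
	mostVotes := 0
	for e, n := range votes {
		if n > mostVotes || (n == mostVotes && e > best) {
			best, mostVotes = e, n
		}
	}
	return best
}

func main() {
	peers := map[string]uint64{"p1": 10, "p2": 10, "p3": 11, "p4": 9}
	fmt.Println(targetEpoch(10, peers)) // 10: two votes beat one vote for 11
}
```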

View File

@@ -654,10 +654,9 @@ func TestTrimmedOrderedPeers(t *testing.T) {
FinalizedRoot: mockroot2[:],
})
target, pids := p.BestFinalized(0)
target, pids := p.BestFinalized(maxPeers, 0)
assert.Equal(t, expectedTarget, target, "Incorrect target epoch retrieved")
// addPeer called 5 times above
assert.Equal(t, 5, len(pids), "Incorrect number of peers retrieved")
assert.Equal(t, maxPeers, len(pids), "Incorrect number of peers retrieved")
// Expect the returned list to be ordered by finalized epoch and trimmed to max peers.
assert.Equal(t, pid3, pids[0], "Incorrect first peer")
@@ -1018,10 +1017,7 @@ func TestStatus_BestPeer(t *testing.T) {
HeadSlot: peerConfig.headSlot,
})
}
epoch, pids := p.BestFinalized(tt.ourFinalizedEpoch)
if len(pids) > tt.limitPeers {
pids = pids[:tt.limitPeers]
}
epoch, pids := p.BestFinalized(tt.limitPeers, tt.ourFinalizedEpoch)
assert.Equal(t, tt.targetEpoch, epoch, "Unexpected epoch retrieved")
assert.Equal(t, tt.targetEpochSupport, len(pids), "Unexpected number of peers supporting retrieved epoch")
})
@@ -1048,10 +1044,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
})
}
_, pids := p.BestFinalized(0)
if len(pids) > maxPeers {
pids = pids[:maxPeers]
}
_, pids := p.BestFinalized(maxPeers, 0)
assert.Equal(t, maxPeers, len(pids), "Wrong number of peers returned")
}

View File

@@ -77,12 +77,12 @@ func InitializeDataMaps() {
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockElectra{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}, ExecutionRequests: &enginev1.ExecutionRequests{}}}},
&ethpb.SignedBeaconBlockElectra{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}, ExecutionRequests: &enginev1.ExecutionRequests{}}}},
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
}
@@ -204,9 +204,6 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
}
// Reset our light client finality update map.
@@ -226,8 +223,5 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
}
}

View File

@@ -73,7 +73,7 @@ func (e *endpoint) handlerWithMiddleware() http.HandlerFunc {
handler.ServeHTTP(rw, r)
if rw.statusCode >= 400 {
httpErrorCount.WithLabelValues(e.name, http.StatusText(rw.statusCode), r.Method).Inc()
httpErrorCount.WithLabelValues(r.URL.Path, http.StatusText(rw.statusCode), r.Method).Inc()
}
}
}

View File

@@ -993,11 +993,10 @@ func (s *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error
}
func (s *Server) validateBlobs(blk interfaces.SignedBeaconBlock, blobs [][]byte, proofs [][]byte) error {
const numberOfColumns = fieldparams.NumberOfColumns
if blk.Version() < version.Deneb {
return nil
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
commitments, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get blob kzg commitments")

View File

@@ -130,10 +130,6 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
@@ -242,14 +238,22 @@ func (s *Server) handleAttestationsElectra(
},
})
// Broadcast first using CommitteeId directly (fast path)
// This matches gRPC behavior and avoids blocking on state fetching
wantedEpoch := slots.ToEpoch(singleAtt.Data.Slot)
targetState, err := s.AttestationStateFetcher.AttestationTargetState(ctx, singleAtt.Data.Target)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get target state for attestation")
}
committee, err := corehelpers.BeaconCommitteeFromState(ctx, targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get committee for attestation")
}
att := singleAtt.ToAttestationElectra(committee)
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), singleAtt.CommitteeId, singleAtt.Data.Slot)
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
@@ -260,33 +264,17 @@ func (s *Server) handleAttestationsElectra(
}
continue
}
}
// Save to pool after broadcast (slow path - requires state fetching)
// Run in goroutine to avoid blocking the HTTP response
go func() {
for _, singleAtt := range validAttestations {
targetState, err := s.AttestationStateFetcher.AttestationTargetState(context.Background(), singleAtt.Data.Target)
if err != nil {
log.WithError(err).Error("Could not get target state for attestation")
continue
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
committee, err := corehelpers.BeaconCommitteeFromState(context.Background(), targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
log.WithError(err).Error("Could not get committee for attestation")
continue
}
att := singleAtt.ToAttestationElectra(committee)
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
}
}()
}
if len(failedBroadcasts) > 0 {
log.WithFields(logrus.Fields{
@@ -382,8 +370,10 @@ func (s *Server) handleAttestations(
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save aggregated attestation")
}
} else if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
}
}
}
@@ -480,10 +470,6 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitPoolSyncCommitteeSignatures")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
var req structs.SubmitSyncCommitteeSignaturesRequest
err := json.NewDecoder(r.Body).Decode(&req.Data)
switch {
@@ -725,7 +711,6 @@ func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Reques
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
if err != nil {
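
One side of the attestation submission hunks above broadcasts on a fast path and defers the state-dependent pool save to a goroutine so the HTTP response is not blocked. The sketch below shows only that broadcast-then-persist pattern; every type and helper here is a hypothetical stand-in, not a Prysm API.

```go
// Minimal sketch of the broadcast-first, persist-asynchronously pattern.
package main

import (
	"fmt"
	"sync"
)

type attestation struct{ slot uint64 }

func broadcast(a attestation) error { fmt.Println("broadcast", a.slot); return nil }

// savePool stands in for the slow path that needs a target-state lookup.
func savePool(a attestation) error { fmt.Println("saved", a.slot); return nil }

func handleSubmit(atts []attestation) {
	var wg sync.WaitGroup
	for _, a := range atts {
		if err := broadcast(a); err != nil { // fast path: broadcast synchronously
			fmt.Println("broadcast failed:", err)
			continue
		}
		wg.Add(1)
		go func(a attestation) { // slow path: persist without blocking the response
			defer wg.Done()
			if err := savePool(a); err != nil {
				fmt.Println("save failed:", err)
			}
		}(a)
	}
	wg.Wait() // only for the demo; the real handler would return immediately
}

func main() { handleSubmit([]attestation{{slot: 1}, {slot: 2}}) }
```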

View File

@@ -26,7 +26,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/voluntaryexits/mock"
p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -623,8 +622,6 @@ func TestSubmitAttestationsV2(t *testing.T) {
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OperationNotifier: &blockchainmock.MockOperationNotifier{},
AttestationStateFetcher: chainService,
}
@@ -657,7 +654,6 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
@@ -677,7 +673,6 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("phase0 att post electra", func(t *testing.T) {
@@ -798,7 +793,6 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
@@ -818,7 +812,6 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
@@ -868,27 +861,6 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("syncing", func(t *testing.T) {
chainService := &blockchainmock.ChainService{}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing"))
})
}
func TestListVoluntaryExits(t *testing.T) {
@@ -1085,19 +1057,14 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: chainService,
HeadFetcher: &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
},
},
}
@@ -1122,19 +1089,14 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: chainService,
HeadFetcher: &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
},
},
}
@@ -1158,18 +1120,13 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
})
t.Run("invalid", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: chainService,
HeadFetcher: &blockchainmock.ChainService{
State: st,
},
},
}
@@ -1192,13 +1149,7 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, false, broadcaster.BroadcastCalled.Load())
})
t.Run("empty", func(t *testing.T) {
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
s := &Server{}
var body bytes.Buffer
_, err := body.WriteString("[]")
@@ -1215,13 +1166,7 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("no body", func(t *testing.T) {
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
s := &Server{}
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
@@ -1234,26 +1179,6 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("syncing", func(t *testing.T) {
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var body bytes.Buffer
_, err := body.WriteString(singleSyncCommitteeMsg)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitSyncCommitteeSignatures(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing"))
})
}
func TestListBLSToExecutionChanges(t *testing.T) {
@@ -2187,33 +2112,6 @@ func TestSubmitAttesterSlashingsV2(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, "Invalid attester slashing", e.Message)
})
t.Run("missing-version-header", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{},
Broadcaster: broadcaster,
}
var body bytes.Buffer
_, err = body.WriteString(invalidAttesterSlashing)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
// Intentionally do not set api.VersionHeader to verify missing header handling.
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttesterSlashingsV2(writer, request)
require.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, api.VersionHeader+" header is required", e.Message)
})
}
func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {

View File

@@ -654,10 +654,6 @@ func (m *futureSyncMockFetcher) StateBySlot(context.Context, primitives.Slot) (s
return m.BeaconState, nil
}
func (m *futureSyncMockFetcher) StateByEpoch(context.Context, primitives.Epoch) (state.BeaconState, error) {
return m.BeaconState, nil
}
func TestGetSyncCommittees_Future(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := make([][]byte, params.BeaconConfig().SyncCommitteeSize)

View File

@@ -27,7 +27,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -3757,7 +3756,6 @@ func Test_validateBlobs(t *testing.T) {
})
t.Run("Fulu block with valid cell proofs", func(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
blk := util.NewBeaconBlockFulu()
blk.Block.Slot = fs
@@ -3785,13 +3783,14 @@ func Test_validateBlobs(t *testing.T) {
require.NoError(t, err)
// Generate cell proofs for the blobs (flattened format like execution client)
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellProofs := make([][]byte, uint64(blobCount)*numberOfColumns)
for blobIdx := range blobCount {
_, proofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlobs[blobIdx])
require.NoError(t, err)
for colIdx := range numberOfColumns {
cellProofIdx := blobIdx*numberOfColumns + colIdx
cellProofIdx := uint64(blobIdx)*numberOfColumns + colIdx
cellProofs[cellProofIdx] = proofs[colIdx][:]
}
}

View File

@@ -155,19 +155,20 @@ func TestGetSpec(t *testing.T) {
config.MaxAttesterSlashingsElectra = 88
config.MaxAttestationsElectra = 89
config.MaxWithdrawalRequestsPerPayload = 90
config.UnsetDepositRequestsStartIndex = 91
config.MaxDepositRequestsPerPayload = 92
config.MaxPendingDepositsPerEpoch = 93
config.MaxBlobCommitmentsPerBlock = 94
config.MaxBytesPerTransaction = 95
config.MaxExtraDataBytes = 96
config.BytesPerLogsBloom = 97
config.MaxTransactionsPerPayload = 98
config.FieldElementsPerBlob = 99
config.KzgCommitmentInclusionProofDepth = 100
config.BlobsidecarSubnetCount = 101
config.BlobsidecarSubnetCountElectra = 102
config.SyncMessageDueBPS = 103
config.MaxCellsInExtendedMatrix = 91
config.UnsetDepositRequestsStartIndex = 92
config.MaxDepositRequestsPerPayload = 93
config.MaxPendingDepositsPerEpoch = 94
config.MaxBlobCommitmentsPerBlock = 95
config.MaxBytesPerTransaction = 96
config.MaxExtraDataBytes = 97
config.BytesPerLogsBloom = 98
config.MaxTransactionsPerPayload = 99
config.FieldElementsPerBlob = 100
config.KzgCommitmentInclusionProofDepth = 101
config.BlobsidecarSubnetCount = 102
config.BlobsidecarSubnetCountElectra = 103
config.SyncMessageDueBPS = 104
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -205,7 +206,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 175, len(data))
assert.Equal(t, 176, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -499,6 +500,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "1024", v)
case "MAX_REQUEST_BLOCKS_DENEB":
assert.Equal(t, "128", v)
case "NUMBER_OF_COLUMNS":
assert.Equal(t, "128", v)
case "MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA":
assert.Equal(t, "128000000000", v)
case "MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT":
@@ -535,12 +538,14 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "89", v)
case "MAX_WITHDRAWAL_REQUESTS_PER_PAYLOAD":
assert.Equal(t, "90", v)
case "UNSET_DEPOSIT_REQUESTS_START_INDEX":
case "MAX_CELLS_IN_EXTENDED_MATRIX":
assert.Equal(t, "91", v)
case "MAX_DEPOSIT_REQUESTS_PER_PAYLOAD":
case "UNSET_DEPOSIT_REQUESTS_START_INDEX":
assert.Equal(t, "92", v)
case "MAX_PENDING_DEPOSITS_PER_EPOCH":
case "MAX_DEPOSIT_REQUESTS_PER_PAYLOAD":
assert.Equal(t, "93", v)
case "MAX_PENDING_DEPOSITS_PER_EPOCH":
assert.Equal(t, "94", v)
case "MAX_BLOBS_PER_BLOCK_ELECTRA":
assert.Equal(t, "9", v)
case "MAX_REQUEST_BLOB_SIDECARS_ELECTRA":
@@ -558,25 +563,25 @@ func TestGetSpec(t *testing.T) {
case "MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
case "MAX_BLOB_COMMITMENTS_PER_BLOCK":
assert.Equal(t, "94", v)
case "MAX_BYTES_PER_TRANSACTION":
assert.Equal(t, "95", v)
case "MAX_EXTRA_DATA_BYTES":
case "MAX_BYTES_PER_TRANSACTION":
assert.Equal(t, "96", v)
case "BYTES_PER_LOGS_BLOOM":
case "MAX_EXTRA_DATA_BYTES":
assert.Equal(t, "97", v)
case "MAX_TRANSACTIONS_PER_PAYLOAD":
case "BYTES_PER_LOGS_BLOOM":
assert.Equal(t, "98", v)
case "FIELD_ELEMENTS_PER_BLOB":
case "MAX_TRANSACTIONS_PER_PAYLOAD":
assert.Equal(t, "99", v)
case "KZG_COMMITMENT_INCLUSION_PROOF_DEPTH":
case "FIELD_ELEMENTS_PER_BLOB":
assert.Equal(t, "100", v)
case "BLOB_SIDECAR_SUBNET_COUNT":
case "KZG_COMMITMENT_INCLUSION_PROOF_DEPTH":
assert.Equal(t, "101", v)
case "BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":
case "BLOB_SIDECAR_SUBNET_COUNT":
assert.Equal(t, "102", v)
case "SYNC_MESSAGE_DUE_BPS":
case "BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":
assert.Equal(t, "103", v)
case "SYNC_MESSAGE_DUE_BPS":
assert.Equal(t, "104", v)
case "BLOB_SCHEDULE":
blobSchedule, ok := v.([]any)
assert.Equal(t, true, ok)

View File

@@ -17,7 +17,6 @@ go_library(
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -15,7 +15,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -309,7 +308,7 @@ func (s *Server) DataColumnSidecars(w http.ResponseWriter, r *http.Request) {
// parseDataColumnIndices filters out invalid and duplicate data column indices
func parseDataColumnIndices(url *url.URL) ([]int, error) {
const numberOfColumns = fieldparams.NumberOfColumns
numberOfColumns := params.BeaconConfig().NumberOfColumns
rawIndices := url.Query()["indices"]
indices := make([]int, 0, numberOfColumns)
invalidIndices := make([]string, 0)
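
The comment above `parseDataColumnIndices` says it filters out invalid and duplicate data column indices. The self-contained sketch below shows that behavior against a query string; `parseIndices` is a stand-in name, and the column count of 128 is hard-coded only for the example (matching the value used in the test further down).

```go
// Sketch of parsing ?indices= query values: reject non-integers and
// out-of-range values, deduplicate, and preserve order.
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

func parseIndices(u *url.URL, numberOfColumns int) ([]int, []string) {
	raw := u.Query()["indices"]
	seen := make(map[int]bool, numberOfColumns)
	indices := make([]int, 0, numberOfColumns)
	invalid := make([]string, 0)
	for _, s := range raw {
		ix, err := strconv.Atoi(s)
		if err != nil || ix < 0 || ix >= numberOfColumns {
			invalid = append(invalid, s)
			continue
		}
		if seen[ix] {
			continue // duplicate index
		}
		seen[ix] = true
		indices = append(indices, ix)
	}
	return indices, invalid
}

func main() {
	u, _ := url.Parse("http://node/sidecars?indices=0&indices=0&indices=5&indices=999&indices=x")
	got, bad := parseIndices(u, 128)
	fmt.Println(got, bad) // [0 5] [999 x]
}
```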

View File

@@ -709,6 +709,15 @@ func TestDataColumnSidecars(t *testing.T) {
}
func TestParseDataColumnIndices(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set NumberOfColumns to 128 for testing
config := params.BeaconConfig().Copy()
config.NumberOfColumns = 128
params.OverrideBeaconConfig(config)
tests := []struct {
name string
queryParams map[string][]string

View File

@@ -116,7 +116,6 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
for _, update := range updates {
if ctx.Err() != nil {
httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
return
}
updateSlot := update.AttestedHeader().Beacon().Slot
@@ -132,15 +131,12 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
chunkLength = ssz.MarshalUint64(chunkLength, uint64(len(updateSSZ)+4))
if _, err := w.Write(chunkLength); err != nil {
httputil.HandleError(w, "Could not write chunk length: "+err.Error(), http.StatusInternalServerError)
return
}
if _, err := w.Write(updateEntry.ForkDigest[:]); err != nil {
httputil.HandleError(w, "Could not write fork digest: "+err.Error(), http.StatusInternalServerError)
return
}
if _, err := w.Write(updateSSZ); err != nil {
httputil.HandleError(w, "Could not write update SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
}
} else {
@@ -149,7 +145,6 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
for _, update := range updates {
if ctx.Err() != nil {
httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
return
}
updateJson, err := structs.LightClientUpdateFromConsensus(update)

View File

@@ -47,10 +47,6 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
params.OverrideBeaconConfig(cfg)
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
@@ -182,10 +178,6 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("can save retrieve", func(t *testing.T) {
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
slot := primitives.Slot(params.BeaconConfig().VersionToForkEpochMap()[testVersion] * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
@@ -740,10 +732,6 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
})
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
ctx := t.Context()
@@ -839,10 +827,6 @@ func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
})
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
ctx := t.Context()
l := util.NewTestLightClient(t, testVersion)

Some files were not shown because too many files have changed in this diff.