Compare commits

..

8 Commits

Author | SHA1 | Message | Date
james-prysm | 93514608b0 | changelog | 2025-11-14 11:27:20 -06:00
james-prysm | f999173ccd | Merge branch 'develop' into lite-supernode | 2025-11-14 07:20:31 -08:00
james-prysm | 9c074acf59 | fixing test | 2025-11-13 15:05:27 -06:00
james-prysm | 430bb09a74 | Merge branch 'develop' into lite-supernode | 2025-11-13 12:58:33 -08:00
james-prysm | d72ce136b8 | Merge branch 'develop' into lite-supernode | 2025-11-13 07:00:47 -08:00
james-prysm | 2937b62a10 | Rename semi-super-node to lite-supernode | 2025-11-12 21:27:36 -06:00
james-prysm | 2fb2fecd92 | changing attempt | 2025-11-12 20:58:20 -06:00
james-prysm | f93a77aef9 | wip | 2025-11-12 20:15:58 -06:00
333 changed files with 3455 additions and 41732 deletions

View File

@@ -34,5 +34,4 @@ Fixes #
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.

7
.gitignore vendored
View File

@@ -38,13 +38,6 @@ metaData
# execution API authentication
jwt.hex
execution/
# local execution client data
execution/
# local documentation
CLAUDE.md
# manual testing
tmp

View File

@@ -4,91 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
This release includes several key features and fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
Release highlights:
- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration custodies enough data columns to reconstruct blobs. Use the `--semi-supernode` flag to custody at least 50% of the data columns.
- Critical fixes in attestation processing.
A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
### Added
- Add Fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added the `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)
### Changed
- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded the log level of `Updated fee recipients` in PrepareBeaconProposer from INFO to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Change the logging behaviour of `Updated fee recipients` to log only the count of validators at Debug level and all validator indices at Trace level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Removed
- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
### Fixed
- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check the JWT secret length is exactly 256 bits (32 bytes) as per Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metric cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- Fix E2E tests so they can start from the Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08
This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.
A post mortem doc with full details is expected to be published later this week.
### Changed
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
### Fixed
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10
This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.

View File

@@ -22,7 +22,10 @@ import (
// The caller of this function must have a lock on forkchoice.
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch || c.Epoch == 0 {
if c.Epoch < headEpoch {
return nil
}
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
// Only use head state if the head state is compatible with the target checkpoint.
@@ -30,11 +33,11 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
if err != nil {
return nil
}
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch-1)
headDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(headRoot), c.Epoch)
if err != nil {
return nil
}
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch-1)
targetDependent, err := s.cfg.ForkChoiceStore.DependentRootForEpoch([32]byte(c.Root), c.Epoch)
if err != nil {
return nil
}
@@ -50,11 +53,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
}
return st
}
// At this point we can only have c.Epoch > headEpoch.
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
// Advance the head state to the start of the target epoch.
// Otherwise we need to advance the head state to the start of the target epoch.
// This point can only be reached if c.Root == headRoot and c.Epoch > headEpoch.
slot, err := slots.EpochStart(c.Epoch)
if err != nil {

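The hunk above decides whether the cached head state can serve as the pre-state for an attestation target by comparing the dependent roots of the head chain and the checkpoint chain at the target epoch. The sketch below is a minimal, self-contained illustration of that compatibility check; the `root` type and the lookup function are stand-ins, not Prysm's actual API.

```go
package main

import "fmt"

// root is a stand-in for Prysm's 32-byte block root type.
type root [32]byte

// dependentRootFn is a hypothetical lookup returning the dependent root of a
// chain (identified by its tip) for a given epoch.
type dependentRootFn func(tip root, epoch uint64) (root, error)

// headStateUsable mirrors the idea in the hunk above: the head state may be
// reused for a target checkpoint only when the head chain and the checkpoint
// chain share the same dependent root at the target epoch, i.e. the relevant
// shuffling data is identical on both branches.
func headStateUsable(lookup dependentRootFn, headRoot, targetRoot root, targetEpoch uint64) bool {
	headDependent, err := lookup(headRoot, targetEpoch)
	if err != nil {
		return false
	}
	targetDependent, err := lookup(targetRoot, targetEpoch)
	if err != nil {
		return false
	}
	return headDependent == targetDependent
}

func main() {
	shared := root{'A'}
	lookup := func(tip root, epoch uint64) (root, error) { return shared, nil }
	fmt.Println(headStateUsable(lookup, root{'T'}, root{'U'}, 2)) // true: same dependent root
}
```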
View File

@@ -181,123 +181,6 @@ func TestService_GetRecentPreState(t *testing.T) {
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
}
func TestService_GetRecentPreState_Epoch_0(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Old_Checkpoint(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetRecentPreState_Same_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'U'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different_DependentRoots(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
// Create a fork 30 <-- 31 <-- 32 <--- 64
// \---------33
// With the same dependent root at epoch 0 for a checkpoint at epoch 2
st, blk, err := prepareForkchoiceState(ctx, 30, [32]byte(ckRoot), [32]byte{}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 31, [32]byte{'S'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 32, [32]byte{'T'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 64, [32]byte{'U'}, blk.Root(), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
st, blk, err = prepareForkchoiceState(ctx, 33, [32]byte{'V'}, [32]byte(ckRoot), [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blk))
cpRoot := blk.Root()
service.head = &head{
root: [32]byte{'T'},
state: s,
slot: 64,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 2, Root: cpRoot[:]}))
}
func TestService_GetRecentPreState_Different(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 33, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 33,
}
require.IsNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{}))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := t.Context()

View File

@@ -134,7 +134,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityChecker) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -306,7 +306,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityChecker, roBlock consensusblocks.ROBlock) error {
func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
blockVersion := roBlock.Version()
block := roBlock.Block()
slot := block.Slot()
@@ -634,7 +634,9 @@ func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldpa
return nil, nil
}
if len(expected) > fieldparams.NumberOfColumns {
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(expected)) > numberOfColumns {
return nil, errMaxDataColumnsExceeded
}
@@ -816,9 +818,10 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices any = "all"
missingIndicesCount := len(missing)
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missing))
if missingIndicesCount < fieldparams.NumberOfColumns {
if missingIndicesCount < numberOfColumns {
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}
@@ -945,6 +948,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).WithField("head_root", headRoot).Error("Unable to retrieve head block to fire payload attributes event")
}
// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
// call notifyForkchoiceUpdate, so the event is fired here.
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), headBlock, headRoot, s.CurrentSlot()+1)
return
}

View File

@@ -2495,8 +2495,7 @@ func TestMissingBlobIndices(t *testing.T) {
}
func TestMissingDataColumnIndices(t *testing.T) {
const countPlusOne = fieldparams.NumberOfColumns + 1
countPlusOne := params.BeaconConfig().NumberOfColumns + 1
tooManyColumns := make(map[uint64]bool, countPlusOne)
for i := range countPlusOne {
tooManyColumns[uint64(i)] = true
@@ -2806,10 +2805,6 @@ func TestProcessLightClientUpdate(t *testing.T) {
require.NoError(t, s.cfg.BeaconDB.SaveHeadBlockRoot(ctx, [32]byte{1, 2}))
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)

View File

@@ -39,8 +39,8 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -69,7 +69,7 @@ type SlashingReceiver interface {
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityChecker) error {
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
// Return early if the block is blacklisted
@@ -242,7 +242,7 @@ func (s *Service) validateExecutionAndConsensus(
return postState, isValidPayload, nil
}
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityChecker, block blocks.ROBlock) (time.Duration, error) {
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block blocks.ROBlock) (time.Duration, error) {
var err error
start := time.Now()
if avs != nil {
@@ -332,7 +332,7 @@ func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedSta
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityChecker) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
@@ -471,37 +470,62 @@ func (s *Service) removeStartupState() {
// UpdateCustodyInfoInDB updates the custody information in the database.
// It returns the (potentially updated) custody group count and the earliest available slot.
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSupernode := flags.Get().Supernode
isSemiSupernode := flags.Get().SemiSupernode
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
isLiteSupernode := flags.Get().LiteSupernode
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Warn if both flags are set (supernode takes precedence).
if isSubscribedToAllDataSubnets && isLiteSupernode {
log.Warnf(
"Both `--%s` and `--%s` flags are set. The supernode flag takes precedence.",
flags.SubscribeAllDataSubnets.Name,
flags.LiteSupernode.Name,
)
}
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
wasSupernode, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSupernode)
wasSubscribedToAllDataSubnets, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSubscribedToAllDataSubnets)
if err != nil {
return 0, 0, errors.Wrap(err, "update subscribed to all data subnets")
log.WithError(err).Error("Could not update subscription status to all data subnets")
}
// Compute the target custody group count based on current flag configuration.
targetCustodyGroupCount := custodyRequirement
// Supernode: custody all groups (either currently set or previously enabled)
if isSupernode {
targetCustodyGroupCount = cfg.NumberOfCustodyGroups
// Warn the user if the node was previously subscribed to all data subnets and is not any more.
if wasSubscribedToAllDataSubnets && !isSubscribedToAllDataSubnets {
log.Warnf(
"Because the flag `--%s` was previously used, the node will still subscribe to all data subnets.",
flags.SubscribeAllDataSubnets.Name,
)
}
// Semi-supernode: custody minimum needed for reconstruction, or custody requirement if higher
if isSemiSupernode {
semiSupernodeCustody, err := peerdas.MinimumCustodyGroupCountToReconstruct()
if err != nil {
return 0, 0, errors.Wrap(err, "minimum custody group count")
}
targetCustodyGroupCount = max(custodyRequirement, semiSupernodeCustody)
// Check if the node was previously in lite-supernode mode, and if so,
// store the new status accordingly.
wasLiteSupernode, err := s.cfg.BeaconDB.UpdateLiteSupernode(s.ctx, isLiteSupernode)
if err != nil {
log.WithError(err).Error("Could not update lite-supernode status")
}
// Warn the user if the node was previously in lite-supernode mode and is not any more.
if wasLiteSupernode && !isLiteSupernode {
log.Warnf(
"Because the flag `--%s` was previously used, the node will remain in lite-supernode mode (subscribe to 64 subnets).",
flags.LiteSupernode.Name,
)
}
// Compute the custody group count.
// Note: Lite-supernode subscribes to 64 subnets (enough to reconstruct) but only custodies the minimum groups (4).
// Only supernode increases custody to all 128 groups.
// Persistence (preventing downgrades) is handled by UpdateCustodyInfo() which
// refuses to decrease the stored custody count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = cfg.NumberOfCustodyGroups
}
// Lite-supernode does NOT increase custody count - it keeps the minimum (4 or validator requirement).
// Safely compute the fulu fork slot.
fuluForkSlot, err := fuluForkSlot()
if err != nil {
@@ -516,23 +540,12 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
}
}
earliestAvailableSlot, actualCustodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, targetCustodyGroupCount)
earliestAvailableSlot, custodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, custodyGroupCount)
if err != nil {
return 0, 0, errors.Wrap(err, "update custody info")
}
if isSupernode {
log.WithFields(logrus.Fields{
"current": actualCustodyGroupCount,
"target": cfg.NumberOfCustodyGroups,
}).Info("Supernode mode enabled. Will custody all data columns going forward.")
}
if wasSupernode && !isSupernode {
log.Warningf("Because the `--%s` flag was previously used, the node will continue to act as a super node.", flags.Supernode.Name)
}
return earliestAvailableSlot, actualCustodyGroupCount, nil
return earliestAvailableSlot, custodyGroupCount, nil
}
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {

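As the comments in the hunk above describe, the reworked updateCustodyInfoInDB raises custody to every group only for a full supernode (subscribing to all data subnets), while lite-supernode widens subnet subscriptions without increasing the custody count. The following is a rough, self-contained sketch of that decision using illustrative constants rather than Prysm's configuration; downgrade prevention is out of scope here, as it lives in the database layer.

```go
package main

import "fmt"

const (
	custodyRequirement    = 4   // assumed minimum custody group count
	numberOfCustodyGroups = 128 // assumed total custody group count
)

// custodyGroupCount sketches the decision above: only the
// supernode/subscribe-all-data-subnets mode custodies every group; the
// lite-supernode mode keeps custody at the minimum and relies on wider
// subnet subscriptions for reconstruction.
func custodyGroupCount(subscribeAllDataSubnets, liteSupernode bool) uint64 {
	if subscribeAllDataSubnets {
		return numberOfCustodyGroups
	}
	_ = liteSupernode // lite-supernode does not change the custody count
	return custodyRequirement
}

func main() {
	fmt.Println(custodyGroupCount(true, false))  // 128: supernode
	fmt.Println(custodyGroupCount(false, true))  // 4: lite-supernode
	fmt.Println(custodyGroupCount(false, false)) // 4: default
}
```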
View File

@@ -603,6 +603,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
@@ -610,6 +611,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
@@ -640,7 +642,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
@@ -678,7 +680,7 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
@@ -693,121 +695,4 @@ func TestUpdateCustodyInfoInDB(t *testing.T) {
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("Supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.Supernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// Try to downgrade by removing flag
gFlags.Supernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should still be supernode
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc) // Still 64, not downgraded
})
t.Run("Semi-supernode downgrade prevented", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Try to downgrade by removing flag
gFlags.SemiSupernode = false
flags.Init(gFlags)
defer flags.Init(resetFlags)
// UpdateCustodyInfo should prevent downgrade - custody count should remain at 64
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, semiSupernodeCustody, actualCgc) // Still 64 due to downgrade prevention by UpdateCustodyInfo
})
t.Run("Semi-supernode to supernode upgrade allowed", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Start with semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
require.Equal(t, semiSupernodeCustody, actualCgc) // Semi-supernode custodies 64 groups
// Upgrade to full supernode
gFlags.SemiSupernode = false
gFlags.Supernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Should upgrade to full supernode
upgradeSlot := slot + 2
actualEas, actualCgc, err = service.updateCustodyInfoInDB(upgradeSlot)
require.NoError(t, err)
require.Equal(t, upgradeSlot, actualEas) // Earliest slot updates when upgrading
require.Equal(t, numberOfCustodyGroups, actualCgc) // Upgraded to 128
})
t.Run("Semi-supernode with high validator requirements uses higher custody", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Enable semi-supernode
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SemiSupernode = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Mock a high custody requirement (simulating many validators)
// We need to override the custody requirement calculation
// For this test, we'll verify the logic by checking if custodyRequirement > 64
// Since custodyRequirement in minimalTestService is 4, we can't test the high case here
// This would require a different test setup with actual validators
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err := service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
semiSupernodeCustody := numberOfCustodyGroups / 2 // 64
// With low validator requirements (4), should use semi-supernode minimum (64)
require.Equal(t, semiSupernodeCustody, actualCgc)
})
}

View File

@@ -275,7 +275,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityChecker) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityStore) error {
if s.State == nil {
return ErrNilState
}
@@ -305,7 +305,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityChecker) error {
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityStore) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}

View File

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
voteCount := uint64(0)
for _, vote := range beaconState.Eth1DataVotes() {
if AreEth1DataEqual(vote, data) {
if AreEth1DataEqual(vote, data.Copy()) {
voteCount++
}
}

View File

@@ -90,9 +90,6 @@ func IsExecutionEnabled(st state.ReadOnlyBeaconState, body interfaces.ReadOnlyBe
if st == nil || body == nil {
return false, errors.New("nil state or block body")
}
if st.Version() >= version.Capella {
return true, nil
}
if IsPreBellatrixVersion(st.Version()) {
return false, nil
}

View File

@@ -260,12 +260,11 @@ func Test_IsExecutionBlockCapella(t *testing.T) {
func Test_IsExecutionEnabled(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header interfaces.ExecutionData
useAltairSt bool
useCapellaSt bool
want bool
name string
payload *enginev1.ExecutionPayload
header interfaces.ExecutionData
useAltairSt bool
want bool
}{
{
name: "use older than bellatrix state",
@@ -332,17 +331,6 @@ func Test_IsExecutionEnabled(t *testing.T) {
}(),
want: true,
},
{
name: "capella state always enabled",
payload: emptyPayload(),
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
useCapellaSt: true,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -354,8 +342,6 @@ func Test_IsExecutionEnabled(t *testing.T) {
require.NoError(t, err)
if tt.useAltairSt {
st, _ = util.DeterministicGenesisStateAltair(t, 1)
} else if tt.useCapellaSt {
st, _ = util.DeterministicGenesisStateCapella(t, 1)
}
got, err := blocks.IsExecutionEnabled(st, body)
require.NoError(t, err)

View File

@@ -45,13 +45,12 @@ go_test(
"p2p_interface_test.go",
"reconstruction_helpers_test.go",
"reconstruction_test.go",
"semi_supernode_test.go",
"utils_test.go",
"validator_test.go",
"verification_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -5,7 +5,6 @@ import (
"math"
"slices"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -97,7 +96,8 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := uint64(fieldparams.NumberOfColumns)
numberOfColumns := cfg.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
columns := make([]uint64, 0, columnsPerGroup)
@@ -112,9 +112,8 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {

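The two hunks above switch the column count from a compile-time fieldparams constant to the runtime params.BeaconConfig().NumberOfColumns value. As a reminder of the group-to-column mapping they implement, here is a small self-contained sketch; the interleaving formula follows the consensus specs and is stated here as an assumption, with example values rather than a live config.

```go
package main

import "fmt"

// Illustrative values only; the real code reads both from the beacon config.
const (
	numberOfColumns       = 128
	numberOfCustodyGroups = 64
)

// columnsForCustodyGroup lists the columns assigned to one custody group,
// assuming the spec's interleaving: column = i*numberOfCustodyGroups + group.
func columnsForCustodyGroup(group uint64) []uint64 {
	columnsPerGroup := uint64(numberOfColumns / numberOfCustodyGroups)
	columns := make([]uint64, 0, columnsPerGroup)
	for i := uint64(0); i < columnsPerGroup; i++ {
		columns = append(columns, i*numberOfCustodyGroups+group)
	}
	return columns
}

// custodyGroupForColumn is the reciprocal mapping under the same assumption.
func custodyGroupForColumn(column uint64) uint64 {
	return column % numberOfCustodyGroups
}

func main() {
	fmt.Println(columnsForCustodyGroup(3)) // [3 67]
	fmt.Println(custodyGroupForColumn(67)) // 3
}
```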
View File

@@ -30,6 +30,7 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.NumberOfColumns = 128
config.NumberOfCustodyGroups = 64
params.OverrideBeaconConfig(config)

View File

@@ -2,7 +2,6 @@ package peerdas
import (
"encoding/binary"
"maps"
"sync"
"github.com/ethereum/go-ethereum/p2p/enode"
@@ -108,102 +107,3 @@ func computeInfoCacheKey(nodeID enode.ID, custodyGroupCount uint64) [nodeInfoCac
return key
}
// ColumnIndices represents as a set of ColumnIndices. This could be the set of indices that a node is required to custody,
// the set that a peer custodies, missing indices for a given block, indices that are present on disk, etc.
type ColumnIndices map[uint64]struct{}
// Has returns true if the index is present in the ColumnIndices.
func (ci ColumnIndices) Has(index uint64) bool {
_, ok := ci[index]
return ok
}
// Count returns the number of indices present in the ColumnIndices.
func (ci ColumnIndices) Count() int {
return len(ci)
}
// Set sets the index in the ColumnIndices.
func (ci ColumnIndices) Set(index uint64) {
ci[index] = struct{}{}
}
// Unset removes the index from the ColumnIndices.
func (ci ColumnIndices) Unset(index uint64) {
delete(ci, index)
}
// Copy creates a copy of the ColumnIndices.
func (ci ColumnIndices) Copy() ColumnIndices {
newCi := make(ColumnIndices, len(ci))
maps.Copy(newCi, ci)
return newCi
}
// Intersection returns a new ColumnIndices that contains only the indices that are present in both ColumnIndices.
func (ci ColumnIndices) Intersection(other ColumnIndices) ColumnIndices {
result := make(ColumnIndices)
for index := range ci {
if other.Has(index) {
result.Set(index)
}
}
return result
}
// Merge mutates the receiver so that any index that is set in either of
// the two ColumnIndices is set in the receiver after the function finishes.
// It does not mutate the other ColumnIndices given as a function argument.
func (ci ColumnIndices) Merge(other ColumnIndices) {
for index := range other {
ci.Set(index)
}
}
// ToMap converts a ColumnIndices into a map[uint64]struct{}.
// In the future ColumnIndices may be changed to a bit map, so using
// ToMap will ensure forwards-compatibility.
func (ci ColumnIndices) ToMap() map[uint64]struct{} {
return ci.Copy()
}
// ToSlice converts a ColumnIndices into a slice of uint64 indices.
func (ci ColumnIndices) ToSlice() []uint64 {
indices := make([]uint64, 0, len(ci))
for index := range ci {
indices = append(indices, index)
}
return indices
}
// NewColumnIndicesFromSlice creates a ColumnIndices from a slice of uint64.
func NewColumnIndicesFromSlice(indices []uint64) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for _, index := range indices {
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndicesFromMap creates a ColumnIndices from a map[uint64]bool. This kind of map
// is used in several places in peerdas code. Converting from this map type to ColumnIndices
// will allow us to move ColumnIndices underlying type to a bitmap in the future and avoid
// lots of loops for things like intersections/unions or copies.
func NewColumnIndicesFromMap(indices map[uint64]bool) ColumnIndices {
ci := make(ColumnIndices, len(indices))
for index, set := range indices {
if !set {
continue
}
ci[index] = struct{}{}
}
return ci
}
// NewColumnIndices creates an empty ColumnIndices.
// In the future ColumnIndices may change from a reference type to a value type,
// so using this constructor will ensure forwards-compatibility.
func NewColumnIndices() ColumnIndices {
return make(ColumnIndices)
}

View File

@@ -25,10 +25,3 @@ func TestInfo(t *testing.T) {
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestNewColumnIndicesFromMap(t *testing.T) {
t.Run("nil map", func(t *testing.T) {
ci := peerdas.NewColumnIndicesFromMap(nil)
require.Equal(t, 0, ci.Count())
})
}

View File

@@ -33,7 +33,8 @@ func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCou
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
if sidecar.Index >= fieldparams.NumberOfColumns {
numberOfColumns := params.BeaconConfig().NumberOfColumns
if sidecar.Index >= numberOfColumns {
return ErrIndexTooLarge
}

View File

@@ -281,11 +281,8 @@ func BenchmarkVerifyDataColumnSidecarKZGProofs_SameCommitments_NoBatch(b *testin
}
func BenchmarkVerifyDataColumnSidecarKZGProofs_DiffCommitments_Batch(b *testing.B) {
const (
blobCount = 12
numberOfColumns = fieldparams.NumberOfColumns
)
const blobCount = 12
numberOfColumns := int64(params.BeaconConfig().NumberOfColumns)
err := kzg.Start()
require.NoError(b, err)

View File

@@ -26,40 +26,7 @@ var (
func MinimumColumnCountToReconstruct() uint64 {
// If the number of columns is odd, then we need total / 2 + 1 columns to reconstruct.
// If the number of columns is even, then we need total / 2 columns to reconstruct.
return (fieldparams.NumberOfColumns + 1) / 2
}
// MinimumCustodyGroupCountToReconstruct returns the minimum number of custody groups needed to
// custody enough data columns for reconstruction. This accounts for the relationship between
// custody groups and columns, making it future-proof if these values change.
// Returns an error if the configuration values are invalid (zero or would cause division by zero).
func MinimumCustodyGroupCountToReconstruct() (uint64, error) {
const numberOfColumns = fieldparams.NumberOfColumns
cfg := params.BeaconConfig()
// Validate configuration values
if numberOfColumns == 0 {
return 0, errors.New("NumberOfColumns cannot be zero")
}
if cfg.NumberOfCustodyGroups == 0 {
return 0, errors.New("NumberOfCustodyGroups cannot be zero")
}
minimumColumnCount := MinimumColumnCountToReconstruct()
// Calculate how many columns each custody group represents
columnsPerGroup := numberOfColumns / cfg.NumberOfCustodyGroups
// If there are more groups than columns (columnsPerGroup = 0), this is an invalid configuration
// for reconstruction purposes as we cannot determine a meaningful custody group count
if columnsPerGroup == 0 {
return 0, errors.Errorf("invalid configuration: NumberOfCustodyGroups (%d) exceeds NumberOfColumns (%d)",
cfg.NumberOfCustodyGroups, numberOfColumns)
}
// Use ceiling division to ensure we have enough groups to cover the minimum columns
// ceiling(a/b) = (a + b - 1) / b
return (minimumColumnCount + columnsPerGroup - 1) / columnsPerGroup, nil
return (params.BeaconConfig().NumberOfColumns + 1) / 2
}
// recoverCellsForBlobs reconstructs cells for specified blobs from the given data column sidecars.
@@ -286,8 +253,7 @@ func ReconstructBlobSidecars(block blocks.ROBlock, verifiedDataColumnSidecars []
// ComputeCellsAndProofsFromFlat computes the cells and proofs from blobs and cell flat proofs.
func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg.Cell, [][]kzg.Proof, error) {
const numberOfColumns = fieldparams.NumberOfColumns
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobCount := uint64(len(blobs))
cellProofsCount := uint64(len(cellProofs))
@@ -329,6 +295,8 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
@@ -347,7 +315,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
return nil, nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")

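The edit above collapses MinimumColumnCountToReconstruct to (NumberOfColumns + 1) / 2 read from the runtime config, and drops MinimumCustodyGroupCountToReconstruct, which applied ceiling division over columns per group. Below is a small stand-alone illustration of both formulas; the values passed in main are examples, not taken from a live configuration.

```go
package main

import "fmt"

// minimumColumnsToReconstruct mirrors (numberOfColumns + 1) / 2: since the
// extension doubles the data, any half of the columns suffices, rounding up
// when the count is odd (matching the comment in the original code).
func minimumColumnsToReconstruct(numberOfColumns uint64) uint64 {
	return (numberOfColumns + 1) / 2
}

// minimumGroupsToReconstruct shows the ceiling division the removed helper
// performed: enough custody groups to cover the minimum column count.
func minimumGroupsToReconstruct(numberOfColumns, numberOfCustodyGroups uint64) uint64 {
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	minColumns := minimumColumnsToReconstruct(numberOfColumns)
	return (minColumns + columnsPerGroup - 1) / columnsPerGroup // ceiling(minColumns / columnsPerGroup)
}

func main() {
	fmt.Println(minimumColumnsToReconstruct(128))     // 64
	fmt.Println(minimumColumnsToReconstruct(129))     // 65
	fmt.Println(minimumGroupsToReconstruct(128, 64))  // 32
	fmt.Println(minimumGroupsToReconstruct(128, 128)) // 64
}
```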
View File

@@ -17,9 +17,41 @@ import (
)
func TestMinimumColumnsCountToReconstruct(t *testing.T) {
const expected = uint64(64)
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, expected, actual)
testCases := []struct {
name string
numberOfColumns uint64
expected uint64
}{
{
name: "numberOfColumns=128",
numberOfColumns: 128,
expected: 64,
},
{
name: "numberOfColumns=129",
numberOfColumns: 129,
expected: 65,
},
{
name: "numberOfColumns=130",
numberOfColumns: 130,
expected: 65,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set the total number of columns.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = tc.numberOfColumns
params.OverrideBeaconConfig(cfg)
// Compute the minimum number of columns needed to reconstruct.
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, tc.expected, actual)
})
}
}
func TestReconstructDataColumnSidecars(t *testing.T) {
@@ -168,6 +200,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 3
numberOfColumns := params.BeaconConfig().NumberOfColumns
roBlock, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)
@@ -203,7 +236,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
require.NoError(t, err)
// Flatten proofs.
cellProofs := make([][]byte, 0, blobCount*fieldparams.NumberOfColumns)
cellProofs := make([][]byte, 0, blobCount*numberOfColumns)
for _, proofs := range inputProofsPerBlob {
for _, proof := range proofs {
cellProofs = append(cellProofs, proof[:])
@@ -395,12 +428,13 @@ func TestReconstructBlobs(t *testing.T) {
}
func TestComputeCellsAndProofsFromFlat(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
t.Run("mismatched blob and proof counts", func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Create one blob but proofs for two blobs
blobs := [][]byte{{}}
@@ -413,6 +447,7 @@ func TestComputeCellsAndProofsFromFlat(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
const blobCount = 2
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Generate test blobs
_, roBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, 42, blobCount)

View File

@@ -1,137 +0,0 @@
package peerdas
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
func TestSemiSupernodeCustody(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
params.OverrideBeaconConfig(cfg)
// Create a test node ID
nodeID := enode.ID([32]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32})
t.Run("semi-supernode custodies exactly 64 columns", func(t *testing.T) {
// Semi-supernode uses 64 custody groups (half of 128)
const semiSupernodeCustodyGroupCount = 64
// Get custody groups for semi-supernode
custodyGroups, err := CustodyGroups(nodeID, semiSupernodeCustodyGroupCount)
require.NoError(t, err)
require.Equal(t, semiSupernodeCustodyGroupCount, len(custodyGroups))
// Verify we get exactly 64 custody columns
custodyColumns, err := CustodyColumns(custodyGroups)
require.NoError(t, err)
require.Equal(t, semiSupernodeCustodyGroupCount, len(custodyColumns))
// Verify the columns are valid (within 0-127 range)
for columnIndex := range custodyColumns {
if columnIndex >= numberOfColumns {
t.Fatalf("Invalid column index %d, should be less than %d", columnIndex, numberOfColumns)
}
}
})
t.Run("64 columns is exactly the minimum for reconstruction", func(t *testing.T) {
minimumCount := MinimumColumnCountToReconstruct()
require.Equal(t, uint64(64), minimumCount)
})
t.Run("semi-supernode vs supernode custody", func(t *testing.T) {
// Semi-supernode (64 custody groups)
semiSupernodeGroups, err := CustodyGroups(nodeID, 64)
require.NoError(t, err)
semiSupernodeColumns, err := CustodyColumns(semiSupernodeGroups)
require.NoError(t, err)
// Supernode (128 custody groups = all groups)
supernodeGroups, err := CustodyGroups(nodeID, 128)
require.NoError(t, err)
supernodeColumns, err := CustodyColumns(supernodeGroups)
require.NoError(t, err)
// Verify semi-supernode has exactly half the columns of supernode
require.Equal(t, 64, len(semiSupernodeColumns))
require.Equal(t, 128, len(supernodeColumns))
require.Equal(t, len(supernodeColumns)/2, len(semiSupernodeColumns))
// Verify all semi-supernode columns are a subset of supernode columns
for columnIndex := range semiSupernodeColumns {
if !supernodeColumns[columnIndex] {
t.Fatalf("Semi-supernode column %d not found in supernode columns", columnIndex)
}
}
})
}
func TestMinimumCustodyGroupCountToReconstruct(t *testing.T) {
tests := []struct {
name string
numberOfGroups uint64
expectedResult uint64
}{
{
name: "Standard 1:1 ratio (128 columns, 128 groups)",
numberOfGroups: 128,
expectedResult: 64, // Need half of 128 groups
},
{
name: "2 columns per group (128 columns, 64 groups)",
numberOfGroups: 64,
expectedResult: 32, // Need 64 columns, which is 32 groups (64/2)
},
{
name: "4 columns per group (128 columns, 32 groups)",
numberOfGroups: 32,
expectedResult: 16, // Need 64 columns, which is 16 groups (64/4)
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = tt.numberOfGroups
params.OverrideBeaconConfig(cfg)
result, err := MinimumCustodyGroupCountToReconstruct()
require.NoError(t, err)
require.Equal(t, tt.expectedResult, result)
})
}
}
func TestMinimumCustodyGroupCountToReconstruct_ErrorCases(t *testing.T) {
t.Run("Returns error when NumberOfCustodyGroups is zero", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 0
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
require.Equal(t, true, err.Error() == "NumberOfCustodyGroups cannot be zero")
})
t.Run("Returns error when NumberOfCustodyGroups exceeds NumberOfColumns", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 256
params.OverrideBeaconConfig(cfg)
_, err := MinimumCustodyGroupCountToReconstruct()
require.NotNil(t, err)
// Just check that we got an error about the configuration
require.Equal(t, true, len(err.Error()) > 0)
})
}

View File

@@ -102,13 +102,11 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block and
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_column_sidecar
func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof, src ConstructionPopulator) ([]blocks.RODataColumn, error) {
const numberOfColumns = uint64(fieldparams.NumberOfColumns)
if len(cellsPerBlob) == 0 {
return nil, nil
}
start := time.Now()
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, numberOfColumns)
cells, proofs, err := rotateRowsToCols(cellsPerBlob, proofsPerBlob, params.BeaconConfig().NumberOfColumns)
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
@@ -117,8 +115,9 @@ func DataColumnSidecars(cellsPerBlob [][]kzg.Cell, proofsPerBlob [][]kzg.Proof,
return nil, errors.Wrap(err, "extract block info")
}
roSidecars := make([]blocks.RODataColumn, 0, numberOfColumns)
for idx := range numberOfColumns {
maxIdx := params.BeaconConfig().NumberOfColumns
roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
for idx := range maxIdx {
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],

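DataColumnSidecars above builds one sidecar per column by rotating per-blob rows of cells into per-column groups before attaching them to the block data. The helper below is a simplified, self-contained stand-in for that rotation (string cells instead of KZG cells), not the actual rotateRowsToCols implementation.

```go
package main

import "fmt"

// rotateRowsToCols transposes one row of cells per blob into one slice per
// column containing that column's cell from every blob.
func rotateRowsToCols(cellsPerBlob [][]string, numberOfColumns int) ([][]string, error) {
	cols := make([][]string, numberOfColumns)
	for blobIdx, row := range cellsPerBlob {
		if len(row) != numberOfColumns {
			return nil, fmt.Errorf("blob %d has %d cells, want %d", blobIdx, len(row), numberOfColumns)
		}
		for colIdx, cell := range row {
			cols[colIdx] = append(cols[colIdx], cell)
		}
	}
	return cols, nil
}

func main() {
	// Two blobs, four columns: column 0 of the output holds cell 0 of each blob.
	cells := [][]string{
		{"b0c0", "b0c1", "b0c2", "b0c3"},
		{"b1c0", "b1c1", "b1c2", "b1c3"},
	}
	cols, _ := rotateRowsToCols(cells, 4)
	fmt.Println(cols[0]) // [b0c0 b1c0]
}
```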
View File

@@ -6,7 +6,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -59,8 +59,6 @@ func TestValidatorsCustodyRequirement(t *testing.T) {
}
func TestDataColumnSidecars(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
@@ -71,10 +69,10 @@ func TestDataColumnSidecars(t *testing.T) {
// Create cells and proofs.
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, params.BeaconConfig().NumberOfColumns),
}
proofsPerBlob := [][]kzg.Proof{
make([]kzg.Proof, numberOfColumns),
make([]kzg.Proof, params.BeaconConfig().NumberOfColumns),
}
rob, err := blocks.NewROBlock(signedBeaconBlock)
@@ -119,6 +117,7 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
}
@@ -150,6 +149,7 @@ func TestDataColumnSidecars(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),
@@ -197,7 +197,6 @@ func TestDataColumnSidecars(t *testing.T) {
}
func TestReconstructionSource(t *testing.T) {
const numberOfColumns = fieldparams.NumberOfColumns
// Create a Fulu block with blob commitments.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
commitment1 := make([]byte, 48)
@@ -213,6 +212,7 @@ func TestReconstructionSource(t *testing.T) {
require.NoError(t, err)
// Create cells and proofs with correct dimensions.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsPerBlob := [][]kzg.Cell{
make([]kzg.Cell, numberOfColumns),
make([]kzg.Cell, numberOfColumns),

View File

@@ -4,19 +4,14 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"bisect.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
"log.go",
"mock.go",
"needs.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -26,7 +21,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -36,14 +30,11 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
"needs_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -54,7 +45,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -11,8 +11,9 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/runtime/logging"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
log "github.com/sirupsen/logrus"
)
var (
@@ -23,13 +24,12 @@ var (
// This implementation will hold any blobs passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreBlob struct {
store *filesystem.BlobStorage
cache *blobCache
verifier BlobBatchVerifier
shouldRetain RetentionChecker
store *filesystem.BlobStorage
cache *blobCache
verifier BlobBatchVerifier
}
var _ AvailabilityChecker = &LazilyPersistentStoreBlob{}
var _ AvailabilityStore = &LazilyPersistentStoreBlob{}
// BlobBatchVerifier enables LazyAvailabilityStore to manage the verification process
// going from ROBlob->VerifiedROBlob, while avoiding the decision of which individual verifications
@@ -42,12 +42,11 @@ type BlobBatchVerifier interface {
// NewLazilyPersistentStore creates a new LazilyPersistentStore. This constructor should always be used
// when creating a LazilyPersistentStore because it needs to initialize the cache under the hood.
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier, shouldRetain RetentionChecker) *LazilyPersistentStoreBlob {
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
return &LazilyPersistentStoreBlob{
store: store,
cache: newBlobCache(),
verifier: verifier,
shouldRetain: shouldRetain,
store: store,
cache: newBlobCache(),
verifier: verifier,
}
}
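As a concrete illustration of that lifecycle, here is a minimal caller-side sketch built only on the das APIs shown in this file; the importWithDACheck wrapper and its wiring are hypothetical, not part of the codebase.

package example

import (
	"context"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
	"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)

// importWithDACheck is a hypothetical helper illustrating the lifecycle: stash the
// sidecars for a block, then require data availability before importing the block.
func importWithDACheck(ctx context.Context, store *filesystem.BlobStorage, verifier das.BlobBatchVerifier,
	current primitives.Slot, blk blocks.ROBlock, sidecars []blocks.ROBlob) error {
	avs := das.NewLazilyPersistentStore(store, verifier)
	// Persist only caches the sidecars (and skips batches outside the DA period);
	// nothing is written to disk yet.
	if err := avs.Persist(current, sidecars...); err != nil {
		return err
	}
	// IsDataAvailable checks every blob commitment in the block against the cache and
	// the on-disk summary, batch-verifies what is missing, and saves it on success.
	return avs.IsDataAvailable(ctx, current, blk)
}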
@@ -67,6 +66,9 @@ func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ..
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range sidecars {
@@ -79,17 +81,8 @@ func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ..
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// BlobSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, blks ...blocks.ROBlock) error {
for _, b := range blks {
if err := s.checkOne(ctx, current, b); err != nil {
return err
}
}
return nil
}
func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, s.shouldRetain)
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
}
@@ -107,7 +100,7 @@ func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primit
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
sidecars, err := entry.filter(root, blockCommitments)
sidecars, err := entry.filter(root, blockCommitments, b.Block().Slot())
if err != nil {
return errors.Wrap(err, "incomplete BlobSidecar batch")
}
@@ -119,7 +112,7 @@ func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primit
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(logrus.Fields, len(fails))
lf := make(log.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
@@ -138,12 +131,13 @@ func (s *LazilyPersistentStoreBlob) checkOne(ctx context.Context, current primit
return nil
}
func commitmentsToCheck(b blocks.ROBlock, shouldRetain RetentionChecker) ([][]byte, error) {
func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) ([][]byte, error) {
if b.Version() < version.Deneb {
return nil, nil
}
if !shouldRetain(b.Block().Slot()) {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return nil, nil
}
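The guard above reduces to an epoch-window comparison; a small sketch restating it, assuming params.WithinDAPeriod keeps its current meaning (the block's epoch is no more than MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST epochs, 4096 on mainnet, behind the current epoch):

package example

import (
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v7/time/slots"
)

// withinBlobDAWindow mirrors the guard in commitmentsToCheck: blobs for a block only
// need a DA check while the block's epoch is still inside the retention window
// relative to the current epoch.
func withinBlobDAWindow(blockSlot, currentSlot primitives.Slot) bool {
	return params.WithinDAPeriod(slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot))
}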

View File

@@ -17,10 +17,6 @@ import (
errors "github.com/pkg/errors"
)
func testShouldRetainAlways(s primitives.Slot) bool {
return true
}
func Test_commitmentsToCheck(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
@@ -34,12 +30,11 @@ func Test_commitmentsToCheck(t *testing.T) {
commits[i] = bytesutil.PadTo([]byte{byte(i)}, 48)
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
shouldRetain RetentionChecker
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
}{
{
name: "pre deneb",
@@ -65,7 +60,6 @@ func Test_commitmentsToCheck(t *testing.T) {
require.NoError(t, err)
return rb
},
shouldRetain: testShouldRetainAlways,
commits: func() [][]byte {
mb := params.GetNetworkScheduleEntry(slots.ToEpoch(fulu + 100)).MaxBlobsPerBlock
return commits[:mb]
@@ -85,8 +79,7 @@ func Test_commitmentsToCheck(t *testing.T) {
require.NoError(t, err)
return rb
},
shouldRetain: func(s primitives.Slot) bool { return false },
slot: fulu + windowSlots + 1,
slot: fulu + windowSlots + 1,
},
{
name: "excessive commitments",
@@ -104,15 +97,14 @@ func Test_commitmentsToCheck(t *testing.T) {
require.Equal(t, true, len(c) > params.BeaconConfig().MaxBlobsPerBlock(sb.Block().Slot()))
return rb
},
shouldRetain: testShouldRetainAlways,
slot: windowSlots + 1,
err: errIndexOutOfBounds,
slot: windowSlots + 1,
err: errIndexOutOfBounds,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co, err := commitmentsToCheck(b, c.shouldRetain)
co, err := commitmentsToCheck(b, c.slot)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
@@ -134,7 +126,7 @@ func TestLazilyPersistent_Missing(t *testing.T) {
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv, testShouldRetainAlways)
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(ds, blobSidecars[2]))
@@ -161,7 +153,7 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv, testShouldRetainAlways)
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(ds, blobSidecars[0]))
@@ -174,11 +166,11 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
ds := util.SlotAtEpoch(t, params.BeaconConfig().DenebForkEpoch)
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, ds, 6)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{}, testShouldRetainAlways)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(ds, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(ds, blobSidecars...), errDuplicateSidecar)
require.ErrorIs(t, as.Persist(ds, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
@@ -191,7 +183,7 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
require.NoError(t, as.Persist(slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(ds, moreBlobSidecars[1:]...))
require.NoError(t, as.Persist(ds, moreBlobSidecars...))
}
type mockBlobBatchVerifier struct {

View File

@@ -1,244 +0,0 @@
package das
import (
"context"
"io"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
cache *dataColumnCache
newDataColumnsVerifier verification.NewDataColumnsVerifier
custody *custodyRequirement
bisector Bisector
shouldRetain RetentionChecker
}
var _ AvailabilityChecker = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
nodeID enode.ID,
cgc uint64,
bisector Bisector,
shouldRetain RetentionChecker,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newDataColumnCache(),
newDataColumnsVerifier: newDataColumnsVerifier,
custody: &custodyRequirement{nodeID: nodeID, cgc: cgc},
bisector: bisector,
shouldRetain: shouldRetain,
}
}
// Persist adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(_ primitives.Slot, sidecars ...blocks.RODataColumn) error {
for _, sidecar := range sidecars {
if err := s.cache.stash(sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnsSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, _ primitives.Slot, blks ...blocks.ROBlock) error {
toVerify := make([]blocks.RODataColumn, 0)
for _, block := range blks {
indices, err := s.required(block)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x`", block.Root())
}
if indices.Count() == 0 {
continue
}
key := keyFromBlock(block)
entry := s.cache.entry(key)
toVerify, err = entry.append(toVerify, IndicesNotStored(s.store.Summary(block.Root()), indices))
if err != nil {
return errors.Wrap(err, "entry filter")
}
}
if err := s.verifyAndSave(toVerify); err != nil {
log.Warn("Batch verification failed, bisecting columns by peer")
if err := s.bisectVerification(toVerify); err != nil {
return errors.Wrap(err, "bisect verification")
}
}
s.cache.cleanup(blks)
return nil
}
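A caller-side sketch of this flow for batch syncing, assuming a hypothetical importColumnBatch wrapper; unlike the blob path, the required indices come from the node's custody, and a failed batch is handed to the Bisector to isolate the offending peer's columns.

package example

import (
	"context"

	"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)

// importColumnBatch is a hypothetical helper: stash every column downloaded for a
// batch of blocks, then require availability of all custodied columns before import.
func importColumnBatch(ctx context.Context, avs *das.LazilyPersistentStoreColumn,
	current primitives.Slot, batch []blocks.ROBlock, columns []blocks.RODataColumn) error {
	// Columns are only cached here; verification is deferred until the DA check.
	if err := avs.Persist(current, columns...); err != nil {
		return err
	}
	// For each block, the store gathers the custodied indices that are not already on
	// disk, batch-verifies them, and falls back to per-peer bisection on failure.
	return avs.IsDataAvailable(ctx, current, batch...)
}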
// required returns the set of column indices to check for a given block.
func (s *LazilyPersistentStoreColumn) required(block blocks.ROBlock) (peerdas.ColumnIndices, error) {
if !s.shouldRetain(block.Block().Slot()) {
return peerdas.NewColumnIndices(), nil
}
// If there are any commitments in the block, there are blobs,
// and if there are blobs, we need the columns bisecting those blobs.
commitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// No DA check needed if the block has no blobs.
if len(commitments) == 0 {
return peerdas.NewColumnIndices(), nil
}
return s.custody.required()
}
// verifyAndSave calls Save on the column store if the columns pass verification.
func (s *LazilyPersistentStoreColumn) verifyAndSave(columns []blocks.RODataColumn) error {
if len(columns) == 0 {
return nil
}
verified, err := s.verifyColumns(columns)
if err != nil {
return errors.Wrap(err, "verify columns")
}
if err := s.store.Save(verified); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
func (s *LazilyPersistentStoreColumn) verifyColumns(columns []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error) {
if len(columns) == 0 {
return nil, nil
}
verifier := s.newDataColumnsVerifier(columns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return nil, errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return nil, errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return nil, errors.Wrap(err, "sidecar KZG proof verified")
}
return verifier.VerifiedRODataColumns()
}
// bisectVerification is used when verification of a batch of columns fails. Since the batch could
// span multiple blocks or have been fetched from multiple peers, this pattern enables code using the
// store to break the verification into smaller units and learn the results, in order to plan to retry
// retrieval of the unusable columns.
func (s *LazilyPersistentStoreColumn) bisectVerification(columns []blocks.RODataColumn) error {
if len(columns) == 0 {
return nil
}
if s.bisector == nil {
return errors.New("bisector not initialized")
}
iter, err := s.bisector.Bisect(columns)
if err != nil {
return errors.Wrap(err, "Bisector.Bisect")
}
// It's up to the bisector how to chunk up columns for verification,
// which could be by block, or by peer, or any other strategy.
// For the purposes of range syncing or backfill this will be by peer,
// so that the node can learn which peer is giving us bad data and downscore them.
for columns, err := iter.Next(); columns != nil; columns, err = iter.Next() {
if err != nil {
if !errors.Is(err, io.EOF) {
return errors.Wrap(err, "Bisector.Next")
}
break // io.EOF signals end of iteration
}
// We save the parts of the batch that have been verified successfully even though we don't know
// if all columns for the block will be available until the block is imported.
if err := s.verifyAndSave(s.columnsNotStored(columns)); err != nil {
iter.OnError(err)
continue
}
}
// This should give us a single error representing any unresolved errors seen via onError.
return iter.Error()
}
// columnsNotStored filters the list of ROColumnSidecars to only include those that are not found in the storage summary.
func (s *LazilyPersistentStoreColumn) columnsNotStored(sidecars []blocks.RODataColumn) []blocks.RODataColumn {
// We use this method to filter a set of sidecars that were previously seen to be unavailable on disk. So our base assumption
// is that they are still missing from disk and we don't need to copy the list. Instead we track the indices that are unexpectedly
// stored, and only when we find that the storage view has changed do we need to create a new slice.
stored := make(map[int]struct{}, 0)
lastRoot := [32]byte{}
var sum filesystem.DataColumnStorageSummary
for i, sc := range sidecars {
if sc.BlockRoot() != lastRoot {
sum = s.store.Summary(sc.BlockRoot())
lastRoot = sc.BlockRoot()
}
if sum.HasIndex(sc.Index) {
stored[i] = struct{}{}
}
}
// If the view on storage hasn't changed, return the original list.
if len(stored) == 0 {
return sidecars
}
shift := 0
for i := range sidecars {
if _, ok := stored[i]; ok {
// If the index is stored, skip and overwrite it.
// Track how many spaces down to shift unseen sidecars (to overwrite the previously shifted or seen).
shift++
continue
}
if shift > 0 {
// If the index is not stored and we have seen stored indices,
// we need to shift the current index down.
sidecars[i-shift] = sidecars[i]
}
}
return sidecars[:len(sidecars)-shift]
}
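The shifting logic above can be shown in isolation; a generic sketch of the same in-place compaction (a hypothetical helper, not taken from the codebase):

package example

// compactInPlace drops the elements whose positions appear in stored by sliding later
// elements down over them, mirroring the shifting logic in columnsNotStored. If
// nothing is flagged, the original slice is returned untouched.
func compactInPlace[T any](items []T, stored map[int]struct{}) []T {
	if len(stored) == 0 {
		return items
	}
	shift := 0
	for i := range items {
		if _, ok := stored[i]; ok {
			shift++ // this slot is dropped; later elements slide over it
			continue
		}
		if shift > 0 {
			items[i-shift] = items[i]
		}
	}
	return items[:len(items)-shift]
}

For example, compactInPlace([]int{10, 11, 12, 13}, map[int]struct{}{1: {}, 2: {}}) yields [10 13] without allocating a new backing array.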
type custodyRequirement struct {
nodeID enode.ID
cgc uint64 // custody group count
indices peerdas.ColumnIndices
}
func (c *custodyRequirement) required() (peerdas.ColumnIndices, error) {
peerInfo, _, err := peerdas.Info(c.nodeID, c.cgc)
if err != nil {
return peerdas.NewColumnIndices(), errors.Wrap(err, "peer info")
}
return peerdas.NewColumnIndicesFromMap(peerInfo.CustodyColumns), nil
}

View File

@@ -1,908 +0,0 @@
package das
import (
"context"
"fmt"
"io"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/pkg/errors"
)
func mockShouldRetain(current primitives.Epoch) RetentionChecker {
return func(slot primitives.Slot) bool {
return params.WithinDAPeriod(slots.ToEpoch(slot), current)
}
}
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, nil, enode.ID{}, 0, nil, mockShouldRetain(0))
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
var current primitives.Slot = 1_000_000
sr := mockShouldRetain(slots.ToEpoch(current))
roSidecars, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, nil, enode.ID{}, 0, nil, sr)
err := lazilyPersistentStoreColumns.Persist(current, roSidecars...)
require.NoError(t, err)
require.Equal(t, len(roSidecars), len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
store := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
avs := NewLazilyPersistentStoreColumn(store, nil, enode.ID{}, 0, nil, mockShouldRetain(slots.ToEpoch(slot)))
err := avs.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(avs.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := avs.cache.entries[key]
require.Equal(t, true, ok)
summary := store.Summary(key.root)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), summary.Count())
require.Equal(t, len(roSidecars), len(entry.scs))
idx1 := entry.scs[1]
require.NotNil(t, idx1)
require.DeepSSZEqual(t, roDataColumns[0].BlockRoot(), idx1.BlockRoot())
idx5 := entry.scs[5]
require.NotNil(t, idx5)
require.DeepSSZEqual(t, roDataColumns[1].BlockRoot(), idx5.BlockRoot())
for i, roDataColumn := range entry.scs {
if map[uint64]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, newDataColumnsVerifier, enode.ID{}, 0, nil, mockShouldRetain(0))
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Slot = primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
storage := filesystem.NewEphemeralDataColumnStorage(t)
indices := []uint64{1, 17, 19, 42, 75, 87, 102, 117}
avs := NewLazilyPersistentStoreColumn(storage, newDataColumnsVerifier, enode.ID{}, uint64(len(indices)), nil, mockShouldRetain(slots.ToEpoch(slot)))
dcparams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dcparams = append(dcparams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dcparams)
key := keyFromBlock(signedRoBlock)
entry := avs.cache.entry(key)
defer avs.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = avs.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := storage.Get(root, indices)
require.NoError(t, err)
//summary := storage.Summary(root)
require.Equal(t, len(verifiedRoDataColumns), len(actual))
//require.Equal(t, uint64(len(indices)), summary.Count())
//require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestRetentionWindow(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
fuluSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
require.NoError(t, err)
numberOfColumns := fieldparams.NumberOfColumns
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
wantedCols int
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: fuluSlot + windowSlots,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = fuluSlot + windowSlots - 1
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: fuluSlot + windowSlots,
wantedCols: numberOfColumns,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, nil, enode.ID{}, uint64(numberOfColumns), nil, mockShouldRetain(slots.ToEpoch(tc.slot)))
indices, err := s.required(b)
require.NoError(t, err)
require.Equal(t, tc.wantedCols, len(indices))
})
}
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock any) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }
// Mock implementations for bisectVerification tests
// mockBisectionIterator simulates a BisectionIterator for testing.
type mockBisectionIterator struct {
chunks [][]blocks.RODataColumn
chunkErrors []error
finalError error
chunkIndex int
nextCallCount int
onErrorCallCount int
onErrorErrors []error
}
func (m *mockBisectionIterator) Next() ([]blocks.RODataColumn, error) {
if m.chunkIndex >= len(m.chunks) {
return nil, io.EOF
}
chunk := m.chunks[m.chunkIndex]
var err error
if m.chunkIndex < len(m.chunkErrors) {
err = m.chunkErrors[m.chunkIndex]
}
m.chunkIndex++
m.nextCallCount++
if err != nil {
return chunk, err
}
return chunk, nil
}
func (m *mockBisectionIterator) OnError(err error) {
m.onErrorCallCount++
m.onErrorErrors = append(m.onErrorErrors, err)
}
func (m *mockBisectionIterator) Error() error {
return m.finalError
}
// mockBisector simulates a Bisector for testing.
type mockBisector struct {
shouldError bool
bisectErr error
iterator *mockBisectionIterator
}
func (m *mockBisector) Bisect(columns []blocks.RODataColumn) (BisectionIterator, error) {
if m.shouldError {
return nil, m.bisectErr
}
return m.iterator, nil
}
// testDataColumnsVerifier implements verification.DataColumnsVerifier for testing.
type testDataColumnsVerifier struct {
t *testing.T
shouldFail bool
columns []blocks.RODataColumn
}
func (v *testDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
verified := make([]blocks.VerifiedRODataColumn, len(v.columns))
for i, col := range v.columns {
verified[i] = blocks.NewVerifiedRODataColumn(col)
}
return verified, nil
}
func (v *testDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (v *testDataColumnsVerifier) ValidFields() error {
if v.shouldFail {
return errors.New("verification failed")
}
return nil
}
func (v *testDataColumnsVerifier) CorrectSubnet(string, []string) error { return nil }
func (v *testDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (v *testDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (v *testDataColumnsVerifier) ValidProposerSignature(context.Context) error { return nil }
func (v *testDataColumnsVerifier) SidecarParentSeen(func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (v *testDataColumnsVerifier) SidecarParentValid(func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (v *testDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (v *testDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (v *testDataColumnsVerifier) SidecarInclusionProven() error { return nil }
func (v *testDataColumnsVerifier) SidecarKzgProofVerified() error { return nil }
func (v *testDataColumnsVerifier) SidecarProposerExpected(context.Context) error { return nil }
// Helper function to create test data columns
func makeTestDataColumns(t *testing.T, count int, blockRoot [32]byte, startIndex uint64) []blocks.RODataColumn {
columns := make([]blocks.RODataColumn, 0, count)
for i := range count {
params := util.DataColumnParam{
Index: startIndex + uint64(i),
KzgCommitments: commitments,
Slot: primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
}
_, verifiedCols := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{params})
if len(verifiedCols) > 0 {
columns = append(columns, verifiedCols[0].RODataColumn)
}
}
return columns
}
// Helper function to create test verifier factory with failure pattern
func makeTestVerifierFactory(failurePattern []bool) verification.NewDataColumnsVerifier {
callIndex := 0
return func(cols []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
shouldFail := callIndex < len(failurePattern) && failurePattern[callIndex]
callIndex++
return &testDataColumnsVerifier{
shouldFail: shouldFail,
columns: cols,
}
}
}
// TestBisectVerification tests the bisectVerification method with comprehensive table-driven test cases.
func TestBisectVerification(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
cases := []struct {
expectedError bool
bisectorNil bool
expectedOnErrorCallCount int
expectedNextCallCount int
inputCount int
iteratorFinalError error
bisectorError error
name string
storedColumnIndices []uint64
verificationFailurePattern []bool
chunkErrors []error
chunks [][]blocks.RODataColumn
}{
{
name: "EmptyColumns",
inputCount: 0,
expectedError: false,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "NilBisector",
inputCount: 3,
bisectorNil: true,
expectedError: true,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "BisectError",
inputCount: 5,
bisectorError: errors.New("bisect failed"),
expectedError: true,
expectedNextCallCount: 0,
expectedOnErrorCallCount: 0,
},
{
name: "SingleChunkSuccess",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "SingleChunkFails",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{true},
iteratorFinalError: errors.New("chunk failed"),
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_BothPass",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{false, false},
expectedError: false,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 0,
},
{
name: "TwoChunks_FirstFails",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, false},
iteratorFinalError: errors.New("first failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_SecondFails",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{false, true},
iteratorFinalError: errors.New("second failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "TwoChunks_BothFail",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, true},
iteratorFinalError: errors.New("both failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 2,
},
{
name: "ManyChunks_AllPass",
inputCount: 16,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}},
verificationFailurePattern: []bool{false, false, false, false},
expectedError: false,
expectedNextCallCount: 5,
expectedOnErrorCallCount: 0,
},
{
name: "ManyChunks_MixedFail",
inputCount: 16,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}},
verificationFailurePattern: []bool{false, true, false, true},
iteratorFinalError: errors.New("mixed failures"),
expectedError: true,
expectedNextCallCount: 5,
expectedOnErrorCallCount: 2,
},
{
name: "FilterStoredColumns_PartialFilter",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{1, 3},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "FilterStoredColumns_AllStored",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{0, 1, 2, 3, 4, 5},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "FilterStoredColumns_MixedAccess",
inputCount: 10,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{1, 5, 9},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "IteratorNextError",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}, {}},
chunkErrors: []error{nil, errors.New("next error")},
verificationFailurePattern: []bool{false},
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "IteratorNextEOF",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "LargeChunkSize",
inputCount: 128,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "ManySmallChunks",
inputCount: 32,
chunks: [][]blocks.RODataColumn{{}, {}, {}, {}, {}, {}, {}, {}},
verificationFailurePattern: []bool{false, false, false, false, false, false, false, false},
expectedError: false,
expectedNextCallCount: 9,
expectedOnErrorCallCount: 0,
},
{
name: "ChunkWithSomeStoredColumns",
inputCount: 6,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{false},
storedColumnIndices: []uint64{0, 2, 4},
expectedError: false,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 0,
},
{
name: "OnErrorDoesNotStopIteration",
inputCount: 8,
chunks: [][]blocks.RODataColumn{{}, {}},
verificationFailurePattern: []bool{true, false},
iteratorFinalError: errors.New("first failed"),
expectedError: true,
expectedNextCallCount: 3,
expectedOnErrorCallCount: 1,
},
{
name: "VerificationErrorWrapping",
inputCount: 4,
chunks: [][]blocks.RODataColumn{{}},
verificationFailurePattern: []bool{true},
iteratorFinalError: errors.New("verification failed"),
expectedError: true,
expectedNextCallCount: 2,
expectedOnErrorCallCount: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Setup storage
var store *filesystem.DataColumnStorage
if len(tc.storedColumnIndices) > 0 {
mocker, s := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
blockRoot := [32]byte{1, 2, 3}
slot := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, mocker.CreateFakeIndices(blockRoot, slot, tc.storedColumnIndices...))
store = s
} else {
store = filesystem.NewEphemeralDataColumnStorage(t)
}
// Create test columns
blockRoot := [32]byte{1, 2, 3}
columns := makeTestDataColumns(t, tc.inputCount, blockRoot, 0)
// Setup iterator with chunks
iterator := &mockBisectionIterator{
chunks: tc.chunks,
chunkErrors: tc.chunkErrors,
finalError: tc.iteratorFinalError,
}
// Setup bisector
var bisector Bisector
if tc.bisectorNil || tc.inputCount == 0 {
bisector = nil
} else if tc.bisectorError != nil {
bisector = &mockBisector{
shouldError: true,
bisectErr: tc.bisectorError,
}
} else {
bisector = &mockBisector{
shouldError: false,
iterator: iterator,
}
}
// Create store with verifier
verifierFactory := makeTestVerifierFactory(tc.verificationFailurePattern)
lazilyPersistentStore := &LazilyPersistentStoreColumn{
store: store,
cache: newDataColumnCache(),
newDataColumnsVerifier: verifierFactory,
custody: &custodyRequirement{},
bisector: bisector,
}
// Execute
err := lazilyPersistentStore.bisectVerification(columns)
// Assert
if tc.expectedError {
require.NotNil(t, err)
} else {
require.NoError(t, err)
}
// Verify iterator interactions for non-error cases
if tc.inputCount > 0 && bisector != nil && tc.bisectorError == nil && !tc.expectedError {
require.NotEqual(t, 0, iterator.nextCallCount, "iterator Next() should have been called")
require.Equal(t, tc.expectedOnErrorCallCount, iterator.onErrorCallCount, "OnError() call count mismatch")
}
})
}
}
func allIndicesExcept(total int, excluded []uint64) []uint64 {
excludeMap := make(map[uint64]bool)
for _, idx := range excluded {
excludeMap[idx] = true
}
var result []uint64
for i := range total {
if !excludeMap[uint64(i)] {
result = append(result, uint64(i))
}
}
return result
}
// TestColumnsNotStored tests the columnsNotStored method.
func TestColumnsNotStored(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096*2
cases := []struct {
name string
count int
stored []uint64 // Column indices marked as stored
expected []uint64 // Expected column indices in returned result
}{
// Empty cases
{
name: "EmptyInput",
count: 0,
stored: []uint64{},
expected: []uint64{},
},
// Single element cases
{
name: "SingleElement_NotStored",
count: 1,
stored: []uint64{},
expected: []uint64{0},
},
{
name: "SingleElement_Stored",
count: 1,
stored: []uint64{0},
expected: []uint64{},
},
// All not stored cases
{
name: "AllNotStored_FiveElements",
count: 5,
stored: []uint64{},
expected: []uint64{0, 1, 2, 3, 4},
},
// All stored cases
{
name: "AllStored",
count: 5,
stored: []uint64{0, 1, 2, 3, 4},
expected: []uint64{},
},
// Partial storage - beginning
{
name: "StoredAtBeginning",
count: 5,
stored: []uint64{0, 1},
expected: []uint64{2, 3, 4},
},
// Partial storage - end
{
name: "StoredAtEnd",
count: 5,
stored: []uint64{3, 4},
expected: []uint64{0, 1, 2},
},
// Partial storage - middle
{
name: "StoredInMiddle",
count: 5,
stored: []uint64{2},
expected: []uint64{0, 1, 3, 4},
},
// Partial storage - scattered
{
name: "StoredScattered",
count: 8,
stored: []uint64{1, 3, 5},
expected: []uint64{0, 2, 4, 6, 7},
},
// Alternating pattern
{
name: "AlternatingPattern",
count: 8,
stored: []uint64{0, 2, 4, 6},
expected: []uint64{1, 3, 5, 7},
},
// Consecutive stored
{
name: "ConsecutiveStored",
count: 10,
stored: []uint64{3, 4, 5, 6},
expected: []uint64{0, 1, 2, 7, 8, 9},
},
// Large slice cases
{
name: "LargeSlice_NoStored",
count: 64,
stored: []uint64{},
expected: allIndicesExcept(64, []uint64{}),
},
{
name: "LargeSlice_SingleStored",
count: 64,
stored: []uint64{32},
expected: allIndicesExcept(64, []uint64{32}),
},
}
slot := primitives.Slot(params.BeaconConfig().FuluForkEpoch) * params.BeaconConfig().SlotsPerEpoch
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
// Create test columns first to get the actual block root
var columns []blocks.RODataColumn
if tc.count > 0 {
columns = makeTestDataColumns(t, tc.count, [32]byte{}, 0)
}
// Get the actual block root from the first column (if any)
var blockRoot [32]byte
if len(columns) > 0 {
blockRoot = columns[0].BlockRoot()
}
// Setup storage
var store *filesystem.DataColumnStorage
if len(tc.stored) > 0 {
mocker, s := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
require.NoError(t, mocker.CreateFakeIndices(blockRoot, slot, tc.stored...))
store = s
} else {
store = filesystem.NewEphemeralDataColumnStorage(t)
}
// Create store instance
lazilyPersistentStore := &LazilyPersistentStoreColumn{
store: store,
}
// Execute
result := lazilyPersistentStore.columnsNotStored(columns)
// Assert count
require.Equal(t, len(tc.expected), len(result),
fmt.Sprintf("expected %d columns, got %d", len(tc.expected), len(result)))
// Verify that no stored columns are in the result
if len(tc.stored) > 0 {
resultIndices := make(map[uint64]bool)
for _, col := range result {
resultIndices[col.Index] = true
}
for _, storedIdx := range tc.stored {
require.Equal(t, false, resultIndices[storedIdx],
fmt.Sprintf("stored column index %d should not be in result", storedIdx))
}
}
// If expectedIndices is specified, verify the exact column indices in order
if len(tc.expected) > 0 && len(tc.stored) == 0 {
// Only check exact order for non-stored cases (where we know they stay in same order)
for i, expectedIdx := range tc.expected {
require.Equal(t, columns[expectedIdx].Index, result[i].Index,
fmt.Sprintf("column %d: expected index %d, got %d", i, columns[expectedIdx].Index, result[i].Index))
}
}
// Verify optimization: if nothing stored, should return original slice
if len(tc.stored) == 0 && tc.count > 0 {
require.Equal(t, &columns[0], &result[0],
"when no columns stored, should return original slice (same pointer)")
}
// Verify optimization: if some stored, result should use in-place shifting
if len(tc.stored) > 0 && len(tc.expected) > 0 && tc.count > 0 {
require.Equal(t, cap(columns), cap(result),
"result should be in-place shifted from original (same capacity)")
}
})
}
}

View File

@@ -1,40 +0,0 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
)
// Bisector describes a type that takes a set of RODataColumns via the Bisect method
// and returns a BisectionIterator that returns batches of those columns to be
// verified together.
type Bisector interface {
// Bisect initializes the BisectionIterator and returns the result.
Bisect([]blocks.RODataColumn) (BisectionIterator, error)
}
// BisectionIterator describes an iterator that returns groups of columns to verify.
// It is up to the bisector implementation to decide how to chunk up the columns,
// whether by block, by peer, or any other strategy. For example, backfill implements
// a bisector that keeps track of the source of each sidecar by peer, and groups
// sidecars by peer in the Next method, enabling it to track which peers, out of all
// the peers contributing to a batch, gave us bad data.
// When a batch fails, the OnError method should be used so that the bisector can
// keep track of the failed groups of columns and, e.g., apply that knowledge in peer scoring.
// The same column will be returned multiple times by Next; first as part of a larger batch,
// and again as part of a more fine grained batch if there was an error in the large batch.
// For example, first as part of a batch of all columns spanning peers, and then again
// as part of a batch of columns from a single peer if some column in the larger batch
// failed verification.
type BisectionIterator interface {
// Next returns the next group of columns to verify.
// When the iteration is complete, Next should return (nil, io.EOF).
Next() ([]blocks.RODataColumn, error)
// OnError should be called when verification of a group of columns obtained via Next() fails.
OnError(error)
// Error can be used at the end of the iteration to get a single error result. It will return
// nil if OnError was never called, or an error of the implementer's choosing representing the set
// of errors seen during iteration. For instance when bisecting from columns spanning peers to columns
// from a single peer, the broader error could be dropped, and then the more specific error
// (for a single peer's response) returned after bisecting to it.
Error() error
}
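To make the contract above concrete, here is a minimal sketch of a peer-grouping Bisector, assuming it sits alongside these interfaces and that the caller can attribute every column to the peer that served it; the real backfill bisector may track provenance and chunking differently.

package das

import (
	"errors"
	"io"

	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
)

// peerColumnBisector groups a failed batch by the peer that served each column, so the
// caller can re-verify per peer and learn who sent bad data. The peerOf lookup and the
// string peer key are assumptions made for this sketch.
type peerColumnBisector struct {
	peerOf func(blocks.RODataColumn) string
	errs   []error
}

// Bisect splits the batch into one group per serving peer, preserving first-seen order.
func (p *peerColumnBisector) Bisect(columns []blocks.RODataColumn) (BisectionIterator, error) {
	groups := make(map[string][]blocks.RODataColumn)
	order := make([]string, 0)
	for _, col := range columns {
		key := p.peerOf(col)
		if _, seen := groups[key]; !seen {
			order = append(order, key)
		}
		groups[key] = append(groups[key], col)
	}
	return &peerGroupIterator{groups: groups, order: order, parent: p}, nil
}

type peerGroupIterator struct {
	groups map[string][]blocks.RODataColumn
	order  []string
	next   int
	parent *peerColumnBisector
}

// Next yields one peer's columns at a time and io.EOF once every peer has been seen.
func (it *peerGroupIterator) Next() ([]blocks.RODataColumn, error) {
	if it.next >= len(it.order) {
		return nil, io.EOF
	}
	key := it.order[it.next]
	it.next++
	return it.groups[key], nil
}

// OnError records a failed per-peer group; a real implementation could also downscore
// the offending peer here.
func (it *peerGroupIterator) OnError(err error) {
	it.parent.errs = append(it.parent.errs, err)
}

// Error reports the failures collected via OnError, or nil if every group verified.
func (it *peerGroupIterator) Error() error {
	return errors.Join(it.parent.errs...)
}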

View File

@@ -76,7 +76,7 @@ func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
e.scs = make([]*blocks.ROBlob, maxBlobsPerBlock)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(errDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
}
e.scs[sc.Index] = sc
return nil
@@ -88,7 +88,7 @@ func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte) ([]blocks.ROBlob, error) {
func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
count := len(kc)
if e.diskSummary.AllAvailable(count) {
return nil, nil

View File

@@ -34,8 +34,7 @@ type filterTestCaseSetupFunc func(t *testing.T) (*blobCacheEntry, [][]byte, []bl
func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
return func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
shouldRetain := func(s primitives.Slot) bool { return true }
commits, err := commitmentsToCheck(blk, shouldRetain)
commits, err := commitmentsToCheck(blk, blk.Block().Slot())
require.NoError(t, err)
entry := &blobCacheEntry{}
if len(onDisk) > 0 {
@@ -114,7 +113,7 @@ func TestFilterDiskSummary(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
got, err := entry.filter([32]byte{}, commits, 100)
require.NoError(t, err)
require.Equal(t, len(expected), len(got))
})
@@ -196,7 +195,7 @@ func TestFilter(t *testing.T) {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
got, err := entry.filter([32]byte{}, commits, 100)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return

View File

@@ -1,7 +1,9 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"bytes"
"slices"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -9,9 +11,9 @@ import (
)
var (
errDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
ErrDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errColumnIndexTooHigh = errors.New("column index too high")
errCommitmentMismatch = errors.New("commitment of sidecar in cache did not match block commitment")
errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
errMissingSidecar = errors.New("no sidecar in cache for block commitment")
)
@@ -23,80 +25,107 @@ func newDataColumnCache() *dataColumnCache {
return &dataColumnCache{entries: make(map[cacheKey]*dataColumnCacheEntry)}
}
// entry returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) entry(key cacheKey) *dataColumnCacheEntry {
// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) ensure(key cacheKey) *dataColumnCacheEntry {
entry, ok := c.entries[key]
if !ok {
entry = newDataColumnCacheEntry(key.root)
entry = &dataColumnCacheEntry{}
c.entries[key] = entry
}
return entry
}
func (c *dataColumnCache) cleanup(blks []blocks.ROBlock) {
for _, block := range blks {
key := cacheKey{slot: block.Block().Slot(), root: block.Root()}
c.delete(key)
}
}
// delete removes the cache entry from the cache.
func (c *dataColumnCache) delete(key cacheKey) {
delete(c.entries, key)
}
func (c *dataColumnCache) stash(sc blocks.RODataColumn) error {
key := cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
entry := c.entry(key)
return entry.stash(sc)
}
func newDataColumnCacheEntry(root [32]byte) *dataColumnCacheEntry {
return &dataColumnCacheEntry{scs: make(map[uint64]blocks.RODataColumn), root: &root}
}
// dataColumnCacheEntry is the set of RODataColumns for a given block.
// dataColumnCacheEntry holds a fixed-length cache of DataColumnSidecars.
type dataColumnCacheEntry struct {
root *[32]byte
scs map[uint64]blocks.RODataColumn
scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.DataColumnStorageSummary
}
func (e *dataColumnCacheEntry) setDiskSummary(sum filesystem.DataColumnStorageSummary) {
e.diskSummary = sum
}
// stash adds an item to the in-memory cache of DataColumnSidecars.
// stash will return an error if the given data column Index is out of bounds.
// It will overwrite any existing entry for the same index.
func (e *dataColumnCacheEntry) stash(sc blocks.RODataColumn) error {
// Only the first DataColumnSidecar of a given Index will be kept in the cache.
// stash will return an error if the given data column is already in the cache, or if the Index is out of bounds.
func (e *dataColumnCacheEntry) stash(sc *blocks.RODataColumn) error {
if sc.Index >= fieldparams.NumberOfColumns {
return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitments)
}
e.scs[sc.Index] = sc
return nil
}
// append appends the requested root and indices from the cache to the given sidecars slice and returns the result.
// If any of the given indices are missing, an error will be returned and the sidecars slice will be unchanged.
func (e *dataColumnCacheEntry) append(sidecars []blocks.RODataColumn, indices peerdas.ColumnIndices) ([]blocks.RODataColumn, error) {
needed := indices.ToMap()
for col := range needed {
_, ok := e.scs[col]
if !ok {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", e.root, col)
func (e *dataColumnCacheEntry) filter(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
nonEmptyIndices := commitmentsArray.nonEmptyIndices()
if e.diskSummary.AllAvailable(nonEmptyIndices) {
return nil, nil
}
commitmentsCount := commitmentsArray.count()
sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)
for i := range nonEmptyIndices {
if e.diskSummary.HasIndex(i) {
continue
}
if e.scs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !sliceBytesEqual(commitmentsArray[i], e.scs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitments, commitmentsArray[i])
}
sidecars = append(sidecars, *e.scs[i])
}
// Loop twice so we can avoid touching the slice if any of the blobs are missing.
for col := range needed {
sidecars = append(sidecars, e.scs[col])
}
return sidecars, nil
}
// IndicesNotStored filters the list of indices to only include those that are not found in the storage summary.
func IndicesNotStored(sum filesystem.DataColumnStorageSummary, indices peerdas.ColumnIndices) peerdas.ColumnIndices {
indices = indices.Copy()
for col := range indices {
if sum.HasIndex(col) {
indices.Unset(col)
// safeCommitmentsArray is a fixed size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
count := 0
for i := range s {
if s[i] != nil {
count++
}
}
return indices
return count
}
// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
columns := make(map[uint64]bool)
for i := range s {
if s[i] != nil {
columns[uint64(i)] = true
}
}
return columns
}
func sliceBytesEqual(a, b [][]byte) bool {
return slices.EqualFunc(a, b, bytes.Equal)
}

View File

@@ -1,10 +1,8 @@
package das
import (
"slices"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -15,105 +13,124 @@ import (
func TestEnsureDeleteSetDiskSummary(t *testing.T) {
c := newDataColumnCache()
key := cacheKey{}
entry := c.entry(key)
require.Equal(t, 0, len(entry.scs))
entry := c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
nonDupe := c.entry(key)
require.Equal(t, entry, nonDupe) // same pointer
expect, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
require.NoError(t, entry.stash(expect[0]))
require.Equal(t, 1, len(entry.scs))
cols, err := nonDupe.append([]blocks.RODataColumn{}, peerdas.NewColumnIndicesFromSlice([]uint64{expect[0].Index}))
require.NoError(t, err)
require.DeepEqual(t, expect[0], cols[0])
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true})
entry.setDiskSummary(diskSummary)
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{diskSummary: diskSummary}, *entry)
c.delete(key)
entry = c.entry(key)
require.Equal(t, 0, len(entry.scs))
require.NotEqual(t, entry, nonDupe) // different pointer
entry = c.ensure(key)
require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
}
func TestStash(t *testing.T) {
t.Run("Index too high", func(t *testing.T) {
columns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
var entry dataColumnCacheEntry
err := entry.stash(columns[0])
err := entry.stash(&roDataColumns[0])
require.NotNil(t, err)
})
t.Run("Nominal and already existing", func(t *testing.T) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
entry := newDataColumnCacheEntry(roDataColumns[0].BlockRoot())
err := entry.stash(roDataColumns[0])
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
require.NoError(t, err)
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
require.NoError(t, entry.stash(roDataColumns[0]))
// stash simply replaces duplicate values now
require.DeepEqual(t, roDataColumns[0], entry.scs[1])
err = entry.stash(&roDataColumns[0])
require.NotNil(t, err)
})
}
func TestAppendDataColumns(t *testing.T) {
func TestFilterDataColumns(t *testing.T) {
t.Run("All available", func(t *testing.T) {
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
notStored := IndicesNotStored(sum, peerdas.NewColumnIndicesFromSlice([]uint64{1, 3}))
actual, err := newDataColumnCacheEntry([32]byte{}).append([]blocks.RODataColumn{}, notStored)
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
require.IsNil(t, actual)
})
t.Run("Some scs missing", func(t *testing.T) {
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
notStored := IndicesNotStored(sum, peerdas.NewColumnIndicesFromSlice([]uint64{1}))
actual, err := newDataColumnCacheEntry([32]byte{}).append([]blocks.RODataColumn{}, notStored)
require.Equal(t, 0, len(actual))
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})
dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}
_, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Commitments not equal", func(t *testing.T) {
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[1] = &roDataColumns[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}
_, err := dataColumnCacheEntry.filter(roDataColumns[0].BlockRoot(), &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Nominal", func(t *testing.T) {
indices := peerdas.NewColumnIndicesFromSlice([]uint64{1, 3})
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 3, KzgCommitments: [][]byte{[]byte{3}}}})
scs := map[uint64]blocks.RODataColumn{
3: expected[0],
}
sum := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
entry := dataColumnCacheEntry{scs: scs}
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[3] = &expected[0]
actual, err := entry.append([]blocks.RODataColumn{}, IndicesNotStored(sum, indices))
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter(expected[0].BlockRoot(), &commitmentsArray)
require.NoError(t, err)
require.DeepEqual(t, expected, actual)
})
}
t.Run("Append does not mutate the input", func(t *testing.T) {
indices := peerdas.NewColumnIndicesFromSlice([]uint64{1, 2})
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 0, KzgCommitments: [][]byte{[]byte{1}}},
{Index: 1, KzgCommitments: [][]byte{[]byte{2}}},
{Index: 2, KzgCommitments: [][]byte{[]byte{3}}},
})
func TestCount(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
require.Equal(t, 2, s.count())
}
scs := map[uint64]blocks.RODataColumn{
1: expected[1],
2: expected[2],
}
entry := dataColumnCacheEntry{scs: scs}
func TestNonEmptyIndices(t *testing.T) {
s := safeCommitmentsArray{nil, [][]byte{[]byte{10}}, nil, [][]byte{[]byte{20}}}
actual := s.nonEmptyIndices()
require.DeepEqual(t, map[uint64]bool{1: true, 3: true}, actual)
}
original := []blocks.RODataColumn{expected[0]}
actual, err := entry.append(original, indices)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
slices.SortFunc(actual, func(i, j blocks.RODataColumn) int {
return int(i.Index) - int(j.Index)
})
for i := range expected {
require.Equal(t, expected[i].Index, actual[i].Index)
}
require.Equal(t, 1, len(original))
func TestSliceBytesEqual(t *testing.T) {
t.Run("Different lengths", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
require.Equal(t, false, sliceBytesEqual(a, b))
})
t.Run("Same length but different content", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 7}}
require.Equal(t, false, sliceBytesEqual(a, b))
})
t.Run("Equal slices", func(t *testing.T) {
a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
require.Equal(t, true, sliceBytesEqual(a, b))
})
}

View File

@@ -7,13 +7,13 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// AvailabilityChecker is the minimum interface needed to check if data is available for a block.
// By convention there is a concept of an AvailabilityStore that implements a method to persist
// blobs or data columns to prepare for Availability checking, but since those methods are different
// for different forms of blob data, they are not included in the interface.
type AvailabilityChecker interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error
// AvailabilityStore describes a component that can verify and save sidecars for a given block, and confirm previously
// verified and saved sidecars.
// Persist guarantees that the sidecar will be available to perform a DA check
// for the life of the beacon node process.
// IsDataAvailable guarantees that all blobs committed to in the block have been
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
// RetentionChecker is a callback that determines whether blobs at the given slot are within the retention period.
type RetentionChecker func(primitives.Slot) bool

View File

@@ -1,5 +0,0 @@
package das
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "das")

View File

@@ -9,20 +9,16 @@ import (
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error
ErrIsDataAvailable error
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityChecker = &MockAvailabilityStore{}
var _ AvailabilityStore = &MockAvailabilityStore{}
// IsDataAvailable satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b ...blocks.ROBlock) error {
if m.ErrIsDataAvailable != nil {
return m.ErrIsDataAvailable
}
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
if m.VerifyAvailabilityCallback != nil {
return m.VerifyAvailabilityCallback(ctx, current, b...)
return m.VerifyAvailabilityCallback(ctx, current, b)
}
return nil
}
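// Illustrative sketch (not part of the diff): a test that only needs the availability check
// to fail can set ErrIsDataAvailable instead of wiring a callback. The error text is
// arbitrary and the sketch assumes the standard-library errors package is available.
func exampleForcedFailure(ctx context.Context) error {
	store := &MockAvailabilityStore{ErrIsDataAvailable: errors.New("data columns missing")}
	// The canned error is returned for any block set, before VerifyAvailabilityCallback is consulted.
	return store.IsDataAvailable(ctx, primitives.Slot(100))
}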

View File

@@ -1,135 +0,0 @@
package das
import (
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// NeedSpan represents the need for a resource over a span of slots.
type NeedSpan struct {
Begin primitives.Slot
End primitives.Slot
}
// At returns whether blocks/blobs/columns are needed At the given slot.
func (n NeedSpan) At(slot primitives.Slot) bool {
return slot >= n.Begin && slot < n.End
}
// CurrentNeeds fields can be used to check whether the given resource type is needed
// at a given slot. The values are based on the current slot, so this value shouldn't
// be retained / reused across slots.
type CurrentNeeds struct {
Block NeedSpan
Blob NeedSpan
Col NeedSpan
}
// SyncNeeds holds configuration and state for determining what data is needed
// at any given slot during backfill based on the current slot.
type SyncNeeds struct {
current func() primitives.Slot
deneb primitives.Slot
fulu primitives.Slot
oldestSlotFlagPtr *primitives.Slot
validOldestSlotPtr *primitives.Slot
blockRetention primitives.Epoch
blobRetentionFlag primitives.Epoch
blobRetention primitives.Epoch
colRetention primitives.Epoch
}
type CurrentSlotter func() primitives.Slot
func NewSyncNeeds(current CurrentSlotter, oldestSlotFlagPtr *primitives.Slot, blobRetentionFlag primitives.Epoch) (SyncNeeds, error) {
deneb, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
if err != nil {
return SyncNeeds{}, errors.Wrap(err, "deneb fork slot")
}
fuluBoundary := min(params.BeaconConfig().FuluForkEpoch, slots.MaxSafeEpoch())
fulu, err := slots.EpochStart(fuluBoundary)
if err != nil {
return SyncNeeds{}, errors.Wrap(err, "fulu fork slot")
}
sn := SyncNeeds{
current: func() primitives.Slot { return current() },
deneb: deneb,
fulu: fulu,
blobRetentionFlag: blobRetentionFlag,
}
// We apply the --blob-retention-epochs flag to both blob and column retention.
sn.blobRetention = max(sn.blobRetentionFlag, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
sn.colRetention = max(sn.blobRetentionFlag, params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
// Override spec minimum block retention with user-provided flag only if it is lower than the spec minimum.
sn.blockRetention = primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oldestSlotFlagPtr != nil {
oldestEpoch := slots.ToEpoch(*oldestSlotFlagPtr)
if oldestEpoch < sn.blockRetention {
sn.validOldestSlotPtr = oldestSlotFlagPtr
} else {
log.WithField("backfill-oldest-slot", *oldestSlotFlagPtr).
WithField("specMinSlot", syncEpochOffset(current(), sn.blockRetention)).
Warn("Ignoring user-specified slot > MIN_EPOCHS_FOR_BLOCK_REQUESTS.")
}
}
return sn, nil
}
// Currently is the main callback given to the different parts of backfill to determine
// what resources are needed at a given slot. It assumes the current instance of SyncNeeds
// is the result of calling initialize.
func (n SyncNeeds) Currently() CurrentNeeds {
current := n.current()
c := CurrentNeeds{
Block: n.blockSpan(current),
Blob: NeedSpan{Begin: syncEpochOffset(current, n.blobRetention), End: n.fulu},
Col: NeedSpan{Begin: syncEpochOffset(current, n.colRetention), End: current},
}
// Adjust the minimums forward to the slots where the sidecar types were introduced
c.Blob.Begin = max(c.Blob.Begin, n.deneb)
c.Col.Begin = max(c.Col.Begin, n.fulu)
return c
}
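// Illustrative sketch (not part of the diff): callers are expected to take one CurrentNeeds
// snapshot and test slots against it, rather than re-deriving retention windows for every
// slot. The helper name below is hypothetical.
func needsAtSlot(n SyncNeeds, slot primitives.Slot) (blockNeeded, blobNeeded, colNeeded bool) {
	needs := n.Currently()
	return needs.Block.At(slot), needs.Blob.At(slot), needs.Col.At(slot)
}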
func (n SyncNeeds) blockSpan(current primitives.Slot) NeedSpan {
if n.validOldestSlotPtr != nil { // assumes validation done in initialize()
return NeedSpan{Begin: *n.validOldestSlotPtr, End: current}
}
return NeedSpan{Begin: syncEpochOffset(current, n.blockRetention), End: current}
}
func (n SyncNeeds) BlobRetentionChecker() RetentionChecker {
return func(slot primitives.Slot) bool {
current := n.Currently()
return current.Blob.At(slot)
}
}
func (n SyncNeeds) DataColumnRetentionChecker() RetentionChecker {
return func(slot primitives.Slot) bool {
current := n.Currently()
return current.Col.At(slot)
}
}
// syncEpochOffset subtracts a number of epochs as slots from the current slot, with underflow checks.
// It returns slot 1 if the result would be 0 or underflow. It doesn't return slot 0 because the
// genesis block needs to be specially synced (it doesn't have a valid signature).
func syncEpochOffset(current primitives.Slot, subtract primitives.Epoch) primitives.Slot {
minEpoch := min(subtract, slots.MaxSafeEpoch())
// compute slot offset - offset is a number of slots to go back from current (not an absolute slot).
offset := slots.UnsafeEpochStart(minEpoch)
// Underflow protection: slot 0 is the genesis block, so its signature is invalid.
// To avoid rejecting a batch that would include it, we clamp the minimum backfill slot to 1.
if offset >= current {
return 1
}
return current - offset
}
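// Worked examples (illustrative, assuming mainnet's 32 slots per epoch):
//   syncEpochOffset(10_000, 5) -> 10_000 - 5*32 = 9_840
//   syncEpochOffset(32, 1)     -> offset equals current, clamped to slot 1
//   syncEpochOffset(50, 1_000) -> offset exceeds current, clamped to slot 1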

View File

@@ -1,675 +0,0 @@
package das
import (
"fmt"
"testing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// TestNeedSpanAt tests the needSpan.at() method for range checking.
func TestNeedSpanAt(t *testing.T) {
cases := []struct {
name string
span NeedSpan
slots []primitives.Slot
expected bool
}{
{
name: "within bounds",
span: NeedSpan{Begin: 100, End: 200},
slots: []primitives.Slot{101, 150, 199},
expected: true,
},
{
name: "before begin / at end boundary (exclusive)",
span: NeedSpan{Begin: 100, End: 200},
slots: []primitives.Slot{99, 200, 201},
expected: false,
},
{
name: "empty span (begin == end)",
span: NeedSpan{Begin: 100, End: 100},
slots: []primitives.Slot{100},
expected: false,
},
{
name: "slot 0 with span starting at 0",
span: NeedSpan{Begin: 0, End: 100},
slots: []primitives.Slot{0},
expected: true,
},
}
for _, tc := range cases {
for _, sl := range tc.slots {
t.Run(fmt.Sprintf("%s at slot %d", tc.name, sl), func(t *testing.T) {
result := tc.span.At(sl)
require.Equal(t, tc.expected, result)
})
}
}
}
// TestSyncEpochOffset tests the syncEpochOffset helper function.
func TestSyncEpochOffset(t *testing.T) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
cases := []struct {
name string
current primitives.Slot
subtract primitives.Epoch
expected primitives.Slot
}{
{
name: "typical offset - 5 epochs back",
current: primitives.Slot(10000),
subtract: 5,
expected: primitives.Slot(10000 - 5*slotsPerEpoch),
},
{
name: "zero subtract returns current",
current: primitives.Slot(5000),
subtract: 0,
expected: primitives.Slot(5000),
},
{
name: "subtract 1 epoch from mid-range slot",
current: primitives.Slot(1000),
subtract: 1,
expected: primitives.Slot(1000 - slotsPerEpoch),
},
{
name: "offset equals current - underflow protection",
current: primitives.Slot(slotsPerEpoch),
subtract: 1,
expected: 1,
},
{
name: "offset exceeds current - underflow protection",
current: primitives.Slot(50),
subtract: 1000,
expected: 1,
},
{
name: "current very close to 0",
current: primitives.Slot(10),
subtract: 1,
expected: 1,
},
{
name: "subtract MaxSafeEpoch",
current: primitives.Slot(1000000),
subtract: slots.MaxSafeEpoch(),
expected: 1, // underflow protection
},
{
name: "result exactly at slot 1",
current: primitives.Slot(1 + slotsPerEpoch),
subtract: 1,
expected: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result := syncEpochOffset(tc.current, tc.subtract)
require.Equal(t, tc.expected, result)
})
}
}
// TestSyncNeedsInitialize tests the syncNeeds.initialize() method.
func TestSyncNeedsInitialize(t *testing.T) {
params.SetupTestConfigCleanup(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
minBlobEpochs := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
minColEpochs := params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest
currentSlot := primitives.Slot(10000)
currentFunc := func() primitives.Slot { return currentSlot }
cases := []struct {
invalidOldestFlag bool
expectValidOldest bool
oldestSlotFlagPtr *primitives.Slot
blobRetentionFlag primitives.Epoch
expectedBlob primitives.Epoch
expectedCol primitives.Epoch
name string
input SyncNeeds
}{
{
name: "basic initialization with no flags",
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
blobRetentionFlag: 0,
},
{
name: "blob retention flag less than spec minimum",
blobRetentionFlag: minBlobEpochs - 1,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "blob retention flag greater than spec minimum",
blobRetentionFlag: minBlobEpochs + 10,
expectValidOldest: false,
expectedBlob: minBlobEpochs + 10,
expectedCol: minBlobEpochs + 10,
},
{
name: "oldestSlotFlagPtr is nil",
blobRetentionFlag: 0,
oldestSlotFlagPtr: nil,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "valid oldestSlotFlagPtr (earlier than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(10)
return &slot
}(),
expectValidOldest: true,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "invalid oldestSlotFlagPtr (later than spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
// Make it way past the spec minimum
slot := currentSlot - primitives.Slot(params.BeaconConfig().MinEpochsForBlockRequests-1)*slotsPerEpoch
return &slot
}(),
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
invalidOldestFlag: true,
},
{
name: "oldestSlotFlagPtr at boundary (exactly at spec minimum)",
blobRetentionFlag: 0,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := currentSlot - primitives.Slot(params.BeaconConfig().MinEpochsForBlockRequests)*slotsPerEpoch
return &slot
}(),
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
invalidOldestFlag: true,
},
{
name: "both blob retention flag and oldest slot set",
blobRetentionFlag: minBlobEpochs + 5,
oldestSlotFlagPtr: func() *primitives.Slot {
slot := primitives.Slot(100)
return &slot
}(),
expectValidOldest: true,
expectedBlob: minBlobEpochs + 5,
expectedCol: minBlobEpochs + 5,
},
{
name: "zero blob retention uses spec minimum",
blobRetentionFlag: 0,
expectValidOldest: false,
expectedBlob: minBlobEpochs,
expectedCol: minColEpochs,
},
{
name: "large blob retention value",
blobRetentionFlag: 5000,
expectValidOldest: false,
expectedBlob: 5000,
expectedCol: 5000,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
result, err := NewSyncNeeds(currentFunc, tc.oldestSlotFlagPtr, tc.blobRetentionFlag)
require.NoError(t, err)
// Check that current, deneb, fulu are set correctly
require.Equal(t, currentSlot, result.current())
// Check retention calculations
require.Equal(t, tc.expectedBlob, result.blobRetention)
require.Equal(t, tc.expectedCol, result.colRetention)
if tc.invalidOldestFlag {
require.IsNil(t, result.validOldestSlotPtr)
} else {
require.Equal(t, tc.oldestSlotFlagPtr, result.validOldestSlotPtr)
}
// Check blockRetention is always spec minimum
require.Equal(t, primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests), result.blockRetention)
})
}
}
// TestSyncNeedsBlockSpan tests the syncNeeds.blockSpan() method.
func TestSyncNeedsBlockSpan(t *testing.T) {
params.SetupTestConfigCleanup(t)
minBlockEpochs := params.BeaconConfig().MinEpochsForBlockRequests
cases := []struct {
name string
validOldest *primitives.Slot
blockRetention primitives.Epoch
current primitives.Slot
expectedBegin primitives.Slot
expectedEnd primitives.Slot
}{
{
name: "with validOldestSlotPtr set",
validOldest: func() *primitives.Slot { s := primitives.Slot(500); return &s }(),
blockRetention: primitives.Epoch(minBlockEpochs),
current: 10000,
expectedBegin: 500,
expectedEnd: 10000,
},
{
name: "without validOldestSlotPtr (nil)",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 10000,
expectedBegin: syncEpochOffset(10000, primitives.Epoch(minBlockEpochs)),
expectedEnd: 10000,
},
{
name: "very low current slot",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 100,
expectedBegin: 1, // underflow protection
expectedEnd: 100,
},
{
name: "very high current slot",
validOldest: nil,
blockRetention: primitives.Epoch(minBlockEpochs),
current: 1000000,
expectedBegin: syncEpochOffset(1000000, primitives.Epoch(minBlockEpochs)),
expectedEnd: 1000000,
},
{
name: "validOldestSlotPtr at boundary value",
validOldest: func() *primitives.Slot { s := primitives.Slot(1); return &s }(),
blockRetention: primitives.Epoch(minBlockEpochs),
current: 5000,
expectedBegin: 1,
expectedEnd: 5000,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
validOldestSlotPtr: tc.validOldest,
blockRetention: tc.blockRetention,
}
result := sn.blockSpan(tc.current)
require.Equal(t, tc.expectedBegin, result.Begin)
require.Equal(t, tc.expectedEnd, result.End)
})
}
}
// TestSyncNeedsCurrently tests the syncNeeds.currently() method.
func TestSyncNeedsCurrently(t *testing.T) {
params.SetupTestConfigCleanup(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
denebSlot := primitives.Slot(1000)
fuluSlot := primitives.Slot(2000)
cases := []struct {
name string
current primitives.Slot
blobRetention primitives.Epoch
colRetention primitives.Epoch
blockRetention primitives.Epoch
validOldest *primitives.Slot
// Expected block span
expectBlockBegin primitives.Slot
expectBlockEnd primitives.Slot
// Expected blob span
expectBlobBegin primitives.Slot
expectBlobEnd primitives.Slot
// Expected column span
expectColBegin primitives.Slot
expectColEnd primitives.Slot
}{
{
name: "pre-Deneb - only blocks needed",
current: 500,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(500, 5),
expectBlockEnd: 500,
expectBlobBegin: denebSlot, // adjusted to deneb
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // adjusted to fulu
expectColEnd: 500,
},
{
name: "between Deneb and Fulu - blocks and blobs needed",
current: 1500,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(1500, 5),
expectBlockEnd: 1500,
expectBlobBegin: max(syncEpochOffset(1500, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // adjusted to fulu
expectColEnd: 1500,
},
{
name: "post-Fulu - all resources needed",
current: 3000,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(3000, 5),
expectBlockEnd: 3000,
expectBlobBegin: max(syncEpochOffset(3000, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(3000, 10), fuluSlot),
expectColEnd: 3000,
},
{
name: "exactly at Deneb boundary",
current: denebSlot,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot, 5),
expectBlockEnd: denebSlot,
expectBlobBegin: denebSlot,
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot,
},
{
name: "exactly at Fulu boundary",
current: fuluSlot,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot, 5),
expectBlockEnd: fuluSlot,
expectBlobBegin: max(syncEpochOffset(fuluSlot, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: fuluSlot,
},
{
name: "small retention periods",
current: 5000,
blobRetention: 1,
colRetention: 2,
blockRetention: 1,
validOldest: nil,
expectBlockBegin: syncEpochOffset(5000, 1),
expectBlockEnd: 5000,
expectBlobBegin: max(syncEpochOffset(5000, 1), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(5000, 2), fuluSlot),
expectColEnd: 5000,
},
{
name: "large retention periods",
current: 10000,
blobRetention: 100,
colRetention: 100,
blockRetention: 50,
validOldest: nil,
expectBlockBegin: syncEpochOffset(10000, 50),
expectBlockEnd: 10000,
expectBlobBegin: max(syncEpochOffset(10000, 100), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(10000, 100), fuluSlot),
expectColEnd: 10000,
},
{
name: "with validOldestSlotPtr for blocks",
current: 8000,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: func() *primitives.Slot { s := primitives.Slot(100); return &s }(),
expectBlockBegin: 100,
expectBlockEnd: 8000,
expectBlobBegin: max(syncEpochOffset(8000, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(8000, 10), fuluSlot),
expectColEnd: 8000,
},
{
name: "retention approaching current slot",
current: primitives.Slot(2000 + 5*slotsPerEpoch),
blobRetention: 5,
colRetention: 5,
blockRetention: 3,
validOldest: nil,
expectBlockBegin: syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 3),
expectBlockEnd: primitives.Slot(2000 + 5*slotsPerEpoch),
expectBlobBegin: max(syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 5), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: max(syncEpochOffset(primitives.Slot(2000+5*slotsPerEpoch), 5), fuluSlot),
expectColEnd: primitives.Slot(2000 + 5*slotsPerEpoch),
},
{
name: "current just after Deneb",
current: denebSlot + 10,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot+10, 5),
expectBlockEnd: denebSlot + 10,
expectBlobBegin: denebSlot,
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot + 10,
},
{
name: "current just after Fulu",
current: fuluSlot + 10,
blobRetention: 10,
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot+10, 5),
expectBlockEnd: fuluSlot + 10,
expectBlobBegin: max(syncEpochOffset(fuluSlot+10, 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: fuluSlot + 10,
},
{
name: "blob retention would start before Deneb",
current: denebSlot + primitives.Slot(5*slotsPerEpoch),
blobRetention: 100, // very large retention
colRetention: 10,
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(denebSlot+primitives.Slot(5*slotsPerEpoch), 5),
expectBlockEnd: denebSlot + primitives.Slot(5*slotsPerEpoch),
expectBlobBegin: denebSlot, // clamped to deneb
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot,
expectColEnd: denebSlot + primitives.Slot(5*slotsPerEpoch),
},
{
name: "column retention would start before Fulu",
current: fuluSlot + primitives.Slot(5*slotsPerEpoch),
blobRetention: 10,
colRetention: 100, // very large retention
blockRetention: 5,
validOldest: nil,
expectBlockBegin: syncEpochOffset(fuluSlot+primitives.Slot(5*slotsPerEpoch), 5),
expectBlockEnd: fuluSlot + primitives.Slot(5*slotsPerEpoch),
expectBlobBegin: max(syncEpochOffset(fuluSlot+primitives.Slot(5*slotsPerEpoch), 10), denebSlot),
expectBlobEnd: fuluSlot,
expectColBegin: fuluSlot, // clamped to fulu
expectColEnd: fuluSlot + primitives.Slot(5*slotsPerEpoch),
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
current: func() primitives.Slot { return tc.current },
deneb: denebSlot,
fulu: fuluSlot,
validOldestSlotPtr: tc.validOldest,
blockRetention: tc.blockRetention,
blobRetention: tc.blobRetention,
colRetention: tc.colRetention,
}
result := sn.Currently()
// Verify block span
require.Equal(t, tc.expectBlockBegin, result.Block.Begin,
"block.begin mismatch")
require.Equal(t, tc.expectBlockEnd, result.Block.End,
"block.end mismatch")
// Verify blob span
require.Equal(t, tc.expectBlobBegin, result.Blob.Begin,
"blob.begin mismatch")
require.Equal(t, tc.expectBlobEnd, result.Blob.End,
"blob.end mismatch")
// Verify column span
require.Equal(t, tc.expectColBegin, result.Col.Begin,
"col.begin mismatch")
require.Equal(t, tc.expectColEnd, result.Col.End,
"col.end mismatch")
})
}
}
// TestCurrentNeedsIntegration verifies the complete currentNeeds workflow.
func TestCurrentNeedsIntegration(t *testing.T) {
params.SetupTestConfigCleanup(t)
denebSlot := primitives.Slot(1000)
fuluSlot := primitives.Slot(2000)
cases := []struct {
name string
current primitives.Slot
blobRetention primitives.Epoch
colRetention primitives.Epoch
testSlots []primitives.Slot
expectBlockAt []bool
expectBlobAt []bool
expectColAt []bool
}{
{
name: "pre-Deneb slot - only blocks",
current: 500,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{100, 250, 499, 500, 1000, 2000},
expectBlockAt: []bool{true, true, true, false, false, false},
expectBlobAt: []bool{false, false, false, false, true, false},
expectColAt: []bool{false, false, false, false, false, false},
},
{
name: "between Deneb and Fulu - blocks and blobs",
current: 1500,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{500, 1000, 1200, 1499, 1500, 2000},
expectBlockAt: []bool{true, true, true, true, false, false},
expectBlobAt: []bool{false, false, true, true, true, false},
expectColAt: []bool{false, false, false, false, false, false},
},
{
name: "post-Fulu - all resources",
current: 3000,
blobRetention: 10,
colRetention: 10,
testSlots: []primitives.Slot{1000, 1500, 2000, 2500, 2999, 3000},
expectBlockAt: []bool{true, true, true, true, true, false},
expectBlobAt: []bool{false, false, false, false, false, false},
expectColAt: []bool{false, false, false, false, true, false},
},
{
name: "at Deneb boundary",
current: denebSlot,
blobRetention: 5,
colRetention: 5,
testSlots: []primitives.Slot{500, 999, 1000, 1500, 2000},
expectBlockAt: []bool{true, true, false, false, false},
expectBlobAt: []bool{false, false, true, true, false},
expectColAt: []bool{false, false, false, false, false},
},
{
name: "at Fulu boundary",
current: fuluSlot,
blobRetention: 5,
colRetention: 5,
testSlots: []primitives.Slot{1000, 1500, 1999, 2000, 2001},
expectBlockAt: []bool{true, true, true, false, false},
expectBlobAt: []bool{false, false, true, false, false},
expectColAt: []bool{false, false, false, false, false},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
sn := SyncNeeds{
current: func() primitives.Slot { return tc.current },
deneb: denebSlot,
fulu: fuluSlot,
blockRetention: 100,
blobRetention: tc.blobRetention,
colRetention: tc.colRetention,
}
cn := sn.Currently()
// Verify block.end == current
require.Equal(t, tc.current, cn.Block.End, "block.end should equal current")
// Verify blob.end == fulu
require.Equal(t, fuluSlot, cn.Blob.End, "blob.end should equal fulu")
// Verify col.end == current
require.Equal(t, tc.current, cn.Col.End, "col.end should equal current")
// Test each slot
for i, slot := range tc.testSlots {
require.Equal(t, tc.expectBlockAt[i], cn.Block.At(slot),
"block.at(%d) mismatch at index %d", slot, i)
require.Equal(t, tc.expectBlobAt[i], cn.Blob.At(slot),
"blob.at(%d) mismatch at index %d", slot, i)
require.Equal(t, tc.expectColAt[i], cn.Col.At(slot),
"col.at(%d) mismatch at index %d", slot, i)
}
})
}
}

View File

@@ -270,7 +270,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Check the number of columns is the one expected.
// While implementing this, we expect the number of columns won't change.
// If it does, we will need to create a new version of the data column sidecar file.
if fieldparams.NumberOfColumns != mandatoryNumberOfColumns {
if params.BeaconConfig().NumberOfColumns != mandatoryNumberOfColumns {
return errWrongNumberOfColumns
}
@@ -964,7 +964,8 @@ func (si *storageIndices) set(dataColumnIndex uint64, position uint8) error {
// pullChan pulls data column sidecars from the input channel until it is empty.
func pullChan(inputRoDataColumns chan []blocks.VerifiedRODataColumn) []blocks.VerifiedRODataColumn {
dataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, fieldparams.NumberOfColumns)
numberOfColumns := params.BeaconConfig().NumberOfColumns
dataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, numberOfColumns)
for {
select {

View File

@@ -117,6 +117,8 @@ func (sc *dataColumnStorageSummaryCache) HighestEpoch() primitives.Epoch {
// set updates the cache.
func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent) error {
numberOfColumns := params.BeaconConfig().NumberOfColumns
sc.mu.Lock()
defer sc.mu.Unlock()
@@ -125,7 +127,7 @@ func (sc *dataColumnStorageSummaryCache) set(dataColumnsIdent DataColumnsIdent)
count := uint64(0)
for _, index := range dataColumnsIdent.Indices {
if index >= fieldparams.NumberOfColumns {
if index >= numberOfColumns {
return errDataColumnIndexOutOfBounds
}

View File

@@ -7,6 +7,7 @@ import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -87,6 +88,22 @@ func TestWarmCache(t *testing.T) {
}
func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("wrong numbers of columns", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = 0
params.OverrideBeaconConfig(cfg)
params.SetupTestConfigCleanup(t)
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{{Index: 12}, {Index: 1_000_000}, {Index: 48}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errWrongNumberOfColumns)
})
t.Run("one of the column index is too large", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,

View File

@@ -128,9 +128,10 @@ type NoHeadAccessDatabase interface {
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateLiteSupernode(ctx context.Context, enabled bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

View File

@@ -27,9 +27,6 @@ go_library(
"p2p.go",
"schema.go",
"state.go",
"state_diff.go",
"state_diff_cache.go",
"state_diff_helpers.go",
"state_summary.go",
"state_summary_cache.go",
"utils.go",
@@ -44,12 +41,10 @@ go_library(
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/hdiff:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -58,7 +53,6 @@ go_library(
"//encoding/ssz/detect:go_default_library",
"//genesis:go_default_library",
"//io/file:go_default_library",
"//math:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
@@ -104,7 +98,6 @@ go_test(
"migration_block_slot_index_test.go",
"migration_state_validators_test.go",
"p2p_test.go",
"state_diff_test.go",
"state_summary_test.go",
"state_test.go",
"utils_test.go",
@@ -118,7 +111,6 @@ go_test(
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -128,7 +120,6 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//math:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -142,7 +133,6 @@ go_test(
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",

View File

@@ -146,9 +146,9 @@ func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailab
return nil
}
// UpdateSubscribedToAllDataSubnets updates whether the node is subscribed to all data subnets (supernode mode).
// This is a one-way flag - once set to true, it cannot be reverted to false.
// Returns the previous state.
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.
func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateSubscribedToAllDataSubnets")
defer span.End()
@@ -156,11 +156,13 @@ func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed
result := false
if !subscribed {
if err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
// Retrieve the subscribe all data subnets flag.
bytes := bucket.Get(subscribeAllDataSubnetsKey)
if len(bytes) == 0 {
return nil
@@ -179,6 +181,7 @@ func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed
}
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
@@ -200,3 +203,61 @@ func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed
return result, nil
}
// UpdateLiteSupernode updates the "lite-supernode" status in the database
// only if `enabled` is `true`.
// It returns the previous lite-supernode status.
func (s *Store) UpdateLiteSupernode(ctx context.Context, enabled bool) (bool, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateLiteSupernode")
defer span.End()
result := false
if !enabled {
if err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
// Retrieve the lite-supernode flag.
bytes := bucket.Get(liteSupernodeKey)
if len(bytes) == 0 {
return nil
}
if bytes[0] == 1 {
result = true
}
return nil
}); err != nil {
return false, err
}
return result, nil
}
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
bytes := bucket.Get(liteSupernodeKey)
if len(bytes) != 0 && bytes[0] == 1 {
result = true
}
if err := bucket.Put(liteSupernodeKey, []byte{1}); err != nil {
return errors.Wrap(err, "put lite-supernode")
}
return nil
}); err != nil {
return false, err
}
return result, nil
}
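// One-way semantics in brief (descriptive note, not in the original): a call with
// enabled=true persists the flag and returns the previous value, while a call with
// enabled=false only reads the stored value. Once set, the lite-supernode flag therefore
// stays set for the lifetime of the database, mirroring UpdateSubscribedToAllDataSubnets.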

View File

@@ -67,6 +67,28 @@ func getSubscriptionStatusFromDB(t *testing.T, db *Store) bool {
return subscribed
}
// getLiteSupernodeStatusFromDB reads the lite-supernode status directly from the database for testing purposes.
func getLiteSupernodeStatusFromDB(t *testing.T, db *Store) bool {
t.Helper()
var enabled bool
err := db.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
bytes := bucket.Get(liteSupernodeKey)
if len(bytes) != 0 && bytes[0] == 1 {
enabled = true
}
return nil
})
require.NoError(t, err)
return enabled
}
func TestUpdateCustodyInfo(t *testing.T) {
ctx := t.Context()
@@ -275,17 +297,6 @@ func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
require.Equal(t, false, stored)
})
t.Run("initial update with empty database - set to true", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getSubscriptionStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
@@ -300,7 +311,7 @@ func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
require.Equal(t, true, stored)
})
t.Run("update from true to true (no change)", func(t *testing.T) {
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
@@ -314,3 +325,57 @@ func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
require.Equal(t, true, stored)
})
}
func TestUpdateLiteSupernode(t *testing.T) {
ctx := context.Background()
t.Run("initial update with empty database - set to false", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateLiteSupernode(ctx, false)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, false, stored)
})
t.Run("initial update with empty database - set to true", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
prev, err := db.UpdateLiteSupernode(ctx, false)
require.NoError(t, err)
require.Equal(t, true, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("update from true to true (no change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
prev, err := db.UpdateLiteSupernode(ctx, true)
require.NoError(t, err)
require.Equal(t, true, prev)
stored := getLiteSupernodeStatusFromDB(t, db)
require.Equal(t, true, stored)
})
}

View File

@@ -2,13 +2,6 @@ package kv
import "bytes"
func hasPhase0Key(enc []byte) bool {
if len(phase0Key) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(phase0Key)], phase0Key)
}
// In order for an encoding to be Altair compatible, it must be prefixed with altair key.
func hasAltairKey(enc []byte) bool {
if len(altairKey) >= len(enc) {

View File

@@ -91,7 +91,6 @@ type Store struct {
blockCache *ristretto.Cache[string, interfaces.ReadOnlySignedBeaconBlock]
validatorEntryCache *ristretto.Cache[[]byte, *ethpb.Validator]
stateSummaryCache *stateSummaryCache
stateDiffCache *stateDiffCache
ctx context.Context
}
@@ -113,7 +112,6 @@ var Buckets = [][]byte{
lightClientUpdatesBucket,
lightClientBootstrapBucket,
lightClientSyncCommitteeBucket,
stateDiffBucket,
// Indices buckets.
blockSlotIndicesBucket,
stateSlotIndicesBucket,
@@ -203,14 +201,6 @@ func NewKVStore(ctx context.Context, dirPath string, opts ...KVStoreOption) (*St
return nil, err
}
if features.Get().EnableStateDiff {
sdCache, err := newStateDiffCache(kv)
if err != nil {
return nil, err
}
kv.stateDiffCache = sdCache
}
return kv, nil
}

View File

@@ -216,10 +216,6 @@ func TestStore_LightClientUpdate_CanSaveRetrieve(t *testing.T) {
db := setupDB(t)
ctx := t.Context()
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
update, err := createUpdate(t, testVersion)
require.NoError(t, err)
@@ -576,10 +572,6 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
require.IsNil(t, retrievedBootstrap)
})
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().VersionToForkEpochMap()[testVersion]) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)

View File

@@ -16,7 +16,6 @@ var (
stateValidatorsBucket = []byte("state-validators")
feeRecipientBucket = []byte("fee-recipient")
registrationBucket = []byte("registration")
stateDiffBucket = []byte("state-diff")
// Light Client Updates Bucket
lightClientUpdatesBucket = []byte("light-client-updates")
@@ -47,7 +46,6 @@ var (
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.
phase0Key = []byte("phase0")
altairKey = []byte("altair")
bellatrixKey = []byte("merge")
bellatrixBlindKey = []byte("blind-bellatrix")
@@ -79,4 +77,5 @@ var (
groupCountKey = []byte("group-count")
earliestAvailableSlotKey = []byte("earliest-available-slot")
subscribeAllDataSubnetsKey = []byte("subscribe-all-data-subnets")
liteSupernodeKey = []byte("lite-supernode")
)

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
statenative "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/features"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/genesis"
@@ -27,17 +28,6 @@ func (s *Store) State(ctx context.Context, blockRoot [32]byte) (state.BeaconStat
ctx, span := trace.StartSpan(ctx, "BeaconDB.State")
defer span.End()
startTime := time.Now()
// If state diff is enabled, we get the state from the state-diff db.
if features.Get().EnableStateDiff {
st, err := s.getStateUsingStateDiff(ctx, blockRoot)
if err != nil {
return nil, err
}
stateReadingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return st, nil
}
enc, err := s.stateBytes(ctx, blockRoot)
if err != nil {
return nil, err
@@ -427,16 +417,6 @@ func (s *Store) storeValidatorEntriesSeparately(ctx context.Context, tx *bolt.Tx
func (s *Store) HasState(ctx context.Context, blockRoot [32]byte) bool {
_, span := trace.StartSpan(ctx, "BeaconDB.HasState")
defer span.End()
if features.Get().EnableStateDiff {
hasState, err := s.hasStateUsingStateDiff(ctx, blockRoot)
if err != nil {
log.WithError(err).Error("error checking state existence using state-diff")
return false
}
return hasState
}
hasState := false
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(stateBucket)
@@ -490,7 +470,7 @@ func (s *Store) DeleteState(ctx context.Context, blockRoot [32]byte) error {
return nil
}
slot, err := s.SlotByBlockRoot(ctx, blockRoot)
slot, err := s.slotByBlockRoot(ctx, tx, blockRoot[:])
if err != nil {
return err
}
@@ -832,45 +812,50 @@ func (s *Store) stateBytes(ctx context.Context, blockRoot [32]byte) ([]byte, err
return dst, err
}
// SlotByBlockRoot returns the slot of the input block root, based on state summary, block, or state.
// Check for state is only done if state diff feature is not enabled.
func (s *Store) SlotByBlockRoot(ctx context.Context, blockRoot [32]byte) (primitives.Slot, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SlotByBlockRoot")
// slotByBlockRoot retrieves the corresponding slot of the input block root.
func (s *Store) slotByBlockRoot(ctx context.Context, tx *bolt.Tx, blockRoot []byte) (primitives.Slot, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.slotByBlockRoot")
defer span.End()
// check state summary first
stateSummary, err := s.StateSummary(ctx, blockRoot)
if err != nil {
return 0, err
}
if stateSummary != nil {
return stateSummary.Slot, nil
}
bkt := tx.Bucket(stateSummaryBucket)
enc := bkt.Get(blockRoot)
// fall back to block if state summary is not found
blk, err := s.Block(ctx, blockRoot)
if err != nil {
return 0, err
}
if blk != nil && !blk.IsNil() {
return blk.Block().Slot(), nil
}
if enc == nil {
// Fall back to check the block.
bkt := tx.Bucket(blocksBucket)
enc := bkt.Get(blockRoot)
// fall back to state, only if state diff feature is not enabled
if features.Get().EnableStateDiff {
return 0, errors.New("neither state summary nor block found")
if enc == nil {
// Fallback and check the state.
bkt = tx.Bucket(stateBucket)
enc = bkt.Get(blockRoot)
if enc == nil {
return 0, errors.New("state enc can't be nil")
}
// no need to construct the validator entries as it is not used here.
s, err := s.unmarshalState(ctx, enc, nil)
if err != nil {
return 0, errors.Wrap(err, "could not unmarshal state")
}
if s == nil || s.IsNil() {
return 0, errors.New("state can't be nil")
}
return s.Slot(), nil
}
b, err := unmarshalBlock(ctx, enc)
if err != nil {
return 0, errors.Wrap(err, "could not unmarshal block")
}
if err := blocks.BeaconBlockIsNil(b); err != nil {
return 0, err
}
return b.Block().Slot(), nil
}
st, err := s.State(ctx, blockRoot)
if err != nil {
return 0, err
stateSummary := &ethpb.StateSummary{}
if err := decode(ctx, enc, stateSummary); err != nil {
return 0, errors.Wrap(err, "could not unmarshal state summary")
}
if st != nil && !st.IsNil() {
return st.Slot(), nil
}
// neither state summary, block nor state found
return 0, errors.New("neither state summary, block nor state found")
return stateSummary.Slot, nil
}
// HighestSlotStatesBelow returns the states with the highest slot below the input slot
@@ -1046,30 +1031,3 @@ func (s *Store) isStateValidatorMigrationOver() (bool, error) {
}
return returnFlag, nil
}
func (s *Store) getStateUsingStateDiff(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error) {
slot, err := s.SlotByBlockRoot(ctx, blockRoot)
if err != nil {
return nil, err
}
st, err := s.stateByDiff(ctx, slot)
if err != nil {
return nil, err
}
if st == nil || st.IsNil() {
return nil, errors.New("state not found")
}
return st, nil
}
func (s *Store) hasStateUsingStateDiff(ctx context.Context, blockRoot [32]byte) (bool, error) {
slot, err := s.SlotByBlockRoot(ctx, blockRoot)
if err != nil {
return false, err
}
stateLvl := computeLevel(s.getOffset(), slot)
return stateLvl != -1, nil
}

View File

@@ -1,232 +0,0 @@
package kv
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/consensus-types/hdiff"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
)
const (
stateSuffix = "_s"
validatorSuffix = "_v"
balancesSuffix = "_b"
)
/*
We use a level-based approach to save state diffs. Each level corresponds to an exponent of 2 (exponents[lvl]).
The data at level 0 is saved every 2**exponent[0] slots and always contains a full state snapshot that is used as a base for the delta saved at other levels.
*/
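// Worked example (illustrative; the exponents below are hypothetical, the real values come
// from flags.Get().StateDiffExponents): with exponents [16, 12, 8] and offset 0,
//   slot 65_536 -> level 0: full snapshot (65_536 % 2^16 == 0)
//   slot 4_096  -> level 1: stored as a diff against the anchor on the previous level's
//                  boundary below it (here the level-0 snapshot at slot 0)
//   slot 256    -> level 2 diff
//   slot 100    -> on no boundary, so computeLevel returns -1 and the state is not saved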
// saveStateByDiff takes a state and decides between saving a full state snapshot or a diff.
func (s *Store) saveStateByDiff(ctx context.Context, st state.ReadOnlyBeaconState) error {
_, span := trace.StartSpan(ctx, "BeaconDB.saveStateByDiff")
defer span.End()
if st == nil {
return errors.New("state is nil")
}
slot := st.Slot()
offset := s.getOffset()
if uint64(slot) < offset {
return ErrSlotBeforeOffset
}
// Find the level to save the state.
lvl := computeLevel(offset, slot)
if lvl == -1 {
return nil
}
// Save full state if level is 0.
if lvl == 0 {
return s.saveFullSnapshot(st)
}
// Get anchor state to compute the diff from.
anchorState, err := s.getAnchorState(offset, lvl, slot)
if err != nil {
return err
}
return s.saveHdiff(lvl, anchorState, st)
}
// stateByDiff retrieves the full state for a given slot.
func (s *Store) stateByDiff(ctx context.Context, slot primitives.Slot) (state.BeaconState, error) {
offset := s.getOffset()
if uint64(slot) < offset {
return nil, ErrSlotBeforeOffset
}
snapshot, diffChain, err := s.getBaseAndDiffChain(offset, slot)
if err != nil {
return nil, err
}
for _, diff := range diffChain {
if err := ctx.Err(); err != nil {
return nil, err
}
snapshot, err = hdiff.ApplyDiff(ctx, snapshot, diff)
if err != nil {
return nil, err
}
}
return snapshot, nil
}
// saveHdiff computes the diff between the anchor state and the current state and saves it to the database.
// This function needs to be called only with the latest finalized state, and in a strictly increasing slot order.
func (s *Store) saveHdiff(lvl int, anchor, st state.ReadOnlyBeaconState) error {
slot := uint64(st.Slot())
key := makeKeyForStateDiffTree(lvl, slot)
diff, err := hdiff.Diff(anchor, st)
if err != nil {
return err
}
err = s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
buf := append(key, stateSuffix...)
if err := bucket.Put(buf, diff.StateDiff); err != nil {
return err
}
buf = append(key, validatorSuffix...)
if err := bucket.Put(buf, diff.ValidatorDiffs); err != nil {
return err
}
buf = append(key, balancesSuffix...)
if err := bucket.Put(buf, diff.BalancesDiff); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
// Save the full state to the cache (if not the last level).
if lvl != len(flags.Get().StateDiffExponents)-1 {
err = s.stateDiffCache.setAnchor(lvl, st)
if err != nil {
return err
}
}
return nil
}
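// Note (inferred reasoning, not in the original): the strictly-increasing-slot requirement
// documented above matters because saveHdiff refreshes the per-level anchor cache as it goes;
// saving an older finalized state would overwrite a newer cached anchor, and later diffs could
// then be computed against the wrong base.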
// SaveFullSnapshot saves the full level 0 state snapshot to the database.
func (s *Store) saveFullSnapshot(st state.ReadOnlyBeaconState) error {
slot := uint64(st.Slot())
key := makeKeyForStateDiffTree(0, slot)
stateBytes, err := st.MarshalSSZ()
if err != nil {
return err
}
// add version key to value
enc, err := addKey(st.Version(), stateBytes)
if err != nil {
return err
}
err = s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
if err := bucket.Put(key, enc); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
// Save the full state to the cache, and invalidate other levels.
s.stateDiffCache.clearAnchors()
err = s.stateDiffCache.setAnchor(0, st)
if err != nil {
return err
}
return nil
}
func (s *Store) getDiff(lvl int, slot uint64) (hdiff.HdiffBytes, error) {
key := makeKeyForStateDiffTree(lvl, slot)
var stateDiff []byte
var validatorDiff []byte
var balancesDiff []byte
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
buf := append(key, stateSuffix...)
stateDiff = bucket.Get(buf)
if stateDiff == nil {
return errors.New("state diff not found")
}
buf = append(key, validatorSuffix...)
validatorDiff = bucket.Get(buf)
if validatorDiff == nil {
return errors.New("validator diff not found")
}
buf = append(key, balancesSuffix...)
balancesDiff = bucket.Get(buf)
if balancesDiff == nil {
return errors.New("balances diff not found")
}
return nil
})
if err != nil {
return hdiff.HdiffBytes{}, err
}
return hdiff.HdiffBytes{
StateDiff: stateDiff,
ValidatorDiffs: validatorDiff,
BalancesDiff: balancesDiff,
}, nil
}
func (s *Store) getFullSnapshot(slot uint64) (state.BeaconState, error) {
key := makeKeyForStateDiffTree(0, slot)
var enc []byte
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bolt.ErrBucketNotFound
}
enc = bucket.Get(key)
if enc == nil {
return errors.New("state not found")
}
return nil
})
if err != nil {
return nil, err
}
return decodeStateSnapshot(enc)
}

View File

@@ -1,77 +0,0 @@
package kv
import (
"encoding/binary"
"errors"
"sync"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"go.etcd.io/bbolt"
)
type stateDiffCache struct {
sync.RWMutex
anchors []state.ReadOnlyBeaconState
offset uint64
}
func newStateDiffCache(s *Store) (*stateDiffCache, error) {
var offset uint64
err := s.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get([]byte("offset"))
if offsetBytes == nil {
return errors.New("state diff cache: offset not found")
}
offset = binary.LittleEndian.Uint64(offsetBytes)
return nil
})
if err != nil {
return nil, err
}
return &stateDiffCache{
anchors: make([]state.ReadOnlyBeaconState, len(flags.Get().StateDiffExponents)-1), // -1 because last level doesn't need to be cached
offset: offset,
}, nil
}
func (c *stateDiffCache) getAnchor(level int) state.ReadOnlyBeaconState {
c.RLock()
defer c.RUnlock()
return c.anchors[level]
}
func (c *stateDiffCache) setAnchor(level int, anchor state.ReadOnlyBeaconState) error {
c.Lock()
defer c.Unlock()
if level >= len(c.anchors) || level < 0 {
return errors.New("state diff cache: anchor level out of range")
}
c.anchors[level] = anchor
return nil
}
func (c *stateDiffCache) getOffset() uint64 {
c.RLock()
defer c.RUnlock()
return c.offset
}
func (c *stateDiffCache) setOffset(offset uint64) {
c.Lock()
defer c.Unlock()
c.offset = offset
}
func (c *stateDiffCache) clearAnchors() {
c.Lock()
defer c.Unlock()
c.anchors = make([]state.ReadOnlyBeaconState, len(flags.Get().StateDiffExponents)-1) // -1 because last level doesn't need to be cached
}

View File

@@ -1,250 +0,0 @@
package kv
import (
"context"
"encoding/binary"
"errors"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
statenative "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/consensus-types/hdiff"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/math"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"go.etcd.io/bbolt"
)
var (
offsetKey = []byte("offset")
ErrSlotBeforeOffset = errors.New("slot is before root offset")
)
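// makeKeyForStateDiffTree builds the database key for a node of the diff tree:
// byte 0 holds the level, bytes 1-8 hold the little-endian slot, and the remaining
// bytes are left as zero padding.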
func makeKeyForStateDiffTree(level int, slot uint64) []byte {
buf := make([]byte, 16)
buf[0] = byte(level)
binary.LittleEndian.PutUint64(buf[1:], slot)
return buf
}
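// getAnchorState returns the state that a diff stored at the given level and slot
// is applied against. The anchor slot is derived from the previous level's span; the
// anchor is served from the cache when possible and otherwise reconstructed from the
// database and cached.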
func (s *Store) getAnchorState(offset uint64, lvl int, slot primitives.Slot) (anchor state.ReadOnlyBeaconState, err error) {
if lvl <= 0 || lvl > len(flags.Get().StateDiffExponents) {
return nil, errors.New("invalid value for level")
}
if uint64(slot) < offset {
return nil, ErrSlotBeforeOffset
}
relSlot := uint64(slot) - offset
prevExp := flags.Get().StateDiffExponents[lvl-1]
if prevExp < 2 || prevExp >= 64 {
return nil, fmt.Errorf("state diff exponent %d out of range for uint64", prevExp)
}
span := math.PowerOf2(uint64(prevExp))
anchorSlot := primitives.Slot(uint64(slot) - relSlot%span)
// anchorLvl can be [0, lvl-1]
anchorLvl := computeLevel(offset, anchorSlot)
if anchorLvl == -1 {
return nil, errors.New("could not compute anchor level")
}
// Check if we have the anchor in cache.
anchor = s.stateDiffCache.getAnchor(anchorLvl)
if anchor != nil {
return anchor, nil
}
// If not, load it from the database.
anchor, err = s.stateByDiff(context.Background(), anchorSlot)
if err != nil {
return nil, err
}
// Save it in the cache.
err = s.stateDiffCache.setAnchor(anchorLvl, anchor)
if err != nil {
return nil, err
}
return anchor, nil
}
// computeLevel computes the level of a slot in the diff tree. It returns -1 if the slot should not be stored in the tree.
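// For example, with exponents {21, 18, 16, 13, 11, 9, 5} and offset 0, slot 2**18*3
// maps to level 1 (divisible by 2**18 but not 2**21), while slot 2**18+1 falls on no
// boundary and yields -1.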
func computeLevel(offset uint64, slot primitives.Slot) int {
rel := uint64(slot) - offset
for i, exp := range flags.Get().StateDiffExponents {
if exp < 2 || exp >= 64 {
return -1
}
span := math.PowerOf2(uint64(exp))
if rel%span == 0 {
return i
}
}
// If rel isn't on any of the boundaries, the slot should not be saved in the tree.
return -1
}
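// setOffset persists the first slot covered by the diff tree and stores it in the
// cache. It fails if an offset has already been written.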
func (s *Store) setOffset(slot primitives.Slot) error {
err := s.db.Update(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get(offsetKey)
if offsetBytes != nil {
return fmt.Errorf("offset already set to %d", binary.LittleEndian.Uint64(offsetBytes))
}
offsetBytes = make([]byte, 8)
binary.LittleEndian.PutUint64(offsetBytes, uint64(slot))
if err := bucket.Put(offsetKey, offsetBytes); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
// Save the offset in the cache.
s.stateDiffCache.setOffset(uint64(slot))
return nil
}
func (s *Store) getOffset() uint64 {
return s.stateDiffCache.getOffset()
}
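// keyForSnapshot returns the fork-version prefix used to tag full state snapshots.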
func keyForSnapshot(v int) ([]byte, error) {
switch v {
case version.Fulu:
return fuluKey, nil
case version.Electra:
return ElectraKey, nil
case version.Deneb:
return denebKey, nil
case version.Capella:
return capellaKey, nil
case version.Bellatrix:
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
case version.Phase0:
return phase0Key, nil
default:
return nil, errors.New("unsupported fork")
}
}
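// addKey prepends the fork-version prefix to an SSZ-encoded state so the snapshot
// can later be decoded into the correct state type.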
func addKey(v int, bytes []byte) ([]byte, error) {
key, err := keyForSnapshot(v)
if err != nil {
return nil, err
}
enc := make([]byte, len(key)+len(bytes))
copy(enc, key)
copy(enc[len(key):], bytes)
return enc, nil
}
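// decodeStateSnapshot detects the fork-version prefix and unmarshals the remaining
// bytes into the matching beacon state type.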
func decodeStateSnapshot(enc []byte) (state.BeaconState, error) {
switch {
case hasFuluKey(enc):
var fuluState ethpb.BeaconStateFulu
if err := fuluState.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeFulu(&fuluState)
case HasElectraKey(enc):
var electraState ethpb.BeaconStateElectra
if err := electraState.UnmarshalSSZ(enc[len(ElectraKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeElectra(&electraState)
case hasDenebKey(enc):
var denebState ethpb.BeaconStateDeneb
if err := denebState.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeDeneb(&denebState)
case hasCapellaKey(enc):
var capellaState ethpb.BeaconStateCapella
if err := capellaState.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeCapella(&capellaState)
case hasBellatrixKey(enc):
var bellatrixState ethpb.BeaconStateBellatrix
if err := bellatrixState.UnmarshalSSZ(enc[len(bellatrixKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeBellatrix(&bellatrixState)
case hasAltairKey(enc):
var altairState ethpb.BeaconStateAltair
if err := altairState.UnmarshalSSZ(enc[len(altairKey):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafeAltair(&altairState)
case hasPhase0Key(enc):
var phase0State ethpb.BeaconState
if err := phase0State.UnmarshalSSZ(enc[len(phase0Key):]); err != nil {
return nil, err
}
return statenative.InitializeFromProtoUnsafePhase0(&phase0State)
default:
return nil, errors.New("unsupported fork")
}
}
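// getBaseAndDiffChain returns the level-0 snapshot anchoring the given slot together
// with the ordered chain of diffs that must be applied on top of it to reconstruct
// the state at that slot.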
func (s *Store) getBaseAndDiffChain(offset uint64, slot primitives.Slot) (state.BeaconState, []hdiff.HdiffBytes, error) {
if uint64(slot) < offset {
return nil, nil, ErrSlotBeforeOffset
}
rel := uint64(slot) - offset
lvl := computeLevel(offset, slot)
if lvl == -1 {
return nil, nil, errors.New("slot not in tree")
}
exponents := flags.Get().StateDiffExponents
baseSpan := math.PowerOf2(uint64(exponents[0]))
baseAnchorSlot := uint64(slot) - rel%baseSpan
type diffItem struct {
level int
slot uint64
}
var diffChainItems []diffItem
lastSeenAnchorSlot := baseAnchorSlot
for i, exp := range exponents[1 : lvl+1] {
span := math.PowerOf2(uint64(exp))
diffSlot := rel / span * span
if diffSlot == lastSeenAnchorSlot {
continue
}
diffChainItems = append(diffChainItems, diffItem{level: i + 1, slot: diffSlot + offset})
lastSeenAnchorSlot = diffSlot
}
baseSnapshot, err := s.getFullSnapshot(baseAnchorSlot)
if err != nil {
return nil, nil, err
}
diffChain := make([]hdiff.HdiffBytes, 0, len(diffChainItems))
for _, item := range diffChainItems {
diff, err := s.getDiff(item.level, item.slot)
if err != nil {
return nil, nil, err
}
diffChain = append(diffChain, diff)
}
return baseSnapshot, diffChain, nil
}

View File

@@ -1,662 +0,0 @@
package kv
import (
"context"
"encoding/binary"
"fmt"
"math/rand"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/math"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"go.etcd.io/bbolt"
)
func TestStateDiff_LoadOrInitOffset(t *testing.T) {
setDefaultStateDiffExponents()
db := setupDB(t)
err := setOffsetInDB(db, 10)
require.NoError(t, err)
offset := db.getOffset()
require.Equal(t, uint64(10), offset)
err = db.setOffset(10)
require.ErrorContains(t, "offset already set", err)
offset = db.getOffset()
require.Equal(t, uint64(10), offset)
}
func TestStateDiff_ComputeLevel(t *testing.T) {
db := setupDB(t)
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
offset := db.getOffset()
// 2 ** 21
lvl := computeLevel(offset, primitives.Slot(math.PowerOf2(21)))
require.Equal(t, 0, lvl)
// 2 ** 21 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(21)*3))
require.Equal(t, 0, lvl)
// 2 ** 18
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(18)))
require.Equal(t, 1, lvl)
// 2 ** 18 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(18)*3))
require.Equal(t, 1, lvl)
// 2 ** 16
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(16)))
require.Equal(t, 2, lvl)
// 2 ** 16 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(16)*3))
require.Equal(t, 2, lvl)
// 2 ** 13
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(13)))
require.Equal(t, 3, lvl)
// 2 ** 13 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(13)*3))
require.Equal(t, 3, lvl)
// 2 ** 11
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(11)))
require.Equal(t, 4, lvl)
// 2 ** 11 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(11)*3))
require.Equal(t, 4, lvl)
// 2 ** 9
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(9)))
require.Equal(t, 5, lvl)
// 2 ** 9 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(9)*3))
require.Equal(t, 5, lvl)
// 2 ** 5
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)))
require.Equal(t, 6, lvl)
// 2 ** 5 * 3
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)*3))
require.Equal(t, 6, lvl)
// 2 ** 7
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(7)))
require.Equal(t, 6, lvl)
// 2 ** 5 + 1
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)+1))
require.Equal(t, -1, lvl)
// 2 ** 5 + 16
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)+16))
require.Equal(t, -1, lvl)
// 2 ** 5 + 32
lvl = computeLevel(offset, primitives.Slot(math.PowerOf2(5)+32))
require.Equal(t, 6, lvl)
}
func TestStateDiff_SaveFullSnapshot(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
// Create state with slot 0
st, enc := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
err = db.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
s := bucket.Get(makeKeyForStateDiffTree(0, uint64(0)))
if s == nil {
return bbolt.ErrIncompatibleValue
}
require.DeepSSZEqual(t, enc, s)
return nil
})
require.NoError(t, err)
})
}
}
func TestStateDiff_SaveAndReadFullSnapshot(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), 0)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveDiff(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
// Create state with slot 2**21
slot := primitives.Slot(math.PowerOf2(21))
st, enc := createState(t, slot, v)
err := setOffsetInDB(db, uint64(slot))
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
err = db.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
s := bucket.Get(makeKeyForStateDiffTree(0, uint64(slot)))
if s == nil {
return bbolt.ErrIncompatibleValue
}
require.DeepSSZEqual(t, enc, s)
return nil
})
require.NoError(t, err)
// create state with slot 2**18 (+2**21)
slot = primitives.Slot(math.PowerOf2(18) + math.PowerOf2(21))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
key := makeKeyForStateDiffTree(1, uint64(slot))
err = db.db.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
buf := append(key, "_s"...)
s := bucket.Get(buf)
if s == nil {
return bbolt.ErrIncompatibleValue
}
buf = append(key, "_v"...)
v := bucket.Get(buf)
if v == nil {
return bbolt.ErrIncompatibleValue
}
buf = append(key, "_b"...)
b := bucket.Get(buf)
if b == nil {
return bbolt.ErrIncompatibleValue
}
return nil
})
require.NoError(t, err)
})
}
}
func TestStateDiff_SaveAndReadDiff(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(5))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveAndReadDiff_WithRepetitiveAnchorSlots(t *testing.T) {
globalFlags := flags.GlobalFlags{
StateDiffExponents: []int{20, 14, 10, 7, 5},
}
flags.Init(&globalFlags)
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
err := setOffsetInDB(db, 0)
st, _ := createState(t, 0, v)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(11))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot = primitives.Slot(math.PowerOf2(11) + math.PowerOf2(5))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveAndReadDiff_MultipleLevels(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(11))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
slot = primitives.Slot(math.PowerOf2(11) + math.PowerOf2(9))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err = db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err = st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err = readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
slot = primitives.Slot(math.PowerOf2(11) + math.PowerOf2(9) + math.PowerOf2(5))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err = db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err = st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err = readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_SaveAndReadDiffForkTransition(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All()[:len(version.All())-1] {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
st, _ := createState(t, 0, v)
err := setOffsetInDB(db, 0)
require.NoError(t, err)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(5))
st, _ = createState(t, slot, v+1)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
readSt, err := db.stateByDiff(context.Background(), slot)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
}
func TestStateDiff_OffsetCache(t *testing.T) {
setDefaultStateDiffExponents()
// test for slot numbers 0 and 1 for every version
for slotNum := range 2 {
for v := range version.All() {
t.Run(fmt.Sprintf("slotNum=%d,%s", slotNum, version.String(v)), func(t *testing.T) {
db := setupDB(t)
slot := primitives.Slot(slotNum)
err := setOffsetInDB(db, uint64(slot))
require.NoError(t, err)
st, _ := createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
offset := db.stateDiffCache.getOffset()
require.Equal(t, uint64(slotNum), offset)
slot2 := primitives.Slot(uint64(slotNum) + math.PowerOf2(uint64(flags.Get().StateDiffExponents[0])))
st2, _ := createState(t, slot2, v)
err = db.saveStateByDiff(context.Background(), st2)
require.NoError(t, err)
offset = db.stateDiffCache.getOffset()
require.Equal(t, uint64(slot), offset)
})
}
}
}
func TestStateDiff_AnchorCache(t *testing.T) {
setDefaultStateDiffExponents()
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
exponents := flags.Get().StateDiffExponents
localCache := make([]state.ReadOnlyBeaconState, len(exponents)-1)
db := setupDB(t)
err := setOffsetInDB(db, 0) // lvl 0
require.NoError(t, err)
// at first the cache should be empty
for i := 0; i < len(flags.Get().StateDiffExponents)-1; i++ {
anchor := db.stateDiffCache.getAnchor(i)
require.IsNil(t, anchor)
}
// add level 0
slot := primitives.Slot(0) // offset 0 is already set
st, _ := createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
localCache[0] = st
// level 0 should be the same
require.DeepEqual(t, localCache[0], db.stateDiffCache.getAnchor(0))
// rest of the cache should be nil
for i := 1; i < len(exponents)-1; i++ {
require.IsNil(t, db.stateDiffCache.getAnchor(i))
}
// skip last level as it does not get cached
for i := len(exponents) - 2; i > 0; i-- {
slot = primitives.Slot(math.PowerOf2(uint64(exponents[i])))
st, _ := createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
localCache[i] = st
// anchor cache must match local cache
for i := 0; i < len(exponents)-1; i++ {
if localCache[i] == nil {
require.IsNil(t, db.stateDiffCache.getAnchor(i))
continue
}
localSSZ, err := localCache[i].MarshalSSZ()
require.NoError(t, err)
anchorSSZ, err := db.stateDiffCache.getAnchor(i).MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, localSSZ, anchorSSZ)
}
}
// moving to a new tree should invalidate the cache except for level 0
twoTo21 := math.PowerOf2(21)
slot = primitives.Slot(twoTo21)
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
localCache = make([]state.ReadOnlyBeaconState, len(exponents)-1)
localCache[0] = st
// level 0 should be the same
require.DeepEqual(t, localCache[0], db.stateDiffCache.getAnchor(0))
// rest of the cache should be nil
for i := 1; i < len(exponents)-1; i++ {
require.IsNil(t, db.stateDiffCache.getAnchor(i))
}
})
}
}
func TestStateDiff_EncodingAndDecoding(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
st, enc := createState(t, 0, v) // this has addKey called inside
stDecoded, err := decodeStateSnapshot(enc)
require.NoError(t, err)
st1ssz, err := st.MarshalSSZ()
require.NoError(t, err)
st2ssz, err := stDecoded.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, st1ssz, st2ssz)
})
}
}
func createState(t *testing.T, slot primitives.Slot, v int) (state.ReadOnlyBeaconState, []byte) {
p := params.BeaconConfig()
var st state.BeaconState
var err error
switch v {
case version.Phase0:
st, err = util.NewBeaconState()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.GenesisForkVersion,
CurrentVersion: p.GenesisForkVersion,
Epoch: 0,
})
require.NoError(t, err)
case version.Altair:
st, err = util.NewBeaconStateAltair()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.GenesisForkVersion,
CurrentVersion: p.AltairForkVersion,
Epoch: p.AltairForkEpoch,
})
require.NoError(t, err)
case version.Bellatrix:
st, err = util.NewBeaconStateBellatrix()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.AltairForkVersion,
CurrentVersion: p.BellatrixForkVersion,
Epoch: p.BellatrixForkEpoch,
})
require.NoError(t, err)
case version.Capella:
st, err = util.NewBeaconStateCapella()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.BellatrixForkVersion,
CurrentVersion: p.CapellaForkVersion,
Epoch: p.CapellaForkEpoch,
})
require.NoError(t, err)
case version.Deneb:
st, err = util.NewBeaconStateDeneb()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.CapellaForkVersion,
CurrentVersion: p.DenebForkVersion,
Epoch: p.DenebForkEpoch,
})
require.NoError(t, err)
case version.Electra:
st, err = util.NewBeaconStateElectra()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.DenebForkVersion,
CurrentVersion: p.ElectraForkVersion,
Epoch: p.ElectraForkEpoch,
})
require.NoError(t, err)
case version.Fulu:
st, err = util.NewBeaconStateFulu()
require.NoError(t, err)
err = st.SetFork(&ethpb.Fork{
PreviousVersion: p.ElectraForkVersion,
CurrentVersion: p.FuluForkVersion,
Epoch: p.FuluForkEpoch,
})
require.NoError(t, err)
default:
t.Fatalf("unsupported version: %d", v)
}
err = st.SetSlot(slot)
require.NoError(t, err)
slashings := make([]uint64, 8192)
slashings[0] = uint64(rand.Intn(10))
err = st.SetSlashings(slashings)
require.NoError(t, err)
stssz, err := st.MarshalSSZ()
require.NoError(t, err)
enc, err := addKey(v, stssz)
require.NoError(t, err)
return st, enc
}
func setOffsetInDB(s *Store, offset uint64) error {
err := s.db.Update(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(stateDiffBucket)
if bucket == nil {
return bbolt.ErrBucketNotFound
}
offsetBytes := bucket.Get(offsetKey)
if offsetBytes != nil {
return fmt.Errorf("offset already set to %d", binary.LittleEndian.Uint64(offsetBytes))
}
offsetBytes = make([]byte, 8)
binary.LittleEndian.PutUint64(offsetBytes, offset)
if err := bucket.Put(offsetKey, offsetBytes); err != nil {
return err
}
return nil
})
if err != nil {
return err
}
sdCache, err := newStateDiffCache(s)
if err != nil {
return err
}
s.stateDiffCache = sdCache
return nil
}
func setDefaultStateDiffExponents() {
globalFlags := flags.GlobalFlags{
StateDiffExponents: []int{21, 18, 16, 13, 11, 9, 5},
}
flags.Init(&globalFlags)
}

View File

@@ -1,7 +1,6 @@
package kv
import (
"context"
"crypto/rand"
"encoding/binary"
mathRand "math/rand"
@@ -10,7 +9,6 @@ import (
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/features"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -19,14 +17,11 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/genesis"
"github.com/OffchainLabs/prysm/v7/math"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
bolt "go.etcd.io/bbolt"
)
@@ -1334,297 +1329,3 @@ func TestStore_CleanUpDirtyStates_NoOriginRoot(t *testing.T) {
}
}
}
func TestStore_CanSaveRetrieveStateUsingStateDiff(t *testing.T) {
t.Run("No state summary or block", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
readSt, err := db.State(context.Background(), [32]byte{'A'})
require.IsNil(t, readSt)
require.ErrorContains(t, "neither state summary nor block found", err)
})
t.Run("Slot not in tree", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 1, Root: r[:]} // slot 1 not in tree
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.ErrorContains(t, "slot not in tree", err)
require.IsNil(t, readSt)
})
t.Run("State not found", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 32, Root: r[:]} // slot 32 is in tree
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.ErrorContains(t, "state not found", err)
require.IsNil(t, readSt)
})
t.Run("Full state snapshot", func(t *testing.T) {
t.Run("using state summary", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: 0, Root: r[:]}
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
t.Run("using block", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
blk := util.NewBeaconBlock()
blk.Block.Slot = 0
signedBlk, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = db.SaveBlock(context.Background(), signedBlk)
require.NoError(t, err)
r, err := signedBlk.Block().HashTreeRoot()
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
})
t.Run("Diffed state", func(t *testing.T) {
t.Run("using state summary", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
exponents := flags.Get().StateDiffExponents
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot = primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])) + math.PowerOf2(uint64(exponents[len(exponents)-1])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: slot, Root: r[:]}
err = db.SaveStateSummary(context.Background(), ss)
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
t.Run("using block", func(t *testing.T) {
for v := range version.All() {
t.Run(version.String(v), func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
exponents := flags.Get().StateDiffExponents
err := setOffsetInDB(db, 0)
require.NoError(t, err)
st, _ := createState(t, 0, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot := primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
slot = primitives.Slot(math.PowerOf2(uint64(exponents[len(exponents)-2])) + math.PowerOf2(uint64(exponents[len(exponents)-1])))
st, _ = createState(t, slot, v)
err = db.saveStateByDiff(context.Background(), st)
require.NoError(t, err)
blk := util.NewBeaconBlock()
blk.Block.Slot = slot
signedBlk, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = db.SaveBlock(context.Background(), signedBlk)
require.NoError(t, err)
r, err := signedBlk.Block().HashTreeRoot()
require.NoError(t, err)
readSt, err := db.State(context.Background(), r)
require.NoError(t, err)
require.NotNil(t, readSt)
stSSZ, err := st.MarshalSSZ()
require.NoError(t, err)
readStSSZ, err := readSt.MarshalSSZ()
require.NoError(t, err)
require.DeepSSZEqual(t, stSSZ, readStSSZ)
})
}
})
})
}
func TestStore_HasStateUsingStateDiff(t *testing.T) {
t.Run("No state summary or block", func(t *testing.T) {
hook := logTest.NewGlobal()
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
hasSt := db.HasState(t.Context(), [32]byte{'A'})
require.Equal(t, false, hasSt)
require.LogsContain(t, hook, "neither state summary nor block found")
})
t.Run("slot in tree or not", func(t *testing.T) {
db := setupDB(t)
featCfg := &features.Flags{}
featCfg.EnableStateDiff = true
reset := features.InitWithReset(featCfg)
defer reset()
setDefaultStateDiffExponents()
err := setOffsetInDB(db, 0)
require.NoError(t, err)
testCases := []struct {
slot primitives.Slot
expected bool
}{
{slot: 1, expected: false}, // slot 1 not in tree
{slot: 32, expected: true}, // slot 32 in tree
{slot: 0, expected: true}, // slot 0 in tree
{slot: primitives.Slot(math.PowerOf2(21)), expected: true}, // slot in tree
{slot: primitives.Slot(math.PowerOf2(21) - 1), expected: false}, // slot not in tree
{slot: primitives.Slot(math.PowerOf2(22)), expected: true}, // slot in tree
}
for _, tc := range testCases {
r := bytesutil.ToBytes32([]byte{'A'})
ss := &ethpb.StateSummary{Slot: tc.slot, Root: r[:]}
err = db.SaveStateSummary(t.Context(), ss)
require.NoError(t, err)
hasSt := db.HasState(t.Context(), r)
require.Equal(t, tc.expected, hasSt)
}
})
}

View File

@@ -2683,7 +2683,7 @@ func createBlobServerV2(t *testing.T, numBlobs int, blobMasks []bool) *httptest.
Blob: []byte("0xblob"),
KzgProofs: []hexutil.Bytes{},
}
for range fieldparams.NumberOfColumns {
for j := 0; j < int(params.BeaconConfig().NumberOfColumns); j++ {
cellProof := make([]byte, 48)
blobAndCellProofs[i].KzgProofs = append(blobAndCellProofs[i].KzgProofs, cellProof)
}

View File

@@ -240,7 +240,7 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
if node.slot == epochStart {
return true, nil
}
if !features.Get().IgnoreUnviableAttestations {
if !features.Get().DisableLastEpochTargets {
// Allow any node from the checkpoint epoch - 1 to be viable.
nodeEpoch := slots.ToEpoch(node.slot)
if nodeEpoch+1 == cp.Epoch {
@@ -642,12 +642,8 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
if !ok || node == nil {
return [32]byte{}, ErrNilNode
}
if slots.ToEpoch(node.slot) >= epoch {
if node.parent != nil {
node = node.parent
} else {
return f.store.finalizedDependentRoot, nil
}
if slots.ToEpoch(node.slot) >= epoch && node.parent != nil {
node = node.parent
}
return node.root, nil
}

View File

@@ -212,9 +212,6 @@ func (s *Store) prune(ctx context.Context) error {
return nil
}
// Save the new finalized dependent root because it will be pruned
s.finalizedDependentRoot = finalizedNode.parent.root
// Prune nodeByRoot starting from root
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
return err

View File

@@ -465,7 +465,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
ctx := t.Context()
f := setup(1, 1)
// Insert a block in slot 32
state, blk, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
@@ -476,7 +475,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, [32]byte{})
// Insert a block in slot 33
state, blk1, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'b'}, blk.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk1))
@@ -490,7 +488,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, [32]byte{})
// Insert a block for the next epoch (missed slot 0), slot 65
// Insert a block for the next epoch (missed slot 0)
state, blk2, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'c'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
@@ -511,7 +509,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, blk1.Root())
// Insert a block at slot 66
state, blk3, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+2, [32]byte{'d'}, blk2.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk3))
@@ -536,11 +533,8 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
dependent, err = f.DependentRoot(1)
require.NoError(t, err)
require.Equal(t, [32]byte{}, dependent)
dependent, err = f.DependentRoot(2)
require.NoError(t, err)
require.Equal(t, blk1.Root(), dependent)
// Insert a block for the next epoch, slot 96 (descends from finalized at slot 33)
// Insert a block for next epoch (slot 0 present)
state, blk4, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch, [32]byte{'e'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk4))
@@ -557,7 +551,6 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, dependent, blk1.Root())
// Insert a block at slot 97
state, blk5, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'f'}, blk4.Root(), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk5))
@@ -607,16 +600,12 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, target, blk1.Root())
// Prune finalization, finalize the block at slot 96
// Prune finalization
s.finalizedCheckpoint.Root = blk4.Root()
require.NoError(t, s.prune(ctx))
target, err = f.TargetRootForEpoch(blk4.Root(), 3)
require.NoError(t, err)
require.Equal(t, blk4.Root(), target)
// Dependent root for the finalized block should be the root of the pruned block at slot 33
dependent, err = f.DependentRootForEpoch(blk4.Root(), 3)
require.NoError(t, err)
require.Equal(t, blk1.Root(), dependent)
}
func TestStore_DependentRootForEpoch(t *testing.T) {

View File

@@ -31,7 +31,6 @@ type Store struct {
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node

View File

@@ -1,95 +0,0 @@
# Graffiti Version Info Implementation
## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.
More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md
## Implementation
### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"
type GraffitiInfo struct {
mu sync.RWMutex
userGraffiti string // From --graffiti flag (set once at startup)
clCommit string // From version.GetCommitPrefix() helper function
elCode string // From engine_getClientVersionV1
elCommit string // From engine_getClientVersionV1
}
```
### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server.
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields.
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):
| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |
```go
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
	g.mu.RLock()
	defer g.mu.RUnlock()

	// prefix is a tiny helper (not shown) returning at most the first n chars of s.
	elCommit4, elCommit2 := prefix(g.elCommit, 4), prefix(g.elCommit, 2)
	clCommit4, clCommit2 := prefix(g.clCommit, 4), prefix(g.clCommit, 2)
	if g.elCode == "" {
		elCommit4, elCommit2 = "", ""
	}

	available := 32 - len(g.userGraffiti)
	var packed string
	switch {
	case available >= 12:
		packed = g.elCode + elCommit4 + clCode + clCommit4 + g.userGraffiti
	case available >= 8:
		packed = g.elCode + elCommit2 + clCode + clCommit2 + g.userGraffiti
	case available >= 4:
		packed = g.elCode + clCode + g.userGraffiti
	case available >= 2:
		packed = g.elCode + g.userGraffiti
	default:
		packed = g.userGraffiti
	}

	var out [32]byte
	copy(out[:], packed)
	return out
}
```
### Update Logic
Single testable function in execution service:
```go
func (s *Service) updateGraffitiInfo(ctx context.Context) {
versions, err := s.GetClientVersion(ctx)
if err != nil {
return // Keep last good value
}
if len(versions) == 1 {
s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
}
}
```
A goroutine calls this when `slot % 8 == 4` (four times per epoch, avoiding slot boundaries).
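A minimal sketch of that scheduling loop, assuming a per-slot tick channel is handed to the service (`pollClientVersion` and the channel wiring are illustrative, not the actual service code):
```go
// Illustrative sketch: drive updateGraffitiInfo from per-slot ticks.
func (s *Service) pollClientVersion(ctx context.Context, ticks <-chan primitives.Slot) {
	for {
		select {
		case <-ctx.Done():
			return
		case slot := <-ticks:
			if slot%8 == 4 { // four times per epoch, away from slot boundaries
				s.updateGraffitiInfo(ctx)
			}
		}
	}
}
```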
### File Changes Required
**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts first 4 hex chars from the git commit injected via Bazel ldflags at build time
**Modified:**
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
- `beacon-chain/execution/engine_client.go` - Add GetClientVersion() method that does engine call
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety); a sketch of one such test follows this list
- Unit test updateGraffitiInfo() with mocked engine client
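A minimal sketch of the GraffitiInfo packing test, reusing the struct fields and the `GE168dPR63afBob` example from the table above (assumes the test sits in the same package so the unexported fields are reachable):
```go
func TestGenerateGraffiti_PacksFullCommits(t *testing.T) {
	g := &GraffitiInfo{userGraffiti: "Bob", elCode: "GE", elCommit: "168d", clCommit: "63af"}
	got := g.GenerateGraffiti()
	want := "GE168dPR63afBob" // 29 bytes free, so both 4-char commit prefixes fit
	if string(got[:len(want)]) != want {
		t.Fatalf("GenerateGraffiti() = %q, want prefix %q", got, want)
	}
}
```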

View File

@@ -33,10 +33,6 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
params.OverrideBeaconConfig(cfg)
for _, testVersion := range version.All()[1:] {
if testVersion == version.Gloas {
// TODO(16027): Unskip light client tests for Gloas
continue
}
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)

View File

@@ -23,7 +23,6 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",

View File

@@ -26,7 +26,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache/depositsnapshot"
"github.com/OffchainLabs/prysm/v7/beacon-chain/das"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/kv"
@@ -117,7 +116,7 @@ type BeaconNode struct {
GenesisProviders []genesis.Provider
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
ClockWaiter startup.ClockWaiter
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
@@ -130,7 +129,6 @@ type BeaconNode struct {
slasherEnabled bool
lcStore *lightclient.Store
ConfigOptions []params.Option
SyncNeedsWaiter func() (das.SyncNeeds, error)
}
// New creates a new node instance, sets up configuration options, and registers
@@ -195,7 +193,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
params.LogDigests(params.BeaconConfig())
synchronizer := startup.NewClockSynchronizer()
beacon.ClockWaiter = synchronizer
beacon.clockWaiter = synchronizer
beacon.forkChoicer = doublylinkedtree.New()
depositAddress, err := execution.DepositContractAddress()
@@ -235,13 +233,12 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.lhsp = &verification.LazyHeadStateProvider{}
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.ClockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen, beacon.lhsp)
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen, beacon.lhsp)
beacon.BackfillOpts = append(
beacon.BackfillOpts,
backfill.WithVerifierWaiter(beacon.verifyInitWaiter),
backfill.WithInitSyncWaiter(initSyncWaiter(ctx, beacon.initialSyncComplete)),
backfill.WithSyncNeedsWaiter(beacon.SyncNeedsWaiter),
)
if err := registerServices(cliCtx, beacon, synchronizer, bfs); err != nil {
@@ -283,10 +280,7 @@ func configureBeacon(cliCtx *cli.Context) error {
return errors.Wrap(err, "could not configure beacon chain")
}
err := flags.ConfigureGlobalFlags(cliCtx)
if err != nil {
return errors.Wrap(err, "could not configure global flags")
}
flags.ConfigureGlobalFlags(cliCtx)
if err := configureChainConfig(cliCtx); err != nil {
return errors.Wrap(err, "could not configure chain config")
@@ -664,11 +658,9 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
DenyListCIDR: slice.SplitCommaSeparated(cliCtx.StringSlice(cmd.P2PDenyList.Name)),
IPColocationWhitelist: colocationWhitelist,
EnableUPnP: cliCtx.Bool(cmd.EnableUPnPFlag.Name),
EnableAutoNAT: cliCtx.Bool(cmd.EnableAutoNATFlag.Name),
StateNotifier: b,
DB: b.db,
StateGen: b.stateGen,
ClockWaiter: b.ClockWaiter,
ClockWaiter: b.clockWaiter,
})
if err != nil {
return err
@@ -710,7 +702,7 @@ func (b *BeaconNode) registerSlashingPoolService() error {
return err
}
s := slashings.NewPoolService(b.ctx, b.slashingsPool, slashings.WithElectraTimer(b.ClockWaiter, chainService.CurrentSlot))
s := slashings.NewPoolService(b.ctx, b.slashingsPool, slashings.WithElectraTimer(b.clockWaiter, chainService.CurrentSlot))
return b.services.RegisterService(s)
}
@@ -832,7 +824,7 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
regularsync.WithSlasherBlockHeadersFeed(b.slasherBlockHeadersFeed),
regularsync.WithReconstructor(web3Service),
regularsync.WithClockWaiter(b.ClockWaiter),
regularsync.WithClockWaiter(b.clockWaiter),
regularsync.WithInitialSyncComplete(initialSyncComplete),
regularsync.WithStateNotifier(b),
regularsync.WithBlobStorage(b.BlobStorage),
@@ -863,8 +855,7 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
P2P: b.fetchP2P(),
StateNotifier: b,
BlockNotifier: b,
ClockWaiter: b.ClockWaiter,
SyncNeedsWaiter: b.SyncNeedsWaiter,
ClockWaiter: b.clockWaiter,
InitialSyncComplete: complete,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
@@ -895,7 +886,7 @@ func (b *BeaconNode) registerSlasherService() error {
SlashingPoolInserter: b.slashingsPool,
SyncChecker: syncService,
HeadStateFetcher: chainService,
ClockWaiter: b.ClockWaiter,
ClockWaiter: b.clockWaiter,
})
if err != nil {
return err
@@ -988,7 +979,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
MaxMsgSize: maxMsgSize,
BlockBuilder: b.fetchBuilderService(),
Router: router,
ClockWaiter: b.ClockWaiter,
ClockWaiter: b.clockWaiter,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
@@ -1133,7 +1124,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.DataColumnStorage, b.ClockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")
}

View File

@@ -13,8 +13,6 @@ go_library(
"doc.go",
"fork.go",
"fork_watcher.go",
"gossip_peer_controller.go",
"gossip_peer_crawler.go",
"gossip_scoring_params.go",
"gossip_topic_mappings.go",
"handshake.go",
@@ -53,13 +51,11 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/gossipcrawler:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -95,7 +91,6 @@ go_library(
"@com_github_libp2p_go_libp2p//core/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//core/control:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/event:go_default_library",
"@com_github_libp2p_go_libp2p//core/host:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
@@ -105,8 +100,10 @@ go_library(
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/quic:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_mplex//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_libp2p_go_mplex//:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
@@ -117,7 +114,6 @@ go_library(
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_x_sync//semaphore:go_default_library",
],
)
@@ -131,8 +127,6 @@ go_test(
"dial_relay_node_test.go",
"discovery_test.go",
"fork_test.go",
"gossip_peer_controller_test.go",
"gossip_peer_crawler_test.go",
"gossip_scoring_params_test.go",
"gossip_topic_mappings_test.go",
"message_id_test.go",
@@ -159,18 +153,14 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/gossipcrawler:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -201,7 +191,6 @@ go_test(
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//core/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/event:go_default_library",
"@com_github_libp2p_go_libp2p//core/host:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
@@ -211,10 +200,8 @@ go_test(
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus/testutil:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -60,10 +60,7 @@ func (s *Service) Broadcast(ctx context.Context, msg proto.Message) error {
if !ok {
return errors.Errorf("message of %T does not support marshaller interface", msg)
}
fullTopic := fmt.Sprintf(topic, forkDigest) + s.Encoding().ProtocolSuffix()
return s.broadcastObject(ctx, castMsg, fullTopic)
return s.broadcastObject(ctx, castMsg, fmt.Sprintf(topic, forkDigest))
}
// BroadcastAttestation broadcasts an attestation to the p2p network, the message is assumed to be
@@ -109,7 +106,6 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
}
func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att, forkDigest [fieldparams.VersionLength]byte) {
topic := AttestationSubnetTopic(forkDigest, subnet)
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -120,7 +116,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
// Ensure we have peers with this subnet.
s.subnetLocker(subnet).RLock()
hasPeer := s.hasPeerWithTopic(topic)
hasPeer := s.hasPeerWithSubnet(attestationToTopic(subnet, forkDigest))
s.subnetLocker(subnet).RUnlock()
span.SetAttributes(
@@ -135,7 +131,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
s.subnetLocker(subnet).Lock()
defer s.subnetLocker(subnet).Unlock()
if err := s.gossipDialer.DialPeersForTopicBlocking(ctx, topic, minimumPeersPerSubnetForBroadcast); err != nil {
if err := s.FindAndDialPeersWithSubnets(ctx, AttestationSubnetTopicFormat, forkDigest, minimumPeersPerSubnetForBroadcast, map[uint64]bool{subnet: true}); err != nil {
return errors.Wrap(err, "find peers with subnets")
}
@@ -157,14 +153,13 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
return
}
if err := s.broadcastObject(ctx, att, topic); err != nil {
if err := s.broadcastObject(ctx, att, attestationToTopic(subnet, forkDigest)); err != nil {
log.WithError(err).Error("Failed to broadcast attestation")
tracing.AnnotateError(span, err)
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [fieldparams.VersionLength]byte) {
topic := SyncCommitteeSubnetTopic(forkDigest, subnet)
_, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -178,7 +173,7 @@ func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMs
// to ensure that we can reuse the same subnet locker.
wrappedSubIdx := subnet + syncLockerVal
s.subnetLocker(wrappedSubIdx).RLock()
hasPeer := s.hasPeerWithTopic(topic)
hasPeer := s.hasPeerWithSubnet(syncCommitteeToTopic(subnet, forkDigest))
s.subnetLocker(wrappedSubIdx).RUnlock()
span.SetAttributes(
@@ -192,7 +187,7 @@ func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMs
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
if err := s.gossipDialer.DialPeersForTopicBlocking(ctx, topic, minimumPeersPerSubnetForBroadcast); err != nil {
if err := s.FindAndDialPeersWithSubnets(ctx, SyncCommitteeSubnetTopicFormat, forkDigest, minimumPeersPerSubnetForBroadcast, map[uint64]bool{subnet: true}); err != nil {
return errors.Wrap(err, "find peers with subnets")
}
@@ -210,7 +205,7 @@ func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMs
return
}
if err := s.broadcastObject(ctx, sMsg, topic); err != nil {
if err := s.broadcastObject(ctx, sMsg, syncCommitteeToTopic(subnet, forkDigest)); err != nil {
log.WithError(err).Error("Failed to broadcast sync committee message")
tracing.AnnotateError(span, err)
}
@@ -238,7 +233,6 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
}
func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [fieldparams.VersionLength]byte) {
topic := BlobSubnetTopic(forkDigest, subnet)
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -249,7 +243,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
wrappedSubIdx := subnet + blobSubnetLockerVal
s.subnetLocker(wrappedSubIdx).RLock()
hasPeer := s.hasPeerWithTopic(topic)
hasPeer := s.hasPeerWithSubnet(blobSubnetToTopic(subnet, forkDigest))
s.subnetLocker(wrappedSubIdx).RUnlock()
if !hasPeer {
@@ -258,7 +252,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
if err := s.gossipDialer.DialPeersForTopicBlocking(ctx, topic, minimumPeersPerSubnetForBroadcast); err != nil {
if err := s.FindAndDialPeersWithSubnets(ctx, BlobSubnetTopicFormat, forkDigest, minimumPeersPerSubnetForBroadcast, map[uint64]bool{subnet: true}); err != nil {
return errors.Wrap(err, "find peers with subnets")
}
@@ -270,7 +264,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
}
}
if err := s.broadcastObject(ctx, blobSidecar, topic); err != nil {
if err := s.broadcastObject(ctx, blobSidecar, blobSubnetToTopic(subnet, forkDigest)); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecar")
tracing.AnnotateError(span, err)
}
@@ -299,7 +293,7 @@ func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, upda
}
digest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
if err := s.broadcastObject(ctx, update, LcOptimisticToTopic(digest)); err != nil {
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(digest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client optimistic update")
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
@@ -333,7 +327,7 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
}
forkDigest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
if err := s.broadcastObject(ctx, update, LcFinalityToTopic(forkDigest)); err != nil {
if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client finality update")
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
@@ -392,14 +386,13 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
// Build the topic corresponding to subnet column subnet and this fork digest.
topic := DataColumnSubnetTopic(forkDigest, subnet)
topic := dataColumnSubnetToTopic(subnet, forkDigest)
// Compute the wrapped subnet index.
wrappedSubIdx := subnet + dataColumnSubnetVal
// Find peers if needed.
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, topic); err != nil {
if err := s.findPeersIfNeeded(ctx, wrappedSubIdx, DataColumnSubnetTopicFormat, forkDigest, subnet); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Error("Cannot find peers if needed")
return
@@ -494,16 +487,20 @@ func (s *Service) broadcastDataColumnSidecars(ctx context.Context, forkDigest [f
func (s *Service) findPeersIfNeeded(
ctx context.Context,
wrappedSubIdx uint64,
topic string,
topicFormat string,
forkDigest [fieldparams.VersionLength]byte,
subnet uint64,
) error {
// Sending a data column sidecar to only one peer is not ideal,
// but it ensures at least one peer receives it.
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
if err := s.gossipDialer.DialPeersForTopicBlocking(ctx, topic, minimumPeersPerSubnetForBroadcast); err != nil {
// No peers found, attempt to find peers with this subnet.
if err := s.FindAndDialPeersWithSubnets(ctx, topicFormat, forkDigest, minimumPeersPerSubnetForBroadcast, map[uint64]bool{subnet: true}); err != nil {
return errors.Wrap(err, "find peers with subnet")
}
return nil
}
@@ -528,10 +525,34 @@ func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic
iid := int64(id)
span = trace.AddMessageSendEvent(span, iid, messageLen /*uncompressed*/, messageLen /*compressed*/)
}
if err := s.PublishToTopic(ctx, topic, buf.Bytes()); err != nil {
if err := s.PublishToTopic(ctx, topic+s.Encoding().ProtocolSuffix(), buf.Bytes()); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
return nil
}
func attestationToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(AttestationSubnetTopicFormat, forkDigest, subnet)
}
func syncCommitteeToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
func lcOptimisticToTopic(forkDigest [4]byte) string {
return fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, forkDigest)
}
func lcFinalityToTopic(forkDigest [4]byte) string {
return fmt.Sprintf(LightClientFinalityUpdateTopicFormat, forkDigest)
}
func dataColumnSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(DataColumnSubnetTopicFormat, forkDigest, subnet)
}

View File

@@ -30,7 +30,6 @@ import (
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"google.golang.org/protobuf/proto"
@@ -110,7 +109,6 @@ func TestService_Attestation_Subnet(t *testing.T) {
if gtm := GossipTypeMapping[reflect.TypeFor[*ethpb.Attestation]()]; gtm != AttestationSubnetTopicFormat {
t.Errorf("Constant is out of date. Wanted %s, got %s", AttestationSubnetTopicFormat, gtm)
}
s := Service{}
tests := []struct {
att *ethpb.Attestation
@@ -123,7 +121,7 @@ func TestService_Attestation_Subnet(t *testing.T) {
Slot: 2,
},
},
topic: "/eth2/00000000/beacon_attestation_2" + s.Encoding().ProtocolSuffix(),
topic: "/eth2/00000000/beacon_attestation_2",
},
{
att: &ethpb.Attestation{
@@ -132,7 +130,7 @@ func TestService_Attestation_Subnet(t *testing.T) {
Slot: 10,
},
},
topic: "/eth2/00000000/beacon_attestation_21" + s.Encoding().ProtocolSuffix(),
topic: "/eth2/00000000/beacon_attestation_21",
},
{
att: &ethpb.Attestation{
@@ -141,12 +139,12 @@ func TestService_Attestation_Subnet(t *testing.T) {
Slot: 529,
},
},
topic: "/eth2/00000000/beacon_attestation_8" + s.Encoding().ProtocolSuffix(),
topic: "/eth2/00000000/beacon_attestation_8",
},
}
for _, tt := range tests {
subnet := helpers.ComputeSubnetFromCommitteeAndSlot(100, tt.att.Data.CommitteeIndex, tt.att.Data.Slot)
assert.Equal(t, tt.topic, AttestationSubnetTopic([4]byte{}, subnet), "Wrong topic")
assert.Equal(t, tt.topic, attestationToTopic(subnet, [4]byte{} /* fork digest */), "Wrong topic")
}
}
@@ -175,12 +173,14 @@ func TestService_BroadcastAttestation(t *testing.T) {
msg := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.NewBitlist(7)})
subnet := uint64(5)
GossipTypeMapping[reflect.TypeFor[*ethpb.Attestation]()] = AttestationSubnetTopicFormat
topic := AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeFor[*ethpb.Attestation]()] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic := AttestationSubnetTopic(digest, subnet)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
@@ -226,7 +226,6 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
// Setup bootnode.
cfg := &Config{PingInterval: testPingInterval, DB: db}
cfg.UDPPort = uint(port)
cfg.TCPPort = uint(port)
_, pkey := createAddrAndPrivKey(t)
ipAddr := net.ParseIP("127.0.0.1")
genesisTime := time.Now()
@@ -252,9 +251,8 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
var listeners []*listenerWrapper
var hosts []host.Host
var configs []*Config
// setup other nodes.
baseCfg := &Config{
cfg = &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
MaxPeers: 2,
PingInterval: testPingInterval,
@@ -263,21 +261,11 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
// Setup 2 different hosts
for i := uint(1); i <= 2; i++ {
h, pkey, ipAddr := createHost(t, port+i)
// Create a new config for each service to avoid shared mutations
cfg := &Config{
Discv5BootStrapAddrs: baseCfg.Discv5BootStrapAddrs,
MaxPeers: baseCfg.MaxPeers,
PingInterval: baseCfg.PingInterval,
DB: baseCfg.DB,
UDPPort: uint(port + i),
TCPPort: uint(port + i),
}
cfg.UDPPort = uint(port + i)
cfg.TCPPort = uint(port + i)
if len(listeners) > 0 {
cfg.Discv5BootStrapAddrs = append(cfg.Discv5BootStrapAddrs, listeners[len(listeners)-1].Self().String())
}
s := &Service{
cfg: cfg,
genesisTime: genesisTime,
@@ -290,22 +278,18 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
close(s.custodyInfoSet)
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
// Set listener for the service
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
// Set subnet for 2nd peer
// Set for 2nd peer
if i == 2 {
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(subnet, true)
err := s.updateSubnetRecordWithMetadata(bitV)
require.NoError(t, err)
}
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
hosts = append(hosts, h)
configs = append(configs, cfg)
}
defer func() {
// Close down all peers.
@@ -340,7 +324,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
pubsub: ps1,
dv5Listener: listeners[0],
joinedTopics: map[string]*pubsub.Topic{},
cfg: configs[0],
cfg: cfg,
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
@@ -356,7 +340,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
pubsub: ps2,
dv5Listener: listeners[1],
joinedTopics: map[string]*pubsub.Topic{},
cfg: configs[1],
cfg: cfg,
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
@@ -369,12 +353,14 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
go p2.listenForNewNodes()
msg := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.NewBitlist(7)})
GossipTypeMapping[reflect.TypeFor[*ethpb.Attestation]()] = AttestationSubnetTopicFormat
topic := AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeFor[*ethpb.Attestation]()] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic := AttestationSubnetTopic(digest, subnet)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
// We don't use our internal subscribe method
// due to using floodsub over here.
tpHandle, err := p2.JoinTopic(topic)
@@ -445,12 +431,14 @@ func TestService_BroadcastSyncCommittee(t *testing.T) {
msg := util.HydrateSyncCommittee(&ethpb.SyncCommitteeMessage{})
subnet := uint64(5)
GossipTypeMapping[reflect.TypeFor[*ethpb.SyncCommitteeMessage]()] = SyncCommitteeSubnetTopicFormat
topic := SyncCommitteeSubnetTopicFormat
GossipTypeMapping[reflect.TypeFor[*ethpb.SyncCommitteeMessage]()] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic := SyncCommitteeSubnetTopic(digest, subnet)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
@@ -520,12 +508,14 @@ func TestService_BroadcastBlob(t *testing.T) {
}
subnet := uint64(0)
GossipTypeMapping[reflect.TypeFor[*ethpb.BlobSidecar]()] = BlobSubnetTopicFormat
topic := BlobSubnetTopicFormat
GossipTypeMapping[reflect.TypeFor[*ethpb.BlobSidecar]()] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic := BlobSubnetTopic(digest, subnet)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
@@ -585,9 +575,10 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
topic := LcOptimisticToTopic(params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
@@ -660,9 +651,10 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat
topic := LcFinalityToTopic(params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
@@ -710,6 +702,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
const (
port = 2000
columnIndex = 12
topicFormat = DataColumnSubnetTopicFormat
)
ctx := t.Context()
@@ -767,17 +760,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, err)
subnet := peerdas.ComputeSubnetForDataColumnSidecar(columnIndex)
topic := DataColumnSubnetTopic(digest, subnet)
crawler, err := NewGossipPeerCrawler(t.Context(), service, listener, 1*time.Second, 1*time.Second, 10,
func(n *enode.Node) bool { return true },
service.Peers().Scorers().Score)
require.NoError(t, err)
err = crawler.Start(func(ctx context.Context, node *enode.Node) ([]string, error) {
return []string{topic}, nil
})
require.NoError(t, err)
service.gossipDialer = NewGossipPeerDialer(t.Context(), crawler, service.PubSub().ListPeers, service.DialPeers)
topic := fmt.Sprintf(topicFormat, digest, subnet) + service.Encoding().ProtocolSuffix()
_, verifiedRoSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: columnIndex}})
verifiedRoSidecar := verifiedRoSidecars[0]

View File

@@ -7,7 +7,6 @@ import (
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/sirupsen/logrus"
)
@@ -28,7 +27,6 @@ const (
type Config struct {
NoDiscovery bool
EnableUPnP bool
EnableAutoNAT bool
StaticPeerID bool
DisableLivenessCheck bool
StaticPeers []string
@@ -51,7 +49,6 @@ type Config struct {
IPColocationWhitelist []*net.IPNet
StateNotifier statefeed.Notifier
DB db.ReadOnlyDatabaseWithSeqNum
StateGen stategen.StateManager
ClockWaiter startup.ClockWaiter
}

View File

@@ -369,11 +369,11 @@ func (s *Service) listenForNewNodes() {
}
}
// findAndDialPeers ensures that our node is connected to enough peers.
// If the threshold is met, then this function immediately returns.
// FindAndDialPeersWithSubnets ensures that our node is connected to enough peers.
// If the threshold is met, then this function immediately returns.
// Otherwise, it searches for new peers and dials them.
// If `ctx` is canceled while searching for peers, search is stopped, but newly
// found peers are still dialed. In this case, the function returns an error.
// If `ctx` is canceled while searching for peers, the search is stopped, but newly found peers are still dialed.
// In this case, the function returns an error.
func (s *Service) findAndDialPeers(ctx context.Context) error {
// Restrict dials if limit is applied.
maxConcurrentDials := math.MaxInt
@@ -404,7 +404,8 @@ func (s *Service) findAndDialPeers(ctx context.Context) error {
return err
}
dialedPeerCount := s.DialPeers(s.ctx, maxConcurrentDials, peersToDial)
dialedPeerCount := s.dialPeers(s.ctx, maxConcurrentDials, peersToDial)
if dialedPeerCount > missingPeerCount {
missingPeerCount = 0
continue
@@ -553,7 +554,6 @@ func (s *Service) createListener(
Bootnodes: bootNodes,
PingInterval: s.cfg.PingInterval,
NoFindnodeLivenessCheck: s.cfg.DisableLivenessCheck,
V5RespTimeout: 300 * time.Millisecond,
}
listener, err := discover.ListenV5(conn, localNode, dv5Cfg)

View File

@@ -1,252 +0,0 @@
package p2p
import (
"context"
"math"
"slices"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
)
const dialInterval = 1 * time.Second
// GossipPeerDialer maintains minimum peer counts for gossip topics by periodically
// dialing new peers discovered by a crawler. It runs a background loop that checks each
// topic's peer count and dials new peers when below the target threshold.
type GossipPeerDialer struct {
ctx context.Context
listPeers func(topic string) []peer.ID
dialPeers func(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint
crawler gossipcrawler.Crawler
topicsProvider gossipcrawler.SubnetTopicsProvider
once sync.Once
}
// NewGossipPeerDialer creates a new GossipPeerDialer instance.
//
// Parameters:
// - ctx: Parent context that controls the lifecycle of the dialer. When cancelled,
// the background dial loop will terminate.
// - crawler: Source of peer candidates for each topic. The crawler maintains a registry
// of peers discovered through DHT crawling, indexed by the topics they subscribe to.
// - listPeers: Function that returns the current peers connected for a given topic.
// Used to determine how many additional peers need to be dialed.
// - dialPeers: Function that dials the given enode.Node peers with a concurrency limit.
// Returns the number of successful dials.
//
// The dialer must be started with Start() before it begins maintaining peer counts.
func NewGossipPeerDialer(
ctx context.Context,
crawler gossipcrawler.Crawler,
listPeers func(topic string) []peer.ID,
dialPeers func(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint,
) *GossipPeerDialer {
return &GossipPeerDialer{
ctx: ctx,
listPeers: listPeers,
dialPeers: dialPeers,
crawler: crawler,
}
}
// Start begins the background dial loop that maintains peer counts for all topics.
//
// The provider function is called on each tick to get the current list of topics that
// need peer maintenance. This allows the set of topics to change dynamically as the node
// subscribes/unsubscribes from subnets.
//
// Start is idempotent - calling it multiple times has no effect after the first call.
// Only the provider from the first call will be used; subsequent calls are ignored.
//
// The dial loop runs every dialInterval (1 second) and for each topic:
// 1. Checks current peer count via listPeers()
// 2. If below the per-topic min peer count, requests candidates from the crawler
// 3. Deduplicates peers across all topics to avoid redundant dials
// 4. Dials missing peers with rate limiting if enabled
//
// Returns nil always (error return preserved for interface compatibility).
func (g *GossipPeerDialer) Start(provider gossipcrawler.SubnetTopicsProvider) error {
g.once.Do(func() {
g.topicsProvider = provider
go g.dialLoop()
})
return nil
}
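// Example (illustrative sketch, not part of the original change): wiring a
// GossipPeerDialer to a crawler. listPeersFn and dialFn stand in for the pubsub
// ListPeers and service DialPeers functions used elsewhere in this package; the
// topic name and the minimum of 2 peers below are hypothetical example values.
func startExampleDialer(
    ctx context.Context,
    crawler gossipcrawler.Crawler,
    listPeersFn func(topic string) []peer.ID,
    dialFn func(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint,
) error {
    dialer := NewGossipPeerDialer(ctx, crawler, listPeersFn, dialFn)
    // The provider is re-evaluated on every tick, so the topic set may change over time.
    return dialer.Start(func() map[string]int {
        return map[string]int{"/eth2/00000000/beacon_attestation_5/ssz_snappy": 2}
    })
}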
func (g *GossipPeerDialer) dialLoop() {
ticker := time.NewTicker(dialInterval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
peersToDial := g.selectPeersForTopics()
if len(peersToDial) == 0 {
continue
}
g.dialPeersWithRatelimiting(peersToDial)
case <-g.ctx.Done():
return
}
}
}
// selectPeersForTopics builds a bidirectional mapping of topics to peers and selects
// peers to dial using a greedy algorithm that prioritizes peers serving multiple topics.
// When a peer is selected, the needed count is decremented for ALL topics that peer serves,
// avoiding redundant dials when one peer can satisfy multiple topic requirements.
func (g *GossipPeerDialer) selectPeersForTopics() []*enode.Node {
topicsWithMinPeers := g.topicsProvider()
// Calculate how many peers each topic still needs.
neededByTopic := make(map[string]int)
for topic, minPeers := range topicsWithMinPeers {
currentCount := len(g.listPeers(topic))
if needed := minPeers - currentCount; needed > 0 {
neededByTopic[topic] = needed
}
}
if len(neededByTopic) == 0 {
return nil
}
peerToTopics := make(map[enode.ID][]string)
nodeByID := make(map[enode.ID]*enode.Node)
for topic := range neededByTopic {
candidates := g.crawler.PeersForTopic(topic)
for _, node := range candidates {
id := node.ID()
if _, exists := nodeByID[id]; !exists {
nodeByID[id] = node
}
peerToTopics[id] = append(peerToTopics[id], topic)
}
}
// Build candidate list sorted by topic count (descending).
// Peers serving more topics are prioritized.
type candidate struct {
node *enode.Node
topics []string
}
candidates := make([]candidate, 0, len(peerToTopics))
for id, topics := range peerToTopics {
candidates = append(candidates, candidate{node: nodeByID[id], topics: topics})
}
// sort candidates by topic count (descending)
slices.SortFunc(candidates, func(a, b candidate) int {
return len(b.topics) - len(a.topics)
})
// Greedy selection with cross-topic accounting.
var selected []*enode.Node
for _, c := range candidates {
// Check if this peer serves any topic we still need.
servesNeededTopic := false
for _, topic := range c.topics {
if neededByTopic[topic] > 0 {
servesNeededTopic = true
break
}
}
if !servesNeededTopic {
continue
}
// Select this peer and decrement needed count for ALL topics it serves.
selected = append(selected, c.node)
for _, topic := range c.topics {
if neededByTopic[topic] > 0 {
neededByTopic[topic]--
}
}
}
return selected
}
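// Illustrative, simplified version of the greedy cross-topic accounting above
// (not part of the original change), operating on plain maps so it can be read
// in isolation. needed maps topic -> remaining peer deficit (mutated in place),
// candidatesByTopic maps topic -> candidate peer names; both are hypothetical inputs.
func greedySelectExample(needed map[string]int, candidatesByTopic map[string][]string) []string {
    // Invert to peer -> topics it can serve, considering only topics still in deficit.
    topicsByPeer := make(map[string][]string)
    for topic, candidates := range candidatesByTopic {
        if needed[topic] <= 0 {
            continue
        }
        for _, p := range candidates {
            topicsByPeer[p] = append(topicsByPeer[p], topic)
        }
    }
    // Order peers by how many needed topics they serve, descending.
    ordered := make([]string, 0, len(topicsByPeer))
    for p := range topicsByPeer {
        ordered = append(ordered, p)
    }
    slices.SortFunc(ordered, func(a, b string) int {
        return len(topicsByPeer[b]) - len(topicsByPeer[a])
    })
    var selected []string
    for _, p := range ordered {
        useful := false
        for _, topic := range topicsByPeer[p] {
            if needed[topic] > 0 {
                useful = true
                break
            }
        }
        if !useful {
            continue
        }
        // Selecting this peer counts towards every topic it serves.
        selected = append(selected, p)
        for _, topic := range topicsByPeer[p] {
            if needed[topic] > 0 {
                needed[topic]--
            }
        }
    }
    return selected
}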
// DialPeersForTopicBlocking blocks until the specified topic has at least nPeers connected,
// or until the context is cancelled.
//
// This method is useful when you need to ensure a minimum number of peers are connected
// for a specific topic before proceeding (e.g., before publishing a message).
//
// The method polls in a loop:
// 1. Check if current peer count >= nPeers, return nil if satisfied
// 2. Get peer candidates from crawler for this topic
// 3. Dial candidates with rate limiting
// 4. Wait 100ms for connections to establish in pubsub layer
// 5. Repeat until target reached or context cancelled
//
// Parameters:
// - ctx: Context to cancel the blocking operation. Takes precedence for cancellation.
// - topic: The gossipsub topic to ensure peers for.
// - nPeers: Minimum number of peers required before returning.
//
// Returns:
// - nil: Successfully reached the target peer count.
// - ctx.Err(): The provided context was cancelled.
// - g.ctx.Err(): The dialer's parent context was cancelled.
//
// Note: This may block indefinitely if the crawler cannot provide enough peers
// and the context has no deadline.
func (g *GossipPeerDialer) DialPeersForTopicBlocking(ctx context.Context, topic string, nPeers int) error {
for {
peers := g.listPeers(topic)
if len(peers) >= nPeers {
return nil
}
newPeers := g.peersForTopic(topic, nPeers)
if len(newPeers) > 0 {
g.dialPeersWithRatelimiting(newPeers)
}
select {
case <-ctx.Done():
return ctx.Err()
// A short wait after dialing helps, as new connections take some time to show up in pubsub.
case <-time.After(100 * time.Millisecond):
case <-g.ctx.Done():
return g.ctx.Err()
}
}
}
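// Usage sketch (illustrative only, not part of the original change): block for a
// bounded time until at least one peer is connected on a topic before publishing.
// The topic string is hypothetical and publish stands in for the caller's publish path.
func ensurePeerThenPublishExample(ctx context.Context, dialer *GossipPeerDialer, publish func(topic string) error) error {
    const topic = "/eth2/00000000/blob_sidecar_0/ssz_snappy" // hypothetical topic name
    dialCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    if err := dialer.DialPeersForTopicBlocking(dialCtx, topic, 1); err != nil {
        // The deadline expired before a peer showed up; surface the error to the caller.
        return err
    }
    return publish(topic)
}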
func (g *GossipPeerDialer) peersForTopic(topic string, targetCount int) []*enode.Node {
peers := g.listPeers(topic)
peerCount := len(peers)
if peerCount >= targetCount {
return nil
}
missing := targetCount - peerCount
newPeers := g.crawler.PeersForTopic(topic)
if len(newPeers) > missing {
newPeers = newPeers[:missing]
}
return newPeers
}
func (g *GossipPeerDialer) dialPeersWithRatelimiting(peers []*enode.Node) {
// Dial new peers in batches.
maxConcurrentDials := math.MaxInt
if flags.MaxDialIsActive() {
maxConcurrentDials = flags.Get().MaxConcurrentDials
}
g.dialPeers(g.ctx, maxConcurrentDials, peers)
}

View File

@@ -1,523 +0,0 @@
package p2p
import (
"context"
"crypto/rand"
"net"
"slices"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/crypto/ecdsa"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/stretchr/testify/require"
)
func TestGossipPeerDialer_Start(t *testing.T) {
tests := []struct {
name string
newCrawler func(t *testing.T) *mockCrawler
provider gossipcrawler.SubnetTopicsProvider
expectedConnects int
expectStartErr bool
}{
{
name: "dials unique peers across topics",
newCrawler: func(t *testing.T) *mockCrawler {
nodeA := newTestNode(t, "127.0.0.1", 30101)
nodeB := newTestNode(t, "127.0.0.1", 30102)
return &mockCrawler{
consume: true,
peers: map[string][]*enode.Node{
"topic/a": {nodeA, nodeB},
"topic/b": {nodeA},
},
}
},
provider: func() map[string]int {
return map[string]int{"topic/a": 2, "topic/b": 2}
},
expectedConnects: 2,
},
{
name: "uses per-topic min peer counts",
newCrawler: func(t *testing.T) *mockCrawler {
nodes := make([]*enode.Node, 5)
for i := range nodes {
nodes[i] = newTestNode(t, "127.0.0.1", uint16(30110+i))
}
return &mockCrawler{
consume: true,
peers: map[string][]*enode.Node{
// topic/mesh has 3 available peers, minPeers=2 -> should dial 2
"topic/mesh": {nodes[0], nodes[1], nodes[2]},
// topic/fanout has 3 available peers, minPeers=1 -> should dial 1
"topic/fanout": {nodes[3], nodes[4]},
},
}
},
provider: func() map[string]int {
return map[string]int{
"topic/mesh": 2,
"topic/fanout": 1,
}
},
// Total: 2 from mesh + 1 from fanout = 3 peers dialed
expectedConnects: 3,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
md := &mockDialer{}
listPeers := func(topic string) []peer.ID { return nil }
dialer := NewGossipPeerDialer(t.Context(), tt.newCrawler(t), listPeers, md.DialPeers)
err := dialer.Start(tt.provider)
if tt.expectStartErr {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Eventually(t, func() bool {
return md.dialCount() >= tt.expectedConnects
}, 2*time.Second, 20*time.Millisecond)
require.Equal(t, tt.expectedConnects, md.dialCount())
})
}
}
func TestGossipPeerDialer_DialPeersForTopicBlocking(t *testing.T) {
tests := []struct {
name string
connectedPeers int
newCrawler func(t *testing.T) *mockCrawler
targetPeers int
ctx func() (context.Context, context.CancelFunc)
expectedConnects int
expectErr bool
}{
{
name: "returns immediately when enough peers",
connectedPeers: 1,
newCrawler: func(t *testing.T) *mockCrawler {
return &mockCrawler{}
},
targetPeers: 1,
ctx: func() (context.Context, context.CancelFunc) { return context.WithCancel(context.Background()) },
expectedConnects: 0,
expectErr: false,
},
{
name: "dials when peers are missing",
connectedPeers: 0,
newCrawler: func(t *testing.T) *mockCrawler {
nodeA := newTestNode(t, "127.0.0.1", 30201)
nodeB := newTestNode(t, "127.0.0.1", 30202)
return &mockCrawler{
peers: map[string][]*enode.Node{
"topic/a": {nodeA, nodeB},
},
}
},
targetPeers: 2,
ctx: func() (context.Context, context.CancelFunc) {
return context.WithTimeout(context.Background(), 1*time.Second)
},
expectedConnects: 2,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
md := &mockDialer{}
var mu sync.Mutex
connected := make([]peer.ID, 0)
for i := 0; i < tt.connectedPeers; i++ {
connected = append(connected, peer.ID(string(rune(i))))
}
listPeers := func(topic string) []peer.ID {
mu.Lock()
defer mu.Unlock()
return connected
}
dialPeers := func(ctx context.Context, max int, nodes []*enode.Node) uint {
cnt := md.DialPeers(ctx, max, nodes)
mu.Lock()
defer mu.Unlock()
for range nodes {
// Just add a dummy peer ID to simulate connection success
connected = append(connected, peer.ID("dummy"))
}
return cnt
}
crawler := tt.newCrawler(t)
dialer := NewGossipPeerDialer(t.Context(), crawler, listPeers, dialPeers)
topic := "topic/a"
ctx, cancel := tt.ctx()
defer cancel()
err := dialer.DialPeersForTopicBlocking(ctx, topic, tt.targetPeers)
if tt.expectErr {
require.Error(t, err)
} else {
require.NoError(t, err)
}
require.Equal(t, tt.expectedConnects, md.dialCount())
})
}
}
func TestGossipPeerDialer_peersForTopic(t *testing.T) {
tests := []struct {
name string
connected int
targetCount int
buildPeers func(t *testing.T) ([]*enode.Node, []*enode.Node)
}{
{
name: "returns nil when enough peers already connected",
connected: 1,
targetCount: 1,
buildPeers: func(t *testing.T) ([]*enode.Node, []*enode.Node) {
return []*enode.Node{newTestNode(t, "127.0.0.1", 30301)}, nil
},
},
{
name: "returns crawler peers when none connected",
connected: 0,
targetCount: 2,
buildPeers: func(t *testing.T) ([]*enode.Node, []*enode.Node) {
nodeA := newTestNode(t, "127.0.0.1", 30311)
nodeB := newTestNode(t, "127.0.0.1", 30312)
return []*enode.Node{nodeA, nodeB}, []*enode.Node{nodeA, nodeB}
},
},
{
name: "truncates peers when more than needed",
connected: 0,
targetCount: 1,
buildPeers: func(t *testing.T) ([]*enode.Node, []*enode.Node) {
nodeA := newTestNode(t, "127.0.0.1", 30321)
nodeB := newTestNode(t, "127.0.0.1", 30322)
nodeC := newTestNode(t, "127.0.0.1", 30323)
return []*enode.Node{nodeA, nodeB, nodeC}, []*enode.Node{nodeA}
},
},
{
name: "only returns missing peers",
connected: 1,
targetCount: 3,
buildPeers: func(t *testing.T) ([]*enode.Node, []*enode.Node) {
nodeA := newTestNode(t, "127.0.0.1", 30331)
nodeB := newTestNode(t, "127.0.0.1", 30332)
nodeC := newTestNode(t, "127.0.0.1", 30333)
return []*enode.Node{nodeA, nodeB, nodeC}, []*enode.Node{nodeA, nodeB}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
listPeers := func(topic string) []peer.ID {
peers := make([]peer.ID, tt.connected)
for i := 0; i < tt.connected; i++ {
peers[i] = peer.ID(string(rune(i))) // Fake peer ID
}
return peers
}
crawlerPeers, expected := tt.buildPeers(t)
crawler := &mockCrawler{
peers: map[string][]*enode.Node{"topic/test": crawlerPeers},
consume: false,
}
dialer := NewGossipPeerDialer(t.Context(), crawler, listPeers, func(ctx context.Context,
maxConcurrentDials int, nodes []*enode.Node) uint {
return 0
})
got := dialer.peersForTopic("topic/test", tt.targetCount)
if expected == nil {
require.Nil(t, got)
return
}
require.Equal(t, len(expected), len(got))
for i := range expected {
require.Equal(t, expected[i], got[i])
}
})
}
}
func TestGossipPeerDialer_selectPeersForTopics(t *testing.T) {
tests := []struct {
name string
connectedPeers map[string]int // topic -> connected peer count
topicsProvider func() map[string]int
buildPeers func(t *testing.T) (map[string][]*enode.Node, []*enode.Node)
}{
{
name: "prioritizes multi-topic peer over single-topic peers",
connectedPeers: map[string]int{},
topicsProvider: func() map[string]int {
return map[string]int{
"topic/a": 1,
"topic/b": 1,
"topic/c": 1,
}
},
buildPeers: func(t *testing.T) (map[string][]*enode.Node, []*enode.Node) {
// Peer X serves all 3 topics
nodeX := newTestNode(t, "127.0.0.1", 30401)
// Peer Y serves only topic/a
nodeY := newTestNode(t, "127.0.0.1", 30402)
// Peer Z serves only topic/b
nodeZ := newTestNode(t, "127.0.0.1", 30403)
crawlerPeers := map[string][]*enode.Node{
"topic/a": {nodeX, nodeY},
"topic/b": {nodeX, nodeZ},
"topic/c": {nodeX},
}
// Only nodeX should be dialed (satisfies all 3 topics)
return crawlerPeers, []*enode.Node{nodeX}
},
},
{
name: "cross-topic decrement works correctly",
connectedPeers: map[string]int{},
topicsProvider: func() map[string]int {
return map[string]int{
"topic/a": 2, // Need 2 peers
"topic/b": 1, // Need 1 peer
}
},
buildPeers: func(t *testing.T) (map[string][]*enode.Node, []*enode.Node) {
// Peer X serves both topics
nodeX := newTestNode(t, "127.0.0.1", 30411)
// Peer Y serves only topic/a
nodeY := newTestNode(t, "127.0.0.1", 30412)
crawlerPeers := map[string][]*enode.Node{
"topic/a": {nodeX, nodeY},
"topic/b": {nodeX},
}
// nodeX covers topic/b fully, and 1 of 2 for topic/a
// nodeY covers remaining 1 for topic/a
return crawlerPeers, []*enode.Node{nodeX, nodeY}
},
},
{
name: "no redundant dials when one peer satisfies all",
connectedPeers: map[string]int{},
topicsProvider: func() map[string]int {
return map[string]int{
"topic/a": 1,
"topic/b": 1,
"topic/c": 1,
}
},
buildPeers: func(t *testing.T) (map[string][]*enode.Node, []*enode.Node) {
nodeX := newTestNode(t, "127.0.0.1", 30421)
crawlerPeers := map[string][]*enode.Node{
"topic/a": {nodeX},
"topic/b": {nodeX},
"topic/c": {nodeX},
}
// Only 1 dial needed for all 3 topics
return crawlerPeers, []*enode.Node{nodeX}
},
},
{
name: "skips topics with enough peers already",
connectedPeers: map[string]int{
"topic/a": 2, // Already has 2
},
topicsProvider: func() map[string]int {
return map[string]int{
"topic/a": 2, // min 2, already have 2
"topic/b": 1, // min 1, have 0
}
},
buildPeers: func(t *testing.T) (map[string][]*enode.Node, []*enode.Node) {
nodeX := newTestNode(t, "127.0.0.1", 30431)
nodeY := newTestNode(t, "127.0.0.1", 30432)
crawlerPeers := map[string][]*enode.Node{
"topic/a": {nodeX},
"topic/b": {nodeY},
}
// Only nodeY should be dialed (topic/a already satisfied)
return crawlerPeers, []*enode.Node{nodeY}
},
},
{
name: "returns nil when all topics satisfied",
connectedPeers: map[string]int{"topic/a": 2, "topic/b": 1},
topicsProvider: func() map[string]int {
return map[string]int{
"topic/a": 2,
"topic/b": 1,
}
},
buildPeers: func(t *testing.T) (map[string][]*enode.Node, []*enode.Node) {
nodeX := newTestNode(t, "127.0.0.1", 30441)
crawlerPeers := map[string][]*enode.Node{
"topic/a": {nodeX},
"topic/b": {nodeX},
}
// No dials needed
return crawlerPeers, nil
},
},
{
name: "handles empty crawler response",
connectedPeers: map[string]int{},
topicsProvider: func() map[string]int {
return map[string]int{"topic/a": 1}
},
buildPeers: func(t *testing.T) (map[string][]*enode.Node, []*enode.Node) {
return map[string][]*enode.Node{}, nil
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
listPeers := func(topic string) []peer.ID {
count := tt.connectedPeers[topic]
peers := make([]peer.ID, count)
for i := range count {
peers[i] = peer.ID(topic + string(rune(i)))
}
return peers
}
crawlerPeers, expected := tt.buildPeers(t)
crawler := &mockCrawler{
peers: crawlerPeers,
consume: false,
}
dialer := NewGossipPeerDialer(t.Context(), crawler, listPeers, func(ctx context.Context,
maxConcurrentDials int, nodes []*enode.Node) uint {
return 0
})
dialer.topicsProvider = tt.topicsProvider
got := dialer.selectPeersForTopics()
if expected == nil {
require.Nil(t, got)
return
}
require.Equal(t, len(expected), len(got), "expected %d peers, got %d", len(expected), len(got))
// Verify all expected nodes are present (order may vary for equal topic counts)
expectedIDs := make(map[enode.ID]struct{})
for _, n := range expected {
expectedIDs[n.ID()] = struct{}{}
}
for _, n := range got {
_, ok := expectedIDs[n.ID()]
require.True(t, ok, "unexpected peer %s in result", n.ID())
}
})
}
}
type mockCrawler struct {
mu sync.Mutex
peers map[string][]*enode.Node
consume bool
}
func (m *mockCrawler) Start(gossipcrawler.TopicExtractor) error {
return nil
}
func (m *mockCrawler) Stop() {}
func (m *mockCrawler) RemovePeerByPeerId(peer.ID) {}
func (m *mockCrawler) RemoveTopic(string) {}
func (m *mockCrawler) PeersForTopic(topic string) []*enode.Node {
m.mu.Lock()
defer m.mu.Unlock()
nodes := m.peers[topic]
if len(nodes) == 0 {
return nil
}
copied := slices.Clone(nodes)
if m.consume {
m.peers[topic] = nil
}
return copied
}
type mockDialer struct {
mu sync.Mutex
dials []*enode.Node
}
func (m *mockDialer) DialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint {
m.mu.Lock()
defer m.mu.Unlock()
m.dials = append(m.dials, nodes...)
return uint(len(nodes))
}
func (m *mockDialer) dialCount() int {
m.mu.Lock()
defer m.mu.Unlock()
return len(m.dials)
}
func (m *mockDialer) dialedNodes() []*enode.Node {
m.mu.Lock()
defer m.mu.Unlock()
return slices.Clone(m.dials)
}
func newTestNode(t *testing.T, ip string, tcpPort uint16) *enode.Node {
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
require.NoError(t, err)
return newTestNodeWithPriv(t, priv, ip, tcpPort)
}
func newTestNodeWithPriv(t *testing.T, priv crypto.PrivKey, ip string, tcpPort uint16) *enode.Node {
t.Helper()
db, err := enode.OpenDB("")
require.NoError(t, err)
t.Cleanup(func() {
db.Close()
})
convertedKey, err := ecdsa.ConvertFromInterfacePrivKey(priv)
require.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
localNode.SetStaticIP(net.ParseIP(ip))
localNode.Set(enr.TCP(tcpPort))
localNode.Set(enr.UDP(tcpPort))
return localNode.Node()
}

View File

@@ -1,546 +0,0 @@
package p2p
import (
"context"
"fmt"
"slices"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/pkg/errors"
"golang.org/x/sync/semaphore"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/ethereum/go-ethereum/p2p/enode"
)
type peerNode struct {
isPinged bool
node *enode.Node
peerID peer.ID
topics map[string]struct{}
}
type crawledPeers struct {
mu sync.RWMutex
peerNodeByEnode map[enode.ID]*peerNode
peerNodeByPid map[peer.ID]*peerNode
peersByTopic map[string]map[*peerNode]struct{}
}
func (cp *crawledPeers) updateStatusToPinged(enodeID enode.ID) {
cp.mu.Lock()
defer cp.mu.Unlock()
existingPNode, ok := cp.peerNodeByEnode[enodeID]
if !ok {
return
}
// We only want to ping a node with a given node ID once, not on every sequence number change,
// since the ping simply verifies that the node is reachable and not fake.
existingPNode.isPinged = true
}
func (cp *crawledPeers) updatePeer(node *enode.Node, topics []string) (bool, error) {
if node == nil {
return false, errors.New("node is nil")
}
cp.mu.Lock()
defer cp.mu.Unlock()
enodeID := node.ID()
existingPNode, ok := cp.peerNodeByEnode[enodeID]
if ok && existingPNode.node == nil {
return false, errors.New("enode is nil for enodeId")
}
// we don't want to update enodes with a lower sequence number as they're stale records
if ok && existingPNode.node.Seq() >= node.Seq() {
return false, nil
}
if !ok {
// this is a new peer
peerID, err := enodeToPeerID(node)
if err != nil {
return false, fmt.Errorf("converting enode to peer ID: %w", err)
}
existingPNode = &peerNode{
node: node,
peerID: peerID,
topics: make(map[string]struct{}),
}
cp.peerNodeByEnode[enodeID] = existingPNode
cp.peerNodeByPid[peerID] = existingPNode
} else {
existingPNode.node = node
}
cp.updateTopicsUnlocked(existingPNode, topics)
cp.recordMetricsUnlocked()
if existingPNode.isPinged || len(topics) == 0 {
return false, nil
}
return true, nil
}
func (cp *crawledPeers) removeTopic(topic string) {
cp.mu.Lock()
defer cp.mu.Unlock()
// Get all peers subscribed to this topic
peers, ok := cp.peersByTopic[topic]
if !ok {
return // Topic doesn't exist
}
// Remove the topic from each peer's topic list
for pnode := range peers {
delete(pnode.topics, topic)
// remove the peer if it has no more topics left
if len(pnode.topics) == 0 {
cp.updateTopicsUnlocked(pnode, nil)
}
}
// Remove the topic from byTopic map
delete(cp.peersByTopic, topic)
cp.recordMetricsUnlocked()
}
func (cp *crawledPeers) removePeerByPeerId(peerID peer.ID) {
cp.mu.Lock()
defer cp.mu.Unlock()
pnode, ok := cp.peerNodeByPid[peerID]
if !ok {
return
}
// Use updateTopicsUnlocked with empty topics to remove the peer
cp.updateTopicsUnlocked(pnode, nil)
cp.recordMetricsUnlocked()
}
func (cp *crawledPeers) removePeerByNodeId(enodeID enode.ID) {
cp.mu.Lock()
defer cp.mu.Unlock()
pnode, ok := cp.peerNodeByEnode[enodeID]
if !ok {
return
}
cp.updateTopicsUnlocked(pnode, nil)
cp.recordMetricsUnlocked()
}
func (cp *crawledPeers) recordMetricsUnlocked() {
gossipCrawlerPeersByEnodeCount.Set(float64(len(cp.peerNodeByEnode)))
gossipCrawlerPeersByPidCount.Set(float64(len(cp.peerNodeByPid)))
gossipCrawlerTopicsCount.Set(float64(len(cp.peersByTopic)))
}
func (cp *crawledPeers) cleanupPeer(pnode *peerNode) {
delete(cp.peerNodeByPid, pnode.peerID)
delete(cp.peerNodeByEnode, pnode.node.ID())
for t := range pnode.topics {
if peers, ok := cp.peersByTopic[t]; ok {
delete(peers, pnode)
if len(peers) == 0 {
delete(cp.peersByTopic, t)
}
}
}
pnode.topics = nil // Clear topics to indicate removal.
}
func (cp *crawledPeers) removeOldTopicsFromPeer(pnode *peerNode, newTopics map[string]struct{}) {
for oldTopic := range pnode.topics {
if _, ok := newTopics[oldTopic]; !ok {
if peers, ok := cp.peersByTopic[oldTopic]; ok {
delete(peers, pnode)
if len(peers) == 0 {
delete(cp.peersByTopic, oldTopic)
}
}
}
}
}
func (cp *crawledPeers) addNewTopicsToPeer(pnode *peerNode, newTopics map[string]struct{}) {
for newTopic := range newTopics {
if _, ok := pnode.topics[newTopic]; !ok {
if _, ok := cp.peersByTopic[newTopic]; !ok {
cp.peersByTopic[newTopic] = make(map[*peerNode]struct{})
}
cp.peersByTopic[newTopic][pnode] = struct{}{}
}
}
}
// updateTopicsUnlocked updates the topics associated with a peer node.
// If the topics slice is empty, the peer is completely removed from the crawled peers.
// Otherwise, it updates the peer's topics by removing old topics that are no longer
// present and adding new topics. This method assumes the caller holds the lock on cp.mu.
// If a topic has no peers after this update, it is removed from the list of topics we track peers for.
func (cp *crawledPeers) updateTopicsUnlocked(pnode *peerNode, topics []string) {
// If topics is empty, remove the peer completely.
if len(topics) == 0 {
cp.cleanupPeer(pnode)
return
}
newTopics := make(map[string]struct{})
for _, t := range topics {
newTopics[t] = struct{}{}
}
// Remove old topics that are no longer present.
cp.removeOldTopicsFromPeer(pnode, newTopics)
// Add new topics.
cp.addNewTopicsToPeer(pnode, newTopics)
pnode.topics = newTopics
}
func (cp *crawledPeers) getPeersForTopic(topic string, filter gossipcrawler.PeerFilterFunc) []peerNode {
cp.mu.RLock()
defer cp.mu.RUnlock()
peers, ok := cp.peersByTopic[topic]
if !ok {
return nil
}
var peerNodes []peerNode
for pnode := range peers {
if pnode.node == nil {
continue
}
if pnode.isPinged && filter(pnode.node) {
peerNodes = append(peerNodes, *pnode)
}
}
return peerNodes
}
// GossipPeerCrawler discovers and maintains a registry of peers subscribed to gossipsub topics.
// It uses discv5 to find peers, extracts their topic subscriptions from ENR records, and verifies
// their reachability via ping. Only peers that have been successfully pinged are returned when
// querying for peers on a given topic. The crawler runs three background loops: one for discovery,
// one for ping verification, and one for periodic cleanup of stale or filtered-out peers.
type GossipPeerCrawler struct {
ctx context.Context
crawlInterval, crawlTimeout time.Duration
crawledPeers *crawledPeers
// Discovery interface for finding peers
dv5 ListenerRebooter
p2pSvc *Service
topicExtractor gossipcrawler.TopicExtractor
peerFilter gossipcrawler.PeerFilterFunc
scorer PeerScoreFunc
pingCh chan enode.Node
pingSemaphore *semaphore.Weighted
once sync.Once
}
// cleanupInterval controls how frequently we sweep crawled peers and prune
// those that are no longer useful.
const cleanupInterval = 5 * time.Minute
// PeerScoreFunc calculates a reputation score for a given peer ID.
// Higher scores indicate more desirable peers. This function is used by PeersForTopic
// to sort returned peers in descending order of quality, allowing callers to prioritize
// connections to the most reliable peers.
type PeerScoreFunc func(peer.ID) float64
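// Illustrative scorer (not part of the original change) satisfying PeerScoreFunc:
// it favors peers the caller already considers connected and treats everyone else
// as neutral. The connected lookup is a hypothetical helper supplied by the caller.
func exampleScorer(connected func(peer.ID) bool) PeerScoreFunc {
    return func(id peer.ID) float64 {
        if connected(id) {
            return 1
        }
        return 0
    }
}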
// NewGossipPeerCrawler creates a new crawler for discovering gossipsub peers.
// The crawler uses the provided discv5 listener to discover peers and tracks their
// topic subscriptions. Parameters:
// - p2pSvc: The P2P service for network operations
// - dv5: The discv5 listener used for peer discovery and ping verification
// - crawlTimeout: Maximum duration for each crawl iteration
// - crawlInterval: The duration between each crawl iteration
// - maxConcurrentPings: Limits parallel ping operations to avoid overwhelming the network
// - peerFilter: Determines which discovered peers should be tracked
// - scorer: Calculates peer quality scores for sorting results
//
// Returns an error if any required parameter is nil or invalid.
func NewGossipPeerCrawler(
ctx context.Context,
p2pSvc *Service,
dv5 ListenerRebooter,
crawlTimeout time.Duration,
crawlInterval time.Duration,
maxConcurrentPings int64,
peerFilter gossipcrawler.PeerFilterFunc,
scorer PeerScoreFunc,
) (*GossipPeerCrawler, error) {
if p2pSvc == nil {
return nil, errors.New("p2pSvc is nil")
}
if dv5 == nil {
return nil, errors.New("dv5 is nil")
}
if crawlTimeout <= 0 {
return nil, errors.New("crawl timeout must be greater than 0")
}
if crawlInterval <= 0 {
return nil, errors.New("crawl interval must be greater than 0")
}
if maxConcurrentPings <= 0 {
return nil, errors.New("max concurrent pings must be greater than 0")
}
if peerFilter == nil {
return nil, errors.New("peer filter is nil")
}
if scorer == nil {
return nil, errors.New("peer scorer is nil")
}
g := &GossipPeerCrawler{
ctx: ctx,
crawlInterval: crawlInterval,
crawlTimeout: crawlTimeout,
p2pSvc: p2pSvc,
dv5: dv5,
peerFilter: peerFilter,
scorer: scorer,
}
g.pingCh = make(chan enode.Node, 4*maxConcurrentPings)
g.pingSemaphore = semaphore.NewWeighted(maxConcurrentPings)
g.crawledPeers = &crawledPeers{
peerNodeByEnode: make(map[enode.ID]*peerNode),
peerNodeByPid: make(map[peer.ID]*peerNode),
peersByTopic: make(map[string]map[*peerNode]struct{}),
}
return g, nil
}
// PeersForTopic returns a list of enode records for peers subscribed to the given topic.
// Only peers that have been successfully pinged (verified as reachable) and pass the
// configured peer filter are included. Results are sorted in descending order by peer
// score, so higher-quality peers appear first. Returns nil if no peers are found for
// the topic. The returned slice should not be modified as it contains pointers to
// internal enode records.
func (g *GossipPeerCrawler) PeersForTopic(topic string) []*enode.Node {
peerNodes := g.crawledPeers.getPeersForTopic(topic, g.peerFilter)
slices.SortFunc(peerNodes, func(a, b peerNode) int {
scoreA := g.scorer(a.peerID)
scoreB := g.scorer(b.peerID)
if scoreA > scoreB {
return -1
}
if scoreA < scoreB {
return 1
}
return 0
})
nodes := make([]*enode.Node, 0, len(peerNodes))
for _, pn := range peerNodes {
nodes = append(nodes, pn.node)
}
return nodes
}
// RemovePeerByPeerId removes a peer from the crawler's registry by their libp2p peer ID.
// This also removes the peer from all topic subscriptions they were associated with.
// If the peer is not found, this operation is a no-op.
func (g *GossipPeerCrawler) RemovePeerByPeerId(peerID peer.ID) {
g.crawledPeers.removePeerByPeerId(peerID)
}
// RemoveTopic removes a topic and all its peer associations from the crawler.
// Peers that were only subscribed to this topic are completely removed from the registry.
// Peers subscribed to other topics remain tracked for those topics.
// If the topic does not exist, this operation is a no-op.
func (g *GossipPeerCrawler) RemoveTopic(topic string) {
g.crawledPeers.removeTopic(topic)
}
// Start begins the crawler's background operations. It launches three goroutines:
// a crawl loop that periodically discovers new peers via discv5, a ping loop that
// verifies peer reachability, and a cleanup loop that removes stale or filtered peers.
// The provided TopicExtractor is used to determine which gossipsub topics each
// discovered peer subscribes to. Start is idempotent; subsequent calls after the
// first are no-ops. Returns an error if the topic extractor is nil.
func (g *GossipPeerCrawler) Start(te gossipcrawler.TopicExtractor) error {
if te == nil {
return errors.New("topic extractor is nil")
}
g.once.Do(func() {
g.topicExtractor = te
go g.crawlLoop()
go g.pingLoop()
go g.cleanupLoop()
})
return nil
}
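// Usage sketch (illustrative only, not part of the original change): constructing a
// crawler that accepts every discovered peer, scores peers with the service's gossip
// scorer (as the broadcast test did before this file was removed), and starts it with
// a topic extractor returning a fixed hypothetical topic. Intervals and the ping
// concurrency are arbitrary example values.
func startExampleCrawler(ctx context.Context, svc *Service, listener ListenerRebooter) (*GossipPeerCrawler, error) {
    crawler, err := NewGossipPeerCrawler(ctx, svc, listener,
        30*time.Second, // crawlTimeout
        time.Minute,    // crawlInterval
        16,             // maxConcurrentPings
        func(n *enode.Node) bool { return true }, // accept every peer
        svc.Peers().Scorers().Score,
    )
    if err != nil {
        return nil, err
    }
    // The extractor decides which gossip topics a discovered node is relevant for.
    err = crawler.Start(func(ctx context.Context, node *enode.Node) ([]string, error) {
        return []string{"/eth2/00000000/data_column_sidecar_12/ssz_snappy"}, nil
    })
    if err != nil {
        return nil, err
    }
    return crawler, nil
}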
func (g *GossipPeerCrawler) pingLoop() {
for {
select {
case node := <-g.pingCh:
if err := g.pingSemaphore.Acquire(g.ctx, 1); err != nil {
return
}
go func(node *enode.Node) {
defer g.pingSemaphore.Release(1)
if err := g.dv5.Ping(node); err != nil {
log.WithError(err).WithField("node", node.ID()).Debug("Failed to ping node")
g.crawledPeers.removePeerByNodeId(node.ID())
return
}
g.crawledPeers.updateStatusToPinged(node.ID())
}(&node)
case <-g.ctx.Done():
return
}
}
}
func (g *GossipPeerCrawler) crawlLoop() {
for {
g.crawl()
select {
case <-time.After(g.crawlInterval):
case <-g.ctx.Done():
return
}
}
}
func (g *GossipPeerCrawler) crawl() {
log.Debug("Bitcoin")
ctx, cancel := context.WithTimeout(g.ctx, g.crawlTimeout)
defer cancel()
iterator := g.dv5.RandomNodes()
// Ensure iterator unblocks on context cancellation or timeout
go func() {
<-ctx.Done()
iterator.Close()
}()
log.Debug("ABCDEFG")
for iterator.Next() {
log.Debug("Ethereum")
if ctx.Err() != nil {
return
}
node := iterator.Node()
if node == nil {
continue
}
if !g.peerFilter(node) {
g.crawledPeers.removePeerByNodeId(node.ID())
continue
}
topics, err := g.topicExtractor(ctx, node)
if err != nil {
log.WithError(err).WithField("node", node.ID()).Debug("Failed to extract topics, skipping")
continue
}
shouldPing, err := g.crawledPeers.updatePeer(node, topics)
if err != nil {
log.WithError(err).WithField("node", node.ID()).Error("Failed to update crawled peers")
}
if !shouldPing {
continue
}
select {
case g.pingCh <- *node:
case <-g.ctx.Done():
return
}
}
}
// cleanupLoop periodically removes peers that the filter rejects or that
// have no topics of interest. It uses the same context lifecycle as other
// background loops.
func (g *GossipPeerCrawler) cleanupLoop() {
ticker := time.NewTicker(cleanupInterval)
defer ticker.Stop()
// Initial cleanup to catch any leftovers from startup state
g.cleanup()
for {
select {
case <-ticker.C:
g.cleanup()
case <-g.ctx.Done():
return
}
}
}
// cleanup scans the crawled peer set and removes entries that either fail
// the current peer filter or have no topics of interest remaining.
func (g *GossipPeerCrawler) cleanup() {
cp := g.crawledPeers
// Snapshot current peers to evaluate without holding the lock during
// filter and topic extraction.
cp.mu.RLock()
peers := make([]*peerNode, 0, len(cp.peerNodeByPid))
for _, p := range cp.peerNodeByPid {
peers = append(peers, p)
}
cp.mu.RUnlock()
for _, p := range peers {
// Remove peers that no longer pass the filter
if !g.peerFilter(p.node) {
cp.removePeerByNodeId(p.node.ID())
continue
}
// Re-extract topics; if the extractor errors or yields none, drop the peer.
topics, err := g.topicExtractor(g.ctx, p.node)
if err != nil || len(topics) == 0 {
cp.removePeerByNodeId(p.node.ID())
}
}
}
// enodeToPeerID converts an enode record to a peer ID.
func enodeToPeerID(n *enode.Node) (peer.ID, error) {
info, _, err := convertToAddrInfo(n)
if err != nil {
return "", fmt.Errorf("converting enode to addr info: %w", err)
}
if info == nil {
return "", errors.New("peer info is nil")
}
return info.ID, nil
}

View File

@@ -1,787 +0,0 @@
package p2p
import (
"context"
"net"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
p2ptest "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prometheus/client_golang/prometheus/testutil"
"github.com/stretchr/testify/require"
require2 "github.com/stretchr/testify/require"
)
// Helpers for crawledPeers tests
func newTestCrawledPeers() *crawledPeers {
return &crawledPeers{
peerNodeByEnode: make(map[enode.ID]*peerNode),
peerNodeByPid: make(map[peer.ID]*peerNode),
peersByTopic: make(map[string]map[*peerNode]struct{}),
}
}
func addPeerWithTopics(t *testing.T, cp *crawledPeers, node *enode.Node, topics []string, pinged bool) *peerNode {
t.Helper()
pid, err := enodeToPeerID(node)
require.NoError(t, err)
p := &peerNode{
isPinged: pinged,
node: node,
peerID: pid,
topics: make(map[string]struct{}),
}
cp.mu.Lock()
cp.peerNodeByEnode[p.node.ID()] = p
cp.peerNodeByPid[p.peerID] = p
cp.updateTopicsUnlocked(p, topics)
cp.mu.Unlock()
return p
}
func TestUpdateStatusToPinged(t *testing.T) {
localNode := createTestNodeRandom(t)
node1 := localNode.Node()
localNode2 := createTestNodeRandom(t)
node2 := localNode2.Node()
cases := []struct {
name string
prep func(*crawledPeers)
target *enode.Node
expectPinged map[enode.ID]bool
}{
{
name: "sets pinged for existing peer",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"a"}, false)
},
target: node1,
expectPinged: map[enode.ID]bool{
node1.ID(): true,
},
},
{
name: "idempotent when already pinged",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"a"}, true)
},
target: node1,
expectPinged: map[enode.ID]bool{
node1.ID(): true,
},
},
{
name: "no change when peer missing",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"a"}, false)
},
target: node2,
expectPinged: map[enode.ID]bool{
node1.ID(): false,
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
cp := newTestCrawledPeers()
tc.prep(cp)
cp.updateStatusToPinged(tc.target.ID())
cp.mu.RLock()
defer cp.mu.RUnlock()
for id, exp := range tc.expectPinged {
if p := cp.peerNodeByEnode[id]; p != nil {
require.Equal(t, exp, p.isPinged)
}
}
})
}
}
func TestRemoveTopic(t *testing.T) {
localNode := createTestNodeRandom(t)
node1 := localNode.Node()
localNode2 := createTestNodeRandom(t)
node2 := localNode2.Node()
topic1 := "t1"
topic2 := "t2"
cases := []struct {
name string
prep func(*crawledPeers)
topic string
check func(*testing.T, *crawledPeers)
}{
{
name: "removes topic from all peers and index",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1", "t2"}, true)
addPeerWithTopics(t, cp, node2, []string{"t1"}, true)
},
topic: topic1,
check: func(t *testing.T, cp *crawledPeers) {
_, ok := cp.peersByTopic[topic1]
require.False(t, ok)
for _, p := range cp.peerNodeByPid {
_, has := p.topics[topic1]
require.False(t, has)
}
// Ensure other topics remain
_, ok = cp.peersByTopic[topic2]
require.True(t, ok)
},
},
{
name: "no-op when topic missing",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t2"}, true)
},
topic: topic1,
check: func(t *testing.T, cp *crawledPeers) {
_, ok := cp.peersByTopic[topic2]
require.True(t, ok)
},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
cp := newTestCrawledPeers()
tc.prep(cp)
cp.removeTopic(tc.topic)
tc.check(t, cp)
})
}
}
func TestRemovePeer(t *testing.T) {
localNode := createTestNodeRandom(t)
node1 := localNode.Node()
localNode2 := createTestNodeRandom(t)
node2 := localNode2.Node()
cases := []struct {
name string
prep func(*crawledPeers)
target enode.ID
wantTopics int
}{
{
name: "removes existing peer and prunes empty topic",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1"}, true)
},
target: node1.ID(),
wantTopics: 0,
},
{
name: "removes only targeted peer; keeps topic for other",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1"}, true)
addPeerWithTopics(t, cp, node2, []string{"t1"}, true)
},
target: node1.ID(),
wantTopics: 1, // byTopic should still have t1 with one peer
},
{
name: "no-op when peer missing",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1"}, true)
},
target: node2.ID(),
wantTopics: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
cp := newTestCrawledPeers()
tc.prep(cp)
cp.removePeerByNodeId(tc.target)
cp.mu.RLock()
defer cp.mu.RUnlock()
require.Len(t, cp.peersByTopic, tc.wantTopics)
})
}
}
func TestRemovePeerId(t *testing.T) {
localNode := createTestNodeRandom(t)
node1 := localNode.Node()
localNode2 := createTestNodeRandom(t)
node2 := localNode2.Node()
pid1, err := enodeToPeerID(node1)
require.NoError(t, err)
pid2, err := enodeToPeerID(node2)
require.NoError(t, err)
cases := []struct {
name string
prep func(*crawledPeers)
target peer.ID
wantTopics int
wantPeers int
}{
{
name: "removes existing peer by id and prunes topic",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1"}, true)
},
target: pid1,
wantTopics: 0,
wantPeers: 0,
},
{
name: "removes only targeted peer id; keeps topic for other",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1"}, true)
addPeerWithTopics(t, cp, node2, []string{"t1"}, true)
},
target: pid1,
wantTopics: 1,
wantPeers: 1,
},
{
name: "no-op when peer id missing",
prep: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, node1, []string{"t1"}, true)
},
target: pid2,
wantTopics: 1,
wantPeers: 1,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
cp := newTestCrawledPeers()
tc.prep(cp)
cp.removePeerByPeerId(tc.target)
cp.mu.RLock()
defer cp.mu.RUnlock()
require.Len(t, cp.peersByTopic, tc.wantTopics)
require.Len(t, cp.peerNodeByPid, tc.wantPeers)
})
}
}
func TestUpdateCrawledIfNewer(t *testing.T) {
newCrawler := func() (*crawledPeers, *GossipPeerCrawler, func()) {
ctx, cancel := context.WithCancel(context.Background())
g := &GossipPeerCrawler{
ctx: ctx,
pingCh: make(chan enode.Node, 8),
}
cp := newTestCrawledPeers()
return cp, g, cancel
}
// Helper: local node that will cause enodeToPeerID to fail (no TCP/UDP multiaddrs)
newNodeNoPorts := func(t *testing.T) *enode.Node {
_, privKey := createAddrAndPrivKey(t)
db, err := enode.OpenDB("")
require.NoError(t, err)
t.Cleanup(func() { db.Close() })
ln := enode.NewLocalNode(db, privKey)
// Do not set TCP/UDP; keep only IP
ln.SetStaticIP(net.ParseIP("127.0.0.1"))
return ln.Node()
}
// Ensure both A nodes have the same enode.ID but differing seq
ln := createTestNodeRandom(t)
nodeA1 := ln.Node()
setNodeSeq(ln, nodeA1.Seq()+1)
nodeA2 := ln.Node()
tests := []struct {
name string
arrange func(*crawledPeers)
invokeNode *enode.Node
invokeTopics []string
expectedShouldPing bool
expectErr bool
assert func(*testing.T, *crawledPeers, <-chan enode.Node)
}{
{
name: "new peer with topics adds peer and pings",
arrange: func(cp *crawledPeers) {},
invokeNode: nodeA1,
invokeTopics: []string{"a"},
expectedShouldPing: true,
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Len(t, cp.peerNodeByEnode, 1)
require.Len(t, cp.peerNodeByPid, 1)
require.Contains(t, cp.peersByTopic, "a")
cp.mu.RUnlock()
},
},
{
name: "new peer with empty topics is removed",
arrange: func(cp *crawledPeers) {},
invokeNode: nodeA1,
invokeTopics: nil,
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Empty(t, cp.peerNodeByEnode)
require.Empty(t, cp.peerNodeByPid)
require.Empty(t, cp.peersByTopic)
cp.mu.RUnlock()
},
},
{
name: "existing peer lower seq is ignored",
arrange: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, nodeA2, []string{"x"}, false) // higher seq exists
},
invokeNode: nodeA1, // lower seq
invokeTopics: []string{"a", "b"},
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Contains(t, cp.peersByTopic, "x")
require.NotContains(t, cp.peersByTopic, "a")
cp.mu.RUnlock()
},
},
{
name: "existing peer equal seq is ignored",
arrange: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, nodeA1, []string{"x"}, false)
},
invokeNode: nodeA1,
invokeTopics: []string{"a"},
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Contains(t, cp.peersByTopic, "x")
require.NotContains(t, cp.peersByTopic, "a")
cp.mu.RUnlock()
},
},
{
name: "existing peer higher seq updates topics and pings",
arrange: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, nodeA1, []string{"x"}, false)
},
invokeNode: nodeA2,
invokeTopics: []string{"a"},
expectedShouldPing: true,
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.NotContains(t, cp.peersByTopic, "x")
require.Contains(t, cp.peersByTopic, "a")
cp.mu.RUnlock()
},
},
{
name: "existing peer higher seq but empty topics removes peer",
arrange: func(cp *crawledPeers) {
addPeerWithTopics(t, cp, nodeA1, []string{"x"}, false)
},
invokeNode: nodeA2,
invokeTopics: nil,
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Empty(t, cp.peerNodeByEnode)
require.Empty(t, cp.peerNodeByPid)
cp.mu.RUnlock()
},
},
{
name: "corrupted existing entry with nil node is ignored",
arrange: func(cp *crawledPeers) {
pid, _ := enodeToPeerID(nodeA1)
cp.mu.Lock()
pn := &peerNode{node: nil, peerID: pid, topics: map[string]struct{}{"x": {}}}
cp.peerNodeByEnode[nodeA1.ID()] = pn
cp.peerNodeByPid[pid] = pn
cp.peersByTopic["x"] = map[*peerNode]struct{}{pn: {}}
cp.mu.Unlock()
},
expectErr: true,
invokeNode: nodeA2,
invokeTopics: []string{"a"},
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Contains(t, cp.peersByTopic, "x")
cp.mu.RUnlock()
},
},
{
name: "new peer with no ports causes enodeToPeerID error; no add",
arrange: func(cp *crawledPeers) {},
invokeNode: newNodeNoPorts(t),
invokeTopics: []string{"a"},
expectErr: true,
assert: func(t *testing.T, cp *crawledPeers, ch <-chan enode.Node) {
cp.mu.RLock()
require.Empty(t, cp.peerNodeByEnode)
require.Empty(t, cp.peerNodeByPid)
require.Empty(t, cp.peersByTopic)
cp.mu.RUnlock()
},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
cp, g, cancel := newCrawler()
defer cancel()
tc.arrange(cp)
shouldPing, err := cp.updatePeer(tc.invokeNode, tc.invokeTopics)
if tc.expectErr {
require.Error(t, err)
} else {
require.NoError(t, err)
}
require.Equal(t, tc.expectedShouldPing, shouldPing)
tc.assert(t, cp, g.pingCh)
})
}
}
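// The table above boils down to one freshness rule on ENR records: an
// update is applied only when the incoming sequence number is strictly
// higher than the stored one, and a newer record with no topics removes
// the peer entirely. The sketch below restates that rule in isolation over
// a plain map keyed by string IDs; peerRecord and applyUpdate are
// illustrative names, not this package's API.
type peerRecord struct {
	seq    uint64
	topics map[string]struct{}
}

// applyUpdate mutates index in place and reports whether the peer is still
// tracked afterwards, mirroring the expectations in the cases above.
func applyUpdate(index map[string]*peerRecord, id string, seq uint64, topics []string) bool {
	existing, ok := index[id]
	if ok && seq <= existing.seq {
		return true // stale or duplicate record: keep the stored entry untouched
	}
	if len(topics) == 0 {
		delete(index, id) // newer record but no topics: forget the peer
		return false
	}
	rec := &peerRecord{seq: seq, topics: make(map[string]struct{}, len(topics))}
	for _, t := range topics {
		rec.topics[t] = struct{}{}
	}
	index[id] = rec
	return true
}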
func TestPeersForTopic(t *testing.T) {
t.Parallel()
newCrawler := func(filter gossipcrawler.PeerFilterFunc) (*GossipPeerCrawler, *crawledPeers) {
g := &GossipPeerCrawler{
peerFilter: filter,
scorer: func(peer.ID) float64 { return 0 },
crawledPeers: newTestCrawledPeers(),
}
return g, g.crawledPeers
}
// Prepare nodes
ln1 := createTestNodeRandom(t)
ln2 := createTestNodeRandom(t)
ln3 := createTestNodeRandom(t)
n1, n2, n3 := ln1.Node(), ln2.Node(), ln3.Node()
topic := "top"
cases := []struct {
name string
filter gossipcrawler.PeerFilterFunc
setup func(t *testing.T, g *GossipPeerCrawler, cp *crawledPeers)
wantIDs []enode.ID
}{
{
name: "no peers for topic returns empty",
filter: func(*enode.Node) bool { return true },
setup: func(t *testing.T, g *GossipPeerCrawler, cp *crawledPeers) {},
wantIDs: nil,
},
{
name: "excludes unpinged peers",
filter: func(*enode.Node) bool { return true },
setup: func(t *testing.T, g *GossipPeerCrawler, cp *crawledPeers) {
// Add one pinged and one not pinged on same topic
addPeerWithTopics(t, cp, n1, []string{string(topic)}, true)
addPeerWithTopics(t, cp, n2, []string{string(topic)}, false)
},
wantIDs: []enode.ID{n1.ID()},
},
{
name: "applies peer filter to exclude",
filter: func(n *enode.Node) bool { return n.ID() != n2.ID() },
setup: func(t *testing.T, g *GossipPeerCrawler, cp *crawledPeers) {
addPeerWithTopics(t, cp, n1, []string{string(topic)}, true)
addPeerWithTopics(t, cp, n2, []string{string(topic)}, true)
},
wantIDs: []enode.ID{n1.ID()},
},
{
name: "ignores peerNode with nil node",
filter: func(*enode.Node) bool { return true },
setup: func(t *testing.T, g *GossipPeerCrawler, cp *crawledPeers) {
addPeerWithTopics(t, cp, n1, []string{string(topic)}, true)
// Add n2 then set its node to nil to simulate corrupted entry
p2 := addPeerWithTopics(t, cp, n2, []string{string(topic)}, true)
cp.mu.Lock()
p2.node = nil
cp.mu.Unlock()
},
wantIDs: []enode.ID{n1.ID()},
},
{
name: "sorted by score descending",
filter: func(*enode.Node) bool { return true },
setup: func(t *testing.T, g *GossipPeerCrawler, cp *crawledPeers) {
// Add three pinged peers
p1 := addPeerWithTopics(t, cp, n1, []string{string(topic)}, true)
p2 := addPeerWithTopics(t, cp, n2, []string{string(topic)}, true)
p3 := addPeerWithTopics(t, cp, n3, []string{string(topic)}, true)
// Provide a deterministic scoring function
scores := map[peer.ID]float64{
p1.peerID: 3.0,
p2.peerID: 2.0,
p3.peerID: 1.0,
}
g.scorer = func(id peer.ID) float64 { return scores[id] }
},
wantIDs: []enode.ID{n1.ID(), n2.ID(), n3.ID()},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
g, cp := newCrawler(tc.filter)
tc.setup(t, g, cp)
got := g.PeersForTopic(topic)
var gotIDs []enode.ID
for _, n := range got {
gotIDs = append(gotIDs, n.ID())
}
if tc.wantIDs == nil {
require.Empty(t, gotIDs)
return
}
require.Equal(t, tc.wantIDs, gotIDs)
})
}
}
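// The cases above reduce to two steps: drop candidates that fail the
// filter (or are unpinged, or have a nil node), then order what is left by
// score, highest first. A minimal standard-library sketch of that
// selection with string IDs standing in for peers; peersForTopicSketch is
// an illustrative name and assumes only `import "sort"`.
func peersForTopicSketch(candidates []string, keep func(string) bool, score func(string) float64) []string {
	out := make([]string, 0, len(candidates))
	for _, id := range candidates {
		if keep(id) { // the filter / pinged / nil-node checks above collapse into keep here
			out = append(out, id)
		}
	}
	// Highest score first, matching the n1 > n2 > n3 ordering asserted above.
	sort.Slice(out, func(i, j int) bool { return score(out[i]) > score(out[j]) })
	return out
}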
func TestCrawler_AddsAndPingsPeer(t *testing.T) {
// Create a test node with valid ENR entries (IP/TCP/UDP)
localNode := createTestNodeRandom(t)
node := localNode.Node()
// Prepare a mock iterator returning our single node
iterator := p2ptest.NewMockIterator([]*enode.Node{node})
// Prepare a mock listener with successful Ping
mockListener := p2ptest.NewMockListener(localNode, iterator)
mockListener.PingFunc = func(*enode.Node) error { return nil }
// Inject a permissive peer filter
filter := gossipcrawler.PeerFilterFunc(func(n *enode.Node) bool { return true })
// Create crawler with small intervals
scorer := func(peer.ID) float64 { return 0 }
g, err := NewGossipPeerCrawler(t.Context(), &Service{}, mockListener, 2*time.Second, 10*time.Millisecond, 4, filter, scorer)
require.NoError(t, err)
// Assign a simple topic extractor
topic := "test/topic"
topicExtractor := func(ctx context.Context, n *enode.Node) ([]string, error) {
return []string{topic}, nil
}
// Run ping loop in background and perform a single crawl
require.NoError(t, g.Start(topicExtractor))
// Verify that the peer has been indexed under the topic and marked as pinged
require2.Eventually(t, func() bool {
g.crawledPeers.mu.RLock()
defer g.crawledPeers.mu.RUnlock()
peers := g.crawledPeers.peersByTopic[topic]
if len(peers) == 0 {
return false
}
// Fetch the single peerNode and check status
for pn := range peers {
if pn == nil {
return false
}
return pn.isPinged
}
return false
}, 2*time.Second, 10*time.Millisecond)
}
func TestCrawler_SkipsPeer_WhenFilterRejects(t *testing.T) {
t.Parallel()
localNode := createTestNodeRandom(t)
node := localNode.Node()
iterator := p2ptest.NewMockIterator([]*enode.Node{node})
mockListener := p2ptest.NewMockListener(localNode, iterator)
mockListener.PingFunc = func(*enode.Node) error { return nil }
// Reject all peers via injected filter
filter := gossipcrawler.PeerFilterFunc(func(n *enode.Node) bool { return false })
scorer := func(peer.ID) float64 { return 0 }
g, err := NewGossipPeerCrawler(t.Context(), &Service{}, mockListener, 2*time.Second, 10*time.Millisecond, 2, filter, scorer)
if err != nil {
t.Fatalf("NewGossipPeerCrawler error: %v", err)
}
topic := "test/topic"
g.topicExtractor = func(ctx context.Context, n *enode.Node) ([]string, error) { return []string{topic}, nil }
g.crawl()
// Verify no peers are indexed, because filter rejected the node
g.crawledPeers.mu.RLock()
defer g.crawledPeers.mu.RUnlock()
if len(g.crawledPeers.peerNodeByEnode) != 0 || len(g.crawledPeers.peerNodeByPid) != 0 || len(g.crawledPeers.peersByTopic) != 0 {
t.Fatalf("expected no peers indexed, got byEnode=%d byPeerId=%d byTopic=%d",
len(g.crawledPeers.peerNodeByEnode), len(g.crawledPeers.peerNodeByPid), len(g.crawledPeers.peersByTopic))
}
}
func TestCrawler_RemoveTopic_RemovesTopicFromIndexes(t *testing.T) {
t.Parallel()
localNode := createTestNodeRandom(t)
node := localNode.Node()
iterator := p2ptest.NewMockIterator([]*enode.Node{node})
mockListener := p2ptest.NewMockListener(localNode, iterator)
mockListener.PingFunc = func(*enode.Node) error { return nil }
filter := gossipcrawler.PeerFilterFunc(func(n *enode.Node) bool { return true })
scorer := func(peer.ID) float64 { return 0 }
g, err := NewGossipPeerCrawler(t.Context(), &Service{}, mockListener, 2*time.Second, 10*time.Millisecond, 2, filter, scorer)
if err != nil {
t.Fatalf("NewGossipPeerCrawler error: %v", err)
}
topic1 := "test/topic1"
topic2 := "test/topic2"
g.topicExtractor = func(ctx context.Context, n *enode.Node) ([]string, error) { return []string{topic1, topic2}, nil }
// Single crawl to index topics
g.crawl()
// Remove one topic and assert it is pruned from all indexes
g.RemoveTopic(topic1)
g.crawledPeers.mu.RLock()
defer g.crawledPeers.mu.RUnlock()
if _, ok := g.crawledPeers.peersByTopic[topic1]; ok {
t.Fatalf("expected topic1 to be removed from byTopic")
}
// Ensure peer still exists and retains topic2
for _, pn := range g.crawledPeers.peerNodeByEnode {
if _, has1 := pn.topics[topic1]; has1 {
t.Fatalf("expected topic1 to be removed from peer topics")
}
if _, has2 := pn.topics[topic2]; !has2 {
t.Fatalf("expected topic2 to remain for peer")
}
}
}
func TestCrawledPeersMetrics(t *testing.T) {
localNode1 := createTestNodeRandom(t)
node1 := localNode1.Node()
localNode2 := createTestNodeRandom(t)
node2 := localNode2.Node()
pid1, err := enodeToPeerID(node1)
require.NoError(t, err)
t.Run("updatePeer records metrics", func(t *testing.T) {
cp := newTestCrawledPeers()
// Add first peer with two topics
_, err := cp.updatePeer(node1, []string{"topic1", "topic2"})
require.NoError(t, err)
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByPidCount))
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerTopicsCount))
// Add second peer with one overlapping topic
_, err = cp.updatePeer(node2, []string{"topic1", "topic3"})
require.NoError(t, err)
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerPeersByPidCount))
require.Equal(t, float64(3), testutil.ToFloat64(gossipCrawlerTopicsCount))
})
t.Run("removePeerByPeerId records metrics", func(t *testing.T) {
cp := newTestCrawledPeers()
// Add two peers
_, err := cp.updatePeer(node1, []string{"topic1"})
require.NoError(t, err)
_, err = cp.updatePeer(node2, []string{"topic1", "topic2"})
require.NoError(t, err)
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerTopicsCount))
// Remove first peer by peer ID
cp.removePeerByPeerId(pid1)
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByPidCount))
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerTopicsCount))
})
t.Run("removePeerByNodeId records metrics", func(t *testing.T) {
cp := newTestCrawledPeers()
// Add two peers
_, err := cp.updatePeer(node1, []string{"topic1"})
require.NoError(t, err)
_, err = cp.updatePeer(node2, []string{"topic2"})
require.NoError(t, err)
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerTopicsCount))
// Remove first peer by enode ID
cp.removePeerByNodeId(node1.ID())
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByPidCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerTopicsCount))
})
t.Run("removeTopic records metrics", func(t *testing.T) {
cp := newTestCrawledPeers()
// Add two peers with overlapping topics
_, err := cp.updatePeer(node1, []string{"topic1", "topic2"})
require.NoError(t, err)
_, err = cp.updatePeer(node2, []string{"topic1"})
require.NoError(t, err)
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(2), testutil.ToFloat64(gossipCrawlerTopicsCount))
// Remove topic1 - this should also remove node2 which only had topic1
cp.removeTopic("topic1")
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByPidCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerTopicsCount))
})
t.Run("updatePeer with empty topics removes peer and records metrics", func(t *testing.T) {
cp := newTestCrawledPeers()
// Add peer with topics
_, err := cp.updatePeer(node1, []string{"topic1"})
require.NoError(t, err)
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(1), testutil.ToFloat64(gossipCrawlerTopicsCount))
// Increment sequence number to ensure update is processed
setNodeSeq(localNode1, node1.Seq()+1)
node1Updated := localNode1.Node()
// Update with empty topics - should remove the peer
_, err = cp.updatePeer(node1Updated, nil)
require.NoError(t, err)
require.Equal(t, float64(0), testutil.ToFloat64(gossipCrawlerPeersByEnodeCount))
require.Equal(t, float64(0), testutil.ToFloat64(gossipCrawlerPeersByPidCount))
require.Equal(t, float64(0), testutil.ToFloat64(gossipCrawlerTopicsCount))
})
}
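// The metric assertions above read gauge values back through
// client_golang's testutil package. Below is a standalone sketch of that
// pattern using the real prometheus APIs; the package name, gauge name,
// and test are made up for illustration and are not part of this repo.
package example

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

// exampleGauge stands in for the crawler gauges referenced above.
var exampleGauge = promauto.NewGauge(prometheus.GaugeOpts{
	Name: "example_tracked_items_count",
	Help: "Illustrative gauge for demonstrating testutil.ToFloat64.",
})

func TestGaugeReadback(t *testing.T) {
	exampleGauge.Set(2)
	if got := testutil.ToFloat64(exampleGauge); got != 2 {
		t.Fatalf("expected gauge value 2, got %v", got)
	}
}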

View File

@@ -156,13 +156,29 @@ func (s *Service) retrieveActiveValidators() (uint64, error) {
if s.activeValidatorCount != 0 {
return s.activeValidatorCount, nil
}
finalizedCheckpoint, err := s.cfg.DB.FinalizedCheckpoint(s.ctx)
rt := s.cfg.DB.LastArchivedRoot(s.ctx)
if rt == params.BeaconConfig().ZeroHash {
genState, err := s.cfg.DB.GenesisState(s.ctx)
if err != nil {
return 0, err
}
if genState == nil || genState.IsNil() {
return 0, errors.New("no genesis state exists")
}
activeVals, err := helpers.ActiveValidatorCount(context.Background(), genState, coreTime.CurrentEpoch(genState))
if err != nil {
return 0, err
}
// Cache active validator count
s.activeValidatorCount = activeVals
return activeVals, nil
}
bState, err := s.cfg.DB.State(s.ctx, rt)
if err != nil {
return 0, err
}
bState, err := s.cfg.StateGen.StateByRoot(s.ctx, [32]byte(finalizedCheckpoint.Root))
if err != nil {
return 0, err
if bState == nil || bState.IsNil() {
return 0, errors.Errorf("no state with root %#x exists", rt)
}
activeVals, err := helpers.ActiveValidatorCount(context.Background(), bState, coreTime.CurrentEpoch(bState))
if err != nil {

View File

@@ -1,14 +1,10 @@
package p2p
import (
"context"
"testing"
iface "github.com/OffchainLabs/prysm/v7/beacon-chain/db/iface"
dbutil "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
mockstategen "github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen/mock"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -24,11 +20,9 @@ func TestCorrect_ActiveValidatorsCount(t *testing.T) {
params.OverrideBeaconConfig(cfg)
db := dbutil.SetupDB(t)
wrappedDB := &finalizedCheckpointDB{ReadOnlyDatabaseWithSeqNum: db}
stateGen := mockstategen.NewService()
s := &Service{
ctx: t.Context(),
cfg: &Config{DB: wrappedDB, StateGen: stateGen},
cfg: &Config{DB: db},
}
bState, err := util.NewBeaconState(func(state *ethpb.BeaconState) error {
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
@@ -45,10 +39,6 @@ func TestCorrect_ActiveValidatorsCount(t *testing.T) {
})
require.NoError(t, err)
require.NoError(t, db.SaveGenesisData(s.ctx, bState))
checkpoint, err := db.FinalizedCheckpoint(s.ctx)
require.NoError(t, err)
wrappedDB.finalized = checkpoint
stateGen.AddStateForRoot(bState, bytesutil.ToBytes32(checkpoint.Root))
vals, err := s.retrieveActiveValidators()
assert.NoError(t, err, "genesis state not retrieved")
@@ -62,10 +52,7 @@ func TestCorrect_ActiveValidatorsCount(t *testing.T) {
}))
}
require.NoError(t, bState.SetSlot(10000))
rootA := [32]byte{'a'}
require.NoError(t, db.SaveState(s.ctx, bState, rootA))
wrappedDB.finalized = &ethpb.Checkpoint{Root: rootA[:]}
stateGen.AddStateForRoot(bState, rootA)
require.NoError(t, db.SaveState(s.ctx, bState, [32]byte{'a'}))
// Reset count
s.activeValidatorCount = 0
@@ -90,15 +77,3 @@ func TestLoggingParameters(_ *testing.T) {
logGossipParameters("testing", defaultLightClientOptimisticUpdateTopicParams())
logGossipParameters("testing", defaultLightClientFinalityUpdateTopicParams())
}
type finalizedCheckpointDB struct {
iface.ReadOnlyDatabaseWithSeqNum
finalized *ethpb.Checkpoint
}
func (f *finalizedCheckpointDB) FinalizedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error) {
if f.finalized != nil {
return f.finalized, nil
}
return f.ReadOnlyDatabaseWithSeqNum.FinalizedCheckpoint(ctx)
}

View File

@@ -1,14 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["interface.go"],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler",
visibility = [
"//visibility:public",
],
deps = [
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
],
)

View File

@@ -1,35 +0,0 @@
package gossipcrawler
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
)
// TopicExtractor is a function that can determine the set of topics a current or potential peer
// is subscribed to based on key/value pairs from the ENR record.
type TopicExtractor func(ctx context.Context, node *enode.Node) ([]string, error)
// PeerFilterFunc defines the filtering interface used by the crawler to decide if a node
// is a valid candidate to index in the crawler.
type PeerFilterFunc func(*enode.Node) bool
type Crawler interface {
Start(te TopicExtractor) error
RemovePeerByPeerId(peerID peer.ID)
RemoveTopic(topic string)
PeersForTopic(topic string) []*enode.Node
}
// SubnetTopicsProvider returns the set of gossipsub topics the node
// should currently maintain peer connections for along with the minimum number of peers required
// for each topic.
type SubnetTopicsProvider func() map[string]int
// GossipDialer controls dialing peers for gossipsub topics based
// on a provided SubnetTopicsProvider and the p2p crawler.
type GossipDialer interface {
Start(provider SubnetTopicsProvider) error
DialPeersForTopicBlocking(ctx context.Context, topic string, nPeers int) error
}

View File

@@ -225,10 +225,6 @@ func (s *Service) AddDisconnectionHandler(handler func(ctx context.Context, id p
return
}
if s.crawler != nil {
s.crawler.RemovePeerByPeerId(peerID)
}
priorState, err := s.peers.ConnectionState(peerID)
if err != nil {
// Can happen if the peer has already disconnected, so...

View File

@@ -4,8 +4,8 @@ import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -98,13 +98,11 @@ type (
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
GossipDialer() gossipcrawler.GossipDialer
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
DialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint
FindAndDialPeersWithSubnets(ctx context.Context, topicFormat string, digest [fieldparams.VersionLength]byte, minimumPeersPerSubnet int, subnets map[uint64]bool) error
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
Crawler() gossipcrawler.Crawler
}
// Sender abstracts the sending functionality from libp2p.

View File

@@ -97,20 +97,6 @@ var (
Help: "The number of data column sidecar message broadcast attempts.",
})
// Gossip Peer Crawler Metrics
gossipCrawlerPeersByEnodeCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_gossip_crawler_peers_by_enode_count",
Help: "The number of peers tracked by enode ID in the gossip peer crawler.",
})
gossipCrawlerPeersByPidCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_gossip_crawler_peers_by_pid_count",
Help: "The number of peers tracked by peer ID in the gossip peer crawler.",
})
gossipCrawlerTopicsCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_gossip_crawler_topics_count",
Help: "The number of topics tracked in the gossip peer crawler.",
})
// Gossip Tracer Metrics
pubsubTopicsActive = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_pubsub_topic_active",

View File

@@ -4,17 +4,20 @@ import (
"crypto/ecdsa"
"fmt"
"net"
"time"
"github.com/OffchainLabs/prysm/v7/config/features"
ecdsaprysm "github.com/OffchainLabs/prysm/v7/crypto/ecdsa"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/libp2p/go-libp2p"
mplex "github.com/libp2p/go-libp2p-mplex"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/net/connmgr"
"github.com/libp2p/go-libp2p/p2p/security/noise"
libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
libp2ptcp "github.com/libp2p/go-libp2p/p2p/transport/tcp"
gomplex "github.com/libp2p/go-mplex"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
)
@@ -107,6 +110,7 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
libp2p.ConnectionGater(s),
libp2p.Transport(libp2ptcp.NewTCPTransport),
libp2p.DefaultMuxers,
libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport),
libp2p.Security(noise.ID, noise.New),
libp2p.Ping(false), // Disable Ping Service.
}
@@ -158,10 +162,6 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
options = append(options, libp2p.ResourceManager(&network.NullResourceManager{}))
}
if cfg.EnableAutoNAT {
options = append(options, libp2p.EnableAutoNATv2())
}
return options, nil
}
@@ -217,3 +217,8 @@ func privKeyOption(privkey *ecdsa.PrivateKey) libp2p.Option {
return cfg.Apply(libp2p.Identity(ifaceKey))
}
}
// Configures stream timeouts on mplex.
func configureMplex() {
gomplex.ResetStreamTimeout = 5 * time.Second
}

View File

@@ -132,6 +132,7 @@ func TestDefaultMultiplexers(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, protocol.ID("/yamux/1.0.0"), cfg.Muxers[0].ID)
assert.Equal(t, protocol.ID("/mplex/6.7.0"), cfg.Muxers[1].ID)
}
func TestSetConnManagerOption(t *testing.T) {
@@ -187,30 +188,6 @@ func checkLimit(t *testing.T, cm connmgr.ConnManager, expected int) {
}
}
func TestBuildOptions_EnableAutoNAT(t *testing.T) {
params.SetupTestConfigCleanup(t)
p2pCfg := &Config{
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
EnableAutoNAT: true,
StateNotifier: &mock.MockStateNotifier{},
}
svc := &Service{cfg: p2pCfg}
var err error
svc.privKey, err = privKey(svc.cfg)
require.NoError(t, err)
ipAddr := network.IPAddr()
opts, err := svc.buildOptions(ipAddr, svc.privKey)
require.NoError(t, err)
// Verify that options were built without error when EnableAutoNAT is true.
// The actual AutoNAT v2 behavior is tested by libp2p itself.
var cfg libp2p.Config
err = cfg.Apply(append(opts, libp2p.FallbackDefaults)...)
assert.NoError(t, err)
}
func TestMultiAddressBuilderWithID(t *testing.T) {
testCases := []struct {
name string

View File

@@ -47,7 +47,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",

View File

@@ -4,18 +4,11 @@ import (
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// StatusProvider describes the minimum capability that Assigner needs from peer status tracking.
// That is, the ability to retrieve the best peers by finalized checkpoint.
type StatusProvider interface {
BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID)
}
// FinalizedCheckpointer describes the minimum capability that Assigner needs from forkchoice.
// That is, the ability to retrieve the latest finalized checkpoint to help with peer evaluation.
type FinalizedCheckpointer interface {
@@ -24,9 +17,9 @@ type FinalizedCheckpointer interface {
// NewAssigner assists in the correct construction of an Assigner by code in other packages,
// assuring all the important private member fields are given values.
// The StatusProvider is used to retrieve best peers, and FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// The FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// Peers that report an older finalized checkpoint are filtered out.
func NewAssigner(s StatusProvider, fc FinalizedCheckpointer) *Assigner {
func NewAssigner(s *Status, fc FinalizedCheckpointer) *Assigner {
return &Assigner{
ps: s,
fc: fc,
@@ -35,7 +28,7 @@ func NewAssigner(s StatusProvider, fc FinalizedCheckpointer) *Assigner {
// Assigner uses the "BestFinalized" peer scoring method to pick the next-best peer to receive rpc requests.
type Assigner struct {
ps StatusProvider
ps *Status
fc FinalizedCheckpointer
}
@@ -45,42 +38,38 @@ type Assigner struct {
var ErrInsufficientSuitable = errors.New("no suitable peers")
func (a *Assigner) freshPeers() ([]peer.ID, error) {
required := min(flags.Get().MinimumSyncPeers, min(flags.Get().MinimumSyncPeers, params.BeaconConfig().MaxPeersToSync))
_, peers := a.ps.BestFinalized(a.fc.FinalizedCheckpoint().Epoch)
required := min(flags.Get().MinimumSyncPeers, params.BeaconConfig().MaxPeersToSync)
_, peers := a.ps.BestFinalized(params.BeaconConfig().MaxPeersToSync, a.fc.FinalizedCheckpoint().Epoch)
if len(peers) < required {
log.WithFields(logrus.Fields{
"suitable": len(peers),
"required": required}).Trace("Unable to assign peer while suitable peers < required")
"required": required}).Warn("Unable to assign peer while suitable peers < required ")
return nil, ErrInsufficientSuitable
}
return peers, nil
}
// AssignmentFilter describes a function that takes a list of peer.IDs and returns a filtered subset.
// An example is the NotBusy filter.
type AssignmentFilter func([]peer.ID) []peer.ID
// Assign uses the "BestFinalized" method to select the best peers that agree on a canonical block
// for the configured finalized epoch. At most `n` peers will be returned. The `busy` param can be used
// to filter out peers that we know we don't want to connect to, for instance if we are trying to limit
// the number of outbound requests to each peer from a given component.
func (a *Assigner) Assign(filter AssignmentFilter) ([]peer.ID, error) {
func (a *Assigner) Assign(busy map[peer.ID]bool, n int) ([]peer.ID, error) {
best, err := a.freshPeers()
if err != nil {
return nil, err
}
return filter(best), nil
return pickBest(busy, n, best), nil
}
// NotBusy is a filter that returns the list of peer.IDs that are not in the `busy` map.
func NotBusy(busy map[peer.ID]bool) AssignmentFilter {
return func(peers []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, len(peers))
for _, p := range peers {
if !busy[p] {
ps = append(ps, p)
}
func pickBest(busy map[peer.ID]bool, n int, best []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, n)
for _, p := range best {
if len(ps) == n {
return ps
}
if !busy[p] {
ps = append(ps, p)
}
return ps
}
return ps
}
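// Both shapes of this selection shown in the hunk (the NotBusy filter and
// pickBest) walk the best-first peer list and skip peers flagged as busy;
// the capped form additionally stops after n results. A minimal sketch of
// that rule with string IDs in place of peer.IDs; pickAvailable is an
// illustrative name, not this package's API.
func pickAvailable(best []string, busy map[string]bool, n int) []string {
	out := make([]string, 0, n)
	for _, id := range best {
		if len(out) == n { // n == 0 therefore yields an empty result, as in TestPickBest
			break
		}
		if !busy[id] {
			out = append(out, id)
		}
	}
	return out
}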

View File

@@ -5,8 +5,6 @@ import (
"slices"
"testing"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/libp2p/go-libp2p/core/peer"
)
@@ -16,68 +14,82 @@ func TestPickBest(t *testing.T) {
cases := []struct {
name string
busy map[peer.ID]bool
n int
best []peer.ID
expected []peer.ID
}{
{
name: "don't limit",
expected: best,
name: "",
n: 0,
},
{
name: "none busy",
expected: best,
n: 1,
expected: best[0:1],
},
{
name: "all busy except last",
n: 1,
busy: testBusyMap(best[0 : len(best)-1]),
expected: best[len(best)-1:],
},
{
name: "all busy except i=5",
n: 1,
busy: testBusyMap(slices.Concat(best[0:5], best[6:])),
expected: []peer.ID{best[5]},
},
{
name: "all busy - 0 results",
n: 1,
busy: testBusyMap(best),
},
{
name: "first half busy",
n: 5,
busy: testBusyMap(best[0:5]),
expected: best[5:],
},
{
name: "back half busy",
n: 5,
busy: testBusyMap(best[5:]),
expected: best[0:5],
},
{
name: "pick all ",
n: 10,
expected: best,
},
{
name: "none available",
n: 10,
best: []peer.ID{},
},
{
name: "not enough",
n: 10,
best: best[0:1],
expected: best[0:1],
},
{
name: "not enough, some busy",
n: 10,
best: best[0:6],
busy: testBusyMap(best[0:5]),
expected: best[5:6],
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
name := fmt.Sprintf("n=%d", c.n)
if c.name != "" {
name += " " + c.name
}
t.Run(name, func(t *testing.T) {
if c.best == nil {
c.best = best
}
filt := NotBusy(c.busy)
pb := filt(c.best)
pb := pickBest(c.busy, c.n, c.best)
require.Equal(t, len(c.expected), len(pb))
for i := range c.expected {
require.Equal(t, c.expected[i], pb[i])
@@ -101,310 +113,3 @@ func testPeerIds(n int) []peer.ID {
}
return pids
}
// MockStatus is a test mock for the Status interface used in Assigner.
type MockStatus struct {
bestFinalizedEpoch primitives.Epoch
bestPeers []peer.ID
}
func (m *MockStatus) BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID) {
return m.bestFinalizedEpoch, m.bestPeers
}
// MockFinalizedCheckpointer is a test mock for FinalizedCheckpointer interface.
type MockFinalizedCheckpointer struct {
checkpoint *forkchoicetypes.Checkpoint
}
func (m *MockFinalizedCheckpointer) FinalizedCheckpoint() *forkchoicetypes.Checkpoint {
return m.checkpoint
}
// TestAssign_HappyPath tests the Assign method with sufficient peers and various filters.
func TestAssign_HappyPath(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
bestPeers []peer.ID
finalizedEpoch primitives.Epoch
filter AssignmentFilter
expectedCount int
}{
{
name: "sufficient peers with identity filter",
bestPeers: peers,
finalizedEpoch: 10,
filter: func(p []peer.ID) []peer.ID { return p },
expectedCount: 10,
},
{
name: "sufficient peers with NotBusy filter (no busy)",
bestPeers: peers,
finalizedEpoch: 10,
filter: NotBusy(make(map[peer.ID]bool)),
expectedCount: 10,
},
{
name: "sufficient peers with NotBusy filter (some busy)",
bestPeers: peers,
finalizedEpoch: 10,
filter: NotBusy(testBusyMap(peers[0:5])),
expectedCount: 5,
},
{
name: "minimum threshold exactly met",
bestPeers: peers[0:5],
finalizedEpoch: 10,
filter: func(p []peer.ID) []peer.ID { return p },
expectedCount: 5,
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: tc.finalizedEpoch,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: tc.finalizedEpoch},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filter)
require.NoError(t, err)
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("expected %d peers, got %d", tc.expectedCount, len(result)))
})
}
}
// TestAssign_InsufficientPeers tests error handling when not enough suitable peers are available.
// Note: The actual peer threshold depends on config values MaxPeersToSync and MinimumSyncPeers.
func TestAssign_InsufficientPeers(t *testing.T) {
cases := []struct {
name string
bestPeers []peer.ID
expectedErr error
description string
}{
{
name: "exactly at minimum threshold",
bestPeers: testPeerIds(5),
expectedErr: nil,
description: "5 peers should meet the minimum threshold",
},
{
name: "well above minimum threshold",
bestPeers: testPeerIds(50),
expectedErr: nil,
description: "50 peers should easily meet requirements",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(NotBusy(make(map[peer.ID]bool)))
if tc.expectedErr != nil {
require.NotNil(t, err, tc.description)
require.Equal(t, tc.expectedErr, err)
} else {
require.NoError(t, err, tc.description)
require.Equal(t, len(tc.bestPeers), len(result))
}
})
}
}
// TestAssign_FilterApplication verifies that filters are correctly applied to peer lists.
func TestAssign_FilterApplication(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
bestPeers []peer.ID
filterToApply AssignmentFilter
expectedCount int
description string
}{
{
name: "identity filter returns all peers",
bestPeers: peers,
filterToApply: func(p []peer.ID) []peer.ID { return p },
expectedCount: 10,
description: "identity filter should not change peer list",
},
{
name: "filter removes all peers (all busy)",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers)),
expectedCount: 0,
description: "all peers busy should return empty list",
},
{
name: "filter removes first 5 peers",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers[0:5])),
expectedCount: 5,
description: "should only return non-busy peers",
},
{
name: "filter removes last 5 peers",
bestPeers: peers,
filterToApply: NotBusy(testBusyMap(peers[5:])),
expectedCount: 5,
description: "should only return non-busy peers from beginning",
},
{
name: "custom filter selects every other peer",
bestPeers: peers,
filterToApply: func(p []peer.ID) []peer.ID {
result := make([]peer.ID, 0)
for i := 0; i < len(p); i += 2 {
result = append(result, p[i])
}
return result
},
expectedCount: 5,
description: "custom filter selecting every other peer",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filterToApply)
require.NoError(t, err, fmt.Sprintf("unexpected error: %v", err))
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}
// TestAssign_FinalizedCheckpointUsage verifies that the finalized checkpoint is correctly used.
func TestAssign_FinalizedCheckpointUsage(t *testing.T) {
peers := testPeerIds(10)
cases := []struct {
name string
finalizedEpoch primitives.Epoch
bestPeers []peer.ID
expectedCount int
description string
}{
{
name: "epoch 0",
finalizedEpoch: 0,
bestPeers: peers,
expectedCount: 10,
description: "epoch 0 should work",
},
{
name: "epoch 100",
finalizedEpoch: 100,
bestPeers: peers,
expectedCount: 10,
description: "high epoch number should work",
},
{
name: "epoch changes between calls",
finalizedEpoch: 50,
bestPeers: testPeerIds(5),
expectedCount: 5,
description: "epoch value should be used in checkpoint",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: tc.finalizedEpoch,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: tc.finalizedEpoch},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(NotBusy(make(map[peer.ID]bool)))
require.NoError(t, err)
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}
// TestAssign_EdgeCases tests boundary conditions and edge cases.
func TestAssign_EdgeCases(t *testing.T) {
cases := []struct {
name string
bestPeers []peer.ID
filter AssignmentFilter
expectedCount int
description string
}{
{
name: "filter returns empty from sufficient peers",
bestPeers: testPeerIds(10),
filter: func(p []peer.ID) []peer.ID { return []peer.ID{} },
expectedCount: 0,
description: "filter can return empty list even if sufficient peers available",
},
{
name: "filter selects subset from sufficient peers",
bestPeers: testPeerIds(10),
filter: func(p []peer.ID) []peer.ID { return p[0:2] },
expectedCount: 2,
description: "filter can return subset of available peers",
},
{
name: "filter selects single peer from many",
bestPeers: testPeerIds(20),
filter: func(p []peer.ID) []peer.ID { return p[0:1] },
expectedCount: 1,
description: "filter can select single peer from many available",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
mockStatus := &MockStatus{
bestFinalizedEpoch: 10,
bestPeers: tc.bestPeers,
}
mockCheckpointer := &MockFinalizedCheckpointer{
checkpoint: &forkchoicetypes.Checkpoint{Epoch: 10},
}
assigner := NewAssigner(mockStatus, mockCheckpointer)
result, err := assigner.Assign(tc.filter)
require.NoError(t, err, fmt.Sprintf("%s: unexpected error: %v", tc.description, err))
require.Equal(t, tc.expectedCount, len(result),
fmt.Sprintf("%s: expected %d peers, got %d", tc.description, tc.expectedCount, len(result)))
})
}
}

View File

@@ -704,54 +704,76 @@ func (p *Status) deprecatedPrune() {
p.tallyIPTracker()
}
// BestFinalized groups all peers by their last known finalized epoch
// and selects the epoch of the largest group as best.
// Any peer with a finalized epoch < ourFinalized is excluded from consideration.
// In the event of a tie in largest group size, the higher epoch is the tie breaker.
// The selected epoch is returned, along with a list of peers with a finalized epoch >= the selected epoch.
func (p *Status) BestFinalized(ourFinalized primitives.Epoch) (primitives.Epoch, []peer.ID) {
// BestFinalized returns the highest finalized epoch equal to or higher than `ourFinalizedEpoch`
// that is agreed upon by the majority of peers, and the peers agreeing on this finalized epoch.
// This method may not return the absolute highest finalized epoch, but the finalized epoch in which
// most peers can serve blocks (plurality voting). Ideally, all peers would be reporting the same
// finalized epoch but some may be behind due to their own latency, or because of their finalized
// epoch at the time we queried them.
func (p *Status) BestFinalized(maxPeers int, ourFinalizedEpoch primitives.Epoch) (primitives.Epoch, []peer.ID) {
// Retrieve all connected peers.
connected := p.Connected()
pids := make([]peer.ID, 0, len(connected))
views := make(map[peer.ID]*pb.StatusV2, len(connected))
votes := make(map[primitives.Epoch]uint64)
winner := primitives.Epoch(0)
// key: finalized epoch, value: number of peers that support this finalized epoch.
finalizedEpochVotes := make(map[primitives.Epoch]uint64)
// key: peer ID, value: finalized epoch of the peer.
pidEpoch := make(map[peer.ID]primitives.Epoch, len(connected))
// key: peer ID, value: head slot of the peer.
pidHead := make(map[peer.ID]primitives.Slot, len(connected))
potentialPIDs := make([]peer.ID, 0, len(connected))
for _, pid := range connected {
view, err := p.ChainState(pid)
if err != nil || view == nil || view.FinalizedEpoch < ourFinalized {
continue
}
pids = append(pids, pid)
views[pid] = view
peerChainState, err := p.ChainState(pid)
votes[view.FinalizedEpoch]++
if winner == 0 {
winner = view.FinalizedEpoch
// Skip if the peer's finalized epoch is not defined, or if the peer's finalized epoch is
// lower than ours.
if err != nil || peerChainState == nil || peerChainState.FinalizedEpoch < ourFinalizedEpoch {
continue
}
e, v := view.FinalizedEpoch, votes[view.FinalizedEpoch]
if v > votes[winner] || v == votes[winner] && e > winner {
winner = e
finalizedEpochVotes[peerChainState.FinalizedEpoch]++
pidEpoch[pid] = peerChainState.FinalizedEpoch
pidHead[pid] = peerChainState.HeadSlot
potentialPIDs = append(potentialPIDs, pid)
}
// Select the target epoch, which is the epoch most peers agree upon.
// If there is a tie, select the highest epoch.
targetEpoch, mostVotes := primitives.Epoch(0), uint64(0)
for epoch, count := range finalizedEpochVotes {
if count > mostVotes || (count == mostVotes && epoch > targetEpoch) {
mostVotes = count
targetEpoch = epoch
}
}
// Descending sort by (finalized, head).
sort.Slice(pids, func(i, j int) bool {
iv, jv := views[pids[i]], views[pids[j]]
if iv.FinalizedEpoch == jv.FinalizedEpoch {
return iv.HeadSlot > jv.HeadSlot
// Sort PIDs by finalized (epoch, head), in decreasing order.
sort.Slice(potentialPIDs, func(i, j int) bool {
if pidEpoch[potentialPIDs[i]] == pidEpoch[potentialPIDs[j]] {
return pidHead[potentialPIDs[i]] > pidHead[potentialPIDs[j]]
}
return iv.FinalizedEpoch > jv.FinalizedEpoch
return pidEpoch[potentialPIDs[i]] > pidEpoch[potentialPIDs[j]]
})
// Find the first peer with finalized epoch < winner, then trim it and all following (lower) peers.

trim := sort.Search(len(pids), func(i int) bool {
return views[pids[i]].FinalizedEpoch < winner
})
pids = pids[:trim]
// Trim potential peers to those on or after target epoch.
for i, pid := range potentialPIDs {
if pidEpoch[pid] < targetEpoch {
potentialPIDs = potentialPIDs[:i]
break
}
}
return winner, pids
// Trim potential peers to at most maxPeers.
if len(potentialPIDs) > maxPeers {
potentialPIDs = potentialPIDs[:maxPeers]
}
return targetEpoch, potentialPIDs
}
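// The epoch selection above is a plurality vote over the finalized epochs
// reported by peers, with ties broken toward the higher epoch; peers
// reporting an older epoch than the winner are then trimmed from the
// result. A minimal sketch of just the vote and tie-break, using bare
// uint64 epochs; voteForEpoch is an illustrative name, not this package's API.
func voteForEpoch(reported []uint64) uint64 {
	votes := make(map[uint64]uint64)
	for _, e := range reported {
		votes[e]++
	}
	var target, most uint64
	for epoch, count := range votes {
		if count > most || (count == most && epoch > target) {
			most = count
			target = epoch
		}
	}
	return target // zero value when no peer reported anything, as above
}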
// BestNonFinalized returns the highest known epoch, higher than ours,

View File

@@ -58,7 +58,7 @@ func TestPeerExplicitAdd(t *testing.T) {
resAddress, err := p.Address(id)
require.NoError(t, err)
assert.Equal(t, address.Equal(resAddress), true, "Unexpected address")
assert.Equal(t, address, resAddress, "Unexpected address")
resDirection, err := p.Direction(id)
require.NoError(t, err)
@@ -72,7 +72,7 @@ func TestPeerExplicitAdd(t *testing.T) {
resAddress2, err := p.Address(id)
require.NoError(t, err)
assert.Equal(t, address2.Equal(resAddress2), true, "Unexpected address")
assert.Equal(t, address2, resAddress2, "Unexpected address")
resDirection2, err := p.Direction(id)
require.NoError(t, err)
@@ -654,10 +654,9 @@ func TestTrimmedOrderedPeers(t *testing.T) {
FinalizedRoot: mockroot2[:],
})
target, pids := p.BestFinalized(0)
target, pids := p.BestFinalized(maxPeers, 0)
assert.Equal(t, expectedTarget, target, "Incorrect target epoch retrieved")
// addPeer called 5 times above
assert.Equal(t, 5, len(pids), "Incorrect number of peers retrieved")
assert.Equal(t, maxPeers, len(pids), "Incorrect number of peers retrieved")
// Expect the returned list to be ordered by finalized epoch and trimmed to max peers.
assert.Equal(t, pid3, pids[0], "Incorrect first peer")
@@ -1018,10 +1017,7 @@ func TestStatus_BestPeer(t *testing.T) {
HeadSlot: peerConfig.headSlot,
})
}
epoch, pids := p.BestFinalized(tt.ourFinalizedEpoch)
if len(pids) > tt.limitPeers {
pids = pids[:tt.limitPeers]
}
epoch, pids := p.BestFinalized(tt.limitPeers, tt.ourFinalizedEpoch)
assert.Equal(t, tt.targetEpoch, epoch, "Unexpected epoch retrieved")
assert.Equal(t, tt.targetEpochSupport, len(pids), "Unexpected number of peers supporting retrieved epoch")
})
@@ -1048,10 +1044,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
})
}
_, pids := p.BestFinalized(0)
if len(pids) > maxPeers {
pids = pids[:maxPeers]
}
_, pids := p.BestFinalized(maxPeers, 0)
assert.Equal(t, maxPeers, len(pids), "Wrong number of peers returned")
}

View File

@@ -6,13 +6,11 @@ package p2p
import (
"context"
"crypto/ecdsa"
"fmt"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/async"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/types"
@@ -30,7 +28,6 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/event"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
@@ -64,10 +61,6 @@ var (
// for the current peer limit status for the time period
// defined below.
pollingPeriod = 6 * time.Second
crawlTimeout = 5 * time.Second
crawlInterval = 1 * time.Second
maxConcurrentDials = int64(256)
)
// Service for managing peer to peer (p2p) networking.
@@ -102,8 +95,6 @@ type Service struct {
custodyInfoLock sync.RWMutex // Lock access to custodyInfo
custodyInfoSet chan struct{}
allForkDigests map[[4]byte]struct{}
crawler gossipcrawler.Crawler
gossipDialer gossipcrawler.GossipDialer
}
type custodyInfo struct {
@@ -163,6 +154,8 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
return nil, errors.Wrapf(err, "failed to build p2p options")
}
// Sets mplex timeouts
configureMplex()
h, err := libp2p.New(opts...)
if err != nil {
return nil, errors.Wrapf(err, "failed to create p2p host")
@@ -170,7 +163,7 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
s.host = h
// Gossip registration is done before we add in any new peers
// Gossipsub registration is done before we add in any new peers
// due to libp2p's gossipsub implementation not taking into
// account previously added peers when creating the gossipsub
// object.
@@ -248,25 +241,6 @@ func (s *Service) Start() {
s.dv5Listener = listener
go s.listenForNewNodes()
crawler, err := NewGossipPeerCrawler(
s.ctx,
s,
s.dv5Listener,
crawlTimeout,
crawlInterval,
maxConcurrentDials,
s.filterPeer,
s.Peers().Scorers().Score,
)
if err != nil {
log.WithError(err).Fatal("Failed to create peer crawler")
s.startupErr = err
return
}
s.crawler = crawler
// Initialise the gossipsub dialer which will be started
// once the sync service is ready to provide subnet topics.
s.gossipDialer = NewGossipPeerDialer(s.ctx, s.crawler, s.PubSub().ListPeers, s.DialPeers)
}
s.started = true
@@ -285,14 +259,6 @@ func (s *Service) Start() {
// current epoch.
s.RefreshPersistentSubnets()
if s.cfg.EnableAutoNAT {
if err := s.subscribeReachabilityEvents(); err != nil {
log.WithError(err).Error("Failed to subscribe to AutoNAT v2 reachability events")
} else {
log.Info("AutoNAT v2 enabled for address reachability detection")
}
}
// Periodic functions.
async.RunEvery(s.ctx, params.BeaconConfig().TtfbTimeoutDuration(), func() {
ensurePeerConnections(s.ctx, s.host, s.peers, relayNodes...)
@@ -345,25 +311,12 @@ func (s *Service) Start() {
func (s *Service) Stop() error {
defer s.cancel()
s.started = false
if s.dv5Listener != nil {
s.dv5Listener.Close()
}
return nil
}
// Crawler returns the p2p service's peer crawler.
func (s *Service) Crawler() gossipcrawler.Crawler {
return s.crawler
}
// GossipDialer returns the dialer responsible for maintaining
// peer counts per gossipsub topic, if discovery is enabled.
func (s *Service) GossipDialer() gossipcrawler.GossipDialer {
return s.gossipDialer
}
// Status of the p2p service. Will return an error if the service is considered unhealthy to
// indicate that this node should not serve traffic until the issue has been resolved.
func (s *Service) Status() error {
@@ -604,118 +557,3 @@ func (s *Service) downscorePeer(peerID peer.ID, reason string) {
newScore := s.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
}
func (s *Service) subscribeReachabilityEvents() error {
sub, err := s.host.EventBus().Subscribe(new(event.EvtHostReachableAddrsChanged))
if err != nil {
return fmt.Errorf("subscribing to reachability events: %w", err)
}
go func() {
defer func() {
if err := sub.Close(); err != nil {
log.WithError(err).Debug("Failed to close reachability event subscription")
}
}()
for {
select {
case <-s.ctx.Done():
return
case ev := <-sub.Out():
if event, ok := ev.(event.EvtHostReachableAddrsChanged); ok {
log.WithFields(logrus.Fields{
"reachable": multiaddrsToStrings(event.Reachable),
"unreachable": multiaddrsToStrings(event.Unreachable),
"unknown": multiaddrsToStrings(event.Unknown),
}).Info("Address reachability changed")
}
}
}
}()
return nil
}
func multiaddrsToStrings(addrs []multiaddr.Multiaddr) []string {
strs := make([]string, len(addrs))
for i, a := range addrs {
strs[i] = a.String()
}
return strs
}
func AttestationSubnets(nodeID enode.ID, node *enode.Node, record *enr.Record) (map[uint64]bool, error) {
return attestationSubnets(record)
}
func SyncSubnets(nodeID enode.ID, node *enode.Node, record *enr.Record) (map[uint64]bool, error) {
return syncSubnets(record)
}
func DataColumnSubnets(nodeID enode.ID, node *enode.Node, record *enr.Record) (map[uint64]bool, error) {
return dataColumnSubnets(nodeID, record)
}
func DataColumnSubnetTopic(digest [4]byte, subnet uint64) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(DataColumnSubnetTopicFormat, digest, subnet) + e.ProtocolSuffix()
}
func SyncCommitteeSubnetTopic(digest [4]byte, subnet uint64) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, digest, subnet) + e.ProtocolSuffix()
}
func AttestationSubnetTopic(digest [4]byte, subnet uint64) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(AttestationSubnetTopicFormat, digest, subnet) + e.ProtocolSuffix()
}
func BlobSubnetTopic(digest [4]byte, subnet uint64) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(BlobSubnetTopicFormat, digest, subnet) + e.ProtocolSuffix()
}
func LcOptimisticToTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func LcFinalityToTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(LightClientFinalityUpdateTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func BlockSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(BlockSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func AggregateAndProofSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(AggregateAndProofSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func VoluntaryExitSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(ExitSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func ProposerSlashingSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(ProposerSlashingSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func AttesterSlashingSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(AttesterSlashingSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func SyncContributionAndProofSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(SyncContributionAndProofSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}
func BlsToExecutionChangeSubnetTopic(forkDigest [4]byte) string {
e := &encoder.SszNetworkEncoder{}
return fmt.Sprintf(BlsToExecutionChangeSubnetTopicFormat, forkDigest) + e.ProtocolSuffix()
}

View File

@@ -21,7 +21,6 @@ import (
"github.com/OffchainLabs/prysm/v7/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v7/time"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/event"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
noise "github.com/libp2p/go-libp2p/p2p/security/noise"
@@ -36,8 +35,7 @@ func createHost(t *testing.T, port uint) (host.Host, *ecdsa.PrivateKey, net.IP)
ipAddr := net.ParseIP("127.0.0.1")
listen, err := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port))
require.NoError(t, err, "Failed to p2p listen")
h, err := libp2p.New([]libp2p.Option{privKeyOption(pkey), libp2p.ListenAddrs(listen),
libp2p.Security(noise.ID, noise.New)}...)
h, err := libp2p.New([]libp2p.Option{privKeyOption(pkey), libp2p.ListenAddrs(listen), libp2p.Security(noise.ID, noise.New)}...)
require.NoError(t, err)
return h, pkey, ipAddr
}
@@ -402,58 +400,3 @@ func TestService_connectWithPeer(t *testing.T) {
})
}
}
func TestService_SubscribeReachabilityEvents(t *testing.T) {
hook := logTest.NewGlobal()
ctx := t.Context()
h, _, _ := createHost(t, 0)
defer func() {
if err := h.Close(); err != nil {
t.Fatal(err)
}
}()
// Create service with the host
s := &Service{
ctx: ctx,
host: h,
cfg: &Config{EnableAutoNAT: true},
}
// Get an emitter for the reachability event
emitter, err := h.EventBus().Emitter(new(event.EvtHostReachableAddrsChanged))
require.NoError(t, err)
defer func() {
if err := emitter.Close(); err != nil {
t.Fatal(err)
}
}()
// Subscribe to reachability events
require.NoError(t, s.subscribeReachabilityEvents())
// Create test multiaddrs for each reachability state
reachableAddr, err := multiaddr.NewMultiaddr("/ip4/192.168.1.1/tcp/9000")
require.NoError(t, err)
unreachableAddr, err := multiaddr.NewMultiaddr("/ip4/10.0.0.1/tcp/9001")
require.NoError(t, err)
unknownAddr, err := multiaddr.NewMultiaddr("/ip4/172.16.0.1/tcp/9002")
require.NoError(t, err)
// Emit a reachability event with all address types
err = emitter.Emit(event.EvtHostReachableAddrsChanged{
Reachable: []multiaddr.Multiaddr{reachableAddr},
Unreachable: []multiaddr.Multiaddr{unreachableAddr},
Unknown: []multiaddr.Multiaddr{unknownAddr},
})
require.NoError(t, err)
// Wait for the event to be processed
time.Sleep(100 * time.Millisecond)
// Verify the log message contains all addresses
require.LogsContain(t, hook, "Address reachability changed")
require.LogsContain(t, hook, "/ip4/192.168.1.1/tcp/9000")
require.LogsContain(t, hook, "/ip4/10.0.0.1/tcp/9001")
require.LogsContain(t, hook, "/ip4/172.16.0.1/tcp/9002")
}

View File

@@ -3,6 +3,9 @@ package p2p
import (
"context"
"fmt"
"maps"
"math"
"strings"
"sync"
"time"
@@ -11,16 +14,19 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/consensus-types/wrapper"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
pb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var (
@@ -55,9 +61,223 @@ const dataColumnSubnetVal = 150
const errSavingSequenceNumber = "saving sequence number after updating subnets: %w"
// DialPeers dials multiple peers concurrently up to `maxConcurrentDials` at a time.
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, indices map[uint64]int) (func(node *enode.Node) (map[uint64]bool, error), error) {
switch {
case strings.Contains(topic, GossipAttestationMessage):
return s.filterPeerForAttSubnet(indices), nil
case strings.Contains(topic, GossipSyncCommitteeMessage):
return s.filterPeerForSyncSubnet(indices), nil
case strings.Contains(topic, GossipBlobSidecarMessage):
return s.filterPeerForBlobSubnet(indices), nil
case strings.Contains(topic, GossipDataColumnSidecarMessage):
return s.filterPeerForDataColumnsSubnet(indices), nil
default:
return nil, errors.Errorf("no subnet exists for provided topic: %s", topic)
}
}
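For illustration, a hedged sketch of how this dispatcher is meant to be used by the search loop below; `pickAttestationFilter` is a hypothetical helper, not part of this change:

func pickAttestationFilter(s *Service, node *enode.Node, defective map[uint64]int) (map[uint64]bool, error) {
	// AttestationSubnetTopicFormat contains the attestation gossip message name,
	// so nodeFilter selects filterPeerForAttSubnet here.
	filter, err := s.nodeFilter(AttestationSubnetTopicFormat, defective)
	if err != nil {
		return nil, err
	}
	// The returned map holds only the defective subnets this node advertises in its ENR.
	return filter(node)
}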
// FindAndDialPeersWithSubnets ensures that our node is connected to at least `minimumPeersPerSubnet`
// peers for each subnet listed in `subnets`.
// If the threshold is already met for every subnet, this function returns immediately.
// Otherwise, it searches for new peers for defective subnets, and dials them.
// If `ctx` is canceled while searching for peers, the search is stopped, but newly found peers are still dialed.
// In this case, the function returns an error.
func (s *Service) FindAndDialPeersWithSubnets(
ctx context.Context,
topicFormat string,
digest [fieldparams.VersionLength]byte,
minimumPeersPerSubnet int,
subnets map[uint64]bool,
) error {
ctx, span := trace.StartSpan(ctx, "p2p.FindAndDialPeersWithSubnet")
defer span.End()
// Return early if the discovery listener isn't set.
if s.dv5Listener == nil {
return nil
}
// Restrict dials if limit is applied.
maxConcurrentDials := math.MaxInt
if flags.MaxDialIsActive() {
maxConcurrentDials = flags.Get().MaxConcurrentDials
}
defectiveSubnets := s.defectiveSubnets(topicFormat, digest, minimumPeersPerSubnet, subnets)
for len(defectiveSubnets) > 0 {
// Stop the search/dialing loop if the context is canceled.
if err := ctx.Err(); err != nil {
return err
}
peersToDial, err := func() ([]*enode.Node, error) {
ctx, cancel := context.WithTimeout(ctx, batchPeriod)
defer cancel()
peersToDial, err := s.findPeersWithSubnets(ctx, topicFormat, digest, minimumPeersPerSubnet, defectiveSubnets)
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
return nil, errors.Wrap(err, "find peers with subnets")
}
return peersToDial, nil
}()
if err != nil {
return err
}
// Dial new peers in batches.
s.dialPeers(s.ctx, maxConcurrentDials, peersToDial)
defectiveSubnets = s.defectiveSubnets(topicFormat, digest, minimumPeersPerSubnet, subnets)
}
return nil
}
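A hedged caller sketch of the function above; `ensureAttestationPeers` and the 30-second timeout are assumptions for illustration, while `AttestationSubnetTopicFormat` and `flags.Get().MinimumPeersPerSubnet` are names already used elsewhere in this package:

func ensureAttestationPeers(ctx context.Context, s *Service, digest [fieldparams.VersionLength]byte, subnets map[uint64]bool) {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()
	minPeers := flags.Get().MinimumPeersPerSubnet
	if err := s.FindAndDialPeersWithSubnets(ctx, AttestationSubnetTopicFormat, digest, minPeers, subnets); err != nil {
		// The error path is reached when the context expires before every subnet is satisfied.
		log.WithError(err).Debug("Could not find enough peers for every requested subnet")
	}
}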
// updateDefectiveSubnets updates the defective subnets map when a node with matching subnets is found.
// It decrements the defective count for each subnet the node satisfies and removes subnets
// that are fully satisfied (count reaches 0).
func updateDefectiveSubnets(
nodeSubnets map[uint64]bool,
defectiveSubnets map[uint64]int,
) {
for subnet := range defectiveSubnets {
if !nodeSubnets[subnet] {
continue
}
defectiveSubnets[subnet]--
if defectiveSubnets[subnet] == 0 {
delete(defectiveSubnets, subnet)
}
}
}
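A worked example of the bookkeeping above (values chosen purely for illustration):

defective := map[uint64]int{1: 2, 3: 1}                   // still missing 2 peers on subnet 1 and 1 peer on subnet 3
nodeSubnets := map[uint64]bool{1: true, 3: true, 7: true} // the discovered node advertises subnets 1, 3 and 7
updateDefectiveSubnets(nodeSubnets, defective)
// defective is now map[uint64]int{1: 1}: subnet 3 reached zero and was deleted,
// and subnet 7 is ignored because it was never defective.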
// findPeersWithSubnets finds peers subscribed to defective subnets in batches
// until enough peers are found or the context is canceled.
// It returns new peers found during the search.
func (s *Service) findPeersWithSubnets(
ctx context.Context,
topicFormat string,
digest [fieldparams.VersionLength]byte,
minimumPeersPerSubnet int,
defectiveSubnetsOrigin map[uint64]int,
) ([]*enode.Node, error) {
// Copy the defective subnets map to avoid modifying the original map.
defectiveSubnets := make(map[uint64]int, len(defectiveSubnetsOrigin))
maps.Copy(defectiveSubnets, defectiveSubnetsOrigin)
// Create a discovery iterator to find new peers.
iterator := s.dv5Listener.RandomNodes()
// `iterator.Next` can block indefinitely. `iterator.Close` unblocks it.
// So it is important to close the iterator when the context is done to ensure
// that the search does not hang indefinitely.
go func() {
<-ctx.Done()
iterator.Close()
}()
// Retrieve the filter function that will be used to filter nodes based on the defective subnets.
filter, err := s.nodeFilter(topicFormat, defectiveSubnets)
if err != nil {
return nil, errors.Wrap(err, "node filter")
}
// Crawl the network for peers subscribed to the defective subnets.
nodeByNodeID := make(map[enode.ID]*enode.Node)
for len(defectiveSubnets) > 0 && iterator.Next() {
if err := ctx.Err(); err != nil {
// Convert the map to a slice.
peersToDial := make([]*enode.Node, 0, len(nodeByNodeID))
for _, node := range nodeByNodeID {
peersToDial = append(peersToDial, node)
}
return peersToDial, err
}
node := iterator.Node()
// Remove duplicates, keeping the node with higher seq.
existing, ok := nodeByNodeID[node.ID()]
if ok && existing.Seq() >= node.Seq() {
continue // keep existing and skip.
}
// At this point the node is either new or carries a higher sequence number than the stored record, so treat it as a fresh candidate.
// Skip peers that do not match the filter.
if !s.filterPeer(node) {
if ok {
// The previously stored record with the lower sequence number is no longer valid.
delete(nodeByNodeID, existing.ID())
// Note: we deliberately do not roll back the defective subnets map here; this case should be rare,
// and FindAndDialPeersWithSubnets recomputes it via s.defectiveSubnets after dialing anyway.
}
}
continue
}
// Get all needed subnets that the node is subscribed to.
// Skip nodes that are not subscribed to any of the defective subnets.
nodeSubnets, err := filter(node)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"nodeID": node.ID(),
"topicFormat": topicFormat,
}).Debug("Could not get needed subnets from peer")
continue
}
if len(nodeSubnets) == 0 {
continue
}
// We found a new peer. Modify the defective subnets map
// and the filter accordingly.
nodeByNodeID[node.ID()] = node
updateDefectiveSubnets(nodeSubnets, defectiveSubnets)
filter, err = s.nodeFilter(topicFormat, defectiveSubnets)
if err != nil {
return nil, errors.Wrap(err, "node filter")
}
}
// Convert the map to a slice.
peersToDial := make([]*enode.Node, 0, len(nodeByNodeID))
for _, node := range nodeByNodeID {
peersToDial = append(peersToDial, node)
}
return peersToDial, nil
}
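The deduplication rule in the crawl loop above (keep only the highest ENR sequence number per node ID) can be isolated as follows; `nodeRecord` is a stand-in type used only for this sketch, not the real *enode.Node:

type nodeRecord struct {
	id  string
	seq uint64
}

// dedupeBySeq keeps, for each node ID, only the record with the highest sequence number,
// mirroring the existing/ok check performed on nodeByNodeID above.
func dedupeBySeq(records []nodeRecord) map[string]nodeRecord {
	byID := make(map[string]nodeRecord, len(records))
	for _, r := range records {
		if existing, ok := byID[r.id]; ok && existing.seq >= r.seq {
			continue // an equal or newer record is already stored
		}
		byID[r.id] = r
	}
	return byID
}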
// defectiveSubnets returns a map of subnets that have fewer than the minimum peer count.
func (s *Service) defectiveSubnets(
topicFormat string,
digest [fieldparams.VersionLength]byte,
minimumPeersPerSubnet int,
subnets map[uint64]bool,
) map[uint64]int {
missingCountPerSubnet := make(map[uint64]int, len(subnets))
for subnet := range subnets {
topic := fmt.Sprintf(topicFormat, digest, subnet) + s.Encoding().ProtocolSuffix()
peers := s.pubsub.ListPeers(topic)
peerCount := len(peers)
if peerCount < minimumPeersPerSubnet {
missingCountPerSubnet[subnet] = minimumPeersPerSubnet - peerCount
}
}
return missingCountPerSubnet
}
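The topic string checked here is the gossip topic format filled with the fork digest and subnet index, plus the encoding suffix. For example, with the attestation format used in the tests below and an illustrative digest of 0x01020304:

// fmt.Sprintf(AttestationSubnetTopicFormat, digest, 3) + s.Encoding().ProtocolSuffix()
// evaluates to something like "/eth2/01020304/beacon_attestation_3/ssz_snappy".
// Every subnet whose pubsub peer list for that topic is shorter than
// minimumPeersPerSubnet is reported together with the number of peers still missing.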
// dialPeers dials multiple peers concurrently up to `maxConcurrentDials` at a time.
// In case of a dial failure, it logs the error but continues dialing other peers.
func (s *Service) DialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint {
func (s *Service) dialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint {
var mut sync.Mutex
counter := uint(0)
@@ -99,13 +319,75 @@ func (s *Service) DialPeers(ctx context.Context, maxConcurrentDials int, nodes [
return counter
}
// filterPeerForAttSubnet returns a method that filters peers for a particular attestation subnet.
func (s *Service) filterPeerForAttSubnet(indices map[uint64]int) func(node *enode.Node) (map[uint64]bool, error) {
return func(node *enode.Node) (map[uint64]bool, error) {
if !s.filterPeer(node) {
return map[uint64]bool{}, nil
}
subnets, err := attestationSubnets(node.Record())
if err != nil {
return nil, errors.Wrap(err, "attestation subnets")
}
return intersect(indices, subnets), nil
}
}
// returns a method that filters peers for a particular sync subnet.
func (s *Service) filterPeerForSyncSubnet(indices map[uint64]int) func(node *enode.Node) (map[uint64]bool, error) {
return func(node *enode.Node) (map[uint64]bool, error) {
if !s.filterPeer(node) {
return map[uint64]bool{}, nil
}
subnets, err := syncSubnets(node.Record())
if err != nil {
return nil, errors.Wrap(err, "sync subnets")
}
return intersect(indices, subnets), nil
}
}
// returns a method that filters peers for a particular blob subnet.
// All peers are expected to be subscribed to all blob subnets.
func (s *Service) filterPeerForBlobSubnet(indices map[uint64]int) func(_ *enode.Node) (map[uint64]bool, error) {
result := make(map[uint64]bool, len(indices))
for i := range indices {
result[i] = true
}
return func(_ *enode.Node) (map[uint64]bool, error) {
return result, nil
}
}
// returns a method that filters peers for a particular data column subnet.
func (s *Service) filterPeerForDataColumnsSubnet(indices map[uint64]int) func(node *enode.Node) (map[uint64]bool, error) {
return func(node *enode.Node) (map[uint64]bool, error) {
if !s.filterPeer(node) {
return map[uint64]bool{}, nil
}
subnets, err := dataColumnSubnets(node.ID(), node.Record())
if err != nil {
return nil, errors.Wrap(err, "data column subnets")
}
return intersect(indices, subnets), nil
}
}
// A lower peer threshold is used when broadcasting an object than when searching
// for a subnet, so that an attestation can still be broadcast even in the event of
// poor peer connectivity.
func (s *Service) hasPeerWithTopic(topic string) bool {
func (s *Service) hasPeerWithSubnet(subnetTopic string) bool {
// In the event peer threshold is lower, we will choose the lower
// threshold.
minPeers := min(1, flags.Get().MinimumPeersPerSubnet)
topic := subnetTopic + s.Encoding().ProtocolSuffix()
peersWithSubnet := s.pubsub.ListPeers(topic)
peersWithSubnetCount := len(peersWithSubnet)
@@ -430,3 +712,16 @@ func byteCount(bitCount int) int {
}
return numOfBytes
}
// intersect intersects two maps and returns a new map containing only the keys
// that are present in both maps.
func intersect(left map[uint64]int, right map[uint64]bool) map[uint64]bool {
result := make(map[uint64]bool, min(len(left), len(right)))
for i := range left {
if right[i] {
result[i] = true
}
}
return result
}
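A quick worked example of intersect (values are illustrative):

left := map[uint64]int{1: 2, 4: 1, 9: 3}             // defective subnets with missing peer counts
right := map[uint64]bool{1: true, 9: true, 12: true} // subnets advertised by a node
result := intersect(left, right)
// result is map[uint64]bool{1: true, 9: true}: only keys present in both maps survive.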

View File

@@ -3,15 +3,14 @@ package p2p
import (
"context"
"crypto/rand"
"fmt"
"testing"
"time"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
testDB "github.com/OffchainLabs/prysm/v7/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers/scorers"
testp2p "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
@@ -25,7 +24,6 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
require2 "github.com/stretchr/testify/require"
)
func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
@@ -124,21 +122,6 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// Start the service.
service.Start()
// start the crawler with a topic extractor that maps ENR attestation subnets
// to full attestation topics for the current fork digest and encoding.
_ = service.Crawler().Start(func(ctx context.Context, node *enode.Node) ([]string, error) {
subs, err := attestationSubnets(node.Record())
if err != nil {
return nil, err
}
var topics []string
for subnet := range subs {
t := AttestationSubnetTopic(bootNodeForkDigest, subnet)
topics = append(topics, t)
}
return topics, nil
})
// Set the ENR `attnets`, used by Prysm to filter peers by subnet.
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(i, true)
@@ -146,7 +129,7 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
service.dv5Listener.LocalNode().Set(entry)
// Join and subscribe to the subnet, needed by libp2p.
topicName := AttestationSubnetTopic(bootNodeForkDigest, i)
topicName := fmt.Sprintf(AttestationSubnetTopicFormat, bootNodeForkDigest, i) + "/ssz_snappy"
topic, err := service.pubsub.Join(topicName)
require.NoError(t, err)
@@ -186,65 +169,23 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
close(service.custodyInfoSet)
service.Start()
subnets := map[uint64]bool{1: true, 2: true, 3: true}
var topics []string
for subnet := range subnets {
t := AttestationSubnetTopic(bootNodeForkDigest, subnet)
topics = append(topics, t)
}
// start the crawler with a topic extractor that maps ENR attestation subnets
// to full attestation topics for the current fork digest and encoding.
_ = service.Crawler().Start(func(ctx context.Context, node *enode.Node) ([]string, error) {
var topics []string
subs, err := attestationSubnets(node.Record())
if err != nil {
return nil, err
}
for subnet := range subs {
t := AttestationSubnetTopic(bootNodeForkDigest, subnet)
topics = append(topics, t)
}
return topics, nil
})
defer func() {
err := service.Stop()
require.NoError(t, err)
}()
builder := func(idx uint64) string {
return AttestationSubnetTopic(bootNodeForkDigest, idx)
}
defectiveSubnetsCount := defectiveSubnets(service, topics, minimumPeersPerSubnet)
require.Equal(t, subnetCount, defectiveSubnetsCount)
subnets := map[uint64]bool{1: true, 2: true, 3: true}
defectiveSubnets := service.defectiveSubnets(AttestationSubnetTopicFormat, bootNodeForkDigest, minimumPeersPerSubnet, subnets)
require.Equal(t, subnetCount, len(defectiveSubnets))
var topicsToDial []string
for s := range subnets {
topicsToDial = append(topicsToDial, builder(s))
}
ctx, cancel := context.WithTimeout(ctx, 15*time.Second)
ctxWithTimeOut, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
for _, topic := range topicsToDial {
err = service.GossipDialer().DialPeersForTopicBlocking(ctx, topic, minimumPeersPerSubnet)
require.NoError(t, err)
}
err = service.FindAndDialPeersWithSubnets(ctxWithTimeOut, AttestationSubnetTopicFormat, bootNodeForkDigest, minimumPeersPerSubnet, subnets)
require.NoError(t, err)
defectiveSubnetsCount = defectiveSubnets(service, topics, minimumPeersPerSubnet)
require.Equal(t, 0, defectiveSubnetsCount)
}
func defectiveSubnets(service *Service, topics []string, minimumPeersPerSubnet int) int {
count := 0
for _, topic := range topics {
peers := service.pubsub.ListPeers(topic)
if len(peers) < minimumPeersPerSubnet {
count++
}
}
return count
defectiveSubnets = service.defectiveSubnets(AttestationSubnetTopicFormat, bootNodeForkDigest, minimumPeersPerSubnet, subnets)
require.Equal(t, 0, len(defectiveSubnets))
}
func Test_AttSubnets(t *testing.T) {
@@ -640,6 +581,7 @@ func TestFindPeersWithSubnets_NodeDeduplication(t *testing.T) {
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
localNode1 := createTestNodeWithID(t, "node1")
@@ -800,13 +742,47 @@ func TestFindPeersWithSubnets_NodeDeduplication(t *testing.T) {
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
s := createTestService(t, db)
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
crawler := startTestCrawler(t, s, s.dv5Listener.(*testp2p.MockListener))
verifyCrawlerPeers(t, crawler, s, tt.defectiveSubnets, tt.expectedCount, tt.description, tt.eval)
digest, err := s.currentForkDigest()
require.NoError(t, err)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
defer cancel()
result, err := s.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
tt.defectiveSubnets,
)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
@@ -816,6 +792,7 @@ func TestFindPeersWithSubnets_FilterPeerRemoval(t *testing.T) {
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
localNode1 := createTestNodeWithID(t, "node1")
@@ -968,7 +945,23 @@ func TestFindPeersWithSubnets_FilterPeerRemoval(t *testing.T) {
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
s := createTestService(t, db)
// Create test P2P instance
fakePeer := testp2p.NewTestP2P(t)
// Create mock service
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Mark specific node versions as "bad" to simulate filterPeer failures
for _, node := range tt.nodes {
@@ -986,11 +979,30 @@ func TestFindPeersWithSubnets_FilterPeerRemoval(t *testing.T) {
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
crawler := startTestCrawler(t, s, s.dv5Listener.(*testp2p.MockListener))
verifyCrawlerPeers(t, crawler, s, tt.defectiveSubnets, tt.expectedCount, tt.description, tt.eval)
digest, err := s.currentForkDigest()
require.NoError(t, err)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
defer cancel()
result, err := s.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
tt.defectiveSubnets,
)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
@@ -1034,6 +1046,7 @@ func TestFindPeersWithSubnets_received_bad_existing_node(t *testing.T) {
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
// Create LocalNode with same ID but different sequences
@@ -1054,7 +1067,21 @@ func TestFindPeersWithSubnets_received_bad_existing_node(t *testing.T) {
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
service := createTestService(t, db)
fakePeer := testp2p.NewTestP2P(t)
service := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Create iterator with callback that marks peer as bad before processing node1_seq2
iter := &callbackIteratorForSubnets{
@@ -1078,80 +1105,22 @@ func TestFindPeersWithSubnets_received_bad_existing_node(t *testing.T) {
localNode := createTestNodeRandom(t)
service.dv5Listener = testp2p.NewMockListener(localNode, iter)
crawler := startTestCrawler(t, service, service.dv5Listener.(*testp2p.MockListener))
// Verification using verifyCrawlerPeers with a custom eval function
verifyCrawlerPeers(t, crawler, service, map[uint64]int{1: 1}, 1, "only node2 should remain", func(t *testing.T, result []*enode.Node) {
require.Equal(t, localNode2.Node().ID(), result[0].ID())
})
}
func createTestService(t *testing.T, d db.Database) *Service {
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: d,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(t.Context(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
return s
}
func startTestCrawler(t *testing.T, s *Service, listener *testp2p.MockListener) *GossipPeerCrawler {
digest, err := s.currentForkDigest()
digest, err := service.currentForkDigest()
require.NoError(t, err)
crawler, err := NewGossipPeerCrawler(t.Context(), s, listener,
1*time.Second, 100*time.Millisecond, 10, gossipcrawler.PeerFilterFunc(s.filterPeer),
s.Peers().Scorers().Score)
// Run findPeersWithSubnets - node1_seq1 gets processed first, then callback marks peer bad, then node1_seq2 fails
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := service.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
map[uint64]int{1: 2}, // Need 2 peers for subnet 1
)
require.NoError(t, err)
s.crawler = crawler
require.NoError(t, crawler.Start(func(ctx context.Context, n *enode.Node) ([]string, error) {
subs, err := attestationSubnets(n.Record())
if err != nil {
return nil, err
}
var topics []string
for subnet := range subs {
t := AttestationSubnetTopic(digest, subnet)
topics = append(topics, t)
}
return topics, nil
}))
return crawler
}
func verifyCrawlerPeers(t *testing.T, crawler *GossipPeerCrawler, s *Service, subnets map[uint64]int, expectedCount int, description string, eval func(t *testing.T, result []*enode.Node)) {
digest, err := s.currentForkDigest()
require.NoError(t, err)
var topics []string
for subnet := range subnets {
topics = append(topics, AttestationSubnetTopic(digest, subnet))
}
var results []*enode.Node
require2.Eventually(t, func() bool {
results = results[:0]
seen := make(map[enode.ID]struct{})
for _, topic := range topics {
peers := crawler.PeersForTopic(topic)
for _, peer := range peers {
if _, ok := seen[peer.ID()]; !ok {
seen[peer.ID()] = struct{}{}
results = append(results, peer)
}
}
}
return len(results) == expectedCount
}, 1*time.Second, 100*time.Millisecond, description)
if eval != nil {
eval(t, results)
}
require.Equal(t, 1, len(result))
require.Equal(t, localNode2.Node().ID(), result[0].ID()) // only node2 should remain
}

View File

@@ -21,9 +21,9 @@ go_library(
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/gossipcrawler:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -4,8 +4,8 @@ import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -41,16 +41,6 @@ func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID)
}
// Crawler -- fake.
func (*FakeP2P) Crawler() gossipcrawler.Crawler {
return &MockCrawler{}
}
// GossipDialer -- fake.
func (*FakeP2P) GossipDialer() gossipcrawler.GossipDialer {
return nil
}
// AddDisconnectionHandler -- fake.
func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
}
@@ -80,6 +70,11 @@ func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// FindAndDialPeersWithSubnets mocks the p2p func.
func (*FakeP2P) FindAndDialPeersWithSubnets(ctx context.Context, topicFormat string, digest [fieldparams.VersionLength]byte, minimumPeersPerSubnet int, subnets map[uint64]bool) error {
return nil
}
// RefreshPersistentSubnets mocks the p2p func.
func (*FakeP2P) RefreshPersistentSubnets() {}
@@ -98,11 +93,6 @@ func (*FakeP2P) Peers() *peers.Status {
return nil
}
// DialPeers -- fake.
func (*FakeP2P) DialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint {
return 0
}
// PublishToTopic -- fake.
func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil

View File

@@ -4,7 +4,7 @@ import (
"context"
"errors"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/host"
@@ -19,7 +19,6 @@ type MockPeerManager struct {
BHost host.Host
DiscoveryAddr []multiaddr.Multiaddr
FailDiscoveryAddr bool
Dialer gossipcrawler.GossipDialer
}
// Disconnect .
@@ -47,11 +46,6 @@ func (m MockPeerManager) NodeID() enode.ID {
return enode.ID{}
}
// GossipDialer returns the configured dialer mock, if any.
func (m MockPeerManager) GossipDialer() gossipcrawler.GossipDialer {
return m.Dialer
}
// DiscoveryAddresses .
func (m *MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if m.FailDiscoveryAddr {
@@ -63,15 +57,10 @@ func (m *MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
// RefreshPersistentSubnets .
func (*MockPeerManager) RefreshPersistentSubnets() {}
// DialPeers
func (p *MockPeerManager) DialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint {
return 0
// FindAndDialPeersWithSubnet .
func (*MockPeerManager) FindAndDialPeersWithSubnets(ctx context.Context, topicFormat string, digest [fieldparams.VersionLength]byte, minimumPeersPerSubnet int, subnets map[uint64]bool) error {
return nil
}
// AddPingMethod .
func (*MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}
// Crawler.
func (*MockPeerManager) Crawler() gossipcrawler.Crawler {
return nil
}

View File

@@ -13,9 +13,9 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/gossipcrawler"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/peers/scorers"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -66,7 +66,6 @@ type TestP2P struct {
earliestAvailableSlot primitives.Slot
custodyGroupCount uint64
enr *enr.Record
dialer gossipcrawler.GossipDialer
}
// NewTestP2P initializes a new p2p test service.
@@ -184,7 +183,11 @@ func (p *TestP2P) ReceivePubSub(topic string, msg proto.Message) {
if _, err := p.Encoding().EncodeGossip(buf, castedMsg); err != nil {
p.t.Fatalf("Failed to encode message: %v", err)
}
topicHandle, err := ps.Join(topic)
digest, err := p.ForkDigest()
if err != nil {
p.t.Fatal(err)
}
topicHandle, err := ps.Join(fmt.Sprintf(topic, digest) + p.Encoding().ProtocolSuffix())
if err != nil {
p.t.Fatal(err)
}
@@ -276,9 +279,6 @@ func (p *TestP2P) SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub
// LeaveTopic closes topic and removes corresponding handler from list of joined topics.
// This method will return error if there are outstanding event handlers or subscriptions.
func (p *TestP2P) LeaveTopic(topic string) error {
p.mu.Lock()
defer p.mu.Unlock()
if t, ok := p.joinedTopics[topic]; ok {
if err := t.Close(); err != nil {
return err
@@ -419,8 +419,9 @@ func (p *TestP2P) Peers() *peers.Status {
return p.peers
}
func (p *TestP2P) DialPeers(ctx context.Context, maxConcurrentDials int, nodes []*enode.Node) uint {
return 0
// FindAndDialPeersWithSubnets mocks the p2p func.
func (*TestP2P) FindAndDialPeersWithSubnets(ctx context.Context, topicFormat string, digest [fieldparams.VersionLength]byte, minimumPeersPerSubnet int, subnets map[uint64]bool) error {
return nil
}
// RefreshPersistentSubnets mocks the p2p func.
@@ -557,40 +558,3 @@ func (s *TestP2P) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
return custodyGroupCount
}
// MockCrawler is a minimal mock implementation of PeerCrawler for testing
type MockCrawler struct{}
// Start does nothing as this is a mock
func (m *MockCrawler) Start(gossipcrawler.TopicExtractor) error {
return nil
}
// Stop does nothing as this is a mock
func (m *MockCrawler) Stop() {}
// SetTopicExtractor does nothing as this is a mock
func (m *MockCrawler) SetTopicExtractor(extractor func(context.Context, *enode.Node) ([]string, error)) error {
return nil
}
// RemoveTopic does nothing as this is a mock
func (m *MockCrawler) RemoveTopic(topic string) {}
// RemovePeerID does nothing as this is a mock
func (m *MockCrawler) RemovePeerByPeerId(pid peer.ID) {}
// PeersForTopic returns empty list as this is a mock
func (m *MockCrawler) PeersForTopic(topic string) []*enode.Node {
return []*enode.Node{}
}
// Crawler returns a mock crawler implementation for testing.
func (*TestP2P) Crawler() gossipcrawler.Crawler {
return &MockCrawler{}
}
// GossipDialer returns nil for tests that do not exercise dialer behaviour.
func (p *TestP2P) GossipDialer() gossipcrawler.GossipDialer {
return p.dialer
}

View File

@@ -77,12 +77,12 @@ func InitializeDataMaps() {
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockElectra{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}, ExecutionRequests: &enginev1.ExecutionRequests{}}}},
&ethpb.SignedBeaconBlockElectra{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}, ExecutionRequests: &enginev1.ExecutionRequests{}}}},
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
}

View File

@@ -73,7 +73,7 @@ func (e *endpoint) handlerWithMiddleware() http.HandlerFunc {
handler.ServeHTTP(rw, r)
if rw.statusCode >= 400 {
httpErrorCount.WithLabelValues(e.name, http.StatusText(rw.statusCode), r.Method).Inc()
httpErrorCount.WithLabelValues(r.URL.Path, http.StatusText(rw.statusCode), r.Method).Inc()
}
}
}
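For context on this hunk, which swaps the first label value recorded on the httpErrorCount counter, here is a hedged sketch of how such a labeled counter is typically declared with the Prometheus Go client; the actual metric name, help text, and label names used by Prysm may differ:

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// httpErrorCountSketch is an illustrative counter vector; Prysm's real httpErrorCount
// is declared elsewhere and is not shown in this diff.
var httpErrorCountSketch = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_error_count",
		Help: "Number of HTTP responses with a status code of 400 or above.",
	},
	[]string{"endpoint", "status", "method"},
)

func recordHTTPError(endpoint, status, method string) {
	httpErrorCountSketch.WithLabelValues(endpoint, status, method).Inc()
}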

Some files were not shown because too many files have changed in this diff.