Compare commits

...

19 Commits

Author SHA1 Message Date
Kasey
fa6a09b1e8 generic ssz list encoding
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-07-02 21:50:36 -05:00
Manu NALEPA
b1ac8209b2 PeerDAS: Implement reconstruct. (#15454)
* PeerDAS: Implement reconstruct.

* Fix Preston's comment.

* Fix Preston's comment.
2025-07-02 19:02:55 +00:00
Bastin
74c9586c66 refactor lc kv tests (#15465)
* refactor lc kv tests

* clean up

* Update lightclient_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-02 13:24:32 +00:00
Bastin
f0ad3dfaeb Refactor lc bootstrap tests (#15462)
* fix versioning

* changelog

* fix blockchain tests

* fix linter issue

* fix spec tests

* fix default lc update version

* fix lc header version

* gzl

* clean up the code

* Update testing/spectest/shared/common/light_client/update_ranking.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* add fulu set up in update ranking

* pass att block to createDefaultLCUpdate

* address comments

* linter

* Update lightclient.go

* refactor lc bootstrap tests

* changelog

* sort imports

* refactor lc bootstrap tests

* changelog

* Implement the new Fulu Metadata. (#15440)

* clean up

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-07-02 13:23:39 +00:00
Bastin
2540196747 Put the initialization of LC Store behind the enable-light-client flag (#15464)
* put lc store behind flag

* lint

* remove extra line
2025-07-02 11:09:37 +00:00
Manu NALEPA
f133751cce --chain-config-file: Do not use mainnet boot nodes anymore. (#15460) 2025-07-02 09:36:31 +00:00
Bastin
bddcc158e4 Fix LC versioning bug (#15400)
* fix versioning

* changelog

* fix blockchain tests

* fix linter issue

* fix spec tests

* fix default lc update version

* fix lc header version

* gzl

* clean up the code

* Update testing/spectest/shared/common/light_client/update_ranking.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* add fulu set up in update ranking

* pass att block to createDefaultLCUpdate

* address comments

* linter

* Update lightclient.go

* sort imports

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-07-01 21:40:18 +00:00
Manu NALEPA
bc7664321b Implement the new Fulu Metadata. (#15440) 2025-07-01 07:07:32 +00:00
Manu NALEPA
97f416b3a7 dataColumnSidecarByRootRPCHandler: Do not rely any more on map iteration, which does not produce reproducible output order. (#15441) 2025-06-26 13:07:19 +00:00
Manu NALEPA
1c1e0f38bb PeerDAS: Implement beacon API blob sidecar endpoint for Fulu (#15436)
* `TestGetBlob`: Refactor (no functional change).

* `GenerateTestFuluBlockWithSidecars`: Generate commitments consistent with blobs.

This is actually not needed for this PR, but it does not hurt.

* Beacon API: Implement blob sidecars endpoint for Fulu.

* Fix Radek's comment.

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix Radek's comment.

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-26 10:05:22 +00:00
Manu NALEPA
121914d0d7 Implement SendDataColumnSidecarsByRangeRequest and SendDataColumnSidecarsByRootRequest. (#15430)
* Implement `SendDataColumnSidecarsByRangeRequest` and `SendDataColumnSidecarsByRootRequest`.

* Fix James's comment.

* Fix James's comment.

* Fix James' comment.

* Fix marshaller.
2025-06-24 21:40:16 +00:00
Manu NALEPA
e8625cd89d Peerdas various (#15423)
* Topic mapping: Group `const` and `var`.

Cosmetic change, no functional change.

* `TopicFromMessage`: Do not assume anymore that all Fulu-specific topics are V3 only.

* Proto: Remove unused `DataColumnIdentifier` and add new `StatusV2`.

`DataColumnIdentifier` was removed in the spec here: https://github.com/ethereum/consensus-specs/pull/4284. Eventually, we stopped using it in Prysm, but never removed the corresponding proto message.

The new `StatusV2` is introduced in the spec here: https://github.com/ethereum/consensus-specs/pull/4374

* `readChunkedDataColumnSideCar` ==> `readChunkedDataColumnSidecar`.

* `rpc_send_request.go`: Reorganize file (no functional change).

* `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests.
2025-06-23 14:47:02 +00:00
terence
667aaf1564 Remove "invalid" from logs for blob sidecars missing parent or out of range (#15428) 2025-06-22 21:55:44 +00:00
Bastin
e020907d2a Revert "SSZ support for submitAttestationsV2 (#15422)" (#15426)
This reverts commit 9927cea35a.
2025-06-20 21:21:16 +00:00
Bastin
9927cea35a SSZ support for submitAttestationsV2 (#15422)
* ssz support for submitAttestationsV2

* changelog

* lowercase errors

* save 1 line

* Update beacon-chain/rpc/eth/beacon/handlers_pool.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* add comment

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-20 17:21:48 +00:00
Manu NALEPA
d4233471d2 PeerDAS: Implement data columns by range RPC handler. (#15421)
* `dataColumnsRPCMinValidSlot`: Add new test case.

* peerDAS: Implement `dataColumnSidecarByRangeRPCHandler`.

* `rateLimitingAmount`: Define outside of the function.

* Fix James' comment.

* Fix James' comment.
2025-06-20 15:22:50 +00:00
james-prysm
d63ae69920 adding ssz for get block endpoint (#15390)
* adding get ssz

* adding some tests

* gaz

* adding ssz to e2e

* wip ssz

* adding in additional check on header type

* remove unused

* renaming json rest handler, and adding in usage of the use-ssz debug flag

* fixing unit tests

* fixing tests

* gaz

* radek feedback

* Update config/features/config.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update config/features/flags.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update config/features/flags.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/get_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/get_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/get_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing feedback

* missing import

* another missing import

* fixing tests

* gaz

* removing unused

* gaz

* more radek feedback

* fixing context

* adding in check for non-accepted content type

* reverting to not create more edge cases

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-20 14:27:09 +00:00
terence
b9fd32dfff Remove deposit count from sync new block log (#15420)
* Remove deposit count from sync new block log

* Fix tests
2025-06-19 23:49:17 +00:00
Manu NALEPA
559d02bf4d peerDAS: Implement dataColumnSidecarByRootRPCHandler. (#15405)
* `CreateTestVerifiedRoDataColumnSidecars`: Use consistent block root.

* peerDAS: Implement `dataColumnSidecarByRootRPCHandler`.

* Fix James' comment.

* Fix James' comment.
2025-06-19 12:12:45 +00:00
128 changed files with 7146 additions and 2614 deletions

View File

@@ -26,9 +26,6 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().Deposits()) > 0 {
log = log.WithField("deposits", len(b.Body().Deposits()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
@@ -111,7 +108,6 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
log.WithFields(lf).Debug("Synced new block")
} else {
@@ -159,7 +155,9 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
fields["blsToExecutionChanges"] = len(changes)
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
}
log.WithFields(fields).Debug("Synced new payload")
return nil

View File

@@ -53,7 +53,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 deposits=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
},
{name: "has attester slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -93,7 +93,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 deposits=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
},
{name: "has payload",
b: func() interfaces.ReadOnlyBeaconBlock { return wrappedPayloadBlk },

View File

@@ -258,3 +258,12 @@ func WithLightClientStore(lcs *lightclient.Store) Option {
return nil
}
}
// WithStartWaitingDataColumnSidecars sets a channel that the `areDataColumnsAvailable` function will fill
// in when starting to wait for additional data columns.
func WithStartWaitingDataColumnSidecars(c chan bool) Option {
return func(s *Service) error {
s.startWaitingDataColumnSidecars = c
return nil
}
}

View File

@@ -637,7 +637,11 @@ func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparam
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the sidecar notifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signedBlock interfaces.ReadOnlySignedBeaconBlock) error {
func (s *Service) isDataAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
signedBlock interfaces.ReadOnlySignedBeaconBlock,
) error {
block := signedBlock.Block()
if block == nil {
return errors.New("invalid nil beacon block")
@@ -657,7 +661,11 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signedBloc
// areDataColumnsAvailable blocks until all data columns committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
func (s *Service) areDataColumnsAvailable(ctx context.Context, root [fieldparams.RootLength]byte, block interfaces.ReadOnlyBeaconBlock) error {
func (s *Service) areDataColumnsAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
@@ -724,6 +732,10 @@ func (s *Service) areDataColumnsAvailable(ctx context.Context, root [fieldparams
return nil
}
if s.startWaitingDataColumnSidecars != nil {
s.startWaitingDataColumnSidecars <- true
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)

View File

@@ -2796,7 +2796,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
@@ -2848,7 +2848,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -2954,7 +2954,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
@@ -3006,7 +3006,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -3112,7 +3112,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
@@ -3164,7 +3164,7 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -3332,17 +3332,27 @@ func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.C
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
require.NoError(t, err)
root, err := signedBeaconBlock.Block.HashTreeRoot()
block := signedBeaconBlock.Block
bodyRoot, err := block.Body.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParams, 0, len(params.columnsToSave))
root, err := block.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParam, 0, len(params.columnsToSave))
for _, i := range params.columnsToSave {
dataColumnParam := util.DataColumnParams{ColumnIndex: i}
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: block.Slot,
ProposerIndex: block.ProposerIndex,
ParentRoot: block.ParentRoot,
StateRoot: block.StateRoot,
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParam)
}
dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: dataColumnsParams}
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
err = dataColumnStorage.Save(verifiedRODataColumns)
require.NoError(t, err)
@@ -3402,38 +3412,47 @@ func TestIsDataAvailable(t *testing.T) {
})
t.Run("Fulu - some initially missing data columns (no reconstruction)", func(t *testing.T) {
startWaiting := make(chan bool)
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
var wrongRoot [fieldparams.RootLength]byte
copy(wrongRoot[:], root[:])
wrongRoot[0]++ // change the root to simulate a wrong root
_, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{
{Index: 42, Slot: slot + 1}, // Needed index, but not for this slot.
})
_, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{wrongRoot: {
{ColumnIndex: 42}, // needed
}})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 87, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
{Index: 1, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Not needed index
{Index: 42, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{root: {
{ColumnIndex: 87}, // needed
{ColumnIndex: 1}, // not needed
{ColumnIndex: 42}, // needed
}})
go func() {
<-startWaiting
time.AfterFunc(10*time.Millisecond, func() {
err := service.dataColumnStorage.Save(verifiedSidecarsWrongRoot)
require.NoError(t, err)
err = service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
})
}()
err := service.isDataAvailable(ctx, root, signed)
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3442,6 +3461,9 @@ func TestIsDataAvailable(t *testing.T) {
missingColumns = uint64(2)
cgc = 128
)
startWaiting := make(chan bool)
var custodyInfo peerdas.CustodyInfo
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
@@ -3454,41 +3476,61 @@ func TestIsDataAvailable(t *testing.T) {
}
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&custodyInfo)},
options: []Option{WithCustodyInfo(&custodyInfo), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
dataColumnParams := make([]util.DataColumnParams, 0, missingColumns)
dataColumnParams := make([]util.DataColumnParam, 0, missingColumns)
for i := minimumColumnsCountToReconstruct - missingColumns; i < minimumColumnsCountToReconstruct; i++ {
dataColumnParam := util.DataColumnParams{ColumnIndex: i}
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnParams = append(dataColumnParams, dataColumnParam)
}
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{root: dataColumnParams})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParams)
go func() {
<-startWaiting
time.AfterFunc(10*time.Millisecond, func() {
err := service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
})
}()
err := service.isDataAvailable(ctx, root, signed)
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - some columns are definitively missing", func(t *testing.T) {
startWaiting := make(chan bool)
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
blobKzgCommitmentsCount: 3,
}
ctx, cancel, service, root, signed := testIsAvailableSetup(t, params)
time.AfterFunc(10*time.Millisecond, func() {
go func() {
<-startWaiting
cancel()
})
}()
err := service.isDataAvailable(ctx, root, signed)
require.NotNil(t, err)
@@ -3605,7 +3647,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
expectedVersion = version.Altair
expectedVersion = version.Bellatrix
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella
@@ -3614,7 +3656,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Deneb
case 5:
forkEpoch = uint64(params.BeaconConfig().ElectraForkEpoch)
expectedVersion = version.Deneb
expectedVersion = version.Electra
default:
t.Errorf("Unsupported fork version %s", version.String(testVersion))
}
@@ -3759,7 +3801,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
expectedVersion = version.Altair
expectedVersion = version.Bellatrix
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella

View File

@@ -47,27 +47,28 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
lcStore *lightClient.Store
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
lcStore *lightClient.Store
startWaitingDataColumnSidecars chan bool // for testing purposes only
}
// config options for the service.

View File

@@ -38,11 +38,14 @@ const (
// SingleAttReceived is sent after a single attestation object is received from gossip or rpc
SingleAttReceived = 9
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 10
// BlockGossipReceived is sent after a block has been received from gossip or API that passes validation rules.
BlockGossipReceived = 10
BlockGossipReceived = 11
// DataColumnReceived is sent after a data column has been seen after gossip validation rules.
DataColumnReceived = 11
DataColumnReceived = 12
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -94,6 +97,11 @@ type SingleAttReceivedData struct {
Attestation ethpb.Att
}
// DataColumnSidecarReceivedData is the data sent with DataColumnSidecarReceived events.
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}
// BlockGossipReceivedData is the data sent with BlockGossipReceived events.
type BlockGossipReceivedData struct {
// SignedBlock is the block that was received.
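
The hunk above renumbers the event constants to make room for `DataColumnSidecarReceived = 10` and adds its payload struct. The following self-contained Go sketch shows how a consumer might dispatch on the renumbered types; the `Event` struct and `consume` function are illustrative only and do not reflect Prysm's actual feed package.

package main

import "fmt"

// Local copies of the renumbered event types from the diff above.
const (
	SingleAttReceived         = 9
	DataColumnSidecarReceived = 10
	BlockGossipReceived       = 11
	DataColumnReceived        = 12
)

// Event pairs a type tag with an arbitrary payload (simplified for the sketch).
type Event struct {
	Type int
	Data any
}

// DataColumnSidecarReceivedData is a placeholder payload; the real struct
// carries a *blocks.VerifiedRODataColumn.
type DataColumnSidecarReceivedData struct {
	ColumnIndex uint64
}

// consume dispatches on the event type, as a feed subscriber would.
func consume(ev Event) {
	switch ev.Type {
	case DataColumnSidecarReceived:
		d, ok := ev.Data.(*DataColumnSidecarReceivedData)
		if !ok {
			fmt.Println("unexpected payload for DataColumnSidecarReceived")
			return
		}
		fmt.Println("data column sidecar received, index", d.ColumnIndex)
	case BlockGossipReceived:
		fmt.Println("block gossip received")
	default:
		fmt.Println("other event", ev.Type)
	}
}

func main() {
	consume(Event{Type: DataColumnSidecarReceived, Data: &DataColumnSidecarReceivedData{ColumnIndex: 42}})
}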

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"helpers.go",
"lightclient.go",
"store.go",
],
@@ -41,7 +42,6 @@ go_test(
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -0,0 +1,243 @@
package light_client
import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
currentSyncCommittee := &pb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
var currentSyncCommitteeBranch [][]byte
if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
}
for i := 0; i < len(currentSyncCommitteeBranch); i++ {
currentSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
executionBranch[i] = make([]byte, 32)
}
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{},
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderCapella{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else {
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
}
return light_client.NewWrappedBootstrap(m)
}
func makeExecutionAndProofDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*enginev1.ExecutionPayloadHeaderDeneb, [][]byte, error) {
if blk.Version() < version.Capella {
p, err := execution.EmptyExecutionPayloadHeader(version.Deneb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok := p.(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderDeneb{})
}
payloadProof := emptyPayloadProof()
return payloadHeader, payloadProof, nil
}
payload, err := blk.Block().Body().Execution()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get withdrawals root")
}
payloadHeader := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
BlobGasUsed: 0,
ExcessBlobGas: 0,
}
if blk.Version() >= version.Deneb {
blobGasUsed, err := payload.BlobGasUsed()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get blob gas used")
}
excessBlobGas, err := payload.ExcessBlobGas()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get excess blob gas")
}
payloadHeader.BlobGasUsed = blobGasUsed
payloadHeader.ExcessBlobGas = excessBlobGas
}
payloadProof, err := blocks.PayloadProof(ctx, blk.Block())
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload proof")
}
return payloadHeader, payloadProof, nil
}
func makeExecutionAndProofCapella(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*enginev1.ExecutionPayloadHeaderCapella, [][]byte, error) {
if blk.Version() > version.Capella {
return nil, nil, fmt.Errorf("unsupported block version %s for capella execution payload", version.String(blk.Version()))
}
if blk.Version() < version.Capella {
p, err := execution.EmptyExecutionPayloadHeader(version.Capella)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok := p.(*enginev1.ExecutionPayloadHeaderCapella)
if !ok {
return nil, nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderCapella{})
}
payloadProof := emptyPayloadProof()
return payloadHeader, payloadProof, nil
}
payload, err := blk.Block().Body().Execution()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get withdrawals root")
}
payloadHeader := &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
payloadProof, err := blocks.PayloadProof(ctx, blk.Block())
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload proof")
}
return payloadHeader, payloadProof, nil
}
func makeBeaconBlockHeader(blk interfaces.ReadOnlySignedBeaconBlock) (*pb.BeaconBlockHeader, error) {
parentRoot := blk.Block().ParentRoot()
stateRoot := blk.Block().StateRoot()
bodyRoot, err := blk.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get body root")
}
return &pb.BeaconBlockHeader{
Slot: blk.Block().Slot(),
ProposerIndex: blk.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}, nil
}
func emptyPayloadProof() [][]byte {
branch := interfaces.LightClientExecutionBranch{}
proof := make([][]byte, len(branch))
for i, b := range branch {
proof[i] = b[:]
}
return proof
}

View File

@@ -6,12 +6,10 @@ import (
"fmt"
"reflect"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
consensus_types "github.com/OffchainLabs/prysm/v6/consensus-types"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -163,13 +161,13 @@ func NewLightClientUpdateFromBeaconState(
updateAttestedPeriod := slots.SyncCommitteePeriod(slots.ToEpoch(attestedBlock.Block().Slot()))
// update = LightClientUpdate()
result, err := CreateDefaultLightClientUpdate(currentSlot, attestedState)
result, err := CreateDefaultLightClientUpdate(attestedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not create default light client update")
}
// update.attested_header = block_to_light_client_header(attested_block)
attestedLightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, attestedBlock)
attestedLightClientHeader, err := BlockToLightClientHeader(ctx, attestedBlock.Version(), attestedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get attested light client header")
}
@@ -210,7 +208,7 @@ func NewLightClientUpdateFromBeaconState(
// if finalized_block.message.slot != GENESIS_SLOT
if finalizedBlock.Block().Slot() != 0 {
// update.finalized_header = block_to_light_client_header(finalized_block)
finalizedLightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, finalizedBlock)
finalizedLightClientHeader, err := BlockToLightClientHeader(ctx, attestedBlock.Version(), finalizedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get finalized light client header")
}
@@ -247,9 +245,7 @@ func NewLightClientUpdateFromBeaconState(
return result, nil
}
func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState state.BeaconState) (interfaces.LightClientUpdate, error) {
currentEpoch := slots.ToEpoch(currentSlot)
func CreateDefaultLightClientUpdate(attestedBlock interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientUpdate, error) {
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
@@ -261,7 +257,7 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
}
var nextSyncCommitteeBranch [][]byte
if attestedState.Version() >= version.Electra {
if attestedBlock.Version() >= version.Electra {
nextSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
nextSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
@@ -276,7 +272,7 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
}
var finalityBranch [][]byte
if attestedState.Version() >= version.Electra {
if attestedBlock.Version() >= version.Electra {
finalityBranch = make([][]byte, fieldparams.FinalityBranchDepthElectra)
} else {
finalityBranch = make([][]byte, fieldparams.FinalityBranchDepth)
@@ -286,10 +282,12 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
}
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
switch attestedBlock.Version() {
case version.Altair, version.Bellatrix:
m = &pb.LightClientUpdateAltair{
AttestedHeader: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -310,10 +308,11 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
SyncCommitteeSignature: make([]byte, 96),
},
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
case version.Capella:
m = &pb.LightClientUpdateCapella{
AttestedHeader: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -362,10 +361,11 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
SyncCommitteeSignature: make([]byte, 96),
},
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
case version.Deneb:
m = &pb.LightClientUpdateDeneb{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -418,120 +418,65 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
SyncCommitteeSignature: make([]byte, 96),
},
}
} else {
if attestedState.Version() >= version.Electra {
m = &pb.LightClientUpdateElectra{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
case version.Electra, version.Fulu:
m = &pb.LightClientUpdateElectra{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
FinalizedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
SyncAggregate: &pb.SyncAggregate{
SyncCommitteeBits: make([]byte, 64),
SyncCommitteeSignature: make([]byte, 96),
ExecutionBranch: executionBranch,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
FinalizedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
}
} else {
m = &pb.LightClientUpdateDeneb{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
FinalizedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
},
SyncAggregate: &pb.SyncAggregate{
SyncCommitteeBits: make([]byte, 64),
SyncCommitteeSignature: make([]byte, 96),
},
}
ExecutionBranch: executionBranch,
},
SyncAggregate: &pb.SyncAggregate{
SyncCommitteeBits: make([]byte, 64),
SyncCommitteeSignature: make([]byte, 96),
},
}
default:
return nil, errors.Errorf("unsupported beacon chain version %s", version.String(attestedBlock.Version()))
}
return light_client.NewWrappedUpdate(m)
@@ -575,189 +520,52 @@ func ComputeWithdrawalsRoot(payload interfaces.ExecutionData) ([]byte, error) {
func BlockToLightClientHeader(
ctx context.Context,
currentSlot primitives.Slot,
block interfaces.ReadOnlySignedBeaconBlock,
attestedBlockVersion int, // this is the version that the light client header should be in, based on the attested block.
block interfaces.ReadOnlySignedBeaconBlock, // this block is either the attested block, or the finalized block.
// in case of the latter, we might need to upgrade it to the attested block's version.
) (interfaces.LightClientHeader, error) {
var m proto.Message
currentEpoch := slots.ToEpoch(currentSlot)
blockEpoch := slots.ToEpoch(block.Block().Slot())
parentRoot := block.Block().ParentRoot()
stateRoot := block.Block().StateRoot()
bodyRoot, err := block.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get body root")
if block.Version() > attestedBlockVersion {
return nil, errors.Errorf("block version %s is greater than attested block version %s", version.String(block.Version()), version.String(attestedBlockVersion))
}
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
beacon, err := makeBeaconBlockHeader(block)
if err != nil {
return nil, errors.Wrap(err, "could not make beacon block header")
}
var m proto.Message
switch attestedBlockVersion {
case version.Altair, version.Bellatrix:
m = &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Beacon: beacon,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
var payloadHeader *enginev1.ExecutionPayloadHeaderCapella
var payloadProof [][]byte
if blockEpoch < params.BeaconConfig().CapellaForkEpoch {
var ok bool
p, err := execution.EmptyExecutionPayloadHeader(version.Capella)
if err != nil {
return nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok = p.(*enginev1.ExecutionPayloadHeaderCapella)
if !ok {
return nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderCapella{})
}
payloadProof = emptyPayloadProof()
} else {
payload, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
payloadHeader = &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
payloadProof, err = blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
case version.Capella:
payloadHeader, payloadProof, err := makeExecutionAndProofCapella(ctx, block)
if err != nil {
return nil, errors.Wrap(err, "could not make execution payload header and proof")
}
m = &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Beacon: beacon,
Execution: payloadHeader,
ExecutionBranch: payloadProof,
}
} else {
var payloadHeader *enginev1.ExecutionPayloadHeaderDeneb
var payloadProof [][]byte
if blockEpoch < params.BeaconConfig().CapellaForkEpoch {
var ok bool
p, err := execution.EmptyExecutionPayloadHeader(version.Deneb)
if err != nil {
return nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok = p.(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderDeneb{})
}
payloadProof = emptyPayloadProof()
} else {
payload, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
var blobGasUsed uint64
var excessBlobGas uint64
if blockEpoch >= params.BeaconConfig().DenebForkEpoch {
blobGasUsed, err = payload.BlobGasUsed()
if err != nil {
return nil, errors.Wrap(err, "could not get blob gas used")
}
excessBlobGas, err = payload.ExcessBlobGas()
if err != nil {
return nil, errors.Wrap(err, "could not get excess blob gas")
}
}
payloadHeader = &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
BlobGasUsed: blobGasUsed,
ExcessBlobGas: excessBlobGas,
}
payloadProof, err = blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
case version.Deneb, version.Electra, version.Fulu:
payloadHeader, payloadProof, err := makeExecutionAndProofDeneb(ctx, block)
if err != nil {
return nil, errors.Wrap(err, "could not make execution payload header and proof")
}
m = &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Beacon: beacon,
Execution: payloadHeader,
ExecutionBranch: payloadProof,
}
default:
return nil, fmt.Errorf("unsupported attested block version %s", version.String(attestedBlockVersion))
}
return light_client.NewWrappedHeader(m)
}
func emptyPayloadProof() [][]byte {
branch := interfaces.LightClientExecutionBranch{}
proof := make([][]byte, len(branch))
for i, b := range branch {
proof[i] = b[:]
}
return proof
}
func HasRelevantSyncCommittee(update interfaces.LightClientUpdate) (bool, error) {
if update.Version() >= version.Electra {
branch, err := update.NextSyncCommitteeBranchElectra()
@@ -909,7 +717,7 @@ func NewLightClientBootstrapFromBeaconState(
return nil, errors.Wrap(err, "could not create default light client bootstrap")
}
lightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, block)
lightClientHeader, err := BlockToLightClientHeader(ctx, state.Version(), block)
if err != nil {
return nil, errors.Wrap(err, "could not convert block to light client header")
}
@@ -942,78 +750,6 @@ func NewLightClientBootstrapFromBeaconState(
return bootstrap, nil
}
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
currentSyncCommittee := &pb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
var currentSyncCommitteeBranch [][]byte
if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
}
for i := 0; i < len(currentSyncCommitteeBranch); i++ {
currentSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
executionBranch[i] = make([]byte, 32)
}
// TODO: can this be based on the current epoch?
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{},
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderCapella{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else {
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
}
return light_client.NewWrappedBootstrap(m)
}
func UpdateHasSupermajority(syncAggregate *pb.SyncAggregate) bool {
maxActiveParticipants := syncAggregate.SyncCommitteeBits.Len()
numActiveParticipants := syncAggregate.SyncCommitteeBits.Count()
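
The refactor above makes `BlockToLightClientHeader` take the attested block's version, reject any block newer than that version, and otherwise emit the header shape matching the attested fork (Altair/Bellatrix, Capella, or Deneb and later). A compact, self-contained Go sketch of that guard and dispatch follows; the constants and the `headerVersionFor` helper are hypothetical, not Prysm code.

package main

import (
	"errors"
	"fmt"
)

// Fork versions in ascending order, mirroring the runtime/version ordering conceptually.
const (
	Altair = iota
	Bellatrix
	Capella
	Deneb
	Electra
	Fulu
)

var names = []string{"altair", "bellatrix", "capella", "deneb", "electra", "fulu"}

// headerVersionFor mirrors the guard and dispatch in the refactored function:
// the produced header follows the attested block's version, and a block newer
// than the attested block is rejected.
func headerVersionFor(attestedVersion, blockVersion int) (string, error) {
	if blockVersion > attestedVersion {
		return "", fmt.Errorf("block version %s is greater than attested block version %s",
			names[blockVersion], names[attestedVersion])
	}
	switch attestedVersion {
	case Altair, Bellatrix:
		return "LightClientHeaderAltair", nil
	case Capella:
		return "LightClientHeaderCapella", nil
	case Deneb, Electra, Fulu:
		return "LightClientHeaderDeneb", nil
	default:
		return "", errors.New("unsupported attested block version")
	}
}

func main() {
	// A Bellatrix finalized block rendered for an Electra attested block is
	// upgraded into the Deneb header shape.
	fmt.Println(headerVersionFor(Electra, Bellatrix)) // LightClientHeaderDeneb <nil>
	fmt.Println(headerVersionFor(Capella, Deneb))     // error: newer than attested
}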

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
@@ -547,7 +546,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().AltairForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Altair,
l.Block,
)
require.NoError(t, err)
@@ -570,7 +569,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().BellatrixForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Bellatrix,
l.Block,
)
require.NoError(t, err)
@@ -594,7 +593,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().CapellaForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Capella,
l.Block,
)
require.NoError(t, err)
@@ -655,7 +654,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().CapellaForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Capella,
l.Block,
)
require.NoError(t, err)
@@ -718,7 +717,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().DenebForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Deneb,
l.Block,
)
require.NoError(t, err)
@@ -787,7 +786,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().DenebForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Deneb,
l.Block,
)
require.NoError(t, err)
@@ -856,7 +855,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, version.Electra, l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -921,7 +920,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra, util.WithBlinded())
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, version.Electra, l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -989,7 +988,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().CapellaForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Capella,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1011,7 +1010,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().DenebForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Deneb,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1034,7 +1033,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().DenebForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Deneb,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1094,7 +1093,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
primitives.Slot(params.BeaconConfig().DenebForkEpoch)*params.BeaconConfig().SlotsPerEpoch,
version.Deneb,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
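Every hunk above applies the same signature change: BlockToLightClientHeader now takes the fork version as its second argument instead of a slot derived from the fork epoch. A short sketch of the new call shape; the interfaces import path and the return type are assumptions (they are not shown in this diff), while the lightClient and version imports appear in the hunks above:

import (
	"context"

	lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
	"github.com/OffchainLabs/prysm/v6/runtime/version"
)

// denebHeader shows the post-refactor call: the caller names the fork directly
// rather than passing a slot for the header builder to interpret.
func denebHeader(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientHeader, error) {
	return lightClient.BlockToLightClientHeader(ctx, version.Deneb, blk)
}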
@@ -1180,14 +1179,13 @@ func createNonEmptyFinalityBranch() [][]byte {
}
func TestIsBetterUpdate(t *testing.T) {
config := params.BeaconConfig()
st, err := util.NewBeaconStateAltair()
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
t.Run("new has supermajority but old doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1203,9 +1201,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("old has supermajority but new doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1221,9 +1219,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new doesn't have supermajority and newNumActiveParticipants is greater than oldNumActiveParticipants", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1239,9 +1237,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new doesn't have supermajority and newNumActiveParticipants is lesser than oldNumActiveParticipants", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1257,9 +1255,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has relevant sync committee but old doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1296,9 +1294,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("old has relevant sync committee but new doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1335,9 +1333,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has finality but old doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1378,9 +1376,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("old has finality but new doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1421,9 +1419,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has finality and sync committee finality both but old doesn't have sync committee finality", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1482,9 +1480,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has finality but doesn't have sync committee finality and old has sync committee finality", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1543,9 +1541,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has more active participants than old", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1561,9 +1559,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has less active participants than old", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1579,9 +1577,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new's attested header's slot is lesser than old's attested header's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1640,9 +1638,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new's attested header's slot is greater than old's attested header's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1701,9 +1699,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("none of the above conditions are met and new signature's slot is less than old signature's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1762,9 +1760,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("none of the above conditions are met and new signature's slot is greater than old signature's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(1), st)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(primitives.Slot(config.AltairForkEpoch*primitives.Epoch(config.SlotsPerEpoch)).Add(2), st)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{

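All of the TestIsBetterUpdate hunks above make one substitution: CreateDefaultLightClientUpdate now takes the signed block built at the top of the test instead of a slot and a state, and each sub-test then differentiates the two defaults through what it sets on them afterwards. A fragment-style sketch of that pattern, meant to be read in the context of the test file above (fewBits and manyBits are placeholder bitvectors, not values from this diff):

// Both updates start from the same default derived from one Altair block...
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
// ...and only what the sub-test sets afterwards (here, the sync aggregates) differs.
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{SyncCommitteeBits: fewBits})
newUpdate.SetSyncAggregate(&pb.SyncAggregate{SyncCommitteeBits: manyBits})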
View File

@@ -13,6 +13,10 @@ type Store struct {
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate
}
func NewLightClientStore() *Store {
return &Store{}
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate) {
s.mu.Lock()
defer s.mu.Unlock()

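NewLightClientStore gives callers a zero-value Store, and the setters (such as SetLastFinalityUpdate above) take the internal mutex. A minimal usage sketch, written as if inside the same package; update stands in for a previously built finality update:

s := NewLightClientStore()
// The setter locks s.mu, so concurrent callers are safe.
s.SetLastFinalityUpdate(update)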
View File

@@ -38,9 +38,9 @@ func TestPersist(t *testing.T) {
t.Run("mixed roots", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
{1}: {{ColumnIndex: 1}},
{2}: {{ColumnIndex: 2}},
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
{Slot: 2, Index: 2},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
@@ -54,8 +54,8 @@ func TestPersist(t *testing.T) {
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
{1}: {{ColumnIndex: 1}},
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
@@ -67,21 +67,24 @@ func TestPersist(t *testing.T) {
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
{}: {{ColumnIndex: 1}, {ColumnIndex: 5}},
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
err := lazilyPersistentStoreColumns.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
key := cacheKey{slot: 0, root: [fieldparams.RootLength]byte{}}
entry := lazilyPersistentStoreColumns.cache.entries[key]
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := lazilyPersistentStoreColumns.cache.entries[key]
require.Equal(t, true, ok)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), entry.diskSummary.Count())
@@ -121,24 +124,37 @@ func TestIsDataAvailable(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
indices := [...]uint64{1, 17, 87, 102}
dataColumnsParams := make([]util.DataColumnParams, 0, len(indices))
dataColumnsParams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParams{
ColumnIndex: index,
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
}
dataColumnsParamsByBlockRoot := util.DataColumnsParamsByRoot{root: dataColumnsParams}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParamsByBlockRoot)
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
key := cacheKey{root: root}
entry := lazilyPersistentStoreColumns.cache.ensure(key)
@@ -149,7 +165,7 @@ func TestIsDataAvailable(t *testing.T) {
require.NoError(t, err)
}
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
err = lazilyPersistentStoreColumns.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := dataColumnStorage.Get(root, indices[:])
@@ -224,8 +240,8 @@ func TestFullCommitmentsToCheck(t *testing.T) {
}
}
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, dataColumnParamsByBlockRoot util.DataColumnsParamsByRoot) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, parameters []util.DataColumnParam) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, parameters)
roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {

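These hunks swap the root-keyed map of test parameters for a flat []util.DataColumnParam slice; the block root is no longer supplied as a map key but recovered from the generated sidecars, as the cacheKey built from roDataColumns[0].BlockRoot() above shows. A fragment-style sketch of the new shape, in the context of this test file:

// Two sidecars for the same slot; the root comes out of the helper rather than
// going in as a map key.
params := []util.DataColumnParam{
	{Slot: 42, Index: 1},
	{Slot: 42, Index: 5},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, params)
key := cacheKey{slot: 42, root: roDataColumns[0].BlockRoot()}
_ = roSidecars
_ = key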
View File

@@ -28,8 +28,7 @@ func TestEnsureDeleteSetDiskSummary(t *testing.T) {
func TestStash(t *testing.T) {
t.Run("Index too high", func(t *testing.T) {
dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 10_000}}}
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 10_000}})
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
@@ -37,8 +36,7 @@ func TestStash(t *testing.T) {
})
t.Run("Nominal and already existing", func(t *testing.T) {
dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 1}}}
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
var entry dataColumnCacheEntry
err := entry.stash(&roDataColumns[0])
@@ -76,36 +74,30 @@ func TestFilterDataColumns(t *testing.T) {
})
t.Run("Commitments not equal", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}
dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: {{ColumnIndex: 1}}}
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 1}})
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[1] = &roDataColumns[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}
_, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
_, err := dataColumnCacheEntry.filter(roDataColumns[0].BlockRoot(), &commitmentsArray)
require.NotNil(t, err)
})
t.Run("Nominal", func(t *testing.T) {
root := [fieldparams.RootLength]byte{}
commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})
dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: {{ColumnIndex: 3, KzgCommitments: [][]byte{[]byte{3}}}}}
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: 3, KzgCommitments: [][]byte{[]byte{3}}}})
var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
scs[3] = &expected[0]
dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}
actual, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
actual, err := dataColumnCacheEntry.filter(expected[0].BlockRoot(), &commitmentsArray)
require.NoError(t, err)
require.DeepEqual(t, expected, actual)

View File

@@ -59,6 +59,7 @@ go_test(
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -25,7 +25,7 @@ func directoryPermissions() os.FileMode {
var (
errIndexOutOfBounds = errors.New("blob index in file name >= MAX_BLOBS_PER_BLOCK")
errSidecarEmptySSZData = errors.New("sidecar marshalled to an empty ssz byte slice")
errNoBasePath = errors.New("BlobStorage base path not specified in init")
errNoBlobBasePath = errors.New("BlobStorage base path not specified in init")
)
// BlobStorageOption is a functional option for configuring a BlobStorage.
@@ -85,7 +85,7 @@ func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
// Allow tests to set up a different fs using WithFs.
if b.fs == nil {
if b.base == "" {
return nil, errNoBasePath
return nil, errNoBlobBasePath
}
b.base = path.Clean(b.base)
if err := file.MkdirAll(b.base); err != nil {

View File

@@ -160,7 +160,7 @@ func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, slot primitives.Slot
func TestNewBlobStorage(t *testing.T) {
_, err := NewBlobStorage()
require.ErrorIs(t, err, errNoBasePath)
require.ErrorIs(t, err, errNoBlobBasePath)
_, err = NewBlobStorage(WithBasePath(path.Join(t.TempDir(), "good")))
require.NoError(t, err)
}

View File

@@ -50,6 +50,7 @@ var (
errTooManyDataColumns = errors.New("too many data columns")
errWrongSszEncodedDataColumnSidecarSize = errors.New("wrong SSZ encoded data column sidecar size")
errDataColumnSidecarsFromDifferentSlots = errors.New("data column sidecars from different slots")
errNoDataColumnBasePath = errors.New("DataColumnStorage base path not specified in init")
)
type (
@@ -142,7 +143,7 @@ func NewDataColumnStorage(ctx context.Context, opts ...DataColumnStorageOption)
// Allow tests to set up a different fs using WithFs.
if storage.fs == nil {
if storage.base == "" {
return nil, errNoBasePath
return nil, errNoDataColumnBasePath
}
storage.base = path.Clean(storage.base)

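The blob store and the data column store previously shared the errNoBasePath sentinel; splitting it into errNoBlobBasePath and errNoDataColumnBasePath lets in-package callers and tests tell which storage was constructed without a base path. A fragment-style sketch inside the filesystem package (ctx is a placeholder context):

if _, err := NewBlobStorage(); errors.Is(err, errNoBlobBasePath) {
	// blob storage was built without WithBasePath
}
if _, err := NewDataColumnStorage(ctx); errors.Is(err, errNoDataColumnBasePath) {
	// data column storage was built without a base path option
}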
View File

@@ -7,6 +7,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -18,7 +19,7 @@ func TestNewDataColumnStorage(t *testing.T) {
t.Run("No base path", func(t *testing.T) {
_, err := NewDataColumnStorage(ctx)
require.ErrorIs(t, err, errNoBasePath)
require.ErrorIs(t, err, errNoDataColumnBasePath)
})
t.Run("Nominal", func(t *testing.T) {
@@ -40,32 +41,18 @@ func TestWarmCache(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{0}: {
{Slot: 33, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 1
{Slot: 33, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 1
},
{1}: {
{Slot: 128_002, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_002, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{2}: {
{Slot: 128_003, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_003, ColumnIndex: 3, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{3}: {
{Slot: 128_034, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4001
{Slot: 128_034, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4001
},
{4}: {
{Slot: 131_138, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{5}: {
{Slot: 131_138, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{6}: {
{Slot: 131_168, ColumnIndex: 0, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
},
[]util.DataColumnParam{
{Slot: 33, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 1
{Slot: 33, Index: 4, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 1
{Slot: 128_002, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 4000
{Slot: 128_002, Index: 4, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 4000
{Slot: 128_003, Index: 1, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 4000
{Slot: 128_003, Index: 3, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 4000
{Slot: 128_034, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 4001
{Slot: 128_034, Index: 4, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 4001
{Slot: 131_138, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4098
{Slot: 131_138, Index: 1, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4098
{Slot: 131_168, Index: 0, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4099
},
)
@@ -76,29 +63,25 @@ func TestWarmCache(t *testing.T) {
storage.WarmCache()
require.Equal(t, primitives.Epoch(4_000), storage.cache.lowestCachedEpoch)
require.Equal(t, 6, len(storage.cache.cache))
require.Equal(t, 5, len(storage.cache.cache))
summary, ok := storage.cache.get([fieldparams.RootLength]byte{1})
summary, ok := storage.cache.get(verifiedRoDataColumnSidecars[2].BlockRoot())
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_000, mask: [fieldparams.NumberOfColumns]bool{false, false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{2})
summary, ok = storage.cache.get(verifiedRoDataColumnSidecars[4].BlockRoot())
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_000, mask: [fieldparams.NumberOfColumns]bool{false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{3})
summary, ok = storage.cache.get(verifiedRoDataColumnSidecars[6].BlockRoot())
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_001, mask: [fieldparams.NumberOfColumns]bool{false, false, true, false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{4})
summary, ok = storage.cache.get(verifiedRoDataColumnSidecars[8].BlockRoot())
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, false, true}}, summary)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, true, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{5})
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_098, mask: [fieldparams.NumberOfColumns]bool{false, true}}, summary)
summary, ok = storage.cache.get([fieldparams.RootLength]byte{6})
summary, ok = storage.cache.get(verifiedRoDataColumnSidecars[10].BlockRoot())
require.Equal(t, true, ok)
require.DeepEqual(t, DataColumnStorageSummary{epoch: 4_099, mask: [fieldparams.NumberOfColumns]bool{true}}, summary)
}
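The slot annotations in this test, and the <period>/<epoch>/<root>.sszs directory names in the pruning hunks further down, follow one mapping: 32 slots per epoch and 4096 epochs per storage period. A small self-contained sketch of that arithmetic; the constants are assumptions inferred from the comments, not values read out of this diff:

const (
	slotsPerEpoch   = 32   // assumed mainnet SLOTS_PER_EPOCH
	epochsPerPeriod = 4096 // storage period implied by the "Period N" annotations
)

// epochAndPeriod reproduces the mapping used in the comments, e.g.
// slot 128_002 -> epoch 4000, period 0 and slot 131_138 -> epoch 4098, period 1.
func epochAndPeriod(slot uint64) (epoch, period uint64) {
	epoch = slot / slotsPerEpoch
	period = epoch / epochsPerPeriod
	return epoch, period
}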
@@ -112,9 +95,7 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {{ColumnIndex: 12}, {ColumnIndex: 1_000_000}, {ColumnIndex: 48}},
},
[]util.DataColumnParam{{Index: 12}, {Index: 1_000_000}, {Index: 48}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
@@ -125,7 +106,7 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("one of the column index is too large", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{}: {{ColumnIndex: 12}, {ColumnIndex: 1_000_000}, {ColumnIndex: 48}}},
[]util.DataColumnParam{{Index: 12}, {Index: 1_000_000}, {Index: 48}},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
@@ -136,23 +117,34 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("different slots", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {
{Slot: 1, ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{Slot: 2, ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 2, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
},
)
// Create a sidecar with a different slot but the same root.
alteredVerifiedRoDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, 2)
alteredVerifiedRoDataColumnSidecars = append(alteredVerifiedRoDataColumnSidecars, verifiedRoDataColumnSidecars[0])
altered, err := blocks.NewRODataColumnWithRoot(
verifiedRoDataColumnSidecars[1].RODataColumn.DataColumnSidecar,
verifiedRoDataColumnSidecars[0].BlockRoot(),
)
require.NoError(t, err)
verifiedAltered := blocks.NewVerifiedRODataColumn(altered)
alteredVerifiedRoDataColumnSidecars = append(alteredVerifiedRoDataColumnSidecars, verifiedAltered)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
err := dataColumnStorage.Save(verifiedRoDataColumnSidecars)
err = dataColumnStorage.Save(alteredVerifiedRoDataColumnSidecars)
require.ErrorIs(t, err, errDataColumnSidecarsFromDifferentSlots)
})
t.Run("new file - no data columns to save", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{}: {}},
[]util.DataColumnParam{},
)
_, dataColumnStorage := NewEphemeralDataColumnStorageAndFs(t)
@@ -163,11 +155,9 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("new file - different data column size", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{1, 2, 3, 4}},
},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 1, Index: 13, Column: [][]byte{{1}, {2}, {3}, {4}}},
},
)
@@ -179,7 +169,9 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("existing file - wrong incoming SSZ encoded size", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}}},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
},
)
// Save data columns into a file.
@@ -191,7 +183,9 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
// column index and a different SSZ encoded size.
_, verifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 13, DataColumn: []byte{1, 2, 3, 4}}}},
[]util.DataColumnParam{
{Slot: 1, Index: 13, Column: [][]byte{{1}, {2}, {3}, {4}}},
},
)
// Try to rewrite the file.
@@ -202,17 +196,13 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}, // OK if duplicate
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
{2}: {
{ColumnIndex: 12, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 1, Index: 11, Column: [][]byte{{3}, {4}, {5}}},
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}}, // OK if duplicate
{Slot: 1, Index: 13, Column: [][]byte{{6}, {7}, {8}}},
{Slot: 2, Index: 12, Column: [][]byte{{3}, {4}, {5}}},
{Slot: 2, Index: 13, Column: [][]byte{{6}, {7}, {8}}},
},
)
@@ -222,16 +212,12 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}, // OK if duplicate
{ColumnIndex: 15, DataColumn: []byte{2, 3, 4}},
{ColumnIndex: 1, DataColumn: []byte{2, 3, 4}},
},
{3}: {
{ColumnIndex: 6, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 2, DataColumn: []byte{6, 7, 8}},
},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}}, // OK if duplicate
{Slot: 1, Index: 15, Column: [][]byte{{2}, {3}, {4}}},
{Slot: 1, Index: 1, Column: [][]byte{{2}, {3}, {4}}},
{Slot: 3, Index: 6, Column: [][]byte{{3}, {4}, {5}}},
{Slot: 3, Index: 2, Column: [][]byte{{6}, {7}, {8}}},
},
)
@@ -240,51 +226,47 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
type fixture struct {
fileName string
blockRoot [fieldparams.RootLength]byte
expectedIndices [mandatoryNumberOfColumns]byte
dataColumnParams []util.DataColumnParams
dataColumnParams []util.DataColumnParam
}
fixtures := []fixture{
{
fileName: "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{1},
fileName: "0/0/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs",
expectedIndices: [mandatoryNumberOfColumns]byte{
0, nonZeroOffset + 4, 0, 0, 0, 0, 0, 0,
0, 0, 0, nonZeroOffset + 1, nonZeroOffset, nonZeroOffset + 2, 0, nonZeroOffset + 3,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 11, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
{ColumnIndex: 15, DataColumn: []byte{2, 3, 4}},
{ColumnIndex: 1, DataColumn: []byte{2, 3, 4}},
dataColumnParams: []util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 1, Index: 11, Column: [][]byte{{3}, {4}, {5}}},
{Slot: 1, Index: 13, Column: [][]byte{{6}, {7}, {8}}},
{Slot: 1, Index: 15, Column: [][]byte{{2}, {3}, {4}}},
{Slot: 1, Index: 1, Column: [][]byte{{2}, {3}, {4}}},
},
},
{
fileName: "0/0/0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{2},
fileName: "0/0/0x221f88cae2219050d4e9d8c2d0d83cb4c8ce4c84ab1bb3e0b89f3dec36077c4f.sszs",
expectedIndices: [mandatoryNumberOfColumns]byte{
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, nonZeroOffset, nonZeroOffset + 1, 0, 0,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 12, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}},
dataColumnParams: []util.DataColumnParam{
{Slot: 2, Index: 12, Column: [][]byte{{3}, {4}, {5}}},
{Slot: 2, Index: 13, Column: [][]byte{{6}, {7}, {8}}},
},
},
{
fileName: "0/0/0x0300000000000000000000000000000000000000000000000000000000000000.sszs",
blockRoot: [fieldparams.RootLength]byte{3},
fileName: "0/0/0x7b163bd57e1c4c8b5048c5389698098f4c957d62d7ce86f4ffa9bdc75c16a18b.sszs",
expectedIndices: [mandatoryNumberOfColumns]byte{
0, 0, nonZeroOffset + 1, 0, 0, 0, nonZeroOffset, 0,
// The rest is filled with zeroes.
},
dataColumnParams: []util.DataColumnParams{
{ColumnIndex: 6, DataColumn: []byte{3, 4, 5}},
{ColumnIndex: 2, DataColumn: []byte{6, 7, 8}},
dataColumnParams: []util.DataColumnParam{
{Slot: 3, Index: 6, Column: [][]byte{{3}, {4}, {5}}},
{Slot: 3, Index: 2, Column: [][]byte{{6}, {7}, {8}}},
},
},
}
@@ -293,7 +275,7 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
// Build expected data column sidecars.
_, expectedDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{fixture.blockRoot: fixture.dataColumnParams},
fixture.dataColumnParams,
)
// Build expected bytes.
@@ -320,6 +302,8 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
expectedBytes = append(expectedBytes, fixture.expectedIndices[:]...)
expectedBytes = append(expectedBytes, sszEncodedDataColumnSidecars...)
blockRoot := expectedDataColumnSidecars[0].BlockRoot()
// Check the actual content of the file.
actualBytes, err := afero.ReadFile(dataColumnStorage.fs, fixture.fileName)
require.NoError(t, err)
@@ -328,18 +312,18 @@ func TestSaveDataColumnsSidecars(t *testing.T) {
// Check the summary.
indices := map[uint64]bool{}
for _, dataColumnParam := range fixture.dataColumnParams {
indices[dataColumnParam.ColumnIndex] = true
indices[dataColumnParam.Index] = true
}
summary := dataColumnStorage.Summary(fixture.blockRoot)
summary := dataColumnStorage.Summary(blockRoot)
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, indices[index], summary.HasIndex(index))
}
err = dataColumnStorage.Remove(fixture.blockRoot)
err = dataColumnStorage.Remove(blockRoot)
require.NoError(t, err)
summary = dataColumnStorage.Summary(fixture.blockRoot)
summary = dataColumnStorage.Summary(blockRoot)
for index := range uint64(mandatoryNumberOfColumns) {
require.Equal(t, false, summary.HasIndex(index))
}
@@ -362,11 +346,9 @@ func TestGetDataColumnSidecars(t *testing.T) {
t.Run("indices not found", func(t *testing.T) {
_, savedVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 14, DataColumn: []byte{2, 3, 4}},
},
[]util.DataColumnParam{
{Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Index: 14, Column: [][]byte{{2}, {3}, {4}}},
},
)
@@ -374,7 +356,7 @@ func TestGetDataColumnSidecars(t *testing.T) {
err := dataColumnStorage.Save(savedVerifiedRoDataColumnSidecars)
require.NoError(t, err)
verifiedRODataColumnSidecars, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, []uint64{3, 1, 2})
verifiedRODataColumnSidecars, err := dataColumnStorage.Get(savedVerifiedRoDataColumnSidecars[0].BlockRoot(), []uint64{3, 1, 2})
require.NoError(t, err)
require.Equal(t, 0, len(verifiedRODataColumnSidecars))
})
@@ -382,11 +364,9 @@ func TestGetDataColumnSidecars(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
_, expectedVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}},
{ColumnIndex: 14, DataColumn: []byte{2, 3, 4}},
},
[]util.DataColumnParam{
{Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Index: 14, Column: [][]byte{{2}, {3}, {4}}},
},
)
@@ -394,11 +374,13 @@ func TestGetDataColumnSidecars(t *testing.T) {
err := dataColumnStorage.Save(expectedVerifiedRoDataColumnSidecars)
require.NoError(t, err)
verifiedRODataColumnSidecars, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, nil)
root := expectedVerifiedRoDataColumnSidecars[0].BlockRoot()
verifiedRODataColumnSidecars, err := dataColumnStorage.Get(root, nil)
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
verifiedRODataColumnSidecars, err = dataColumnStorage.Get([fieldparams.RootLength]byte{1}, []uint64{12, 13, 14})
verifiedRODataColumnSidecars, err = dataColumnStorage.Get(root, []uint64{12, 13, 14})
require.NoError(t, err)
require.DeepSSZEqual(t, expectedVerifiedRoDataColumnSidecars, verifiedRODataColumnSidecars)
})
@@ -414,15 +396,11 @@ func TestRemove(t *testing.T) {
t.Run("nominal", func(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {
{Slot: 32, ColumnIndex: 10, DataColumn: []byte{1, 2, 3}},
{Slot: 32, ColumnIndex: 11, DataColumn: []byte{2, 3, 4}},
},
{2}: {
{Slot: 33, ColumnIndex: 10, DataColumn: []byte{1, 2, 3}},
{Slot: 33, ColumnIndex: 11, DataColumn: []byte{2, 3, 4}},
},
[]util.DataColumnParam{
{Slot: 32, Index: 10, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 32, Index: 11, Column: [][]byte{{2}, {3}, {4}}},
{Slot: 33, Index: 10, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 33, Index: 11, Column: [][]byte{{2}, {3}, {4}}},
},
)
@@ -430,22 +408,22 @@ func TestRemove(t *testing.T) {
err := dataColumnStorage.Save(inputVerifiedRoDataColumnSidecars)
require.NoError(t, err)
err = dataColumnStorage.Remove([fieldparams.RootLength]byte{1})
err = dataColumnStorage.Remove(inputVerifiedRoDataColumnSidecars[0].BlockRoot())
require.NoError(t, err)
summary := dataColumnStorage.Summary([fieldparams.RootLength]byte{1})
summary := dataColumnStorage.Summary(inputVerifiedRoDataColumnSidecars[0].BlockRoot())
require.Equal(t, primitives.Epoch(0), summary.epoch)
require.Equal(t, uint64(0), summary.Count())
summary = dataColumnStorage.Summary([fieldparams.RootLength]byte{2})
summary = dataColumnStorage.Summary(inputVerifiedRoDataColumnSidecars[3].BlockRoot())
require.Equal(t, primitives.Epoch(1), summary.epoch)
require.Equal(t, uint64(2), summary.Count())
actual, err := dataColumnStorage.Get([fieldparams.RootLength]byte{1}, nil)
actual, err := dataColumnStorage.Get(inputVerifiedRoDataColumnSidecars[0].BlockRoot(), nil)
require.NoError(t, err)
require.Equal(t, 0, len(actual))
actual, err = dataColumnStorage.Get([fieldparams.RootLength]byte{2}, nil)
actual, err = dataColumnStorage.Get(inputVerifiedRoDataColumnSidecars[3].BlockRoot(), nil)
require.NoError(t, err)
require.Equal(t, 2, len(actual))
})
@@ -454,9 +432,9 @@ func TestRemove(t *testing.T) {
func TestClear(t *testing.T) {
_, inputVerifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
{2}: {{ColumnIndex: 13, DataColumn: []byte{6, 7, 8}}},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
{Slot: 2, Index: 13, Column: [][]byte{{6}, {7}, {8}}},
},
)
@@ -465,8 +443,8 @@ func TestClear(t *testing.T) {
require.NoError(t, err)
filePaths := []string{
"0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
"0/0/0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
"0/0/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs",
"0/0/0x221f88cae2219050d4e9d8c2d0d83cb4c8ce4c84ab1bb3e0b89f3dec36077c4f.sszs",
}
for _, filePath := range filePaths {
@@ -492,8 +470,8 @@ func TestMetadata(t *testing.T) {
t.Run("wrong version", func(t *testing.T) {
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{1}: {{ColumnIndex: 12, DataColumn: []byte{1, 2, 3}}},
[]util.DataColumnParam{
{Slot: 1, Index: 12, Column: [][]byte{{1}, {2}, {3}}},
},
)
@@ -503,7 +481,7 @@ func TestMetadata(t *testing.T) {
require.NoError(t, err)
// Alter the version.
const filePath = "0/0/0x0100000000000000000000000000000000000000000000000000000000000000.sszs"
const filePath = "0/0/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
file, err := dataColumnStorage.fs.OpenFile(filePath, os.O_WRONLY, os.FileMode(0600))
require.NoError(t, err)
@@ -643,31 +621,19 @@ func TestPrune(t *testing.T) {
}
_, verifiedRoDataColumnSidecars := util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{0}: {
{Slot: 33, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 1
{Slot: 33, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 1
},
{1}: {
{Slot: 128_002, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_002, ColumnIndex: 4, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{2}: {
{Slot: 128_003, ColumnIndex: 1, DataColumn: []byte{1, 2, 3}}, // Period 0 - Epoch 4000
{Slot: 128_003, ColumnIndex: 3, DataColumn: []byte{2, 3, 4}}, // Period 0 - Epoch 4000
},
{3}: {
{Slot: 131_138, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
{Slot: 131_138, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4098
},
{4}: {
{Slot: 131_169, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
{Slot: 131_169, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 1 - Epoch 4099
},
{5}: {
{Slot: 262_144, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}, // Period 2 - Epoch 8192
{Slot: 262_144, ColumnIndex: 3, DataColumn: []byte{1, 2, 3}}, // Period 2 - Epoch 8292
},
[]util.DataColumnParam{
{Slot: 33, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 1
{Slot: 33, Index: 4, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 1
{Slot: 128_002, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 4000
{Slot: 128_002, Index: 4, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 4000
{Slot: 128_003, Index: 1, Column: [][]byte{{1}, {2}, {3}}}, // Period 0 - Epoch 4000
{Slot: 128_003, Index: 3, Column: [][]byte{{2}, {3}, {4}}}, // Period 0 - Epoch 4000
{Slot: 131_138, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4098
{Slot: 131_138, Index: 3, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4098
{Slot: 131_169, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4099
{Slot: 131_169, Index: 3, Column: [][]byte{{1}, {2}, {3}}}, // Period 1 - Epoch 4099
{Slot: 262_144, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 2 - Epoch 8192
{Slot: 262_144, Index: 3, Column: [][]byte{{1}, {2}, {3}}}, // Period 2 - Epoch 8192
},
)
@@ -696,31 +662,31 @@ func TestPrune(t *testing.T) {
dirs, err = listDir(dataColumnStorage.fs, "0/1")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0000000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0x775283f428813c949b7e8af07f01fef9790137f021b3597ad2d0d81e8be8f0f0.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "0/4000")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{
"0x0200000000000000000000000000000000000000000000000000000000000000.sszs",
"0x0100000000000000000000000000000000000000000000000000000000000000.sszs",
"0x9977031132157ebb9c81bce952003ce07a4f54e921ca63b7693d1562483fdf9f.sszs",
"0xb2b14d9d858fa99b70f0405e4e39f38e51e36dd9a70343c109e24eeb5f77e369.sszs",
}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4098")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0300000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0x5106745cdd6b1aa3602ef4d000ef373af672019264c167fa4bd641a1094aa5c5.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "1/4099")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0400000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0x4e5f2bd5bb84bf0422af8edd1cc5a52cc6cea85baf3d66d172fe41831ac1239c.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2/8192")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0500000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0xa8adba7446eb56a01a9dd6d55e9c3990b10c91d43afb77847b4a21ac4ee62527.sszs"}, dirs))
_, verifiedRoDataColumnSidecars = util.CreateTestVerifiedRoDataColumnSidecars(
t,
util.DataColumnsParamsByRoot{
{6}: {{Slot: 451_141, ColumnIndex: 2, DataColumn: []byte{1, 2, 3}}}, // Period 3 - Epoch 14_098
[]util.DataColumnParam{
{Slot: 451_141, Index: 2, Column: [][]byte{{1}, {2}, {3}}}, // Period 3 - Epoch 14_098
},
)
@@ -748,14 +714,14 @@ func TestPrune(t *testing.T) {
dirs, err = listDir(dataColumnStorage.fs, "1/4099")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0400000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0x4e5f2bd5bb84bf0422af8edd1cc5a52cc6cea85baf3d66d172fe41831ac1239c.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "2/8192")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0500000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0xa8adba7446eb56a01a9dd6d55e9c3990b10c91d43afb77847b4a21ac4ee62527.sszs"}, dirs))
dirs, err = listDir(dataColumnStorage.fs, "3/14098")
require.NoError(t, err)
require.Equal(t, true, compareSlices([]string{"0x0600000000000000000000000000000000000000000000000000000000000000.sszs"}, dirs))
require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
})
}

View File

@@ -153,6 +153,13 @@ func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, []
}
m = bootstrap
syncCommitteeHash = enc[len(altairKey) : len(altairKey)+32]
case hasBellatrixKey(enc):
bootstrap := &ethpb.LightClientBootstrapAltair{}
if err := bootstrap.UnmarshalSSZ(enc[len(bellatrixKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Bellatrix light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(bellatrixKey) : len(bellatrixKey)+32]
case hasCapellaKey(enc):
bootstrap := &ethpb.LightClientBootstrapCapella{}
if err := bootstrap.UnmarshalSSZ(enc[len(capellaKey)+32:]); err != nil {
@@ -265,6 +272,12 @@ func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
return nil, errors.Wrap(err, "could not unmarshal Altair light client update")
}
m = update
case hasBellatrixKey(enc):
update := &ethpb.LightClientUpdateAltair{}
if err := update.UnmarshalSSZ(enc[len(bellatrixKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Bellatrix light client update")
}
m = update
case hasCapellaKey(enc):
update := &ethpb.LightClientUpdateCapella{}
if err := update.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
@@ -297,6 +310,8 @@ func keyForLightClientUpdate(v int) ([]byte, error) {
return denebKey, nil
case version.Capella:
return capellaKey, nil
case version.Bellatrix:
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
default:

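The new Bellatrix branches reuse the Altair SSZ types and differ only in the key prefix. From the slicing above, a stored bootstrap is laid out as bellatrixKey, then a 32-byte sync committee hash, then the SSZ bytes, while an update is bellatrixKey followed directly by SSZ. A sketch of an encoder matching that layout; the encoding side is not part of this diff, so the function below is an assumption:

// encodeBellatrixBootstrap produces bytes in the shape decodeLightClientBootstrap
// expects for the Bellatrix case: key || syncCommitteeHash[32] || SSZ.
func encodeBellatrixBootstrap(b *ethpb.LightClientBootstrapAltair, syncCommitteeHash [32]byte) ([]byte, error) {
	ssz, err := b.MarshalSSZ()
	if err != nil {
		return nil, err
	}
	enc := make([]byte, 0, len(bellatrixKey)+32+len(ssz))
	enc = append(enc, bellatrixKey...)
	enc = append(enc, syncCommitteeHash[:]...)
	return append(enc, ssz...), nil
}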
View File

@@ -46,7 +46,21 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.AltairForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
BodyRoot: sampleRoot,
},
})
require.NoError(t, err)
st, err = util.NewBeaconState()
require.NoError(t, err)
case version.Bellatrix:
slot = primitives.Slot(config.BellatrixForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -60,7 +74,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.CapellaForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -88,7 +102,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.DenebForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -116,7 +130,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.ElectraForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -144,7 +158,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.FuluForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -192,71 +206,30 @@ func TestStore_LightClientUpdate_CanSaveRetrieve(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 3
cfg.BellatrixForkEpoch = 1
cfg.CapellaForkEpoch = 2
cfg.DenebForkEpoch = 3
cfg.ElectraForkEpoch = 4
cfg.FuluForkEpoch = 5
params.OverrideBeaconConfig(cfg)
db := setupDB(t)
ctx := t.Context()
t.Run("Altair", func(t *testing.T) {
update, err := createUpdate(t, version.Altair)
require.NoError(t, err)
period := uint64(1)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
update, err := createUpdate(t, testVersion)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Capella", func(t *testing.T) {
update, err := createUpdate(t, version.Capella)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Deneb", func(t *testing.T) {
update, err := createUpdate(t, version.Deneb)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Electra", func(t *testing.T) {
update, err := createUpdate(t, version.Electra)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Fulu", func(t *testing.T) {
update, err := createUpdate(t, version.Fulu)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
}
}
func TestStore_LightClientUpdates_canRetrieveRange(t *testing.T) {
@@ -584,12 +557,21 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.BellatrixForkEpoch = 1
cfg.CapellaForkEpoch = 2
cfg.DenebForkEpoch = 3
cfg.ElectraForkEpoch = 4
cfg.EpochsPerSyncCommitteePeriod = 1
params.OverrideBeaconConfig(cfg)
versionToForkEpoch := map[int]primitives.Epoch{
version.Altair: params.BeaconConfig().AltairForkEpoch,
version.Bellatrix: params.BeaconConfig().BellatrixForkEpoch,
version.Capella: params.BeaconConfig().CapellaForkEpoch,
version.Deneb: params.BeaconConfig().DenebForkEpoch,
version.Electra: params.BeaconConfig().ElectraForkEpoch,
}
db := setupDB(t)
ctx := t.Context()
@@ -599,89 +581,38 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
require.IsNil(t, retrievedBootstrap)
})
t.Run("Altair", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(versionToForkEpoch[testVersion]) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair"), bootstrap)
require.NoError(t, err)
blockRoot := []byte("blockRootAltair" + version.String(testVersion))
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
err = db.SaveLightClientBootstrap(ctx, blockRoot, bootstrap)
require.NoError(t, err)
t.Run("Capella", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().CapellaForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootCapella"), bootstrap)
require.NoError(t, err)
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootCapella"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Deneb", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().DenebForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootDeneb"), bootstrap)
require.NoError(t, err)
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootDeneb"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Electra", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().ElectraForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootElectra"), bootstrap)
require.NoError(t, err)
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootElectra"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
retrievedBootstrap, err := db.LightClientBootstrap(ctx, blockRoot)
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
if testVersion >= version.Electra {
savedBranch, err := bootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
} else {
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
}
})
}
}
func TestStore_LightClientBootstrap_MultipleBootstrapsWithSameSyncCommittee(t *testing.T) {
@@ -839,6 +770,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -851,6 +783,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -877,6 +810,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -905,6 +839,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),

View File

@@ -87,45 +87,47 @@ type serviceFlagOpts struct {
// full PoS node. It handles the lifecycle of the entire system and registers
// services to a service registry.
type BeaconNode struct {
cliCtx *cli.Context
ctx context.Context
cancel context.CancelFunc
services *runtime.ServiceRegistry
lock sync.RWMutex
stop chan struct{} // Channel to wait for termination notifications.
db db.Database
slasherDB db.SlasherDatabase
attestationCache *cache.AttestationCache
attestationPool attestations.Pool
exitPool voluntaryexits.PoolManager
slashingsPool slashings.PoolManager
syncCommitteePool synccommittee.Pool
blsToExecPool blstoexec.PoolManager
depositCache cache.DepositCache
trackedValidatorsCache *cache.TrackedValidatorsCache
payloadIDCache *cache.PayloadIDCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
stateGen *stategen.State
collector *bcnodeCollector
slasherBlockHeadersFeed *event.Feed
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
GenesisInitializer genesis.Initializer
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
BlobStorageOptions []filesystem.BlobStorageOption
custodyInfo *peerdas.CustodyInfo
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
slasherEnabled bool
lcStore *lightclient.Store
cliCtx *cli.Context
ctx context.Context
cancel context.CancelFunc
services *runtime.ServiceRegistry
lock sync.RWMutex
stop chan struct{} // Channel to wait for termination notifications.
db db.Database
slasherDB db.SlasherDatabase
attestationCache *cache.AttestationCache
attestationPool attestations.Pool
exitPool voluntaryexits.PoolManager
slashingsPool slashings.PoolManager
syncCommitteePool synccommittee.Pool
blsToExecPool blstoexec.PoolManager
depositCache cache.DepositCache
trackedValidatorsCache *cache.TrackedValidatorsCache
payloadIDCache *cache.PayloadIDCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
stateGen *stategen.State
collector *bcnodeCollector
slasherBlockHeadersFeed *event.Feed
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
GenesisInitializer genesis.Initializer
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
BlobStorageOptions []filesystem.BlobStorageOption
DataColumnStorage *filesystem.DataColumnStorage
DataColumnStorageOptions []filesystem.DataColumnStorageOption
custodyInfo *peerdas.CustodyInfo
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
slasherEnabled bool
lcStore *lightclient.Store
}
// New creates a new node instance, sets up configuration options, and registers
@@ -165,7 +167,6 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
syncChecker: &initialsync.SyncChecker{},
custodyInfo: &peerdas.CustodyInfo{},
slasherEnabled: cliCtx.Bool(flags.SlasherFlag.Name),
lcStore: &lightclient.Store{},
}
for _, opt := range opts {
@@ -193,6 +194,15 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.BlobStorage = blobs
}
if beacon.DataColumnStorage == nil {
dataColumnStorage, err := filesystem.NewDataColumnStorage(cliCtx.Context, beacon.DataColumnStorageOptions...)
if err != nil {
return nil, errors.Wrap(err, "new data column storage")
}
beacon.DataColumnStorage = dataColumnStorage
}
bfs, err := startBaseServices(cliCtx, beacon, depositAddress)
if err != nil {
return nil, errors.Wrap(err, "could not start modules")
@@ -224,6 +234,10 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
// their initialization.
beacon.finalizedStateAtStartUp = nil
if features.Get().EnableLightClient {
beacon.lcStore = lightclient.NewLightClientStore()
}
return beacon, nil
}
@@ -780,6 +794,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithClockSynchronizer(gs),
blockchain.WithSyncComplete(syncComplete),
blockchain.WithBlobStorage(b.BlobStorage),
blockchain.WithDataColumnStorage(b.DataColumnStorage),
blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
blockchain.WithPayloadIDCache(b.payloadIDCache),
blockchain.WithSyncChecker(b.syncChecker),
@@ -868,8 +883,10 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithInitialSyncComplete(initialSyncComplete),
regularsync.WithStateNotifier(b),
regularsync.WithBlobStorage(b.BlobStorage),
regularsync.WithDataColumnStorage(b.DataColumnStorage),
regularsync.WithVerifierWaiter(b.verifyInitWaiter),
regularsync.WithAvailableBlocker(bFillStore),
regularsync.WithCustodyInfo(b.custodyInfo),
regularsync.WithSlasherEnabled(b.slasherEnabled),
regularsync.WithLightClientStore(b.lcStore),
)

View File

@@ -54,7 +54,12 @@ func TestNodeClose_OK(t *testing.T) {
cmd.ValidatorMonitorIndicesFlag.Value.SetInt(1)
ctx, cancel := newCliContextWithCancel(&app, set)
node, err := New(ctx, cancel, WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
options := []Option{
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
node, err := New(ctx, cancel, options...)
require.NoError(t, err)
node.Close()
@@ -72,10 +77,16 @@ func TestNodeStart_Ok(t *testing.T) {
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
node, err := New(ctx, cancel, WithBlockchainFlagOptions([]blockchain.Option{}),
options := []Option{
WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
node, err := New(ctx, cancel, options...)
require.NoError(t, err)
node.services = &runtime.ServiceRegistry{}
go func() {
@@ -96,10 +107,16 @@ func TestNodeStart_SyncChecker(t *testing.T) {
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
node, err := New(ctx, cancel, WithBlockchainFlagOptions([]blockchain.Option{}),
options := []Option{
WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
node, err := New(ctx, cancel, options...)
require.NoError(t, err)
go func() {
node.Start()
@@ -128,10 +145,13 @@ func TestClearDB(t *testing.T) {
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
context, cancel := newCliContextWithCancel(&app, set)
options := []Option{
WithExecutionChainOptions([]execution.Option{execution.WithHttpEndpoint(endpoint)}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
_, err = New(context, cancel, options...)
require.NoError(t, err)
require.LogsContain(t, hook, "Removing database")

View File

@@ -50,3 +50,20 @@ func WithBlobStorageOptions(opt ...filesystem.BlobStorageOption) Option {
return nil
}
}
// WithDataColumnStorage sets the DataColumnStorage backend for the BeaconNode
func WithDataColumnStorage(bs *filesystem.DataColumnStorage) Option {
return func(bn *BeaconNode) error {
bn.DataColumnStorage = bs
return nil
}
}
// WithDataColumnStorageOptions appends 1 or more filesystem.DataColumnStorageOption on the beacon node,
// to be used when initializing data column storage.
func WithDataColumnStorageOptions(opt ...filesystem.DataColumnStorageOption) Option {
return func(bn *BeaconNode) error {
bn.DataColumnStorageOptions = append(bn.DataColumnStorageOptions, opt...)
return nil
}
}

View File

@@ -712,7 +712,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
subnet := peerdas.ComputeSubnetForDataColumnSidecar(columnIndex)
topic := fmt.Sprintf(topicFormat, digest, subnet)
roSidecars, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{{}: {{ColumnIndex: columnIndex}}})
roSidecars, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{{Index: columnIndex}})
sidecar := roSidecars[0].DataColumnSidecar
// Async listen for the pubsub, must be before the broadcast.

View File

@@ -22,44 +22,52 @@ const (
SchemaVersionV3 = "/3"
)
// Specifies the protocol prefix for all our Req/Resp topics.
const protocolPrefix = "/eth2/beacon_chain/req"
const (
// Specifies the protocol prefix for all our Req/Resp topics.
protocolPrefix = "/eth2/beacon_chain/req"
// StatusMessageName specifies the name for the status message topic.
const StatusMessageName = "/status"
// StatusMessageName specifies the name for the status message topic.
StatusMessageName = "/status"
// GoodbyeMessageName specifies the name for the goodbye message topic.
const GoodbyeMessageName = "/goodbye"
// GoodbyeMessageName specifies the name for the goodbye message topic.
GoodbyeMessageName = "/goodbye"
// BeaconBlocksByRangeMessageName specifies the name for the beacon blocks by range message topic.
const BeaconBlocksByRangeMessageName = "/beacon_blocks_by_range"
// BeaconBlocksByRangeMessageName specifies the name for the beacon blocks by range message topic.
BeaconBlocksByRangeMessageName = "/beacon_blocks_by_range"
// BeaconBlocksByRootsMessageName specifies the name for the beacon blocks by root message topic.
const BeaconBlocksByRootsMessageName = "/beacon_blocks_by_root"
// BeaconBlocksByRootsMessageName specifies the name for the beacon blocks by root message topic.
BeaconBlocksByRootsMessageName = "/beacon_blocks_by_root"
// PingMessageName specifies the name for the ping message topic.
const PingMessageName = "/ping"
// PingMessageName specifies the name for the ping message topic.
PingMessageName = "/ping"
// MetadataMessageName specifies the name for the metadata message topic.
const MetadataMessageName = "/metadata"
// MetadataMessageName specifies the name for the metadata message topic.
MetadataMessageName = "/metadata"
// BlobSidecarsByRangeName is the name for the BlobSidecarsByRange v1 message topic.
const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRangeName is the name for the BlobSidecarsByRange v1 message topic.
BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
BlobSidecarsByRootName = "/blob_sidecars_by_root"
// LightClientBootstrapName is the name for the LightClientBootstrap message topic.
const LightClientBootstrapName = "/light_client_bootstrap"
// LightClientBootstrapName is the name for the LightClientBootstrap message topic.
LightClientBootstrapName = "/light_client_bootstrap"
// LightClientUpdatesByRangeName is the name for the LightClientUpdatesByRange topic.
const LightClientUpdatesByRangeName = "/light_client_updates_by_range"
// LightClientUpdatesByRangeName is the name for the LightClientUpdatesByRange topic.
LightClientUpdatesByRangeName = "/light_client_updates_by_range"
// LightClientFinalityUpdateName is the name for the LightClientFinalityUpdate topic.
const LightClientFinalityUpdateName = "/light_client_finality_update"
// LightClientFinalityUpdateName is the name for the LightClientFinalityUpdate topic.
LightClientFinalityUpdateName = "/light_client_finality_update"
// LightClientOptimisticUpdateName is the name for the LightClientOptimisticUpdate topic.
const LightClientOptimisticUpdateName = "/light_client_optimistic_update"
// LightClientOptimisticUpdateName is the name for the LightClientOptimisticUpdate topic.
LightClientOptimisticUpdateName = "/light_client_optimistic_update"
// DataColumnSidecarsByRootName is the name for the DataColumnSidecarsByRoot v1 message topic.
DataColumnSidecarsByRootName = "/data_column_sidecars_by_root"
// DataColumnSidecarsByRangeName is the name for the DataColumnSidecarsByRange v1 message topic.
DataColumnSidecarsByRangeName = "/data_column_sidecars_by_range"
)
const (
// V1 RPC Topics
@@ -92,6 +100,12 @@ const (
RPCLightClientFinalityUpdateTopicV1 = protocolPrefix + LightClientFinalityUpdateName + SchemaVersionV1
// RPCLightClientOptimisticUpdateTopicV1 is a topic for requesting a light client Optimistic update.
RPCLightClientOptimisticUpdateTopicV1 = protocolPrefix + LightClientOptimisticUpdateName + SchemaVersionV1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1 - New in Fulu.
RPCDataColumnSidecarsByRootTopicV1 = protocolPrefix + DataColumnSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1 - New in Fulu.
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
@@ -112,87 +126,103 @@ const (
)
// RPCTopicMappings map the base message type to the rpc request.
var RPCTopicMappings = map[string]interface{}{
// RPC Status Message
RPCStatusTopicV1: new(pb.Status),
// RPC Goodbye Message
RPCGoodByeTopicV1: new(primitives.SSZUint64),
// RPC Block By Range Message
RPCBlocksByRangeTopicV1: new(pb.BeaconBlocksByRangeRequest),
RPCBlocksByRangeTopicV2: new(pb.BeaconBlocksByRangeRequest),
// RPC Block By Root Message
RPCBlocksByRootTopicV1: new(p2ptypes.BeaconBlockByRootsReq),
RPCBlocksByRootTopicV2: new(p2ptypes.BeaconBlockByRootsReq),
// RPC Ping Message
RPCPingTopicV1: new(primitives.SSZUint64),
// RPC Metadata Message
RPCMetaDataTopicV1: new(interface{}),
RPCMetaDataTopicV2: new(interface{}),
RPCMetaDataTopicV3: new(interface{}),
// BlobSidecarsByRange v1 Message
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
var (
RPCTopicMappings = map[string]interface{}{
// RPC Status Message
RPCStatusTopicV1: new(pb.Status),
// Light client
RPCLightClientBootstrapTopicV1: new([fieldparams.RootLength]byte),
RPCLightClientUpdatesByRangeTopicV1: new(pb.LightClientUpdatesByRangeRequest),
RPCLightClientFinalityUpdateTopicV1: new(interface{}),
RPCLightClientOptimisticUpdateTopicV1: new(interface{}),
}
// RPC Goodbye Message
RPCGoodByeTopicV1: new(primitives.SSZUint64),
// Maps all registered protocol prefixes.
var protocolMapping = map[string]bool{
protocolPrefix: true,
}
// RPC Block By Range Message
RPCBlocksByRangeTopicV1: new(pb.BeaconBlocksByRangeRequest),
RPCBlocksByRangeTopicV2: new(pb.BeaconBlocksByRangeRequest),
// Maps all the protocol message names for the different rpc
// topics.
var messageMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
LightClientBootstrapName: true,
LightClientUpdatesByRangeName: true,
LightClientFinalityUpdateName: true,
LightClientOptimisticUpdateName: true,
}
// RPC Block By Root Message
RPCBlocksByRootTopicV1: new(p2ptypes.BeaconBlockByRootsReq),
RPCBlocksByRootTopicV2: new(p2ptypes.BeaconBlockByRootsReq),
// Maps all the RPC messages which are to be updated in altair.
var altairMapping = map[string]bool{
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
MetadataMessageName: true,
}
// RPC Ping Message
RPCPingTopicV1: new(primitives.SSZUint64),
// Maps all the RPC messages which are to be updated in fulu.
var fuluMapping = map[string]bool{
MetadataMessageName: true,
}
// RPC Metadata Message
RPCMetaDataTopicV1: new(interface{}),
RPCMetaDataTopicV2: new(interface{}),
RPCMetaDataTopicV3: new(interface{}),
var versionMapping = map[string]bool{
SchemaVersionV1: true,
SchemaVersionV2: true,
SchemaVersionV3: true,
}
// BlobSidecarsByRange v1 Message
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// OmitContextBytesV1 keeps track of which RPC methods do not write context bytes in their v1 incarnations.
// Phase0 did not have the notion of context bytes, which prefix wire-encoded values with a [4]byte identifier
// to convey the schema for the receiver to use. These RPCs had a version bump to V2 when the context byte encoding
// was introduced. For other RPC methods, context bytes are always required.
var OmitContextBytesV1 = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
}
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
// Light client
RPCLightClientBootstrapTopicV1: new([fieldparams.RootLength]byte),
RPCLightClientUpdatesByRangeTopicV1: new(pb.LightClientUpdatesByRangeRequest),
RPCLightClientFinalityUpdateTopicV1: new(interface{}),
RPCLightClientOptimisticUpdateTopicV1: new(interface{}),
// DataColumnSidecarsByRange v1 Message
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnsByRootIdentifiers),
}
// Maps all registered protocol prefixes.
protocolMapping = map[string]bool{
protocolPrefix: true,
}
// Maps all the protocol message names for the different rpc topics.
messageMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
LightClientBootstrapName: true,
LightClientUpdatesByRangeName: true,
LightClientFinalityUpdateName: true,
LightClientOptimisticUpdateName: true,
DataColumnSidecarsByRootName: true,
DataColumnSidecarsByRangeName: true,
}
// Maps all the RPC messages which are to be updated in altair.
altairMapping = map[string]string{
BeaconBlocksByRangeMessageName: SchemaVersionV2,
BeaconBlocksByRootsMessageName: SchemaVersionV2,
MetadataMessageName: SchemaVersionV2,
}
// Maps all the RPC messages which are to be updated in fulu.
fuluMapping = map[string]string{
MetadataMessageName: SchemaVersionV3,
}
versionMapping = map[string]bool{
SchemaVersionV1: true,
SchemaVersionV2: true,
SchemaVersionV3: true,
}
// OmitContextBytesV1 keeps track of which RPC methods do not write context bytes in their v1 incarnations.
// Phase0 did not have the notion of context bytes, which prefix wire-encoded values with a [4]byte identifier
// to convey the schema for the receiver to use. These RPCs had a version bump to V2 when the context byte encoding
// was introduced. For other RPC methods, context bytes are always required.
OmitContextBytesV1 = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
}
)
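As an aside on the context-bytes convention documented for OmitContextBytesV1 above: for methods not listed there, a 4-byte fork digest precedes the SSZ-encoded payload so the receiver knows which schema to decode with. A minimal hypothetical sketch of that framing follows; the helper name is an assumption for illustration and is not part of this package.

func writeWithContextBytes(forkDigest [4]byte, sszPayload []byte) []byte {
	// Prepend the 4-byte context (fork digest) before the SSZ-encoded value.
	out := make([]byte, 0, len(forkDigest)+len(sszPayload))
	out = append(out, forkDigest[:]...)
	return append(out, sszPayload...)
}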
// VerifyTopicMapping verifies that the topic and its accompanying
// message type is correct.
@@ -314,13 +344,17 @@ func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
beaconConfig := params.BeaconConfig()
// Check if the message is to be updated in fulu.
if epoch >= beaconConfig.FuluForkEpoch && fuluMapping[msg] {
return protocolPrefix + msg + SchemaVersionV3, nil
if epoch >= beaconConfig.FuluForkEpoch {
if version, ok := fuluMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}
}
// Check if the message is to be updated in altair.
if epoch >= beaconConfig.AltairForkEpoch && altairMapping[msg] {
return protocolPrefix + msg + SchemaVersionV2, nil
if epoch >= beaconConfig.AltairForkEpoch {
if version, ok := altairMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}
}
return protocolPrefix + msg + SchemaVersionV1, nil
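For orientation, a short sketch of how the fork-aware mappings above resolve topics, mirroring the assertions in the tests further down; fuluEpoch is a placeholder for any epoch at or after the Fulu fork.

// Assuming fuluEpoch >= FuluForkEpoch (and therefore also >= AltairForkEpoch):
topic, _ := TopicFromMessage(MetadataMessageName, fuluEpoch)           // "/eth2/beacon_chain/req/metadata/3"
topic, _ = TopicFromMessage(BeaconBlocksByRangeMessageName, fuluEpoch) // "/eth2/beacon_chain/req/beacon_blocks_by_range/2"
topic, _ = TopicFromMessage(GoodbyeMessageName, fuluEpoch)             // "/eth2/beacon_chain/req/goodbye/1"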

View File

@@ -119,50 +119,31 @@ func TestTopicFromMessage_CorrectType(t *testing.T) {
})
t.Run("after altair fork but before fulu fork", func(t *testing.T) {
for m := range messageMapping {
topic, err := TopicFromMessage(m, altairForkEpoch)
require.NoError(t, err)
// Not modified in altair fork.
topic, err := TopicFromMessage(GoodbyeMessageName, altairForkEpoch)
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/goodbye/1", topic)
if altairMapping[m] {
require.Equal(t, true, strings.Contains(topic, SchemaVersionV2))
_, _, version, err := TopicDeconstructor(topic)
require.NoError(t, err)
require.Equal(t, SchemaVersionV2, version)
continue
}
require.Equal(t, true, strings.Contains(topic, SchemaVersionV1))
_, _, version, err := TopicDeconstructor(topic)
require.NoError(t, err)
require.Equal(t, SchemaVersionV1, version)
}
// Modified in altair fork.
topic, err = TopicFromMessage(MetadataMessageName, altairForkEpoch)
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/metadata/2", topic)
})
t.Run("after fulu fork", func(t *testing.T) {
for m := range messageMapping {
topic, err := TopicFromMessage(m, fuluForkEpoch)
require.NoError(t, err)
// Not modified in any fork.
topic, err := TopicFromMessage(GoodbyeMessageName, fuluForkEpoch)
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/goodbye/1", topic)
if fuluMapping[m] {
require.Equal(t, true, strings.Contains(topic, SchemaVersionV3))
_, _, version, err := TopicDeconstructor(topic)
require.NoError(t, err)
require.Equal(t, SchemaVersionV3, version)
continue
}
// Modified in altair fork.
topic, err = TopicFromMessage(BeaconBlocksByRangeMessageName, fuluForkEpoch)
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/beacon_blocks_by_range/2", topic)
if altairMapping[m] {
require.Equal(t, true, strings.Contains(topic, SchemaVersionV2))
_, _, version, err := TopicDeconstructor(topic)
require.NoError(t, err)
require.Equal(t, SchemaVersionV2, version)
continue
}
require.Equal(t, true, strings.Contains(topic, SchemaVersionV1))
_, _, version, err := TopicDeconstructor(topic)
require.NoError(t, err)
require.Equal(t, SchemaVersionV1, version)
}
// Modified both in altair and fulu fork.
topic, err = TopicFromMessage(MetadataMessageName, fuluForkEpoch)
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/metadata/3", topic)
})
}

View File

@@ -25,6 +25,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
@@ -45,10 +46,10 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
],
)

View File

@@ -9,10 +9,13 @@ var (
ErrInvalidSequenceNum = errors.New("invalid sequence number provided")
ErrGeneric = errors.New("internal service error")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrMaxDataColumnReqExceeded = errors.New("requested more than MAX_REQUEST_DATA_COLUMN_SIDECARS")
ErrResourceUnavailable = errors.New("resource requested unavailable")
)

View File

@@ -5,19 +5,18 @@ package types
import (
"bytes"
"encoding/binary"
"sort"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/ssz"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
fastssz "github.com/prysmaticlabs/fastssz"
)
const (
maxErrorLength = 256
bytesPerLengthOffset = 4
maxErrorLength = 256
)
// SSZBytes is a bytes slice that satisfies the fast-ssz interface.
@@ -25,11 +24,11 @@ type SSZBytes []byte
// HashTreeRoot hashes the byte slice following the SSZ standard.
func (b *SSZBytes) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(b)
return fastssz.HashWithDefaultHasher(b)
}
// HashTreeRootWith hashes the byte slice with the given hasher.
func (b *SSZBytes) HashTreeRootWith(hh *ssz.Hasher) error {
func (b *SSZBytes) HashTreeRootWith(hh *fastssz.Hasher) error {
indx := hh.Index()
hh.PutBytes(*b)
hh.Merkleize(indx)
@@ -74,7 +73,7 @@ func (r *BeaconBlockByRootsReq) UnmarshalSSZ(buf []byte) error {
return errors.Errorf("expected buffer with length of up to %d but received length %d", maxLength, bufLen)
}
if bufLen%fieldparams.RootLength != 0 {
return ssz.ErrIncorrectByteSize
return fastssz.ErrIncorrectByteSize
}
numOfRoots := bufLen / fieldparams.RootLength
roots := make([][fieldparams.RootLength]byte, 0, numOfRoots)
@@ -128,17 +127,13 @@ func (m *ErrorMessage) UnmarshalSSZ(buf []byte) error {
return nil
}
var BlobSidecarsByRootReqSerdes = ssz.NewListFixedElementSerdes[*eth.BlobIdentifier](func() *eth.BlobIdentifier {
return &eth.BlobIdentifier{}
})
// BlobSidecarsByRootReq is used to specify a list of blob targets (root+index) in a BlobSidecarsByRoot RPC request.
type BlobSidecarsByRootReq []*eth.BlobIdentifier
// BlobIdentifier is a fixed size value, so we can compute its fixed size at start time (see init below)
var blobIdSize int
// SizeSSZ returns the size of the serialized representation.
func (b *BlobSidecarsByRootReq) SizeSSZ() int {
return len(*b) * blobIdSize
}
// MarshalSSZTo appends the serialized BlobSidecarsByRootReq value to the provided byte slice.
func (b *BlobSidecarsByRootReq) MarshalSSZTo(dst []byte) ([]byte, error) {
// A List without an enclosing container is marshaled exactly like a vector, no length offset required.
@@ -151,38 +146,20 @@ func (b *BlobSidecarsByRootReq) MarshalSSZTo(dst []byte) ([]byte, error) {
// MarshalSSZ serializes the BlobSidecarsByRootReq value to a byte slice.
func (b *BlobSidecarsByRootReq) MarshalSSZ() ([]byte, error) {
buf := make([]byte, len(*b)*blobIdSize)
for i, id := range *b {
by, err := id.MarshalSSZ()
if err != nil {
return nil, err
}
copy(buf[i*blobIdSize:(i+1)*blobIdSize], by)
}
return buf, nil
return BlobSidecarsByRootReqSerdes.Marshal(*b)
}
// UnmarshalSSZ unmarshals the provided bytes buffer into the
// BlobSidecarsByRootReq value.
func (b *BlobSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
bufLen := len(buf)
maxLength := int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) * blobIdSize
if bufLen > maxLength {
return errors.Wrapf(ssz.ErrIncorrectListSize, "expected buffer with length of up to %d but received length %d", maxLength, bufLen)
v, err := BlobSidecarsByRootReqSerdes.Unmarshal(buf)
if err != nil {
return errors.Wrapf(err, "failed to unmarshal BlobSidecarsByRootReq")
}
if bufLen%blobIdSize != 0 {
return errors.Wrapf(ssz.ErrIncorrectByteSize, "size=%d", bufLen)
}
count := bufLen / blobIdSize
*b = make([]*eth.BlobIdentifier, count)
for i := 0; i < count; i++ {
id := &eth.BlobIdentifier{}
err := id.UnmarshalSSZ(buf[i*blobIdSize : (i+1)*blobIdSize])
if err != nil {
return err
}
(*b)[i] = id
if len(v) > int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) {
return ErrMaxBlobReqExceeded
}
*b = v
return nil
}
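A minimal round-trip sketch of the serdes-backed encoding above; the BlockRoot and Index field names on eth.BlobIdentifier are assumed from the proto definition, and error handling is elided.

// Sketch: encode two identifiers with the generic fixed-element serdes and decode them back.
req := BlobSidecarsByRootReq{
	{BlockRoot: make([]byte, fieldparams.RootLength), Index: 0},
	{BlockRoot: make([]byte, fieldparams.RootLength), Index: 1},
}
enc, err := req.MarshalSSZ()
if err != nil {
	// handle error
}
var decoded BlobSidecarsByRootReq
if err := decoded.UnmarshalSSZ(enc); err != nil {
	// handle error
}
// decoded now holds the same two identifiers as req.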
@@ -213,106 +190,32 @@ func (s BlobSidecarsByRootReq) Len() int {
// ====================================
// DataColumnsByRootIdentifiers section
// ====================================
var _ ssz.Marshaler = (*DataColumnsByRootIdentifiers)(nil)
var _ ssz.Unmarshaler = (*DataColumnsByRootIdentifiers)(nil)
var _ fastssz.Marshaler = DataColumnsByRootIdentifiers{}
var _ fastssz.Unmarshaler = &DataColumnsByRootIdentifiers{}
// DataColumnsByRootIdentifiers is used to specify a list of data column targets (root+index) in a DataColumnSidecarsByRoot RPC request.
type DataColumnsByRootIdentifiers []*eth.DataColumnsByRootIdentifier
// DataColumnIdentifier is a fixed size value, so we can compute its fixed size at start time (see init below)
var dataColumnIdSize int
var DataColumnsByRootIdentifiersSerdes = ssz.NewListVariableElementSerdes[*eth.DataColumnsByRootIdentifier](func() *eth.DataColumnsByRootIdentifier {
return &eth.DataColumnsByRootIdentifier{}
})
// UnmarshalSSZ implements ssz.Unmarshaler. It unmarshals the provided bytes buffer into the DataColumnsByRootIdentifiers value.
func (d *DataColumnsByRootIdentifiers) UnmarshalSSZ(buf []byte) error {
// Exit early if the buffer is too small.
if len(buf) < bytesPerLengthOffset {
return nil
v, err := DataColumnsByRootIdentifiersSerdes.Unmarshal(buf)
if err != nil {
return errors.Wrapf(err, "failed to unmarshal DataColumnsByRootIdentifiers")
}
// Get the size of the offsets.
offsetEnd := binary.LittleEndian.Uint32(buf[:bytesPerLengthOffset])
if offsetEnd%bytesPerLengthOffset != 0 {
return errors.Errorf("expected offsets size to be a multiple of %d but got %d", bytesPerLengthOffset, offsetEnd)
}
count := offsetEnd / bytesPerLengthOffset
if count < 1 {
return nil
}
maxSize := params.BeaconConfig().MaxRequestBlocksDeneb
if uint64(count) > maxSize {
return errors.Errorf("data column identifiers list exceeds max size: %d > %d", count, maxSize)
}
if offsetEnd > uint32(len(buf)) {
return errors.Errorf("offsets value %d larger than buffer %d", offsetEnd, len(buf))
}
valueStart := offsetEnd
// Decode the identifers.
*d = make([]*eth.DataColumnsByRootIdentifier, count)
var start uint32
end := uint32(len(buf))
for i := count; i > 0; i-- {
offsetEnd -= bytesPerLengthOffset
start = binary.LittleEndian.Uint32(buf[offsetEnd : offsetEnd+bytesPerLengthOffset])
if start > end {
return errors.Errorf("expected offset[%d] %d to be less than %d", i-1, start, end)
}
if start < valueStart {
return errors.Errorf("offset[%d] %d indexes before value section %d", i-1, start, valueStart)
}
// Decode the identifier.
ident := &eth.DataColumnsByRootIdentifier{}
if err := ident.UnmarshalSSZ(buf[start:end]); err != nil {
return err
}
(*d)[i-1] = ident
end = start
}
*d = v
return nil
}
func (d *DataColumnsByRootIdentifiers) MarshalSSZ() ([]byte, error) {
var err error
count := len(*d)
maxSize := params.BeaconConfig().MaxRequestBlocksDeneb
if uint64(count) > maxSize {
return nil, errors.Errorf("data column identifiers list exceeds max size: %d > %d", count, maxSize)
}
if len(*d) == 0 {
return []byte{}, nil
}
sizes := make([]uint32, count)
valTotal := uint32(0)
for i, elem := range *d {
if elem == nil {
return nil, errors.New("nil item in DataColumnsByRootIdentifiers list")
}
sizes[i] = uint32(elem.SizeSSZ())
valTotal += sizes[i]
}
offSize := uint32(4 * len(*d))
out := make([]byte, offSize, offSize+valTotal)
for i := range sizes {
binary.LittleEndian.PutUint32(out[i*4:i*4+4], offSize)
offSize += sizes[i]
}
for _, elem := range *d {
out, err = elem.MarshalSSZTo(out)
if err != nil {
return nil, err
}
}
return out, nil
func (d DataColumnsByRootIdentifiers) MarshalSSZ() ([]byte, error) {
return DataColumnsByRootIdentifiersSerdes.Marshal(d)
}
// MarshalSSZTo implements ssz.Marshaler. It appends the serialized DataColumnsByRootIdentifiers value to the provided byte slice.
func (d *DataColumnsByRootIdentifiers) MarshalSSZTo(dst []byte) ([]byte, error) {
func (d DataColumnsByRootIdentifiers) MarshalSSZTo(dst []byte) ([]byte, error) {
obj, err := d.MarshalSSZ()
if err != nil {
return nil, err
@@ -321,19 +224,11 @@ func (d *DataColumnsByRootIdentifiers) MarshalSSZTo(dst []byte) ([]byte, error)
}
// SizeSSZ implements ssz.Marshaler. It returns the size of the serialized representation.
func (d *DataColumnsByRootIdentifiers) SizeSSZ() int {
func (d DataColumnsByRootIdentifiers) SizeSSZ() int {
size := 0
for i := 0; i < len(*d); i++ {
for i := 0; i < len(d); i++ {
size += 4
size += (*d)[i].SizeSSZ()
size += (d)[i].SizeSSZ()
}
return size
}
func init() {
blobSizer := &eth.BlobIdentifier{}
blobIdSize = blobSizer.SizeSSZ()
dataColumnSizer := &eth.DataColumnSidecarsByRangeRequest{}
dataColumnIdSize = dataColumnSizer.SizeSSZ()
}

View File

@@ -8,10 +8,10 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/encoding/ssz"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
ssz "github.com/prysmaticlabs/fastssz"
)
func generateBlobIdentifiers(n int) []*eth.BlobIdentifier {
@@ -51,7 +51,7 @@ func TestBlobSidecarsByRootReq_MarshalSSZ(t *testing.T) {
{
name: "beyond max list",
ids: generateBlobIdentifiers(int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) + 1),
unmarshalErr: ssz.ErrIncorrectListSize,
unmarshalErr: ErrMaxBlobReqExceeded,
},
{
name: "wonky unmarshal size",
@@ -60,7 +60,7 @@ func TestBlobSidecarsByRootReq_MarshalSSZ(t *testing.T) {
in = append(in, byte(0))
return in
},
unmarshalErr: ssz.ErrIncorrectByteSize,
unmarshalErr: ssz.ErrInvalidFixedEncodingLen,
},
}
@@ -305,7 +305,8 @@ func TestDataColumnSidecarsByRootReq_MarshalUnmarshal(t *testing.T) {
name: "size too big",
ids: generateDataColumnIdentifiers(1),
unmarshalMod: func(in []byte) []byte {
maxLen := params.BeaconConfig().MaxRequestDataColumnSidecars * uint64(dataColumnIdSize)
sizer := &eth.DataColumnSidecarsByRangeRequest{}
maxLen := params.BeaconConfig().MaxRequestDataColumnSidecars * uint64(sizer.SizeSSZ())
add := make([]byte, maxLen)
in = append(in, add...)
return in

View File

@@ -42,9 +42,9 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",

File diff suppressed because it is too large Load Diff

View File

@@ -10,11 +10,13 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -36,7 +38,9 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
@@ -45,6 +49,7 @@ go_test(
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -3,12 +3,15 @@ package lookup
import (
"context"
"fmt"
"math"
"strconv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -49,6 +52,7 @@ type BeaconDbBlocker struct {
ChainInfoFetcher blockchain.ChainInfoFetcher
GenesisTimeFetcher blockchain.TimeFetcher
BlobStorage *filesystem.BlobStorage
DataColumnStorage *filesystem.DataColumnStorage
}
// Block returns the beacon block for a given identifier. The identifier can be one of:
@@ -212,64 +216,190 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []int) (
root := bytesutil.ToBytes32(rootSlice)
b, err := p.BeaconDB.Block(ctx, root)
roSignedBlock, err := p.BeaconDB.Block(ctx, root)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve block %#x from db", rootSlice), Reason: core.Internal}
}
if b == nil {
if roSignedBlock == nil {
return nil, &core.RpcError{Err: fmt.Errorf("block %#x not found in db", rootSlice), Reason: core.NotFound}
}
// if block is not in the retention window, return 200 w/ empty list
if !p.BlobStorage.WithinRetentionPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
// If block is not in the retention window, return 200 w/ empty list
if !p.BlobStorage.WithinRetentionPeriod(slots.ToEpoch(roSignedBlock.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
return make([]*blocks.VerifiedROBlob, 0), nil
}
commitments, err := b.Block().Body().BlobKzgCommitments()
roBlock := roSignedBlock.Block()
commitments, err := roBlock.Body().BlobKzgCommitments()
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve kzg commitments from block %#x", rootSlice), Reason: core.Internal}
}
// if there are no commitments return 200 w/ empty list
// If there are no commitments return 200 w/ empty list
if len(commitments) == 0 {
return make([]*blocks.VerifiedROBlob, 0), nil
}
sum := p.BlobStorage.Summary(root)
// Compute the first Fulu slot.
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
fuluForkSlot := primitives.Slot(math.MaxUint64)
if fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate peerDAS start slot"), Reason: core.Internal}
}
}
if len(indices) == 0 {
for i := range commitments {
if sum.HasIndex(uint64(i)) {
indices = append(indices, i)
if roBlock.Slot() >= fuluForkSlot {
roBlock, err := blocks.NewROBlockWithRoot(roSignedBlock, root)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to create roBlock with root %#x", root), Reason: core.Internal}
}
return p.blobsFromStoredDataColumns(roBlock, indices)
}
return p.blobsFromStoredBlobs(commitments, root, indices)
}
// blobsFromStoredBlobs retrieves blob sidecars corresponding to `indices` and `root` from the store.
// This function expects blob sidecars to be stored (aka. no data column sidecars).
func (p *BeaconDbBlocker) blobsFromStoredBlobs(commitments [][]byte, root [fieldparams.RootLength]byte, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
summary := p.BlobStorage.Summary(root)
maxBlobCount := summary.MaxBlobsForEpoch()
for _, index := range indices {
if uint64(index) >= maxBlobCount {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d is bigger than the maximum possible blob count %d", index, maxBlobCount),
Reason: core.BadRequest,
}
}
} else {
for _, ix := range indices {
if uint64(ix) >= sum.MaxBlobsForEpoch() {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d is bigger than the maximum possible blob count %d", ix, sum.MaxBlobsForEpoch()),
Reason: core.BadRequest,
}
}
if !sum.HasIndex(uint64(ix)) {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d not found", ix),
Reason: core.NotFound,
}
if !summary.HasIndex(uint64(index)) {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d not found", index),
Reason: core.NotFound,
}
}
}
blobs := make([]*blocks.VerifiedROBlob, len(indices))
for i, index := range indices {
vblob, err := p.BlobStorage.Get(root, uint64(index))
// If no indices are provided, use all indices that are available in the summary.
if len(indices) == 0 {
for index := range commitments {
if summary.HasIndex(uint64(index)) {
indices = append(indices, index)
}
}
}
// Retrieve blob sidecars from the store.
blobs := make([]*blocks.VerifiedROBlob, 0, len(indices))
for _, index := range indices {
blobSidecar, err := p.BlobStorage.Get(root, uint64(index))
if err != nil {
return nil, &core.RpcError{
Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", rootSlice, index),
Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", root, index),
Reason: core.Internal,
}
}
blobs[i] = &vblob
blobs = append(blobs, &blobSidecar)
}
return blobs, nil
}
// blobsFromStoredDataColumns retrieves data column sidecars from the store,
// reconstructs the whole matrix if needed, converts the matrix to blobs,
// and then returns converted blobs corresponding to `indices` and `root`.
// This function expects data column sidecars to be stored (aka. no blob sidecars).
// If not enough data column sidecars are available to convert blobs from them
// (either directly or after reconstruction), an error is returned.
func (p *BeaconDbBlocker) blobsFromStoredDataColumns(block blocks.ROBlock, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
root := block.Root()
// Use all indices if none are provided.
if len(indices) == 0 {
commitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrap(err, "could not retrieve blob commitments"),
Reason: core.Internal,
}
}
for index := range commitments {
indices = append(indices, index)
}
}
// Count how many columns we have in the store.
summary := p.DataColumnStorage.Summary(root)
stored := summary.Stored()
count := uint64(len(stored))
if count < peerdas.MinimumColumnsCountToReconstruct() {
// There is no way to reconstruct the data columns.
return nil, &core.RpcError{
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs - please start the beacon node with the `--%s` flag to ensure this call to succeed, or retry later if it is already the case", flags.SubscribeAllDataSubnets.Name),
Reason: core.NotFound,
}
}
// Retrieve the needed data columns from the database.
verifiedRoDataColumnSidecars, err := p.neededDataColumnSidecars(root, stored)
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrap(err, "needed data column sidecars"),
Reason: core.Internal,
}
}
// Reconstruct blob sidecars from data column sidecars.
verifiedRoBlobSidecars, err := peerdas.ReconstructBlobs(block, verifiedRoDataColumnSidecars, indices)
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrap(err, "blobs from data columns"),
Reason: core.Internal,
}
}
return verifiedRoBlobSidecars, nil
}
// neededDataColumnSidecars retrieves the data column sidecars corresponding to the (non-extended) blobs if they are all available,
// else retrieves all data column sidecars from the store.
func (p *BeaconDbBlocker) neededDataColumnSidecars(root [fieldparams.RootLength]byte, stored map[uint64]bool) ([]blocks.VerifiedRODataColumn, error) {
// Check if we have all the non-extended data columns.
cellsPerBlob := fieldparams.CellsPerBlob
blobIndices := make([]uint64, 0, cellsPerBlob)
hasAllBlobColumns := true
for i := range uint64(cellsPerBlob) {
if !stored[i] {
hasAllBlobColumns = false
break
}
blobIndices = append(blobIndices, i)
}
if hasAllBlobColumns {
// Retrieve only the non-extended data columns.
verifiedRoSidecars, err := p.DataColumnStorage.Get(root, blobIndices)
if err != nil {
return nil, errors.Wrap(err, "data columns storage get")
}
return verifiedRoSidecars, nil
}
// Retrieve all the data columns.
verifiedRoSidecars, err := p.DataColumnStorage.Get(root, nil)
if err != nil {
return nil, errors.Wrap(err, "data columns storage get")
}
return verifiedRoSidecars, nil
}

View File

@@ -8,12 +8,15 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -158,172 +161,335 @@ func TestGetBlock(t *testing.T) {
}
func TestGetBlob(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
params.OverrideBeaconConfig(cfg)
const (
slot = 123
blobCount = 4
denebForkEpoch = 1
fuluForkEpoch = 2
)
setupDeneb := func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = denebForkEpoch
params.OverrideBeaconConfig(cfg)
}
setupFulu := func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = denebForkEpoch
cfg.FuluForkEpoch = fuluForkEpoch
params.OverrideBeaconConfig(cfg)
}
ctx := t.Context()
db := testDB.SetupDB(t)
denebBlock, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 123, 4)
require.NoError(t, db.SaveBlock(t.Context(), denebBlock))
_, bs := filesystem.NewEphemeralBlobStorageAndFs(t)
testSidecars := verification.FakeVerifySliceForTest(t, blobs)
for i := range testSidecars {
require.NoError(t, bs.Save(testSidecars[i]))
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create and save Deneb block and blob sidecars.
_, blobStorage := filesystem.NewEphemeralBlobStorageAndFs(t)
denebBlock, storedBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [fieldparams.RootLength]byte{}, slot, blobCount)
denebBlockRoot := denebBlock.Root()
verifiedStoredSidecars := verification.FakeVerifySliceForTest(t, storedBlobSidecars)
for i := range verifiedStoredSidecars {
err := blobStorage.Save(verifiedStoredSidecars[i])
require.NoError(t, err)
}
blockRoot := blobs[0].BlockRoot()
err = db.SaveBlock(t.Context(), denebBlock)
require.NoError(t, err)
// Create an Electra block and blob sidecars (an Electra block has the same structure as a Fulu block),
// convert the blob sidecars to data column sidecars, and save the block.
fuluForkSlot := fuluForkEpoch * params.BeaconConfig().SlotsPerEpoch
fuluBlock, fuluBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, fuluForkSlot, blobCount)
fuluBlockRoot := fuluBlock.Root()
cellsAndProofsList := make([]kzg.CellsAndProofs, 0, len(fuluBlobSidecars))
for _, blob := range fuluBlobSidecars {
var kzgBlob kzg.Blob
copy(kzgBlob[:], blob.Blob)
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
require.NoError(t, err)
cellsAndProofsList = append(cellsAndProofsList, cellsAndProofs)
}
dataColumnSidecarPb, err := peerdas.DataColumnSidecars(fuluBlock, cellsAndProofsList)
require.NoError(t, err)
verifiedRoDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecarPb))
for _, sidecarPb := range dataColumnSidecarPb {
roDataColumn, err := blocks.NewRODataColumnWithRoot(sidecarPb, fuluBlockRoot)
require.NoError(t, err)
verifiedRoDataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRoDataColumnSidecars = append(verifiedRoDataColumnSidecars, verifiedRoDataColumn)
}
err = db.SaveBlock(t.Context(), fuluBlock)
require.NoError(t, err)
t.Run("genesis", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{}
_, rpcErr := blocker.Blobs(ctx, "genesis", nil)
assert.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
assert.StringContains(t, "blobs are not supported for Phase 0 fork", rpcErr.Err.Error())
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "blobs are not supported for Phase 0 fork", rpcErr.Err.Error())
})
t.Run("head", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{Root: blockRoot[:]},
ChainInfoFetcher: &mockChain.ChainService{Root: denebBlockRoot[:]},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
retrievedVerifiedSidecars, rpcErr := blocker.Blobs(ctx, "head", nil)
require.IsNil(t, rpcErr)
require.Equal(t, blobCount, len(retrievedVerifiedSidecars))
for i := range blobCount {
expected := verifiedStoredSidecars[i]
actual := retrievedVerifiedSidecars[i].BlobSidecar
require.NotNil(t, actual)
require.Equal(t, expected.Index, actual.Index)
require.DeepEqual(t, expected.Blob, actual.Blob)
require.DeepEqual(t, expected.KzgCommitment, actual.KzgCommitment)
require.DeepEqual(t, expected.KzgProof, actual.KzgProof)
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "head", nil)
assert.Equal(t, rpcErr == nil, true)
require.Equal(t, 4, len(verifiedBlobs))
sidecar := verifiedBlobs[0].BlobSidecar
require.NotNil(t, sidecar)
assert.Equal(t, uint64(0), sidecar.Index)
assert.DeepEqual(t, blobs[0].Blob, sidecar.Blob)
assert.DeepEqual(t, blobs[0].KzgCommitment, sidecar.KzgCommitment)
assert.DeepEqual(t, blobs[0].KzgProof, sidecar.KzgProof)
sidecar = verifiedBlobs[1].BlobSidecar
require.NotNil(t, sidecar)
assert.Equal(t, uint64(1), sidecar.Index)
assert.DeepEqual(t, blobs[1].Blob, sidecar.Blob)
assert.DeepEqual(t, blobs[1].KzgCommitment, sidecar.KzgCommitment)
assert.DeepEqual(t, blobs[1].KzgProof, sidecar.KzgProof)
sidecar = verifiedBlobs[2].BlobSidecar
require.NotNil(t, sidecar)
assert.Equal(t, uint64(2), sidecar.Index)
assert.DeepEqual(t, blobs[2].Blob, sidecar.Blob)
assert.DeepEqual(t, blobs[2].KzgCommitment, sidecar.KzgCommitment)
assert.DeepEqual(t, blobs[2].KzgProof, sidecar.KzgProof)
sidecar = verifiedBlobs[3].BlobSidecar
require.NotNil(t, sidecar)
assert.Equal(t, uint64(3), sidecar.Index)
assert.DeepEqual(t, blobs[3].Blob, sidecar.Blob)
assert.DeepEqual(t, blobs[3].KzgCommitment, sidecar.KzgCommitment)
assert.DeepEqual(t, blobs[3].KzgProof, sidecar.KzgProof)
})
t.Run("finalized", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: denebBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "finalized", nil)
assert.Equal(t, rpcErr == nil, true)
require.Equal(t, 4, len(verifiedBlobs))
verifiedSidecars, rpcErr := blocker.Blobs(ctx, "finalized", nil)
require.IsNil(t, rpcErr)
require.Equal(t, blobCount, len(verifiedSidecars))
})
t.Run("justified", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: denebBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "justified", nil)
assert.Equal(t, rpcErr == nil, true)
require.Equal(t, 4, len(verifiedBlobs))
verifiedSidecars, rpcErr := blocker.Blobs(ctx, "justified", nil)
require.IsNil(t, rpcErr)
require.Equal(t, blobCount, len(verifiedSidecars))
})
t.Run("root", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]), nil)
require.IsNil(t, rpcErr)
require.Equal(t, blobCount, len(verifiedBlobs))
})
t.Run("slot", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", nil)
require.IsNil(t, rpcErr)
require.Equal(t, blobCount, len(verifiedBlobs))
})
t.Run("one blob only", func(t *testing.T) {
const index = 2
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: denebBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []int{2})
assert.Equal(t, rpcErr == nil, true)
require.Equal(t, 1, len(verifiedBlobs))
sidecar := verifiedBlobs[0].BlobSidecar
require.NotNil(t, sidecar)
assert.Equal(t, uint64(2), sidecar.Index)
assert.DeepEqual(t, blobs[2].Blob, sidecar.Blob)
assert.DeepEqual(t, blobs[2].KzgCommitment, sidecar.KzgCommitment)
assert.DeepEqual(t, blobs[2].KzgProof, sidecar.KzgProof)
retrievedVerifiedSidecars, rpcErr := blocker.Blobs(ctx, "123", []int{index})
require.IsNil(t, rpcErr)
require.Equal(t, 1, len(retrievedVerifiedSidecars))
expected := verifiedStoredSidecars[index]
actual := retrievedVerifiedSidecars[0].BlobSidecar
require.NotNil(t, actual)
require.Equal(t, uint64(index), actual.Index)
require.DeepEqual(t, expected.Blob, actual.Blob)
require.DeepEqual(t, expected.KzgCommitment, actual.KzgCommitment)
require.DeepEqual(t, expected.KzgProof, actual.KzgProof)
})
t.Run("no blobs returns an empty array", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: denebBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: filesystem.NewEphemeralBlobStorage(t),
}
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", nil)
require.IsNil(t, rpcErr)
require.Equal(t, 0, len(verifiedBlobs))
})
t.Run("no blob at index", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: denebBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
noBlobIndex := len(storedBlobSidecars) + 1
_, rpcErr := blocker.Blobs(ctx, "123", []int{0, noBlobIndex})
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
})
t.Run("index too big", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: denebBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
}
_, rpcErr := blocker.Blobs(ctx, "123", []int{0, math.MaxInt})
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
})
t.Run("not enough stored data column sidecars", func(t *testing.T) {
setupFulu(t)
_, dataColumnStorage := filesystem.NewEphemeralDataColumnStorageAndFs(t)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars[:fieldparams.CellsPerBlob-1])
require.NoError(t, err)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
DataColumnStorage: dataColumnStorage,
}
_, rpcErr := blocker.Blobs(ctx, hexutil.Encode(fuluBlockRoot[:]), nil)
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
})
t.Run("reconstruction needed", func(t *testing.T) {
setupFulu(t)
_, dataColumnStorage := filesystem.NewEphemeralDataColumnStorageAndFs(t)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars[1 : peerdas.MinimumColumnsCountToReconstruct()+1])
require.NoError(t, err)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
DataColumnStorage: dataColumnStorage,
}
retrievedVerifiedRoBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(fuluBlockRoot[:]), nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(fuluBlobSidecars), len(retrievedVerifiedRoBlobs))
for i, retrievedVerifiedRoBlob := range retrievedVerifiedRoBlobs {
retrievedBlobSidecarPb := retrievedVerifiedRoBlob.BlobSidecar
initialBlobSidecarPb := fuluBlobSidecars[i].BlobSidecar
require.DeepSSZEqual(t, initialBlobSidecarPb, retrievedBlobSidecarPb)
}
})
t.Run("no reconstruction needed", func(t *testing.T) {
setupFulu(t)
_, dataColumnStorage := filesystem.NewEphemeralDataColumnStorageAndFs(t)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: blobStorage,
DataColumnStorage: dataColumnStorage,
}
retrievedVerifiedRoBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(fuluBlockRoot[:]), nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(fuluBlobSidecars), len(retrievedVerifiedRoBlobs))
for i, retrievedVerifiedRoBlob := range retrievedVerifiedRoBlobs {
retrievedBlobSidecarPb := retrievedVerifiedRoBlob.BlobSidecar
initialBlobSidecarPb := fuluBlobSidecars[i].BlobSidecar
require.DeepSSZEqual(t, initialBlobSidecarPb, retrievedBlobSidecarPb)
}
})
}
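The subtests above exercise two serving paths: pre-Fulu blocks are answered from blob storage, while Fulu blocks are rebuilt from stored data column sidecars, with a NotFound error when too few columns are stored to reconstruct. A minimal, self-contained sketch of that dispatch, with assumed names and an assumed half-of-the-columns threshold (not taken from this diff):

package main

import "fmt"

type blobSource int

const (
	fromBlobStorage blobSource = iota // pre-Fulu: serve stored blob sidecars directly
	fromDataColumns                   // Fulu: rebuild blobs from stored data column sidecars
	unavailable                       // Fulu, but too few columns stored: maps to the NotFound case above
)

// chooseBlobSource is an illustrative stand-in for the fork dispatch the tests
// exercise; the real blocker works on slots/epochs and storage summaries.
func chooseBlobSource(blockEpoch, fuluForkEpoch, storedColumns, numberOfColumns uint64) blobSource {
	if blockEpoch < fuluForkEpoch {
		return fromBlobStorage
	}
	if storedColumns >= numberOfColumns/2 { // assumed reconstruction threshold
		return fromDataColumns
	}
	return unavailable
}

func main() {
	fmt.Println(chooseBlobSource(1, 2, 0, 128))  // 0: blob storage (Deneb)
	fmt.Println(chooseBlobSource(2, 2, 64, 128)) // 1: data columns (Fulu)
	fmt.Println(chooseBlobSource(2, 2, 10, 128)) // 2: unavailable -> NotFound
}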

View File

@@ -7,6 +7,7 @@ go_library(
"block_batcher.go",
"broadcast_bls_changes.go",
"context.go",
"data_columns_reconstruct.go",
"deadlines.go",
"decode_pubsub.go",
"doc.go",
@@ -25,6 +26,8 @@ go_library(
"rpc_blob_sidecars_by_range.go",
"rpc_blob_sidecars_by_root.go",
"rpc_chunked_response.go",
"rpc_data_column_sidecars_by_range.go",
"rpc_data_column_sidecars_by_root.go",
"rpc_goodbye.go",
"rpc_light_client.go",
"rpc_metadata.go",
@@ -38,6 +41,7 @@ go_library(
"subscriber_beacon_blocks.go",
"subscriber_blob_sidecar.go",
"subscriber_bls_to_execution_change.go",
"subscriber_data_column_sidecar.go",
"subscriber_handlers.go",
"subscriber_light_client.go",
"subscriber_sync_committee_message.go",
@@ -76,6 +80,7 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",
@@ -160,6 +165,7 @@ go_test(
"block_batcher_test.go",
"broadcast_bls_changes_test.go",
"context_test.go",
"data_columns_reconstruct_test.go",
"decode_pubsub_test.go",
"error_test.go",
"fork_watcher_test.go",
@@ -170,6 +176,8 @@ go_test(
"rpc_beacon_blocks_by_root_test.go",
"rpc_blob_sidecars_by_range_test.go",
"rpc_blob_sidecars_by_root_test.go",
"rpc_data_column_sidecars_by_range_test.go",
"rpc_data_column_sidecars_by_root_test.go",
"rpc_goodbye_test.go",
"rpc_handler_test.go",
"rpc_light_client_test.go",
@@ -203,6 +211,7 @@ go_test(
deps = [
"//async/abool:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
@@ -210,6 +219,7 @@ go_test(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -280,6 +290,7 @@ go_test(
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -0,0 +1,208 @@
package sync
import (
"context"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
broadcastMissingDataColumnsTimeIntoSlotMin = 1 * time.Second
broadcastMissingDataColumnsSlack = 2 * time.Second
)
// reconstructSaveBroadcastDataColumnSidecars reconstructs all data column
// sidecars when it is both possible and needed, saves the missing sidecars
// into the store, and, after a delay, broadcasts in the background the
// reconstructed sidecars that were not seen via gossip.
func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
ctx context.Context,
slot primitives.Slot,
proposerIndex primitives.ValidatorIndex,
root [fieldparams.RootLength]byte,
) error {
startTime := time.Now()
// Get the columns we store.
storedDataColumns := s.cfg.dataColumnStorage.Summary(root)
storedColumnsCount := storedDataColumns.Count()
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Lock to prevent concurrent reconstructions.
s.reconstructionLock.Lock()
defer s.reconstructionLock.Unlock()
// If reconstruction is not possible or if all columns are already stored, exit early.
if storedColumnsCount < peerdas.MinimumColumnsCountToReconstruct() || storedColumnsCount == numberOfColumns {
return nil
}
// Retrieve our local node info.
nodeID := s.cfg.p2p.NodeID()
custodyGroupCount := s.cfg.custodyInfo.ActualGroupCount()
localNodeInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
if err != nil {
return errors.Wrap(err, "peer info")
}
// Load all available data column sidecars to minimize reconstruction time.
verifiedSidecars, err := s.cfg.dataColumnStorage.Get(root, nil)
if err != nil {
return errors.Wrap(err, "get data column sidecars")
}
// Reconstruct all the data column sidecars.
reconstructedSidecars, err := peerdas.ReconstructDataColumnSidecars(verifiedSidecars)
if err != nil {
return errors.Wrap(err, "reconstruct data column sidecars")
}
// Filter reconstructed sidecars to save.
custodyColumns := localNodeInfo.CustodyColumns
toSaveSidecars := make([]blocks.VerifiedRODataColumn, 0, len(custodyColumns))
for _, sidecar := range reconstructedSidecars {
if custodyColumns[sidecar.Index] {
toSaveSidecars = append(toSaveSidecars, sidecar)
}
}
// Save the data columns sidecars in the database.
// Note: We do not call `receiveDataColumn`, because it would cause data columns still
// arriving via gossip to be ignored before we have broadcast the reconstructed data columns.
if err := s.cfg.dataColumnStorage.Save(toSaveSidecars); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
// Update reconstruction metrics
dataColumnReconstructionHistogram.Observe(float64(time.Since(startTime).Milliseconds()))
dataColumnReconstructionCounter.Add(float64(len(reconstructedSidecars) - len(verifiedSidecars)))
// Schedule the broadcast.
if err := s.scheduleMissingDataColumnSidecarsBroadcast(ctx, root, proposerIndex, slot); err != nil {
return errors.Wrap(err, "schedule reconstructed data columns broadcast")
}
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"slot": slot,
"fromColumnsCount": storedColumnsCount,
}).Debug("Data columns reconstructed and saved")
return nil
}
// scheduleMissingDataColumnSidecarsBroadcast schedules the broadcast of missing
// sidecars (i.e. sidecars that were reconstructed but not seen via gossip).
func (s *Service) scheduleMissingDataColumnSidecarsBroadcast(
ctx context.Context,
root [fieldparams.RootLength]byte,
proposerIndex primitives.ValidatorIndex,
slot primitives.Slot,
) error {
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%x", root),
"slot": slot,
})
// Get the time corresponding to the start of the slot.
genesisTime := uint64(s.cfg.chain.GenesisTime().Unix())
slotStartTime, err := slots.ToTime(genesisTime, slot)
if err != nil {
return errors.Wrap(err, "to time")
}
// Compute the waiting time. This could be negative. In such a case, broadcast immediately.
randFloat := s.reconstructionRandGen.Float64()
timeIntoSlot := broadcastMissingDataColumnsTimeIntoSlotMin + time.Duration(float64(broadcastMissingDataColumnsSlack)*randFloat)
broadcastTime := slotStartTime.Add(timeIntoSlot)
waitingTime := time.Until(broadcastTime)
time.AfterFunc(waitingTime, func() {
// Return early if the context was canceled during the waiting time.
if err := ctx.Err(); err != nil {
return
}
if err := s.broadcastMissingDataColumnSidecars(slot, proposerIndex, root, timeIntoSlot); err != nil {
log.WithError(err).Error("Failed to broadcast missing data column sidecars")
}
})
return nil
}
func (s *Service) broadcastMissingDataColumnSidecars(
slot primitives.Slot,
proposerIndex primitives.ValidatorIndex,
root [fieldparams.RootLength]byte,
timeIntoSlot time.Duration,
) error {
// Get the node ID.
nodeID := s.cfg.p2p.NodeID()
// Get the custody group count.
custodyGroupCount := s.cfg.custodyInfo.ActualGroupCount()
// Retrieve the local node info.
localNodeInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
if err != nil {
return errors.Wrap(err, "peerdas info")
}
// Compute the missing data columns (columns we should custody but have not received via gossip).
missingColumns := make([]uint64, 0, len(localNodeInfo.CustodyColumns))
for column := range localNodeInfo.CustodyColumns {
if !s.hasSeenDataColumnIndex(slot, proposerIndex, column) {
missingColumns = append(missingColumns, column)
}
}
// Return early if there are no missing data columns.
if len(missingColumns) == 0 {
return nil
}
// Load from the store the reconstructed data columns that were not received via gossip.
verifiedRODataColumnSidecars, err := s.cfg.dataColumnStorage.Get(root, missingColumns)
if err != nil {
return errors.Wrap(err, "data column storage get")
}
broadcastedColumns := make([]uint64, 0, len(verifiedRODataColumnSidecars))
for _, verifiedRODataColumn := range verifiedRODataColumnSidecars {
broadcastedColumns = append(broadcastedColumns, verifiedRODataColumn.Index)
// Compute the subnet for this column.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(verifiedRODataColumn.Index)
// Broadcast the missing data column.
if err := s.cfg.p2p.BroadcastDataColumn(root, subnet, verifiedRODataColumn.DataColumnSidecar); err != nil {
log.WithError(err).Error("Broadcast data column")
}
// Now, we can set the data column as seen.
s.setSeenDataColumnIndex(slot, proposerIndex, verifiedRODataColumn.Index)
}
if logrus.GetLevel() >= logrus.DebugLevel {
// Sort for nice logging.
slices.Sort(broadcastedColumns)
slices.Sort(missingColumns)
log.WithFields(logrus.Fields{
"timeIntoSlot": timeIntoSlot,
"missingColumns": missingColumns,
"broadcasted": broadcastedColumns,
}).Debug("Start broadcasting not seen via gossip but reconstructed data columns")
}
return nil
}
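The broadcast of reconstructed sidecars is deliberately delayed and jittered so that nodes do not all re-broadcast at the same instant. A minimal sketch of the delay computation above, using the same constant values but otherwise illustrative names:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// broadcastDelay mirrors the arithmetic in scheduleMissingDataColumnSidecarsBroadcast:
// the broadcast fires between 1s and 3s into the slot; a negative result simply
// means "fire immediately" when handed to time.AfterFunc.
func broadcastDelay(slotStart time.Time, rng *rand.Rand) time.Duration {
	const (
		minTimeIntoSlot = 1 * time.Second // broadcastMissingDataColumnsTimeIntoSlotMin
		slack           = 2 * time.Second // broadcastMissingDataColumnsSlack
	)
	timeIntoSlot := minTimeIntoSlot + time.Duration(float64(slack)*rng.Float64())
	return time.Until(slotStart.Add(timeIntoSlot))
}

func main() {
	rng := rand.New(rand.NewSource(42))
	fmt.Println(broadcastDelay(time.Now(), rng))
}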

View File

@@ -0,0 +1,191 @@
package sync
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestReconstructDataColumns(t *testing.T) {
const blobCount = 4
numberOfColumns := params.BeaconConfig().NumberOfColumns
ctx := t.Context()
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
roBlock, _, verifiedRoDataColumns := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
require.Equal(t, numberOfColumns, uint64(len(verifiedRoDataColumns)))
root, block := roBlock.Root(), roBlock.Block()
slot, proposerIndex := block.Slot(), block.ProposerIndex()
minimumCount := peerdas.MinimumColumnsCountToReconstruct()
t.Run("not enough stored sidecars", func(t *testing.T) {
storage := filesystem.NewEphemeralDataColumnStorage(t)
err := storage.Save(verifiedRoDataColumns[:minimumCount-1])
require.NoError(t, err)
service := NewService(ctx, WithP2P(p2ptest.NewTestP2P(t)), WithDataColumnStorage(storage))
err = service.reconstructSaveBroadcastDataColumnSidecars(ctx, slot, proposerIndex, root)
require.NoError(t, err)
})
t.Run("all stored sidecars", func(t *testing.T) {
storage := filesystem.NewEphemeralDataColumnStorage(t)
err := storage.Save(verifiedRoDataColumns)
require.NoError(t, err)
service := NewService(ctx, WithP2P(p2ptest.NewTestP2P(t)), WithDataColumnStorage(storage))
err = service.reconstructSaveBroadcastDataColumnSidecars(ctx, slot, proposerIndex, root)
require.NoError(t, err)
})
t.Run("should reconstruct", func(t *testing.T) {
// Here we set up a cgc of 8, which is not realistic, since there is no
// real reason for a node to both:
// - store enough data column sidecars to enable reconstruction, and
// - custody too few columns to perform reconstruction on its own.
// However, for the needs of this test, this is perfectly fine.
const cgc = 8
storage := filesystem.NewEphemeralDataColumnStorage(t)
minimumCount := peerdas.MinimumColumnsCountToReconstruct()
err := storage.Save(verifiedRoDataColumns[:minimumCount])
require.NoError(t, err)
custodyInfo := &peerdas.CustodyInfo{}
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
service := NewService(
ctx,
WithP2P(p2ptest.NewTestP2P(t)),
WithDataColumnStorage(storage),
WithCustodyInfo(custodyInfo),
WithChainService(&mockChain.ChainService{}),
)
err = service.reconstructSaveBroadcastDataColumnSidecars(ctx, slot, proposerIndex, root)
require.NoError(t, err)
expected := make(map[uint64]bool, minimumCount+cgc)
for i := range minimumCount {
expected[i] = true
}
// The node should custody these indices.
for _, i := range [...]uint64{1, 17, 19, 42, 75, 87, 102, 117} {
expected[i] = true
}
summary := storage.Summary(root)
actual := summary.Stored()
require.Equal(t, len(expected), len(actual))
for index := range expected {
require.Equal(t, true, actual[index])
}
})
}
func TestBroadcastMissingDataColumnSidecars(t *testing.T) {
const (
cgc = 8
blobCount = 4
timeIntoSlot = 0
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
ctx := t.Context()
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
roBlock, _, verifiedRoDataColumns := util.GenerateTestFuluBlockWithSidecars(t, blobCount)
require.Equal(t, numberOfColumns, uint64(len(verifiedRoDataColumns)))
root, block := roBlock.Root(), roBlock.Block()
slot, proposerIndex := block.Slot(), block.ProposerIndex()
t.Run("no missing sidecars", func(t *testing.T) {
custodyInfo := &peerdas.CustodyInfo{}
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
service := NewService(
ctx,
WithP2P(p2ptest.NewTestP2P(t)),
WithCustodyInfo(custodyInfo),
)
for _, index := range [...]uint64{1, 17, 19, 42, 75, 87, 102, 117} {
key := computeCacheKey(slot, proposerIndex, index)
service.seenDataColumnCache.Add(key, true)
}
err := service.broadcastMissingDataColumnSidecars(slot, proposerIndex, root, timeIntoSlot)
require.NoError(t, err)
})
t.Run("some missing sidecars", func(t *testing.T) {
custodyInfo := &peerdas.CustodyInfo{}
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
toSave := make([]blocks.VerifiedRODataColumn, 0, 2)
for _, index := range [...]uint64{42, 87} {
toSave = append(toSave, verifiedRoDataColumns[index])
}
p2p := p2ptest.NewTestP2P(t)
storage := filesystem.NewEphemeralDataColumnStorage(t)
err := storage.Save(toSave)
require.NoError(t, err)
service := NewService(
ctx,
WithP2P(p2p),
WithCustodyInfo(custodyInfo),
WithDataColumnStorage(storage),
)
for _, index := range [...]uint64{1, 17, 19, 102, 117} { // 42, 75 and 87 are missing
key := computeCacheKey(slot, proposerIndex, index)
service.seenDataColumnCache.Add(key, true)
}
for _, index := range [...]uint64{42, 75, 87} {
seen := service.hasSeenDataColumnIndex(slot, proposerIndex, index)
require.Equal(t, false, seen)
}
require.Equal(t, false, p2p.BroadcastCalled.Load())
err = service.broadcastMissingDataColumnSidecars(slot, proposerIndex, root, timeIntoSlot)
require.NoError(t, err)
seen := service.hasSeenDataColumnIndex(slot, proposerIndex, 75)
require.Equal(t, false, seen)
for _, index := range [...]uint64{42, 87} {
seen := service.hasSeenDataColumnIndex(slot, proposerIndex, index)
require.Equal(t, true, seen)
}
require.Equal(t, true, p2p.BroadcastCalled.Load())
})
}
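The thresholds exercised in these tests follow from the erasure-coding rate: extending the blob doubles the number of cells, so any half of the columns is enough to recover the rest. A trivial sketch of that relationship (128 columns and the 1/2 rate are assumed mainnet-like values, not taken from this diff):

package main

import "fmt"

// minimumColumnsToReconstruct assumes a rate-1/2 extension: with 128 columns,
// any 64 suffice, which is why saving minimumCount-1 columns above is a no-op
// and saving exactly minimumCount triggers reconstruction.
func minimumColumnsToReconstruct(numberOfColumns uint64) uint64 {
	return numberOfColumns / 2
}

func main() {
	fmt.Println(minimumColumnsToReconstruct(128)) // 64
}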

View File

@@ -15,13 +15,17 @@ import (
"github.com/sirupsen/logrus"
)
var (
ErrNoValidDigest = errors.New("no valid digest matched")
ErrUnrecognizedVersion = errors.New("cannot determine context bytes for unrecognized object")
)
var (
responseCodeSuccess = byte(0x00)
responseCodeInvalidRequest = byte(0x01)
responseCodeServerError = byte(0x02)
responseCodeResourceUnavailable = byte(0x03)
)
func (s *Service) generateErrorResponse(code byte, reason string) ([]byte, error) {
return createErrorResponse(code, reason, s.cfg.p2p)

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/async/abool"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
@@ -46,6 +47,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
custodyInfo: &peerdas.CustodyInfo{},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
@@ -81,6 +83,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
custodyInfo: &peerdas.CustodyInfo{},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
@@ -125,6 +128,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
chain: chainService,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
initialSync: &mockSync.Sync{IsSyncing: false},
custodyInfo: &peerdas.CustodyInfo{},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
@@ -167,6 +171,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
custodyInfo: &peerdas.CustodyInfo{},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
@@ -211,6 +216,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
custodyInfo: &peerdas.CustodyInfo{},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
@@ -255,6 +261,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
custodyInfo: &peerdas.CustodyInfo{},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
@@ -274,6 +281,7 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCMetaDataTopicV3+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
},
}

View File

@@ -89,6 +89,13 @@ var (
Buckets: []float64{5, 10, 50, 100, 150, 250, 500, 1000, 2000},
},
)
rpcDataColumnsByRangeResponseLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "rpc_data_columns_by_range_response_latency_milliseconds",
Help: "Captures total time to respond to rpc DataColumnsByRange requests in a milliseconds distribution",
Buckets: []float64{5, 10, 50, 100, 150, 250, 500, 1000, 2000},
},
)
arrivalBlockPropagationHistogram = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "block_arrival_latency_milliseconds",
@@ -203,6 +210,19 @@ var (
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)
dataColumnReconstructionCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_data_availability_reconstructed_columns_total",
Help: "Count the number of reconstructed data columns.",
})
dataColumnReconstructionHistogram = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_availability_reconstruction_time_milliseconds",
Help: "Captures the time taken to reconstruct data columns.",
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)
)
func (s *Service) updateMetrics() {

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -173,6 +174,14 @@ func WithBlobStorage(b *filesystem.BlobStorage) Option {
}
}
// WithDataColumnStorage gives the sync package direct access to DataColumnStorage.
func WithDataColumnStorage(b *filesystem.DataColumnStorage) Option {
return func(s *Service) error {
s.cfg.dataColumnStorage = b
return nil
}
}
// WithVerifierWaiter gives the sync package direct access to the verifier waiter.
func WithVerifierWaiter(v *verification.InitializerWaiter) Option {
return func(s *Service) error {
@@ -190,6 +199,14 @@ func WithAvailableBlocker(avb coverage.AvailableBlocker) Option {
}
}
// WithCustodyInfo for custody info.
func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
return func(s *Service) error {
s.cfg.custodyInfo = custodyInfo
return nil
}
}
// WithSlasherEnabled configures the sync package to support slashing detection.
func WithSlasherEnabled(enabled bool) Option {
return func(s *Service) error {

View File

@@ -47,13 +47,18 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
allowedBlobsPerSecond := float64(flags.Get().BlobBatchLimit)
allowedBlobsBurst := int64(flags.Get().BlobBatchLimitBurstFactor * flags.Get().BlobBatchLimit)
// Initialize data column limits.
allowedDataColumnsPerSecond := float64(flags.Get().DataColumnBatchLimit)
allowedDataColumnsBurst := int64(flags.Get().DataColumnBatchLimitBurstFactor * flags.Get().DataColumnBatchLimit)
// Set topic map for all rpc topics.
topicMap := make(map[string]*leakybucket.Collector, len(p2p.RPCTopicMappings))
// Goodbye Message
topicMap[addEncoding(p2p.RPCGoodByeTopicV1)] = leakybucket.NewCollector(1, 1, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// Metadata Message
topicMap[addEncoding(p2p.RPCMetaDataTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCMetaDataTopicV2)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCMetaDataTopicV3)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// Ping Message
topicMap[addEncoding(p2p.RPCPingTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// Status Message
@@ -67,6 +72,9 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
// for BlobSidecarsByRoot and BlobSidecarsByRange
blobCollector := leakybucket.NewCollector(allowedBlobsPerSecond, allowedBlobsBurst, blockBucketPeriod, false)
// for DataColumnSidecarsByRoot and DataColumnSidecarsByRange
dataColumnSidecars := leakybucket.NewCollector(allowedDataColumnsPerSecond, allowedDataColumnsBurst, blockBucketPeriod, false)
// BlocksByRoots requests
topicMap[addEncoding(p2p.RPCBlocksByRootTopicV1)] = blockCollector
topicMap[addEncoding(p2p.RPCBlocksByRootTopicV2)] = blockCollectorV2
@@ -86,6 +94,11 @@ func newRateLimiter(p2pProvider p2p.P2P) *limiter {
topicMap[addEncoding(p2p.RPCLightClientOptimisticUpdateTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
topicMap[addEncoding(p2p.RPCLightClientFinalityUpdateTopicV1)] = leakybucket.NewCollector(1, defaultBurstLimit, leakyBucketPeriod, false /* deleteEmptyBuckets */)
// DataColumnSidecarsByRootV1
topicMap[addEncoding(p2p.RPCDataColumnSidecarsByRootTopicV1)] = dataColumnSidecars
// DataColumnSidecarsByRangeV1
topicMap[addEncoding(p2p.RPCDataColumnSidecarsByRangeTopicV1)] = dataColumnSidecars
// General topic for all rpc requests.
topicMap[rpcLimiterTopic] = leakybucket.NewCollector(5, defaultBurstLimit*2, leakyBucketPeriod, false /* deleteEmptyBuckets */)

View File

@@ -17,7 +17,7 @@ import (
func TestNewRateLimiter(t *testing.T) {
rlimiter := newRateLimiter(mockp2p.NewTestP2P(t))
assert.Equal(t, len(rlimiter.limiterMap), 19, "correct number of topics not registered")
}
func TestNewRateLimiter_FreeCorrectly(t *testing.T) {

View File

@@ -39,6 +39,21 @@ type rpcHandler func(context.Context, interface{}, libp2pcore.Stream) error
// rpcHandlerByTopicFromFork returns the RPC handlers for a given fork index.
func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandler, error) {
// Fulu: https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#messages
if forkIndex >= version.Fulu {
return map[string]rpcHandler{
p2p.RPCGoodByeTopicV1: s.goodbyeRPCHandler,
p2p.RPCBlocksByRangeTopicV2: s.beaconBlocksByRangeRPCHandler,
p2p.RPCBlocksByRootTopicV2: s.beaconBlocksRootRPCHandler,
p2p.RPCPingTopicV1: s.pingHandler,
p2p.RPCMetaDataTopicV3: s.metaDataHandler, // Modified in Fulu
p2p.RPCBlobSidecarsByRootTopicV1: s.blobSidecarByRootRPCHandler,
p2p.RPCBlobSidecarsByRangeTopicV1: s.blobSidecarsByRangeRPCHandler,
p2p.RPCDataColumnSidecarsByRootTopicV1: s.dataColumnSidecarByRootRPCHandler, // Added in Fulu
p2p.RPCDataColumnSidecarsByRangeTopicV1: s.dataColumnSidecarsByRangeRPCHandler, // Added in Fulu
}, nil
}
// Electra: https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/p2p-interface.md#messages
if forkIndex >= version.Electra {
return map[string]rpcHandler{
@@ -258,9 +273,15 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
// since some requests do not have any data in the payload, we
// do not decode anything.
topics := map[string]bool{
p2p.RPCMetaDataTopicV1: true,
p2p.RPCMetaDataTopicV2: true,
p2p.RPCMetaDataTopicV3: true,
p2p.RPCLightClientOptimisticUpdateTopicV1: true,
p2p.RPCLightClientFinalityUpdateTopicV1: true,
}
if topics[baseTopic] {
if err := handle(ctx, base, stream); err != nil {
messageFailedProcessingCounter.WithLabelValues(topic).Inc()
if !errors.Is(err, p2ptypes.ErrWrongForkDigestVersion) {

View File

@@ -9,6 +9,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
libp2pcore "github.com/libp2p/go-libp2p/core"
@@ -238,3 +239,30 @@ func WriteLightClientFinalityUpdateChunk(stream libp2pcore.Stream, tor blockchai
_, err = encoding.EncodeWithMaxLength(stream, update)
return err
}
// WriteDataColumnSidecarChunk writes data column chunk object to stream.
// response_chunk ::= <result> | <context-bytes> | <encoding-dependent-header> | <encoded-payload>
func WriteDataColumnSidecarChunk(stream libp2pcore.Stream, tor blockchain.TemporalOracle, encoding encoder.NetworkEncoding, sidecar *ethpb.DataColumnSidecar) error {
// Success response code.
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return errors.Wrap(err, "stream write")
}
// Fork digest.
genesisValidatorsRoot := tor.GenesisValidatorsRoot()
ctxBytes, err := forks.ForkDigestFromEpoch(slots.ToEpoch(sidecar.SignedBlockHeader.Header.Slot), genesisValidatorsRoot[:])
if err != nil {
return errors.Wrap(err, "fork digest from epoch")
}
if err := writeContextToStream(ctxBytes[:], stream); err != nil {
return errors.Wrap(err, "write context to stream")
}
// Sidecar.
if _, err = encoding.EncodeWithMaxLength(stream, sidecar); err != nil {
return errors.Wrap(err, "encode with max length")
}
return nil
}

View File

@@ -0,0 +1,218 @@
package sync
import (
"context"
"slices"
"time"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// We count a single request as a single rate limiting amount, regardless of the number of columns requested.
const rateLimitingAmount = 1
var notDataColumnsByRangeIdentifiersError = errors.New("not data columns by range identifiers")
// dataColumnSidecarsByRangeRPCHandler looks up the requested data columns from the database, starting from a given start slot.
func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.DataColumnSidecarsByRangeHandler")
defer span.End()
// Check if the message type is the one expected.
request, ok := msg.(*pb.DataColumnSidecarsByRangeRequest)
if !ok {
return notDataColumnsByRangeIdentifiersError
}
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
SetRPCStreamDeadlines(stream)
beaconConfig := params.BeaconConfig()
maxRequestDataColumnSidecars := beaconConfig.MaxRequestDataColumnSidecars
remotePeer := stream.Conn().RemotePeer()
requestedColumns := request.Columns
// Format log fields.
var requestedColumnsLog interface{} = "all"
if uint64(len(requestedColumns)) != beaconConfig.NumberOfColumns {
requestedColumnsLog = requestedColumns
}
log := log.WithFields(logrus.Fields{
"remotePeer": remotePeer,
"requestedColumns": requestedColumnsLog,
"startSlot": request.StartSlot,
"count": request.Count,
})
// Validate the request regarding rate limiting.
if err := s.rateLimiter.validateRequest(stream, rateLimitingAmount); err != nil {
return errors.Wrap(err, "rate limiter validate request")
}
// Validate the request regarding its parameters.
rangeParameters, err := validateDataColumnsByRange(request, s.cfg.chain.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
tracing.AnnotateError(span, err)
return errors.Wrap(err, "validate data columns by range")
}
if rangeParameters == nil {
log.Debug("No data columns by range to serve")
return nil
}
log.Debug("Serving data columns by range request")
// Ticker to stagger out large requests.
ticker := time.NewTicker(time.Second)
batcher, err := newBlockRangeBatcher(*rangeParameters, s.cfg.beaconDB, s.rateLimiter, s.cfg.chain.IsCanonical, ticker)
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
return errors.Wrap(err, "new block range batcher")
}
// Derive the wanted columns for the request.
wantedColumns := make([]uint64, len(request.Columns))
copy(wantedColumns, request.Columns)
// Sort the wanted columns.
slices.Sort(wantedColumns)
var batch blockBatch
for batch, ok = batcher.next(ctx, stream); ok; batch, ok = batcher.next(ctx, stream) {
batchStart := time.Now()
maxRequestDataColumnSidecars, err = s.streamDataColumnBatch(ctx, batch, maxRequestDataColumnSidecars, wantedColumns, stream)
rpcDataColumnsByRangeResponseLatency.Observe(float64(time.Since(batchStart).Milliseconds()))
if err != nil {
return err
}
// Once the quota is reached, we're done serving the request.
if maxRequestDataColumnSidecars == 0 {
log.WithField("initialQuota", beaconConfig.MaxRequestDataColumnSidecars).Debug("Reached quota for data column sidecars by range request")
break
}
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in DataColumnSidecarsByRange batch")
// If we hit a rate limit, the error response has already been written, and the stream is already closed.
if !errors.Is(err, p2ptypes.ErrRateLimited) {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
}
tracing.AnnotateError(span, err)
return err
}
closeStream(stream, log)
return nil
}
func (s *Service) streamDataColumnBatch(ctx context.Context, batch blockBatch, quota uint64, wantedDataColumnIndices []uint64, stream libp2pcore.Stream) (uint64, error) {
_, span := trace.StartSpan(ctx, "sync.streamDataColumnBatch")
defer span.End()
// Defensive check to guard against underflow.
if quota == 0 {
return 0, nil
}
// Loop over the blocks in the batch.
for _, block := range batch.canonical() {
// Get the block blockRoot.
blockRoot := block.Root()
// Retrieve the data column sidecars from the store.
verifiedRODataColumns, err := s.cfg.dataColumnStorage.Get(blockRoot, wantedDataColumnIndices)
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
return quota, errors.Wrapf(err, "get data column sidecars: block root %#x", blockRoot)
}
// Write the retrieved sidecars to the stream.
for _, verifiedRODataColumn := range verifiedRODataColumns {
sidecar := verifiedRODataColumn.DataColumnSidecar
SetStreamWriteDeadline(stream, defaultWriteDuration)
if err := WriteDataColumnSidecarChunk(stream, s.cfg.chain, s.cfg.p2p.Encoding(), sidecar); err != nil {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
return quota, errors.Wrap(err, "write data column sidecar chunk")
}
s.rateLimiter.add(stream, rateLimitingAmount)
quota -= 1
// Stop streaming results once the quota of writes for the request is consumed.
if quota == 0 {
return 0, nil
}
}
}
return quota, nil
}
func validateDataColumnsByRange(request *pb.DataColumnSidecarsByRangeRequest, currentSlot primitives.Slot) (*rangeParams, error) {
startSlot, count := request.StartSlot, request.Count
if count == 0 {
return nil, errors.Wrap(p2ptypes.ErrInvalidRequest, "invalid request count parameter")
}
endSlot, err := request.StartSlot.SafeAdd(count - 1)
if err != nil {
return nil, errors.Wrap(p2ptypes.ErrInvalidRequest, "overflow start + count -1")
}
// Peers may overshoot the current slot when in initial sync,
// so we don't want to penalize them by treating the request as an error.
if startSlot > currentSlot {
return nil, nil
}
// Compute the oldest slot we'll allow a peer to request, based on the current slot.
minStartSlot, err := dataColumnsRPCMinValidSlot(currentSlot)
if err != nil {
return nil, errors.Wrap(p2ptypes.ErrInvalidRequest, "data columns RPC min valid slot")
}
// Return early if there is nothing to serve.
if endSlot < minStartSlot {
return nil, nil
}
// Do not serve sidecars for slots before the minimum valid slot or after the current slot.
startSlot = max(startSlot, minStartSlot)
endSlot = min(endSlot, currentSlot)
sizeMinusOne, err := endSlot.SafeSub(uint64(startSlot))
if err != nil {
return nil, errors.Errorf("overflow end - start: %d - %d - should never happen", endSlot, startSlot)
}
size, err := sizeMinusOne.SafeAdd(1)
if err != nil {
return nil, errors.Wrap(p2ptypes.ErrInvalidRequest, "overflow end - start + 1")
}
rangeParameters := &rangeParams{start: startSlot, end: endSlot, size: uint64(size)}
return rangeParameters, nil
}
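To make the clamping above concrete, here is a small worked sketch (overflow handling omitted; the numbers match the values used by the spec tests further down, i.e. a minimum valid slot of 320 and a current slot of 400):

package main

import "fmt"

// clampRange mirrors validateDataColumnsByRange: reject a zero count, skip
// requests entirely in the future or entirely before the minimum valid slot,
// otherwise clamp to [minValid, current] and report the resulting size.
func clampRange(start, count, current, minValid uint64) (uint64, uint64, uint64, bool) {
	if count == 0 || start > current {
		return 0, 0, 0, false
	}
	end := start + count - 1
	if end < minValid {
		return 0, 0, 0, false
	}
	start = max(start, minValid)
	end = min(end, current)
	return start, end, end - start + 1, true
}

func main() {
	fmt.Println(clampRange(0, 10_000, 400, 320)) // 320 400 81 true
	fmt.Println(clampRange(350, 10, 400, 320))   // 350 359 10 true
}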

View File

@@ -0,0 +1,301 @@
package sync
import (
"context"
"fmt"
"io"
"math"
"sync"
"testing"
"time"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/pkg/errors"
"github.com/stretchr/testify/require"
chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
ctx := context.Background()
t.Run("wrong message type", func(t *testing.T) {
service := &Service{}
err := service.dataColumnSidecarsByRangeRPCHandler(ctx, nil, nil)
require.ErrorIs(t, err, notDataColumnsByRangeIdentifiersError)
})
t.Run("invalid request", func(t *testing.T) {
slot := primitives.Slot(400)
localP2P, remoteP2P := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
service := &Service{
cfg: &config{
p2p: localP2P,
chain: &chainMock.ChainService{
Slot: &slot,
},
},
rateLimiter: newRateLimiter(localP2P),
}
protocolID := protocol.ID(fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1))
var wg sync.WaitGroup
wg.Add(1)
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
code, _, err := readStatusCodeNoDeadline(stream, localP2P.Encoding())
require.NoError(t, err)
require.Equal(t, responseCodeInvalidRequest, code)
})
localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &pb.DataColumnSidecarsByRangeRequest{
Count: 0, // Invalid count
}
require.Equal(t, true, localP2P.Peers().Scorers().BadResponsesScorer().Score(remoteP2P.PeerID()) >= 0)
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NotNil(t, err)
require.Equal(t, true, localP2P.Peers().Scorers().BadResponsesScorer().Score(remoteP2P.PeerID()) < 0)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
t.Run("nominal", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
slot := primitives.Slot(400)
params := []util.DataColumnParam{
{Slot: 10, Index: 1}, {Slot: 10, Index: 2}, {Slot: 10, Index: 3},
{Slot: 40, Index: 4}, {Slot: 40, Index: 6},
{Slot: 45, Index: 7}, {Slot: 45, Index: 8}, {Slot: 45, Index: 9},
}
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
storage := filesystem.NewEphemeralDataColumnStorage(t)
err := storage.Save(verifiedRODataColumns)
require.NoError(t, err)
localP2P, remoteP2P := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
protocolID := protocol.ID(fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1))
roots := [][fieldparams.RootLength]byte{
verifiedRODataColumns[0].BlockRoot(),
verifiedRODataColumns[3].BlockRoot(),
verifiedRODataColumns[5].BlockRoot(),
}
slots := []primitives.Slot{
verifiedRODataColumns[0].Slot(),
verifiedRODataColumns[3].Slot(),
verifiedRODataColumns[5].Slot(),
}
beaconDB := testDB.SetupDB(t)
roBlocks := make([]blocks.ROBlock, 0, len(roots))
for i := range 3 {
signedBeaconBlockPb := util.NewBeaconBlock()
signedBeaconBlockPb.Block.Slot = slots[i]
if i != 0 {
signedBeaconBlockPb.Block.ParentRoot = roots[i-1][:]
}
signedBeaconBlock, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// There is a discrepancy between the root of the beacon block and the RO data column root,
// but for the sake of this test, we actually don't care.
roblock, err := consensusblocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
require.NoError(t, err)
roBlocks = append(roBlocks, roblock)
}
err = beaconDB.SaveROBlocks(ctx, roBlocks, false /*cache*/)
require.NoError(t, err)
service := &Service{
cfg: &config{
p2p: localP2P,
beaconDB: beaconDB,
chain: &chainMock.ChainService{
Slot: &slot,
},
dataColumnStorage: storage,
},
rateLimiter: newRateLimiter(localP2P),
}
ctxMap := ContextByteVersions{
[4]byte{245, 165, 253, 66}: version.Fulu,
}
root0 := verifiedRODataColumns[0].BlockRoot()
root3 := verifiedRODataColumns[3].BlockRoot()
root5 := verifiedRODataColumns[5].BlockRoot()
var wg sync.WaitGroup
wg.Add(1)
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
sidecars := make([]*blocks.RODataColumn, 0, 5)
for i := uint64(0); ; /* no stop condition */ i++ {
sidecar, err := readChunkedDataColumnSidecar(stream, remoteP2P, ctxMap)
if errors.Is(err, io.EOF) {
// End of stream.
break
}
require.NoError(t, err)
sidecars = append(sidecars, sidecar)
}
require.Equal(t, 8, len(sidecars))
require.Equal(t, root0, sidecars[0].BlockRoot())
require.Equal(t, root0, sidecars[1].BlockRoot())
require.Equal(t, root0, sidecars[2].BlockRoot())
require.Equal(t, root3, sidecars[3].BlockRoot())
require.Equal(t, root3, sidecars[4].BlockRoot())
require.Equal(t, root5, sidecars[5].BlockRoot())
require.Equal(t, root5, sidecars[6].BlockRoot())
require.Equal(t, root5, sidecars[7].BlockRoot())
require.Equal(t, uint64(1), sidecars[0].Index)
require.Equal(t, uint64(2), sidecars[1].Index)
require.Equal(t, uint64(3), sidecars[2].Index)
require.Equal(t, uint64(4), sidecars[3].Index)
require.Equal(t, uint64(6), sidecars[4].Index)
require.Equal(t, uint64(7), sidecars[5].Index)
require.Equal(t, uint64(8), sidecars[6].Index)
require.Equal(t, uint64(9), sidecars[7].Index)
})
localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &pb.DataColumnSidecarsByRangeRequest{
StartSlot: 5,
Count: 50,
Columns: []uint64{1, 2, 3, 4, 6, 7, 8, 9, 10},
}
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})
}
func TestValidateDataColumnsByRange(t *testing.T) {
maxUint := primitives.Slot(math.MaxUint64)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 10
config.MinEpochsForDataColumnSidecarsRequest = 4096
params.OverrideBeaconConfig(config)
tests := []struct {
name string
startSlot primitives.Slot
count uint64
currentSlot primitives.Slot
expected *rangeParams
expectErr bool
errContains string
}{
{
name: "zero count returns error",
count: 0,
expectErr: true,
errContains: "invalid request count parameter",
},
{
name: "overflow in addition returns error",
startSlot: maxUint - 5,
count: 10,
currentSlot: maxUint,
expectErr: true,
errContains: "overflow start + count -1",
},
{
name: "start greater than current returns nil",
startSlot: 150,
count: 10,
currentSlot: 100,
expected: nil,
expectErr: false,
},
{
name: "end slot greater than min start slot returns nil",
startSlot: 150,
count: 100,
currentSlot: 300,
expected: nil,
expectErr: false,
},
{
name: "range within limits",
startSlot: 350,
count: 10,
currentSlot: 400,
expected: &rangeParams{start: 350, end: 359, size: 10},
expectErr: false,
},
{
name: "range exceeds limits",
startSlot: 0,
count: 10_000,
currentSlot: 400,
expected: &rangeParams{start: 320, end: 400, size: 81},
expectErr: false,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
request := &pb.DataColumnSidecarsByRangeRequest{
StartSlot: tc.startSlot,
Count: tc.count,
}
rangeParameters, err := validateDataColumnsByRange(request, tc.currentSlot)
if tc.expectErr {
require.ErrorContains(t, err, tc.errContains)
return
}
require.NoError(t, err)
require.Equal(t, tc.expected, rangeParameters)
})
}
}

View File

@@ -0,0 +1,183 @@
package sync
import (
"context"
"fmt"
"math"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/time/slots"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var (
notDataColumnsByRootIdentifiersError = errors.New("not data columns by root identifiers")
tickerDelay = time.Second
)
// dataColumnSidecarByRootRPCHandler handles the data column sidecars by root RPC request.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyroot-v1
func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.dataColumnSidecarByRootRPCHandler")
defer span.End()
batchSize := flags.Get().DataColumnBatchLimit
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Check if the message type is the one expected.
ref, ok := msg.(*types.DataColumnsByRootIdentifiers)
if !ok {
return notDataColumnsByRootIdentifiersError
}
requestedColumnIdents := *ref
remotePeerId := stream.Conn().RemotePeer()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
SetRPCStreamDeadlines(stream)
// Penalize peers that send invalid requests.
if err := validateDataColumnsByRootRequest(requestedColumnIdents); err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeerId)
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
return errors.Wrap(err, "validate data columns by root request")
}
requestedColumnsByRoot := make(map[[fieldparams.RootLength]byte][]uint64)
for _, columnIdent := range requestedColumnIdents {
var root [fieldparams.RootLength]byte
copy(root[:], columnIdent.BlockRoot)
requestedColumnsByRoot[root] = append(requestedColumnsByRoot[root], columnIdent.Columns...)
}
// Sort by column index for each root.
for _, columns := range requestedColumnsByRoot {
slices.Sort(columns)
}
// Format nice logs.
requestedColumnsByRootLog := make(map[string]interface{})
for root, columns := range requestedColumnsByRoot {
rootStr := fmt.Sprintf("%#x", root)
requestedColumnsByRootLog[rootStr] = "all"
if uint64(len(columns)) != numberOfColumns {
requestedColumnsByRootLog[rootStr] = columns
}
}
// Compute the oldest slot we'll allow a peer to request, based on the current slot.
minReqSlot, err := dataColumnsRPCMinValidSlot(s.cfg.clock.CurrentSlot())
if err != nil {
return errors.Wrapf(err, "data columns RPC min valid slot")
}
log := log.WithFields(logrus.Fields{
"peer": remotePeerId,
"columns": requestedColumnsByRootLog,
})
defer closeStream(stream, log)
var ticker *time.Ticker
if len(requestedColumnIdents) > batchSize {
ticker = time.NewTicker(tickerDelay)
}
log.Debug("Serving data column sidecar by root request")
count := 0
for _, ident := range requestedColumnIdents {
if err := ctx.Err(); err != nil {
closeStream(stream, log)
return errors.Wrap(err, "context error")
}
root := bytesutil.ToBytes32(ident.BlockRoot)
columns := ident.Columns
// Throttle request processing to no more than batchSize/sec.
for range columns {
if ticker != nil && count != 0 && count%batchSize == 0 {
<-ticker.C
}
count++
}
s.rateLimiter.add(stream, int64(len(columns)))
// Retrieve the requested sidecars from the store.
verifiedRODataColumns, err := s.cfg.dataColumnStorage.Get(root, columns)
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
return errors.Wrap(err, "get data column sidecars")
}
for _, verifiedRODataColumn := range verifiedRODataColumns {
// Filter out data column sidecars that are too old.
if verifiedRODataColumn.SignedBlockHeader.Header.Slot < minReqSlot {
continue
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if chunkErr := WriteDataColumnSidecarChunk(stream, s.cfg.chain, s.cfg.p2p.Encoding(), verifiedRODataColumn.DataColumnSidecar); chunkErr != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, chunkErr)
return chunkErr
}
}
}
return nil
}
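The per-root grouping above (merge identifiers that share a block root, then sort the column indices for each root) can be illustrated in isolation. A minimal sketch, assuming a simplified identifier type instead of Prysm's types.DataColumnsByRootIdentifiers:

package main

import (
	"fmt"
	"slices"
)

// ident is a simplified stand-in for a by-root identifier:
// a 32-byte block root plus the requested column indices.
type ident struct {
	root    [32]byte
	columns []uint64
}

// groupByRoot merges identifiers that share a block root and sorts the
// column indices per root, mirroring the handler's pre-processing before
// it reads sidecars from storage.
func groupByRoot(idents []ident) map[[32]byte][]uint64 {
	grouped := make(map[[32]byte][]uint64)
	for _, id := range idents {
		grouped[id.root] = append(grouped[id.root], id.columns...)
	}
	for _, cols := range grouped {
		slices.Sort(cols)
	}
	return grouped
}

func main() {
	a := [32]byte{1}
	idents := []ident{
		{root: a, columns: []uint64{7, 3}},
		{root: a, columns: []uint64{5}},
	}
	fmt.Println(groupByRoot(idents)[a]) // [3 5 7]
}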
// validateDataColumnsByRootRequest checks if the request for data column sidecars is valid.
func validateDataColumnsByRootRequest(colIdents types.DataColumnsByRootIdentifiers) error {
total := uint64(0)
for _, id := range colIdents {
total += uint64(len(id.Columns))
}
if total > params.BeaconConfig().MaxRequestDataColumnSidecars {
return types.ErrMaxDataColumnReqExceeded
}
return nil
}
// dataColumnsRPCMinValidSlot returns the minimum slot that a peer can request data column sidecars for.
func dataColumnsRPCMinValidSlot(currentSlot primitives.Slot) (primitives.Slot, error) {
// Avoid overflow if we're running on a config where fulu is set to far future epoch.
if !params.FuluEnabled() {
return primitives.Slot(math.MaxUint64), nil
}
beaconConfig := params.BeaconConfig()
minReqEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := beaconConfig.FuluForkEpoch
currEpoch := slots.ToEpoch(currentSlot)
if currEpoch > minReqEpochs && currEpoch-minReqEpochs > minStartEpoch {
minStartEpoch = currEpoch - minReqEpochs
}
epochStart, err := slots.EpochStart(minStartEpoch)
if err != nil {
return 0, errors.Wrapf(err, "epoch start for epoch %d", minStartEpoch)
}
return epochStart, nil
}
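The retention-window arithmetic in dataColumnsRPCMinValidSlot reduces to taking the later of the Fulu fork epoch and currentEpoch - minReqEpochs, then converting that epoch to its first slot. A minimal sketch, assuming 32 slots per epoch (the mainnet value); the names below are illustrative, not Prysm APIs:

package main

import "fmt"

// slotsPerEpoch is assumed to be the mainnet value for this sketch.
const slotsPerEpoch = 32

func minValidSlot(currentEpoch, fuluForkEpoch, minReqEpochs uint64) uint64 {
	minStartEpoch := fuluForkEpoch
	// Only move the window forward once enough epochs have elapsed past
	// the fork; otherwise the fork epoch itself is the floor.
	if currentEpoch > minReqEpochs && currentEpoch-minReqEpochs > minStartEpoch {
		minStartEpoch = currentEpoch - minReqEpochs
	}
	return minStartEpoch * slotsPerEpoch
}

func main() {
	// Matches the "Current epoch after fulu fork epoch + minReqEpochs" test case:
	// epoch 20, a 5-epoch retention window, fork at epoch 10 -> first slot of epoch 15.
	fmt.Println(minValidSlot(20, 10, 5)) // 480
}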

View File

@@ -0,0 +1,321 @@
package sync
import (
"context"
"io"
"math"
"sync"
"testing"
"time"
chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/pkg/errors"
)
func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
ctx := context.Background()
t.Run("wrong message type", func(t *testing.T) {
service := &Service{}
err := service.dataColumnSidecarByRootRPCHandler(ctx, nil, nil)
require.ErrorIs(t, err, notDataColumnsByRootIdentifiersError)
})
t.Run("invalid request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 1
params.OverrideBeaconConfig(beaconConfig)
localP2P := p2ptest.NewTestP2P(t)
service := &Service{cfg: &config{p2p: localP2P}}
protocolID := protocol.ID(p2p.RPCDataColumnSidecarsByRootTopicV1)
remoteP2P := p2ptest.NewTestP2P(t)
var wg sync.WaitGroup
wg.Add(1)
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
code, errMsg, err := readStatusCodeNoDeadline(stream, localP2P.Encoding())
require.NoError(t, err)
require.Equal(t, responseCodeInvalidRequest, code)
require.Equal(t, types.ErrMaxDataColumnReqExceeded.Error(), errMsg)
})
localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &types.DataColumnsByRootIdentifiers{{Columns: []uint64{1, 2, 3}}}
require.Equal(t, true, localP2P.Peers().Scorers().BadResponsesScorer().Score(remoteP2P.PeerID()) >= 0)
err = service.dataColumnSidecarByRootRPCHandler(ctx, msg, stream)
require.NotNil(t, err)
require.Equal(t, true, localP2P.Peers().Scorers().BadResponsesScorer().Score(remoteP2P.PeerID()) < 0)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
t.Run("nominal", func(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.DataColumnBatchLimit = 2
flags.Init(gFlags)
defer flags.Init(resetFlags)
// Setting the ticker to 0 will cause the ticker to panic.
// Setting it to the minimum value instead.
refTickerDelay := tickerDelay
tickerDelay = time.Nanosecond
defer func() {
tickerDelay = refTickerDelay
}()
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 1
params.OverrideBeaconConfig(beaconConfig)
localP2P := p2ptest.NewTestP2P(t)
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})
params := []util.DataColumnParam{
{Slot: 10, Index: 1}, {Slot: 10, Index: 2}, {Slot: 10, Index: 3},
{Slot: 40, Index: 4}, {Slot: 40, Index: 6},
{Slot: 45, Index: 7}, {Slot: 45, Index: 8}, {Slot: 45, Index: 9},
}
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, params)
storage := filesystem.NewEphemeralDataColumnStorage(t)
err := storage.Save(verifiedRODataColumns)
require.NoError(t, err)
service := &Service{
cfg: &config{
p2p: localP2P,
clock: clock,
dataColumnStorage: storage,
chain: &chainMock.ChainService{},
},
rateLimiter: newRateLimiter(localP2P),
}
protocolID := protocol.ID(p2p.RPCDataColumnSidecarsByRootTopicV1)
remoteP2P := p2ptest.NewTestP2P(t)
var wg sync.WaitGroup
wg.Add(1)
ctxMap := ContextByteVersions{
[4]byte{245, 165, 253, 66}: version.Fulu,
}
root0 := verifiedRODataColumns[0].BlockRoot()
root3 := verifiedRODataColumns[3].BlockRoot()
root5 := verifiedRODataColumns[5].BlockRoot()
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
sidecars := make([]*blocks.RODataColumn, 0, 5)
for i := uint64(0); ; /* no stop condition */ i++ {
sidecar, err := readChunkedDataColumnSidecar(stream, remoteP2P, ctxMap)
if errors.Is(err, io.EOF) {
// End of stream.
break
}
require.NoError(t, err)
sidecars = append(sidecars, sidecar)
}
require.Equal(t, 5, len(sidecars))
require.Equal(t, root3, sidecars[0].BlockRoot())
require.Equal(t, root3, sidecars[1].BlockRoot())
require.Equal(t, root5, sidecars[2].BlockRoot())
require.Equal(t, root5, sidecars[3].BlockRoot())
require.Equal(t, root5, sidecars[4].BlockRoot())
require.Equal(t, uint64(4), sidecars[0].Index)
require.Equal(t, uint64(6), sidecars[1].Index)
require.Equal(t, uint64(7), sidecars[2].Index)
require.Equal(t, uint64(8), sidecars[3].Index)
require.Equal(t, uint64(9), sidecars[4].Index)
})
localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &types.DataColumnsByRootIdentifiers{
{
BlockRoot: root0[:],
Columns: []uint64{1, 2, 3},
},
{
BlockRoot: root3[:],
Columns: []uint64{4, 5, 6},
},
{
BlockRoot: root5[:],
Columns: []uint64{7, 8, 9},
},
}
err = service.dataColumnSidecarByRootRPCHandler(ctx, msg, stream)
require.NoError(t, err)
require.Equal(t, true, localP2P.Peers().Scorers().BadResponsesScorer().Score(remoteP2P.PeerID()) >= 0)
if util.WaitTimeout(&wg, 1*time.Minute) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}
func TestValidateDataColumnsByRootRequest(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
maxCols := uint64(10) // Set a small value for testing
config.MaxRequestDataColumnSidecars = maxCols
params.OverrideBeaconConfig(config)
tests := []struct {
name string
colIdents types.DataColumnsByRootIdentifiers
expectedErr error
}{
{
name: "Invalid request - multiple identifiers exceed max",
colIdents: types.DataColumnsByRootIdentifiers{
{
BlockRoot: make([]byte, fieldparams.RootLength),
Columns: make([]uint64, maxCols/2+1),
},
{
BlockRoot: make([]byte, fieldparams.RootLength),
Columns: make([]uint64, maxCols/2+1),
},
},
expectedErr: types.ErrMaxDataColumnReqExceeded,
},
{
name: "Valid request - less than max",
colIdents: types.DataColumnsByRootIdentifiers{
{
BlockRoot: make([]byte, fieldparams.RootLength),
Columns: make([]uint64, maxCols-1),
},
},
expectedErr: nil,
},
{
name: "Valid request - multiple identifiers sum to max",
colIdents: types.DataColumnsByRootIdentifiers{
{
BlockRoot: make([]byte, fieldparams.RootLength),
Columns: make([]uint64, maxCols/2),
},
{
BlockRoot: make([]byte, fieldparams.RootLength),
Columns: make([]uint64, maxCols/2),
},
},
expectedErr: nil,
},
}
// Run tests
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateDataColumnsByRootRequest(tt.colIdents)
if tt.expectedErr == nil {
require.NoError(t, err)
} else {
require.ErrorIs(t, err, tt.expectedErr)
}
})
}
}
func TestDataColumnsRPCMinValidSlot(t *testing.T) {
type testCase struct {
name string
fuluForkEpoch primitives.Epoch
minReqEpochs primitives.Epoch
currentSlot primitives.Slot
expected primitives.Slot
}
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
testCases := []testCase{
{
name: "Fulu not enabled",
fuluForkEpoch: math.MaxUint64, // Disable Fulu
minReqEpochs: 5,
currentSlot: 0,
expected: primitives.Slot(math.MaxUint64),
},
{
name: "Current epoch is before fulu fork epoch",
fuluForkEpoch: 10,
minReqEpochs: 5,
currentSlot: primitives.Slot(8 * slotsPerEpoch),
expected: primitives.Slot(10 * slotsPerEpoch),
},
{
name: "Current epoch is fulu fork epoch",
fuluForkEpoch: 10,
minReqEpochs: 5,
currentSlot: primitives.Slot(10 * slotsPerEpoch),
expected: primitives.Slot(10 * slotsPerEpoch),
},
{
name: "Current epoch between fulu fork epoch and minReqEpochs",
fuluForkEpoch: 10,
minReqEpochs: 20,
currentSlot: primitives.Slot(15 * slotsPerEpoch),
expected: primitives.Slot(10 * slotsPerEpoch),
},
{
name: "Current epoch after fulu fork epoch + minReqEpochs",
fuluForkEpoch: 10,
minReqEpochs: 5,
currentSlot: primitives.Slot(20 * slotsPerEpoch),
expected: primitives.Slot(15 * slotsPerEpoch),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = tc.fuluForkEpoch
config.MinEpochsForDataColumnSidecarsRequest = tc.minReqEpochs
params.OverrideBeaconConfig(config)
actual, err := dataColumnsRPCMinValidSlot(tc.currentSlot)
require.NoError(t, err)
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -20,8 +20,8 @@ type rpcHandlerTest struct {
s *Service
}
func (rt *rpcHandlerTest) testHandler(nh network.StreamHandler, rh rpcHandler, rhi interface{}) {
ctx, cancel := context.WithTimeout(rt.t.Context(), rt.timeout)
func (rt *rpcHandlerTest) testHandler(streamHandler network.StreamHandler, rpcHandler rpcHandler, message interface{}) {
ctx, cancel := context.WithTimeout(context.Background(), rt.timeout)
defer func() {
cancel()
}()
@@ -36,16 +36,18 @@ func (rt *rpcHandlerTest) testHandler(nh network.StreamHandler, rh rpcHandler, r
defer func() {
require.NoError(rt.t, client.Disconnect(server.PeerID()))
}()
require.Equal(rt.t, 1, len(client.BHost.Network().Peers()), "Expected peers to be connected")
h := func(stream network.Stream) {
handler := func(stream network.Stream) {
defer w.Done()
nh(stream)
streamHandler(stream)
}
server.BHost.SetStreamHandler(rt.topic, h)
server.BHost.SetStreamHandler(rt.topic, handler)
stream, err := client.BHost.NewStream(ctx, server.BHost.ID(), rt.topic)
require.NoError(rt.t, err)
err = rh(ctx, rhi, stream)
err = rpcHandler(ctx, message, stream)
if rt.err == nil {
require.NoError(rt.t, err)
} else {

View File

@@ -17,7 +17,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
)
// metaDataHandler reads the incoming metadata rpc request from the peer.
// metaDataHandler reads the incoming metadata RPC request from the peer.
func (s *Service) metaDataHandler(_ context.Context, _ interface{}, stream libp2pcore.Stream) error {
SetRPCStreamDeadlines(stream)
@@ -70,7 +70,9 @@ func (s *Service) metaDataHandler(_ context.Context, _ interface{}, stream libp2
switch streamVersion {
case p2p.SchemaVersionV1:
switch metadataVersion {
case version.Altair, version.Deneb:
case version.Altair, version.Fulu:
// If the stream version corresponds to Phase 0 but our metadata
// corresponds to Altair or Fulu, convert our metadata to the Phase 0 one.
metadata = wrapper.WrappedMetadataV0(
&pb.MetaDataV0{
Attnets: metadata.AttnetsBitfield(),
@@ -81,13 +83,18 @@ func (s *Service) metaDataHandler(_ context.Context, _ interface{}, stream libp2
case p2p.SchemaVersionV2:
switch metadataVersion {
case version.Phase0:
// If the stream version corresponds to Altair but our metadata
// corresponds to Phase 0, convert our metadata to the Altair one,
// and use a zeroed syncnets bitfield.
metadata = wrapper.WrappedMetadataV1(
&pb.MetaDataV1{
Attnets: metadata.AttnetsBitfield(),
SeqNumber: metadata.SequenceNumber(),
Syncnets: bitfield.Bitvector4{byte(0x00)},
})
case version.Deneb:
case version.Fulu:
// If the stream version corresponds to Altair but our metadata
// corresponds to Fulu, convert our metadata to the Altair one.
metadata = wrapper.WrappedMetadataV1(
&pb.MetaDataV1{
Attnets: metadata.AttnetsBitfield(),
@@ -95,6 +102,32 @@ func (s *Service) metaDataHandler(_ context.Context, _ interface{}, stream libp2
Syncnets: metadata.SyncnetsBitfield(),
})
}
case p2p.SchemaVersionV3:
switch metadataVersion {
case version.Phase0:
// If the stream version corresponds to Fulu but our metadata
// corresponds to Phase 0, convert our metadata to the Fulu one,
// and use a zeroed syncnets bitfield and custody group count.
metadata = wrapper.WrappedMetadataV2(
&pb.MetaDataV2{
Attnets: metadata.AttnetsBitfield(),
SeqNumber: metadata.SequenceNumber(),
Syncnets: bitfield.Bitvector4{byte(0x00)},
CustodyGroupCount: 0,
})
case version.Altair:
// If the stream version corresponds to Fulu but our metadata
// corresponds to Altair, convert our metadata to the Fulu one and
// use a zeroed custody group count.
metadata = wrapper.WrappedMetadataV2(
&pb.MetaDataV2{
Attnets: metadata.AttnetsBitfield(),
SeqNumber: metadata.SequenceNumber(),
Syncnets: metadata.SyncnetsBitfield(),
CustodyGroupCount: 0,
})
}
}
// Write the METADATA response into the stream.
@@ -164,12 +197,14 @@ func (s *Service) sendMetaDataRequest(ctx context.Context, peerID peer.ID) (meta
}
// Defensive check to ensure valid objects are being sent.
topicVersion := ""
var topicVersion string
switch msg.Version() {
case version.Phase0:
topicVersion = p2p.SchemaVersionV1
case version.Altair:
topicVersion = p2p.SchemaVersionV2
case version.Fulu:
topicVersion = p2p.SchemaVersionV3
}
// Validate the version of the topic.
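The topic selection above maps the local metadata version onto an RPC schema version: Phase0 uses v1, Altair v2, and Fulu v3. A minimal standalone sketch of that mapping, with illustrative constants standing in for Prysm's version and p2p packages:

package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for Prysm's version constants.
const (
	phase0 = iota
	altair
	fulu
)

// schemaVersion mirrors the switch in sendMetaDataRequest: Phase0 metadata is
// sent on the v1 topic, Altair on v2, and Fulu on v3.
func schemaVersion(metadataVersion int) (string, error) {
	switch metadataVersion {
	case phase0:
		return "/1", nil
	case altair:
		return "/2", nil
	case fulu:
		return "/3", nil
	default:
		return "", errors.New("unsupported metadata version")
	}
}

func main() {
	v, err := schemaVersion(fulu)
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // /3
}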

View File

@@ -1,6 +1,7 @@
package sync
import (
"context"
"sync"
"testing"
"time"
@@ -15,6 +16,7 @@ import (
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/encoding/ssz/equality"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -22,6 +24,7 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
"github.com/prysmaticlabs/go-bitfield"
)
func TestMetaDataRPCHandler_ReceivesMetadata(t *testing.T) {
@@ -76,158 +79,241 @@ func TestMetaDataRPCHandler_ReceivesMetadata(t *testing.T) {
}
}
func TestMetadataRPCHandler_SendsMetadata(t *testing.T) {
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
assert.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
bitfield := [8]byte{'A', 'B'}
p2.LocalMetadata = wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: 2,
Attnets: bitfield[:],
})
// Set up a head state in the database with data we expect.
chain := &mock.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}}
d := db.SetupDB(t)
r := &Service{
func createService(peer p2p.P2P, chain *mock.ChainService) *Service {
return &Service{
cfg: &config{
beaconDB: d,
p2p: p1,
chain: chain,
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
p2p: peer,
chain: chain,
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
},
rateLimiter: newRateLimiter(p1),
}
r2 := &Service{
cfg: &config{
beaconDB: d,
p2p: p2,
chain: &mock.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}},
},
rateLimiter: newRateLimiter(p2),
}
// Setup streams
pcl := protocol.ID(p2p.RPCMetaDataTopicV1 + r.cfg.p2p.Encoding().ProtocolSuffix())
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(1, 1, time.Second, false)
r2.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(1, 1, time.Second, false)
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
assert.NoError(t, r2.metaDataHandler(t.Context(), new(interface{}), stream))
})
md, err := r.sendMetaDataRequest(t.Context(), p2.BHost.ID())
assert.NoError(t, err)
if !equality.DeepEqual(md.InnerObject(), p2.LocalMetadata.InnerObject()) {
t.Fatalf("MetadataV0 unequal, received %v but wanted %v", md, p2.LocalMetadata)
}
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
conns := p1.BHost.Network().ConnsToPeer(p2.BHost.ID())
if len(conns) == 0 {
t.Error("Peer is disconnected despite receiving a valid ping")
rateLimiter: newRateLimiter(peer),
}
}
func TestMetadataRPCHandler_SendsMetadataAltair(t *testing.T) {
func TestMetadataRPCHandler_SendMetadataRequest(t *testing.T) {
const (
requestTimeout = 1 * time.Second
seqNumber = 2
custodyGroupCount = 4
)
attnets := []byte{'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'}
syncnets := []byte{0x4}
// Configure the test beacon chain.
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
beaconChainConfig := params.BeaconConfig().Copy()
beaconChainConfig.AltairForkEpoch = 5
beaconChainConfig.FuluForkEpoch = 15
params.OverrideBeaconConfig(beaconChainConfig)
params.BeaconConfig().InitializeForkSchedule()
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
assert.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
bitfield := [8]byte{'A', 'B'}
p2.LocalMetadata = wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: 2,
Attnets: bitfield[:],
})
// Compute the number of seconds in an epoch.
secondsPerEpoch := oneEpoch()
// Set up a head state in the database with data we expect.
d := db.SetupDB(t)
chain := &mock.ChainService{Genesis: time.Now().Add(-5 * oneEpoch()), ValidatorsRoot: [32]byte{}}
r := &Service{
cfg: &config{
beaconDB: d,
p2p: p1,
chain: chain,
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
testCases := []struct {
name string
topic string
epochsSinceGenesisPeer1, epochsSinceGenesisPeer2 int
metadataPeer2, expected metadata.Metadata
}{
{
name: "Phase0-Phase0",
topic: p2p.RPCMetaDataTopicV1,
epochsSinceGenesisPeer1: 0,
epochsSinceGenesisPeer2: 0,
metadataPeer2: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNumber,
Attnets: attnets,
}),
expected: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNumber,
Attnets: attnets,
}),
},
rateLimiter: newRateLimiter(p1),
}
chain2 := &mock.ChainService{Genesis: time.Now().Add(-5 * oneEpoch()), ValidatorsRoot: [32]byte{}}
r2 := &Service{
cfg: &config{
beaconDB: d,
p2p: p2,
chain: chain2,
clock: startup.NewClock(chain2.Genesis, chain2.ValidatorsRoot),
{
name: "Phase0-Altair",
topic: p2p.RPCMetaDataTopicV1,
epochsSinceGenesisPeer1: 0,
epochsSinceGenesisPeer2: 5,
metadataPeer2: wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
}),
expected: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNumber,
Attnets: attnets,
}),
},
{
name: "Phase0-Fulu",
topic: p2p.RPCMetaDataTopicV1,
epochsSinceGenesisPeer1: 0,
epochsSinceGenesisPeer2: 15,
metadataPeer2: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
CustodyGroupCount: custodyGroupCount,
}),
expected: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNumber,
Attnets: attnets,
}),
},
{
name: "Altair-Phase0",
topic: p2p.RPCMetaDataTopicV2,
epochsSinceGenesisPeer1: 5,
epochsSinceGenesisPeer2: 0,
metadataPeer2: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNumber,
Attnets: attnets,
}),
expected: wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: bitfield.Bitvector4{byte(0x00)},
}),
},
{
name: "Altair-Altair",
topic: p2p.RPCMetaDataTopicV2,
epochsSinceGenesisPeer1: 5,
epochsSinceGenesisPeer2: 5,
metadataPeer2: wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
}),
expected: wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
}),
},
{
name: "Altair-Fulu",
topic: p2p.RPCMetaDataTopicV2,
epochsSinceGenesisPeer1: 5,
epochsSinceGenesisPeer2: 15,
metadataPeer2: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
CustodyGroupCount: custodyGroupCount,
}),
expected: wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
}),
},
{
name: "Fulu-Phase0",
topic: p2p.RPCMetaDataTopicV3,
epochsSinceGenesisPeer1: 15,
epochsSinceGenesisPeer2: 0,
metadataPeer2: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNumber,
Attnets: attnets,
}),
expected: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: bitfield.Bitvector4{byte(0x00)},
CustodyGroupCount: 0,
}),
},
{
name: "Fulu-Altair",
topic: p2p.RPCMetaDataTopicV3,
epochsSinceGenesisPeer1: 15,
epochsSinceGenesisPeer2: 5,
metadataPeer2: wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
}),
expected: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
CustodyGroupCount: 0,
}),
},
{
name: "Fulu-Fulu",
topic: p2p.RPCMetaDataTopicV3,
epochsSinceGenesisPeer1: 15,
epochsSinceGenesisPeer2: 15,
metadataPeer2: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
CustodyGroupCount: custodyGroupCount,
}),
expected: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: seqNumber,
Attnets: attnets,
Syncnets: syncnets,
CustodyGroupCount: custodyGroupCount,
}),
},
rateLimiter: newRateLimiter(p2),
}
// Setup streams
pcl := protocol.ID(p2p.RPCMetaDataTopicV2 + r.cfg.p2p.Encoding().ProtocolSuffix())
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(2, 2, time.Second, false)
r2.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(2, 2, time.Second, false)
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
var wg sync.WaitGroup
var wg sync.WaitGroup
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
err := r2.metaDataHandler(t.Context(), new(interface{}), stream)
assert.NoError(t, err)
})
ctx := context.Background()
_, err := r.sendMetaDataRequest(t.Context(), p2.BHost.ID())
assert.NoError(t, err)
// Setup and connect peers.
peer1, peer2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
peer1.Connect(peer2)
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
// Ensure the peers are connected.
peersCount := len(peer1.BHost.Network().Peers())
require.Equal(t, 1, peersCount, "Expected peers to be connected")
// Fix up peer with the correct metadata.
p2.LocalMetadata = wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: 2,
Attnets: bitfield[:],
Syncnets: []byte{0x0},
})
// Setup sync services.
genesisPeer1 := time.Now().Add(-time.Duration(tc.epochsSinceGenesisPeer1) * secondsPerEpoch)
genesisPeer2 := time.Now().Add(-time.Duration(tc.epochsSinceGenesisPeer2) * secondsPerEpoch)
wg.Add(1)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
assert.NoError(t, r2.metaDataHandler(t.Context(), new(interface{}), stream))
})
chainPeer1 := &mock.ChainService{Genesis: genesisPeer1, ValidatorsRoot: [32]byte{}}
chainPeer2 := &mock.ChainService{Genesis: genesisPeer2, ValidatorsRoot: [32]byte{}}
md, err := r.sendMetaDataRequest(t.Context(), p2.BHost.ID())
assert.NoError(t, err)
servicePeer1 := createService(peer1, chainPeer1)
servicePeer2 := createService(peer2, chainPeer2)
if !equality.DeepEqual(md.InnerObject(), p2.LocalMetadata.InnerObject()) {
t.Fatalf("MetadataV1 unequal, received %v but wanted %v", md, p2.LocalMetadata)
}
// Define the behavior of peer2 when receiving a METADATA request.
protocolSuffix := servicePeer2.cfg.p2p.Encoding().ProtocolSuffix()
protocolID := protocol.ID(tc.topic + protocolSuffix)
peer2.LocalMetadata = tc.metadataPeer2
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
wg.Add(1)
peer2.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
err := servicePeer2.metaDataHandler(ctx, new(interface{}), stream)
require.NoError(t, err)
})
conns := p1.BHost.Network().ConnsToPeer(p2.BHost.ID())
if len(conns) == 0 {
t.Error("Peer is disconnected despite receiving a valid ping")
// Send a METADATA request from peer1 to peer2.
actual, err := servicePeer1.sendMetaDataRequest(ctx, peer2.BHost.ID())
require.NoError(t, err)
// Wait until the METADATA request is received by peer2 or timeout.
timeOutReached := util.WaitTimeout(&wg, requestTimeout)
require.Equal(t, false, timeOutReached, "Did not receive METADATA request within timeout")
// Compare the received METADATA object with the expected METADATA object.
require.DeepSSZEqual(t, tc.expected.InnerObject(), actual.InnerObject(), "Metadata unequal")
// Ensure the peers are still connected.
peersCount = len(peer1.BHost.Network().Peers())
assert.Equal(t, 1, peersCount, "Expected peers to be connected")
})
}
}

View File

@@ -4,12 +4,14 @@ import (
"context"
"fmt"
"io"
"slices"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -30,17 +32,24 @@ var errBlobUnmarshal = errors.New("Could not unmarshal chunk-encoded blob")
// Any error from the following declaration block should result in peer downscoring.
var (
// ErrInvalidFetchedData is used to signal that an error occurred which should result in peer downscoring.
ErrInvalidFetchedData = errors.New("invalid data returned from peer")
errBlobIndexOutOfBounds = errors.Wrap(verification.ErrBlobInvalid, "blob index out of range")
errMaxRequestBlobSidecarsExceeded = errors.Wrap(verification.ErrBlobInvalid, "peer exceeded req blob chunk tx limit")
errChunkResponseSlotNotAsc = errors.Wrap(verification.ErrBlobInvalid, "blob slot not higher than previous block root")
errChunkResponseIndexNotAsc = errors.Wrap(verification.ErrBlobInvalid, "blob indices for a block must start at 0 and increase by 1")
errUnrequested = errors.Wrap(verification.ErrBlobInvalid, "received BlobSidecar in response that was not requested")
errBlobResponseOutOfBounds = errors.Wrap(verification.ErrBlobInvalid, "received BlobSidecar with slot outside BlobSidecarsByRangeRequest bounds")
errChunkResponseBlockMismatch = errors.Wrap(verification.ErrBlobInvalid, "blob block details do not match")
errChunkResponseParentMismatch = errors.Wrap(verification.ErrBlobInvalid, "parent root for response element doesn't match previous element root")
ErrInvalidFetchedData = errors.New("invalid data returned from peer")
errBlobIndexOutOfBounds = errors.Wrap(verification.ErrBlobInvalid, "blob index out of range")
errMaxRequestBlobSidecarsExceeded = errors.Wrap(verification.ErrBlobInvalid, "peer exceeded req blob chunk tx limit")
errChunkResponseSlotNotAsc = errors.Wrap(verification.ErrBlobInvalid, "blob slot not higher than previous block root")
errChunkResponseIndexNotAsc = errors.Wrap(verification.ErrBlobInvalid, "blob indices for a block must start at 0 and increase by 1")
errUnrequested = errors.Wrap(verification.ErrBlobInvalid, "received BlobSidecar in response that was not requested")
errBlobResponseOutOfBounds = errors.Wrap(verification.ErrBlobInvalid, "received BlobSidecar with slot outside BlobSidecarsByRangeRequest bounds")
errChunkResponseBlockMismatch = errors.Wrap(verification.ErrBlobInvalid, "blob block details do not match")
errChunkResponseParentMismatch = errors.Wrap(verification.ErrBlobInvalid, "parent root for response element doesn't match previous element root")
errDataColumnChunkedReadFailure = errors.New("failed to read stream of chunk-encoded data columns")
errMaxRequestDataColumnSidecarsExceeded = errors.New("count of requested data column sidecars exceeds MAX_REQUEST_DATA_COLUMN_SIDECARS")
errMaxResponseDataColumnSidecarsExceeded = errors.New("peer returned more data column sidecars than requested")
)
// ------
// Blocks
// ------
// BeaconBlockProcessor defines a block processing function, which allows to start utilizing
// blocks even before all blocks are ready.
type BeaconBlockProcessor func(block interfaces.ReadOnlySignedBeaconBlock) error
@@ -154,6 +163,14 @@ func SendBeaconBlocksByRootRequest(
return blocks, nil
}
// -------------
// Blob sidecars
// -------------
// BlobResponseValidation represents a function that can validate aspects of a single unmarshaled blob sidecar
// that was received from a peer in response to an rpc request.
type BlobResponseValidation func(blocks.ROBlob) error
func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle, p2pApi p2p.SenderEncoder, pid peer.ID, ctxMap ContextByteVersions, req *ethpb.BlobSidecarsByRangeRequest, bvs ...BlobResponseValidation) ([]blocks.ROBlob, error) {
topic, err := p2p.TopicFromMessage(p2p.BlobSidecarsByRangeName, slots.ToEpoch(tor.CurrentSlot()))
if err != nil {
@@ -215,10 +232,6 @@ func SendBlobSidecarByRoot(
return readChunkEncodedBlobs(stream, p2pApi.Encoding(), ctxMap, blobValidatorFromRootReq(req), max)
}
// BlobResponseValidation represents a function that can validate aspects of a single unmarshaled blob
// that was received from a peer in response to an rpc request.
type BlobResponseValidation func(blocks.ROBlob) error
func composeBlobValidations(vf ...BlobResponseValidation) BlobResponseValidation {
return func(blob blocks.ROBlob) error {
for i := range vf {
@@ -383,3 +396,308 @@ func readChunkedBlobSidecar(stream network.Stream, encoding encoder.NetworkEncod
return rob, nil
}
// --------------------
// Data column sidecars
// --------------------
// SendDataColumnSidecarsByRangeRequest sends a request for data column sidecars by range
// and returns the fetched data column sidecars.
func SendDataColumnSidecarsByRangeRequest(
ctx context.Context,
tor blockchain.TemporalOracle,
p2pApi p2p.P2P,
pid peer.ID,
ctxMap ContextByteVersions,
request *ethpb.DataColumnSidecarsByRangeRequest,
) ([]blocks.RODataColumn, error) {
// Return early if nothing to request.
if request == nil || request.Count == 0 || len(request.Columns) == 0 {
return nil, nil
}
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
maxRequestDataColumnSidecars := params.BeaconConfig().MaxRequestDataColumnSidecars
// Check if we do not request too many sidecars.
columnsCount := uint64(len(request.Columns))
totalCount := request.Count * columnsCount
if totalCount > maxRequestDataColumnSidecars {
return nil, errors.Wrapf(errMaxRequestDataColumnSidecarsExceeded, "requestedCount=%d, allowedCount=%d", totalCount, maxRequestDataColumnSidecars)
}
// Build the topic.
currentSlot := tor.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
topic, err := p2p.TopicFromMessage(p2p.DataColumnSidecarsByRangeName, currentEpoch)
if err != nil {
return nil, errors.Wrap(err, "topic from message")
}
// Build the logs.
var columnsLog interface{} = "all"
if columnsCount < numberOfColumns {
columns := request.Columns
slices.Sort(columns)
columnsLog = columns
}
log := log.WithFields(logrus.Fields{
"peer": pid,
"topic": topic,
"startSlot": request.StartSlot,
"count": request.Count,
"columns": columnsLog,
"totalCount": totalCount,
})
// Send the request.
stream, err := p2pApi.Send(ctx, request, topic, pid)
if err != nil {
return nil, errors.Wrap(err, "p2p send")
}
defer closeStream(stream, log)
// Read the data column sidecars from the stream.
roDataColumns := make([]blocks.RODataColumn, 0, totalCount)
for range totalCount {
// Avoid reading extra chunks if the context is done.
if err := ctx.Err(); err != nil {
return nil, err
}
validatorSlotWithinBounds, err := isSidecarSlotWithinBounds(request)
if err != nil {
return nil, errors.Wrap(err, "is sidecar slot within bounds")
}
roDataColumn, err := readChunkedDataColumnSidecar(
stream, p2pApi, ctxMap,
validatorSlotWithinBounds,
isSidecarIndexRequested(request),
)
if errors.Is(err, io.EOF) {
return roDataColumns, nil
}
if err != nil {
return nil, errors.Wrap(err, "read chunked data column sidecar")
}
if roDataColumn == nil {
return nil, errors.New("nil data column sidecar, should never happen")
}
roDataColumns = append(roDataColumns, *roDataColumn)
}
// All requested sidecars were delivered by the peer. Expecting EOF.
if _, err := readChunkedDataColumnSidecar(stream, p2pApi, ctxMap); !errors.Is(err, io.EOF) {
return nil, errors.Wrapf(errMaxResponseDataColumnSidecarsExceeded, "requestedCount=%d", totalCount)
}
return roDataColumns, nil
}
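The guard at the top of SendDataColumnSidecarsByRangeRequest bounds the request size to Count multiplied by the number of requested columns. A minimal sketch of that check, assuming an illustrative limit in place of the configured MaxRequestDataColumnSidecars:

package main

import (
	"errors"
	"fmt"
)

// An illustrative limit standing in for the configured MaxRequestDataColumnSidecars.
const maxRequestDataColumnSidecars = 16384

var errTooManySidecars = errors.New("requested data column sidecar count exceeds the maximum")

// checkByRangeRequestSize mirrors the guard at the top of the by-range sender:
// the total number of sidecars implied by the request is slots x columns.
func checkByRangeRequestSize(slotCount uint64, columns []uint64) error {
	total := slotCount * uint64(len(columns))
	if total > maxRequestDataColumnSidecars {
		return fmt.Errorf("%w: requested %d, max %d", errTooManySidecars, total, maxRequestDataColumnSidecars)
	}
	return nil
}

func main() {
	// 128 slots x 8 columns = 1024 sidecars, well under the illustrative limit.
	if err := checkByRangeRequestSize(128, []uint64{0, 1, 2, 3, 4, 5, 6, 7}); err != nil {
		panic(err)
	}
	fmt.Println("request size ok")
}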
// isSidecarSlotWithinBounds verifies that the slot of the data column sidecar is within the bounds of the request.
func isSidecarSlotWithinBounds(request *ethpb.DataColumnSidecarsByRangeRequest) (DataColumnResponseValidation, error) {
// endSlot is exclusive (while request.StartSlot is inclusive).
endSlot, err := request.StartSlot.SafeAdd(request.Count)
if err != nil {
return nil, errors.Wrap(err, "calculate end slot")
}
validator := func(sidecar blocks.RODataColumn) error {
slot := sidecar.Slot()
if !(request.StartSlot <= slot && slot < endSlot) {
return errors.Errorf("data column sidecar slot %d out of range [%d, %d[", slot, request.StartSlot, endSlot)
}
return nil
}
return validator, nil
}
// isSidecarIndexRequested verifies that the index of the data column sidecar is found in the requested indices.
func isSidecarIndexRequested(request *ethpb.DataColumnSidecarsByRangeRequest) DataColumnResponseValidation {
requestedIndices := make(map[uint64]bool)
for _, col := range request.Columns {
requestedIndices[col] = true
}
return func(sidecar blocks.RODataColumn) error {
columnIndex := sidecar.Index
if !requestedIndices[columnIndex] {
return errors.Errorf("data column sidecar index %d not found in requested indices", columnIndex)
}
return nil
}
}
// SendDataColumnSidecarsByRootRequest sends a request for data column sidecars by root
// and returns the fetched data column sidecars.
func SendDataColumnSidecarsByRootRequest(
ctx context.Context,
tor blockchain.TemporalOracle,
p2pApi p2p.P2P,
pid peer.ID,
ctxMap ContextByteVersions,
request p2ptypes.DataColumnsByRootIdentifiers,
) ([]blocks.RODataColumn, error) {
// Return early if the request is nil.
if request == nil {
return nil, nil
}
// Compute how many sidecars are requested.
count := uint64(0)
for _, identifier := range request {
count += uint64(len(identifier.Columns))
}
// Return early if nothing to request.
if count == 0 {
return nil, nil
}
// Verify that the request count is within the maximum allowed.
maxRequestDataColumnSidecars := params.BeaconConfig().MaxRequestDataColumnSidecars
if count > maxRequestDataColumnSidecars {
return nil, errors.Wrapf(errMaxRequestDataColumnSidecarsExceeded, "current: %d, max: %d", count, maxRequestDataColumnSidecars)
}
// Get the topic for the request.
topic, err := p2p.TopicFromMessage(p2p.DataColumnSidecarsByRootName, slots.ToEpoch(tor.CurrentSlot()))
if err != nil {
return nil, errors.Wrap(err, "topic from message")
}
// Send the request to the peer.
stream, err := p2pApi.Send(ctx, request, topic, pid)
if err != nil {
return nil, errors.Wrap(err, "p2p api send")
}
defer closeStream(stream, log)
// Read the data column sidecars from the stream.
roDataColumns := make([]blocks.RODataColumn, 0, count)
for range count {
roDataColumn, err := readChunkedDataColumnSidecar(stream, p2pApi, ctxMap, isSidecarIndexRootRequested(request))
if errors.Is(err, io.EOF) {
return roDataColumns, nil
}
if err != nil {
return nil, errors.Wrap(err, "read chunked data column sidecar")
}
if roDataColumn == nil {
return nil, errors.Wrap(err, "nil data column sidecar, should never happen")
}
roDataColumns = append(roDataColumns, *roDataColumn)
}
// All requested sidecars were delivered by the peer. Expecting EOF.
if _, err := readChunkedDataColumnSidecar(stream, p2pApi, ctxMap); !errors.Is(err, io.EOF) {
return nil, errors.Wrapf(errMaxResponseDataColumnSidecarsExceeded, "requestedCount=%d", count)
}
return roDataColumns, nil
}
func isSidecarIndexRootRequested(request p2ptypes.DataColumnsByRootIdentifiers) DataColumnResponseValidation {
columnsIndexFromRoot := make(map[[fieldparams.RootLength]byte]map[uint64]bool)
for _, sidecar := range request {
blockRoot := bytesutil.ToBytes32(sidecar.BlockRoot)
if columnsIndexFromRoot[blockRoot] == nil {
columnsIndexFromRoot[blockRoot] = make(map[uint64]bool)
}
for _, column := range sidecar.Columns {
columnsIndexFromRoot[blockRoot][column] = true
}
}
return func(sidecar blocks.RODataColumn) error {
root, index := sidecar.BlockRoot(), sidecar.Index
indices, ok := columnsIndexFromRoot[root]
if !ok {
return errors.Errorf("root #%x returned by peer but not requested", root)
}
if !indices[index] {
return errors.Errorf("index %d for root #%x returned by peer but not requested", index, root)
}
return nil
}
}
// DataColumnResponseValidation represents a function that can validate aspects of a single unmarshaled data column sidecar
// that was received from a peer in response to an rpc request.
type DataColumnResponseValidation func(column blocks.RODataColumn) error
func readChunkedDataColumnSidecar(
stream network.Stream,
p2pApi p2p.P2P,
ctxMap ContextByteVersions,
validationFunctions ...DataColumnResponseValidation,
) (*blocks.RODataColumn, error) {
// Read the status code from the stream.
statusCode, errMessage, err := ReadStatusCode(stream, p2pApi.Encoding())
if err != nil {
return nil, errors.Wrap(err, "read status code")
}
if statusCode != 0 {
return nil, errors.Wrap(errDataColumnChunkedReadFailure, errMessage)
}
// Retrieve the fork digest.
ctxBytes, err := readContextFromStream(stream)
if err != nil {
return nil, errors.Wrap(err, "read context from stream")
}
// Check if the fork digest is recognized.
msgVersion, ok := ctxMap[bytesutil.ToBytes4(ctxBytes)]
if !ok {
return nil, errors.Errorf("unrecognized fork digest %#x", ctxBytes)
}
// Check if we are on Fulu.
if msgVersion < version.Fulu {
return nil, errors.Errorf(
"unexpected context bytes for DataColumnSidecar, ctx=%#x, msgVersion=%v, minimalSupportedVersion=%v",
ctxBytes, version.String(msgVersion), version.String(version.Fulu),
)
}
// Decode the data column sidecar from the stream.
dataColumnSidecar := new(ethpb.DataColumnSidecar)
if err := p2pApi.Encoding().DecodeWithMaxLength(stream, dataColumnSidecar); err != nil {
return nil, errors.Wrap(err, "failed to decode the protobuf-encoded BlobSidecar message from RPC chunk stream")
}
// Create a read-only data column from the data column sidecar.
roDataColumn, err := blocks.NewRODataColumn(dataColumnSidecar)
if err != nil {
return nil, errors.Wrap(err, "new read only data column")
}
// Run validation functions.
for _, validationFunction := range validationFunctions {
if err := validationFunction(roDataColumn); err != nil {
return nil, errors.Wrap(err, "validation function")
}
}
return &roDataColumn, nil
}
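readChunkedDataColumnSidecar applies every supplied DataColumnResponseValidation to each decoded chunk and fails fast on the first error. A minimal sketch of that fan-out, assuming a simplified sidecar type instead of blocks.RODataColumn:

package main

import (
	"errors"
	"fmt"
)

// sidecar is a simplified stand-in for blocks.RODataColumn, carrying only the
// fields the illustrative validators below look at.
type sidecar struct {
	slot  uint64
	index uint64
}

type validation func(sidecar) error

// runValidations applies each validation in order and stops at the first
// failure, mirroring how readChunkedDataColumnSidecar treats its
// DataColumnResponseValidation arguments for every decoded chunk.
func runValidations(sc sidecar, validations ...validation) error {
	for _, v := range validations {
		if err := v(sc); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	withinRange := func(sc sidecar) error {
		if sc.slot < 10 || sc.slot >= 20 {
			return errors.New("slot out of requested range")
		}
		return nil
	}
	requestedIndex := func(sc sidecar) error {
		if sc.index != 3 {
			return errors.New("index was not requested")
		}
		return nil
	}
	fmt.Println(runValidations(sidecar{slot: 15, index: 3}, withinRange, requestedIndex)) // <nil>
	fmt.Println(runValidations(sidecar{slot: 25, index: 3}, withinRange, requestedIndex)) // slot out of requested range
}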

View File

@@ -5,12 +5,15 @@ import (
"errors"
"fmt"
"io"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
p2pTypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -20,6 +23,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -882,3 +886,745 @@ func TestSendBlobsByRangeRequest(t *testing.T) {
func TestErrInvalidFetchedDataDistinction(t *testing.T) {
require.Equal(t, false, errors.Is(ErrInvalidFetchedData, verification.ErrBlobInvalid))
}
func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
nilTestCases := []struct {
name string
request *ethpb.DataColumnSidecarsByRangeRequest
}{
{
name: "nil request",
request: nil,
},
{
name: "count is 0",
request: &ethpb.DataColumnSidecarsByRangeRequest{},
},
{
name: "columns is nil",
request: &ethpb.DataColumnSidecarsByRangeRequest{Count: 1},
},
}
for _, tc := range nilTestCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := SendDataColumnSidecarsByRangeRequest(t.Context(), nil, nil, "aRandomPID", nil, tc.request)
require.NoError(t, err)
require.IsNil(t, actual)
})
}
t.Run("too many columns in request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 0
params.OverrideBeaconConfig(beaconConfig)
request := &ethpb.DataColumnSidecarsByRangeRequest{Count: 1, Columns: []uint64{1, 2, 3}}
_, err := SendDataColumnSidecarsByRangeRequest(t.Context(), nil, nil, "aRandomPID", nil, request)
require.ErrorContains(t, errMaxRequestDataColumnSidecarsExceeded.Error(), err)
})
type slotIndex struct {
Slot primitives.Slot
Index uint64
}
createSidecar := func(slotIndex slotIndex) *ethpb.DataColumnSidecar {
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
return &ethpb.DataColumnSidecar{
Index: slotIndex.Index,
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: slotIndex.Slot,
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
}
testCases := []struct {
name string
slotIndices []slotIndex
expectedError error
}{
{
name: "too many responses",
slotIndices: []slotIndex{
{Slot: 0, Index: 1},
{Slot: 0, Index: 2},
{Slot: 0, Index: 3},
{Slot: 1, Index: 1},
{Slot: 1, Index: 2},
{Slot: 1, Index: 3},
{Slot: 0, Index: 3}, // Duplicate
},
expectedError: errMaxResponseDataColumnSidecarsExceeded,
},
{
name: "perfect match",
slotIndices: []slotIndex{
{Slot: 0, Index: 1},
{Slot: 0, Index: 2},
{Slot: 0, Index: 3},
{Slot: 1, Index: 1},
{Slot: 1, Index: 2},
{Slot: 1, Index: 3},
},
},
{
name: "few responses than maximum possible",
slotIndices: []slotIndex{
{Slot: 0, Index: 1},
{Slot: 0, Index: 2},
{Slot: 0, Index: 3},
{Slot: 1, Index: 1},
{Slot: 1, Index: 2},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
protocol := fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1)
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
p1.Connect(p2)
expected := make([]*ethpb.DataColumnSidecar, 0, len(tc.slotIndices))
for _, slotIndex := range tc.slotIndices {
sidecar := createSidecar(slotIndex)
expected = append(expected, sidecar)
}
requestSent := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 0,
Count: 2,
Columns: []uint64{1, 3, 2},
}
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(protocol, func(stream network.Stream) {
wg.Done()
requestReceived := new(ethpb.DataColumnSidecarsByRangeRequest)
err := p2.Encoding().DecodeWithMaxLength(stream, requestReceived)
assert.NoError(t, err)
assert.DeepSSZEqual(t, requestSent, requestReceived)
for _, sidecar := range expected {
err := WriteDataColumnSidecarChunk(stream, clock, p2.Encoding(), sidecar)
assert.NoError(t, err)
}
err = stream.CloseWrite()
assert.NoError(t, err)
})
ctx := t.Context()
ctxMap := ContextByteVersions{[4]byte{245, 165, 253, 66}: version.Fulu}
actual, err := SendDataColumnSidecarsByRangeRequest(ctx, clock, p1, p2.PeerID(), ctxMap, requestSent)
if tc.expectedError != nil {
require.ErrorContains(t, tc.expectedError.Error(), err)
if util.WaitTimeout(&wg, time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
return
}
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.DeepSSZEqual(t, expected[i], actual[i].DataColumnSidecar)
}
})
}
}
func TestIsSidecarSlotWithinBounds(t *testing.T) {
request := &ethpb.DataColumnSidecarsByRangeRequest{
StartSlot: 10,
Count: 10,
}
validator, err := isSidecarSlotWithinBounds(request)
require.NoError(t, err)
testCases := []struct {
name string
slot primitives.Slot
isErrorExpected bool
}{
{
name: "too soon",
slot: 9,
isErrorExpected: true,
},
{
name: "too late",
slot: 20,
isErrorExpected: true,
},
{
name: "within bounds",
slot: 15,
isErrorExpected: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
sidecarPb := &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: tc.slot,
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecar, err := blocks.NewRODataColumn(sidecarPb)
require.NoError(t, err)
err = validator(sidecar)
if tc.isErrorExpected {
require.NotNil(t, err)
return
}
require.NoError(t, err)
})
}
}
func TestIsSidecarIndexRequested(t *testing.T) {
request := &ethpb.DataColumnSidecarsByRangeRequest{
Columns: []uint64{2, 9, 4},
}
validator := isSidecarIndexRequested(request)
testCases := []struct {
name string
index uint64
isErrorExpected bool
}{
{
name: "not requested",
index: 1,
isErrorExpected: true,
},
{
name: "requested",
index: 9,
isErrorExpected: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
sidecarPb := &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
Slot: 0,
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
Index: tc.index,
}
sidecar, err := blocks.NewRODataColumn(sidecarPb)
require.NoError(t, err)
err = validator(sidecar)
if tc.isErrorExpected {
require.NotNil(t, err)
return
}
require.NoError(t, err)
})
}
}
func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
nilTestCases := []struct {
name string
request p2ptypes.DataColumnsByRootIdentifiers
}{
{
name: "nil request",
request: nil,
},
{
name: "count is 0",
request: p2ptypes.DataColumnsByRootIdentifiers{{}, {}},
},
}
for _, tc := range nilTestCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := SendDataColumnSidecarsByRootRequest(t.Context(), nil, nil, "aRandomPID", nil, tc.request)
require.NoError(t, err)
require.IsNil(t, actual)
})
}
t.Run("too many columns in request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 4
params.OverrideBeaconConfig(beaconConfig)
request := p2ptypes.DataColumnsByRootIdentifiers{
{Columns: []uint64{1, 2, 3}},
{Columns: []uint64{4, 5, 6}},
}
_, err := SendDataColumnSidecarsByRootRequest(t.Context(), nil, nil, "aRandomPID", nil, request)
require.ErrorContains(t, errMaxRequestDataColumnSidecarsExceeded.Error(), err)
})
type slotIndex struct {
Slot primitives.Slot
Index uint64
}
createSidecar := func(rootIndex slotIndex) blocks.RODataColumn {
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
sidecarPb := &ethpb.DataColumnSidecar{
Index: rootIndex.Index,
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
roSidecar, err := blocks.NewRODataColumn(sidecarPb)
require.NoError(t, err)
return roSidecar
}
testCases := []struct {
name string
slotIndices []slotIndex
expectedError error
}{
{
name: "too many responses",
slotIndices: []slotIndex{
{Slot: 1, Index: 1},
{Slot: 1, Index: 2},
{Slot: 1, Index: 3},
{Slot: 2, Index: 1},
{Slot: 2, Index: 2},
{Slot: 2, Index: 3},
{Slot: 1, Index: 3}, // Duplicate
},
expectedError: errMaxResponseDataColumnSidecarsExceeded,
},
{
name: "perfect match",
slotIndices: []slotIndex{
{Slot: 1, Index: 1},
{Slot: 1, Index: 2},
{Slot: 1, Index: 3},
{Slot: 2, Index: 1},
{Slot: 2, Index: 2},
{Slot: 2, Index: 3},
},
},
{
name: "few responses than maximum possible",
slotIndices: []slotIndex{
{Slot: 1, Index: 1},
{Slot: 1, Index: 2},
{Slot: 1, Index: 3},
{Slot: 2, Index: 1},
{Slot: 2, Index: 2},
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
protocol := fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRootTopicV1)
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
p1.Connect(p2)
expected := make([]blocks.RODataColumn, 0, len(tc.slotIndices))
for _, slotIndex := range tc.slotIndices {
roSidecar := createSidecar(slotIndex)
expected = append(expected, roSidecar)
}
blockRoot1, blockRoot2 := expected[0].BlockRoot(), expected[3].BlockRoot()
sentRequest := p2ptypes.DataColumnsByRootIdentifiers{
{BlockRoot: blockRoot1[:], Columns: []uint64{1, 2, 3}},
{BlockRoot: blockRoot2[:], Columns: []uint64{1, 2, 3}},
}
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(protocol, func(stream network.Stream) {
wg.Done()
requestReceived := new(p2ptypes.DataColumnsByRootIdentifiers)
err := p2.Encoding().DecodeWithMaxLength(stream, requestReceived)
assert.NoError(t, err)
require.Equal(t, len(sentRequest), len(*requestReceived))
for i := range sentRequest {
require.DeepSSZEqual(t, (sentRequest)[i], (*requestReceived)[i])
}
for _, sidecar := range expected {
err := WriteDataColumnSidecarChunk(stream, clock, p2.Encoding(), sidecar.DataColumnSidecar)
assert.NoError(t, err)
}
err = stream.CloseWrite()
assert.NoError(t, err)
})
ctx := t.Context()
ctxMap := ContextByteVersions{[4]byte{245, 165, 253, 66}: version.Fulu}
actual, err := SendDataColumnSidecarsByRootRequest(ctx, clock, p1, p2.PeerID(), ctxMap, sentRequest)
if tc.expectedError != nil {
require.ErrorContains(t, tc.expectedError.Error(), err)
if util.WaitTimeout(&wg, time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
return
}
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.DeepSSZEqual(t, expected[i], actual[i])
}
})
}
}
func TestIsSidecarIndexRootRequested(t *testing.T) {
testCases := []struct {
name string
root [fieldparams.RootLength]byte
index uint64
isErrorExpected bool
}{
{
name: "non requested root",
root: [fieldparams.RootLength]byte{2},
isErrorExpected: true,
},
{
name: "non requested index",
root: [fieldparams.RootLength]byte{1},
index: 3,
isErrorExpected: true,
},
{
name: "nominal",
root: [fieldparams.RootLength]byte{1},
index: 2,
isErrorExpected: false,
},
}
request := types.DataColumnsByRootIdentifiers{
{BlockRoot: []byte{1}, Columns: []uint64{1, 2}},
}
validator := isSidecarIndexRootRequested(request)
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
sidecarPb := &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
Index: tc.index,
}
// There is a discrepancy between `tc.root` and the real root,
// but we don't care about it here.
sidecar, err := blocks.NewRODataColumnWithRoot(sidecarPb, tc.root)
require.NoError(t, err)
err = validator(sidecar)
if tc.isErrorExpected {
require.NotNil(t, err)
return
}
require.NoError(t, err)
})
}
}
func TestReadChunkedDataColumnSidecar(t *testing.T) {
t.Run("non nil status code", func(t *testing.T) {
const reason = "a dummy reason"
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(p2p.RPCDataColumnSidecarsByRootTopicV1, func(stream network.Stream) {
defer wg.Done()
_, err := readChunkedDataColumnSidecar(stream, p2, nil)
require.ErrorContains(t, reason, err)
})
p1.Connect(p2)
stream, err := p1.BHost.NewStream(t.Context(), p2.PeerID(), p2p.RPCDataColumnSidecarsByRootTopicV1)
require.NoError(t, err)
writeErrorResponseToStream(responseCodeInvalidRequest, reason, stream, p1)
if util.WaitTimeout(&wg, time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
t.Run("unrecognized fork digest", func(t *testing.T) {
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(p2p.RPCDataColumnSidecarsByRootTopicV1, func(stream network.Stream) {
defer wg.Done()
_, err := readChunkedDataColumnSidecar(stream, p2, ContextByteVersions{})
require.ErrorContains(t, "unrecognized fork digest", err)
})
p1.Connect(p2)
stream, err := p1.BHost.NewStream(t.Context(), p2.PeerID(), p2p.RPCDataColumnSidecarsByRootTopicV1)
require.NoError(t, err)
_, err = stream.Write([]byte{responseCodeSuccess})
require.NoError(t, err)
err = writeContextToStream([]byte{42, 42, 42, 42}, stream)
require.NoError(t, err)
if util.WaitTimeout(&wg, time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
t.Run("before fulu", func(t *testing.T) {
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(p2p.RPCDataColumnSidecarsByRootTopicV1, func(stream network.Stream) {
defer wg.Done()
_, err := readChunkedDataColumnSidecar(stream, p2, ContextByteVersions{[4]byte{1, 2, 3, 4}: version.Phase0})
require.ErrorContains(t, "unexpected context bytes", err)
})
p1.Connect(p2)
stream, err := p1.BHost.NewStream(t.Context(), p2.PeerID(), p2p.RPCDataColumnSidecarsByRootTopicV1)
require.NoError(t, err)
_, err = stream.Write([]byte{responseCodeSuccess})
require.NoError(t, err)
err = writeContextToStream([]byte{1, 2, 3, 4}, stream)
require.NoError(t, err)
if util.WaitTimeout(&wg, time.Second) {
t.Fatal("Did not receive stream within 1 sec")
}
})
t.Run("one validation failed", func(t *testing.T) {
const reason = "a dummy reason"
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(p2p.RPCDataColumnSidecarsByRootTopicV1, func(stream network.Stream) {
defer wg.Done()
validationOne := func(column blocks.RODataColumn) error {
return nil
}
validationTwo := func(column blocks.RODataColumn) error {
return errors.New(reason)
}
_, err := readChunkedDataColumnSidecar(
stream,
p2,
ContextByteVersions{[4]byte{1, 2, 3, 4}: version.Fulu},
validationOne, // OK
validationTwo, // Fail
)
require.ErrorContains(t, reason, err)
})
p1.Connect(p2)
stream, err := p1.BHost.NewStream(t.Context(), p2.PeerID(), p2p.RPCDataColumnSidecarsByRootTopicV1)
require.NoError(t, err)
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
// Success response code.
_, err = stream.Write([]byte{responseCodeSuccess})
require.NoError(t, err)
// Fork digest.
err = writeContextToStream([]byte{1, 2, 3, 4}, stream)
require.NoError(t, err)
// Sidecar.
_, err = p1.Encoding().EncodeWithMaxLength(stream, &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
})
require.NoError(t, err)
if util.WaitTimeout(&wg, time.Minute) {
t.Fatal("Did not receive stream within 1 sec")
}
})
t.Run("nominal", func(t *testing.T) {
p1, p2 := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
const count = 4
kzgCommitmentsInclusionProof := make([][]byte, 0, count)
for range count {
kzgCommitmentsInclusionProof = append(kzgCommitmentsInclusionProof, make([]byte, 32))
}
expected := &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
var wg sync.WaitGroup
wg.Add(1)
p2.SetStreamHandler(p2p.RPCDataColumnSidecarsByRootTopicV1, func(stream network.Stream) {
defer wg.Done()
actual, err := readChunkedDataColumnSidecar(stream, p2, ContextByteVersions{[4]byte{1, 2, 3, 4}: version.Fulu})
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual.DataColumnSidecar)
})
p1.Connect(p2)
stream, err := p1.BHost.NewStream(t.Context(), p2.PeerID(), p2p.RPCDataColumnSidecarsByRootTopicV1)
require.NoError(t, err)
// Success response code.
_, err = stream.Write([]byte{responseCodeSuccess})
require.NoError(t, err)
// Fork digest.
err = writeContextToStream([]byte{1, 2, 3, 4}, stream)
require.NoError(t, err)
// Sidecar.
_, err = p1.Encoding().EncodeWithMaxLength(stream, expected)
require.NoError(t, err)
if util.WaitTimeout(&wg, time.Minute) {
t.Fatal("Did not receive stream within 1 sec")
}
})
}

View File

@@ -10,6 +10,8 @@ import (
"time"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
lru "github.com/hashicorp/golang-lru"
pubsub "github.com/libp2p/go-libp2p-pubsub"
libp2pcore "github.com/libp2p/go-libp2p/core"
@@ -102,12 +104,15 @@ type config struct {
clock *startup.Clock
stateNotifier statefeed.Notifier
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
custodyInfo *peerdas.CustodyInfo
}
// This defines the interface for interacting with block chain service
type blockchainService interface {
blockchain.BlockReceiver
blockchain.BlobReceiver
blockchain.DataColumnReceiver
blockchain.HeadFetcher
blockchain.FinalizationFetcher
blockchain.ForkFetcher
@@ -165,6 +170,8 @@ type Service struct {
newBlobVerifier verification.NewBlobVerifier
newColumnsVerifier verification.NewDataColumnsVerifier
availableBlocker coverage.AvailableBlocker
reconstructionLock sync.Mutex
reconstructionRandGen *rand.Rand
ctxMap ContextByteVersions
slasherEnabled bool
lcStore *lightClient.Store
@@ -175,15 +182,16 @@ type Service struct {
func NewService(ctx context.Context, opts ...Option) *Service {
ctx, cancel := context.WithCancel(ctx)
r := &Service{
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */),
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]ethpb.SignedAggregateAttAndProof),
signatureChan: make(chan *signatureVerifier, verifierLimit),
dataColumnLogCh: make(chan dataColumnLogEntry, 1000),
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */),
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]ethpb.SignedAggregateAttAndProof),
signatureChan: make(chan *signatureVerifier, verifierLimit),
dataColumnLogCh: make(chan dataColumnLogEntry, 1000),
reconstructionRandGen: rand.NewGenerator(),
}
for _, opt := range opts {

View File

@@ -6,12 +6,14 @@ import (
"fmt"
"reflect"
"runtime/debug"
"slices"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
@@ -188,14 +190,14 @@ func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
)
}
// New gossip topic in Fulu
// New gossip topic in Fulu.
if params.BeaconConfig().FuluForkEpoch <= epoch {
s.subscribeWithParameters(
p2p.DataColumnSubnetTopicFormat,
s.validateDataColumn,
func(context.Context, proto.Message) error { return nil },
s.dataColumnSubscriber,
digest,
func(primitives.Slot) []uint64 { return nil },
s.dataColumnSubnetIndices,
func(currentSlot primitives.Slot) []uint64 { return []uint64{} },
)
}
@@ -600,6 +602,19 @@ func (s *Service) enoughPeersAreConnected(subnetTopic string) bool {
return peersWithSubnetCount >= threshold
}
func (s *Service) dataColumnSubnetIndices(_ primitives.Slot) []uint64 {
nodeID := s.cfg.p2p.NodeID()
custodyGroupCount := s.cfg.custodyInfo.CustodyGroupSamplingSize(peerdas.Target)
nodeInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
if err != nil {
log.WithError(err).Error("Could not retrieve peer info")
return []uint64{}
}
return sliceFromMap(nodeInfo.DataColumnsSubnets, true /*sorted*/)
}
func (s *Service) persistentAndAggregatorSubnetIndices(currentSlot primitives.Slot) []uint64 {
if flags.Get().SubscribeToAllSubnets {
return sliceFromCount(params.BeaconConfig().AttestationSubnetCount)
@@ -727,3 +742,17 @@ func errorIsIgnored(err error) bool {
}
return false
}
// sliceFromMap returns the keys of a map, sorted if the optional sorted flag is set.
func sliceFromMap(m map[uint64]bool, sorted ...bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
if len(sorted) > 0 && sorted[0] {
slices.Sort(result)
}
return result
}

View File

@@ -0,0 +1,54 @@
package sync
import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
opfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
func (s *Service) dataColumnSubscriber(ctx context.Context, msg proto.Message) error {
sidecar, ok := msg.(blocks.VerifiedRODataColumn)
if !ok {
return fmt.Errorf("message was not type blocks.VerifiedRODataColumn, type=%T", msg)
}
if err := s.receiveDataColumnSidecar(ctx, sidecar); err != nil {
return errors.Wrap(err, "receive data column")
}
slot := sidecar.Slot()
proposerIndex := sidecar.ProposerIndex()
root := sidecar.BlockRoot()
if err := s.reconstructSaveBroadcastDataColumnSidecars(ctx, slot, proposerIndex, root); err != nil {
return errors.Wrap(err, "reconstruct data columns")
}
return nil
}
func (s *Service) receiveDataColumnSidecar(ctx context.Context, sidecar blocks.VerifiedRODataColumn) error {
slot := sidecar.SignedBlockHeader.Header.Slot
proposerIndex := sidecar.SignedBlockHeader.Header.ProposerIndex
columnIndex := sidecar.Index
s.setSeenDataColumnIndex(slot, proposerIndex, columnIndex)
if err := s.cfg.chain.ReceiveDataColumn(sidecar); err != nil {
return errors.Wrap(err, "receive data column")
}
s.cfg.operationNotifier.OperationFeed().Send(&feed.Event{
Type: opfeed.DataColumnSidecarReceived,
Data: &opfeed.DataColumnSidecarReceivedData{
DataColumn: &sidecar,
},
})
return nil
}

View File

@@ -15,10 +15,12 @@ func TestMain(m *testing.M) {
resetFlags := flags.Get()
flags.Init(&flags.GlobalFlags{
BlockBatchLimit: 64,
BlockBatchLimitBurstFactor: 10,
BlobBatchLimit: 32,
BlobBatchLimitBurstFactor: 2,
BlockBatchLimit: 64,
BlockBatchLimitBurstFactor: 10,
BlobBatchLimit: 32,
BlobBatchLimitBurstFactor: 2,
DataColumnBatchLimit: 4096,
DataColumnBatchLimitBurstFactor: 2,
})
defer func() {
flags.Init(resetFlags)

View File

@@ -30,16 +30,16 @@ var (
ErrBlobIndexInvalid = errors.Join(ErrBlobInvalid, errors.New("incorrect blob sidecar index"))
// errFromFutureSlot means RequireSlotNotTooEarly failed.
errFromFutureSlot = errors.Join(ErrBlobInvalid, errors.New("slot is too far in the future"))
errFromFutureSlot = errors.New("slot is too far in the future")
// errSlotNotAfterFinalized means RequireSlotAboveFinalized failed.
errSlotNotAfterFinalized = errors.Join(ErrBlobInvalid, errors.New("slot <= finalized checkpoint"))
errSlotNotAfterFinalized = errors.New("slot <= finalized checkpoint")
// ErrInvalidProposerSignature means RequireValidProposerSignature failed.
ErrInvalidProposerSignature = errors.Join(ErrBlobInvalid, errors.New("proposer signature could not be verified"))
// errSidecarParentNotSeen means RequireSidecarParentSeen failed.
errSidecarParentNotSeen = errors.Join(ErrBlobInvalid, errors.New("parent root has not been seen"))
errSidecarParentNotSeen = errors.New("parent root has not been seen")
// errSidecarParentInvalid means RequireSidecarParentValid failed.
errSidecarParentInvalid = errors.Join(ErrBlobInvalid, errors.New("parent block is not valid"))

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed the versioning bug for light client data types in the Beacon API.

View File

@@ -0,0 +1,3 @@
### Changed
- Put the initialization of the LC Store behind the `enable-light-client` flag.

View File

@@ -0,0 +1,3 @@
### Ignored
- Refactor light client bootstrap tests in the RPC package.

View File

@@ -0,0 +1,3 @@
### Ignored
- Refactor light client kv tests.

View File

@@ -0,0 +1,7 @@
### Added
- New `ssz-only` flag for the validator client to enable calling REST APIs with SSZ, starting with the get block endpoint.
### Changed
- When the REST API is enabled, the get block API defaults to requesting and receiving SSZ instead of JSON; JSON is the fallback.

View File

@@ -0,0 +1,2 @@
## Added
- Methods to generically encode/decode independent lists of SSZ values.

View File

@@ -0,0 +1,2 @@
### Fixed
- `--chain-config-file`: Do not use mainnet boot nodes anymore.

View File

@@ -0,0 +1,2 @@
### Added
- Implement the beacon API blob sidecar endpoint for Fulu.

View File

@@ -0,0 +1,2 @@
### Added
- Implement `dataColumnSidecarsByRangeRPCHandler`.

View File

@@ -0,0 +1,2 @@
### Added
- Implement `dataColumnSidecarByRootRPCHandler`.

View File

@@ -0,0 +1,2 @@
### Fixed
- Non-deterministic output order of `dataColumnSidecarByRootRPCHandler`.

View File

@@ -0,0 +1,2 @@
### Added
- PeerDAS: Implement the new Fulu Metadata.

View File

@@ -0,0 +1,2 @@
### Added
- PeerDAS: Implement reconstruction.

View File

@@ -0,0 +1,3 @@
### Added
- Implement `SendDataColumnSidecarsByRangeRequest`.
- Implement `SendDataColumnSidecarsByRootRequest`.

View File

@@ -0,0 +1,9 @@
### Changed
- In `TopicFromMessage`: Do not assume anymore that all Fulu-specific topics are V3 only.
- `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests.
### Added
- New `StatusV2` proto message.
### Removed
- Unused `DataColumnIdentifier` proto message.

changelog/tt_milk.md Normal file
View File

@@ -0,0 +1,3 @@
### Removed
- Remove deposit count from sync new block log

changelog/tt_steak.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Remove "invalid" from logs for incoming blob sidecar that is missing parent or out of range slot

View File

@@ -212,6 +212,18 @@ var (
Usage: "The factor by which blob batch limit may increase on burst.",
Value: 3,
}
// DataColumnBatchLimit specifies the requested data column batch size.
DataColumnBatchLimit = &cli.IntFlag{
Name: "data-column-batch-limit",
Usage: "The amount of data columns the local peer is bounded to request and respond to in a batch.",
Value: 4096,
}
// DataColumnBatchLimitBurstFactor specifies the factor by which data column batch size may increase.
DataColumnBatchLimitBurstFactor = &cli.IntFlag{
Name: "data-column-batch-limit-burst-factor",
Usage: "The factor by which data column batch limit may increase on burst.",
Value: 2,
}
// DisableDebugRPCEndpoints disables the debug Beacon API namespace.
DisableDebugRPCEndpoints = &cli.BoolFlag{
Name: "disable-debug-rpc-endpoints",

View File

@@ -8,15 +8,17 @@ import (
// GlobalFlags specifies all the global flags for the
// beacon node.
type GlobalFlags struct {
SubscribeToAllSubnets bool
SubscribeAllDataSubnets bool
MinimumSyncPeers int
MinimumPeersPerSubnet int
MaxConcurrentDials int
BlockBatchLimit int
BlockBatchLimitBurstFactor int
BlobBatchLimit int
BlobBatchLimitBurstFactor int
SubscribeToAllSubnets bool
SubscribeAllDataSubnets bool
MinimumSyncPeers int
MinimumPeersPerSubnet int
MaxConcurrentDials int
BlockBatchLimit int
BlockBatchLimitBurstFactor int
BlobBatchLimit int
BlobBatchLimitBurstFactor int
DataColumnBatchLimit int
DataColumnBatchLimitBurstFactor int
}
var globalConfig *GlobalFlags
@@ -53,8 +55,11 @@ func ConfigureGlobalFlags(ctx *cli.Context) {
cfg.BlockBatchLimitBurstFactor = ctx.Int(BlockBatchLimitBurstFactor.Name)
cfg.BlobBatchLimit = ctx.Int(BlobBatchLimit.Name)
cfg.BlobBatchLimitBurstFactor = ctx.Int(BlobBatchLimitBurstFactor.Name)
cfg.DataColumnBatchLimit = ctx.Int(DataColumnBatchLimit.Name)
cfg.DataColumnBatchLimitBurstFactor = ctx.Int(DataColumnBatchLimitBurstFactor.Name)
cfg.MinimumPeersPerSubnet = ctx.Int(MinPeersPerSubnet.Name)
cfg.MaxConcurrentDials = ctx.Int(MaxConcurrentDials.Name)
configureMinimumPeers(ctx, cfg)
Init(cfg)
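
As a quick illustration of how the new data-column batch limits flow through `GlobalFlags`, here is a minimal, hedged sketch that mirrors the `flags.Init`/`flags.Get` pattern used by the sync `TestMain` earlier in this diff; the field names and default values come from the diff above, while the standalone `main` wrapper and the printed output are purely illustrative.

package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
)

func main() {
	// In the beacon node these values are set by ConfigureGlobalFlags from the
	// CLI context; here we initialize them directly with the flag defaults.
	flags.Init(&flags.GlobalFlags{
		DataColumnBatchLimit:            4096,
		DataColumnBatchLimitBurstFactor: 2,
	})
	cfg := flags.Get()
	burst := cfg.DataColumnBatchLimit * cfg.DataColumnBatchLimitBurstFactor
	fmt.Printf("data column batch limit: %d (burst: %d)\n", cfg.DataColumnBatchLimit, burst)
}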

View File

@@ -14,7 +14,6 @@ import (
)
var (
// BlobStoragePathFlag defines a flag to start the beacon chain from a give genesis state file.
BlobStoragePathFlag = &cli.PathFlag{
Name: "blob-path",
Usage: "Location for blob storage. Default location will be a 'blobs' directory next to the beacon db.",
@@ -30,6 +29,10 @@ var (
Usage: layoutFlagUsage(),
Value: filesystem.LayoutNameFlat,
}
DataColumnStoragePathFlag = &cli.PathFlag{
Name: "data-column-path",
Usage: "Location for data column storage. Default location will be a 'data-columns' directory next to the beacon db.",
}
)
func layoutOptions() string {
@@ -54,15 +57,23 @@ func validateLayoutFlag(_ *cli.Context, v string) error {
// create a cancellable context. If we switch to using App.RunContext, we can set up this cancellation in the cmd
// package instead, and allow the functional options to tap into context cancellation.
func BeaconNodeOptions(c *cli.Context) ([]node.Option, error) {
e, err := blobRetentionEpoch(c)
blobRetentionEpoch, err := blobRetentionEpoch(c)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "blob retention epoch")
}
opts := []node.Option{node.WithBlobStorageOptions(
filesystem.WithBlobRetentionEpochs(e),
blobStorageOptions := node.WithBlobStorageOptions(
filesystem.WithBlobRetentionEpochs(blobRetentionEpoch),
filesystem.WithBasePath(blobStoragePath(c)),
filesystem.WithLayout(c.String(BlobStorageLayout.Name)), // This is validated in the Action func for BlobStorageLayout.
)}
)
dataColumnStorageOption := node.WithDataColumnStorageOptions(
filesystem.WithDataColumnRetentionEpochs(blobRetentionEpoch),
filesystem.WithDataColumnBasePath(dataColumnStoragePath(c)),
)
opts := []node.Option{blobStorageOptions, dataColumnStorageOption}
return opts, nil
}
@@ -75,6 +86,16 @@ func blobStoragePath(c *cli.Context) string {
return blobsPath
}
func dataColumnStoragePath(c *cli.Context) string {
dataColumnsPath := c.Path(DataColumnStoragePathFlag.Name)
if dataColumnsPath == "" {
// append a "data-columns" subdir to the end of the data dir path
dataColumnsPath = path.Join(c.String(cmd.DataDirFlag.Name), "data-columns")
}
return dataColumnsPath
}
var errInvalidBlobRetentionEpochs = errors.New("value is smaller than spec minimum")
// blobRetentionEpoch returns the spec default MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST

View File

@@ -52,6 +52,7 @@ type Flags struct {
EnableExperimentalAttestationPool bool // EnableExperimentalAttestationPool enables an experimental attestation pool design.
EnableDutiesV2 bool // EnableDutiesV2 sets validator client to use the get Duties V2 endpoint
EnableWeb bool // EnableWeb enables the webui on the validator client
SSZOnly bool // SSZOnly forces the validator client to use SSZ for communication with the beacon node when REST mode is enabled (useful for debugging)
// Logging related toggles.
DisableGRPCConnectionLogs bool // Disables logging when a new grpc client has connected.
EnableFullSSZDataLogging bool // Enables logging for full ssz data on rejected gossip messages
@@ -155,7 +156,8 @@ func configureTestnet(ctx *cli.Context) error {
params.UseHoodiNetworkConfig()
} else {
if ctx.IsSet(cmd.ChainConfigFileFlag.Name) {
log.Warn("Running on custom Ethereum network specified in a chain configuration yaml file")
log.Warning("Running on custom Ethereum network specified in a chain configuration YAML file")
params.UseCustomNetworkConfig()
} else {
log.Info("Running on Ethereum Mainnet")
}
@@ -167,11 +169,11 @@ func configureTestnet(ctx *cli.Context) error {
}
// Insert feature flags within the function to be enabled for Sepolia testnet.
func applySepoliaFeatureFlags(ctx *cli.Context) {
func applySepoliaFeatureFlags(_ *cli.Context) {
}
// Insert feature flags within the function to be enabled for Holesky testnet.
func applyHoleskyFeatureFlags(ctx *cli.Context) {
func applyHoleskyFeatureFlags(_ *cli.Context) {
}
// ConfigureBeaconChain sets the global config based
@@ -344,6 +346,11 @@ func ConfigureValidator(ctx *cli.Context) error {
logEnabled(EnableWebFlag)
cfg.EnableWeb = true
}
if ctx.Bool(SSZOnly.Name) {
logEnabled(SSZOnly)
cfg.SSZOnly = true
}
cfg.KeystoreImportDebounceInterval = ctx.Duration(dynamicKeyReloadDebounceInterval.Name)
Init(cfg)
return nil

View File

@@ -201,6 +201,12 @@ var (
Usage: "(Work in progress): Enables the web portal for the validator client.",
Value: false,
}
// SSZOnly forces the validator client to use SSZ for communication with the beacon node when REST mode is enabled
SSZOnly = &cli.BoolFlag{
Name: "ssz-only",
Usage: "(debug): Forces the validator client to use SSZ for communication with the beacon node when REST mode is enabled",
}
)
// devModeFlags holds list of flags that are set when development mode is on.
@@ -223,6 +229,7 @@ var ValidatorFlags = append(deprecatedFlags, []cli.Flag{
EnableBeaconRESTApi,
EnableDutiesV2,
EnableWebFlag,
SSZOnly,
}...)
// E2EValidatorFlags contains a list of the validator feature flags to be tested in E2E.

View File

@@ -14,6 +14,7 @@ go_library(
"mainnet_config.go",
"minimal_config.go",
"network_config.go",
"testnet_custom_network_config.go",
"testnet_e2e_config.go",
"testnet_holesky_config.go",
"testnet_hoodi_config.go",

View File

@@ -0,0 +1,9 @@
package params
func UseCustomNetworkConfig() {
cfg := BeaconNetworkConfig().Copy()
cfg.ContractDeploymentBlock = 0
cfg.BootstrapNodes = []string{}
OverrideBeaconNetworkConfig(cfg)
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -83,7 +84,7 @@ func (h *bootstrapAltair) SizeSSZ() int {
}
func (h *bootstrapAltair) Version() int {
return version.Altair
return slots.ToForkVersion(h.header.Beacon().Slot)
}
func (h *bootstrapAltair) Proto() proto.Message {
@@ -188,7 +189,7 @@ func (h *bootstrapCapella) SizeSSZ() int {
}
func (h *bootstrapCapella) Version() int {
return version.Capella
return slots.ToForkVersion(h.header.Beacon().Slot)
}
func (h *bootstrapCapella) Proto() proto.Message {
@@ -293,7 +294,7 @@ func (h *bootstrapDeneb) SizeSSZ() int {
}
func (h *bootstrapDeneb) Version() int {
return version.Deneb
return slots.ToForkVersion(h.header.Beacon().Slot)
}
func (h *bootstrapDeneb) Proto() proto.Message {
@@ -398,7 +399,7 @@ func (h *bootstrapElectra) SizeSSZ() int {
}
func (h *bootstrapElectra) Version() int {
return version.Electra
return slots.ToForkVersion(h.header.Beacon().Slot)
}
func (h *bootstrapElectra) Proto() proto.Message {

View File

@@ -8,7 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -174,7 +174,7 @@ func (u *finalityUpdateAltair) Proto() proto.Message {
}
func (u *finalityUpdateAltair) Version() int {
return version.Altair
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *finalityUpdateAltair) AttestedHeader() interfaces.LightClientHeader {
@@ -286,7 +286,7 @@ func (u *finalityUpdateCapella) Proto() proto.Message {
}
func (u *finalityUpdateCapella) Version() int {
return version.Capella
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *finalityUpdateCapella) AttestedHeader() interfaces.LightClientHeader {
@@ -398,7 +398,7 @@ func (u *finalityUpdateDeneb) Proto() proto.Message {
}
func (u *finalityUpdateDeneb) Version() int {
return version.Deneb
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *finalityUpdateDeneb) AttestedHeader() interfaces.LightClientHeader {
@@ -511,7 +511,7 @@ func (u *finalityUpdateElectra) Proto() proto.Message {
}
func (u *finalityUpdateElectra) Version() int {
return version.Electra
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *finalityUpdateElectra) AttestedHeader() interfaces.LightClientHeader {

View File

@@ -7,7 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -139,7 +139,7 @@ func (u *optimisticUpdateAltair) Proto() proto.Message {
}
func (u *optimisticUpdateAltair) Version() int {
return version.Altair
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *optimisticUpdateAltair) AttestedHeader() interfaces.LightClientHeader {
@@ -223,7 +223,7 @@ func (u *optimisticUpdateCapella) Proto() proto.Message {
}
func (u *optimisticUpdateCapella) Version() int {
return version.Capella
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *optimisticUpdateCapella) AttestedHeader() interfaces.LightClientHeader {
@@ -307,7 +307,7 @@ func (u *optimisticUpdateDeneb) Proto() proto.Message {
}
func (u *optimisticUpdateDeneb) Version() int {
return version.Deneb
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *optimisticUpdateDeneb) AttestedHeader() interfaces.LightClientHeader {

View File

@@ -9,6 +9,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -105,7 +106,7 @@ func (u *updateAltair) Proto() proto.Message {
}
func (u *updateAltair) Version() int {
return version.Altair
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *updateAltair) AttestedHeader() interfaces.LightClientHeader {
@@ -272,7 +273,7 @@ func (u *updateCapella) Proto() proto.Message {
}
func (u *updateCapella) Version() int {
return version.Capella
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *updateCapella) AttestedHeader() interfaces.LightClientHeader {
@@ -439,7 +440,7 @@ func (u *updateDeneb) Proto() proto.Message {
}
func (u *updateDeneb) Version() int {
return version.Deneb
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *updateDeneb) AttestedHeader() interfaces.LightClientHeader {
@@ -607,7 +608,7 @@ func (u *updateElectra) Proto() proto.Message {
}
func (u *updateElectra) Version() int {
return version.Electra
return slots.ToForkVersion(u.attestedHeader.Beacon().Slot)
}
func (u *updateElectra) AttestedHeader() interfaces.LightClientHeader {

View File

@@ -3,9 +3,11 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"errors.go",
"hashers.go",
"helpers.go",
"htrutils.go",
"list.go",
"merkleize.go",
"slice_root.go",
],

encoding/ssz/errors.go Normal file
View File

@@ -0,0 +1,17 @@
package ssz
import "github.com/pkg/errors"
var (
ErrInvalidEncodingLength = errors.New("invalid encoded length")
ErrInvalidFixedEncodingLen = errors.Wrap(ErrInvalidEncodingLength, "not multiple of fixed size")
ErrEncodingSmallerThanOffset = errors.Wrap(ErrInvalidEncodingLength, "smaller than a single offset")
ErrInvalidOffset = errors.New("invalid offset")
ErrOffsetIntoFixed = errors.Wrap(ErrInvalidOffset, "does not point past fixed section of encoding")
ErrOffsetExceedsBuffer = errors.Wrap(ErrInvalidOffset, "exceeds buffer length")
ErrNegativeRelativeOffset = errors.Wrap(ErrInvalidOffset, "less than previous offset")
ErrOffsetInsufficient = errors.Wrap(ErrInvalidOffset, "insufficient difference relative to previous")
ErrOffsetSectionMisaligned = errors.Wrap(ErrInvalidOffset, "offset bytes are not a multiple of offset size")
ErrOffsetDecodedMismatch = errors.New("unmarshaled size does not match relative offsets")
)
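
The sentinel errors above form a small hierarchy: the offset-specific errors wrap `ErrInvalidOffset` and the length-specific ones wrap `ErrInvalidEncodingLength`. Below is a minimal, hedged sketch of how a caller could branch on that hierarchy; the `encoding/ssz` import path is inferred from the file paths in this diff, and the category match assumes the pkg/errors version in use implements `Unwrap` (added in v0.9.0), which is an assumption of this sketch rather than something guaranteed by the diff above.

package main

import (
	"errors"
	"fmt"

	"github.com/OffchainLabs/prysm/v6/encoding/ssz"
)

func main() {
	err := ssz.ErrOffsetExceedsBuffer

	// A direct sentinel comparison always holds.
	fmt.Println(errors.Is(err, ssz.ErrOffsetExceedsBuffer)) // true

	// Matching the broader category relies on the github.com/pkg/errors wrapping
	// used in errors.go exposing Unwrap (pkg/errors v0.9.0+); see the caveat in
	// the lead-in above.
	fmt.Println(errors.Is(err, ssz.ErrInvalidOffset))
}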

encoding/ssz/list.go Normal file
View File

@@ -0,0 +1,275 @@
package ssz
import (
"encoding/binary"
"math"
"github.com/pkg/errors"
)
const offsetLen = 4 // Each variable-sized list element offset is a 4-byte uint32.
// Marshalable describes the methodset required for a type to be generically marshaled.
type Marshalable interface {
MarshalSSZTo(buf []byte) ([]byte, error)
SizeSSZ() int
}
// Unmarshalable describes the methodset required for a type to be generically unmarshaled.
type Unmarshalable interface {
UnmarshalSSZ(buf []byte) error
SizeSSZ() int
}
// SerDesable is a union interface that combines both Marshalable and Unmarshalable interfaces.
// The name means "serializable/deserializable", indicating that types implementing this interface
// can be marshaled to and unmarshaled from a byte sequence.
type SerDesable interface {
Marshalable
Unmarshalable
}
// ListSerdes is a type that manages the serialization and deserialization of a list of elements
// for a given type that supports the SerDesable interface.
type ListSerdes[T SerDesable] struct {
new func() T
marshal func([]T) ([]byte, error)
unmarshal func([]byte, func() T) ([]T, error)
}
// NewListFixedElementSerdes creates a new ListSerdes parameterized by the given type [T SerDesable]
// where the type [T] is expected to be a fixed-size ssz type,
// and holds onto the constructor func so users of the ListSerdes don't need to specify it.
func NewListFixedElementSerdes[T SerDesable](newt func() T) ListSerdes[T] {
return ListSerdes[T]{
new: newt,
marshal: MarshalListFixedElement[T],
unmarshal: UnmarshalListFixedElement[T],
}
}
// NewListVariableElementSerdes creates a new ListSerdes parameterized by the given type [T SerDesable]
// where the type [T] is expected to be a variable-size ssz type,
// and holds onto the constructor func so users of the ListSerdes don't need to specify it.
func NewListVariableElementSerdes[T SerDesable](newt func() T) ListSerdes[T] {
return ListSerdes[T]{
new: newt,
marshal: MarshalListVariableElement[T],
unmarshal: UnmarshalListVariableElement[T],
}
}
// Marshal encodes a slice of elements of type [T] as an ssz-encoded List.
func (ls ListSerdes[T]) Marshal(elems []T) ([]byte, error) {
if len(elems) == 0 {
return nil, nil
}
return ls.marshal(elems)
}
// Unmarshal decodes an ssz-encoded List of elements of type [T].
func (ls ListSerdes[T]) Unmarshal(buf []byte) ([]T, error) {
if len(buf) == 0 {
return nil, nil
}
return ls.unmarshal(buf, ls.new)
}
// MarshalListFixedElement encodes a slice of fixed-sized elements as an ssz list.
// A list of fixed-size elements is marshaled by concatenating the marshaled bytes
// of each element in the list.
//
// For variable-sized elements, use MarshalListVariableElement instead.
// SSZ Lists have different encoding rules depending on whether their elements are fixed- or variable-sized,
// and we can't differentiate them by the ssz interface, so it is the caller's responsibility to
// pick the correct method.
//
// This method should only be used for container types that have code-generated methods.
// For lists of primitive types (eg a List[Vector[byte, 32], N]), please use generated code, or hand-tuned methods.
func MarshalListFixedElement[T Marshalable](elems []T) ([]byte, error) {
if len(elems) == 0 {
return nil, nil
}
size := elems[0].SizeSSZ()
buf := make([]byte, 0, len(elems)*size)
for _, elem := range elems {
if elem.SizeSSZ() != size {
return nil, ErrInvalidFixedEncodingLen
}
var err error
buf, err = elem.MarshalSSZTo(buf)
if err != nil {
return nil, errors.Wrap(err, "marshal ssz")
}
}
return buf, nil
}
// UnmarshalListFixedElement unmarshals an ssz-encoded list of fixed-sized elements.
// A List of fixed-size elements is encoded as a concatenation of the marshaled bytes of each
// element, so after performing some safety checks on the alignment and size of the buffer,
// we simply iterate over the buffer in chunks of the fixed size and unmarshal each element.
// Because this generic method is parameterized by a [T Unmarshalable] interface type,
// it is unable to initialize elements of the list internally. That is why the caller must
// provide the `newt` function that returns a new instance of the type [T] to be unmarshaled.
// This func will be called for each element in the list to create a new instance of [T].
//
// For variable-sized elements, use UnmarshalListVariableElement instead.
// SSZ Lists have different encoding rules depending on whether their elements are fixed- or variable-sized,
// and we can't differentiate them by the ssz interface, so it is the caller's responsibility to
// pick the correct method.
//
// This method should only be used for container types that have code-generated methods.
// For lists of primitive types (eg a List[Vector[byte, 32], N]), please use generated code, or hand-tuned methods.
func UnmarshalListFixedElement[T Unmarshalable](buf []byte, newt func() T) ([]T, error) {
bufLen := len(buf)
if bufLen == 0 {
return nil, nil
}
fixedSize := newt().SizeSSZ()
if bufLen%fixedSize != 0 {
return nil, ErrInvalidFixedEncodingLen
}
nElems := bufLen / fixedSize
elements := make([]T, nElems)
for i := range elements {
elem := newt()
if err := elem.UnmarshalSSZ(buf[i*fixedSize : (i+1)*fixedSize]); err != nil {
return nil, errors.Wrap(err, "unmarshal ssz")
}
elements[i] = elem
}
return elements, nil
}
// MarshalListVariableElement marshals a slice of variable-sized elements as an ssz list.
// A list of variable-sized elements is marshaled by first writing the offsets of each element to the
// beginning of the byte sequence (the fixed size section of the variable sized list container), followed
// by the encoded values of each element at the indicated offset relative to the beginning of the byte sequence.
//
// For fixed-sized elements, use MarshalListFixedElement instead.
// SSZ Lists have different encoding rules depending on whether their elements are fixed- or variable-sized,
// and we can't differentiate them by the ssz interface, so it is the caller's responsibility to
// pick the correct method.
//
// This method should only be used for container types that have code-generated methods.
// For lists of primitive types (eg a List[List[byte, N], N]), please use generated code, or hand-tuned methods.
func MarshalListVariableElement[T Marshalable](elems []T) ([]byte, error) {
var err error
var total uint32
nElems := len(elems)
if nElems == 0 {
return nil, nil
}
sizes := make([]uint32, nElems)
for i, e := range elems {
sizes[i], err = safeUint32(e.SizeSSZ())
if err != nil {
return nil, err
}
total += sizes[i]
}
nextOffset, err := safeUint32(nElems * offsetLen)
if err != nil {
return nil, err
}
buf := make([]byte, 0, total+nextOffset)
for _, size := range sizes {
buf = binary.LittleEndian.AppendUint32(buf, nextOffset)
nextOffset += size
}
for _, elem := range elems {
buf, err = elem.MarshalSSZTo(buf)
if err != nil {
return nil, err
}
}
return buf, nil
}
// UnmarshalListVariableElement unmarshals an ssz-encoded list of variable-sized elements.
// Because this generic method is parameterized by a [T Unmarshalable] interface type,
// it is unable to initialize elements of the list internally. That is why the caller must
// provide the `newt` function that returns a new instance of the type [T] to be unmarshaled.
// This func will be called for each element in the list to create a new instance of [T].
//
// For fixed-sized elements, use UnmarshalListFixedElement instead.
// SSZ Lists have different encoding rules depending on whether their elements are fixed- or variable-sized,
// and we can't differentiate them by the ssz interface, so it is the caller's responsibility to
// pick the correct method.
//
// This method should only be used for container types that have code-generated methods.
// For lists of primitive types (eg a List[List[byte, N], N]), please use generated code, or hand-tuned methods.
func UnmarshalListVariableElement[T Unmarshalable](buf []byte, newt func() T) ([]T, error) {
bufLen := len(buf)
if bufLen == 0 {
return nil, nil
}
if bufLen < offsetLen {
return nil, ErrEncodingSmallerThanOffset
}
fixedSize := uint32(newt().SizeSSZ())
bufLen32 := uint32(bufLen)
first := binary.LittleEndian.Uint32(buf)
// Rather than just return a zero element list in this case,
// we want to explicitly reject this input as invalid
if first < offsetLen {
return nil, ErrOffsetIntoFixed
}
if first%offsetLen != 0 {
return nil, ErrOffsetSectionMisaligned
}
if first > bufLen32 {
return nil, ErrOffsetExceedsBuffer
}
nElems := int(first) / offsetLen // lint:ignore uintcast -- int has higher precision than uint32 on 64 bit systems, so this is 100% safe
sizes := make([]uint32, nElems)
// We've already looked at the offset of the first element (to perform validation on it)
// so we just need to iterate over the remaining offsets, aka nElems-1 times.
// The size of each element is computed relative to the next offset, so this loop is effectively
// looking ahead +1 (starting with a `buf` that has already had the first offset sliced off),
// with the final element handled as a special case outside the loop (using the size of the entire buffer
// as the ending bound).
previous := first
buf = buf[offsetLen:]
for i := 0; i < nElems-1; i++ {
next := binary.LittleEndian.Uint32(buf)
if next > bufLen32 {
return nil, ErrOffsetExceedsBuffer
}
if next < previous {
return nil, ErrNegativeRelativeOffset
}
sizes[i] = next - previous
if sizes[i] < fixedSize {
return nil, ErrOffsetInsufficient
}
buf = buf[offsetLen:]
previous = next
}
sizes[len(sizes)-1] = bufLen32 - previous
elements := make([]T, len(sizes))
for i, size := range sizes {
elem := newt()
if err := elem.UnmarshalSSZ(buf[:size]); err != nil {
return nil, errors.Wrap(err, "unmarshal ssz")
}
szi := int(size) // lint:ignore uintcast -- int has higher precision than uint32 on 64 bit systems, so this is 100% safe
if elem.SizeSSZ() != szi {
return nil, ErrOffsetDecodedMismatch
}
elements[i] = elem
buf = buf[size:]
}
return elements, nil
}
func safeUint32(val int) (uint32, error) {
if val < 0 || val > math.MaxUint32 {
return 0, errors.New("value exceeds uint32 range")
}
return uint32(val), nil // lint:ignore uintcast -- integer value explicitly checked to prevent truncation
}
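
To make the intended usage of the list helpers above concrete, here is a minimal, hedged round-trip sketch for both a fixed-size and a variable-size element. The `fixedU64` and `varBytes` element types are hypothetical stand-ins invented for this sketch (real callers would use code-generated container types, as the doc comments note), and the `encoding/ssz` import path is inferred from the file paths in this diff.

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"log"

	"github.com/OffchainLabs/prysm/v6/encoding/ssz"
)

// fixedU64 is a hypothetical fixed-size element (8 bytes), used only to keep
// this sketch self-contained.
type fixedU64 struct{ v uint64 }

func (f *fixedU64) SizeSSZ() int { return 8 }

func (f *fixedU64) MarshalSSZTo(buf []byte) ([]byte, error) {
	return binary.LittleEndian.AppendUint64(buf, f.v), nil
}

func (f *fixedU64) UnmarshalSSZ(buf []byte) error {
	if len(buf) != 8 {
		return errors.New("unexpected length")
	}
	f.v = binary.LittleEndian.Uint64(buf)
	return nil
}

// varBytes is a hypothetical variable-size element whose encoding is its raw
// bytes, so SizeSSZ reports the current length.
type varBytes struct{ b []byte }

func (v *varBytes) SizeSSZ() int { return len(v.b) }

func (v *varBytes) MarshalSSZTo(buf []byte) ([]byte, error) { return append(buf, v.b...), nil }

func (v *varBytes) UnmarshalSSZ(buf []byte) error {
	v.b = append([]byte(nil), buf...)
	return nil
}

func main() {
	// Fixed-size elements are simply concatenated: 3 elements * 8 bytes = 24 bytes.
	fixed := ssz.NewListFixedElementSerdes(func() *fixedU64 { return &fixedU64{} })
	encFixed, err := fixed.Marshal([]*fixedU64{{v: 1}, {v: 2}, {v: 3}})
	if err != nil {
		log.Fatal(err)
	}
	roundFixed, err := fixed.Unmarshal(encFixed)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(encFixed), roundFixed[2].v) // 24 3

	// Variable-size elements get a 4-byte offset per element up front, followed
	// by each element's bytes at its offset.
	variable := ssz.NewListVariableElementSerdes(func() *varBytes { return &varBytes{} })
	encVar, err := variable.Marshal([]*varBytes{{b: []byte("ab")}, {b: []byte("cde")}})
	if err != nil {
		log.Fatal(err)
	}
	roundVar, err := variable.Unmarshal(encVar)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(encVar), string(roundVar[1].b)) // 13 cde
}

Note that the variable-element encoding in the sketch comes out to 13 bytes: two 4-byte offsets followed by the 2-byte and 3-byte element payloads, matching the layout described in the MarshalListVariableElement comment.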

View File

@@ -2,14 +2,13 @@
# Common
##############################################################################
load("@rules_proto//proto:defs.bzl", "proto_library")
##############################################################################
# Go
##############################################################################
# gazelle:ignore
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
load("@io_bazel_rules_go//proto:def.bzl", "go_proto_library")
load("@rules_proto//proto:defs.bzl", "proto_library")
load("//proto:ssz_proto_library.bzl", "ssz_proto_files")
load("//tools:ssz.bzl", "SSZ_DEPS", "ssz_gen_marshal")
@@ -189,6 +188,7 @@ ssz_fulu_objs = [
"DataColumnIdentifier",
"DataColumnsByRootIdentifier",
"DataColumnSidecar",
"StatusV2",
"SignedBeaconBlockContentsFulu",
"SignedBeaconBlockFulu",
"SignedBlindedBeaconBlockFulu",
@@ -359,15 +359,17 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1",
visibility = ["//visibility:public"],
deps = SSZ_DEPS + [
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/ext:go_default_library",
"//runtime/version:go_default_library",
"//consensus-types/primitives:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_golang_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library", # keep
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@googleapis//google/api:annotations_go_proto",
"@io_bazel_rules_go//proto/wkt:descriptor_go_proto",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
@@ -382,8 +384,6 @@ go_library(
"@org_golang_google_protobuf//runtime/protoimpl:go_default_library",
"@org_golang_google_protobuf//types/descriptorpb:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

View File

@@ -5,6 +5,12 @@ import (
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
)
// GenericConverter defines any struct that can be converted to a generic beacon block.
// All versioned block structs are expected to implement this interface.
type GenericConverter interface {
ToGeneric() (*GenericBeaconBlock, error)
}
// ----------------------------------------------------------------------------
// Phase 0
// ----------------------------------------------------------------------------

View File

@@ -109,61 +109,6 @@ func (x *DataColumnSidecar) GetKzgCommitmentsInclusionProof() [][]byte {
return nil
}
type DataColumnIdentifier struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
BlockRoot []byte `protobuf:"bytes,1,opt,name=block_root,json=blockRoot,proto3" json:"block_root,omitempty" ssz-size:"32"`
Index uint64 `protobuf:"varint,2,opt,name=index,proto3" json:"index,omitempty"`
}
func (x *DataColumnIdentifier) Reset() {
*x = DataColumnIdentifier{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DataColumnIdentifier) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DataColumnIdentifier) ProtoMessage() {}
func (x *DataColumnIdentifier) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DataColumnIdentifier.ProtoReflect.Descriptor instead.
func (*DataColumnIdentifier) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_data_columns_proto_rawDescGZIP(), []int{1}
}
func (x *DataColumnIdentifier) GetBlockRoot() []byte {
if x != nil {
return x.BlockRoot
}
return nil
}
func (x *DataColumnIdentifier) GetIndex() uint64 {
if x != nil {
return x.Index
}
return 0
}
type DataColumnsByRootIdentifier struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -176,7 +121,7 @@ type DataColumnsByRootIdentifier struct {
func (x *DataColumnsByRootIdentifier) Reset() {
*x = DataColumnsByRootIdentifier{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[2]
mi := &file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -189,7 +134,7 @@ func (x *DataColumnsByRootIdentifier) String() string {
func (*DataColumnsByRootIdentifier) ProtoMessage() {}
func (x *DataColumnsByRootIdentifier) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[2]
mi := &file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -202,7 +147,7 @@ func (x *DataColumnsByRootIdentifier) ProtoReflect() protoreflect.Message {
// Deprecated: Use DataColumnsByRootIdentifier.ProtoReflect.Descriptor instead.
func (*DataColumnsByRootIdentifier) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_data_columns_proto_rawDescGZIP(), []int{2}
return file_proto_prysm_v1alpha1_data_columns_proto_rawDescGZIP(), []int{1}
}
func (x *DataColumnsByRootIdentifier) GetBlockRoot() []byte {
@@ -253,29 +198,24 @@ var file_proto_prysm_v1alpha1_data_columns_proto_rawDesc = []byte{
0x63, 0x6c, 0x75, 0x73, 0x69, 0x6f, 0x6e, 0x5f, 0x70, 0x72, 0x6f, 0x6f, 0x66, 0x18, 0x06, 0x20,
0x03, 0x28, 0x0c, 0x42, 0x08, 0x8a, 0xb5, 0x18, 0x04, 0x34, 0x2c, 0x33, 0x32, 0x52, 0x1c, 0x6b,
0x7a, 0x67, 0x43, 0x6f, 0x6d, 0x6d, 0x69, 0x74, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x49, 0x6e, 0x63,
0x6c, 0x75, 0x73, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x6f, 0x66, 0x22, 0x53, 0x0a, 0x14, 0x44,
0x61, 0x74, 0x61, 0x43, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66,
0x69, 0x65, 0x72, 0x12, 0x25, 0x0a, 0x0a, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x5f, 0x72, 0x6f, 0x6f,
0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52,
0x09, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x52, 0x6f, 0x6f, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x69, 0x6e,
0x64, 0x65, 0x78, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78,
0x22, 0x67, 0x0a, 0x1b, 0x44, 0x61, 0x74, 0x61, 0x43, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x42,
0x79, 0x52, 0x6f, 0x6f, 0x74, 0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x12,
0x25, 0x0a, 0x0a, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x5f, 0x72, 0x6f, 0x6f, 0x74, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x09, 0x62, 0x6c, 0x6f,
0x63, 0x6b, 0x52, 0x6f, 0x6f, 0x74, 0x12, 0x21, 0x0a, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e,
0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x04, 0x42, 0x07, 0x92, 0xb5, 0x18, 0x03, 0x31, 0x32, 0x38,
0x52, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x42, 0x9a, 0x01, 0x0a, 0x19, 0x6f, 0x72,
0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x74, 0x68, 0x2e, 0x76,
0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x10, 0x44, 0x61, 0x74, 0x61, 0x43, 0x6f, 0x6c,
0x75, 0x6d, 0x6e, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x39, 0x67, 0x69, 0x74,
0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e,
0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68,
0x61, 0x31, 0x3b, 0x65, 0x74, 0x68, 0xaa, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
0x6d, 0x2e, 0x45, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0xca, 0x02,
0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x74, 0x68, 0x5c, 0x76, 0x31,
0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x6c, 0x75, 0x73, 0x69, 0x6f, 0x6e, 0x50, 0x72, 0x6f, 0x6f, 0x66, 0x22, 0x67, 0x0a, 0x1b, 0x44,
0x61, 0x74, 0x61, 0x43, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x42, 0x79, 0x52, 0x6f, 0x6f, 0x74,
0x49, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72, 0x12, 0x25, 0x0a, 0x0a, 0x62, 0x6c,
0x6f, 0x63, 0x6b, 0x5f, 0x72, 0x6f, 0x6f, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06,
0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x09, 0x62, 0x6c, 0x6f, 0x63, 0x6b, 0x52, 0x6f, 0x6f,
0x74, 0x12, 0x21, 0x0a, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x18, 0x02, 0x20, 0x03,
0x28, 0x04, 0x42, 0x07, 0x92, 0xb5, 0x18, 0x03, 0x31, 0x32, 0x38, 0x52, 0x07, 0x63, 0x6f, 0x6c,
0x75, 0x6d, 0x6e, 0x73, 0x42, 0x9a, 0x01, 0x0a, 0x19, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68,
0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68,
0x61, 0x31, 0x42, 0x10, 0x44, 0x61, 0x74, 0x61, 0x43, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x50,
0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f,
0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70,
0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x3b, 0x65, 0x74,
0x68, 0xaa, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x74, 0x68,
0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0xca, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65,
0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x74, 0x68, 0x5c, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61,
0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -290,15 +230,14 @@ func file_proto_prysm_v1alpha1_data_columns_proto_rawDescGZIP() []byte {
return file_proto_prysm_v1alpha1_data_columns_proto_rawDescData
}
var file_proto_prysm_v1alpha1_data_columns_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_proto_prysm_v1alpha1_data_columns_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_proto_prysm_v1alpha1_data_columns_proto_goTypes = []interface{}{
(*DataColumnSidecar)(nil), // 0: ethereum.eth.v1alpha1.DataColumnSidecar
(*DataColumnIdentifier)(nil), // 1: ethereum.eth.v1alpha1.DataColumnIdentifier
(*DataColumnsByRootIdentifier)(nil), // 2: ethereum.eth.v1alpha1.DataColumnsByRootIdentifier
(*SignedBeaconBlockHeader)(nil), // 3: ethereum.eth.v1alpha1.SignedBeaconBlockHeader
(*DataColumnsByRootIdentifier)(nil), // 1: ethereum.eth.v1alpha1.DataColumnsByRootIdentifier
(*SignedBeaconBlockHeader)(nil), // 2: ethereum.eth.v1alpha1.SignedBeaconBlockHeader
}
var file_proto_prysm_v1alpha1_data_columns_proto_depIdxs = []int32{
3, // 0: ethereum.eth.v1alpha1.DataColumnSidecar.signed_block_header:type_name -> ethereum.eth.v1alpha1.SignedBeaconBlockHeader
2, // 0: ethereum.eth.v1alpha1.DataColumnSidecar.signed_block_header:type_name -> ethereum.eth.v1alpha1.SignedBeaconBlockHeader
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
@@ -326,18 +265,6 @@ func file_proto_prysm_v1alpha1_data_columns_proto_init() {
}
}
file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DataColumnIdentifier); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_proto_prysm_v1alpha1_data_columns_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DataColumnsByRootIdentifier); i {
case 0:
return &v.state
@@ -356,7 +283,7 @@ func file_proto_prysm_v1alpha1_data_columns_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_proto_prysm_v1alpha1_data_columns_proto_rawDesc,
NumEnums: 0,
NumMessages: 3,
NumMessages: 2,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -45,11 +45,6 @@ message DataColumnSidecar {
"kzg_commitments_inclusion_proof_depth.size,32" ];
}
message DataColumnIdentifier {
bytes block_root = 1 [ (ethereum.eth.ext.ssz_size) = "32" ];
uint64 index = 2;
}
message DataColumnsByRootIdentifier {
bytes block_root = 1 [ (ethereum.eth.ext.ssz_size) = "32" ];
repeated uint64 columns = 2 [ (ethereum.eth.ext.ssz_max) = "128" ];

View File

@@ -2246,77 +2246,6 @@ func (d *DataColumnSidecar) HashTreeRootWith(hh *ssz.Hasher) (err error) {
return
}
// MarshalSSZ ssz marshals the DataColumnIdentifier object
func (d *DataColumnIdentifier) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(d)
}
// MarshalSSZTo ssz marshals the DataColumnIdentifier object to a target array
func (d *DataColumnIdentifier) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
// Field (0) 'BlockRoot'
if size := len(d.BlockRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BlockRoot", size, 32)
return
}
dst = append(dst, d.BlockRoot...)
// Field (1) 'Index'
dst = ssz.MarshalUint64(dst, d.Index)
return
}
// UnmarshalSSZ ssz unmarshals the DataColumnIdentifier object
func (d *DataColumnIdentifier) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 40 {
return ssz.ErrSize
}
// Field (0) 'BlockRoot'
if cap(d.BlockRoot) == 0 {
d.BlockRoot = make([]byte, 0, len(buf[0:32]))
}
d.BlockRoot = append(d.BlockRoot, buf[0:32]...)
// Field (1) 'Index'
d.Index = ssz.UnmarshallUint64(buf[32:40])
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the DataColumnIdentifier object
func (d *DataColumnIdentifier) SizeSSZ() (size int) {
size = 40
return
}
// HashTreeRoot ssz hashes the DataColumnIdentifier object
func (d *DataColumnIdentifier) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(d)
}
// HashTreeRootWith ssz hashes the DataColumnIdentifier object with a hasher
func (d *DataColumnIdentifier) HashTreeRootWith(hh *ssz.Hasher) (err error) {
indx := hh.Index()
// Field (0) 'BlockRoot'
if size := len(d.BlockRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.BlockRoot", size, 32)
return
}
hh.PutBytes(d.BlockRoot)
// Field (1) 'Index'
hh.PutUint64(d.Index)
hh.Merkleize(indx)
return
}
// MarshalSSZ ssz marshals the DataColumnsByRootIdentifier object
func (d *DataColumnsByRootIdentifier) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(d)
@@ -2436,3 +2365,132 @@ func (d *DataColumnsByRootIdentifier) HashTreeRootWith(hh *ssz.Hasher) (err erro
hh.Merkleize(indx)
return
}
// MarshalSSZ ssz marshals the StatusV2 object
func (s *StatusV2) MarshalSSZ() ([]byte, error) {
return ssz.MarshalSSZ(s)
}
// MarshalSSZTo ssz marshals the StatusV2 object to a target array
func (s *StatusV2) MarshalSSZTo(buf []byte) (dst []byte, err error) {
dst = buf
// Field (0) 'ForkDigest'
if size := len(s.ForkDigest); size != 4 {
err = ssz.ErrBytesLengthFn("--.ForkDigest", size, 4)
return
}
dst = append(dst, s.ForkDigest...)
// Field (1) 'FinalizedRoot'
if size := len(s.FinalizedRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.FinalizedRoot", size, 32)
return
}
dst = append(dst, s.FinalizedRoot...)
// Field (2) 'FinalizedEpoch'
dst = ssz.MarshalUint64(dst, uint64(s.FinalizedEpoch))
// Field (3) 'HeadRoot'
if size := len(s.HeadRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.HeadRoot", size, 32)
return
}
dst = append(dst, s.HeadRoot...)
// Field (4) 'HeadSlot'
dst = ssz.MarshalUint64(dst, uint64(s.HeadSlot))
// Field (5) 'EarliestAvailableSlot'
dst = ssz.MarshalUint64(dst, uint64(s.EarliestAvailableSlot))
return
}
// UnmarshalSSZ ssz unmarshals the StatusV2 object
func (s *StatusV2) UnmarshalSSZ(buf []byte) error {
var err error
size := uint64(len(buf))
if size != 92 {
return ssz.ErrSize
}
// Field (0) 'ForkDigest'
if cap(s.ForkDigest) == 0 {
s.ForkDigest = make([]byte, 0, len(buf[0:4]))
}
s.ForkDigest = append(s.ForkDigest, buf[0:4]...)
// Field (1) 'FinalizedRoot'
if cap(s.FinalizedRoot) == 0 {
s.FinalizedRoot = make([]byte, 0, len(buf[4:36]))
}
s.FinalizedRoot = append(s.FinalizedRoot, buf[4:36]...)
// Field (2) 'FinalizedEpoch'
s.FinalizedEpoch = github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Epoch(ssz.UnmarshallUint64(buf[36:44]))
// Field (3) 'HeadRoot'
if cap(s.HeadRoot) == 0 {
s.HeadRoot = make([]byte, 0, len(buf[44:76]))
}
s.HeadRoot = append(s.HeadRoot, buf[44:76]...)
// Field (4) 'HeadSlot'
s.HeadSlot = github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[76:84]))
// Field (5) 'EarliestAvailableSlot'
s.EarliestAvailableSlot = github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[84:92]))
return err
}
// SizeSSZ returns the ssz encoded size in bytes for the StatusV2 object
func (s *StatusV2) SizeSSZ() (size int) {
size = 92
return
}
// HashTreeRoot ssz hashes the StatusV2 object
func (s *StatusV2) HashTreeRoot() ([32]byte, error) {
return ssz.HashWithDefaultHasher(s)
}
// HashTreeRootWith ssz hashes the StatusV2 object with a hasher
func (s *StatusV2) HashTreeRootWith(hh *ssz.Hasher) (err error) {
indx := hh.Index()
// Field (0) 'ForkDigest'
if size := len(s.ForkDigest); size != 4 {
err = ssz.ErrBytesLengthFn("--.ForkDigest", size, 4)
return
}
hh.PutBytes(s.ForkDigest)
// Field (1) 'FinalizedRoot'
if size := len(s.FinalizedRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.FinalizedRoot", size, 32)
return
}
hh.PutBytes(s.FinalizedRoot)
// Field (2) 'FinalizedEpoch'
hh.PutUint64(uint64(s.FinalizedEpoch))
// Field (3) 'HeadRoot'
if size := len(s.HeadRoot); size != 32 {
err = ssz.ErrBytesLengthFn("--.HeadRoot", size, 32)
return
}
hh.PutBytes(s.HeadRoot)
// Field (4) 'HeadSlot'
hh.PutUint64(uint64(s.HeadSlot))
// Field (5) 'EarliestAvailableSlot'
hh.PutUint64(uint64(s.EarliestAvailableSlot))
hh.Merkleize(indx)
return
}

View File

@@ -104,6 +104,93 @@ func (x *Status) GetHeadSlot() github_com_OffchainLabs_prysm_v6_consensus_types_
return github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot(0)
}
type StatusV2 struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ForkDigest []byte `protobuf:"bytes,1,opt,name=fork_digest,json=forkDigest,proto3" json:"fork_digest,omitempty" ssz-size:"4"`
FinalizedRoot []byte `protobuf:"bytes,2,opt,name=finalized_root,json=finalizedRoot,proto3" json:"finalized_root,omitempty" ssz-size:"32"`
FinalizedEpoch github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Epoch `protobuf:"varint,3,opt,name=finalized_epoch,json=finalizedEpoch,proto3" json:"finalized_epoch,omitempty" cast-type:"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Epoch"`
HeadRoot []byte `protobuf:"bytes,4,opt,name=head_root,json=headRoot,proto3" json:"head_root,omitempty" ssz-size:"32"`
HeadSlot github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot `protobuf:"varint,5,opt,name=head_slot,json=headSlot,proto3" json:"head_slot,omitempty" cast-type:"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Slot"`
EarliestAvailableSlot github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot `protobuf:"varint,6,opt,name=earliest_available_slot,json=earliestAvailableSlot,proto3" json:"earliest_available_slot,omitempty" cast-type:"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Slot"`
}
func (x *StatusV2) Reset() {
*x = StatusV2{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *StatusV2) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*StatusV2) ProtoMessage() {}
func (x *StatusV2) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use StatusV2.ProtoReflect.Descriptor instead.
func (*StatusV2) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{1}
}
func (x *StatusV2) GetForkDigest() []byte {
if x != nil {
return x.ForkDigest
}
return nil
}
func (x *StatusV2) GetFinalizedRoot() []byte {
if x != nil {
return x.FinalizedRoot
}
return nil
}
func (x *StatusV2) GetFinalizedEpoch() github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Epoch {
if x != nil {
return x.FinalizedEpoch
}
return github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Epoch(0)
}
func (x *StatusV2) GetHeadRoot() []byte {
if x != nil {
return x.HeadRoot
}
return nil
}
func (x *StatusV2) GetHeadSlot() github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot {
if x != nil {
return x.HeadSlot
}
return github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot(0)
}
func (x *StatusV2) GetEarliestAvailableSlot() github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot {
if x != nil {
return x.EarliestAvailableSlot
}
return github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot(0)
}
type BeaconBlocksByRangeRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -117,7 +204,7 @@ type BeaconBlocksByRangeRequest struct {
func (x *BeaconBlocksByRangeRequest) Reset() {
*x = BeaconBlocksByRangeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[1]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -130,7 +217,7 @@ func (x *BeaconBlocksByRangeRequest) String() string {
func (*BeaconBlocksByRangeRequest) ProtoMessage() {}
func (x *BeaconBlocksByRangeRequest) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[1]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -143,7 +230,7 @@ func (x *BeaconBlocksByRangeRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use BeaconBlocksByRangeRequest.ProtoReflect.Descriptor instead.
func (*BeaconBlocksByRangeRequest) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{1}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{2}
}
func (x *BeaconBlocksByRangeRequest) GetStartSlot() github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot {
@@ -180,7 +267,7 @@ type ENRForkID struct {
func (x *ENRForkID) Reset() {
*x = ENRForkID{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[2]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -193,7 +280,7 @@ func (x *ENRForkID) String() string {
func (*ENRForkID) ProtoMessage() {}
func (x *ENRForkID) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[2]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -206,7 +293,7 @@ func (x *ENRForkID) ProtoReflect() protoreflect.Message {
// Deprecated: Use ENRForkID.ProtoReflect.Descriptor instead.
func (*ENRForkID) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{2}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{3}
}
func (x *ENRForkID) GetCurrentForkDigest() []byte {
@@ -242,7 +329,7 @@ type MetaDataV0 struct {
func (x *MetaDataV0) Reset() {
*x = MetaDataV0{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[3]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -255,7 +342,7 @@ func (x *MetaDataV0) String() string {
func (*MetaDataV0) ProtoMessage() {}
func (x *MetaDataV0) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[3]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -268,7 +355,7 @@ func (x *MetaDataV0) ProtoReflect() protoreflect.Message {
// Deprecated: Use MetaDataV0.ProtoReflect.Descriptor instead.
func (*MetaDataV0) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{3}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{4}
}
func (x *MetaDataV0) GetSeqNumber() uint64 {
@@ -298,7 +385,7 @@ type MetaDataV1 struct {
func (x *MetaDataV1) Reset() {
*x = MetaDataV1{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[4]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -311,7 +398,7 @@ func (x *MetaDataV1) String() string {
func (*MetaDataV1) ProtoMessage() {}
func (x *MetaDataV1) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[4]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -324,7 +411,7 @@ func (x *MetaDataV1) ProtoReflect() protoreflect.Message {
// Deprecated: Use MetaDataV1.ProtoReflect.Descriptor instead.
func (*MetaDataV1) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{4}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{5}
}
func (x *MetaDataV1) GetSeqNumber() uint64 {
@@ -362,7 +449,7 @@ type MetaDataV2 struct {
func (x *MetaDataV2) Reset() {
*x = MetaDataV2{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[5]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -375,7 +462,7 @@ func (x *MetaDataV2) String() string {
func (*MetaDataV2) ProtoMessage() {}
func (x *MetaDataV2) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[5]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -388,7 +475,7 @@ func (x *MetaDataV2) ProtoReflect() protoreflect.Message {
// Deprecated: Use MetaDataV2.ProtoReflect.Descriptor instead.
func (*MetaDataV2) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{5}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{6}
}
func (x *MetaDataV2) GetSeqNumber() uint64 {
@@ -431,7 +518,7 @@ type BlobSidecarsByRangeRequest struct {
func (x *BlobSidecarsByRangeRequest) Reset() {
*x = BlobSidecarsByRangeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[6]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -444,7 +531,7 @@ func (x *BlobSidecarsByRangeRequest) String() string {
func (*BlobSidecarsByRangeRequest) ProtoMessage() {}
func (x *BlobSidecarsByRangeRequest) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[6]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -457,7 +544,7 @@ func (x *BlobSidecarsByRangeRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use BlobSidecarsByRangeRequest.ProtoReflect.Descriptor instead.
func (*BlobSidecarsByRangeRequest) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{6}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{7}
}
func (x *BlobSidecarsByRangeRequest) GetStartSlot() github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot {
@@ -487,7 +574,7 @@ type DataColumnSidecarsByRangeRequest struct {
func (x *DataColumnSidecarsByRangeRequest) Reset() {
*x = DataColumnSidecarsByRangeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[7]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -500,7 +587,7 @@ func (x *DataColumnSidecarsByRangeRequest) String() string {
func (*DataColumnSidecarsByRangeRequest) ProtoMessage() {}
func (x *DataColumnSidecarsByRangeRequest) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[7]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -513,7 +600,7 @@ func (x *DataColumnSidecarsByRangeRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use DataColumnSidecarsByRangeRequest.ProtoReflect.Descriptor instead.
func (*DataColumnSidecarsByRangeRequest) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{7}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{8}
}
func (x *DataColumnSidecarsByRangeRequest) GetStartSlot() github_com_OffchainLabs_prysm_v6_consensus_types_primitives.Slot {
@@ -549,7 +636,7 @@ type LightClientUpdatesByRangeRequest struct {
func (x *LightClientUpdatesByRangeRequest) Reset() {
*x = LightClientUpdatesByRangeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -562,7 +649,7 @@ func (x *LightClientUpdatesByRangeRequest) String() string {
func (*LightClientUpdatesByRangeRequest) ProtoMessage() {}
func (x *LightClientUpdatesByRangeRequest) ProtoReflect() protoreflect.Message {
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8]
mi := &file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -575,7 +662,7 @@ func (x *LightClientUpdatesByRangeRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use LightClientUpdatesByRangeRequest.ProtoReflect.Descriptor instead.
func (*LightClientUpdatesByRangeRequest) Descriptor() ([]byte, []int) {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{8}
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP(), []int{9}
}
func (x *LightClientUpdatesByRangeRequest) GetStartPeriod() uint64 {
@@ -624,109 +711,138 @@ var file_proto_prysm_v1alpha1_p2p_messages_proto_rawDesc = []byte{
0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f,
0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f,
0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52,
0x08, 0x68, 0x65, 0x61, 0x64, 0x53, 0x6c, 0x6f, 0x74, 0x22, 0xab, 0x01, 0x0a, 0x1a, 0x42, 0x65,
0x61, 0x63, 0x6f, 0x6e, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x73, 0x42, 0x79, 0x52, 0x61, 0x6e, 0x67,
0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x63, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72,
0x74, 0x5f, 0x73, 0x6c, 0x6f, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5,
0x18, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66,
0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f,
0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70,
0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c,
0x6f, 0x74, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x53, 0x6c, 0x6f, 0x74, 0x12, 0x14, 0x0a,
0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f,
0x75, 0x6e, 0x74, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x74, 0x65, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28,
0x04, 0x52, 0x04, 0x73, 0x74, 0x65, 0x70, 0x22, 0xe4, 0x01, 0x0a, 0x09, 0x45, 0x4e, 0x52, 0x46,
0x6f, 0x72, 0x6b, 0x49, 0x44, 0x12, 0x35, 0x0a, 0x13, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74,
0x5f, 0x66, 0x6f, 0x72, 0x6b, 0x5f, 0x64, 0x69, 0x67, 0x65, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0c, 0x42, 0x05, 0x8a, 0xb5, 0x18, 0x01, 0x34, 0x52, 0x11, 0x63, 0x75, 0x72, 0x72, 0x65,
0x6e, 0x74, 0x46, 0x6f, 0x72, 0x6b, 0x44, 0x69, 0x67, 0x65, 0x73, 0x74, 0x12, 0x31, 0x0a, 0x11,
0x6e, 0x65, 0x78, 0x74, 0x5f, 0x66, 0x6f, 0x72, 0x6b, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f,
0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x05, 0x8a, 0xb5, 0x18, 0x01, 0x34, 0x52, 0x0f,
0x6e, 0x65, 0x78, 0x74, 0x46, 0x6f, 0x72, 0x6b, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12,
0x6d, 0x0a, 0x0f, 0x6e, 0x65, 0x78, 0x74, 0x5f, 0x66, 0x6f, 0x72, 0x6b, 0x5f, 0x65, 0x70, 0x6f,
0x63, 0x68, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x42, 0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69,
0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69,
0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63,
0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70,
0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52,
0x0d, 0x6e, 0x65, 0x78, 0x74, 0x46, 0x6f, 0x72, 0x6b, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x22, 0x80,
0x01, 0x0a, 0x0a, 0x4d, 0x65, 0x74, 0x61, 0x44, 0x61, 0x74, 0x61, 0x56, 0x30, 0x12, 0x1d, 0x0a,
0x0a, 0x73, 0x65, 0x71, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28,
0x04, 0x52, 0x09, 0x73, 0x65, 0x71, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x53, 0x0a, 0x07,
0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x39, 0x82,
0xb5, 0x18, 0x30, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72,
0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67, 0x6f, 0x2d, 0x62,
0x69, 0x74, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63, 0x74, 0x6f,
0x72, 0x36, 0x34, 0x8a, 0xb5, 0x18, 0x01, 0x38, 0x52, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74,
0x73, 0x22, 0xd6, 0x01, 0x0a, 0x0a, 0x4d, 0x65, 0x74, 0x61, 0x44, 0x61, 0x74, 0x61, 0x56, 0x31,
0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x65, 0x71, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x01,
0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x73, 0x65, 0x71, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12,
0x53, 0x0a, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c,
0x42, 0x39, 0x82, 0xb5, 0x18, 0x30, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x08, 0x68, 0x65, 0x61, 0x64, 0x53, 0x6c, 0x6f, 0x74, 0x22, 0xd7, 0x03, 0x0a, 0x08, 0x53, 0x74,
0x61, 0x74, 0x75, 0x73, 0x56, 0x32, 0x12, 0x26, 0x0a, 0x0b, 0x66, 0x6f, 0x72, 0x6b, 0x5f, 0x64,
0x69, 0x67, 0x65, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x05, 0x8a, 0xb5, 0x18,
0x01, 0x34, 0x52, 0x0a, 0x66, 0x6f, 0x72, 0x6b, 0x44, 0x69, 0x67, 0x65, 0x73, 0x74, 0x12, 0x2d,
0x0a, 0x0e, 0x66, 0x69, 0x6e, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x5f, 0x72, 0x6f, 0x6f, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x0d,
0x66, 0x69, 0x6e, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x52, 0x6f, 0x6f, 0x74, 0x12, 0x6e, 0x0a,
0x0f, 0x66, 0x69, 0x6e, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x5f, 0x65, 0x70, 0x6f, 0x63, 0x68,
0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x42, 0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c,
0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e,
0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69,
0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52, 0x0e, 0x66,
0x69, 0x6e, 0x61, 0x6c, 0x69, 0x7a, 0x65, 0x64, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x12, 0x23, 0x0a,
0x09, 0x68, 0x65, 0x61, 0x64, 0x5f, 0x72, 0x6f, 0x6f, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c,
0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x08, 0x68, 0x65, 0x61, 0x64, 0x52, 0x6f,
0x6f, 0x74, 0x12, 0x61, 0x0a, 0x09, 0x68, 0x65, 0x61, 0x64, 0x5f, 0x73, 0x6c, 0x6f, 0x74, 0x18,
0x05, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75,
0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61,
0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e, 0x73,
0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d,
0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x08, 0x68, 0x65, 0x61,
0x64, 0x53, 0x6c, 0x6f, 0x74, 0x12, 0x7c, 0x0a, 0x17, 0x65, 0x61, 0x72, 0x6c, 0x69, 0x65, 0x73,
0x74, 0x5f, 0x61, 0x76, 0x61, 0x69, 0x6c, 0x61, 0x62, 0x6c, 0x65, 0x5f, 0x73, 0x6c, 0x6f, 0x74,
0x18, 0x06, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c,
0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e,
0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69,
0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x15, 0x65, 0x61,
0x72, 0x6c, 0x69, 0x65, 0x73, 0x74, 0x41, 0x76, 0x61, 0x69, 0x6c, 0x61, 0x62, 0x6c, 0x65, 0x53,
0x6c, 0x6f, 0x74, 0x22, 0xab, 0x01, 0x0a, 0x1a, 0x42, 0x65, 0x61, 0x63, 0x6f, 0x6e, 0x42, 0x6c,
0x6f, 0x63, 0x6b, 0x73, 0x42, 0x79, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x63, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x73, 0x6c, 0x6f, 0x74,
0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c,
0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e,
0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69,
0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x09, 0x73, 0x74,
0x61, 0x72, 0x74, 0x53, 0x6c, 0x6f, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x12, 0x0a,
0x04, 0x73, 0x74, 0x65, 0x70, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x04, 0x73, 0x74, 0x65,
0x70, 0x22, 0xe4, 0x01, 0x0a, 0x09, 0x45, 0x4e, 0x52, 0x46, 0x6f, 0x72, 0x6b, 0x49, 0x44, 0x12,
0x35, 0x0a, 0x13, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x5f, 0x66, 0x6f, 0x72, 0x6b, 0x5f,
0x64, 0x69, 0x67, 0x65, 0x73, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x05, 0x8a, 0xb5,
0x18, 0x01, 0x34, 0x52, 0x11, 0x63, 0x75, 0x72, 0x72, 0x65, 0x6e, 0x74, 0x46, 0x6f, 0x72, 0x6b,
0x44, 0x69, 0x67, 0x65, 0x73, 0x74, 0x12, 0x31, 0x0a, 0x11, 0x6e, 0x65, 0x78, 0x74, 0x5f, 0x66,
0x6f, 0x72, 0x6b, 0x5f, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0c, 0x42, 0x05, 0x8a, 0xb5, 0x18, 0x01, 0x34, 0x52, 0x0f, 0x6e, 0x65, 0x78, 0x74, 0x46, 0x6f,
0x72, 0x6b, 0x56, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x6d, 0x0a, 0x0f, 0x6e, 0x65, 0x78,
0x74, 0x5f, 0x66, 0x6f, 0x72, 0x6b, 0x5f, 0x65, 0x70, 0x6f, 0x63, 0x68, 0x18, 0x03, 0x20, 0x01,
0x28, 0x04, 0x42, 0x45, 0x82, 0xb5, 0x18, 0x41, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f,
0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73,
0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69,
0x76, 0x65, 0x73, 0x2e, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x52, 0x0d, 0x6e, 0x65, 0x78, 0x74, 0x46,
0x6f, 0x72, 0x6b, 0x45, 0x70, 0x6f, 0x63, 0x68, 0x22, 0x80, 0x01, 0x0a, 0x0a, 0x4d, 0x65, 0x74,
0x61, 0x44, 0x61, 0x74, 0x61, 0x56, 0x30, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x65, 0x71, 0x5f, 0x6e,
0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x73, 0x65, 0x71,
0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x53, 0x0a, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74,
0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x39, 0x82, 0xb5, 0x18, 0x30, 0x67, 0x69, 0x74,
0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69,
0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67, 0x6f, 0x2d, 0x62, 0x69, 0x74, 0x66, 0x69, 0x65, 0x6c,
0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x36, 0x34, 0x8a, 0xb5, 0x18,
0x01, 0x38, 0x52, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x22, 0xd6, 0x01, 0x0a, 0x0a,
0x4d, 0x65, 0x74, 0x61, 0x44, 0x61, 0x74, 0x61, 0x56, 0x31, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x65,
0x71, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09,
0x73, 0x65, 0x71, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x53, 0x0a, 0x07, 0x61, 0x74, 0x74,
0x6e, 0x65, 0x74, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x39, 0x82, 0xb5, 0x18, 0x30,
0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d,
0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67, 0x6f, 0x2d, 0x62, 0x69, 0x74, 0x66,
0x69, 0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x36, 0x34,
0x8a, 0xb5, 0x18, 0x01, 0x38, 0x52, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x54,
0x0a, 0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c,
0x42, 0x38, 0x82, 0xb5, 0x18, 0x2f, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67,
0x6f, 0x2d, 0x62, 0x69, 0x74, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65,
0x63, 0x74, 0x6f, 0x72, 0x36, 0x34, 0x8a, 0xb5, 0x18, 0x01, 0x38, 0x52, 0x07, 0x61, 0x74, 0x74,
0x6e, 0x65, 0x74, 0x73, 0x12, 0x54, 0x0a, 0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e, 0x65, 0x74, 0x73,
0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x38, 0x82, 0xb5, 0x18, 0x2f, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63,
0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67, 0x6f, 0x2d, 0x62, 0x69, 0x74, 0x66, 0x69, 0x65, 0x6c, 0x64,
0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x34, 0x8a, 0xb5, 0x18, 0x01, 0x31,
0x52, 0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e, 0x65, 0x74, 0x73, 0x22, 0x86, 0x02, 0x0a, 0x0a, 0x4d,
0x65, 0x74, 0x61, 0x44, 0x61, 0x74, 0x61, 0x56, 0x32, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x65, 0x71,
0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x73,
0x65, 0x71, 0x4e, 0x75, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x53, 0x0a, 0x07, 0x61, 0x74, 0x74, 0x6e,
0x65, 0x74, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x39, 0x82, 0xb5, 0x18, 0x30, 0x67,
0x63, 0x74, 0x6f, 0x72, 0x34, 0x8a, 0xb5, 0x18, 0x01, 0x31, 0x52, 0x08, 0x73, 0x79, 0x6e, 0x63,
0x6e, 0x65, 0x74, 0x73, 0x22, 0x86, 0x02, 0x0a, 0x0a, 0x4d, 0x65, 0x74, 0x61, 0x44, 0x61, 0x74,
0x61, 0x56, 0x32, 0x12, 0x1d, 0x0a, 0x0a, 0x73, 0x65, 0x71, 0x5f, 0x6e, 0x75, 0x6d, 0x62, 0x65,
0x72, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x09, 0x73, 0x65, 0x71, 0x4e, 0x75, 0x6d, 0x62,
0x65, 0x72, 0x12, 0x53, 0x0a, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0c, 0x42, 0x39, 0x82, 0xb5, 0x18, 0x30, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e,
0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62,
0x73, 0x2f, 0x67, 0x6f, 0x2d, 0x62, 0x69, 0x74, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69,
0x74, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x36, 0x34, 0x8a, 0xb5, 0x18, 0x01, 0x38, 0x52, 0x07,
0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x54, 0x0a, 0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e,
0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x38, 0x82, 0xb5, 0x18, 0x2f, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61,
0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67, 0x6f, 0x2d, 0x62, 0x69, 0x74, 0x66, 0x69,
0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x36, 0x34, 0x8a,
0xb5, 0x18, 0x01, 0x38, 0x52, 0x07, 0x61, 0x74, 0x74, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x54, 0x0a,
0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e, 0x65, 0x74, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x42,
0x38, 0x82, 0xb5, 0x18, 0x2f, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f,
0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x67, 0x6f,
0x2d, 0x62, 0x69, 0x74, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63,
0x74, 0x6f, 0x72, 0x34, 0x8a, 0xb5, 0x18, 0x01, 0x31, 0x52, 0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e,
0x65, 0x74, 0x73, 0x12, 0x2e, 0x0a, 0x13, 0x63, 0x75, 0x73, 0x74, 0x6f, 0x64, 0x79, 0x5f, 0x67,
0x72, 0x6f, 0x75, 0x70, 0x5f, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04,
0x52, 0x11, 0x63, 0x75, 0x73, 0x74, 0x6f, 0x64, 0x79, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x43, 0x6f,
0x75, 0x6e, 0x74, 0x22, 0x97, 0x01, 0x0a, 0x1a, 0x42, 0x6c, 0x6f, 0x62, 0x53, 0x69, 0x64, 0x65,
0x63, 0x61, 0x72, 0x73, 0x42, 0x79, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x63, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x73, 0x6c, 0x6f, 0x74,
0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c,
0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e,
0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69,
0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x09, 0x73, 0x74,
0x61, 0x72, 0x74, 0x53, 0x6c, 0x6f, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0xc0, 0x01,
0x0a, 0x20, 0x44, 0x61, 0x74, 0x61, 0x43, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x53, 0x69, 0x64, 0x65,
0x63, 0x61, 0x72, 0x73, 0x42, 0x79, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x63, 0x0a, 0x0a, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x73, 0x6c, 0x6f, 0x74,
0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c,
0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e,
0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69,
0x6d, 0x69, 0x74, 0x69, 0x76, 0x65, 0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x09, 0x73, 0x74,
0x61, 0x72, 0x74, 0x53, 0x6c, 0x6f, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x21, 0x0a,
0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x04, 0x42, 0x07,
0x92, 0xb5, 0x18, 0x03, 0x31, 0x32, 0x38, 0x52, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73,
0x22, 0x5b, 0x0a, 0x20, 0x4c, 0x69, 0x67, 0x68, 0x74, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x55,
0x70, 0x64, 0x61, 0x74, 0x65, 0x73, 0x42, 0x79, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x21, 0x0a, 0x0c, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x70, 0x65,
0x72, 0x69, 0x6f, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x0b, 0x73, 0x74, 0x61, 0x72,
0x74, 0x50, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x42, 0x9a, 0x01,
0x0a, 0x19, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65,
0x74, 0x68, 0x2e, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x10, 0x50, 0x32, 0x50,
0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a,
0x39, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63,
0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76,
0x36, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31,
0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x3b, 0x65, 0x74, 0x68, 0xaa, 0x02, 0x15, 0x45, 0x74, 0x68,
0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x74, 0x68, 0x2e, 0x56, 0x31, 0x61, 0x6c, 0x70, 0x68,
0x61, 0x31, 0xca, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x74,
0x68, 0x5c, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
0x65, 0x6c, 0x64, 0x2e, 0x42, 0x69, 0x74, 0x76, 0x65, 0x63, 0x74, 0x6f, 0x72, 0x34, 0x8a, 0xb5,
0x18, 0x01, 0x31, 0x52, 0x08, 0x73, 0x79, 0x6e, 0x63, 0x6e, 0x65, 0x74, 0x73, 0x12, 0x2e, 0x0a,
0x13, 0x63, 0x75, 0x73, 0x74, 0x6f, 0x64, 0x79, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x5f, 0x63,
0x6f, 0x75, 0x6e, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x11, 0x63, 0x75, 0x73, 0x74,
0x6f, 0x64, 0x79, 0x47, 0x72, 0x6f, 0x75, 0x70, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0x97, 0x01,
0x0a, 0x1a, 0x42, 0x6c, 0x6f, 0x62, 0x53, 0x69, 0x64, 0x65, 0x63, 0x61, 0x72, 0x73, 0x42, 0x79,
0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x63, 0x0a, 0x0a,
0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x73, 0x6c, 0x6f, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04,
0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72,
0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73,
0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65,
0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x53, 0x6c, 0x6f,
0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04,
0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0xc0, 0x01, 0x0a, 0x20, 0x44, 0x61, 0x74, 0x61,
0x43, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x53, 0x69, 0x64, 0x65, 0x63, 0x61, 0x72, 0x73, 0x42, 0x79,
0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x63, 0x0a, 0x0a,
0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x73, 0x6c, 0x6f, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04,
0x42, 0x44, 0x82, 0xb5, 0x18, 0x40, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72,
0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73,
0x2d, 0x74, 0x79, 0x70, 0x65, 0x73, 0x2f, 0x70, 0x72, 0x69, 0x6d, 0x69, 0x74, 0x69, 0x76, 0x65,
0x73, 0x2e, 0x53, 0x6c, 0x6f, 0x74, 0x52, 0x09, 0x73, 0x74, 0x61, 0x72, 0x74, 0x53, 0x6c, 0x6f,
0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04,
0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x21, 0x0a, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d,
0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x04, 0x42, 0x07, 0x92, 0xb5, 0x18, 0x03, 0x31, 0x32,
0x38, 0x52, 0x07, 0x63, 0x6f, 0x6c, 0x75, 0x6d, 0x6e, 0x73, 0x22, 0x5b, 0x0a, 0x20, 0x4c, 0x69,
0x67, 0x68, 0x74, 0x43, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x73,
0x42, 0x79, 0x52, 0x61, 0x6e, 0x67, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x21,
0x0a, 0x0c, 0x73, 0x74, 0x61, 0x72, 0x74, 0x5f, 0x70, 0x65, 0x72, 0x69, 0x6f, 0x64, 0x18, 0x01,
0x20, 0x01, 0x28, 0x04, 0x52, 0x0b, 0x73, 0x74, 0x61, 0x72, 0x74, 0x50, 0x65, 0x72, 0x69, 0x6f,
0x64, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04,
0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x42, 0x9a, 0x01, 0x0a, 0x19, 0x6f, 0x72, 0x67, 0x2e,
0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x74, 0x68, 0x2e, 0x76, 0x31, 0x61,
0x6c, 0x70, 0x68, 0x61, 0x31, 0x42, 0x10, 0x50, 0x32, 0x50, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67,
0x65, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x39, 0x67, 0x69, 0x74, 0x68, 0x75,
0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x4f, 0x66, 0x66, 0x63, 0x68, 0x61, 0x69, 0x6e, 0x4c, 0x61,
0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x36, 0x2f, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31,
0x3b, 0x65, 0x74, 0x68, 0xaa, 0x02, 0x15, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e,
0x45, 0x74, 0x68, 0x2e, 0x56, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0xca, 0x02, 0x15, 0x45,
0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x74, 0x68, 0x5c, 0x76, 0x31, 0x61, 0x6c,
0x70, 0x68, 0x61, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -741,17 +857,18 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescGZIP() []byte {
return file_proto_prysm_v1alpha1_p2p_messages_proto_rawDescData
}
var file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes = make([]protoimpl.MessageInfo, 10)
var file_proto_prysm_v1alpha1_p2p_messages_proto_goTypes = []interface{}{
(*Status)(nil), // 0: ethereum.eth.v1alpha1.Status
(*BeaconBlocksByRangeRequest)(nil), // 1: ethereum.eth.v1alpha1.BeaconBlocksByRangeRequest
(*ENRForkID)(nil), // 2: ethereum.eth.v1alpha1.ENRForkID
(*MetaDataV0)(nil), // 3: ethereum.eth.v1alpha1.MetaDataV0
(*MetaDataV1)(nil), // 4: ethereum.eth.v1alpha1.MetaDataV1
(*MetaDataV2)(nil), // 5: ethereum.eth.v1alpha1.MetaDataV2
(*BlobSidecarsByRangeRequest)(nil), // 6: ethereum.eth.v1alpha1.BlobSidecarsByRangeRequest
(*DataColumnSidecarsByRangeRequest)(nil), // 7: ethereum.eth.v1alpha1.DataColumnSidecarsByRangeRequest
(*LightClientUpdatesByRangeRequest)(nil), // 8: ethereum.eth.v1alpha1.LightClientUpdatesByRangeRequest
(*StatusV2)(nil), // 1: ethereum.eth.v1alpha1.StatusV2
(*BeaconBlocksByRangeRequest)(nil), // 2: ethereum.eth.v1alpha1.BeaconBlocksByRangeRequest
(*ENRForkID)(nil), // 3: ethereum.eth.v1alpha1.ENRForkID
(*MetaDataV0)(nil), // 4: ethereum.eth.v1alpha1.MetaDataV0
(*MetaDataV1)(nil), // 5: ethereum.eth.v1alpha1.MetaDataV1
(*MetaDataV2)(nil), // 6: ethereum.eth.v1alpha1.MetaDataV2
(*BlobSidecarsByRangeRequest)(nil), // 7: ethereum.eth.v1alpha1.BlobSidecarsByRangeRequest
(*DataColumnSidecarsByRangeRequest)(nil), // 8: ethereum.eth.v1alpha1.DataColumnSidecarsByRangeRequest
(*LightClientUpdatesByRangeRequest)(nil), // 9: ethereum.eth.v1alpha1.LightClientUpdatesByRangeRequest
}
var file_proto_prysm_v1alpha1_p2p_messages_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
@@ -780,7 +897,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*BeaconBlocksByRangeRequest); i {
switch v := v.(*StatusV2); i {
case 0:
return &v.state
case 1:
@@ -792,7 +909,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ENRForkID); i {
switch v := v.(*BeaconBlocksByRangeRequest); i {
case 0:
return &v.state
case 1:
@@ -804,7 +921,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*MetaDataV0); i {
switch v := v.(*ENRForkID); i {
case 0:
return &v.state
case 1:
@@ -816,7 +933,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*MetaDataV1); i {
switch v := v.(*MetaDataV0); i {
case 0:
return &v.state
case 1:
@@ -828,7 +945,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*MetaDataV2); i {
switch v := v.(*MetaDataV1); i {
case 0:
return &v.state
case 1:
@@ -840,7 +957,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*BlobSidecarsByRangeRequest); i {
switch v := v.(*MetaDataV2); i {
case 0:
return &v.state
case 1:
@@ -852,7 +969,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DataColumnSidecarsByRangeRequest); i {
switch v := v.(*BlobSidecarsByRangeRequest); i {
case 0:
return &v.state
case 1:
@@ -864,6 +981,18 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DataColumnSidecarsByRangeRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_proto_prysm_v1alpha1_p2p_messages_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*LightClientUpdatesByRangeRequest); i {
case 0:
return &v.state
@@ -882,7 +1011,7 @@ func file_proto_prysm_v1alpha1_p2p_messages_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_proto_prysm_v1alpha1_p2p_messages_proto_rawDesc,
NumEnums: 0,
NumMessages: 9,
NumMessages: 10,
NumExtensions: 0,
NumServices: 0,
},


@@ -26,6 +26,24 @@ message Status {
];
}
message StatusV2 {
bytes fork_digest = 1 [(ethereum.eth.ext.ssz_size) = "4"];
bytes finalized_root = 2 [(ethereum.eth.ext.ssz_size) = "32"];
uint64 finalized_epoch = 3 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Epoch"
];
bytes head_root = 4 [(ethereum.eth.ext.ssz_size) = "32"];
uint64 head_slot = 5 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Slot"
];
uint64 earliest_available_slot = 6 [
(ethereum.eth.ext.cast_type) =
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives.Slot"
];
}
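The ssz_size and cast_type options above describe the same fixed 92-byte layout that the generated decoder walks earlier in this diff; a small arithmetic cross-check (illustrative only, not part of the changeset):

package main

import "fmt"

// Offsets used by the generated StatusV2 decoder:
//   fork_digest             buf[0:4]
//   finalized_root          buf[4:36]
//   finalized_epoch         buf[36:44]  little-endian uint64
//   head_root               buf[44:76]
//   head_slot               buf[76:84]  little-endian uint64
//   earliest_available_slot buf[84:92]  little-endian uint64
func main() {
	fmt.Println(4 + 32 + 8 + 32 + 8 + 8) // 92, matching StatusV2.SizeSSZ()
}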
message BeaconBlocksByRangeRequest {
uint64 start_slot = 1 [
(ethereum.eth.ext.cast_type) =


@@ -248,6 +248,9 @@ func (v *ValidatorNode) Start(ctx context.Context) error {
args = append(args,
fmt.Sprintf("--%s=http://localhost:%d", flags.BeaconRESTApiProviderFlag.Name, beaconRestApiPort),
fmt.Sprintf("--%s", features.EnableBeaconRESTApi.Name))
if v.config.UseSSZOnly {
args = append(args, fmt.Sprintf("--%s", features.SSZOnly.Name))
}
}
// Only apply e2e flags to the current branch. New flags may not exist in previous release.

Some files were not shown because too many files have changed in this diff.