Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-09 21:38:05 -05:00)

Compare commits: gas-limit- ... debug-col-

12 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | a16f16b2d9 |  |
|  | f5a9394c77 |  |
|  | 4095da8568 |  |
|  | f1288a18ec |  |
|  | 543ebe857e |  |
|  | e569df5ebc |  |
|  | 8c324cc491 |  |
|  | 265d84569c |  |
|  | 79b064a6cc |  |
|  | 182c18a7b2 |  |
|  | 8b9c161560 |  |
|  | 4a4532f3ba |  |
43 CHANGELOG.md
@@ -4,6 +4,49 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05

This release has more work on PeerDAS and light client support. Additionally, we have a few bug fixes:

- Blob cache size is now correctly set at startup.
- A fix for slashing protection history exports where the validator database was in a nested folder.
- Corrected behavior of the API call for state committees with an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` for Prysm docker images.

In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.

### Added

- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added /bin/sh symlink to docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecars verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- Random forkchoice spec tests for fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)

### Changed

- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - Exit early if no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)

### Fixed

- Fix cyclical dependencies issue when using testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set seen blob cache size correctly based on current slot time at start up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/`. (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when the slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)

## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21

This release has important bugfixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:
@@ -14,6 +14,7 @@ go_library(
         "endpoints_beacon.go",
         "endpoints_blob.go",
         "endpoints_builder.go",
+        "endpoints_column_sidecar.go",
         "endpoints_config.go",
         "endpoints_debug.go",
         "endpoints_events.go",
19 api/server/structs/endpoints_column_sidecar.go (new file)
@@ -0,0 +1,19 @@
package structs

// DataColumnSidecar represents a sidecar containing data columns for a specific index.
type DataColumnSidecar struct {
	Index                        string                   `json:"index"`
	Column                       []string                 `json:"column"`
	KZGCommitments               []string                 `json:"kzg_commitments"`
	KZGProofs                    []string                 `json:"kzg_proofs"`
	SignedBlockHeader            *SignedBeaconBlockHeader `json:"signed_block_header"`
	KZGCommitmentsInclusionProof []string                 `json:"kzg_commitments_inclusion_proof"`
}

// DataColumnSidecarResponse represents the response structure for data column sidecars for beacon api endpoints.
type DataColumnSidecarResponse struct {
	Version             string               `json:"version"`
	Data                []*DataColumnSidecar `json:"data"`
	ExecutionOptimistic bool                 `json:"execution_optimistic"`
	Finalized           bool                 `json:"finalized"`
}
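Since these are plain JSON-tagged structs, a response can be decoded with the standard library alone. A minimal sketch, using an illustrative payload (not taken from a real endpoint) and with the nested `SignedBeaconBlockHeader` omitted so the example stays self-contained:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local copies of the structs above, trimmed to what the sketch needs.
type DataColumnSidecar struct {
	Index          string   `json:"index"`
	Column         []string `json:"column"`
	KZGCommitments []string `json:"kzg_commitments"`
	KZGProofs      []string `json:"kzg_proofs"`
}

type DataColumnSidecarResponse struct {
	Version             string               `json:"version"`
	Data                []*DataColumnSidecar `json:"data"`
	ExecutionOptimistic bool                 `json:"execution_optimistic"`
	Finalized           bool                 `json:"finalized"`
}

func main() {
	// Illustrative payload shaped like the struct above.
	payload := []byte(`{"version":"fulu","data":[{"index":"1","column":[],"kzg_commitments":[],"kzg_proofs":[]}],"execution_optimistic":false,"finalized":true}`)

	var resp DataColumnSidecarResponse
	if err := json.Unmarshal(payload, &resp); err != nil {
		panic(err)
	}
	// Prints: fulu 1
	fmt.Println(resp.Version, len(resp.Data))
}
```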
@@ -33,8 +33,14 @@ type GetPeerResponse struct {
 	Data *Peer `json:"data"`
 }

+// Added Meta to align with beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
+type Meta struct {
+	Count int `json:"count"`
+}
+
 type GetPeersResponse struct {
 	Data []*Peer `json:"data"`
+	Meta Meta    `json:"meta"`
 }

 type Peer struct {
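Per the linked spec, `meta.count` reports the total number of peers in the response. A minimal sketch of how a handler might populate the new field, using the types above (this helper name is hypothetical, not Prysm code):

```go
// buildGetPeersResponse is a hypothetical helper: it wraps the filtered
// peer list and sets Meta.Count to the number of returned peers.
func buildGetPeersResponse(peers []*Peer) *GetPeersResponse {
	return &GetPeersResponse{
		Data: peers,
		Meta: Meta{Count: len(peers)},
	}
}
```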
@@ -72,8 +72,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
 	}
 	if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
 		defer s.processLightClientUpdates(cfg)
-		defer s.saveLightClientUpdate(cfg)
-		defer s.saveLightClientBootstrap(cfg)
 	}
 	defer s.sendStateFeedOnBlock(cfg)
 	defer reportProcessingTime(startTime)
@@ -131,6 +131,12 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
 }

 func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
+	if err := s.processLightClientUpdate(cfg); err != nil {
+		log.WithError(err).Error("Failed to process light client update")
+	}
+	if err := s.processLightClientBootstrap(cfg); err != nil {
+		log.WithError(err).Error("Failed to process light client bootstrap")
+	}
 	if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
 		log.WithError(err).Error("Failed to process light client optimistic update")
 	}
@@ -139,38 +145,33 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
 	}
 }

-// saveLightClientUpdate saves the light client update for this block
+// processLightClientUpdate saves the light client update for this block
 // if it's better than the already saved one, when feature flag is enabled.
-func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
+func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
 	attestedRoot := cfg.roblock.Block().ParentRoot()
 	attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
 	if err != nil {
-		log.WithError(err).Errorf("Saving light client update failed: Could not get attested block for root %#x", attestedRoot)
-		return
+		return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
 	}
 	if attestedBlock == nil || attestedBlock.IsNil() {
-		log.Error("Saving light client update failed: Attested block is nil")
-		return
+		return errors.New("attested block is nil")
 	}
 	attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
 	if err != nil {
-		log.WithError(err).Errorf("Saving light client update failed: Could not get attested state for root %#x", attestedRoot)
-		return
+		return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
 	}
 	if attestedState == nil || attestedState.IsNil() {
-		log.Error("Saving light client update failed: Attested state is nil")
-		return
+		return errors.New("attested state is nil")
 	}

 	finalizedRoot := attestedState.FinalizedCheckpoint().Root
 	finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
 	if err != nil {
 		if errors.Is(err, errBlockNotFoundInCacheOrDB) {
-			log.Debugf("Skipping saving light client update: Finalized block is nil for root %#x", finalizedRoot)
-		} else {
-			log.WithError(err).Errorf("Saving light client update failed: Could not get finalized block for root %#x", finalizedRoot)
+			log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
+			return nil
 		}
-		return
+		return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
 	}

 	update, err := lightclient.NewLightClientUpdateFromBeaconState(
@@ -183,57 +184,52 @@ func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
 		finalizedBlock,
 	)
 	if err != nil {
-		log.WithError(err).Error("Saving light client update failed: Could not create light client update")
-		return
+		return errors.Wrapf(err, "could not create light client update")
 	}

 	period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))

 	oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
 	if err != nil {
-		log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
-		return
+		return errors.Wrapf(err, "could not get current light client update")
 	}

 	if oldUpdate == nil {
 		if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
-			log.WithError(err).Error("Saving light client update failed: Could not save light client update")
-		} else {
-			log.WithField("period", period).Debug("Saving light client update: Saved new update")
+			return errors.Wrapf(err, "could not save light client update")
 		}
-		return
+		log.WithField("period", period).Debug("Saved new light client update")
+		return nil
 	}

 	isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
 	if err != nil {
-		log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
-		return
+		return errors.Wrapf(err, "could not compare light client updates")
 	}

 	if isNewUpdateBetter {
 		if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
-			log.WithError(err).Error("Saving light client update failed: Could not save light client update")
-		} else {
-			log.WithField("period", period).Debug("Saving light client update: Saved new update")
+			return errors.Wrapf(err, "could not save light client update")
 		}
-	} else {
-		log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
+		log.WithField("period", period).Debug("Saved new light client update")
+		return nil
 	}
+	log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
+	return nil
 }

-// saveLightClientBootstrap saves a light client bootstrap for this block
+// processLightClientBootstrap saves a light client bootstrap for this block
 // when feature flag is enabled.
-func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
+func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error {
 	blockRoot := cfg.roblock.Root()
 	bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
 	if err != nil {
-		log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
-		return
+		return errors.Wrapf(err, "could not create light client bootstrap")
 	}
-	err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
-	if err != nil {
-		log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
+	if err := s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap); err != nil {
+		return errors.Wrapf(err, "could not save light client bootstrap")
 	}
+	return nil
 }

 func (s *Service) processLightClientFinalityUpdate(
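The shape of this refactor is uniform: each former `save*` helper stops logging-and-returning and instead returns a wrapped error, and `processLightClientUpdates` becomes the single place that logs. A distilled sketch of the pattern with generic names (not Prysm code):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func doWork() error { return fmt.Errorf("boom") }

// saveThing returns a wrapped error instead of logging and returning void,
// mirroring the saveLightClientUpdate -> processLightClientUpdate change.
func saveThing() error {
	if err := doWork(); err != nil {
		return errors.Wrap(err, "could not do work")
	}
	return nil
}

// processThings is the single caller that decides how failures are reported,
// like processLightClientUpdates above (which uses log.WithError(...).Error(...)).
func processThings() {
	if err := saveThing(); err != nil {
		fmt.Println("failed to save thing:", err)
	}
}

func main() { processThings() }
```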
@@ -2706,7 +2706,7 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
 	return r
 }

-func TestSaveLightClientUpdate(t *testing.T) {
+func TestProcessLightClientUpdate(t *testing.T) {
 	featCfg := &features.Flags{}
 	featCfg.EnableLightClient = true
 	reset := features.InitWithReset(featCfg)
@@ -2747,7 +2747,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 		isValidPayload: true,
 	}

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	// Check that the light client update is saved
 	period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2802,7 +2802,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
 	require.NoError(t, err)

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 	require.NoError(t, err)
@@ -2863,7 +2863,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
 	require.NoError(t, err)

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 	require.NoError(t, err)
@@ -2906,7 +2906,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 		isValidPayload: true,
 	}

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	// Check that the light client update is saved
 	period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2960,7 +2960,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
 	require.NoError(t, err)

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 	require.NoError(t, err)
@@ -3021,7 +3021,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
 	require.NoError(t, err)

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 	require.NoError(t, err)
@@ -3064,7 +3064,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 		isValidPayload: true,
 	}

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	// Check that the light client update is saved
 	period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -3118,7 +3118,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
 	require.NoError(t, err)

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 	require.NoError(t, err)
@@ -3179,7 +3179,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
 	require.NoError(t, err)

-	s.saveLightClientUpdate(cfg)
+	require.NoError(t, s.processLightClientUpdate(cfg))

 	u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 	require.NoError(t, err)
@@ -3192,7 +3192,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
 	reset()
 }

-func TestSaveLightClientBootstrap(t *testing.T) {
+func TestProcessLightClientBootstrap(t *testing.T) {
 	featCfg := &features.Flags{}
 	featCfg.EnableLightClient = true
 	reset := features.InitWithReset(featCfg)
@@ -3222,7 +3222,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
 		isValidPayload: true,
 	}

-	s.saveLightClientBootstrap(cfg)
+	require.NoError(t, s.processLightClientBootstrap(cfg))

 	// Check that the light client bootstrap is saved
 	b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3257,7 +3257,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
 		isValidPayload: true,
 	}

-	s.saveLightClientBootstrap(cfg)
+	require.NoError(t, s.processLightClientBootstrap(cfg))

 	// Check that the light client bootstrap is saved
 	b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3292,7 +3292,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
 		isValidPayload: true,
 	}

-	s.saveLightClientBootstrap(cfg)
+	require.NoError(t, s.processLightClientBootstrap(cfg))

 	// Check that the light client bootstrap is saved
 	b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -229,13 +229,16 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBeaconBlockBody) error {
 	if body.Version() < version.Deneb {
 		return nil
 	}

 	kzgs, err := body.BlobKzgCommitments()
 	if err != nil {
 		return err
 	}
-	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
-	if len(kzgs) > maxBlobsPerBlock {
-		return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
+
+	commitmentCount, maxBlobsPerBlock := len(kzgs), params.BeaconConfig().MaxBlobsPerBlock(slot)
+	if commitmentCount > maxBlobsPerBlock {
+		return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
 	}
+
 	return nil
 }

@@ -926,8 +926,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
 	require.NoError(t, err)
 	require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))

-	b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
+	maxCommitmentsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())
+
+	b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, maxCommitmentsPerBlock+1)}}
 	rb, err = consensusblocks.NewBeaconBlock(b)
 	require.NoError(t, err)
-	require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
+	require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: actual count %d - max allowed %d", maxCommitmentsPerBlock+1, maxCommitmentsPerBlock), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
 }
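The reworked error message surfaces both sides of the comparison. A standalone sketch of the same check with plain ints (outside Prysm's types):

```go
package main

import "fmt"

// verifyCommitmentCount mirrors the shape of the updated check: report the
// actual count and the allowed maximum instead of the count alone.
func verifyCommitmentCount(count, maxPerBlock int) error {
	if count > maxPerBlock {
		return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", count, maxPerBlock)
	}
	return nil
}

func main() {
	// Prints: too many kzg commitments in block: actual count 7 - max allowed 6
	fmt.Println(verifyCommitmentCount(7, 6))
}
```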
@@ -277,10 +277,10 @@ type CommitteeAssignment struct {
 	CommitteeIndex primitives.CommitteeIndex
 }

-// verifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
+// VerifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
 // It checks if the epoch is not greater than the next epoch, and if the start slot of the epoch is greater
 // than or equal to the minimum valid start slot calculated based on the state's current slot and historical roots.
-func verifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) error {
+func VerifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) error {
 	nextEpoch := time.NextEpoch(state)
 	if epoch > nextEpoch {
 		return fmt.Errorf("epoch %d can't be greater than next epoch %d", epoch, nextEpoch)
@@ -308,7 +308,7 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch primitives.Epoch
 	defer span.End()

 	// Verify if the epoch is valid for assignment based on the provided state.
-	if err := verifyAssignmentEpoch(epoch, state); err != nil {
+	if err := VerifyAssignmentEpoch(epoch, state); err != nil {
 		return nil, err
 	}
 	startSlot, err := slots.EpochStart(epoch)
@@ -351,6 +351,61 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch primitives.Epoch
 	return proposerAssignments, nil
 }

+// LiteAssignment is a lite version of CommitteeAssignment, and has committee length
+// and validator committee index instead of the full committee list.
+type LiteAssignment struct {
+	AttesterSlot            primitives.Slot           // slot in which to attest
+	CommitteeIndex          primitives.CommitteeIndex // position of the committee in the slot
+	CommitteeLength         uint64                    // number of members in the committee
+	ValidatorCommitteeIndex uint64                    // validator’s offset inside the committee
+}
+
+// PrecomputeCommittees returns an array indexed by (slot-startSlot)
+// whose elements are the beacon committees of that slot.
+func PrecomputeCommittees(
+	ctx context.Context,
+	st state.BeaconState,
+	startSlot primitives.Slot,
+) ([][][]primitives.ValidatorIndex, error) {
+	cfg := params.BeaconConfig()
+	out := make([][][]primitives.ValidatorIndex, cfg.SlotsPerEpoch)
+
+	for relativeSlot := primitives.Slot(0); relativeSlot < cfg.SlotsPerEpoch; relativeSlot++ {
+		slot := startSlot + relativeSlot
+
+		comms, err := BeaconCommittees(ctx, st, slot)
+		if err != nil {
+			return nil, errors.Wrapf(err, "BeaconCommittees failed at slot %d", slot)
+		}
+		out[relativeSlot] = comms
+	}
+	return out, nil
+}
+
+// AssignmentForValidator scans the cached committees once
+// and returns the duty for a single validator.
+func AssignmentForValidator(
+	bySlot [][][]primitives.ValidatorIndex,
+	startSlot primitives.Slot,
+	vIdx primitives.ValidatorIndex,
+) *LiteAssignment {
+	for relativeSlot, committees := range bySlot {
+		for cIdx, committee := range committees {
+			for pos, member := range committee {
+				if member == vIdx {
+					return &LiteAssignment{
+						AttesterSlot:            startSlot + primitives.Slot(relativeSlot),
+						CommitteeIndex:          primitives.CommitteeIndex(cIdx),
+						CommitteeLength:         uint64(len(committee)),
+						ValidatorCommitteeIndex: uint64(pos),
+					}
+				}
+			}
+		}
+	}
+	return nil // validator is not scheduled this epoch
+}
+
 // CommitteeAssignments calculates committee assignments for each validator during the specified epoch.
 // It retrieves active validator indices, determines the number of committees per slot, and computes
 // assignments for each validator based on their presence in the provided validators slice.
@@ -359,7 +414,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch primitives.Epoch
 	defer span.End()

 	// Verify if the epoch is valid for assignment based on the provided state.
-	if err := verifyAssignmentEpoch(epoch, state); err != nil {
+	if err := VerifyAssignmentEpoch(epoch, state); err != nil {
 		return nil, err
 	}
 	startSlot, err := slots.EpochStart(epoch)
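Taken together, the new helpers let a caller pay the committee-shuffle cost once per epoch and then answer per-validator duty lookups from the cached slices. A minimal sketch of the intended flow; the wrapper below is hypothetical and assumes the usual Prysm imports (`helpers`, `slots`, `state`, `primitives`) plus `context` and `fmt`:

```go
// printDuties is a hypothetical wrapper, not Prysm code: it precomputes one
// epoch's committees, then resolves each validator's duty from the cache.
func printDuties(ctx context.Context, st state.BeaconState, epoch primitives.Epoch, validators []primitives.ValidatorIndex) error {
	startSlot, err := slots.EpochStart(epoch)
	if err != nil {
		return err
	}
	// One pass over the epoch: SlotsPerEpoch calls to BeaconCommittees.
	bySlot, err := helpers.PrecomputeCommittees(ctx, st, startSlot)
	if err != nil {
		return err
	}
	// Each lookup is now a scan of cached slices; no further shuffling.
	for _, vIdx := range validators {
		if duty := helpers.AssignmentForValidator(bySlot, startSlot, vIdx); duty != nil {
			fmt.Printf("validator %d: slot %d, committee %d, position %d of %d\n",
				vIdx, duty.AttesterSlot, duty.CommitteeIndex, duty.ValidatorCommitteeIndex, duty.CommitteeLength)
		}
	}
	return nil
}
```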
@@ -871,3 +871,48 @@ func TestBeaconCommitteesFromCache(t *testing.T) {
 		assert.DeepEqual(t, committees[idx], committee)
 	}
 }
+
+func TestPrecomputeCommittees_HappyPath(t *testing.T) {
+	cfg := params.BeaconConfig()
+	start := primitives.Slot(100)
+	ctx := context.Background()
+	st, _ := util.DeterministicGenesisState(t, 256)
+
+	got, err := helpers.PrecomputeCommittees(ctx, st, start)
+
+	require.NoError(t, err)
+	require.Equal(t, len(got), int(cfg.SlotsPerEpoch), "outer slice length mismatch")
+
+	for i := range got {
+		expSlot := start + primitives.Slot(i)
+		comms, err := helpers.BeaconCommittees(ctx, st, expSlot)
+		require.NoError(t, err)
+		require.DeepEqual(t, comms, got[i])
+	}
+}
+
+func TestAssignmentForValidator(t *testing.T) {
+	start := primitives.Slot(200)
+	bySlot := [][][]primitives.ValidatorIndex{
+		{{1, 2, 3}},
+		{{7, 8, 9}},
+	}
+	vIdx := primitives.ValidatorIndex(8)
+
+	got := helpers.AssignmentForValidator(bySlot, start, vIdx)
+
+	require.NotNil(t, got)
+	require.Equal(t, start+1, got.AttesterSlot)
+	require.Equal(t, primitives.CommitteeIndex(0), got.CommitteeIndex)
+	require.Equal(t, uint64(3), got.CommitteeLength)
+	require.Equal(t, uint64(1), got.ValidatorCommitteeIndex)
+
+	t.Run("Not Found", func(t *testing.T) {
+		start = primitives.Slot(300)
+		bySlot = [][][]primitives.ValidatorIndex{
+			{{4, 5, 6}},
+		}
+		got = helpers.AssignmentForValidator(bySlot, start, primitives.ValidatorIndex(99))
+		require.IsNil(t, got)
+	})
+}
@@ -3,22 +3,27 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")

 go_library(
     name = "go_default_library",
     srcs = [
-        "availability.go",
-        "cache.go",
+        "availability_blobs.go",
+        "availability_columns.go",
+        "blob_cache.go",
+        "data_column_cache.go",
         "iface.go",
         "mock.go",
     ],
     importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
     visibility = ["//visibility:public"],
     deps = [
+        "//beacon-chain/core/peerdas:go_default_library",
         "//beacon-chain/db/filesystem:go_default_library",
+        "//beacon-chain/verification:go_default_library",
         "//config/fieldparams:go_default_library",
         "//config/params:go_default_library",
         "//consensus-types/blocks:go_default_library",
         "//consensus-types/primitives:go_default_library",
         "//runtime/logging:go_default_library",
+        "//runtime/version:go_default_library",
         "//time/slots:go_default_library",
+        "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
         "@com_github_pkg_errors//:go_default_library",
         "@com_github_sirupsen_logrus//:go_default_library",
     ],
@@ -27,13 +32,18 @@ go_library(

 go_test(
     name = "go_default_test",
     srcs = [
-        "availability_test.go",
-        "cache_test.go",
+        "availability_blobs_test.go",
+        "availability_columns_test.go",
+        "blob_cache_test.go",
+        "data_column_cache_test.go",
     ],
     embed = [":go_default_library"],
     deps = [
+        "//beacon-chain/core/peerdas:go_default_library",
         "//beacon-chain/db/filesystem:go_default_library",
+        "//beacon-chain/verification:go_default_library",
+        "//cmd/beacon-chain/flags:go_default_library",
         "//config/fieldparams:go_default_library",
         "//config/params:go_default_library",
         "//consensus-types/blocks:go_default_library",
         "//consensus-types/primitives:go_default_library",
@@ -41,6 +51,7 @@ go_test(
         "//testing/require:go_default_library",
         "//testing/util:go_default_library",
         "//time/slots:go_default_library",
+        "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
         "@com_github_pkg_errors//:go_default_library",
     ],
 )
@@ -20,16 +20,16 @@ var (
 	errMixedRoots = errors.New("BlobSidecars must all be for the same block")
 )

-// LazilyPersistentStore is an implementation of AvailabilityStore to be used when batch syncing.
+// LazilyPersistentStoreBlob is an implementation of AvailabilityStore to be used when batch syncing.
 // This implementation will hold any blobs passed to Persist until the IsDataAvailable is called for their
 // block, at which time they will undergo full verification and be saved to the disk.
-type LazilyPersistentStore struct {
+type LazilyPersistentStoreBlob struct {
 	store    *filesystem.BlobStorage
-	cache    *cache
+	cache    *blobCache
 	verifier BlobBatchVerifier
 }

-var _ AvailabilityStore = &LazilyPersistentStore{}
+var _ AvailabilityStore = &LazilyPersistentStoreBlob{}

 // BlobBatchVerifier enables LazyAvailabilityStore to manage the verification process
 // going from ROBlob->VerifiedROBlob, while avoiding the decision of which individual verifications
@@ -42,10 +42,10 @@ type BlobBatchVerifier interface {

 // NewLazilyPersistentStore creates a new LazilyPersistentStore. This constructor should always be used
 // when creating a LazilyPersistentStore because it needs to initialize the cache under the hood.
-func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStore {
-	return &LazilyPersistentStore{
+func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
+	return &LazilyPersistentStoreBlob{
 		store:    store,
-		cache:    newCache(),
+		cache:    newBlobCache(),
 		verifier: verifier,
 	}
 }
@@ -53,25 +53,31 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
 // Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
 // for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
 // by the given block are guaranteed to be persisted for the remainder of the retention period.
-func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
-	if len(sc) == 0 {
+func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
+	if len(sidecars) == 0 {
 		return nil
 	}
-	if len(sc) > 1 {
-		first := sc[0].BlockRoot()
-		for i := 1; i < len(sc); i++ {
-			if first != sc[i].BlockRoot() {
+
+	blobSidecars, err := blocks.BlobSidecarsFromSidecars(sidecars)
+	if err != nil {
+		return errors.Wrap(err, "blob sidecars from sidecars")
+	}
+
+	if len(blobSidecars) > 1 {
+		firstRoot := blobSidecars[0].BlockRoot()
+		for _, sidecar := range blobSidecars[1:] {
+			if sidecar.BlockRoot() != firstRoot {
 				return errMixedRoots
 			}
 		}
 	}
-	if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
+	if !params.WithinDAPeriod(slots.ToEpoch(blobSidecars[0].Slot()), slots.ToEpoch(current)) {
 		return nil
 	}
-	key := keyFromSidecar(sc[0])
+	key := keyFromSidecar(blobSidecars[0])
 	entry := s.cache.ensure(key)
-	for i := range sc {
-		if err := entry.stash(&sc[i]); err != nil {
+	for _, blobSidecar := range blobSidecars {
+		if err := entry.stash(&blobSidecar); err != nil {
 			return err
 		}
 	}
@@ -80,7 +86,7 @@ func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {

 // IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
 // BlobSidecars already in the db are assumed to have been previously verified against the block.
-func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
+func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
 	blockCommitments, err := commitmentsToCheck(b, current)
 	if err != nil {
 		return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
@@ -116,9 +116,11 @@ func TestLazilyPersistent_Missing(t *testing.T) {
 	ctx := context.Background()
 	store := filesystem.NewEphemeralBlobStorage(t)

-	blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
+	blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)

-	mbv := &mockBlobBatchVerifier{t: t, scs: scs}
+	scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
+
+	mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
 	as := NewLazilyPersistentStore(store, mbv)

 	// Only one commitment persisted, should return error with other indices
@@ -141,12 +143,14 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
 	ctx := context.Background()
 	store := filesystem.NewEphemeralBlobStorage(t)

-	blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
+	blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)

 	mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
-	scs[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
+	blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
 	as := NewLazilyPersistentStore(store, mbv)

+	scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
+
 	// Only one commitment persisted, should return error with other indices
 	require.NoError(t, as.Persist(1, scs[0]))
 	err := as.IsDataAvailable(ctx, 1, blk)
@@ -155,7 +159,10 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
 }

 func TestLazyPersistOnceCommitted(t *testing.T) {
-	_, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
+	_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
+
+	scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
+
 	as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
 	// stashes as expected
 	require.NoError(t, as.Persist(1, scs...))
@@ -163,10 +170,13 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
 	require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)

 	// ignores index out of bound
-	scs[0].Index = 6
-	require.ErrorIs(t, as.Persist(1, scs[0]), errIndexOutOfBounds)
+	blobSidecars[0].Index = 6
+	require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)

-	_, more := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
+	_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
+
+	more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
+
 	// ignores sidecars before the retention period
 	slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
 	require.NoError(t, err)
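Persist now takes the generic `blocks.ROSidecar`, so blob callers wrap their concrete sidecars first, exactly as the updated tests do. A condensed sketch of the call path; the wrapper function is hypothetical, and the conversion helper is the one used in the tests above:

```go
// persistAndCheck is a hypothetical wrapper, not Prysm code: it stashes blob
// sidecars through the generic ROSidecar interface, then proves availability
// once the corresponding block is known.
func persistAndCheck(ctx context.Context, store *das.LazilyPersistentStoreBlob, current primitives.Slot, blobSidecars []blocks.ROBlob, blk blocks.ROBlock) error {
	sidecars := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
	if err := store.Persist(current, sidecars...); err != nil {
		return err
	}
	// Verification and disk writes happen here, once per block.
	return store.IsDataAvailable(ctx, current, blk)
}
```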
208 beacon-chain/das/availability_columns.go (new file)
@@ -0,0 +1,208 @@
package das

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/runtime/version"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"github.com/ethereum/go-ethereum/p2p/enode"
	errors "github.com/pkg/errors"
)

// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
	store                  *filesystem.DataColumnStorage
	nodeID                 enode.ID
	cache                  *dataColumnCache
	custodyInfo            *peerdas.CustodyInfo
	newDataColumnsVerifier verification.NewDataColumnsVerifier
}

var _ AvailabilityStore = &LazilyPersistentStoreColumn{}

// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
	VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}

// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(store *filesystem.DataColumnStorage, nodeID enode.ID, newDataColumnsVerifier verification.NewDataColumnsVerifier, custodyInfo *peerdas.CustodyInfo) *LazilyPersistentStoreColumn {
	return &LazilyPersistentStoreColumn{
		store:                  store,
		nodeID:                 nodeID,
		cache:                  newDataColumnCache(),
		custodyInfo:            custodyInfo,
		newDataColumnsVerifier: newDataColumnsVerifier,
	}
}

// Persist adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
	if len(sidecars) == 0 {
		return nil
	}

	dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
	if err != nil {
		return errors.Wrap(err, "data column sidecars from sidecars")
	}

	// It is safe to retrieve the first sidecar.
	firstSidecar := dataColumnSidecars[0]

	if len(sidecars) > 1 {
		firstRoot := firstSidecar.BlockRoot()
		for _, sidecar := range dataColumnSidecars[1:] {
			if sidecar.BlockRoot() != firstRoot {
				return errMixedRoots
			}
		}
	}

	firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
	if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
		return nil
	}

	key := cacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
	entry := s.cache.ensure(key)

	for _, sidecar := range dataColumnSidecars {
		if err := entry.stash(&sidecar); err != nil {
			return errors.Wrap(err, "stash DataColumnSidecar")
		}
	}

	return nil
}

// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
	blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
	if err != nil {
		return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
	}

	// Return early for blocks that do not have any commitments.
	if blockCommitments.count() == 0 {
		return nil
	}

	// Get the root of the block.
	blockRoot := block.Root()

	// Build the cache key for the block.
	key := cacheKey{slot: block.Block().Slot(), root: blockRoot}

	// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
	entry := s.cache.ensure(key)

	// Delete the cache entry for the block at the end.
	defer s.cache.delete(key)

	// Set the disk summary for the block in the cache entry.
	entry.setDiskSummary(s.store.Summary(blockRoot))

	// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
	// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
	// ignore their response and decrease their peer score.
	roDataColumns, err := entry.filter(blockRoot, blockCommitments)
	if err != nil {
		return errors.Wrap(err, "entry filter")
	}

	// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
	verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)

	if err := verifier.ValidFields(); err != nil {
		return errors.Wrap(err, "valid")
	}

	if err := verifier.SidecarInclusionProven(); err != nil {
		return errors.Wrap(err, "sidecar inclusion proven")
	}

	if err := verifier.SidecarKzgProofVerified(); err != nil {
		return errors.Wrap(err, "sidecar KZG proof verified")
	}

	verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
	if err != nil {
		return errors.Wrap(err, "verified RO data columns - should never happen")
	}

	if err := s.store.Save(verifiedRoDataColumns); err != nil {
		return errors.Wrap(err, "save data column sidecars")
	}

	return nil
}

// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
	// Return early for blocks that are pre-Fulu.
	if block.Version() < version.Fulu {
		return &safeCommitmentsArray{}, nil
	}

	// Compute the block epoch.
	blockSlot := block.Block().Slot()
	blockEpoch := slots.ToEpoch(blockSlot)

	// Compute the current epoch.
	currentEpoch := slots.ToEpoch(currentSlot)

	// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
	if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
		return &safeCommitmentsArray{}, nil
	}

	// Retrieve the KZG commitments for the block.
	kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
	if err != nil {
		return nil, errors.Wrap(err, "blob KZG commitments")
	}

	// Return early if there are no commitments in the block.
	if len(kzgCommitments) == 0 {
		return &safeCommitmentsArray{}, nil
	}

	// Retrieve the custody group count.
	custodyGroupCount := s.custodyInfo.ActualGroupCount()

	// Retrieve peer info.
	peerInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
	if err != nil {
		return nil, errors.Wrap(err, "peer info")
	}

	// Create a safe commitments array for the custody columns.
	commitmentsArray := &safeCommitmentsArray{}
	commitmentsArraySize := uint64(len(commitmentsArray))

	for column := range peerInfo.CustodyColumns {
		if column >= commitmentsArraySize {
			return nil, errors.Errorf("custody column index %d too high (max allowed %d) - should never happen", column, commitmentsArraySize)
		}

		commitmentsArray[column] = kzgCommitments
	}

	return commitmentsArray, nil
}
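fullCommitmentsToCheck produces an array with one slot per column: every column this node custodies gets the block's full commitment list, and all other columns stay empty. A toy sketch of that shape with plain slices (the column count and custody set here are illustrative, not taken from the config):

```go
package main

import "fmt"

// Illustrative stand-in for the Fulu column count; the real value comes from
// the beacon chain config, not from this constant.
const numberOfColumns = 128

func main() {
	// Every data column of a block carries cells for all of the block's blobs,
	// so each custodied column must be checked against the full commitment list.
	kzgCommitments := [][]byte{[]byte("c0"), []byte("c1")}
	custodyColumns := map[uint64]struct{}{3: {}, 64: {}} // columns this node custodies

	var commitmentsArray [numberOfColumns][][]byte
	for column := range custodyColumns {
		commitmentsArray[column] = kzgCommitments
	}

	// Prints: 2 0 — custodied column 3 expects both commitments, column 4 expects none.
	fmt.Println(len(commitmentsArray[3]), len(commitmentsArray[4]))
}
```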
303 beacon-chain/das/availability_columns_test.go (new file)
@@ -0,0 +1,303 @@
package das
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
|
||||
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/util"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
"github.com/ethereum/go-ethereum/p2p/enode"
|
||||
)
|
||||
|
||||
var commitments = [][]byte{
|
||||
bytesutil.PadTo([]byte("a"), 48),
|
||||
bytesutil.PadTo([]byte("b"), 48),
|
||||
bytesutil.PadTo([]byte("c"), 48),
|
||||
bytesutil.PadTo([]byte("d"), 48),
|
||||
}
|
||||
|
||||
func TestPersist(t *testing.T) {
|
||||
t.Run("no sidecars", func(t *testing.T) {
|
||||
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
|
||||
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
|
||||
err := lazilyPersistentStoreColumns.Persist(0)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
|
||||
})
|
||||
|
||||
t.Run("mixed roots", func(t *testing.T) {
|
||||
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
|
||||
|
||||
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
|
||||
{1}: {{ColumnIndex: 1}},
|
||||
{2}: {{ColumnIndex: 2}},
|
||||
}
|
||||
|
||||
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
|
||||
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
|
||||
|
||||
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
|
||||
require.ErrorIs(t, err, errMixedRoots)
|
||||
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
|
||||
})
|
||||
|
||||
t.Run("outside DA period", func(t *testing.T) {
|
||||
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
|
||||
|
||||
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
|
||||
{1}: {{ColumnIndex: 1}},
|
||||
}
|
||||
|
||||
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
|
||||
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
|
||||
|
||||
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
|
||||
})
|
||||
|
||||
t.Run("nominal", func(t *testing.T) {
|
||||
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
|
||||
|
||||
dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
|
||||
{}: {{ColumnIndex: 1}, {ColumnIndex: 5}},
|
||||
}
|
||||
|
||||
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
|
||||
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
|
||||
|
||||
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
|
||||
|
||||
key := cacheKey{slot: 0, root: [fieldparams.RootLength]byte{}}
|
||||
entry := lazilyPersistentStoreColumns.cache.entries[key]
|
||||
|
||||
// A call to Persist does NOT save the sidecars to disk.
|
||||
require.Equal(t, uint64(0), entry.diskSummary.Count())
|
||||
|
||||
require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
|
||||
require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])
|
||||
|
||||
for i, roDataColumn := range entry.scs {
|
||||
if map[int]bool{1: true, 5: true}[i] {
|
||||
continue
|
||||
}
|
||||
|
||||
require.IsNil(t, roDataColumn)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestIsDataAvailable(t *testing.T) {
|
||||
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
|
||||
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
t.Run("without commitments", func(t *testing.T) {
|
||||
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
|
||||
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
|
||||
|
||||
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
|
||||
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
|
||||
|
||||
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
|
||||
require.NoError(t, err)
|
||||
})
|
||||
|
||||
t.Run("with commitments", func(t *testing.T) {
|
||||
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
|
||||
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
|
||||
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
|
||||
root := signedRoBlock.Root()
|
||||
|
||||
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
|
||||
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
|
||||
|
||||
indices := [...]uint64{1, 17, 87, 102}
|
||||
dataColumnsParams := make([]util.DataColumnParams, 0, len(indices))
|
||||
for _, index := range indices {
|
||||
dataColumnParams := util.DataColumnParams{
|
||||
ColumnIndex: index,
|
||||
KzgCommitments: commitments,
|
||||
}
|
||||
|
||||
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
|
||||
}
|
||||
|
||||
dataColumnsParamsByBlockRoot := util.DataColumnsParamsByRoot{root: dataColumnsParams}
|
||||
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParamsByBlockRoot)
|
||||
|
||||
key := cacheKey{root: root}
|
||||
entry := lazilyPersistentStoreColumns.cache.ensure(key)
|
||||
defer lazilyPersistentStoreColumns.cache.delete(key)
|
||||
|
||||
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
|
||||
err := entry.stash(&verifiedRoDataColumn.RODataColumn)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
actual, err := dataColumnStorage.Get(root, indices[:])
|
||||
require.NoError(t, err)
|
||||
|
||||
summary := dataColumnStorage.Summary(root)
|
||||
require.Equal(t, uint64(len(indices)), summary.Count())
|
||||
			require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
		})
	}
}

func TestFullCommitmentsToCheck(t *testing.T) {
	windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
	require.NoError(t, err)

	testCases := []struct {
		name        string
		commitments [][]byte
		block       func(*testing.T) blocks.ROBlock
		slot        primitives.Slot
	}{
		{
			name: "Pre-Fulu block",
			block: func(t *testing.T) blocks.ROBlock {
				return newSignedRoBlock(t, util.NewBeaconBlockElectra())
			},
		},
		{
			name: "Commitments outside data availability window",
			block: func(t *testing.T) blocks.ROBlock {
				beaconBlockElectra := util.NewBeaconBlockElectra()

				// Block is from slot 0, "current slot" is window size +1 (so outside the window)
				beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments

				return newSignedRoBlock(t, beaconBlockElectra)
			},
			slot: windowSlots + 1,
		},
		{
			name: "Commitments within data availability window",
			block: func(t *testing.T) blocks.ROBlock {
				signedBeaconBlockFulu := util.NewBeaconBlockFulu()
				signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
				signedBeaconBlockFulu.Block.Slot = 100

				return newSignedRoBlock(t, signedBeaconBlockFulu)
			},
			commitments: commitments,
			slot:        100,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			resetFlags := flags.Get()
			gFlags := new(flags.GlobalFlags)
			gFlags.SubscribeAllDataSubnets = true
			flags.Init(gFlags)
			defer flags.Init(resetFlags)

			b := tc.block(t)
			s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, &peerdas.CustodyInfo{})

			commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
			require.NoError(t, err)

			for _, commitments := range commitmentsArray {
				require.DeepEqual(t, tc.commitments, commitments)
			}
		})
	}
}

func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, dataColumnParamsByBlockRoot util.DataColumnsParamsByRoot) ([]blocks.ROSidecar, []blocks.RODataColumn) {
	roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

	roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
	for _, roDataColumn := range roDataColumns {
		roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
	}

	return roSidecars, roDataColumns
}

func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
	sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
	require.NoError(t, err)

	rb, err := blocks.NewROBlock(sb)
	require.NoError(t, err)

	return rb
}

type mockDataColumnsVerifier struct {
	t                                                                         *testing.T
	dataColumnSidecars                                                        []blocks.RODataColumn
	validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}

var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}

func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
	require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)

	verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
	for _, dataColumnSidecar := range m.dataColumnSidecars {
		verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
		verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
	}

	return verifiedDataColumnSidecars, nil
}

func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}

func (m *mockDataColumnsVerifier) ValidFields() error {
	m.validCalled = true
	return nil
}

func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
	return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error   { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error  { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }

func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
	return nil
}

func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
	return nil
}

func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error        { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error  { return nil }

func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
	m.SidecarInclusionProvenCalled = true
	return nil
}

func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
	m.SidecarKzgProofVerifiedCalled = true
	return nil
}

func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }
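
The mock above enforces the expected verification order: VerifiedRODataColumns refuses to release columns unless the field, inclusion-proof, and KZG checks were all invoked first. A minimal sketch of that contract in a test, using only names defined above (the fixture parameters are illustrative):

func TestMockVerifierContract_Sketch(t *testing.T) {
	_, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 0}}})
	m := &mockDataColumnsVerifier{t: t, dataColumnSidecars: roDataColumns}

	// The three stateful checks must run before the columns are released.
	require.NoError(t, m.ValidFields())
	require.NoError(t, m.SidecarInclusionProven())
	require.NoError(t, m.SidecarKzgProofVerified())

	verified, err := m.VerifiedRODataColumns()
	require.NoError(t, err)
	require.Equal(t, len(roDataColumns), len(verified))
}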
@@ -4,33 +4,29 @@ import (
	"bytes"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/pkg/errors"
)

var (
	ErrDuplicateSidecar   = errors.New("duplicate sidecar stashed in AvailabilityStore")
	errIndexOutOfBounds   = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")
	errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
	errMissingSidecar     = errors.New("no sidecar in cache for block commitment")
)
var errIndexOutOfBounds = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")

// cacheKey includes the slot so that we can easily iterate through the cache and compare
// slots for eviction purposes. Whether the input is the block or the sidecar, we always have
// the root+slot when interacting with the cache, so it isn't an inconvenience to use both.
type cacheKey struct {
	slot primitives.Slot
	root [32]byte
	root [fieldparams.RootLength]byte
}

type cache struct {
	entries map[cacheKey]*cacheEntry
type blobCache struct {
	entries map[cacheKey]*blobCacheEntry
}

func newCache() *cache {
	return &cache{entries: make(map[cacheKey]*cacheEntry)}
func newBlobCache() *blobCache {
	return &blobCache{entries: make(map[cacheKey]*blobCacheEntry)}
}

// keyFromSidecar is a convenience method for constructing a cacheKey from a BlobSidecar value.
@@ -44,34 +40,34 @@ func keyFromBlock(b blocks.ROBlock) cacheKey {
}

// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *cache) ensure(key cacheKey) *cacheEntry {
func (c *blobCache) ensure(key cacheKey) *blobCacheEntry {
	e, ok := c.entries[key]
	if !ok {
		e = &cacheEntry{}
		e = &blobCacheEntry{}
		c.entries[key] = e
	}
	return e
}

// delete removes the cache entry from the cache.
func (c *cache) delete(key cacheKey) {
func (c *blobCache) delete(key cacheKey) {
	delete(c.entries, key)
}

// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
// blobCacheEntry holds a fixed-length cache of BlobSidecars.
type blobCacheEntry struct {
	scs         []*blocks.ROBlob
	diskSummary filesystem.BlobStorageSummary
}

func (e *cacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
func (e *blobCacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
	e.diskSummary = sum
}

// stash adds an item to the in-memory cache of BlobSidecars.
// Only the first BlobSidecar of a given Index will be kept in the cache.
// stash will return an error if the given blob is already in the cache, or if the Index is out of bounds.
func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(sc.Slot())
	if sc.Index >= uint64(maxBlobsPerBlock) {
		return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.Index)
@@ -92,7 +88,7 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
func (e *cacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
	count := len(kc)
	if e.diskSummary.AllAvailable(count) {
		return nil, nil
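
The rename from cache to blobCache is mechanical; the ensure/stash/filter flow is unchanged. A sketch of how a caller drives it, reusing the fixture helpers exercised in the tests below:

func TestBlobCacheFlow_Sketch(t *testing.T) {
	blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, denebSlot, 2)

	c := newBlobCache()
	entry := c.ensure(keyFromBlock(blk))
	defer c.delete(keyFromBlock(blk))

	for i := range blobs {
		require.NoError(t, entry.stash(&blobs[i]))
	}

	commits, err := commitmentsToCheck(blk, blk.Block().Slot())
	require.NoError(t, err)

	// filter hands back only the sidecars that still need a DA check;
	// anything the disk summary reports as available is skipped.
	needCheck, err := entry.filter(blk.Root(), commits, blk.Block().Slot())
	require.NoError(t, err)
	require.Equal(t, len(blobs), len(needCheck))
}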
@@ -14,7 +14,7 @@ import (
)

func TestCacheEnsureDelete(t *testing.T) {
	c := newCache()
	c := newBlobCache()
	require.Equal(t, 0, len(c.entries))
	root := bytesutil.ToBytes32([]byte("root"))
	slot := primitives.Slot(1234)
@@ -25,18 +25,18 @@ func TestCacheEnsureDelete(t *testing.T) {

	c.delete(k)
	require.Equal(t, 0, len(c.entries))
	var nilEntry *cacheEntry
	var nilEntry *blobCacheEntry
	require.Equal(t, nilEntry, c.entries[k])
}

type filterTestCaseSetupFunc func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob)
type filterTestCaseSetupFunc func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob)

func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
	return func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
	return func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
		blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
		commits, err := commitmentsToCheck(blk, blk.Block().Slot())
		require.NoError(t, err)
		entry := &cacheEntry{}
		entry := &blobCacheEntry{}
		if len(onDisk) > 0 {
			od := map[[32]byte][]int{blk.Root(): onDisk}
			sumz := filesystem.NewMockBlobStorageSummarizer(t, od)
@@ -125,12 +125,12 @@ func TestFilter(t *testing.T) {
	require.NoError(t, err)
	cases := []struct {
		name  string
		setup func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob)
		setup func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob)
		err   error
	}{
		{
			name: "commitments mismatch - extra sidecar",
			setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
			setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
				entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
				commits[5] = nil
				return entry, commits, expected
@@ -139,7 +139,7 @@ func TestFilter(t *testing.T) {
		},
		{
			name: "sidecar missing",
			setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
			setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
				entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
				entry.scs[5] = nil
				return entry, commits, expected
@@ -148,7 +148,7 @@ func TestFilter(t *testing.T) {
		},
		{
			name: "commitments mismatch - different bytes",
			setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
			setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
				entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
				entry.scs[5].KzgCommitment = []byte("nope")
				return entry, commits, expected
131	beacon-chain/das/data_column_cache.go	Normal file
@@ -0,0 +1,131 @@
package das

import (
	"bytes"
	"slices"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/pkg/errors"
)

var (
	ErrDuplicateSidecar   = errors.New("duplicate sidecar stashed in AvailabilityStore")
	errColumnIndexTooHigh = errors.New("column index too high")
	errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
	errMissingSidecar     = errors.New("no sidecar in cache for block commitment")
)

type dataColumnCache struct {
	entries map[cacheKey]*dataColumnCacheEntry
}

func newDataColumnCache() *dataColumnCache {
	return &dataColumnCache{entries: make(map[cacheKey]*dataColumnCacheEntry)}
}

// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) ensure(key cacheKey) *dataColumnCacheEntry {
	entry, ok := c.entries[key]
	if !ok {
		entry = &dataColumnCacheEntry{}
		c.entries[key] = entry
	}

	return entry
}

// delete removes the cache entry from the cache.
func (c *dataColumnCache) delete(key cacheKey) {
	delete(c.entries, key)
}

// dataColumnCacheEntry holds a fixed-length cache of DataColumnSidecars.
type dataColumnCacheEntry struct {
	scs         [fieldparams.NumberOfColumns]*blocks.RODataColumn
	diskSummary filesystem.DataColumnStorageSummary
}

func (e *dataColumnCacheEntry) setDiskSummary(sum filesystem.DataColumnStorageSummary) {
	e.diskSummary = sum
}

// stash adds an item to the in-memory cache of DataColumnSidecars.
// Only the first DataColumnSidecar of a given Index will be kept in the cache.
// stash will return an error if the given data column is already in the cache, or if the Index is out of bounds.
func (e *dataColumnCacheEntry) stash(sc *blocks.RODataColumn) error {
	if sc.Index >= fieldparams.NumberOfColumns {
		return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
	}

	if e.scs[sc.Index] != nil {
		return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitments)
	}

	e.scs[sc.Index] = sc

	return nil
}

func (e *dataColumnCacheEntry) filter(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
	nonEmptyIndices := commitmentsArray.nonEmptyIndices()
	if e.diskSummary.AllAvailable(nonEmptyIndices) {
		return nil, nil
	}

	commitmentsCount := commitmentsArray.count()
	sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)

	for i := range nonEmptyIndices {
		if e.diskSummary.HasIndex(i) {
			continue
		}

		if e.scs[i] == nil {
			return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
		}

		if !sliceBytesEqual(commitmentsArray[i], e.scs[i].KzgCommitments) {
			return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitments, commitmentsArray[i])
		}

		sidecars = append(sidecars, *e.scs[i])
	}

	return sidecars, nil
}

// safeCommitmentsArray is a fixed size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte

// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
	count := 0

	for i := range s {
		if s[i] != nil {
			count++
		}
	}

	return count
}

// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
	columns := make(map[uint64]bool)

	for i := range s {
		if s[i] != nil {
			columns[uint64(i)] = true
		}
	}

	return columns
}

func sliceBytesEqual(a, b [][]byte) bool {
	return slices.EqualFunc(a, b, bytes.Equal)
}
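
A sketch of the intended call pattern for the new cache. Here roDataColumns stands in for columns received from gossip or req/resp and is not defined in this file; the Slot accessor on RODataColumn is assumed to exist as it does on ROBlob.

func stashAndFilterColumns_Sketch(roDataColumns []blocks.RODataColumn) ([]blocks.RODataColumn, error) {
	c := newDataColumnCache()
	key := cacheKey{slot: roDataColumns[0].Slot(), root: roDataColumns[0].BlockRoot()}
	entry := c.ensure(key)
	defer c.delete(key)

	for i := range roDataColumns {
		if err := entry.stash(&roDataColumns[i]); err != nil {
			return nil, err // duplicate column or index >= NumberOfColumns
		}
	}

	// Expected commitments per column index, typically derived from the block.
	var commitments safeCommitmentsArray
	for i := range roDataColumns {
		commitments[roDataColumns[i].Index] = roDataColumns[i].KzgCommitments
	}

	// Columns already on disk are skipped; a hole or a mismatch is an error.
	return entry.filter(key.root, &commitments)
}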
144	beacon-chain/das/data_column_cache_test.go	Normal file
@@ -0,0 +1,144 @@
package das

import (
	"testing"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
)

func TestEnsureDeleteSetDiskSummary(t *testing.T) {
	c := newDataColumnCache()
	key := cacheKey{}
	entry := c.ensure(key)
	require.DeepEqual(t, dataColumnCacheEntry{}, *entry)

	diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true})
	entry.setDiskSummary(diskSummary)
	entry = c.ensure(key)
	require.DeepEqual(t, dataColumnCacheEntry{diskSummary: diskSummary}, *entry)

	c.delete(key)
	entry = c.ensure(key)
	require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
}

func TestStash(t *testing.T) {
	t.Run("Index too high", func(t *testing.T) {
		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 10_000}}}
		roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var entry dataColumnCacheEntry
		err := entry.stash(&roDataColumns[0])
		require.NotNil(t, err)
	})

	t.Run("Nominal and already existing", func(t *testing.T) {
		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 1}}}
		roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var entry dataColumnCacheEntry
		err := entry.stash(&roDataColumns[0])
		require.NoError(t, err)

		require.DeepEqual(t, roDataColumns[0], entry.scs[1])

		err = entry.stash(&roDataColumns[0])
		require.NotNil(t, err)
	})
}

func TestFilterDataColumns(t *testing.T) {
	t.Run("All available", func(t *testing.T) {
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}

		diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})

		dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}

		actual, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
		require.NoError(t, err)
		require.IsNil(t, actual)
	})

	t.Run("Some scs missing", func(t *testing.T) {
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}

		diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})

		dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}

		_, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
		require.NotNil(t, err)
	})

	t.Run("Commitments not equal", func(t *testing.T) {
		root := [fieldparams.RootLength]byte{}
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}}

		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: {{ColumnIndex: 1}}}
		roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
		scs[1] = &roDataColumns[0]

		dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}

		_, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
		require.NotNil(t, err)
	})

	t.Run("Nominal", func(t *testing.T) {
		root := [fieldparams.RootLength]byte{}
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}

		diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})

		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: {{ColumnIndex: 3, KzgCommitments: [][]byte{[]byte{3}}}}}
		expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
		scs[3] = &expected[0]

		dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}

		actual, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
		require.NoError(t, err)

		require.DeepEqual(t, expected, actual)
	})
}

func TestCount(t *testing.T) {
	s := safeCommitmentsArray{nil, [][]byte{[]byte{1}}, nil, [][]byte{[]byte{3}}}
	require.Equal(t, 2, s.count())
}

func TestNonEmptyIndices(t *testing.T) {
	s := safeCommitmentsArray{nil, [][]byte{[]byte{10}}, nil, [][]byte{[]byte{20}}}
	actual := s.nonEmptyIndices()
	require.DeepEqual(t, map[uint64]bool{1: true, 3: true}, actual)
}

func TestSliceBytesEqual(t *testing.T) {
	t.Run("Different lengths", func(t *testing.T) {
		a := [][]byte{[]byte{1, 2, 3}}
		b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
		require.Equal(t, false, sliceBytesEqual(a, b))
	})

	t.Run("Same length but different content", func(t *testing.T) {
		a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
		b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 7}}
		require.Equal(t, false, sliceBytesEqual(a, b))
	})

	t.Run("Equal slices", func(t *testing.T) {
		a := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
		b := [][]byte{[]byte{1, 2, 3}, []byte{4, 5, 6}}
		require.Equal(t, true, sliceBytesEqual(a, b))
	})
}
@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
	IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
	Persist(current primitives.Slot, sc ...blocks.ROBlob) error
	Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
}
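
Callers that used to pass ROBlob values now wrap them in the generic ROSidecar before calling Persist. A hypothetical call-site sketch; NewSidecarFromBlobSidecar is assumed here to mirror the NewSidecarFromDataColumnSidecar constructor seen earlier and is not shown in this diff:

func persistBlobs_Sketch(store das.AvailabilityStore, current primitives.Slot, roBlobs []blocks.ROBlob) error {
	sidecars := make([]blocks.ROSidecar, 0, len(roBlobs))
	for _, roBlob := range roBlobs {
		// Assumed blob-side analogue of NewSidecarFromDataColumnSidecar.
		sidecars = append(sidecars, blocks.NewSidecarFromBlobSidecar(roBlob))
	}
	return store.Persist(current, sidecars...)
}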
@@ -5,6 +5,7 @@ import (

	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	errors "github.com/pkg/errors"
)

// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
@@ -24,9 +25,13 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}

// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
	blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
	if err != nil {
		return errors.Wrap(err, "blob sidecars from sidecars")
	}
	if m.PersistBlobsCallback != nil {
		return m.PersistBlobsCallback(current, sc...)
		return m.PersistBlobsCallback(current, blobSidecars...)
	}
	return nil
}

@@ -23,6 +23,7 @@ go_library(
	"//beacon-chain/cache:go_default_library",
	"//beacon-chain/cache/depositsnapshot:go_default_library",
	"//beacon-chain/core/light-client:go_default_library",
	"//beacon-chain/core/peerdas:go_default_library",
	"//beacon-chain/db:go_default_library",
	"//beacon-chain/db/filesystem:go_default_library",
	"//beacon-chain/db/kv:go_default_library",

@@ -25,6 +25,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
	lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
@@ -120,6 +121,7 @@ type BeaconNode struct {
	initialSyncComplete chan struct{}
	BlobStorage         *filesystem.BlobStorage
	BlobStorageOptions  []filesystem.BlobStorageOption
	custodyInfo         *peerdas.CustodyInfo
	verifyInitWaiter    *verification.InitializerWaiter
	syncChecker         *initialsync.SyncChecker
	slasherEnabled      bool
@@ -161,6 +163,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
		serviceFlagOpts:     &serviceFlagOpts{},
		initialSyncComplete: make(chan struct{}),
		syncChecker:         &initialsync.SyncChecker{},
		custodyInfo:         &peerdas.CustodyInfo{},
		slasherEnabled:      cliCtx.Bool(flags.SlasherFlag.Name),
		lcStore:             &lightclient.Store{},
	}
@@ -280,6 +283,7 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
	if err := beacon.startDB(cliCtx, depositAddress); err != nil {
		return nil, errors.Wrap(err, "could not start DB")
	}

	beacon.BlobStorage.WarmCache()

	log.Debugln("Starting Slashing DB")
@@ -697,6 +701,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
		StateNotifier: b,
		DB:            b.db,
		ClockWaiter:   b.clockWaiter,
		CustodyInfo:   b.custodyInfo,
	})
	if err != nil {
		return err
@@ -778,6 +783,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
		blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
		blockchain.WithPayloadIDCache(b.payloadIDCache),
		blockchain.WithSyncChecker(b.syncChecker),
		blockchain.WithCustodyInfo(b.custodyInfo),
		blockchain.WithSlasherEnabled(b.slasherEnabled),
		blockchain.WithLightClientStore(b.lcStore),
	)
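
WithCustodyInfo follows the functional-option pattern the blockchain service already uses for the other With... calls above. A sketch of what such an option typically looks like in this codebase; the config field name is an assumption for illustration:

// Option mutates the blockchain Service during construction.
type Option func(s *Service) error

// WithCustodyInfo threads the node-wide PeerDAS custody info into the
// service config (field name assumed, not taken from this diff).
func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
	return func(s *Service) error {
		s.cfg.custodyInfo = custodyInfo
		return nil
	}
}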
@@ -221,7 +221,6 @@ func generateDataColumnIdentifiers(n int) []*eth.DataColumnsByRootIdentifier {

func TestDataColumnSidecarsByRootReq_Marshal(t *testing.T) {
	/*
		SSZ encoding of DataColumnsByRootIdentifiers is tested in spectests.
		However, encoding a list of DataColumnsByRootIdentifier is not.
		We are testing it here.

@@ -201,6 +201,11 @@ func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
		return [32]byte{}, errors.Wrap(err, "parse public key")
	}

	newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
	newPubkey := &ecdsa.PublicKey{
		Curve: gCrypto.S256(),
		X:     pubKeyObjSecp256k1.X(),
		Y:     pubKeyObjSecp256k1.Y(),
	}

	return enode.PubkeyToIDV4(newPubkey), nil
}

@@ -34,6 +34,7 @@ go_library(
	"//beacon-chain/rpc/eth/beacon:go_default_library",
	"//beacon-chain/rpc/eth/blob:go_default_library",
	"//beacon-chain/rpc/eth/builder:go_default_library",
	"//beacon-chain/rpc/eth/column:go_default_library",
	"//beacon-chain/rpc/eth/config:go_default_library",
	"//beacon-chain/rpc/eth/debug:go_default_library",
	"//beacon-chain/rpc/eth/events:go_default_library",

@@ -9,6 +9,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/beacon"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/blob"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/builder"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/column"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/config"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/debug"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/events"
@@ -100,6 +101,7 @@ func (s *Service) endpoints(
	endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
	endpoints = append(endpoints, s.prysmNodeEndpoints()...)
	endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
	endpoints = append(endpoints, s.dataColumnSideCarEndPoints(blocker)...)

	if features.Get().EnableLightClient {
		endpoints = append(endpoints, s.lightClientEndpoints(blocker, stater)...)
@@ -201,6 +203,28 @@ func (s *Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
	}
}

func (s *Service) dataColumnSideCarEndPoints(blocker lookup.Blocker) []endpoint {
	server := &column.Server{
		Blocker:               blocker,
		OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
		FinalizationFetcher:   s.cfg.FinalizationFetcher,
		TimeFetcher:           s.cfg.GenesisTimeFetcher,
	}

	const namespace = "debug"
	return []endpoint{
		{
			template: "/eth/v1/debug/beacon/data_column_sidecars/{block_id}",
			name:     namespace + ".DataColumnSidecars",
			middleware: []middleware.Middleware{
				middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
			},
			handler: server.DataColumnSidecars,
			methods: []string{http.MethodGet},
		},
	}
}
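
The new route accepts the same indices query parameter as the blob endpoint and negotiates JSON or SSZ via the Accept header. An illustrative client call; the base URL and block ID are placeholders:

func fetchColumnSidecars_Sketch(baseURL string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet,
		baseURL+"/eth/v1/debug/beacon/data_column_sidecars/head?indices=0&indices=1", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", api.JsonMediaType) // api.OctetStreamMediaType selects SSZ
	return http.DefaultClient.Do(req)
}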
func (s *Service) validatorEndpoints(
	validatorServer *validatorv1alpha1.Server,
	stater lookup.Stater,
@@ -245,7 +269,7 @@ func (s *Service) validatorEndpoints(
			template: "/eth/v2/validator/aggregate_attestation",
			name:     namespace + ".GetAggregateAttestationV2",
			middleware: []middleware.Middleware{
				middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
				middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
			},
			handler: server.GetAggregateAttestationV2,
			methods: []string{http.MethodGet},
@@ -314,7 +338,7 @@ func (s *Service) validatorEndpoints(
			template: "/eth/v1/validator/attestation_data",
			name:     namespace + ".GetAttestationData",
			middleware: []middleware.Middleware{
				middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
				middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
			},
			handler: server.GetAttestationData,
			methods: []string{http.MethodGet},

@@ -78,9 +78,10 @@ func Test_endpoints(t *testing.T) {
	}

	debugRoutes := map[string][]string{
		"/eth/v2/debug/beacon/states/{state_id}": {http.MethodGet},
		"/eth/v2/debug/beacon/heads":             {http.MethodGet},
		"/eth/v1/debug/fork_choice":              {http.MethodGet},
		"/eth/v2/debug/beacon/states/{state_id}":               {http.MethodGet},
		"/eth/v2/debug/beacon/heads":                           {http.MethodGet},
		"/eth/v1/debug/fork_choice":                            {http.MethodGet},
		"/eth/v1/debug/beacon/data_column_sidecars/{block_id}": {http.MethodGet},
	}

	eventsRoutes := map[string][]string{

@@ -26,7 +26,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
	ctx, span := trace.StartSpan(r.Context(), "beacon.Blobs")
	defer span.End()

	indices, err := parseIndices(r.URL, s.TimeFetcher.CurrentSlot())
	indices, err := ParseIndices(r.URL, s.TimeFetcher.CurrentSlot())
	if err != nil {
		httputil.HandleError(w, err.Error(), http.StatusBadRequest)
		return
@@ -96,19 +96,20 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
	httputil.WriteJson(w, resp)
}

// parseIndices filters out invalid and duplicate blob indices
func parseIndices(url *url.URL, s primitives.Slot) ([]uint64, error) {
// ParseIndices filters out invalid and duplicate blob indices
func ParseIndices(url *url.URL, s primitives.Slot) ([]int, error) {
	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s)
	rawIndices := url.Query()["indices"]
	indices := make([]uint64, 0, params.BeaconConfig().MaxBlobsPerBlock(s))
	indices := make([]int, 0, maxBlobsPerBlock)
	invalidIndices := make([]string, 0)
loop:
	for _, raw := range rawIndices {
		ix, err := strconv.ParseUint(raw, 10, 64)
		ix, err := strconv.Atoi(raw)
		if err != nil {
			invalidIndices = append(invalidIndices, raw)
			continue
		}
		if ix >= uint64(params.BeaconConfig().MaxBlobsPerBlock(s)) {
		if !(0 <= ix && ix < maxBlobsPerBlock) {
			invalidIndices = append(invalidIndices, raw)
			continue
		}
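
ParseIndices is exported so the new column handler can reuse the blob package's index parsing. It silently drops duplicates and rejects anything outside [0, MaxBlobsPerBlock). A sketch, assuming a Deneb-era slot where the max is 6:

func parseBlobIndices_Sketch(slot primitives.Slot) ([]int, error) {
	u, err := url.Parse("http://node/eth/v1/beacon/blob_sidecars/head?indices=1&indices=2&indices=1")
	if err != nil {
		return nil, err
	}
	// Yields []int{1, 2}: the duplicate "1" is dropped and out-of-range
	// values would be collected as invalid.
	return blob.ParseIndices(u, slot)
}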
@@ -520,13 +520,13 @@ func Test_parseIndices(t *testing.T) {
	tests := []struct {
		name    string
		query   string
		want    []uint64
		want    []int
		wantErr string
	}{
		{
			name:  "happy path with duplicate indices within bound and other query parameters ignored",
			query: "indices=1&indices=2&indices=1&indices=3&bar=bar",
			want:  []uint64{1, 2, 3},
			want:  []int{1, 2, 3},
		},
		{
			name: "out of bounds indices throws error",
@@ -546,7 +546,7 @@ func Test_parseIndices(t *testing.T) {
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := parseIndices(&url.URL{RawQuery: tt.query}, 0)
			got, err := ParseIndices(&url.URL{RawQuery: tt.query}, 0)
			if err != nil && tt.wantErr != "" {
				require.StringContains(t, tt.wantErr, err.Error())
				return
22	beacon-chain/rpc/eth/column/BUILD.bazel	Normal file
@@ -0,0 +1,22 @@
load("@prysm//tools/go:def.bzl", "go_library")
|
||||
|
||||
go_library(
|
||||
name = "go_default_library",
|
||||
srcs = ["handler.go"],
|
||||
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/column",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//api:go_default_library",
|
||||
"//api/server/structs:go_default_library",
|
||||
"//beacon-chain/blockchain:go_default_library",
|
||||
"//beacon-chain/rpc/core:go_default_library",
|
||||
"//beacon-chain/rpc/eth/blob:go_default_library",
|
||||
"//beacon-chain/rpc/lookup:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//monitoring/tracing/trace:go_default_library",
|
||||
"//network/httputil:go_default_library",
|
||||
"//runtime/version:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
],
|
||||
)
|
||||
143	beacon-chain/rpc/eth/column/handler.go	Normal file
@@ -0,0 +1,143 @@
package column

import (
	"net/http"
	"strconv"
	"strings"

	"github.com/OffchainLabs/prysm/v6/api"
	"github.com/OffchainLabs/prysm/v6/api/server/structs"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/blob"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
	"github.com/OffchainLabs/prysm/v6/network/httputil"
	"github.com/OffchainLabs/prysm/v6/runtime/version"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/pkg/errors"
)

// Server is the HTTP server for handling requests related to data column sidecars.
type Server struct {
	Blocker               lookup.Blocker
	OptimisticModeFetcher blockchain.OptimisticModeFetcher
	FinalizationFetcher   blockchain.FinalizationFetcher
	TimeFetcher           blockchain.TimeFetcher
}

// DataColumnSidecars handles requests for data column sidecars associated with a specific block ID.
func (s *Server) DataColumnSidecars(w http.ResponseWriter, r *http.Request) {
	ctx, span := trace.StartSpan(r.Context(), "beacon.DataColumnSidecars")
	defer span.End()

	indices, err := blob.ParseIndices(r.URL, s.TimeFetcher.CurrentSlot())
	if err != nil {
		httputil.HandleError(w, err.Error(), http.StatusBadRequest)
		return
	}

	segments := strings.Split(r.URL.Path, "/")
	blockID := segments[len(segments)-1]

	sidecars, rpcErr := s.Blocker.DataColumnSidecars(ctx, blockID, indices)
	if rpcErr != nil {
		code := core.ErrorReasonToHTTP(rpcErr.Reason)
		msg := rpcErr.Err.Error()
		switch code {
		case http.StatusBadRequest:
			httputil.HandleError(w, "Invalid block ID: "+msg, code)
		case http.StatusNotFound:
			httputil.HandleError(w, "Block not found: "+msg, code)
		case http.StatusInternalServerError:
			httputil.HandleError(w, "Internal server error: "+msg, code)
		default:
			httputil.HandleError(w, msg, code)
		}
		return
	}

	blk, err := s.Blocker.Block(ctx, []byte(blockID))
	if err != nil {
		httputil.HandleError(w, "Could not fetch block: "+err.Error(), http.StatusInternalServerError)
		return
	}
	if blk == nil {
		httputil.HandleError(w, "Block not found", http.StatusNotFound)
		return
	}

	versionStr := version.String(blk.Version())
	w.Header().Set(api.VersionHeader, versionStr)

	if httputil.RespondWithSsz(r) {
		sszResp, err := buildColumnsSszResponse(sidecars)
		if err != nil {
			httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
			return
		}
		httputil.WriteSsz(w, sszResp)
		return
	}

	blkRoot, err := blk.Block().HashTreeRoot()
	if err != nil {
		httputil.HandleError(w, "Could not hash block: "+err.Error(), http.StatusInternalServerError)
		return
	}

	isOptimistic, err := s.OptimisticModeFetcher.IsOptimisticForRoot(ctx, blkRoot)
	if err != nil {
		httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
		return
	}

	resp := &structs.DataColumnSidecarResponse{
		Version:             versionStr,
		Data:                buildColumnJSONResponse(sidecars),
		ExecutionOptimistic: isOptimistic,
		Finalized:           s.FinalizationFetcher.IsFinalized(ctx, blkRoot),
	}
	httputil.WriteJson(w, resp)
}

func buildColumnsSszResponse(columns []blocks.VerifiedRODataColumn) ([]byte, error) {
	buf := make([]byte, 0)
	for _, col := range columns {
		b, err := col.MarshalSSZ()
		if err != nil {
			return nil, errors.Wrap(err, "marshal sidecar ssz")
		}
		buf = append(buf, b...)
	}
	return buf, nil
}

func buildColumnJSONResponse(columns []blocks.VerifiedRODataColumn) []*structs.DataColumnSidecar {
	out := make([]*structs.DataColumnSidecar, len(columns))
	for i, col := range columns {
		column := encodeByteSlices(col.Column)
		kzgCommitments := encodeByteSlices(col.KzgCommitments)
		kzgProofs := encodeByteSlices(col.KzgProofs)
		inclusionProofs := encodeByteSlices(col.KzgCommitmentsInclusionProof)

		out[i] = &structs.DataColumnSidecar{
			Index:                        strconv.FormatUint(col.Index, 10),
			Column:                       column,
			KZGCommitments:               kzgCommitments,
			KZGProofs:                    kzgProofs,
			SignedBlockHeader:            structs.SignedBeaconBlockHeaderFromConsensus(col.SignedBlockHeader),
			KZGCommitmentsInclusionProof: inclusionProofs,
		}
	}
	return out
}

func encodeByteSlices(items [][]byte) []string {
	out := make([]string, len(items))
	for i := range items {
		out[i] = hexutil.Encode(items[i])
	}
	return out
}
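
Note that the SSZ branch writes the sidecars back to back with no length prefix, so octet-stream consumers need out-of-band sizing, while the JSON branch wraps them in the versioned envelope. A sketch of decoding the JSON shape above; the URL is a placeholder:

func decodeColumnsJSON_Sketch(baseURL string) (*structs.DataColumnSidecarResponse, error) {
	resp, err := http.Get(baseURL + "/eth/v1/debug/beacon/data_column_sidecars/head")
	if err != nil {
		return nil, err
	}
	defer func() { _ = resp.Body.Close() }()

	out := &structs.DataColumnSidecarResponse{}
	if err := json.NewDecoder(resp.Body).Decode(out); err != nil {
		return nil, err
	}
	// out.Version mirrors the version response header; each entry in
	// out.Data carries hex-encoded cells, commitments, and proofs.
	return out, nil
}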
@@ -112,7 +112,12 @@ func (s *Server) GetPeers(w http.ResponseWriter, r *http.Request) {
			}
			allPeers = append(allPeers, p)
		}
		resp := &structs.GetPeersResponse{Data: allPeers}
		resp := &structs.GetPeersResponse{
			Data: allPeers,
			Meta: structs.Meta{
				Count: len(allPeers),
			},
		}
		httputil.WriteJson(w, resp)
		return
	}
@@ -177,7 +182,12 @@ func (s *Server) GetPeers(w http.ResponseWriter, r *http.Request) {
		filteredPeers = append(filteredPeers, p)
	}

	resp := &structs.GetPeersResponse{Data: filteredPeers}
	resp := &structs.GetPeersResponse{
		Data: filteredPeers,
		Meta: structs.Meta{
			Count: len(filteredPeers),
		},
	}
	httputil.WriteJson(w, resp)
}

@@ -145,6 +145,7 @@ func TestGetPeers(t *testing.T) {
		resp := &structs.GetPeersResponse{}
		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
		require.Equal(t, 1, len(resp.Data))
		assert.Equal(t, 1, resp.Meta.Count)
		returnedPeer := resp.Data[0]
		assert.Equal(t, expectedId.String(), returnedPeer.PeerId)
		expectedEnr, err := peerStatus.ENR(expectedId)
@@ -229,6 +230,7 @@ func TestGetPeers(t *testing.T) {
			resp := &structs.GetPeersResponse{}
			require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
			assert.Equal(t, len(tt.wantIds), len(resp.Data), "Wrong number of peers returned")
			assert.Equal(t, len(tt.wantIds), resp.Meta.Count, "meta.count does not match number of returned peers")
			for _, id := range tt.wantIds {
				expectedId := id.String()
				found := false

@@ -98,6 +98,39 @@ func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Reques
	if agg == nil {
		return
	}

	if httputil.RespondWithSsz(r) {
		var data []byte
		var err error
		if v >= version.Electra {
			typedAgg, ok := agg.(*ethpbalpha.AttestationElectra)
			if !ok {
				httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.AttestationElectra{}), http.StatusInternalServerError)
				return
			}
			data, err = typedAgg.MarshalSSZ()
			if err != nil {
				httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
				return
			}
		} else {
			typedAgg, ok := agg.(*ethpbalpha.Attestation)
			if !ok {
				httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
				return
			}
			data, err = typedAgg.MarshalSSZ()
			if err != nil {
				httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
				return
			}
		}

		w.Header().Set(api.VersionHeader, version.String(v))
		httputil.WriteSsz(w, data)
		return
	}

	resp := &structs.AggregateAttestationResponse{
		Version: version.String(v),
	}
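
Since the SSZ body is a bare attestation container, a client has to pick the concrete type from the version header the handler sets just before writing. An illustrative decode; checking only Electra is a simplification, later forks would decode the same way:

func decodeAggregateSSZ_Sketch(resp *http.Response) error {
	v := resp.Header.Get(api.VersionHeader)
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if v == version.String(version.Electra) {
		att := &ethpbalpha.AttestationElectra{}
		return att.UnmarshalSSZ(body)
	}
	att := &ethpbalpha.Attestation{}
	return att.UnmarshalSSZ(body)
}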
@@ -610,6 +643,16 @@ func (s *Server) GetAttestationData(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
if httputil.RespondWithSsz(r) {
|
||||
data, err := attestationData.MarshalSSZ()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not marshal attestation data: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
httputil.WriteSsz(w, data)
|
||||
return
|
||||
}
|
||||
|
||||
response := &structs.GetAttestationDataResponse{
|
||||
Data: &structs.AttestationData{
|
||||
Slot: strconv.FormatUint(uint64(attestationData.Slot), 10),
|
||||
|
||||
@@ -307,6 +307,23 @@ func TestGetAggregateAttestation(t *testing.T) {
|
||||
|
||||
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
|
||||
})
|
||||
t.Run("1 matching aggregated attestation - SSZ", func(t *testing.T) {
|
||||
reqRoot, err := aggSlot2.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
attDataRoot := hexutil.Encode(reqRoot[:])
|
||||
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
|
||||
s.GetAggregateAttestationV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
|
||||
|
||||
var resp ethpbalpha.Attestation
|
||||
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
|
||||
compareResult(t, *structs.AttFromConsensus(&resp), "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
|
||||
})
|
||||
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
|
||||
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
@@ -327,6 +344,23 @@ func TestGetAggregateAttestation(t *testing.T) {
|
||||
|
||||
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
|
||||
})
|
||||
t.Run("multiple matching aggregated attestations - return the one with most bits - SSZ", func(t *testing.T) {
|
||||
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
attDataRoot := hexutil.Encode(reqRoot[:])
|
||||
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
|
||||
s.GetAggregateAttestationV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
|
||||
|
||||
var resp ethpbalpha.Attestation
|
||||
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
|
||||
compareResult(t, *structs.AttFromConsensus(&resp), "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
|
||||
})
|
||||
})
|
||||
t.Run("post-electra", func(t *testing.T) {
|
||||
aggSlot1_Root1_1 := createAttestationElectra(1, bitfield.Bitlist{0b11100}, root1)
|
||||
@@ -421,6 +455,23 @@ func TestGetAggregateAttestation(t *testing.T) {
|
||||
|
||||
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
|
||||
})
|
||||
t.Run("1 matching aggregated attestation - SSZ", func(t *testing.T) {
|
||||
reqRoot, err := aggSlot2.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
attDataRoot := hexutil.Encode(reqRoot[:])
|
||||
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
|
||||
s.GetAggregateAttestationV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
|
||||
|
||||
var resp ethpbalpha.AttestationElectra
|
||||
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
|
||||
compareResult(t, *structs.AttElectraFromConsensus(&resp), "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
|
||||
})
|
||||
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
|
||||
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
@@ -441,6 +492,23 @@ func TestGetAggregateAttestation(t *testing.T) {
|
||||
|
||||
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
|
||||
})
|
||||
t.Run("multiple matching aggregated attestations - return the one with most bits - SSZ", func(t *testing.T) {
|
||||
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
attDataRoot := hexutil.Encode(reqRoot[:])
|
||||
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
|
||||
s.GetAggregateAttestationV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
|
||||
|
||||
var resp ethpbalpha.AttestationElectra
|
||||
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
|
||||
compareResult(t, *structs.AttElectraFromConsensus(&resp), "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
|
||||
})
|
||||
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
|
||||
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
@@ -460,6 +528,23 @@ func TestGetAggregateAttestation(t *testing.T) {
|
||||
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
|
||||
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
|
||||
})
|
||||
t.Run("1 matching unaggregated attestation - SSZ", func(t *testing.T) {
|
||||
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
attDataRoot := hexutil.Encode(reqRoot[:])
|
||||
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4" + "&committee_index=0"
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
|
||||
s.GetAggregateAttestationV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
|
||||
|
||||
var resp ethpbalpha.AttestationElectra
|
||||
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
|
||||
compareResult(t, *structs.AttElectraFromConsensus(&resp), "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
|
||||
})
|
||||
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
|
||||
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
@@ -484,12 +569,33 @@ func TestGetAggregateAttestation(t *testing.T) {
|
||||
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
|
||||
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
|
||||
})
|
||||
t.Run("multiple matching unaggregated attestations - their aggregate is returned - SSZ", func(t *testing.T) {
|
||||
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
|
||||
require.NoError(t, err, "Failed to generate attestation data hash tree root")
|
||||
attDataRoot := hexutil.Encode(reqRoot[:])
|
||||
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3" + "&committee_index=0"
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
|
||||
s.GetAggregateAttestationV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
|
||||
|
||||
var resp ethpbalpha.AttestationElectra
|
||||
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
|
||||
sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
|
||||
require.NoError(t, err)
|
||||
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
|
||||
require.NoError(t, err)
|
||||
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
|
||||
compareResult(t, *structs.AttElectraFromConsensus(&resp), "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
|
||||
})
|
||||
t.Run("pre-electra attestation is ignored", func(t *testing.T) {
|
||||
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
func createAttestationData(slot primitives.Slot, committeeIndex primitives.CommitteeIndex, root []byte) *ethpbalpha.AttestationData {
|
||||
@@ -1293,6 +1399,81 @@ func TestGetAttestationData(t *testing.T) {
|
||||
assert.DeepEqual(t, expectedResponse, resp)
|
||||
})
|
||||
|
||||
t.Run("ok SSZ", func(t *testing.T) {
|
||||
block := util.NewBeaconBlock()
|
||||
block.Block.Slot = 3*params.BeaconConfig().SlotsPerEpoch + 1
|
||||
targetBlock := util.NewBeaconBlock()
|
||||
targetBlock.Block.Slot = 1 * params.BeaconConfig().SlotsPerEpoch
|
||||
justifiedBlock := util.NewBeaconBlock()
|
||||
justifiedBlock.Block.Slot = 2 * params.BeaconConfig().SlotsPerEpoch
|
||||
blockRoot, err := block.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not hash beacon block")
|
||||
justifiedRoot, err := justifiedBlock.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not get signing root for justified block")
|
||||
slot := 3*params.BeaconConfig().SlotsPerEpoch + 1
|
||||
beaconState, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, beaconState.SetSlot(slot))
|
||||
justifiedCheckpoint := ðpbalpha.Checkpoint{
|
||||
Epoch: 2,
|
||||
Root: justifiedRoot[:],
|
||||
}
|
||||
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpoint))
|
||||
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
|
||||
chain := &mockChain.ChainService{
|
||||
Optimistic: false,
|
||||
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
|
||||
Root: blockRoot[:],
|
||||
CurrentJustifiedCheckPoint: justifiedCheckpoint,
|
||||
TargetRoot: blockRoot,
|
||||
State: beaconState,
|
||||
}
|
||||
|
||||
s := &Server{
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
HeadFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
OptimisticModeFetcher: chain,
|
||||
CoreService: &core.Service{
|
||||
HeadFetcher: chain,
|
||||
GenesisTimeFetcher: chain,
|
||||
FinalizedFetcher: chain,
|
||||
AttestationCache: cache.NewAttestationDataCache(),
|
||||
OptimisticModeFetcher: chain,
|
||||
},
|
||||
}
|
||||
|
||||
expectedAttData := ðpbalpha.AttestationData{
|
||||
Slot: slot,
|
||||
BeaconBlockRoot: blockRoot[:],
|
||||
CommitteeIndex: 0,
|
||||
Source: ðpbalpha.Checkpoint{
|
||||
Epoch: 2,
|
||||
Root: justifiedRoot[:],
|
||||
},
|
||||
Target: ðpbalpha.Checkpoint{
|
||||
Epoch: 3,
|
||||
Root: blockRoot[:],
|
||||
},
|
||||
}
|
||||
|
||||
expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
|
||||
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")
|
||||
|
||||
url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttestationData(writer, request)
|
||||
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
|
||||
var att ethpbalpha.AttestationData
|
||||
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
})
|
||||
|
||||
t.Run("syncing", func(t *testing.T) {
|
||||
beaconState, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
@@ -1536,6 +1717,82 @@ func TestGetAttestationData(t *testing.T) {
|
||||
		assert.DeepEqual(t, expectedResponse, resp)
	})

	t.Run("succeeds in first epoch SSZ", func(t *testing.T) {
		slot := primitives.Slot(5)
		block := util.NewBeaconBlock()
		block.Block.Slot = slot
		targetBlock := util.NewBeaconBlock()
		targetBlock.Block.Slot = 0
		justifiedBlock := util.NewBeaconBlock()
		justifiedBlock.Block.Slot = 0
		blockRoot, err := block.Block.HashTreeRoot()
		require.NoError(t, err, "Could not hash beacon block")
		justifiedRoot, err := justifiedBlock.Block.HashTreeRoot()
		require.NoError(t, err, "Could not get signing root for justified block")

		beaconState, err := util.NewBeaconState()
		require.NoError(t, err)
		require.NoError(t, beaconState.SetSlot(slot))
		justifiedCheckpt := &ethpbalpha.Checkpoint{
			Epoch: 0,
			Root:  justifiedRoot[:],
		}
		require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpt))
		require.NoError(t, err)
		offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
		chain := &mockChain.ChainService{
			Root:                       blockRoot[:],
			Genesis:                    time.Now().Add(time.Duration(-1*offset) * time.Second),
			CurrentJustifiedCheckPoint: justifiedCheckpt,
			TargetRoot:                 blockRoot,
			State:                      beaconState,
		}

		s := &Server{
			SyncChecker:           &mockSync.Sync{IsSyncing: false},
			HeadFetcher:           chain,
			TimeFetcher:           chain,
			OptimisticModeFetcher: chain,
			CoreService: &core.Service{
				AttestationCache:      cache.NewAttestationDataCache(),
				OptimisticModeFetcher: chain,
				HeadFetcher:           chain,
				GenesisTimeFetcher:    chain,
				FinalizedFetcher:      chain,
			},
		}

		expectedAttData := &ethpbalpha.AttestationData{
			Slot:            slot,
			BeaconBlockRoot: blockRoot[:],
			CommitteeIndex:  0,
			Source: &ethpbalpha.Checkpoint{
				Epoch: 0,
				Root:  justifiedRoot[:],
			},
			Target: &ethpbalpha.Checkpoint{
				Epoch: 0,
				Root:  blockRoot[:],
			},
		}

		expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
		require.NoError(t, err, "Could not marshal expected attestation data to SSZ")

		url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
		request := httptest.NewRequest(http.MethodGet, url, nil)
		request.Header.Add("Accept", "application/octet-stream")
		writer := httptest.NewRecorder()
		writer.Body = &bytes.Buffer{}

		s.GetAttestationData(writer, request)

		assert.Equal(t, http.StatusOK, writer.Code)
		assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
		var att ethpbalpha.AttestationData
		require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
	})
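The SSZ subtests above exercise content negotiation: the same handler serves JSON by default and raw SSZ when the client sends `Accept: application/octet-stream`. A hedged sketch of an external client doing the same round trip against a running beacon node; the port and the standard beacon-API route are assumptions (the test itself uses a placeholder URL), and the snippet assumes the net/http, io, log, and ethpbalpha imports are in scope.

	// Hypothetical client; assumes a node serving the standard
	// /eth/v1/validator/attestation_data route on localhost:3500.
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:3500/eth/v1/validator/attestation_data?slot=5&committee_index=0", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Accept", "application/octet-stream") // request SSZ instead of JSON
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	var att ethpbalpha.AttestationData
	if err := att.UnmarshalSSZ(body); err != nil { // same round trip the test asserts
		log.Fatal(err)
	}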
	t.Run("handles far away justified epoch", func(t *testing.T) {
		// Scenario:
		//
@@ -1629,6 +1886,101 @@ func TestGetAttestationData(t *testing.T) {
		require.NotNil(t, resp)
		assert.DeepEqual(t, expectedResponse, resp)
	})
t.Run("handles far away justified epoch SSZ", func(t *testing.T) {
|
||||
// Scenario:
|
||||
//
|
||||
// State slot = 10000
|
||||
// Last justified slot = epoch start of 1500
|
||||
// HistoricalRootsLimit = 8192
|
||||
//
|
||||
// More background: https://github.com/prysmaticlabs/prysm/issues/2153
|
||||
// This test breaks if it doesn't use mainnet config
|
||||
|
||||
// Ensure HistoricalRootsLimit matches scenario
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.MainnetConfig()
|
||||
cfg.HistoricalRootsLimit = 8192
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
block := util.NewBeaconBlock()
|
||||
block.Block.Slot = 10000
|
||||
epochBoundaryBlock := util.NewBeaconBlock()
|
||||
var err error
|
||||
epochBoundaryBlock.Block.Slot, err = slots.EpochStart(slots.ToEpoch(10000))
|
||||
require.NoError(t, err)
|
||||
justifiedBlock := util.NewBeaconBlock()
|
||||
justifiedBlock.Block.Slot, err = slots.EpochStart(slots.ToEpoch(1500))
|
||||
require.NoError(t, err)
|
||||
justifiedBlock.Block.Slot -= 2 // Imagine two skip block
|
||||
blockRoot, err := block.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not hash beacon block")
|
||||
justifiedBlockRoot, err := justifiedBlock.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not hash justified block")
|
||||
|
||||
slot := primitives.Slot(10000)
|
||||
beaconState, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, beaconState.SetSlot(slot))
|
||||
justifiedCheckpt := ðpbalpha.Checkpoint{
|
||||
Epoch: slots.ToEpoch(1500),
|
||||
Root: justifiedBlockRoot[:],
|
||||
}
|
||||
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpt))
|
||||
|
||||
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
|
||||
chain := &mockChain.ChainService{
|
||||
Root: blockRoot[:],
|
||||
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
|
||||
CurrentJustifiedCheckPoint: justifiedCheckpt,
|
||||
TargetRoot: blockRoot,
|
||||
State: beaconState,
|
||||
}
|
||||
|
||||
s := &Server{
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
HeadFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
OptimisticModeFetcher: chain,
|
||||
CoreService: &core.Service{
|
||||
AttestationCache: cache.NewAttestationDataCache(),
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
GenesisTimeFetcher: chain,
|
||||
FinalizedFetcher: chain,
|
||||
},
|
||||
}
|
||||
|
||||
expectedAttData := ðpbalpha.AttestationData{
|
||||
Slot: slot,
|
||||
BeaconBlockRoot: blockRoot[:],
|
||||
CommitteeIndex: 0,
|
||||
Source: ðpbalpha.Checkpoint{
|
||||
Epoch: slots.ToEpoch(1500),
|
||||
Root: justifiedBlockRoot[:],
|
||||
},
|
||||
Target: ðpbalpha.Checkpoint{
|
||||
Epoch: 312,
|
||||
Root: blockRoot[:],
|
||||
},
|
||||
}
|
||||
|
||||
expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
|
||||
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")
|
||||
|
||||
url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
|
||||
request := httptest.NewRequest(http.MethodGet, url, nil)
|
||||
request.Header.Add("Accept", "application/octet-stream")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttestationData(writer, request)
|
||||
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
|
||||
var att ethpbalpha.AttestationData
|
||||
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
|
||||
})
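A quick check of the numbers in this scenario: with mainnet's 32 slots per epoch, the target checkpoint for slot 10000 lands in epoch floor(10000/32) = 312, as asserted above; the justified checkpoint uses slots.ToEpoch(1500) = 46; and the state slot of 10000 exceeds the HistoricalRootsLimit of 8192 configured for the test, which is the regression surface described in the linked issue.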
}

func TestProduceSyncCommitteeContribution(t *testing.T) {

@@ -40,7 +40,8 @@ func (e BlockIdParseError) Error() string {
// Blocker is responsible for retrieving blocks.
type Blocker interface {
	Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
-	Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError)
+	Blobs(ctx context.Context, id string, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError)
+	DataColumnSidecars(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError)
}

// BeaconDbBlocker is an implementation of Blocker. It retrieves blocks from the beacon chain database.
@@ -49,6 +50,7 @@ type BeaconDbBlocker struct {
	ChainInfoFetcher   blockchain.ChainInfoFetcher
	GenesisTimeFetcher blockchain.TimeFetcher
	BlobStorage        *filesystem.BlobStorage
+	DataColumnStorage  *filesystem.DataColumnStorage
}

// Block returns the beacon block for a given identifier. The identifier can be one of:
@@ -144,7 +146,7 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
// - block exists, has commitments, inside retention period (greater of protocol- or user-specified): serve them w/ 200 unless we hit an error reading them.
//   we are technically not supposed to import a block to forkchoice unless we have the blobs, so the nuance here is if we can't find the file and we are inside the protocol-defined retention period, then it's actually a 500.
// - block exists, has commitments, outside retention period (greater of protocol- or user-specified): i.e. just like block exists, no commitment
-func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
	var rootSlice []byte
	switch id {
	case "genesis":
@@ -239,18 +241,18 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
	if len(indices) == 0 {
		for i := range commitments {
			if sum.HasIndex(uint64(i)) {
-				indices = append(indices, uint64(i))
+				indices = append(indices, i)
			}
		}
	} else {
		for _, ix := range indices {
-			if ix >= sum.MaxBlobsForEpoch() {
+			if uint64(ix) >= sum.MaxBlobsForEpoch() {
				return nil, &core.RpcError{
					Err:    fmt.Errorf("requested index %d is bigger than the maximum possible blob count %d", ix, sum.MaxBlobsForEpoch()),
					Reason: core.BadRequest,
				}
			}
-			if !sum.HasIndex(ix) {
+			if !sum.HasIndex(uint64(ix)) {
				return nil, &core.RpcError{
					Err:    fmt.Errorf("requested index %d not found", ix),
					Reason: core.NotFound,
@@ -261,7 +263,7 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64

	blobs := make([]*blocks.VerifiedROBlob, len(indices))
	for i, index := range indices {
-		vblob, err := p.BlobStorage.Get(root, index)
+		vblob, err := p.BlobStorage.Get(root, uint64(index))
		if err != nil {
			return nil, &core.RpcError{
				Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", rootSlice, index),
@@ -273,3 +275,103 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64

	return blobs, nil
}
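The case list in the comments above maps naturally onto HTTP statuses. A minimal sketch of how a caller might translate the returned *core.RpcError; the handler wiring (w, r, blocker) is hypothetical, and only Blobs and the core error reasons come from the diff. Passing nil indices requests every blob the block commits to.

	// Hypothetical handler body; w, r, and blocker come from assumed wiring.
	verifiedBlobs, rpcErr := blocker.Blobs(r.Context(), "head", nil)
	if rpcErr != nil {
		switch rpcErr.Reason {
		case core.BadRequest: // e.g. requested index beyond the max blob count
			http.Error(w, rpcErr.Err.Error(), http.StatusBadRequest)
		case core.NotFound: // no block, or index not present in the commitments
			http.Error(w, rpcErr.Err.Error(), http.StatusNotFound)
		default: // core.Internal, e.g. a missing file inside the retention period
			http.Error(w, rpcErr.Err.Error(), http.StatusInternalServerError)
		}
		return
	}
	// An empty slice is a valid 200: the block exists but carries no commitments.
	_ = verifiedBlobs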
// DataColumnSidecars returns the verified data columns for a given block identifier and indices. The identifier can be one of:
// - "head" (canonical head in node's view)
// - "genesis"
// - "finalized"
// - "justified"
// - <slot>
// - <hex encoded block root with '0x' prefix>
// - <block root>
//
// cases:
// - no block, 404
// - block exists, no commitment, 200 w/ empty list
// - block exists, has commitments, inside retention period (greater of protocol- or user-specified): serve them w/ 200 unless we hit an error reading them.
//   we are technically not supposed to import a block to forkchoice unless we have the blobs, so the nuance here is if we can't find the file and we are inside the protocol-defined retention period, then it's actually a 500.
// - block exists, has commitments, outside retention period (greater of protocol- or user-specified): i.e. just like block exists, no commitment
func (p *BeaconDbBlocker) DataColumnSidecars(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
	var rootSlice []byte

	switch id {
	case "genesis":
		return nil, &core.RpcError{Err: errors.New("data columns not supported at genesis"), Reason: core.BadRequest}
	case "head":
		var err error
		rootSlice, err = p.ChainInfoFetcher.HeadRoot(ctx)
		if err != nil {
			return nil, &core.RpcError{Err: errors.Wrap(err, "could not retrieve head root"), Reason: core.Internal}
		}
	case "finalized":
		fcp := p.ChainInfoFetcher.FinalizedCheckpt()
		if fcp == nil {
			return nil, &core.RpcError{Err: errors.New("received nil finalized checkpoint"), Reason: core.Internal}
		}
		rootSlice = fcp.Root
	case "justified":
		jcp := p.ChainInfoFetcher.CurrentJustifiedCheckpt()
		if jcp == nil {
			return nil, &core.RpcError{Err: errors.New("received nil justified checkpoint"), Reason: core.Internal}
		}
		rootSlice = jcp.Root
	default:
		if bytesutil.IsHex([]byte(id)) {
			var err error
			rootSlice, err = bytesutil.DecodeHexWithLength(id, fieldparams.RootLength)
			if err != nil {
				return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
			}
		} else {
			slot, err := strconv.ParseUint(id, 10, 64)
			if err != nil {
				return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
			}

			ok, roots, err := p.BeaconDB.BlockRootsBySlot(ctx, primitives.Slot(slot))
			if !ok {
				return nil, &core.RpcError{Err: fmt.Errorf("no block roots at slot %d", slot), Reason: core.NotFound}
			}
			if err != nil {
				return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to get block roots for slot %d", slot), Reason: core.Internal}
			}
			rootSlice = roots[0][:]
			if len(roots) > 1 {
				for _, blockRoot := range roots {
					canonical, err := p.ChainInfoFetcher.IsCanonical(ctx, blockRoot)
					if err != nil {
						return nil, &core.RpcError{Err: errors.Wrapf(err, "could not determine if block %#x is canonical", blockRoot), Reason: core.Internal}
					}
					if canonical {
						rootSlice = blockRoot[:]
						break
					}
				}
			}
		}
	}

	root := bytesutil.ToBytes32(rootSlice)

	block, err := p.BeaconDB.Block(ctx, root)
	if err != nil {
		return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve block %#x from db", rootSlice), Reason: core.Internal}
	}
	if block == nil {
		return nil, &core.RpcError{Err: fmt.Errorf("block %#x not found in db", rootSlice), Reason: core.NotFound}
	}

	uintIndices := make([]uint64, len(indices))
	for i, v := range indices {
		uintIndices[i] = uint64(v)
	}
	columns, err := p.DataColumnStorage.Get(root, uintIndices)
	if err != nil {
		return nil, &core.RpcError{
			Err:    fmt.Errorf("could not retrieve data column for block root %#x", rootSlice),
			Reason: core.Internal,
		}
	}

	return columns, nil
}
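As a usage sketch for the identifier grammar documented above; the caller wiring (blocker, ctx, the package logger) is assumed, and nil indices are simply forwarded to DataColumnStorage.Get as an empty slice, whose semantics depend on the storage implementation rather than anything shown here.

	// Illustration only; not part of the diff. Each id below exercises one
	// branch of the switch in DataColumnSidecars; "genesis" is rejected up front.
	ids := []string{"head", "finalized", "justified", "12345", "0x" + strings.Repeat("ab", 32)}
	for _, id := range ids {
		columns, rpcErr := blocker.DataColumnSidecars(ctx, id, nil)
		if rpcErr != nil {
			log.WithError(rpcErr.Err).Warnf("id %q failed with reason %v", id, rpcErr.Reason)
			continue
		}
		log.WithField("id", id).Infof("retrieved %d data column sidecars", len(columns))
	}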
@@ -18,7 +18,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-	ethpbalpha "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/testing/assert"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -51,7 +51,7 @@ func TestGetBlock(t *testing.T) {
	b4.Block.ParentRoot = bytesutil.PadTo([]byte{8}, 32)
	util.SaveBlock(t, ctx, beaconDB, b4)

-	wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block)
+	wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block)
	require.NoError(t, err)

	fetcher := &BeaconDbBlocker{
@@ -60,7 +60,7 @@ func TestGetBlock(t *testing.T) {
		DB:    beaconDB,
		Block: wsb,
		Root:  headBlock.BlockRoot,
-		FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blkContainers[64].BlockRoot},
+		FinalizedCheckPoint: &ethpb.Checkpoint{Root: blkContainers[64].BlockRoot},
		CanonicalRoots: canonicalRoots,
	},
}
@@ -71,13 +71,13 @@ func TestGetBlock(t *testing.T) {
	tests := []struct {
		name    string
		blockID []byte
-		want    *ethpbalpha.SignedBeaconBlock
+		want    *ethpb.SignedBeaconBlock
		wantErr bool
	}{
		{
			name:    "slot",
			blockID: []byte("30"),
-			want:    blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
+			want:    blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
		},
		{
			name: "bad formatting",
@@ -87,7 +87,7 @@ func TestGetBlock(t *testing.T) {
		{
			name:    "canonical",
			blockID: []byte("30"),
-			want:    blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
+			want:    blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
		},
		{
			name: "non canonical",
@@ -97,12 +97,12 @@ func TestGetBlock(t *testing.T) {
		{
			name:    "head",
			blockID: []byte("head"),
-			want:    headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
+			want:    headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
		},
		{
			name:    "finalized",
			blockID: []byte("finalized"),
-			want:    blkContainers[64].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
+			want:    blkContainers[64].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
		},
		{
			name: "genesis",
@@ -117,7 +117,7 @@ func TestGetBlock(t *testing.T) {
		{
			name:    "root",
			blockID: blkContainers[20].BlockRoot,
-			want:    blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
+			want:    blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
		},
		{
			name: "non-existent root",
@@ -127,7 +127,7 @@ func TestGetBlock(t *testing.T) {
		{
			name:    "hex",
			blockID: []byte(hexutil.Encode(blkContainers[20].BlockRoot)),
-			want:    blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
+			want:    blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
		},
		{
			name: "no block",
@@ -149,7 +149,7 @@ func TestGetBlock(t *testing.T) {
			require.NoError(t, err)
			pb, err := result.Proto()
			require.NoError(t, err)
-			pbBlock, ok := pb.(*ethpbalpha.SignedBeaconBlock)
+			pbBlock, ok := pb.(*ethpb.SignedBeaconBlock)
			require.Equal(t, true, ok)
			if !reflect.DeepEqual(pbBlock, tt.want) {
				t.Error("Expected blocks to equal")
@@ -218,7 +218,7 @@ func TestGetBlob(t *testing.T) {
	})
	t.Run("finalized", func(t *testing.T) {
		blocker := &BeaconDbBlocker{
-			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
+			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
			GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
				Genesis: time.Now(),
			},
@@ -232,7 +232,7 @@ func TestGetBlob(t *testing.T) {
	})
	t.Run("justified", func(t *testing.T) {
		blocker := &BeaconDbBlocker{
-			ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
+			ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
			GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
				Genesis: time.Now(),
			},
@@ -270,14 +270,14 @@ func TestGetBlob(t *testing.T) {
	})
	t.Run("one blob only", func(t *testing.T) {
		blocker := &BeaconDbBlocker{
-			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
+			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
			GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
				Genesis: time.Now(),
			},
			BeaconDB:    db,
			BlobStorage: bs,
		}
-		verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []uint64{2})
+		verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []int{2})
		assert.Equal(t, rpcErr == nil, true)
		require.Equal(t, 1, len(verifiedBlobs))
		sidecar := verifiedBlobs[0].BlobSidecar
@@ -289,7 +289,7 @@ func TestGetBlob(t *testing.T) {
	})
	t.Run("no blobs returns an empty array", func(t *testing.T) {
		blocker := &BeaconDbBlocker{
-			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
+			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
			GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
				Genesis: time.Now(),
			},
@@ -302,28 +302,28 @@ func TestGetBlob(t *testing.T) {
	})
	t.Run("no blob at index", func(t *testing.T) {
		blocker := &BeaconDbBlocker{
-			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
+			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
			GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
				Genesis: time.Now(),
			},
			BeaconDB:    db,
			BlobStorage: bs,
		}
-		noBlobIndex := uint64(len(blobs)) + 1
-		_, rpcErr := blocker.Blobs(ctx, "123", []uint64{0, noBlobIndex})
+		noBlobIndex := len(blobs) + 1
+		_, rpcErr := blocker.Blobs(ctx, "123", []int{0, noBlobIndex})
		require.NotNil(t, rpcErr)
		assert.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
	})
	t.Run("index too big", func(t *testing.T) {
		blocker := &BeaconDbBlocker{
-			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
+			ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
			GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
				Genesis: time.Now(),
			},
			BeaconDB:    db,
			BlobStorage: bs,
		}
-		_, rpcErr := blocker.Blobs(ctx, "123", []uint64{0, math.MaxUint})
+		_, rpcErr := blocker.Blobs(ctx, "123", []int{0, math.MaxInt})
		require.NotNil(t, rpcErr)
		assert.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
	})
@@ -8,6 +8,7 @@ go_library(
        "blocks.go",
        "construct_generic_block.go",
        "duties.go",
+       "duties_v2.go",
        "exit.go",
        "log.go",
        "proposer.go",
@@ -189,6 +190,7 @@ go_test(
        "blocks_test.go",
        "construct_generic_block_test.go",
        "duties_test.go",
+       "duties_v2_test.go",
        "exit_test.go",
        "proposer_altair_test.go",
        "proposer_attestations_electra_test.go",

beacon-chain/rpc/prysm/v1alpha1/validator/duties_v2.go (new file, 268 lines)
@@ -0,0 +1,268 @@
package validator

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
	coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// GetDutiesV2 returns the duties assigned to a list of validators specified
// in the request object.
//
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will eventually be removed in favor of the REST API.
func (vs *Server) GetDutiesV2(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.DutiesV2Response, error) {
	if vs.SyncChecker.Syncing() {
		return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
	}
	return vs.dutiesv2(ctx, req)
}

// Compute the validator duties from the head state's corresponding epoch
// for the validator public keys / indices requested.
func (vs *Server) dutiesv2(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.DutiesV2Response, error) {
	currentEpoch := slots.ToEpoch(vs.TimeFetcher.CurrentSlot())
	if req.Epoch > currentEpoch+1 {
		return nil, status.Errorf(codes.Unavailable, "Request epoch %d can not be greater than next epoch %d", req.Epoch, currentEpoch+1)
	}

	// Load head state
	s, err := vs.HeadFetcher.HeadState(ctx)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
	}

	// Advance to start of requested epoch if necessary
	s, err = vs.stateForEpoch(ctx, s, req.Epoch)
	if err != nil {
		return nil, err
	}

	// Build duties for each validator
	ctx, span := trace.StartSpan(ctx, "dutiesv2.BuildResponse")
	span.SetAttributes(trace.Int64Attribute("num_pubkeys", int64(len(req.PublicKeys))))
	defer span.End()

	// Load committee and proposer metadata
	meta, err := loadDutiesMetadata(ctx, s, req.Epoch)
	if err != nil {
		return nil, err
	}

	validatorAssignments := make([]*ethpb.DutiesV2Response_Duty, 0, len(req.PublicKeys))
	nextValidatorAssignments := make([]*ethpb.DutiesV2Response_Duty, 0, len(req.PublicKeys))

	// Start loop over assignments for the current and next epochs.
	for _, pubKey := range req.PublicKeys {
		if ctx.Err() != nil {
			return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())
		}

		idx, ok := s.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
		if !ok {
			// Unknown validator: still append a placeholder duty with UNKNOWN_STATUS.
			validatorAssignments = append(validatorAssignments, &ethpb.DutiesV2Response_Duty{
				PublicKey: pubKey,
				Status:    ethpb.ValidatorStatus_UNKNOWN_STATUS,
			})
			nextValidatorAssignments = append(nextValidatorAssignments, &ethpb.DutiesV2Response_Duty{
				PublicKey: pubKey,
				Status:    ethpb.ValidatorStatus_UNKNOWN_STATUS,
			})
			continue
		}

		meta.current.liteAssignment = helpers.AssignmentForValidator(meta.current.committeesBySlot, meta.current.startSlot, idx)
		meta.next.liteAssignment = helpers.AssignmentForValidator(meta.next.committeesBySlot, meta.next.startSlot, idx)

		assignment, nextAssignment, err := vs.buildValidatorDuty(pubKey, idx, s, req.Epoch, meta)
		if err != nil {
			return nil, err
		}
		validatorAssignments = append(validatorAssignments, assignment)
		nextValidatorAssignments = append(nextValidatorAssignments, nextAssignment)
	}

	// Dependent roots for fork choice
	currDependentRoot, err := vs.ForkchoiceFetcher.DependentRoot(currentEpoch)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
	}
	prevDependentRoot := currDependentRoot
	if currDependentRoot != [32]byte{} && currentEpoch > 0 {
		prevDependentRoot, err = vs.ForkchoiceFetcher.DependentRoot(currentEpoch - 1)
		if err != nil {
			return nil, status.Errorf(codes.Internal, "Could not get previous dependent root: %v", err)
		}
	}

	return &ethpb.DutiesV2Response{
		PreviousDutyDependentRoot: prevDependentRoot[:],
		CurrentDutyDependentRoot:  currDependentRoot[:],
		CurrentEpochDuties:        validatorAssignments,
		NextEpochDuties:           nextValidatorAssignments,
	}, nil
}

// stateForEpoch returns a state advanced (with empty slot transitions) to the
// start slot of the requested epoch.
func (vs *Server) stateForEpoch(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (state.BeaconState, error) {
	epochStartSlot, err := slots.EpochStart(reqEpoch)
	if err != nil {
		return nil, err
	}
	if s.Slot() >= epochStartSlot {
		return s, nil
	}
	headRoot, err := vs.HeadFetcher.HeadRoot(ctx)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
	}
	s, err = transition.ProcessSlotsUsingNextSlotCache(ctx, s, headRoot, epochStartSlot)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", epochStartSlot, err)
	}
	return s, nil
}

// dutiesMetadata bundles together related data needed for duty
// construction.
type dutiesMetadata struct {
	current *metadata
	next    *metadata
}

type metadata struct {
	committeesAtSlot uint64
	proposalSlots    map[primitives.ValidatorIndex][]primitives.Slot
	startSlot        primitives.Slot
	committeesBySlot [][][]primitives.ValidatorIndex
	liteAssignment   *helpers.LiteAssignment
}

func loadDutiesMetadata(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (*dutiesMetadata, error) {
	meta := &dutiesMetadata{}
	var err error
	meta.current, err = loadMetadata(ctx, s, reqEpoch)
	if err != nil {
		return nil, err
	}
	// Note: we only set the proposer slots for the current assignment and not the next epoch assignment.
	meta.current.proposalSlots, err = helpers.ProposerAssignments(ctx, s, reqEpoch)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not compute proposer slots: %v", err)
	}

	meta.next, err = loadMetadata(ctx, s, reqEpoch+1)
	if err != nil {
		return nil, err
	}
	return meta, nil
}

func loadMetadata(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (*metadata, error) {
	meta := &metadata{}

	if err := helpers.VerifyAssignmentEpoch(reqEpoch, s); err != nil {
		return nil, err
	}

	activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, reqEpoch)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not get active validator count: %v", err)
	}
	meta.committeesAtSlot = helpers.SlotCommitteeCount(activeValidatorCount)

	meta.startSlot, err = slots.EpochStart(reqEpoch)
	if err != nil {
		return nil, err
	}

	meta.committeesBySlot, err = helpers.PrecomputeCommittees(ctx, s, meta.startSlot)
	if err != nil {
		return nil, err
	}

	return meta, nil
}

// buildValidatorDuty builds both current-epoch and next-epoch V2 duty objects
// for a single validator index.
func (vs *Server) buildValidatorDuty(
	pubKey []byte,
	idx primitives.ValidatorIndex,
	s state.BeaconState,
	reqEpoch primitives.Epoch,
	meta *dutiesMetadata,
) (*ethpb.DutiesV2Response_Duty, *ethpb.DutiesV2Response_Duty, error) {
	assignment := &ethpb.DutiesV2Response_Duty{PublicKey: pubKey}
	nextAssignment := &ethpb.DutiesV2Response_Duty{PublicKey: pubKey}

	statusEnum := assignmentStatus(s, idx)
	assignment.ValidatorIndex = idx
	assignment.Status = statusEnum
	assignment.CommitteesAtSlot = meta.current.committeesAtSlot
	assignment.ProposerSlots = meta.current.proposalSlots[idx]
	populateCommitteeFields(assignment, meta.current.liteAssignment)

	nextAssignment.ValidatorIndex = idx
	nextAssignment.Status = statusEnum
	nextAssignment.CommitteesAtSlot = meta.next.committeesAtSlot
	populateCommitteeFields(nextAssignment, meta.next.liteAssignment)

	// Sync committee flags
	if coreTime.HigherEqualThanAltairVersionAndEpoch(s, reqEpoch) {
		inSync, err := helpers.IsCurrentPeriodSyncCommittee(s, idx)
		if err != nil {
			return nil, nil, status.Errorf(codes.Internal, "Could not determine current epoch sync committee: %v", err)
		}
		assignment.IsSyncCommittee = inSync
		nextAssignment.IsSyncCommittee = inSync
		if inSync {
			if err := core.RegisterSyncSubnetCurrentPeriodProto(s, reqEpoch, pubKey, statusEnum); err != nil {
				return nil, nil, status.Errorf(codes.Internal, "Could not register sync subnet current period: %v", err)
			}
		}

		// The next epoch's sync committee duty is assigned from the next period's sync committee only at
		// the sync period epoch boundary (i.e. EPOCHS_PER_SYNC_COMMITTEE_PERIOD - 1). Otherwise the
		// next epoch's sync committee duty is the same as the current epoch's.
		nextEpoch := reqEpoch + 1
		currentEpoch := coreTime.CurrentEpoch(s)
		n := slots.SyncCommitteePeriod(nextEpoch)
		c := slots.SyncCommitteePeriod(currentEpoch)
		if n > c {
			nextInSync, err := helpers.IsNextPeriodSyncCommittee(s, idx)
			if err != nil {
				return nil, nil, status.Errorf(codes.Internal, "Could not determine next epoch sync committee: %v", err)
			}
			nextAssignment.IsSyncCommittee = nextInSync
			if nextInSync {
				go func() {
					if err := core.RegisterSyncSubnetNextPeriodProto(s, reqEpoch, pubKey, statusEnum); err != nil {
						log.WithError(err).Warn("Could not register sync subnet next period")
					}
				}()
			}
		}
	}

	return assignment, nextAssignment, nil
}

func populateCommitteeFields(duty *ethpb.DutiesV2Response_Duty, la *helpers.LiteAssignment) {
	duty.CommitteeLength = la.CommitteeLength
	duty.CommitteeIndex = la.CommitteeIndex
	duty.ValidatorCommitteeIndex = la.ValidatorCommitteeIndex
	duty.AttesterSlot = la.AttesterSlot
}
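To place the new endpoint in context, a hedged sketch of a gRPC caller. The dial target and key bytes are placeholders, and it assumes GetDutiesV2 is registered on the BeaconNodeValidator service alongside the existing GetDuties; none of this wiring is confirmed by the diff itself. Two server-side rules from the file above are worth keeping in mind when calling it: requests beyond currentEpoch+1 are rejected as Unavailable, and NextEpochDuties only diverges on sync committee membership when the epoch after the requested one falls in a later sync committee period than the state's current epoch.

	package main

	import (
		"context"
		"fmt"
		"log"

		ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		// Placeholder endpoint; substitute a real beacon node gRPC address.
		conn, err := grpc.Dial("127.0.0.1:4000", grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := ethpb.NewBeaconNodeValidatorClient(conn)
		res, err := client.GetDutiesV2(context.Background(), &ethpb.DutiesRequest{
			PublicKeys: [][]byte{make([]byte, 48)}, // 48-byte BLS pubkey placeholder
			Epoch:      0,                          // must be at most current epoch + 1
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, d := range res.CurrentEpochDuties {
			fmt.Println(d.ValidatorIndex, d.AttesterSlot, d.IsSyncCommittee)
		}
	}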
|
||||
562
beacon-chain/rpc/prysm/v1alpha1/validator/duties_v2_test.go
Normal file
562
beacon-chain/rpc/prysm/v1alpha1/validator/duties_v2_test.go
Normal file
@@ -0,0 +1,562 @@
|
||||
package validator
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/execution"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
|
||||
mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
|
||||
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/assert"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/util"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
)
|
||||
|
||||
func TestGetDutiesV2_OK(t *testing.T) {
|
||||
genesis := util.NewBeaconBlock()
|
||||
depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
|
||||
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
|
||||
require.NoError(t, err)
|
||||
eth1Data, err := util.DeterministicEth1Data(len(deposits))
|
||||
require.NoError(t, err)
|
||||
bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err, "Could not setup genesis bs")
|
||||
genesisRoot, err := genesis.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not get signing root")
|
||||
|
||||
pubKeys := make([][]byte, len(deposits))
|
||||
indices := make([]uint64, len(deposits))
|
||||
for i := 0; i < len(deposits); i++ {
|
||||
pubKeys[i] = deposits[i].Data.PublicKey
|
||||
indices[i] = uint64(i)
|
||||
}
|
||||
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
|
||||
}
|
||||
vs := &Server{
|
||||
HeadFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
ForkchoiceFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
PayloadIDCache: cache.NewPayloadIDCache(),
|
||||
}
|
||||
|
||||
// Test the first validator in registry.
|
||||
req := ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
|
||||
}
|
||||
res, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
|
||||
t.Errorf("Assigned slot %d can't be higher than %d",
|
||||
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
|
||||
}
|
||||
|
||||
// Test the last validator in registry.
|
||||
lastValidatorIndex := depChainStart - 1
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
|
||||
t.Errorf("Assigned slot %d can't be higher than %d",
|
||||
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
|
||||
}
|
||||
|
||||
// We request for duties for all validators.
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: pubKeys,
|
||||
Epoch: 0,
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
assert.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetAltairDutiesV2_SyncCommitteeOK(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.AltairForkEpoch = primitives.Epoch(0)
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
genesis := util.NewBeaconBlock()
|
||||
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
|
||||
require.NoError(t, err)
|
||||
eth1Data, err := util.DeterministicEth1Data(len(deposits))
|
||||
require.NoError(t, err)
|
||||
bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err, "Could not setup genesis bs")
|
||||
h := ðpb.BeaconBlockHeader{
|
||||
StateRoot: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
|
||||
ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
|
||||
BodyRoot: bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
|
||||
}
|
||||
require.NoError(t, bs.SetLatestBlockHeader(h))
|
||||
genesisRoot, err := genesis.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not get signing root")
|
||||
|
||||
syncCommittee, err := altair.NextSyncCommittee(context.Background(), bs)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, bs.SetCurrentSyncCommittee(syncCommittee))
|
||||
pubKeys := make([][]byte, len(deposits))
|
||||
indices := make([]uint64, len(deposits))
|
||||
for i := 0; i < len(deposits); i++ {
|
||||
pubKeys[i] = deposits[i].Data.PublicKey
|
||||
indices[i] = uint64(i)
|
||||
}
|
||||
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
|
||||
require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
|
||||
|
||||
pubkeysAs48ByteType := make([][fieldparams.BLSPubkeyLength]byte, len(pubKeys))
|
||||
for i, pk := range pubKeys {
|
||||
pubkeysAs48ByteType[i] = bytesutil.ToBytes48(pk)
|
||||
}
|
||||
|
||||
slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
|
||||
}
|
||||
vs := &Server{
|
||||
HeadFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
ForkchoiceFetcher: chain,
|
||||
Eth1InfoFetcher: &mockExecution.Chain{},
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
PayloadIDCache: cache.NewPayloadIDCache(),
|
||||
}
|
||||
|
||||
// Test the first validator in registry.
|
||||
req := ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
|
||||
}
|
||||
res, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
|
||||
t.Errorf("Assigned slot %d can't be higher than %d",
|
||||
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
|
||||
}
|
||||
|
||||
// Test the last validator in registry.
|
||||
lastValidatorIndex := params.BeaconConfig().SyncCommitteeSize - 1
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
|
||||
t.Errorf("Assigned slot %d can't be higher than %d",
|
||||
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
|
||||
}
|
||||
|
||||
// We request for duties for all validators.
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: pubKeys,
|
||||
Epoch: 0,
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
require.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
|
||||
}
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
require.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
|
||||
// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
|
||||
require.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
|
||||
}
|
||||
|
||||
// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: pubKeys,
|
||||
Epoch: params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1,
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetBellatrixDutiesV2_SyncCommitteeOK(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.AltairForkEpoch = primitives.Epoch(0)
|
||||
cfg.BellatrixForkEpoch = primitives.Epoch(1)
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
genesis := util.NewBeaconBlock()
|
||||
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
|
||||
require.NoError(t, err)
|
||||
eth1Data, err := util.DeterministicEth1Data(len(deposits))
|
||||
require.NoError(t, err)
|
||||
bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
|
||||
h := ðpb.BeaconBlockHeader{
|
||||
StateRoot: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
|
||||
ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
|
||||
BodyRoot: bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
|
||||
}
|
||||
require.NoError(t, bs.SetLatestBlockHeader(h))
|
||||
require.NoError(t, err, "Could not setup genesis bs")
|
||||
genesisRoot, err := genesis.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not get signing root")
|
||||
|
||||
syncCommittee, err := altair.NextSyncCommittee(context.Background(), bs)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, bs.SetCurrentSyncCommittee(syncCommittee))
|
||||
pubKeys := make([][]byte, len(deposits))
|
||||
indices := make([]uint64, len(deposits))
|
||||
for i := 0; i < len(deposits); i++ {
|
||||
pubKeys[i] = deposits[i].Data.PublicKey
|
||||
indices[i] = uint64(i)
|
||||
}
|
||||
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
|
||||
require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
|
||||
|
||||
bs, err = execution.UpgradeToBellatrix(bs)
|
||||
require.NoError(t, err)
|
||||
|
||||
pubkeysAs48ByteType := make([][fieldparams.BLSPubkeyLength]byte, len(pubKeys))
|
||||
for i, pk := range pubKeys {
|
||||
pubkeysAs48ByteType[i] = bytesutil.ToBytes48(pk)
|
||||
}
|
||||
|
||||
slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
|
||||
}
|
||||
vs := &Server{
|
||||
HeadFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
ForkchoiceFetcher: chain,
|
||||
Eth1InfoFetcher: &mockExecution.Chain{},
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
PayloadIDCache: cache.NewPayloadIDCache(),
|
||||
}
|
||||
|
||||
// Test the first validator in registry.
|
||||
req := ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
|
||||
}
|
||||
res, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
|
||||
t.Errorf("Assigned slot %d can't be higher than %d",
|
||||
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
|
||||
}
|
||||
|
||||
// Test the last validator in registry.
|
||||
lastValidatorIndex := params.BeaconConfig().SyncCommitteeSize - 1
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
|
||||
t.Errorf("Assigned slot %d can't be higher than %d",
|
||||
res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
|
||||
}
|
||||
|
||||
// We request for duties for all validators.
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: pubKeys,
|
||||
Epoch: 0,
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
assert.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
|
||||
}
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
assert.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
|
||||
// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
|
||||
assert.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
|
||||
}
|
||||
|
||||
// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
|
||||
req = ðpb.DutiesRequest{
|
||||
PublicKeys: pubKeys,
|
||||
Epoch: params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1,
|
||||
}
|
||||
res, err = vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err, "Could not call epoch committee assignment")
|
||||
for i := 0; i < len(res.CurrentEpochDuties); i++ {
|
||||
require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
|
||||
}
|
||||
}
|
||||
|
||||
func TestGetAltairDutiesV2_UnknownPubkey(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.AltairForkEpoch = primitives.Epoch(0)
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
genesis := util.NewBeaconBlock()
|
||||
deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
|
||||
require.NoError(t, err)
|
||||
eth1Data, err := util.DeterministicEth1Data(len(deposits))
|
||||
require.NoError(t, err)
|
||||
bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err)
|
||||
h := ðpb.BeaconBlockHeader{
|
||||
StateRoot: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
|
||||
ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
|
||||
BodyRoot: bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
|
||||
}
|
||||
require.NoError(t, bs.SetLatestBlockHeader(h))
|
||||
require.NoError(t, err, "Could not setup genesis bs")
|
||||
genesisRoot, err := genesis.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not get signing root")
|
||||
|
||||
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
|
||||
require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
|
||||
|
||||
slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
|
||||
}
|
||||
depositCache, err := depositsnapshot.New()
|
||||
require.NoError(t, err)
|
||||
|
||||
vs := &Server{
|
||||
HeadFetcher: chain,
|
||||
ForkchoiceFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
Eth1InfoFetcher: &mockExecution.Chain{},
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
DepositFetcher: depositCache,
|
||||
PayloadIDCache: cache.NewPayloadIDCache(),
|
||||
}
|
||||
|
||||
unknownPubkey := bytesutil.PadTo([]byte{'u'}, 48)
|
||||
|
||||
req := ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{unknownPubkey},
|
||||
}
|
||||
res, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, false, res.CurrentEpochDuties[0].IsSyncCommittee)
|
||||
assert.Equal(t, false, res.NextEpochDuties[0].IsSyncCommittee)
|
||||
}
|
||||
|
||||
func TestGetDutiesV2_StateAdvancement(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.ElectraForkEpoch = primitives.Epoch(0)
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
epochStart, err := slots.EpochStart(1)
|
||||
require.NoError(t, err)
|
||||
st, _ := util.DeterministicGenesisStateElectra(t, 1)
|
||||
require.NoError(t, st.SetSlot(epochStart-1))
|
||||
|
||||
// Request epoch 1 which requires slot 32 processing
|
||||
req := ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{pubKey(0)},
|
||||
Epoch: 1,
|
||||
}
|
||||
b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockElectra(ðpb.SignedBeaconBlockElectra{}))
|
||||
require.NoError(t, err)
|
||||
b.SetSlot(epochStart)
|
||||
currentSlot := epochStart - 1
|
||||
// Mock chain service with state at slot 0
|
||||
chain := &mockChain.ChainService{
|
||||
Root: make([]byte, 32),
|
||||
State: st,
|
||||
Block: b,
|
||||
Slot: ¤tSlot,
|
||||
}
|
||||
|
||||
vs := &Server{
|
||||
HeadFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
ForkchoiceFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
}
|
||||
|
||||
// Verify state processing occurs
|
||||
res, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, res)
|
||||
}
|
||||
|
||||
func TestGetDutiesV2_SlotOutOfUpperBound(t *testing.T) {
|
||||
chain := &mockChain.ChainService{
|
||||
Genesis: time.Now(),
|
||||
}
|
||||
vs := &Server{
|
||||
ForkchoiceFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
}
|
||||
req := ðpb.DutiesRequest{
|
||||
Epoch: primitives.Epoch(chain.CurrentSlot()/params.BeaconConfig().SlotsPerEpoch + 2),
|
||||
}
|
||||
_, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.ErrorContains(t, "can not be greater than next epoch", err)
|
||||
}
|
||||
|
||||
func TestGetDutiesV2_CurrentEpoch_ShouldNotFail(t *testing.T) {
|
||||
genesis := util.NewBeaconBlock()
|
||||
depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
|
||||
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
|
||||
require.NoError(t, err)
|
||||
eth1Data, err := util.DeterministicEth1Data(len(deposits))
|
||||
require.NoError(t, err)
|
||||
bState, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err, "Could not setup genesis state")
|
||||
// Set state to non-epoch start slot.
|
||||
require.NoError(t, bState.SetSlot(5))
|
||||
|
||||
genesisRoot, err := genesis.Block.HashTreeRoot()
|
||||
require.NoError(t, err, "Could not get signing root")
|
||||
|
||||
pubKeys := make([][fieldparams.BLSPubkeyLength]byte, len(deposits))
|
||||
indices := make([]uint64, len(deposits))
|
||||
for i := 0; i < len(deposits); i++ {
|
||||
pubKeys[i] = bytesutil.ToBytes48(deposits[i].Data.PublicKey)
|
||||
indices[i] = uint64(i)
|
||||
}
|
||||
|
||||
chain := &mockChain.ChainService{
|
||||
State: bState, Root: genesisRoot[:], Genesis: time.Now(),
|
||||
}
|
||||
vs := &Server{
|
||||
HeadFetcher: chain,
|
||||
ForkchoiceFetcher: chain,
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
PayloadIDCache: cache.NewPayloadIDCache(),
|
||||
}
|
||||
|
||||
// Test the first validator in registry.
|
||||
req := ðpb.DutiesRequest{
|
||||
PublicKeys: [][]byte{deposits[0].Data.PublicKey},
|
||||
}
|
||||
res, err := vs.GetDutiesV2(context.Background(), req)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, 1, len(res.CurrentEpochDuties), "Expected 1 assignment")
|
||||
}
|
||||
|
||||
func TestGetDutiesV2_MultipleKeys_OK(t *testing.T) {
|
||||
genesis := util.NewBeaconBlock()
|
||||
depChainStart := uint64(64)
|
||||
|
||||
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
|
||||
require.NoError(t, err)
|
||||
eth1Data, err := util.DeterministicEth1Data(len(deposits))
|
||||
require.NoError(t, err)
|
||||
bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err, "Could not setup genesis bs")
|
||||
	genesisRoot, err := genesis.Block.HashTreeRoot()
	require.NoError(t, err, "Could not get signing root")

	pubKeys := make([][fieldparams.BLSPubkeyLength]byte, len(deposits))
	indices := make([]uint64, len(deposits))
	for i := 0; i < len(deposits); i++ {
		pubKeys[i] = bytesutil.ToBytes48(deposits[i].Data.PublicKey)
		indices[i] = uint64(i)
	}

	chain := &mockChain.ChainService{
		State: bs, Root: genesisRoot[:], Genesis: time.Now(),
	}
	vs := &Server{
		HeadFetcher:       chain,
		ForkchoiceFetcher: chain,
		TimeFetcher:       chain,
		SyncChecker:       &mockSync.Sync{IsSyncing: false},
		PayloadIDCache:    cache.NewPayloadIDCache(),
	}

	pubkey0 := deposits[0].Data.PublicKey
	pubkey1 := deposits[1].Data.PublicKey

	// Test the first validator in registry.
	req := &ethpb.DutiesRequest{
		PublicKeys: [][]byte{pubkey0, pubkey1},
	}
	res, err := vs.GetDutiesV2(context.Background(), req)
	require.NoError(t, err, "Could not call epoch committee assignment")
	assert.Equal(t, 2, len(res.CurrentEpochDuties))
	assert.Equal(t, primitives.Slot(4), res.CurrentEpochDuties[0].AttesterSlot)
	assert.Equal(t, primitives.Slot(4), res.CurrentEpochDuties[1].AttesterSlot)
}

func TestGetDutiesV2_NextSyncCommitteePeriod(t *testing.T) {
	params.SetupTestConfigCleanup(t)
	cfg := params.BeaconConfig().Copy()
	cfg.AltairForkEpoch = primitives.Epoch(0)
	cfg.EpochsPerSyncCommitteePeriod = 1
	params.OverrideBeaconConfig(cfg)

	// Configure sync committee period boundary
	epochsPerPeriod := params.BeaconConfig().EpochsPerSyncCommitteePeriod
	boundaryEpoch := epochsPerPeriod - 1

	// Create state at last epoch of current period
	deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
	require.NoError(t, err)
	eth1Data, err := util.DeterministicEth1Data(len(deposits))
	require.NoError(t, err)
	st, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
	require.NoError(t, err)

	syncCommittee, err := altair.NextSyncCommittee(context.Background(), st)
	require.NoError(t, err)
	require.NoError(t, st.SetCurrentSyncCommittee(syncCommittee))
	require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(boundaryEpoch)))

	validatorPubkey := deposits[0].Data.PublicKey

	// Request duties for boundary epoch + 1
	req := &ethpb.DutiesRequest{
		PublicKeys: [][]byte{validatorPubkey},
		Epoch:      boundaryEpoch + 1,
	}

	genesisRoot := [32]byte{}
	chain := &mockChain.ChainService{
		State: st,
		Root:  genesisRoot[:],
	}
	vs := &Server{
		HeadFetcher:       chain,
		TimeFetcher:       chain,
		ForkchoiceFetcher: chain,
		SyncChecker:       &mockSync.Sync{IsSyncing: false},
	}

	res, err := vs.GetDutiesV2(context.Background(), req)
	require.NoError(t, err)

	// Verify next epoch duties have updated sync committee status
	require.NotEqual(t,
		res.CurrentEpochDuties[0].IsSyncCommittee,
		res.NextEpochDuties[0].IsSyncCommittee,
	)
}

func TestGetDutiesV2_SyncNotReady(t *testing.T) {
	vs := &Server{
		SyncChecker: &mockSync.Sync{IsSyncing: true},
	}
	_, err := vs.GetDutiesV2(context.Background(), &ethpb.DutiesRequest{})
	assert.ErrorContains(t, "Syncing to latest head", err)
}
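For readers following the new API, here is a minimal sketch of how a consumer might walk the v2 duty fields asserted in the tests above. The response type name `DutiesResponseV2` and the `CommitteeLength`/`ValidatorCommitteeIndex` field names are assumptions inferred from the changelog description ("removes committee list from duties, replaced with committee length, validator committee index"), not taken from this diff.

	// logDutiesV2 is a hypothetical helper, not part of this diff. It assumes
	// duty entries carrying the fields the tests above assert (AttesterSlot,
	// IsSyncCommittee) plus the committee length and position that, per the
	// changelog, replace the full committee list. Requires the fmt and ethpb
	// imports already used by the surrounding test file.
	func logDutiesV2(res *ethpb.DutiesResponseV2) {
		for _, d := range res.CurrentEpochDuties {
			fmt.Printf("slot=%d committeeLen=%d position=%d sync=%t\n",
				d.AttesterSlot, d.CommitteeLength, d.ValidatorCommitteeIndex, d.IsSyncCommittee)
		}
	}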
@@ -36,6 +36,11 @@ func (m *MockBlocker) Block(_ context.Context, b []byte) (interfaces.ReadOnlySig
 }
 
 // Blobs --
-func (m *MockBlocker) Blobs(_ context.Context, _ string, _ []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
-	panic("implement me") // lint:nopanic -- Test code.
+func (*MockBlocker) Blobs(_ context.Context, _ string, _ []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+	return nil, &core.RpcError{}
 }
+
+// DataColumnSidecars mocks the DataColumnSidecars method of the Blocker interface.
+func (*MockBlocker) DataColumnSidecars(_ context.Context, _ string, _ []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
+	return nil, &core.RpcError{}
+}
@@ -90,7 +90,10 @@ func (bs *blobSync) validateNext(rb blocks.ROBlob) error {
 	if err := v.SidecarKzgProofVerified(); err != nil {
 		return err
 	}
-	if err := bs.store.Persist(bs.current, rb); err != nil {
+
+	sc := blocks.NewSidecarFromBlobSidecar(rb)
+
+	if err := bs.store.Persist(bs.current, sc); err != nil {
 		return err
 	}
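The conversion helpers introduced here are the thread tying the next several hunks together: blob sidecars are wrapped in a generic sidecar type before persistence, so the same store can later accept data column sidecars as well. Below is a rough sketch of that wrapper pattern with the plural helper defined in terms of the singular one; the types are simplified stand-ins, not Prysm's actual definitions.

	// Sidecar is a simplified stand-in for the generic wrapper type; the real
	// one presumably also carries a data column variant alongside the blob.
	type Sidecar struct {
		blob *blocks.ROBlob
	}

	// NewSidecarFromBlobSidecar wraps a single blob sidecar.
	func NewSidecarFromBlobSidecar(rb blocks.ROBlob) Sidecar {
		return Sidecar{blob: &rb}
	}

	// NewSidecarsFromBlobSidecars wraps a batch, preserving order, which is
	// how the Persist call sites in these hunks most plausibly consume it.
	func NewSidecarsFromBlobSidecars(rbs []blocks.ROBlob) []Sidecar {
		out := make([]Sidecar, 0, len(rbs))
		for _, rb := range rbs {
			out = append(out, NewSidecarFromBlobSidecar(rb))
		}
		return out
	}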
@@ -166,7 +166,8 @@ func (s *Service) processFetchedDataRegSync(ctx context.Context, data *blocksQue
 		"firstUnprocessed": bwb[0].Block.Block().Slot(),
 	}
 	for i, b := range bwb {
-		if err := avs.Persist(s.clock.CurrentSlot(), b.Blobs...); err != nil {
+		sidecars := blocks.NewSidecarsFromBlobSidecars(b.Blobs)
+		if err := avs.Persist(s.clock.CurrentSlot(), sidecars...); err != nil {
 			log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Batch failure due to BlobSidecar issues")
 			return uint64(i), err
 		}
@@ -324,7 +325,10 @@ func (s *Service) processBatchedBlocks(ctx context.Context, bwb []blocks.BlockWi
 		if len(bb.Blobs) == 0 {
 			continue
 		}
-		if err := avs.Persist(s.clock.CurrentSlot(), bb.Blobs...); err != nil {
+
+		sidecars := blocks.NewSidecarsFromBlobSidecars(bb.Blobs)
+
+		if err := avs.Persist(s.clock.CurrentSlot(), sidecars...); err != nil {
 			return 0, err
 		}
 	}
@@ -331,16 +331,17 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
 	}
 	shufflePeers(pids)
 	for i := range pids {
-		sidecars, err := sync.SendBlobSidecarByRoot(s.ctx, s.clock, s.cfg.P2P, pids[i], s.ctxMap, &req, rob.Block().Slot())
+		blobSidecars, err := sync.SendBlobSidecarByRoot(s.ctx, s.clock, s.cfg.P2P, pids[i], s.ctxMap, &req, rob.Block().Slot())
 		if err != nil {
 			continue
 		}
-		if len(sidecars) != len(req) {
+		if len(blobSidecars) != len(req) {
 			continue
 		}
 		bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
 		avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
 		current := s.clock.CurrentSlot()
+		sidecars := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
 		if err := avs.Persist(current, sidecars...); err != nil {
 			return err
 		}
@@ -348,7 +349,7 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
 			log.WithField("root", fmt.Sprintf("%#x", r)).WithField("peerID", pids[i]).Warn("Blobs from peer for origin block were unusable")
 			continue
 		}
-		log.WithField("nBlobs", len(sidecars)).WithField("root", fmt.Sprintf("%#x", r)).Info("Successfully downloaded blobs for checkpoint sync block")
+		log.WithField("nBlobs", len(blobSidecars)).WithField("root", fmt.Sprintf("%#x", r)).Info("Successfully downloaded blobs for checkpoint sync block")
 		return nil
 	}
 	return fmt.Errorf("no connected peer able to provide blobs for checkpoint sync block %#x", r)
@@ -38,6 +38,15 @@ var (
 		RequireSidecarProposerExpected,
 	}
 
+	// ByRangeRequestDataColumnSidecarRequirements defines the set of requirements that DataColumnSidecars received
+	// via the by range request must satisfy in order to upgrade an RODataColumn to a VerifiedRODataColumn.
+	// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
+	ByRangeRequestDataColumnSidecarRequirements = []Requirement{
+		RequireValidFields,
+		RequireSidecarInclusionProven,
+		RequireSidecarKzgProofVerified,
+	}
+
 	errColumnsInvalid = errors.New("data columns failed verification")
 	errBadTopicLength = errors.New("topic length is invalid")
 	errBadTopic       = errors.New("topic is not of the one expected")
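As a rough illustration of how such a requirement list is typically consumed, the sketch below runs every requirement and upgrades the column only if all pass. Only the requirement list and `errColumnsInvalid` come from the hunk above; `runRequirement` and the literal struct construction are hypothetical, not Prysm's actual verifier API.

	// verifyColumnByRange is a hypothetical illustration of consuming the
	// by-range requirement list: an RODataColumn is upgraded to a
	// VerifiedRODataColumn only after every requirement passes.
	func verifyColumnByRange(col blocks.RODataColumn) (blocks.VerifiedRODataColumn, error) {
		for _, req := range ByRangeRequestDataColumnSidecarRequirements {
			// runRequirement is an assumed dispatcher from requirement to check.
			if err := runRequirement(col, req); err != nil {
				return blocks.VerifiedRODataColumn{}, errColumnsInvalid
			}
		}
		return blocks.VerifiedRODataColumn{RODataColumn: col}, nil
	}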
@@ -1,3 +0,0 @@
### Fixed

- Made `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355)
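A quick way to observe the corrected behavior, as a self-contained sketch: the host and port assume a local Prysm node with the default HTTP API address, and the query values are illustrative (epoch 0 spans slots 0-31 on mainnet, so slot 999 does not belong to it).

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// Slot 999 is outside epoch 0, so after the fix the endpoint should
		// reject the pair with 400 Bad Request instead of answering as if valid.
		resp, err := http.Get("http://localhost:3500/eth/v1/beacon/states/head/committees?epoch=0&slot=999")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.StatusCode) // expect 400
	}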
3	changelog/Alleysira-fix-meta-in-getPeers-resp.md	Normal file
@@ -0,0 +1,3 @@
### Fixed

- Added missing `meta` field to the response of the endpoint `/eth/v1/node/peers` to align with the Beacon API spec (#15370)
@@ -1,3 +0,0 @@
### Added

- Add support for light client req/resp domain.
@@ -1,3 +0,0 @@
### Added

- Add light client mainnet spec test.
@@ -1,3 +0,0 @@
### Added

- Add light client minimal spec test support for `update_ranking` tests.
4	changelog/bastin_attestation-api-ssz.md	Normal file
@@ -0,0 +1,4 @@
### Added

- Add SSZ support for two attestation APIs: `/eth/v1/validator/attestation_data` and
  `/eth/v2/validator/aggregate_attestation`.
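For context, Beacon API endpoints negotiate SSZ through content negotiation rather than a separate URL. The sketch below requests the SSZ encoding of attestation data, assuming the standard `application/octet-stream` Accept-header convention and a local node on Prysm's default HTTP port; the query values are illustrative.

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet,
			"http://localhost:3500/eth/v1/validator/attestation_data?slot=100&committee_index=0", nil)
		if err != nil {
			log.Fatal(err)
		}
		// Ask for SSZ instead of the default JSON encoding.
		req.Header.Set("Accept", "application/octet-stream")
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("status %d, %d SSZ bytes\n", resp.StatusCode, len(body))
	}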
@@ -1,3 +0,0 @@
### Ignored

- Rename the `runLightClientUpdateRankingProofTest` function to `runLightClientUpdateRankingTest` in lc spec tests.
@@ -1,3 +0,0 @@
### Ignored

- Put the light client beacon api endpoints behind a flag
@@ -1,3 +0,0 @@
### Fixed

- Fix cyclical dependencies issue when using testing/util package
@@ -1,3 +0,0 @@
### Ignored

- Replace context.WithCancel with t.Context in tests
@@ -1,3 +0,0 @@
### Ignored

- fix discord links on readmes
3	changelog/james-prysm_get-duties-v2.md	Normal file
@@ -0,0 +1,3 @@
### Added

- GetDutiesV2 gRPC function, removes committee list from duties, replaced with committee length, validator committee index.
@@ -1,3 +0,0 @@
### Ignored

- merging the builder execution payload types with the ones defined in structs which should reduce mental load when adding new execution payload types
@@ -1,3 +0,0 @@
### Ignored

- code cleanup moving event channel into the validator struct for function cleanups.
3	changelog/james-prysm_move-web-flag.md	Normal file
@@ -0,0 +1,3 @@
### Ignored

- Code cleanup by moving the web flag as a feature flag so that we don't need to pass a variable throughout the code base.
3	changelog/james-prysm_validator-duties-v2.md	Normal file
@@ -0,0 +1,3 @@
### Added

- Added feature flag for validator client to use get duties v2.
@@ -1,3 +0,0 @@
### Ignored

- code cleanup on wait for activation and keymanagement through moving the account changed channel as a field in the validator.
@@ -1,3 +0,0 @@
### Added

- Add ability to download nightly test vectors.
@@ -1,2 +0,0 @@
### Added
- PeerDAS: Implement the blockchain package.
@@ -1,3 +0,0 @@
### Changed
- PeerDAS: Refactor the reconstruction pipeline.
- PeerDAS: `DataColumnStorage.Get` - Exit early no columns are available.
2	changelog/manu-peerdas-das.md	Normal file
@@ -0,0 +1,2 @@
### Added
- PeerDAS: Implement DAS.
@@ -1,2 +0,0 @@
### Added
- Implement data column sidecars filesystem.
2	changelog/manu-peerdas-node.md	Normal file
@@ -0,0 +1,2 @@
### Added
- PeerDAS: Add `CustodyInfo` in `BeaconNode`.
@@ -1,2 +0,0 @@
### Added
- PeerDAS: Implement P2P.
12	changelog/manu-peerdas-small.md	Normal file
@@ -0,0 +1,12 @@
### Added
- `verifyBlobCommitmentCount`: Print max allowed blob count in error message.


### Ignored
- `TestPersist`: Use `fieldparams.RootLength` instead of `32`.
- `TestDataColumnSidecarsByRootReq_Marshal`: Remove blank line.
- `ConvertPeerIDToNodeID`: Improve readability by using one line per field.

### Changed
- `parseIndices`: Return `[]int` instead of `[]uint64`.
@@ -1,2 +0,0 @@
### Added
- PeerDAS: Validation pipeline for data column sidecars received via gossip.
@@ -1,2 +0,0 @@
### Added
- Data column sidecars verification methods.
@@ -1,3 +0,0 @@
### Ignored

- Use independent context for domain data calculation.
@@ -1,3 +0,0 @@
### Added

- Added Prysm build data to otel tracing spans.
@@ -1,3 +0,0 @@
### Ignored

- Changed spectest tests to a single directory per config (mainnet, general, minimal). This has a significant reduction in runtime due to runfile tree generation only happening 3 times rather than dozens of times.
3	changelog/pvl-regression-15369.md	Normal file
@@ -0,0 +1,3 @@
### Fixed

- Added regression test for [PR 15369](https://github.com/OffchainLabs/prysm/pull/15369)
@@ -1,3 +0,0 @@
### Fixed

- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally.
@@ -1,3 +0,0 @@
### Ignored

- Release notes
@@ -1,3 +0,0 @@
### Ignored

- Updated changelog for v6.0.3
3	changelog/pvl-v6.0.4.md	Normal file
@@ -0,0 +1,3 @@
### Ignored

- Added changelog for v6.0.4
@@ -1,3 +0,0 @@
### Added

- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra.
3	changelog/radek_reorganize-lc-processing.md	Normal file
@@ -0,0 +1,3 @@
### Ignored

- Reorganize processing of light client updates.
@@ -1,3 +0,0 @@
### Fixed

- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/`. (#14954)
@@ -1,3 +0,0 @@
### Added

- Added /bin/sh simlink to docker images
@@ -1,3 +0,0 @@
### Added

- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277
@@ -1,3 +0,0 @@
### Ignored

- Clean up verification package
@@ -1,3 +0,0 @@
### Added

- Add fulu operation and epoch processing spec tests
@@ -1,3 +0,0 @@
### Changed

- Default hoodi testnet builder gas limit to 60M
@@ -1,3 +0,0 @@
### Added

- random forkchoice spec tests for fulu
@@ -1,3 +0,0 @@
### Fixed

- Set seen blob cache size correctly based on current slot time at start up
@@ -1,3 +0,0 @@
### Ignored

- Use current slot helper whenever possible
@@ -1,3 +0,0 @@
### Changed

- Update spec tests to v1.6.0-alpha.0
@@ -1,3 +0,0 @@
### Ignored

- Remove unused protobuf import
@@ -325,12 +325,7 @@ var (
 		Usage: "Skips the y/n confirmation userprompt for sending a deposit to the deposit contract.",
 		Value: false,
 	}
-	// EnableWebFlag enables controlling the validator client via the Prysm web ui. This is a work in progress.
-	EnableWebFlag = &cli.BoolFlag{
-		Name:  "web",
-		Usage: "(Work in progress): Enables the web portal for the validator client.",
-		Value: false,
-	}
 
 	// SlashingProtectionExportDirFlag allows specifying the output directory
 	// for a validator's slashing protection history.
 	SlashingProtectionExportDirFlag = &cli.StringFlag{
@@ -72,7 +72,6 @@ var appFlags = []cli.Flag{
 	flags.SlasherCertFlag,
 	flags.WalletPasswordFileFlag,
 	flags.WalletDirFlag,
-	flags.EnableWebFlag,
 	flags.GraffitiFileFlag,
 	flags.EnableDistributed,
 	flags.AuthTokenPathFlag,
@@ -136,7 +136,6 @@ var appHelpFlagGroups = []flagGroup{
 	{
 		Name: "misc",
 		Flags: []cli.Flag{
-			flags.EnableWebFlag,
 			flags.DisablePenaltyRewardLogFlag,
 			flags.DisableAccountMetricsFlag,
 			flags.EnableDistributed,
@@ -50,6 +50,8 @@ type Flags struct {
 	EnableHistoricalSpaceRepresentation bool // EnableHistoricalSpaceRepresentation enables the saving of registry validators in separate buckets to save space
 	EnableBeaconRESTApi                 bool // EnableBeaconRESTApi enables experimental usage of the beacon REST API by the validator when querying a beacon node
 	EnableExperimentalAttestationPool   bool // EnableExperimentalAttestationPool enables an experimental attestation pool design.
+	EnableDutiesV2                      bool // EnableDutiesV2 sets validator client to use the get Duties V2 endpoint
+	EnableWeb                           bool // EnableWeb enables the webui on the validator client
 	// Logging related toggles.
 	DisableGRPCConnectionLogs bool // Disables logging when a new grpc client has connected.
 	EnableFullSSZDataLogging  bool // Enables logging for full ssz data on rejected gossip messages
@@ -334,6 +336,14 @@ func ConfigureValidator(ctx *cli.Context) error {
 		logEnabled(EnableBeaconRESTApi)
 		cfg.EnableBeaconRESTApi = true
 	}
+	if ctx.Bool(EnableDutiesV2.Name) {
+		logEnabled(EnableDutiesV2)
+		cfg.EnableDutiesV2 = true
+	}
+	if ctx.Bool(EnableWebFlag.Name) {
+		logEnabled(EnableWebFlag)
+		cfg.EnableWeb = true
+	}
 	cfg.KeystoreImportDebounceInterval = ctx.Duration(dynamicKeyReloadDebounceInterval.Name)
 	Init(cfg)
 	return nil
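The hunk above extends a common pattern: CLI flags are copied onto a config struct, published once via `Init`, and read everywhere else through a package-level getter. The self-contained sketch below reproduces that pattern in miniature; it is a simplified stand-in for Prysm's features package, not its actual implementation.

	package main

	import "fmt"

	// Flags is a simplified stand-in for the process-wide feature config.
	type Flags struct {
		EnableDutiesV2 bool
		EnableWeb      bool
	}

	var shared *Flags

	// Init publishes the parsed config once at startup.
	func Init(cfg *Flags) { shared = cfg }

	// Get returns the published config, falling back to defaults if unset.
	func Get() *Flags {
		if shared == nil {
			return &Flags{}
		}
		return shared
	}

	func main() {
		Init(&Flags{EnableDutiesV2: true}) // e.g. after parsing --enable-duties-v2
		fmt.Println("duties v2 enabled:", Get().EnableDutiesV2)
	}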
@@ -188,6 +188,19 @@ var (
 		Name:  "blacklist-roots",
 		Usage: "A comma-separatted list of 0x-prefixed hexstrings. Declares blocks with the given blockroots to be invalid. It downscores peers that send these blocks.",
 	}
+
+	// EnableDutiesV2 sets the validator client to use the get duties v2 grpc endpoint
+	EnableDutiesV2 = &cli.BoolFlag{
+		Name:  "enable-duties-v2",
+		Usage: "Forces use of get duties v2 endpoint.",
+	}
+
+	// EnableWebFlag enables controlling the validator client via the Prysm web ui. This is a work in progress.
+	EnableWebFlag = &cli.BoolFlag{
+		Name:  "web",
+		Usage: "(Work in progress): Enables the web portal for the validator client.",
+		Value: false,
+	}
 )
 
 // devModeFlags holds list of flags that are set when development mode is on.
@@ -208,6 +221,8 @@ var ValidatorFlags = append(deprecatedFlags, []cli.Flag{
 	EnableMinimalSlashingProtection,
 	enableDoppelGangerProtection,
 	EnableBeaconRESTApi,
+	EnableDutiesV2,
+	EnableWebFlag,
 }...)
 
 // E2EValidatorFlags contains a list of the validator feature flags to be tested in E2E.
Some files were not shown because too many files have changed in this diff.