Mirror of https://github.com/OffchainLabs/prysm.git, synced 2026-01-10 05:47:59 -05:00

Compare commits: 27 commits, rm-electra...move-netwo
Commits (27): 57e94b89db, 5666a73f70, 05aa803377, 7cb856be9a, 196d26465d, 5e278329ae, 756b5c1598, 0190e4addd, 3f302ad4b6, 684a55384b, d12da8cbe0, 6087875da5, 214f4a76fb, f5a9394c77, 4095da8568, f1288a18ec, 543ebe857e, e569df5ebc, 8c324cc491, 265d84569c, 79b064a6cc, 182c18a7b2, 8b9c161560, 4a4532f3ba, 91b44360fc, 472c5da49e, a0060fa794
CHANGELOG.md (43 lines changed)
@@ -4,6 +4,49 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05

This release includes more work on PeerDAS and light client support, along with a few bug fixes:

- The blob cache size is now set correctly at startup.
- A fix for slashing protection history exports where the validator database was in a nested folder.
- Corrected behavior of the API call for state committees with an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` in Prysm Docker images.

In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.

### Added

- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added /bin/sh symlink to Docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add Fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecars verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- Random forkchoice spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)

### Changed

- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - Exit early if no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default Hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)

### Fixed

- Fix cyclical dependency issues when using the testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set seen blob cache size correctly based on current slot time at startup. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/` (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made the `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when the slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)

## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21

This release has important bug fixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:
@@ -16,7 +16,6 @@ go_library(
        "//api/server/structs:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//encoding/bytesutil:go_default_library",
        "//network/forks:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
        "@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
        "@com_github_pkg_errors//:go_default_library",

@@ -9,7 +9,6 @@ import (
    "net/url"
    "path"
    "regexp"
    "sort"
    "strconv"

    "github.com/OffchainLabs/prysm/v6/api/client"
@@ -17,7 +16,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/api/server/structs"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    "github.com/OffchainLabs/prysm/v6/network/forks"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/pkg/errors"
@@ -137,24 +135,6 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
    return fr.ToConsensus()
}

// GetForkSchedule retrieves all forks, past, present, and future, of which this node is aware.
func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
    body, err := c.Get(ctx, getForkSchedulePath)
    if err != nil {
        return nil, errors.Wrap(err, "error requesting fork schedule")
    }
    fsr := &forkScheduleResponse{}
    err = json.Unmarshal(body, fsr)
    if err != nil {
        return nil, err
    }
    ofs, err := fsr.OrderedForkSchedule()
    if err != nil {
        return nil, errors.Wrap(err, fmt.Sprintf("problem unmarshaling %s response", getForkSchedulePath))
    }
    return ofs, nil
}

// GetConfigSpec retrieves the current configuration of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
    body, err := c.Get(ctx, getConfigSpecPath)
@@ -334,31 +314,3 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToEx
    }
    return poolResponse, nil
}

type forkScheduleResponse struct {
    Data []structs.Fork
}

func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {
    ofs := make(forks.OrderedSchedule, 0)
    for _, d := range fsr.Data {
        epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
        if err != nil {
            return nil, errors.Wrapf(err, "error parsing epoch %s", d.Epoch)
        }
        vSlice, err := hexutil.Decode(d.CurrentVersion)
        if err != nil {
            return nil, err
        }
        if len(vSlice) != 4 {
            return nil, fmt.Errorf("got %d byte version, expected 4 bytes. version hex=%s", len(vSlice), d.CurrentVersion)
        }
        version := bytesutil.ToBytes4(vSlice)
        ofs = append(ofs, forks.ForkScheduleEntry{
            Version: version,
            Epoch:   primitives.Epoch(epoch),
        })
    }
    sort.Sort(ofs)
    return ofs, nil
}
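The removed `OrderedForkSchedule` helper boils down to three steps: parse the decimal epoch, decode and length-check the 4-byte version, then sort by epoch. Below is a minimal, self-contained Go sketch of those steps using only the standard library; the `fork` and `entry` types are simplified stand-ins for Prysm's `structs.Fork` and `forks.ForkScheduleEntry`, not the real API.

```go
package main

import (
    "encoding/hex"
    "fmt"
    "sort"
    "strconv"
    "strings"
)

// fork mirrors the wire shape: a decimal epoch and a 0x-prefixed 4-byte version.
type fork struct {
    Epoch          string
    CurrentVersion string
}

type entry struct {
    Version [4]byte
    Epoch   uint64
}

// orderedSchedule parses each fork and sorts the result by epoch,
// the same steps the removed helper performs with Prysm's own types.
func orderedSchedule(data []fork) ([]entry, error) {
    out := make([]entry, 0, len(data))
    for _, d := range data {
        epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
        if err != nil {
            return nil, fmt.Errorf("parsing epoch %s: %w", d.Epoch, err)
        }
        v, err := hex.DecodeString(strings.TrimPrefix(d.CurrentVersion, "0x"))
        if err != nil {
            return nil, err
        }
        if len(v) != 4 {
            return nil, fmt.Errorf("got %d byte version, expected 4", len(v))
        }
        var e entry
        copy(e.Version[:], v)
        e.Epoch = epoch
        out = append(out, e)
    }
    sort.Slice(out, func(i, j int) bool { return out[i].Epoch < out[j].Epoch })
    return out, nil
}

func main() {
    forks := []fork{{Epoch: "74240", CurrentVersion: "0x01000000"}, {Epoch: "0", CurrentVersion: "0x00000000"}}
    schedule, err := orderedSchedule(forks)
    fmt.Println(schedule, err) // entries come back sorted by epoch
}
```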
@@ -25,6 +25,13 @@ type BlockGossipEvent struct {
    Block string `json:"block"`
}

type DataColumnGossipEvent struct {
    Slot           string   `json:"slot"`
    Index          string   `json:"index"`
    BlockRoot      string   `json:"block_root"`
    KzgCommitments []string `json:"kzg_commitments"`
}

type AggregatedAttEventSource struct {
    Aggregate *Attestation `json:"aggregate"`
}
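The new `DataColumnGossipEvent` follows the same convention as the other event payloads: every field arrives as a JSON string (or list of strings). A small decoding sketch; the payload values below are made up for illustration, and real events carry full-length hex commitments.

```go
package main

import (
    "encoding/json"
    "fmt"
)

// DataColumnGossipEvent mirrors the struct added in this diff.
type DataColumnGossipEvent struct {
    Slot           string   `json:"slot"`
    Index          string   `json:"index"`
    BlockRoot      string   `json:"block_root"`
    KzgCommitments []string `json:"kzg_commitments"`
}

func main() {
    // Illustrative payload only; fields are strings per Beacon API convention.
    payload := []byte(`{"slot":"10","index":"42","block_root":"0x01","kzg_commitments":["0x02"]}`)
    var ev DataColumnGossipEvent
    if err := json.Unmarshal(payload, &ev); err != nil {
        panic(err)
    }
    fmt.Printf("column %s for slot %s\n", ev.Index, ev.Slot)
}
```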
@@ -33,8 +33,14 @@ type GetPeerResponse struct {
    Data *Peer `json:"data"`
}

// Added Meta to align with the beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
type Meta struct {
    Count int `json:"count"`
}

type GetPeersResponse struct {
    Data []*Peer `json:"data"`
    Meta Meta    `json:"meta"`
}

type Peer struct {
@@ -25,6 +25,7 @@ go_library(
        "receive_attestation.go",
        "receive_blob.go",
        "receive_block.go",
        "receive_data_column.go",
        "service.go",
        "setup_forchoice.go",
        "tracked_proposer.go",
@@ -50,6 +51,7 @@ go_library(
        "//beacon-chain/core/feed/state:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/light-client:go_default_library",
        "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/core/signing:go_default_library",
        "//beacon-chain/core/time:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
@@ -146,6 +148,7 @@ go_test(
        "//beacon-chain/core/feed/state:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/light-client:go_default_library",
        "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/core/signing:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
        "//beacon-chain/das:go_default_library",
@@ -40,10 +40,12 @@ var (
    errNotGenesisRoot = errors.New("root is not the genesis block root")
    // errBlacklistedRoot is returned when a block root is blacklisted as invalid.
    errBlacklistedRoot = verification.AsVerificationFailure(errors.New("block root is blacklisted"))
    // errMaxBlobsExceeded is returned when the number of blobs in a block exceeds the maximum allowed.
    errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
    // errMaxDataColumnsExceeded is returned when the number of data columns exceeds the maximum allowed.
    errMaxDataColumnsExceeded = verification.AsVerificationFailure(errors.New("expected data columns for node exceeds NUMBER_OF_COLUMNS"))
)

var errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))

// An invalid block is a block that fails the state transition under the core protocol rules.
// The beacon node shall neither accept nor build blocks that branch off from an invalid block.
// Some examples of invalid blocks are:
@@ -439,6 +439,9 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
            // Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
            log.WithError(err).Debug("Could not remove blob from blob storage")
        }
        if err := s.dataColumnStorage.Remove(root); err != nil {
            log.WithError(err).Errorf("Could not remove data columns from data column storage for root %#x", root)
        }
    }
    return nil
}
@@ -1,10 +1,13 @@
package blockchain

import (
    "time"

    "github.com/OffchainLabs/prysm/v6/async/event"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
    statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
    lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/db"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -127,9 +130,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}

// WithP2PBroadcaster sets the P2P accessor used to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
func WithP2PBroadcaster(p p2p.Accessor) Option {
    return func(s *Service) error {
        s.cfg.P2p = p
        s.cfg.P2P = p
        return nil
    }
}
@@ -208,6 +211,15 @@ func WithBlobStorage(b *filesystem.BlobStorage) Option {
    }
}

// WithDataColumnStorage sets the data column storage backend for the blockchain service.
func WithDataColumnStorage(b *filesystem.DataColumnStorage) Option {
    return func(s *Service) error {
        s.dataColumnStorage = b
        return nil
    }
}

// WithSyncChecker sets the sync checker for the blockchain service.
func WithSyncChecker(checker Checker) Option {
    return func(s *Service) error {
        s.cfg.SyncChecker = checker
@@ -215,6 +227,15 @@ func WithSyncChecker(checker Checker) Option {
    }
}

// WithCustodyInfo sets the custody info for the blockchain service.
func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
    return func(s *Service) error {
        s.cfg.CustodyInfo = custodyInfo
        return nil
    }
}

// WithSlasherEnabled sets whether the slasher is enabled or not.
func WithSlasherEnabled(enabled bool) Option {
    return func(s *Service) error {
        s.slasherEnabled = enabled
@@ -222,6 +243,15 @@ func WithSlasherEnabled(enabled bool) Option {
    }
}

// WithGenesisTime sets the genesis time for the blockchain service.
func WithGenesisTime(genesisTime time.Time) Option {
    return func(s *Service) error {
        s.genesisTime = genesisTime
        return nil
    }
}

// WithLightClientStore sets the light client store for the blockchain service.
func WithLightClientStore(lcs *lightclient.Store) Option {
    return func(s *Service) error {
        s.lcStore = lcs
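All of the `With*` helpers added here follow Go's functional-options pattern: each returns a closure that mutates the service and may fail, and the constructor applies them in order. A minimal, self-contained sketch of how such options compose; the `service` fields are hypothetical stand-ins for the real configuration, not Prysm's actual types.

```go
package main

import (
    "fmt"
    "time"
)

// service stands in for blockchain.Service; fields are illustrative only.
type service struct {
    genesisTime time.Time
    columnStore string
}

// option mirrors the blockchain.Option shape: a setter that may fail.
type option func(*service) error

func withGenesisTime(t time.Time) option {
    return func(s *service) error { s.genesisTime = t; return nil }
}

func withColumnStore(path string) option {
    return func(s *service) error { s.columnStore = path; return nil }
}

// newService applies each option in order, aborting on the first error,
// which is how option-based constructors typically consume them.
func newService(opts ...option) (*service, error) {
    s := &service{}
    for _, o := range opts {
        if err := o(s); err != nil {
            return nil, err
        }
    }
    return s, nil
}

func main() {
    s, err := newService(withGenesisTime(time.Unix(0, 0)), withColumnStore("/tmp/columns"))
    fmt.Println(s, err)
}
```

The payoff of the pattern is visible in the tests later in this diff: call sites pass only the options they care about (for example `WithCustodyInfo` or `WithGenesisTime`) without touching a growing constructor signature.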
@@ -5,6 +5,7 @@ import (
    "time"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -88,7 +89,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
    if err != nil {
        return err
    }
    if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
    if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
        return err
    }
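Passing `MaxValidatorsPerCommittee` and `MaxCommitteesPerSlot` explicitly makes the index bound a caller-supplied parameter rather than a lookup buried inside the helper. The sketch below shows the kind of cap this enables; the rule that a post-Electra aggregate may span every committee in a slot, so the cap becomes the product of the two limits, is this sketch's assumption and is not stated in the diff itself.

```go
package main

import (
    "errors"
    "fmt"
)

// maxIndices sketches the bound that explicit limits make possible: a
// pre-Electra attestation is capped per committee, while an Electra-style
// aggregate can cover every committee in the slot (assumption of this sketch).
func maxIndices(maxValidatorsPerCommittee, maxCommitteesPerSlot uint64, electra bool) uint64 {
    if electra {
        return maxValidatorsPerCommittee * maxCommitteesPerSlot
    }
    return maxValidatorsPerCommittee
}

func validateIndices(indices []uint64, limit uint64) error {
    if uint64(len(indices)) > limit {
        return errors.New("validator indices count exceeds limit")
    }
    return nil
}

func main() {
    // 2048 and 64 are the familiar mainnet values, used here for illustration.
    limit := maxIndices(2048, 64, true)
    fmt.Println(validateIndices(make([]uint64, 2049), limit)) // <nil>: 2049 is well under 2048*64
}
```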
@@ -3,10 +3,12 @@ package blockchain
import (
    "context"
    "fmt"
    "slices"
    "time"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
    coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/das"
@@ -70,8 +72,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
    }
    if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
        defer s.processLightClientUpdates(cfg)
        defer s.saveLightClientUpdate(cfg)
        defer s.saveLightClientBootstrap(cfg)
    }
    defer s.sendStateFeedOnBlock(cfg)
    defer reportProcessingTime(startTime)
@@ -239,8 +239,9 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
            return err
        }
    }

    if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
        return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
        return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
    }
    args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
        JustifiedCheckpoint: jCheckpoints[i],
@@ -578,12 +579,12 @@ func (s *Service) runLateBlockTasks() {
    }
}

// missingIndices uses the expected commitments from the block to determine
// missingBlobIndices uses the expected commitments from the block to determine
// which BlobSidecar indices would need to be in the database for DA success.
// It returns a map where each key represents a missing BlobSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// BlobSidecars against the set of known missing sidecars.
func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte, slot primitives.Slot) (map[uint64]struct{}, error) {
func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
    maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
    if len(expected) == 0 {
        return nil, nil
@@ -592,29 +593,223 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
        return nil, errMaxBlobsExceeded
    }
    indices := bs.Summary(root)
    missing := make(map[uint64]struct{}, len(expected))
    missing := make(map[uint64]bool, len(expected))
    for i := range expected {
        if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
            missing[uint64(i)] = struct{}{}
            missing[uint64(i)] = true
        }
    }
    return missing, nil
}
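Both `missingBlobIndices` and its data-column counterpart below return a set that the caller drains as sidecar notifications arrive, declaring availability once it is empty. A self-contained sketch of that bookkeeping, with a plain channel standing in for the storage notifier; the names here are illustrative, not Prysm's API.

```go
package main

import "fmt"

// drainMissing sketches the consumer side of a missing-index map: delete
// entries as sidecars arrive and report success once the set is empty.
func drainMissing(missing map[uint64]bool, arrivals <-chan uint64) bool {
    for idx := range arrivals {
        delete(missing, idx) // deleting an absent key is a no-op
        if len(missing) == 0 {
            return true // all expected sidecars seen
        }
    }
    return false // channel closed with sidecars still outstanding
}

func main() {
    missing := map[uint64]bool{0: true, 2: true}
    arrivals := make(chan uint64, 3)
    arrivals <- 2
    arrivals <- 1 // not expected; ignored
    arrivals <- 0
    close(arrivals)
    fmt.Println(drainMissing(missing, arrivals)) // true
}
```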
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
    if signed.Version() < version.Deneb {
        return nil
// missingDataColumnIndices uses the expected data columns from the block to determine
// which DataColumnSidecar indices would need to be in the database for DA success.
// It returns a map where each key represents a missing DataColumnSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// DataColumns against the set of known missing sidecars.
func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
    if len(expected) == 0 {
        return nil, nil
    }

    block := signed.Block()
    numberOfColumns := params.BeaconConfig().NumberOfColumns

    if uint64(len(expected)) > numberOfColumns {
        return nil, errMaxDataColumnsExceeded
    }

    // Get a summary of the data columns stored in the database.
    summary := bs.Summary(root)

    // Check all expected data columns against the summary.
    missing := make(map[uint64]bool)
    for column := range expected {
        if !summary.HasIndex(column) {
            missing[column] = true
        }
    }

    return missing, nil
}

// isDataAvailable blocks until all sidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the sidecar notifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signedBlock interfaces.ReadOnlySignedBeaconBlock) error {
    block := signedBlock.Block()
    if block == nil {
        return errors.New("invalid nil beacon block")
    }

    blockVersion := block.Version()
    if blockVersion >= version.Fulu {
        return s.areDataColumnsAvailable(ctx, root, block)
    }

    if blockVersion >= version.Deneb {
        return s.areBlobsAvailable(ctx, root, block)
    }

    return nil
}
// areDataColumnsAvailable blocks until all data columns committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
func (s *Service) areDataColumnsAvailable(ctx context.Context, root [fieldparams.RootLength]byte, block interfaces.ReadOnlyBeaconBlock) error {
    // We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS.
    blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
    blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)

    if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
        return nil
    }

    body := block.Body()
    if body == nil {
        return errors.New("invalid nil beacon block body")
    }

    kzgCommitments, err := body.BlobKzgCommitments()
    if err != nil {
        return errors.Wrap(err, "blob KZG commitments")
    }

    // If the block has no commitments, there is nothing to wait for.
    if len(kzgCommitments) == 0 {
        return nil
    }

    // All columns to sample need to be available for the block to be considered available.
    // https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
    nodeID := s.cfg.P2P.NodeID()

    // Prevent the custody group count from changing during the rest of the function.
    s.cfg.CustodyInfo.Mut.RLock()
    defer s.cfg.CustodyInfo.Mut.RUnlock()

    // Get the custody group sampling size for the node.
    custodyGroupSamplingSize := s.cfg.CustodyInfo.CustodyGroupSamplingSize(peerdas.Actual)
    peerInfo, _, err := peerdas.Info(nodeID, custodyGroupSamplingSize)
    if err != nil {
        return errors.Wrap(err, "peer info")
    }

    // Subscribe to new data columns as they are stored in the database.
    subscription, identsChan := s.dataColumnStorage.Subscribe()
    defer subscription.Unsubscribe()

    // Get the count of data columns we already have in the store.
    summary := s.dataColumnStorage.Summary(root)
    storedDataColumnsCount := summary.Count()

    minimumColumnCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()

    // As soon as we have enough data column sidecars, we can reconstruct the missing ones.
    // We don't need to wait for the rest of the data columns to declare the block as available.
    if storedDataColumnsCount >= minimumColumnCountToReconstruct {
        return nil
    }

    // Get a map of data column indices that are not currently available.
    missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
    if err != nil {
        return errors.Wrap(err, "missing data columns")
    }

    // If there are no missing indices, all data column sidecars are available.
    // This is the happy path.
    if len(missingMap) == 0 {
        return nil
    }

    // Log for DA checks that cross over into the next slot; helpful for debugging.
    nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)

    // Avoid logging if the DA check is called after the next slot starts.
    if nextSlot.After(time.Now()) {
        timer := time.AfterFunc(time.Until(nextSlot), func() {
            missingMapCount := uint64(len(missingMap))

            if missingMapCount == 0 {
                return
            }

            var (
                expected interface{} = "all"
                missing  interface{} = "all"
            )

            numberOfColumns := params.BeaconConfig().NumberOfColumns
            colMapCount := uint64(len(peerInfo.CustodyColumns))

            if colMapCount < numberOfColumns {
                expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
            }

            if missingMapCount < numberOfColumns {
                missing = uint64MapToSortedSlice(missingMap)
            }

            log.WithFields(logrus.Fields{
                "slot":            block.Slot(),
                "root":            fmt.Sprintf("%#x", root),
                "columnsExpected": expected,
                "columnsWaiting":  missing,
            }).Warning("Data columns still missing at slot end")
        })
        defer timer.Stop()
    }

    for {
        select {
        case idents := <-identsChan:
            if idents.Root != root {
                // This is not the root we are looking for.
                continue
            }

            for _, index := range idents.Indices {
                // This is a data column we are expecting.
                if _, ok := missingMap[index]; ok {
                    storedDataColumnsCount++
                }

                // As soon as we have more than half of the data columns, we can reconstruct the missing ones.
                // We don't need to wait for the rest of the data columns to declare the block as available.
                if storedDataColumnsCount >= minimumColumnCountToReconstruct {
                    return nil
                }

                // Remove the index from the missing map.
                delete(missingMap, index)

                // Return if there are no more missing data columns.
                if len(missingMap) == 0 {
                    return nil
                }
            }

        case <-ctx.Done():
            var missingIndices interface{} = "all"
            numberOfColumns := params.BeaconConfig().NumberOfColumns
            missingIndicesCount := uint64(len(missingMap))

            if missingIndicesCount < numberOfColumns {
                missingIndices = uint64MapToSortedSlice(missingMap)
            }

            return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
        }
    }
}
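The early return on `MinimumColumnsCountToReconstruct` rests on PeerDAS's erasure coding: blob data is extended at rate 1/2 across the columns, so once roughly half of the `NUMBER_OF_COLUMNS` are in custody the rest can be recomputed locally rather than waited for. A sketch of that threshold; the exact rounding here is an assumption of this sketch, and `peerdas.MinimumColumnsCountToReconstruct` remains the authoritative implementation.

```go
package main

import "fmt"

// minimumColumnsToReconstruct sketches the reconstruction threshold: with a
// rate-1/2 extension, any half of the columns suffices to rebuild all of
// them. The +1 rounding for odd counts is an assumption of this sketch.
func minimumColumnsToReconstruct(numberOfColumns uint64) uint64 {
    return (numberOfColumns + 1) / 2
}

func main() {
    const numberOfColumns = 128 // NUMBER_OF_COLUMNS in the Fulu spec config
    fmt.Println(minimumColumnsToReconstruct(numberOfColumns)) // 64
}
```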
// areBlobsAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootLength]byte, block interfaces.ReadOnlyBeaconBlock) error {
    blockSlot := block.Slot()

    // We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS.
    if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
        return nil
@@ -634,9 +829,9 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
        return nil
    }
    // Get a map of BlobSidecar indices that are not currently available.
    missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
    missing, err := missingBlobIndices(s.blobStorage, root, kzgCommitments, block.Slot())
    if err != nil {
        return err
        return errors.Wrap(err, "missing indices")
    }
    // If there are no missing indices, all BlobSidecars are available.
    if len(missing) == 0 {
@@ -648,15 +843,20 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
    nc := s.blobNotifiers.forRoot(root, block.Slot())

    // Log for DA checks that cross over into the next slot; helpful for debugging.
    nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
    nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
    // Avoid logging if the DA check is called after the next slot starts.
    if nextSlot.After(time.Now()) {
        nst := time.AfterFunc(time.Until(nextSlot), func() {
            if len(missing) == 0 {
                return
            }
            log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
                Error("Still waiting for DA check at slot end.")

            log.WithFields(logrus.Fields{
                "slot":          blockSlot,
                "root":          fmt.Sprintf("%#x", root),
                "blobsExpected": expected,
                "blobsWaiting":  len(missing),
            }).Error("Still waiting for blobs DA check at slot end.")
        })
        defer nst.Stop()
    }
@@ -678,13 +878,14 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
    }
}

func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
    return logrus.Fields{
        "slot":          slot,
        "root":          fmt.Sprintf("%#x", root),
        "blobsExpected": expected,
        "blobsWaiting":  missing,
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
    output := make([]uint64, 0, len(input))
    for idx := range input {
        output = append(output, idx)
    }
    slices.Sort[[]uint64](output)
    return output
}
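`uint64MapToSortedSlice` exists because Go deliberately randomizes map iteration order; without sorting, the `columnsExpected` and `columnsWaiting` log fields would print in a different order on every run. A standalone equivalent using the standard `slices` package:

```go
package main

import (
    "fmt"
    "slices"
)

// sortedKeys mirrors uint64MapToSortedSlice: collecting map keys and sorting
// them keeps log output deterministic and easy to diff between runs.
func sortedKeys(input map[uint64]bool) []uint64 {
    output := make([]uint64, 0, len(input))
    for idx := range input {
        output = append(output, idx)
    }
    slices.Sort(output)
    return output
}

func main() {
    missing := map[uint64]bool{42: true, 7: true, 19: true}
    fmt.Println(sortedKeys(missing)) // [7 19 42]
}
```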
|
||||
// lateBlockTasks is called 4 seconds into the slot and performs tasks
|
||||
@@ -770,7 +971,7 @@ func (s *Service) waitForSync() error {
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
|
||||
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [fieldparams.RootLength]byte) error {
|
||||
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
|
||||
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
|
||||
}
|
||||
|
||||
@@ -131,6 +131,12 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
}

func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
    if err := s.processLightClientUpdate(cfg); err != nil {
        log.WithError(err).Error("Failed to process light client update")
    }
    if err := s.processLightClientBootstrap(cfg); err != nil {
        log.WithError(err).Error("Failed to process light client bootstrap")
    }
    if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
        log.WithError(err).Error("Failed to process light client optimistic update")
    }
@@ -139,38 +145,33 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
    }
}

// saveLightClientUpdate saves the light client update for this block
// processLightClientUpdate saves the light client update for this block
// if it's better than the already saved one, when the feature flag is enabled.
func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
    attestedRoot := cfg.roblock.Block().ParentRoot()
    attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
    if err != nil {
        log.WithError(err).Errorf("Saving light client update failed: Could not get attested block for root %#x", attestedRoot)
        return
        return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
    }
    if attestedBlock == nil || attestedBlock.IsNil() {
        log.Error("Saving light client update failed: Attested block is nil")
        return
        return errors.New("attested block is nil")
    }
    attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
    if err != nil {
        log.WithError(err).Errorf("Saving light client update failed: Could not get attested state for root %#x", attestedRoot)
        return
        return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
    }
    if attestedState == nil || attestedState.IsNil() {
        log.Error("Saving light client update failed: Attested state is nil")
        return
        return errors.New("attested state is nil")
    }

    finalizedRoot := attestedState.FinalizedCheckpoint().Root
    finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
    if err != nil {
        if errors.Is(err, errBlockNotFoundInCacheOrDB) {
            log.Debugf("Skipping saving light client update: Finalized block is nil for root %#x", finalizedRoot)
        } else {
            log.WithError(err).Errorf("Saving light client update failed: Could not get finalized block for root %#x", finalizedRoot)
            log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
            return nil
        }
        return
        return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
    }

    update, err := lightclient.NewLightClientUpdateFromBeaconState(
@@ -183,57 +184,52 @@ func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
        finalizedBlock,
    )
    if err != nil {
        log.WithError(err).Error("Saving light client update failed: Could not create light client update")
        return
        return errors.Wrapf(err, "could not create light client update")
    }

    period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))

    oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
    if err != nil {
        log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
        return
        return errors.Wrapf(err, "could not get current light client update")
    }

    if oldUpdate == nil {
        if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
            log.WithError(err).Error("Saving light client update failed: Could not save light client update")
        } else {
            log.WithField("period", period).Debug("Saving light client update: Saved new update")
            return errors.Wrapf(err, "could not save light client update")
        }
        return
        log.WithField("period", period).Debug("Saved new light client update")
        return nil
    }

    isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
    if err != nil {
        log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
        return
        return errors.Wrapf(err, "could not compare light client updates")
    }

    if isNewUpdateBetter {
        if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
            log.WithError(err).Error("Saving light client update failed: Could not save light client update")
        } else {
            log.WithField("period", period).Debug("Saving light client update: Saved new update")
            return errors.Wrapf(err, "could not save light client update")
        }
    } else {
        log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
        log.WithField("period", period).Debug("Saved new light client update")
        return nil
    }
    log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
    return nil
}

// saveLightClientBootstrap saves a light client bootstrap for this block
// processLightClientBootstrap saves a light client bootstrap for this block
// when the feature flag is enabled.
func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error {
    blockRoot := cfg.roblock.Root()
    bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
    if err != nil {
        log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
        return
        return errors.Wrapf(err, "could not create light client bootstrap")
    }
    err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
    if err != nil {
        log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
    if err := s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap); err != nil {
        return errors.Wrapf(err, "could not save light client bootstrap")
    }
    return nil
}

func (s *Service) processLightClientFinalityUpdate(
@@ -310,7 +306,7 @@ func (s *Service) processLightClientFinalityUpdate(
        Data: newUpdate,
    })

    if err = s.cfg.P2p.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
    if err = s.cfg.P2P.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
        return errors.Wrap(err, "could not broadcast light client finality update")
    }

@@ -363,7 +359,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
        Data: newUpdate,
    })

    if err = s.cfg.P2p.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
    if err = s.cfg.P2P.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
        return errors.Wrap(err, "could not broadcast light client optimistic update")
    }

@@ -13,6 +13,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/slasher/types"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
    "github.com/OffchainLabs/prysm/v6/config/features"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -2331,13 +2332,13 @@ func driftGenesisTime(s *Service, slot, delay int64) {
    s.cfg.ForkChoiceStore.SetGenesisTime(uint64(newTime.Unix()))
}

func TestMissingIndices(t *testing.T) {
func TestMissingBlobIndices(t *testing.T) {
    cases := []struct {
        name     string
        expected [][]byte
        present  []uint64
        result   map[uint64]struct{}
        root     [32]byte
        root     [fieldparams.RootLength]byte
        err      error
    }{
        {
@@ -2395,7 +2396,7 @@ func TestMissingIndices(t *testing.T) {
        bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
        t.Run(c.name, func(t *testing.T) {
            require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
            missing, err := missingIndices(bs, c.root, c.expected, 0)
            missing, err := missingBlobIndices(bs, c.root, c.expected, 0)
            if c.err != nil {
                require.ErrorIs(t, err, c.err)
                return
@@ -2403,9 +2404,70 @@ func TestMissingIndices(t *testing.T) {
            require.NoError(t, err)
            require.Equal(t, len(c.result), len(missing))
            for key := range c.result {
                m, ok := missing[key]
                require.Equal(t, true, ok)
                require.Equal(t, c.result[key], m)
                require.Equal(t, true, missing[key])
            }
        })
    }
}
func TestMissingDataColumnIndices(t *testing.T) {
    countPlusOne := params.BeaconConfig().NumberOfColumns + 1
    tooManyColumns := make(map[uint64]bool, countPlusOne)
    for i := range countPlusOne {
        tooManyColumns[uint64(i)] = true
    }

    testCases := []struct {
        name          string
        storedIndices []uint64
        input         map[uint64]bool
        expected      map[uint64]bool
        err           error
    }{
        {
            name:  "zero len expected",
            input: map[uint64]bool{},
        },
        {
            name:  "expected exceeds max",
            input: tooManyColumns,
            err:   errMaxDataColumnsExceeded,
        },
        {
            name:          "all missing",
            storedIndices: []uint64{},
            input:         map[uint64]bool{0: true, 1: true, 2: true},
            expected:      map[uint64]bool{0: true, 1: true, 2: true},
        },
        {
            name:          "none missing",
            input:         map[uint64]bool{0: true, 1: true, 2: true},
            expected:      map[uint64]bool{},
            storedIndices: []uint64{0, 1, 2, 3, 4}, // Extra columns stored but not expected
        },
        {
            name:          "some missing",
            storedIndices: []uint64{0, 20},
            input:         map[uint64]bool{0: true, 10: true, 20: true, 30: true},
            expected:      map[uint64]bool{10: true, 30: true},
        },
    }

    var emptyRoot [fieldparams.RootLength]byte

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            dcm, dcs := filesystem.NewEphemeralDataColumnStorageWithMocker(t)
            err := dcm.CreateFakeIndices(emptyRoot, 0, tc.storedIndices...)
            require.NoError(t, err)

            // Test the function
            actual, err := missingDataColumnIndices(dcs, emptyRoot, tc.input)
            require.ErrorIs(t, err, tc.err)

            require.Equal(t, len(tc.expected), len(actual))
            for key := range tc.expected {
                require.Equal(t, true, actual[key])
            }
        })
    }
@@ -2644,7 +2706,7 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
    return r
}

func TestSaveLightClientUpdate(t *testing.T) {
func TestProcessLightClientUpdate(t *testing.T) {
    featCfg := &features.Flags{}
    featCfg.EnableLightClient = true
    reset := features.InitWithReset(featCfg)
@@ -2685,7 +2747,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
        isValidPayload: true,
    }

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    // Check that the light client update is saved
    period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2740,7 +2802,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
    require.NoError(t, err)

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
    require.NoError(t, err)
@@ -2801,7 +2863,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
    require.NoError(t, err)

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
    require.NoError(t, err)
@@ -2844,7 +2906,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
        isValidPayload: true,
    }

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    // Check that the light client update is saved
    period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2898,7 +2960,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
    require.NoError(t, err)

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
    require.NoError(t, err)
@@ -2959,7 +3021,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
    require.NoError(t, err)

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
    require.NoError(t, err)
@@ -3002,7 +3064,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
        isValidPayload: true,
    }

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    // Check that the light client update is saved
    period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -3056,7 +3118,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
    require.NoError(t, err)

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
    require.NoError(t, err)
@@ -3117,7 +3179,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
    require.NoError(t, err)

    s.saveLightClientUpdate(cfg)
    require.NoError(t, s.processLightClientUpdate(cfg))

    u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
    require.NoError(t, err)
@@ -3130,7 +3192,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
    reset()
}

func TestSaveLightClientBootstrap(t *testing.T) {
func TestProcessLightClientBootstrap(t *testing.T) {
    featCfg := &features.Flags{}
    featCfg.EnableLightClient = true
    reset := features.InitWithReset(featCfg)
@@ -3160,7 +3222,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
        isValidPayload: true,
    }

    s.saveLightClientBootstrap(cfg)
    require.NoError(t, s.processLightClientBootstrap(cfg))

    // Check that the light client bootstrap is saved
    b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3195,7 +3257,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
        isValidPayload: true,
    }

    s.saveLightClientBootstrap(cfg)
    require.NoError(t, s.processLightClientBootstrap(cfg))

    // Check that the light client bootstrap is saved
    b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3230,7 +3292,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
        isValidPayload: true,
    }

    s.saveLightClientBootstrap(cfg)
    require.NoError(t, s.processLightClientBootstrap(cfg))

    // Check that the light client bootstrap is saved
    b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
@@ -3246,6 +3308,193 @@ func TestSaveLightClientBootstrap(t *testing.T) {
    reset()
}

type testIsAvailableParams struct {
    options                 []Option
    blobKzgCommitmentsCount uint64
    columnsToSave           []uint64
}

func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
    ctx, cancel := context.WithCancel(context.Background())
    dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)

    options := append(params.options, WithDataColumnStorage(dataColumnStorage))
    service, _ := minimalTestService(t, options...)

    genesisState, secretKeys := util.DeterministicGenesisStateElectra(t, 32 /*validator count*/)

    err := service.saveGenesisData(ctx, genesisState)
    require.NoError(t, err)

    conf := util.DefaultBlockGenConfig()
    conf.NumBlobKzgCommitments = params.blobKzgCommitmentsCount

    signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
    require.NoError(t, err)

    root, err := signedBeaconBlock.Block.HashTreeRoot()
    require.NoError(t, err)

    dataColumnsParams := make([]util.DataColumnParams, 0, len(params.columnsToSave))
    for _, i := range params.columnsToSave {
        dataColumnParam := util.DataColumnParams{ColumnIndex: i}
        dataColumnsParams = append(dataColumnsParams, dataColumnParam)
    }

    dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: dataColumnsParams}
    _, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

    err = dataColumnStorage.Save(verifiedRODataColumns)
    require.NoError(t, err)

    signed, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlock)
    require.NoError(t, err)

    return ctx, cancel, service, root, signed
}

func TestIsDataAvailable(t *testing.T) {
    t.Run("Fulu - out of retention window", func(t *testing.T) {
        params := testIsAvailableParams{options: []Option{WithGenesisTime(time.Unix(0, 0))}}
        ctx, _, service, root, signed := testIsAvailableSetup(t, params)

        err := service.isDataAvailable(ctx, root, signed)
        require.NoError(t, err)
    })

    t.Run("Fulu - no commitment in blocks", func(t *testing.T) {
        ctx, _, service, root, signed := testIsAvailableSetup(t, testIsAvailableParams{})

        err := service.isDataAvailable(ctx, root, signed)
        require.NoError(t, err)
    })

    t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
        minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
        indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
        for i := range minimumColumnsCountToReconstruct {
            indices = append(indices, i)
        }

        params := testIsAvailableParams{
            options:                 []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
            columnsToSave:           indices,
            blobKzgCommitmentsCount: 3,
        }

        ctx, _, service, root, signed := testIsAvailableSetup(t, params)

        err := service.isDataAvailable(ctx, root, signed)
        require.NoError(t, err)
    })

    t.Run("Fulu - no missing data columns", func(t *testing.T) {
        params := testIsAvailableParams{
            options:                 []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
            columnsToSave:           []uint64{1, 17, 19, 42, 75, 87, 102, 117, 119}, // 119 is not needed
            blobKzgCommitmentsCount: 3,
        }

        ctx, _, service, root, signed := testIsAvailableSetup(t, params)

        err := service.isDataAvailable(ctx, root, signed)
        require.NoError(t, err)
    })

    t.Run("Fulu - some initially missing data columns (no reconstruction)", func(t *testing.T) {
        testParams := testIsAvailableParams{
            options:                 []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
            columnsToSave:           []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
            blobKzgCommitmentsCount: 3,
        }

        ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)

        var wrongRoot [fieldparams.RootLength]byte
        copy(wrongRoot[:], root[:])
        wrongRoot[0]++ // change the root to simulate a wrong root

        _, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{wrongRoot: {
            {ColumnIndex: 42}, // needed
        }})

        _, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{root: {
            {ColumnIndex: 87}, // needed
            {ColumnIndex: 1},  // not needed
            {ColumnIndex: 42}, // needed
        }})

        time.AfterFunc(10*time.Millisecond, func() {
            err := service.dataColumnStorage.Save(verifiedSidecarsWrongRoot)
            require.NoError(t, err)

            err = service.dataColumnStorage.Save(verifiedSidecars)
            require.NoError(t, err)
        })

        err := service.isDataAvailable(ctx, root, signed)
        require.NoError(t, err)
    })

    t.Run("Fulu - some initially missing data columns (reconstruction)", func(t *testing.T) {
        const (
            missingColumns = uint64(2)
            cgc            = 128
        )
        var custodyInfo peerdas.CustodyInfo
        custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
        custodyInfo.ToAdvertiseGroupCount.Set(cgc)

        minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
        indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)

        for i := range minimumColumnsCountToReconstruct - missingColumns {
            indices = append(indices, i)
        }

        testParams := testIsAvailableParams{
            options:                 []Option{WithCustodyInfo(&custodyInfo)},
            columnsToSave:           indices,
            blobKzgCommitmentsCount: 3,
        }

        ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)

        dataColumnParams := make([]util.DataColumnParams, 0, missingColumns)
        for i := minimumColumnsCountToReconstruct - missingColumns; i < minimumColumnsCountToReconstruct; i++ {
            dataColumnParam := util.DataColumnParams{ColumnIndex: i}
            dataColumnParams = append(dataColumnParams, dataColumnParam)
        }

        _, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{root: dataColumnParams})

        time.AfterFunc(10*time.Millisecond, func() {
            err := service.dataColumnStorage.Save(verifiedSidecars)
            require.NoError(t, err)
        })

        err := service.isDataAvailable(ctx, root, signed)
        require.NoError(t, err)
    })

    t.Run("Fulu - some columns are definitively missing", func(t *testing.T) {
        params := testIsAvailableParams{
            options:                 []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
            blobKzgCommitmentsCount: 3,
        }

        ctx, cancel, service, root, signed := testIsAvailableSetup(t, params)

        time.AfterFunc(10*time.Millisecond, func() {
            cancel()
        })

        err := service.isDataAvailable(ctx, root, signed)
        require.NotNil(t, err)
    })
}
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
    var l *util.TestLightClient
    switch v {
@@ -3310,7 +3559,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
    params.OverrideBeaconConfig(beaconCfg)

    s, tr := minimalTestService(t)
    s.cfg.P2p = &mockp2p.FakeP2P{}
    s.cfg.P2P = &mockp2p.FakeP2P{}
    ctx := tr.ctx

    testCases := []struct {
@@ -3446,7 +3695,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
    params.OverrideBeaconConfig(beaconCfg)

    s, tr := minimalTestService(t)
    s.cfg.P2p = &mockp2p.FakeP2P{}
    s.cfg.P2P = &mockp2p.FakeP2P{}
    ctx := tr.ctx

    testCases := []struct {
@@ -16,6 +16,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/slasher/types"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
    "github.com/OffchainLabs/prysm/v6/config/features"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -52,6 +53,13 @@ type BlobReceiver interface {
    ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}

// DataColumnReceiver defines the methods of the chain service for receiving
// new data columns.
type DataColumnReceiver interface {
    ReceiveDataColumn(blocks.VerifiedRODataColumn) error
    ReceiveDataColumns([]blocks.VerifiedRODataColumn) error
}

// SlashingReceiver defines the methods of the chain service for receiving validated slashings over the wire.
type SlashingReceiver interface {
    ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -74,6 +82,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
        log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
        return nil
    }

+   receivedTime := time.Now()
    s.blockBeingSynced.set(blockRoot)
    defer s.blockBeingSynced.unset(blockRoot)

@@ -82,6 +91,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
    if err != nil {
        return err
    }

    preState, err := s.getBlockPreState(ctx, blockCopy.Block())
    if err != nil {
        return errors.Wrap(err, "could not get block's prestate")

@@ -97,10 +107,12 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
    if err != nil {
        return err
    }

    daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
    if err != nil {
        return err
    }

    // Defragment the state before continuing block processing.
    s.defragmentState(postState)
@@ -227,26 +239,34 @@ func (s *Service) validateExecutionAndConsensus(
 func (s *Service) handleDA(
    ctx context.Context,
    block interfaces.SignedBeaconBlock,
-   blockRoot [32]byte,
+   blockRoot [fieldparams.RootLength]byte,
    avs das.AvailabilityStore,
-) (time.Duration, error) {
-   daStartTime := time.Now()
-   if avs != nil {
-       rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
-       if err != nil {
-           return 0, err
-       }
-       if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
-           return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
-       }
-   } else {
-       if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
-           return 0, errors.Wrap(err, "could not validate blob data availability")
-       }
-   }
-   daWaitedTime := time.Since(daStartTime)
-   dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
-   return daWaitedTime, nil
+) (elapsed time.Duration, err error) {
+   defer func(start time.Time) {
+       elapsed = time.Since(start)
+
+       if err == nil {
+           dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
+       }
+   }(time.Now())
+
+   if avs == nil {
+       if err = s.isDataAvailable(ctx, blockRoot, block); err != nil {
+           return
+       }
+
+       return
+   }
+
+   var rob blocks.ROBlock
+   rob, err = blocks.NewROBlockWithRoot(block, blockRoot)
+   if err != nil {
+       return
+   }
+
+   err = avs.IsDataAvailable(ctx, s.CurrentSlot(), rob)
+
+   return
 }
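The rewritten handleDA trades manual start/stop timing for a named return plus a deferred closure: elapsed is set on every exit path, and the histogram is updated only when err is nil. A self-contained sketch of the pattern, with the metric call simulated by a print:

package main

import (
    "errors"
    "fmt"
    "time"
)

// timed runs work and reports how long it took. Because `elapsed` and `err`
// are named returns, the deferred closure can read the final error and set
// the duration no matter which return statement fired.
func timed(work func() error) (elapsed time.Duration, err error) {
    defer func(start time.Time) {
        elapsed = time.Since(start)
        if err == nil {
            // In the real code this is a Prometheus histogram observation.
            fmt.Printf("observed %v\n", elapsed)
        }
    }(time.Now())

    err = work()
    return
}

func main() {
    if _, err := timed(func() error { time.Sleep(5 * time.Millisecond); return nil }); err != nil {
        fmt.Println(err)
    }
    if _, err := timed(func() error { return errors.New("boom") }); err != nil {
        fmt.Println("no observation on error:", err)
    }
}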
func (s *Service) reportPostBlockProcessing(

@@ -180,6 +180,19 @@ func TestService_ReceiveBlock(t *testing.T) {
        }
        wg.Wait()
    }

+func TestHandleDA(t *testing.T) {
+   signedBeaconBlock, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{
+       Block: &ethpb.BeaconBlock{
+           Body: &ethpb.BeaconBlockBody{},
+       },
+   })
+   require.NoError(t, err)
+
+   s, _ := minimalTestService(t)
+   elapsed, err := s.handleDA(context.Background(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
+   require.NoError(t, err)
+   require.Equal(t, true, elapsed > 0, "Elapsed time should be greater than 0")
+}

func TestService_ReceiveBlockUpdateHead(t *testing.T) {
    s, tr := minimalTestService(t,
beacon-chain/blockchain/receive_data_column.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package blockchain

import (
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/pkg/errors"
)

// ReceiveDataColumns receives a batch of data columns.
func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
    if err := s.dataColumnStorage.Save(dataColumnSidecars); err != nil {
        return errors.Wrap(err, "save data column sidecars")
    }

    return nil
}

// ReceiveDataColumn receives a single data column.
// (It is only a wrapper around ReceiveDataColumns.)
func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
    if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
        return errors.Wrap(err, "save data column sidecars")
    }

    return nil
}
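Together with the DataColumnReceiver interface added earlier in this diff, callers can depend on the narrow receiver rather than the whole blockchain Service. A hypothetical consumer (not code from this commit):

// storeIncomingColumn persists one gossip-validated column via the narrow
// DataColumnReceiver interface; a batch would use r.ReceiveDataColumns(batch).
func storeIncomingColumn(r DataColumnReceiver, verified blocks.VerifiedRODataColumn) error {
    return r.ReceiveDataColumn(verified)
}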
@@ -16,6 +16,7 @@ import (
    statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
+   "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
    coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/db"

@@ -64,6 +65,7 @@ type Service struct {
    blobNotifiers     *blobNotifierMap
    blockBeingSynced  *currentlySyncingBlock
    blobStorage       *filesystem.BlobStorage
+   dataColumnStorage *filesystem.DataColumnStorage
    slasherEnabled    bool
    lcStore           *lightClient.Store
}

@@ -81,7 +83,7 @@ type config struct {
    ExitPool        voluntaryexits.PoolManager
    SlashingPool    slashings.PoolManager
    BLSToExecPool   blstoexec.PoolManager
-   P2p             p2p.Broadcaster
+   P2P             p2p.Accessor
    MaxRoutines     int
    StateNotifier   statefeed.Notifier
    ForkChoiceStore f.ForkChoicer

@@ -93,6 +95,7 @@ type config struct {
    FinalizedStateAtStartUp state.BeaconState
    ExecutionEngineCaller   execution.EngineCaller
    SyncChecker             Checker
+   CustodyInfo             *peerdas.CustodyInfo
}

// Checker is an interface used to determine if a node is in initial sync
@@ -97,7 +97,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
    WithAttestationPool(attestations.NewPool()),
    WithSlashingPool(slashings.NewPool()),
    WithExitPool(voluntaryexits.NewPool()),
-   WithP2PBroadcaster(&mockBroadcaster{}),
+   WithP2PBroadcaster(&mockAccessor{}),
    WithStateNotifier(&mockBeaconNode{}),
    WithForkChoiceStore(fc),
    WithAttestationService(attService),

@@ -49,7 +49,7 @@ type mockBroadcaster struct {
    broadcastCalled bool
}

-type mockAccesser struct {
+type mockAccessor struct {
    mockBroadcaster
    p2pTesting.MockPeerManager
}

@@ -144,9 +144,10 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
    WithDepositCache(dc),
    WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
    WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
+   WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
    WithSyncChecker(mock.MockChecker{}),
    WithExecutionEngineCaller(&mockExecution.EngineClient{}),
-   WithP2PBroadcaster(&mockAccesser{}),
+   WithP2PBroadcaster(&mockAccessor{}),
    WithLightClientStore(&lightclient.Store{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
@@ -75,6 +75,7 @@ type ChainService struct {
    BlockSlot    primitives.Slot
    SyncingRoot  [32]byte
    Blobs        []blocks.VerifiedROBlob
+   DataColumns  []blocks.VerifiedRODataColumn
    TargetRoot   [32]byte
    MockHeadSlot *primitives.Slot
}

@@ -715,6 +716,17 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
    return nil
}

+// ReceiveDataColumn implements the same method in chain service
+func (c *ChainService) ReceiveDataColumn(dc blocks.VerifiedRODataColumn) error {
+   c.DataColumns = append(c.DataColumns, dc)
+   return nil
+}
+
+// ReceiveDataColumns implements the same method in chain service
+func (*ChainService) ReceiveDataColumns(_ []blocks.VerifiedRODataColumn) error {
+   return nil
+}
+
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
    return c.TargetRoot, nil
@@ -81,6 +81,7 @@ go_test(
    "//proto/prysm/v1alpha1/attestation:go_default_library",
    "//runtime/version:go_default_library",
    "//testing/assert:go_default_library",
+   "//testing/fuzz:go_default_library",
    "//testing/require:go_default_library",
    "//testing/util:go_default_library",
    "//time:go_default_library",
@@ -19,9 +19,10 @@ import (
    "github.com/OffchainLabs/prysm/v6/math"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
+   "github.com/OffchainLabs/prysm/v6/testing/fuzz"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
-   fuzz "github.com/google/gofuzz"
+   gofuzz "github.com/google/gofuzz"
    "github.com/prysmaticlabs/go-bitfield"
)

@@ -458,7 +459,7 @@ func TestValidatorFlag_Add_ExceedsLength(t *testing.T) {
}

func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    st := &ethpb.BeaconStateAltair{}
    b := &ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{}}
    for i := 0; i < 10000; i++ {

@@ -478,6 +479,7 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, s, b)
        }
+       fuzz.FreeMemory(i)
    }
}
@@ -7,12 +7,13 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+   "github.com/OffchainLabs/prysm/v6/testing/fuzz"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-   fuzz "github.com/google/gofuzz"
+   gofuzz "github.com/google/gofuzz"
)

func TestFuzzProcessDeposits_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconStateAltair{}
    deposits := make([]*ethpb.Deposit, 100)
    ctx := context.Background()

@@ -27,11 +28,12 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconStateAltair{}
    deposit := &ethpb.Deposit{}
    ctx := context.Background()

@@ -45,11 +47,12 @@ func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    deposit := &ethpb.Deposit{}
    ctx := context.Background()

@@ -63,11 +66,12 @@ func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    deposit := &ethpb.Deposit{}

@@ -80,11 +84,12 @@ func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessDeposit_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconStateAltair{}
    deposit := &ethpb.Deposit{}

@@ -97,5 +102,6 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
        }
+       fuzz.FreeMemory(i)
    }
}
@@ -41,7 +41,6 @@ go_library(
    "//encoding/ssz:go_default_library",
    "//math:go_default_library",
    "//monitoring/tracing/trace:go_default_library",
-   "//network/forks:go_default_library",
    "//proto/engine/v1:go_default_library",
    "//proto/prysm/v1alpha1:go_default_library",
    "//proto/prysm/v1alpha1/attestation:go_default_library",

@@ -105,6 +104,7 @@ go_test(
    "//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
    "//runtime/version:go_default_library",
    "//testing/assert:go_default_library",
+   "//testing/fuzz:go_default_library",
    "//testing/require:go_default_library",
    "//testing/util:go_default_library",
    "//time/slots:go_default_library",
@@ -178,7 +178,7 @@ func VerifyAttestationNoVerifySignature(
        }
    }

-   return attestation.IsValidAttestationIndices(ctx, indexedAtt)
+   return attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot)
}

// ProcessAttestationNoVerifySignature processes the attestation without verifying the attestation signature. This

@@ -243,7 +243,7 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBea
    ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
    defer span.End()

-   if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
+   if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
        return err
    }
    domain, err := signing.Domain(
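Both call sites now pass the beacon-config limits into IsValidAttestationIndices instead of letting the helper read global configuration. The helper's new shape, inferred from these call sites only (the real implementation lives in proto/prysm/v1alpha1/attestation and may differ):

// Sketch: the committee-size limits become explicit parameters, decoupling
// the attestation package from global config. Body is illustrative only.
func IsValidAttestationIndices(ctx context.Context, att ethpb.IndexedAtt,
    maxValidatorsPerCommittee, maxCommitteesPerSlot uint64) error {
    indices := att.GetAttestingIndices()
    if len(indices) == 0 {
        return errors.New("expected non-empty attesting indices")
    }
    // Post-Electra attestations may span all of a slot's committees, so the
    // upper bound is the product of both limits (an assumption in this sketch).
    if uint64(len(indices)) > maxValidatorsPerCommittee*maxCommitteesPerSlot {
        return errors.New("validator indices count exceeds maximum allowed")
    }
    // Sortedness, uniqueness and further checks elided.
    return nil
}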
@@ -11,12 +11,13 @@ import (
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+   "github.com/OffchainLabs/prysm/v6/testing/fuzz"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-   fuzz "github.com/google/gofuzz"
+   gofuzz "github.com/google/gofuzz"
)

func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    ctx := context.Background()
    state := &ethpb.BeaconState{}
    att := &ethpb.Attestation{}

@@ -28,11 +29,12 @@ func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
        require.NoError(t, err)
        _, err = ProcessAttestationNoVerifySignature(ctx, s, att)
        _ = err
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessBlockHeader_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    block := &ethpb.SignedBeaconBlock{}

@@ -49,11 +51,12 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
        require.NoError(t, err)
        _, err = ProcessBlockHeader(context.Background(), s, wsb)
        _ = err
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    var ba []byte
    var pubkey [fieldparams.BLSPubkeyLength]byte
    var sig [96]byte

@@ -77,7 +80,7 @@ func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
}

func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    e := &ethpb.Eth1Data{}
    state, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
    require.NoError(t, err)

@@ -92,7 +95,7 @@ func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
}

func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    eth1data := &ethpb.Eth1Data{}
    eth1data2 := &ethpb.Eth1Data{}

@@ -105,7 +108,7 @@ func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
}

func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    eth1data := &ethpb.Eth1Data{}
    var stateVotes []*ethpb.Eth1Data
    for i := 0; i < 100000; i++ {

@@ -122,7 +125,7 @@ func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
}

func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    block := &ethpb.BeaconBlock{}

@@ -133,11 +136,12 @@ func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
        require.NoError(t, err)
        _, err = ProcessBlockHeaderNoVerify(context.Background(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
        _ = err
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessRandao_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    b := &ethpb.SignedBeaconBlock{}

@@ -155,11 +159,12 @@ func TestFuzzProcessRandao_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    blockBody := &ethpb.BeaconBlockBody{}

@@ -172,11 +177,12 @@ func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    p := &ethpb.ProposerSlashing{}
    ctx := context.Background()

@@ -189,11 +195,12 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    proposerSlashing := &ethpb.ProposerSlashing{}
    for i := 0; i < 10000; i++ {

@@ -203,11 +210,12 @@ func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
        require.NoError(t, err)
        err = VerifyProposerSlashing(s, proposerSlashing)
        _ = err
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    a := &ethpb.AttesterSlashing{}
    ctx := context.Background()

@@ -220,11 +228,12 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    attesterSlashing := &ethpb.AttesterSlashing{}
    ctx := context.Background()

@@ -235,11 +244,12 @@ func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
        require.NoError(t, err)
        err = VerifyAttesterSlashing(ctx, s, attesterSlashing)
        _ = err
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    attestationData := &ethpb.AttestationData{}
    attestationData2 := &ethpb.AttestationData{}

@@ -251,7 +261,7 @@ func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
}

func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    attesterSlashing := &ethpb.AttesterSlashing{}

    for i := 0; i < 10000; i++ {

@@ -261,7 +271,7 @@ func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
}

func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    b := &ethpb.SignedBeaconBlock{}
    ctx := context.Background()

@@ -279,11 +289,12 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    idxAttestation := &ethpb.IndexedAttestation{}
    ctx := context.Background()

@@ -294,11 +305,12 @@ func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
        require.NoError(t, err)
        err = VerifyIndexedAttestation(ctx, s, idxAttestation)
        _ = err
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzverifyDeposit_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    deposit := &ethpb.Deposit{}
    for i := 0; i < 10000; i++ {

@@ -312,7 +324,7 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
}

func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    e := &ethpb.SignedVoluntaryExit{}
    ctx := context.Background()

@@ -325,11 +337,12 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconState{}
    e := &ethpb.SignedVoluntaryExit{}
    for i := 0; i < 10000; i++ {

@@ -341,11 +354,12 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzVerifyExit_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    ve := &ethpb.SignedVoluntaryExit{}
    rawVal := &ethpb.Validator{}
    fork := &ethpb.Fork{}
@@ -229,13 +229,16 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
    if body.Version() < version.Deneb {
        return nil
    }

    kzgs, err := body.BlobKzgCommitments()
    if err != nil {
        return err
    }
-   maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
-   if len(kzgs) > maxBlobsPerBlock {
-       return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
+
+   commitmentCount, maxBlobsPerBlock := len(kzgs), params.BeaconConfig().MaxBlobsPerBlock(slot)
+   if commitmentCount > maxBlobsPerBlock {
+       return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
    }

    return nil
}
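A worked example of the sharper error text, assuming the Deneb maximum of 6 blobs per block:

package main

import "fmt"

func main() {
    // A block carrying 7 commitments against a max of 6 now reports both
    // numbers instead of just the offending count.
    commitmentCount, maxBlobsPerBlock := 7, 6
    if commitmentCount > maxBlobsPerBlock {
        err := fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d",
            commitmentCount, maxBlobsPerBlock)
        fmt.Println(err)
        // Output: too many kzg commitments in block: actual count 7 - max allowed 6
    }
}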
@@ -926,8 +926,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
    require.NoError(t, err)
    require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))

-   b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
+   maxCommitmentsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())
+
+   b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, maxCommitmentsPerBlock+1)}}
    rb, err = consensusblocks.NewBeaconBlock(b)
    require.NoError(t, err)
-   require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
+   require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: actual count %d - max allowed %d", maxCommitmentsPerBlock+1, maxCommitmentsPerBlock), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
}
@@ -11,7 +11,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/crypto/bls"
-   "github.com/OffchainLabs/prysm/v6/network/forks"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
    "github.com/OffchainLabs/prysm/v6/time/slots"

@@ -101,7 +100,7 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
// via the respective epoch.
func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState, blk interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
    currentEpoch := slots.ToEpoch(blk.Block().Slot())
-   fork, err := forks.Fork(currentEpoch)
+   fork, err := params.Fork(currentEpoch)
    if err != nil {
        return err
    }

@@ -200,7 +199,7 @@ func createAttestationSignatureBatch(
    if err != nil {
        return nil, err
    }
-   if err := attestation.IsValidAttestationIndices(ctx, ia); err != nil {
+   if err := attestation.IsValidAttestationIndices(ctx, ia, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
        return nil, err
    }
    indices := ia.GetAttestingIndices()
@@ -85,6 +85,7 @@ go_test(
    "//proto/engine/v1:go_default_library",
    "//proto/prysm/v1alpha1:go_default_library",
    "//testing/assert:go_default_library",
+   "//testing/fuzz:go_default_library",
    "//testing/require:go_default_library",
    "//testing/util:go_default_library",
    "//time/slots:go_default_library",
@@ -7,12 +7,13 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+   "github.com/OffchainLabs/prysm/v6/testing/fuzz"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-   fuzz "github.com/google/gofuzz"
+   gofuzz "github.com/google/gofuzz"
)

func TestFuzzProcessDeposits_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconStateElectra{}
    deposits := make([]*ethpb.Deposit, 100)
    ctx := context.Background()

@@ -27,11 +28,12 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
        }
+       fuzz.FreeMemory(i)
    }
}

func TestFuzzProcessDeposit_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    state := &ethpb.BeaconStateElectra{}
    deposit := &ethpb.Deposit{}

@@ -44,5 +46,6 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
        if err != nil && r != nil {
            t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
        }
+       fuzz.FreeMemory(i)
    }
}
@@ -48,6 +48,7 @@ go_test(
    "//consensus-types/primitives:go_default_library",
    "//proto/prysm/v1alpha1:go_default_library",
    "//testing/assert:go_default_library",
+   "//testing/fuzz:go_default_library",
    "//testing/require:go_default_library",
    "//testing/util:go_default_library",
    "@com_github_google_go_cmp//cmp:go_default_library",
@@ -5,12 +5,13 @@ import (

    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+   "github.com/OffchainLabs/prysm/v6/testing/fuzz"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-   fuzz "github.com/google/gofuzz"
+   gofuzz "github.com/google/gofuzz"
)

func TestFuzzFinalUpdates_10000(t *testing.T) {
-   fuzzer := fuzz.NewWithSeed(0)
+   fuzzer := gofuzz.NewWithSeed(0)
    base := &ethpb.BeaconState{}

    for i := 0; i < 10000; i++ {

@@ -19,5 +20,6 @@ func TestFuzzFinalUpdates_10000(t *testing.T) {
        require.NoError(t, err)
        _, err = ProcessFinalUpdates(s)
        _ = err
+       fuzz.FreeMemory(i)
    }
}
@@ -12,6 +12,7 @@ go_library(
    "//async/event:go_default_library",
    "//consensus-types/blocks:go_default_library",
    "//consensus-types/interfaces:go_default_library",
+   "//consensus-types/primitives:go_default_library",
    "//proto/prysm/v1alpha1:go_default_library",
    ],
)
@@ -4,6 +4,7 @@ package operation
import (
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
+   "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)

@@ -39,6 +40,9 @@ const (

    // BlockGossipReceived is sent after a block has been received from gossip or API that passes validation rules.
    BlockGossipReceived = 10

+   // DataColumnReceived is sent after a data column has been seen after gossip validation rules.
+   DataColumnReceived = 11
)

// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.

@@ -95,3 +99,10 @@ type BlockGossipReceivedData struct {
    // SignedBlock is the block that was received.
    SignedBlock interfaces.ReadOnlySignedBeaconBlock
}

+type DataColumnReceivedData struct {
+   Slot           primitives.Slot
+   Index          uint64
+   BlockRoot      [32]byte
+   KzgCommitments [][]byte
+}
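A hypothetical producer for the new event, following the pattern used by the other operation-feed events; the notifier wiring and the sidecar accessors below are assumptions, not code from this commit:

// Publish a DataColumnReceived event once a sidecar clears gossip validation.
notifier.OperationFeed().Send(&feed.Event{
    Type: operation.DataColumnReceived,
    Data: &operation.DataColumnReceivedData{
        Slot:           sidecar.Slot(),
        Index:          sidecar.ColumnIndex,
        BlockRoot:      sidecar.BlockRoot(),
        KzgCommitments: sidecar.KzgCommitments,
    },
})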
@@ -277,10 +277,10 @@ type CommitteeAssignment struct {
    CommitteeIndex primitives.CommitteeIndex
}

-// verifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
+// VerifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
// It checks if the epoch is not greater than the next epoch, and if the start slot of the epoch is greater
// than or equal to the minimum valid start slot calculated based on the state's current slot and historical roots.
-func verifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) error {
+func VerifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) error {
    nextEpoch := time.NextEpoch(state)
    if epoch > nextEpoch {
        return fmt.Errorf("epoch %d can't be greater than next epoch %d", epoch, nextEpoch)

@@ -308,7 +308,7 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
    defer span.End()

    // Verify if the epoch is valid for assignment based on the provided state.
-   if err := verifyAssignmentEpoch(epoch, state); err != nil {
+   if err := VerifyAssignmentEpoch(epoch, state); err != nil {
        return nil, err
    }
    startSlot, err := slots.EpochStart(epoch)

@@ -351,6 +351,61 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
    return proposerAssignments, nil
}

+// LiteAssignment is a lite version of CommitteeAssignment, and has committee length
+// and validator committee index instead of the full committee list.
+type LiteAssignment struct {
+   AttesterSlot            primitives.Slot           // slot in which to attest
+   CommitteeIndex          primitives.CommitteeIndex // position of the committee in the slot
+   CommitteeLength         uint64                    // number of members in the committee
+   ValidatorCommitteeIndex uint64                    // validator's offset inside the committee
+}
+
+// PrecomputeCommittees returns an array indexed by (slot-startSlot)
+// whose elements are the beacon committees of that slot.
+func PrecomputeCommittees(
+   ctx context.Context,
+   st state.BeaconState,
+   startSlot primitives.Slot,
+) ([][][]primitives.ValidatorIndex, error) {
+   cfg := params.BeaconConfig()
+   out := make([][][]primitives.ValidatorIndex, cfg.SlotsPerEpoch)
+
+   for relativeSlot := primitives.Slot(0); relativeSlot < cfg.SlotsPerEpoch; relativeSlot++ {
+       slot := startSlot + relativeSlot
+
+       comms, err := BeaconCommittees(ctx, st, slot)
+       if err != nil {
+           return nil, errors.Wrapf(err, "BeaconCommittees failed at slot %d", slot)
+       }
+       out[relativeSlot] = comms
+   }
+   return out, nil
+}
+
+// AssignmentForValidator scans the cached committees once
+// and returns the duty for a single validator.
+func AssignmentForValidator(
+   bySlot [][][]primitives.ValidatorIndex,
+   startSlot primitives.Slot,
+   vIdx primitives.ValidatorIndex,
+) *LiteAssignment {
+   for relativeSlot, committees := range bySlot {
+       for cIdx, committee := range committees {
+           for pos, member := range committee {
+               if member == vIdx {
+                   return &LiteAssignment{
+                       AttesterSlot:            startSlot + primitives.Slot(relativeSlot),
+                       CommitteeIndex:          primitives.CommitteeIndex(cIdx),
+                       CommitteeLength:         uint64(len(committee)),
+                       ValidatorCommitteeIndex: uint64(pos),
+                   }
+               }
+           }
+       }
+   }
+   return nil // validator is not scheduled this epoch
+}
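A plausible way the two new helpers compose, sketched under the assumption that a caller tracks a set of validator indices (context, fmt, helpers, slots, state and primitives imports as already used around this file):

// logDuties walks one epoch of committees a single time with
// PrecomputeCommittees, then answers per-validator duty lookups with
// AssignmentForValidator, avoiding repeated shuffling computations.
func logDuties(ctx context.Context, st state.BeaconState, epoch primitives.Epoch, tracked []primitives.ValidatorIndex) error {
    startSlot, err := slots.EpochStart(epoch)
    if err != nil {
        return err
    }
    bySlot, err := helpers.PrecomputeCommittees(ctx, st, startSlot)
    if err != nil {
        return err
    }
    for _, vIdx := range tracked {
        if d := helpers.AssignmentForValidator(bySlot, startSlot, vIdx); d != nil {
            fmt.Printf("validator %d: slot %d, committee %d, position %d/%d\n",
                vIdx, d.AttesterSlot, d.CommitteeIndex, d.ValidatorCommitteeIndex, d.CommitteeLength)
        }
    }
    return nil
}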
// CommitteeAssignments calculates committee assignments for each validator during the specified epoch.
// It retrieves active validator indices, determines the number of committees per slot, and computes
// assignments for each validator based on their presence in the provided validators slice.

@@ -359,7 +414,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
    defer span.End()

    // Verify if the epoch is valid for assignment based on the provided state.
-   if err := verifyAssignmentEpoch(epoch, state); err != nil {
+   if err := VerifyAssignmentEpoch(epoch, state); err != nil {
        return nil, err
    }
    startSlot, err := slots.EpochStart(epoch)
@@ -871,3 +871,48 @@ func TestBeaconCommitteesFromCache(t *testing.T) {
        assert.DeepEqual(t, committees[idx], committee)
    }
}

+func TestPrecomputeCommittees_HappyPath(t *testing.T) {
+   cfg := params.BeaconConfig()
+   start := primitives.Slot(100)
+   ctx := context.Background()
+   st, _ := util.DeterministicGenesisState(t, 256)
+
+   got, err := helpers.PrecomputeCommittees(ctx, st, start)
+
+   require.NoError(t, err)
+   require.Equal(t, len(got), int(cfg.SlotsPerEpoch), "outer slice length mismatch")
+
+   for i := range got {
+       expSlot := start + primitives.Slot(i)
+       comms, err := helpers.BeaconCommittees(ctx, st, expSlot)
+       require.NoError(t, err)
+       require.DeepEqual(t, comms, got[i])
+   }
+}
+
+func TestAssignmentForValidator(t *testing.T) {
+   start := primitives.Slot(200)
+   bySlot := [][][]primitives.ValidatorIndex{
+       {{1, 2, 3}},
+       {{7, 8, 9}},
+   }
+   vIdx := primitives.ValidatorIndex(8)
+
+   got := helpers.AssignmentForValidator(bySlot, start, vIdx)
+
+   require.NotNil(t, got)
+   require.Equal(t, start+1, got.AttesterSlot)
+   require.Equal(t, primitives.CommitteeIndex(0), got.CommitteeIndex)
+   require.Equal(t, uint64(3), got.CommitteeLength)
+   require.Equal(t, uint64(1), got.ValidatorCommitteeIndex)
+
+   t.Run("Not Found", func(t *testing.T) {
+       start = primitives.Slot(300)
+       bySlot = [][][]primitives.ValidatorIndex{
+           {{4, 5, 6}},
+       }
+       got = helpers.AssignmentForValidator(bySlot, start, primitives.ValidatorIndex(99))
+       require.IsNil(t, got)
+   })
+}
@@ -4,7 +4,6 @@ go_library(
    name = "go_default_library",
    srcs = [
        "domain.go",
-       "signature.go",
        "signing_root.go",
    ],
    importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing",

@@ -25,7 +24,6 @@ go_test(
    name = "go_default_test",
    srcs = [
        "domain_test.go",
-       "signature_test.go",
        "signing_root_test.go",
    ],
    embed = [":go_default_library"],
@@ -1,34 +0,0 @@
-package signing
-
-import (
-    "github.com/OffchainLabs/prysm/v6/config/params"
-    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
-    "github.com/pkg/errors"
-)
-
-var ErrNilRegistration = errors.New("nil signed registration")
-
-// VerifyRegistrationSignature verifies the signature of a validator's registration.
-func VerifyRegistrationSignature(
-    sr *ethpb.SignedValidatorRegistrationV1,
-) error {
-    if sr == nil || sr.Message == nil {
-        return ErrNilRegistration
-    }
-
-    d := params.BeaconConfig().DomainApplicationBuilder
-    // Per spec, we want the fork version and genesis validator to be nil.
-    // Which is genesis value and zero by default.
-    sd, err := ComputeDomain(
-        d,
-        nil, /* fork version */
-        nil /* genesis val root */)
-    if err != nil {
-        return err
-    }
-
-    if err := VerifySigningRoot(sr.Message, sr.Message.Pubkey, sr.Signature, sd); err != nil {
-        return ErrSigFailedToVerify
-    }
-    return nil
-}

@@ -1,42 +0,0 @@
-package signing_test
-
-import (
-    "testing"
-    "time"
-
-    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
-    "github.com/OffchainLabs/prysm/v6/config/params"
-    "github.com/OffchainLabs/prysm/v6/crypto/bls"
-    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
-    "github.com/OffchainLabs/prysm/v6/testing/require"
-)
-
-func TestVerifyRegistrationSignature(t *testing.T) {
-    sk, err := bls.RandKey()
-    require.NoError(t, err)
-    reg := &ethpb.ValidatorRegistrationV1{
-        FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
-        GasLimit:     123456,
-        Timestamp:    uint64(time.Now().Unix()),
-        Pubkey:       sk.PublicKey().Marshal(),
-    }
-    d := params.BeaconConfig().DomainApplicationBuilder
-    domain, err := signing.ComputeDomain(d, nil, nil)
-    require.NoError(t, err)
-    sr, err := signing.ComputeSigningRoot(reg, domain)
-    require.NoError(t, err)
-    sk.Sign(sr[:]).Marshal()
-
-    sReg := &ethpb.SignedValidatorRegistrationV1{
-        Message:   reg,
-        Signature: sk.Sign(sr[:]).Marshal(),
-    }
-    require.NoError(t, signing.VerifyRegistrationSignature(sReg))
-
-    sReg.Signature = []byte("bad")
-    require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrSigFailedToVerify)
-
-    sReg.Message = nil
-    require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrNilRegistration)
-}
@@ -3,22 +3,27 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
    name = "go_default_library",
    srcs = [
-       "availability.go",
-       "cache.go",
+       "availability_blobs.go",
+       "availability_columns.go",
+       "blob_cache.go",
+       "data_column_cache.go",
        "iface.go",
        "mock.go",
    ],
    importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
    visibility = ["//visibility:public"],
    deps = [
+       "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/db/filesystem:go_default_library",
        "//beacon-chain/verification:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//runtime/logging:go_default_library",
        "//runtime/version:go_default_library",
        "//time/slots:go_default_library",
+       "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
        "@com_github_sirupsen_logrus//:go_default_library",
    ],

@@ -27,13 +32,18 @@ go_library(
go_test(
    name = "go_default_test",
    srcs = [
-       "availability_test.go",
-       "cache_test.go",
+       "availability_blobs_test.go",
+       "availability_columns_test.go",
+       "blob_cache_test.go",
+       "data_column_cache_test.go",
    ],
    embed = [":go_default_library"],
    deps = [
+       "//beacon-chain/core/peerdas:go_default_library",
        "//beacon-chain/db/filesystem:go_default_library",
        "//beacon-chain/verification:go_default_library",
+       "//cmd/beacon-chain/flags:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",

@@ -41,6 +51,7 @@ go_test(
    "//testing/require:go_default_library",
    "//testing/util:go_default_library",
    "//time/slots:go_default_library",
+   "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
    "@com_github_pkg_errors//:go_default_library",
    ],
)
|
||||
errMixedRoots = errors.New("BlobSidecars must all be for the same block")
|
||||
)
|
||||
|
||||
// LazilyPersistentStore is an implementation of AvailabilityStore to be used when batch syncing.
|
||||
// LazilyPersistentStoreBlob is an implementation of AvailabilityStore to be used when batch syncing.
|
||||
// This implementation will hold any blobs passed to Persist until the IsDataAvailable is called for their
|
||||
// block, at which time they will undergo full verification and be saved to the disk.
|
||||
type LazilyPersistentStore struct {
|
||||
type LazilyPersistentStoreBlob struct {
|
||||
store *filesystem.BlobStorage
|
||||
cache *cache
|
||||
cache *blobCache
|
||||
verifier BlobBatchVerifier
|
||||
}
|
||||
|
||||
var _ AvailabilityStore = &LazilyPersistentStore{}
|
||||
var _ AvailabilityStore = &LazilyPersistentStoreBlob{}
|
||||
|
||||
// BlobBatchVerifier enables LazyAvailabilityStore to manage the verification process
|
||||
// going from ROBlob->VerifiedROBlob, while avoiding the decision of which individual verifications
|
||||
@@ -42,10 +42,10 @@ type BlobBatchVerifier interface {
|
||||
|
||||
// NewLazilyPersistentStore creates a new LazilyPersistentStore. This constructor should always be used
|
||||
// when creating a LazilyPersistentStore because it needs to initialize the cache under the hood.
|
||||
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStore {
|
||||
return &LazilyPersistentStore{
|
||||
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStoreBlob {
|
||||
return &LazilyPersistentStoreBlob{
|
||||
store: store,
|
||||
cache: newCache(),
|
||||
cache: newBlobCache(),
|
||||
verifier: verifier,
|
||||
}
|
||||
}
|
||||
@@ -53,25 +53,31 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchV
|
||||
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
|
||||
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
|
||||
// by the given block are guaranteed to be persisted for the remainder of the retention period.
|
||||
func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
|
||||
if len(sc) == 0 {
|
||||
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
|
||||
if len(sidecars) == 0 {
|
||||
return nil
|
||||
}
|
||||
if len(sc) > 1 {
|
||||
first := sc[0].BlockRoot()
|
||||
for i := 1; i < len(sc); i++ {
|
||||
if first != sc[i].BlockRoot() {
|
||||
|
||||
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sidecars)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "blob sidecars from sidecars")
|
||||
}
|
||||
|
||||
if len(blobSidecars) > 1 {
|
||||
firstRoot := blobSidecars[0].BlockRoot()
|
||||
for _, sidecar := range blobSidecars[1:] {
|
||||
if sidecar.BlockRoot() != firstRoot {
|
||||
return errMixedRoots
|
||||
}
|
||||
}
|
||||
}
|
||||
if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
|
||||
if !params.WithinDAPeriod(slots.ToEpoch(blobSidecars[0].Slot()), slots.ToEpoch(current)) {
|
||||
return nil
|
||||
}
|
||||
key := keyFromSidecar(sc[0])
|
||||
key := keyFromSidecar(blobSidecars[0])
|
||||
entry := s.cache.ensure(key)
|
||||
for i := range sc {
|
||||
if err := entry.stash(&sc[i]); err != nil {
|
||||
for _, blobSidecar := range blobSidecars {
|
||||
if err := entry.stash(&blobSidecar); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@@ -80,7 +86,7 @@ func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.RO
|
||||
|
||||
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
|
||||
// BlobSidecars already in the db are assumed to have been previously verified against the block.
|
||||
func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
|
||||
func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
|
||||
blockCommitments, err := commitmentsToCheck(b, current)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
|
||||
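Both the blob and data-column stores assert var _ AvailabilityStore = ... with the same two methods, which pins down the shared interface. Its shape, inferred from this diff alone:

// AvailabilityStore as implied by LazilyPersistentStoreBlob and
// LazilyPersistentStoreColumn: Persist now takes the generic
// blocks.ROSidecar, so one interface serves blobs and data columns.
type AvailabilityStore interface {
    Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error
    IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
}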
@@ -116,9 +116,11 @@ func TestLazilyPersistent_Missing(t *testing.T) {
    ctx := context.Background()
    store := filesystem.NewEphemeralBlobStorage(t)

-   blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
+   blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)

-   mbv := &mockBlobBatchVerifier{t: t, scs: scs}
+   scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
+
+   mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
    as := NewLazilyPersistentStore(store, mbv)

    // Only one commitment persisted, should return error with other indices

@@ -141,12 +143,14 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
    ctx := context.Background()
    store := filesystem.NewEphemeralBlobStorage(t)

-   blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
+   blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)

    mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
-   scs[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
+   blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
    as := NewLazilyPersistentStore(store, mbv)

+   scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
+
    // Only one commitment persisted, should return error with other indices
    require.NoError(t, as.Persist(1, scs[0]))
    err := as.IsDataAvailable(ctx, 1, blk)

@@ -155,7 +159,10 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
}

func TestLazyPersistOnceCommitted(t *testing.T) {
-   _, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
+   _, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
+
+   scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
+
    as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
    // stashes as expected
    require.NoError(t, as.Persist(1, scs...))

@@ -163,10 +170,13 @@ func TestLazyPersistOnceCommitted(t *testing.T) {
    require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)

    // ignores index out of bound
-   scs[0].Index = 6
-   require.ErrorIs(t, as.Persist(1, scs[0]), errIndexOutOfBounds)
+   blobSidecars[0].Index = 6
+   require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)

-   _, more := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
+   _, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
+
+   more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
+
    // ignores sidecars before the retention period
    slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
    require.NoError(t, err)
beacon-chain/das/availability_columns.go (new file, 208 lines)
@@ -0,0 +1,208 @@
package das

import (
    "context"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/runtime/version"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/ethereum/go-ethereum/p2p/enode"
    errors "github.com/pkg/errors"
)

// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until the IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
    store                  *filesystem.DataColumnStorage
    nodeID                 enode.ID
    cache                  *dataColumnCache
    custodyInfo            *peerdas.CustodyInfo
    newDataColumnsVerifier verification.NewDataColumnsVerifier
}

var _ AvailabilityStore = &LazilyPersistentStoreColumn{}

// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
    VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}

// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(store *filesystem.DataColumnStorage, nodeID enode.ID, newDataColumnsVerifier verification.NewDataColumnsVerifier, custodyInfo *peerdas.CustodyInfo) *LazilyPersistentStoreColumn {
    return &LazilyPersistentStoreColumn{
        store:                  store,
        nodeID:                 nodeID,
        cache:                  newDataColumnCache(),
        custodyInfo:            custodyInfo,
        newDataColumnsVerifier: newDataColumnsVerifier,
    }
}
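A plausible end-to-end use of the store during batch sync, based only on the methods defined in this file (the verifier-factory and node-ID wiring are assumed):

// Hypothetical batch-sync flow: stash sidecars as they arrive, then gate the
// block import on availability.
func importWithColumns(ctx context.Context, avs das.AvailabilityStore, current primitives.Slot,
    blk blocks.ROBlock, sidecars []blocks.ROSidecar) error {
    if err := avs.Persist(current, sidecars...); err != nil {
        return err
    }
    // Verifies, saves to disk, and returns nil only when every custodied
    // column referenced by the block is accounted for.
    return avs.IsDataAvailable(ctx, current, blk)
}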
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
|
||||
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
|
||||
// by the given block are guaranteed to be persisted for the remainder of the retention period.
|
||||
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
|
||||
if len(sidecars) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "blob sidecars from sidecars")
|
||||
}
|
||||
|
||||
// It is safe to retrieve the first sidecar.
|
||||
firstSidecar := dataColumnSidecars[0]
|
||||
|
||||
if len(sidecars) > 1 {
|
||||
firstRoot := firstSidecar.BlockRoot()
|
||||
for _, sidecar := range dataColumnSidecars[1:] {
|
||||
if sidecar.BlockRoot() != firstRoot {
|
||||
return errMixedRoots
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
|
||||
if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
|
||||
return nil
|
||||
}
|
||||
|
||||
key := cacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
|
||||
entry := s.cache.ensure(key)
|
||||
|
||||
for _, sidecar := range dataColumnSidecars {
|
||||
if err := entry.stash(&sidecar); err != nil {
|
||||
return errors.Wrap(err, "stash DataColumnSidecar")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
	blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
	if err != nil {
		return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
	}

	// Return early for blocks that do not have any commitments.
	if blockCommitments.count() == 0 {
		return nil
	}

	// Get the root of the block.
	blockRoot := block.Root()

	// Build the cache key for the block.
	key := cacheKey{slot: block.Block().Slot(), root: blockRoot}

	// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
	entry := s.cache.ensure(key)

	// Delete the cache entry for the block at the end.
	defer s.cache.delete(key)

	// Set the disk summary for the block in the cache entry.
	entry.setDiskSummary(s.store.Summary(blockRoot))

	// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
	// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
	// ignore their response and decrease their peer score.
	roDataColumns, err := entry.filter(blockRoot, blockCommitments)
	if err != nil {
		return errors.Wrap(err, "entry filter")
	}

	// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
	verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)

	if err := verifier.ValidFields(); err != nil {
		return errors.Wrap(err, "valid")
	}

	if err := verifier.SidecarInclusionProven(); err != nil {
		return errors.Wrap(err, "sidecar inclusion proven")
	}

	if err := verifier.SidecarKzgProofVerified(); err != nil {
		return errors.Wrap(err, "sidecar KZG proof verified")
	}

	verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
	if err != nil {
		return errors.Wrap(err, "verified RO data columns - should never happen")
	}

	if err := s.store.Save(verifiedRoDataColumns); err != nil {
		return errors.Wrap(err, "save data column sidecars")
	}

	return nil
}
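
// Illustrative sketch (not part of this change): the intended call sequence for the
// batch-sync flow built from the two methods above. The function name and signature
// are hypothetical; it simply chains Persist and IsDataAvailable.
func batchSyncAvailabilitySketch(ctx context.Context, avs *LazilyPersistentStoreColumn, current primitives.Slot, block blocks.ROBlock, sidecars []blocks.ROSidecar) error {
	// Stash the batch in the in-memory cache; nothing is verified or written yet.
	if err := avs.Persist(current, sidecars...); err != nil {
		return err
	}

	// Verify the cached sidecars against the block and, on success, flush them to
	// disk, where they remain for the rest of the retention period.
	return avs.IsDataAvailable(ctx, current, block)
}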

// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
	// Return early for blocks that are pre-Fulu.
	if block.Version() < version.Fulu {
		return &safeCommitmentsArray{}, nil
	}

	// Compute the block epoch.
	blockSlot := block.Block().Slot()
	blockEpoch := slots.ToEpoch(blockSlot)

	// Compute the current epoch.
	currentEpoch := slots.ToEpoch(currentSlot)

	// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
	if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
		return &safeCommitmentsArray{}, nil
	}

	// Retrieve the KZG commitments for the block.
	kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
	if err != nil {
		return nil, errors.Wrap(err, "blob KZG commitments")
	}

	// Return early if there are no commitments in the block.
	if len(kzgCommitments) == 0 {
		return &safeCommitmentsArray{}, nil
	}

	// Retrieve the groups count.
	custodyGroupCount := s.custodyInfo.ActualGroupCount()

	// Retrieve peer info.
	peerInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
	if err != nil {
		return nil, errors.Wrap(err, "peer info")
	}

	// Create a safe commitments array for the custody columns.
	commitmentsArray := &safeCommitmentsArray{}
	commitmentsArraySize := uint64(len(commitmentsArray))

	for column := range peerInfo.CustodyColumns {
		if column >= commitmentsArraySize {
			return nil, errors.Errorf("custody column index %d too high (max allowed %d) - should never happen", column, commitmentsArraySize)
		}

		commitmentsArray[column] = kzgCommitments
	}

	return commitmentsArray, nil
}
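
To make the sparse layout concrete: fullCommitmentsToCheck leaves non-custodied columns nil and points every custodied column at the block's full commitment list. The self-contained sketch below reproduces that shape with plain types (illustrative only; the column count of 128 mirrors fieldparams.NumberOfColumns but is hard-coded here).

package main

import "fmt"

func main() {
	const numberOfColumns = 128
	kzgCommitments := [][]byte{[]byte("c0"), []byte("c1")} // commitments from the block body
	custodyColumns := map[uint64]bool{3: true, 57: true}   // columns this node must custody

	var commitmentsArray [numberOfColumns][][]byte
	for column := range custodyColumns {
		commitmentsArray[column] = kzgCommitments
	}

	// Only custodied columns carry commitments to check; the rest stay nil and are skipped.
	for i, c := range commitmentsArray {
		if c != nil {
			fmt.Printf("column %d -> %d commitments to check\n", i, len(c))
		}
	}
}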
303
beacon-chain/das/availability_columns_test.go
Normal file
@@ -0,0 +1,303 @@
package das

import (
	"context"
	"testing"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
	"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

var commitments = [][]byte{
	bytesutil.PadTo([]byte("a"), 48),
	bytesutil.PadTo([]byte("b"), 48),
	bytesutil.PadTo([]byte("c"), 48),
	bytesutil.PadTo([]byte("d"), 48),
}

func TestPersist(t *testing.T) {
	t.Run("no sidecars", func(t *testing.T) {
		dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
		lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
		err := lazilyPersistentStoreColumns.Persist(0)
		require.NoError(t, err)
		require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
	})

	t.Run("mixed roots", func(t *testing.T) {
		dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)

		dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
			{1}: {{ColumnIndex: 1}},
			{2}: {{ColumnIndex: 2}},
		}

		roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
		lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})

		err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
		require.ErrorIs(t, err, errMixedRoots)
		require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
	})

	t.Run("outside DA period", func(t *testing.T) {
		dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)

		dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
			{1}: {{ColumnIndex: 1}},
		}

		roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
		lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})

		err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
		require.NoError(t, err)
		require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
	})

	t.Run("nominal", func(t *testing.T) {
		dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)

		dataColumnParamsByBlockRoot := map[[fieldparams.RootLength]byte][]util.DataColumnParams{
			{}: {{ColumnIndex: 1}, {ColumnIndex: 5}},
		}

		roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
		lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})

		err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
		require.NoError(t, err)
		require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))

		key := cacheKey{slot: 0, root: [fieldparams.RootLength]byte{}}
		entry := lazilyPersistentStoreColumns.cache.entries[key]

		// A call to Persist does NOT save the sidecars to disk.
		require.Equal(t, uint64(0), entry.diskSummary.Count())

		require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
		require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])

		for i, roDataColumn := range entry.scs {
			if map[int]bool{1: true, 5: true}[i] {
				continue
			}

			require.IsNil(t, roDataColumn)
		}
	})
}

func TestIsDataAvailable(t *testing.T) {
	newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
		return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
	}

	ctx := context.Background()

	t.Run("without commitments", func(t *testing.T) {
		signedBeaconBlockFulu := util.NewBeaconBlockFulu()
		signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)

		dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
		lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})

		err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
		require.NoError(t, err)
	})

	t.Run("with commitments", func(t *testing.T) {
		signedBeaconBlockFulu := util.NewBeaconBlockFulu()
		signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
		signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
		root := signedRoBlock.Root()

		dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
		lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})

		indices := [...]uint64{1, 17, 87, 102}
		dataColumnsParams := make([]util.DataColumnParams, 0, len(indices))
		for _, index := range indices {
			dataColumnParams := util.DataColumnParams{
				ColumnIndex:    index,
				KzgCommitments: commitments,
			}

			dataColumnsParams = append(dataColumnsParams, dataColumnParams)
		}

		dataColumnsParamsByBlockRoot := util.DataColumnsParamsByRoot{root: dataColumnsParams}
		_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParamsByBlockRoot)

		key := cacheKey{root: root}
		entry := lazilyPersistentStoreColumns.cache.ensure(key)
		defer lazilyPersistentStoreColumns.cache.delete(key)

		for _, verifiedRoDataColumn := range verifiedRoDataColumns {
			err := entry.stash(&verifiedRoDataColumn.RODataColumn)
			require.NoError(t, err)
		}

		err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
		require.NoError(t, err)

		actual, err := dataColumnStorage.Get(root, indices[:])
		require.NoError(t, err)

		summary := dataColumnStorage.Summary(root)
		require.Equal(t, uint64(len(indices)), summary.Count())
		require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
	})
}

func TestFullCommitmentsToCheck(t *testing.T) {
	windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
	require.NoError(t, err)

	testCases := []struct {
		name        string
		commitments [][]byte
		block       func(*testing.T) blocks.ROBlock
		slot        primitives.Slot
	}{
		{
			name: "Pre-Fulu block",
			block: func(t *testing.T) blocks.ROBlock {
				return newSignedRoBlock(t, util.NewBeaconBlockElectra())
			},
		},
		{
			name: "Commitments outside data availability window",
			block: func(t *testing.T) blocks.ROBlock {
				beaconBlockElectra := util.NewBeaconBlockElectra()

				// Block is from slot 0, "current slot" is window size +1 (so outside the window)
				beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments

				return newSignedRoBlock(t, beaconBlockElectra)
			},
			slot: windowSlots + 1,
		},
		{
			name: "Commitments within data availability window",
			block: func(t *testing.T) blocks.ROBlock {
				signedBeaconBlockFulu := util.NewBeaconBlockFulu()
				signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
				signedBeaconBlockFulu.Block.Slot = 100

				return newSignedRoBlock(t, signedBeaconBlockFulu)
			},
			commitments: commitments,
			slot:        100,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			resetFlags := flags.Get()
			gFlags := new(flags.GlobalFlags)
			gFlags.SubscribeAllDataSubnets = true
			flags.Init(gFlags)
			defer flags.Init(resetFlags)

			b := tc.block(t)
			s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, &peerdas.CustodyInfo{})

			commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
			require.NoError(t, err)

			for _, commitments := range commitmentsArray {
				require.DeepEqual(t, tc.commitments, commitments)
			}
		})
	}
}

func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, dataColumnParamsByBlockRoot util.DataColumnsParamsByRoot) ([]blocks.ROSidecar, []blocks.RODataColumn) {
	roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

	roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
	for _, roDataColumn := range roDataColumns {
		roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
	}

	return roSidecars, roDataColumns
}

func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
	sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
	require.NoError(t, err)

	rb, err := blocks.NewROBlock(sb)
	require.NoError(t, err)

	return rb
}

type mockDataColumnsVerifier struct {
	t                  *testing.T
	dataColumnSidecars []blocks.RODataColumn

	validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}

var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}

func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
	require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)

	verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
	for _, dataColumnSidecar := range m.dataColumnSidecars {
		verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
		verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
	}

	return verifiedDataColumnSidecars, nil
}

func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}

func (m *mockDataColumnsVerifier) ValidFields() error {
	m.validCalled = true
	return nil
}

func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
	return nil
}

func (m *mockDataColumnsVerifier) NotFromFutureSlot() error                         { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error                        { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }

func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
	return nil
}

func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
	return nil
}

func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error       { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }

func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
	m.SidecarInclusionProvenCalled = true
	return nil
}

func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
	m.SidecarKzgProofVerifiedCalled = true
	return nil
}

func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }
@@ -4,33 +4,29 @@ import (
	"bytes"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
+	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/pkg/errors"
)

-var (
-	ErrDuplicateSidecar   = errors.New("duplicate sidecar stashed in AvailabilityStore")
-	errIndexOutOfBounds   = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")
-	errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
-	errMissingSidecar     = errors.New("no sidecar in cache for block commitment")
-)
+var errIndexOutOfBounds = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")

// cacheKey includes the slot so that we can easily iterate through the cache and compare
// slots for eviction purposes. Whether the input is the block or the sidecar, we always have
// the root+slot when interacting with the cache, so it isn't an inconvenience to use both.
type cacheKey struct {
	slot primitives.Slot
-	root [32]byte
+	root [fieldparams.RootLength]byte
}

-type cache struct {
-	entries map[cacheKey]*cacheEntry
+type blobCache struct {
+	entries map[cacheKey]*blobCacheEntry
}

-func newCache() *cache {
-	return &cache{entries: make(map[cacheKey]*cacheEntry)}
+func newBlobCache() *blobCache {
+	return &blobCache{entries: make(map[cacheKey]*blobCacheEntry)}
}

// keyFromSidecar is a convenience method for constructing a cacheKey from a BlobSidecar value.
@@ -44,34 +40,34 @@ func keyFromBlock(b blocks.ROBlock) cacheKey {
}

// ensure returns the entry for the given key, creating it if it isn't already present.
-func (c *cache) ensure(key cacheKey) *cacheEntry {
+func (c *blobCache) ensure(key cacheKey) *blobCacheEntry {
	e, ok := c.entries[key]
	if !ok {
-		e = &cacheEntry{}
+		e = &blobCacheEntry{}
		c.entries[key] = e
	}
	return e
}

// delete removes the cache entry from the cache.
-func (c *cache) delete(key cacheKey) {
+func (c *blobCache) delete(key cacheKey) {
	delete(c.entries, key)
}

-// cacheEntry holds a fixed-length cache of BlobSidecars.
-type cacheEntry struct {
+// blobCacheEntry holds a fixed-length cache of BlobSidecars.
+type blobCacheEntry struct {
	scs         []*blocks.ROBlob
	diskSummary filesystem.BlobStorageSummary
}

-func (e *cacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
+func (e *blobCacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
	e.diskSummary = sum
}

// stash adds an item to the in-memory cache of BlobSidecars.
// Only the first BlobSidecar of a given Index will be kept in the cache.
// stash will return an error if the given blob is already in the cache, or if the Index is out of bounds.
-func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
+func (e *blobCacheEntry) stash(sc *blocks.ROBlob) error {
	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(sc.Slot())
	if sc.Index >= uint64(maxBlobsPerBlock) {
		return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.Index)
@@ -92,7 +88,7 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
-func (e *cacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
+func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
	count := len(kc)
	if e.diskSummary.AllAvailable(count) {
		return nil, nil
@@ -14,7 +14,7 @@ import (
)

func TestCacheEnsureDelete(t *testing.T) {
-	c := newCache()
+	c := newBlobCache()
	require.Equal(t, 0, len(c.entries))
	root := bytesutil.ToBytes32([]byte("root"))
	slot := primitives.Slot(1234)
@@ -25,18 +25,18 @@ func TestCacheEnsureDelete(t *testing.T) {

	c.delete(k)
	require.Equal(t, 0, len(c.entries))
-	var nilEntry *cacheEntry
+	var nilEntry *blobCacheEntry
	require.Equal(t, nilEntry, c.entries[k])
}

-type filterTestCaseSetupFunc func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob)
+type filterTestCaseSetupFunc func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob)

func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
-	return func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
+	return func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
		blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
		commits, err := commitmentsToCheck(blk, blk.Block().Slot())
		require.NoError(t, err)
-		entry := &cacheEntry{}
+		entry := &blobCacheEntry{}
		if len(onDisk) > 0 {
			od := map[[32]byte][]int{blk.Root(): onDisk}
			sumz := filesystem.NewMockBlobStorageSummarizer(t, od)
@@ -125,12 +125,12 @@ func TestFilter(t *testing.T) {
	require.NoError(t, err)
	cases := []struct {
		name  string
-		setup func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob)
+		setup func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob)
		err   error
	}{
		{
			name: "commitments mismatch - extra sidecar",
-			setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
+			setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
				entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
				commits[5] = nil
				return entry, commits, expected
@@ -139,7 +139,7 @@ func TestFilter(t *testing.T) {
		},
		{
			name: "sidecar missing",
-			setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
+			setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
				entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
				entry.scs[5] = nil
				return entry, commits, expected
@@ -148,7 +148,7 @@ func TestFilter(t *testing.T) {
		},
		{
			name: "commitments mismatch - different bytes",
-			setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
+			setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
				entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
				entry.scs[5].KzgCommitment = []byte("nope")
				return entry, commits, expected
131
beacon-chain/das/data_column_cache.go
Normal file
@@ -0,0 +1,131 @@
package das

import (
	"bytes"
	"slices"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/pkg/errors"
)

var (
	ErrDuplicateSidecar   = errors.New("duplicate sidecar stashed in AvailabilityStore")
	errColumnIndexTooHigh = errors.New("column index too high")
	errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
	errMissingSidecar     = errors.New("no sidecar in cache for block commitment")
)

type dataColumnCache struct {
	entries map[cacheKey]*dataColumnCacheEntry
}

func newDataColumnCache() *dataColumnCache {
	return &dataColumnCache{entries: make(map[cacheKey]*dataColumnCacheEntry)}
}

// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *dataColumnCache) ensure(key cacheKey) *dataColumnCacheEntry {
	entry, ok := c.entries[key]
	if !ok {
		entry = &dataColumnCacheEntry{}
		c.entries[key] = entry
	}

	return entry
}

// delete removes the cache entry from the cache.
func (c *dataColumnCache) delete(key cacheKey) {
	delete(c.entries, key)
}

// dataColumnCacheEntry holds a fixed-length cache of DataColumnSidecars.
type dataColumnCacheEntry struct {
	scs         [fieldparams.NumberOfColumns]*blocks.RODataColumn
	diskSummary filesystem.DataColumnStorageSummary
}

func (e *dataColumnCacheEntry) setDiskSummary(sum filesystem.DataColumnStorageSummary) {
	e.diskSummary = sum
}

// stash adds an item to the in-memory cache of DataColumnSidecars.
// Only the first DataColumnSidecar of a given Index will be kept in the cache.
// stash will return an error if the given data column is already in the cache, or if the Index is out of bounds.
func (e *dataColumnCacheEntry) stash(sc *blocks.RODataColumn) error {
	if sc.Index >= fieldparams.NumberOfColumns {
		return errors.Wrapf(errColumnIndexTooHigh, "index=%d", sc.Index)
	}

	if e.scs[sc.Index] != nil {
		return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitments)
	}

	e.scs[sc.Index] = sc

	return nil
}

func (e *dataColumnCacheEntry) filter(root [32]byte, commitmentsArray *safeCommitmentsArray) ([]blocks.RODataColumn, error) {
	nonEmptyIndices := commitmentsArray.nonEmptyIndices()
	if e.diskSummary.AllAvailable(nonEmptyIndices) {
		return nil, nil
	}

	commitmentsCount := commitmentsArray.count()
	sidecars := make([]blocks.RODataColumn, 0, commitmentsCount)

	for i := range nonEmptyIndices {
		if e.diskSummary.HasIndex(i) {
			continue
		}

		if e.scs[i] == nil {
			return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
		}

		if !sliceBytesEqual(commitmentsArray[i], e.scs[i].KzgCommitments) {
			return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitments, commitmentsArray[i])
		}

		sidecars = append(sidecars, *e.scs[i])
	}

	return sidecars, nil
}

// safeCommitmentsArray is a fixed size array of commitments.
// This is helpful for avoiding gratuitous bounds checks.
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte

// count returns the number of commitments in the array.
func (s *safeCommitmentsArray) count() int {
	count := 0

	for i := range s {
		if s[i] != nil {
			count++
		}
	}

	return count
}

// nonEmptyIndices returns a map of indices that are non-nil in the array.
func (s *safeCommitmentsArray) nonEmptyIndices() map[uint64]bool {
	columns := make(map[uint64]bool)

	for i := range s {
		if s[i] != nil {
			columns[uint64(i)] = true
		}
	}

	return columns
}

func sliceBytesEqual(a, b [][]byte) bool {
	return slices.EqualFunc(a, b, bytes.Equal)
}
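
filter's decision flow in miniature: indices already on disk are skipped, while a missing cached sidecar or a commitment mismatch aborts the whole batch. The standalone sketch below reproduces that logic with plain maps (illustrative only; string commitments stand in for KZG commitments, and all names are hypothetical).

package main

import (
	"errors"
	"fmt"
)

var (
	errMissing  = errors.New("no sidecar in cache for block commitment")
	errMismatch = errors.New("commitment mismatch")
)

func filterSketch(wanted map[uint64]string, cached map[uint64]string, onDisk map[uint64]bool) ([]uint64, error) {
	var toVerify []uint64
	for idx, commitment := range wanted {
		if onDisk[idx] {
			continue // already persisted and verified earlier
		}
		got, ok := cached[idx]
		if !ok {
			return nil, fmt.Errorf("index %d: %w", idx, errMissing)
		}
		if got != commitment {
			return nil, fmt.Errorf("index %d: %w", idx, errMismatch)
		}
		toVerify = append(toVerify, idx)
	}
	return toVerify, nil
}

func main() {
	wanted := map[uint64]string{3: "c", 57: "c"}
	cached := map[uint64]string{57: "c"}
	onDisk := map[uint64]bool{3: true}

	out, err := filterSketch(wanted, cached, onDisk)
	fmt.Println(out, err) // [57] <nil>
}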
144
beacon-chain/das/data_column_cache_test.go
Normal file
@@ -0,0 +1,144 @@
package das

import (
	"testing"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
)

func TestEnsureDeleteSetDiskSummary(t *testing.T) {
	c := newDataColumnCache()
	key := cacheKey{}
	entry := c.ensure(key)
	require.DeepEqual(t, dataColumnCacheEntry{}, *entry)

	diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{true})
	entry.setDiskSummary(diskSummary)
	entry = c.ensure(key)
	require.DeepEqual(t, dataColumnCacheEntry{diskSummary: diskSummary}, *entry)

	c.delete(key)
	entry = c.ensure(key)
	require.DeepEqual(t, dataColumnCacheEntry{}, *entry)
}

func TestStash(t *testing.T) {
	t.Run("Index too high", func(t *testing.T) {
		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 10_000}}}
		roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var entry dataColumnCacheEntry
		err := entry.stash(&roDataColumns[0])
		require.NotNil(t, err)
	})

	t.Run("Nominal and already existing", func(t *testing.T) {
		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{{1}: {{ColumnIndex: 1}}}
		roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var entry dataColumnCacheEntry
		err := entry.stash(&roDataColumns[0])
		require.NoError(t, err)

		require.DeepEqual(t, roDataColumns[0], entry.scs[1])

		err = entry.stash(&roDataColumns[0])
		require.NotNil(t, err)
	})
}

func TestFilterDataColumns(t *testing.T) {
	t.Run("All available", func(t *testing.T) {
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{{1}}, nil, [][]byte{{3}}}

		diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true, false, true})

		dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}

		actual, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
		require.NoError(t, err)
		require.IsNil(t, actual)
	})

	t.Run("Some scs missing", func(t *testing.T) {
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{{1}}}

		diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{})

		dataColumnCacheEntry := dataColumnCacheEntry{diskSummary: diskSummary}

		_, err := dataColumnCacheEntry.filter([fieldparams.RootLength]byte{}, &commitmentsArray)
		require.NotNil(t, err)
	})

	t.Run("Commitments not equal", func(t *testing.T) {
		root := [fieldparams.RootLength]byte{}
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{{1}}}

		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: {{ColumnIndex: 1}}}
		roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
		scs[1] = &roDataColumns[0]

		dataColumnCacheEntry := dataColumnCacheEntry{scs: scs}

		_, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
		require.NotNil(t, err)
	})

	t.Run("Nominal", func(t *testing.T) {
		root := [fieldparams.RootLength]byte{}
		commitmentsArray := safeCommitmentsArray{nil, [][]byte{{1}}, nil, [][]byte{{3}}}

		diskSummary := filesystem.NewDataColumnStorageSummary(42, [fieldparams.NumberOfColumns]bool{false, true})

		dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: {{ColumnIndex: 3, KzgCommitments: [][]byte{{3}}}}}
		expected, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)

		var scs [fieldparams.NumberOfColumns]*blocks.RODataColumn
		scs[3] = &expected[0]

		dataColumnCacheEntry := dataColumnCacheEntry{scs: scs, diskSummary: diskSummary}

		actual, err := dataColumnCacheEntry.filter(root, &commitmentsArray)
		require.NoError(t, err)

		require.DeepEqual(t, expected, actual)
	})
}

func TestCount(t *testing.T) {
	s := safeCommitmentsArray{nil, [][]byte{{1}}, nil, [][]byte{{3}}}
	require.Equal(t, 2, s.count())
}

func TestNonEmptyIndices(t *testing.T) {
	s := safeCommitmentsArray{nil, [][]byte{{10}}, nil, [][]byte{{20}}}
	actual := s.nonEmptyIndices()
	require.DeepEqual(t, map[uint64]bool{1: true, 3: true}, actual)
}

func TestSliceBytesEqual(t *testing.T) {
	t.Run("Different lengths", func(t *testing.T) {
		a := [][]byte{{1, 2, 3}}
		b := [][]byte{{1, 2, 3}, {4, 5, 6}}
		require.Equal(t, false, sliceBytesEqual(a, b))
	})

	t.Run("Same length but different content", func(t *testing.T) {
		a := [][]byte{{1, 2, 3}, {4, 5, 6}}
		b := [][]byte{{1, 2, 3}, {4, 5, 7}}
		require.Equal(t, false, sliceBytesEqual(a, b))
	})

	t.Run("Equal slices", func(t *testing.T) {
		a := [][]byte{{1, 2, 3}, {4, 5, 6}}
		b := [][]byte{{1, 2, 3}, {4, 5, 6}}
		require.Equal(t, true, sliceBytesEqual(a, b))
	})
}
@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
	IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
-	Persist(current primitives.Slot, sc ...blocks.ROBlob) error
+	Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
}
@@ -5,6 +5,7 @@ import (

	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
+	errors "github.com/pkg/errors"
)

// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
@@ -24,9 +25,13 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}

// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
-func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
+func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
+	blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
+	if err != nil {
+		return errors.Wrap(err, "blob sidecars from sidecars")
+	}
	if m.PersistBlobsCallback != nil {
-		return m.PersistBlobsCallback(current, sc...)
+		return m.PersistBlobsCallback(current, blobSidecars...)
	}
	return nil
}
@@ -8,6 +8,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/time/slots"
+	"github.com/pkg/errors"
	"github.com/spf13/afero"
)

@@ -46,14 +47,21 @@ func NewWarmedEphemeralBlobStorageUsingFs(t testing.TB, fs afero.Fs, opts ...Blo
	return bs
}

-type BlobMocker struct {
-	fs afero.Fs
-	bs *BlobStorage
-}
+type (
+	BlobMocker struct {
+		fs afero.Fs
+		bs *BlobStorage
+	}
+
+	DataColumnMocker struct {
+		fs  afero.Fs
+		dcs *DataColumnStorage
+	}
+)

// CreateFakeIndices creates empty blob sidecar files at the expected path for the given
// root and indices to influence the result of Indices().
-func (bm *BlobMocker) CreateFakeIndices(root [32]byte, slot primitives.Slot, indices ...uint64) error {
+func (bm *BlobMocker) CreateFakeIndices(root [fieldparams.RootLength]byte, slot primitives.Slot, indices ...uint64) error {
	for i := range indices {
		if err := bm.bs.layout.notify(newBlobIdent(root, slots.ToEpoch(slot), indices[i])); err != nil {
			return err
@@ -62,6 +70,17 @@ func (bm *BlobMocker) CreateFakeIndices(root [32]byte, slot primitives.Slot, ind
	return nil
}

+// CreateFakeIndices registers fake data column sidecar entries for the given
+// root and indices to influence the result of Indices().
+func (bm *DataColumnMocker) CreateFakeIndices(root [fieldparams.RootLength]byte, slot primitives.Slot, indices ...uint64) error {
+	err := bm.dcs.cache.set(DataColumnsIdent{Root: root, Epoch: slots.ToEpoch(slot), Indices: indices})
+	if err != nil {
+		return errors.Wrap(err, "cache set")
+	}
+
+	return nil
+}
+
// NewEphemeralBlobStorageWithMocker returns a *BlobMocker value in addition to the BlobStorage value.
// BlobMocker encapsulates blob path construction to avoid leaking implementation details.
func NewEphemeralBlobStorageWithMocker(t testing.TB) (*BlobMocker, *BlobStorage) {
@@ -119,6 +138,13 @@ func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...Dat
	return bs
}

+// NewEphemeralDataColumnStorageWithMocker returns a *DataColumnMocker value in addition to the DataColumnStorage value.
+// DataColumnMocker encapsulates data column path construction to avoid leaking implementation details.
+func NewEphemeralDataColumnStorageWithMocker(t testing.TB) (*DataColumnMocker, *DataColumnStorage) {
+	fs, dcs := NewEphemeralDataColumnStorageAndFs(t)
+	return &DataColumnMocker{fs: fs, dcs: dcs}, dcs
+}
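
// Illustrative sketch (not part of this change): how a test might use the mocker to
// fake on-disk columns without writing files. Assumes, as the code above suggests,
// that Summary is served from the same cache CreateFakeIndices populates; the test
// name and values are hypothetical.
func TestDataColumnMockerSketch(t *testing.T) {
	mocker, storage := NewEphemeralDataColumnStorageWithMocker(t)

	root := [fieldparams.RootLength]byte{1}
	// Register columns 3 and 57 as present for slot 42; only the summary cache is touched.
	if err := mocker.CreateFakeIndices(root, 42, 3, 57); err != nil {
		t.Fatal(err)
	}

	// The storage summary now reports both indices as available.
	if got := storage.Summary(root).Count(); got != 2 {
		t.Fatalf("expected 2 columns, got %d", got)
	}
}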

func NewMockDataColumnStorageSummarizer(t *testing.T, set map[[fieldparams.RootLength]byte][]uint64) DataColumnStorageSummarizer {
	c := newDataColumnStorageSummaryCache()
	for root, indices := range set {

@@ -23,6 +23,7 @@ go_library(
	"//beacon-chain/cache:go_default_library",
	"//beacon-chain/cache/depositsnapshot:go_default_library",
	"//beacon-chain/core/light-client:go_default_library",
+	"//beacon-chain/core/peerdas:go_default_library",
	"//beacon-chain/db:go_default_library",
	"//beacon-chain/db/filesystem:go_default_library",
	"//beacon-chain/db/kv:go_default_library",
@@ -25,6 +25,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
	lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
@@ -120,6 +121,7 @@ type BeaconNode struct {
	initialSyncComplete chan struct{}
	BlobStorage         *filesystem.BlobStorage
	BlobStorageOptions  []filesystem.BlobStorageOption
+	custodyInfo         *peerdas.CustodyInfo
	verifyInitWaiter    *verification.InitializerWaiter
	syncChecker         *initialsync.SyncChecker
	slasherEnabled      bool
@@ -161,6 +163,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
		serviceFlagOpts:     &serviceFlagOpts{},
		initialSyncComplete: make(chan struct{}),
		syncChecker:         &initialsync.SyncChecker{},
+		custodyInfo:         &peerdas.CustodyInfo{},
		slasherEnabled:      cliCtx.Bool(flags.SlasherFlag.Name),
		lcStore:             &lightclient.Store{},
	}
@@ -280,6 +283,7 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
	if err := beacon.startDB(cliCtx, depositAddress); err != nil {
		return nil, errors.Wrap(err, "could not start DB")
	}

+	beacon.BlobStorage.WarmCache()

	log.Debugln("Starting Slashing DB")
@@ -697,6 +701,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
		StateNotifier: b,
		DB:            b.db,
		ClockWaiter:   b.clockWaiter,
+		CustodyInfo:   b.custodyInfo,
	})
	if err != nil {
		return err
@@ -778,6 +783,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
		blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
		blockchain.WithPayloadIDCache(b.payloadIDCache),
		blockchain.WithSyncChecker(b.syncChecker),
+		blockchain.WithCustodyInfo(b.custodyInfo),
		blockchain.WithSlasherEnabled(b.slasherEnabled),
		blockchain.WithLightClientStore(b.lcStore),
	)
@@ -71,7 +71,6 @@ go_library(
	"//monitoring/tracing:go_default_library",
	"//monitoring/tracing/trace:go_default_library",
	"//network:go_default_library",
-	"//network/forks:go_default_library",
	"//proto/prysm/v1alpha1:go_default_library",
	"//proto/prysm/v1alpha1/metadata:go_default_library",
	"//runtime:go_default_library",
@@ -168,7 +167,6 @@ go_test(
	"//crypto/hash:go_default_library",
	"//encoding/bytesutil:go_default_library",
	"//network:go_default_library",
-	"//network/forks:go_default_library",
	"//proto/eth/v1:go_default_library",
	"//proto/prysm/v1alpha1:go_default_library",
	"//proto/prysm/v1alpha1/metadata:go_default_library",
@@ -15,7 +15,6 @@ import (
	"github.com/OffchainLabs/prysm/v6/crypto/hash"
	"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"github.com/pkg/errors"
@@ -279,14 +278,8 @@ func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, upda
		return errors.New("attempted to broadcast nil light client optimistic update")
	}

-	forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
-	if err != nil {
-		err := errors.Wrap(err, "could not retrieve fork digest")
-		tracing.AnnotateError(span, err)
-		return err
-	}
-
-	if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(forkDigest)); err != nil {
+	digest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
+	if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(digest)); err != nil {
		log.WithError(err).Debug("Failed to broadcast light client optimistic update")
		err := errors.Wrap(err, "could not publish message")
		tracing.AnnotateError(span, err)
@@ -305,13 +298,7 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
		return errors.New("attempted to broadcast nil light client finality update")
	}

-	forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
-	if err != nil {
-		err := errors.Wrap(err, "could not retrieve fork digest")
-		tracing.AnnotateError(span, err)
-		return err
-	}
-
+	forkDigest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
	if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
		log.WithError(err).Debug("Failed to broadcast light client finality update")
		err := errors.Wrap(err, "could not publish message")
@@ -18,10 +18,10 @@ import (
	p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
	"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
+	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
	"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	testpb "github.com/OffchainLabs/prysm/v6/proto/testing"
	"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -550,9 +550,7 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
	require.NoError(t, err)

	GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
-	digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
-	require.NoError(t, err)
-	topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, digest)
+	topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))

	// External peer subscribes to the topic.
	topic += p.Encoding().ProtocolSuffix()
@@ -617,9 +615,7 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
	require.NoError(t, err)

	GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat
-	digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
-	require.NoError(t, err)
-	topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, digest)
+	topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))

	// External peer subscribes to the topic.
	topic += p.Encoding().ProtocolSuffix()
@@ -504,7 +504,7 @@ func (s *Service) createLocalNode(
	localNode.SetFallbackIP(ipAddr)
	localNode.SetFallbackUDP(udpPort)

-	localNode, err = addForkEntry(localNode, s.genesisTime, s.genesisValidatorsRoot)
+	localNode, err = addForkEntry(localNode, s.clock.CurrentEpoch())
	if err != nil {
		return nil, errors.Wrap(err, "could not add eth2 fork version entry to enr")
	}
@@ -3,13 +3,10 @@ package p2p
import (
	"bytes"
	"fmt"
-	"time"

	"github.com/OffchainLabs/prysm/v6/config/params"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
+	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
-	prysmTime "github.com/OffchainLabs/prysm/v6/time"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/pkg/errors"
@@ -25,7 +22,7 @@ func (s *Service) currentForkDigest() ([4]byte, error) {
	if !s.isInitialized() {
		return [4]byte{}, errors.New("state is not initialized")
	}
-	return forks.CreateForkDigest(s.genesisTime, s.genesisValidatorsRoot)
+	return params.ForkDigest(s.clock.CurrentEpoch()), nil
}

// Compares fork ENRs between an incoming peer's record and our node's
@@ -79,24 +76,9 @@ func (s *Service) compareForkENR(record *enr.Record) error {
// which takes into account the current fork version from the current
// epoch to create a fork digest, the next fork version,
// and the next fork epoch.
-func addForkEntry(
-	node *enode.LocalNode,
-	genesisTime time.Time,
-	genesisValidatorsRoot []byte,
-) (*enode.LocalNode, error) {
-	digest, err := forks.CreateForkDigest(genesisTime, genesisValidatorsRoot)
-	if err != nil {
-		return nil, err
-	}
-	currentSlot := slots.Since(genesisTime)
-	currentEpoch := slots.ToEpoch(currentSlot)
-	if prysmTime.Now().Before(genesisTime) {
-		currentEpoch = 0
-	}
-	nextForkVersion, nextForkEpoch, err := forks.NextForkData(currentEpoch)
-	if err != nil {
-		return nil, err
-	}
+func addForkEntry(node *enode.LocalNode, current primitives.Epoch) (*enode.LocalNode, error) {
+	digest := params.ForkDigest(current)
+	nextForkVersion, nextForkEpoch := params.NextForkData(current)
	enrForkID := &pb.ENRForkID{
		CurrentForkDigest: digest[:],
		NextForkVersion:   nextForkVersion[:],
@@ -11,11 +11,11 @@ import (
	mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
	pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/testing/assert"
	"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -231,10 +231,8 @@ func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
	nextForkVersion := []byte{0, 0, 0, 1}
	params.OverrideBeaconConfig(c)

-	genesisTime := time.Now()
-	genesisValidatorsRoot := make([]byte, 32)
-	digest, err := forks.CreateForkDigest(genesisTime, make([]byte, 32))
-	require.NoError(t, err)
+	clock := startup.NewClock(time.Now(), [32]byte{})
+	digest := params.ForkDigest(clock.CurrentEpoch())
	enrForkID := &pb.ENRForkID{
		CurrentForkDigest: digest[:],
		NextForkVersion:   nextForkVersion,
@@ -256,7 +254,7 @@ func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
	localNode := enode.NewLocalNode(db, pkey)
	localNode.Set(entry)

-	want, err := signing.ComputeForkDigest([]byte{0, 0, 0, 0}, genesisValidatorsRoot)
+	want, err := signing.ComputeForkDigest([]byte{0, 0, 0, 0}, clock.GenesisValidatorsRootSlice())
	require.NoError(t, err)

	resp, err := forkEntry(localNode.Node().Record())
@@ -283,7 +281,8 @@ func TestAddForkEntry_Genesis(t *testing.T) {
	params.OverrideBeaconConfig(bCfg)

	localNode := enode.NewLocalNode(db, pkey)
-	localNode, err = addForkEntry(localNode, time.Now().Add(10*time.Second), bytesutil.PadTo([]byte{'A', 'B', 'C', 'D'}, 32))
+	clock := startup.NewClock(time.Now(), bCfg.GenesisValidatorsRoot)
+	localNode, err = addForkEntry(localNode, clock.CurrentEpoch())
	require.NoError(t, err)
	forkEntry, err := forkEntry(localNode.Node().Record())
	require.NoError(t, err)
@@ -25,7 +25,7 @@ func (s *Service) forkWatcher() {
	// this over the epoch, which might be slightly wasteful
	// but is fine nonetheless.
	if s.dv5Listener != nil { // make sure it's not a local network
-		_, err := addForkEntry(s.dv5Listener.LocalNode(), s.genesisTime, s.genesisValidatorsRoot)
+		_, err := addForkEntry(s.dv5Listener.LocalNode(), s.clock.CurrentEpoch())
		if err != nil {
			log.WithError(err).Error("Could not add fork entry")
		}
@@ -20,96 +20,106 @@ import (
"google.golang.org/protobuf/proto"
)

// P2P represents the full p2p interface composed of all of the sub-interfaces.
type P2P interface {
Broadcaster
SetStreamHandler
PubSubProvider
PubSubTopicUser
SenderEncoder
PeerManager
ConnectionHandler
PeersProvider
MetadataProvider
}
type (
// P2P represents the full p2p interface composed of all of the sub-interfaces.
P2P interface {
Broadcaster
SetStreamHandler
PubSubProvider
PubSubTopicUser
SenderEncoder
PeerManager
ConnectionHandler
PeersProvider
MetadataProvider
DataColumnsHandler
}

// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
type Broadcaster interface {
Broadcast(context.Context, proto.Message) error
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
BroadcastDataColumn(root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar, peersChecked ...chan<- bool) error
}
// Accessor provides access to the Broadcaster and PeerManager interfaces.
Accessor interface {
Broadcaster
PeerManager
}

// SetStreamHandler configures p2p to handle streams of a certain topic ID.
type SetStreamHandler interface {
SetStreamHandler(topic string, handler network.StreamHandler)
}
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
Broadcaster interface {
Broadcast(context.Context, proto.Message) error
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
BroadcastDataColumn(root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar, peersChecked ...chan<- bool) error
}

// PubSubTopicUser provides way to join, use and leave PubSub topics.
type PubSubTopicUser interface {
JoinTopic(topic string, opts ...pubsub.TopicOpt) (*pubsub.Topic, error)
LeaveTopic(topic string) error
PublishToTopic(ctx context.Context, topic string, data []byte, opts ...pubsub.PubOpt) error
SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub.Subscription, error)
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.
SetStreamHandler interface {
SetStreamHandler(topic string, handler network.StreamHandler)
}

// ConnectionHandler configures p2p to handle connections with a peer.
type ConnectionHandler interface {
AddConnectionHandler(f func(ctx context.Context, id peer.ID) error,
j func(ctx context.Context, id peer.ID) error)
AddDisconnectionHandler(f func(ctx context.Context, id peer.ID) error)
connmgr.ConnectionGater
}
// PubSubTopicUser provides way to join, use and leave PubSub topics.
PubSubTopicUser interface {
JoinTopic(topic string, opts ...pubsub.TopicOpt) (*pubsub.Topic, error)
LeaveTopic(topic string) error
PublishToTopic(ctx context.Context, topic string, data []byte, opts ...pubsub.PubOpt) error
SubscribeToTopic(topic string, opts ...pubsub.SubOpt) (*pubsub.Subscription, error)
}

// SenderEncoder allows sending functionality from libp2p as well as encoding for requests and responses.
type SenderEncoder interface {
EncodingProvider
Sender
}
// ConnectionHandler configures p2p to handle connections with a peer.
ConnectionHandler interface {
AddConnectionHandler(f func(ctx context.Context, id peer.ID) error,
j func(ctx context.Context, id peer.ID) error)
AddDisconnectionHandler(f func(ctx context.Context, id peer.ID) error)
connmgr.ConnectionGater
}

// EncodingProvider provides p2p network encoding.
type EncodingProvider interface {
Encoding() encoder.NetworkEncoding
}
// SenderEncoder allows sending functionality from libp2p as well as encoding for requests and responses.
SenderEncoder interface {
EncodingProvider
Sender
}

// PubSubProvider provides the p2p pubsub protocol.
type PubSubProvider interface {
PubSub() *pubsub.PubSub
}
// EncodingProvider provides p2p network encoding.
EncodingProvider interface {
Encoding() encoder.NetworkEncoding
}

// PeerManager abstracts some peer management methods from libp2p.
type PeerManager interface {
Disconnect(peer.ID) error
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}
// PubSubProvider provides the p2p pubsub protocol.
PubSubProvider interface {
PubSub() *pubsub.PubSub
}

// Sender abstracts the sending functionality from libp2p.
type Sender interface {
Send(context.Context, interface{}, string, peer.ID) (network.Stream, error)
}
// PeerManager abstracts some peer management methods from libp2p.
PeerManager interface {
Disconnect(peer.ID) error
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}

// PeersProvider abstracts obtaining our current list of known peers status.
type PeersProvider interface {
Peers() *peers.Status
}
// Sender abstracts the sending functionality from libp2p.
Sender interface {
Send(context.Context, interface{}, string, peer.ID) (network.Stream, error)
}

// MetadataProvider returns the metadata related information for the local peer.
type MetadataProvider interface {
Metadata() metadata.Metadata
MetadataSeq() uint64
}
// PeersProvider abstracts obtaining our current list of known peers status.
PeersProvider interface {
Peers() *peers.Status
}

type DataColumnsHandler interface {
CustodyGroupCountFromPeer(peer.ID) uint64
}
// MetadataProvider returns the metadata related information for the local peer.
MetadataProvider interface {
Metadata() metadata.Metadata
MetadataSeq() uint64
}

// DataColumnsHandler abstracts some data columns related methods.
DataColumnsHandler interface {
CustodyGroupCountFromPeer(peer.ID) uint64
}
)
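Aside from the new DataColumnsHandler member, the mechanical change above is wrapping the separate interface declarations into one grouped type block, a standard Go idiom. A minimal illustration (Greeter and Counter are made-up names):

```go
package main

import "fmt"

// Grouping related declarations in one type (...) block is purely
// syntactic; each name is still an independent top-level type.
type (
	Greeter interface {
		Greet() string
	}
	Counter interface {
		Count() int
	}
)

type impl struct{}

func (impl) Greet() string { return "hello" }
func (impl) Count() int    { return 1 }

func main() {
	var g Greeter = impl{}
	var c Counter = impl{}
	fmt.Println(g.Greet(), c.Count())
}
```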
@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/network/forks"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
)

@@ -39,7 +38,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
}
_, fEpoch, err := forks.RetrieveForkDataFromDigest(digest, genesisValidatorsRoot)
_, fEpoch, err := params.ForkDataFromDigest(digest)
if err != nil {
// Impossible condition that should
// never be hit.
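For orientation, the message-ID rule exercised by the MsgID tests further down is: hash a per-domain prefix concatenated with the payload and keep the first 20 bytes. A minimal sketch using SHA-256 as a stand-in for Prysm's hash.Hash, with a placeholder domain constant:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

var invalidSnappyDomain = [4]byte{0, 0, 0, 0} // placeholder for MessageDomainInvalidSnappy

// msgID sketches the gossip message-ID rule: hash(domain || data), truncated to 20 bytes.
func msgID(domain [4]byte, data []byte) string {
	h := sha256.Sum256(append(domain[:], data...))
	return string(h[:20])
}

func main() {
	id := msgID(invalidSnappyDomain, []byte("JUNK"))
	fmt.Printf("%x\n", id)
}
```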
@@ -7,10 +7,10 @@ import (

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/golang/snappy"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
@@ -18,22 +18,22 @@ import (

func TestMsgID_HashesCorrectly(t *testing.T) {
params.SetupTestConfigCleanup(t)
genesisValidatorsRoot := bytesutil.PadTo([]byte{'A'}, 32)
d, err := forks.CreateForkDigest(time.Now(), genesisValidatorsRoot)
assert.NoError(t, err)
clock := startup.NewClock(time.Now(), bytesutil.ToBytes32([]byte{'A'}))
valRoot := clock.GenesisValidatorsRoot()
d := params.ForkDigest(clock.CurrentEpoch())
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
invalidSnappy := [32]byte{'J', 'U', 'N', 'K'}
pMsg := &pubsubpb.Message{Data: invalidSnappy[:], Topic: &tpc}
hashedData := hash.Hash(append(params.BeaconConfig().MessageDomainInvalidSnappy[:], pMsg.Data...))
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, pMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(valRoot[:], pMsg), "Got incorrect msg id")

validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
nMsg := &pubsubpb.Message{Data: enc, Topic: &tpc}
hashedData = hash.Hash(append(params.BeaconConfig().MessageDomainValidSnappy[:], validObj[:]...))
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, nMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(valRoot[:], nMsg), "Got incorrect msg id")
}

func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {

@@ -6,7 +6,6 @@ import (

"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/network/forks"
pubsub "github.com/libp2p/go-libp2p-pubsub"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
"github.com/libp2p/go-libp2p/core/peer"
@@ -53,36 +52,12 @@ func (s *Service) CanSubscribe(topic string) bool {
log.WithError(err).Error("Could not determine fork digest")
return false
}
altairForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine altair fork digest")
return false
}
bellatrixForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Bellatrix fork digest")
return false
}
capellaForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Capella fork digest")
return false
}
denebForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Deneb fork digest")
return false
}
electraForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Electra fork digest")
return false
}
fuluForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().FuluForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Fulu fork digest")
return false
}
altairForkDigest := params.ForkDigest(params.BeaconConfig().AltairForkEpoch)
bellatrixForkDigest := params.ForkDigest(params.BeaconConfig().BellatrixForkEpoch)
capellaForkDigest := params.ForkDigest(params.BeaconConfig().CapellaForkEpoch)
denebForkDigest := params.ForkDigest(params.BeaconConfig().DenebForkEpoch)
electraForkDigest := params.ForkDigest(params.BeaconConfig().ElectraForkEpoch)
fuluForkDigest := params.ForkDigest(params.BeaconConfig().FuluForkEpoch)
switch parts[2] {
case fmt.Sprintf("%x", phase0ForkDigest):
case fmt.Sprintf("%x", altairForkDigest):
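The CanSubscribe change collapses six error-returning digest lookups into infallible params.ForkDigest calls, one per scheduled fork. In sketch form (forkDigest here is a placeholder for params.ForkDigest, and the epochs shown are the mainnet activation epochs as of this writing; treat them as illustrative):

```go
package main

import "fmt"

type epoch uint64

// forkDigest stands in for params.ForkDigest, which cannot fail once the
// fork schedule is initialized at startup.
func forkDigest(e epoch) [4]byte {
	return [4]byte{byte(e)} // placeholder derivation, not the real digest
}

func main() {
	forkEpochs := map[string]epoch{
		"altair": 74240, "bellatrix": 144896, "capella": 194048,
		"deneb": 269568, "electra": 364032,
	}
	for name, e := range forkEpochs {
		fmt.Printf("%s: %x\n", name, forkDigest(e))
	}
}
```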
@@ -11,8 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
@@ -23,10 +21,8 @@ func TestService_CanSubscribe(t *testing.T) {
params.SetupTestConfigCleanup(t)
currentFork := [4]byte{0x01, 0x02, 0x03, 0x04}
validProtocolSuffix := "/" + encoder.ProtocolSuffixSSZSnappy
genesisTime := time.Now()
var valRoot [32]byte
digest, err := forks.CreateForkDigest(genesisTime, valRoot[:])
assert.NoError(t, err)
clock := startup.NewClock(time.Now(), [32]byte{})
digest := params.ForkDigest(clock.CurrentEpoch())
type test struct {
name string
topic string
@@ -108,11 +104,12 @@ func TestService_CanSubscribe(t *testing.T) {
}
tests = append(tests, tt)
}
valRoot := clock.GenesisValidatorsRoot()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Service{
genesisValidatorsRoot: valRoot[:],
genesisTime: genesisTime,
genesisTime: clock.GenesisTime(),
}
if got := s.CanSubscribe(tt.topic); got != tt.want {
t.Errorf("CanSubscribe(%s) = %v, want %v", tt.topic, got, tt.want)
@@ -220,10 +217,8 @@ func TestGossipTopicMapping_scanfcheck_GossipTopicFormattingSanityCheck(t *testing.T) {
func TestService_FilterIncomingSubscriptions(t *testing.T) {
params.SetupTestConfigCleanup(t)
validProtocolSuffix := "/" + encoder.ProtocolSuffixSSZSnappy
genesisTime := time.Now()
var valRoot [32]byte
digest, err := forks.CreateForkDigest(genesisTime, valRoot[:])
assert.NoError(t, err)
clock := startup.NewClock(time.Now(), [32]byte{})
digest := params.ForkDigest(clock.CurrentEpoch())
type args struct {
id peer.ID
subs []*pubsubpb.RPC_SubOpts
@@ -320,11 +315,12 @@ func TestService_FilterIncomingSubscriptions(t *testing.T) {
},
},
}
valRoot := clock.GenesisValidatorsRoot()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Service{
genesisValidatorsRoot: valRoot[:],
genesisTime: genesisTime,
genesisTime: clock.GenesisTime(),
}
got, err := s.FilterIncomingSubscriptions(tt.args.id, tt.args.subs)
if (err != nil) != tt.wantErr {

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
@@ -82,6 +83,7 @@ type Service struct {
genesisTime time.Time
genesisValidatorsRoot []byte
activeValidatorCount uint64
clock *startup.Clock
}

// NewService initializes a new p2p service compatible with shared.Service interface. No

@@ -15,7 +15,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
@@ -348,14 +347,12 @@ func TestService_JoinLeaveTopic(t *testing.T) {
func initializeStateWithForkDigest(_ context.Context, t *testing.T, gs startup.ClockSetter) [4]byte {
gt := prysmTime.Now()
gvr := bytesutil.ToBytes32(bytesutil.PadTo([]byte("genesis validators root"), 32))
require.NoError(t, gs.SetClock(startup.NewClock(gt, gvr)))

fd, err := forks.CreateForkDigest(gt, gvr[:])
require.NoError(t, err)
clock := startup.NewClock(gt, gvr)
require.NoError(t, gs.SetClock(clock))

time.Sleep(50 * time.Millisecond) // wait for pubsub filter to initialize.

return fd
return params.ForkDigest(clock.CurrentEpoch())
}

func TestService_connectWithPeer(t *testing.T) {
@@ -221,7 +221,6 @@ func generateDataColumnIdentifiers(n int) []*eth.DataColumnsByRootIdentifier {

func TestDataColumnSidecarsByRootReq_Marshal(t *testing.T) {
/*

SSZ encoding of DataColumnsByRootIdentifiers is tested in spectests.
However, encoding a list of DataColumnsByRootIdentifier is not.
We are testing it here.

@@ -201,6 +201,11 @@ func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
return [32]byte{}, errors.Wrap(err, "parse public key")
}

newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
newPubkey := &ecdsa.PublicKey{
Curve: gCrypto.S256(),
X: pubKeyObjSecp256k1.X(),
Y: pubKeyObjSecp256k1.Y(),
}

return enode.PubkeyToIDV4(newPubkey), nil
}
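The reformatted ConvertPeerIDToNodeID above rebuilds the peer's secp256k1 public key as a go-ethereum ecdsa.PublicKey before deriving the discv5 node ID. A hedged sketch of the same conversion using only public go-ethereum and libp2p APIs (this assumes the peer ID embeds a secp256k1 key; Prysm's internal key handling may differ):

```go
package main

import (
	"fmt"

	gcrypto "github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/libp2p/go-libp2p/core/peer"
)

// peerToNodeID sketches the conversion: extract the libp2p public key,
// decompress it into an ecdsa.PublicKey, then hash it into an enode.ID.
func peerToNodeID(pid peer.ID) (enode.ID, error) {
	pub, err := pid.ExtractPublicKey()
	if err != nil {
		return enode.ID{}, fmt.Errorf("extract public key: %w", err)
	}
	raw, err := pub.Raw() // compressed secp256k1 bytes
	if err != nil {
		return enode.ID{}, fmt.Errorf("raw public key: %w", err)
	}
	ecdsaPub, err := gcrypto.DecompressPubkey(raw)
	if err != nil {
		return enode.ID{}, fmt.Errorf("decompress: %w", err)
	}
	return enode.PubkeyToIDV4(ecdsaPub), nil
}

func main() { fmt.Println("see peerToNodeID for the conversion sketch") }
```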
@@ -245,7 +245,7 @@ func (s *Service) validatorEndpoints(
template: "/eth/v2/validator/aggregate_attestation",
name: namespace + ".GetAggregateAttestationV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
},
handler: server.GetAggregateAttestationV2,
methods: []string{http.MethodGet},
@@ -314,7 +314,7 @@ func (s *Service) validatorEndpoints(
template: "/eth/v1/validator/attestation_data",
name: namespace + ".GetAttestationData",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
},
handler: server.GetAttestationData,
methods: []string{http.MethodGet},

@@ -1265,6 +1265,15 @@ func (s *Server) GetCommittees(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not get epoch end slot: "+err.Error(), http.StatusInternalServerError)
return
}

// Validate that the provided slot belongs to the specified epoch, as required by the Beacon API spec.
if rawSlot != "" {
if sl < uint64(startSlot) || sl > uint64(endSlot) {
httputil.HandleError(w, fmt.Sprintf("Slot %d does not belong in epoch %d", sl, epoch), http.StatusBadRequest)
return
}
}

committeesPerSlot := corehelpers.SlotCommitteeCount(activeCount)
committees := make([]*structs.Committee, 0)
for slot := startSlot; slot <= endSlot; slot++ {
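The validation added to GetCommittees is plain slot arithmetic: a slot belongs to epoch e exactly when e*SLOTS_PER_EPOCH <= slot < (e+1)*SLOTS_PER_EPOCH. A self-contained sketch:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value

// slotInEpoch reports whether slot falls inside epoch's slot range.
func slotInEpoch(slot, epoch uint64) bool {
	start := epoch * slotsPerEpoch
	return slot >= start && slot < start+slotsPerEpoch
}

func main() {
	fmt.Println(slotInEpoch(320, 10)) // true: epoch 10 spans slots 320..351
	fmt.Println(slotInEpoch(400, 10)) // false: slot 400 is in epoch 12
}
```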
@@ -4078,6 +4078,47 @@ func TestGetCommittees(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, true, resp.Finalized)
})

t.Run("Invalid slot for given epoch", func(t *testing.T) {
cases := []struct {
name string
epoch string
slot string
expectMsg string
}{
{
name: "Slot after the specified epoch",
epoch: "10",
slot: "400",
expectMsg: "Slot 400 does not belong in epoch 10",
},
{
name: "Slot before the specified epoch",
epoch: "10",
slot: "300",
expectMsg: "Slot 300 does not belong in epoch 10",
},
}

for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
query := url + "?epoch=" + tc.epoch + "&slot=" + tc.slot
request := httptest.NewRequest(http.MethodGet, query, nil)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
var resp struct {
Message string `json:"message"`
Code int `json:"code"`
}
err := json.Unmarshal(writer.Body.Bytes(), &resp)
assert.NoError(t, err)
assert.Equal(t, tc.expectMsg, resp.Message)
})
}
})
}

func TestGetBlockHeaders(t *testing.T) {
@@ -97,18 +97,19 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
}

// parseIndices filters out invalid and duplicate blob indices
func parseIndices(url *url.URL, s primitives.Slot) ([]uint64, error) {
func parseIndices(url *url.URL, s primitives.Slot) ([]int, error) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s)
rawIndices := url.Query()["indices"]
indices := make([]uint64, 0, params.BeaconConfig().MaxBlobsPerBlock(s))
indices := make([]int, 0, maxBlobsPerBlock)
invalidIndices := make([]string, 0)
loop:
for _, raw := range rawIndices {
ix, err := strconv.ParseUint(raw, 10, 64)
ix, err := strconv.Atoi(raw)
if err != nil {
invalidIndices = append(invalidIndices, raw)
continue
}
if ix >= uint64(params.BeaconConfig().MaxBlobsPerBlock(s)) {
if !(0 <= ix && ix < maxBlobsPerBlock) {
invalidIndices = append(invalidIndices, raw)
continue
}
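Stripped of Prysm's config plumbing, the new parseIndices logic is: parse each raw value with strconv.Atoi, reject non-numeric or out-of-range values, and drop duplicates. A runnable approximation (the function shape here is simplified relative to the handler):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseIndices mirrors the handler's filtering: invalid and out-of-range
// values are collected for an error message, duplicates are dropped.
func parseIndices(raw []string, max int) ([]int, []string) {
	seen := make(map[int]bool)
	var indices []int
	var invalid []string
	for _, r := range raw {
		ix, err := strconv.Atoi(r)
		if err != nil || ix < 0 || ix >= max {
			invalid = append(invalid, r)
			continue
		}
		if !seen[ix] {
			seen[ix] = true
			indices = append(indices, ix)
		}
	}
	return indices, invalid
}

func main() {
	got, bad := parseIndices([]string{"1", "2", "1", "3", "x", "99"}, 6)
	fmt.Println(got, bad) // [1 2 3] [x 99]
}
```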
@@ -520,13 +520,13 @@ func Test_parseIndices(t *testing.T) {
tests := []struct {
name string
query string
want []uint64
want []int
wantErr string
}{
{
name: "happy path with duplicate indices within bound and other query parameters ignored",
query: "indices=1&indices=2&indices=1&indices=3&bar=bar",
want: []uint64{1, 2, 3},
want: []int{1, 2, 3},
},
{
name: "out of bounds indices throws error",

@@ -9,7 +9,6 @@ go_library(
"//api/server/structs:go_default_library",
"//config/params:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
"//network/httputil:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
],
@@ -24,7 +23,6 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/forks:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/network/httputil"
"github.com/ethereum/go-ethereum/common/hexutil"
)
@@ -33,34 +32,25 @@ func GetForkSchedule(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "config.GetForkSchedule")
defer span.End()

schedule := params.BeaconConfig().ForkVersionSchedule
schedule := params.SortedForkSchedule()
data := make([]*structs.Fork, 0, len(schedule))
if len(schedule) == 0 {
httputil.WriteJson(w, &structs.GetForkScheduleResponse{
Data: make([]*structs.Fork, 0),
Data: data,
})
return
}

versions := forks.SortedForkVersions(schedule)
chainForks := make([]*structs.Fork, len(schedule))
var previous, current []byte
for i, v := range versions {
if i == 0 {
previous = params.BeaconConfig().GenesisForkVersion
} else {
previous = current
}
copyV := v
current = copyV[:]
chainForks[i] = &structs.Fork{
PreviousVersion: hexutil.Encode(previous),
CurrentVersion: hexutil.Encode(current),
Epoch: fmt.Sprintf("%d", schedule[v]),
}
previous := schedule[0]
for _, entry := range schedule {
data = append(data, &structs.Fork{
PreviousVersion: hexutil.Encode(previous.ForkVersion[:]),
CurrentVersion: hexutil.Encode(entry.ForkVersion[:]),
Epoch: fmt.Sprintf("%d", entry.Epoch),
})
previous = entry
}

httputil.WriteJson(w, &structs.GetForkScheduleResponse{
Data: chainForks,
Data: data,
})
}
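The rewritten GetForkSchedule walks a pre-sorted schedule and pairs each entry with its predecessor, seeding previous with the first entry so the genesis fork reports itself as its own previous version. A self-contained sketch of that pairing (forkEntry is a stand-in for whatever params.SortedForkSchedule actually returns, and the versions and epochs are illustrative):

```go
package main

import "fmt"

// forkEntry stands in for an element of params.SortedForkSchedule().
type forkEntry struct {
	Version [4]byte
	Epoch   uint64
}

func main() {
	schedule := []forkEntry{
		{[4]byte{0x00}, 0},      // phase0
		{[4]byte{0x01}, 74240},  // altair
		{[4]byte{0x02}, 144896}, // bellatrix
	}
	previous := schedule[0]
	for _, entry := range schedule {
		// first iteration: previous == entry, so genesis pairs with itself
		fmt.Printf("prev=%x curr=%x epoch=%d\n", previous.Version, entry.Version, entry.Epoch)
		previous = entry
	}
}
```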
@@ -13,7 +13,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common"
@@ -633,7 +632,7 @@ func TestForkSchedule_Ok(t *testing.T) {
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetForkScheduleResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
os := forks.NewOrderedSchedule(params.BeaconConfig())
assert.Equal(t, os.Len(), len(resp.Data))
os := params.SortedForkSchedule()
assert.Equal(t, len(os), len(resp.Data))
})
}

@@ -72,6 +72,8 @@ const (
LightClientFinalityUpdateTopic = "light_client_finality_update"
// LightClientOptimisticUpdateTopic represents a new light client optimistic update event topic.
LightClientOptimisticUpdateTopic = "light_client_optimistic_update"
// DataColumnTopic represents a data column sidecar event topic
DataColumnTopic = "data_column_sidecar"
)

var (
@@ -105,6 +107,7 @@ var opsFeedEventTopics = map[feed.EventType]string{
operation.AttesterSlashingReceived: AttesterSlashingTopic,
operation.ProposerSlashingReceived: ProposerSlashingTopic,
operation.BlockGossipReceived: BlockGossipTopic,
operation.DataColumnReceived: DataColumnTopic,
}

var stateFeedEventTopics = map[feed.EventType]string{
@@ -461,6 +464,8 @@ func topicForEvent(event *feed.Event) string {
return BlockTopic
case payloadattribute.EventData:
return PayloadAttributesTopic
case *operation.DataColumnReceivedData:
return DataColumnTopic
default:
return InvalidTopic
}
@@ -495,6 +500,19 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
}
return jsonMarshalReader(eventName, blk)
}, nil
case *operation.DataColumnReceivedData:
return func() io.Reader {
kzgCommitments := make([]string, len(v.KzgCommitments))
for i, kzgCommitment := range v.KzgCommitments {
kzgCommitments[i] = hexutil.Encode(kzgCommitment)
}
return jsonMarshalReader(eventName, &structs.DataColumnGossipEvent{
Slot: fmt.Sprintf("%d", v.Slot),
Index: fmt.Sprintf("%d", v.Index),
BlockRoot: hexutil.Encode(v.BlockRoot[:]),
KzgCommitments: kzgCommitments,
})
}, nil
case *operation.AggregatedAttReceivedData:
switch att := v.Attestation.AggregateVal().(type) {
case *eth.Attestation:
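The new lazyReaderForEvent branch serializes a data column gossip event to JSON with hex-encoded KZG commitments. A minimal standalone version of that shaping, using stdlib encoding only (the struct and its JSON field names are illustrative, not Prysm's exact structs.DataColumnGossipEvent):

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// dataColumnEvent approximates the payload emitted on the
// data_column_sidecar event topic.
type dataColumnEvent struct {
	Slot           string   `json:"slot"`
	Index          string   `json:"index"`
	BlockRoot      string   `json:"block_root"`
	KzgCommitments []string `json:"kzg_commitments"`
}

func main() {
	root := [32]byte{'a'}
	commitments := [][]byte{{0x01}, {0x02}}
	hexed := make([]string, len(commitments))
	for i, c := range commitments {
		hexed[i] = "0x" + hex.EncodeToString(c)
	}
	out, _ := json.Marshal(dataColumnEvent{
		Slot:           "1",
		Index:          "2",
		BlockRoot:      "0x" + hex.EncodeToString(root[:]),
		KzgCommitments: hexed,
	})
	fmt.Println(string(out))
}
```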
@@ -31,7 +31,7 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/common"
sse "github.com/r3labs/sse/v2"
"github.com/r3labs/sse/v2"
"github.com/sirupsen/logrus"
)

@@ -121,6 +121,7 @@ func operationEventsFixtures(t *testing.T) (*topicRequest, []*feed.Event) {
AttesterSlashingTopic,
ProposerSlashingTopic,
BlockGossipTopic,
DataColumnTopic,
})
require.NoError(t, err)
ro, err := blocks.NewROBlob(util.HydrateBlobSidecar(&eth.BlobSidecar{}))
@@ -301,6 +302,15 @@ func operationEventsFixtures(t *testing.T) (*topicRequest, []*feed.Event) {
SignedBlock: signedBlock,
},
},
{
Type: operation.DataColumnReceived,
Data: &operation.DataColumnReceivedData{
Slot: 1,
Index: 2,
BlockRoot: [32]byte{'a'},
KzgCommitments: [][]byte{{'a'}, {'b'}, {'c'}},
},
},
}
}

@@ -709,7 +719,7 @@ func TestStuckReaderScenarios(t *testing.T) {

func wedgedWriterTestCase(t *testing.T, queueDepth func([]*feed.Event) int) {
topics, events := operationEventsFixtures(t)
require.Equal(t, 11, len(events))
require.Equal(t, 12, len(events))

// set eventFeedDepth to a number lower than the events we intend to send to force the server to drop the reader.
stn := mockChain.NewEventFeedWrapper()

@@ -20,7 +20,6 @@ go_library(
"//config/params:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
"//network/httputil:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/network/httputil"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -115,7 +114,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.Request) {

updateSlot := update.AttestedHeader().Beacon().Slot
updateEpoch := slots.ToEpoch(updateSlot)
updateFork, err := forks.Fork(updateEpoch)
updateFork, err := params.Fork(updateEpoch)
if err != nil {
httputil.HandleError(w, "Could not get fork Version: "+err.Error(), http.StatusInternalServerError)
return

@@ -112,7 +112,12 @@ func (s *Server) GetPeers(w http.ResponseWriter, r *http.Request) {
}
allPeers = append(allPeers, p)
}
resp := &structs.GetPeersResponse{Data: allPeers}
resp := &structs.GetPeersResponse{
Data: allPeers,
Meta: structs.Meta{
Count: len(allPeers),
},
}
httputil.WriteJson(w, resp)
return
}
@@ -177,7 +182,12 @@ func (s *Server) GetPeers(w http.ResponseWriter, r *http.Request) {
filteredPeers = append(filteredPeers, p)
}

resp := &structs.GetPeersResponse{Data: filteredPeers}
resp := &structs.GetPeersResponse{
Data: filteredPeers,
Meta: structs.Meta{
Count: len(filteredPeers),
},
}
httputil.WriteJson(w, resp)
}
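The GetPeers change attaches a meta.count equal to the number of returned peers, which clients can use as a sanity check on the data array. In sketch form (struct shapes approximate the structs package, not copied from it):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type meta struct {
	Count int `json:"count"`
}

type getPeersResponse struct {
	Data []string `json:"data"`
	Meta meta     `json:"meta"`
}

func main() {
	peers := []string{"peerA", "peerB"}
	resp := getPeersResponse{Data: peers, Meta: meta{Count: len(peers)}}
	b, _ := json.Marshal(resp)
	fmt.Println(string(b)) // {"data":["peerA","peerB"],"meta":{"count":2}}
}
```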
@@ -145,6 +145,7 @@ func TestGetPeers(t *testing.T) {
resp := &structs.GetPeersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, 1, resp.Meta.Count)
returnedPeer := resp.Data[0]
assert.Equal(t, expectedId.String(), returnedPeer.PeerId)
expectedEnr, err := peerStatus.ENR(expectedId)
@@ -229,6 +230,7 @@ func TestGetPeers(t *testing.T) {
resp := &structs.GetPeersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, len(tt.wantIds), len(resp.Data), "Wrong number of peers returned")
assert.Equal(t, len(tt.wantIds), resp.Meta.Count, "meta.count does not match number of returned peers")
for _, id := range tt.wantIds {
expectedId := id.String()
found := false

@@ -98,6 +98,39 @@ func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Request) {
if agg == nil {
return
}

if httputil.RespondWithSsz(r) {
var data []byte
var err error
if v >= version.Electra {
typedAgg, ok := agg.(*ethpbalpha.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.AttestationElectra{}), http.StatusInternalServerError)
return
}
data, err = typedAgg.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
} else {
typedAgg, ok := agg.(*ethpbalpha.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
return
}
data, err = typedAgg.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
}

w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteSsz(w, data)
return
}

resp := &structs.AggregateAttestationResponse{
Version: version.String(v),
}
@@ -610,6 +643,16 @@ func (s *Server) GetAttestationData(w http.ResponseWriter, r *http.Request) {
return
}

if httputil.RespondWithSsz(r) {
data, err := attestationData.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal attestation data: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, data)
return
}

response := &structs.GetAttestationDataResponse{
Data: &structs.AttestationData{
Slot: strconv.FormatUint(uint64(attestationData.Slot), 10),
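Both handlers above now branch on the Accept header: application/octet-stream gets raw SSZ bytes, everything else gets the JSON envelope. A stdlib-only sketch of that negotiation (wantsSSZ approximates what httputil.RespondWithSsz checks; the exact matching rules may differ):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

// wantsSSZ sketches the content negotiation: honor an explicit
// application/octet-stream Accept header.
func wantsSSZ(r *http.Request) bool {
	return strings.Contains(r.Header.Get("Accept"), "application/octet-stream")
}

func handler(w http.ResponseWriter, r *http.Request) {
	if wantsSSZ(r) {
		w.Header().Set("Content-Type", "application/octet-stream")
		w.Write([]byte{0xde, 0xad}) // stand-in for MarshalSSZ output
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"version": "electra"})
}

func main() {
	http.HandleFunc("/eth/v1/validator/attestation_data", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```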
@@ -307,6 +307,23 @@ func TestGetAggregateAttestation(t *testing.T) {

compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("1 matching aggregated attestation - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()

s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")

var resp ethpbalpha.Attestation
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))

compareResult(t, *structs.AttFromConsensus(&resp), "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -327,6 +344,23 @@ func TestGetAggregateAttestation(t *testing.T) {

compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()

s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")

var resp ethpbalpha.Attestation
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))

compareResult(t, *structs.AttFromConsensus(&resp), "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
})
t.Run("post-electra", func(t *testing.T) {
aggSlot1_Root1_1 := createAttestationElectra(1, bitfield.Bitlist{0b11100}, root1)
@@ -421,6 +455,23 @@ func TestGetAggregateAttestation(t *testing.T) {

compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
})
t.Run("1 matching aggregated attestation - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()

s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")

var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))

compareResult(t, *structs.AttElectraFromConsensus(&resp), "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -441,6 +492,23 @@ func TestGetAggregateAttestation(t *testing.T) {

compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
})
t.Run("multiple matching aggregated attestations - return the one with most bits - SSZ", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()

s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")

var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))

compareResult(t, *structs.AttElectraFromConsensus(&resp), "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
})
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -460,6 +528,23 @@ func TestGetAggregateAttestation(t *testing.T) {
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
})
t.Run("1 matching unaggregated attestation - SSZ", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()

s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")

var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))

compareResult(t, *structs.AttElectraFromConsensus(&resp), "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
@@ -484,12 +569,33 @@ func TestGetAggregateAttestation(t *testing.T) {
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned - SSZ", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()

s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")

var resp ethpbalpha.AttestationElectra
require.NoError(t, resp.UnmarshalSSZ(writer.Body.Bytes()))

sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
require.NoError(t, err)
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
require.NoError(t, err)
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, *structs.AttElectraFromConsensus(&resp), "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
})
t.Run("pre-electra attestation is ignored", func(t *testing.T) {

})
})
})

}

func createAttestationData(slot primitives.Slot, committeeIndex primitives.CommitteeIndex, root []byte) *ethpbalpha.AttestationData {
@@ -1293,6 +1399,81 @@ func TestGetAttestationData(t *testing.T) {
assert.DeepEqual(t, expectedResponse, resp)
})

t.Run("ok SSZ", func(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Slot = 3*params.BeaconConfig().SlotsPerEpoch + 1
targetBlock := util.NewBeaconBlock()
targetBlock.Block.Slot = 1 * params.BeaconConfig().SlotsPerEpoch
justifiedBlock := util.NewBeaconBlock()
justifiedBlock.Block.Slot = 2 * params.BeaconConfig().SlotsPerEpoch
blockRoot, err := block.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash beacon block")
justifiedRoot, err := justifiedBlock.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root for justified block")
slot := 3*params.BeaconConfig().SlotsPerEpoch + 1
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetSlot(slot))
justifiedCheckpoint := &ethpbalpha.Checkpoint{
Epoch: 2,
Root: justifiedRoot[:],
}
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpoint))
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
chain := &mockChain.ChainService{
Optimistic: false,
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
Root: blockRoot[:],
CurrentJustifiedCheckPoint: justifiedCheckpoint,
TargetRoot: blockRoot,
State: beaconState,
}

s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationDataCache(),
OptimisticModeFetcher: chain,
},
}

expectedAttData := &ethpbalpha.AttestationData{
Slot: slot,
BeaconBlockRoot: blockRoot[:],
CommitteeIndex: 0,
Source: &ethpbalpha.Checkpoint{
Epoch: 2,
Root: justifiedRoot[:],
},
Target: &ethpbalpha.Checkpoint{
Epoch: 3,
Root: blockRoot[:],
},
}

expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")

url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}

s.GetAttestationData(writer, request)

assert.Equal(t, http.StatusOK, writer.Code)
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
var att ethpbalpha.AttestationData
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
})

t.Run("syncing", func(t *testing.T) {
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
@@ -1536,6 +1717,82 @@ func TestGetAttestationData(t *testing.T) {
assert.DeepEqual(t, expectedResponse, resp)
})

t.Run("succeeds in first epoch SSZ", func(t *testing.T) {
slot := primitives.Slot(5)
block := util.NewBeaconBlock()
block.Block.Slot = slot
targetBlock := util.NewBeaconBlock()
targetBlock.Block.Slot = 0
justifiedBlock := util.NewBeaconBlock()
justifiedBlock.Block.Slot = 0
blockRoot, err := block.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash beacon block")
justifiedRoot, err := justifiedBlock.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root for justified block")

beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetSlot(slot))
justifiedCheckpt := &ethpbalpha.Checkpoint{
Epoch: 0,
Root: justifiedRoot[:],
}
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpt))
require.NoError(t, err)
offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
chain := &mockChain.ChainService{
Root: blockRoot[:],
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
CurrentJustifiedCheckPoint: justifiedCheckpt,
TargetRoot: blockRoot,
State: beaconState,
}

s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationDataCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}

expectedAttData := &ethpbalpha.AttestationData{
Slot: slot,
BeaconBlockRoot: blockRoot[:],
CommitteeIndex: 0,
Source: &ethpbalpha.Checkpoint{
Epoch: 0,
Root: justifiedRoot[:],
},
Target: &ethpbalpha.Checkpoint{
Epoch: 0,
Root: blockRoot[:],
},
}

expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")

url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}

s.GetAttestationData(writer, request)

assert.Equal(t, http.StatusOK, writer.Code)
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
var att ethpbalpha.AttestationData
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
})

t.Run("handles far away justified epoch", func(t *testing.T) {
// Scenario:
//
@@ -1629,6 +1886,101 @@ func TestGetAttestationData(t *testing.T) {
require.NotNil(t, resp)
assert.DeepEqual(t, expectedResponse, resp)
})

t.Run("handles far away justified epoch SSZ", func(t *testing.T) {
// Scenario:
//
// State slot = 10000
// Last justified slot = epoch start of 1500
// HistoricalRootsLimit = 8192
//
// More background: https://github.com/prysmaticlabs/prysm/issues/2153
// This test breaks if it doesn't use mainnet config

// Ensure HistoricalRootsLimit matches scenario
params.SetupTestConfigCleanup(t)
cfg := params.MainnetConfig()
cfg.HistoricalRootsLimit = 8192
params.OverrideBeaconConfig(cfg)

block := util.NewBeaconBlock()
block.Block.Slot = 10000
epochBoundaryBlock := util.NewBeaconBlock()
var err error
epochBoundaryBlock.Block.Slot, err = slots.EpochStart(slots.ToEpoch(10000))
require.NoError(t, err)
justifiedBlock := util.NewBeaconBlock()
justifiedBlock.Block.Slot, err = slots.EpochStart(slots.ToEpoch(1500))
require.NoError(t, err)
justifiedBlock.Block.Slot -= 2 // Imagine two skip block
blockRoot, err := block.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash beacon block")
justifiedBlockRoot, err := justifiedBlock.Block.HashTreeRoot()
require.NoError(t, err, "Could not hash justified block")

slot := primitives.Slot(10000)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, beaconState.SetSlot(slot))
justifiedCheckpt := &ethpbalpha.Checkpoint{
Epoch: slots.ToEpoch(1500),
Root: justifiedBlockRoot[:],
}
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(justifiedCheckpt))

offset := int64(slot.Mul(params.BeaconConfig().SecondsPerSlot))
chain := &mockChain.ChainService{
Root: blockRoot[:],
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
CurrentJustifiedCheckPoint: justifiedCheckpt,
TargetRoot: blockRoot,
State: beaconState,
}

s := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationDataCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}

expectedAttData := &ethpbalpha.AttestationData{
Slot: slot,
BeaconBlockRoot: blockRoot[:],
CommitteeIndex: 0,
Source: &ethpbalpha.Checkpoint{
Epoch: slots.ToEpoch(1500),
Root: justifiedBlockRoot[:],
},
Target: &ethpbalpha.Checkpoint{
Epoch: 312,
Root: blockRoot[:],
},
}

expectedAttDataSSZ, err := expectedAttData.MarshalSSZ()
require.NoError(t, err, "Could not marshal expected attestation data to SSZ")

url := fmt.Sprintf("http://example.com?slot=%d&committee_index=%d", slot, 0)
request := httptest.NewRequest(http.MethodGet, url, nil)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}

s.GetAttestationData(writer, request)

assert.Equal(t, http.StatusOK, writer.Code)
assert.DeepSSZEqual(t, expectedAttDataSSZ, writer.Body.Bytes())
var att ethpbalpha.AttestationData
require.NoError(t, att.UnmarshalSSZ(writer.Body.Bytes()))
})
}

func TestProduceSyncCommitteeContribution(t *testing.T) {

@@ -40,7 +40,7 @@ func (e BlockIdParseError) Error() string {
// Blocker is responsible for retrieving blocks.
type Blocker interface {
Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError)
Blobs(ctx context.Context, id string, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError)
}

// BeaconDbBlocker is an implementation of Blocker. It retrieves blocks from the beacon chain database.
@@ -144,7 +144,7 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
// - block exists, has commitments, inside retention period (greater of protocol- or user-specified) serve then w/ 200 unless we hit an error reading them.
// we are technically not supposed to import a block to forkchoice unless we have the blobs, so the nuance here is if we can't find the file and we are inside the protocol-defined retention period, then it's actually a 500.
// - block exists, has commitments, outside retention period (greater of protocol- or user-specified) - ie just like block exists, no commitment
func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
var rootSlice []byte
switch id {
case "genesis":
@@ -239,18 +239,18 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
if len(indices) == 0 {
for i := range commitments {
if sum.HasIndex(uint64(i)) {
indices = append(indices, uint64(i))
indices = append(indices, i)
}
}
} else {
for _, ix := range indices {
if ix >= sum.MaxBlobsForEpoch() {
if uint64(ix) >= sum.MaxBlobsForEpoch() {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d is bigger than the maximum possible blob count %d", ix, sum.MaxBlobsForEpoch()),
Reason: core.BadRequest,
}
}
if !sum.HasIndex(ix) {
if !sum.HasIndex(uint64(ix)) {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d not found", ix),
Reason: core.NotFound,
@@ -261,7 +261,7 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {

blobs := make([]*blocks.VerifiedROBlob, len(indices))
for i, index := range indices {
vblob, err := p.BlobStorage.Get(root, index)
vblob, err := p.BlobStorage.Get(root, uint64(index))
if err != nil {
return nil, &core.RpcError{
Err: fmt.Errorf("could not retrieve blob for block root %#x at index %d", rootSlice, index),
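The Blobs rework keeps indices as []int end to end and converts to uint64 only at the storage boundary; the availability checks against the commitment summary are otherwise unchanged. A standalone sketch of that selection logic (hasIndex stands in for the summary's bitmap lookup):

```go
package main

import "fmt"

// hasIndex stands in for sum.HasIndex on the blob commitment summary.
func hasIndex(stored map[int]bool, ix int) bool { return stored[ix] }

// selectIndices defaults to every stored index when none are requested,
// and otherwise rejects out-of-range or missing indices.
func selectIndices(requested []int, stored map[int]bool, maxPerBlock int) ([]int, error) {
	if len(requested) == 0 {
		for i := 0; i < maxPerBlock; i++ {
			if hasIndex(stored, i) {
				requested = append(requested, i)
			}
		}
		return requested, nil
	}
	for _, ix := range requested {
		if ix >= maxPerBlock {
			return nil, fmt.Errorf("index %d exceeds max blob count %d", ix, maxPerBlock)
		}
		if !hasIndex(stored, ix) {
			return nil, fmt.Errorf("index %d not found", ix)
		}
	}
	return requested, nil
}

func main() {
	stored := map[int]bool{0: true, 2: true}
	fmt.Println(selectIndices(nil, stored, 6))      // [0 2] <nil>
	fmt.Println(selectIndices([]int{1}, stored, 6)) // error: index 1 not found
}
```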
@@ -18,7 +18,7 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpbalpha "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -51,7 +51,7 @@ func TestGetBlock(t *testing.T) {
b4.Block.ParentRoot = bytesutil.PadTo([]byte{8}, 32)
util.SaveBlock(t, ctx, beaconDB, b4)

wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block)
wsb, err := blocks.NewSignedBeaconBlock(headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block)
require.NoError(t, err)

fetcher := &BeaconDbBlocker{
@@ -60,7 +60,7 @@ func TestGetBlock(t *testing.T) {
DB: beaconDB,
Block: wsb,
Root: headBlock.BlockRoot,
FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blkContainers[64].BlockRoot},
FinalizedCheckPoint: &ethpb.Checkpoint{Root: blkContainers[64].BlockRoot},
CanonicalRoots: canonicalRoots,
},
}
@@ -71,13 +71,13 @@ func TestGetBlock(t *testing.T) {
tests := []struct {
name string
blockID []byte
want *ethpbalpha.SignedBeaconBlock
want *ethpb.SignedBeaconBlock
wantErr bool
}{
{
name: "slot",
blockID: []byte("30"),
want: blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "bad formatting",
@@ -87,7 +87,7 @@ func TestGetBlock(t *testing.T) {
{
name: "canonical",
blockID: []byte("30"),
want: blkContainers[30].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[30].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "non canonical",
@@ -97,12 +97,12 @@ func TestGetBlock(t *testing.T) {
{
name: "head",
blockID: []byte("head"),
want: headBlock.Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: headBlock.Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "finalized",
blockID: []byte("finalized"),
want: blkContainers[64].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[64].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "genesis",
@@ -117,7 +117,7 @@ func TestGetBlock(t *testing.T) {
{
name: "root",
blockID: blkContainers[20].BlockRoot,
want: blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "non-existent root",
@@ -127,7 +127,7 @@ func TestGetBlock(t *testing.T) {
{
name: "hex",
blockID: []byte(hexutil.Encode(blkContainers[20].BlockRoot)),
want: blkContainers[20].Block.(*ethpbalpha.BeaconBlockContainer_Phase0Block).Phase0Block,
want: blkContainers[20].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "no block",
@@ -149,7 +149,7 @@ func TestGetBlock(t *testing.T) {
require.NoError(t, err)
pb, err := result.Proto()
require.NoError(t, err)
pbBlock, ok := pb.(*ethpbalpha.SignedBeaconBlock)
pbBlock, ok := pb.(*ethpb.SignedBeaconBlock)
require.Equal(t, true, ok)
if !reflect.DeepEqual(pbBlock, tt.want) {
t.Error("Expected blocks to equal")
@@ -218,7 +218,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("finalized", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -232,7 +232,7 @@ func TestGetBlob(t *testing.T) {
})
t.Run("justified", func(t *testing.T) {
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpbalpha.Checkpoint{Root: blockRoot[:]}},
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: ðpb.Checkpoint{Root: blockRoot[:]}},
|
||||
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
|
||||
Genesis: time.Now(),
|
||||
},
|
||||
@@ -270,14 +270,14 @@ func TestGetBlob(t *testing.T) {
|
||||
})
|
||||
t.Run("one blob only", func(t *testing.T) {
|
||||
blocker := &BeaconDbBlocker{
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpbalpha.Checkpoint{Root: blockRoot[:]}},
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpb.Checkpoint{Root: blockRoot[:]}},
|
||||
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
|
||||
Genesis: time.Now(),
|
||||
},
|
||||
BeaconDB: db,
|
||||
BlobStorage: bs,
|
||||
}
|
||||
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []uint64{2})
|
||||
verifiedBlobs, rpcErr := blocker.Blobs(ctx, "123", []int{2})
|
||||
assert.Equal(t, rpcErr == nil, true)
|
||||
require.Equal(t, 1, len(verifiedBlobs))
|
||||
sidecar := verifiedBlobs[0].BlobSidecar
|
||||
@@ -289,7 +289,7 @@ func TestGetBlob(t *testing.T) {
|
||||
})
|
||||
t.Run("no blobs returns an empty array", func(t *testing.T) {
|
||||
blocker := &BeaconDbBlocker{
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpbalpha.Checkpoint{Root: blockRoot[:]}},
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpb.Checkpoint{Root: blockRoot[:]}},
|
||||
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
|
||||
Genesis: time.Now(),
|
||||
},
|
||||
@@ -302,28 +302,28 @@ func TestGetBlob(t *testing.T) {
|
||||
})
|
||||
t.Run("no blob at index", func(t *testing.T) {
|
||||
blocker := &BeaconDbBlocker{
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpbalpha.Checkpoint{Root: blockRoot[:]}},
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpb.Checkpoint{Root: blockRoot[:]}},
|
||||
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
|
||||
Genesis: time.Now(),
|
||||
},
|
||||
BeaconDB: db,
|
||||
BlobStorage: bs,
|
||||
}
|
||||
noBlobIndex := uint64(len(blobs)) + 1
|
||||
_, rpcErr := blocker.Blobs(ctx, "123", []uint64{0, noBlobIndex})
|
||||
noBlobIndex := len(blobs) + 1
|
||||
_, rpcErr := blocker.Blobs(ctx, "123", []int{0, noBlobIndex})
|
||||
require.NotNil(t, rpcErr)
|
||||
assert.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
|
||||
})
|
||||
t.Run("index too big", func(t *testing.T) {
|
||||
blocker := &BeaconDbBlocker{
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpbalpha.Checkpoint{Root: blockRoot[:]}},
|
||||
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: ðpb.Checkpoint{Root: blockRoot[:]}},
|
||||
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
|
||||
Genesis: time.Now(),
|
||||
},
|
||||
BeaconDB: db,
|
||||
BlobStorage: bs,
|
||||
}
|
||||
_, rpcErr := blocker.Blobs(ctx, "123", []uint64{0, math.MaxUint})
|
||||
_, rpcErr := blocker.Blobs(ctx, "123", []int{0, math.MaxInt})
|
||||
require.NotNil(t, rpcErr)
|
||||
assert.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
|
||||
})
|
||||
|
||||
@@ -8,6 +8,7 @@ go_library(
         "blocks.go",
         "construct_generic_block.go",
         "duties.go",
+        "duties_v2.go",
        "exit.go",
         "log.go",
         "proposer.go",
@@ -189,6 +190,7 @@ go_test(
         "blocks_test.go",
         "construct_generic_block_test.go",
         "duties_test.go",
+        "duties_v2_test.go",
         "exit_test.go",
         "proposer_altair_test.go",
         "proposer_attestations_electra_test.go",
beacon-chain/rpc/prysm/v1alpha1/validator/duties_v2.go (new file, 268 lines)
@@ -0,0 +1,268 @@
+package validator
+
+import (
+	"context"
+
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
+	coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
+	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
+	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
+	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
+	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+	"github.com/OffchainLabs/prysm/v6/time/slots"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/status"
+)
+
+// GetDutiesV2 returns the duties assigned to a list of validators specified
+// in the request object.
+//
+// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will eventually be removed in favor of the REST API.
+func (vs *Server) GetDutiesV2(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.DutiesV2Response, error) {
+	if vs.SyncChecker.Syncing() {
+		return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
+	}
+	return vs.dutiesv2(ctx, req)
+}
+
+// dutiesv2 computes the validator duties from the head state's corresponding epoch
+// for the validator public keys / indices requested.
+func (vs *Server) dutiesv2(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.DutiesV2Response, error) {
+	currentEpoch := slots.ToEpoch(vs.TimeFetcher.CurrentSlot())
+	if req.Epoch > currentEpoch+1 {
+		return nil, status.Errorf(codes.Unavailable, "Request epoch %d can not be greater than next epoch %d", req.Epoch, currentEpoch+1)
+	}
+
+	// Load the head state.
+	s, err := vs.HeadFetcher.HeadState(ctx)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
+	}
+
+	// Advance to the start of the requested epoch if necessary.
+	s, err = vs.stateForEpoch(ctx, s, req.Epoch)
+	if err != nil {
+		return nil, err
+	}
+
+	// Build duties for each validator.
+	ctx, span := trace.StartSpan(ctx, "dutiesv2.BuildResponse")
+	span.SetAttributes(trace.Int64Attribute("num_pubkeys", int64(len(req.PublicKeys))))
+	defer span.End()
+
+	// Load committee and proposer metadata.
+	meta, err := loadDutiesMetadata(ctx, s, req.Epoch)
+	if err != nil {
+		return nil, err
+	}
+
+	validatorAssignments := make([]*ethpb.DutiesV2Response_Duty, 0, len(req.PublicKeys))
+	nextValidatorAssignments := make([]*ethpb.DutiesV2Response_Duty, 0, len(req.PublicKeys))
+
+	// Compute assignments for the current and next epochs.
+	for _, pubKey := range req.PublicKeys {
+		if ctx.Err() != nil {
+			return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())
+		}
+
+		idx, ok := s.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
+		if !ok {
+			// Unknown validator: still append a placeholder duty with UNKNOWN_STATUS.
+			validatorAssignments = append(validatorAssignments, &ethpb.DutiesV2Response_Duty{
+				PublicKey: pubKey,
+				Status:    ethpb.ValidatorStatus_UNKNOWN_STATUS,
+			})
+			nextValidatorAssignments = append(nextValidatorAssignments, &ethpb.DutiesV2Response_Duty{
+				PublicKey: pubKey,
+				Status:    ethpb.ValidatorStatus_UNKNOWN_STATUS,
+			})
+			continue
+		}
+
+		meta.current.liteAssignment = helpers.AssignmentForValidator(meta.current.committeesBySlot, meta.current.startSlot, idx)
+		meta.next.liteAssignment = helpers.AssignmentForValidator(meta.next.committeesBySlot, meta.next.startSlot, idx)
+
+		assignment, nextAssignment, err := vs.buildValidatorDuty(pubKey, idx, s, req.Epoch, meta)
+		if err != nil {
+			return nil, err
+		}
+		validatorAssignments = append(validatorAssignments, assignment)
+		nextValidatorAssignments = append(nextValidatorAssignments, nextAssignment)
+	}
+
+	// Dependent roots for fork choice.
+	currDependentRoot, err := vs.ForkchoiceFetcher.DependentRoot(currentEpoch)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
+	}
+	prevDependentRoot := currDependentRoot
+	if currDependentRoot != [32]byte{} && currentEpoch > 0 {
+		prevDependentRoot, err = vs.ForkchoiceFetcher.DependentRoot(currentEpoch - 1)
+		if err != nil {
+			return nil, status.Errorf(codes.Internal, "Could not get previous dependent root: %v", err)
+		}
+	}
+
+	return &ethpb.DutiesV2Response{
+		PreviousDutyDependentRoot: prevDependentRoot[:],
+		CurrentDutyDependentRoot:  currDependentRoot[:],
+		CurrentEpochDuties:        validatorAssignments,
+		NextEpochDuties:           nextValidatorAssignments,
+	}, nil
+}
+
+// stateForEpoch returns a state advanced (with empty slot transitions) to the
+// start slot of the requested epoch.
+func (vs *Server) stateForEpoch(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (state.BeaconState, error) {
+	epochStartSlot, err := slots.EpochStart(reqEpoch)
+	if err != nil {
+		return nil, err
+	}
+	if s.Slot() >= epochStartSlot {
+		return s, nil
+	}
+	headRoot, err := vs.HeadFetcher.HeadRoot(ctx)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
+	}
+	s, err = transition.ProcessSlotsUsingNextSlotCache(ctx, s, headRoot, epochStartSlot)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", epochStartSlot, err)
+	}
+	return s, nil
+}
+
+// dutiesMetadata bundles together related data needed for duty construction.
+type dutiesMetadata struct {
+	current *metadata
+	next    *metadata
+}
+
+type metadata struct {
+	committeesAtSlot uint64
+	proposalSlots    map[primitives.ValidatorIndex][]primitives.Slot
+	startSlot        primitives.Slot
+	committeesBySlot [][][]primitives.ValidatorIndex
+	liteAssignment   *helpers.LiteAssignment
+}
+
+func loadDutiesMetadata(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (*dutiesMetadata, error) {
+	meta := &dutiesMetadata{}
+	var err error
+	meta.current, err = loadMetadata(ctx, s, reqEpoch)
+	if err != nil {
+		return nil, err
+	}
+	// Note: we only set the proposer slots for the current assignment, not for the next epoch assignment.
+	meta.current.proposalSlots, err = helpers.ProposerAssignments(ctx, s, reqEpoch)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "Could not compute proposer slots: %v", err)
+	}
+
+	meta.next, err = loadMetadata(ctx, s, reqEpoch+1)
+	if err != nil {
+		return nil, err
+	}
+	return meta, nil
+}
+
+func loadMetadata(ctx context.Context, s state.BeaconState, reqEpoch primitives.Epoch) (*metadata, error) {
+	meta := &metadata{}
+
+	if err := helpers.VerifyAssignmentEpoch(reqEpoch, s); err != nil {
+		return nil, err
+	}
+
+	activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, reqEpoch)
+	if err != nil {
+		return nil, status.Errorf(codes.Internal, "Could not get active validator count: %v", err)
+	}
+	meta.committeesAtSlot = helpers.SlotCommitteeCount(activeValidatorCount)
+
+	meta.startSlot, err = slots.EpochStart(reqEpoch)
+	if err != nil {
+		return nil, err
+	}
+
+	meta.committeesBySlot, err = helpers.PrecomputeCommittees(ctx, s, meta.startSlot)
+	if err != nil {
+		return nil, err
+	}
+
+	return meta, nil
+}
+
+// buildValidatorDuty builds both current-epoch and next-epoch V2 duty objects
+// for a single validator index.
+func (vs *Server) buildValidatorDuty(
+	pubKey []byte,
+	idx primitives.ValidatorIndex,
+	s state.BeaconState,
+	reqEpoch primitives.Epoch,
+	meta *dutiesMetadata,
+) (*ethpb.DutiesV2Response_Duty, *ethpb.DutiesV2Response_Duty, error) {
+	assignment := &ethpb.DutiesV2Response_Duty{PublicKey: pubKey}
+	nextAssignment := &ethpb.DutiesV2Response_Duty{PublicKey: pubKey}
+
+	statusEnum := assignmentStatus(s, idx)
+	assignment.ValidatorIndex = idx
+	assignment.Status = statusEnum
+	assignment.CommitteesAtSlot = meta.current.committeesAtSlot
+	assignment.ProposerSlots = meta.current.proposalSlots[idx]
+	populateCommitteeFields(assignment, meta.current.liteAssignment)
+
+	nextAssignment.ValidatorIndex = idx
+	nextAssignment.Status = statusEnum
+	nextAssignment.CommitteesAtSlot = meta.next.committeesAtSlot
+	populateCommitteeFields(nextAssignment, meta.next.liteAssignment)
+
+	// Sync committee flags.
+	if coreTime.HigherEqualThanAltairVersionAndEpoch(s, reqEpoch) {
+		inSync, err := helpers.IsCurrentPeriodSyncCommittee(s, idx)
+		if err != nil {
+			return nil, nil, status.Errorf(codes.Internal, "Could not determine current epoch sync committee: %v", err)
+		}
+		assignment.IsSyncCommittee = inSync
+		nextAssignment.IsSyncCommittee = inSync
+		if inSync {
+			if err := core.RegisterSyncSubnetCurrentPeriodProto(s, reqEpoch, pubKey, statusEnum); err != nil {
+				return nil, nil, status.Errorf(codes.Internal, "Could not register sync subnet current period: %v", err)
+			}
+		}
+
+		// The next-epoch sync committee duty is assigned from the next-period sync committee
+		// only at the sync period epoch boundary (i.e. EPOCHS_PER_SYNC_COMMITTEE_PERIOD - 1).
+		// Otherwise the next-epoch sync committee duty is the same as the current epoch's.
+		nextEpoch := reqEpoch + 1
+		currentEpoch := coreTime.CurrentEpoch(s)
+		n := slots.SyncCommitteePeriod(nextEpoch)
+		c := slots.SyncCommitteePeriod(currentEpoch)
+		if n > c {
+			nextInSync, err := helpers.IsNextPeriodSyncCommittee(s, idx)
+			if err != nil {
+				return nil, nil, status.Errorf(codes.Internal, "Could not determine next epoch sync committee: %v", err)
+			}
+			nextAssignment.IsSyncCommittee = nextInSync
+			if nextInSync {
+				go func() {
+					if err := core.RegisterSyncSubnetNextPeriodProto(s, reqEpoch, pubKey, statusEnum); err != nil {
+						log.WithError(err).Warn("Could not register sync subnet next period")
+					}
+				}()
+			}
+		}
+	}
+
+	return assignment, nextAssignment, nil
+}
+
+func populateCommitteeFields(duty *ethpb.DutiesV2Response_Duty, la *helpers.LiteAssignment) {
+	duty.CommitteeLength = la.CommitteeLength
+	duty.CommitteeIndex = la.CommitteeIndex
+	duty.ValidatorCommitteeIndex = la.ValidatorCommitteeIndex
+	duty.AttesterSlot = la.AttesterSlot
+}
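The next-period handoff in buildValidatorDuty hinges on whether the next epoch falls in a later sync committee period than the current one. A standalone sketch of that arithmetic, assuming the spec mapping period = epoch / EPOCHS_PER_SYNC_COMMITTEE_PERIOD (256 on mainnet):

```go
package main

import "fmt"

const epochsPerSyncCommitteePeriod = 256 // mainnet value

func syncCommitteePeriod(epoch uint64) uint64 {
	return epoch / epochsPerSyncCommitteePeriod
}

func main() {
	for _, epoch := range []uint64{254, 255, 256} {
		next := epoch + 1
		// Mirrors the n > c check above: only at the last epoch of a period
		// do next-epoch duties come from the next sync committee.
		fmt.Printf("epoch %d: next-epoch duties use next period? %v\n",
			epoch, syncCommitteePeriod(next) > syncCommitteePeriod(epoch))
	}
}
```

Running this prints true only for epoch 255, which is exactly the boundary the TestGetAltairDutiesV2_SyncCommitteeOK test below exercises with `Epoch: params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1`.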
beacon-chain/rpc/prysm/v1alpha1/validator/duties_v2_test.go (new file, 562 lines)
@@ -0,0 +1,562 @@
+package validator
+
+import (
+	"context"
+	"testing"
+	"time"
+
+	mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/execution"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
+	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
+	mockExecution "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
+	mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
+	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
+	"github.com/OffchainLabs/prysm/v6/config/params"
+	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
+	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
+	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
+	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
+	"github.com/OffchainLabs/prysm/v6/testing/assert"
+	"github.com/OffchainLabs/prysm/v6/testing/require"
+	"github.com/OffchainLabs/prysm/v6/testing/util"
+	"github.com/OffchainLabs/prysm/v6/time/slots"
+)
+
+func TestGetDutiesV2_OK(t *testing.T) {
+	genesis := util.NewBeaconBlock()
+	depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
+	deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	require.NoError(t, err, "Could not setup genesis bs")
+	genesisRoot, err := genesis.Block.HashTreeRoot()
+	require.NoError(t, err, "Could not get signing root")
+
+	pubKeys := make([][]byte, len(deposits))
+	indices := make([]uint64, len(deposits))
+	for i := 0; i < len(deposits); i++ {
+		pubKeys[i] = deposits[i].Data.PublicKey
+		indices[i] = uint64(i)
+	}
+
+	chain := &mockChain.ChainService{
+		State: bs, Root: genesisRoot[:], Genesis: time.Now(),
+	}
+	vs := &Server{
+		HeadFetcher:       chain,
+		TimeFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+		PayloadIDCache:    cache.NewPayloadIDCache(),
+	}
+
+	// Test the first validator in registry.
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[0].Data.PublicKey},
+	}
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
+		t.Errorf("Assigned slot %d can't be higher than %d",
+			res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
+	}
+
+	// Test the last validator in registry.
+	lastValidatorIndex := depChainStart - 1
+	req = &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
+		t.Errorf("Assigned slot %d can't be higher than %d",
+			res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
+	}
+
+	// We request for duties for all validators.
+	req = &ethpb.DutiesRequest{
+		PublicKeys: pubKeys,
+		Epoch:      0,
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		assert.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
+	}
+}
+
+func TestGetAltairDutiesV2_SyncCommitteeOK(t *testing.T) {
+	params.SetupTestConfigCleanup(t)
+	cfg := params.BeaconConfig().Copy()
+	cfg.AltairForkEpoch = primitives.Epoch(0)
+	params.OverrideBeaconConfig(cfg)
+
+	genesis := util.NewBeaconBlock()
+	deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	require.NoError(t, err, "Could not setup genesis bs")
+	h := &ethpb.BeaconBlockHeader{
+		StateRoot:  bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
+		ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
+		BodyRoot:   bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
+	}
+	require.NoError(t, bs.SetLatestBlockHeader(h))
+	genesisRoot, err := genesis.Block.HashTreeRoot()
+	require.NoError(t, err, "Could not get signing root")
+
+	syncCommittee, err := altair.NextSyncCommittee(context.Background(), bs)
+	require.NoError(t, err)
+	require.NoError(t, bs.SetCurrentSyncCommittee(syncCommittee))
+	pubKeys := make([][]byte, len(deposits))
+	indices := make([]uint64, len(deposits))
+	for i := 0; i < len(deposits); i++ {
+		pubKeys[i] = deposits[i].Data.PublicKey
+		indices[i] = uint64(i)
+	}
+	require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
+	require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
+
+	pubkeysAs48ByteType := make([][fieldparams.BLSPubkeyLength]byte, len(pubKeys))
+	for i, pk := range pubKeys {
+		pubkeysAs48ByteType[i] = bytesutil.ToBytes48(pk)
+	}
+
+	slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
+	chain := &mockChain.ChainService{
+		State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
+	}
+	vs := &Server{
+		HeadFetcher:       chain,
+		TimeFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		Eth1InfoFetcher:   &mockExecution.Chain{},
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+		PayloadIDCache:    cache.NewPayloadIDCache(),
+	}
+
+	// Test the first validator in registry.
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[0].Data.PublicKey},
+	}
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
+		t.Errorf("Assigned slot %d can't be higher than %d",
+			res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
+	}
+
+	// Test the last validator in registry.
+	lastValidatorIndex := params.BeaconConfig().SyncCommitteeSize - 1
+	req = &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
+		t.Errorf("Assigned slot %d can't be higher than %d",
+			res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
+	}
+
+	// We request for duties for all validators.
+	req = &ethpb.DutiesRequest{
+		PublicKeys: pubKeys,
+		Epoch:      0,
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		require.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
+	}
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		require.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
+		// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
+		require.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
+	}
+
+	// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
+	req = &ethpb.DutiesRequest{
+		PublicKeys: pubKeys,
+		Epoch:      params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1,
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
+	}
+}
+
+func TestGetBellatrixDutiesV2_SyncCommitteeOK(t *testing.T) {
+	params.SetupTestConfigCleanup(t)
+	cfg := params.BeaconConfig().Copy()
+	cfg.AltairForkEpoch = primitives.Epoch(0)
+	cfg.BellatrixForkEpoch = primitives.Epoch(1)
+	params.OverrideBeaconConfig(cfg)
+
+	genesis := util.NewBeaconBlock()
+	deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	h := &ethpb.BeaconBlockHeader{
+		StateRoot:  bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
+		ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
+		BodyRoot:   bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
+	}
+	require.NoError(t, bs.SetLatestBlockHeader(h))
+	require.NoError(t, err, "Could not setup genesis bs")
+	genesisRoot, err := genesis.Block.HashTreeRoot()
+	require.NoError(t, err, "Could not get signing root")
+
+	syncCommittee, err := altair.NextSyncCommittee(context.Background(), bs)
+	require.NoError(t, err)
+	require.NoError(t, bs.SetCurrentSyncCommittee(syncCommittee))
+	pubKeys := make([][]byte, len(deposits))
+	indices := make([]uint64, len(deposits))
+	for i := 0; i < len(deposits); i++ {
+		pubKeys[i] = deposits[i].Data.PublicKey
+		indices[i] = uint64(i)
+	}
+	require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
+	require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
+
+	bs, err = execution.UpgradeToBellatrix(bs)
+	require.NoError(t, err)
+
+	pubkeysAs48ByteType := make([][fieldparams.BLSPubkeyLength]byte, len(pubKeys))
+	for i, pk := range pubKeys {
+		pubkeysAs48ByteType[i] = bytesutil.ToBytes48(pk)
+	}
+
+	slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
+	chain := &mockChain.ChainService{
+		State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
+	}
+	vs := &Server{
+		HeadFetcher:       chain,
+		TimeFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		Eth1InfoFetcher:   &mockExecution.Chain{},
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+		PayloadIDCache:    cache.NewPayloadIDCache(),
+	}
+
+	// Test the first validator in registry.
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[0].Data.PublicKey},
+	}
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
+		t.Errorf("Assigned slot %d can't be higher than %d",
+			res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
+	}
+
+	// Test the last validator in registry.
+	lastValidatorIndex := params.BeaconConfig().SyncCommitteeSize - 1
+	req = &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[lastValidatorIndex].Data.PublicKey},
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	if res.CurrentEpochDuties[0].AttesterSlot > bs.Slot()+params.BeaconConfig().SlotsPerEpoch {
+		t.Errorf("Assigned slot %d can't be higher than %d",
+			res.CurrentEpochDuties[0].AttesterSlot, bs.Slot()+params.BeaconConfig().SlotsPerEpoch)
+	}
+
+	// We request for duties for all validators.
+	req = &ethpb.DutiesRequest{
+		PublicKeys: pubKeys,
+		Epoch:      0,
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		assert.Equal(t, primitives.ValidatorIndex(i), res.CurrentEpochDuties[i].ValidatorIndex)
+	}
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		assert.Equal(t, true, res.CurrentEpochDuties[i].IsSyncCommittee)
+		// Current epoch and next epoch duties should be equal before the sync period epoch boundary.
+		assert.Equal(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
+	}
+
+	// Current epoch and next epoch duties should not be equal at the sync period epoch boundary.
+	req = &ethpb.DutiesRequest{
+		PublicKeys: pubKeys,
+		Epoch:      params.BeaconConfig().EpochsPerSyncCommitteePeriod - 1,
+	}
+	res, err = vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	for i := 0; i < len(res.CurrentEpochDuties); i++ {
+		require.NotEqual(t, res.CurrentEpochDuties[i].IsSyncCommittee, res.NextEpochDuties[i].IsSyncCommittee)
+	}
+}
+
+func TestGetAltairDutiesV2_UnknownPubkey(t *testing.T) {
+	params.SetupTestConfigCleanup(t)
+	cfg := params.BeaconConfig().Copy()
+	cfg.AltairForkEpoch = primitives.Epoch(0)
+	params.OverrideBeaconConfig(cfg)
+
+	genesis := util.NewBeaconBlock()
+	deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	bs, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	require.NoError(t, err)
+	h := &ethpb.BeaconBlockHeader{
+		StateRoot:  bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
+		ParentRoot: bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength),
+		BodyRoot:   bytesutil.PadTo([]byte{'c'}, fieldparams.RootLength),
+	}
+	require.NoError(t, bs.SetLatestBlockHeader(h))
+	require.NoError(t, err, "Could not setup genesis bs")
+	genesisRoot, err := genesis.Block.HashTreeRoot()
+	require.NoError(t, err, "Could not get signing root")
+
+	require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)-1))
+	require.NoError(t, helpers.UpdateSyncCommitteeCache(bs))
+
+	slot := uint64(params.BeaconConfig().SlotsPerEpoch) * uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) * params.BeaconConfig().SecondsPerSlot
+	chain := &mockChain.ChainService{
+		State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
+	}
+	depositCache, err := depositsnapshot.New()
+	require.NoError(t, err)
+
+	vs := &Server{
+		HeadFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		TimeFetcher:       chain,
+		Eth1InfoFetcher:   &mockExecution.Chain{},
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+		DepositFetcher:    depositCache,
+		PayloadIDCache:    cache.NewPayloadIDCache(),
+	}
+
+	unknownPubkey := bytesutil.PadTo([]byte{'u'}, 48)
+
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{unknownPubkey},
+	}
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err)
+	assert.Equal(t, false, res.CurrentEpochDuties[0].IsSyncCommittee)
+	assert.Equal(t, false, res.NextEpochDuties[0].IsSyncCommittee)
+}
+
+func TestGetDutiesV2_StateAdvancement(t *testing.T) {
+	params.SetupTestConfigCleanup(t)
+	cfg := params.BeaconConfig().Copy()
+	cfg.ElectraForkEpoch = primitives.Epoch(0)
+	params.OverrideBeaconConfig(cfg)
+
+	epochStart, err := slots.EpochStart(1)
+	require.NoError(t, err)
+	st, _ := util.DeterministicGenesisStateElectra(t, 1)
+	require.NoError(t, st.SetSlot(epochStart-1))
+
+	// Request epoch 1 which requires slot 32 processing
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{pubKey(0)},
+		Epoch:      1,
+	}
+	b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockElectra(&ethpb.SignedBeaconBlockElectra{}))
+	require.NoError(t, err)
+	b.SetSlot(epochStart)
+	currentSlot := epochStart - 1
+	// Mock chain service with state at slot 0
+	chain := &mockChain.ChainService{
+		Root:  make([]byte, 32),
+		State: st,
+		Block: b,
+		Slot:  &currentSlot,
+	}
+
+	vs := &Server{
+		HeadFetcher:       chain,
+		TimeFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+	}
+
+	// Verify state processing occurs
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err)
+	require.NotNil(t, res)
+}
+
+func TestGetDutiesV2_SlotOutOfUpperBound(t *testing.T) {
+	chain := &mockChain.ChainService{
+		Genesis: time.Now(),
+	}
+	vs := &Server{
+		ForkchoiceFetcher: chain,
+		TimeFetcher:       chain,
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+	}
+	req := &ethpb.DutiesRequest{
+		Epoch: primitives.Epoch(chain.CurrentSlot()/params.BeaconConfig().SlotsPerEpoch + 2),
+	}
+	_, err := vs.GetDutiesV2(context.Background(), req)
+	require.ErrorContains(t, "can not be greater than next epoch", err)
+}
+
+func TestGetDutiesV2_CurrentEpoch_ShouldNotFail(t *testing.T) {
+	genesis := util.NewBeaconBlock()
+	depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
+	deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	bState, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	require.NoError(t, err, "Could not setup genesis state")
+	// Set state to non-epoch start slot.
+	require.NoError(t, bState.SetSlot(5))
+
+	genesisRoot, err := genesis.Block.HashTreeRoot()
+	require.NoError(t, err, "Could not get signing root")
+
+	pubKeys := make([][fieldparams.BLSPubkeyLength]byte, len(deposits))
+	indices := make([]uint64, len(deposits))
+	for i := 0; i < len(deposits); i++ {
+		pubKeys[i] = bytesutil.ToBytes48(deposits[i].Data.PublicKey)
+		indices[i] = uint64(i)
+	}
+
+	chain := &mockChain.ChainService{
+		State: bState, Root: genesisRoot[:], Genesis: time.Now(),
+	}
+	vs := &Server{
+		HeadFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		TimeFetcher:       chain,
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+		PayloadIDCache:    cache.NewPayloadIDCache(),
+	}
+
+	// Test the first validator in registry.
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{deposits[0].Data.PublicKey},
+	}
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err)
+	assert.Equal(t, 1, len(res.CurrentEpochDuties), "Expected 1 assignment")
+}
+
+func TestGetDutiesV2_MultipleKeys_OK(t *testing.T) {
+	genesis := util.NewBeaconBlock()
+	depChainStart := uint64(64)
+
+	deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	require.NoError(t, err, "Could not setup genesis bs")
+	genesisRoot, err := genesis.Block.HashTreeRoot()
+	require.NoError(t, err, "Could not get signing root")
+
+	pubKeys := make([][fieldparams.BLSPubkeyLength]byte, len(deposits))
+	indices := make([]uint64, len(deposits))
+	for i := 0; i < len(deposits); i++ {
+		pubKeys[i] = bytesutil.ToBytes48(deposits[i].Data.PublicKey)
+		indices[i] = uint64(i)
+	}
+
+	chain := &mockChain.ChainService{
+		State: bs, Root: genesisRoot[:], Genesis: time.Now(),
+	}
+	vs := &Server{
+		HeadFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		TimeFetcher:       chain,
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+		PayloadIDCache:    cache.NewPayloadIDCache(),
+	}
+
+	pubkey0 := deposits[0].Data.PublicKey
+	pubkey1 := deposits[1].Data.PublicKey
+
+	// Test the first validator in registry.
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{pubkey0, pubkey1},
+	}
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err, "Could not call epoch committee assignment")
+	assert.Equal(t, 2, len(res.CurrentEpochDuties))
+	assert.Equal(t, primitives.Slot(4), res.CurrentEpochDuties[0].AttesterSlot)
+	assert.Equal(t, primitives.Slot(4), res.CurrentEpochDuties[1].AttesterSlot)
+}
+
+func TestGetDutiesV2_NextSyncCommitteePeriod(t *testing.T) {
+	params.SetupTestConfigCleanup(t)
+	cfg := params.BeaconConfig().Copy()
+	cfg.AltairForkEpoch = primitives.Epoch(0)
+	cfg.EpochsPerSyncCommitteePeriod = 1
+	params.OverrideBeaconConfig(cfg)
+
+	// Configure sync committee period boundary
+	epochsPerPeriod := params.BeaconConfig().EpochsPerSyncCommitteePeriod
+	boundaryEpoch := epochsPerPeriod - 1
+
+	// Create state at last epoch of current period
+	deposits, _, err := util.DeterministicDepositsAndKeys(params.BeaconConfig().SyncCommitteeSize)
+	require.NoError(t, err)
+	eth1Data, err := util.DeterministicEth1Data(len(deposits))
+	require.NoError(t, err)
+	st, err := util.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
+	require.NoError(t, err)
+
+	syncCommittee, err := altair.NextSyncCommittee(context.Background(), st)
+	require.NoError(t, err)
+	require.NoError(t, st.SetCurrentSyncCommittee(syncCommittee))
+	require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(boundaryEpoch)))
+
+	validatorPubkey := deposits[0].Data.PublicKey
+
+	// Request duties for boundary epoch + 1
+	req := &ethpb.DutiesRequest{
+		PublicKeys: [][]byte{validatorPubkey},
+		Epoch:      boundaryEpoch + 1,
+	}
+
+	genesisRoot := [32]byte{}
+	chain := &mockChain.ChainService{
+		State: st,
+		Root:  genesisRoot[:],
+	}
+	vs := &Server{
+		HeadFetcher:       chain,
+		TimeFetcher:       chain,
+		ForkchoiceFetcher: chain,
+		SyncChecker:       &mockSync.Sync{IsSyncing: false},
+	}
+
+	res, err := vs.GetDutiesV2(context.Background(), req)
+	require.NoError(t, err)
+
+	// Verify next epoch duties have updated sync committee status
+	require.NotEqual(t,
+		res.CurrentEpochDuties[0].IsSyncCommittee,
+		res.NextEpochDuties[0].IsSyncCommittee,
+	)
+}
+
+func TestGetDutiesV2_SyncNotReady(t *testing.T) {
+	vs := &Server{
+		SyncChecker: &mockSync.Sync{IsSyncing: true},
+	}
+	_, err := vs.GetDutiesV2(context.Background(), &ethpb.DutiesRequest{})
+	assert.ErrorContains(t, "Syncing to latest head", err)
+}
@@ -19,7 +19,6 @@ import (
 	"github.com/OffchainLabs/prysm/v6/encoding/ssz"
 	"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
 	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
 	enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
 	"github.com/OffchainLabs/prysm/v6/runtime/version"
 	"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -220,7 +219,7 @@ func (vs *Server) getPayloadHeaderFromBuilder(
 	if signedBid == nil || signedBid.IsNil() {
 		return nil, errors.New("builder returned nil bid")
 	}
-	fork, err := forks.Fork(slots.ToEpoch(slot))
+	fork, err := params.Fork(slots.ToEpoch(slot))
 	if err != nil {
 		return nil, errors.Wrap(err, "unable to get fork information")
 	}
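This hunk swaps forks.Fork for params.Fork; the same substitution appears in DomainData below and in the backfill verifier further down. A sketch of the post-migration lookup pattern, with the signatures inferred from the call sites in this diff rather than from package documentation:

```go
package example

import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"

	"github.com/pkg/errors"
)

// proposerDomainAt resolves the fork for an epoch via params.Fork and derives
// the proposer signing domain from its current version. Shapes are inferred
// from the call sites in this diff, so treat this as an assumption.
func proposerDomainAt(epoch primitives.Epoch, genesisValidatorsRoot []byte) ([]byte, error) {
	fork, err := params.Fork(epoch)
	if err != nil {
		return nil, errors.Wrap(err, "unable to get fork information")
	}
	return signing.ComputeDomain(params.BeaconConfig().DomainBeaconProposer, fork.CurrentVersion, genesisValidatorsRoot)
}
```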
@@ -29,7 +29,6 @@ import (
 	"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
 	"github.com/OffchainLabs/prysm/v6/config/params"
 	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
 	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
 	"github.com/OffchainLabs/prysm/v6/runtime/version"
 	"google.golang.org/grpc/codes"
@@ -155,7 +154,7 @@ func (vs *Server) ValidatorIndex(ctx context.Context, req *ethpb.ValidatorIndexR
 //
 // DomainData fetches the current domain version information from the beacon state.
 func (vs *Server) DomainData(ctx context.Context, request *ethpb.DomainRequest) (*ethpb.DomainResponse, error) {
-	fork, err := forks.Fork(request.Epoch)
+	fork, err := params.Fork(request.Epoch)
 	if err != nil {
 		return nil, err
 	}
@@ -36,6 +36,6 @@ func (m *MockBlocker) Block(_ context.Context, b []byte) (interfaces.ReadOnlySig
 }
 
 // Blobs --
-func (m *MockBlocker) Blobs(_ context.Context, _ string, _ []uint64) ([]*blocks.VerifiedROBlob, *core.RpcError) {
-	panic("implement me") // lint:nopanic -- Test code.
+func (*MockBlocker) Blobs(_ context.Context, _ string, _ []int) ([]*blocks.VerifiedROBlob, *core.RpcError) {
+	return nil, &core.RpcError{}
 }
@@ -35,12 +35,21 @@ func (g *Clock) GenesisValidatorsRoot() [32]byte {
 	return g.vr
 }
 
+// GenesisValidatorsRootSlice returns the genesis state validators root as a slice, for convenience.
+func (g *Clock) GenesisValidatorsRootSlice() []byte {
+	return g.vr[:]
+}
+
 // CurrentSlot returns the current slot relative to the time.Time value that Clock embeds.
 func (g *Clock) CurrentSlot() types.Slot {
 	now := g.now()
 	return slots.Duration(g.t, now)
 }
 
+// CurrentEpoch returns the epoch containing the current slot.
+func (g *Clock) CurrentEpoch() types.Epoch {
+	return slots.ToEpoch(g.CurrentSlot())
+}
+
 // SlotStart computes the time the given slot begins.
 func (g *Clock) SlotStart(slot types.Slot) time.Time {
 	return slots.BeginsAt(slot, g.t)
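The two additions give callers epoch- and slice-typed views of the clock state without re-deriving them at every call site. A usage sketch, assuming this hunk belongs to Prysm's startup.Clock type (the diff itself does not name the package):

```go
package example

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
)

// logClock prints the derived views added in this diff. The assumption that
// the receiver is startup.Clock is not confirmed by the hunk above.
func logClock(c *startup.Clock) {
	fmt.Println("current slot: ", c.CurrentSlot())
	fmt.Println("current epoch:", c.CurrentEpoch()) // epoch derived from the current slot
	fmt.Printf("validators root: %#x\n", c.GenesisValidatorsRootSlice()) // [32]byte root as a []byte
}
```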
@@ -119,7 +119,6 @@ go_library(
         "//math:go_default_library",
         "//monitoring/tracing:go_default_library",
         "//monitoring/tracing/trace:go_default_library",
-        "//network/forks:go_default_library",
         "//proto/prysm/v1alpha1:go_default_library",
         "//proto/prysm/v1alpha1/attestation:go_default_library",
         "//proto/prysm/v1alpha1/metadata:go_default_library",
@@ -250,7 +249,6 @@ go_test(
         "//crypto/rand:go_default_library",
         "//encoding/bytesutil:go_default_library",
         "//encoding/ssz/equality:go_default_library",
-        "//network/forks:go_default_library",
         "//proto/engine/v1:go_default_library",
         "//proto/prysm/v1alpha1:go_default_library",
         "//proto/prysm/v1alpha1/attestation:go_default_library",
@@ -35,7 +35,6 @@ go_library(
         "//consensus-types/primitives:go_default_library",
         "//crypto/bls:go_default_library",
         "//encoding/bytesutil:go_default_library",
-        "//network/forks:go_default_library",
         "//proto/dbval:go_default_library",
         "//proto/prysm/v1alpha1:go_default_library",
         "//runtime:go_default_library",
@@ -80,7 +79,6 @@ go_test(
         "//consensus-types/primitives:go_default_library",
         "//crypto/bls:go_default_library",
         "//encoding/bytesutil:go_default_library",
-        "//network/forks:go_default_library",
         "//proto/dbval:go_default_library",
         "//runtime/interop:go_default_library",
         "//testing/require:go_default_library",
@@ -90,7 +90,10 @@ func (bs *blobSync) validateNext(rb blocks.ROBlob) error {
 	if err := v.SidecarKzgProofVerified(); err != nil {
 		return err
 	}
-	if err := bs.store.Persist(bs.current, rb); err != nil {
+
+	sc := blocks.NewSidecarFromBlobSidecar(rb)
+
+	if err := bs.store.Persist(bs.current, sc); err != nil {
 		return err
 	}
@@ -9,7 +9,6 @@ import (
 	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v6/crypto/bls"
 	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
 	"github.com/OffchainLabs/prysm/v6/runtime/version"
 	"github.com/OffchainLabs/prysm/v6/time/slots"
 	"github.com/pkg/errors"
@@ -107,8 +106,7 @@ func (vr verifier) blockSignatureBatch(b blocks.ROBlock) (*bls.SignatureBatch, e
 }
 
 func newBackfillVerifier(vr []byte, keys [][fieldparams.BLSPubkeyLength]byte) (*verifier, error) {
-	dc, err := newDomainCache(vr, params.BeaconConfig().DomainBeaconProposer,
-		forks.NewOrderedSchedule(params.BeaconConfig()))
+	dc, err := newDomainCache(vr, params.BeaconConfig().DomainBeaconProposer)
 	if err != nil {
 		return nil, err
 	}
@@ -122,33 +120,31 @@ func newBackfillVerifier(vr []byte, keys [][fieldparams.BLSPubkeyLength]byte) (*
 
 // domainCache provides a fast signing domain lookup by epoch.
 type domainCache struct {
-	fsched      forks.OrderedSchedule
 	forkDomains map[[4]byte][]byte
 	dType       [bls.DomainByteLength]byte
 }
 
-func newDomainCache(vRoot []byte, dType [bls.DomainByteLength]byte, fsched forks.OrderedSchedule) (*domainCache, error) {
+func newDomainCache(vRoot []byte, dType [bls.DomainByteLength]byte) (*domainCache, error) {
 	dc := &domainCache{
-		fsched:      fsched,
 		forkDomains: make(map[[4]byte][]byte),
 		dType:       dType,
 	}
-	for _, entry := range fsched {
-		d, err := signing.ComputeDomain(dc.dType, entry.Version[:], vRoot)
+	for _, entry := range params.SortedForkSchedule() {
+		d, err := signing.ComputeDomain(dc.dType, entry.ForkVersion[:], vRoot)
 		if err != nil {
-			return nil, errors.Wrapf(err, "failed to pre-compute signing domain for fork version=%#x", entry.Version)
+			return nil, errors.Wrapf(err, "failed to pre-compute signing domain for fork version=%#x", entry.ForkVersion)
 		}
-		dc.forkDomains[entry.Version] = d
+		dc.forkDomains[entry.ForkVersion] = d
 	}
 	return dc, nil
 }
 
 func (dc *domainCache) forEpoch(e primitives.Epoch) ([]byte, error) {
-	fork, err := dc.fsched.VersionForEpoch(e)
+	fork, err := params.Fork(e)
 	if err != nil {
 		return nil, err
 	}
-	d, ok := dc.forkDomains[fork]
+	d, ok := dc.forkDomains[[4]byte(fork.CurrentVersion)]
 	if !ok {
 		return nil, errors.Wrapf(errUnknownDomain, "fork version=%#x, epoch=%d", fork, e)
 	}
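With the schedule argument gone, the cache is built entirely from params.SortedForkSchedule(). Since newDomainCache and forEpoch are unexported, here is a self-contained, generic sketch of the same pattern with simplified stand-in types; it is an illustration of the technique, not Prysm's API:

```go
package domaincache

// entry is a simplified stand-in for a fork-schedule entry.
type entry struct {
	Epoch   uint64
	Version [4]byte
}

type cache struct {
	domains map[[4]byte][]byte
	sched   []entry // assumed sorted ascending by Epoch, like params.SortedForkSchedule()
}

// newCache precomputes one signing domain per fork version up front, so that
// per-epoch lookups never recompute a domain.
func newCache(sched []entry, compute func(version [4]byte) []byte) *cache {
	c := &cache{domains: make(map[[4]byte][]byte), sched: sched}
	for _, e := range sched {
		c.domains[e.Version] = compute(e.Version)
	}
	return c
}

// forEpoch picks the latest fork whose activation epoch is <= the requested
// epoch, then answers from the precomputed map.
func (c *cache) forEpoch(epoch uint64) ([]byte, bool) {
	var v [4]byte
	found := false
	for _, e := range c.sched {
		if e.Epoch <= epoch {
			v = e.Version
			found = true
		}
	}
	if !found {
		return nil, false
	}
	d, ok := c.domains[v]
	return d, ok
}
```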
@@ -12,7 +12,6 @@ import (
 	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v6/crypto/bls"
 	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
-	"github.com/OffchainLabs/prysm/v6/network/forks"
 	"github.com/OffchainLabs/prysm/v6/runtime/interop"
 	"github.com/OffchainLabs/prysm/v6/testing/require"
 	"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -30,18 +29,17 @@ func TestDomainCache(t *testing.T) {
 	}
 
 	vRoot, err := hexutil.Decode("0x0011223344556677889900112233445566778899001122334455667788990011")
 	require.NoError(t, err)
 	dType := cfg.DomainBeaconProposer
-	require.NoError(t, err)
 	require.Equal(t, 32, len(vRoot))
-	fsched := forks.NewOrderedSchedule(cfg)
-	dc, err := newDomainCache(vRoot, dType, fsched)
+	dc, err := newDomainCache(vRoot, dType)
 	require.NoError(t, err)
-	require.Equal(t, len(fsched), len(dc.forkDomains))
-	for i := range fsched {
-		e := fsched[i].Epoch
-		ad, err := dc.forEpoch(e)
+	schedule := params.SortedForkSchedule()
+	require.Equal(t, len(schedule), len(dc.forkDomains))
+	for _, entry := range schedule {
+		ad, err := dc.forEpoch(entry.Epoch)
 		require.NoError(t, err)
-		ed, err := signing.ComputeDomain(dType, fsched[i].Version[:], vRoot)
+		ed, err := signing.ComputeDomain(dType, entry.ForkVersion[:], vRoot)
 		require.NoError(t, err)
 		require.DeepEqual(t, ed, ad)
 	}