mirror of https://github.com/OffchainLabs/prysm.git
synced 2026-01-10 05:47:59 -05:00
Compare commits
19 Commits
reduce-loo...remove_pro
- 25e695f1d0
- 37a91f7d9f
- 6eed6686eb
- 75dea214ac
- 4374e709cb
- be300f80bd
- 096cba5b2d
- d5127233e4
- 3d35cc20ec
- 1e658530a7
- b360794c9c
- 0fc9ab925a
- dda5ee3334
- 14c67376c3
- 9c8b68a66d
- a3210157e2
- 1536d59e30
- 11e46a4560
- 5a2e51b894
3 .github/PULL_REQUEST_TEMPLATE.md vendored
@@ -34,4 +34,5 @@ Fixes #
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).

@@ -193,6 +193,7 @@ nogo(
    "//tools/analyzers/featureconfig:go_default_library",
    "//tools/analyzers/gocognit:go_default_library",
    "//tools/analyzers/ineffassign:go_default_library",
    "//tools/analyzers/httperror:go_default_library",
    "//tools/analyzers/interfacechecker:go_default_library",
    "//tools/analyzers/logcapitalization:go_default_library",
    "//tools/analyzers/logruswitherror:go_default_library",
85 CHANGELOG.md
@@ -4,6 +4,91 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10

This release includes several key features/fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.

Release highlights:

- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration custodies enough data columns to reconstruct blobs. Use the `--semi-supernode` flag to custody at least 50% of the data columns.
- Critical fixes in attestation processing.

A post-mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.

### Added

- Add Fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- Prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)

### Changed

- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Changed the logging behaviour of updated fee recipients to log only the count of validators at DEBUG level and all validator indices at TRACE level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)

### Removed

- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (no longer in the specification, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (no longer in the specification). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)

### Fixed

- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check the JWT secret length is exactly 256 bits (32 bytes) as per the Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metric cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- Fixed E2E tests to be able to start from the Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)

## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08

This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.

A post-mortem doc with full details is expected to be published later this week.

### Changed

- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)

### Fixed

- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)

## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10

This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.
@@ -335,9 +335,6 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
    if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
        return errors.Wrap(err, "could not update committee cache")
    }
    if err := helpers.UpdateProposerIndicesInCache(ctx, st, e); err != nil {
        return errors.Wrap(err, "could not update proposer index cache")
    }
    go func(ep primitives.Epoch) {
        // Use a custom deadline here, since this method runs asynchronously.
        // We ignore the parent method's context and instead create a new one
@@ -348,26 +345,6 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
            log.WithError(err).Warn("Could not update committee cache")
        }
    }(e)
    // The latest block header is from the previous epoch
    r, err := st.LatestBlockHeader().HashTreeRoot()
    if err != nil {
        log.WithError(err).Error("Could not update proposer index state-root map")
        return nil
    }
    // The proposer indices cache takes the target root for the previous
    // epoch as key
    if e > 0 {
        e = e - 1
    }
    target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
    if err != nil {
        log.WithError(err).Error("Could not update proposer index state-root map")
        return nil
    }
    err = helpers.UpdateCachedCheckpointToStateRoot(st, &forkchoicetypes.Checkpoint{Epoch: e, Root: target})
    if err != nil {
        log.WithError(err).Error("Could not update proposer index state-root map")
    }
    return nil
}

@@ -15,7 +15,6 @@ import (
    statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
    coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/db"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"

@@ -397,10 +396,6 @@ func (s *Service) initializeBeaconChain(
    if err := helpers.UpdateCommitteeCache(ctx, genesisState, 0); err != nil {
        return nil, err
    }
    if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState, coreTime.CurrentEpoch(genesisState)); err != nil {
        return nil, err
    }

    s.cfg.AttService.SetGenesisTime(genesisState.GenesisTime())

    return genesisState, nil
5 beacon-chain/cache/BUILD.bazel vendored
@@ -17,9 +17,6 @@ go_library(
        "error.go",
        "interfaces.go",
        "payload_id.go",
        "proposer_indices.go",
        "proposer_indices_disabled.go", # keep
        "proposer_indices_type.go",
        "registration.go",
        "skip_slot_cache.go",
        "subnet_ids.go",
@@ -40,7 +37,6 @@ go_library(
        "//beacon-chain/operations/attestations/attmap:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//cache/lru:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//container/slice:go_default_library",
@@ -77,7 +73,6 @@ go_test(
        "committee_test.go",
        "payload_id_test.go",
        "private_access_test.go",
        "proposer_indices_test.go",
        "registration_test.go",
        "skip_slot_cache_test.go",
        "subnet_ids_test.go",
122 beacon-chain/cache/proposer_indices.go vendored
@@ -1,122 +0,0 @@
//go:build !fuzz

package cache

import (
    "sync"

    forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    // ProposerIndicesCacheMiss tracks the number of proposerIndices requests that aren't present in the cache.
    ProposerIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
        Name: "proposer_indices_cache_miss",
        Help: "The number of proposer indices requests that aren't present in the cache.",
    })
    // ProposerIndicesCacheHit tracks the number of proposerIndices requests that are in the cache.
    ProposerIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
        Name: "proposer_indices_cache_hit",
        Help: "The number of proposer indices requests that are present in the cache.",
    })
)

// ProposerIndicesCache keeps track of the proposer indices in the next two
// epochs. It is keyed by the state root of the last epoch before. That is, for
// blocks during epoch 2, for example slot 65, it will be keyed by the state
// root of slot 63 (last slot in epoch 1).
// The cache keeps two sets of indices computed, the "safe" set is computed
// right before the epoch transition into the current epoch. For example for
// epoch 2 we will compute this list after importing block 63. The "unsafe"
// version is computed an epoch in advance, for example for epoch 3, it will be
// computed after importing block 63.
//
// The cache also keeps a map from checkpoints to state roots so that one is
// able to access the proposer indices list from a checkpoint instead. The
// checkpoint is the checkpoint for the epoch previous to the requested
// proposer indices. That is, for a slot in epoch 2 (eg. 65), the checkpoint
// root would be for slot 32 if present.
type ProposerIndicesCache struct {
    sync.Mutex
    indices map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
    rootMap map[forkchoicetypes.Checkpoint][32]byte // A map from checkpoint root to state root
}

// NewProposerIndicesCache returns a newly created cache
func NewProposerIndicesCache() *ProposerIndicesCache {
    return &ProposerIndicesCache{
        indices: make(map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex),
        rootMap: make(map[forkchoicetypes.Checkpoint][32]byte),
    }
}

// ProposerIndices returns the proposer indices (safe) for the given root
func (p *ProposerIndicesCache) ProposerIndices(epoch primitives.Epoch, root [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
    p.Lock()
    defer p.Unlock()
    inner, ok := p.indices[epoch]
    if !ok {
        ProposerIndicesCacheMiss.Inc()
        return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
    }
    indices, exists := inner[root]
    if exists {
        ProposerIndicesCacheHit.Inc()
    } else {
        ProposerIndicesCacheMiss.Inc()
    }
    return indices, exists
}

// Prune resets the ProposerIndicesCache to its initial state
func (p *ProposerIndicesCache) Prune(epoch primitives.Epoch) {
    p.Lock()
    defer p.Unlock()
    for key := range p.indices {
        if key < epoch {
            delete(p.indices, key)
        }
    }
    for key := range p.rootMap {
        if key.Epoch+1 < epoch {
            delete(p.rootMap, key)
        }
    }
}

// Set sets the proposer indices for the given root as key
func (p *ProposerIndicesCache) Set(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
    p.Lock()
    defer p.Unlock()

    inner, ok := p.indices[epoch]
    if !ok {
        inner = make(map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex)
        p.indices[epoch] = inner
    }
    inner[root] = indices
}

// SetCheckpoint updates the map from checkpoints to state roots
func (p *ProposerIndicesCache) SetCheckpoint(c forkchoicetypes.Checkpoint, root [32]byte) {
    p.Lock()
    defer p.Unlock()
    p.rootMap[c] = root
}

// IndicesFromCheckpoint returns the proposer indices from a checkpoint rather than the state root
func (p *ProposerIndicesCache) IndicesFromCheckpoint(c forkchoicetypes.Checkpoint) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
    p.Lock()
    emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
    root, ok := p.rootMap[c]
    p.Unlock()
    if !ok {
        ProposerIndicesCacheMiss.Inc()
        return emptyIndices, ok
    }
    return p.ProposerIndices(c.Epoch+1, root)
}
63 beacon-chain/cache/proposer_indices_disabled.go vendored
@@ -1,63 +0,0 @@
//go:build fuzz

// This file is used in fuzzer builds to bypass proposer indices caches.
package cache

import (
    forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    // ProposerIndicesCacheMiss tracks the number of proposerIndices requests that aren't present in the cache.
    ProposerIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
        Name: "proposer_indices_cache_miss",
        Help: "The number of proposer indices requests that aren't present in the cache.",
    })
    // ProposerIndicesCacheHit tracks the number of proposerIndices requests that are in the cache.
    ProposerIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
        Name: "proposer_indices_cache_hit",
        Help: "The number of proposer indices requests that are present in the cache.",
    })
)

// FakeProposerIndicesCache is a struct with 1 queue for looking up proposer indices by root.
type FakeProposerIndicesCache struct {
}

// NewProposerIndicesCache creates a new proposer indices cache for storing/accessing proposer index assignments of an epoch.
func NewProposerIndicesCache() *FakeProposerIndicesCache {
    return &FakeProposerIndicesCache{}
}

// ProposerIndices is a stub.
func (c *FakeProposerIndicesCache) ProposerIndices(_ primitives.Epoch, _ [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
    return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}

// UnsafeProposerIndices is a stub.
func (c *FakeProposerIndicesCache) UnsafeProposerIndices(_ primitives.Epoch, _ [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
    return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}

// Prune is a stub.
func (p *FakeProposerIndicesCache) Prune(epoch primitives.Epoch) {}

// Set is a stub.
func (p *FakeProposerIndicesCache) Set(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
}

// SetUnsafe is a stub.
func (p *FakeProposerIndicesCache) SetUnsafe(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
}

// SetCheckpoint is a stub.
func (p *FakeProposerIndicesCache) SetCheckpoint(c forkchoicetypes.Checkpoint, root [32]byte) {}

// IndicesFromCheckpoint is a stub.
func (p *FakeProposerIndicesCache) IndicesFromCheckpoint(_ forkchoicetypes.Checkpoint) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
    return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}
105 beacon-chain/cache/proposer_indices_test.go vendored
@@ -1,105 +0,0 @@
//go:build !fuzz

package cache

import (
    "testing"

    forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v7/testing/require"
)

func TestProposerCache_Set(t *testing.T) {
    cache := NewProposerIndicesCache()
    bRoot := [32]byte{'A'}
    indices, ok := cache.ProposerIndices(0, bRoot)
    require.Equal(t, false, ok)
    emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
    require.Equal(t, indices, emptyIndices, "Expected committee count not to exist in empty cache")
    emptyIndices[0] = 1
    cache.Set(0, bRoot, emptyIndices)

    received, ok := cache.ProposerIndices(0, bRoot)
    require.Equal(t, true, ok)
    require.Equal(t, received, emptyIndices)

    newRoot := [32]byte{'B'}
    copy(emptyIndices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
    cache.Set(0, newRoot, emptyIndices)

    received, ok = cache.ProposerIndices(0, newRoot)
    require.Equal(t, true, ok)
    require.Equal(t, emptyIndices, received)
}

func TestProposerCache_CheckpointAndPrune(t *testing.T) {
    cache := NewProposerIndicesCache()
    indices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
    copy(indices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
    for i := 1; i < 10; i++ {
        root := [32]byte{byte(i)}
        cache.Set(primitives.Epoch(i), root, indices)
        cpRoot := [32]byte{byte(i - 1)}
        cache.SetCheckpoint(forkchoicetypes.Checkpoint{Epoch: primitives.Epoch(i - 1), Root: cpRoot}, root)
    }
    received, ok := cache.ProposerIndices(1, [32]byte{1})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.ProposerIndices(4, [32]byte{4})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.ProposerIndices(9, [32]byte{9})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    cache.Prune(5)

    emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
    received, ok = cache.ProposerIndices(1, [32]byte{1})
    require.Equal(t, false, ok)
    require.Equal(t, emptyIndices, received)

    received, ok = cache.ProposerIndices(4, [32]byte{4})
    require.Equal(t, false, ok)
    require.Equal(t, emptyIndices, received)

    received, ok = cache.ProposerIndices(9, [32]byte{9})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: [32]byte{0}})
    require.Equal(t, false, ok)
    require.Equal(t, emptyIndices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
    require.Equal(t, false, ok)
    require.Equal(t, emptyIndices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)

    received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
    require.Equal(t, true, ok)
    require.Equal(t, indices, received)
}
11 beacon-chain/cache/proposer_indices_type.go vendored
@@ -1,11 +0,0 @@
package cache

import (
    "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)

// ProposerIndices defines the cached struct for proposer indices.
type ProposerIndices struct {
    BlockRoot       [32]byte
    ProposerIndices []primitives.ValidatorIndex
}

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
    voteCount := uint64(0)

    for _, vote := range beaconState.Eth1DataVotes() {
        if AreEth1DataEqual(vote, data.Copy()) {
        if AreEth1DataEqual(vote, data) {
            voteCount++
        }
    }
@@ -23,7 +23,6 @@ go_library(
    deps = [
        "//beacon-chain/cache:go_default_library",
        "//beacon-chain/core/time:go_default_library",
        "//beacon-chain/forkchoice/types:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
@@ -72,7 +71,6 @@ go_test(
    deps = [
        "//beacon-chain/cache:go_default_library",
        "//beacon-chain/core/time:go_default_library",
        "//beacon-chain/forkchoice/types:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//beacon-chain/state/state-native:go_default_library",
        "//config/fieldparams:go_default_library",

@@ -10,9 +10,7 @@ import (
    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
    forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/state"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/config/params"
    "github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v7/container/slice"
@@ -27,8 +25,7 @@ import (
)

var (
    committeeCache       = cache.NewCommitteesCache()
    proposerIndicesCache = cache.NewProposerIndicesCache()
    committeeCache = cache.NewCommitteesCache()
)

type beaconCommitteeFunc = func(
@@ -528,75 +525,6 @@ func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState,
    return nil
}

// UpdateProposerIndicesInCache updates proposer indices entry of the committee cache.
// Input state is used to retrieve active validator indices.
// Input root is to use as key in the cache.
// Input epoch is the epoch to retrieve proposer indices for.
func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
    // The cache uses the state root at the end of (current epoch - 1) as key.
    // (e.g. for epoch 2, the key is root at slot 63)
    if epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
        return nil
    }
    slot, err := slots.EpochEnd(epoch - 1)
    if err != nil {
        return err
    }
    root, err := StateRootAtSlot(state, slot)
    if err != nil {
        return err
    }
    var proposerIndices []primitives.ValidatorIndex
    // use the state if post fulu (EIP-7917)
    if state.Version() >= version.Fulu {
        lookAhead, err := state.ProposerLookahead()
        if err != nil {
            return errors.Wrap(err, "could not get proposer lookahead")
        }
        proposerIndices = lookAhead[:params.BeaconConfig().SlotsPerEpoch]
    } else {
        // Skip cache update if the key already exists
        _, ok := proposerIndicesCache.ProposerIndices(epoch, [32]byte(root))
        if ok {
            return nil
        }
        indices, err := ActiveValidatorIndices(ctx, state, epoch)
        if err != nil {
            return err
        }
        proposerIndices, err = PrecomputeProposerIndices(state, indices, epoch)
        if err != nil {
            return err
        }
        if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
            return errors.New("invalid proposer length returned from state")
        }
    }
    // This is here to deal with tests only
    var indicesArray [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
    copy(indicesArray[:], proposerIndices)
    proposerIndicesCache.Prune(epoch - 2)
    proposerIndicesCache.Set(epoch, [32]byte(root), indicesArray)
    return nil
}

// UpdateCachedCheckpointToStateRoot updates the map from checkpoints to state root in the proposer indices cache
func UpdateCachedCheckpointToStateRoot(state state.ReadOnlyBeaconState, cp *forkchoicetypes.Checkpoint) error {
    if cp.Epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
        return nil
    }
    slot, err := slots.EpochEnd(cp.Epoch)
    if err != nil {
        return err
    }
    root, err := state.StateRootAtIndex(uint64(slot % params.BeaconConfig().SlotsPerHistoricalRoot))
    if err != nil {
        return err
    }
    proposerIndicesCache.SetCheckpoint(*cp, [32]byte(root))
    return nil
}

// ExpandCommitteeCache resizes the cache to a higher limit.
func ExpandCommitteeCache() {
    committeeCache.ExpandCommitteeCache()
@@ -610,7 +538,6 @@ func CompressCommitteeCache() {
// ClearCache clears the beacon committee cache and sync committee cache.
func ClearCache() {
    committeeCache.Clear()
    proposerIndicesCache.Prune(0)
    syncCommitteeCache.Clear()
    balanceCache.Clear()
}

@@ -11,7 +11,3 @@ func CommitteeCache() *cache.FakeCommitteeCache {
func SyncCommitteeCache() *cache.FakeSyncCommitteeCache {
    return syncCommitteeCache
}

func ProposerIndicesCache() *cache.FakeProposerIndicesCache {
    return proposerIndicesCache
}

@@ -11,7 +11,3 @@ func CommitteeCache() *cache.CommitteeCache {
func SyncCommitteeCache() *cache.SyncCommitteeCache {
    return syncCommitteeCache
}

func ProposerIndicesCache() *cache.ProposerIndicesCache {
    return proposerIndicesCache
}
@@ -7,7 +7,6 @@ import (

    "github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
    forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/state"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/config/params"
@@ -152,7 +151,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
    }

    if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
        return nil, errors.Wrap(err, "could not update committee cache")
        log.WithError(err).Error("Could not update committee cache")
    }

    return indices, nil
@@ -273,32 +272,6 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
    return BeaconProposerIndexAtSlot(ctx, state, state.Slot())
}

// cachedProposerIndexAtSlot returns the proposer index at the given slot from
// the cache at the given root key.
func cachedProposerIndexAtSlot(slot primitives.Slot, root [32]byte) (primitives.ValidatorIndex, error) {
    proposerIndices, has := proposerIndicesCache.ProposerIndices(slots.ToEpoch(slot), root)
    if !has {
        return 0, errProposerIndexMiss
    }
    if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
        return 0, errProposerIndexMiss
    }
    return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}

// ProposerIndexAtSlotFromCheckpoint returns the proposer index at the given
// slot from the cache at the given checkpoint
func ProposerIndexAtSlotFromCheckpoint(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, error) {
    proposerIndices, has := proposerIndicesCache.IndicesFromCheckpoint(*c)
    if !has {
        return 0, errProposerIndexMiss
    }
    if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
        return 0, errProposerIndexMiss
    }
    return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}

func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
    e := slots.ToEpoch(slot)
    stateEpoch := slots.ToEpoch(state.Slot())
@@ -329,32 +302,6 @@ func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconSt
        return beaconProposerIndexAtSlotFulu(state, slot)
    }
}
    // The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
    // For simplicity, the node will skip caching of genesis epoch. If the passed state has not yet reached this slot then we do not check the cache.
    if e <= stateEpoch && e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
        s, err := slots.EpochEnd(e - 1)
        if err != nil {
            return 0, err
        }
        r, err := StateRootAtSlot(state, s)
        if err != nil {
            return 0, err
        }
        if r != nil && !bytes.Equal(r, params.BeaconConfig().ZeroHash[:]) {
            pid, err := cachedProposerIndexAtSlot(slot, [32]byte(r))
            if err == nil {
                return pid, nil
            }
            if err := UpdateProposerIndicesInCache(ctx, state, e); err != nil {
                return 0, errors.Wrap(err, "could not update proposer index cache")
            }
            pid, err = cachedProposerIndexAtSlot(slot, [32]byte(r))
            if err == nil {
                return pid, nil
            }
        }
    }

    seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconProposer)
    if err != nil {
        return 0, errors.Wrap(err, "could not generate seed")
@@ -7,7 +7,6 @@ import (
    "github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
    forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
    state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
    fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
    "github.com/OffchainLabs/prysm/v7/config/params"
@@ -878,23 +877,6 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
    require.Equal(t, index, primitives.ValidatorIndex(3))
}

func TestProposerIndexFromCheckpoint(t *testing.T) {
    helpers.ClearCache()

    e := primitives.Epoch(2)
    r := [32]byte{'a'}
    root := [32]byte{'b'}
    ids := [32]primitives.ValidatorIndex{}
    slot := primitives.Slot(69) // slot 5 in the Epoch
    ids[5] = primitives.ValidatorIndex(19)
    helpers.ProposerIndicesCache().Set(e, r, ids)
    c := &forkchoicetypes.Checkpoint{Root: root, Epoch: e - 1}
    helpers.ProposerIndicesCache().SetCheckpoint(*c, r)
    id, err := helpers.ProposerIndexAtSlotFromCheckpoint(c, slot)
    require.NoError(t, err)
    require.Equal(t, ids[5], id)
}

func TestHasETH1WithdrawalCredentials(t *testing.T) {
    creds := []byte{0xFA, 0xCC}
    v := &ethpb.Validator{WithdrawalCredentials: creds}
95 beacon-chain/graffiti/graffiti-proposal-brief.md Normal file
@@ -0,0 +1,95 @@
# Graffiti Version Info Implementation

## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.

More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md

## Implementation

### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"

type GraffitiInfo struct {
    mu           sync.RWMutex
    userGraffiti string // From --graffiti flag (set once at startup)
    clCommit     string // From version.GetCommitPrefix() helper function
    elCode       string // From engine_getClientVersionV1
    elCommit     string // From engine_getClientVersionV1
}
```

### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server.
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields (see the setter sketch after this list).
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
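
A minimal sketch of the setter used in step 3, assuming the `GraffitiInfo` fields above; its name and signature follow the `UpdateFromEngine(versions[0].Code, versions[0].Commit)` call shown under Update Logic below:

```go
// UpdateFromEngine stores the execution client's two-letter code and commit
// prefix as reported by engine_getClientVersionV1. Safe for concurrent use.
func (g *GraffitiInfo) UpdateFromEngine(code, commit string) {
    g.mu.Lock()
    defer g.mu.Unlock()
    g.elCode = code
    g.elCommit = commit
}
```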

### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):

| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |

A sketch of the packing logic (the commit fields are assumed to already hold short hex prefixes):

```go
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
    g.mu.RLock()
    defer g.mu.RUnlock()

    available := 32 - len(g.userGraffiti)

    elCommit4, clCommit4 := g.elCommit, g.clCommit
    elCommit2, clCommit2 := elCommit4, clCommit4
    if len(elCommit2) > 2 {
        elCommit2 = elCommit2[:2]
    }
    if len(clCommit2) > 2 {
        clCommit2 = clCommit2[:2]
    }
    if g.elCode == "" {
        elCommit2, elCommit4 = "", ""
    }

    var packed string
    switch {
    case available >= 12:
        packed = g.elCode + elCommit4 + clCode + clCommit4 + g.userGraffiti
    case available >= 8:
        packed = g.elCode + elCommit2 + clCode + clCommit2 + g.userGraffiti
    case available >= 4:
        packed = g.elCode + clCode + g.userGraffiti
    case available >= 2:
        packed = g.elCode + g.userGraffiti
    default:
        packed = g.userGraffiti
    }

    var graffiti [32]byte
    copy(graffiti[:], packed)
    return graffiti
}
```
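
As a usage illustration, a hypothetical test of the widest case (assuming the struct and sketch above; it would live in `graffiti_info_test.go`):

```go
import (
    "bytes"
    "testing"
)

func TestGenerateGraffiti_FullInfo(t *testing.T) {
    g := &GraffitiInfo{
        userGraffiti: "Bob",
        clCommit:     "63af",
        elCode:       "GE",
        elCommit:     "168d",
    }
    // 32 - len("Bob") leaves 29 bytes of space, so the >=12-byte format applies.
    got := g.GenerateGraffiti()
    packed := string(bytes.TrimRight(got[:], "\x00"))
    if packed != "GE168dPR63afBob" { // matches the first row of the table above
        t.Fatalf("graffiti = %q, want %q", packed, "GE168dPR63afBob")
    }
}
```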

### Update Logic
Single testable function in the execution service:
```go
func (s *Service) updateGraffitiInfo(ctx context.Context) {
    versions, err := s.GetClientVersion(ctx)
    if err != nil {
        return // Keep last good value
    }
    if len(versions) == 1 {
        s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
    }
}
```

A goroutine calls this on `slot % 8 == 4` timing (4 times per epoch, avoiding slot boundaries); see the sketch below.
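
A minimal sketch of that driver loop, assuming a hypothetical per-slot tick channel (`slotTicker` is illustrative; the real wiring in `service.go` may differ):

```go
func (s *Service) runGraffitiUpdater(ctx context.Context, slotTicker <-chan primitives.Slot) {
    for {
        select {
        case <-ctx.Done():
            return
        case slot := <-slotTicker:
            // Fires at slots 4, 12, 20, and 28 of each 32-slot epoch,
            // keeping the engine call away from epoch boundaries.
            if slot%8 == 4 {
                s.updateGraffitiInfo(ctx)
            }
        }
    }
}
```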

### File Changes Required

**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts the first 4 hex chars from the git commit injected via Bazel ldflags at build time (sketched below)
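
A hedged sketch of that helper, assuming the ldflags-injected commit string lives in a package variable named `gitCommit` (the actual variable name in `runtime/version` may differ):

```go
// GetCommitPrefix returns the first 4 hex characters of the build-time git
// commit, or "" when no commit was injected into the binary.
func GetCommitPrefix() string {
    if len(gitCommit) < 4 { // gitCommit: assumed ldflags-injected variable
        return ""
    }
    return gitCommit[:4]
}
```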

**Modified:**
- `beacon-chain/execution/service.go` - Add the goroutine + `updateGraffitiInfo()`
- `beacon-chain/execution/engine_client.go` - Add `GetClientVersion()` method that makes the engine call
- `beacon-chain/rpc/.../validator/proposer.go` - Call `GenerateGraffiti()`
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services

### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety)
- Unit test updateGraffitiInfo() with mocked engine client
@@ -711,6 +711,7 @@ func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Reques
    versionHeader := r.Header.Get(api.VersionHeader)
    if versionHeader == "" {
        httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
        return
    }
    v, err := version.FromString(versionHeader)
    if err != nil {
@@ -2112,6 +2112,33 @@ func TestSubmitAttesterSlashingsV2(t *testing.T) {
        assert.Equal(t, http.StatusBadRequest, e.Code)
        assert.StringContains(t, "Invalid attester slashing", e.Message)
    })

    t.Run("missing-version-header", func(t *testing.T) {
        bs, err := util.NewBeaconStateElectra()
        require.NoError(t, err)

        broadcaster := &p2pMock.MockBroadcaster{}
        s := &Server{
            ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
            SlashingsPool:    &slashingsmock.PoolMock{},
            Broadcaster:      broadcaster,
        }

        var body bytes.Buffer
        _, err = body.WriteString(invalidAttesterSlashing)
        require.NoError(t, err)
        request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
        // Intentionally do not set api.VersionHeader to verify missing header handling.
        writer := httptest.NewRecorder()
        writer.Body = &bytes.Buffer{}

        s.SubmitAttesterSlashingsV2(writer, request)
        require.Equal(t, http.StatusBadRequest, writer.Code)
        e := &httputil.DefaultJsonError{}
        require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
        assert.Equal(t, http.StatusBadRequest, e.Code)
        assert.StringContains(t, api.VersionHeader+" header is required", e.Message)
    })
}

func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {
@@ -654,6 +654,10 @@ func (m *futureSyncMockFetcher) StateBySlot(context.Context, primitives.Slot) (s
    return m.BeaconState, nil
}

func (m *futureSyncMockFetcher) StateByEpoch(context.Context, primitives.Epoch) (state.BeaconState, error) {
    return m.BeaconState, nil
}

func TestGetSyncCommittees_Future(t *testing.T) {
    st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().SyncCommitteeSize)
    syncCommittee := make([][]byte, params.BeaconConfig().SyncCommitteeSize)
@@ -116,6 +116,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
    for _, update := range updates {
        if ctx.Err() != nil {
            httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
            return
        }

        updateSlot := update.AttestedHeader().Beacon().Slot
@@ -131,12 +132,15 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
        chunkLength = ssz.MarshalUint64(chunkLength, uint64(len(updateSSZ)+4))
        if _, err := w.Write(chunkLength); err != nil {
            httputil.HandleError(w, "Could not write chunk length: "+err.Error(), http.StatusInternalServerError)
            return
        }
        if _, err := w.Write(updateEntry.ForkDigest[:]); err != nil {
            httputil.HandleError(w, "Could not write fork digest: "+err.Error(), http.StatusInternalServerError)
            return
        }
        if _, err := w.Write(updateSSZ); err != nil {
            httputil.HandleError(w, "Could not write update SSZ: "+err.Error(), http.StatusInternalServerError)
            return
        }
    }
} else {
@@ -145,6 +149,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
    for _, update := range updates {
        if ctx.Err() != nil {
            httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
            return
        }

        updateJson, err := structs.LightClientUpdateFromConsensus(update)
@@ -132,6 +132,7 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
    optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
    if err != nil {
        httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
        return
    }
    if s.SyncChecker.Synced() && !optimistic {
        return

@@ -228,7 +228,7 @@ func (s *Server) attRewardsState(w http.ResponseWriter, r *http.Request) (state.
    }
    st, err := s.Stater.StateBySlot(r.Context(), nextEpochEnd)
    if err != nil {
        httputil.HandleError(w, "Could not get state for epoch's starting slot: "+err.Error(), http.StatusInternalServerError)
        shared.WriteStateFetchError(w, err)
        return nil, false
    }
    return st, true
@@ -19,7 +19,6 @@ go_library(
        "//beacon-chain/cache:go_default_library",
        "//beacon-chain/core/feed/operation:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
        "//beacon-chain/db:go_default_library",
        "//beacon-chain/operations/attestations:go_default_library",
        "//beacon-chain/operations/synccommittee:go_default_library",
@@ -78,6 +77,7 @@ go_test(
        "//beacon-chain/rpc/core:go_default_library",
        "//beacon-chain/rpc/eth/rewards/testing:go_default_library",
        "//beacon-chain/rpc/eth/shared/testing:go_default_library",
        "//beacon-chain/rpc/lookup:go_default_library",
        "//beacon-chain/rpc/testutil:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//beacon-chain/state/stategen:go_default_library",
@@ -19,7 +19,6 @@ import (
    "github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
    rpchelpers "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
    "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
@@ -898,20 +897,15 @@ func (s *Server) GetAttesterDuties(w http.ResponseWriter, r *http.Request) {
        return
    }

    var startSlot primitives.Slot
    // For next epoch requests, we use the current epoch's state since committee
    // assignments for next epoch can be computed from current epoch's state.
    epochForState := requestedEpoch
    if requestedEpoch == nextEpoch {
        startSlot, err = slots.EpochStart(currentEpoch)
    } else {
        startSlot, err = slots.EpochStart(requestedEpoch)
        epochForState = currentEpoch
    }
    st, err := s.Stater.StateByEpoch(ctx, epochForState)
    if err != nil {
        httputil.HandleError(w, fmt.Sprintf("Could not get start slot from epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
        return
    }

    st, err := s.Stater.StateBySlot(ctx, startSlot)
    if err != nil {
        httputil.HandleError(w, "Could not get state: "+err.Error(), http.StatusInternalServerError)
        shared.WriteStateFetchError(w, err)
        return
    }
@@ -1020,39 +1014,11 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
        nextEpochLookahead = true
    }

    epochStartSlot, err := slots.EpochStart(requestedEpoch)
    st, err := s.Stater.StateByEpoch(ctx, requestedEpoch)
    if err != nil {
        httputil.HandleError(w, fmt.Sprintf("Could not get start slot of epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
        shared.WriteStateFetchError(w, err)
        return
    }
    var st state.BeaconState
    // if the requested epoch is new, use the head state and the next slot cache
    if requestedEpoch < currentEpoch {
        st, err = s.Stater.StateBySlot(ctx, epochStartSlot)
        if err != nil {
            httputil.HandleError(w, fmt.Sprintf("Could not get state for slot %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
            return
        }
    } else {
        st, err = s.HeadFetcher.HeadState(ctx)
        if err != nil {
            httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v ", err), http.StatusInternalServerError)
            return
        }
        // Notice that even for Fulu requests for the next epoch, we are only advancing the state to the start of the current epoch.
        if st.Slot() < epochStartSlot {
            headRoot, err := s.HeadFetcher.HeadRoot(ctx)
            if err != nil {
                httputil.HandleError(w, fmt.Sprintf("Could not get head root: %v ", err), http.StatusInternalServerError)
                return
            }
            st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, epochStartSlot)
            if err != nil {
                httputil.HandleError(w, fmt.Sprintf("Could not process slots up to %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
                return
            }
        }
    }
    var assignments map[primitives.ValidatorIndex][]primitives.Slot
    if nextEpochLookahead {
@@ -1103,7 +1069,8 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
        httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
        return
    }
    if !sortProposerDuties(w, duties) {
    if err = sortProposerDuties(duties); err != nil {
        httputil.HandleError(w, "Could not sort proposer duties: "+err.Error(), http.StatusInternalServerError)
        return
    }

@@ -1174,14 +1141,10 @@ func (s *Server) GetSyncCommitteeDuties(w http.ResponseWriter, r *http.Request)
    }

    startingEpoch := min(requestedEpoch, currentEpoch)
    slot, err := slots.EpochStart(startingEpoch)

    st, err := s.Stater.StateByEpoch(ctx, startingEpoch)
    if err != nil {
        httputil.HandleError(w, "Could not get sync committee slot: "+err.Error(), http.StatusInternalServerError)
        return
    }
    st, err := s.Stater.State(ctx, []byte(strconv.FormatUint(uint64(slot), 10)))
    if err != nil {
        httputil.HandleError(w, "Could not get sync committee state: "+err.Error(), http.StatusInternalServerError)
        shared.WriteStateFetchError(w, err)
        return
    }
@@ -1327,7 +1290,7 @@ func (s *Server) GetLiveness(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
st, err = s.Stater.StateBySlot(ctx, epochEnd)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get slot for requested epoch: "+err.Error(), http.StatusInternalServerError)
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return
|
||||
}
|
||||
participation, err = st.CurrentEpochParticipation()
|
||||
@@ -1447,22 +1410,20 @@ func syncCommitteeDutiesAndVals(
|
||||
return duties, vals, nil
|
||||
}
|
||||
|
||||
func sortProposerDuties(w http.ResponseWriter, duties []*structs.ProposerDuty) bool {
|
||||
ok := true
|
||||
func sortProposerDuties(duties []*structs.ProposerDuty) error {
|
||||
var err error
|
||||
sort.Slice(duties, func(i, j int) bool {
|
||||
si, err := strconv.ParseUint(duties[i].Slot, 10, 64)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
|
||||
ok = false
|
||||
si, parseErr := strconv.ParseUint(duties[i].Slot, 10, 64)
|
||||
if parseErr != nil {
|
||||
err = errors.Wrap(parseErr, "could not parse slot")
|
||||
return false
|
||||
}
|
||||
sj, err := strconv.ParseUint(duties[j].Slot, 10, 64)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
|
||||
ok = false
|
||||
sj, parseErr := strconv.ParseUint(duties[j].Slot, 10, 64)
|
||||
if parseErr != nil {
|
||||
err = errors.Wrap(parseErr, "could not parse slot")
|
||||
return false
|
||||
}
|
||||
return si < sj
|
||||
})
|
||||
return ok
|
||||
return err
|
||||
}
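A note on the `sortProposerDuties` change above: a `sort.Slice` comparator cannot return an error, so the rewrite captures a parse failure in a closure variable and reports it after sorting completes, instead of writing an HTTP error from inside the comparator. A minimal standalone sketch of the same pattern (the names and messages here are illustrative, not taken from the PR):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sortBySlot mirrors the pattern above: the comparator records the
// first parse failure in a captured variable and returns false, and
// the caller inspects the error once sorting has finished.
func sortBySlot(slots []string) error {
	var err error
	sort.Slice(slots, func(i, j int) bool {
		si, parseErr := strconv.ParseUint(slots[i], 10, 64)
		if parseErr != nil {
			if err == nil {
				err = fmt.Errorf("could not parse slot %q: %w", slots[i], parseErr)
			}
			return false
		}
		sj, parseErr := strconv.ParseUint(slots[j], 10, 64)
		if parseErr != nil {
			if err == nil {
				err = fmt.Errorf("could not parse slot %q: %w", slots[j], parseErr)
			}
			return false
		}
		return si < sj
	})
	return err
}

func main() {
	s := []string{"33", "2", "17"}
	if err := sortBySlot(s); err != nil {
		fmt.Println("sort failed:", err)
		return
	}
	fmt.Println(s) // [2 17 33]
}
```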
@@ -25,6 +25,7 @@ import (
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/synccommittee"
 	p2pmock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
+	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/lookup"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"

@@ -2006,6 +2007,7 @@ func TestGetAttesterDuties(t *testing.T) {
 		TimeFetcher:           chain,
 		SyncChecker:           &mockSync.Sync{IsSyncing: false},
 		OptimisticModeFetcher: chain,
 		HeadFetcher:           chain,
+		BeaconDB:              db,
 	}

@@ -2184,6 +2186,7 @@ func TestGetAttesterDuties(t *testing.T) {
 		Stater:                &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{0: bs}},
 		TimeFetcher:           chain,
 		OptimisticModeFetcher: chain,
 		HeadFetcher:           chain,
 		SyncChecker:           &mockSync.Sync{IsSyncing: false},
+		BeaconDB:              db,
 	}

@@ -2224,6 +2227,62 @@ func TestGetAttesterDuties(t *testing.T) {
 		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
 		assert.Equal(t, http.StatusServiceUnavailable, e.Code)
 	})
+	t.Run("state not found returns 404", func(t *testing.T) {
+		chainSlot := primitives.Slot(0)
+		chain := &mockChain.ChainService{
+			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
+		}
+		stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
+		s := &Server{
+			Stater:                &testutil.MockStater{CustomError: &stateNotFoundErr},
+			TimeFetcher:           chain,
+			SyncChecker:           &mockSync.Sync{IsSyncing: false},
+			OptimisticModeFetcher: chain,
+			HeadFetcher:           chain,
+		}
+
+		var body bytes.Buffer
+		_, err = body.WriteString("[\"0\"]")
+		require.NoError(t, err)
+		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
+		request.SetPathValue("epoch", "0")
+		writer := httptest.NewRecorder()
+		writer.Body = &bytes.Buffer{}
+
+		s.GetAttesterDuties(writer, request)
+		assert.Equal(t, http.StatusNotFound, writer.Code)
+		e := &httputil.DefaultJsonError{}
+		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
+		assert.Equal(t, http.StatusNotFound, e.Code)
+		assert.StringContains(t, "State not found", e.Message)
+	})
+	t.Run("state fetch error returns 500", func(t *testing.T) {
+		chainSlot := primitives.Slot(0)
+		chain := &mockChain.ChainService{
+			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
+		}
+		s := &Server{
+			Stater:                &testutil.MockStater{CustomError: errors.New("internal error")},
+			TimeFetcher:           chain,
+			SyncChecker:           &mockSync.Sync{IsSyncing: false},
+			OptimisticModeFetcher: chain,
+			HeadFetcher:           chain,
+		}
+
+		var body bytes.Buffer
+		_, err = body.WriteString("[\"0\"]")
+		require.NoError(t, err)
+		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
+		request.SetPathValue("epoch", "0")
+		writer := httptest.NewRecorder()
+		writer.Body = &bytes.Buffer{}
+
+		s.GetAttesterDuties(writer, request)
+		assert.Equal(t, http.StatusInternalServerError, writer.Code)
+		e := &httputil.DefaultJsonError{}
+		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
+		assert.Equal(t, http.StatusInternalServerError, e.Code)
+	})
 }

 func TestGetProposerDuties(t *testing.T) {
@@ -2427,6 +2486,60 @@ func TestGetProposerDuties(t *testing.T) {
 		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
 		assert.Equal(t, http.StatusServiceUnavailable, e.Code)
 	})
+	t.Run("state not found returns 404", func(t *testing.T) {
+		bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
+		require.NoError(t, err)
+		chainSlot := primitives.Slot(0)
+		chain := &mockChain.ChainService{
+			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
+		}
+		stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
+		s := &Server{
+			Stater:                &testutil.MockStater{CustomError: &stateNotFoundErr},
+			TimeFetcher:           chain,
+			SyncChecker:           &mockSync.Sync{IsSyncing: false},
+			OptimisticModeFetcher: chain,
+			HeadFetcher:           chain,
+		}
+
+		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
+		request.SetPathValue("epoch", "0")
+		writer := httptest.NewRecorder()
+		writer.Body = &bytes.Buffer{}
+
+		s.GetProposerDuties(writer, request)
+		assert.Equal(t, http.StatusNotFound, writer.Code)
+		e := &httputil.DefaultJsonError{}
+		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
+		assert.Equal(t, http.StatusNotFound, e.Code)
+		assert.StringContains(t, "State not found", e.Message)
+	})
+	t.Run("state fetch error returns 500", func(t *testing.T) {
+		bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
+		require.NoError(t, err)
+		chainSlot := primitives.Slot(0)
+		chain := &mockChain.ChainService{
+			State: bs, Root: genesisRoot[:], Slot: &chainSlot,
+		}
+		s := &Server{
+			Stater:                &testutil.MockStater{CustomError: errors.New("internal error")},
+			TimeFetcher:           chain,
+			SyncChecker:           &mockSync.Sync{IsSyncing: false},
+			OptimisticModeFetcher: chain,
+			HeadFetcher:           chain,
+		}
+
+		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
+		request.SetPathValue("epoch", "0")
+		writer := httptest.NewRecorder()
+		writer.Body = &bytes.Buffer{}
+
+		s.GetProposerDuties(writer, request)
+		assert.Equal(t, http.StatusInternalServerError, writer.Code)
+		e := &httputil.DefaultJsonError{}
+		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
+		assert.Equal(t, http.StatusInternalServerError, e.Code)
+	})
 }

 func TestGetSyncCommitteeDuties(t *testing.T) {
@@ -2457,7 +2570,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
 	}
 	require.NoError(t, st.SetNextSyncCommittee(nextCommittee))

-	mockChainService := &mockChain.ChainService{Genesis: genesisTime}
+	mockChainService := &mockChain.ChainService{Genesis: genesisTime, State: st}
 	s := &Server{
 		Stater:      &testutil.MockStater{BeaconState: st},
 		SyncChecker: &mockSync.Sync{IsSyncing: false},

@@ -2648,7 +2761,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
 			return newSyncPeriodSt
 		}
 	}
-	mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot}
+	mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot, State: newSyncPeriodSt}
 	s := &Server{
 		Stater:      &testutil.MockStater{BeaconState: stateFetchFn(newSyncPeriodStartSlot)},
 		SyncChecker: &mockSync.Sync{IsSyncing: false},

@@ -2729,8 +2842,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
 	slot, err := slots.EpochStart(1)
 	require.NoError(t, err)

-	st2, err := util.NewBeaconStateBellatrix()
-	require.NoError(t, err)
+	st2 := st.Copy()
 	require.NoError(t, st2.SetSlot(slot))

 	mockChainService := &mockChain.ChainService{

@@ -2744,7 +2856,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
 		State: st2,
 	}
 	s := &Server{
-		Stater:      &testutil.MockStater{BeaconState: st},
+		Stater:      &testutil.MockStater{BeaconState: st2},
 		SyncChecker: &mockSync.Sync{IsSyncing: false},
 		TimeFetcher: mockChainService,
 		HeadFetcher: mockChainService,

@@ -2789,6 +2901,62 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
 		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
 		assert.Equal(t, http.StatusServiceUnavailable, e.Code)
 	})
+	t.Run("state not found returns 404", func(t *testing.T) {
+		slot := 2 * params.BeaconConfig().SlotsPerEpoch
+		chainService := &mockChain.ChainService{
+			Slot: &slot,
+		}
+		stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
+		s := &Server{
+			Stater:                &testutil.MockStater{CustomError: &stateNotFoundErr},
+			TimeFetcher:           chainService,
+			SyncChecker:           &mockSync.Sync{IsSyncing: false},
+			OptimisticModeFetcher: chainService,
+			HeadFetcher:           chainService,
+		}
+
+		var body bytes.Buffer
+		_, err := body.WriteString("[\"1\"]")
+		require.NoError(t, err)
+		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
+		request.SetPathValue("epoch", "1")
+		writer := httptest.NewRecorder()
+		writer.Body = &bytes.Buffer{}
+
+		s.GetSyncCommitteeDuties(writer, request)
+		assert.Equal(t, http.StatusNotFound, writer.Code)
+		e := &httputil.DefaultJsonError{}
+		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
+		assert.Equal(t, http.StatusNotFound, e.Code)
+		assert.StringContains(t, "State not found", e.Message)
+	})
+	t.Run("state fetch error returns 500", func(t *testing.T) {
+		slot := 2 * params.BeaconConfig().SlotsPerEpoch
+		chainService := &mockChain.ChainService{
+			Slot: &slot,
+		}
+		s := &Server{
+			Stater:                &testutil.MockStater{CustomError: errors.New("internal error")},
+			TimeFetcher:           chainService,
+			SyncChecker:           &mockSync.Sync{IsSyncing: false},
+			OptimisticModeFetcher: chainService,
+			HeadFetcher:           chainService,
+		}
+
+		var body bytes.Buffer
+		_, err := body.WriteString("[\"1\"]")
+		require.NoError(t, err)
+		request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
+		request.SetPathValue("epoch", "1")
+		writer := httptest.NewRecorder()
+		writer.Body = &bytes.Buffer{}
+
+		s.GetSyncCommitteeDuties(writer, request)
+		assert.Equal(t, http.StatusInternalServerError, writer.Code)
+		e := &httputil.DefaultJsonError{}
+		require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
+		assert.Equal(t, http.StatusInternalServerError, e.Code)
+	})
 }

 func TestPrepareBeaconProposer(t *testing.T) {

@@ -11,6 +11,7 @@ go_library(
 	deps = [
 		"//beacon-chain/blockchain:go_default_library",
 		"//beacon-chain/core/peerdas:go_default_library",
+		"//beacon-chain/core/transition:go_default_library",
 		"//beacon-chain/db:go_default_library",
 		"//beacon-chain/db/filesystem:go_default_library",
 		"//beacon-chain/rpc/core:go_default_library",

@@ -8,6 +8,7 @@ import (
 	"strings"

 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
+	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"

@@ -82,8 +83,8 @@ type StateRootNotFoundError struct {
 }

 // NewStateRootNotFoundError creates a new error instance.
-func NewStateRootNotFoundError(stateRootsSize int) StateNotFoundError {
-	return StateNotFoundError{
+func NewStateRootNotFoundError(stateRootsSize int) StateRootNotFoundError {
+	return StateRootNotFoundError{
 		message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
 	}
 }

@@ -98,6 +99,7 @@ type Stater interface {
 	State(ctx context.Context, id []byte) (state.BeaconState, error)
 	StateRoot(ctx context.Context, id []byte) ([]byte, error)
 	StateBySlot(ctx context.Context, slot primitives.Slot) (state.BeaconState, error)
+	StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error)
 }

 // BeaconDbStater is an implementation of Stater. It retrieves states from the beacon chain database.
@@ -267,6 +269,46 @@ func (p *BeaconDbStater) StateBySlot(ctx context.Context, target primitives.Slot
 	return st, nil
 }

+// StateByEpoch returns the state for the start of the requested epoch.
+// For current or next epoch, it uses the head state and next slot cache for efficiency.
+// For past epochs, it replays blocks from the most recent canonical state.
+func (p *BeaconDbStater) StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error) {
+	ctx, span := trace.StartSpan(ctx, "statefetcher.StateByEpoch")
+	defer span.End()
+
+	targetSlot, err := slots.EpochStart(epoch)
+	if err != nil {
+		return nil, errors.Wrap(err, "could not get epoch start slot")
+	}
+
+	currentSlot := p.GenesisTimeFetcher.CurrentSlot()
+	currentEpoch := slots.ToEpoch(currentSlot)
+
+	// For past epochs, use the replay mechanism
+	if epoch < currentEpoch {
+		return p.StateBySlot(ctx, targetSlot)
+	}
+
+	// For current or next epoch, use head state + next slot cache (much faster)
+	headState, err := p.ChainInfoFetcher.HeadState(ctx)
+	if err != nil {
+		return nil, errors.Wrap(err, "could not get head state")
+	}
+
+	// If head state is already at or past the target slot, return it
+	if headState.Slot() >= targetSlot {
+		return headState, nil
+	}
+
+	// Process slots using the next slot cache
+	headRoot := p.ChainInfoFetcher.CachedHeadRoot()
+	st, err := transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot[:], targetSlot)
+	if err != nil {
+		return nil, errors.Wrapf(err, "could not process slots up to %d", targetSlot)
+	}
+	return st, nil
+}
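To make the `StateByEpoch` dispatch concrete, here is a self-contained sketch of the same three-way branch, with `slotsPerEpoch` fixed at 32 and the helper names invented for the example (they are not the real Prysm APIs):

```go
package main

import "fmt"

const slotsPerEpoch = 32

// epochStart mirrors slots.EpochStart for the sketch.
func epochStart(epoch uint64) uint64 { return epoch * slotsPerEpoch }

// pickPath reproduces the branch structure of StateByEpoch above:
// past epochs replay from the database, while the current and next
// epoch reuse the head state, advancing it through the next-slot
// cache only when it trails the epoch start slot.
func pickPath(requested, currentEpoch, headSlot uint64) string {
	target := epochStart(requested)
	switch {
	case requested < currentEpoch:
		return fmt.Sprintf("replay to slot %d", target)
	case headSlot >= target:
		return "return head state as-is"
	default:
		return fmt.Sprintf("process slots %d..%d via next-slot cache", headSlot, target)
	}
}

func main() {
	fmt.Println(pickPath(1, 3, 100)) // past epoch: replay to slot 32
	fmt.Println(pickPath(3, 3, 100)) // current epoch, head past start: reuse head
	fmt.Println(pickPath(4, 3, 100)) // next epoch: advance head to slot 128
}
```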

 func (p *BeaconDbStater) headStateRoot(ctx context.Context) ([]byte, error) {
 	b, err := p.ChainInfoFetcher.HeadBlock(ctx)
 	if err != nil {

@@ -444,3 +444,111 @@ func TestStateBySlot_AfterHeadSlot(t *testing.T) {
 	require.NoError(t, err)
 	assert.Equal(t, primitives.Slot(101), st.Slot())
 }
+
+func TestStateByEpoch(t *testing.T) {
+	ctx := t.Context()
+	slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
+
+	t.Run("current epoch uses head state", func(t *testing.T) {
+		// Head is at slot 5 (epoch 0), requesting epoch 0
+		headSlot := primitives.Slot(5)
+		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
+		require.NoError(t, err)
+
+		currentSlot := headSlot
+		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
+		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
+
+		st, err := p.StateByEpoch(ctx, 0)
+		require.NoError(t, err)
+		// Should return head state since it's already past epoch start
+		assert.Equal(t, headSlot, st.Slot())
+	})
+
+	t.Run("current epoch processes slots to epoch start", func(t *testing.T) {
+		// Head is at slot 5 (epoch 0), requesting epoch 1
+		// Current slot is 32 (epoch 1), so epoch 1 is current epoch
+		headSlot := primitives.Slot(5)
+		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
+		require.NoError(t, err)
+
+		currentSlot := slotsPerEpoch // slot 32, epoch 1
+		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
+		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
+
+		// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
+		// In real usage, the transition package handles this properly
+		_, err = p.StateByEpoch(ctx, 1)
+		// The error is expected since we don't have a fully initialized beacon state
+		// that can process slots (missing committees, etc.)
+		assert.NotNil(t, err)
+	})
+
+	t.Run("past epoch uses replay", func(t *testing.T) {
+		// Head is at epoch 2, requesting epoch 0 (past)
+		headSlot := slotsPerEpoch * 2 // slot 64, epoch 2
+		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
+		require.NoError(t, err)
+
+		pastEpochSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: 0})
+		require.NoError(t, err)
+
+		currentSlot := headSlot
+		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
+		mockReplayer := mockstategen.NewReplayerBuilder()
+		mockReplayer.SetMockStateForSlot(pastEpochSt, 0)
+		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock, ReplayerBuilder: mockReplayer}
+
+		st, err := p.StateByEpoch(ctx, 0)
+		require.NoError(t, err)
+		assert.Equal(t, primitives.Slot(0), st.Slot())
+	})
+
+	t.Run("next epoch uses head state path", func(t *testing.T) {
+		// Head is at slot 30 (epoch 0), requesting epoch 1 (next)
+		// Current slot is 30 (epoch 0), so epoch 1 is next epoch
+		headSlot := primitives.Slot(30)
+		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
+		require.NoError(t, err)
+
+		currentSlot := headSlot
+		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
+		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
+
+		// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
+		_, err = p.StateByEpoch(ctx, 1)
+		// The error is expected since we don't have a fully initialized beacon state
+		assert.NotNil(t, err)
+	})
+
+	t.Run("head state already at target slot returns immediately", func(t *testing.T) {
+		// Head is at slot 32 (epoch 1 start), requesting epoch 1
+		headSlot := slotsPerEpoch // slot 32
+		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
+		require.NoError(t, err)
+
+		currentSlot := headSlot
+		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
+		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
+
+		st, err := p.StateByEpoch(ctx, 1)
+		require.NoError(t, err)
+		assert.Equal(t, headSlot, st.Slot())
+	})
+
+	t.Run("head state past target slot returns head state", func(t *testing.T) {
+		// Head is at slot 40, requesting epoch 1 (starts at slot 32)
+		headSlot := primitives.Slot(40)
+		headSt, err := statenative.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: headSlot})
+		require.NoError(t, err)
+
+		currentSlot := headSlot
+		mock := &chainMock.ChainService{State: headSt, Slot: &currentSlot}
+		p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
+
+		st, err := p.StateByEpoch(ctx, 1)
+		require.NoError(t, err)
+		// Returns head state since it's already >= epoch start
+		assert.Equal(t, headSlot, st.Slot())
+	})
 }

@@ -26,5 +26,6 @@ go_library(
 		"//proto/prysm/v1alpha1:go_default_library",
 		"//testing/require:go_default_library",
 		"//testing/util:go_default_library",
+		"//time/slots:go_default_library",
 	],
 )

@@ -6,6 +6,7 @@ import (
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
+	"github.com/OffchainLabs/prysm/v7/time/slots"
 )

 // MockStater is a fake implementation of lookup.Stater.

@@ -14,6 +15,7 @@ type MockStater struct {
 	StateProviderFunc func(ctx context.Context, stateId []byte) (state.BeaconState, error)
 	BeaconStateRoot   []byte
 	StatesBySlot      map[primitives.Slot]state.BeaconState
+	StatesByEpoch     map[primitives.Epoch]state.BeaconState
 	StatesByRoot      map[[32]byte]state.BeaconState
 	CustomError       error
 }

@@ -43,3 +45,22 @@ func (m *MockStater) StateRoot(context.Context, []byte) ([]byte, error) {
 func (m *MockStater) StateBySlot(_ context.Context, s primitives.Slot) (state.BeaconState, error) {
 	return m.StatesBySlot[s], nil
 }
+
+// StateByEpoch --
+func (m *MockStater) StateByEpoch(_ context.Context, e primitives.Epoch) (state.BeaconState, error) {
+	if m.CustomError != nil {
+		return nil, m.CustomError
+	}
+	if m.StatesByEpoch != nil {
+		return m.StatesByEpoch[e], nil
+	}
+	// Fall back to StatesBySlot if StatesByEpoch is not set
+	slot, err := slots.EpochStart(e)
+	if err != nil {
+		return nil, err
+	}
+	if m.StatesBySlot != nil {
+		return m.StatesBySlot[slot], nil
+	}
+	return m.BeaconState, nil
+}
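The mock resolves a requested epoch through a fixed precedence chain: forced error, then the epoch-keyed map, then the slot-keyed map at the epoch start, then the plain default state. A standalone analog of that lookup order, using plain types in place of the Prysm interfaces:

```go
package main

import "fmt"

const slotsPerEpoch = 32

// mockLookup mirrors MockStater.StateByEpoch's precedence: a forced
// error wins, then an epoch-keyed map, then a slot-keyed map at the
// epoch start slot, then a single fallback value.
type mockLookup struct {
	err      error
	byEpoch  map[uint64]string
	bySlot   map[uint64]string
	fallback string
}

func (m *mockLookup) stateByEpoch(e uint64) (string, error) {
	if m.err != nil {
		return "", m.err
	}
	if m.byEpoch != nil {
		return m.byEpoch[e], nil
	}
	if m.bySlot != nil {
		return m.bySlot[e*slotsPerEpoch], nil
	}
	return m.fallback, nil
}

func main() {
	m := &mockLookup{bySlot: map[uint64]string{32: "state@32"}}
	st, _ := m.stateByEpoch(1)
	fmt.Println(st) // state@32
}
```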

@@ -6,6 +6,7 @@ import (
 	"fmt"

 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
+	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
 	"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
 	"github.com/sirupsen/logrus"

@@ -37,76 +38,84 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
 		return nil
 	}

-	// Start at previous finalized slot, stop at current finalized slot (it will be handled in the next migration).
-	// If the slot is on archived point, save the state of that slot to the DB.
-	for slot := oldFSlot; slot < fSlot; slot++ {
+	// Calculate the first archived point slot >= oldFSlot (but > 0).
+	// This avoids iterating through every slot and only visits archived points directly.
+	var startSlot primitives.Slot
+	if oldFSlot == 0 {
+		startSlot = s.slotsPerArchivedPoint
+	} else {
+		// Round up to the next archived point
+		startSlot = (oldFSlot + s.slotsPerArchivedPoint - 1) / s.slotsPerArchivedPoint * s.slotsPerArchivedPoint
+	}
+
+	// Start at the first archived point after old finalized slot, stop before current finalized slot.
+	// Jump directly between archived points.
+	for slot := startSlot; slot < fSlot; slot += s.slotsPerArchivedPoint {
 		if ctx.Err() != nil {
 			return ctx.Err()
 		}

-		if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
-			cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
-			if err != nil {
-				return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
-			}
-
-			var aRoot [32]byte
-			var aState state.BeaconState
-
-			// When the epoch boundary state is not in cache due to skip slot scenario,
-			// we have to regenerate the state which will represent epoch boundary.
-			// By finding the highest available block below epoch boundary slot, we
-			// generate the state for that block root.
-			if exists {
-				aRoot = cached.root
-				aState = cached.state
-			} else {
-				_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
-				if err != nil {
-					return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
-				}
-				// Given the block has been finalized, the db should not have more than one block in a given slot.
-				// We should error out when this happens.
-				if len(roots) != 1 {
-					return errUnknownBlock
-				}
-				aRoot = roots[0]
-				// There's no need to generate the state if the state already exists in the DB.
-				// We can skip saving the state.
-				if !s.beaconDB.HasState(ctx, aRoot) {
-					aState, err = s.StateByRoot(ctx, aRoot)
-					if err != nil {
-						return err
-					}
-				}
-			}
-
-			if s.beaconDB.HasState(ctx, aRoot) {
-				// If you are migrating a state and its already part of the hot state cache saved to the db,
-				// you can just remove it from the hot state cache as it becomes redundant.
-				s.saveHotStateDB.lock.Lock()
-				roots := s.saveHotStateDB.blockRootsOfSavedStates
-				for i := range roots {
-					if aRoot == roots[i] {
-						s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
-						// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
-						// Break here is ok.
-						break
-					}
-				}
-				s.saveHotStateDB.lock.Unlock()
-				continue
-			}
-
-			if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
-				return err
-			}
-			log.WithFields(
-				logrus.Fields{
-					"slot": aState.Slot(),
-					"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
-				}).Info("Saved state in DB")
-		}
+		cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
+		if err != nil {
+			return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
+		}
+
+		var aRoot [32]byte
+		var aState state.BeaconState
+
+		// When the epoch boundary state is not in cache due to skip slot scenario,
+		// we have to regenerate the state which will represent epoch boundary.
+		// By finding the highest available block below epoch boundary slot, we
+		// generate the state for that block root.
+		if exists {
+			aRoot = cached.root
+			aState = cached.state
+		} else {
+			_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
+			if err != nil {
+				return err
+			}
+			// Given the block has been finalized, the db should not have more than one block in a given slot.
+			// We should error out when this happens.
+			if len(roots) != 1 {
+				return errUnknownBlock
+			}
+			aRoot = roots[0]
+			// There's no need to generate the state if the state already exists in the DB.
+			// We can skip saving the state.
+			if !s.beaconDB.HasState(ctx, aRoot) {
+				aState, err = s.StateByRoot(ctx, aRoot)
+				if err != nil {
+					return err
+				}
+			}
+		}
+
+		if s.beaconDB.HasState(ctx, aRoot) {
+			// If you are migrating a state and its already part of the hot state cache saved to the db,
+			// you can just remove it from the hot state cache as it becomes redundant.
+			s.saveHotStateDB.lock.Lock()
+			roots := s.saveHotStateDB.blockRootsOfSavedStates
+			for i := range roots {
+				if aRoot == roots[i] {
+					s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
+					// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
+					// Break here is ok.
+					break
+				}
+			}
+			s.saveHotStateDB.lock.Unlock()
+			continue
+		}
+
+		if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
+			return err
+		}
+		log.WithFields(
+			logrus.Fields{
+				"slot": aState.Slot(),
+				"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
+			}).Info("Saved state in DB")
 	}

 	// Update finalized info in memory.

@@ -161,7 +161,7 @@ func (s *Service) validateWithKzgBatchVerifier(ctx context.Context, dataColumns

 	timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second

-	resChan := make(chan error)
+	resChan := make(chan error, 1)
 	verificationSet := &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
 	s.kzgChan <- verificationSet
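The one-slot buffer matters because the verifier goroutine replies on `resChan` even after the waiting side has timed out; with an unbuffered channel that send would block forever and wedge the worker. A minimal sketch of the failure mode and the fix (timings are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// reply delivers a result on ch. With an unbuffered channel this send
// blocks forever once the requester has timed out and stopped
// receiving. Capacity 1 lets the send complete regardless, and the
// orphaned value is simply garbage collected.
func reply(ch chan error) {
	time.Sleep(50 * time.Millisecond) // finishes after the requester gave up
	ch <- nil
	fmt.Println("worker: result delivered, not blocked")
}

func main() {
	ch := make(chan error, 1) // change to make(chan error) to reproduce the hang
	go reply(ch)

	select {
	case err := <-ch:
		fmt.Println("got result:", err)
	case <-time.After(10 * time.Millisecond):
		fmt.Println("requester: timed out")
	}
	time.Sleep(100 * time.Millisecond) // give the worker time to print
}
```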

@@ -7,6 +7,7 @@ import (
 	"time"

 	"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
+	"github.com/OffchainLabs/prysm/v7/config/params"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
 	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
 	"github.com/OffchainLabs/prysm/v7/testing/assert"

@@ -268,6 +269,41 @@ func TestKzgBatchVerifierFallback(t *testing.T) {
 	})
 }

+func TestValidateWithKzgBatchVerifier_DeadlockOnTimeout(t *testing.T) {
+	err := kzg.Start()
+	require.NoError(t, err)
+
+	params.SetupTestConfigCleanup(t)
+	cfg := params.BeaconConfig().Copy()
+	cfg.SecondsPerSlot = 0
+	params.OverrideBeaconConfig(cfg)
+
+	ctx, cancel := context.WithCancel(t.Context())
+	defer cancel()
+
+	service := &Service{
+		ctx:     ctx,
+		kzgChan: make(chan *kzgVerifier),
+	}
+	go service.kzgVerifierRoutine()
+
+	result, err := service.validateWithKzgBatchVerifier(context.Background(), nil)
+	require.Equal(t, pubsub.ValidationIgnore, result)
+	require.ErrorIs(t, err, context.DeadlineExceeded)
+
+	done := make(chan struct{})
+	go func() {
+		_, _ = service.validateWithKzgBatchVerifier(context.Background(), nil)
+		close(done)
+	}()
+
+	select {
+	case <-done:
+	case <-time.After(500 * time.Millisecond):
+		t.Fatal("validateWithKzgBatchVerifier blocked")
+	}
+}
+
 func createValidTestDataColumns(t *testing.T, count int) []blocks.RODataColumn {
 	_, roSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, count)
 	if len(roSidecars) >= count {

@@ -265,7 +265,7 @@ func (s *Service) processVerifiedAttestation(
 	if key, err := generateUnaggregatedAttCacheKey(broadcastAtt); err != nil {
 		log.WithError(err).Error("Failed to generate cache key for attestation tracking")
 	} else {
-		s.setSeenUnaggregatedAtt(key)
+		_ = s.setSeenUnaggregatedAtt(key)
 	}

 	valCount, err := helpers.ActiveValidatorCount(ctx, preState, slots.ToEpoch(data.Slot))

@@ -320,7 +320,7 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
 		return
 	}

-	s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
+	_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())

 	if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
 		log.WithError(err).Debug("Could not broadcast aggregated attestation")

@@ -137,7 +137,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
 		return validationRes, err
 	}

-	s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex())
+	if first := s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex()); !first {
+		return pubsub.ValidationIgnore, nil
+	}

 	msg.ValidatorData = m

@@ -265,13 +267,19 @@ func (s *Service) hasSeenAggregatorIndexEpoch(epoch primitives.Epoch, aggregator
 }

 // Set aggregate's aggregator index target epoch as seen.
-func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) {
+// Returns true if this is the first time seeing this aggregator index and epoch.
+func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) bool {
 	b := append(bytesutil.Bytes32(uint64(epoch)), bytesutil.Bytes32(uint64(aggregatorIndex))...)

 	s.seenAggregatedAttestationLock.Lock()
 	defer s.seenAggregatedAttestationLock.Unlock()

+	_, seen := s.seenAggregatedAttestationCache.Get(string(b))
+	if seen {
+		return false
+	}
 	s.seenAggregatedAttestationCache.Add(string(b), true)
+	return true
 }
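The setter now combines the duplicate check and the insertion under one lock acquisition and reports whether the caller was first, so "check" and "set" cannot interleave with a concurrent goroutine the way separate has/add calls can. A self-contained sketch of that check-and-set pattern:

```go
package main

import (
	"fmt"
	"sync"
)

// seenSet marks keys under a single lock and reports whether the
// caller was first; a separate Has followed by Add would leave a
// window in which two goroutines both believe they were first.
type seenSet struct {
	mu   sync.Mutex
	seen map[string]bool
}

func (s *seenSet) markSeen(key string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.seen[key] {
		return false
	}
	s.seen[key] = true
	return true
}

func main() {
	s := &seenSet{seen: make(map[string]bool)}
	fmt.Println(s.markSeen("epoch7/validator42")) // true: first observer
	fmt.Println(s.markSeen("epoch7/validator42")) // false: duplicate
}
```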

@@ -801,3 +801,27 @@ func TestValidateAggregateAndProof_RejectWhenAttEpochDoesntEqualTargetEpoch(t *t
 	assert.NotNil(t, err)
 	assert.Equal(t, pubsub.ValidationReject, res)
 }
+
+func Test_SetAggregatorIndexEpochSeen(t *testing.T) {
+	db := dbtest.SetupDB(t)
+	p := p2ptest.NewTestP2P(t)
+
+	r := &Service{
+		cfg: &config{
+			p2p:      p,
+			beaconDB: db,
+		},
+		seenAggregatedAttestationCache: lruwrpr.New(10),
+	}
+
+	aggIndex := primitives.ValidatorIndex(42)
+	epoch := primitives.Epoch(7)
+
+	require.Equal(t, false, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
+	first := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
+	require.Equal(t, true, first)
+	require.Equal(t, true, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
+
+	second := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
+	require.Equal(t, false, second)
+}

@@ -104,7 +104,8 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
 	}

 	if !s.slasherEnabled {
-		// Verify this the first attestation received for the participating validator for the slot.
+		// Verify this the first attestation received for the participating validator for the slot. This verification is here to return early if we've already seen this attestation.
+		// This verification is carried again later after all other validations to avoid TOCTOU issues.
 		if s.hasSeenUnaggregatedAtt(attKey) {
 			return pubsub.ValidationIgnore, nil
 		}

@@ -228,7 +229,10 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
 		Data: eventData,
 	})

-	s.setSeenUnaggregatedAtt(attKey)
+	if first := s.setSeenUnaggregatedAtt(attKey); !first {
+		// Another concurrent validation processed the same attestation meanwhile
+		return pubsub.ValidationIgnore, nil
+	}

 	// Attach final validated attestation to the message for further pipeline use
 	msg.ValidatorData = attForValidation

@@ -385,11 +389,16 @@ func (s *Service) hasSeenUnaggregatedAtt(key string) bool {
 }

 // Set an incoming attestation as seen for the participating validator for the slot.
-func (s *Service) setSeenUnaggregatedAtt(key string) {
+// Returns false if the attestation was already seen.
+func (s *Service) setSeenUnaggregatedAtt(key string) bool {
 	s.seenUnAggregatedAttestationLock.Lock()
 	defer s.seenUnAggregatedAttestationLock.Unlock()

+	_, seen := s.seenUnAggregatedAttestationCache.Get(key)
+	if seen {
+		return false
+	}
 	s.seenUnAggregatedAttestationCache.Add(key, true)
+	return true
 }
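Together with the early `hasSeenUnaggregatedAtt` check above, this gives a check/validate/mark flow: the cheap lookup filters obvious duplicates, and the final atomic mark catches a concurrent validation that finished in between (the TOCTOU window the comment mentions). A standalone sketch, with the validation reduced to a boolean:

```go
package main

import (
	"fmt"
	"sync"
)

type seenCache struct {
	mu   sync.Mutex
	seen map[string]bool
}

func (c *seenCache) has(key string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.seen[key]
}

func (c *seenCache) markSeen(key string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen[key] {
		return false
	}
	c.seen[key] = true
	return true
}

// validateOnce sketches the gossip flow above: a cheap early duplicate
// check, then expensive validation, then an atomic mark-and-check,
// since a concurrent validation of the same message may finish in
// between the two cache operations.
func validateOnce(c *seenCache, key string, valid bool) string {
	if c.has(key) {
		return "ignore: already seen"
	}
	if !valid {
		return "reject"
	}
	if first := c.markSeen(key); !first {
		return "ignore: concurrent duplicate"
	}
	return "accept"
}

func main() {
	c := &seenCache{seen: make(map[string]bool)}
	fmt.Println(validateOnce(c, "att-key", true)) // accept
	fmt.Println(validateOnce(c, "att-key", true)) // ignore: already seen
}
```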

@@ -499,6 +499,10 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 		Data:            &ethpb.AttestationData{Slot: 2, CommitteeIndex: 0},
 		AggregationBits: bitfield.Bitlist{0b1001},
 	}
+	s3c0a0 := &ethpb.Attestation{
+		Data:            &ethpb.AttestationData{Slot: 3, CommitteeIndex: 0},
+		AggregationBits: bitfield.Bitlist{0b1001},
+	}

 	t.Run("empty cache", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)

@@ -506,26 +510,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 	})
 	t.Run("ok", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)
-		s.setSeenUnaggregatedAtt(key)
+		first := s.setSeenUnaggregatedAtt(key)
 		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
 	})
+	t.Run("already seen", func(t *testing.T) {
+		key := generateKey(t, s3c0a0)
+		first := s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
+		first = s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, false, first)
+	})
 	t.Run("different slot", func(t *testing.T) {
 		key1 := generateKey(t, s1c0a0)
 		key2 := generateKey(t, s2c0a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different committee index", func(t *testing.T) {
 		key1 := generateKey(t, s0c1a0)
 		key2 := generateKey(t, s0c2a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different bit", func(t *testing.T) {
 		key1 := generateKey(t, s0c0a1)
 		key2 := generateKey(t, s0c0a2)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("0 bits set is considered not seen", func(t *testing.T) {
 		a := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1000}}

@@ -576,6 +593,11 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 		CommitteeId:   0,
 		AttesterIndex: 0,
 	}
+	s3c0a0 := &ethpb.SingleAttestation{
+		Data:          &ethpb.AttestationData{Slot: 2},
+		CommitteeId:   0,
+		AttesterIndex: 0,
+	}

 	t.Run("empty cache", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)

@@ -583,26 +605,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 	})
 	t.Run("ok", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)
-		s.setSeenUnaggregatedAtt(key)
+		first := s.setSeenUnaggregatedAtt(key)
 		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different slot", func(t *testing.T) {
 		key1 := generateKey(t, s1c0a0)
 		key2 := generateKey(t, s2c0a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
+	t.Run("already seen", func(t *testing.T) {
+		key := generateKey(t, s3c0a0)
+		first := s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
+		first = s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, false, first)
+	})
 	t.Run("different committee index", func(t *testing.T) {
 		key1 := generateKey(t, s0c1a0)
 		key2 := generateKey(t, s0c2a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different attester", func(t *testing.T) {
 		key1 := generateKey(t, s0c0a1)
 		key2 := generateKey(t, s0c0a2)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("single attestation is considered not seen", func(t *testing.T) {
 		a := &ethpb.AttestationElectra{}

@@ -4,7 +4,6 @@ import (
 	"context"
 	"fmt"

-	forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	"github.com/OffchainLabs/prysm/v7/config/params"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"

@@ -289,27 +288,15 @@
 // for later processing while proposers for the block's branch are calculated -- in such a case do not REJECT, instead IGNORE this message.
 func (bv *ROBlobVerifier) SidecarProposerExpected(ctx context.Context) (err error) {
 	defer bv.recordResult(RequireSidecarProposerExpected, &err)
-	e := slots.ToEpoch(bv.blob.Slot())
-	if e > 0 {
-		e = e - 1
-	}
-	r, err := bv.fc.TargetRootForEpoch(bv.blob.ParentRoot(), e)
+	pst, err := bv.parentState(ctx)
 	if err != nil {
 		log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("State replay to parent_root failed")
 		return errSidecarUnexpectedProposer
 	}
-	c := &forkchoicetypes.Checkpoint{Root: r, Epoch: e}
-	idx, cached := bv.pc.Proposer(c, bv.blob.Slot())
-	if !cached {
-		pst, err := bv.parentState(ctx)
-		if err != nil {
-			log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("State replay to parent_root failed")
-			return errSidecarUnexpectedProposer
-		}
-		idx, err = bv.pc.ComputeProposer(ctx, bv.blob.ParentRoot(), bv.blob.Slot(), pst)
-		if err != nil {
-			log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("Error computing proposer index from parent state")
-			return errSidecarUnexpectedProposer
-		}
+	idx, err := bv.pc.ComputeProposer(ctx, bv.blob.Slot(), pst)
+	if err != nil {
+		log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("Error computing proposer index from parent state")
+		return errSidecarUnexpectedProposer
 	}
 	if idx != bv.blob.ProposerIndex() {
 		log.WithError(errSidecarUnexpectedProposer).

@@ -452,33 +452,17 @@ func TestSidecarProposerExpected(t *testing.T) {
 	ctx := t.Context()
 	_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 1)
 	b := blobs[0]
-	t.Run("cached, matches", func(t *testing.T) {
-		ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex())}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
-		v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
-		require.NoError(t, v.SidecarProposerExpected(ctx))
-		require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
-		require.NoError(t, v.results.result(RequireSidecarProposerExpected))
-	})
-	t.Run("cached, does not match", func(t *testing.T) {
-		ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex() + 1)}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
-		v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
-		require.ErrorIs(t, v.SidecarProposerExpected(ctx), errSidecarUnexpectedProposer)
-		require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
-		require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
-	})
-	t.Run("not cached, state lookup failure", func(t *testing.T) {
-		ini := Initializer{shared: &sharedResources{sr: sbrNotFound(t, b.ParentRoot()), pc: &mockProposerCache{ProposerCB: pcReturnsNotFound()}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
+	t.Run("state lookup failure", func(t *testing.T) {
+		ini := Initializer{shared: &sharedResources{sr: sbrNotFound(t, b.ParentRoot()), pc: &mockProposerCache{}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
 		v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
 		require.ErrorIs(t, v.SidecarProposerExpected(ctx), errSidecarUnexpectedProposer)
 		require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
 		require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
 	})

-	t.Run("not cached, proposer matches", func(t *testing.T) {
+	t.Run("proposer matches", func(t *testing.T) {
 		pc := &mockProposerCache{
-			ProposerCB: pcReturnsNotFound(),
-			ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
-				require.Equal(t, b.ParentRoot(), root)
+			ComputeProposerCB: func(_ context.Context, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
 				require.Equal(t, b.Slot(), slot)
 				return b.ProposerIndex(), nil
 			},

@@ -489,11 +473,9 @@ func TestSidecarProposerExpected(t *testing.T) {
 		require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
 		require.NoError(t, v.results.result(RequireSidecarProposerExpected))
 	})
-	t.Run("not cached, proposer does not match", func(t *testing.T) {
+	t.Run("proposer does not match", func(t *testing.T) {
 		pc := &mockProposerCache{
-			ProposerCB: pcReturnsNotFound(),
-			ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
-				require.Equal(t, b.ParentRoot(), root)
+			ComputeProposerCB: func(_ context.Context, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
 				require.Equal(t, b.Slot(), slot)
 				return b.ProposerIndex() + 1, nil
 			},

@@ -504,11 +486,9 @@ func TestSidecarProposerExpected(t *testing.T) {
 		require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
 		require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
 	})
-	t.Run("not cached, ComputeProposer fails", func(t *testing.T) {
+	t.Run("ComputeProposer fails", func(t *testing.T) {
 		pc := &mockProposerCache{
-			ProposerCB: pcReturnsNotFound(),
-			ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
-				require.Equal(t, b.ParentRoot(), root)
+			ComputeProposerCB: func(_ context.Context, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
 				require.Equal(t, b.Slot(), slot)
 				return 0, errors.New("ComputeProposer failed")
 			},

@@ -845,28 +825,11 @@ func (v *validxStateOverride) ReadFromEveryValidator(f func(idx int, val state.R
 }

 type mockProposerCache struct {
-	ComputeProposerCB func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
-	ProposerCB        func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool)
+	ComputeProposerCB func(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
 }

-func (p *mockProposerCache) ComputeProposer(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
-	return p.ComputeProposerCB(ctx, root, slot, pst)
-}
-
-func (p *mockProposerCache) Proposer(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
-	return p.ProposerCB(c, slot)
+func (p *mockProposerCache) ComputeProposer(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
+	return p.ComputeProposerCB(ctx, slot, pst)
 }

 var _ proposerCache = &mockProposerCache{}

-func pcReturnsIdx(idx primitives.ValidatorIndex) func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
-	return func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
-		return idx, true
-	}
-}
-
-func pcReturnsNotFound() func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
-	return func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
-		return 0, false
-	}
-}

@@ -7,7 +7,6 @@ import (
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
-	forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
 	"github.com/OffchainLabs/prysm/v7/config/params"

@@ -16,6 +15,7 @@ import (
 	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
+	"github.com/OffchainLabs/prysm/v7/time/slots"
 	lru "github.com/hashicorp/golang-lru"
 	"github.com/pkg/errors"
 	"github.com/sirupsen/logrus"
 )

@@ -152,8 +152,7 @@ func (c *sigCache) SignatureVerified(sig signatureData) (bool, error) {
 // and cache the result so that it can be reused when the same verification needs to be performed
 // across multiple values.
 type proposerCache interface {
-	ComputeProposer(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
-	Proposer(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool)
+	ComputeProposer(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
 }

 func newPropCache() *propCache {

@@ -163,26 +162,20 @@ func newPropCache() *propCache {
 type propCache struct {
 }

-// ComputeProposer takes the state for the given parent root and slot and computes the proposer index, updating the
-// proposer index cache when successful.
-func (*propCache) ComputeProposer(ctx context.Context, parent [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
-	pst, err := transition.ProcessSlotsUsingNextSlotCache(ctx, pst, parent[:], slot)
-	if err != nil {
-		return 0, err
-	}
-	idx, err := helpers.BeaconProposerIndex(ctx, pst)
-	if err != nil {
-		return 0, err
-	}
-	return idx, nil
-}
-
-// Proposer returns the validator index if it is found in the cache, along with a boolean indicating
-// whether the value was present, similar to accessing an lru or go map.
-func (*propCache) Proposer(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
-	id, err := helpers.ProposerIndexAtSlotFromCheckpoint(c, slot)
-	if err != nil {
-		return 0, false
-	}
-	return id, true
+// ComputeProposer takes the state and computes the proposer index at the given slot.
+func (*propCache) ComputeProposer(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
+	// After Fulu, the lookahead only contains proposers for the current and next epoch.
+	stateEpoch := slots.ToEpoch(pst.Slot())
+	slotEpoch := slots.ToEpoch(slot)
+	if slotEpoch > stateEpoch+1 {
+		start, err := slots.EpochStart(slotEpoch - 1)
+		if err != nil {
+			return 0, err
+		}
+		pst, err = transition.ProcessSlots(ctx, pst, start)
+		if err != nil {
+			return 0, errors.Wrap(err, "failed to advance state to compute proposer")
+		}
+	}
+	return helpers.BeaconProposerIndexAtSlot(ctx, pst, slot)
 }
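The guard in the new `ComputeProposer` exists because post-Fulu proposer lookahead only covers the state's current and next epoch; for any slot further out, the state must first be advanced to the start of the epoch before the requested slot's epoch. A small sketch of just that slot arithmetic (32-slot epochs assumed; the helper names are invented for the example):

```go
package main

import "fmt"

const slotsPerEpoch = 32

func toEpoch(slot uint64) uint64 { return slot / slotsPerEpoch }

func epochStart(e uint64) uint64 { return e * slotsPerEpoch }

// advanceTarget mirrors the guard above: if the requested slot's epoch
// is more than one epoch ahead of the state's epoch, the state must be
// advanced to the start of the epoch preceding the requested one so
// the lookahead covers the requested slot.
func advanceTarget(stateSlot, requestSlot uint64) (needAdvance bool, target uint64) {
	stateEpoch := toEpoch(stateSlot)
	slotEpoch := toEpoch(requestSlot)
	if slotEpoch > stateEpoch+1 {
		return true, epochStart(slotEpoch - 1)
	}
	return false, stateSlot
}

func main() {
	fmt.Println(advanceTarget(10, 40))  // false 10: slot 40 is next epoch, lookahead covers it
	fmt.Println(advanceTarget(10, 100)) // true 64: epoch 3 requested, advance to epoch 2 start
}
```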

@@ -4,7 +4,6 @@ import (
 	"testing"

 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
-	forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
 	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v7/crypto/bls"

@@ -107,25 +106,3 @@ func (m *mockValidatorAtIndexer) ValidatorAtIndex(idx primitives.ValidatorIndex)
 }

 var _ validatorAtIndexer = &mockValidatorAtIndexer{}
-
-func TestProposerCache(t *testing.T) {
-	ctx := t.Context()
-	// 3 validators because that was the first number that produced a non-zero proposer index by default
-	st, _ := util.DeterministicGenesisStateDeneb(t, 3)
-
-	pc := newPropCache()
-	_, cached := pc.Proposer(&forkchoicetypes.Checkpoint{}, 1)
-	// should not be cached yet
-	require.Equal(t, false, cached)
-
-	// If this test breaks due to changes in the deterministic state gen, just replace '2' with whatever the right index is.
-	expectedIdx := 2
-	idx, err := pc.ComputeProposer(ctx, [32]byte{}, 1, st)
-	require.NoError(t, err)
-	require.Equal(t, primitives.ValidatorIndex(expectedIdx), idx)
-
-	idx, cached = pc.Proposer(&forkchoicetypes.Checkpoint{}, 1)
-	// TODO: update this test when we integrate a proposer id cache
-	require.Equal(t, false, cached)
-	require.Equal(t, primitives.ValidatorIndex(0), idx)
-}

@@ -11,7 +11,6 @@ import (
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
-	forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
 	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
 	"github.com/OffchainLabs/prysm/v7/config/params"

@@ -484,38 +483,6 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e

 	defer dv.recordResult(RequireSidecarProposerExpected, &err)

-	type slotParentRoot struct {
-		slot       primitives.Slot
-		parentRoot [fieldparams.RootLength]byte
-	}
-
-	targetRootBySlotParentRoot := make(map[slotParentRoot][fieldparams.RootLength]byte)
-
-	var targetRootFromCache = func(slot primitives.Slot, parentRoot [fieldparams.RootLength]byte) ([fieldparams.RootLength]byte, error) {
-		// Use cached values if available.
-		slotParentRoot := slotParentRoot{slot: slot, parentRoot: parentRoot}
-		if root, ok := targetRootBySlotParentRoot[slotParentRoot]; ok {
-			return root, nil
-		}
-
-		// Compute the epoch of the data column slot.
-		dataColumnEpoch := slots.ToEpoch(slot)
-		if dataColumnEpoch > 0 {
-			dataColumnEpoch = dataColumnEpoch - 1
-		}
-
-		// Compute the target root for the epoch.
-		targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
-		if err != nil {
-			return [fieldparams.RootLength]byte{}, columnErrBuilder(errors.Wrap(err, "target root from epoch"))
-		}
-
-		// Store the target root in the cache.
-		targetRootBySlotParentRoot[slotParentRoot] = targetRoot
-
-		return targetRoot, nil
-	}
-
 	for _, dataColumn := range dv.dataColumns {
 		// Extract the slot of the data column.
 		dataColumnSlot := dataColumn.Slot()

@@ -523,56 +490,33 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e
 		// Extract the root of the parent block corresponding to the data column.
 		parentRoot := dataColumn.ParentRoot()

-		// Compute the target root for the data column.
-		targetRoot, err := targetRootFromCache(dataColumnSlot, parentRoot)
-		if err != nil {
-			return columnErrBuilder(errors.Wrap(err, "target root"))
-		}
-
-		// Compute the epoch of the data column slot.
-		dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
-		if dataColumnEpoch > 0 {
-			dataColumnEpoch = dataColumnEpoch - 1
-		}
-
-		// Create a checkpoint for the target root.
-		checkpoint := &forkchoicetypes.Checkpoint{Root: targetRoot, Epoch: dataColumnEpoch}
-
-		// Try to extract the proposer index from the data column in the cache.
-		idx, cached := dv.pc.Proposer(checkpoint, dataColumnSlot)
-
-		if !cached {
-			parentRoot := dataColumn.ParentRoot()
-			// Ensure the expensive index computation is only performed once for
-			// concurrent requests for the same signature data.
-			idxAny, err, _ := dv.sg.Do(concatRootSlot(parentRoot, dataColumnSlot), func() (any, error) {
-				verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
-				if err != nil {
-					return nil, columnErrBuilder(errors.Wrap(err, "verifying state"))
-				}
-
-				idx, err = helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
-				if err != nil {
-					return nil, columnErrBuilder(errors.Wrap(err, "compute proposer"))
-				}
-
-				return idx, nil
-			})
-			if err != nil {
-				return err
-			}
-
-			var ok bool
-			if idx, ok = idxAny.(primitives.ValidatorIndex); !ok {
-				return columnErrBuilder(errors.New("type assertion to ValidatorIndex failed"))
-			}
-		}
+		// Ensure the expensive index computation is only performed once for
+		// concurrent requests for the same signature data.
+		idxAny, err, _ := dv.sg.Do(concatRootSlot(parentRoot, dataColumnSlot), func() (any, error) {
+			verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
+			if err != nil {
+				return nil, columnErrBuilder(errors.Wrap(err, "verifying state"))
+			}
+			idx, err := helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
+			if err != nil {
+				return nil, columnErrBuilder(errors.Wrap(err, "compute proposer"))
+			}
+			return idx, nil
+		})
+		if err != nil {
+			return err
+		}
+
+		idx, ok := idxAny.(primitives.ValidatorIndex)
+		if !ok {
+			return columnErrBuilder(errors.New("type assertion to ValidatorIndex failed"))
+		}
 		if idx != dataColumn.ProposerIndex() {
 			return columnErrBuilder(errSidecarUnexpectedProposer)
 		}
 	}

 	return nil
 }
|
||||
|
||||
|
||||
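With the proposer id cache removed, the verifier relies entirely on the singleflight deduplication shown above. For readers unfamiliar with the pattern, here is a minimal sketch using only the stock `golang.org/x/sync/singleflight` API (the key format and `expensiveProposerLookup` are hypothetical stand-ins, not Prysm's actual wiring):

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

// expensiveProposerLookup stands in for the state replay plus proposer
// computation done inside the verifier; hypothetical, for illustration only.
func expensiveProposerLookup(slot uint64) (uint64, error) {
	time.Sleep(100 * time.Millisecond) // simulate costly work
	return slot % 32, nil
}

func main() {
	var g singleflight.Group
	var wg sync.WaitGroup

	// Ten goroutines request the same (parentRoot, slot) key concurrently;
	// singleflight runs the function once and shares the result with all.
	key := fmt.Sprintf("%x|%d", [32]byte{0xab}, uint64(42))
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err, shared := g.Do(key, func() (interface{}, error) {
				return expensiveProposerLookup(42)
			})
			if err != nil {
				fmt.Println("lookup failed:", err)
				return
			}
			// Results come back as interface{}; type-assert as the verifier does.
			idx, ok := v.(uint64)
			if !ok {
				fmt.Println("unexpected result type")
				return
			}
			fmt.Printf("proposer=%d shared=%v\n", idx, shared)
		}()
	}
	wg.Wait()
}
```

The trade-off is the usual one for removing a cache: repeated lookups across separate batches recompute the proposer index, but concurrent duplicates within a batch are still collapsed to a single computation.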
@@ -799,35 +799,20 @@ func TestDataColumnsSidecarProposerExpected(t *testing.T) {
 	columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
 	firstColumn := columns[0]
 	ctx := t.Context()
 
 	testCases := []struct {
-		name          string
-		stateByRooter StateByRooter
-		proposerCache proposerCache
-		columns       []blocks.RODataColumn
-		error         string
+		name              string
+		stateByRooter     StateByRooter
+		headStateProvider *mockHeadStateProvider
+		columns           []blocks.RODataColumn
+		error             string
 	}{
 		{
-			name:          "Cached, matches",
-			stateByRooter: nil,
-			proposerCache: &mockProposerCache{
-				ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex()),
-			},
-			columns: columns,
-		},
-		{
-			name:          "Cached, does not match",
-			stateByRooter: nil,
-			proposerCache: &mockProposerCache{
-				ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex() + 1),
-			},
-			columns: columns,
-			error:   errSidecarUnexpectedProposer.Error(),
-		},
-		{
-			name:          "Not cached, state lookup failure",
+			name:          "state lookup failure",
 			stateByRooter: sbrNotFound(t, firstColumn.ParentRoot()),
-			proposerCache: &mockProposerCache{
-				ProposerCB: pcReturnsNotFound(),
+			headStateProvider: &mockHeadStateProvider{
+				headRoot: []byte{0xff}, // Different from parentRoot so it won't use head
+				headSlot: 1000,
 			},
 			columns: columns,
 			error:   "verifying state",
@@ -839,8 +824,7 @@ func TestDataColumnsSidecarProposerExpected(t *testing.T) {
 			initializer := Initializer{
 				shared: &sharedResources{
 					sr:  tc.stateByRooter,
-					pc:  tc.proposerCache,
-					hsp: &mockHeadStateProvider{},
+					hsp: tc.headStateProvider,
 					fc: &mockForkchoicer{
 						TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
 					},
@@ -1,3 +0,0 @@
-### Changed
-
-- Removed dead slot parameter from blobCacheEntry.filter
@@ -1,3 +0,0 @@
-### Changed
-
-- Avoid redundant WithHttpEndpoint when JWT is provided
@@ -1,3 +0,0 @@
-### Fixed
-
-- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020)
changelog/SashaMalysehko_fix-return-after-check.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Fix missing return after version header check in SubmitAttesterSlashingsV2.

changelog/Snezhkko_fix-type.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Fix an incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)
changelog/aarsh-revert-autonatv2.md (new file)
@@ -0,0 +1,2 @@
+### Ignored
+- Reverts the AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100, as the libp2p upgrade fails inter-op testing.
@@ -1,3 +0,0 @@
-### Added
-
-- Integrate state-diff into `HasState()`.
@@ -1,3 +0,0 @@
-### Ignored
-
-- Refactor finding slot by block root using state summary and block to its own function.
@@ -1,3 +0,0 @@
-### Fixed
-
-- Fix state diff repetitive anchor slot bug.
@@ -1,4 +0,0 @@
-### Added
-
-- Add initial configs for the state-diff feature.
-- Add kv functions for the state-diff feature.
@@ -1,3 +0,0 @@
-### Added
-
-- Integrate state-diff into `State()`.
@@ -1,3 +0,0 @@
-### Added
-
-- Add Fulu support to light client processing.
@@ -1,2 +0,0 @@
-### Added
-- Prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency.
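For context, a latency histogram like this is typically registered and observed with `client_golang` along the following lines. This is a minimal sketch: the bucket layout and the `validateAttestation` wrapper are assumptions for illustration, not Prysm's actual registration.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// The histogram name matches the changelog entry; the buckets here are an
// assumed layout and may differ from Prysm's actual choice.
var gossipAttVerifyMs = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "gossip_attestation_verification_milliseconds",
	Help:    "Latency of attestation gossip topic validation.",
	Buckets: []float64{5, 10, 25, 50, 100, 250, 500, 1000},
})

// validateAttestation stands in for the gossip topic validator (hypothetical).
func validateAttestation() {
	start := time.Now()
	defer func() {
		// Record elapsed wall time in milliseconds on every exit path.
		gossipAttVerifyMs.Observe(float64(time.Since(start).Milliseconds()))
	}()
	// ... actual validation work would happen here ...
	time.Sleep(2 * time.Millisecond)
}

func main() {
	validateAttestation()
}
```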
@@ -1,4 +0,0 @@
-### Changed
-
-- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients.
-- Changed the logging behaviour of updated fee recipients to log only the count of validators at DEBUG level and all validator indices at TRACE level.
@@ -1,2 +0,0 @@
-### Ignored
-- Add Osaka fork timestamp derivation to interop genesis
changelog/fix_kzg_batch_verifier_timeout_deadlock.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Fix a deadlock in data column gossip KZG batch verification when a caller times out, preventing result delivery.
@@ -1,3 +0,0 @@
-### Fixed
-
-- Fix E2E tests so they can start from the Electra genesis fork or future forks

changelog/james-prysm_fix-rest-replay-state.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Fixed a replay state issue in the REST API caused by the attester and sync committee duties endpoints

@@ -1,3 +0,0 @@
-### Ignored
-
-- Optimization to remove cell and blob proof computation in the blob REST API.
@@ -1,2 +0,0 @@
-### Added
-- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval
@@ -1,3 +0,0 @@
-### Changed
-
-- Changed the `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users.
@@ -1,7 +0,0 @@
-### Added
-- Data column backfill.
-- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms.
-
-### Changed
-- Backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blobs_downloaded_bytes.
@@ -1,3 +0,0 @@
-### Changed
-
-- Stop emitting payload attribute events during late block handling when we are not proposing the next slot
@@ -1,3 +0,0 @@
-### Changed
-
-- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message.
@@ -1,6 +0,0 @@
-### Changed
-
-- Added log prefix to the `genesis` package.
-- Added log prefix to the `params` package.
-- `WithGenesisValidatorsRoot`: Use camelCase for log field param.
-- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour.
@@ -1,3 +0,0 @@
-### Changed
-
-- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG
@@ -1,4 +0,0 @@
-### Removed
-
-- `NUMBER_OF_COLUMNS` configuration (no longer in the specification; replaced by a preset)
-- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (no longer in the specification)
changelog/manu-test-pr.md (new file)
@@ -0,0 +1,2 @@
+### Ignored
+- Added test requirement to `PULL_REQUEST_TEMPLATE.md`
@@ -1,2 +0,0 @@
-### Fixed
-- Check that the JWT secret length is exactly 256 bits (32 bytes) per the Engine API specification
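A minimal sketch of that validation, assuming the secret arrives as a hex-encoded string; the `parseJWTSecret` helper name is illustrative, not Prysm's actual function:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// parseJWTSecret is a hypothetical helper: it decodes a hex-encoded JWT
// secret and enforces the Engine API requirement of exactly 256 bits.
func parseJWTSecret(s string) ([]byte, error) {
	secret, err := hex.DecodeString(strings.TrimPrefix(strings.TrimSpace(s), "0x"))
	if err != nil {
		return nil, fmt.Errorf("decode hex jwt secret: %w", err)
	}
	if len(secret) != 32 {
		return nil, fmt.Errorf("jwt secret must be exactly 32 bytes (256 bits), got %d", len(secret))
	}
	return secret, nil
}

func main() {
	if _, err := parseJWTSecret("0xdeadbeef"); err != nil {
		fmt.Println(err) // rejected: only 4 bytes
	}
}
```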
@@ -1,2 +0,0 @@
-### Changed
-- Improve readability in slashing import and remove duplicated code
changelog/potuz_check_twice_attseen.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Fixed a possible race when validating two attestations at the same time.

@@ -1,3 +0,0 @@
-### Fixed
-
-- Fix array out of bounds in static analyzer.
changelog/potuz_remove_proposer_cache.md (new file)
@@ -0,0 +1,3 @@
+### Changed
+
+- Removed the proposer id cache.

changelog/potuz_return_indices_updateerr.md (new file)
@@ -0,0 +1,3 @@
+### Fixed
+
+- Do not error when the committee has been computed correctly but updating the cache failed.
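The pattern behind that fix is generic: once the value itself has been computed correctly, a failed cache write should be logged rather than propagated. A sketch under assumed names (the `Cache` interface and `computeCommittee` are hypothetical stand-ins):

```go
package main

import (
	"errors"
	"fmt"

	log "github.com/sirupsen/logrus"
)

// Cache is a hypothetical cache interface for this sketch.
type Cache interface {
	Put(key string, indices []uint64) error
}

type failingCache struct{}

func (failingCache) Put(string, []uint64) error { return errors.New("cache full") }

// computeCommittee stands in for the real committee computation.
func computeCommittee(key string) ([]uint64, error) {
	return []uint64{1, 2, 3}, nil
}

// committeeIndices returns the computed committee even when caching fails:
// a cache write error is logged at debug level, not returned to the caller.
func committeeIndices(cache Cache, key string) ([]uint64, error) {
	indices, err := computeCommittee(key)
	if err != nil {
		return nil, fmt.Errorf("compute committee: %w", err) // real failure
	}
	if err := cache.Put(key, indices); err != nil {
		// The committee is correct at this point; do not discard it just
		// because the cache update failed.
		log.WithError(err).Debug("could not update committee cache")
	}
	return indices, nil
}

func main() {
	indices, err := committeeIndices(failingCache{}, "epoch=10|root=ab")
	fmt.Println(indices, err) // [1 2 3] <nil> despite the failed cache write
}
```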
@@ -1,3 +0,0 @@
-### Changed
-
-- Use the dependent root instead of the target when possible.
@@ -1,3 +0,0 @@
-### Fixed
-
-- Use the head state to validate attestations for old blocks if they are compatible.
@@ -1,3 +0,0 @@
-### Ignored
-
-- Copied the deleted dependency `github.com/tyler-smith/go-bip39` to the third_party directory and updated Prysm to use it.
@@ -1,3 +0,0 @@
-### Fixed
-
-- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metric cardinality.
@@ -1,4 +0,0 @@
-### Ignored
-
-- Updated golang.org/x/tools
-- Introduced modernize static analyzers to nogo
@@ -1,3 +0,0 @@
-### Ignored
-
-- Updated CHANGELOG.md with release notes from v7.0.0

changelog/pvl-v7.0.1.md (new file)
@@ -0,0 +1,3 @@
+### Ignored
+
+- Updated CHANGELOG.md for v7.0.1 patch release

changelog/pvl-v7.1.0.md (new file)
@@ -0,0 +1,3 @@
+### Ignored
+
+- Changelog for v7.1.0
changelog/radek_httperror-analyzer.md (new file)
@@ -0,0 +1,3 @@
+### Added
+
+- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement.
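This analyzer targets the same bug class as the SubmitAttesterSlashingsV2 fix above: writing an HTTP error but falling through into the success path. A sketch of the shape it enforces; the version-header check and route are illustrative, and only the HandleError-then-return pattern is the point:

```go
package main

import (
	"net/http"

	"github.com/OffchainLabs/prysm/v7/network/httputil"
)

// handler sketches the pattern the analyzer enforces. The specific check is
// illustrative; what matters is that HandleError is followed by a return.
func handler(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("Eth-Consensus-Version") == "" {
		httputil.HandleError(w, "missing version header", http.StatusBadRequest)
		// Without this return, execution would fall through and also
		// write the success response below.
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/eth/v2/beacon/pool/attester_slashings", handler)
	_ = http.ListenAndServe(":8080", nil)
}
```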
@@ -1,3 +0,0 @@
-### Changed
-
-- Initialize the `ExecutionRequests` field in gossip block map.
@@ -1,3 +0,0 @@
-### Fixed
-
-- Move `BlockGossipReceived` event to the end of gossip validation.
@@ -1,3 +0,0 @@
-### Removed
-
-- Remove validator cross-client from end-to-end tests.
changelog/radek_use-statefetch-error.md (new file)
@@ -0,0 +1,3 @@
+### Ignored
+
+- Use `WriteStateFetchError` in API handlers whenever possible.

@@ -1,3 +0,0 @@
-### Ignored
-
-- Replace fixed sleep delays with active polling in the prometheus service test to improve test reliability.
@@ -1,3 +0,0 @@
-### Added
-
-- Metrics to track the earliest available slot
changelog/satushh-eth1copy.md (new file)
@@ -0,0 +1,3 @@
+### Removed
+
+- Removed an unnecessary copy from `Eth1DataHasEnoughSupport`

@@ -1,3 +0,0 @@
-### Fixed
-
-- Added a nil check in `fetchOriginSidecars` for a block that doesn't exist in the DB
changelog/satushh-graffiti.md (new file)
@@ -0,0 +1,3 @@
+### Added
+
+- Proposal design document for implementing graffiti. Graffiti is currently empty by default; the idea is to give it the form GE168dPR63af

changelog/satushh-migratetocold.md (new file)
@@ -0,0 +1,3 @@
+### Changed
+
+- Optimise `migrateToCold` by avoiding a brute-force for loop
@@ -1,3 +0,0 @@
-### Removed
-
-- Reverted the eas (earliest available slot) metric as it currently has a bug; it will be fixed later.
Some files were not shown because too many files have changed in this diff.