Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-09 21:38:05 -05:00)

Compare commits: 12 commits, v7.1.0...state-desi
| SHA1 |
|---|
| c1edf3fe55 |
| 1e658530a7 |
| b360794c9c |
| 0fc9ab925a |
| dda5ee3334 |
| 14c67376c3 |
| 9c8b68a66d |
| a3210157e2 |
| 1536d59e30 |
| 11e46a4560 |
| 5a2e51b894 |
| d20ec4c7a1 |
.github/PULL_REQUEST_TEMPLATE.md (vendored, 3 changes)

@@ -34,4 +34,5 @@ Fixes #
 - [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
 - [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
-- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
+- [ ] I have added a description with sufficient context for reviewers to understand this PR.
+- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).
CHANGELOG.md (85 changes)

@@ -4,6 +4,91 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10

This release includes several key features and fixes. If you are running v7.0.0, you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.

Release highlights:

- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
- A new node configuration custodies enough data columns to reconstruct blobs. Use the flag `--semi-supernode` to custody at least 50% of the data columns.
- Critical fixes in attestation processing.

A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.

### Added

- Add Fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
- Add prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Add prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)

### Changed

- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
- Downgraded the PrepareBeaconProposer updated fee recipients log from INFO to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Changed the updated fee recipients logging to log only the count of validators at DEBUG level and all validator indices at TRACE level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
- Backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; the default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)

### Removed

- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)

### Fixed

- Nil check for the block if it doesn't exist in the DB in `fetchOriginSidecars`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
- Fix proposals progress bar count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
- Check the JWT secret length is exactly 256 bits (32 bytes) as per the Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
- `http_error_count` now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metric cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
- Fixes E2E tests to be able to start from the Electra genesis fork or future forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)

## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08

This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.

A post mortem doc with full details is expected to be published later this week.

### Changed

- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; the default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)

### Fixed

- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)

## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10

This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.

@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
 	voteCount := uint64(0)

 	for _, vote := range beaconState.Eth1DataVotes() {
-		if AreEth1DataEqual(vote, data.Copy()) {
+		if AreEth1DataEqual(vote, data) {
 			voteCount++
 		}
 	}

@@ -642,8 +642,12 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
 	if !ok || node == nil {
 		return [32]byte{}, ErrNilNode
 	}
-	if slots.ToEpoch(node.slot) >= epoch && node.parent != nil {
-		node = node.parent
+	if slots.ToEpoch(node.slot) >= epoch {
+		if node.parent != nil {
+			node = node.parent
+		} else {
+			return f.store.finalizedDependentRoot, nil
+		}
 	}
 	return node.root, nil
 }

@@ -212,6 +212,9 @@ func (s *Store) prune(ctx context.Context) error {
 		return nil
 	}

+	// Save the new finalized dependent root because it will be pruned
+	s.finalizedDependentRoot = finalizedNode.parent.root
+
 	// Prune nodeByRoot starting from root
 	if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
 		return err

@@ -465,6 +465,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	ctx := t.Context()
 	f := setup(1, 1)

+	// Insert a block in slot 32
 	state, blk, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk))
@@ -475,6 +476,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, [32]byte{})

+	// Insert a block in slot 33
 	state, blk1, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'b'}, blk.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk1))
@@ -488,7 +490,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, [32]byte{})

-	// Insert a block for the next epoch (missed slot 0)
+	// Insert a block for the next epoch (missed slot 0), slot 65

 	state, blk2, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'c'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
@@ -509,6 +511,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, blk1.Root())

+	// Insert a block at slot 66
 	state, blk3, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+2, [32]byte{'d'}, blk2.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk3))
@@ -533,8 +536,11 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	dependent, err = f.DependentRoot(1)
 	require.NoError(t, err)
 	require.Equal(t, [32]byte{}, dependent)
+	dependent, err = f.DependentRoot(2)
+	require.NoError(t, err)
+	require.Equal(t, blk1.Root(), dependent)

-	// Insert a block for next epoch (slot 0 present)
+	// Insert a block for the next epoch, slot 96 (descends from finalized at slot 33)
 	state, blk4, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch, [32]byte{'e'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk4))
@@ -551,6 +557,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, blk1.Root())

+	// Insert a block at slot 97
 	state, blk5, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'f'}, blk4.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk5))
@@ -600,12 +607,16 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, target, blk1.Root())

-	// Prune finalization
+	// Prune finalization, finalize the block at slot 96
 	s.finalizedCheckpoint.Root = blk4.Root()
 	require.NoError(t, s.prune(ctx))
 	target, err = f.TargetRootForEpoch(blk4.Root(), 3)
 	require.NoError(t, err)
 	require.Equal(t, blk4.Root(), target)
+	// Dependent root for the finalized block should be the root of the pruned block at slot 33
+	dependent, err = f.DependentRootForEpoch(blk4.Root(), 3)
+	require.NoError(t, err)
+	require.Equal(t, blk1.Root(), dependent)
 }

 func TestStore_DependentRootForEpoch(t *testing.T) {

@@ -31,6 +31,7 @@ type Store struct {
 	proposerBoostRoot          [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
 	previousProposerBoostRoot  [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
 	previousProposerBoostScore uint64                       // previous proposer boosted root score.
+	finalizedDependentRoot     [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
 	committeeWeight            uint64                       // tracks the total active validator balance divided by the number of slots per Epoch.
 	treeRootNode               *Node                        // the root node of the store tree.
 	headNode                   *Node                        // last head Node

beacon-chain/graffiti/graffiti-proposal-brief.md (new file, 95 lines)

@@ -0,0 +1,95 @@
# Graffiti Version Info Implementation

## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.

More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md

## Implementation

### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"

type GraffitiInfo struct {
	mu           sync.RWMutex
	userGraffiti string // From --graffiti flag (set once at startup)
	clCommit     string // From version.GetCommitPrefix() helper function
	elCode       string // From engine_getClientVersionV1
	elCommit     string // From engine_getClientVersionV1
}
```

### Flow
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server.
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields (see the sketch after this list).
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
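A minimal sketch of the thread-safe EL update used in step 3. The method name `UpdateFromEngine` matches the update-logic snippet further below; the body is an assumption, not the final implementation:

```go
// UpdateFromEngine stores the latest EL identification under the write lock,
// so concurrent GenerateGraffiti readers never observe a partial update.
func (g *GraffitiInfo) UpdateFromEngine(code, commit string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.elCode = code
	g.elCommit = commit
}
```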
### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):

| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |
```go
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
	g.mu.RLock()
	defer g.mu.RUnlock()

	available := 32 - len(g.userGraffiti)

	// Commit prefixes; cleared when the EL has not reported a version yet.
	elCommit2, elCommit4 := prefix(g.elCommit, 2), prefix(g.elCommit, 4)
	clCommit2, clCommit4 := prefix(g.clCommit, 2), prefix(g.clCommit, 4)
	if g.elCode == "" {
		elCommit2, elCommit4 = "", ""
	}

	var packed string
	switch {
	case available >= 12:
		packed = g.elCode + elCommit4 + clCode + clCommit4 + g.userGraffiti
	case available >= 8:
		packed = g.elCode + elCommit2 + clCode + clCommit2 + g.userGraffiti
	case available >= 4:
		packed = g.elCode + clCode + g.userGraffiti
	case available >= 2:
		packed = g.elCode + g.userGraffiti
	default:
		packed = g.userGraffiti
	}

	var out [32]byte
	copy(out[:], packed)
	return out
}

// prefix returns at most the first n characters of s.
func prefix(s string, n int) string {
	if len(s) < n {
		return s
	}
	return s[:n]
}
```
### Update Logic
Single testable function in the execution service:
```go
func (s *Service) updateGraffitiInfo(ctx context.Context) {
	versions, err := s.GetClientVersion(ctx)
	if err != nil {
		return // Keep last good value
	}
	if len(versions) == 1 {
		s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
	}
}
```

A goroutine calls this on `slot % 8 == 4` timing (4 times per epoch, avoiding slot boundaries).
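A hedged sketch of that goroutine; the ticker wiring and slot arithmetic are assumptions for illustration, not the proposal's final implementation:

```go
// pollClientVersion fires once per slot and refreshes the EL version info
// only on the chosen slot phase (slot % 8 == 4).
func (s *Service) pollClientVersion(ctx context.Context, genesis time.Time, secondsPerSlot uint64) {
	ticker := time.NewTicker(time.Duration(secondsPerSlot) * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case tick := <-ticker.C:
			slot := uint64(tick.Sub(genesis)/time.Second) / secondsPerSlot
			if slot%8 == 4 {
				s.updateGraffitiInfo(ctx)
			}
		}
	}
}
```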
### File Changes Required

**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts the first 4 hex chars from the git commit injected via Bazel ldflags at build time (see the sketch after these lists)

**Modified:**
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
- `beacon-chain/execution/engine_client.go` - Add GetClientVersion() method that does the engine call
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
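A minimal sketch of the `GetCommitPrefix()` helper described above; the `gitCommit` variable name is an assumption about how the ldflags injection is wired:

```go
// gitCommit is assumed to be set at build time, e.g. via
// -ldflags "-X .../runtime/version.gitCommit=<sha>".
var gitCommit string

// GetCommitPrefix returns the first 4 hex characters of the build commit,
// or the whole value if it is shorter than that.
func GetCommitPrefix() string {
	if len(gitCommit) < 4 {
		return gitCommit
	}
	return gitCommit[:4]
}
```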
### Testing Strategy
- Unit test GraffitiInfo methods (priority logic, thread safety)
- Unit test updateGraffitiInfo() with mocked engine client
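A hedged sketch of a priority-logic unit test against the `GenerateGraffiti` sketch above; the field values are illustrative and mirror the `GE168dPR63afBob` example from the format table:

```go
func TestGenerateGraffiti_PacksFullPrefixes(t *testing.T) {
	g := &GraffitiInfo{
		userGraffiti: "Bob",
		clCommit:     "63af",
		elCode:       "GE",
		elCommit:     "168d",
	}
	out := g.GenerateGraffiti()

	// 32 - len("Bob") = 29 >= 12, so the full EL+CL codes and 4-char commits fit.
	want := "GE168dPR63afBob"
	got := strings.TrimRight(string(out[:]), "\x00")
	if got != want {
		t.Fatalf("GenerateGraffiti() = %q, want %q", got, want)
	}
}
```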
@@ -82,8 +82,8 @@ type StateRootNotFoundError struct {
 }

 // NewStateRootNotFoundError creates a new error instance.
-func NewStateRootNotFoundError(stateRootsSize int) StateNotFoundError {
-	return StateNotFoundError{
+func NewStateRootNotFoundError(stateRootsSize int) StateRootNotFoundError {
+	return StateRootNotFoundError{
 		message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
 	}
 }

beacon-chain/state/state-design.md (new file, 438 lines)

@@ -0,0 +1,438 @@
# Beacon Chain State Design Document

## Package Structure

### `/beacon-chain/state`

This is the top-level state package that defines interfaces only. This package provides a clean API boundary for state access without implementation details. The package follows the interface segregation principle, breaking down access patterns into granular read-only and write-only interfaces (e.g., `ReadOnlyValidators`, `WriteOnlyValidators`, `ReadOnlyBalances`, `WriteOnlyBalances`).

### `/beacon-chain/state/state-native`

This package contains the actual `BeaconState` struct and all concrete implementations of the state interfaces. It also contains thread-safe getters and setters.

### `/beacon-chain/state/fieldtrie`

A dedicated package for field-level merkle trie functionality. The main component of the package is the `FieldTrie` struct which represents a particular field's merkle trie. The package contains several field trie operations such as recomputing tries, copying and transferring them.

### `/beacon-chain/state/stateutil`

Utility package whose main components are various state merkleization functions. Other things contained in this package include field trie helpers, the implementation of shared references, and validator tracking.

## Beacon State Architecture

### `BeaconState` Structure

The `BeaconState` is a single structure that supports all consensus versions. The value of the `version` field determines the active version of the state. It is used extensively throughout the codebase to determine which code path will be executed.

```go
type BeaconState struct {
	version int
	id      uint64 // Used for tracking states in multi-value slices

	// Common fields (all versions)
	genesisTime           uint64
	genesisValidatorsRoot [32]byte
	slot                  primitives.Slot
	// ...

	// Phase0+ fields
	eth1Data                  *ethpb.Eth1Data
	eth1DataVotes             []*ethpb.Eth1Data
	eth1DepositIndex          uint64
	slashings                 []uint64
	previousEpochAttestations []*ethpb.PendingAttestation
	currentEpochAttestations  []*ethpb.PendingAttestation
	// ...

	// Altair+ fields
	previousEpochParticipation []byte
	currentEpochParticipation  []byte
	currentSyncCommittee       *ethpb.SyncCommittee
	nextSyncCommittee          *ethpb.SyncCommittee
	// ...

	// Fields for other forks...

	// Internal state management
	lock                  sync.RWMutex
	dirtyFields           map[types.FieldIndex]bool
	dirtyIndices          map[types.FieldIndex][]uint64
	stateFieldLeaves      map[types.FieldIndex]*fieldtrie.FieldTrie
	rebuildTrie           map[types.FieldIndex]bool
	valMapHandler         *stateutil.ValidatorMapHandler
	merkleLayers          [][][]byte
	sharedFieldReferences map[types.FieldIndex]*stateutil.Reference
}
```
### Multi-Value Slices

Several large array fields of the state are implemented as multi-value slices. This is a specialized data structure that enables efficient sharing and modification of slices between states. A multi-value slice is preferred over a regular slice in scenarios where only a small fraction of the array is updated at a time. In such cases, using a multi-value slice results in fewer memory allocations because many values of the slice will be shared between states, whereas with a regular slice changing even a single item results in copying the full slice. Examples include `MultiValueBlockRoots` and `MultiValueBalances`.

**Example**
```go
// Create new multi-value slice
mvBalances := NewMultiValueBalances([]uint64{32000000000, 32000000000, ...})

// Share across states
state1.balancesMultiValue = mvBalances
state2 := state1.Copy() // state2 shares the same mvBalances

// Modify in state2
state2.UpdateBalancesAtIndex(0, 31000000000) // This doesn't create a new multi-value slice.

state1.BalanceAtIndex(0) // this returns 32000000000
state2.BalanceAtIndex(0) // this returns 31000000000
```
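For intuition, here is a toy sketch of the multi-value idea — a shared backing slice plus per-state overrides keyed by the state's `id`. This is an illustration under those assumptions, not Prysm's actual implementation:

```go
// MultiValueSlice shares one backing array between many states; point writes
// are recorded per state id instead of copying the whole slice.
type MultiValueSlice[V any] struct {
	shared    []V
	overrides map[uint64]map[uint64]V // state id -> index -> value
}

func (m *MultiValueSlice[V]) At(stateID, idx uint64) V {
	if ov, ok := m.overrides[stateID]; ok {
		if v, ok := ov[idx]; ok {
			return v
		}
	}
	return m.shared[idx]
}

func (m *MultiValueSlice[V]) UpdateAt(stateID, idx uint64, val V) {
	if m.overrides[stateID] == nil {
		m.overrides[stateID] = make(map[uint64]V)
	}
	m.overrides[stateID][idx] = val // other states still see the shared value
}
```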
### Getters/Setters

All beacon state getters and setters follow a consistent pattern. All exported methods are protected from concurrent modification using read locks (for getters) or write locks (for setters).

**Getters**

Values are never returned directly from getters. A copy of the value is returned instead.

```go
func (b *BeaconState) Fork() *ethpb.Fork {
	if b.fork == nil {
		return nil
	}

	b.lock.RLock()
	defer b.lock.RUnlock()

	return b.forkVal()
}

func (b *BeaconState) forkVal() *ethpb.Fork {
	if b.fork == nil {
		return nil
	}

	prevVersion := make([]byte, len(b.fork.PreviousVersion))
	copy(prevVersion, b.fork.PreviousVersion)
	currVersion := make([]byte, len(b.fork.CurrentVersion))
	copy(currVersion, b.fork.CurrentVersion)
	return &ethpb.Fork{
		PreviousVersion: prevVersion,
		CurrentVersion:  currVersion,
		Epoch:           b.fork.Epoch,
	}
}
```
```go
func (b *BeaconState) CurrentEpochAttestations() ([]*ethpb.PendingAttestation, error) {
	if b.version != version.Phase0 {
		return nil, errNotSupported("CurrentEpochAttestations", b.version)
	}

	if b.currentEpochAttestations == nil {
		return nil, nil
	}

	b.lock.RLock()
	defer b.lock.RUnlock()

	return b.currentEpochAttestationsVal(), nil
}

func (b *BeaconState) currentEpochAttestationsVal() []*ethpb.PendingAttestation {
	if b.currentEpochAttestations == nil {
		return nil
	}

	res := make([]*ethpb.PendingAttestation, len(b.currentEpochAttestations))
	for i := range res {
		res[i] = b.currentEpochAttestations[i].Copy()
	}
	return res
}
```

In the case of fields backed by multi-value slices, the appropriate methods of the multi-value slice are invoked.

```go
func (b *BeaconState) StateRoots() [][]byte {
	b.lock.RLock()
	defer b.lock.RUnlock()

	roots := b.stateRootsVal()
	if roots == nil {
		return nil
	}
	return roots.Slice()
}

func (b *BeaconState) stateRootsVal() customtypes.StateRoots {
	if b.stateRootsMultiValue == nil {
		return nil
	}
	return b.stateRootsMultiValue.Value(b)
}

func (b *BeaconState) StateRootAtIndex(idx uint64) ([]byte, error) {
	b.lock.RLock()
	defer b.lock.RUnlock()

	if b.stateRootsMultiValue == nil {
		return nil, nil
	}
	r, err := b.stateRootsMultiValue.At(b, idx)
	if err != nil {
		return nil, err
	}
	return r[:], nil
}
```

**Setters**

Whenever a beacon state field is set, it is marked as dirty. This is needed for hash tree root computation so that the cached merkle branch with the old value of the modified field is recomputed using the new value.

```go
func (b *BeaconState) SetSlot(val primitives.Slot) error {
	b.lock.Lock()
	defer b.lock.Unlock()

	b.slot = val
	b.markFieldAsDirty(types.Slot)
	return nil
}
```

Several fields of the state are shared between states through a `Reference` mechanism. These references are stored in `b.sharedFieldReferences`. Whenever a state is copied, the reference counter for each of these fields is incremented. When a new value for any of these fields is set, the counter for the existing reference is decremented and a new reference is created for that field.

```go
type Reference struct {
	refs uint // Reference counter
	lock sync.RWMutex
}
```

```go
func (b *BeaconState) SetCurrentParticipationBits(val []byte) error {
	b.lock.Lock()
	defer b.lock.Unlock()

	if b.version == version.Phase0 {
		return errNotSupported("SetCurrentParticipationBits", b.version)
	}

	b.sharedFieldReferences[types.CurrentEpochParticipationBits].MinusRef()
	b.sharedFieldReferences[types.CurrentEpochParticipationBits] = stateutil.NewRef(1)

	b.currentEpochParticipation = val
	b.markFieldAsDirty(types.CurrentEpochParticipationBits)
	return nil
}
```

Updating a single value of an array field requires updating `b.dirtyIndices` to ensure the field trie for that particular field is properly recomputed.

```go
func (b *BeaconState) UpdateBalancesAtIndex(idx primitives.ValidatorIndex, val uint64) error {
	if err := b.balancesMultiValue.UpdateAt(b, uint64(idx), val); err != nil {
		return errors.Wrap(err, "could not update balances")
	}

	b.lock.Lock()
	defer b.lock.Unlock()

	b.markFieldAsDirty(types.Balances)
	b.addDirtyIndices(types.Balances, []uint64{uint64(idx)})
	return nil
}
```

As is the case with getters, for fields backed by multi-value slices the appropriate methods of the multi-value slice are invoked when updating field values. The multi-value slice keeps track of states internally, which means the `Reference` construct is unnecessary.

```go
func (b *BeaconState) SetStateRoots(val [][]byte) error {
	b.lock.Lock()
	defer b.lock.Unlock()

	if b.stateRootsMultiValue != nil {
		b.stateRootsMultiValue.Detach(b)
	}
	b.stateRootsMultiValue = NewMultiValueStateRoots(val)

	b.markFieldAsDirty(types.StateRoots)
	b.rebuildTrie[types.StateRoots] = true
	return nil
}

func (b *BeaconState) UpdateStateRootAtIndex(idx uint64, stateRoot [32]byte) error {
	if err := b.stateRootsMultiValue.UpdateAt(b, idx, stateRoot); err != nil {
		return errors.Wrap(err, "could not update state roots")
	}

	b.lock.Lock()
	defer b.lock.Unlock()

	b.markFieldAsDirty(types.StateRoots)
	b.addDirtyIndices(types.StateRoots, []uint64{idx})
	return nil
}
```
### Read-Only Validator

There are two ways in which validators can be accessed through getters. One approach is to use the methods `Validators` and `ValidatorAtIndex`, which return a copy of, respectively, the whole validator set or a particular validator. The other approach is to use the `ReadOnlyValidator` construct via the `ValidatorsReadOnly` and `ValidatorAtIndexReadOnly` methods. The `ReadOnlyValidator` structure is a wrapper around the protobuf validator. The advantage of using the read-only version is that no copy of the underlying validator is made, which helps with performance, especially when accessing a large number of validators (e.g. looping through the whole validator registry). Because the read-only wrapper exposes only getters, each of which returns a copy of the validator’s fields, it prevents accidental mutation of the underlying validator.

```go
type ReadOnlyValidator struct {
	validator *ethpb.Validator
}

// Only getter methods, no setters
func (v *ReadOnlyValidator) PublicKey() [48]byte
func (v *ReadOnlyValidator) EffectiveBalance() uint64
func (v *ReadOnlyValidator) Slashed() bool
// ... etc
```

**Usage**
```go
// Returns ReadOnlyValidator
validator, err := state.ValidatorAtIndexReadOnly(idx)

// Can read but not modify
pubkey := validator.PublicKey()
balance := validator.EffectiveBalance()

// To modify, must get mutable copy
validatorCopy := validator.Copy() // Returns *ethpb.Validator
validatorCopy.EffectiveBalance = newBalance
state.UpdateValidatorAtIndex(idx, validatorCopy)
```
## Field Trie System

For a few large arrays, such as block roots or the validator registry, hashing the state field at every slot would be very expensive. To avoid such re-hashing, the underlying merkle trie of the field is maintained and only the branch(es) corresponding to the changed index(es) are recomputed, instead of the whole trie.

Each state version has a list of active fields defined (e.g. `phase0Fields`, `altairFields`), which serves as the basis for field trie creation.

```go
type FieldTrie struct {
	*sync.RWMutex
	reference     *stateutil.Reference // The number of states this field trie is shared between
	fieldLayers   [][]*[32]byte        // Merkle trie layers
	field         types.FieldIndex     // Which field this field trie represents
	dataType      types.DataType       // Type of field's array
	length        uint64               // Maximum number of elements
	numOfElems    int                  // Number of elements
	isTransferred bool                 // Whether trie was transferred
}
```

The `DataType` enum indicates the type of the field's array. The possible values are:
- `BasicArray`: Fixed-size arrays (e.g. `blockRoots`)
- `CompositeArray`: Variable-size arrays (e.g. `validators`)
- `CompressedArray`: Variable-size arrays that pack multiple elements per trie leaf (e.g. `balances` with 4 elements per leaf; see the packing sketch after this list)
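A small illustration of the `CompressedArray` packing rule for balances — four little-endian `uint64` values per 32-byte leaf, following SSZ packing; the helper name is illustrative:

```go
import "encoding/binary"

// packBalancesLeaf packs four consecutive balances into one 32-byte trie leaf.
func packBalancesLeaf(balances [4]uint64) [32]byte {
	var leaf [32]byte
	for i, v := range balances {
		binary.LittleEndian.PutUint64(leaf[i*8:(i+1)*8], v)
	}
	return leaf
}
```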
### Recomputing a Trie

To avoid recomputing the state root every time any value of the state changes, branches of field tries are not recomputed until the state root is actually needed. When it is necessary to recompute a trie, the `RecomputeTrie` function rebuilds the affected branches in the trie according to the provided changed indices. The changed indices of each field are tracked in the `dirtyIndices` field of the beacon state, which is a `map[types.FieldIndex][]uint64`. The recomputation algorithms for fixed-size and variable-size fields are different.
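As a rough illustration of the fixed-size case, recomputation walks each dirty leaf's path to the root and re-hashes only those branches. This toy sketch assumes a complete binary trie stored as per-depth layers and is not Prysm's actual algorithm:

```go
import "crypto/sha256"

// recomputeDirtyBranches refreshes the branches above each dirty leaf and
// returns the new root. layers[0] holds the leaves; layers[d+1][i] is the
// parent of layers[d][2i] and layers[d][2i+1].
func recomputeDirtyBranches(layers [][][32]byte, leaves [][32]byte, dirty []uint64) [32]byte {
	for _, leafIdx := range dirty {
		layers[0][leafIdx] = leaves[leafIdx]
		idx := leafIdx
		for d := 0; d < len(layers)-1; d++ {
			parent := idx / 2
			pair := append(layers[d][2*parent][:], layers[d][2*parent+1][:]...)
			layers[d+1][parent] = sha256.Sum256(pair)
			idx = parent
		}
	}
	return layers[len(layers)-1][0]
}
```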
### Transferring a Trie

When it is expected that an older state won't need its trie for recomputation, its trie layers can be transferred directly to a new trie instead of copying them:

```go
func (f *FieldTrie) TransferTrie() *FieldTrie {
	f.isTransferred = true
	nTrie := &FieldTrie{
		fieldLayers: f.fieldLayers, // Direct transfer, no copy
		field:       f.field,
		dataType:    f.dataType,
		reference:   stateutil.NewRef(1),
		// ...
	}
	f.fieldLayers = nil // Zero out original
	return nTrie
}
```

The downside of this approach is that if it becomes necessary to access the older state's trie later on, the whole trie would have to be recreated since it is empty now. This is especially costly for the validator registry and that is why it is always copied instead.

```go
if fTrie.FieldReference().Refs() > 1 {
	var newTrie *fieldtrie.FieldTrie
	// We choose to only copy the validator
	// trie as it is pretty expensive to regenerate.
	if index == types.Validators {
		newTrie = fTrie.CopyTrie()
	} else {
		newTrie = fTrie.TransferTrie()
	}
	fTrie.FieldReference().MinusRef()
	b.stateFieldLeaves[index] = newTrie
	fTrie = newTrie
}
```
## Hash Tree Root Computation

The `BeaconState` structure is not a protobuf object, so there are no SSZ-generated methods for its fields. For each state field, there exists a custom method that returns its root. One advantage of doing it this way is the possibility to have the most efficient implementation possible. As an example, when computing the root of the validator registry, validators are hashed in parallel and vectorized sha256 computation is utilized to speed up the computation.

```go
func OptimizedValidatorRoots(validators []*ethpb.Validator) ([][32]byte, error) {
	// Exit early if no validators are provided.
	if len(validators) == 0 {
		return [][32]byte{}, nil
	}
	wg := sync.WaitGroup{}
	n := runtime.GOMAXPROCS(0)
	rootsSize := len(validators) * validatorFieldRoots
	groupSize := len(validators) / n
	roots := make([][32]byte, rootsSize)
	wg.Add(n - 1)
	for j := 0; j < n-1; j++ {
		go hashValidatorHelper(validators, roots, j, groupSize, &wg)
	}
	for i := (n - 1) * groupSize; i < len(validators); i++ {
		fRoots, err := ValidatorFieldRoots(validators[i])
		if err != nil {
			return [][32]byte{}, errors.Wrap(err, "could not compute validators merkleization")
		}
		for k, root := range fRoots {
			roots[i*validatorFieldRoots+k] = root
		}
	}
	wg.Wait()

	// A validator's tree can be represented with a depth of 3, as log2(8) = 3.
	// Using this property we can lay out all the individual fields of a
	// validator and hash them in a single level using our vectorized routine.
	for range validatorTreeDepth {
		// Overwrite input lists as we are hashing by level
		// and only need the highest level to proceed.
		roots = htr.VectorizedSha256(roots)
	}
	return roots, nil
}
```

Calculating the full state root is very expensive; therefore, it is done in a lazy fashion. Previously generated merkle layers are cached in the state and only merkle trie branches corresponding to dirty indices are regenerated before returning the final root.

```go
func (b *BeaconState) HashTreeRoot(ctx context.Context) ([32]byte, error) {
	ctx, span := trace.StartSpan(ctx, "beaconState.HashTreeRoot")
	defer span.End()

	b.lock.Lock()
	defer b.lock.Unlock()

	// When len(b.merkleLayers) > 0, this function returns immediately
	if err := b.initializeMerkleLayers(ctx); err != nil {
		return [32]byte{}, err
	}
	// Dirty fields are tracked in the beacon state through b.dirtyFields
	if err := b.recomputeDirtyFields(ctx); err != nil {
		return [32]byte{}, err
	}
	return bytesutil.ToBytes32(b.merkleLayers[len(b.merkleLayers)-1][0]), nil
}
```

@@ -6,6 +6,7 @@ import (
 	"fmt"

 	"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
+	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
 	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
 	"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
 	"github.com/sirupsen/logrus"
@@ -37,76 +38,84 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
 		return nil
 	}

-	// Start at previous finalized slot, stop at current finalized slot (it will be handled in the next migration).
-	// If the slot is on archived point, save the state of that slot to the DB.
-	for slot := oldFSlot; slot < fSlot; slot++ {
+	// Calculate the first archived point slot >= oldFSlot (but > 0).
+	// This avoids iterating through every slot and only visits archived points directly.
+	var startSlot primitives.Slot
+	if oldFSlot == 0 {
+		startSlot = s.slotsPerArchivedPoint
+	} else {
+		// Round up to the next archived point
+		startSlot = (oldFSlot + s.slotsPerArchivedPoint - 1) / s.slotsPerArchivedPoint * s.slotsPerArchivedPoint
+	}
+
+	// Start at the first archived point after old finalized slot, stop before current finalized slot.
+	// Jump directly between archived points.
+	for slot := startSlot; slot < fSlot; slot += s.slotsPerArchivedPoint {
 		if ctx.Err() != nil {
 			return ctx.Err()
 		}

-		if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
 		cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
 		if err != nil {
 			return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
 		}

 		var aRoot [32]byte
 		var aState state.BeaconState

 		// When the epoch boundary state is not in cache due to skip slot scenario,
 		// we have to regenerate the state which will represent epoch boundary.
 		// By finding the highest available block below epoch boundary slot, we
 		// generate the state for that block root.
 		if exists {
 			aRoot = cached.root
 			aState = cached.state
 		} else {
 			_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
 			if err != nil {
 				return err
 			}
 			// Given the block has been finalized, the db should not have more than one block in a given slot.
 			// We should error out when this happens.
 			if len(roots) != 1 {
 				return errUnknownBlock
 			}
 			aRoot = roots[0]
 			// There's no need to generate the state if the state already exists in the DB.
 			// We can skip saving the state.
 			if !s.beaconDB.HasState(ctx, aRoot) {
 				aState, err = s.StateByRoot(ctx, aRoot)
 				if err != nil {
 					return err
 				}
 			}
 		}

 		if s.beaconDB.HasState(ctx, aRoot) {
 			// If you are migrating a state and its already part of the hot state cache saved to the db,
 			// you can just remove it from the hot state cache as it becomes redundant.
 			s.saveHotStateDB.lock.Lock()
 			roots := s.saveHotStateDB.blockRootsOfSavedStates
 			for i := range roots {
 				if aRoot == roots[i] {
 					s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
 					// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
 					// Break here is ok.
 					break
 				}
 			}
 			s.saveHotStateDB.lock.Unlock()
 			continue
 		}

 		if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
 			return err
 		}
 		log.WithFields(
 			logrus.Fields{
 				"slot": aState.Slot(),
 				"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
 			}).Info("Saved state in DB")
-		}
 	}

 	// Update finalized info in memory.

@@ -265,7 +265,7 @@ func (s *Service) processVerifiedAttestation(
 	if key, err := generateUnaggregatedAttCacheKey(broadcastAtt); err != nil {
 		log.WithError(err).Error("Failed to generate cache key for attestation tracking")
 	} else {
-		s.setSeenUnaggregatedAtt(key)
+		_ = s.setSeenUnaggregatedAtt(key)
 	}

 	valCount, err := helpers.ActiveValidatorCount(ctx, preState, slots.ToEpoch(data.Slot))
@@ -320,7 +320,7 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
 		return
 	}

-	s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
+	_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())

 	if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
 		log.WithError(err).Debug("Could not broadcast aggregated attestation")

@@ -137,7 +137,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
 		return validationRes, err
 	}

-	s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex())
+	if first := s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex()); !first {
+		return pubsub.ValidationIgnore, nil
+	}

 	msg.ValidatorData = m

@@ -265,13 +267,19 @@ func (s *Service) hasSeenAggregatorIndexEpoch(epoch primitives.Epoch, aggregator
 }

 // Set aggregate's aggregator index target epoch as seen.
-func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) {
+// Returns true if this is the first time seeing this aggregator index and epoch.
+func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) bool {
 	b := append(bytesutil.Bytes32(uint64(epoch)), bytesutil.Bytes32(uint64(aggregatorIndex))...)

 	s.seenAggregatedAttestationLock.Lock()
 	defer s.seenAggregatedAttestationLock.Unlock()

+	_, seen := s.seenAggregatedAttestationCache.Get(string(b))
+	if seen {
+		return false
+	}
 	s.seenAggregatedAttestationCache.Add(string(b), true)
+	return true
 }

 // This validates the bitfield is correct and aggregator's index in state is within the beacon committee.

@@ -801,3 +801,27 @@ func TestValidateAggregateAndProof_RejectWhenAttEpochDoesntEqualTargetEpoch(t *t
 	assert.NotNil(t, err)
 	assert.Equal(t, pubsub.ValidationReject, res)
 }
+
+func Test_SetAggregatorIndexEpochSeen(t *testing.T) {
+	db := dbtest.SetupDB(t)
+	p := p2ptest.NewTestP2P(t)
+
+	r := &Service{
+		cfg: &config{
+			p2p:      p,
+			beaconDB: db,
+		},
+		seenAggregatedAttestationCache: lruwrpr.New(10),
+	}
+
+	aggIndex := primitives.ValidatorIndex(42)
+	epoch := primitives.Epoch(7)
+
+	require.Equal(t, false, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
+	first := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
+	require.Equal(t, true, first)
+	require.Equal(t, true, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
+
+	second := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
+	require.Equal(t, false, second)
+}

@@ -104,7 +104,8 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
 	}

 	if !s.slasherEnabled {
-		// Verify this the first attestation received for the participating validator for the slot.
+		// Verify this is the first attestation received for the participating validator for the slot. This verification is here to return early if we've already seen this attestation.
+		// This verification is carried out again later, after all other validations, to avoid TOCTOU issues.
 		if s.hasSeenUnaggregatedAtt(attKey) {
 			return pubsub.ValidationIgnore, nil
 		}
@@ -228,7 +229,10 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
 		Data: eventData,
 	})

-	s.setSeenUnaggregatedAtt(attKey)
+	if first := s.setSeenUnaggregatedAtt(attKey); !first {
+		// Another concurrent validation processed the same attestation in the meantime
+		return pubsub.ValidationIgnore, nil
+	}

 	// Attach final validated attestation to the message for further pipeline use
 	msg.ValidatorData = attForValidation

@@ -385,11 +389,16 @@ func (s *Service) hasSeenUnaggregatedAtt(key string) bool {
 }

 // Set an incoming attestation as seen for the participating validator for the slot.
-func (s *Service) setSeenUnaggregatedAtt(key string) {
+// Returns false if the attestation was already seen.
+func (s *Service) setSeenUnaggregatedAtt(key string) bool {
 	s.seenUnAggregatedAttestationLock.Lock()
 	defer s.seenUnAggregatedAttestationLock.Unlock()

+	_, seen := s.seenUnAggregatedAttestationCache.Get(key)
+	if seen {
+		return false
+	}
 	s.seenUnAggregatedAttestationCache.Add(key, true)
+	return true
 }

 // hasBlockAndState returns true if the beacon node knows about a block and associated state in the
@@ -499,6 +499,10 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 		Data:            &ethpb.AttestationData{Slot: 2, CommitteeIndex: 0},
 		AggregationBits: bitfield.Bitlist{0b1001},
 	}
+	s3c0a0 := &ethpb.Attestation{
+		Data:            &ethpb.AttestationData{Slot: 3, CommitteeIndex: 0},
+		AggregationBits: bitfield.Bitlist{0b1001},
+	}

 	t.Run("empty cache", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)
@@ -506,26 +510,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 	})
 	t.Run("ok", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)
-		s.setSeenUnaggregatedAtt(key)
+		first := s.setSeenUnaggregatedAtt(key)
 		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
 	})
+	t.Run("already seen", func(t *testing.T) {
+		key := generateKey(t, s3c0a0)
+		first := s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
+		first = s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, false, first)
+	})
 	t.Run("different slot", func(t *testing.T) {
 		key1 := generateKey(t, s1c0a0)
 		key2 := generateKey(t, s2c0a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different committee index", func(t *testing.T) {
 		key1 := generateKey(t, s0c1a0)
 		key2 := generateKey(t, s0c2a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different bit", func(t *testing.T) {
 		key1 := generateKey(t, s0c0a1)
 		key2 := generateKey(t, s0c0a2)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("0 bits set is considered not seen", func(t *testing.T) {
 		a := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1000}}
@@ -576,6 +593,11 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 		CommitteeId:   0,
 		AttesterIndex: 0,
 	}
+	s3c0a0 := &ethpb.SingleAttestation{
+		Data:          &ethpb.AttestationData{Slot: 2},
+		CommitteeId:   0,
+		AttesterIndex: 0,
+	}

 	t.Run("empty cache", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)
@@ -583,26 +605,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
 	})
 	t.Run("ok", func(t *testing.T) {
 		key := generateKey(t, s0c0a0)
-		s.setSeenUnaggregatedAtt(key)
+		first := s.setSeenUnaggregatedAtt(key)
 		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different slot", func(t *testing.T) {
 		key1 := generateKey(t, s1c0a0)
 		key2 := generateKey(t, s2c0a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
+	t.Run("already seen", func(t *testing.T) {
+		key := generateKey(t, s3c0a0)
+		first := s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, true, first)
+		first = s.setSeenUnaggregatedAtt(key)
+		assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
+		assert.Equal(t, false, first)
+	})
 	t.Run("different committee index", func(t *testing.T) {
 		key1 := generateKey(t, s0c1a0)
 		key2 := generateKey(t, s0c2a0)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("different attester", func(t *testing.T) {
 		key1 := generateKey(t, s0c0a1)
 		key2 := generateKey(t, s0c0a2)
-		s.setSeenUnaggregatedAtt(key1)
+		first := s.setSeenUnaggregatedAtt(key1)
 		assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
+		assert.Equal(t, true, first)
 	})
 	t.Run("single attestation is considered not seen", func(t *testing.T) {
 		a := &ethpb.AttestationElectra{}

@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Removed dead slot parameter from blobCacheEntry.filter
|
||||
@@ -1,3 +0,0 @@
|
||||
## Changed
|
||||
|
||||
- Avoid redundant WithHttpEndpoint when JWT is provided
|
||||
@@ -1,3 +0,0 @@
### Fixed

- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020)
3 changelog/Snezhkko_fix-type.md Normal file
@@ -0,0 +1,3 @@
### Fixed

- incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)
2 changelog/aarsh-revert-autonatv2.md Normal file
@@ -0,0 +1,2 @@
### Ignored
- Reverts AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100 as the libp2p upgrade fails inter-op testing.
@@ -1,3 +0,0 @@
### Added

- Integrate state-diff into `HasState()`.
@@ -1,3 +0,0 @@
### Ignored

- Refactor finding a slot by block root (via state summary and block) into its own function.
@@ -1,3 +0,0 @@
### Fixed

- Fix state diff repetitive anchor slot bug.
@@ -1,4 +0,0 @@
### Added

- Add initial configs for the state-diff feature.
- Add kv functions for the state-diff feature.
@@ -1,3 +0,0 @@
### Added

- Integrate state-diff into `State()`.
@@ -1,3 +0,0 @@
### Added

- add fulu support to light client processing.
@@ -1,2 +0,0 @@
### Added
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency.
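A latency metric like this is typically a histogram observed around the validation call. A minimal sketch with prometheus/client_golang — the bucket boundaries and wiring below are illustrative assumptions, not Prysm's actual registration:

package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var gossipAttVerifyMs = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "gossip_attestation_verification_milliseconds",
	Help:    "Attestation gossip topic validation latency.",
	Buckets: []float64{1, 5, 10, 25, 50, 100, 250, 500, 1000}, // assumed buckets
})

// validateWithTiming wraps the real validation work and records how long
// it took, in milliseconds.
func validateWithTiming(validate func() error) error {
	start := time.Now()
	err := validate()
	gossipAttVerifyMs.Observe(float64(time.Since(start).Milliseconds()))
	return err
}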
@@ -1,4 +0,0 @@
### Changed

- Downgraded the PrepareBeaconProposer "Updated fee recipients" log from INFO to DEBUG.
- Changed the "Updated fee recipients" logging behaviour to log only the validator count at DEBUG level and all validator indices at TRACE level.
@@ -1,2 +0,0 @@
### Ignored
- Add osaka fork timestamp derivation to interop genesis
@@ -1,3 +0,0 @@
### Fixed

- Fixes E2E tests so they can start from the Electra genesis fork or future forks
@@ -1,3 +0,0 @@
### Ignored

- Optimization to remove cell and blob proof computation in the blob REST API.
@@ -1,2 +0,0 @@
### Added
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval
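The reason half is the magic fraction: PeerDAS extends blob data with a 2x erasure code, so any 50% of the data columns is enough to rebuild the rest and therefore recover full blobs. A back-of-the-envelope sketch under stated assumptions — the column count (128 per the Fulu preset) and the non-custody default are assumed here for illustration, not taken from Prysm's code:

package custody

const numberOfColumns = 128 // preset value, assumed for illustration

// custodyColumns returns how many columns a node keeps under each mode.
func custodyColumns(supernode, semiSupernode bool) int {
	switch {
	case supernode:
		return numberOfColumns // all columns
	case semiSupernode:
		return numberOfColumns / 2 // 64: exactly the reconstruction threshold
	default:
		return 4 // regular-node custody minimum; illustrative only
	}
}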
@@ -1,3 +0,0 @@
### Changed

- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users.
@@ -1,7 +0,0 @@
### Added
- Data column backfill.
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms.

### Changed
- Backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blobs_downloaded_bytes.
@@ -1,3 +0,0 @@
### Changed

- Stop emitting payload attribute events during late block handling when we are not proposing the next slot
@@ -1,3 +0,0 @@
### Changed

- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message.
@@ -1,6 +0,0 @@
### Changed

- Added log prefix to the `genesis` package.
- Added log prefix to the `params` package.
- `WithGenesisValidatorsRoot`: Use camelCase for log field param.
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour.
@@ -1,3 +0,0 @@
### Changed

- Move the "Not enough connected peers" log (for a given subnet) from WARN to DEBUG
@@ -1,4 +0,0 @@
### Removed

- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset)
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more)
2 changelog/manu-test-pr.md Normal file
@@ -0,0 +1,2 @@
### Ignored
- Added test requirement to `PULL_REQUEST_TEMPLATE.md`
@@ -1,2 +0,0 @@
### Fixed
- Check the JWT secret length is exactly 256 bits (32 bytes) as per the Engine API specification
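The Engine API expects a 32-byte secret, conventionally stored hex-encoded (often with a 0x prefix). A minimal sketch of the length check described above — the function name and error wording are illustrative, not Prysm's exact code:

package jwt

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// parseJWTSecret decodes a hex-encoded secret and enforces exactly 32 bytes.
func parseJWTSecret(hexSecret string) ([]byte, error) {
	s := strings.TrimPrefix(strings.TrimSpace(hexSecret), "0x")
	secret, err := hex.DecodeString(s)
	if err != nil {
		return nil, fmt.Errorf("jwt secret is not valid hex: %w", err)
	}
	if len(secret) != 32 {
		return nil, fmt.Errorf("jwt secret must be exactly 256 bits (32 bytes), got %d bytes", len(secret))
	}
	return secret, nil
}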
@@ -1,2 +0,0 @@
### Changed
- Improve readability in slashing import and remove duplicated code
3 changelog/potuz_check_twice_attseen.md Normal file
@@ -0,0 +1,3 @@
### Fixed

- Fixed possible race when validating two attestations at the same time.
3 changelog/potuz_finalized_deproot.md Normal file
@@ -0,0 +1,3 @@
### Added

- Track the dependent root of the latest finalized checkpoint in forkchoice.
@@ -1,3 +0,0 @@
### Fixed

- Fix array out of bounds in static analyzer.
@@ -1,3 +0,0 @@
### Changed

- Use dependent root instead of target when possible.
@@ -1,3 +0,0 @@
### Fixed

- Use head state to validate attestations for old blocks if they are compatible.
@@ -1,3 +0,0 @@
### Ignored

- Copied deleted dependency `github.com/tyler-smith/go-bip39` to the third_party directory and updated Prysm to use that.
@@ -1,3 +0,0 @@
### Fixed

- `http_error_count` now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metrics cardinality.
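The cardinality point is worth spelling out: raw URLs embed block roots, validator indices, and similar per-request values, so each distinct URL mints a new label value and hence a new time series, while a fixed endpoint name keeps the series count bounded. A sketch of the pattern — the metric wiring and endpoint name below are illustrative, not Prysm's exact code:

package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var httpErrorCount = promauto.NewCounterVec(prometheus.CounterOpts{
	Name: "http_error_count",
	Help: "Number of HTTP requests resulting in an error response.",
}, []string{"endpoint"})

func recordError() {
	// Good: one label value no matter which block is requested.
	httpErrorCount.WithLabelValues("GetBlockV2").Inc()
	// Bad (pre-fix behavior): labeling with the raw URL, e.g.
	// "/eth/v2/beacon/blocks/0xdeadbeef...", creates one series per URL.
}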
@@ -1,4 +0,0 @@
### Ignored

- Updated golang.org/x/tools
- Introduced modernize static analyzers to nogo
@@ -1,3 +0,0 @@
### Ignored

- Updated CHANGELOG.md with release notes from v7.0.0
3 changelog/pvl-v7.0.1.md Normal file
@@ -0,0 +1,3 @@
### Ignored

- Updated CHANGELOG.md for v7.0.1 patch release
3 changelog/pvl-v7.1.0.md Normal file
@@ -0,0 +1,3 @@
### Ignored

- Changelog for v7.1.0
@@ -1,3 +0,0 @@
### Changed

- Initialize the `ExecutionRequests` field in gossip block map.
@@ -1,3 +0,0 @@
### Fixed

- Move `BlockGossipReceived` event to the end of gossip validation.
@@ -1,3 +0,0 @@
### Removed

- Remove validator cross-client from end-to-end tests.
3 changelog/radek_state-design-doc.md Normal file
@@ -0,0 +1,3 @@
### Added

- Add state design doc.
@@ -1,3 +0,0 @@
### Ignored

- Replace fixed sleep delays with active polling in prometheus service test to improve test reliability.
@@ -1,3 +0,0 @@
### Added

- Metrics to track earliest available slot
3 changelog/satushh-eth1copy.md Normal file
@@ -0,0 +1,3 @@
### Removed

- Removed an unnecessary copy from `Eth1DataHasEnoughSupport`
@@ -1,3 +0,0 @@
### Fixed

- Added a nil check for the block in `fetchOriginSidecars` when it doesn't exist in the DB
3 changelog/satushh-graffiti.md Normal file
@@ -0,0 +1,3 @@
### Added

- Proposal design document to implement graffiti. Currently it is empty by default, and the idea is to have it take a form like GE168dPR63af.
3 changelog/satushh-migratetocold.md Normal file
@@ -0,0 +1,3 @@
### Changed

- Optimise `migrateToCold` by avoiding a brute-force for loop
@@ -1,3 +0,0 @@
### Removed

- Reverted the earliest available slot (EAS) metric as it currently has a bug. It will be fixed later.
@@ -1,2 +0,0 @@
### Added
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start.
@@ -1,3 +0,0 @@
### Added

- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources.
@@ -1,3 +0,0 @@
### Added

- Add supported version for fork versions
@@ -1,2 +0,0 @@
### Changed
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; the default remains to process them unless the flag is explicitly enabled.
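A sketch of the gating this flag implies, with hypothetical names throughout (the struct, field, and helper below are assumptions for illustration, not Prysm's feature-flag plumbing): when the flag is on, attestations whose target state is not viable are dropped; when off (the default), they are processed as before.

package verify

type features struct {
	ignoreUnviableAttestations bool // set by --ignore-unviable-attestations
}

// shouldProcess reports whether an attestation should continue through
// validation given the viability of its target state.
func (f *features) shouldProcess(targetViable bool) bool {
	if !targetViable && f.ignoreUnviableAttestations {
		return false // drop: target state is not viable and the flag is set
	}
	return true // default: process even unviable targets
}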
@@ -1,3 +0,0 @@
### Added

- Implement Gloas state
@@ -1,3 +0,0 @@
### Added

- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling
@@ -1,3 +0,0 @@
### Ignored

- P2P: wire stategen into the service for the last finalized state and use it for the active validator count
@@ -1,3 +0,0 @@
### Changed

- Use explicit slot component timing configs