Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 22:07:59 -05:00)

Compare commits: 104 commits, e2e-test-d...fix-earlie
3 .github/PULL_REQUEST_TEMPLATE.md (vendored)
@@ -34,4 +34,5 @@ Fixes #
|
||||
|
||||
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
|
||||
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
|
||||
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.
|
||||
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
|
||||
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).
|
||||
|
||||
@@ -193,6 +193,7 @@ nogo(
|
||||
"//tools/analyzers/featureconfig:go_default_library",
|
||||
"//tools/analyzers/gocognit:go_default_library",
|
||||
"//tools/analyzers/ineffassign:go_default_library",
|
||||
"//tools/analyzers/httperror:go_default_library",
|
||||
"//tools/analyzers/interfacechecker:go_default_library",
|
||||
"//tools/analyzers/logcapitalization:go_default_library",
|
||||
"//tools/analyzers/logruswitherror:go_default_library",
|
||||
|
||||
85 CHANGELOG.md
@@ -4,6 +4,91 @@ All notable changes to this project will be documented in this file.
|
||||
|
||||
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
|
||||
|
||||
## [v7.1.0](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.1.0) - 2025-12-10
|
||||
|
||||
This release includes several key features/fixes. If you are running v7.0.0 then you should update to v7.0.1 or later and remove the flag `--disable-last-epoch-targets`.
|
||||
|
||||
Release highlights:
|
||||
|
||||
- Backfill is now supported in Fulu. Backfill from checkpoint sync now supports data columns. Run with `--enable-backfill` when using checkpoint sync.
|
||||
- A new node configuration to custody enough data columns to reconstruct blobs. Use flag `--semi-supernode` to custody at least 50% of the data columns.
|
||||
- Critical fixes in attestation processing.
|
||||
|
||||
A post mortem doc with full details on the mainnet attestation processing issue from December 4th is expected in the coming days.
|
||||
|
||||
### Added
|
||||
|
||||
- Add Fulu support to light client processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15995)
|
||||
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16018)
|
||||
- Implement Gloas state. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15611)
|
||||
- Add initial configs for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
|
||||
- Add kv functions for the state-diff feature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15903)
|
||||
- Add supported version for fork versions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16030)
|
||||
- Added Prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15785)
|
||||
- Integrate state-diff into `State()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16033)
|
||||
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15618)
|
||||
- Integrate state-diff into `HasState()`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16045)
|
||||
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16029)
|
||||
- Data column backfill. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
|
||||
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
|
||||
- Added Prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16099)
|
||||
|
||||
### Changed
|
||||
|
||||
- Improve readability in slashing import and remove duplicated code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15957)
|
||||
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
|
||||
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16012)
|
||||
- Use explicit slot component timing configs. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15999)
|
||||
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
|
||||
- Changed the `Updated fee recipients` logging to report only the count of validators at Debug level and all validator indices at Trace level. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15998)
|
||||
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16026)
|
||||
- Initialize the `ExecutionRequests` field in gossip block map. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16047)
|
||||
- Avoid redundant WithHttpEndpoint when JWT is provided. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16032)
|
||||
- Removed dead slot parameter from blobCacheEntry.filter. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16021)
|
||||
- Added log prefix to the `genesis` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
|
||||
- Added log prefix to the `params` package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
|
||||
- `WithGenesisValidatorsRoot`: Use camelCase for log field param. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
|
||||
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16075)
|
||||
- Backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15580)
|
||||
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
|
||||
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16097)
|
||||
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
|
||||
|
||||
### Removed
|
||||
|
||||
- Remove validator cross-client from end-to-end tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16025)
|
||||
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
|
||||
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more). [[PR]](https://github.com/prysmaticlabs/prysm/pull/16073)
|
||||
|
||||
### Fixed
|
||||
|
||||
- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16006)
|
||||
- Fix proposals progress bar count. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16020)
|
||||
- Move `BlockGossipReceived` event to the end of gossip validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16031)
|
||||
- Fix state diff repetitive anchor slot bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16037)
|
||||
- Check the JWT secret length is exactly 256 bits (32 bytes) as per Engine API specification. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15939)
|
||||
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This reduces metric cardinality. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16055)
|
||||
- Fix array out of bounds in static analyzer. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16058)
|
||||
- Fix E2E tests so they can start from the Electra genesis fork or later forks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16048)
|
||||
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
|
||||
|
||||
## [v7.0.1](https://github.com/prysmaticlabs/prysm/compare/v7.0.0...v7.0.1) - 2025-12-08
|
||||
|
||||
This patch release contains 4 cherry-picked changes to address the mainnet attestation processing issue from 2025-12-04. Operators are encouraged to update to this release as soon as practical. As of this release, the feature flag `--disable-last-epoch-targets` has been deprecated and can be safely removed from your node configuration.
|
||||
|
||||
A post mortem doc with full details is expected to be published later this week.
|
||||
|
||||
### Changed
|
||||
|
||||
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16087)
|
||||
- Use dependent root instead of target when possible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15996)
|
||||
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16094)
|
||||
|
||||
### Fixed
|
||||
|
||||
- Use head state to validate attestations for old blocks if they are compatible. [[PR]](https://github.com/prysmaticlabs/prysm/pull/16095)
|
||||
|
||||
|
||||
## [v7.0.0](https://github.com/prysmaticlabs/prysm/compare/v6.1.4...v7.0.0) - 2025-11-10
|
||||
|
||||
This is our initial mainnet release for the Ethereum mainnet Fulu fork on December 3rd, 2025. All operators MUST update to v7.0.0 or a later release prior to the Fulu fork epoch `411392`. See the [Ethereum Foundation blog post](https://blog.ethereum.org/2025/11/06/fusaka-mainnet-announcement) for more information on Fulu.
|
||||
|
||||
@@ -521,6 +521,13 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
|
||||
return 0, 0, errors.Wrap(err, "update custody info")
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"earliestAvailableSlot": earliestAvailableSlot,
|
||||
"custodyGroupCount": actualCustodyGroupCount,
|
||||
"inputSlot": slot,
|
||||
"targetCustodyGroups": targetCustodyGroupCount,
|
||||
}).Info("Updated custody info in database")
|
||||
|
||||
if isSupernode {
|
||||
log.WithFields(logrus.Fields{
|
||||
"current": actualCustodyGroupCount,
|
||||
|
||||
@@ -60,7 +60,7 @@ func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb
|
||||
voteCount := uint64(0)
|
||||
|
||||
for _, vote := range beaconState.Eth1DataVotes() {
|
||||
if AreEth1DataEqual(vote, data.Copy()) {
|
||||
if AreEth1DataEqual(vote, data) {
|
||||
voteCount++
|
||||
}
|
||||
}
|
||||
|
||||
@@ -152,7 +152,7 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
|
||||
}
|
||||
|
||||
if err := UpdateCommitteeCache(ctx, s, epoch); err != nil {
|
||||
return nil, errors.Wrap(err, "could not update committee cache")
|
||||
log.WithError(err).Error("Could not update committee cache")
|
||||
}
|
||||
|
||||
return indices, nil
|
||||
|
||||
@@ -15,7 +15,8 @@ import (
|
||||
)
|
||||
|
||||
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
|
||||
// In this case, it also updates the earliest available slot with the provided value.
|
||||
// When the custody group count increases, the earliest available slot is set to the maximum of the
|
||||
// incoming value and the stored value, ensuring the slot never decreases when increasing custody.
|
||||
// It returns the (potentially updated) custody group count and earliest available slot.
|
||||
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
|
||||
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateCustodyInfo")
|
||||
@@ -41,25 +42,39 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
|
||||
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"incomingSlot": earliestAvailableSlot,
|
||||
"incomingGroupCount": custodyGroupCount,
|
||||
"storedSlot": storedEarliestAvailableSlot,
|
||||
"storedGroupCount": storedGroupCount,
|
||||
"storedSlotBytesLen": len(storedEarliestAvailableSlotBytes),
|
||||
"storedGroupCountBytesLen": len(storedGroupCountBytes),
|
||||
}).Debug("UpdateCustodyInfo: comparing incoming vs stored values")
|
||||
|
||||
// Exit early if the new custody group count is lower than or equal to the stored one.
|
||||
if custodyGroupCount <= storedGroupCount {
|
||||
log.Debug("UpdateCustodyInfo: exiting early, custody group count not increasing")
|
||||
return nil
|
||||
}
|
||||
|
||||
storedGroupCount, storedEarliestAvailableSlot = custodyGroupCount, earliestAvailableSlot
|
||||
|
||||
// Store the earliest available slot.
|
||||
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
|
||||
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
|
||||
return errors.Wrap(err, "put earliest available slot")
|
||||
}
|
||||
|
||||
// Store the custody group count.
|
||||
bytes = bytesutil.Uint64ToBytesBigEndian(custodyGroupCount)
|
||||
// Update the custody group count.
|
||||
storedGroupCount = custodyGroupCount
|
||||
bytes := bytesutil.Uint64ToBytesBigEndian(custodyGroupCount)
|
||||
if err := bucket.Put(groupCountKey, bytes); err != nil {
|
||||
return errors.Wrap(err, "put custody group count")
|
||||
}
|
||||
|
||||
// Only update earliestAvailableSlot if the incoming value is higher.
|
||||
// This prevents losing availability for data we already have when switching modes
|
||||
// (e.g., from normal to semi-supernode or supernode).
|
||||
if earliestAvailableSlot > storedEarliestAvailableSlot {
|
||||
storedEarliestAvailableSlot = earliestAvailableSlot
|
||||
bytes = bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
|
||||
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
|
||||
return errors.Wrap(err, "put earliest available slot")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}); err != nil {
|
||||
return 0, 0, err
|
||||
|
||||
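The doc comment above spells out the intended rule: the custody group count only ever increases, and when it does, the earliest available slot becomes the maximum of the stored and incoming values so it never moves backwards. A minimal standalone sketch of that rule, assuming the prysm `primitives.Slot` type (an illustrative helper, not the actual `Store.UpdateCustodyInfo` method):

```go
// applyCustodyUpdate restates the update rule from the comment above as a pure function:
// the group count never decreases, and the earliest available slot never moves backwards.
func applyCustodyUpdate(storedSlot, incomingSlot primitives.Slot, storedCount, incomingCount uint64) (primitives.Slot, uint64) {
	if incomingCount <= storedCount {
		// Not increasing custody: keep everything as stored.
		return storedSlot, storedCount
	}
	if incomingSlot > storedSlot {
		// Only advance the slot; a lower incoming slot must not overwrite the stored one.
		storedSlot = incomingSlot
	}
	return storedSlot, incomingCount
}
```

The tests in the next hunks exercise exactly these branches, in particular the case where custody increases while the incoming slot is lower than the stored one.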
@@ -89,7 +89,7 @@ func TestUpdateCustodyInfo(t *testing.T) {
|
||||
require.Equal(t, groupCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("update with higher group count", func(t *testing.T) {
|
||||
t.Run("update with higher group count and higher slot", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot = primitives.Slot(100)
|
||||
initialCount = uint64(5)
|
||||
@@ -112,6 +112,150 @@ func TestUpdateCustodyInfo(t *testing.T) {
|
||||
require.Equal(t, groupCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("update with higher group count and lower slot should preserve higher slot", func(t *testing.T) {
|
||||
// This is the bug scenario: when switching from normal mode to semi-supernode,
|
||||
// the incoming slot might be lower than the stored slot, but we should preserve
|
||||
// the higher stored slot to avoid advertising that we can serve data we don't have.
|
||||
const (
|
||||
initialSlot = primitives.Slot(1835523) // Higher stored slot
|
||||
initialCount = uint64(10)
|
||||
earliestSlot = primitives.Slot(1835456) // Lower incoming slot (e.g., from head slot)
|
||||
groupCount = uint64(64) // Increasing custody (e.g., semi-supernode)
|
||||
)
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// When custody count increases but slot is lower, the higher slot should be preserved
|
||||
slot, count, err := db.UpdateCustodyInfo(ctx, earliestSlot, groupCount)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, initialSlot, slot, "earliestAvailableSlot should not decrease when custody group count increases")
|
||||
require.Equal(t, groupCount, count)
|
||||
|
||||
// Verify in the database
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, initialSlot, storedSlot, "stored slot should be the higher value")
|
||||
require.Equal(t, groupCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("pre-fulu scenario: checkpoint sync before fork, restart with semi-supernode", func(t *testing.T) {
|
||||
// This test covers the pre-Fulu bug scenario:
|
||||
// 1. Node starts with checkpoint sync BEFORE Fulu fork - uses EarliestSlot() (checkpoint block slot)
|
||||
// 2. Validators connect after Fulu activates - maintainCustodyInfo() updates to head slot (higher)
|
||||
// 3. Node restarts with --semi-supernode - updateCustodyInfoInDB uses EarliestSlot() again
|
||||
// The bug was that step 3 would overwrite the higher slot from step 2.
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig()
|
||||
cfg.FuluForkEpoch = 100
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
fuluForkSlot, err := slots.EpochStart(cfg.FuluForkEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Derive slot values relative to Fulu fork
|
||||
checkpointBlockSlot := fuluForkSlot - 10 // Checkpoint sync happened before Fulu
|
||||
headSlot := fuluForkSlot + 5 // Head slot after Fulu activates
|
||||
defaultCustody := cfg.CustodyRequirement // Default custody from config
|
||||
validatorCustody := cfg.CustodyRequirement + 6 // Custody after validators connect
|
||||
semiSupernodeCustody := cfg.NumberOfCustodyGroups // Semi-supernode custodies all groups
|
||||
|
||||
// Verify our test setup: checkpoint is pre-Fulu, head is post-Fulu
|
||||
require.Equal(t, true, checkpointBlockSlot < fuluForkSlot, "checkpoint must be before Fulu fork")
|
||||
require.Equal(t, true, headSlot >= fuluForkSlot, "head must be at or after Fulu fork")
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
// Step 1: Node starts with checkpoint sync (pre-Fulu)
|
||||
// updateCustodyInfoInDB sees saved.Slot() < fuluForkSlot, so uses EarliestSlot()
|
||||
slot, count, err := db.UpdateCustodyInfo(ctx, checkpointBlockSlot, defaultCustody)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, checkpointBlockSlot, slot)
|
||||
require.Equal(t, defaultCustody, count)
|
||||
|
||||
// Step 2: Validators connect after Fulu activates, maintainCustodyInfo() runs
|
||||
// Uses headSlot which is higher than checkpointBlockSlot
|
||||
slot, count, err = db.UpdateCustodyInfo(ctx, headSlot, validatorCustody)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, headSlot, slot, "should update to head slot")
|
||||
require.Equal(t, validatorCustody, count)
|
||||
|
||||
// Verify step 2 stored correctly
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, headSlot, storedSlot)
|
||||
require.Equal(t, validatorCustody, storedCount)
|
||||
|
||||
// Step 3: Restart with --semi-supernode
|
||||
// updateCustodyInfoInDB sees saved.Slot() < fuluForkSlot, so uses EarliestSlot() again
|
||||
slot, count, err = db.UpdateCustodyInfo(ctx, checkpointBlockSlot, semiSupernodeCustody)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, headSlot, slot, "earliestAvailableSlot should NOT decrease back to checkpoint slot")
|
||||
require.Equal(t, semiSupernodeCustody, count)
|
||||
|
||||
// Verify the database preserved the higher slot
|
||||
storedSlot, storedCount = getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, headSlot, storedSlot, "stored slot should remain at head slot, not checkpoint slot")
|
||||
require.Equal(t, semiSupernodeCustody, storedCount)
|
||||
})
|
||||
|
||||
t.Run("post-fulu scenario: finalized slot lower than stored head slot", func(t *testing.T) {
|
||||
// This test covers the post-Fulu bug scenario:
|
||||
// Post-fork, updateCustodyInfoInDB uses saved.Slot() (finalized slot) directly,
|
||||
// not EarliestSlot(). But the same bug can occur because:
|
||||
// - maintainCustodyInfo() stores headSlot (higher)
|
||||
// - Restart uses finalized slot (lower than head)
|
||||
// Our fix ensures earliestAvailableSlot never decreases.
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig()
|
||||
cfg.FuluForkEpoch = 100
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
fuluForkSlot, err := slots.EpochStart(cfg.FuluForkEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Derive slot values relative to Fulu fork - all slots are AFTER Fulu
|
||||
finalizedSlotAtStart := fuluForkSlot + 100 // Finalized slot at first start (post-Fulu)
|
||||
headSlot := fuluForkSlot + 200 // Head slot when validators connect
|
||||
finalizedSlotRestart := fuluForkSlot + 150 // Finalized slot at restart (< headSlot)
|
||||
defaultCustody := cfg.CustodyRequirement // Default custody from config
|
||||
validatorCustody := cfg.CustodyRequirement + 6 // Custody after validators connect
|
||||
semiSupernodeCustody := cfg.NumberOfCustodyGroups // Semi-supernode custodies all groups
|
||||
|
||||
// Verify our test setup: all slots are post-Fulu
|
||||
require.Equal(t, true, finalizedSlotAtStart >= fuluForkSlot, "finalized slot must be at or after Fulu fork")
|
||||
require.Equal(t, true, headSlot >= fuluForkSlot, "head slot must be at or after Fulu fork")
|
||||
require.Equal(t, true, finalizedSlotRestart >= fuluForkSlot, "restart finalized slot must be at or after Fulu fork")
|
||||
require.Equal(t, true, finalizedSlotRestart < headSlot, "restart finalized slot must be less than head slot")
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
// Step 1: Node starts post-Fulu
|
||||
// updateCustodyInfoInDB sees saved.Slot() >= fuluForkSlot, so uses saved.Slot() directly
|
||||
slot, count, err := db.UpdateCustodyInfo(ctx, finalizedSlotAtStart, defaultCustody)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, finalizedSlotAtStart, slot)
|
||||
require.Equal(t, defaultCustody, count)
|
||||
|
||||
// Step 2: Validators connect, maintainCustodyInfo() uses head slot
|
||||
slot, count, err = db.UpdateCustodyInfo(ctx, headSlot, validatorCustody)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, headSlot, slot)
|
||||
require.Equal(t, validatorCustody, count)
|
||||
|
||||
// Step 3: Restart with --semi-supernode
|
||||
// updateCustodyInfoInDB uses finalized slot which is lower than stored head slot
|
||||
slot, count, err = db.UpdateCustodyInfo(ctx, finalizedSlotRestart, semiSupernodeCustody)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, headSlot, slot, "earliestAvailableSlot should NOT decrease to finalized slot")
|
||||
require.Equal(t, semiSupernodeCustody, count)
|
||||
|
||||
// Verify database preserved the higher slot
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, headSlot, storedSlot)
|
||||
require.Equal(t, semiSupernodeCustody, storedCount)
|
||||
})
|
||||
|
||||
t.Run("update with lower group count should not update", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot = primitives.Slot(200)
|
||||
|
||||
@@ -642,8 +642,12 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
|
||||
if !ok || node == nil {
|
||||
return [32]byte{}, ErrNilNode
|
||||
}
|
||||
if slots.ToEpoch(node.slot) >= epoch && node.parent != nil {
|
||||
node = node.parent
|
||||
if slots.ToEpoch(node.slot) >= epoch {
|
||||
if node.parent != nil {
|
||||
node = node.parent
|
||||
} else {
|
||||
return f.store.finalizedDependentRoot, nil
|
||||
}
|
||||
}
|
||||
return node.root, nil
|
||||
}
|
||||
|
||||
@@ -212,6 +212,9 @@ func (s *Store) prune(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Save the new finalized dependent root because it will be pruned
|
||||
s.finalizedDependentRoot = finalizedNode.parent.root
|
||||
|
||||
// Prune nodeByRoot starting from root
|
||||
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
|
||||
return err
|
||||
|
||||
@@ -465,6 +465,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
f := setup(1, 1)
|
||||
|
||||
// Insert a block in slot 32
|
||||
state, blk, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, f.InsertNode(ctx, state, blk))
|
||||
@@ -475,6 +476,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, dependent, [32]byte{})
|
||||
|
||||
// Insert a block in slot 33
|
||||
state, blk1, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'b'}, blk.Root(), params.BeaconConfig().ZeroHash, 1, 1)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, f.InsertNode(ctx, state, blk1))
|
||||
@@ -488,7 +490,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, dependent, [32]byte{})
|
||||
|
||||
// Insert a block for the next epoch (missed slot 0)
|
||||
// Insert a block for the next epoch (missed slot 0), slot 65
|
||||
|
||||
state, blk2, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'c'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
|
||||
require.NoError(t, err)
|
||||
@@ -509,6 +511,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, dependent, blk1.Root())
|
||||
|
||||
// Insert a block at slot 66
|
||||
state, blk3, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+2, [32]byte{'d'}, blk2.Root(), params.BeaconConfig().ZeroHash, 1, 1)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, f.InsertNode(ctx, state, blk3))
|
||||
@@ -533,8 +536,11 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
dependent, err = f.DependentRoot(1)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, [32]byte{}, dependent)
|
||||
dependent, err = f.DependentRoot(2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, blk1.Root(), dependent)
|
||||
|
||||
// Insert a block for next epoch (slot 0 present)
|
||||
// Insert a block for the next epoch, slot 96 (descends from finalized at slot 33)
|
||||
state, blk4, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch, [32]byte{'e'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, f.InsertNode(ctx, state, blk4))
|
||||
@@ -551,6 +557,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, dependent, blk1.Root())
|
||||
|
||||
// Insert a block at slot 97
|
||||
state, blk5, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'f'}, blk4.Root(), params.BeaconConfig().ZeroHash, 1, 1)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, f.InsertNode(ctx, state, blk5))
|
||||
@@ -600,12 +607,16 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, target, blk1.Root())
|
||||
|
||||
// Prune finalization
|
||||
// Prune finalization, finalize the block at slot 96
|
||||
s.finalizedCheckpoint.Root = blk4.Root()
|
||||
require.NoError(t, s.prune(ctx))
|
||||
target, err = f.TargetRootForEpoch(blk4.Root(), 3)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, blk4.Root(), target)
|
||||
// Dependent root for the finalized block should be the root of the pruned block at slot 33
|
||||
dependent, err = f.DependentRootForEpoch(blk4.Root(), 3)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, blk1.Root(), dependent)
|
||||
}
|
||||
|
||||
func TestStore_DependentRootForEpoch(t *testing.T) {
|
||||
|
||||
@@ -31,6 +31,7 @@ type Store struct {
|
||||
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
|
||||
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
|
||||
previousProposerBoostScore uint64 // previous proposer boosted root score.
|
||||
finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
|
||||
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
|
||||
treeRootNode *Node // the root node of the store tree.
|
||||
headNode *Node // last head Node
|
||||
|
||||
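Read together, the three hunks above add a `finalizedDependentRoot` field to the store, populate it during pruning, and fall back to it in `DependentRootForEpoch` when the parent that would normally supply the dependent root has already been pruned at finalization. A compact restatement of the lookup rule (an illustrative standalone function under those assumptions, not the actual `ForkChoice` method):

```go
// dependentRootFor sketches the rule from the diff above: when the node's epoch is at or
// past the requested epoch we want its parent's root, and if that parent was pruned at
// finalization we fall back to the dependent root saved by prune().
func dependentRootFor(node *Node, epoch primitives.Epoch, finalizedDependentRoot [32]byte) [32]byte {
	if slots.ToEpoch(node.slot) >= epoch {
		if node.parent == nil {
			return finalizedDependentRoot
		}
		node = node.parent
	}
	return node.root
}
```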
95 beacon-chain/graffiti/graffiti-proposal-brief.md (new file)
@@ -0,0 +1,95 @@
|
||||
# Graffiti Version Info Implementation
|
||||
|
||||
## Summary
|
||||
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.
|
||||
|
||||
More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md
|
||||
|
||||
## Implementation
|
||||
|
||||
### Core Component: GraffitiInfo Struct
|
||||
Thread-safe struct holding version information:
|
||||
```go
|
||||
const clCode = "PR"
|
||||
|
||||
type GraffitiInfo struct {
|
||||
mu sync.RWMutex
|
||||
userGraffiti string // From --graffiti flag (set once at startup)
|
||||
clCommit string // From version.GetCommitPrefix() helper function
|
||||
elCode string // From engine_getClientVersionV1
|
||||
elCommit string // From engine_getClientVersionV1
|
||||
}
|
||||
```
|
||||
|
||||
### Flow
|
||||
1. **Startup**: Parse flags, create GraffitiInfo with user graffiti and CL info.
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server.
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields.
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
|
||||
|
||||
### Flexible Graffiti Format
|
||||
Packs as much client info as space allows (after user graffiti):
|
||||
|
||||
| Available Space | Format | Example |
|
||||
|----------------|--------|---------|
|
||||
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
|
||||
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
|
||||
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
|
||||
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
|
||||
| <2 bytes | user only | `full 32 byte user graffiti here` |
|
||||
|
||||
```go
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
	g.mu.RLock()
	defer g.mu.RUnlock()

	available := 32 - len(g.userGraffiti)

	// Commit fields hold 4-char hex prefixes; the short form keeps only the first 2 chars.
	elCommit4, clCommit4 := g.elCommit, g.clCommit
	elCommit2 := elCommit4[:min(2, len(elCommit4))]
	clCommit2 := clCommit4[:min(2, len(clCommit4))]
	if g.elCode == "" {
		// No EL version info yet, so drop the EL commit rather than emit a dangling prefix.
		elCommit2, elCommit4 = "", ""
	}

	var s string
	switch {
	case available >= 12:
		s = g.elCode + elCommit4 + clCode + clCommit4 + g.userGraffiti
	case available >= 8:
		s = g.elCode + elCommit2 + clCode + clCommit2 + g.userGraffiti
	case available >= 4:
		s = g.elCode + clCode + g.userGraffiti
	case available >= 2:
		s = g.elCode + g.userGraffiti
	default:
		s = g.userGraffiti
	}

	var graffiti [32]byte
	copy(graffiti[:], s)
	return graffiti
}
```
|
||||
|
||||
### Update Logic
|
||||
Single testable function in execution service:
|
||||
```go
func (s *Service) updateGraffitiInfo(ctx context.Context) {
	versions, err := s.GetClientVersion(ctx)
	if err != nil {
		return // Keep the last good value rather than clearing EL info on a transient error.
	}
	// Only a single EL is expected behind the engine API; ignore ambiguous responses.
	if len(versions) == 1 {
		s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
	}
}
```
|
||||
|
||||
Goroutine calls this on `slot % 8 == 4` timing (4 times per epoch, avoids slot boundaries).
|
||||
|
||||
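A minimal sketch of the polling loop described above; the slot ticker wiring and the function name are assumptions for illustration, not the actual execution service code:

```go
// pollClientVersion refreshes EL version info on slot % 8 == 4, i.e. four times per
// 32-slot epoch and away from slot boundaries, as described above.
func (s *Service) pollClientVersion(ctx context.Context, slotTicker <-chan primitives.Slot) {
	for {
		select {
		case <-ctx.Done():
			return
		case slot := <-slotTicker:
			if slot%8 == 4 {
				s.updateGraffitiInfo(ctx)
			}
		}
	}
}
```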
### File Changes Required
|
||||
|
||||
**New:**
|
||||
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
|
||||
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
|
||||
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts the first 4 hex chars from the git commit injected via Bazel ldflags at build time (a sketch follows after the file list)
|
||||
|
||||
**Modified:**
|
||||
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
|
||||
- `beacon-chain/execution/engine_client.go` - Add GetClientVersion() method that does engine call
|
||||
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
|
||||
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
|
||||
|
||||
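For reference, a minimal sketch of the proposed `GetCommitPrefix()` helper from the file list above; the `gitCommit` variable name and the exact ldflags path are assumptions, since the brief only states that the commit is injected at build time via Bazel ldflags:

```go
// gitCommit is assumed to be set at build time, e.g. via
// -ldflags "-X github.com/OffchainLabs/prysm/v7/runtime/version.gitCommit=<full sha>".
var gitCommit string

// GetCommitPrefix returns the first 4 characters of the build commit, or the whole
// value if it is shorter (e.g. when built without ldflags).
func GetCommitPrefix() string {
	if len(gitCommit) < 4 {
		return gitCommit
	}
	return gitCommit[:4]
}
```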
### Testing Strategy
|
||||
- Unit test GraffitiInfo methods (priority logic, thread safety)
|
||||
- Unit test updateGraffitiInfo() with mocked engine client
|
||||
@@ -711,6 +711,7 @@ func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Reques
|
||||
versionHeader := r.Header.Get(api.VersionHeader)
|
||||
if versionHeader == "" {
|
||||
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
v, err := version.FromString(versionHeader)
|
||||
if err != nil {
|
||||
|
||||
@@ -2112,6 +2112,33 @@ func TestSubmitAttesterSlashingsV2(t *testing.T) {
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.StringContains(t, "Invalid attester slashing", e.Message)
|
||||
})
|
||||
|
||||
t.Run("missing-version-header", func(t *testing.T) {
|
||||
bs, err := util.NewBeaconStateElectra()
|
||||
require.NoError(t, err)
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err = body.WriteString(invalidAttesterSlashing)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
// Intentionally do not set api.VersionHeader to verify missing header handling.
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashingsV2(writer, request)
|
||||
require.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.StringContains(t, api.VersionHeader+" header is required", e.Message)
|
||||
})
|
||||
}
|
||||
|
||||
func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {
|
||||
|
||||
@@ -654,6 +654,10 @@ func (m *futureSyncMockFetcher) StateBySlot(context.Context, primitives.Slot) (s
|
||||
return m.BeaconState, nil
|
||||
}
|
||||
|
||||
func (m *futureSyncMockFetcher) StateByEpoch(context.Context, primitives.Epoch) (state.BeaconState, error) {
|
||||
return m.BeaconState, nil
|
||||
}
|
||||
|
||||
func TestGetSyncCommittees_Future(t *testing.T) {
|
||||
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().SyncCommitteeSize)
|
||||
syncCommittee := make([][]byte, params.BeaconConfig().SyncCommitteeSize)
|
||||
|
||||
@@ -116,6 +116,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
|
||||
for _, update := range updates {
|
||||
if ctx.Err() != nil {
|
||||
httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
updateSlot := update.AttestedHeader().Beacon().Slot
|
||||
@@ -131,12 +132,15 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
|
||||
chunkLength = ssz.MarshalUint64(chunkLength, uint64(len(updateSSZ)+4))
|
||||
if _, err := w.Write(chunkLength); err != nil {
|
||||
httputil.HandleError(w, "Could not write chunk length: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if _, err := w.Write(updateEntry.ForkDigest[:]); err != nil {
|
||||
httputil.HandleError(w, "Could not write fork digest: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if _, err := w.Write(updateSSZ); err != nil {
|
||||
httputil.HandleError(w, "Could not write update SSZ: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
}
|
||||
} else {
|
||||
@@ -145,6 +149,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
|
||||
for _, update := range updates {
|
||||
if ctx.Err() != nil {
|
||||
httputil.HandleError(w, "Context error: "+ctx.Err().Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
updateJson, err := structs.LightClientUpdateFromConsensus(update)
|
||||
|
||||
@@ -132,6 +132,7 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
|
||||
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if s.SyncChecker.Synced() && !optimistic {
|
||||
return
|
||||
|
||||
@@ -228,7 +228,7 @@ func (s *Server) attRewardsState(w http.ResponseWriter, r *http.Request) (state.
|
||||
}
|
||||
st, err := s.Stater.StateBySlot(r.Context(), nextEpochEnd)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get state for epoch's starting slot: "+err.Error(), http.StatusInternalServerError)
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return nil, false
|
||||
}
|
||||
return st, true
|
||||
|
||||
@@ -19,7 +19,6 @@ go_library(
|
||||
"//beacon-chain/cache:go_default_library",
|
||||
"//beacon-chain/core/feed/operation:go_default_library",
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
"//beacon-chain/core/transition:go_default_library",
|
||||
"//beacon-chain/db:go_default_library",
|
||||
"//beacon-chain/operations/attestations:go_default_library",
|
||||
"//beacon-chain/operations/synccommittee:go_default_library",
|
||||
@@ -78,6 +77,7 @@ go_test(
|
||||
"//beacon-chain/rpc/core:go_default_library",
|
||||
"//beacon-chain/rpc/eth/rewards/testing:go_default_library",
|
||||
"//beacon-chain/rpc/eth/shared/testing:go_default_library",
|
||||
"//beacon-chain/rpc/lookup:go_default_library",
|
||||
"//beacon-chain/rpc/testutil:go_default_library",
|
||||
"//beacon-chain/state:go_default_library",
|
||||
"//beacon-chain/state/stategen:go_default_library",
|
||||
|
||||
@@ -19,7 +19,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
|
||||
rpchelpers "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/shared"
|
||||
@@ -898,20 +897,15 @@ func (s *Server) GetAttesterDuties(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
var startSlot primitives.Slot
|
||||
// For next epoch requests, we use the current epoch's state since committee
|
||||
// assignments for next epoch can be computed from current epoch's state.
|
||||
epochForState := requestedEpoch
|
||||
if requestedEpoch == nextEpoch {
|
||||
startSlot, err = slots.EpochStart(currentEpoch)
|
||||
} else {
|
||||
startSlot, err = slots.EpochStart(requestedEpoch)
|
||||
epochForState = currentEpoch
|
||||
}
|
||||
st, err := s.Stater.StateByEpoch(ctx, epochForState)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not get start slot from epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
st, err := s.Stater.StateBySlot(ctx, startSlot)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get state: "+err.Error(), http.StatusInternalServerError)
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -1020,39 +1014,11 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
|
||||
nextEpochLookahead = true
|
||||
}
|
||||
|
||||
epochStartSlot, err := slots.EpochStart(requestedEpoch)
|
||||
st, err := s.Stater.StateByEpoch(ctx, requestedEpoch)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not get start slot of epoch %d: %v", requestedEpoch, err), http.StatusInternalServerError)
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return
|
||||
}
|
||||
var st state.BeaconState
|
||||
// if the requested epoch is new, use the head state and the next slot cache
|
||||
if requestedEpoch < currentEpoch {
|
||||
st, err = s.Stater.StateBySlot(ctx, epochStartSlot)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not get state for slot %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
} else {
|
||||
st, err = s.HeadFetcher.HeadState(ctx)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v ", err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
// Notice that even for Fulu requests for the next epoch, we are only advancing the state to the start of the current epoch.
|
||||
if st.Slot() < epochStartSlot {
|
||||
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not get head root: %v ", err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, epochStartSlot)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Could not process slots up to %d: %v ", epochStartSlot, err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var assignments map[primitives.ValidatorIndex][]primitives.Slot
|
||||
if nextEpochLookahead {
|
||||
@@ -1103,7 +1069,8 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
|
||||
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if !sortProposerDuties(w, duties) {
|
||||
if err = sortProposerDuties(duties); err != nil {
|
||||
httputil.HandleError(w, "Could not sort proposer duties: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -1174,14 +1141,10 @@ func (s *Server) GetSyncCommitteeDuties(w http.ResponseWriter, r *http.Request)
|
||||
}
|
||||
|
||||
startingEpoch := min(requestedEpoch, currentEpoch)
|
||||
slot, err := slots.EpochStart(startingEpoch)
|
||||
|
||||
st, err := s.Stater.StateByEpoch(ctx, startingEpoch)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get sync committee slot: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
st, err := s.Stater.State(ctx, []byte(strconv.FormatUint(uint64(slot), 10)))
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get sync committee state: "+err.Error(), http.StatusInternalServerError)
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -1327,7 +1290,7 @@ func (s *Server) GetLiveness(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
st, err = s.Stater.StateBySlot(ctx, epochEnd)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get slot for requested epoch: "+err.Error(), http.StatusInternalServerError)
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return
|
||||
}
|
||||
participation, err = st.CurrentEpochParticipation()
|
||||
@@ -1447,22 +1410,20 @@ func syncCommitteeDutiesAndVals(
|
||||
return duties, vals, nil
|
||||
}
|
||||
|
||||
func sortProposerDuties(w http.ResponseWriter, duties []*structs.ProposerDuty) bool {
|
||||
ok := true
|
||||
func sortProposerDuties(duties []*structs.ProposerDuty) error {
|
||||
var err error
|
||||
sort.Slice(duties, func(i, j int) bool {
|
||||
si, err := strconv.ParseUint(duties[i].Slot, 10, 64)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
|
||||
ok = false
|
||||
si, parseErr := strconv.ParseUint(duties[i].Slot, 10, 64)
|
||||
if parseErr != nil {
|
||||
err = errors.Wrap(parseErr, "could not parse slot")
|
||||
return false
|
||||
}
|
||||
sj, err := strconv.ParseUint(duties[j].Slot, 10, 64)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not parse slot: "+err.Error(), http.StatusInternalServerError)
|
||||
ok = false
|
||||
sj, parseErr := strconv.ParseUint(duties[j].Slot, 10, 64)
|
||||
if parseErr != nil {
|
||||
err = errors.Wrap(parseErr, "could not parse slot")
|
||||
return false
|
||||
}
|
||||
return si < sj
|
||||
})
|
||||
return ok
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -25,6 +25,7 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/synccommittee"
|
||||
p2pmock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/lookup"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
|
||||
@@ -2006,6 +2007,7 @@ func TestGetAttesterDuties(t *testing.T) {
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
BeaconDB: db,
|
||||
}
|
||||
|
||||
@@ -2184,6 +2186,7 @@ func TestGetAttesterDuties(t *testing.T) {
|
||||
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{0: bs}},
|
||||
TimeFetcher: chain,
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
BeaconDB: db,
|
||||
}
|
||||
@@ -2224,6 +2227,62 @@ func TestGetAttesterDuties(t *testing.T) {
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusServiceUnavailable, e.Code)
|
||||
})
|
||||
t.Run("state not found returns 404", func(t *testing.T) {
|
||||
chainSlot := primitives.Slot(0)
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
|
||||
}
|
||||
stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{CustomError: &stateNotFoundErr},
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err = body.WriteString("[\"0\"]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
|
||||
request.SetPathValue("epoch", "0")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttesterDuties(writer, request)
|
||||
assert.Equal(t, http.StatusNotFound, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusNotFound, e.Code)
|
||||
assert.StringContains(t, "State not found", e.Message)
|
||||
})
|
||||
t.Run("state fetch error returns 500", func(t *testing.T) {
|
||||
chainSlot := primitives.Slot(0)
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
|
||||
}
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{CustomError: errors.New("internal error")},
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err = body.WriteString("[\"0\"]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/attester/{epoch}", &body)
|
||||
request.SetPathValue("epoch", "0")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttesterDuties(writer, request)
|
||||
assert.Equal(t, http.StatusInternalServerError, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusInternalServerError, e.Code)
|
||||
})
|
||||
}
|
||||
|
||||
func TestGetProposerDuties(t *testing.T) {
|
||||
@@ -2427,6 +2486,60 @@ func TestGetProposerDuties(t *testing.T) {
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusServiceUnavailable, e.Code)
|
||||
})
|
||||
t.Run("state not found returns 404", func(t *testing.T) {
|
||||
bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err)
|
||||
chainSlot := primitives.Slot(0)
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
|
||||
}
|
||||
stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{CustomError: &stateNotFoundErr},
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
}
|
||||
|
||||
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
|
||||
request.SetPathValue("epoch", "0")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetProposerDuties(writer, request)
|
||||
assert.Equal(t, http.StatusNotFound, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusNotFound, e.Code)
|
||||
assert.StringContains(t, "State not found", e.Message)
|
||||
})
|
||||
t.Run("state fetch error returns 500", func(t *testing.T) {
|
||||
bs, err := transition.GenesisBeaconState(t.Context(), deposits, 0, eth1Data)
|
||||
require.NoError(t, err)
|
||||
chainSlot := primitives.Slot(0)
|
||||
chain := &mockChain.ChainService{
|
||||
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
|
||||
}
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{CustomError: errors.New("internal error")},
|
||||
TimeFetcher: chain,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chain,
|
||||
HeadFetcher: chain,
|
||||
}
|
||||
|
||||
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
|
||||
request.SetPathValue("epoch", "0")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetProposerDuties(writer, request)
|
||||
assert.Equal(t, http.StatusInternalServerError, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusInternalServerError, e.Code)
|
||||
})
|
||||
}
|
||||
|
||||
func TestGetSyncCommitteeDuties(t *testing.T) {
|
||||
@@ -2457,7 +2570,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
|
||||
}
|
||||
require.NoError(t, st.SetNextSyncCommittee(nextCommittee))
|
||||
|
||||
mockChainService := &mockChain.ChainService{Genesis: genesisTime}
|
||||
mockChainService := &mockChain.ChainService{Genesis: genesisTime, State: st}
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{BeaconState: st},
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
@@ -2648,7 +2761,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
|
||||
return newSyncPeriodSt
|
||||
}
|
||||
}
|
||||
mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot}
|
||||
mockChainService := &mockChain.ChainService{Genesis: genesisTime, Slot: &newSyncPeriodStartSlot, State: newSyncPeriodSt}
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{BeaconState: stateFetchFn(newSyncPeriodStartSlot)},
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
@@ -2729,8 +2842,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
|
||||
slot, err := slots.EpochStart(1)
|
||||
require.NoError(t, err)
|
||||
|
||||
st2, err := util.NewBeaconStateBellatrix()
|
||||
require.NoError(t, err)
|
||||
st2 := st.Copy()
|
||||
require.NoError(t, st2.SetSlot(slot))
|
||||
|
||||
mockChainService := &mockChain.ChainService{
|
||||
@@ -2744,7 +2856,7 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
|
||||
State: st2,
|
||||
}
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{BeaconState: st},
|
||||
Stater: &testutil.MockStater{BeaconState: st2},
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
TimeFetcher: mockChainService,
|
||||
HeadFetcher: mockChainService,
|
||||
@@ -2789,6 +2901,62 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusServiceUnavailable, e.Code)
|
||||
})
|
||||
t.Run("state not found returns 404", func(t *testing.T) {
|
||||
slot := 2 * params.BeaconConfig().SlotsPerEpoch
|
||||
chainService := &mockChain.ChainService{
|
||||
Slot: &slot,
|
||||
}
|
||||
stateNotFoundErr := lookup.NewStateNotFoundError(8192, []byte("test"))
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{CustomError: &stateNotFoundErr},
|
||||
TimeFetcher: chainService,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chainService,
|
||||
HeadFetcher: chainService,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString("[\"1\"]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
|
||||
request.SetPathValue("epoch", "1")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetSyncCommitteeDuties(writer, request)
|
||||
assert.Equal(t, http.StatusNotFound, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusNotFound, e.Code)
|
||||
assert.StringContains(t, "State not found", e.Message)
|
||||
})
|
||||
t.Run("state fetch error returns 500", func(t *testing.T) {
|
||||
slot := 2 * params.BeaconConfig().SlotsPerEpoch
|
||||
chainService := &mockChain.ChainService{
|
||||
Slot: &slot,
|
||||
}
|
||||
s := &Server{
|
||||
Stater: &testutil.MockStater{CustomError: errors.New("internal error")},
|
||||
TimeFetcher: chainService,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
OptimisticModeFetcher: chainService,
|
||||
HeadFetcher: chainService,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString("[\"1\"]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/sync/{epoch}", &body)
|
||||
request.SetPathValue("epoch", "1")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetSyncCommitteeDuties(writer, request)
|
||||
assert.Equal(t, http.StatusInternalServerError, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusInternalServerError, e.Code)
|
||||
})
|
||||
}
|
||||
|
||||
func TestPrepareBeaconProposer(t *testing.T) {
|
||||
|
||||
@@ -11,6 +11,7 @@ go_library(
|
||||
deps = [
|
||||
"//beacon-chain/blockchain:go_default_library",
|
||||
"//beacon-chain/core/peerdas:go_default_library",
|
||||
"//beacon-chain/core/transition:go_default_library",
|
||||
"//beacon-chain/db:go_default_library",
|
||||
"//beacon-chain/db/filesystem:go_default_library",
|
||||
"//beacon-chain/rpc/core:go_default_library",
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
"strings"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
|
||||
@@ -82,8 +83,8 @@ type StateRootNotFoundError struct {
|
||||
}
|
||||
|
||||
// NewStateRootNotFoundError creates a new error instance.
|
||||
func NewStateRootNotFoundError(stateRootsSize int) StateNotFoundError {
|
||||
return StateNotFoundError{
|
||||
func NewStateRootNotFoundError(stateRootsSize int) StateRootNotFoundError {
|
||||
return StateRootNotFoundError{
|
||||
message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
|
||||
}
|
||||
}
|
||||
@@ -98,6 +99,7 @@ type Stater interface {
|
||||
State(ctx context.Context, id []byte) (state.BeaconState, error)
|
||||
StateRoot(ctx context.Context, id []byte) ([]byte, error)
|
||||
StateBySlot(ctx context.Context, slot primitives.Slot) (state.BeaconState, error)
|
||||
StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error)
|
||||
}
|
||||
|
||||
// BeaconDbStater is an implementation of Stater. It retrieves states from the beacon chain database.
|
||||
@@ -267,6 +269,46 @@ func (p *BeaconDbStater) StateBySlot(ctx context.Context, target primitives.Slot
|
||||
return st, nil
|
||||
}
|
||||
|
||||
// StateByEpoch returns the state for the start of the requested epoch.
|
||||
// For current or next epoch, it uses the head state and next slot cache for efficiency.
|
||||
// For past epochs, it replays blocks from the most recent canonical state.
|
||||
func (p *BeaconDbStater) StateByEpoch(ctx context.Context, epoch primitives.Epoch) (state.BeaconState, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "statefetcher.StateByEpoch")
|
||||
defer span.End()
|
||||
|
||||
targetSlot, err := slots.EpochStart(epoch)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get epoch start slot")
|
||||
}
|
||||
|
||||
currentSlot := p.GenesisTimeFetcher.CurrentSlot()
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
|
||||
// For past epochs, use the replay mechanism
|
||||
if epoch < currentEpoch {
|
||||
return p.StateBySlot(ctx, targetSlot)
|
||||
}
|
||||
|
||||
// For current or next epoch, use head state + next slot cache (much faster)
|
||||
headState, err := p.ChainInfoFetcher.HeadState(ctx)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get head state")
|
||||
}
|
||||
|
||||
// If head state is already at or past the target slot, return it
|
||||
if headState.Slot() >= targetSlot {
|
||||
return headState, nil
|
||||
}
|
||||
|
||||
// Process slots using the next slot cache
|
||||
headRoot := p.ChainInfoFetcher.CachedHeadRoot()
|
||||
st, err := transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot[:], targetSlot)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not process slots up to %d", targetSlot)
|
||||
}
|
||||
return st, nil
|
||||
}
|
||||
|
||||
func (p *BeaconDbStater) headStateRoot(ctx context.Context) ([]byte, error) {
|
||||
b, err := p.ChainInfoFetcher.HeadBlock(ctx)
|
||||
if err != nil {
|
||||
|
||||
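The StateByEpoch hunk above chooses between two paths depending on how the requested epoch relates to the current one: past epochs replay blocks up to the epoch-start slot, while the current or next epoch reuses the head state and the next-slot cache. A minimal, self-contained sketch of that decision (the names and the slots-per-epoch constant here are illustrative, not the diff's real types):

package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value, assumed here for illustration

// stateSource mirrors the branch structure in the StateByEpoch diff above.
func stateSource(requestedEpoch, currentEpoch, headSlot uint64) string {
    targetSlot := requestedEpoch * slotsPerEpoch // equivalent of slots.EpochStart
    switch {
    case requestedEpoch < currentEpoch:
        return "replay blocks up to the epoch-start slot (StateBySlot)"
    case headSlot >= targetSlot:
        return "head state is already at or past the epoch start; return it as-is"
    default:
        return "advance the head state with the next-slot cache"
    }
}

func main() {
    // Head at slot 40 (epoch 1), requesting epochs 0, 1 and 2.
    fmt.Println(stateSource(0, 1, 40)) // replay
    fmt.Println(stateSource(1, 1, 40)) // head state as-is (40 >= 32)
    fmt.Println(stateSource(2, 1, 40)) // next-slot cache (target slot 64)
}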
@@ -444,3 +444,111 @@ func TestStateBySlot_AfterHeadSlot(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, primitives.Slot(101), st.Slot())
|
||||
}
|
||||
|
||||
func TestStateByEpoch(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
|
||||
|
||||
t.Run("current epoch uses head state", func(t *testing.T) {
|
||||
// Head is at slot 5 (epoch 0), requesting epoch 0
|
||||
headSlot := primitives.Slot(5)
|
||||
headSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: headSlot})
|
||||
require.NoError(t, err)
|
||||
|
||||
currentSlot := headSlot
|
||||
mock := &chainMock.ChainService{State: headSt, Slot: ¤tSlot}
|
||||
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
|
||||
|
||||
st, err := p.StateByEpoch(ctx, 0)
|
||||
require.NoError(t, err)
|
||||
// Should return head state since it's already past epoch start
|
||||
assert.Equal(t, headSlot, st.Slot())
|
||||
})
|
||||
|
||||
t.Run("current epoch processes slots to epoch start", func(t *testing.T) {
|
||||
// Head is at slot 5 (epoch 0), requesting epoch 1
|
||||
// Current slot is 32 (epoch 1), so epoch 1 is current epoch
|
||||
headSlot := primitives.Slot(5)
|
||||
headSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: headSlot})
|
||||
require.NoError(t, err)
|
||||
|
||||
currentSlot := slotsPerEpoch // slot 32, epoch 1
|
||||
mock := &chainMock.ChainService{State: headSt, Slot: ¤tSlot}
|
||||
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
|
||||
|
||||
// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
|
||||
// In real usage, the transition package handles this properly
|
||||
_, err = p.StateByEpoch(ctx, 1)
|
||||
// The error is expected since we don't have a fully initialized beacon state
|
||||
// that can process slots (missing committees, etc.)
|
||||
assert.NotNil(t, err)
|
||||
})
|
||||
|
||||
t.Run("past epoch uses replay", func(t *testing.T) {
|
||||
// Head is at epoch 2, requesting epoch 0 (past)
|
||||
headSlot := slotsPerEpoch * 2 // slot 64, epoch 2
|
||||
headSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: headSlot})
|
||||
require.NoError(t, err)
|
||||
|
||||
pastEpochSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: 0})
|
||||
require.NoError(t, err)
|
||||
|
||||
currentSlot := headSlot
|
||||
mock := &chainMock.ChainService{State: headSt, Slot: ¤tSlot}
|
||||
mockReplayer := mockstategen.NewReplayerBuilder()
|
||||
mockReplayer.SetMockStateForSlot(pastEpochSt, 0)
|
||||
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock, ReplayerBuilder: mockReplayer}
|
||||
|
||||
st, err := p.StateByEpoch(ctx, 0)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, primitives.Slot(0), st.Slot())
|
||||
})
|
||||
|
||||
t.Run("next epoch uses head state path", func(t *testing.T) {
|
||||
// Head is at slot 30 (epoch 0), requesting epoch 1 (next)
|
||||
// Current slot is 30 (epoch 0), so epoch 1 is next epoch
|
||||
headSlot := primitives.Slot(30)
|
||||
headSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: headSlot})
|
||||
require.NoError(t, err)
|
||||
|
||||
currentSlot := headSlot
|
||||
mock := &chainMock.ChainService{State: headSt, Slot: ¤tSlot}
|
||||
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
|
||||
|
||||
// Note: This will fail since ProcessSlotsUsingNextSlotCache requires proper setup
|
||||
_, err = p.StateByEpoch(ctx, 1)
|
||||
// The error is expected since we don't have a fully initialized beacon state
|
||||
assert.NotNil(t, err)
|
||||
})
|
||||
|
||||
t.Run("head state already at target slot returns immediately", func(t *testing.T) {
|
||||
// Head is at slot 32 (epoch 1 start), requesting epoch 1
|
||||
headSlot := slotsPerEpoch // slot 32
|
||||
headSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: headSlot})
|
||||
require.NoError(t, err)
|
||||
|
||||
currentSlot := headSlot
|
||||
mock := &chainMock.ChainService{State: headSt, Slot: ¤tSlot}
|
||||
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
|
||||
|
||||
st, err := p.StateByEpoch(ctx, 1)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, headSlot, st.Slot())
|
||||
})
|
||||
|
||||
t.Run("head state past target slot returns head state", func(t *testing.T) {
|
||||
// Head is at slot 40, requesting epoch 1 (starts at slot 32)
|
||||
headSlot := primitives.Slot(40)
|
||||
headSt, err := statenative.InitializeFromProtoPhase0(ðpb.BeaconState{Slot: headSlot})
|
||||
require.NoError(t, err)
|
||||
|
||||
currentSlot := headSlot
|
||||
mock := &chainMock.ChainService{State: headSt, Slot: ¤tSlot}
|
||||
p := BeaconDbStater{ChainInfoFetcher: mock, GenesisTimeFetcher: mock}
|
||||
|
||||
st, err := p.StateByEpoch(ctx, 1)
|
||||
require.NoError(t, err)
|
||||
// Returns head state since it's already >= epoch start
|
||||
assert.Equal(t, headSlot, st.Slot())
|
||||
})
|
||||
}
|
||||
|
||||
@@ -26,5 +26,6 @@ go_library(
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
"//time/slots:go_default_library",
|
||||
],
|
||||
)
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v7/time/slots"
|
||||
)
|
||||
|
||||
// MockStater is a fake implementation of lookup.Stater.
|
||||
@@ -14,6 +15,7 @@ type MockStater struct {
|
||||
StateProviderFunc func(ctx context.Context, stateId []byte) (state.BeaconState, error)
|
||||
BeaconStateRoot []byte
|
||||
StatesBySlot map[primitives.Slot]state.BeaconState
|
||||
StatesByEpoch map[primitives.Epoch]state.BeaconState
|
||||
StatesByRoot map[[32]byte]state.BeaconState
|
||||
CustomError error
|
||||
}
|
||||
@@ -43,3 +45,22 @@ func (m *MockStater) StateRoot(context.Context, []byte) ([]byte, error) {
|
||||
func (m *MockStater) StateBySlot(_ context.Context, s primitives.Slot) (state.BeaconState, error) {
|
||||
return m.StatesBySlot[s], nil
|
||||
}
|
||||
|
||||
// StateByEpoch --
|
||||
func (m *MockStater) StateByEpoch(_ context.Context, e primitives.Epoch) (state.BeaconState, error) {
|
||||
if m.CustomError != nil {
|
||||
return nil, m.CustomError
|
||||
}
|
||||
if m.StatesByEpoch != nil {
|
||||
return m.StatesByEpoch[e], nil
|
||||
}
|
||||
// Fall back to StatesBySlot if StatesByEpoch is not set
|
||||
slot, err := slots.EpochStart(e)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if m.StatesBySlot != nil {
|
||||
return m.StatesBySlot[slot], nil
|
||||
}
|
||||
return m.BeaconState, nil
|
||||
}
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"fmt"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
|
||||
"github.com/sirupsen/logrus"
|
||||
@@ -37,76 +38,84 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Start at previous finalized slot, stop at current finalized slot (it will be handled in the next migration).
|
||||
// If the slot is on archived point, save the state of that slot to the DB.
|
||||
for slot := oldFSlot; slot < fSlot; slot++ {
|
||||
// Calculate the first archived point slot >= oldFSlot (but > 0).
|
||||
// This avoids iterating through every slot and only visits archived points directly.
|
||||
var startSlot primitives.Slot
|
||||
if oldFSlot == 0 {
|
||||
startSlot = s.slotsPerArchivedPoint
|
||||
} else {
|
||||
// Round up to the next archived point
|
||||
startSlot = (oldFSlot + s.slotsPerArchivedPoint - 1) / s.slotsPerArchivedPoint * s.slotsPerArchivedPoint
|
||||
}
|
||||
|
||||
// Start at the first archived point after old finalized slot, stop before current finalized slot.
|
||||
// Jump directly between archived points.
|
||||
for slot := startSlot; slot < fSlot; slot += s.slotsPerArchivedPoint {
|
||||
if ctx.Err() != nil {
|
||||
return ctx.Err()
|
||||
}
|
||||
|
||||
if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
|
||||
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
|
||||
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
|
||||
}
|
||||
|
||||
var aRoot [32]byte
|
||||
var aState state.BeaconState
|
||||
|
||||
// When the epoch boundary state is not in cache due to skip slot scenario,
|
||||
// we have to regenerate the state which will represent epoch boundary.
|
||||
// By finding the highest available block below epoch boundary slot, we
|
||||
// generate the state for that block root.
|
||||
if exists {
|
||||
aRoot = cached.root
|
||||
aState = cached.state
|
||||
} else {
|
||||
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
|
||||
return err
|
||||
}
|
||||
|
||||
var aRoot [32]byte
|
||||
var aState state.BeaconState
|
||||
|
||||
// When the epoch boundary state is not in cache due to skip slot scenario,
|
||||
// we have to regenerate the state which will represent epoch boundary.
|
||||
// By finding the highest available block below epoch boundary slot, we
|
||||
// generate the state for that block root.
|
||||
if exists {
|
||||
aRoot = cached.root
|
||||
aState = cached.state
|
||||
} else {
|
||||
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
|
||||
// Given the block has been finalized, the db should not have more than one block in a given slot.
|
||||
// We should error out when this happens.
|
||||
if len(roots) != 1 {
|
||||
return errUnknownBlock
|
||||
}
|
||||
aRoot = roots[0]
|
||||
// There's no need to generate the state if the state already exists in the DB.
|
||||
// We can skip saving the state.
|
||||
if !s.beaconDB.HasState(ctx, aRoot) {
|
||||
aState, err = s.StateByRoot(ctx, aRoot)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Given the block has been finalized, the db should not have more than one block in a given slot.
|
||||
// We should error out when this happens.
|
||||
if len(roots) != 1 {
|
||||
return errUnknownBlock
|
||||
}
|
||||
aRoot = roots[0]
|
||||
// There's no need to generate the state if the state already exists in the DB.
|
||||
// We can skip saving the state.
|
||||
if !s.beaconDB.HasState(ctx, aRoot) {
|
||||
aState, err = s.StateByRoot(ctx, aRoot)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if s.beaconDB.HasState(ctx, aRoot) {
|
||||
// If you are migrating a state and its already part of the hot state cache saved to the db,
|
||||
// you can just remove it from the hot state cache as it becomes redundant.
|
||||
s.saveHotStateDB.lock.Lock()
|
||||
roots := s.saveHotStateDB.blockRootsOfSavedStates
|
||||
for i := range roots {
|
||||
if aRoot == roots[i] {
|
||||
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
|
||||
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
|
||||
// Break here is ok.
|
||||
break
|
||||
}
|
||||
}
|
||||
s.saveHotStateDB.lock.Unlock()
|
||||
continue
|
||||
}
|
||||
|
||||
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
|
||||
return err
|
||||
}
|
||||
log.WithFields(
|
||||
logrus.Fields{
|
||||
"slot": aState.Slot(),
|
||||
"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
|
||||
}).Info("Saved state in DB")
|
||||
}
|
||||
|
||||
if s.beaconDB.HasState(ctx, aRoot) {
|
||||
// If you are migrating a state and its already part of the hot state cache saved to the db,
|
||||
// you can just remove it from the hot state cache as it becomes redundant.
|
||||
s.saveHotStateDB.lock.Lock()
|
||||
roots := s.saveHotStateDB.blockRootsOfSavedStates
|
||||
for i := range roots {
|
||||
if aRoot == roots[i] {
|
||||
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
|
||||
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
|
||||
// Break here is ok.
|
||||
break
|
||||
}
|
||||
}
|
||||
s.saveHotStateDB.lock.Unlock()
|
||||
continue
|
||||
}
|
||||
|
||||
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
|
||||
return err
|
||||
}
|
||||
log.WithFields(
|
||||
logrus.Fields{
|
||||
"slot": aState.Slot(),
|
||||
"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
|
||||
}).Info("Saved state in DB")
|
||||
}
|
||||
|
||||
// Update finalized info in memory.
|
||||
|
||||
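The MigrateToCold change above replaces the slot-by-slot loop with jumps between archived points; the only subtle part is rounding the old finalized slot up to the next archived point. A small sketch of that arithmetic, with an assumed archived-point interval:

package main

import "fmt"

// firstArchivedPointAfter rounds oldFinalizedSlot up to the next multiple of
// slotsPerArchivedPoint, starting at the first archived point when it is zero.
// This mirrors the startSlot computation in the MigrateToCold diff above.
func firstArchivedPointAfter(oldFinalizedSlot, slotsPerArchivedPoint uint64) uint64 {
    if oldFinalizedSlot == 0 {
        return slotsPerArchivedPoint
    }
    return (oldFinalizedSlot + slotsPerArchivedPoint - 1) / slotsPerArchivedPoint * slotsPerArchivedPoint
}

func main() {
    // With 2048 slots per archived point (an assumed value):
    fmt.Println(firstArchivedPointAfter(0, 2048))    // 2048
    fmt.Println(firstArchivedPointAfter(2048, 2048)) // 2048 (already on a point)
    fmt.Println(firstArchivedPointAfter(2049, 2048)) // 4096
}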
@@ -161,13 +161,17 @@ func (s *Service) validateWithKzgBatchVerifier(ctx context.Context, dataColumns
|
||||
|
||||
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
|
||||
|
||||
resChan := make(chan error)
|
||||
resChan := make(chan error, 1)
|
||||
verificationSet := &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
|
||||
s.kzgChan <- verificationSet
|
||||
|
||||
ctx, cancel := context.WithTimeout(ctx, timeout)
|
||||
defer cancel()
|
||||
|
||||
select {
|
||||
case s.kzgChan <- verificationSet:
|
||||
case <-ctx.Done():
|
||||
return pubsub.ValidationIgnore, ctx.Err()
|
||||
}
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return pubsub.ValidationIgnore, ctx.Err() // parent context canceled, give up
|
||||
|
||||
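The fix above has two parts: the result channel gets a buffer of one so the verifier goroutine can always deliver its result even after the caller gave up, and the send onto kzgChan is wrapped in a select so a canceled or timed-out caller never blocks. A stripped-down sketch of that pattern, independent of the Prysm types:

package main

import (
    "context"
    "errors"
    "fmt"
    "time"
)

type job struct {
    resChan chan error // capacity 1, so the worker never blocks on delivery
}

// submit mirrors the shape of the fixed validateWithKzgBatchVerifier: it refuses
// to enqueue work once the context is done and bounds the wait for a result.
func submit(ctx context.Context, workChan chan *job, timeout time.Duration) error {
    ctx, cancel := context.WithTimeout(ctx, timeout)
    defer cancel()

    j := &job{resChan: make(chan error, 1)}

    select {
    case workChan <- j: // enqueue only while the caller is still interested
    case <-ctx.Done():
        return ctx.Err()
    }

    select {
    case err := <-j.resChan:
        return err
    case <-ctx.Done():
        return ctx.Err() // buffered resChan lets the worker finish without blocking
    }
}

func main() {
    workChan := make(chan *job)
    go func() { // a toy worker standing in for the KZG verifier routine
        for j := range workChan {
            j.resChan <- errors.New("verification result")
        }
    }()
    fmt.Println(submit(context.Background(), workChan, time.Second))
}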
@@ -84,6 +84,13 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
|
||||
return errors.Wrap(err, "beacon db update custody info")
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"earliestAvailableSlot": storedEarliestSlot,
|
||||
"custodyGroupCount": storedGroupCount,
|
||||
"headSlot": headSlot,
|
||||
"targetCustodyGroups": targetCustodyGroupCount,
|
||||
}).Debug("Maintained custody info")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
|
||||
"github.com/OffchainLabs/prysm/v7/config/params"
|
||||
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
|
||||
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
|
||||
"github.com/OffchainLabs/prysm/v7/testing/assert"
|
||||
@@ -268,6 +269,71 @@ func TestKzgBatchVerifierFallback(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestValidateWithKzgBatchVerifier_DeadlockOnTimeout(t *testing.T) {
|
||||
err := kzg.Start()
|
||||
require.NoError(t, err)
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.SecondsPerSlot = 0
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
ctx, cancel := context.WithCancel(t.Context())
|
||||
defer cancel()
|
||||
|
||||
service := &Service{
|
||||
ctx: ctx,
|
||||
kzgChan: make(chan *kzgVerifier),
|
||||
}
|
||||
go service.kzgVerifierRoutine()
|
||||
|
||||
result, err := service.validateWithKzgBatchVerifier(context.Background(), nil)
|
||||
require.Equal(t, pubsub.ValidationIgnore, result)
|
||||
require.ErrorIs(t, err, context.DeadlineExceeded)
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
_, _ = service.validateWithKzgBatchVerifier(context.Background(), nil)
|
||||
close(done)
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-done:
|
||||
case <-time.After(500 * time.Millisecond):
|
||||
t.Fatal("validateWithKzgBatchVerifier blocked")
|
||||
}
|
||||
}
|
||||
|
||||
func TestValidateWithKzgBatchVerifier_ContextCanceledBeforeSend(t *testing.T) {
|
||||
cancelledCtx, cancel := context.WithCancel(t.Context())
|
||||
cancel()
|
||||
|
||||
service := &Service{
|
||||
ctx: context.Background(),
|
||||
kzgChan: make(chan *kzgVerifier),
|
||||
}
|
||||
|
||||
done := make(chan struct{})
|
||||
go func() {
|
||||
result, err := service.validateWithKzgBatchVerifier(cancelledCtx, nil)
|
||||
require.Equal(t, pubsub.ValidationIgnore, result)
|
||||
require.ErrorIs(t, err, context.Canceled)
|
||||
close(done)
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-done:
|
||||
case <-time.After(500 * time.Millisecond):
|
||||
t.Fatal("validateWithKzgBatchVerifier did not return after context cancellation")
|
||||
}
|
||||
|
||||
select {
|
||||
case <-service.kzgChan:
|
||||
t.Fatal("verificationSet was sent to kzgChan despite canceled context")
|
||||
default:
|
||||
}
|
||||
}
|
||||
|
||||
func createValidTestDataColumns(t *testing.T, count int) []blocks.RODataColumn {
|
||||
_, roSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, count)
|
||||
if len(roSidecars) >= count {
|
||||
|
||||
@@ -265,7 +265,7 @@ func (s *Service) processVerifiedAttestation(
|
||||
if key, err := generateUnaggregatedAttCacheKey(broadcastAtt); err != nil {
|
||||
log.WithError(err).Error("Failed to generate cache key for attestation tracking")
|
||||
} else {
|
||||
s.setSeenUnaggregatedAtt(key)
|
||||
_ = s.setSeenUnaggregatedAtt(key)
|
||||
}
|
||||
|
||||
valCount, err := helpers.ActiveValidatorCount(ctx, preState, slots.ToEpoch(data.Slot))
|
||||
@@ -320,7 +320,7 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
|
||||
return
|
||||
}
|
||||
|
||||
s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
|
||||
_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
|
||||
|
||||
if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
|
||||
log.WithError(err).Debug("Could not broadcast aggregated attestation")
|
||||
|
||||
@@ -137,7 +137,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
|
||||
return validationRes, err
|
||||
}
|
||||
|
||||
s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex())
|
||||
if first := s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex()); !first {
|
||||
return pubsub.ValidationIgnore, nil
|
||||
}
|
||||
|
||||
msg.ValidatorData = m
|
||||
|
||||
@@ -265,13 +267,19 @@ func (s *Service) hasSeenAggregatorIndexEpoch(epoch primitives.Epoch, aggregator
|
||||
}
|
||||
|
||||
// Set aggregate's aggregator index target epoch as seen.
|
||||
func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) {
|
||||
// Returns true if this is the first time seeing this aggregator index and epoch.
|
||||
func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) bool {
|
||||
b := append(bytesutil.Bytes32(uint64(epoch)), bytesutil.Bytes32(uint64(aggregatorIndex))...)
|
||||
|
||||
s.seenAggregatedAttestationLock.Lock()
|
||||
defer s.seenAggregatedAttestationLock.Unlock()
|
||||
|
||||
_, seen := s.seenAggregatedAttestationCache.Get(string(b))
|
||||
if seen {
|
||||
return false
|
||||
}
|
||||
s.seenAggregatedAttestationCache.Add(string(b), true)
|
||||
return true
|
||||
}
|
||||
|
||||
// This validates the bitfield is correct and aggregator's index in state is within the beacon committee.
|
||||
|
||||
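The setAggregatorIndexEpochSeen change above turns a blind cache insert into a check-and-set performed under one lock, so exactly one concurrent caller learns it was first. A minimal sketch of that idiom using a plain map instead of the LRU cache from the diff:

package main

import (
    "fmt"
    "sync"
)

type seenTracker struct {
    mu   sync.Mutex
    seen map[string]bool
}

// markSeen reports whether key was recorded for the first time. Checking and
// inserting under the same lock guarantees that, among any number of
// concurrent callers, exactly one observes first == true.
func (t *seenTracker) markSeen(key string) bool {
    t.mu.Lock()
    defer t.mu.Unlock()
    if t.seen[key] {
        return false
    }
    t.seen[key] = true
    return true
}

func main() {
    t := &seenTracker{seen: make(map[string]bool)}
    fmt.Println(t.markSeen("epoch7/validator42")) // true: first sighting
    fmt.Println(t.markSeen("epoch7/validator42")) // false: already seen
}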
@@ -801,3 +801,27 @@ func TestValidateAggregateAndProof_RejectWhenAttEpochDoesntEqualTargetEpoch(t *t
|
||||
assert.NotNil(t, err)
|
||||
assert.Equal(t, pubsub.ValidationReject, res)
|
||||
}
|
||||
|
||||
func Test_SetAggregatorIndexEpochSeen(t *testing.T) {
|
||||
db := dbtest.SetupDB(t)
|
||||
p := p2ptest.NewTestP2P(t)
|
||||
|
||||
r := &Service{
|
||||
cfg: &config{
|
||||
p2p: p,
|
||||
beaconDB: db,
|
||||
},
|
||||
seenAggregatedAttestationCache: lruwrpr.New(10),
|
||||
}
|
||||
|
||||
aggIndex := primitives.ValidatorIndex(42)
|
||||
epoch := primitives.Epoch(7)
|
||||
|
||||
require.Equal(t, false, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
|
||||
first := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
|
||||
require.Equal(t, true, first)
|
||||
require.Equal(t, true, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
|
||||
|
||||
second := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
|
||||
require.Equal(t, false, second)
|
||||
}
|
||||
|
||||
@@ -104,7 +104,8 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
|
||||
}
|
||||
|
||||
if !s.slasherEnabled {
|
||||
// Verify this the first attestation received for the participating validator for the slot.
|
||||
// Verify this is the first attestation received for the participating validator for the slot. This verification is here to return early if we've already seen this attestation.
// This verification is carried out again later, after all other validations, to avoid TOCTOU issues.
|
||||
if s.hasSeenUnaggregatedAtt(attKey) {
|
||||
return pubsub.ValidationIgnore, nil
|
||||
}
|
||||
@@ -228,7 +229,10 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
|
||||
Data: eventData,
|
||||
})
|
||||
|
||||
s.setSeenUnaggregatedAtt(attKey)
|
||||
if first := s.setSeenUnaggregatedAtt(attKey); !first {
|
||||
// Another concurrent validation processed the same attestation meanwhile
|
||||
return pubsub.ValidationIgnore, nil
|
||||
}
|
||||
|
||||
// Attach final validated attestation to the message for further pipeline use
|
||||
msg.ValidatorData = attForValidation
|
||||
@@ -385,11 +389,16 @@ func (s *Service) hasSeenUnaggregatedAtt(key string) bool {
|
||||
}
|
||||
|
||||
// Set an incoming attestation as seen for the participating validator for the slot.
|
||||
func (s *Service) setSeenUnaggregatedAtt(key string) {
|
||||
// Returns false if the attestation was already seen.
|
||||
func (s *Service) setSeenUnaggregatedAtt(key string) bool {
|
||||
s.seenUnAggregatedAttestationLock.Lock()
|
||||
defer s.seenUnAggregatedAttestationLock.Unlock()
|
||||
|
||||
_, seen := s.seenUnAggregatedAttestationCache.Get(key)
|
||||
if seen {
|
||||
return false
|
||||
}
|
||||
s.seenUnAggregatedAttestationCache.Add(key, true)
|
||||
return true
|
||||
}
|
||||
|
||||
// hasBlockAndState returns true if the beacon node knows about a block and associated state in the
|
||||
|
||||
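The attestation-validation hunk above keeps the cheap "have we seen this?" check at the top for an early exit, but no longer trusts it as the final word: after the expensive validations it calls the combined check-and-set and ignores the message if a concurrent validation won the race in the meantime. A rough sketch of that control flow, with placeholder names rather than the real Prysm identifiers:

package main

import "fmt"

// validateOnce sketches the post-change flow: early exit, expensive checks,
// then an atomic mark-and-recheck that closes the TOCTOU window.
func validateOnce(key string, hasSeen func(string) bool, setSeenFirst func(string) bool, expensiveChecks func() bool) string {
    if hasSeen(key) {
        return "ignore: seen, skip expensive work" // early exit kept from the old code
    }
    if !expensiveChecks() {
        return "ignore: validation failed"
    }
    // If a concurrent validation of the same attestation marked it first,
    // back off here instead of accepting a duplicate.
    if first := setSeenFirst(key); !first {
        return "ignore: lost the race to a concurrent validation"
    }
    return "accept: first fully validated copy"
}

func main() {
    seen := map[string]bool{}
    hasSeen := func(k string) bool { return seen[k] }
    setSeenFirst := func(k string) bool {
        if seen[k] {
            return false
        }
        seen[k] = true
        return true
    }
    fmt.Println(validateOnce("att-key", hasSeen, setSeenFirst, func() bool { return true }))
    fmt.Println(validateOnce("att-key", hasSeen, setSeenFirst, func() bool { return true }))
}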
@@ -499,6 +499,10 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
|
||||
Data: ðpb.AttestationData{Slot: 2, CommitteeIndex: 0},
|
||||
AggregationBits: bitfield.Bitlist{0b1001},
|
||||
}
|
||||
s3c0a0 := ðpb.Attestation{
|
||||
Data: ðpb.AttestationData{Slot: 3, CommitteeIndex: 0},
|
||||
AggregationBits: bitfield.Bitlist{0b1001},
|
||||
}
|
||||
|
||||
t.Run("empty cache", func(t *testing.T) {
|
||||
key := generateKey(t, s0c0a0)
|
||||
@@ -506,26 +510,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
|
||||
})
|
||||
t.Run("ok", func(t *testing.T) {
|
||||
key := generateKey(t, s0c0a0)
|
||||
s.setSeenUnaggregatedAtt(key)
|
||||
first := s.setSeenUnaggregatedAtt(key)
|
||||
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("already seen", func(t *testing.T) {
|
||||
key := generateKey(t, s3c0a0)
|
||||
first := s.setSeenUnaggregatedAtt(key)
|
||||
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
|
||||
assert.Equal(t, true, first)
|
||||
first = s.setSeenUnaggregatedAtt(key)
|
||||
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
|
||||
assert.Equal(t, false, first)
|
||||
})
|
||||
t.Run("different slot", func(t *testing.T) {
|
||||
key1 := generateKey(t, s1c0a0)
|
||||
key2 := generateKey(t, s2c0a0)
|
||||
s.setSeenUnaggregatedAtt(key1)
|
||||
first := s.setSeenUnaggregatedAtt(key1)
|
||||
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("different committee index", func(t *testing.T) {
|
||||
key1 := generateKey(t, s0c1a0)
|
||||
key2 := generateKey(t, s0c2a0)
|
||||
s.setSeenUnaggregatedAtt(key1)
|
||||
first := s.setSeenUnaggregatedAtt(key1)
|
||||
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("different bit", func(t *testing.T) {
|
||||
key1 := generateKey(t, s0c0a1)
|
||||
key2 := generateKey(t, s0c0a2)
|
||||
s.setSeenUnaggregatedAtt(key1)
|
||||
first := s.setSeenUnaggregatedAtt(key1)
|
||||
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("0 bits set is considered not seen", func(t *testing.T) {
|
||||
a := ðpb.Attestation{AggregationBits: bitfield.Bitlist{0b1000}}
|
||||
@@ -576,6 +593,11 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
|
||||
CommitteeId: 0,
|
||||
AttesterIndex: 0,
|
||||
}
|
||||
s3c0a0 := ðpb.SingleAttestation{
|
||||
Data: ðpb.AttestationData{Slot: 2},
|
||||
CommitteeId: 0,
|
||||
AttesterIndex: 0,
|
||||
}
|
||||
|
||||
t.Run("empty cache", func(t *testing.T) {
|
||||
key := generateKey(t, s0c0a0)
|
||||
@@ -583,26 +605,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
|
||||
})
|
||||
t.Run("ok", func(t *testing.T) {
|
||||
key := generateKey(t, s0c0a0)
|
||||
s.setSeenUnaggregatedAtt(key)
|
||||
first := s.setSeenUnaggregatedAtt(key)
|
||||
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("different slot", func(t *testing.T) {
|
||||
key1 := generateKey(t, s1c0a0)
|
||||
key2 := generateKey(t, s2c0a0)
|
||||
s.setSeenUnaggregatedAtt(key1)
|
||||
first := s.setSeenUnaggregatedAtt(key1)
|
||||
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("already seen", func(t *testing.T) {
|
||||
key := generateKey(t, s3c0a0)
|
||||
first := s.setSeenUnaggregatedAtt(key)
|
||||
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
|
||||
assert.Equal(t, true, first)
|
||||
first = s.setSeenUnaggregatedAtt(key)
|
||||
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
|
||||
assert.Equal(t, false, first)
|
||||
})
|
||||
t.Run("different committee index", func(t *testing.T) {
|
||||
key1 := generateKey(t, s0c1a0)
|
||||
key2 := generateKey(t, s0c2a0)
|
||||
s.setSeenUnaggregatedAtt(key1)
|
||||
first := s.setSeenUnaggregatedAtt(key1)
|
||||
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("different attester", func(t *testing.T) {
|
||||
key1 := generateKey(t, s0c0a1)
|
||||
key2 := generateKey(t, s0c0a2)
|
||||
s.setSeenUnaggregatedAtt(key1)
|
||||
first := s.setSeenUnaggregatedAtt(key1)
|
||||
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
|
||||
assert.Equal(t, true, first)
|
||||
})
|
||||
t.Run("single attestation is considered not seen", func(t *testing.T) {
|
||||
a := ðpb.AttestationElectra{}
|
||||
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Removed dead slot parameter from blobCacheEntry.filter
|
||||
@@ -1,3 +0,0 @@
|
||||
## Changed
|
||||
|
||||
- Avoid redundant WithHttpEndpoint when JWT is provided
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix proposals progress bar count [#16020](https://github.com/OffchainLabs/prysm/pull/16020)
|
||||
3 changelog/SashaMalysehko_fix-return-after-check.md Normal file
@@ -0,0 +1,3 @@
|
||||
## Fixed
|
||||
|
||||
- Fix missing return after version header check in SubmitAttesterSlashingsV2.
|
||||
3 changelog/Snezhkko_fix-type.md Normal file
@@ -0,0 +1,3 @@
|
||||
## Fixed
|
||||
|
||||
- incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)
|
||||
2 changelog/aarsh-revert-autonatv2.md Normal file
@@ -0,0 +1,2 @@
|
||||
### Ignored
|
||||
- Reverts AutoNatV2 change introduced in https://github.com/OffchainLabs/prysm/pull/16100 as the libp2p upgrade fails inter-op testing.
|
||||
3 changelog/avoid_kzg_send_after_context_cancel.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Integrate state-diff into `HasState()`.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Refactor finding slot by block root using state summary and block to its own function.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix state diff repetitive anchor slot bug.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Add initial configs for the state-diff feature.
|
||||
- Add kv functions for the state-diff feature.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Integrate state-diff into `State()`.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- add fulu support to light client processing.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Added
|
||||
- prometheus metric `gossip_attestation_verification_milliseconds` to track attestation gossip topic validation latency.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Downgraded log level from INFO to DEBUG on PrepareBeaconProposer updated fee recipients.
|
||||
- Change the logging behaviour of Updated fee recipients to only log count of validators at Debug level and all validator indices at Trace level.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Add osaka fork timestamp derivation to interop genesis
|
||||
3 changelog/fix_kzg_batch_verifier_timeout_deadlock.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Fix deadlock in data column gossip KZG batch verification when a caller times out preventing result delivery.
|
||||
3 changelog/james-prysm_fix-backward-earliest-slot.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Fixes the earliest available slot so it never goes backwards when setting the semi-supernode or supernode flags
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fixes E2E tests so they can start from the Electra genesis fork or future forks
|
||||
3 changelog/james-prysm_fix-rest-replay-state.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Fixed a replay state issue in the REST API caused by the attester and sync committee duties endpoints
|
||||
11 changelog/james-prysm_fulu-e2e.md Normal file
@@ -0,0 +1,11 @@
|
||||
### Added
|
||||
|
||||
- Add basic Fulu fork transition support for mainnet and minimal e2e tests (the multi scenario is not included)
|
||||
|
||||
### Changed
|
||||
|
||||
- updated go ethereum to 1.16.7
|
||||
|
||||
### Removed
|
||||
|
||||
- removed github.com/MariusVanDerWijden/FuzzyVM and github.com/MariusVanDerWijden/tx-fuzz due to lack of support post 1.16.7, only used in e2e for transaction fuzzing
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- optimization to remove cell and blob proof computation on blob rest api.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Added
|
||||
- Added `--semi-supernode` flag to custody half of a supernode's data column requirements while still allowing reconstruction for blob retrieval
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Changed `--subscribe-all-data-subnets` flag to `--supernode` and aliased `--subscribe-all-data-subnets` for existing users.
|
||||
@@ -1,7 +0,0 @@
|
||||
### Added
|
||||
- Data column backfill.
|
||||
- Backfill metrics for columns: backfill_data_column_sidecar_downloaded, backfill_data_column_sidecar_downloaded_bytes, backfill_batch_columns_download_ms, backfill_batch_columns_verify_ms.
|
||||
|
||||
### Changed
|
||||
- backfill metrics that changed name and/or histogram buckets: backfill_batch_time_verify -> backfill_batch_verify_ms, backfill_batch_time_waiting -> backfill_batch_waiting_ms, backfill_batch_time_roundtrip -> backfill_batch_roundtrip_ms, backfill_blocks_bytes_downloaded -> backfill_blocks_downloaded_bytes, backfill_batch_blocks_time_download -> backfill_batch_blocks_download_ms, backfill_batch_blobs_time_download -> backfill_batch_blobs_download_ms, backfill_blobs_bytes_downloaded -> backfill_blocks_downloaded_bytes
|
||||
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Stop emitting payload attribute events during late block handling when we are not proposing the next slot
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- `blobsDataFromStoredDataColumns`: Ask the user to use the `--supernode` flag and shorten the error message.
|
||||
@@ -1,6 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Added log prefix to the `genesis` package.
|
||||
- Added log prefix to the `params` package.
|
||||
- `WithGenesisValidatorsRoot`: Use camelCase for log field param.
|
||||
- Move `Origin checkpoint found in db` from WARN to INFO, since it is the expected behaviour.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Move the "Not enough connected peers" (for a given subnet) from WARN to DEBUG
|
||||
@@ -1,4 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- `NUMBER_OF_COLUMNS` configuration (not in the specification any more, replaced by a preset)
|
||||
- `MAX_CELLS_IN_EXTENDED_MATRIX` configuration (not in the specification any more)
|
||||
2 changelog/manu-test-pr.md Normal file
@@ -0,0 +1,2 @@
|
||||
### Ignored
|
||||
- Added test requirement to `PULL_REQUEST_TEMPLATE.md`
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Check the JWT secret length is exactly 256 bits (32 bytes) as per Engine API specification
|
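The entry above describes a simple length check that is easy to get wrong when the secret is stored hex-encoded. A minimal sketch of validating a hex-encoded JWT secret (an illustrative helper, not the code from the diff):

package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

// parseJWTSecret decodes a hex-encoded JWT secret and enforces the Engine API
// requirement of exactly 256 bits (32 bytes).
func parseJWTSecret(hexSecret string) ([]byte, error) {
    s := strings.TrimPrefix(strings.TrimSpace(hexSecret), "0x")
    secret, err := hex.DecodeString(s)
    if err != nil {
        return nil, fmt.Errorf("jwt secret is not valid hex: %w", err)
    }
    if len(secret) != 32 {
        return nil, fmt.Errorf("jwt secret must be exactly 32 bytes (256 bits), got %d", len(secret))
    }
    return secret, nil
}

func main() {
    _, err := parseJWTSecret("0xdeadbeef") // too short: 4 bytes
    fmt.Println(err)
}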
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- Improve readability in slashing import and remove duplicated code
|
||||
3 changelog/potuz_check_twice_attseen.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Fixed possible race when validating two attestations at the same time.
|
||||
3 changelog/potuz_finalized_deproot.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Track the dependent root of the latest finalized checkpoint in forkchoice.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fix array out of bounds in static analyzer.
|
||||
3 changelog/potuz_return_indices_updateerr.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Do not error when committee has been computed correctly but updating the cache failed.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Use dependent root instead of target when possible.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Use head state to validate attestations for old blocks if they are compatible.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Copied deleted dependency `github.com/tyler-smith/go-bip39` to the third_party directory and updated prysm to use that.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- http_error_count now matches the other cases by listing the endpoint name rather than the actual URL requested. This improves metrics cardinality.
|
||||
@@ -1,4 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Updated golang.org/x/tools
|
||||
- Introduced modernize static analyzers to nogo
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Updated CHANGELOG.md with release notes from v7.0.0
|
||||
3 changelog/pvl-v7.0.1.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Updated CHANGELOG.md for v7.0.1 patch release
|
||||
3 changelog/pvl-v7.1.0.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Changelog for v7.1.0
|
||||
3 changelog/radek_httperror-analyzer.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Static analyzer that ensures each `httputil.HandleError` call is followed by a `return` statement.
|
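The pattern the analyzer enforces is sketched below. The handleError stub stands in for httputil.HandleError; the analyzer's point is the caller's control flow, since forgetting the return lets the handler keep executing and write a second, conflicting response.

package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
    "strconv"
)

// handleError is a stand-in for httputil.HandleError(w, message, code).
func handleError(w http.ResponseWriter, msg string, code int) {
    http.Error(w, msg, code)
}

// getEpoch shows the enforced pattern: every error response is followed
// immediately by `return`, otherwise the handler falls through and writes a
// second body onto an already-committed response.
func getEpoch(w http.ResponseWriter, r *http.Request) {
    epoch, err := strconv.ParseUint(r.PathValue("epoch"), 10, 64)
    if err != nil {
        handleError(w, "epoch is invalid: "+err.Error(), http.StatusBadRequest)
        return // <- the analyzer flags handlers that omit this return
    }
    fmt.Fprintf(w, "epoch %d", epoch)
}

func main() {
    req := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
    req.SetPathValue("epoch", "x")
    rec := httptest.NewRecorder()
    getEpoch(rec, req)
    fmt.Println(rec.Code) // 400
}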
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Initialize the `ExecutionRequests` field in gossip block map.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Move `BlockGossipReceived` event to the end of gossip validation.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- Remove validator cross-client from end-to-end tests.
|
||||
3 changelog/radek_use-statefetch-error.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Use `WriteStateFetchError` in API handlers whenever possible.
|
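In practice such a helper centralizes the mapping the duties tests above assert on: a state-not-found lookup error becomes a 404 with a "State not found" message, anything else becomes a 500. A hedged sketch of a WriteStateFetchError-style helper (the real implementation and error types live in the beacon-chain/rpc packages and may differ in detail):

package main

import (
    "errors"
    "fmt"
    "net/http"
    "net/http/httptest"
)

// stateNotFoundError stands in for lookup.StateNotFoundError.
type stateNotFoundError struct{ msg string }

func (e *stateNotFoundError) Error() string { return e.msg }

// writeStateFetchError classifies a state-fetch failure once and emits the
// matching HTTP status, so every handler reports 404 vs 500 consistently.
func writeStateFetchError(w http.ResponseWriter, err error) {
    var notFound *stateNotFoundError
    if errors.As(err, &notFound) {
        http.Error(w, "State not found: "+notFound.msg, http.StatusNotFound)
        return
    }
    http.Error(w, "Could not fetch state: "+err.Error(), http.StatusInternalServerError)
}

func main() {
    rec := httptest.NewRecorder()
    writeStateFetchError(rec, &stateNotFoundError{msg: "no state for epoch 8192"})
    fmt.Println(rec.Code) // 404

    rec = httptest.NewRecorder()
    writeStateFetchError(rec, errors.New("internal error"))
    fmt.Println(rec.Code) // 500
}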
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Replace fixed sleep delays with active polling in prometheus service test to improve test reliability.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Metrics to track earliest available slot
|
||||
3 changelog/satushh-eth1copy.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Removed
|
||||
|
||||
- Unnecessary copy is removed from Eth1DataHasEnoughSupport
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Nil check for block if it doesn't exist in the DB in fetchOriginSidecars
|
||||
3 changelog/satushh-graffiti.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Proposal design document to implement graffiti. Currently it is empty by default, and the idea is to have it take a form like GE168dPR63af
|
||||
3 changelog/satushh-migratetocold.md Normal file
@@ -0,0 +1,3 @@
|
||||
### Changed
|
||||
|
||||
- Optimise MigrateToCold by jumping directly between archived points instead of brute-forcing a for loop over every slot
|
||||
@@ -1,3 +0,0 @@
|
||||
### Removed
|
||||
|
||||
- Reverted the eas metric as it currently has a bug. Will be fixed later.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Added
|
||||
- prometheus summary `gossip_data_column_sidecar_arrival_milliseconds` to track data column sidecar arrival latency since slot start.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Record data column gossip KZG batch verification latency in both the pooled worker and fallback paths so the `beacon_kzg_verification_data_column_batch_milliseconds` histogram reflects gossip traffic, annotated with `path` labels to distinguish the sources.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Add supported version for fork versions
|
||||
@@ -1,2 +0,0 @@
|
||||
### Changed
|
||||
- Introduced flag `--ignore-unviable-attestations` (replaces and deprecates `--disable-last-epoch-targets`) to drop attestations whose target state is not viable; default remains to process them unless explicitly enabled.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Implement Gloas state
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Implement Gloas fork support in consensus-types/blocks with factory methods, getters, setters, and proto handling
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- P2p: wire stategen into service for last finalized state and use it for active validator count
|
||||
Some files were not shown because too many files have changed in this diff.