Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 13:58:09 -05:00)
Compare commits: backfill-d...plus-one-b (21 commits)
| SHA1 |
|---|
| 2018b647ea |
| 0352e39f64 |
| 82847da8a7 |
| accfdd0f7c |
| 2b51e9e350 |
| e05fec0f5f |
| 9153c5a202 |
| 26ce94e224 |
| 1a904dbae3 |
| 255ea2fac1 |
| 46bc81b4c8 |
| 37417e5905 |
| 9c4774b82e |
| 7a5f4cf122 |
| 7dd4f5948c |
| 2f090c52d9 |
| 3ecb5d0b67 |
| 253f91930a |
| 7c3e45637f |
| 96429c5089 |
| d613f3a262 |
CHANGELOG.md (40 lines changed)
@@ -4,6 +4,44 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20

This release has several important beacon API and p2p fixes.

### Added

- Add Grandine to P2P known agents (useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling root calculation for generated types while avoiding duplicate logic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of the `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access the n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)

### Changed

- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- Updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle 502 errors when submitting blind blocks. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve how individual message errors are returned from the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify the `Size` method for more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)

### Fixed

- Use service context and continue on slasher attestation errors (#15803). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event is no longer sent on certain block processing failures; it is now sent only when the block is non-canonical, when the block is canonical but getFCUArgs fails, or on fully successful processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e issues caused by a regression in old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid on ErrNotDescendantOfFinalized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejecting attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check that the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix the `/eth/v1/beacon/blob_sidecars/` beacon API when the Fulu fork epoch is set to the far future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if there is no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- Reject committee index >= committees_per_slot in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)

## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10

This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.

@@ -3759,4 +3797,4 @@ There are no security updates in this release.

# Older than v2.0.0

For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
WORKSPACE (10 lines changed)
@@ -253,16 +253,16 @@ filegroup(
    url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)

consensus_spec_version = "v1.6.0-beta.0"
consensus_spec_version = "v1.6.0-beta.1"

load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")

consensus_spec_tests(
    name = "consensus_spec_tests",
    flavors = {
        "general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
        "minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
        "mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
        "general": "sha256-oEj0MTViJHjZo32nABK36gfvSXpbwkBk/jt6Mj7pWFI=",
        "minimal": "sha256-cS4NPv6IRBoCSmWomQ8OEo8IsVNW9YawUFqoRZQBUj4=",
        "mainnet": "sha256-BYuLndMPAh4p13IRJgNfVakrCVL69KRrNw2tdc3ETbE=",
    },
    version = consensus_spec_version,
)

@@ -278,7 +278,7 @@ filegroup(
    visibility = ["//visibility:public"],
)
""",
    integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
    integrity = "sha256-yrq3tdwPS8Ri+ueeLAHssIT3ssMrX7zvHiJ8Xf9GVYs=",
    strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
    url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -6,6 +6,7 @@ go_library(
        "client.go",
        "errors.go",
        "options.go",
        "transport.go",
    ],
    importpath = "github.com/OffchainLabs/prysm/v6/api/client",
    visibility = ["//visibility:public"],
@@ -14,7 +15,13 @@ go_library(

go_test(
    name = "go_default_test",
    srcs = ["client_test.go"],
    srcs = [
        "client_test.go",
        "transport_test.go",
    ],
    embed = [":go_default_library"],
    deps = ["//testing/require:go_default_library"],
    deps = [
        "//testing/assert:go_default_library",
        "//testing/require:go_default_library",
    ],
)
api/client/transport.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package client

import "net/http"

// CustomHeadersTransport adds custom headers to each request
type CustomHeadersTransport struct {
    base    http.RoundTripper
    headers map[string][]string
}

func NewCustomHeadersTransport(base http.RoundTripper, headers map[string][]string) *CustomHeadersTransport {
    return &CustomHeadersTransport{
        base:    base,
        headers: headers,
    }
}

func (t *CustomHeadersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    for header, values := range t.headers {
        for _, value := range values {
            req.Header.Add(header, value)
        }
    }
    return t.base.RoundTrip(req)
}
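Not part of the diff: a minimal usage sketch showing how `CustomHeadersTransport` might be wired into an `http.Client`. Only `NewCustomHeadersTransport` and its signature come from the file above; the header name, token value, and endpoint URL are illustrative placeholders.

```go
package main

import (
	"net/http"

	"github.com/OffchainLabs/prysm/v6/api/client"
)

func main() {
	// Every request sent through this client gets the extra headers added by
	// CustomHeadersTransport before being delegated to the base transport.
	headers := map[string][]string{
		"Authorization": {"Bearer example-token"}, // illustrative value
	}
	httpClient := &http.Client{
		Transport: client.NewCustomHeadersTransport(http.DefaultTransport, headers),
	}
	resp, err := httpClient.Get("http://localhost:3500/eth/v1/node/version") // illustrative endpoint
	if err == nil {
		defer resp.Body.Close()
	}
}
```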
api/client/transport_test.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package client

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
)

type noopTransport struct{}

func (*noopTransport) RoundTrip(*http.Request) (*http.Response, error) {
    return nil, nil
}

func TestRoundTrip(t *testing.T) {
    tr := &CustomHeadersTransport{base: &noopTransport{}, headers: map[string][]string{"key1": []string{"value1", "value2"}, "key2": []string{"value3"}}}
    req := httptest.NewRequest("GET", "http://foo", nil)
    _, err := tr.RoundTrip(req)
    require.NoError(t, err)
    assert.DeepEqual(t, []string{"value1", "value2"}, req.Header.Values("key1"))
    assert.DeepEqual(t, []string{"value3"}, req.Header.Values("key2"))
}
@@ -173,6 +173,7 @@ go_test(
|
||||
"//beacon-chain/state/state-native:go_default_library",
|
||||
"//beacon-chain/state/stategen:go_default_library",
|
||||
"//beacon-chain/verification:go_default_library",
|
||||
"//cmd/beacon-chain/flags:go_default_library",
|
||||
"//config/features:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
|
||||
@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
    // Compute the custody group count.
    custodyGroupCount := custodyRequirement
    if isSubscribedToAllDataSubnets {
        custodyGroupCount = beaconConfig.NumberOfColumns
        custodyGroupCount = beaconConfig.NumberOfCustodyGroups
    }

    // Safely compute the fulu fork slot.
@@ -23,9 +23,11 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
|
||||
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
|
||||
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
|
||||
"github.com/OffchainLabs/prysm/v6/config/features"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
@@ -596,3 +598,103 @@ func TestNotifyIndex(t *testing.T) {
|
||||
t.Errorf("Notifier channel did not receive the index")
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateCustodyInfoInDB(t *testing.T) {
|
||||
const (
|
||||
fuluForkEpoch = 10
|
||||
custodyRequirement = uint64(4)
|
||||
earliestStoredSlot = primitives.Slot(12)
|
||||
numberOfCustodyGroups = uint64(64)
|
||||
numberOfColumns = uint64(128)
|
||||
)
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig()
|
||||
cfg.FuluForkEpoch = fuluForkEpoch
|
||||
cfg.CustodyRequirement = custodyRequirement
|
||||
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
|
||||
cfg.NumberOfColumns = numberOfColumns
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
ctx := t.Context()
|
||||
pbBlock := util.NewBeaconBlock()
|
||||
pbBlock.Block.Slot = 12
|
||||
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("CGC increases before fulu", func(t *testing.T) {
|
||||
service, requirements := minimalTestService(t)
|
||||
err = requirements.db.SaveBlock(ctx, roBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Before Fulu
|
||||
// -----------
|
||||
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
resetFlags := flags.Get()
|
||||
gFlags := new(flags.GlobalFlags)
|
||||
gFlags.SubscribeAllDataSubnets = true
|
||||
flags.Init(gFlags)
|
||||
defer flags.Init(resetFlags)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
|
||||
// After Fulu
|
||||
// ----------
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
})
|
||||
|
||||
t.Run("CGC increases after fulu", func(t *testing.T) {
|
||||
service, requirements := minimalTestService(t)
|
||||
err = requirements.db.SaveBlock(ctx, roBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Before Fulu
|
||||
// -----------
|
||||
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
// After Fulu
|
||||
// ----------
|
||||
resetFlags := flags.Get()
|
||||
gFlags := new(flags.GlobalFlags)
|
||||
gFlags.SubscribeAllDataSubnets = true
|
||||
flags.Init(gFlags)
|
||||
defer flags.Init(resetFlags)
|
||||
|
||||
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, slot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, slot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -130,6 +130,14 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
|
||||
return earliestAvailableSlot, custodyGroupCount, nil
|
||||
}
|
||||
|
||||
func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
dch.mut.Lock()
|
||||
defer dch.mut.Unlock()
|
||||
|
||||
dch.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
|
||||
return 0
|
||||
}
|
||||
|
||||
@@ -201,14 +201,3 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
        Root: bRoot,
    }, nil
}

// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
    return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
        primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
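As a sanity check on the 33024 figure quoted in the removed comment above: with the mainnet spec values MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs and CHURN_LIMIT_QUOTIENT = 65536, the formula evaluates to 256 + 65536/2 = 256 + 32768 = 33024 epochs, which is roughly five months of history.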
@@ -286,20 +286,3 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
|
||||
return beaconState
|
||||
}
|
||||
|
||||
func TestMinEpochsForBlockRequests(t *testing.T) {
|
||||
helpers.ClearCache()
|
||||
|
||||
params.SetActiveTestCleanup(t, params.MainnetConfig())
|
||||
var expected primitives.Epoch = 33024
|
||||
// expected value of 33024 via spec commentary:
|
||||
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
|
||||
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
|
||||
//
|
||||
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
|
||||
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
|
||||
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
|
||||
// )
|
||||
//
|
||||
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
|
||||
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
|
||||
}
|
||||
|
||||
@@ -212,7 +212,8 @@ func filterNoop(_ string) bool {
    return true
}

func isRootDir(p string) bool {
// IsBlockRootDir returns true if the path segment looks like a block root directory.
func IsBlockRootDir(p string) bool {
    dir := filepath.Base(p)
    return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}

@@ -188,7 +188,7 @@ func TestListDir(t *testing.T) {
            name:     "root filter",
            dirPath:  ".",
            expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
            filter:   isRootDir,
            filter:   IsBlockRootDir,
        },
        {
            name: "ssz filter",
@@ -19,12 +19,14 @@ import (
|
||||
const (
|
||||
// Full root in directory will be 66 chars, eg:
|
||||
// >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
|
||||
rootStringLen = 66
|
||||
sszExt = "ssz"
|
||||
partExt = "part"
|
||||
periodicEpochBaseDir = "by-epoch"
|
||||
rootStringLen = 66
|
||||
sszExt = "ssz"
|
||||
partExt = "part"
|
||||
)
|
||||
|
||||
// PeriodicEpochBaseDir is the name of the base directory for the by-epoch layout.
|
||||
const PeriodicEpochBaseDir = "by-epoch"
|
||||
|
||||
const (
|
||||
LayoutNameFlat = "flat"
|
||||
LayoutNameByEpoch = "by-epoch"
|
||||
@@ -130,11 +132,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
|
||||
if iter.atEOF() {
|
||||
return errLayoutNotDetected
|
||||
}
|
||||
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
|
||||
lastMoved := ""
|
||||
parentDirs := make(map[string]bool) // this map should have < 65k keys by design
|
||||
moved := 0
|
||||
dc := newDirCleaner()
|
||||
migrationLogged := false
|
||||
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
|
||||
if err != nil {
|
||||
if errors.Is(err, errIdentFailure) {
|
||||
@@ -146,6 +148,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
|
||||
}
|
||||
return errors.Wrapf(errMigrationFailure, "failed to iterate previous layout structure while migrating blobs, err=%s", err.Error())
|
||||
}
|
||||
if !migrationLogged {
|
||||
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).
|
||||
Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
|
||||
migrationLogged = true
|
||||
}
|
||||
src := from.dir(ident)
|
||||
target := to.dir(ident)
|
||||
if src != lastMoved {
|
||||
|
||||
@@ -34,7 +34,7 @@ func (l *periodicEpochLayout) name() string {
|
||||
|
||||
func (l *periodicEpochLayout) blockParentDirs(ident blobIdent) []string {
|
||||
return []string{
|
||||
periodicEpochBaseDir,
|
||||
PeriodicEpochBaseDir,
|
||||
l.periodDir(ident.epoch),
|
||||
l.epochDir(ident.epoch),
|
||||
}
|
||||
@@ -50,28 +50,28 @@ func (l *periodicEpochLayout) notify(ident blobIdent) error {
|
||||
|
||||
// If before == 0, it won't be used as a filter and all idents will be returned.
|
||||
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
|
||||
_, err := l.fs.Stat(periodicEpochBaseDir)
|
||||
_, err := l.fs.Stat(PeriodicEpochBaseDir)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
|
||||
}
|
||||
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
|
||||
return nil, errors.Wrapf(err, "error reading path %s", PeriodicEpochBaseDir)
|
||||
}
|
||||
// iterate root, which should have directories named by "period"
|
||||
entries, err := listDir(l.fs, periodicEpochBaseDir)
|
||||
entries, err := listDir(l.fs, PeriodicEpochBaseDir)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
|
||||
return nil, errors.Wrapf(err, "failed to list %s", PeriodicEpochBaseDir)
|
||||
}
|
||||
|
||||
return &identIterator{
|
||||
fs: l.fs,
|
||||
path: periodicEpochBaseDir,
|
||||
path: PeriodicEpochBaseDir,
|
||||
// Please see comments on the `layers` field in `identIterator`` if the role of the layers is unclear.
|
||||
layers: []layoutLayer{
|
||||
{populateIdent: populateNoop, filter: isBeforePeriod(before)},
|
||||
{populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
|
||||
{populateIdent: populateRoot, filter: isRootDir}, // extract root from path
|
||||
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
|
||||
{populateIdent: populateRoot, filter: IsBlockRootDir}, // extract root from path
|
||||
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
|
||||
},
|
||||
entries: entries,
|
||||
}, nil
|
||||
@@ -98,7 +98,7 @@ func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
|
||||
}
|
||||
|
||||
func (l *periodicEpochLayout) periodDir(epoch primitives.Epoch) string {
|
||||
return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
|
||||
return filepath.Join(PeriodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
|
||||
}
|
||||
|
||||
func (l *periodicEpochLayout) sszPath(n blobIdent) string {
|
||||
|
||||
@@ -30,7 +30,7 @@ func (l *flatLayout) iterateIdents(before primitives.Epoch) (*identIterator, err
|
||||
if os.IsNotExist(err) {
|
||||
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
|
||||
}
|
||||
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
|
||||
return nil, errors.Wrap(err, "error reading blob base dir")
|
||||
}
|
||||
entries, err := listDir(l.fs, ".")
|
||||
if err != nil {
|
||||
@@ -199,10 +199,10 @@ func (l *flatSlotReader) isSSZAndBefore(fname string) bool {
|
||||
// the epoch can be determined.
|
||||
func isFlatCachedAndBefore(cache *blobStorageSummaryCache, before primitives.Epoch) func(string) bool {
|
||||
if before == 0 {
|
||||
return isRootDir
|
||||
return IsBlockRootDir
|
||||
}
|
||||
return func(p string) bool {
|
||||
if !isRootDir(p) {
|
||||
if !IsBlockRootDir(p) {
|
||||
return false
|
||||
}
|
||||
root, err := rootFromPath(p)
|
||||
|
||||
@@ -129,6 +129,7 @@ type NoHeadAccessDatabase interface {
    // Custody operations.
    UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
    UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
    UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error

    // P2P Metadata operations.
    SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
@@ -2,16 +2,19 @@ package kv

import (
    "context"
    "time"

    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
    bolt "go.etcd.io/bbolt"
)

// UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) custody group count and earliest available slot.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
@@ -70,6 +73,79 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
    return storedEarliestAvailableSlot, storedGroupCount, nil
}

// UpdateEarliestAvailableSlot updates the earliest available slot.
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
    _, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
    defer span.End()

    storedEarliestAvailableSlot := primitives.Slot(0)
    if err := s.db.Update(func(tx *bolt.Tx) error {
        // Retrieve the custody bucket.
        bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
        if err != nil {
            return errors.Wrap(err, "create custody bucket")
        }

        // Retrieve the stored earliest available slot.
        storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
        if len(storedEarliestAvailableSlotBytes) != 0 {
            storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
        }

        // Allow decrease (for backfill scenarios)
        if earliestAvailableSlot <= storedEarliestAvailableSlot {
            storedEarliestAvailableSlot = earliestAvailableSlot
            bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
            if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
                return errors.Wrap(err, "put earliest available slot")
            }
            return nil
        }

        // Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
        // This ensures we don't voluntarily refuse to serve mandatory block data
        genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
        currentSlot := slots.CurrentSlot(genesisTime)
        currentEpoch := slots.ToEpoch(currentSlot)
        minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)

        // Calculate the minimum required epoch (or 0 if we're early in the chain)
        minRequiredEpoch := primitives.Epoch(0)
        if currentEpoch > minEpochsForBlocks {
            minRequiredEpoch = currentEpoch - minEpochsForBlocks
        }

        // Convert to slot to ensure we compare at slot-level granularity
        minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
        if err != nil {
            return errors.Wrap(err, "calculate minimum required slot")
        }

        // Prevent any increase that would put earliest available slot beyond the minimum required slot
        if earliestAvailableSlot > minRequiredSlot {
            return errors.Errorf(
                "cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
                earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
                minRequiredSlot, minRequiredEpoch,
            )
        }

        storedEarliestAvailableSlot = earliestAvailableSlot
        bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
        if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
            return errors.Wrap(err, "put earliest available slot")
        }

        return nil
    }); err != nil {
        return err
    }

    log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")

    return nil
}

// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.
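To make the increase guard in `UpdateEarliestAvailableSlot` concrete (using mainnet values and a hypothetical current epoch): MIN_EPOCHS_FOR_BLOCK_REQUESTS is 33024 and there are 32 slots per epoch, so at current epoch 400000 the minimum required epoch is 400000 - 33024 = 366976 and the minimum required slot is 366976 × 32 = 11,743,232. Any attempt to raise the earliest available slot above that slot is rejected, while any decrease (the backfill case) is always accepted.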
@@ -3,10 +3,13 @@ package kv
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
)
|
||||
|
||||
@@ -132,6 +135,131 @@ func TestUpdateCustodyInfo(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestUpdateEarliestAvailableSlot(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
|
||||
t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot = primitives.Slot(300)
|
||||
initialCount = uint64(10)
|
||||
earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
|
||||
)
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
// Initialize custody info
|
||||
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Update with a lower slot (should update for backfill)
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
|
||||
require.NoError(t, err)
|
||||
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, earliestSlot, storedSlot)
|
||||
require.Equal(t, initialCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
|
||||
db := setupDB(t)
|
||||
|
||||
// Calculate the current slot and minimum required slot based on actual current time
|
||||
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
|
||||
currentSlot := slots.CurrentSlot(genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
var minRequiredEpoch primitives.Epoch
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
} else {
|
||||
minRequiredEpoch = 0
|
||||
}
|
||||
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Initial setup: set earliest slot well before minRequiredSlot
|
||||
const groupCount = uint64(5)
|
||||
initialSlot := primitives.Slot(1000)
|
||||
|
||||
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
|
||||
validSlot := minRequiredSlot - 100
|
||||
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify the database was updated
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, validSlot, storedSlot)
|
||||
require.Equal(t, groupCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
|
||||
db := setupDB(t)
|
||||
|
||||
// Calculate the current slot and minimum required slot based on actual current time
|
||||
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
|
||||
currentSlot := slots.CurrentSlot(genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
var minRequiredEpoch primitives.Epoch
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
} else {
|
||||
minRequiredEpoch = 0
|
||||
}
|
||||
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Initial setup: set a valid earliest slot (well before minRequiredSlot)
|
||||
const initialCount = uint64(5)
|
||||
initialSlot := primitives.Slot(1000)
|
||||
|
||||
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Try to set earliest slot beyond the minimum required slot
|
||||
invalidSlot := minRequiredSlot + 100
|
||||
|
||||
// This should fail
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
|
||||
require.ErrorContains(t, "cannot increase earliest available slot", err)
|
||||
require.ErrorContains(t, "exceeds minimum required slot", err)
|
||||
|
||||
// Verify the database wasn't updated
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, initialSlot, storedSlot)
|
||||
require.Equal(t, initialCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("no change when slot equals current slot", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot = primitives.Slot(100)
|
||||
initialCount = uint64(5)
|
||||
)
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
// Initialize custody info
|
||||
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Update with the same slot
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
|
||||
require.NoError(t, err)
|
||||
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, initialSlot, storedSlot)
|
||||
require.Equal(t, initialCount, storedCount)
|
||||
})
|
||||
}
|
||||
|
||||
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
|
||||
@@ -8,7 +8,6 @@ go_library(
|
||||
"//beacon-chain:__subpackages__",
|
||||
],
|
||||
deps = [
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
"//beacon-chain/db:go_default_library",
|
||||
"//beacon-chain/db/iface:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
@@ -29,8 +28,10 @@ go_test(
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
"//time/slots:go_default_library",
|
||||
"//time/slots/testing:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
|
||||
|
||||
@@ -4,7 +4,6 @@ import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
@@ -25,17 +24,24 @@ const (
    defaultNumBatchesToPrune = 15
)

// custodyUpdater is a tiny interface that p2p service implements; kept here to avoid
// importing the p2p package and creating a cycle.
type custodyUpdater interface {
    UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
}

type ServiceOption func(*Service)

// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
    return func(s *Service) {
        defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
        defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
        if retentionEpochs < defaultRetentionEpochs {
            log.WithField("userEpochs", retentionEpochs).
                WithField("minRequired", defaultRetentionEpochs).
                Warn("Retention period too low, using minimum required value")
                Warn("Retention period too low, ignoring and using minimum required value")
            retentionEpochs = defaultRetentionEpochs
        }

        s.ps = pruneStartSlotFunc(retentionEpochs)
@@ -58,17 +64,23 @@ type Service struct {
    slotTicker     slots.Ticker
    backfillWaiter func() error
    initSyncWaiter func() error
    custody        custodyUpdater
}

func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
    if custody == nil {
        return nil, errors.New("custody updater is required for pruner but was not provided")
    }

    p := &Service{
        ctx:            ctx,
        db:             db,
        ps:             pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
        ps:             pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)),
        done:           make(chan struct{}),
        slotTicker:     slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
        initSyncWaiter: initSyncWaiter,
        backfillWaiter: backfillWaiter,
        custody:        custody,
    }

    for _, o := range opts {
@@ -157,17 +169,45 @@ func (p *Service) prune(slot primitives.Slot) error {
        return errors.Wrap(err, "failed to prune batches")
    }

    log.WithFields(logrus.Fields{
        "prunedUpto":  pruneUpto,
        "duration":    time.Since(tt),
        "currentSlot": slot,
        "batchSize":   defaultPrunableBatchSize,
        "numBatches":  numBatches,
    }).Debug("Successfully pruned chain data")
    earliestAvailableSlot := pruneUpto + 1

    // Update pruning checkpoint.
    p.prunedUpto = pruneUpto

    // Update the earliest available slot after pruning
    if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
        return errors.Wrap(err, "update earliest available slot")
    }

    log.WithFields(logrus.Fields{
        "prunedUpto":            pruneUpto,
        "earliestAvailableSlot": earliestAvailableSlot,
        "duration":              time.Since(tt),
        "currentSlot":           slot,
        "batchSize":             defaultPrunableBatchSize,
        "numBatches":            numBatches,
    }).Debug("Successfully pruned chain data")

    return nil
}

// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
// and also persists it to the database.
func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
    if !params.FuluEnabled() {
        return nil
    }

    // Update the p2p in-memory state
    if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
        return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
    }

    // Persist to database to ensure it survives restarts
    if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
        return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
    }

    return nil
}
@@ -199,15 +239,38 @@ func (p *Service) pruneBatches(pruneUpto primitives.Slot) (int, error) {
}

// pruneStartSlotFunc returns the function to determine the start slot to start pruning.
// The pruning calculation is epoch-aligned,
// ensuring that earliestAvailableSlot is always at an epoch boundary.
// So that we prune epoch-wise.
// e.g. if retentionEpochs is 3 i.e. we should keep at least 3 epochs from current slot,
//
// current slot is 325 (=> current epoch is 10),
// then we should keep epoch 7 onwards (inclusive of epoch 7).
// So we can prune up to the last slot of 6th epoch i.e. 32 x 7 - 1 = 223
// Earliest available slot would be 224 in that case.
func pruneStartSlotFunc(retentionEpochs primitives.Epoch) func(primitives.Slot) primitives.Slot {
    return func(current primitives.Slot) primitives.Slot {
        if retentionEpochs > slots.MaxSafeEpoch() {
            retentionEpochs = slots.MaxSafeEpoch()
        }
        offset := slots.UnsafeEpochStart(retentionEpochs)
        if offset >= current {

        // Calculate epoch-aligned minimum required slot
        currentEpoch := slots.ToEpoch(current)
        var minRequiredEpoch primitives.Epoch
        if currentEpoch > retentionEpochs {
            minRequiredEpoch = currentEpoch - retentionEpochs
        } else {
            minRequiredEpoch = 0
        }

        // Get the start slot of the minimum required epoch
        minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
        if err != nil || minRequiredSlot == 0 {
            return 0
        }
        return current - offset

        // Prune up to (but not including) the minimum required slot
        // This ensures earliestAvailableSlot (pruneUpto + 1) is at the epoch boundary
        return minRequiredSlot - 1
    }
}
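Tracing the comment's own example through the new epoch-aligned branch: with retentionEpochs = 3 and current slot 325 (epoch 10), minRequiredEpoch = 10 - 3 = 7 and minRequiredSlot = EpochStart(7) = 224, so the function returns 223 and the resulting earliestAvailableSlot (pruneUpto + 1) lands exactly on the epoch-7 boundary at slot 224.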
@@ -2,6 +2,7 @@ package pruner
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -10,11 +11,13 @@ import (
|
||||
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/testing/util"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
slottest "github.com/OffchainLabs/prysm/v6/time/slots/testing"
|
||||
"github.com/sirupsen/logrus"
|
||||
|
||||
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/assert"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
logTest "github.com/sirupsen/logrus/hooks/test"
|
||||
)
|
||||
@@ -62,7 +65,9 @@ func TestPruner_PruningConditions(t *testing.T) {
|
||||
if !tt.backfillCompleted {
|
||||
backfillWaiter = waiter
|
||||
}
|
||||
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
|
||||
require.NoError(t, err)
|
||||
|
||||
go p.Start()
|
||||
@@ -97,12 +102,14 @@ func TestPruner_PruneSuccess(t *testing.T) {
|
||||
retentionEpochs := primitives.Epoch(2)
|
||||
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithSlotTicker(slotTicker),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
@@ -133,3 +140,380 @@ func TestPruner_PruneSuccess(t *testing.T) {
|
||||
|
||||
require.NoError(t, p.Stop())
|
||||
}
|
||||
|
||||
// Mock custody updater for testing
|
||||
type mockCustodyUpdater struct {
|
||||
custodyGroupCount uint64
|
||||
earliestAvailableSlot primitives.Slot
|
||||
updateCallCount int
|
||||
}
|
||||
|
||||
func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
m.updateCallCount++
|
||||
m.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
logrus.SetLevel(logrus.DebugLevel)
|
||||
hook := logTest.NewGlobal()
|
||||
ctx, cancel := context.WithCancel(t.Context())
|
||||
defer cancel()
|
||||
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
retentionEpochs := primitives.Epoch(2)
|
||||
|
||||
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
|
||||
|
||||
// Create mock custody updater
|
||||
mockCustody := &mockCustodyUpdater{
|
||||
custodyGroupCount: 4,
|
||||
earliestAvailableSlot: 0,
|
||||
}
|
||||
|
||||
// Create pruner with mock custody updater
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithSlotTicker(slotTicker),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
p.ps = func(current primitives.Slot) primitives.Slot {
|
||||
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
|
||||
}
|
||||
|
||||
// Save some blocks to be pruned
|
||||
for i := primitives.Slot(1); i <= 32; i++ {
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = i
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
|
||||
}
|
||||
|
||||
// Start pruner and trigger at slot 80 (middle of 3rd epoch)
|
||||
go p.Start()
|
||||
currentSlot := primitives.Slot(80)
|
||||
slotTicker.Channel <- currentSlot
|
||||
|
||||
// Wait for pruning to complete
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
// Check that UpdateEarliestAvailableSlot was called
|
||||
assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")
|
||||
|
||||
// The earliest available slot should be pruneUpto + 1
|
||||
// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16
|
||||
// So earliest available slot should be 16 + 1 = 17
|
||||
expectedEarliestSlot := primitives.Slot(17)
|
||||
require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
|
||||
require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")
|
||||
|
||||
// Verify that no error was logged
|
||||
for _, entry := range hook.AllEntries() {
|
||||
if entry.Level == logrus.ErrorLevel {
|
||||
t.Errorf("Unexpected error log: %s", entry.Message)
|
||||
}
|
||||
}
|
||||
|
||||
require.NoError(t, p.Stop())
|
||||
}
|
||||
|
||||
// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
|
||||
type mockCustodyUpdaterWithUpdateError struct {
|
||||
updateCallCount int
|
||||
}
|
||||
|
||||
func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
m.updateCallCount++
|
||||
return errors.New("failed to update earliest available slot")
|
||||
}
|
||||
|
||||
func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
|
||||
// Use minimal config for testing
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.MinimalSpecConfig()
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
ctx := t.Context()
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
|
||||
// Get the minimum required epochs (272 for minimal)
|
||||
minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
// Use a slot that's guaranteed to be after the minimum retention period
|
||||
currentSlot := primitives.Slot(minRequiredEpochs+100) * (params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Calculate epoch-aligned expected prune slot
|
||||
// For epoch-aligned pruning: pruneUpto = epochStart(currentEpoch - retention) - 1
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
|
||||
// Helper function to calculate expected prune slot for a given retention
|
||||
calcExpectedPruneSlot := func(retention primitives.Epoch) primitives.Slot {
|
||||
minEpoch := currentEpoch - retention
|
||||
minSlot, _ := slots.EpochStart(minEpoch)
|
||||
return minSlot - 1
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
userRetentionEpochs primitives.Epoch
|
||||
expectedPruneSlot primitives.Slot
|
||||
description string
|
||||
}{
|
||||
{
|
||||
name: "User value below minimum - should use minimum",
|
||||
userRetentionEpochs: 2, // Way below minimum
|
||||
expectedPruneSlot: calcExpectedPruneSlot(minRequiredEpochs),
|
||||
description: "Should use minimum when user value is too low",
|
||||
},
|
||||
{
|
||||
name: "User value at minimum",
|
||||
userRetentionEpochs: minRequiredEpochs,
|
||||
expectedPruneSlot: calcExpectedPruneSlot(minRequiredEpochs),
|
||||
description: "Should use user value when at minimum",
|
||||
},
|
||||
{
|
||||
name: "User value above minimum",
|
||||
userRetentionEpochs: minRequiredEpochs + 10,
|
||||
expectedPruneSlot: calcExpectedPruneSlot(minRequiredEpochs + 10),
|
||||
description: "Should use user value when above minimum",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
logrus.SetLevel(logrus.WarnLevel)
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
// Create pruner with retention period
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithRetentionPeriod(tt.userRetentionEpochs),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Test the pruning calculation
|
||||
pruneUptoSlot := p.ps(currentSlot)
|
||||
|
||||
// Verify the pruning slot
|
||||
assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)
|
||||
|
||||
// Check if warning was logged when value was too low
|
||||
if tt.userRetentionEpochs < minRequiredEpochs {
|
||||
assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestWithRetentionPeriod_AcceptsSpecMinimum(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.MinimalSpecConfig()
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
ctx := t.Context()
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
|
||||
hook := logTest.NewGlobal()
|
||||
logrus.SetLevel(logrus.WarnLevel)
|
||||
|
||||
// The spec minimum - this SHOULD be accepted without warning
|
||||
specMinimum := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
// Use a slot that's guaranteed to be after the minimum retention period
|
||||
currentSlot := primitives.Slot(specMinimum+100) * (params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithRetentionPeriod(specMinimum),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Test the pruning calculation
|
||||
pruneUptoSlot := p.ps(currentSlot)
|
||||
|
||||
// The expected prune slot should use epoch-aligned calculation
|
||||
// pruneUpto = epochStart(currentEpoch - retention) - 1
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
minRequiredEpoch := currentEpoch - specMinimum
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
require.NoError(t, err)
|
||||
expectedPruneSlot := minRequiredSlot - 1
|
||||
|
||||
assert.Equal(t, expectedPruneSlot, pruneUptoSlot,
|
||||
"Pruner should accept and use MIN_EPOCHS_FOR_BLOCK_REQUESTS without adding 1")
|
||||
|
||||
for _, entry := range hook.AllEntries() {
|
||||
if entry.Level == logrus.WarnLevel {
|
||||
t.Errorf("Unexpected warning when using spec minimum: %s", entry.Message)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestPruneStartSlotFunc_EpochAlignment(t *testing.T) {
|
||||
// This test verifies that the pruning calculation is epoch-aligned.
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.MinimalSpecConfig()
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch // 8 for minimal config
|
||||
retentionEpochs := primitives.Epoch(3)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
currentSlot primitives.Slot
|
||||
expectedEpochAlignment bool
|
||||
expectedMinRequiredSlot primitives.Slot
|
||||
description string
|
||||
}{
|
||||
{
|
||||
name: "Pruning at epoch boundary",
|
||||
currentSlot: primitives.Slot(4 * slotsPerEpoch), // Slot 32 (epoch 4, slot 0 of epoch)
|
||||
expectedEpochAlignment: true,
|
||||
expectedMinRequiredSlot: primitives.Slot(1 * slotsPerEpoch), // Epoch 1 start = slot 8
|
||||
description: "When pruning at epoch boundary, earliestAvailableSlot should be at epoch boundary",
|
||||
},
|
||||
{
|
||||
name: "Pruning at middle of epoch",
|
||||
currentSlot: primitives.Slot(4*slotsPerEpoch + 4), // Slot 36 (epoch 4, slot 4 of epoch)
|
||||
expectedEpochAlignment: true,
|
||||
expectedMinRequiredSlot: primitives.Slot(1 * slotsPerEpoch), // Epoch 1 start = slot 8
|
||||
description: "When pruning mid-epoch, earliestAvailableSlot must still be at epoch boundary",
|
||||
},
|
||||
{
|
||||
name: "Pruning at end of epoch",
|
||||
currentSlot: primitives.Slot(5*slotsPerEpoch - 1), // Slot 39 (epoch 4, last slot)
|
||||
expectedEpochAlignment: true,
|
||||
expectedMinRequiredSlot: primitives.Slot(1 * slotsPerEpoch), // Epoch 1 start = slot 8
|
||||
description: "When pruning at epoch end, earliestAvailableSlot must be at epoch boundary",
|
||||
},
|
||||
{
|
||||
name: "Pruning at various epoch positions",
|
||||
currentSlot: primitives.Slot(8*slotsPerEpoch + 5), // Slot 69 (epoch 8, slot 5 of epoch)
|
||||
expectedEpochAlignment: true,
|
||||
expectedMinRequiredSlot: primitives.Slot(5 * slotsPerEpoch), // Epoch 5 start = slot 40
|
||||
description: "EarliestAvailableSlot should always align to epoch boundary",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create the prune start slot function
|
||||
ps := pruneStartSlotFunc(retentionEpochs)
|
||||
|
||||
// Calculate pruneUpto slot
|
||||
pruneUpto := ps(tt.currentSlot)
|
||||
|
||||
// EarliestAvailableSlot is pruneUpto + 1
|
||||
earliestAvailableSlot := pruneUpto + 1
|
||||
|
||||
// Verify epoch alignment: earliestAvailableSlot should be at an epoch boundary
|
||||
if tt.expectedEpochAlignment {
|
||||
// Check if earliestAvailableSlot is at the start of an epoch
|
||||
epoch := slots.ToEpoch(earliestAvailableSlot)
|
||||
epochStartSlot, err := slots.EpochStart(epoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Equal(t, epochStartSlot, earliestAvailableSlot,
|
||||
"%s: earliestAvailableSlot (%d) should be at epoch boundary (slot %d of epoch %d)",
|
||||
tt.description, earliestAvailableSlot, epochStartSlot, epoch)
|
||||
}
|
||||
|
||||
// Verify it matches the expected minimum required slot for custody validation
|
||||
assert.Equal(t, tt.expectedMinRequiredSlot, earliestAvailableSlot,
|
||||
"%s: earliestAvailableSlot should match custody minimum required slot",
|
||||
tt.description)
|
||||
})
|
||||
}
|
||||
}
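For readers following the table above, here is a minimal sketch of an epoch-aligned prune-start calculation that reproduces the expected values (retention of 3 epochs, 8 slots per epoch in the minimal config). The name mirrors pruneStartSlotFunc from the test, but the body is an illustrative assumption rather than the implementation under test, and it relies on the params, primitives, and slots packages already imported in this file.

// pruneStartSlotSketch is a hypothetical reconstruction: it returns the last slot that
// may be pruned so that `retentionEpochs` whole epochs plus the in-progress epoch are
// kept, which means pruneUpto+1 always lands on an epoch boundary.
func pruneStartSlotSketch(retentionEpochs primitives.Epoch) func(primitives.Slot) primitives.Slot {
	return func(current primitives.Slot) primitives.Slot {
		currentEpoch := slots.ToEpoch(current)
		if currentEpoch <= retentionEpochs {
			return 0 // nothing old enough to prune yet
		}
		firstKeptEpoch := currentEpoch - retentionEpochs
		// Example from the table: current slot 36 (epoch 4), retention 3 ->
		// firstKeptEpoch 1, pruneUpto = 1*8 - 1 = 7, earliestAvailableSlot = 8.
		return primitives.Slot(firstKeptEpoch)*params.BeaconConfig().SlotsPerEpoch - 1
	}
}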
|
||||
|
||||
func TestPruner_UpdateEarliestSlotError(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
logrus.SetLevel(logrus.DebugLevel)
|
||||
hook := logTest.NewGlobal()
|
||||
ctx, cancel := context.WithCancel(t.Context())
|
||||
defer cancel()
|
||||
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
retentionEpochs := primitives.Epoch(2)
|
||||
|
||||
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
|
||||
|
||||
// Create mock custody updater that returns an error for UpdateEarliestAvailableSlot
|
||||
mockCustody := &mockCustodyUpdaterWithUpdateError{}
|
||||
|
||||
// Create pruner with mock custody updater
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithSlotTicker(slotTicker),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
p.ps = func(current primitives.Slot) primitives.Slot {
|
||||
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
|
||||
}
|
||||
|
||||
// Save some blocks to be pruned
|
||||
for i := primitives.Slot(1); i <= 32; i++ {
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = i
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
|
||||
}
|
||||
|
||||
// Start pruner and trigger at slot 80
|
||||
go p.Start()
|
||||
currentSlot := primitives.Slot(80)
|
||||
slotTicker.Channel <- currentSlot
|
||||
|
||||
// Wait for pruning to complete
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
// Should have called UpdateEarliestAvailableSlot
|
||||
assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")
|
||||
|
||||
// Check that error was logged by the prune function
|
||||
found := false
|
||||
for _, entry := range hook.AllEntries() {
|
||||
if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")
|
||||
|
||||
require.NoError(t, p.Stop())
|
||||
}
|
||||
|
||||
@@ -6,6 +6,7 @@ go_library(
|
||||
"cache.go",
|
||||
"helpers.go",
|
||||
"lightclient.go",
|
||||
"log.go",
|
||||
"store.go",
|
||||
],
|
||||
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client",
|
||||
|
||||
5
beacon-chain/light-client/log.go
Normal file
@@ -0,0 +1,5 @@
|
||||
package light_client
|
||||
|
||||
import "github.com/sirupsen/logrus"
|
||||
|
||||
var log = logrus.WithField("prefix", "light-client")
|
||||
@@ -14,7 +14,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
"github.com/pkg/errors"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
|
||||
|
||||
@@ -1108,6 +1108,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
|
||||
genesis,
|
||||
initSyncWaiter(cliCtx.Context, b.initialSyncComplete),
|
||||
backfillService.WaitForCompletion,
|
||||
b.fetchP2P(),
|
||||
opts...,
|
||||
)
|
||||
if err != nil {
|
||||
|
||||
@@ -115,6 +115,57 @@ func (s *Service) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
|
||||
return earliestAvailableSlot, custodyGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot updates the earliest available slot.
|
||||
//
|
||||
// IMPORTANT: This function should only be called when Fulu is enabled. The caller is responsible
|
||||
// for checking params.FuluEnabled() before calling this function.
|
||||
func (s *Service) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
s.custodyInfoLock.Lock()
|
||||
defer s.custodyInfoLock.Unlock()
|
||||
|
||||
if s.custodyInfo == nil {
|
||||
return errors.New("no custody info available")
|
||||
}
|
||||
|
||||
currentSlot := slots.CurrentSlot(s.genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
|
||||
// Allow decrease (for backfill scenarios)
|
||||
if earliestAvailableSlot < s.custodyInfo.earliestAvailableSlot {
|
||||
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
|
||||
// This ensures we don't voluntarily refuse to serve mandatory block data
|
||||
// This check applies regardless of whether we're early or late in the chain
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
// Calculate the minimum required epoch (or 0 if we're early in the chain)
|
||||
minRequiredEpoch := primitives.Epoch(0)
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
}
|
||||
|
||||
// Convert to slot to ensure we compare at slot-level granularity, not epoch-level
|
||||
// This prevents allowing increases to slots within minRequiredEpoch that are after its first slot
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "epoch start")
|
||||
}
|
||||
|
||||
// Prevent any increase that would put earliest slot beyond the minimum required slot
|
||||
if earliestAvailableSlot > s.custodyInfo.earliestAvailableSlot && earliestAvailableSlot > minRequiredSlot {
|
||||
return errors.Errorf(
|
||||
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
|
||||
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot), minRequiredSlot, minRequiredEpoch,
|
||||
)
|
||||
}
|
||||
|
||||
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
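As a usage illustration of the rules enforced above, here is a hedged sketch of how a caller such as the database pruner might derive a new earliest available slot after a prune and hand it to this service. Only identifiers that already appear in this hunk are reused; advertiseEarliestSlot itself and the pre-clamping step are assumptions for the example, not code from this change.

// Sketch only: advertise pruneUpto+1 as the new earliest available slot, but never push it
// past the MIN_EPOCHS_FOR_BLOCK_REQUESTS window that UpdateEarliestAvailableSlot rejects.
func advertiseEarliestSlot(s *Service, pruneUpto, current primitives.Slot) error {
	newEarliest := pruneUpto + 1

	minEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
	currentEpoch := slots.ToEpoch(current)
	minRequiredEpoch := primitives.Epoch(0)
	if currentEpoch > minEpochs {
		minRequiredEpoch = currentEpoch - minEpochs
	}
	minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
	if err != nil {
		return errors.Wrap(err, "epoch start")
	}
	if newEarliest > minRequiredSlot {
		newEarliest = minRequiredSlot // stay within the mandatory serving window
	}
	return s.UpdateEarliestAvailableSlot(newEarliest)
}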
|
||||
|
||||
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
|
||||
// It first tries to get the custody group count from the peer's metadata,
|
||||
// then falls back to the ENR value if the metadata is not available, then
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
|
||||
@@ -167,6 +168,148 @@ func TestUpdateCustodyInfo(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateEarliestAvailableSlot(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
t.Run("Valid update", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot primitives.Slot = 50
|
||||
newSlot primitives.Slot = 100
|
||||
groupCount uint64 = 5
|
||||
)
|
||||
|
||||
// Set up a scenario where we're far enough in the chain that increasing to newSlot is valid
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks + 100 // Well beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
service := &Service{
|
||||
// Set genesis time in the past so currentSlot is the "current" slot
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: initialSlot,
|
||||
groupCount: groupCount,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(newSlot)
|
||||
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, newSlot, service.custodyInfo.earliestAvailableSlot)
|
||||
require.Equal(t, groupCount, service.custodyInfo.groupCount) // Should preserve group count
|
||||
})
|
||||
|
||||
t.Run("Earlier slot - allowed for backfill", func(t *testing.T) {
|
||||
const initialSlot primitives.Slot = 100
|
||||
const earlierSlot primitives.Slot = 50
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now(),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: initialSlot,
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(earlierSlot)
|
||||
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earlierSlot, service.custodyInfo.earliestAvailableSlot) // Should decrease for backfill
|
||||
})
|
||||
|
||||
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - late in chain", func(t *testing.T) {
|
||||
// Set current time far enough in the future to have a meaningful MIN_EPOCHS_FOR_BLOCK_REQUESTS period
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks + 100 // Well beyond the minimum
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Calculate the minimum allowed epoch
|
||||
minRequiredEpoch := currentEpoch - minEpochsForBlocks
|
||||
minRequiredSlot := primitives.Slot(minRequiredEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Try to set earliest slot to a value within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period (should fail)
|
||||
attemptedSlot := minRequiredSlot + 1000 // Within the mandatory retention period
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: minRequiredSlot - 100, // Current value is before the min required
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
|
||||
|
||||
require.NotNil(t, err)
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
|
||||
})
|
||||
|
||||
t.Run("Prevent increase at epoch boundary - slot precision matters", func(t *testing.T) {
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks + 976 // Current epoch
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
minRequiredEpoch := currentEpoch - minEpochsForBlocks // = 976
|
||||
storedEarliestSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) - 232 // Before minRequired
|
||||
|
||||
// Try to set earliest to slot 8 of the minRequiredEpoch (should fail with slot comparison)
|
||||
attemptedSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) + 8
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: storedEarliestSlot,
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
|
||||
|
||||
require.NotNil(t, err, "Should prevent increasing earliest slot beyond the minimum required SLOT (not just epoch)")
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
|
||||
})
|
||||
|
||||
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - early in chain", func(t *testing.T) {
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks - 10 // Early in chain, BEFORE we have MIN_EPOCHS_FOR_BLOCK_REQUESTS of history
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Current earliest slot is at slot 100
|
||||
currentEarliestSlot := primitives.Slot(100)
|
||||
|
||||
// Try to increase earliest slot to slot 1000 (which would be within the mandatory window from currentSlot)
|
||||
attemptedSlot := primitives.Slot(1000)
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: currentEarliestSlot,
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
|
||||
|
||||
require.NotNil(t, err, "Should prevent increasing earliest slot within the mandatory retention window, even early in chain")
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
|
||||
})
|
||||
|
||||
t.Run("Nil custody info - should return error", func(t *testing.T) {
|
||||
service := &Service{
|
||||
genesisTime: time.Now(),
|
||||
custodyInfo: nil, // No custody info set
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(100)
|
||||
|
||||
require.NotNil(t, err)
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "no custody info available"))
|
||||
})
|
||||
}
|
||||
|
||||
func TestCustodyGroupCountFromPeer(t *testing.T) {
|
||||
const (
|
||||
expectedENR uint64 = 7
|
||||
|
||||
@@ -126,6 +126,7 @@ type (
|
||||
EarliestAvailableSlot(ctx context.Context) (primitives.Slot, error)
|
||||
CustodyGroupCount(ctx context.Context) (uint64, error)
|
||||
UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
|
||||
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
|
||||
CustodyGroupCountFromPeer(peer.ID) uint64
|
||||
}
|
||||
)
|
||||
|
||||
@@ -213,6 +213,11 @@ func (s *FakeP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
|
||||
return earliestAvailableSlot, custodyGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot -- fake.
|
||||
func (*FakeP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// CustodyGroupCountFromPeer -- fake.
|
||||
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
|
||||
return 0
|
||||
|
||||
@@ -499,6 +499,15 @@ func (s *TestP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
|
||||
return s.earliestAvailableSlot, s.custodyGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot .
|
||||
func (s *TestP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
s.custodyInfoMut.Lock()
|
||||
defer s.custodyInfoMut.Unlock()
|
||||
|
||||
s.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
|
||||
// It first tries to get the custody group count from the peer's metadata,
|
||||
// then falls back to the ENR value if the metadata is not available, then
|
||||
|
||||
@@ -68,7 +68,10 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
|
||||
}
|
||||
|
||||
if defaultKeysExist {
|
||||
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
|
||||
if !params.FuluEnabled() {
|
||||
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
|
||||
}
|
||||
|
||||
return privKeyFromFile(defaultKeyPath)
|
||||
}
|
||||
|
||||
|
||||
@@ -229,7 +229,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
|
||||
sBlk.SetVoluntaryExits(vs.getExits(head, sBlk.Block().Slot()))
|
||||
|
||||
// Set sync aggregate. New in Altair.
|
||||
vs.setSyncAggregate(ctx, sBlk)
|
||||
vs.setSyncAggregate(ctx, sBlk, head)
|
||||
|
||||
// Set bls to execution change. New in Capella.
|
||||
vs.setBlsToExecData(sBlk, head)
|
||||
@@ -312,14 +312,14 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
|
||||
rob, err := blocks.NewROBlockWithRoot(block, root)
|
||||
if block.IsBlinded() {
|
||||
block, blobSidecars, err = vs.handleBlindedBlock(ctx, block)
|
||||
if errors.Is(err, builderapi.ErrBadGateway) {
|
||||
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
|
||||
return ðpb.ProposeResponse{BlockRoot: root[:]}, nil
|
||||
}
|
||||
} else if block.Version() >= version.Deneb {
|
||||
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
|
||||
}
|
||||
if err != nil {
|
||||
if errors.Is(err, builderapi.ErrBadGateway) && block.IsBlinded() {
|
||||
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
|
||||
return ðpb.ProposeResponse{BlockRoot: root[:]}, nil
|
||||
}
|
||||
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
|
||||
}
|
||||
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"context"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
@@ -20,12 +21,12 @@ import (
|
||||
"github.com/prysmaticlabs/go-bitfield"
|
||||
)
|
||||
|
||||
func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock) {
|
||||
func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock, headState state.BeaconState) {
|
||||
if blk.Version() < version.Altair {
|
||||
return
|
||||
}
|
||||
|
||||
syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot())
|
||||
syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot(), headState)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("Could not get sync aggregate")
|
||||
emptySig := [96]byte{0xC0}
|
||||
@@ -47,7 +48,7 @@ func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBea
|
||||
|
||||
// getSyncAggregate retrieves the sync contributions from the pool to construct the sync aggregate object.
|
||||
// The contributions are filtered based on matching of the input root and slot then profitability.
|
||||
func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte) (*ethpb.SyncAggregate, error) {
|
||||
func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte, headState state.BeaconState) (*ethpb.SyncAggregate, error) {
|
||||
_, span := trace.StartSpan(ctx, "ProposerServer.getSyncAggregate")
|
||||
defer span.End()
|
||||
|
||||
@@ -62,7 +63,7 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
|
||||
// Contributions have to match the input root
|
||||
proposerContributions := proposerSyncContributions(poolContributions).filterByBlockRoot(root)
|
||||
|
||||
aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions)
|
||||
aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions, headState)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get aggregated sync committee messages")
|
||||
}
|
||||
@@ -123,6 +124,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
|
||||
slot primitives.Slot,
|
||||
root [32]byte,
|
||||
poolContributions []*ethpb.SyncCommitteeContribution,
|
||||
st state.BeaconState,
|
||||
) ([]*ethpb.SyncCommitteeContribution, error) {
|
||||
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
|
||||
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / subcommitteeCount
|
||||
@@ -146,10 +148,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
|
||||
messageSigs = append(messageSigs, msg.Signature)
|
||||
}
|
||||
}
|
||||
st, err := vs.HeadFetcher.HeadState(ctx)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get head state")
|
||||
}
|
||||
|
||||
positions, err := helpers.CurrentPeriodPositions(st, messageIndices)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get sync committee positions")
|
||||
|
||||
@@ -9,6 +9,7 @@ import (
|
||||
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/crypto/bls"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
|
||||
@@ -51,15 +52,15 @@ func TestProposer_GetSyncAggregate_OK(t *testing.T) {
|
||||
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
|
||||
}
|
||||
|
||||
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
|
||||
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot), st)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
|
||||
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot), st)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
|
||||
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot), st)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
|
||||
}
|
||||
@@ -68,7 +69,7 @@ func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
|
||||
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
|
||||
require.NoError(t, err)
|
||||
s := &Server{} // Server is not initialized with sync committee pool.
|
||||
s.setSyncAggregate(t.Context(), b)
|
||||
s.setSyncAggregate(t.Context(), b, nil)
|
||||
agg, err := b.Block().Body().SyncAggregate()
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -138,7 +139,7 @@ func TestProposer_GetSyncAggregate_IncludesSyncCommitteeMessages(t *testing.T) {
|
||||
}
|
||||
|
||||
// The final sync aggregates must have indexes [0,1,2,3] set for both subcommittees
|
||||
sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r)
|
||||
sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r, st)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(0))
|
||||
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(1))
|
||||
@@ -194,8 +195,99 @@ func Test_aggregatedSyncCommitteeMessages_NoIntersectionWithPoolContributions(t
|
||||
BlockRoot: r[:],
|
||||
}
|
||||
|
||||
aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont})
|
||||
aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont}, st)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 1, len(aggregated))
|
||||
assert.Equal(t, false, aggregated[0].AggregationBits.BitAt(3))
|
||||
}
|
||||
|
||||
func TestGetSyncAggregate_CorrectStateAtSyncCommitteePeriodBoundary(t *testing.T) {
|
||||
helpers.ClearCache()
|
||||
syncPeriodBoundaryEpoch := primitives.Epoch(274176) // Real epoch from the bug report
|
||||
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
|
||||
|
||||
preEpochState, keys := util.DeterministicGenesisStateAltair(t, 100)
|
||||
require.NoError(t, preEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch-1)) // Last slot of previous epoch
|
||||
|
||||
postEpochState := preEpochState.Copy()
|
||||
require.NoError(t, postEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch+2)) // After 2 missed slots
|
||||
|
||||
oldCommittee := ðpb.SyncCommittee{
|
||||
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
|
||||
}
|
||||
newCommittee := ðpb.SyncCommittee{
|
||||
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
|
||||
}
|
||||
|
||||
for i := 0; i < int(params.BeaconConfig().SyncCommitteeSize); i++ {
|
||||
if i < len(keys) {
|
||||
oldCommittee.Pubkeys[i] = keys[i%len(keys)].PublicKey().Marshal()
|
||||
// Use different keys for new committee to simulate rotation
|
||||
newCommittee.Pubkeys[i] = keys[(i+10)%len(keys)].PublicKey().Marshal()
|
||||
}
|
||||
}
|
||||
|
||||
require.NoError(t, preEpochState.SetCurrentSyncCommittee(oldCommittee))
|
||||
require.NoError(t, postEpochState.SetCurrentSyncCommittee(newCommittee))
|
||||
|
||||
mockChainService := &chainmock.ChainService{
|
||||
State: postEpochState,
|
||||
}
|
||||
|
||||
proposerServer := &Server{
|
||||
HeadFetcher: mockChainService,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
SyncCommitteePool: synccommittee.NewStore(),
|
||||
}
|
||||
|
||||
slot := primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch + 1 // First slot of new epoch
|
||||
blockRoot := [32]byte{0x01, 0x02, 0x03}
|
||||
|
||||
msg1 := ðpb.SyncCommitteeMessage{
|
||||
Slot: slot,
|
||||
BlockRoot: blockRoot[:],
|
||||
ValidatorIndex: 0, // This validator is in position 0 of OLD committee
|
||||
Signature: bls.NewAggregateSignature().Marshal(),
|
||||
}
|
||||
msg2 := ðpb.SyncCommitteeMessage{
|
||||
Slot: slot,
|
||||
BlockRoot: blockRoot[:],
|
||||
ValidatorIndex: 1, // This validator is in position 1 of OLD committee
|
||||
Signature: bls.NewAggregateSignature().Marshal(),
|
||||
}
|
||||
|
||||
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg1))
|
||||
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg2))
|
||||
|
||||
aggregateWrongState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, postEpochState)
|
||||
require.NoError(t, err)
|
||||
|
||||
aggregateCorrectState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, preEpochState)
|
||||
require.NoError(t, err)
|
||||
|
||||
wrongStateBits := bitfield.Bitlist(aggregateWrongState.SyncCommitteeBits)
|
||||
correctStateBits := bitfield.Bitlist(aggregateCorrectState.SyncCommitteeBits)
|
||||
|
||||
wrongStateHasValidators := false
|
||||
correctStateHasValidators := false
|
||||
|
||||
for i := 0; i < len(wrongStateBits); i++ {
|
||||
if wrongStateBits[i] != 0 {
|
||||
wrongStateHasValidators = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < len(correctStateBits); i++ {
|
||||
if correctStateBits[i] != 0 {
|
||||
correctStateHasValidators = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
assert.Equal(t, true, correctStateHasValidators, "Correct state should include validators that sent messages")
|
||||
assert.Equal(t, false, wrongStateHasValidators, "Wrong state should not find validators in incorrect sync committee")
|
||||
|
||||
t.Logf("Wrong state aggregate bits: %x (has validators: %v)", wrongStateBits, wrongStateHasValidators)
|
||||
t.Logf("Correct state aggregate bits: %x (has validators: %v)", correctStateBits, correctStateHasValidators)
|
||||
}
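A quick arithmetic check on the epoch chosen above: assuming the mainnet period length of 256 epochs (read here from a config field assumed to be named EpochsPerSyncCommitteePeriod), epoch 274176 = 256 × 1071, i.e. the first epoch of a new sync committee period, which is why the pre- and post-boundary states carry different committees. A small illustrative helper:

// Illustration only: reports whether an epoch starts a new sync committee period.
// 274176 % 256 == 0, so the epoch from the bug report sits exactly on a boundary.
func isSyncCommitteePeriodBoundary(epoch primitives.Epoch) bool {
	return uint64(epoch)%uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod) == 0
}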
|
||||
|
||||
@@ -17,7 +17,6 @@ go_library(
|
||||
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
"//beacon-chain/core/signing:go_default_library",
|
||||
"//beacon-chain/das:go_default_library",
|
||||
"//beacon-chain/db:go_default_library",
|
||||
@@ -61,7 +60,6 @@ go_test(
|
||||
],
|
||||
embed = [":go_default_library"],
|
||||
deps = [
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
"//beacon-chain/core/signing:go_default_library",
|
||||
"//beacon-chain/das:go_default_library",
|
||||
"//beacon-chain/db:go_default_library",
|
||||
|
||||
@@ -3,12 +3,12 @@ package backfill
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
@@ -348,7 +348,7 @@ func (*Service) Status() error {
|
||||
// minimumBackfillSlot determines the lowest slot that backfill needs to download based on looking back
|
||||
// MIN_EPOCHS_FOR_BLOCK_REQUESTS from the current slot.
|
||||
func minimumBackfillSlot(current primitives.Slot) primitives.Slot {
|
||||
oe := helpers.MinEpochsForBlockRequests()
|
||||
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
if oe > slots.MaxSafeEpoch() {
|
||||
oe = slots.MaxSafeEpoch()
|
||||
}
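The hunk above only shows the change to how oe is obtained; for context, a hedged reconstruction of how the rest of the calculation plausibly proceeds, consistent with TestMinimumBackfillSlot below, which drives the function with a current slot that is MIN_EPOCHS_FOR_BLOCK_REQUESTS plus 100 epochs in. This is a sketch under those assumptions, not the code in this commit.

// Sketch: look back MIN_EPOCHS_FOR_BLOCK_REQUESTS epochs (clamped to MaxSafeEpoch) from
// the current slot, flooring at slot 1 when the chain is younger than that window.
func minimumBackfillSlotSketch(current primitives.Slot) primitives.Slot {
	oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
	if oe > slots.MaxSafeEpoch() {
		oe = slots.MaxSafeEpoch()
	}
	offset := primitives.Slot(oe.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
	if offset >= current {
		return 1 // keep everything after genesis on a young chain
	}
	return current - offset
}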
|
||||
|
||||
@@ -5,7 +5,6 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
|
||||
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
|
||||
@@ -84,7 +83,7 @@ func TestServiceInit(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestMinimumBackfillSlot(t *testing.T) {
|
||||
oe := helpers.MinEpochsForBlockRequests()
|
||||
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
currSlot := (oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
|
||||
minSlot := minimumBackfillSlot(primitives.Slot(currSlot))
|
||||
@@ -109,7 +108,7 @@ func testReadN(ctx context.Context, t *testing.T, c chan batch, n int, into []ba
|
||||
}
|
||||
|
||||
func TestBackfillMinSlotDefault(t *testing.T) {
|
||||
oe := helpers.MinEpochsForBlockRequests()
|
||||
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
current := primitives.Slot((oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
|
||||
s := &Service{}
|
||||
specMin := minimumBackfillSlot(current)
|
||||
|
||||
@@ -192,6 +192,13 @@ var (
|
||||
},
|
||||
)
|
||||
|
||||
dataColumnsRecoveredFromELTotal = promauto.NewCounter(
|
||||
prometheus.CounterOpts{
|
||||
Name: "data_columns_recovered_from_el_total",
|
||||
Help: "Count the number of times data columns have been recovered from the execution layer.",
|
||||
},
|
||||
)
|
||||
|
||||
// Data column sidecar validation, beacon metrics specs
|
||||
dataColumnSidecarVerificationRequestsCounter = promauto.NewCounter(prometheus.CounterOpts{
|
||||
Name: "beacon_data_column_sidecar_processing_requests_total",
|
||||
|
||||
@@ -224,6 +224,8 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
|
||||
}
|
||||
|
||||
if len(unseenIndices) > 0 {
|
||||
dataColumnsRecoveredFromELTotal.Inc()
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"root": fmt.Sprintf("%#x", source.Root()),
|
||||
"slot": source.Slot(),
|
||||
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Use service context and continue on slasher attestation errors (#15803).
|
||||
3
changelog/bastin_lc-prefix.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Add log prefix to the light-client package.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling roots calculation for generated types while avoiding duplicate logic.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Small code changes for reusability and readability to processAggregate.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- block event probably shouldn't be sent on certain block processing failures, now sends only on successing processing Block is NON-CANONICAL, Block IS CANONICAL but getFCUArgs FAILS, and Full success
|
||||
@@ -1,7 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fixed web3signer e2e, issues caused due to a regression on old fork support
|
||||
|
||||
### Changed
|
||||
|
||||
- updated web3signer to 25.9.1
|
||||
3
changelog/james-prysm_remove-deposit-keymanager-log.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Removed
|
||||
|
||||
- log mentioning removed flag `--show-deposit-data`
|
||||
3
changelog/james-prysm_v6.1.3.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Changelog entries for v6.1.3 through v6.1.2
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Decreased attestation gossip validation batch deadline to 5ms.
|
||||
2
changelog/kasey_default-layout-by-epoch.md
Normal file
@@ -0,0 +1,2 @@
|
||||
### Changed
|
||||
- Use the `by-epoch` blob storage layout by default and log a warning to users who continue to use the flat layout, encouraging them to switch.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Data column sidecars fetch: Adjust log levels.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the Fulu fork epoch is set to the far future epoch.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `VerifyDataColumnSidecar`: Check that there are not too many commitments.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`.
|
||||
2
changelog/manu-fulu-log.md
Normal file
@@ -0,0 +1,2 @@
|
||||
### Fixed
|
||||
- Remove `Reading static P2P private key from a file.` log if Fulu is enabled.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Added
|
||||
- Add Grandine to P2P known agents. (Useful for metrics)
|
||||
@@ -1,2 +0,0 @@
|
||||
### Fixed
|
||||
- `HasAtLeastOneIndex`: Check the index is not too high.
|
||||
2
changelog/manu-number-custody-groups.md
Normal file
@@ -0,0 +1,2 @@
|
||||
### Fixed
|
||||
- `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`.
|
||||
@@ -1,2 +0,0 @@
|
||||
### Ignored
|
||||
- Fix (unreleased) bug where the preallocated slice for KZG Proofs was 48x bigger than it needed to be.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- reject committee index >= committees_per_slot in unaggregated attestation validation
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Mark epoch transition correctly on new head events
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Do not mark blocks as invalid from ErrNotDescendantOfFinalized
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Mark the block as invalid if it has an invalid signature.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Remove redundant check for genesis root at startup.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Ignored
|
||||
|
||||
- Changelog entries for v6.1.2 through v6.0.5
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Improve returning individual message errors from Beacon API.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Display error messages from the server verbatim when they are not encoded as `application/json`.
|
||||
3
changelog/radek_rest-custom-headers.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Allow custom headers in validator client HTTP requests.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Do not verify block data when calculating rewards.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Fixed
|
||||
|
||||
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejecting attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion.
|
||||
4
changelog/satushh-plus-one-bug.md
Normal file
@@ -0,0 +1,4 @@
|
||||
### Fixed
|
||||
|
||||
- corrected defaultRetentionEpochs in pruner
|
||||
- Epoch-aligned pruning: prune only on whole-epoch boundaries, with no fractional-epoch pruning.
|
||||
3
changelog/satushh-update-easlot-pruning.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Update the earliest available slot after pruning operations in beacon chain database pruner. This ensures the P2P layer accurately knows which historical data is available after pruning, preventing nodes from advertising or attempting to serve data that has been pruned.
|
||||
@@ -1,7 +0,0 @@
|
||||
### Added
|
||||
|
||||
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of `Container` type.
|
||||
|
||||
### Changed
|
||||
|
||||
- SSZ-QL: Clarify `Size` method with more sophisticated `SSZType`s.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Added
|
||||
|
||||
- SSZ-QL: Access n-th element in `List`/`Vector`.
|
||||
3
changelog/ttsao_add-columns-recovery-metric.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Added
|
||||
|
||||
- Metric to track data columns recovered from execution layer
|
||||
3
changelog/ttsao_fix-optimistic-blinded-blocks.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Ignored
|
||||
|
||||
- Return optimistic response only when handling blinded blocks in proposer
|
||||
3
changelog/ttsao_fix-sync-aggregate-state.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Fixed
|
||||
|
||||
- Sync committee uses correct state to calculate position
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Gracefully handle submit blind block returning 502 errors.
|
||||
@@ -1,3 +0,0 @@
|
||||
### Changed
|
||||
|
||||
- Process pending attestations after pending blocks are cleared
|
||||
3
changelog/ttsao_update-spec-tests.md
Normal file
@@ -0,0 +1,3 @@
|
||||
### Changed
|
||||
|
||||
- Updated consensus spec tests to v1.6.0-beta.1 with new hashes and URL template
|
||||
@@ -12,6 +12,7 @@ go_library(
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@com_github_urfave_cli_v2//:go_default_library",
|
||||
],
|
||||
)
|
||||
@@ -19,8 +20,10 @@ go_library(
|
||||
go_test(
|
||||
name = "go_default_test",
|
||||
srcs = ["options_test.go"],
|
||||
data = glob(["testdata/**"]),
|
||||
embed = [":go_default_library"],
|
||||
deps = [
|
||||
"//beacon-chain/db/filesystem:go_default_library",
|
||||
"//cmd:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
|
||||
@@ -1,6 +1,8 @@
|
||||
package storage
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
"strings"
|
||||
|
||||
@@ -10,6 +12,7 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/pkg/errors"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"github.com/urfave/cli/v2"
|
||||
)
|
||||
|
||||
@@ -25,9 +28,9 @@ var (
|
||||
Aliases: []string{"extend-blob-retention-epoch"},
|
||||
}
|
||||
BlobStorageLayout = &cli.StringFlag{
|
||||
Name: "blob-storage-layout",
|
||||
Usage: layoutFlagUsage(),
|
||||
Value: filesystem.LayoutNameFlat,
|
||||
Name: "blob-storage-layout",
|
||||
Usage: layoutFlagUsage(),
|
||||
DefaultText: fmt.Sprintf("\"%s\", unless a different existing layout is detected", filesystem.LayoutNameByEpoch),
|
||||
}
|
||||
DataColumnStoragePathFlag = &cli.PathFlag{
|
||||
Name: "data-column-path",
|
||||
@@ -35,6 +38,14 @@ var (
|
||||
}
|
||||
)
|
||||
|
||||
// Flags is the list of CLI flags for configuring blob storage.
|
||||
var Flags = []cli.Flag{
|
||||
BlobStoragePathFlag,
|
||||
BlobRetentionEpochFlag,
|
||||
BlobStorageLayout,
|
||||
DataColumnStoragePathFlag,
|
||||
}
|
||||
|
||||
func layoutOptions() string {
|
||||
return "available options are: " + strings.Join(filesystem.LayoutNames, ", ") + "."
|
||||
}
|
||||
@@ -62,10 +73,20 @@ func BeaconNodeOptions(c *cli.Context) ([]node.Option, error) {
|
||||
return nil, errors.Wrap(err, "blob retention epoch")
|
||||
}
|
||||
|
||||
blobPath := blobStoragePath(c)
|
||||
layout, err := detectLayout(blobPath, c)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "detecting blob storage layout")
|
||||
}
|
||||
if layout == filesystem.LayoutNameFlat {
|
||||
log.Warnf("Existing '%s' blob storage layout detected. Consider setting the flag --%s=%s for faster startup and more reliable pruning. Setting this flag will automatically migrate your existing blob storage to the newer layout on the next restart.",
|
||||
|
||||
filesystem.LayoutNameFlat, BlobStorageLayout.Name, filesystem.LayoutNameByEpoch)
|
||||
}
|
||||
blobStorageOptions := node.WithBlobStorageOptions(
|
||||
filesystem.WithBlobRetentionEpochs(blobRetentionEpoch),
|
||||
filesystem.WithBasePath(blobStoragePath(c)),
|
||||
filesystem.WithLayout(c.String(BlobStorageLayout.Name)), // This is validated in the Action func for BlobStorageLayout.
|
||||
filesystem.WithBasePath(blobPath),
|
||||
filesystem.WithLayout(layout), // This is validated in the Action func for BlobStorageLayout.
|
||||
)
|
||||
|
||||
dataColumnRetentionEpoch, err := dataColumnRetentionEpoch(c)
|
||||
@@ -82,6 +103,53 @@ func BeaconNodeOptions(c *cli.Context) ([]node.Option, error) {
|
||||
return opts, nil
|
||||
}
|
||||
|
||||
// stringFlagGetter makes testing detectLayout easier
|
||||
// because we don't need to mess with FlagSets and cli types.
|
||||
type stringFlagGetter interface {
|
||||
String(name string) string
|
||||
}
|
||||
|
||||
// detectLayout determines which layout to use based on explicit user flags or by probing the
|
||||
// blob directory to determine the previously used layout.
|
||||
// - explicit: If the user has specified a layout flag, that layout is returned.
|
||||
// - flat: If directories that look like flat layout's block root paths are present.
|
||||
// - by-epoch: default if neither of the above is true.
|
||||
func detectLayout(dir string, c stringFlagGetter) (string, error) {
|
||||
explicit := c.String(BlobStorageLayout.Name)
|
||||
if explicit != "" {
|
||||
return explicit, nil
|
||||
}
|
||||
|
||||
dir = path.Clean(dir)
|
||||
// nosec: this path is provided by the node operator via flag
|
||||
base, err := os.Open(dir) // #nosec G304
|
||||
if err != nil {
|
||||
// 'blobs' directory does not exist yet, so default to by-epoch.
|
||||
return filesystem.LayoutNameByEpoch, nil
|
||||
}
|
||||
defer func() {
|
||||
if err := base.Close(); err != nil {
|
||||
log.WithError(err).Errorf("Could not close blob storage directory")
|
||||
}
|
||||
}()
|
||||
|
||||
// When we go looking for existing by-root directories, we only need to find one directory
|
||||
// but one of those directories could be the `by-epoch` layout's top-level directory,
|
||||
// and it seems possible that on some platforms we could get extra system directories that I don't
|
||||
// know how to anticipate (looking at you, Windows), so I picked 16 as a small number with a generous
|
||||
// amount of wiggle room to be confident that we'll likely see a by-root directory if one exists.
|
||||
entries, err := base.Readdirnames(16)
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "reading blob storage directory")
|
||||
}
|
||||
for _, entry := range entries {
|
||||
if filesystem.IsBlockRootDir(entry) {
|
||||
return filesystem.LayoutNameFlat, nil
|
||||
}
|
||||
}
|
||||
return filesystem.LayoutNameByEpoch, nil
|
||||
}
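A small usage sketch of detectLayout for readers following the flag logic: an empty flag value defers to directory probing, while an explicit value always wins. The flagValue helper is invented for this example; in the real code the *cli.Context passed by BeaconNodeOptions satisfies stringFlagGetter.

// flagValue is a throwaway stringFlagGetter used for illustration only.
type flagValue string

func (f flagValue) String(string) string { return string(f) }

func exampleDetectLayout(blobDir string) {
	// No explicit flag: probe blobDir and fall back to the by-epoch layout.
	layout, err := detectLayout(blobDir, flagValue(""))
	if err != nil {
		log.WithError(err).Error("Could not detect blob storage layout")
		return
	}
	log.Infof("Detected blob storage layout %q", layout)

	// An explicit flag value skips probing entirely.
	forced, _ := detectLayout(blobDir, flagValue(filesystem.LayoutNameFlat))
	log.Infof("Explicitly requested layout %q", forced)
}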
|
||||
|
||||
func blobStoragePath(c *cli.Context) string {
|
||||
blobsPath := c.Path(BlobStoragePathFlag.Name)
|
||||
if blobsPath == "" {
|
||||
|
||||
@@ -3,8 +3,14 @@ package storage
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"syscall"
|
||||
"testing"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
|
||||
"github.com/OffchainLabs/prysm/v6/cmd"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
@@ -109,3 +115,105 @@ func TestDataColumnStoragePath_FlagSpecified(t *testing.T) {
|
||||
|
||||
assert.Equal(t, "/blah/blah", storagePath)
|
||||
}
|
||||
|
||||
type mockStringFlagGetter struct {
|
||||
v string
|
||||
}
|
||||
|
||||
func (m mockStringFlagGetter) String(name string) string {
|
||||
return m.v
|
||||
}
|
||||
|
||||
func TestDetectLayout(t *testing.T) {
|
||||
fakeRoot := "0xabcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890"
|
||||
require.Equal(t, true, filesystem.IsBlockRootDir(fakeRoot))
|
||||
withFlatRoot := func(t *testing.T, dir string) {
|
||||
require.NoError(t, os.MkdirAll(path.Join(dir, fakeRoot), 0o755))
|
||||
}
|
||||
withByEpoch := func(t *testing.T, dir string) {
|
||||
require.NoError(t, os.MkdirAll(path.Join(dir, filesystem.PeriodicEpochBaseDir), 0o755))
|
||||
}
|
||||
|
||||
cases := []struct {
|
||||
name string
|
||||
expected string
|
||||
expectedErr error
|
||||
setup func(t *testing.T, dir string)
|
||||
getter mockStringFlagGetter
|
||||
}{
|
||||
{
|
||||
name: "no blobs dir",
|
||||
expected: filesystem.LayoutNameByEpoch,
|
||||
},
|
||||
{
|
||||
name: "blobs dir without root dirs",
|
||||
expected: filesystem.LayoutNameByEpoch,
|
||||
// empty subdirectory under blobs which doesn't match the block root pattern
|
||||
setup: func(t *testing.T, dir string) {
|
||||
require.NoError(t, os.MkdirAll(path.Join(dir, "some-dir"), 0o755))
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "blobs dir with root dir",
|
||||
setup: withFlatRoot,
|
||||
expected: filesystem.LayoutNameFlat,
|
||||
},
|
||||
{
|
||||
name: "blobs dir with root dir overridden by flag",
|
||||
setup: withFlatRoot,
|
||||
expected: filesystem.LayoutNameByEpoch,
|
||||
getter: mockStringFlagGetter{v: filesystem.LayoutNameByEpoch},
|
||||
},
|
||||
{
|
||||
name: "only has by-epoch dir",
|
||||
setup: withByEpoch,
|
||||
expected: filesystem.LayoutNameByEpoch,
|
||||
},
|
||||
{
|
||||
name: "contains by-epoch dir and root dirs",
|
||||
setup: func(t *testing.T, dir string) {
|
||||
withFlatRoot(t, dir)
|
||||
withByEpoch(t, dir)
|
||||
},
|
||||
expected: filesystem.LayoutNameFlat,
|
||||
},
|
||||
{
|
||||
name: "unreadable dir",
|
||||
// It isn't detectLayout's job to detect any errors reading the directory,
|
||||
// so it ignores errors from the os.Open call. However, errors can also come
|
||||
// from Readdirnames, which is hard to simulate directly in a test. So the test
|
||||
// writes a file in place of the dir, which will succeed in the Open call but
|
||||
// fail when read as a directory. This is why the expected error is syscall.ENOTDIR
|
||||
// (syscall error code from using readdirnames syscall on an ordinary file).
|
||||
setup: func(t *testing.T, dir string) {
|
||||
parent := filepath.Dir(dir)
|
||||
require.NoError(t, os.MkdirAll(parent, 0o755))
|
||||
require.NoError(t, os.WriteFile(dir, []byte{}, 0o755))
|
||||
},
|
||||
expectedErr: syscall.ENOTDIR,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
dir := strings.Replace(t.Name(), " ", "_", -1)
|
||||
dir = path.Join(os.TempDir(), dir)
|
||||
if tc.setup != nil {
|
||||
tc.setup(t, dir)
|
||||
}
|
||||
|
||||
layout, err := detectLayout(dir, tc.getter)
|
||||
if tc.expectedErr != nil {
|
||||
require.ErrorIs(t, err, tc.expectedErr)
|
||||
return
|
||||
}
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tc.expected, layout)
|
||||
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -45,6 +45,13 @@ var (
|
||||
Usage: "Beacon node REST API provider endpoint.",
|
||||
Value: "http://127.0.0.1:3500",
|
||||
}
|
||||
// BeaconRESTApiHeaders defines a list of headers to send with all HTTP requests to the beacon node.
|
||||
BeaconRESTApiHeaders = &cli.StringFlag{
|
||||
Name: "beacon-rest-api-headers",
|
||||
Usage: `Comma-separated list of key value pairs to pass as headers for all HTTP calls to the beacon node.
|
||||
To provide multiple values for the same key, specify the same key for each value.
|
||||
Example: --beacon-rest-api-headers=key1=value1,key1=value2,key2=value3`,
|
||||
}
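To make the expected flag format concrete, here is a hedged sketch of turning a comma-separated key=value list into an http.Header, with repeated keys collected as multiple values. parseHeaderFlag is a name invented for this example (assuming the net/http and strings imports); it is not the validator client's actual parser.

// Sketch only: parse "key1=value1,key1=value2,key2=value3" into an http.Header.
func parseHeaderFlag(raw string) http.Header {
	h := make(http.Header)
	for _, pair := range strings.Split(raw, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			continue // ignore malformed entries in this sketch
		}
		h.Add(strings.TrimSpace(kv[0]), strings.TrimSpace(kv[1]))
	}
	return h
}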
|
||||
// CertFlag defines a flag for the node's TLS certificate.
|
||||
CertFlag = &cli.StringFlag{
|
||||
Name: "tls-cert",
|
||||
|
||||
@@ -51,6 +51,7 @@ func startNode(ctx *cli.Context) error {
|
||||
var appFlags = []cli.Flag{
|
||||
flags.BeaconRPCProviderFlag,
|
||||
flags.BeaconRESTApiProviderFlag,
|
||||
flags.BeaconRESTApiHeaders,
|
||||
flags.CertFlag,
|
||||
flags.GraffitiFlag,
|
||||
flags.DisablePenaltyRewardLogFlag,
|
||||
|
||||
@@ -93,6 +93,7 @@ var appHelpFlagGroups = []flagGroup{
|
||||
Flags: []cli.Flag{
|
||||
flags.CertFlag,
|
||||
flags.BeaconRPCProviderFlag,
|
||||
flags.BeaconRESTApiHeaders,
|
||||
flags.EnableRPCFlag,
|
||||
flags.RPCHost,
|
||||
flags.RPCPort,
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
version: v1.6.0-beta.0
|
||||
version: v1.6.0-beta.1
|
||||
style: full
|
||||
|
||||
specrefs:
|
||||
@@ -18,6 +18,7 @@ exceptions:
|
||||
- UPDATE_TIMEOUT#altair
|
||||
|
||||
# Not implemented: gloas (future fork)
|
||||
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
|
||||
- MAX_PAYLOAD_ATTESTATIONS#gloas
|
||||
- PTC_SIZE#gloas
|
||||
|
||||
@@ -50,7 +51,6 @@ exceptions:
|
||||
# Not implemented: gloas (future fork)
|
||||
- BUILDER_PAYMENT_THRESHOLD_DENOMINATOR#gloas
|
||||
- BUILDER_PAYMENT_THRESHOLD_NUMERATOR#gloas
|
||||
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
|
||||
- BUILDER_WITHDRAWAL_PREFIX#gloas
|
||||
- DOMAIN_BEACON_BUILDER#gloas
|
||||
- DOMAIN_PTC_ATTESTER#gloas
|
||||
@@ -82,6 +82,12 @@ exceptions:
|
||||
- Eth1Block#phase0
|
||||
- MatrixEntry#fulu
|
||||
|
||||
# Not implemented: capella
|
||||
- LightClientBootstrap#capella
|
||||
- LightClientFinalityUpdate#capella
|
||||
- LightClientOptimisticUpdate#capella
|
||||
- LightClientUpdate#capella
|
||||
|
||||
# Not implemented: gloas (future fork)
|
||||
- BeaconBlockBody#gloas
|
||||
- BeaconState#gloas
|
||||
@@ -106,6 +112,9 @@ exceptions:
|
||||
- OptimisticStore#bellatrix
|
||||
- Store#phase0
|
||||
|
||||
# Not implemented: capella
|
||||
- LightClientStore#capella
|
||||
|
||||
# Not implemented: gloas (future fork)
|
||||
- LatestMessage#gloas
|
||||
- Store#gloas
|
||||
@@ -213,6 +222,7 @@ exceptions:
|
||||
- xor#phase0
|
||||
|
||||
# Not implemented: altair
|
||||
- compute_merkle_proof#altair
|
||||
- compute_sync_committee_period_at_slot#altair
|
||||
- get_contribution_and_proof#altair
|
||||
- get_contribution_due_ms#altair
|
||||
@@ -354,6 +364,7 @@ exceptions:
|
||||
- upgrade_to_gloas#gloas
|
||||
- validate_merge_block#gloas
|
||||
- validate_on_attestation#gloas
|
||||
- verify_data_column_sidecar#gloas
|
||||
- verify_data_column_sidecar_inclusion_proof#gloas
|
||||
- verify_execution_payload_envelope_signature#gloas
|
||||
- verify_execution_payload_bid_signature#gloas
|
||||
|
||||
@@ -971,12 +971,12 @@
|
||||
- file: proto/prysm/v1alpha1/light_client.proto
|
||||
search: message LightClientHeaderCapella {
|
||||
spec: |
|
||||
<spec ssz_object="LightClientHeader" fork="capella" hash="366cbdcd">
|
||||
<spec ssz_object="LightClientHeader" fork="capella" hash="b625e61e">
|
||||
class LightClientHeader(Container):
|
||||
# Beacon block header
|
||||
beacon: BeaconBlockHeader
|
||||
# Execution payload header corresponding to `beacon.body_root` (from Capella onward)
|
||||
# [New in Capella]
|
||||
execution: ExecutionPayloadHeader
|
||||
# [New in Capella]
|
||||
execution_branch: ExecutionBranch
|
||||
</spec>
|
||||
|
||||
|
||||
@@ -1303,9 +1303,24 @@
|
||||
- file: crypto/bls/bls.go
|
||||
search: func AggregatePublicKeys(
|
||||
spec: |
|
||||
<spec fn="eth_aggregate_pubkeys" fork="altair" hash="977edbdb">
|
||||
<spec fn="eth_aggregate_pubkeys" fork="altair" hash="bfb5ddd0">
|
||||
def eth_aggregate_pubkeys(pubkeys: Sequence[BLSPubkey]) -> BLSPubkey:
|
||||
return bls.AggregatePKs(pubkeys)
|
||||
"""
|
||||
Return the aggregate public key for the public keys in ``pubkeys``.
|
||||
|
||||
Note: the ``+`` operation should be interpreted as elliptic curve point addition, which takes as input
|
||||
elliptic curve points that must be decoded from the input ``BLSPubkey``s.
|
||||
This implementation is for demonstrative purposes only and ignores encoding/decoding concerns.
|
||||
Refer to the BLS signature draft standard for more information.
|
||||
"""
|
||||
assert len(pubkeys) > 0
|
||||
# Ensure that the given inputs are valid pubkeys
|
||||
assert all(bls.KeyValidate(pubkey) for pubkey in pubkeys)
|
||||
|
||||
result = copy(pubkeys[0])
|
||||
for pubkey in pubkeys[1:]:
|
||||
result += pubkey
|
||||
return result
|
||||
</spec>
|
||||
|
||||
- name: eth_fast_aggregate_verify
|
||||
@@ -4494,12 +4509,12 @@
|
||||
- file: beacon-chain/core/helpers/weak_subjectivity.go
|
||||
search: func IsWithinWeakSubjectivityPeriod(
|
||||
spec: |
|
||||
<spec fn="is_within_weak_subjectivity_period" fork="phase0" hash="f8a94089">
|
||||
<spec fn="is_within_weak_subjectivity_period" fork="phase0" hash="aef08e82">
|
||||
def is_within_weak_subjectivity_period(
|
||||
store: Store, ws_state: BeaconState, ws_checkpoint: Checkpoint
|
||||
) -> bool:
|
||||
# Clients may choose to validate the input state against the input Weak Subjectivity Checkpoint
|
||||
assert ws_state.latest_block_header.state_root == ws_checkpoint.root
|
||||
assert get_block_root(ws_state, ws_checkpoint.epoch) == ws_checkpoint.root
|
||||
assert compute_epoch_at_slot(ws_state.slot) == ws_checkpoint.epoch
|
||||
|
||||
ws_period = compute_weak_subjectivity_period(ws_state)
|
||||
@@ -4511,12 +4526,12 @@
|
||||
- name: is_within_weak_subjectivity_period#electra
|
||||
sources: []
|
||||
spec: |
|
||||
<spec fn="is_within_weak_subjectivity_period" fork="electra" hash="c89ae316">
|
||||
<spec fn="is_within_weak_subjectivity_period" fork="electra" hash="d05d230d">
|
||||
def is_within_weak_subjectivity_period(
|
||||
store: Store, ws_state: BeaconState, ws_checkpoint: Checkpoint
|
||||
) -> bool:
|
||||
# Clients may choose to validate the input state against the input Weak Subjectivity Checkpoint
|
||||
assert ws_state.latest_block_header.state_root == ws_checkpoint.root
|
||||
assert get_block_root(ws_state, ws_checkpoint.epoch) == ws_checkpoint.root
|
||||
assert compute_epoch_at_slot(ws_state.slot) == ws_checkpoint.epoch
|
||||
|
||||
# [Modified in Electra]
|
||||
@@ -7649,8 +7664,8 @@
- name: upgrade_lc_bootstrap_to_capella
  sources: []
  spec: |
    <spec fn="upgrade_lc_bootstrap_to_capella" fork="capella" hash="2c8939e7">
    def upgrade_lc_bootstrap_to_capella(pre: bellatrix.LightClientBootstrap) -> LightClientBootstrap:
    <spec fn="upgrade_lc_bootstrap_to_capella" fork="capella" hash="d5f1203a">
    def upgrade_lc_bootstrap_to_capella(pre: altair.LightClientBootstrap) -> LightClientBootstrap:
        return LightClientBootstrap(
            header=upgrade_lc_header_to_capella(pre.header),
            current_sync_committee=pre.current_sync_committee,

@@ -7687,9 +7702,9 @@
- name: upgrade_lc_finality_update_to_capella
  sources: []
  spec: |
    <spec fn="upgrade_lc_finality_update_to_capella" fork="capella" hash="5315d1df">
    <spec fn="upgrade_lc_finality_update_to_capella" fork="capella" hash="c314b172">
    def upgrade_lc_finality_update_to_capella(
        pre: bellatrix.LightClientFinalityUpdate,
        pre: altair.LightClientFinalityUpdate,
    ) -> LightClientFinalityUpdate:
        return LightClientFinalityUpdate(
            attested_header=upgrade_lc_header_to_capella(pre.attested_header),

@@ -7735,10 +7750,12 @@
- name: upgrade_lc_header_to_capella
  sources: []
  spec: |
    <spec fn="upgrade_lc_header_to_capella" fork="capella" hash="9eaa5026">
    def upgrade_lc_header_to_capella(pre: bellatrix.LightClientHeader) -> LightClientHeader:
    <spec fn="upgrade_lc_header_to_capella" fork="capella" hash="9f7a832d">
    def upgrade_lc_header_to_capella(pre: altair.LightClientHeader) -> LightClientHeader:
        return LightClientHeader(
            beacon=pre.beacon,
            execution=ExecutionPayloadHeader(),
            execution_branch=ExecutionBranch(),
        )
    </spec>

@@ -7789,9 +7806,9 @@
- name: upgrade_lc_optimistic_update_to_capella
  sources: []
  spec: |
    <spec fn="upgrade_lc_optimistic_update_to_capella" fork="capella" hash="488c41c5">
    <spec fn="upgrade_lc_optimistic_update_to_capella" fork="capella" hash="c4c29295">
    def upgrade_lc_optimistic_update_to_capella(
        pre: bellatrix.LightClientOptimisticUpdate,
        pre: altair.LightClientOptimisticUpdate,
    ) -> LightClientOptimisticUpdate:
        return LightClientOptimisticUpdate(
            attested_header=upgrade_lc_header_to_capella(pre.attested_header),

@@ -7831,8 +7848,8 @@
- name: upgrade_lc_store_to_capella
  sources: []
  spec: |
    <spec fn="upgrade_lc_store_to_capella" fork="capella" hash="a25064df">
    def upgrade_lc_store_to_capella(pre: bellatrix.LightClientStore) -> LightClientStore:
    <spec fn="upgrade_lc_store_to_capella" fork="capella" hash="d02773ec">
    def upgrade_lc_store_to_capella(pre: altair.LightClientStore) -> LightClientStore:
        if pre.best_valid_update is None:
            best_valid_update = None
        else:

@@ -7891,8 +7908,8 @@
- name: upgrade_lc_update_to_capella
  sources: []
  spec: |
    <spec fn="upgrade_lc_update_to_capella" fork="capella" hash="e7dbaf33">
    def upgrade_lc_update_to_capella(pre: bellatrix.LightClientUpdate) -> LightClientUpdate:
    <spec fn="upgrade_lc_update_to_capella" fork="capella" hash="ba863d2f">
    def upgrade_lc_update_to_capella(pre: altair.LightClientUpdate) -> LightClientUpdate:
        return LightClientUpdate(
            attested_header=upgrade_lc_header_to_capella(pre.attested_header),
            next_sync_committee=pre.next_sync_committee,
@@ -8539,7 +8556,7 @@
- file: beacon-chain/core/peerdas/p2p_interface.go
  search: func VerifyDataColumnSidecar(
  spec: |
    <spec fn="verify_data_column_sidecar" fork="fulu" hash="509c0986">
    <spec fn="verify_data_column_sidecar" fork="fulu" hash="1517491f">
    def verify_data_column_sidecar(sidecar: DataColumnSidecar) -> bool:
        """
        Verify if the data column sidecar is valid.

@@ -8552,6 +8569,11 @@
        if len(sidecar.kzg_commitments) == 0:
            return False

        # Check that the sidecar respects the blob limit
        epoch = compute_epoch_at_slot(sidecar.signed_block_header.message.slot)
        if len(sidecar.kzg_commitments) > get_blob_parameters(epoch).max_blobs_per_block:
            return False

        # The column length must be equal to the number of commitments/proofs
        if len(sidecar.column) != len(sidecar.kzg_commitments) or len(sidecar.column) != len(
            sidecar.kzg_proofs

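The new guard bounds the number of commitments by the per-epoch blob limit before the existing length checks. A rough Go sketch of that shape check follows; the sidecar struct, the slots-per-epoch constant, and the maxBlobsPerBlock lookup are simplified stand-ins, not Prysm's actual types or helpers.

```go
package main

import "fmt"

// Simplified stand-in for the real data column sidecar type.
type dataColumnSidecar struct {
	slot           uint64
	kzgCommitments [][]byte
	kzgProofs      [][]byte
	column         [][]byte
}

const slotsPerEpoch = 32 // assumed mainnet value for the sketch

func maxBlobsPerBlock(epoch uint64) int { return 6 } // placeholder blob schedule

func verifySidecarShape(sc dataColumnSidecar) bool {
	if len(sc.kzgCommitments) == 0 {
		return false
	}
	epoch := sc.slot / slotsPerEpoch
	if len(sc.kzgCommitments) > maxBlobsPerBlock(epoch) { // new check: respect the blob limit
		return false
	}
	// Column length must match the number of commitments and proofs.
	return len(sc.column) == len(sc.kzgCommitments) && len(sc.column) == len(sc.kzgProofs)
}

func main() {
	fmt.Println(verifySidecarShape(dataColumnSidecar{}))
}
```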
@@ -110,6 +110,6 @@ consensus_spec_tests = repository_rule(
        "repo": attr.string(default = "ethereum/consensus-specs"),
        "workflow": attr.string(default = "generate_vectors.yml"),
        "branch": attr.string(default = "dev"),
        "release_url_template": attr.string(default = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s"),
        "release_url_template": attr.string(default = "https://github.com/ethereum/consensus-specs/releases/download/%s"),
    },
)

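The rule only swaps the default download host; the `%s` placeholder is still filled in when the rule builds the release URL. A small illustration of how such a template expands (the tag value is made up for the example):

```go
package main

import "fmt"

func main() {
	// Template taken from the repository rule above; the tag is a hypothetical example.
	const releaseURLTemplate = "https://github.com/ethereum/consensus-specs/releases/download/%s"
	fmt.Printf(releaseURLTemplate+"\n", "v1.6.0") // .../releases/download/v1.6.0
}
```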
@@ -221,10 +221,10 @@ func TestListAccounts_LocalKeymanager(t *testing.T) {
	// Expected output format definition
	const prologLength = 4
	const accountLength = 4
	const epilogLength = 2
	const nameOffset = 1
	const keyOffset = 2
	const privkeyOffset = 3
	const epilogLength = 1

	const keyOffset = 1
	const privkeyOffset = 2

	// Require the output has correct number of lines
	lineCount := prologLength + accountLength*numAccounts + epilogLength

@@ -242,7 +242,7 @@ func TestListAccounts_LocalKeymanager(t *testing.T) {

	// Assert that account names are printed on the correct lines
	for i, accountName := range accountNames {
		lineNumber := prologLength + accountLength*i + nameOffset
		lineNumber := prologLength + accountLength*i
		accountNameFound := strings.Contains(lines[lineNumber], accountName)
		assert.Equal(t, true, accountNameFound, "Account Name %s not found on line number %d", accountName, lineNumber)
	}

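The offsets change because the account name is now expected on the first line of each account block (nameOffset is gone) and the epilog shrinks to one line. A tiny standalone sketch of the index arithmetic under that assumed layout:

```go
package main

import "fmt"

func main() {
	// Expected-output layout assumed by the updated test: a 4-line prolog,
	// 4 lines per account with the name first, and a 1-line epilog.
	const prologLength, accountLength, epilogLength = 4, 4, 1
	const keyOffset, privkeyOffset = 1, 2

	numAccounts := 3
	fmt.Println("total lines:", prologLength+accountLength*numAccounts+epilogLength)
	for i := 0; i < numAccounts; i++ {
		nameLine := prologLength + accountLength*i // account name is the block's first line
		fmt.Println("account", i, "name line:", nameLine,
			"pubkey line:", nameLine+keyOffset, "privkey line:", nameLine+privkeyOffset)
	}
}
```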
@@ -84,7 +84,7 @@ func (acm *CLIManager) prepareBeaconClients(ctx context.Context) (*iface.Validat
	conn := validatorHelpers.NewNodeConnection(
		grpcConn,
		acm.beaconApiEndpoint,
		acm.beaconApiTimeout,
		validatorHelpers.WithBeaconApiTimeout(acm.beaconApiTimeout),
	)

	restHandler := beaconApi.NewBeaconApiRestHandler(

@@ -6,6 +6,7 @@ import (
	"strings"
	"time"

	api "github.com/OffchainLabs/prysm/v6/api/client"
	eventClient "github.com/OffchainLabs/prysm/v6/api/client/event"
	grpcutil "github.com/OffchainLabs/prysm/v6/api/grpc"
	"github.com/OffchainLabs/prysm/v6/async/event"

@@ -79,6 +80,7 @@ type Config struct {
	BeaconNodeGRPCEndpoint string
	BeaconNodeCert         string
	BeaconApiEndpoint      string
	BeaconApiHeaders       map[string][]string
	BeaconApiTimeout       time.Duration
	Graffiti               string
	GraffitiStruct         *graffiti.Graffiti

@@ -142,7 +144,8 @@ func NewValidatorService(ctx context.Context, cfg *Config) (*ValidatorService, e
	s.conn = validatorHelpers.NewNodeConnection(
		grpcConn,
		cfg.BeaconApiEndpoint,
		cfg.BeaconApiTimeout,
		validatorHelpers.WithBeaconApiHeaders(cfg.BeaconApiHeaders),
		validatorHelpers.WithBeaconApiTimeout(cfg.BeaconApiTimeout),
	)

	return s, nil
@@ -185,8 +188,9 @@ func (v *ValidatorService) Start() {
		return
	}

	headersTransport := api.NewCustomHeadersTransport(http.DefaultTransport, v.conn.GetBeaconApiHeaders())
	restHandler := beaconApi.NewBeaconApiRestHandler(
		http.Client{Timeout: v.conn.GetBeaconApiTimeout(), Transport: otelhttp.NewTransport(http.DefaultTransport)},
		http.Client{Timeout: v.conn.GetBeaconApiTimeout(), Transport: otelhttp.NewTransport(headersTransport)},
		hosts[0],
	)

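The REST client now layers a custom-headers round tripper under the OpenTelemetry transport, so every Beacon API request carries the configured headers. A self-contained sketch of that layering using only net/http; the header-injecting transport here is a simplified stand-in for api.NewCustomHeadersTransport, and the header value is illustrative.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// headersTransport is a simplified stand-in for api.NewCustomHeadersTransport:
// it adds the configured headers to each outgoing request and then delegates.
// A production transport would clone the request before mutating it.
type headersTransport struct {
	base    http.RoundTripper
	headers map[string][]string
}

func (t *headersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	for k, vals := range t.headers {
		for _, v := range vals {
			req.Header.Add(k, v)
		}
	}
	return t.base.RoundTrip(req)
}

func main() {
	headers := map[string][]string{"Authorization": {"Bearer example-token"}} // illustrative value
	client := &http.Client{
		Timeout:   30 * time.Second,
		Transport: &headersTransport{base: http.DefaultTransport, headers: headers},
		// In the validator this transport is additionally wrapped by otelhttp.NewTransport.
	}
	fmt.Println(client.Timeout)
}
```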
@@ -10,16 +10,37 @@ import (
type NodeConnection interface {
	GetGrpcClientConn() *grpc.ClientConn
	GetBeaconApiUrl() string
	GetBeaconApiHeaders() map[string][]string
	setBeaconApiHeaders(map[string][]string)
	GetBeaconApiTimeout() time.Duration
	setBeaconApiTimeout(time.Duration)
	dummy()
}

type nodeConnection struct {
	grpcClientConn   *grpc.ClientConn
	beaconApiUrl     string
	beaconApiHeaders map[string][]string
	beaconApiTimeout time.Duration
}

// NodeConnectionOption is a functional option for configuring the node connection.
type NodeConnectionOption func(nc NodeConnection)

// WithBeaconApiHeaders sets the HTTP headers that should be sent to the server along with each request.
func WithBeaconApiHeaders(headers map[string][]string) NodeConnectionOption {
	return func(nc NodeConnection) {
		nc.setBeaconApiHeaders(headers)
	}
}

// WithBeaconApiTimeout sets the HTTP request timeout.
func WithBeaconApiTimeout(timeout time.Duration) NodeConnectionOption {
	return func(nc NodeConnection) {
		nc.setBeaconApiTimeout(timeout)
	}
}

func (c *nodeConnection) GetGrpcClientConn() *grpc.ClientConn {
	return c.grpcClientConn
}

@@ -28,16 +49,30 @@ func (c *nodeConnection) GetBeaconApiUrl() string {
	return c.beaconApiUrl
}

func (c *nodeConnection) GetBeaconApiHeaders() map[string][]string {
	return c.beaconApiHeaders
}

func (c *nodeConnection) setBeaconApiHeaders(headers map[string][]string) {
	c.beaconApiHeaders = headers
}

func (c *nodeConnection) GetBeaconApiTimeout() time.Duration {
	return c.beaconApiTimeout
}

func (c *nodeConnection) setBeaconApiTimeout(timeout time.Duration) {
	c.beaconApiTimeout = timeout
}

func (*nodeConnection) dummy() {}

func NewNodeConnection(grpcConn *grpc.ClientConn, beaconApiUrl string, beaconApiTimeout time.Duration) NodeConnection {
func NewNodeConnection(grpcConn *grpc.ClientConn, beaconApiUrl string, opts ...NodeConnectionOption) NodeConnection {
	conn := &nodeConnection{}
	conn.grpcClientConn = grpcConn
	conn.beaconApiUrl = beaconApiUrl
	conn.beaconApiTimeout = beaconApiTimeout
	for _, opt := range opts {
		opt(conn)
	}
	return conn
}

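Callers now pass the timeout (and optionally the headers) through functional options instead of a positional argument. A usage sketch of the new constructor follows; the import path for the helpers package, the endpoint, and the header values are assumptions, and a real *grpc.ClientConn would be supplied by the caller.

```go
package main

import (
	"fmt"
	"time"

	validatorHelpers "github.com/OffchainLabs/prysm/v6/validator/helpers" // assumed import path
	"google.golang.org/grpc"
)

// buildConn shows the functional-option style: the timeout and headers are
// applied via options rather than positional arguments.
func buildConn(grpcConn *grpc.ClientConn) validatorHelpers.NodeConnection {
	return validatorHelpers.NewNodeConnection(
		grpcConn,
		"http://localhost:3500", // hypothetical Beacon REST API endpoint
		validatorHelpers.WithBeaconApiHeaders(map[string][]string{"X-Example": {"value"}}),
		validatorHelpers.WithBeaconApiTimeout(30*time.Second),
	)
}

func main() {
	conn := buildConn(nil) // a real *grpc.ClientConn would be passed in practice
	fmt.Println(conn.GetBeaconApiTimeout(), conn.GetBeaconApiHeaders())
}
```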
@@ -402,10 +402,6 @@ func (km *Keymanager) ListKeymanagerAccounts(ctx context.Context, cfg keymanager
	} else {
		fmt.Printf("Showing %d validator accounts\n", numAccounts)
	}
	fmt.Println(
		au.BrightRed("View the eth1 deposit transaction data for your accounts " +
			"by running `validator accounts list --show-deposit-data`"),
	)

	pubKeys, err := km.FetchValidatingPublicKeys(ctx)
	if err != nil {

@@ -433,6 +433,7 @@ func (c *ValidatorClient) registerValidatorService(cliCtx *cli.Context) error {
	BeaconNodeGRPCEndpoint: cliCtx.String(flags.BeaconRPCProviderFlag.Name),
	BeaconNodeCert:         cliCtx.String(flags.CertFlag.Name),
	BeaconApiEndpoint:      cliCtx.String(flags.BeaconRESTApiProviderFlag.Name),
	BeaconApiHeaders:       parseBeaconApiHeaders(cliCtx.String(flags.BeaconRESTApiHeaders.Name)),
	BeaconApiTimeout:       time.Second * 30,
	Graffiti:               g.ParseHexGraffiti(cliCtx.String(flags.GraffitiFlag.Name)),
	GraffitiStruct:         graffitiStruct,

@@ -552,6 +553,7 @@ func (c *ValidatorClient) registerRPCService(cliCtx *cli.Context) error {
	GRPCHeaders:            strings.Split(cliCtx.String(flags.GRPCHeadersFlag.Name), ","),
	BeaconNodeGRPCEndpoint: cliCtx.String(flags.BeaconRPCProviderFlag.Name),
	BeaconApiEndpoint:      cliCtx.String(flags.BeaconRESTApiProviderFlag.Name),
	BeaconAPIHeaders:       parseBeaconApiHeaders(cliCtx.String(flags.BeaconRESTApiHeaders.Name)),
	BeaconApiTimeout:       time.Second * 30,
	BeaconNodeCert:         cliCtx.String(flags.CertFlag.Name),
	DB:                     c.db,
@@ -636,3 +638,23 @@ func clearDB(ctx context.Context, dataDir string, force bool, isDatabaseMinimal

	return nil
}

func parseBeaconApiHeaders(rawHeaders string) map[string][]string {
	result := make(map[string][]string)
	pairs := strings.Split(rawHeaders, ",")
	for _, pair := range pairs {
		key, value, found := strings.Cut(pair, "=")
		if !found {
			// Skip malformed pairs
			continue
		}
		key = strings.TrimSpace(key)
		value = strings.TrimSpace(value)
		if key == "" || value == "" {
			// Skip malformed pairs
			continue
		}
		result[key] = append(result[key], value)
	}
	return result
}

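For clarity, here is a standalone rendition of how a comma-separated header flag value parses under this logic. The algorithm is reproduced (not imported) so the example runs in isolation; the header strings are made up.

```go
package main

import (
	"fmt"
	"strings"
)

// Same parsing rules as parseBeaconApiHeaders above: comma-separated key=value
// pairs, whitespace trimmed, malformed or empty pairs skipped, and repeated
// keys accumulated into a slice of values.
func parseHeaders(raw string) map[string][]string {
	result := make(map[string][]string)
	for _, pair := range strings.Split(raw, ",") {
		key, value, found := strings.Cut(pair, "=")
		if !found {
			continue
		}
		key, value = strings.TrimSpace(key), strings.TrimSpace(value)
		if key == "" || value == "" {
			continue
		}
		result[key] = append(result[key], value)
	}
	return result
}

func main() {
	fmt.Println(parseHeaders("Authorization=Bearer abc, X-Custom=1,X-Custom=2,broken"))
	// map[Authorization:[Bearer abc] X-Custom:[1 2]]
}
```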
@@ -308,3 +308,17 @@ func TestWeb3SignerConfig(t *testing.T) {
		})
	}
}

func Test_parseBeaconApiHeaders(t *testing.T) {
	t.Run("ok", func(t *testing.T) {
		h := parseBeaconApiHeaders("key1=value1,key1=value2,key2=value3")
		assert.Equal(t, 2, len(h))
		assert.DeepEqual(t, []string{"value1", "value2"}, h["key1"])
		assert.DeepEqual(t, []string{"value3"}, h["key2"])
	})
	t.Run("ignores malformed", func(t *testing.T) {
		h := parseBeaconApiHeaders("key1=value1,key2value2,key3=,=key4")
		assert.Equal(t, 1, len(h))
		assert.DeepEqual(t, []string{"value1"}, h["key1"])
	})
}

@@ -23,6 +23,7 @@ go_library(
    ],
    deps = [
        "//api:go_default_library",
        "//api/client:go_default_library",
        "//api/grpc:go_default_library",
        "//api/pagination:go_default_library",
        "//api/server:go_default_library",

@@ -3,6 +3,7 @@ package rpc
import (
	"net/http"

	api "github.com/OffchainLabs/prysm/v6/api/client"
	grpcutil "github.com/OffchainLabs/prysm/v6/api/grpc"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/validator/client"

@@ -52,11 +53,13 @@ func (s *Server) registerBeaconClient() error {
	conn := validatorHelpers.NewNodeConnection(
		grpcConn,
		s.beaconApiEndpoint,
		s.beaconApiTimeout,
		validatorHelpers.WithBeaconApiHeaders(s.beaconApiHeaders),
		validatorHelpers.WithBeaconApiTimeout(s.beaconApiTimeout),
	)

	headersTransport := api.NewCustomHeadersTransport(http.DefaultTransport, conn.GetBeaconApiHeaders())
	restHandler := beaconApi.NewBeaconApiRestHandler(
		http.Client{Timeout: s.beaconApiTimeout, Transport: otelhttp.NewTransport(http.DefaultTransport)},
		http.Client{Timeout: s.beaconApiTimeout, Transport: otelhttp.NewTransport(headersTransport)},
		s.beaconApiEndpoint,
	)

Some files were not shown because too many files have changed in this diff.