Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 05:47:59 -05:00)

Compare commits: v6.1.3-rc. ... log-att-st (36 commits)
Commits in this comparison (SHA1):

- 6f87252004
- b2a9db0826
- 040661bd68
- d3bd0eaa30
- 577899bfec
- 374bae9c81
- 3e0492a636
- 5a1a5b5ae5
- dbb2f0b047
- 7b3c11c818
- c9b34d556d
- 10a2f0687b
- 4fb75d6d0b
- 6d596edea2
- 9153c5a202
- 26ce94e224
- 255ea2fac1
- 46bc81b4c8
- 9c4774b82e
- 7dd4f5948c
- 2f090c52d9
- 3ecb5d0b67
- 253f91930a
- 7c3e45637f
- 96429c5089
- d613f3a262
- 5751dbf134
- 426fbcc3b0
- a3baf98b05
- 5a897dfa6b
- 90190883bc
- 64ec665890
- fdb06ea461
- 0486631d73
- 47764696ce
- b2d350b988
CHANGELOG.md (63 changed lines)
@@ -4,6 +4,67 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.1.4](https://github.com/prysmaticlabs/prysm/compare/v6.1.3...v6.1.4) - 2025-10-24

This release includes a bug fix affecting block proposals in rare cases, along with an important update for Windows users running the post-Fusaka fork.

### Added

- SSZ-QL: Add endpoints for `BeaconState`/`BeaconBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15888)
- Add native state diff type and marshalling functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15250)
- Update the earliest available slot after pruning operations in the beacon chain database pruner. This ensures the P2P layer accurately knows which historical data is available after pruning, preventing nodes from advertising or attempting to serve data that has been pruned. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15694)

### Fixed

- Correctly advertise (in ENR and beacon API) attestation subnets when using `--subscribe-all-subnets`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15880)
- `randomPeer`: Return if the context is cancelled when waiting for peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15876)
- Improve the error message when the byte count read from disk for a data column sidecar is lower than expected (usually because the file is truncated). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15881)
- Delete the genesis state file when `--clear-db` / `--force-clear-db` is specified. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15883)
- Fix sync committee subscription to use subnet indices instead of committee indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15885)
- Fixed metadata extraction on Windows by correctly splitting file paths. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15899)
- `VerifyDataColumnsSidecarKZGProofs`: Check if sizes match. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15892)
- Fix `recoverStateSummary` to persist state summaries in `stateSummaryBucket` instead of `stateBucket` (#15896). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15896)
- `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15908)
- Sync committee uses the correct state to calculate position. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15905)

## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20

This release has several important beacon API and p2p fixes.

### Added

- Add Grandine to P2P known agents (useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling root calculation for generated types while avoiding duplicate logic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of the `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access the n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)

### Changed

- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- Updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle submit blind block returning 502 errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve returning individual message errors from the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify the `Size` method with more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)

### Fixed

- Use the service context and continue on slasher attestation errors (#15803). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event is no longer sent on certain block processing failures; it is now sent only when the block is non-canonical, when the block is canonical but `getFCUArgs` fails, and on full success. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e issues caused by a regression in old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid from `ErrNotDescendantOfFinalized`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejected attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check that the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix the `/eth/v1/beacon/blob_sidecars/` beacon API if the Fulu fork epoch is set to the far future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if there is no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- Reject committee index >= committees_per_slot in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased the attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)

## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10

This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.

@@ -3759,4 +3820,4 @@ There are no security updates in this release.

# Older than v2.0.0

For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
WORKSPACE (10 changed lines)
@@ -253,16 +253,16 @@ filegroup(
    url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)

-consensus_spec_version = "v1.6.0-beta.0"
+consensus_spec_version = "v1.6.0-beta.1"

load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")

consensus_spec_tests(
    name = "consensus_spec_tests",
    flavors = {
-        "general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
-        "minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
-        "mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
+        "general": "sha256-oEj0MTViJHjZo32nABK36gfvSXpbwkBk/jt6Mj7pWFI=",
+        "minimal": "sha256-cS4NPv6IRBoCSmWomQ8OEo8IsVNW9YawUFqoRZQBUj4=",
+        "mainnet": "sha256-BYuLndMPAh4p13IRJgNfVakrCVL69KRrNw2tdc3ETbE=",
    },
    version = consensus_spec_version,
)

@@ -278,7 +278,7 @@ filegroup(
    visibility = ["//visibility:public"],
)
""",
-    integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
+    integrity = "sha256-yrq3tdwPS8Ri+ueeLAHssIT3ssMrX7zvHiJ8Xf9GVYs=",
    strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
    url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -6,6 +6,7 @@ go_library(
        "client.go",
        "errors.go",
        "options.go",
+       "transport.go",
    ],
    importpath = "github.com/OffchainLabs/prysm/v6/api/client",
    visibility = ["//visibility:public"],

@@ -14,7 +15,13 @@ go_library(

go_test(
    name = "go_default_test",
-   srcs = ["client_test.go"],
+   srcs = [
+       "client_test.go",
+       "transport_test.go",
+   ],
    embed = [":go_default_library"],
-   deps = ["//testing/require:go_default_library"],
+   deps = [
+       "//testing/assert:go_default_library",
+       "//testing/require:go_default_library",
+   ],
)
api/client/transport.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package client

import "net/http"

// CustomHeadersTransport adds custom headers to each request.
type CustomHeadersTransport struct {
    base    http.RoundTripper
    headers map[string][]string
}

func NewCustomHeadersTransport(base http.RoundTripper, headers map[string][]string) *CustomHeadersTransport {
    return &CustomHeadersTransport{
        base:    base,
        headers: headers,
    }
}

func (t *CustomHeadersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    for header, values := range t.headers {
        for _, value := range values {
            req.Header.Add(header, value)
        }
    }
    return t.base.RoundTrip(req)
}
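For context, here is a brief usage sketch (editorial illustration, not part of this diff) showing how a caller might plug `NewCustomHeadersTransport` into a standard `http.Client`. The header name, token value, and URL are assumptions made up for the example.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/OffchainLabs/prysm/v6/api/client"
)

func main() {
	// Wrap the default transport so every outgoing request carries the extra header.
	// Header name, token, and endpoint are illustrative only.
	transport := client.NewCustomHeadersTransport(http.DefaultTransport, map[string][]string{
		"X-Custom-Auth": {"example-token"},
	})
	httpClient := &http.Client{Transport: transport}

	resp, err := httpClient.Get("http://localhost:3500/eth/v1/node/version")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```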
api/client/transport_test.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package client

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
)

type noopTransport struct{}

func (*noopTransport) RoundTrip(*http.Request) (*http.Response, error) {
    return nil, nil
}

func TestRoundTrip(t *testing.T) {
    tr := &CustomHeadersTransport{base: &noopTransport{}, headers: map[string][]string{"key1": []string{"value1", "value2"}, "key2": []string{"value3"}}}
    req := httptest.NewRequest("GET", "http://foo", nil)
    _, err := tr.RoundTrip(req)
    require.NoError(t, err)
    assert.DeepEqual(t, []string{"value1", "value2"}, req.Header.Values("key1"))
    assert.DeepEqual(t, []string{"value3"}, req.Header.Values("key2"))
}
@@ -296,3 +296,8 @@ type GetBlobsResponse struct {
    Finalized bool     `json:"finalized"`
    Data      []string `json:"data"` // blobs
}

type SSZQueryRequest struct {
    Query        string `json:"query"`
    IncludeProof bool   `json:"include_proof,omitempty"`
}
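As an illustration only (not part of the diff), a caller might serialize the new `SSZQueryRequest` body like this; the query string is a made-up example and does not define the actual SSZ-QL syntax.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type SSZQueryRequest struct {
	Query        string `json:"query"`
	IncludeProof bool   `json:"include_proof,omitempty"`
}

func main() {
	// Marshal a hypothetical query body for the SSZ-QL endpoint.
	body, err := json.Marshal(SSZQueryRequest{Query: "beacon_state.validators[0].effective_balance", IncludeProof: true})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"query":"...","include_proof":true}
}
```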
@@ -173,6 +173,7 @@ go_test(
    "//beacon-chain/state/state-native:go_default_library",
    "//beacon-chain/state/stategen:go_default_library",
    "//beacon-chain/verification:go_default_library",
+   "//cmd/beacon-chain/flags:go_default_library",
    "//config/features:go_default_library",
    "//config/fieldparams:go_default_library",
    "//config/params:go_default_library",
@@ -16,6 +16,7 @@ import (
 ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
 "github.com/OffchainLabs/prysm/v6/time/slots"
 "github.com/pkg/errors"
+"github.com/sirupsen/logrus"
 )

// The caller of this function must have a lock on forkchoice.

@@ -32,6 +33,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
 if err != nil {
 return nil
 }
+log.Info("Got head state read only")
 return st
 }
 slot, err := slots.EpochStart(c.Epoch)

@@ -48,6 +50,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
 return nil
 }
 if cachedState != nil && !cachedState.IsNil() {
+log.Info("Got cached state from check point")
 return cachedState
 }
 st, err := s.HeadState(ctx)

@@ -61,6 +64,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
 if err := s.checkpointStateCache.AddCheckpointState(c, st); err != nil {
 log.WithError(err).Error("Could not save checkpoint state to cache")
 }
+log.Info("Got head state and processed head state to slot ", slot)
 return st
 }

@@ -82,6 +86,7 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
 return nil, errors.Wrap(err, "could not get cached checkpoint state")
 }
 if cachedState != nil && !cachedState.IsNil() {
+log.Info("Got state (non canonical) check point cache")
 return cachedState, nil
 }
 // Try the next slot cache for the early epoch calls, this should mostly have been covered already

@@ -101,6 +106,7 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
 if err := s.checkpointStateCache.AddCheckpointState(c, cachedState); err != nil {
 return nil, errors.Wrap(err, "could not save checkpoint state to cache")
 }
+log.Info("Got state next slot cache and processed to slot ", slot)
 return cachedState, nil
 }

@@ -135,6 +141,9 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
 if err := s.checkpointStateCache.AddCheckpointState(c, baseState); err != nil {
 return nil, errors.Wrap(err, "could not save checkpoint state to cache")
 }

+log.Info("Got state state gen by root and processed to slot ", slot)

 return baseState, nil
 }
@@ -472,8 +472,8 @@ func (s *Service) removeStartupState() {
 func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
 isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets

-beaconConfig := params.BeaconConfig()
-custodyRequirement := beaconConfig.CustodyRequirement
+cfg := params.BeaconConfig()
+custodyRequirement := cfg.CustodyRequirement

 // Check if the node was previously subscribed to all data subnets, and if so,
 // store the new status accordingly.

@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
 // Compute the custody group count.
 custodyGroupCount := custodyRequirement
 if isSubscribedToAllDataSubnets {
-custodyGroupCount = beaconConfig.NumberOfColumns
+custodyGroupCount = cfg.NumberOfCustodyGroups
 }

 // Safely compute the fulu fork slot.

@@ -536,11 +536,11 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
 }

 func fuluForkSlot() (primitives.Slot, error) {
-beaconConfig := params.BeaconConfig()
+cfg := params.BeaconConfig()

-fuluForkEpoch := beaconConfig.FuluForkEpoch
-if fuluForkEpoch == beaconConfig.FarFutureEpoch {
-return beaconConfig.FarFutureSlot, nil
+fuluForkEpoch := cfg.FuluForkEpoch
+if fuluForkEpoch == cfg.FarFutureEpoch {
+return cfg.FarFutureSlot, nil
 }

 forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
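To make the fix concrete, the following is a minimal standalone sketch (not taken from the codebase) of the rule this hunk corrects: a node subscribed to all data subnets should custody every custody group (`NumberOfCustodyGroups`), not every column. The constant values are illustrative.

```go
package main

import "fmt"

// custodyGroupCount returns the number of custody groups a node should custody,
// mirroring the corrected logic: full subscribers custody all groups.
func custodyGroupCount(subscribeAllDataSubnets bool, custodyRequirement, numberOfCustodyGroups uint64) uint64 {
	if subscribeAllDataSubnets {
		return numberOfCustodyGroups
	}
	return custodyRequirement
}

func main() {
	const (
		custodyRequirement    = uint64(4)   // illustrative
		numberOfCustodyGroups = uint64(128) // illustrative
	)
	fmt.Println(custodyGroupCount(false, custodyRequirement, numberOfCustodyGroups)) // 4
	fmt.Println(custodyGroupCount(true, custodyRequirement, numberOfCustodyGroups))  // 128
}
```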
@@ -23,9 +23,11 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
|
||||
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
|
||||
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
|
||||
"github.com/OffchainLabs/prysm/v6/config/features"
|
||||
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
@@ -596,3 +598,103 @@ func TestNotifyIndex(t *testing.T) {
|
||||
t.Errorf("Notifier channel did not receive the index")
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateCustodyInfoInDB(t *testing.T) {
|
||||
const (
|
||||
fuluForkEpoch = 10
|
||||
custodyRequirement = uint64(4)
|
||||
earliestStoredSlot = primitives.Slot(12)
|
||||
numberOfCustodyGroups = uint64(64)
|
||||
numberOfColumns = uint64(128)
|
||||
)
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
cfg := params.BeaconConfig()
|
||||
cfg.FuluForkEpoch = fuluForkEpoch
|
||||
cfg.CustodyRequirement = custodyRequirement
|
||||
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
|
||||
cfg.NumberOfColumns = numberOfColumns
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
ctx := t.Context()
|
||||
pbBlock := util.NewBeaconBlock()
|
||||
pbBlock.Block.Slot = 12
|
||||
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("CGC increases before fulu", func(t *testing.T) {
|
||||
service, requirements := minimalTestService(t)
|
||||
err = requirements.db.SaveBlock(ctx, roBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Before Fulu
|
||||
// -----------
|
||||
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
resetFlags := flags.Get()
|
||||
gFlags := new(flags.GlobalFlags)
|
||||
gFlags.SubscribeAllDataSubnets = true
|
||||
flags.Init(gFlags)
|
||||
defer flags.Init(resetFlags)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
|
||||
// After Fulu
|
||||
// ----------
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
})
|
||||
|
||||
t.Run("CGC increases after fulu", func(t *testing.T) {
|
||||
service, requirements := minimalTestService(t)
|
||||
err = requirements.db.SaveBlock(ctx, roBlock)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Before Fulu
|
||||
// -----------
|
||||
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earliestStoredSlot, actualEas)
|
||||
require.Equal(t, custodyRequirement, actualCgc)
|
||||
|
||||
// After Fulu
|
||||
// ----------
|
||||
resetFlags := flags.Get()
|
||||
gFlags := new(flags.GlobalFlags)
|
||||
gFlags.SubscribeAllDataSubnets = true
|
||||
flags.Init(gFlags)
|
||||
defer flags.Init(resetFlags)
|
||||
|
||||
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, slot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
|
||||
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, slot, actualEas)
|
||||
require.Equal(t, numberOfCustodyGroups, actualCgc)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -130,6 +130,14 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
    return earliestAvailableSlot, custodyGroupCount, nil
}

func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
    dch.mut.Lock()
    defer dch.mut.Unlock()

    dch.earliestAvailableSlot = earliestAvailableSlot
    return nil
}

func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
    return 0
}
@@ -472,6 +472,36 @@ func (s *ChainService) HasBlock(ctx context.Context, rt [32]byte) bool {
    return s.InitSyncBlockRoots[rt]
}

func (s *ChainService) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
    if s.DB == nil {
        return nil
    }

    count := len(blockRoots)
    availableRoots := make(map[[32]byte]bool, count)
    notInDBRoots := make([][32]byte, 0, count)
    for _, root := range blockRoots {
        if s.DB.HasBlock(ctx, root) {
            availableRoots[root] = true
            continue
        }

        notInDBRoots = append(notInDBRoots, root)
    }

    if s.InitSyncBlockRoots == nil {
        return availableRoots
    }

    for _, root := range notInDBRoots {
        if s.InitSyncBlockRoots[root] {
            availableRoots[root] = true
        }
    }

    return availableRoots
}

// RecentBlockSlot mocks the same method in the chain service.
func (s *ChainService) RecentBlockSlot([32]byte) (primitives.Slot, error) {
    return s.BlockSlot, nil
@@ -3,6 +3,7 @@ package blockchain
 import (
 "context"
 "fmt"
+"slices"

 "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filters"
 "github.com/OffchainLabs/prysm/v6/config/params"

@@ -81,12 +82,10 @@ func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, f
 if err != nil {
 return errors.Wrap(err, "error while retrieving block roots to verify weak subjectivity")
 }
-for _, root := range roots {
-if v.root == root {
-log.Info("Weak subjectivity check has passed!!")
-v.verified = true
-return nil
-}
+if slices.Contains(roots, v.root) {
+log.Info("Weak subjectivity check has passed!!")
+v.verified = true
+return nil
+}
 return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
 }
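A tiny standalone illustration (not from the repository) of the refactor above: `slices.Contains` from the Go 1.21+ standard library replaces the hand-written membership loop. The roots are placeholder values.

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	// Placeholder block roots; [32]byte is comparable, so slices.Contains applies directly.
	roots := [][32]byte{{0x01}, {0x02}, {0x03}}
	target := [32]byte{0x02}
	fmt.Println(slices.Contains(roots, target)) // true
}
```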
@@ -12,6 +12,46 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
|
||||
)
|
||||
|
||||
// ConvertToAltair converts a Phase 0 beacon state to an Altair beacon state.
|
||||
func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
|
||||
epoch := time.CurrentEpoch(state)
|
||||
|
||||
numValidators := state.NumValidators()
|
||||
s := ðpb.BeaconStateAltair{
|
||||
GenesisTime: uint64(state.GenesisTime().Unix()),
|
||||
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
|
||||
Slot: state.Slot(),
|
||||
Fork: ðpb.Fork{
|
||||
PreviousVersion: state.Fork().CurrentVersion,
|
||||
CurrentVersion: params.BeaconConfig().AltairForkVersion,
|
||||
Epoch: epoch,
|
||||
},
|
||||
LatestBlockHeader: state.LatestBlockHeader(),
|
||||
BlockRoots: state.BlockRoots(),
|
||||
StateRoots: state.StateRoots(),
|
||||
HistoricalRoots: state.HistoricalRoots(),
|
||||
Eth1Data: state.Eth1Data(),
|
||||
Eth1DataVotes: state.Eth1DataVotes(),
|
||||
Eth1DepositIndex: state.Eth1DepositIndex(),
|
||||
Validators: state.Validators(),
|
||||
Balances: state.Balances(),
|
||||
RandaoMixes: state.RandaoMixes(),
|
||||
Slashings: state.Slashings(),
|
||||
PreviousEpochParticipation: make([]byte, numValidators),
|
||||
CurrentEpochParticipation: make([]byte, numValidators),
|
||||
JustificationBits: state.JustificationBits(),
|
||||
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
|
||||
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
|
||||
FinalizedCheckpoint: state.FinalizedCheckpoint(),
|
||||
InactivityScores: make([]uint64, numValidators),
|
||||
}
|
||||
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return newState, nil
|
||||
}
|
||||
|
||||
// UpgradeToAltair updates input state to return the version Altair state.
|
||||
//
|
||||
// Spec code:
|
||||
@@ -64,39 +104,7 @@ import (
|
||||
// post.next_sync_committee = get_next_sync_committee(post)
|
||||
// return post
|
||||
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
|
||||
epoch := time.CurrentEpoch(state)
|
||||
|
||||
numValidators := state.NumValidators()
|
||||
s := ðpb.BeaconStateAltair{
|
||||
GenesisTime: uint64(state.GenesisTime().Unix()),
|
||||
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
|
||||
Slot: state.Slot(),
|
||||
Fork: ðpb.Fork{
|
||||
PreviousVersion: state.Fork().CurrentVersion,
|
||||
CurrentVersion: params.BeaconConfig().AltairForkVersion,
|
||||
Epoch: epoch,
|
||||
},
|
||||
LatestBlockHeader: state.LatestBlockHeader(),
|
||||
BlockRoots: state.BlockRoots(),
|
||||
StateRoots: state.StateRoots(),
|
||||
HistoricalRoots: state.HistoricalRoots(),
|
||||
Eth1Data: state.Eth1Data(),
|
||||
Eth1DataVotes: state.Eth1DataVotes(),
|
||||
Eth1DepositIndex: state.Eth1DepositIndex(),
|
||||
Validators: state.Validators(),
|
||||
Balances: state.Balances(),
|
||||
RandaoMixes: state.RandaoMixes(),
|
||||
Slashings: state.Slashings(),
|
||||
PreviousEpochParticipation: make([]byte, numValidators),
|
||||
CurrentEpochParticipation: make([]byte, numValidators),
|
||||
JustificationBits: state.JustificationBits(),
|
||||
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
|
||||
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
|
||||
FinalizedCheckpoint: state.FinalizedCheckpoint(),
|
||||
InactivityScores: make([]uint64, numValidators),
|
||||
}
|
||||
|
||||
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
|
||||
newState, err := ConvertToAltair(state)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -15,6 +15,129 @@ import (
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// ConvertToElectra converts a Deneb beacon state to an Electra beacon state. It does not perform any fork logic.
|
||||
func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
|
||||
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
nextSyncCommittee, err := beaconState.NextSyncCommittee()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
inactivityScores, err := beaconState.InactivityScores()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
txRoot, err := payloadHeader.TransactionsRoot()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
wdRoot, err := payloadHeader.WithdrawalsRoot()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
wi, err := beaconState.NextWithdrawalIndex()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
vi, err := beaconState.NextWithdrawalValidatorIndex()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
summaries, err := beaconState.HistoricalSummaries()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
excessBlobGas, err := payloadHeader.ExcessBlobGas()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
blobGasUsed, err := payloadHeader.BlobGasUsed()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
s := ðpb.BeaconStateElectra{
|
||||
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
|
||||
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
|
||||
Slot: beaconState.Slot(),
|
||||
Fork: ðpb.Fork{
|
||||
PreviousVersion: beaconState.Fork().CurrentVersion,
|
||||
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
|
||||
Epoch: time.CurrentEpoch(beaconState),
|
||||
},
|
||||
LatestBlockHeader: beaconState.LatestBlockHeader(),
|
||||
BlockRoots: beaconState.BlockRoots(),
|
||||
StateRoots: beaconState.StateRoots(),
|
||||
HistoricalRoots: beaconState.HistoricalRoots(),
|
||||
Eth1Data: beaconState.Eth1Data(),
|
||||
Eth1DataVotes: beaconState.Eth1DataVotes(),
|
||||
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
|
||||
Validators: beaconState.Validators(),
|
||||
Balances: beaconState.Balances(),
|
||||
RandaoMixes: beaconState.RandaoMixes(),
|
||||
Slashings: beaconState.Slashings(),
|
||||
PreviousEpochParticipation: prevEpochParticipation,
|
||||
CurrentEpochParticipation: currentEpochParticipation,
|
||||
JustificationBits: beaconState.JustificationBits(),
|
||||
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
|
||||
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
|
||||
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
|
||||
InactivityScores: inactivityScores,
|
||||
CurrentSyncCommittee: currentSyncCommittee,
|
||||
NextSyncCommittee: nextSyncCommittee,
|
||||
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
|
||||
ParentHash: payloadHeader.ParentHash(),
|
||||
FeeRecipient: payloadHeader.FeeRecipient(),
|
||||
StateRoot: payloadHeader.StateRoot(),
|
||||
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
|
||||
LogsBloom: payloadHeader.LogsBloom(),
|
||||
PrevRandao: payloadHeader.PrevRandao(),
|
||||
BlockNumber: payloadHeader.BlockNumber(),
|
||||
GasLimit: payloadHeader.GasLimit(),
|
||||
GasUsed: payloadHeader.GasUsed(),
|
||||
Timestamp: payloadHeader.Timestamp(),
|
||||
ExtraData: payloadHeader.ExtraData(),
|
||||
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
|
||||
BlockHash: payloadHeader.BlockHash(),
|
||||
TransactionsRoot: txRoot,
|
||||
WithdrawalsRoot: wdRoot,
|
||||
ExcessBlobGas: excessBlobGas,
|
||||
BlobGasUsed: blobGasUsed,
|
||||
},
|
||||
NextWithdrawalIndex: wi,
|
||||
NextWithdrawalValidatorIndex: vi,
|
||||
HistoricalSummaries: summaries,
|
||||
|
||||
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
|
||||
DepositBalanceToConsume: 0,
|
||||
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
|
||||
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
|
||||
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
|
||||
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
|
||||
}
|
||||
|
||||
// need to cast the beaconState to use in helper functions
|
||||
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
|
||||
}
|
||||
return post, nil
|
||||
}
|
||||
|
||||
// UpgradeToElectra updates inputs a generic state to return the version Electra state.
|
||||
//
|
||||
// nolint:dupword
|
||||
@@ -126,55 +249,7 @@ import (
|
||||
//
|
||||
// return post
|
||||
func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
|
||||
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
nextSyncCommittee, err := beaconState.NextSyncCommittee()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
inactivityScores, err := beaconState.InactivityScores()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
txRoot, err := payloadHeader.TransactionsRoot()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
wdRoot, err := payloadHeader.WithdrawalsRoot()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
wi, err := beaconState.NextWithdrawalIndex()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
vi, err := beaconState.NextWithdrawalValidatorIndex()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
summaries, err := beaconState.HistoricalSummaries()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
excessBlobGas, err := payloadHeader.ExcessBlobGas()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
blobGasUsed, err := payloadHeader.BlobGasUsed()
|
||||
s, err := ConvertToElectra(beaconState)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -206,97 +281,38 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to get total active balance")
|
||||
}
|
||||
|
||||
s := ðpb.BeaconStateElectra{
|
||||
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
|
||||
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
|
||||
Slot: beaconState.Slot(),
|
||||
Fork: ðpb.Fork{
|
||||
PreviousVersion: beaconState.Fork().CurrentVersion,
|
||||
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
|
||||
Epoch: time.CurrentEpoch(beaconState),
|
||||
},
|
||||
LatestBlockHeader: beaconState.LatestBlockHeader(),
|
||||
BlockRoots: beaconState.BlockRoots(),
|
||||
StateRoots: beaconState.StateRoots(),
|
||||
HistoricalRoots: beaconState.HistoricalRoots(),
|
||||
Eth1Data: beaconState.Eth1Data(),
|
||||
Eth1DataVotes: beaconState.Eth1DataVotes(),
|
||||
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
|
||||
Validators: beaconState.Validators(),
|
||||
Balances: beaconState.Balances(),
|
||||
RandaoMixes: beaconState.RandaoMixes(),
|
||||
Slashings: beaconState.Slashings(),
|
||||
PreviousEpochParticipation: prevEpochParticipation,
|
||||
CurrentEpochParticipation: currentEpochParticipation,
|
||||
JustificationBits: beaconState.JustificationBits(),
|
||||
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
|
||||
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
|
||||
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
|
||||
InactivityScores: inactivityScores,
|
||||
CurrentSyncCommittee: currentSyncCommittee,
|
||||
NextSyncCommittee: nextSyncCommittee,
|
||||
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
|
||||
ParentHash: payloadHeader.ParentHash(),
|
||||
FeeRecipient: payloadHeader.FeeRecipient(),
|
||||
StateRoot: payloadHeader.StateRoot(),
|
||||
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
|
||||
LogsBloom: payloadHeader.LogsBloom(),
|
||||
PrevRandao: payloadHeader.PrevRandao(),
|
||||
BlockNumber: payloadHeader.BlockNumber(),
|
||||
GasLimit: payloadHeader.GasLimit(),
|
||||
GasUsed: payloadHeader.GasUsed(),
|
||||
Timestamp: payloadHeader.Timestamp(),
|
||||
ExtraData: payloadHeader.ExtraData(),
|
||||
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
|
||||
BlockHash: payloadHeader.BlockHash(),
|
||||
TransactionsRoot: txRoot,
|
||||
WithdrawalsRoot: wdRoot,
|
||||
ExcessBlobGas: excessBlobGas,
|
||||
BlobGasUsed: blobGasUsed,
|
||||
},
|
||||
NextWithdrawalIndex: wi,
|
||||
NextWithdrawalValidatorIndex: vi,
|
||||
HistoricalSummaries: summaries,
|
||||
|
||||
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
|
||||
DepositBalanceToConsume: 0,
|
||||
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
|
||||
EarliestExitEpoch: earliestExitEpoch,
|
||||
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
|
||||
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
|
||||
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
|
||||
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
|
||||
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
|
||||
if err := s.SetExitBalanceToConsume(helpers.ActivationExitChurnLimit(primitives.Gwei(tab))); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to set exit balance to consume")
|
||||
}
|
||||
if err := s.SetEarliestExitEpoch(earliestExitEpoch); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to set earliest exit epoch")
|
||||
}
|
||||
if err := s.SetConsolidationBalanceToConsume(helpers.ConsolidationChurnLimit(primitives.Gwei(tab))); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to set consolidation balance to consume")
|
||||
}
|
||||
|
||||
// Sorting preActivationIndices based on a custom criteria
|
||||
vals := s.Validators()
|
||||
sort.Slice(preActivationIndices, func(i, j int) bool {
|
||||
// Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
|
||||
if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
|
||||
if vals[preActivationIndices[i]].ActivationEligibilityEpoch == vals[preActivationIndices[j]].ActivationEligibilityEpoch {
|
||||
return preActivationIndices[i] < preActivationIndices[j]
|
||||
}
|
||||
return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
|
||||
return vals[preActivationIndices[i]].ActivationEligibilityEpoch < vals[preActivationIndices[j]].ActivationEligibilityEpoch
|
||||
})
|
||||
|
||||
// need to cast the beaconState to use in helper functions
|
||||
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
|
||||
}
|
||||
|
||||
for _, index := range preActivationIndices {
|
||||
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
|
||||
if err := QueueEntireBalanceAndResetValidator(s, index); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
|
||||
}
|
||||
}
|
||||
|
||||
// Ensure early adopters of compounding credentials go through the activation churn
|
||||
for _, index := range compoundWithdrawalIndices {
|
||||
if err := QueueExcessActiveBalance(post, index); err != nil {
|
||||
if err := QueueExcessActiveBalance(s, index); err != nil {
|
||||
return nil, errors.Wrap(err, "failed to queue excess active balance")
|
||||
}
|
||||
}
|
||||
|
||||
return post, nil
|
||||
return s, nil
|
||||
}
|
||||
|
||||
@@ -7,6 +7,7 @@ go_library(
    visibility = [
        "//beacon-chain:__subpackages__",
        "//cmd/prysmctl/testnet:__pkg__",
+       "//consensus-types/hdiff:__subpackages__",
        "//testing/spectest:__subpackages__",
        "//validator/client:__pkg__",
    ],

@@ -15,6 +15,7 @@ go_library(
    "//beacon-chain/state:go_default_library",
    "//beacon-chain/state/state-native:go_default_library",
    "//config/params:go_default_library",
+   "//consensus-types/primitives:go_default_library",
    "//monitoring/tracing/trace:go_default_library",
    "//proto/engine/v1:go_default_library",
    "//proto/prysm/v1alpha1:go_default_library",
@@ -8,6 +8,7 @@ import (
 "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
 state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
 "github.com/OffchainLabs/prysm/v6/config/params"
 "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
 enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
 ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
 "github.com/OffchainLabs/prysm/v6/time/slots"

@@ -17,6 +18,25 @@ import (
 // UpgradeToFulu upgrades a generic input state to the Fulu version of the state.
 // https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
 func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
 s, err := ConvertToFulu(beaconState)
 if err != nil {
 return nil, errors.Wrap(err, "could not convert to fulu")
 }
 proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
 if err != nil {
 return nil, err
 }
 pl := make([]primitives.ValidatorIndex, len(proposerLookahead))
 for i, v := range proposerLookahead {
 pl[i] = primitives.ValidatorIndex(v)
 }
 if err := s.SetProposerLookahead(pl); err != nil {
 return nil, errors.Wrap(err, "failed to set proposer lookahead")
 }
 return s, nil
 }

 func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
 currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
 if err != nil {
 return nil, err

@@ -105,11 +125,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
 if err != nil {
 return nil, err
 }
 proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
 if err != nil {
 return nil, err
 }

 s := &ethpb.BeaconStateFulu{
 GenesisTime: uint64(beaconState.GenesisTime().Unix()),
 GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),

@@ -171,14 +186,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
 PendingDeposits: pendingDeposits,
 PendingPartialWithdrawals: pendingPartialWithdrawals,
 PendingConsolidations: pendingConsolidations,
 ProposerLookahead: proposerLookahead,
 }

 // Need to cast the beaconState to use in helper functions
 post, err := state_native.InitializeFromProtoUnsafeFulu(s)
 if err != nil {
 return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
 }

 return post, nil
 return state_native.InitializeFromProtoUnsafeFulu(s)
 }
@@ -401,7 +401,7 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
 return 0, errors.New("empty active indices list")
 }
 hashFunc := hash.CustomSHA256Hasher()
-beaconConfig := params.BeaconConfig()
+cfg := params.BeaconConfig()
 seedBuffer := make([]byte, len(seed)+8)
 copy(seedBuffer, seed[:])

@@ -426,14 +426,14 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
 offset := (i % 16) * 2
 randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8

-if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
+if effectiveBal*fieldparams.MaxRandomValueElectra >= cfg.MaxEffectiveBalanceElectra*randomValue {
 return candidateIndex, nil
 }
 } else {
 binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/32)
 randomByte := hashFunc(seedBuffer)[i%32]

-if effectiveBal*fieldparams.MaxRandomByte >= beaconConfig.MaxEffectiveBalance*uint64(randomByte) {
+if effectiveBal*fieldparams.MaxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
 return candidateIndex, nil
 }
 }
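A hedged sketch of the Electra proposer-acceptance check used above: a candidate is accepted when `effective_balance * MAX_RANDOM_VALUE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_value`. The constants below are the spec values as I understand them; the real code reads them from `fieldparams` and `params`.

```go
package main

import "fmt"

const (
	maxRandomValueElectra      = uint64(1<<16 - 1)        // 65535, 16-bit random sample
	maxEffectiveBalanceElectra = uint64(2048_000_000_000) // 2048 ETH in Gwei (assumed spec value)
)

// accepted reports whether a candidate with the given effective balance passes
// the sampling check for the given 16-bit random value.
func accepted(effectiveBalanceGwei, randomValue uint64) bool {
	return effectiveBalanceGwei*maxRandomValueElectra >= maxEffectiveBalanceElectra*randomValue
}

func main() {
	// A 32 ETH validator is accepted only for small random values (~1/64 of the range).
	fmt.Println(accepted(32_000_000_000, 1000)) // true
	fmt.Println(accepted(32_000_000_000, 2000)) // false
	// A max-effective-balance validator is always accepted.
	fmt.Println(accepted(2048_000_000_000, 65535)) // true
}
```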
@@ -201,14 +201,3 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
 Root: bRoot,
 }, nil
 }

 // MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
 // relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
 // value defined:
 // https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
 // MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
 // detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
 func MinEpochsForBlockRequests() primitives.Epoch {
 return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
 primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
 }

@@ -286,20 +286,3 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
 return beaconState
 }

 func TestMinEpochsForBlockRequests(t *testing.T) {
 helpers.ClearCache()

 params.SetActiveTestCleanup(t, params.MainnetConfig())
 var expected primitives.Epoch = 33024
 // expected value of 33024 via spec commentary:
 // https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
 // MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
 //
 // MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
 //     MIN_VALIDATOR_WITHDRAWABILITY_DELAY
 //     + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
 // )
 //
 // Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
 require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
 }
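For reference, a worked example (editorial, not part of the diff) of the 33024 value quoted in the comments above, using the formula `MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2` with what I understand to be the mainnet parameters.

```go
package main

import "fmt"

func main() {
	const (
		minValidatorWithdrawabilityDelay = uint64(256)   // mainnet value (assumed)
		churnLimitQuotient               = uint64(65536) // mainnet value (assumed)
	)
	// 256 + 65536/2 = 33024 epochs, roughly five months of block history.
	fmt.Println(minValidatorWithdrawabilityDelay + churnLimitQuotient/2) // 33024
}
```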
@@ -89,14 +89,14 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
 // ComputeColumnsForCustodyGroup computes the columns for a given custody group.
 // https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
 func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
-beaconConfig := params.BeaconConfig()
-numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
+cfg := params.BeaconConfig()
+numberOfCustodyGroups := cfg.NumberOfCustodyGroups

 if custodyGroup >= numberOfCustodyGroups {
 return nil, ErrCustodyGroupTooLarge
 }

-numberOfColumns := beaconConfig.NumberOfColumns
+numberOfColumns := cfg.NumberOfColumns

 columnsPerGroup := numberOfColumns / numberOfCustodyGroups

@@ -112,9 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
 // ComputeCustodyGroupForColumn computes the custody group for a given column.
 // It is the reciprocal function of ComputeColumnsForCustodyGroup.
 func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
-beaconConfig := params.BeaconConfig()
-numberOfColumns := beaconConfig.NumberOfColumns
-numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
+cfg := params.BeaconConfig()
+numberOfColumns := cfg.NumberOfColumns
+numberOfCustodyGroups := cfg.NumberOfCustodyGroups

 if columnIndex >= numberOfColumns {
 return 0, ErrIndexTooLarge
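A hedged sketch of the column-to-custody-group mapping implemented above, following the fulu das-core spec as I read it: group `g` custodies columns `g, g+G, g+2G, ...` where `G` is the number of custody groups. The parameter values are illustrative.

```go
package main

import "fmt"

// columnsForCustodyGroup lists the column indices custodied by a group.
func columnsForCustodyGroup(group, numberOfCustodyGroups, numberOfColumns uint64) []uint64 {
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	columns := make([]uint64, 0, columnsPerGroup)
	for i := uint64(0); i < columnsPerGroup; i++ {
		columns = append(columns, numberOfCustodyGroups*i+group)
	}
	return columns
}

func main() {
	// With 128 columns and 64 custody groups (illustrative), group 5 custodies columns 5 and 69.
	fmt.Println(columnsForCustodyGroup(5, 64, 128)) // [5 69]
}
```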
@@ -79,10 +79,30 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
 for _, sidecar := range sidecars {
 for i := range sidecar.Column {
-commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
+var (
+commitment kzg.Bytes48
+cell kzg.Cell
+proof kzg.Bytes48
+)
+
+commitmentBytes := sidecar.KzgCommitments[i]
+cellBytes := sidecar.Column[i]
+proofBytes := sidecar.KzgProofs[i]
+
+if len(commitmentBytes) != len(commitment) ||
+len(cellBytes) != len(cell) ||
+len(proofBytes) != len(proof) {
+return ErrMismatchLength
+}
+
+copy(commitment[:], commitmentBytes)
+copy(cell[:], cellBytes)
+copy(proof[:], proofBytes)
+
+commitments = append(commitments, commitment)
 indices = append(indices, sidecar.Index)
-cells = append(cells, kzg.Cell(sidecar.Column[i]))
-proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
+cells = append(cells, cell)
+proofs = append(proofs, proof)
 }
 }
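A minimal illustration (not from the repository) of the guard added above: copy into a fixed-size array only after checking the source length, so a truncated input yields an error instead of a silently short copy.

```go
package main

import (
	"errors"
	"fmt"
)

var errMismatchLength = errors.New("mismatch length")

// toBytes48 converts a byte slice into a 48-byte array, rejecting wrong sizes.
func toBytes48(b []byte) ([48]byte, error) {
	var out [48]byte
	if len(b) != len(out) {
		return out, errMismatchLength
	}
	copy(out[:], b)
	return out, nil
}

func main() {
	_, err := toBytes48(make([]byte, 47)) // one byte short, like the truncated-column test case
	fmt.Println(err)                      // mismatch length
}
```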
@@ -68,6 +68,14 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
 err := kzg.Start()
 require.NoError(t, err)

 t.Run("size mismatch", func(t *testing.T) {
 sidecars := generateRandomSidecars(t, seed, blobCount)
 sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch

 err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
 require.ErrorIs(t, err, peerdas.ErrMismatchLength)
 })

 t.Run("invalid proof", func(t *testing.T) {
 sidecars := generateRandomSidecars(t, seed, blobCount)
 sidecars[0].Column[0][0]++ // It is OK to overflow
@@ -84,10 +84,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
 totalNodeBalance += validator.EffectiveBalance()
 }

-beaconConfig := params.BeaconConfig()
-numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
-validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
-balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
+cfg := params.BeaconConfig()
+numberOfCustodyGroups := cfg.NumberOfCustodyGroups
+validatorCustodyRequirement := cfg.ValidatorCustodyRequirement
+balancePerAdditionalCustodyGroup := cfg.BalancePerAdditionalCustodyGroup

 count := totalNodeBalance / balancePerAdditionalCustodyGroup
 return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
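A worked example (editorial) of the clamp computed above: `count = total_balance / BalancePerAdditionalCustodyGroup`, clamped to `[ValidatorCustodyRequirement, NumberOfCustodyGroups]`. The parameter values below are illustrative, not read from the config package.

```go
package main

import "fmt"

// validatorsCustodyRequirement mirrors the clamp in the function above.
func validatorsCustodyRequirement(totalNodeBalanceGwei, balancePerGroup, minReq, maxGroups uint64) uint64 {
	count := totalNodeBalanceGwei / balancePerGroup
	return min(max(count, minReq), maxGroups)
}

func main() {
	const (
		balancePerGroup = uint64(32_000_000_000) // 32 ETH per additional custody group (illustrative)
		minReq          = uint64(8)
		maxGroups       = uint64(128)
	)
	fmt.Println(validatorsCustodyRequirement(64_000_000_000, balancePerGroup, minReq, maxGroups))     // 8 (clamped up)
	fmt.Println(validatorsCustodyRequirement(640_000_000_000, balancePerGroup, minReq, maxGroups))    // 20
	fmt.Println(validatorsCustodyRequirement(64_000_000_000_000, balancePerGroup, minReq, maxGroups)) // 128 (clamped down)
}
```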
@@ -196,7 +196,7 @@ func TestAltairCompatible(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestCanUpgradeTo(t *testing.T) {
|
||||
beaconConfig := params.BeaconConfig()
|
||||
cfg := params.BeaconConfig()
|
||||
|
||||
outerTestCases := []struct {
|
||||
name string
|
||||
@@ -205,32 +205,32 @@ func TestCanUpgradeTo(t *testing.T) {
|
||||
}{
|
||||
{
|
||||
name: "Altair",
|
||||
forkEpoch: &beaconConfig.AltairForkEpoch,
|
||||
forkEpoch: &cfg.AltairForkEpoch,
|
||||
upgradeFunc: time.CanUpgradeToAltair,
|
||||
},
|
||||
{
|
||||
name: "Bellatrix",
|
||||
forkEpoch: &beaconConfig.BellatrixForkEpoch,
|
||||
forkEpoch: &cfg.BellatrixForkEpoch,
|
||||
upgradeFunc: time.CanUpgradeToBellatrix,
|
||||
},
|
||||
{
|
||||
name: "Capella",
|
||||
forkEpoch: &beaconConfig.CapellaForkEpoch,
|
||||
forkEpoch: &cfg.CapellaForkEpoch,
|
||||
upgradeFunc: time.CanUpgradeToCapella,
|
||||
},
|
||||
{
|
||||
name: "Deneb",
|
||||
forkEpoch: &beaconConfig.DenebForkEpoch,
|
||||
forkEpoch: &cfg.DenebForkEpoch,
|
||||
upgradeFunc: time.CanUpgradeToDeneb,
|
||||
},
|
||||
{
|
||||
name: "Electra",
|
||||
forkEpoch: &beaconConfig.ElectraForkEpoch,
|
||||
forkEpoch: &cfg.ElectraForkEpoch,
|
||||
upgradeFunc: time.CanUpgradeToElectra,
|
||||
},
|
||||
{
|
||||
name: "Fulu",
|
||||
forkEpoch: &beaconConfig.FuluForkEpoch,
|
||||
forkEpoch: &cfg.FuluForkEpoch,
|
||||
upgradeFunc: time.CanUpgradeToFulu,
|
||||
},
|
||||
}
|
||||
@@ -238,7 +238,7 @@ func TestCanUpgradeTo(t *testing.T) {
|
||||
for _, otc := range outerTestCases {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
*otc.forkEpoch = 5
|
||||
params.OverrideBeaconConfig(beaconConfig)
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
innerTestCases := []struct {
|
||||
name string
|
||||
|
||||
@@ -200,6 +200,7 @@ func (dcs *DataColumnStorage) WarmCache() {
 fileMetadata, err := extractFileMetadata(path)
 if err != nil {
 log.WithError(err).Error("Error encountered while extracting file metadata")
+return nil
 }

 // Open the data column filesystem file.

@@ -988,8 +989,8 @@ func filePath(root [fieldparams.RootLength]byte, epoch primitives.Epoch) string
 // extractFileMetadata extracts the metadata from a file path.
 // If the path is not a leaf, it returns nil.
 func extractFileMetadata(path string) (*fileMetadata, error) {
-// Is this Windows friendly?
-parts := strings.Split(path, "/")
+// Use filepath.Separator to handle both Windows (\) and Unix (/) path separators
+parts := strings.Split(path, string(filepath.Separator))
 if len(parts) != 3 {
 return nil, errors.Errorf("unexpected file %s", path)
 }
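A small standalone sketch (not from the repository) of the portability fix above: build paths with `filepath.Join` and split on `filepath.Separator`, so the same code works on both Windows and Unix. The example path is illustrative.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	path := filepath.Join("12", "1234", "0xabc.sszs") // uses the OS-specific separator
	parts := strings.Split(path, string(filepath.Separator))
	fmt.Println(len(parts), parts) // 3 [12 1234 0xabc.sszs]
}
```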
@@ -3,6 +3,7 @@ package filesystem
|
||||
import (
|
||||
"encoding/binary"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
|
||||
@@ -725,3 +726,37 @@ func TestPrune(t *testing.T) {
|
||||
require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
|
||||
})
|
||||
}
|
||||
|
||||
func TestExtractFileMetadata(t *testing.T) {
|
||||
t.Run("Unix", func(t *testing.T) {
|
||||
// Test with Unix-style path separators (/)
|
||||
path := "12/1234/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
|
||||
metadata, err := extractFileMetadata(path)
|
||||
if filepath.Separator == '/' {
|
||||
// On Unix systems, this should succeed
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, uint64(12), metadata.period)
|
||||
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
|
||||
return
|
||||
}
|
||||
|
||||
// On Windows systems, this should fail because it uses the wrong separator
|
||||
require.NotNil(t, err)
|
||||
})
|
||||
|
||||
t.Run("Windows", func(t *testing.T) {
|
||||
// Test with Windows-style path separators (\)
|
||||
path := "12\\1234\\0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
|
||||
metadata, err := extractFileMetadata(path)
|
||||
if filepath.Separator == '\\' {
|
||||
// On Windows systems, this should succeed
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, uint64(12), metadata.period)
|
||||
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
|
||||
return
|
||||
}
|
||||
|
||||
// On Unix systems, this should fail because it uses the wrong separator
|
||||
require.NotNil(t, err)
|
||||
})
|
||||
}
|
||||
|
||||
@@ -212,7 +212,8 @@ func filterNoop(_ string) bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func isRootDir(p string) bool {
|
||||
// IsBlockRootDir returns true if the path segment looks like a block root directory.
|
||||
func IsBlockRootDir(p string) bool {
|
||||
dir := filepath.Base(p)
|
||||
return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
|
||||
}
|
||||
|
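`isRootDir` becomes the exported `IsBlockRootDir`; the predicate itself is unchanged: the base name must be a `0x`-prefixed, 66-character hex string (2 + 64 hex characters for a 32-byte root). A standalone sketch of that check, reusing the example root from the constant comment below:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// rootStringLen is len("0x") + 64 hex characters for a 32-byte block root.
const rootStringLen = 66

// IsBlockRootDir reports whether the path's base segment looks like a block
// root directory, e.g. "0x0002fb4d...".
func IsBlockRootDir(p string) bool {
	dir := filepath.Base(p)
	return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}

func main() {
	fmt.Println(IsBlockRootDir("by-epoch/12/1234/0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555")) // true
	fmt.Println(IsBlockRootDir("by-epoch/12/1234"))                                                                    // false
}
```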
||||
@@ -188,7 +188,7 @@ func TestListDir(t *testing.T) {
|
||||
name: "root filter",
|
||||
dirPath: ".",
|
||||
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
|
||||
filter: isRootDir,
|
||||
filter: IsBlockRootDir,
|
||||
},
|
||||
{
|
||||
name: "ssz filter",
|
||||
|
||||
@@ -19,12 +19,14 @@ import (
|
||||
const (
|
||||
// Full root in directory will be 66 chars, eg:
|
||||
// >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
|
||||
rootStringLen = 66
|
||||
sszExt = "ssz"
|
||||
partExt = "part"
|
||||
periodicEpochBaseDir = "by-epoch"
|
||||
rootStringLen = 66
|
||||
sszExt = "ssz"
|
||||
partExt = "part"
|
||||
)
|
||||
|
||||
// PeriodicEpochBaseDir is the name of the base directory for the by-epoch layout.
|
||||
const PeriodicEpochBaseDir = "by-epoch"
|
||||
|
||||
const (
|
||||
LayoutNameFlat = "flat"
|
||||
LayoutNameByEpoch = "by-epoch"
|
||||
@@ -130,11 +132,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
|
||||
if iter.atEOF() {
|
||||
return errLayoutNotDetected
|
||||
}
|
||||
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
|
||||
lastMoved := ""
|
||||
parentDirs := make(map[string]bool) // this map should have < 65k keys by design
|
||||
moved := 0
|
||||
dc := newDirCleaner()
|
||||
migrationLogged := false
|
||||
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
|
||||
if err != nil {
|
||||
if errors.Is(err, errIdentFailure) {
|
||||
@@ -146,6 +148,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
|
||||
}
|
||||
return errors.Wrapf(errMigrationFailure, "failed to iterate previous layout structure while migrating blobs, err=%s", err.Error())
|
||||
}
|
||||
if !migrationLogged {
|
||||
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).
|
||||
Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
|
||||
migrationLogged = true
|
||||
}
|
||||
src := from.dir(ident)
|
||||
target := to.dir(ident)
|
||||
if src != lastMoved {
|
||||
|
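The logging change above makes the "Migrating blob filesystem layout" banner lazy: it is only emitted once the iterator yields a first ident, so nodes with nothing to migrate no longer log a misleading migration message. A generic sketch of that log-once-inside-the-loop pattern; the `move` callback stands in for the real per-ident migration step:

```go
package main

import "log"

// migrate logs the banner only when there is actually something to move, and
// only once, by guarding the log call with a flag inside the loop.
func migrate(items []string, move func(string) error) error {
	logged := false
	for _, it := range items {
		if !logged {
			log.Println("Migrating blob filesystem layout; this one-time operation can take a while")
			logged = true
		}
		if err := move(it); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	noop := func(string) error { return nil }
	_ = migrate(nil, noop)                // nothing to migrate: no banner
	_ = migrate([]string{"a", "b"}, noop) // banner printed exactly once
}
```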
||||
@@ -34,7 +34,7 @@ func (l *periodicEpochLayout) name() string {
|
||||
|
||||
func (l *periodicEpochLayout) blockParentDirs(ident blobIdent) []string {
|
||||
return []string{
|
||||
periodicEpochBaseDir,
|
||||
PeriodicEpochBaseDir,
|
||||
l.periodDir(ident.epoch),
|
||||
l.epochDir(ident.epoch),
|
||||
}
|
||||
@@ -50,28 +50,28 @@ func (l *periodicEpochLayout) notify(ident blobIdent) error {
|
||||
|
||||
// If before == 0, it won't be used as a filter and all idents will be returned.
|
||||
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
|
||||
_, err := l.fs.Stat(periodicEpochBaseDir)
|
||||
_, err := l.fs.Stat(PeriodicEpochBaseDir)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
|
||||
}
|
||||
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
|
||||
return nil, errors.Wrapf(err, "error reading path %s", PeriodicEpochBaseDir)
|
||||
}
|
||||
// iterate root, which should have directories named by "period"
|
||||
entries, err := listDir(l.fs, periodicEpochBaseDir)
|
||||
entries, err := listDir(l.fs, PeriodicEpochBaseDir)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
|
||||
return nil, errors.Wrapf(err, "failed to list %s", PeriodicEpochBaseDir)
|
||||
}
|
||||
|
||||
return &identIterator{
|
||||
fs: l.fs,
|
||||
path: periodicEpochBaseDir,
|
||||
path: PeriodicEpochBaseDir,
|
||||
// Please see comments on the `layers` field in `identIterator`` if the role of the layers is unclear.
|
||||
layers: []layoutLayer{
|
||||
{populateIdent: populateNoop, filter: isBeforePeriod(before)},
|
||||
{populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
|
||||
{populateIdent: populateRoot, filter: isRootDir}, // extract root from path
|
||||
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
|
||||
{populateIdent: populateRoot, filter: IsBlockRootDir}, // extract root from path
|
||||
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
|
||||
},
|
||||
entries: entries,
|
||||
}, nil
|
||||
@@ -98,7 +98,7 @@ func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
|
||||
}
|
||||
|
||||
func (l *periodicEpochLayout) periodDir(epoch primitives.Epoch) string {
|
||||
return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
|
||||
return filepath.Join(PeriodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
|
||||
}
|
||||
|
||||
func (l *periodicEpochLayout) sszPath(n blobIdent) string {
|
||||
|
||||
@@ -30,7 +30,7 @@ func (l *flatLayout) iterateIdents(before primitives.Epoch) (*identIterator, err
|
||||
if os.IsNotExist(err) {
|
||||
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
|
||||
}
|
||||
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
|
||||
return nil, errors.Wrap(err, "error reading blob base dir")
|
||||
}
|
||||
entries, err := listDir(l.fs, ".")
|
||||
if err != nil {
|
||||
@@ -199,10 +199,10 @@ func (l *flatSlotReader) isSSZAndBefore(fname string) bool {
|
||||
// the epoch can be determined.
|
||||
func isFlatCachedAndBefore(cache *blobStorageSummaryCache, before primitives.Epoch) func(string) bool {
|
||||
if before == 0 {
|
||||
return isRootDir
|
||||
return IsBlockRootDir
|
||||
}
|
||||
return func(p string) bool {
|
||||
if !isRootDir(p) {
|
||||
if !IsBlockRootDir(p) {
|
||||
return false
|
||||
}
|
||||
root, err := rootFromPath(p)
|
||||
|
||||
@@ -28,6 +28,7 @@ type ReadOnlyDatabase interface {
|
||||
BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error)
|
||||
BlockRootsBySlot(ctx context.Context, slot primitives.Slot) (bool, [][32]byte, error)
|
||||
HasBlock(ctx context.Context, blockRoot [32]byte) bool
|
||||
AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool
|
||||
GenesisBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
|
||||
GenesisBlockRoot(ctx context.Context) ([32]byte, error)
|
||||
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
|
||||
@@ -129,6 +130,7 @@ type NoHeadAccessDatabase interface {
|
||||
// Custody operations.
|
||||
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
|
||||
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
|
||||
UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
|
||||
|
||||
// P2P Metadata operations.
|
||||
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
|
||||
|
||||
@@ -336,6 +336,42 @@ func (s *Store) HasBlock(ctx context.Context, blockRoot [32]byte) bool {
|
||||
return exists
|
||||
}
|
||||
|
||||
// AvailableBlocks returns a set of roots indicating which blocks corresponding to `blockRoots` are available in the storage.
|
||||
func (s *Store) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
|
||||
_, span := trace.StartSpan(ctx, "BeaconDB.AvailableBlocks")
|
||||
defer span.End()
|
||||
|
||||
count := len(blockRoots)
|
||||
availableRoots := make(map[[32]byte]bool, count)
|
||||
|
||||
// First, check the cache for each block root.
|
||||
notInCacheRoots := make([][32]byte, 0, count)
|
||||
for _, root := range blockRoots {
|
||||
if v, ok := s.blockCache.Get(string(root[:])); v != nil && ok {
|
||||
availableRoots[root] = true
|
||||
continue
|
||||
}
|
||||
|
||||
notInCacheRoots = append(notInCacheRoots, root)
|
||||
}
|
||||
|
||||
// Next, check the database for the remaining block roots.
|
||||
if err := s.db.View(func(tx *bolt.Tx) error {
|
||||
bkt := tx.Bucket(blocksBucket)
|
||||
for _, root := range notInCacheRoots {
|
||||
if bkt.Get(root[:]) != nil {
|
||||
availableRoots[root] = true
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}); err != nil {
|
||||
panic(err) // lint:nopanic -- View never returns an error.
|
||||
}
|
||||
|
||||
return availableRoots
|
||||
}
|
||||
|
||||
// BlocksBySlot retrieves a list of beacon blocks and its respective roots by slot.
|
||||
func (s *Store) BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlocksBySlot")
|
||||
|
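The new `AvailableBlocks` answers "which of these roots do we have?" in two tiers: consult the in-memory block cache first, then make a single read-only pass over the blocks bucket for the roots that missed. A simplified, self-contained sketch of that lookup, with plain maps standing in for the cache and the bolt bucket:

```go
package main

import "fmt"

type root [32]byte

type store struct {
	cache map[root]bool   // stand-in for the in-memory block cache
	disk  map[root][]byte // stand-in for the bolt blocks bucket
}

// AvailableBlocks reports which of the given roots are present, checking the
// cache first and falling back to one pass over persistent storage.
func (s *store) AvailableBlocks(roots []root) map[root]bool {
	available := make(map[root]bool, len(roots))
	missed := make([]root, 0, len(roots))
	for _, r := range roots {
		if s.cache[r] {
			available[r] = true
			continue
		}
		missed = append(missed, r)
	}
	// Single pass over storage for everything the cache did not know about.
	for _, r := range missed {
		if _, ok := s.disk[r]; ok {
			available[r] = true
		}
	}
	return available
}

func main() {
	var a, b, c root
	a[0], b[0], c[0] = 1, 2, 3
	s := &store{
		cache: map[root]bool{a: true},
		disk:  map[root][]byte{c: {0x01}},
	}
	fmt.Println(s.AvailableBlocks([]root{a, b, c})) // a and c present, b absent
}
```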
||||
@@ -656,6 +656,44 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestAvailableBlocks(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
db := setupDB(t)
|
||||
|
||||
b0, b1, b2 := util.NewBeaconBlock(), util.NewBeaconBlock(), util.NewBeaconBlock()
|
||||
b0.Block.Slot, b1.Block.Slot, b2.Block.Slot = 10, 20, 30
|
||||
|
||||
sb0, err := blocks.NewSignedBeaconBlock(b0)
|
||||
require.NoError(t, err)
|
||||
r0, err := b0.Block.HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save b0 but remove it from cache.
|
||||
err = db.SaveBlock(ctx, sb0)
|
||||
require.NoError(t, err)
|
||||
db.blockCache.Del(string(r0[:]))
|
||||
|
||||
// b1 is not saved at all.
|
||||
r1, err := b1.Block.HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save b2 in cache and DB.
|
||||
sb2, err := blocks.NewSignedBeaconBlock(b2)
|
||||
require.NoError(t, err)
|
||||
r2, err := b2.Block.HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, db.SaveBlock(ctx, sb2))
|
||||
require.NoError(t, err)
|
||||
|
||||
expected := map[[32]byte]bool{r0: true, r2: true}
|
||||
actual := db.AvailableBlocks(ctx, [][32]byte{r0, r1, r2})
|
||||
|
||||
require.Equal(t, len(expected), len(actual))
|
||||
for i := range expected {
|
||||
require.Equal(t, true, actual[i])
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_Blocks_FiltersCorrectly(t *testing.T) {
|
||||
for _, tt := range blockTests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
|
||||
@@ -132,6 +132,6 @@ func recoverStateSummary(ctx context.Context, tx *bolt.Tx, root []byte) error {
 	if err != nil {
 		return err
 	}
-	summaryBucket := tx.Bucket(stateBucket)
+	summaryBucket := tx.Bucket(stateSummaryBucket)
 	return summaryBucket.Put(root, summaryEnc)
 }
||||
|
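The one-line bucket fix above matters because bbolt buckets are independent key spaces: a summary written into the state bucket is invisible to `HasStateSummary`/`StateSummary`, which read the summary bucket. A small runnable illustration using `go.etcd.io/bbolt` directly; the bucket names are illustrative, not Prysm's actual keys:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"

	bolt "go.etcd.io/bbolt"
)

// Illustrative bucket names; Prysm's real bucket keys differ.
var (
	stateBucket        = []byte("state")
	stateSummaryBucket = []byte("state-summary")
)

func main() {
	dir, err := os.MkdirTemp("", "bucketdemo")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	db, err := bolt.Open(filepath.Join(dir, "demo.db"), 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	root := []byte("block-root")
	summary := []byte("summary-bytes")

	// Write the summary into the summary bucket (the corrected behavior).
	if err := db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists(stateSummaryBucket)
		if err != nil {
			return err
		}
		return bkt.Put(root, summary)
	}); err != nil {
		log.Fatal(err)
	}

	// Readers that look in the summary bucket now see it; the state bucket was never touched.
	_ = db.View(func(tx *bolt.Tx) error {
		fmt.Println("summary visible:", tx.Bucket(stateSummaryBucket).Get(root) != nil) // true
		fmt.Println("state bucket created:", tx.Bucket(stateBucket) != nil)             // false
		return nil
	})
}
```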
||||
@@ -137,3 +137,32 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {
|
||||
|
||||
require.ErrorContains(t, errMissingStateForCheckpoint.Error(), db.SaveFinalizedCheckpoint(ctx, cp))
|
||||
}
|
||||
|
||||
// Regression test: verify that saving a checkpoint triggers recovery which writes
|
||||
// the state summary into the correct stateSummaryBucket so that HasStateSummary/StateSummary see it.
|
||||
func TestRecoverStateSummary_WritesToStateSummaryBucket(t *testing.T) {
|
||||
db := setupDB(t)
|
||||
ctx := t.Context()
|
||||
|
||||
// Create a block without saving a state or summary, so recovery is needed.
|
||||
blk := util.HydrateSignedBeaconBlock(ðpb.SignedBeaconBlock{})
|
||||
root, err := blk.Block.HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, db.SaveBlock(ctx, wsb))
|
||||
|
||||
// Precondition: summary not present yet.
|
||||
require.Equal(t, false, db.HasStateSummary(ctx, root))
|
||||
|
||||
// Saving justified checkpoint should trigger recovery path calling recoverStateSummary.
|
||||
cp := ðpb.Checkpoint{Epoch: 2, Root: root[:]}
|
||||
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
|
||||
|
||||
// Postcondition: summary is visible via the public summary APIs (which read stateSummaryBucket).
|
||||
require.Equal(t, true, db.HasStateSummary(ctx, root))
|
||||
summary, err := db.StateSummary(ctx, root)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, summary)
|
||||
assert.DeepEqual(t, ðpb.StateSummary{Slot: blk.Block.Slot, Root: root[:]}, summary)
|
||||
}
|
||||
|
||||
@@ -2,16 +2,19 @@ package kv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
)
|
||||
|
||||
// UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
|
||||
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
|
||||
// In this case, it also updates the earliest available slot with the provided value.
|
||||
// It returns the (potentially updated) custody group count and earliest available slot.
|
||||
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
|
||||
@@ -70,6 +73,79 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
|
||||
return storedEarliestAvailableSlot, storedGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot updates the earliest available slot.
|
||||
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
|
||||
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
|
||||
defer span.End()
|
||||
|
||||
storedEarliestAvailableSlot := primitives.Slot(0)
|
||||
if err := s.db.Update(func(tx *bolt.Tx) error {
|
||||
// Retrieve the custody bucket.
|
||||
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "create custody bucket")
|
||||
}
|
||||
|
||||
// Retrieve the stored earliest available slot.
|
||||
storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
|
||||
if len(storedEarliestAvailableSlotBytes) != 0 {
|
||||
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
|
||||
}
|
||||
|
||||
// Allow decrease (for backfill scenarios)
|
||||
if earliestAvailableSlot <= storedEarliestAvailableSlot {
|
||||
storedEarliestAvailableSlot = earliestAvailableSlot
|
||||
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
|
||||
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
|
||||
return errors.Wrap(err, "put earliest available slot")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
|
||||
// This ensures we don't voluntarily refuse to serve mandatory block data
|
||||
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
|
||||
currentSlot := slots.CurrentSlot(genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
// Calculate the minimum required epoch (or 0 if we're early in the chain)
|
||||
minRequiredEpoch := primitives.Epoch(0)
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
}
|
||||
|
||||
// Convert to slot to ensure we compare at slot-level granularity
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "calculate minimum required slot")
|
||||
}
|
||||
|
||||
// Prevent any increase that would put earliest available slot beyond the minimum required slot
|
||||
if earliestAvailableSlot > minRequiredSlot {
|
||||
return errors.Errorf(
|
||||
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
|
||||
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
|
||||
minRequiredSlot, minRequiredEpoch,
|
||||
)
|
||||
}
|
||||
|
||||
storedEarliestAvailableSlot = earliestAvailableSlot
|
||||
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
|
||||
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
|
||||
return errors.Wrap(err, "put earliest available slot")
|
||||
}
|
||||
|
||||
return nil
|
||||
}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
|
||||
// only if `subscribed` is `true`.
|
||||
// It returns the previous subscription status.
|
||||
|
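The new `UpdateEarliestAvailableSlot` DB helper enforces one rule: the stored slot may always move backwards (backfill found older data), but it may only move forwards while staying at or before the start of epoch `currentEpoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS`, so the node never advertises less history than it is obliged to serve. A stripped-down sketch of just that comparison, using plain integers instead of `primitives.Slot`/`Epoch` and hard-coding the mainnet constant:

```go
package main

import (
	"errors"
	"fmt"
)

const (
	slotsPerEpoch             = 32
	minEpochsForBlockRequests = 33024 // MIN_EPOCHS_FOR_BLOCK_REQUESTS on mainnet
)

// validateEarliestSlotUpdate returns the value that should be stored and an
// error if the proposed increase would violate the retention requirement.
func validateEarliestSlotUpdate(stored, proposed, currentSlot uint64) (uint64, error) {
	// Decreases (and no-ops) are always allowed, e.g. after backfill.
	if proposed <= stored {
		return proposed, nil
	}
	currentEpoch := currentSlot / slotsPerEpoch
	minRequiredEpoch := uint64(0)
	if currentEpoch > minEpochsForBlockRequests {
		minRequiredEpoch = currentEpoch - minEpochsForBlockRequests
	}
	minRequiredSlot := minRequiredEpoch * slotsPerEpoch
	// Increases must not move past the oldest slot we are required to serve.
	if proposed > minRequiredSlot {
		return stored, errors.New("cannot increase earliest available slot beyond the minimum required slot")
	}
	return proposed, nil
}

func main() {
	// Early in the chain every increase above slot 0 is rejected.
	fmt.Println(validateEarliestSlotUpdate(100, 1000, 50*slotsPerEpoch))
	// Backfill can always move the value backwards.
	fmt.Println(validateEarliestSlotUpdate(300, 200, 50*slotsPerEpoch))
}
```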
||||
@@ -3,10 +3,13 @@ package kv
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
)
|
||||
|
||||
@@ -132,6 +135,131 @@ func TestUpdateCustodyInfo(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestUpdateEarliestAvailableSlot(t *testing.T) {
|
||||
ctx := t.Context()
|
||||
|
||||
t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot = primitives.Slot(300)
|
||||
initialCount = uint64(10)
|
||||
earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
|
||||
)
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
// Initialize custody info
|
||||
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Update with a lower slot (should update for backfill)
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
|
||||
require.NoError(t, err)
|
||||
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, earliestSlot, storedSlot)
|
||||
require.Equal(t, initialCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
|
||||
db := setupDB(t)
|
||||
|
||||
// Calculate the current slot and minimum required slot based on actual current time
|
||||
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
|
||||
currentSlot := slots.CurrentSlot(genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
var minRequiredEpoch primitives.Epoch
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
} else {
|
||||
minRequiredEpoch = 0
|
||||
}
|
||||
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Initial setup: set earliest slot well before minRequiredSlot
|
||||
const groupCount = uint64(5)
|
||||
initialSlot := primitives.Slot(1000)
|
||||
|
||||
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
|
||||
validSlot := minRequiredSlot - 100
|
||||
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Verify the database was updated
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, validSlot, storedSlot)
|
||||
require.Equal(t, groupCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
|
||||
db := setupDB(t)
|
||||
|
||||
// Calculate the current slot and minimum required slot based on actual current time
|
||||
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
|
||||
currentSlot := slots.CurrentSlot(genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
var minRequiredEpoch primitives.Epoch
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
} else {
|
||||
minRequiredEpoch = 0
|
||||
}
|
||||
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Initial setup: set a valid earliest slot (well before minRequiredSlot)
|
||||
const initialCount = uint64(5)
|
||||
initialSlot := primitives.Slot(1000)
|
||||
|
||||
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Try to set earliest slot beyond the minimum required slot
|
||||
invalidSlot := minRequiredSlot + 100
|
||||
|
||||
// This should fail
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
|
||||
require.ErrorContains(t, "cannot increase earliest available slot", err)
|
||||
require.ErrorContains(t, "exceeds minimum required slot", err)
|
||||
|
||||
// Verify the database wasn't updated
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, initialSlot, storedSlot)
|
||||
require.Equal(t, initialCount, storedCount)
|
||||
})
|
||||
|
||||
t.Run("no change when slot equals current slot", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot = primitives.Slot(100)
|
||||
initialCount = uint64(5)
|
||||
)
|
||||
|
||||
db := setupDB(t)
|
||||
|
||||
// Initialize custody info
|
||||
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Update with the same slot
|
||||
err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
|
||||
require.NoError(t, err)
|
||||
|
||||
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
|
||||
require.Equal(t, initialSlot, storedSlot)
|
||||
require.Equal(t, initialCount, storedCount)
|
||||
})
|
||||
}
|
||||
|
||||
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
|
||||
@@ -8,7 +8,6 @@ go_library(
|
||||
"//beacon-chain:__subpackages__",
|
||||
],
|
||||
deps = [
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
"//beacon-chain/db:go_default_library",
|
||||
"//beacon-chain/db/iface:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
@@ -29,6 +28,7 @@ go_test(
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
"//time/slots/testing:go_default_library",
|
||||
|
||||
@@ -4,7 +4,6 @@ import (
|
||||
"context"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
@@ -25,17 +24,24 @@ const (
|
||||
defaultNumBatchesToPrune = 15
|
||||
)
|
||||
|
||||
// custodyUpdater is a tiny interface that p2p service implements; kept here to avoid
|
||||
// importing the p2p package and creating a cycle.
|
||||
type custodyUpdater interface {
|
||||
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
|
||||
}
|
||||
|
||||
type ServiceOption func(*Service)
|
||||
|
||||
// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
|
||||
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
|
||||
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
|
||||
return func(s *Service) {
|
||||
defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
|
||||
defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1
|
||||
if retentionEpochs < defaultRetentionEpochs {
|
||||
log.WithField("userEpochs", retentionEpochs).
|
||||
WithField("minRequired", defaultRetentionEpochs).
|
||||
Warn("Retention period too low, using minimum required value")
|
||||
Warn("Retention period too low, ignoring and using minimum required value")
|
||||
retentionEpochs = defaultRetentionEpochs
|
||||
}
|
||||
|
||||
s.ps = pruneStartSlotFunc(retentionEpochs)
|
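The pruner needs to notify the p2p service of the new earliest available slot, but importing the p2p package from the pruner would create an import cycle, hence the small locally-declared `custodyUpdater` interface that `New` now requires. A self-contained sketch of that pattern with simplified stand-in types; the real constructor takes several more arguments:

```go
package main

import (
	"errors"
	"fmt"
)

type Slot uint64

// custodyUpdater is the one method the pruner needs from the p2p service.
// Declaring it locally (instead of importing the p2p package) breaks the
// import cycle while still letting the real service satisfy it.
type custodyUpdater interface {
	UpdateEarliestAvailableSlot(earliestAvailableSlot Slot) error
}

type prunerService struct {
	custody custodyUpdater
}

func newPruner(custody custodyUpdater) (*prunerService, error) {
	if custody == nil {
		return nil, errors.New("custody updater is required for pruner but was not provided")
	}
	return &prunerService{custody: custody}, nil
}

// fakeP2P stands in for the real p2p service (or a test mock).
type fakeP2P struct{ last Slot }

func (f *fakeP2P) UpdateEarliestAvailableSlot(s Slot) error { f.last = s; return nil }

func main() {
	p2p := &fakeP2P{}
	p, err := newPruner(p2p)
	if err != nil {
		panic(err)
	}
	_ = p.custody.UpdateEarliestAvailableSlot(17)
	fmt.Println(p2p.last) // 17
}
```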
||||
@@ -58,17 +64,23 @@ type Service struct {
|
||||
slotTicker slots.Ticker
|
||||
backfillWaiter func() error
|
||||
initSyncWaiter func() error
|
||||
custody custodyUpdater
|
||||
}
|
||||
|
||||
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
|
||||
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
|
||||
if custody == nil {
|
||||
return nil, errors.New("custody updater is required for pruner but was not provided")
|
||||
}
|
||||
|
||||
p := &Service{
|
||||
ctx: ctx,
|
||||
db: db,
|
||||
ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
|
||||
ps: pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
|
||||
done: make(chan struct{}),
|
||||
slotTicker: slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
|
||||
initSyncWaiter: initSyncWaiter,
|
||||
backfillWaiter: backfillWaiter,
|
||||
custody: custody,
|
||||
}
|
||||
|
||||
for _, o := range opts {
|
||||
@@ -157,17 +169,45 @@ func (p *Service) prune(slot primitives.Slot) error {
|
||||
return errors.Wrap(err, "failed to prune batches")
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"prunedUpto": pruneUpto,
|
||||
"duration": time.Since(tt),
|
||||
"currentSlot": slot,
|
||||
"batchSize": defaultPrunableBatchSize,
|
||||
"numBatches": numBatches,
|
||||
}).Debug("Successfully pruned chain data")
|
||||
earliestAvailableSlot := pruneUpto + 1
|
||||
|
||||
// Update pruning checkpoint.
|
||||
p.prunedUpto = pruneUpto
|
||||
|
||||
// Update the earliest available slot after pruning
|
||||
if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
|
||||
return errors.Wrap(err, "update earliest available slot")
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"prunedUpto": pruneUpto,
|
||||
"earliestAvailableSlot": earliestAvailableSlot,
|
||||
"duration": time.Since(tt),
|
||||
"currentSlot": slot,
|
||||
"batchSize": defaultPrunableBatchSize,
|
||||
"numBatches": numBatches,
|
||||
}).Debug("Successfully pruned chain data")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
|
||||
// and also persists it to the database.
|
||||
func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
if !params.FuluEnabled() {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Update the p2p in-memory state
|
||||
if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
|
||||
return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
|
||||
}
|
||||
|
||||
// Persist to database to ensure it survives restarts
|
||||
if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
|
||||
return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
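Putting the pruner changes together: after pruning up to slot N, everything from N+1 onward is still available, and that value is pushed both to the in-memory p2p state and to the database so it survives a restart (the real helper also skips the update entirely before the Fulu fork). A rough sketch of that fan-out with stand-in sinks:

```go
package main

import "fmt"

// slotSink abstracts "something that records the earliest available slot":
// in the real code, the p2p service and the beacon DB.
type slotSink interface {
	UpdateEarliestAvailableSlot(slot uint64) error
}

type memorySink struct{ slot uint64 }

func (m *memorySink) UpdateEarliestAvailableSlot(s uint64) error { m.slot = s; return nil }

type dbSink struct{ slot uint64 }

func (d *dbSink) UpdateEarliestAvailableSlot(s uint64) error { d.slot = s; return nil }

// afterPrune records that blocks up to prunedUpto are gone, so availability
// now starts at prunedUpto+1, and fans that value out to every sink.
func afterPrune(prunedUpto uint64, sinks ...slotSink) error {
	earliest := prunedUpto + 1
	for _, s := range sinks {
		if err := s.UpdateEarliestAvailableSlot(earliest); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	mem, db := &memorySink{}, &dbSink{}
	if err := afterPrune(16, mem, db); err != nil {
		panic(err)
	}
	fmt.Println(mem.slot, db.slot) // 17 17
}
```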
||||
@@ -2,6 +2,7 @@ package pruner
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -15,6 +16,7 @@ import (
|
||||
|
||||
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/assert"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
logTest "github.com/sirupsen/logrus/hooks/test"
|
||||
)
|
||||
@@ -62,7 +64,9 @@ func TestPruner_PruningConditions(t *testing.T) {
|
||||
if !tt.backfillCompleted {
|
||||
backfillWaiter = waiter
|
||||
}
|
||||
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
|
||||
require.NoError(t, err)
|
||||
|
||||
go p.Start()
|
||||
@@ -97,12 +101,14 @@ func TestPruner_PruneSuccess(t *testing.T) {
|
||||
retentionEpochs := primitives.Epoch(2)
|
||||
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithSlotTicker(slotTicker),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
@@ -133,3 +139,242 @@ func TestPruner_PruneSuccess(t *testing.T) {
|
||||
|
||||
require.NoError(t, p.Stop())
|
||||
}
|
||||
|
||||
// Mock custody updater for testing
|
||||
type mockCustodyUpdater struct {
|
||||
custodyGroupCount uint64
|
||||
earliestAvailableSlot primitives.Slot
|
||||
updateCallCount int
|
||||
}
|
||||
|
||||
func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
m.updateCallCount++
|
||||
m.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
logrus.SetLevel(logrus.DebugLevel)
|
||||
hook := logTest.NewGlobal()
|
||||
ctx, cancel := context.WithCancel(t.Context())
|
||||
defer cancel()
|
||||
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
retentionEpochs := primitives.Epoch(2)
|
||||
|
||||
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
|
||||
|
||||
// Create mock custody updater
|
||||
mockCustody := &mockCustodyUpdater{
|
||||
custodyGroupCount: 4,
|
||||
earliestAvailableSlot: 0,
|
||||
}
|
||||
|
||||
// Create pruner with mock custody updater
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithSlotTicker(slotTicker),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
p.ps = func(current primitives.Slot) primitives.Slot {
|
||||
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
|
||||
}
|
||||
|
||||
// Save some blocks to be pruned
|
||||
for i := primitives.Slot(1); i <= 32; i++ {
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = i
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
|
||||
}
|
||||
|
||||
// Start pruner and trigger at slot 80 (middle of 3rd epoch)
|
||||
go p.Start()
|
||||
currentSlot := primitives.Slot(80)
|
||||
slotTicker.Channel <- currentSlot
|
||||
|
||||
// Wait for pruning to complete
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
// Check that UpdateEarliestAvailableSlot was called
|
||||
assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")
|
||||
|
||||
// The earliest available slot should be pruneUpto + 1
|
||||
// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16
|
||||
// So earliest available slot should be 16 + 1 = 17
|
||||
expectedEarliestSlot := primitives.Slot(17)
|
||||
require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
|
||||
require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")
|
||||
|
||||
// Verify that no error was logged
|
||||
for _, entry := range hook.AllEntries() {
|
||||
if entry.Level == logrus.ErrorLevel {
|
||||
t.Errorf("Unexpected error log: %s", entry.Message)
|
||||
}
|
||||
}
|
||||
|
||||
require.NoError(t, p.Stop())
|
||||
}
|
||||
|
||||
// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
|
||||
type mockCustodyUpdaterWithUpdateError struct {
|
||||
updateCallCount int
|
||||
}
|
||||
|
||||
func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
m.updateCallCount++
|
||||
return errors.New("failed to update earliest available slot")
|
||||
}
|
||||
|
||||
func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
|
||||
// Use minimal config for testing
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.MinimalSpecConfig()
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
ctx := t.Context()
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
|
||||
// Get the minimum required epochs (272 + 1 = 273 for minimal)
|
||||
minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests + 1)
|
||||
|
||||
// Use a slot that's guaranteed to be after the minimum retention period
|
||||
currentSlot := primitives.Slot(minRequiredEpochs+100) * (params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
userRetentionEpochs primitives.Epoch
|
||||
expectedPruneSlot primitives.Slot
|
||||
description string
|
||||
}{
|
||||
{
|
||||
name: "User value below minimum - should use minimum",
|
||||
userRetentionEpochs: 2, // Way below minimum
|
||||
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
|
||||
description: "Should use minimum when user value is too low",
|
||||
},
|
||||
{
|
||||
name: "User value at minimum",
|
||||
userRetentionEpochs: minRequiredEpochs,
|
||||
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
|
||||
description: "Should use user value when at minimum",
|
||||
},
|
||||
{
|
||||
name: "User value above minimum",
|
||||
userRetentionEpochs: minRequiredEpochs + 10,
|
||||
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs+10)*params.BeaconConfig().SlotsPerEpoch,
|
||||
description: "Should use user value when above minimum",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
logrus.SetLevel(logrus.WarnLevel)
|
||||
|
||||
mockCustody := &mockCustodyUpdater{}
|
||||
// Create pruner with retention period
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithRetentionPeriod(tt.userRetentionEpochs),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Test the pruning calculation
|
||||
pruneUptoSlot := p.ps(currentSlot)
|
||||
|
||||
// Verify the pruning slot
|
||||
assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)
|
||||
|
||||
// Check if warning was logged when value was too low
|
||||
if tt.userRetentionEpochs < minRequiredEpochs {
|
||||
assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestPruner_UpdateEarliestSlotError(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
logrus.SetLevel(logrus.DebugLevel)
|
||||
hook := logTest.NewGlobal()
|
||||
ctx, cancel := context.WithCancel(t.Context())
|
||||
defer cancel()
|
||||
|
||||
beaconDB := dbtest.SetupDB(t)
|
||||
retentionEpochs := primitives.Epoch(2)
|
||||
|
||||
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
|
||||
|
||||
// Create mock custody updater that returns an error for UpdateEarliestAvailableSlot
|
||||
mockCustody := &mockCustodyUpdaterWithUpdateError{}
|
||||
|
||||
// Create pruner with mock custody updater
|
||||
p, err := New(
|
||||
ctx,
|
||||
beaconDB,
|
||||
time.Now(),
|
||||
nil,
|
||||
nil,
|
||||
mockCustody,
|
||||
WithSlotTicker(slotTicker),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
p.ps = func(current primitives.Slot) primitives.Slot {
|
||||
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
|
||||
}
|
||||
|
||||
// Save some blocks to be pruned
|
||||
for i := primitives.Slot(1); i <= 32; i++ {
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = i
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
|
||||
}
|
||||
|
||||
// Start pruner and trigger at slot 80
|
||||
go p.Start()
|
||||
currentSlot := primitives.Slot(80)
|
||||
slotTicker.Channel <- currentSlot
|
||||
|
||||
// Wait for pruning to complete
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
// Should have called UpdateEarliestAvailableSlot
|
||||
assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")
|
||||
|
||||
// Check that error was logged by the prune function
|
||||
found := false
|
||||
for _, entry := range hook.AllEntries() {
|
||||
if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")
|
||||
|
||||
require.NoError(t, p.Stop())
|
||||
}
|
||||
|
||||
@@ -6,6 +6,7 @@ go_library(
|
||||
"cache.go",
|
||||
"helpers.go",
|
||||
"lightclient.go",
|
||||
"log.go",
|
||||
"store.go",
|
||||
],
|
||||
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client",
|
||||
|
||||
beacon-chain/light-client/log.go (new file, 5 lines)
@@ -0,0 +1,5 @@
|
||||
package light_client
|
||||
|
||||
import "github.com/sirupsen/logrus"
|
||||
|
||||
var log = logrus.WithField("prefix", "light-client")
|
||||
@@ -14,7 +14,6 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/time/slots"
|
||||
"github.com/pkg/errors"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
|
||||
|
||||
@@ -2,11 +2,13 @@ package node
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
|
||||
"github.com/OffchainLabs/prysm/v6/cmd"
|
||||
"github.com/OffchainLabs/prysm/v6/genesis"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/urfave/cli/v2"
|
||||
)
|
||||
@@ -36,6 +38,22 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
|
||||
return kv.NewKVStore(ctx, db.DatabasePath())
|
||||
}
|
||||
|
||||
func (c *dbClearer) clearGenesis(dir string) error {
|
||||
if !c.shouldProceed() {
|
||||
return nil
|
||||
}
|
||||
|
||||
gfile, err := genesis.FindStateFile(dir)
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := os.Remove(gfile.FilePath()); err != nil {
|
||||
return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
|
||||
if !c.shouldProceed() {
|
||||
return nil
|
||||
|
||||
@@ -12,6 +12,7 @@ import (
|
||||
"os"
|
||||
"os/signal"
|
||||
"path/filepath"
|
||||
"slices"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
@@ -177,6 +178,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
|
||||
}
|
||||
beacon.db = kvdb
|
||||
|
||||
if err := dbClearer.clearGenesis(dataDir); err != nil {
|
||||
return nil, errors.Wrap(err, "could not clear genesis state")
|
||||
}
|
||||
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
|
||||
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
|
||||
return nil, errors.Wrap(err, "could not initialize genesis state")
|
||||
@@ -1105,6 +1109,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
|
||||
genesis,
|
||||
initSyncWaiter(cliCtx.Context, b.initialSyncComplete),
|
||||
backfillService.WaitForCompletion,
|
||||
b.fetchP2P(),
|
||||
opts...,
|
||||
)
|
||||
if err != nil {
|
||||
@@ -1131,10 +1136,8 @@ func (b *BeaconNode) registerLightClientStore() {
|
||||
|
||||
func hasNetworkFlag(cliCtx *cli.Context) bool {
|
||||
for _, flag := range features.NetworkFlags {
|
||||
for _, name := range flag.Names() {
|
||||
if cliCtx.IsSet(name) {
|
||||
return true
|
||||
}
|
||||
if slices.ContainsFunc(flag.Names(), cliCtx.IsSet) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
|
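`hasNetworkFlag` is simplified with `slices.ContainsFunc` (Go 1.21+), which replaces the nested "loop over names, return on the first `IsSet` hit" with a single predicate call. A tiny standalone example of the equivalence:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	names := []string{"mainnet", "sepolia", "holesky"}
	isSet := func(name string) bool { return name == "sepolia" }

	// Equivalent to: for _, n := range names { if isSet(n) { return true } }; return false
	fmt.Println(slices.ContainsFunc(names, isSet)) // true
}
```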
||||
@@ -10,11 +10,13 @@ import (
|
||||
|
||||
// pruneExpired prunes attestations pool on every slot interval.
|
||||
func (s *Service) pruneExpired() {
|
||||
ticker := time.NewTicker(s.cfg.pruneInterval)
|
||||
defer ticker.Stop()
|
||||
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
|
||||
offset := time.Duration(secondsPerSlot-1) * time.Second
|
||||
slotTicker := slots.NewSlotTickerWithOffset(s.genesisTime, offset, secondsPerSlot)
|
||||
defer slotTicker.Done()
|
||||
for {
|
||||
select {
|
||||
case <-ticker.C:
|
||||
case <-slotTicker.C():
|
||||
s.pruneExpiredAtts()
|
||||
s.updateMetrics()
|
||||
case <-s.ctx.Done():
|
||||
|
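The attestation-pool pruner above moves from a fixed-interval `time.Ticker` to a slot ticker with an offset of `secondsPerSlot - 1`, so expired attestations are pruned once per slot, one second before the next slot begins. A minimal stand-in for that kind of offset slot ticker (not Prysm's `slots.NewSlotTickerWithOffset`; the slot time is shortened so the demo finishes quickly):

```go
package main

import (
	"fmt"
	"time"
)

// slotTicker fires once per slot at a fixed offset into the slot and sends the
// slot number on out. It is a toy stand-in for a real slot ticker.
func slotTicker(genesis time.Time, secondsPerSlot uint64, offset time.Duration, out chan<- uint64) {
	slotDuration := time.Duration(secondsPerSlot) * time.Second
	for slot := uint64(0); ; slot++ {
		fireAt := genesis.Add(time.Duration(slot)*slotDuration + offset)
		time.Sleep(time.Until(fireAt))
		out <- slot
	}
}

func main() {
	const secondsPerSlot = 2 // shortened for the demo; mainnet uses 12
	// Fire one second before the next slot starts, like the pruner's offset.
	offset := time.Duration(secondsPerSlot-1) * time.Second

	ch := make(chan uint64)
	go slotTicker(time.Now(), secondsPerSlot, offset, ch)
	fmt.Println("pruning expired attestations at slot", <-ch)
}
```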
||||
@@ -17,7 +17,9 @@ import (
|
||||
)
|
||||
|
||||
func TestPruneExpired_Ticker(t *testing.T) {
|
||||
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
|
||||
// Need timeout longer than the offset (secondsPerSlot - 1) + some buffer
|
||||
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot+5) * time.Second
|
||||
ctx, cancel := context.WithTimeout(t.Context(), timeout)
|
||||
defer cancel()
|
||||
|
||||
s, err := NewService(ctx, &Config{
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
// This is the default queue size used if we have specified an invalid one.
|
||||
@@ -63,12 +64,17 @@ func (cfg *Config) connManagerLowHigh() (int, int) {
|
||||
return low, high
|
||||
}
|
||||
|
||||
// validateConfig validates whether the values provided are accurate and will set
|
||||
// the appropriate values for those that are invalid.
|
||||
func validateConfig(cfg *Config) *Config {
|
||||
if cfg.QueueSize == 0 {
|
||||
log.Warnf("Invalid pubsub queue size of %d initialized, setting the quese size as %d instead", cfg.QueueSize, defaultPubsubQueueSize)
|
||||
cfg.QueueSize = defaultPubsubQueueSize
|
||||
// validateConfig validates whether the provided config has valid values and sets
|
||||
// the invalid ones to default.
|
||||
func validateConfig(cfg *Config) {
|
||||
if cfg.QueueSize > 0 {
|
||||
return
|
||||
}
|
||||
return cfg
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"queueSize": cfg.QueueSize,
|
||||
"default": defaultPubsubQueueSize,
|
||||
}).Warning("Invalid pubsub queue size, setting the queue size to the default value")
|
||||
|
||||
cfg.QueueSize = defaultPubsubQueueSize
|
||||
}
|
||||
|
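`validateConfig` now mutates the config in place with an early return for the already-valid case, and logs structured fields instead of a formatted string. A compact sketch of that shape; the default value here is illustrative:

```go
package main

import (
	"fmt"
	"log"
)

// defaultPubsubQueueSize is an illustrative default, not necessarily Prysm's.
const defaultPubsubQueueSize = 600

type Config struct {
	QueueSize uint64
}

// validateConfig fixes invalid values in place; callers keep using the same
// *Config instead of re-assigning a returned one.
func validateConfig(cfg *Config) {
	if cfg.QueueSize > 0 {
		return // already valid
	}
	log.Printf("Invalid pubsub queue size %d, using default %d", cfg.QueueSize, defaultPubsubQueueSize)
	cfg.QueueSize = defaultPubsubQueueSize
}

func main() {
	cfg := &Config{}
	validateConfig(cfg)
	fmt.Println(cfg.QueueSize) // 600
}
```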
||||
@@ -115,6 +115,57 @@ func (s *Service) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
|
||||
return earliestAvailableSlot, custodyGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot updates the earliest available slot.
|
||||
//
|
||||
// IMPORTANT: This function should only be called when Fulu is enabled. The caller is responsible
|
||||
// for checking params.FuluEnabled() before calling this function.
|
||||
func (s *Service) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
s.custodyInfoLock.Lock()
|
||||
defer s.custodyInfoLock.Unlock()
|
||||
|
||||
if s.custodyInfo == nil {
|
||||
return errors.New("no custody info available")
|
||||
}
|
||||
|
||||
currentSlot := slots.CurrentSlot(s.genesisTime)
|
||||
currentEpoch := slots.ToEpoch(currentSlot)
|
||||
|
||||
// Allow decrease (for backfill scenarios)
|
||||
if earliestAvailableSlot < s.custodyInfo.earliestAvailableSlot {
|
||||
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
|
||||
// This ensures we don't voluntarily refuse to serve mandatory block data
|
||||
// This check applies regardless of whether we're early or late in the chain
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
|
||||
// Calculate the minimum required epoch (or 0 if we're early in the chain)
|
||||
minRequiredEpoch := primitives.Epoch(0)
|
||||
if currentEpoch > minEpochsForBlocks {
|
||||
minRequiredEpoch = currentEpoch - minEpochsForBlocks
|
||||
}
|
||||
|
||||
// Convert to slot to ensure we compare at slot-level granularity, not epoch-level
|
||||
// This prevents allowing increases to slots within minRequiredEpoch that are after its first slot
|
||||
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "epoch start")
|
||||
}
|
||||
|
||||
// Prevent any increase that would put earliest slot beyond the minimum required slot
|
||||
if earliestAvailableSlot > s.custodyInfo.earliestAvailableSlot && earliestAvailableSlot > minRequiredSlot {
|
||||
return errors.Errorf(
|
||||
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
|
||||
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot), minRequiredSlot, minRequiredEpoch,
|
||||
)
|
||||
}
|
||||
|
||||
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
|
||||
// It first tries to get the custody group count from the peer's metadata,
|
||||
// then falls back to the ENR value if the metadata is not available, then
|
||||
@@ -208,11 +259,11 @@ func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
 }

 func fuluForkSlot() (primitives.Slot, error) {
-	beaconConfig := params.BeaconConfig()
+	cfg := params.BeaconConfig()

-	fuluForkEpoch := beaconConfig.FuluForkEpoch
-	if fuluForkEpoch == beaconConfig.FarFutureEpoch {
-		return beaconConfig.FarFutureSlot, nil
+	fuluForkEpoch := cfg.FuluForkEpoch
+	if fuluForkEpoch == cfg.FarFutureEpoch {
+		return cfg.FarFutureSlot, nil
 	}

 	forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
|
||||
@@ -167,6 +168,148 @@ func TestUpdateCustodyInfo(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateEarliestAvailableSlot(t *testing.T) {
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
t.Run("Valid update", func(t *testing.T) {
|
||||
const (
|
||||
initialSlot primitives.Slot = 50
|
||||
newSlot primitives.Slot = 100
|
||||
groupCount uint64 = 5
|
||||
)
|
||||
|
||||
// Set up a scenario where we're far enough in the chain that increasing to newSlot is valid
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks + 100 // Well beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
service := &Service{
|
||||
// Set genesis time in the past so currentSlot is the "current" slot
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: initialSlot,
|
||||
groupCount: groupCount,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(newSlot)
|
||||
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, newSlot, service.custodyInfo.earliestAvailableSlot)
|
||||
require.Equal(t, groupCount, service.custodyInfo.groupCount) // Should preserve group count
|
||||
})
|
||||
|
||||
t.Run("Earlier slot - allowed for backfill", func(t *testing.T) {
|
||||
const initialSlot primitives.Slot = 100
|
||||
const earlierSlot primitives.Slot = 50
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now(),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: initialSlot,
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(earlierSlot)
|
||||
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, earlierSlot, service.custodyInfo.earliestAvailableSlot) // Should decrease for backfill
|
||||
})
|
||||
|
||||
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - late in chain", func(t *testing.T) {
|
||||
// Set current time far enough in the future to have a meaningful MIN_EPOCHS_FOR_BLOCK_REQUESTS period
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks + 100 // Well beyond the minimum
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Calculate the minimum allowed epoch
|
||||
minRequiredEpoch := currentEpoch - minEpochsForBlocks
|
||||
minRequiredSlot := primitives.Slot(minRequiredEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Try to set earliest slot to a value within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period (should fail)
|
||||
attemptedSlot := minRequiredSlot + 1000 // Within the mandatory retention period
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: minRequiredSlot - 100, // Current value is before the min required
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
|
||||
|
||||
require.NotNil(t, err)
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
|
||||
})
|
||||
|
||||
t.Run("Prevent increase at epoch boundary - slot precision matters", func(t *testing.T) {
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks + 976 // Current epoch
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
minRequiredEpoch := currentEpoch - minEpochsForBlocks // = 976
|
||||
storedEarliestSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) - 232 // Before minRequired
|
||||
|
||||
// Try to set earliest to slot 8 of the minRequiredEpoch (should fail with slot comparison)
|
||||
attemptedSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) + 8
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: storedEarliestSlot,
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
|
||||
|
||||
require.NotNil(t, err, "Should prevent increasing earliest slot beyond the minimum required SLOT (not just epoch)")
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
|
||||
})
|
||||
|
||||
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - early in chain", func(t *testing.T) {
|
||||
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
|
||||
currentEpoch := minEpochsForBlocks - 10 // Early in chain, BEFORE we have MIN_EPOCHS_FOR_BLOCK_REQUESTS of history
|
||||
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
|
||||
|
||||
// Current earliest slot is at slot 100
|
||||
currentEarliestSlot := primitives.Slot(100)
|
||||
|
||||
// Try to increase earliest slot to slot 1000 (which would be within the mandatory window from currentSlot)
|
||||
attemptedSlot := primitives.Slot(1000)
|
||||
|
||||
service := &Service{
|
||||
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
|
||||
custodyInfo: &custodyInfo{
|
||||
earliestAvailableSlot: currentEarliestSlot,
|
||||
groupCount: 5,
|
||||
},
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
|
||||
|
||||
require.NotNil(t, err, "Should prevent increasing earliest slot within the mandatory retention window, even early in chain")
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
|
||||
})
|
||||
|
||||
t.Run("Nil custody info - should return error", func(t *testing.T) {
|
||||
service := &Service{
|
||||
genesisTime: time.Now(),
|
||||
custodyInfo: nil, // No custody info set
|
||||
}
|
||||
|
||||
err := service.UpdateEarliestAvailableSlot(100)
|
||||
|
||||
require.NotNil(t, err)
|
||||
require.Equal(t, true, strings.Contains(err.Error(), "no custody info available"))
|
||||
})
|
||||
}
|
||||
|
||||
func TestCustodyGroupCountFromPeer(t *testing.T) {
|
||||
const (
|
||||
expectedENR uint64 = 7
|
||||
|
||||
@@ -2,6 +2,7 @@ package p2p
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"math"
|
||||
"net"
|
||||
"reflect"
|
||||
@@ -106,18 +107,26 @@ func peerScoringParams(colocationWhitelist []*net.IPNet) (*pubsub.PeerScoreParam
|
||||
}
|
||||
|
||||
func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, error) {
|
||||
activeValidators, err := s.retrieveActiveValidators()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
switch {
|
||||
case strings.Contains(topic, GossipBlockMessage):
|
||||
return defaultBlockTopicParams(), nil
|
||||
case strings.Contains(topic, GossipAggregateAndProofMessage):
|
||||
activeValidators, err := s.retrieveActiveValidators()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to compute active validator count for topic %s: %w", GossipAggregateAndProofMessage, err)
|
||||
}
|
||||
return defaultAggregateTopicParams(activeValidators), nil
|
||||
case strings.Contains(topic, GossipAttestationMessage):
|
||||
activeValidators, err := s.retrieveActiveValidators()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to compute active validator count for topic %s: %w", GossipAttestationMessage, err)
|
||||
}
|
||||
return defaultAggregateSubnetTopicParams(activeValidators), nil
|
||||
case strings.Contains(topic, GossipSyncCommitteeMessage):
|
||||
activeValidators, err := s.retrieveActiveValidators()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to compute active validator count for topic %s: %w", GossipSyncCommitteeMessage, err)
|
||||
}
|
||||
return defaultSyncSubnetTopicParams(activeValidators), nil
|
||||
case strings.Contains(topic, GossipContributionAndProofMessage):
|
||||
return defaultSyncContributionTopicParams(), nil
|
||||
@@ -142,6 +151,8 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
|
||||
}
|
||||
|
||||
func (s *Service) retrieveActiveValidators() (uint64, error) {
|
||||
s.activeValidatorCountLock.Lock()
|
||||
defer s.activeValidatorCountLock.Unlock()
|
||||
if s.activeValidatorCount != 0 {
|
||||
return s.activeValidatorCount, nil
|
||||
}
|
||||
|
||||
@@ -126,6 +126,7 @@ type (
|
||||
EarliestAvailableSlot(ctx context.Context) (primitives.Slot, error)
|
||||
CustodyGroupCount(ctx context.Context) (uint64, error)
|
||||
UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
|
||||
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
|
||||
CustodyGroupCountFromPeer(peer.ID) uint64
|
||||
}
|
||||
)
|
||||
|
||||
@@ -3,6 +3,7 @@ package p2p
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
|
||||
"github.com/libp2p/go-libp2p/core/peer"
|
||||
"github.com/libp2p/go-libp2p/core/peerstore"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
@@ -26,12 +27,25 @@ var (
|
||||
Help: "The number of peers in a given state.",
|
||||
},
|
||||
[]string{"state"})
|
||||
p2pMaxPeers = promauto.NewGauge(prometheus.GaugeOpts{
|
||||
Name: "p2p_max_peers",
|
||||
Help: "The target maximum number of peers.",
|
||||
})
|
||||
p2pPeerCountDirectionType = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "p2p_peer_count_direction_type",
|
||||
Help: "The number of peers in a given direction and type.",
|
||||
},
|
||||
[]string{"direction", "type"})
|
||||
connectedPeersCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "connected_libp2p_peers",
|
||||
Help: "Tracks the total number of connected libp2p peers by agent string",
|
||||
},
|
||||
[]string{"agent"},
|
||||
)
|
||||
minimumPeersPerSubnet = promauto.NewGauge(prometheus.GaugeOpts{
|
||||
Name: "p2p_minimum_peers_per_subnet",
|
||||
Help: "The minimum number of peers to connect to per subnet",
|
||||
})
|
||||
avgScoreConnectedClients = promauto.NewGaugeVec(prometheus.GaugeOpts{
|
||||
Name: "connected_libp2p_peers_average_scores",
|
||||
Help: "Tracks the overall p2p scores of connected libp2p peers by agent string",
|
||||
@@ -174,18 +188,26 @@ var (
|
||||
)
|
||||
|
||||
func (s *Service) updateMetrics() {
|
||||
store := s.Host().Peerstore()
|
||||
connectedPeers := s.peers.Connected()
|
||||
|
||||
p2pPeerCount.WithLabelValues("Connected").Set(float64(len(connectedPeers)))
|
||||
p2pPeerCount.WithLabelValues("Disconnected").Set(float64(len(s.peers.Disconnected())))
|
||||
p2pPeerCount.WithLabelValues("Connecting").Set(float64(len(s.peers.Connecting())))
|
||||
p2pPeerCount.WithLabelValues("Disconnecting").Set(float64(len(s.peers.Disconnecting())))
|
||||
p2pPeerCount.WithLabelValues("Bad").Set(float64(len(s.peers.Bad())))
|
||||
|
||||
store := s.Host().Peerstore()
|
||||
numConnectedPeersByClient := make(map[string]float64)
|
||||
upperTCP := strings.ToUpper(string(peers.TCP))
|
||||
upperQUIC := strings.ToUpper(string(peers.QUIC))
|
||||
|
||||
p2pPeerCountDirectionType.WithLabelValues("inbound", upperTCP).Set(float64(len(s.peers.InboundConnectedWithProtocol(peers.TCP))))
|
||||
p2pPeerCountDirectionType.WithLabelValues("inbound", upperQUIC).Set(float64(len(s.peers.InboundConnectedWithProtocol(peers.QUIC))))
|
||||
p2pPeerCountDirectionType.WithLabelValues("outbound", upperTCP).Set(float64(len(s.peers.OutboundConnectedWithProtocol(peers.TCP))))
|
||||
p2pPeerCountDirectionType.WithLabelValues("outbound", upperQUIC).Set(float64(len(s.peers.OutboundConnectedWithProtocol(peers.QUIC))))
|
||||
|
||||
connectedPeersCountByClient := make(map[string]float64)
|
||||
peerScoresByClient := make(map[string][]float64)
|
||||
for i := 0; i < len(connectedPeers); i++ {
|
||||
p := connectedPeers[i]
|
||||
for _, p := range connectedPeers {
|
||||
pid, err := peer.Decode(p.String())
|
||||
if err != nil {
|
||||
log.WithError(err).Debug("Could not decode peer string")
|
||||
@@ -193,16 +215,18 @@ func (s *Service) updateMetrics() {
|
||||
}
|
||||
|
||||
foundName := agentFromPid(pid, store)
|
||||
numConnectedPeersByClient[foundName] += 1
|
||||
connectedPeersCountByClient[foundName] += 1
|
||||
|
||||
// Get peer scoring data.
|
||||
overallScore := s.peers.Scorers().Score(pid)
|
||||
peerScoresByClient[foundName] = append(peerScoresByClient[foundName], overallScore)
|
||||
}
|
||||
|
||||
connectedPeersCount.Reset() // Clear out previous results.
|
||||
for agent, total := range numConnectedPeersByClient {
|
||||
for agent, total := range connectedPeersCountByClient {
|
||||
connectedPeersCount.WithLabelValues(agent).Set(total)
|
||||
}
|
||||
|
||||
avgScoreConnectedClients.Reset() // Clear out previous results.
|
||||
for agent, scoringData := range peerScoresByClient {
|
||||
avgScore := average(scoringData)
|
||||
|
||||
@@ -25,6 +25,7 @@ package peers
|
||||
import (
|
||||
"context"
|
||||
"net"
|
||||
"slices"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
@@ -81,29 +82,31 @@ const (
|
||||
type InternetProtocol string
|
||||
|
||||
const (
|
||||
TCP = "tcp"
|
||||
QUIC = "quic"
|
||||
TCP = InternetProtocol("tcp")
|
||||
QUIC = InternetProtocol("quic")
|
||||
)
|
||||
|
||||
// Status is the structure holding the peer status information.
|
||||
type Status struct {
|
||||
ctx context.Context
|
||||
scorers *scorers.Service
|
||||
store *peerdata.Store
|
||||
ipTracker map[string]uint64
|
||||
rand *rand.Rand
|
||||
ipColocationWhitelist []*net.IPNet
|
||||
}
|
||||
type (
|
||||
// Status is the structure holding the peer status information.
|
||||
Status struct {
|
||||
ctx context.Context
|
||||
scorers *scorers.Service
|
||||
store *peerdata.Store
|
||||
ipTracker map[string]uint64
|
||||
rand *rand.Rand
|
||||
ipColocationWhitelist []*net.IPNet
|
||||
}
|
||||
|
||||
// StatusConfig represents peer status service params.
|
||||
type StatusConfig struct {
|
||||
// PeerLimit specifies the maximum number of concurrent peers expected to connect to the node.
|
||||
PeerLimit int
|
||||
// ScorerParams holds peer scorer configuration params.
|
||||
ScorerParams *scorers.Config
|
||||
// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
|
||||
IPColocationWhitelist []*net.IPNet
|
||||
}
|
||||
// StatusConfig represents peer status service params.
|
||||
StatusConfig struct {
|
||||
// PeerLimit specifies the maximum number of concurrent peers expected to connect to the node.
|
||||
PeerLimit int
|
||||
// ScorerParams holds peer scorer configuration params.
|
||||
ScorerParams *scorers.Config
|
||||
// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
|
||||
IPColocationWhitelist []*net.IPNet
|
||||
}
|
||||
)
|
||||
|
||||
// NewStatus creates a new status entity.
|
||||
func NewStatus(ctx context.Context, config *StatusConfig) *Status {
|
||||
@@ -304,11 +307,8 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
|
||||
connectedStatus := peerData.ConnState == Connecting || peerData.ConnState == Connected
|
||||
if connectedStatus && peerData.MetaData != nil && !peerData.MetaData.IsNil() && peerData.MetaData.AttnetsBitfield() != nil {
|
||||
indices := indicesFromBitfield(peerData.MetaData.AttnetsBitfield())
|
||||
for _, idx := range indices {
|
||||
if idx == index {
|
||||
peers = append(peers, pid)
|
||||
break
|
||||
}
|
||||
if slices.Contains(indices, index) {
|
||||
peers = append(peers, pid)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -345,17 +345,17 @@ func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
|
||||
return "", errors.Errorf("%s: %s", invalidRPCMessageType, msg)
|
||||
}
|
||||
|
||||
beaconConfig := params.BeaconConfig()
|
||||
cfg := params.BeaconConfig()
|
||||
|
||||
// Check if the message is to be updated in fulu.
|
||||
if epoch >= beaconConfig.FuluForkEpoch {
|
||||
if epoch >= cfg.FuluForkEpoch {
|
||||
if version, ok := fuluMapping[msg]; ok {
|
||||
return protocolPrefix + msg + version, nil
|
||||
}
|
||||
}
|
||||
|
||||
// Check if the message is to be updated in altair.
|
||||
if epoch >= beaconConfig.AltairForkEpoch {
|
||||
if epoch >= cfg.AltairForkEpoch {
|
||||
if version, ok := altairMapping[msg]; ok {
|
||||
return protocolPrefix + msg + version, nil
|
||||
}
|
||||
|
||||
@@ -14,6 +14,7 @@ import (
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
|
||||
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
|
||||
"github.com/OffchainLabs/prysm/v6/config/features"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
@@ -64,35 +65,36 @@ var (
|
||||
|
||||
// Service for managing peer to peer (p2p) networking.
|
||||
type Service struct {
|
||||
started bool
|
||||
isPreGenesis bool
|
||||
pingMethod func(ctx context.Context, id peer.ID) error
|
||||
pingMethodLock sync.RWMutex
|
||||
cancel context.CancelFunc
|
||||
cfg *Config
|
||||
peers *peers.Status
|
||||
addrFilter *multiaddr.Filters
|
||||
ipLimiter *leakybucket.Collector
|
||||
privKey *ecdsa.PrivateKey
|
||||
metaData metadata.Metadata
|
||||
pubsub *pubsub.PubSub
|
||||
joinedTopics map[string]*pubsub.Topic
|
||||
joinedTopicsLock sync.RWMutex
|
||||
subnetsLock map[uint64]*sync.RWMutex
|
||||
subnetsLockLock sync.Mutex // Lock access to subnetsLock
|
||||
initializationLock sync.Mutex
|
||||
dv5Listener ListenerRebooter
|
||||
startupErr error
|
||||
ctx context.Context
|
||||
host host.Host
|
||||
genesisTime time.Time
|
||||
genesisValidatorsRoot []byte
|
||||
activeValidatorCount uint64
|
||||
peerDisconnectionTime *cache.Cache
|
||||
custodyInfo *custodyInfo
|
||||
custodyInfoLock sync.RWMutex // Lock access to custodyInfo
|
||||
custodyInfoSet chan struct{}
|
||||
allForkDigests map[[4]byte]struct{}
|
||||
started bool
|
||||
isPreGenesis bool
|
||||
pingMethod func(ctx context.Context, id peer.ID) error
|
||||
pingMethodLock sync.RWMutex
|
||||
cancel context.CancelFunc
|
||||
cfg *Config
|
||||
peers *peers.Status
|
||||
addrFilter *multiaddr.Filters
|
||||
ipLimiter *leakybucket.Collector
|
||||
privKey *ecdsa.PrivateKey
|
||||
metaData metadata.Metadata
|
||||
pubsub *pubsub.PubSub
|
||||
joinedTopics map[string]*pubsub.Topic
|
||||
joinedTopicsLock sync.RWMutex
|
||||
subnetsLock map[uint64]*sync.RWMutex
|
||||
subnetsLockLock sync.Mutex // Lock access to subnetsLock
|
||||
initializationLock sync.Mutex
|
||||
dv5Listener ListenerRebooter
|
||||
startupErr error
|
||||
ctx context.Context
|
||||
host host.Host
|
||||
genesisTime time.Time
|
||||
genesisValidatorsRoot []byte
|
||||
activeValidatorCount uint64
|
||||
activeValidatorCountLock sync.Mutex
|
||||
peerDisconnectionTime *cache.Cache
|
||||
custodyInfo *custodyInfo
|
||||
custodyInfoLock sync.RWMutex // Lock access to custodyInfo
|
||||
custodyInfoSet chan struct{}
|
||||
allForkDigests map[[4]byte]struct{}
|
||||
}
|
||||
|
||||
type custodyInfo struct {
|
||||
@@ -106,12 +108,16 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
_ = cancel // govet fix for lost cancel. Cancel is handled in service.Stop().
|
||||
|
||||
cfg = validateConfig(cfg)
|
||||
validateConfig(cfg)
|
||||
|
||||
privKey, err := privKey(cfg)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "failed to generate p2p private key")
|
||||
}
|
||||
|
||||
p2pMaxPeers.Set(float64(cfg.MaxPeers))
|
||||
minimumPeersPerSubnet.Set(float64(flags.Get().MinimumPeersPerSubnet))
|
||||
|
||||
metaData, err := metaDataFromDB(ctx, cfg.DB)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("Failed to create peer metadata")
|
||||
|
||||
@@ -514,17 +514,26 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
|
||||
//
|
||||
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
|
||||
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
|
||||
subnetsPerNode := params.BeaconConfig().SubnetsPerNode
|
||||
subs := make([]uint64, 0, subnetsPerNode)
|
||||
cfg := params.BeaconConfig()
|
||||
|
||||
for i := uint64(0); i < subnetsPerNode; i++ {
|
||||
if flags.Get().SubscribeToAllSubnets {
|
||||
subnets := make([]uint64, 0, cfg.AttestationSubnetCount)
|
||||
for i := range cfg.AttestationSubnetCount {
|
||||
subnets = append(subnets, i)
|
||||
}
|
||||
return subnets, nil
|
||||
}
|
||||
|
||||
subnets := make([]uint64, 0, cfg.SubnetsPerNode)
|
||||
for i := range cfg.SubnetsPerNode {
|
||||
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.Wrap(err, "compute subscribed subnet")
|
||||
}
|
||||
subs = append(subs, sub)
|
||||
subnets = append(subnets, sub)
|
||||
}
|
||||
return subs, nil
|
||||
|
||||
return subnets, nil
|
||||
}
|
||||
|
||||
// Spec pseudocode definition:
|
||||
|
||||
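With `--subscribe-all-subnets` set, `computeSubscribedSubnets` now short-circuits and returns every attestation subnet index, which is what lets the ENR and beacon API advertise all subnets. A caller-side sketch follows, assuming it lives in the same `p2p` package so the unexported function and the package logger are in scope; the mainnet counts mentioned in the comments (64 and 2) are assumptions.

```go
// Illustration only: logging the subnets the node will advertise. Assumes the
// surrounding p2p package and its existing imports (enode, primitives, log).
func logSubscribedSubnets(node *enode.LocalNode, epoch primitives.Epoch) {
	subnets, err := computeSubscribedSubnets(node.ID(), epoch)
	if err != nil {
		log.WithError(err).Error("Could not compute subscribed subnets")
		return
	}
	// With --subscribe-all-subnets: every attestation subnet
	// (AttestationSubnetCount, 64 on mainnet).
	// Otherwise: SubnetsPerNode subnets (2 on mainnet) derived from the node ID.
	log.WithField("subnets", subnets).Info("Attestation subnets to advertise")
}
```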
@@ -514,17 +514,39 @@ func TestDataColumnSubnets(t *testing.T) {
|
||||
|
||||
func TestSubnetComputation(t *testing.T) {
|
||||
db, err := enode.OpenDB("")
|
||||
assert.NoError(t, err)
|
||||
require.NoError(t, err)
|
||||
defer db.Close()
|
||||
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
|
||||
assert.NoError(t, err)
|
||||
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
|
||||
assert.NoError(t, err)
|
||||
localNode := enode.NewLocalNode(db, convertedKey)
|
||||
|
||||
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
|
||||
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
|
||||
require.NoError(t, err)
|
||||
|
||||
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
|
||||
require.NoError(t, err)
|
||||
|
||||
localNode := enode.NewLocalNode(db, convertedKey)
|
||||
cfg := params.BeaconConfig()
|
||||
|
||||
t.Run("standard", func(t *testing.T) {
|
||||
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, cfg.SubnetsPerNode, uint64(len(retrievedSubnets)))
|
||||
require.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
|
||||
})
|
||||
|
||||
t.Run("subscribed to all", func(t *testing.T) {
|
||||
gFlags := new(flags.GlobalFlags)
|
||||
gFlags.SubscribeToAllSubnets = true
|
||||
flags.Init(gFlags)
|
||||
defer flags.Init(new(flags.GlobalFlags))
|
||||
|
||||
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, cfg.AttestationSubnetCount, uint64(len(retrievedSubnets)))
|
||||
for i := range cfg.AttestationSubnetCount {
|
||||
require.Equal(t, i, retrievedSubnets[i])
|
||||
}
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
func TestInitializePersistentSubnets(t *testing.T) {
|
||||
|
||||
@@ -213,6 +213,11 @@ func (s *FakeP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
|
||||
return earliestAvailableSlot, custodyGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot -- fake.
|
||||
func (*FakeP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// CustodyGroupCountFromPeer -- fake.
|
||||
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
|
||||
return 0
|
||||
|
||||
@@ -499,6 +499,15 @@ func (s *TestP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
|
||||
return s.earliestAvailableSlot, s.custodyGroupCount, nil
|
||||
}
|
||||
|
||||
// UpdateEarliestAvailableSlot updates the earliest available slot.
|
||||
func (s *TestP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
|
||||
s.custodyInfoMut.Lock()
|
||||
defer s.custodyInfoMut.Unlock()
|
||||
|
||||
s.earliestAvailableSlot = earliestAvailableSlot
|
||||
return nil
|
||||
}
|
||||
|
||||
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
|
||||
// It first tries to get the custody group count from the peer's metadata,
|
||||
// then falls back to the ENR value if the metadata is not available, then
|
||||
|
||||
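CustodyGroupCountFromPeer prefers the value from the peer's metadata and falls back to the ENR. A minimal sketch of that selection follows; treating zero as "not available" and the default of 4 (the spec's CUSTODY_REQUIREMENT) are assumptions for illustration, not the actual implementation.

```go
// custodyGroupCountFallback sketches the selection order used by
// CustodyGroupCountFromPeer: metadata first, then ENR, then a default.
// Treating 0 as "not available" and the default of 4 are illustrative
// assumptions only.
func custodyGroupCountFallback(fromMetadata, fromENR uint64) uint64 {
	const assumedDefault = 4 // spec CUSTODY_REQUIREMENT (assumed)
	if fromMetadata != 0 {
		return fromMetadata
	}
	if fromENR != 0 {
		return fromENR
	}
	return assumedDefault
}
```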
@@ -68,7 +68,10 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
|
||||
}
|
||||
|
||||
if defaultKeysExist {
|
||||
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
|
||||
if !params.FuluEnabled() {
|
||||
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
|
||||
}
|
||||
|
||||
return privKeyFromFile(defaultKeyPath)
|
||||
}
|
||||
|
||||
|
||||
@@ -97,7 +97,7 @@ func (s *Service) endpoints(
|
||||
endpoints = append(endpoints, s.beaconEndpoints(ch, stater, blocker, validatorServer, coreService)...)
|
||||
endpoints = append(endpoints, s.configEndpoints()...)
|
||||
endpoints = append(endpoints, s.eventsEndpoints()...)
|
||||
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
|
||||
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, blocker, coreService)...)
|
||||
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
|
||||
endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
|
||||
|
||||
@@ -1184,6 +1184,7 @@ func (s *Service) eventsEndpoints() []endpoint {
|
||||
func (s *Service) prysmBeaconEndpoints(
|
||||
ch *stategen.CanonicalHistory,
|
||||
stater lookup.Stater,
|
||||
blocker lookup.Blocker,
|
||||
coreService *core.Service,
|
||||
) []endpoint {
|
||||
server := &beaconprysm.Server{
|
||||
@@ -1194,6 +1195,7 @@ func (s *Service) prysmBeaconEndpoints(
|
||||
CanonicalHistory: ch,
|
||||
BeaconDB: s.cfg.BeaconDB,
|
||||
Stater: stater,
|
||||
Blocker: blocker,
|
||||
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
|
||||
FinalizationFetcher: s.cfg.FinalizationFetcher,
|
||||
CoreService: coreService,
|
||||
@@ -1266,6 +1268,28 @@ func (s *Service) prysmBeaconEndpoints(
|
||||
handler: server.PublishBlobs,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
{
|
||||
template: "/prysm/v1/beacon/states/{state_id}/query",
|
||||
name: namespace + ".QueryBeaconState",
|
||||
middleware: []middleware.Middleware{
|
||||
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
|
||||
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
|
||||
middleware.AcceptEncodingHeaderHandler(),
|
||||
},
|
||||
handler: server.QueryBeaconState,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
{
|
||||
template: "/prysm/v1/beacon/blocks/{block_id}/query",
|
||||
name: namespace + ".QueryBeaconBlock",
|
||||
middleware: []middleware.Middleware{
|
||||
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
|
||||
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
|
||||
middleware.AcceptEncodingHeaderHandler(),
|
||||
},
|
||||
handler: server.QueryBeaconBlock,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
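The two routes registered above accept a JSON body carrying the SSZ path to query and return an SSZ-encoded SSZQueryResponse. A hedged client-side sketch follows; the port (3500), the JSON field name ("query"), and the error handling are assumptions.

```go
// Hypothetical client call against the new state query endpoint.
// The port, the JSON field name and the error handling are assumptions;
// the response body is an SSZ-encoded SSZQueryResponse.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`{"query": ".validators[0].effective_balance"}`)
	req, err := http.NewRequest(http.MethodPost, "http://localhost:3500/prysm/v1/beacon/states/head/query", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/octet-stream")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Payload is ssz(SSZQueryResponse{Root, Result}); only its size is printed here.
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%d ssz payload=%d bytes\n", resp.StatusCode, len(raw))
}
```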
@@ -127,6 +127,8 @@ func Test_endpoints(t *testing.T) {
|
||||
"/prysm/v1/beacon/states/{state_id}/validator_count": {http.MethodGet},
|
||||
"/prysm/v1/beacon/chain_head": {http.MethodGet},
|
||||
"/prysm/v1/beacon/blobs": {http.MethodPost},
|
||||
"/prysm/v1/beacon/states/{state_id}/query": {http.MethodPost},
|
||||
"/prysm/v1/beacon/blocks/{block_id}/query": {http.MethodPost},
|
||||
}
|
||||
|
||||
prysmNodeRoutes := map[string][]string{
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"slices"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
@@ -388,12 +389,9 @@ func syncRewardsVals(
|
||||
scIndices := make([]primitives.ValidatorIndex, 0, len(allScIndices))
|
||||
scVals := make([]*precompute.Validator, 0, len(allScIndices))
|
||||
for _, valIdx := range valIndices {
|
||||
for _, scIdx := range allScIndices {
|
||||
if valIdx == scIdx {
|
||||
scVals = append(scVals, allVals[valIdx])
|
||||
scIndices = append(scIndices, valIdx)
|
||||
break
|
||||
}
|
||||
if slices.Contains(allScIndices, valIdx) {
|
||||
scVals = append(scVals, allVals[valIdx])
|
||||
scIndices = append(scIndices, valIdx)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -597,7 +597,18 @@ func (s *Server) SubmitSyncCommitteeSubscription(w http.ResponseWriter, r *http.
|
||||
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
|
||||
totalDuration := epochDuration * time.Duration(epochsToWatch)
|
||||
|
||||
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
|
||||
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
|
||||
seen := make(map[uint64]bool)
|
||||
var subnetIndices []uint64
|
||||
|
||||
for _, idx := range sub.SyncCommitteeIndices {
|
||||
subnetIdx := idx / subcommitteeSize
|
||||
if !seen[subnetIdx] {
|
||||
seen[subnetIdx] = true
|
||||
subnetIndices = append(subnetIndices, subnetIdx)
|
||||
}
|
||||
}
|
||||
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, subnetIndices, totalDuration)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
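The subscription handler above now converts sync committee indices into subnet indices (committee index divided by the subcommittee size) and deduplicates them before caching. A worked example of that mapping follows; the mainnet constants (SYNC_COMMITTEE_SIZE 512, SYNC_COMMITTEE_SUBNET_COUNT 4, so subcommittees of 128) are assumptions for illustration.

```go
package main

import "fmt"

func main() {
	const (
		syncCommitteeSize        = 512
		syncCommitteeSubnetCount = 4
	)
	subcommitteeSize := uint64(syncCommitteeSize / syncCommitteeSubnetCount) // 128

	committeeIndices := []uint64{5, 130, 131, 400}
	seen := make(map[uint64]bool)
	var subnetIndices []uint64
	for _, idx := range committeeIndices {
		subnetIdx := idx / subcommitteeSize
		if !seen[subnetIdx] {
			seen[subnetIdx] = true
			subnetIndices = append(subnetIndices, subnetIdx)
		}
	}
	fmt.Println(subnetIndices) // [0 1 3]: committee indices 130 and 131 share subnet 1.
}
```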
@@ -1049,9 +1049,8 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
|
||||
s.SubmitSyncCommitteeSubscription(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
subnets, _, _, _ := cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
|
||||
require.Equal(t, 2, len(subnets))
|
||||
require.Equal(t, 1, len(subnets))
|
||||
assert.Equal(t, uint64(0), subnets[0])
|
||||
assert.Equal(t, uint64(2), subnets[1])
|
||||
})
|
||||
t.Run("multiple", func(t *testing.T) {
|
||||
cache.SyncSubnetIDs.EmptyAllCaches()
|
||||
@@ -1070,7 +1069,7 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
|
||||
assert.Equal(t, uint64(0), subnets[0])
|
||||
subnets, _, _, _ = cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
|
||||
require.Equal(t, 1, len(subnets))
|
||||
assert.Equal(t, uint64(2), subnets[0])
|
||||
assert.Equal(t, uint64(0), subnets[0])
|
||||
})
|
||||
t.Run("no body", func(t *testing.T) {
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
|
||||
|
||||
@@ -5,11 +5,13 @@ go_library(
|
||||
srcs = [
|
||||
"handlers.go",
|
||||
"server.go",
|
||||
"ssz_query.go",
|
||||
"validator_count.go",
|
||||
],
|
||||
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/prysm/beacon",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//api:go_default_library",
|
||||
"//api/server/structs:go_default_library",
|
||||
"//beacon-chain/blockchain:go_default_library",
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
@@ -27,10 +29,13 @@ go_library(
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//consensus-types/validator:go_default_library",
|
||||
"//encoding/bytesutil:go_default_library",
|
||||
"//encoding/ssz/query:go_default_library",
|
||||
"//monitoring/tracing/trace:go_default_library",
|
||||
"//network/httputil:go_default_library",
|
||||
"//proto/eth/v1:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//proto/ssz_query:go_default_library",
|
||||
"//runtime/version:go_default_library",
|
||||
"//time/slots:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
@@ -41,10 +46,12 @@ go_test(
|
||||
name = "go_default_test",
|
||||
srcs = [
|
||||
"handlers_test.go",
|
||||
"ssz_query_test.go",
|
||||
"validator_count_test.go",
|
||||
],
|
||||
embed = [":go_default_library"],
|
||||
deps = [
|
||||
"//api:go_default_library",
|
||||
"//api/server/structs:go_default_library",
|
||||
"//beacon-chain/blockchain/testing:go_default_library",
|
||||
"//beacon-chain/core/helpers:go_default_library",
|
||||
@@ -63,10 +70,13 @@ go_test(
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//consensus-types/interfaces:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//encoding/bytesutil:go_default_library",
|
||||
"//network/httputil:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//proto/ssz_query:go_default_library",
|
||||
"//runtime/version:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
|
||||
@@ -18,6 +18,7 @@ type Server struct {
|
||||
CanonicalHistory *stategen.CanonicalHistory
|
||||
BeaconDB beacondb.ReadOnlyDatabase
|
||||
Stater lookup.Stater
|
||||
Blocker lookup.Blocker
|
||||
ChainInfoFetcher blockchain.ChainInfoFetcher
|
||||
FinalizationFetcher blockchain.FinalizationFetcher
|
||||
CoreService *core.Service
|
||||
|
||||
beacon-chain/rpc/prysm/beacon/ssz_query.go (new file, 202 lines)
@@ -0,0 +1,202 @@
|
||||
package beacon
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/api"
|
||||
"github.com/OffchainLabs/prysm/v6/api/server/structs"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/ssz/query"
|
||||
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
|
||||
"github.com/OffchainLabs/prysm/v6/network/httputil"
|
||||
sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
|
||||
"github.com/OffchainLabs/prysm/v6/runtime/version"
|
||||
)
|
||||
|
||||
// QueryBeaconState handles SSZ Query request for BeaconState.
|
||||
// Returns as bytes serialized SSZQueryResponse.
|
||||
func (s *Server) QueryBeaconState(w http.ResponseWriter, r *http.Request) {
|
||||
ctx, span := trace.StartSpan(r.Context(), "beacon.QueryBeaconState")
|
||||
defer span.End()
|
||||
|
||||
stateID := r.PathValue("state_id")
|
||||
if stateID == "" {
|
||||
httputil.HandleError(w, "state_id is required in URL params", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
// Validate path before lookup: it might be expensive.
|
||||
var req structs.SSZQueryRequest
|
||||
err := json.NewDecoder(r.Body).Decode(&req)
|
||||
switch {
|
||||
case errors.Is(err, io.EOF):
|
||||
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
|
||||
return
|
||||
case err != nil:
|
||||
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
if len(req.Query) == 0 {
|
||||
httputil.HandleError(w, "Empty query submitted", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
path, err := query.ParsePath(req.Query)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not parse path '"+req.Query+"': "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
stateRoot, err := s.Stater.StateRoot(ctx, []byte(stateID))
|
||||
if err != nil {
|
||||
var rootNotFoundErr *lookup.StateRootNotFoundError
|
||||
if errors.As(err, &rootNotFoundErr) {
|
||||
httputil.HandleError(w, "State root not found: "+rootNotFoundErr.Error(), http.StatusNotFound)
|
||||
return
|
||||
}
|
||||
httputil.HandleError(w, "Could not get state root: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
st, err := s.Stater.State(ctx, []byte(stateID))
|
||||
if err != nil {
|
||||
shared.WriteStateFetchError(w, err)
|
||||
return
|
||||
}
|
||||
|
||||
// NOTE: Using unsafe conversion to proto is acceptable here,
|
||||
// as we play with a copy of the state returned by Stater.
|
||||
sszObject, ok := st.ToProtoUnsafe().(query.SSZObject)
|
||||
if !ok {
|
||||
httputil.HandleError(w, "Unsupported state version for querying: "+version.String(st.Version()), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
info, err := query.AnalyzeObject(sszObject)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not analyze state object: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not calculate offset and length for path '"+req.Query+"': "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
encodedState, err := st.MarshalSSZ()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not marshal state to SSZ: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
response := &sszquerypb.SSZQueryResponse{
|
||||
Root: stateRoot,
|
||||
Result: encodedState[offset : offset+length],
|
||||
}
|
||||
|
||||
responseSsz, err := response.MarshalSSZ()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not marshal response to SSZ: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
w.Header().Set(api.VersionHeader, version.String(st.Version()))
|
||||
httputil.WriteSsz(w, responseSsz)
|
||||
}
|
||||
|
||||
// QueryBeaconBlock handles SSZ Query request for BeaconBlock.
|
||||
// Returns as bytes serialized SSZQueryResponse.
|
||||
func (s *Server) QueryBeaconBlock(w http.ResponseWriter, r *http.Request) {
|
||||
ctx, span := trace.StartSpan(r.Context(), "beacon.QueryBeaconBlock")
|
||||
defer span.End()
|
||||
|
||||
blockId := r.PathValue("block_id")
|
||||
if blockId == "" {
|
||||
httputil.HandleError(w, "block_id is required in URL params", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
// Validate path before lookup: it might be expensive.
|
||||
var req structs.SSZQueryRequest
|
||||
err := json.NewDecoder(r.Body).Decode(&req)
|
||||
switch {
|
||||
case errors.Is(err, io.EOF):
|
||||
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
|
||||
return
|
||||
case err != nil:
|
||||
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
if len(req.Query) == 0 {
|
||||
httputil.HandleError(w, "Empty query submitted", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
path, err := query.ParsePath(req.Query)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not parse path '"+req.Query+"': "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
signedBlock, err := s.Blocker.Block(ctx, []byte(blockId))
|
||||
if !shared.WriteBlockFetchError(w, signedBlock, err) {
|
||||
return
|
||||
}
|
||||
|
||||
protoBlock, err := signedBlock.Block().Proto()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not convert block to proto: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
block, ok := protoBlock.(query.SSZObject)
|
||||
if !ok {
|
||||
httputil.HandleError(w, "Unsupported block version for querying: "+version.String(signedBlock.Version()), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
info, err := query.AnalyzeObject(block)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not analyze block object: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
_, offset, length, err := query.CalculateOffsetAndLength(info, path)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not calculate offset and length for path '"+req.Query+"': "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
encodedBlock, err := signedBlock.Block().MarshalSSZ()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not marshal block to SSZ: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
blockRoot, err := block.HashTreeRoot()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not compute block root: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
response := &sszquerypb.SSZQueryResponse{
|
||||
Root: blockRoot[:],
|
||||
Result: encodedBlock[offset : offset+length],
|
||||
}
|
||||
|
||||
responseSsz, err := response.MarshalSSZ()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not marshal response to SSZ: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
w.Header().Set(api.VersionHeader, version.String(signedBlock.Version()))
|
||||
httputil.WriteSsz(w, responseSsz)
|
||||
}
|
||||
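A sketch of decoding a reply from the query handlers above, assuming the generated SSZQueryResponse type exposes UnmarshalSSZ to match the MarshalSSZ used by the handlers, and that the query targeted a uint64 field such as ".slot" (8 little-endian bytes).

```go
// Illustrative client-side decode; the package name is hypothetical and the
// UnmarshalSSZ method is assumed to exist on the generated type.
package sszqueryclient

import (
	"encoding/binary"
	"fmt"

	sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
)

// decodeSlotResult decodes a response whose query targeted a uint64 field
// such as ".slot", which SSZ encodes as 8 little-endian bytes.
func decodeSlotResult(payload []byte) (uint64, error) {
	resp := &sszquerypb.SSZQueryResponse{}
	if err := resp.UnmarshalSSZ(payload); err != nil {
		return 0, fmt.Errorf("unmarshal SSZQueryResponse: %w", err)
	}
	if len(resp.Result) != 8 {
		return 0, fmt.Errorf("expected 8-byte result, got %d bytes", len(resp.Result))
	}
	return binary.LittleEndian.Uint64(resp.Result), nil
}
```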
beacon-chain/rpc/prysm/beacon/ssz_query_test.go (new file, 335 lines)
@@ -0,0 +1,335 @@
|
||||
package beacon
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/binary"
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/api"
|
||||
"github.com/OffchainLabs/prysm/v6/api/server/structs"
|
||||
chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
|
||||
sszquerypb "github.com/OffchainLabs/prysm/v6/proto/ssz_query"
|
||||
"github.com/OffchainLabs/prysm/v6/runtime/version"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/assert"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/require"
|
||||
"github.com/OffchainLabs/prysm/v6/testing/util"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/prysmaticlabs/go-bitfield"
|
||||
)
|
||||
|
||||
func TestQueryBeaconState(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
st, _ := util.DeterministicGenesisState(t, 16)
|
||||
require.NoError(t, st.SetSlot(primitives.Slot(42)))
|
||||
stateRoot, err := st.HashTreeRoot(ctx)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, st.UpdateBalancesAtIndex(0, 42000000000))
|
||||
|
||||
tests := []struct {
|
||||
path string
|
||||
expectedValue []byte
|
||||
}{
|
||||
{
|
||||
path: ".slot",
|
||||
expectedValue: func() []byte {
|
||||
slot := st.Slot()
|
||||
result, _ := slot.MarshalSSZ()
|
||||
return result
|
||||
}(),
|
||||
},
|
||||
{
|
||||
path: ".latest_block_header",
|
||||
expectedValue: func() []byte {
|
||||
header := st.LatestBlockHeader()
|
||||
result, _ := header.MarshalSSZ()
|
||||
return result
|
||||
}(),
|
||||
},
|
||||
{
|
||||
path: ".validators",
|
||||
expectedValue: func() []byte {
|
||||
b := make([]byte, 0)
|
||||
validators := st.Validators()
|
||||
for _, v := range validators {
|
||||
vBytes, _ := v.MarshalSSZ()
|
||||
b = append(b, vBytes...)
|
||||
}
|
||||
return b
|
||||
|
||||
}(),
|
||||
},
|
||||
{
|
||||
path: ".validators[0]",
|
||||
expectedValue: func() []byte {
|
||||
v, _ := st.ValidatorAtIndex(0)
|
||||
result, _ := v.MarshalSSZ()
|
||||
return result
|
||||
}(),
|
||||
},
|
||||
{
|
||||
path: ".validators[0].withdrawal_credentials",
|
||||
expectedValue: func() []byte {
|
||||
v, _ := st.ValidatorAtIndex(0)
|
||||
return v.WithdrawalCredentials
|
||||
}(),
|
||||
},
|
||||
{
|
||||
path: ".validators[0].effective_balance",
|
||||
expectedValue: func() []byte {
|
||||
v, _ := st.ValidatorAtIndex(0)
|
||||
b := make([]byte, 8)
|
||||
binary.LittleEndian.PutUint64(b, uint64(v.EffectiveBalance))
|
||||
return b
|
||||
}(),
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.path, func(t *testing.T) {
|
||||
chainService := &chainMock.ChainService{Optimistic: false, FinalizedRoots: make(map[[32]byte]bool)}
|
||||
s := &Server{
|
||||
OptimisticModeFetcher: chainService,
|
||||
FinalizationFetcher: chainService,
|
||||
Stater: &testutil.MockStater{
|
||||
BeaconStateRoot: stateRoot[:],
|
||||
BeaconState: st,
|
||||
},
|
||||
}
|
||||
|
||||
requestBody := &structs.SSZQueryRequest{
|
||||
Query: tt.path,
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
|
||||
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/states/{state_id}/query", &buf)
|
||||
request.SetPathValue("state_id", "head")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.QueryBeaconState(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
|
||||
|
||||
expectedResponse := &sszquerypb.SSZQueryResponse{
|
||||
Root: stateRoot[:],
|
||||
Result: tt.expectedValue,
|
||||
}
|
||||
sszExpectedResponse, err := expectedResponse.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, sszExpectedResponse, writer.Body.Bytes())
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestQueryBeaconStateInvalidRequest(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
st, _ := util.DeterministicGenesisState(t, 16)
|
||||
require.NoError(t, st.SetSlot(primitives.Slot(42)))
|
||||
stateRoot, err := st.HashTreeRoot(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
stateId string
|
||||
path string
|
||||
code int
|
||||
errorString string
|
||||
}{
|
||||
{
|
||||
name: "empty query submitted",
|
||||
stateId: "head",
|
||||
path: "",
|
||||
errorString: "Empty query submitted",
|
||||
},
|
||||
{
|
||||
name: "invalid path",
|
||||
stateId: "head",
|
||||
path: ".invalid[]]",
|
||||
errorString: "Could not parse path",
|
||||
},
|
||||
{
|
||||
name: "non-existent field",
|
||||
stateId: "head",
|
||||
path: ".non_existent_field",
|
||||
code: http.StatusInternalServerError,
|
||||
errorString: "Could not calculate offset and length for path",
|
||||
},
|
||||
{
|
||||
name: "empty state ID",
|
||||
stateId: "",
|
||||
path: "",
|
||||
},
|
||||
{
|
||||
name: "far future slot",
|
||||
stateId: "1000000000000",
|
||||
path: "",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.path, func(t *testing.T) {
|
||||
chainService := &chainMock.ChainService{Optimistic: false, FinalizedRoots: make(map[[32]byte]bool)}
|
||||
s := &Server{
|
||||
OptimisticModeFetcher: chainService,
|
||||
FinalizationFetcher: chainService,
|
||||
Stater: &testutil.MockStater{
|
||||
BeaconStateRoot: stateRoot[:],
|
||||
BeaconState: st,
|
||||
},
|
||||
}
|
||||
|
||||
requestBody := &structs.SSZQueryRequest{
|
||||
Query: tt.path,
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
|
||||
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/states/{state_id}/query", &buf)
|
||||
request.SetPathValue("state_id", tt.stateId)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.QueryBeaconState(writer, request)
|
||||
|
||||
if tt.code == 0 {
|
||||
tt.code = http.StatusBadRequest
|
||||
}
|
||||
require.Equal(t, tt.code, writer.Code)
|
||||
if tt.errorString != "" {
|
||||
errorString := writer.Body.String()
|
||||
require.Equal(t, true, strings.Contains(errorString, tt.errorString))
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestQueryBeaconBlock(t *testing.T) {
|
||||
randaoReveal, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
|
||||
require.NoError(t, err)
|
||||
root, err := hexutil.Decode("0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
|
||||
require.NoError(t, err)
|
||||
signature, err := hexutil.Decode("0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
|
||||
require.NoError(t, err)
|
||||
att := ð.Attestation{
|
||||
AggregationBits: bitfield.Bitlist{0x01},
|
||||
Data: ð.AttestationData{
|
||||
Slot: 1,
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: root,
|
||||
Source: ð.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: root,
|
||||
},
|
||||
Target: ð.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: root,
|
||||
},
|
||||
},
|
||||
Signature: signature,
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
path string
|
||||
block interfaces.ReadOnlySignedBeaconBlock
|
||||
expectedValue []byte
|
||||
}{
|
||||
{
|
||||
name: "slot",
|
||||
path: ".slot",
|
||||
block: func() interfaces.ReadOnlySignedBeaconBlock {
|
||||
b := util.NewBeaconBlock()
|
||||
b.Block.Slot = 123
|
||||
sb, err := blocks.NewSignedBeaconBlock(b)
|
||||
require.NoError(t, err)
|
||||
return sb
|
||||
}(),
|
||||
expectedValue: func() []byte {
|
||||
b := make([]byte, 8)
|
||||
binary.LittleEndian.PutUint64(b, 123)
|
||||
return b
|
||||
}(),
|
||||
},
|
||||
{
|
||||
name: "randao_reveal",
|
||||
path: ".body.randao_reveal",
|
||||
block: func() interfaces.ReadOnlySignedBeaconBlock {
|
||||
b := util.NewBeaconBlock()
|
||||
b.Block.Body.RandaoReveal = randaoReveal
|
||||
sb, err := blocks.NewSignedBeaconBlock(b)
|
||||
require.NoError(t, err)
|
||||
return sb
|
||||
}(),
|
||||
expectedValue: randaoReveal,
|
||||
},
|
||||
{
|
||||
name: "attestations",
|
||||
path: ".body.attestations",
|
||||
block: func() interfaces.ReadOnlySignedBeaconBlock {
|
||||
b := util.NewBeaconBlock()
|
||||
b.Block.Body.Attestations = []*eth.Attestation{
|
||||
att,
|
||||
}
|
||||
sb, err := blocks.NewSignedBeaconBlock(b)
|
||||
require.NoError(t, err)
|
||||
return sb
|
||||
}(),
|
||||
expectedValue: func() []byte {
|
||||
b, err := att.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
return b
|
||||
}(),
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: tt.block}
|
||||
mockChainService := &chainMock.ChainService{
|
||||
FinalizedRoots: map[[32]byte]bool{},
|
||||
}
|
||||
s := &Server{
|
||||
FinalizationFetcher: mockChainService,
|
||||
Blocker: mockBlockFetcher,
|
||||
}
|
||||
requestBody := &structs.SSZQueryRequest{
|
||||
Query: tt.path,
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
require.NoError(t, json.NewEncoder(&buf).Encode(requestBody))
|
||||
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/prysm/v1/beacon/blocks/{block_id}/query", &buf)
|
||||
request.SetPathValue("block_id", "head")
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.QueryBeaconBlock(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
|
||||
|
||||
blockRoot, err := tt.block.Block().HashTreeRoot()
|
||||
require.NoError(t, err)
|
||||
|
||||
expectedResponse := &sszquerypb.SSZQueryResponse{
|
||||
Root: blockRoot[:],
|
||||
Result: tt.expectedValue,
|
||||
}
|
||||
sszExpectedResponse, err := expectedResponse.MarshalSSZ()
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, sszExpectedResponse, writer.Body.Bytes())
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -229,7 +229,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
|
||||
sBlk.SetVoluntaryExits(vs.getExits(head, sBlk.Block().Slot()))
|
||||
|
||||
// Set sync aggregate. New in Altair.
|
||||
vs.setSyncAggregate(ctx, sBlk)
|
||||
vs.setSyncAggregate(ctx, sBlk, head)
|
||||
|
||||
// Set bls to execution change. New in Capella.
|
||||
vs.setBlsToExecData(sBlk, head)
|
||||
@@ -312,14 +312,14 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
|
||||
rob, err := blocks.NewROBlockWithRoot(block, root)
|
||||
if block.IsBlinded() {
|
||||
block, blobSidecars, err = vs.handleBlindedBlock(ctx, block)
|
||||
if errors.Is(err, builderapi.ErrBadGateway) {
|
||||
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
|
||||
return ðpb.ProposeResponse{BlockRoot: root[:]}, nil
|
||||
}
|
||||
} else if block.Version() >= version.Deneb {
|
||||
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
|
||||
}
|
||||
if err != nil {
|
||||
if errors.Is(err, builderapi.ErrBadGateway) && block.IsBlinded() {
|
||||
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
|
||||
return ðpb.ProposeResponse{BlockRoot: root[:]}, nil
|
||||
}
|
||||
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
|
||||
}
|
||||
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"context"
|
||||
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
@@ -20,12 +21,12 @@ import (
|
||||
"github.com/prysmaticlabs/go-bitfield"
|
||||
)
|
||||
|
||||
func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock) {
|
||||
func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock, headState state.BeaconState) {
|
||||
if blk.Version() < version.Altair {
|
||||
return
|
||||
}
|
||||
|
||||
syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot())
|
||||
syncAggregate, err := vs.getSyncAggregate(ctx, slots.PrevSlot(blk.Block().Slot()), blk.Block().ParentRoot(), headState)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("Could not get sync aggregate")
|
||||
emptySig := [96]byte{0xC0}
|
||||
@@ -47,7 +48,7 @@ func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBea
|
||||
|
||||
// getSyncAggregate retrieves the sync contributions from the pool to construct the sync aggregate object.
|
||||
// The contributions are filtered based on matching of the input root and slot then profitability.
|
||||
func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte) (*ethpb.SyncAggregate, error) {
|
||||
func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, root [32]byte, headState state.BeaconState) (*ethpb.SyncAggregate, error) {
|
||||
_, span := trace.StartSpan(ctx, "ProposerServer.getSyncAggregate")
|
||||
defer span.End()
|
||||
|
||||
@@ -62,7 +63,7 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
|
||||
// Contributions have to match the input root
|
||||
proposerContributions := proposerSyncContributions(poolContributions).filterByBlockRoot(root)
|
||||
|
||||
aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions)
|
||||
aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions, headState)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get aggregated sync committee messages")
|
||||
}
|
||||
@@ -123,6 +124,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
|
||||
slot primitives.Slot,
|
||||
root [32]byte,
|
||||
poolContributions []*ethpb.SyncCommitteeContribution,
|
||||
st state.BeaconState,
|
||||
) ([]*ethpb.SyncCommitteeContribution, error) {
|
||||
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
|
||||
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / subcommitteeCount
|
||||
@@ -146,10 +148,7 @@ func (vs *Server) aggregatedSyncCommitteeMessages(
|
||||
messageSigs = append(messageSigs, msg.Signature)
|
||||
}
|
||||
}
|
||||
st, err := vs.HeadFetcher.HeadState(ctx)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get head state")
|
||||
}
|
||||
|
||||
positions, err := helpers.CurrentPeriodPositions(st, messageIndices)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get sync committee positions")
|
||||
|
||||
@@ -9,6 +9,7 @@ import (
|
||||
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/config/params"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
|
||||
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
|
||||
"github.com/OffchainLabs/prysm/v6/crypto/bls"
|
||||
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
|
||||
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
|
||||
@@ -51,15 +52,15 @@ func TestProposer_GetSyncAggregate_OK(t *testing.T) {
|
||||
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
|
||||
}
|
||||
|
||||
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
|
||||
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot), st)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
|
||||
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot), st)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
|
||||
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
|
||||
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot), st)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
|
||||
}
|
||||
@@ -68,7 +69,7 @@ func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
|
||||
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
|
||||
require.NoError(t, err)
|
||||
s := &Server{} // Server is not initialized with sync committee pool.
|
||||
s.setSyncAggregate(t.Context(), b)
|
||||
s.setSyncAggregate(t.Context(), b, nil)
|
||||
agg, err := b.Block().Body().SyncAggregate()
|
||||
require.NoError(t, err)
|
||||
|
||||
@@ -138,7 +139,7 @@ func TestProposer_GetSyncAggregate_IncludesSyncCommitteeMessages(t *testing.T) {
|
||||
}
|
||||
|
||||
// The final sync aggregates must have indexes [0,1,2,3] set for both subcommittees
|
||||
sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r)
|
||||
sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r, st)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(0))
|
||||
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(1))
|
||||
@@ -194,8 +195,99 @@ func Test_aggregatedSyncCommitteeMessages_NoIntersectionWithPoolContributions(t
|
||||
BlockRoot: r[:],
|
||||
}
|
||||
|
||||
aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont})
|
||||
aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont}, st)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, 1, len(aggregated))
|
||||
assert.Equal(t, false, aggregated[0].AggregationBits.BitAt(3))
|
||||
}
|
||||
|
||||
func TestGetSyncAggregate_CorrectStateAtSyncCommitteePeriodBoundary(t *testing.T) {
|
||||
helpers.ClearCache()
|
||||
syncPeriodBoundaryEpoch := primitives.Epoch(274176) // Real epoch from the bug report
|
||||
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
|
||||
|
||||
preEpochState, keys := util.DeterministicGenesisStateAltair(t, 100)
|
||||
require.NoError(t, preEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch-1)) // Last slot of previous epoch
|
||||
|
||||
postEpochState := preEpochState.Copy()
|
||||
require.NoError(t, postEpochState.SetSlot(primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch+2)) // After 2 missed slots
|
||||
|
||||
oldCommittee := ðpb.SyncCommittee{
|
||||
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
|
||||
}
|
||||
newCommittee := ðpb.SyncCommittee{
|
||||
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
|
||||
}
|
||||
|
||||
for i := 0; i < int(params.BeaconConfig().SyncCommitteeSize); i++ {
|
||||
if i < len(keys) {
|
||||
oldCommittee.Pubkeys[i] = keys[i%len(keys)].PublicKey().Marshal()
|
||||
// Use different keys for new committee to simulate rotation
|
||||
newCommittee.Pubkeys[i] = keys[(i+10)%len(keys)].PublicKey().Marshal()
|
||||
}
|
||||
}
|
||||
|
||||
require.NoError(t, preEpochState.SetCurrentSyncCommittee(oldCommittee))
|
||||
require.NoError(t, postEpochState.SetCurrentSyncCommittee(newCommittee))
|
||||
|
||||
mockChainService := &chainmock.ChainService{
|
||||
State: postEpochState,
|
||||
}
|
||||
|
||||
proposerServer := &Server{
|
||||
HeadFetcher: mockChainService,
|
||||
SyncChecker: &mockSync.Sync{IsSyncing: false},
|
||||
SyncCommitteePool: synccommittee.NewStore(),
|
||||
}
|
||||
|
||||
slot := primitives.Slot(syncPeriodBoundaryEpoch)*slotsPerEpoch + 1 // First slot of new epoch
|
||||
blockRoot := [32]byte{0x01, 0x02, 0x03}
|
||||
|
||||
msg1 := ðpb.SyncCommitteeMessage{
|
||||
Slot: slot,
|
||||
BlockRoot: blockRoot[:],
|
||||
ValidatorIndex: 0, // This validator is in position 0 of OLD committee
|
||||
Signature: bls.NewAggregateSignature().Marshal(),
|
||||
}
|
||||
msg2 := ðpb.SyncCommitteeMessage{
|
||||
Slot: slot,
|
||||
BlockRoot: blockRoot[:],
|
||||
ValidatorIndex: 1, // This validator is in position 1 of OLD committee
|
||||
Signature: bls.NewAggregateSignature().Marshal(),
|
||||
}
|
||||
|
||||
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg1))
|
||||
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg2))
|
||||
|
||||
aggregateWrongState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, postEpochState)
|
||||
require.NoError(t, err)
|
||||
|
||||
aggregateCorrectState, err := proposerServer.getSyncAggregate(t.Context(), slot, blockRoot, preEpochState)
|
||||
require.NoError(t, err)
|
||||
|
||||
wrongStateBits := bitfield.Bitlist(aggregateWrongState.SyncCommitteeBits)
|
||||
correctStateBits := bitfield.Bitlist(aggregateCorrectState.SyncCommitteeBits)
|
||||
|
||||
wrongStateHasValidators := false
|
||||
correctStateHasValidators := false
|
||||
|
||||
for i := 0; i < len(wrongStateBits); i++ {
|
||||
if wrongStateBits[i] != 0 {
|
||||
wrongStateHasValidators = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
for i := 0; i < len(correctStateBits); i++ {
|
||||
if correctStateBits[i] != 0 {
|
||||
correctStateHasValidators = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
assert.Equal(t, true, correctStateHasValidators, "Correct state should include validators that sent messages")
|
||||
assert.Equal(t, false, wrongStateHasValidators, "Wrong state should not find validators in incorrect sync committee")
|
||||
|
||||
t.Logf("Wrong state aggregate bits: %x (has validators: %v)", wrongStateBits, wrongStateHasValidators)
|
||||
t.Logf("Correct state aggregate bits: %x (has validators: %v)", correctStateBits, correctStateHasValidators)
|
||||
}
|
||||
|
||||
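The regression test above is built around a sync committee period boundary: computing positions against a head state that has already rotated into the new committee drops messages signed under the old one. A worked check of why epoch 274176 triggers this, assuming the mainnet EPOCHS_PER_SYNC_COMMITTEE_PERIOD of 256.

```go
// Standalone check of the boundary condition in the test above.
// EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256 is the assumed mainnet value.
package main

import "fmt"

func main() {
	const epochsPerSyncCommitteePeriod = 256
	epoch := uint64(274176) // epoch used in the regression test
	// true: 274176 = 1071 * 256, so the sync committee rotates at this epoch's
	// first slot and the post-boundary head state holds the new committee.
	fmt.Println(epoch%epochsPerSyncCommitteePeriod == 0)
}
```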
@@ -266,6 +266,8 @@ type WriteOnlyEth1Data interface {
SetEth1DepositIndex(val uint64) error
ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (primitives.Epoch, error)
ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error)
SetExitBalanceToConsume(val primitives.Gwei) error
SetEarliestExitEpoch(val primitives.Epoch) error
}

// WriteOnlyValidators defines a struct which only has write access to validators methods.

@@ -333,6 +335,7 @@ type WriteOnlyWithdrawals interface {
DequeuePendingPartialWithdrawals(num uint64) error
SetNextWithdrawalIndex(i uint64) error
SetNextWithdrawalValidatorIndex(i primitives.ValidatorIndex) error
SetPendingPartialWithdrawals(val []*ethpb.PendingPartialWithdrawal) error
}

type WriteOnlyConsolidations interface {
@@ -91,3 +91,33 @@ func (b *BeaconState) exitEpochAndUpdateChurn(totalActiveBalance primitives.Gwei

return b.earliestExitEpoch, nil
}

// SetExitBalanceToConsume sets the exit balance to consume. This method mutates the state.
func (b *BeaconState) SetExitBalanceToConsume(exitBalanceToConsume primitives.Gwei) error {
if b.version < version.Electra {
return errNotSupported("SetExitBalanceToConsume", b.version)
}

b.lock.Lock()
defer b.lock.Unlock()

b.exitBalanceToConsume = exitBalanceToConsume
b.markFieldAsDirty(types.ExitBalanceToConsume)

return nil
}

// SetEarliestExitEpoch sets the earliest exit epoch. This method mutates the state.
func (b *BeaconState) SetEarliestExitEpoch(earliestExitEpoch primitives.Epoch) error {
if b.version < version.Electra {
return errNotSupported("SetEarliestExitEpoch", b.version)
}

b.lock.Lock()
defer b.lock.Unlock()

b.earliestExitEpoch = earliestExitEpoch
b.markFieldAsDirty(types.EarliestExitEpoch)

return nil
}

@@ -100,3 +100,24 @@ func (b *BeaconState) DequeuePendingPartialWithdrawals(n uint64) error {

return nil
}

// SetPendingPartialWithdrawals sets the pending partial withdrawals. This method mutates the state.
func (b *BeaconState) SetPendingPartialWithdrawals(pendingPartialWithdrawals []*eth.PendingPartialWithdrawal) error {
if b.version < version.Electra {
return errNotSupported("SetPendingPartialWithdrawals", b.version)
}

b.lock.Lock()
defer b.lock.Unlock()

if pendingPartialWithdrawals == nil {
return errors.New("cannot set nil pending partial withdrawals")
}
b.sharedFieldReferences[types.PendingPartialWithdrawals].MinusRef()
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)

b.pendingPartialWithdrawals = pendingPartialWithdrawals
b.markFieldAsDirty(types.PendingPartialWithdrawals)

return nil
}
@@ -650,6 +650,11 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateFulu) (state.BeaconState
for i, v := range st.ProposerLookahead {
proposerLookahead[i] = primitives.ValidatorIndex(v)
}
// Proposer lookahead must be exactly 2 * SLOTS_PER_EPOCH in length. We fill in with zeroes instead of erroring out here
for i := len(proposerLookahead); i < 2*fieldparams.SlotsPerEpoch; i++ {
proposerLookahead = append(proposerLookahead, 0)
}

fieldCount := params.BeaconConfig().BeaconStateFuluFieldCount
b := &BeaconState{
version: version.Fulu,
@@ -17,7 +17,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",

@@ -61,7 +60,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
@@ -3,12 +3,12 @@ package backfill
import (
"context"

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"

@@ -348,7 +348,7 @@ func (*Service) Status() error {
// minimumBackfillSlot determines the lowest slot that backfill needs to download based on looking back
// MIN_EPOCHS_FOR_BLOCK_REQUESTS from the current slot.
func minimumBackfillSlot(current primitives.Slot) primitives.Slot {
oe := helpers.MinEpochsForBlockRequests()
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
if oe > slots.MaxSafeEpoch() {
oe = slots.MaxSafeEpoch()
}

@@ -5,7 +5,6 @@ import (
"testing"
"time"

"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"

@@ -84,7 +83,7 @@ func TestServiceInit(t *testing.T) {
}

func TestMinimumBackfillSlot(t *testing.T) {
oe := helpers.MinEpochsForBlockRequests()
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)

currSlot := (oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
minSlot := minimumBackfillSlot(primitives.Slot(currSlot))

@@ -109,7 +108,7 @@ func testReadN(ctx context.Context, t *testing.T, c chan batch, n int, into []ba
}

func TestBackfillMinSlotDefault(t *testing.T) {
oe := helpers.MinEpochsForBlockRequests()
oe := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
current := primitives.Slot((oe + 100).Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
s := &Service{}
specMin := minimumBackfillSlot(current)
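For orientation, the retention arithmetic behind `minimumBackfillSlot` boils down to a saturating subtraction of the look-back window from the current slot. The sketch below is illustrative only: the constants are made up, and the real function additionally clamps the epoch count at `slots.MaxSafeEpoch()` as shown in the hunk above.

```go
package main

import "fmt"

// minBackfillSlot sketches the look-back arithmetic: subtract
// minEpochs*slotsPerEpoch from the current slot, flooring at slot 0.
func minBackfillSlot(current, minEpochs, slotsPerEpoch uint64) uint64 {
	lookback := minEpochs * slotsPerEpoch
	if current <= lookback {
		return 0 // still inside the retention window
	}
	return current - lookback
}

func main() {
	fmt.Println(minBackfillSlot(10000, 100, 32)) // 6800
	fmt.Println(minBackfillSlot(1000, 100, 32))  // 0
}
```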
@@ -90,10 +90,10 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
// custodyGroupCount computes the custody group count based on the custody requirement,
// the validators custody requirement, and whether the node is subscribed to all data subnets.
func (s *Service) custodyGroupCount(context.Context) (uint64, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()

if flags.Get().SubscribeAllDataSubnets {
return beaconConfig.NumberOfCustodyGroups, nil
return cfg.NumberOfCustodyGroups, nil
}

validatorsCustodyRequirement, err := s.validatorsCustodyRequirement()

@@ -101,7 +101,7 @@ func (s *Service) custodyGroupCount(context.Context) (uint64, error) {
return 0, errors.Wrap(err, "validators custody requirement")
}

return max(beaconConfig.CustodyRequirement, validatorsCustodyRequirement), nil
return max(cfg.CustodyRequirement, validatorsCustodyRequirement), nil
}

// validatorsCustodyRequirements computes the custody requirements based on the

@@ -116,11 +116,11 @@ func withSubscribeAllDataSubnets(t *testing.T, fn func()) {

func TestUpdateCustodyInfoIfNeeded(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.NumberOfCustodyGroups = 128
beaconConfig.CustodyRequirement = 4
beaconConfig.SamplesPerSlot = 8
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
cfg.CustodyRequirement = 4
cfg.SamplesPerSlot = 8
params.OverrideBeaconConfig(cfg)

t.Run("Skip update when actual custody count >= target", func(t *testing.T) {
setup := setupCustodyTest(t, false)

@@ -159,7 +159,7 @@ func TestUpdateCustodyInfoIfNeeded(t *testing.T) {
require.NoError(t, err)

const expectedSlot = primitives.Slot(100)
setup.assertCustodyInfo(t, expectedSlot, beaconConfig.NumberOfCustodyGroups)
setup.assertCustodyInfo(t, expectedSlot, cfg.NumberOfCustodyGroups)
})
})
}
@@ -1122,19 +1122,21 @@ func randomPeer(
}
}

slices.Sort(nonRateLimitedPeers)

if len(nonRateLimitedPeers) == 0 {
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
time.Sleep(waitPeriod)
continue
if len(nonRateLimitedPeers) > 0 {
slices.Sort(nonRateLimitedPeers)
randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
}

randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")

select {
case <-time.After(waitPeriod):
case <-ctx.Done():
}
}

return "", ctx.Err()
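The `randomPeer` change replaces a bare `time.Sleep` with a `select` over `time.After` and `ctx.Done()`, so the retry loop stops waiting as soon as the caller cancels. A minimal, self-contained version of that wait pattern (the function name here is hypothetical, not Prysm's):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitOrCancel blocks for delay but returns early with the context's error
// if the context is cancelled first, instead of sleeping unconditionally.
func waitOrCancel(ctx context.Context, delay time.Duration) error {
	select {
	case <-time.After(delay):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	// The one-second wait is abandoned as soon as the 50ms context expires.
	fmt.Println(waitOrCancel(ctx, time.Second)) // context deadline exceeded
}
```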
@@ -1366,16 +1366,16 @@ func TestFetchSidecars(t *testing.T) {
})

t.Run("Nominal", func(t *testing.T) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
samplesPerSlot := beaconConfig.SamplesPerSlot
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
samplesPerSlot := cfg.SamplesPerSlot

// Define "now" to be one epoch after genesis time + retention period.
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerSlot := cfg.SecondsPerSlot
slotsPerEpoch := cfg.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
retentionEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
nowWrtGenesisSecs := retentionEpochs.Add(1).Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)

@@ -530,12 +530,12 @@ func TestOriginOutsideRetention(t *testing.T) {
func TestFetchOriginSidecars(t *testing.T) {
ctx := t.Context()

beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerSlot := cfg.SecondsPerSlot
slotsPerEpoch := cfg.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
retentionEpochs := cfg.MinEpochsForDataColumnSidecarsRequest

genesisValidatorRoot := [fieldparams.RootLength]byte{}
@@ -192,6 +192,13 @@ var (
},
)

dataColumnsRecoveredFromELTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_total",
Help: "Count the number of times data columns have been recovered from the execution layer.",
},
)

// Data column sidecar validation, beacon metrics specs
dataColumnSidecarVerificationRequestsCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_data_column_sidecar_processing_requests_total",

@@ -279,6 +286,7 @@ func (s *Service) updateMetrics() {
topicPeerCount.WithLabelValues(formattedTopic).Set(float64(len(s.cfg.p2p.PubSub().ListPeers(formattedTopic))))
}

subscribedTopicPeerCount.Reset()
for _, topic := range s.cfg.p2p.PubSub().GetTopics() {
subscribedTopicPeerCount.WithLabelValues(topic).Set(float64(len(s.cfg.p2p.PubSub().ListPeers(topic))))
}
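The new `data_columns_recovered_from_el_total` counter is declared through `promauto`, which registers it with the default Prometheus registry at package initialization and leaves only `Inc()` calls on the hot path. A minimal standalone example of the same registration pattern (the metric name below is illustrative, not the Prysm one):

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// exampleRecoveredTotal is registered with the default registry by promauto
// at package init time, mirroring how the counter above is declared.
var exampleRecoveredTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "example_recovered_from_el_total",
	Help: "Illustrative counter; the real metric is data_columns_recovered_from_el_total.",
})

func main() {
	// At the point where a recovery happens, the code only increments.
	exampleRecoveredTotal.Inc()
}
```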
@@ -5,6 +5,7 @@ import (
"context"
"encoding/hex"
"fmt"
"slices"

"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"

@@ -382,10 +383,8 @@ func (s *Service) savePending(root [32]byte, pending any, isEqual func(other any

// Skip if the attestation/aggregate from the same validator already exists in
// the pending queue.
for _, a := range s.blkRootToPendingAtts[root] {
if isEqual(a) {
return
}
if slices.ContainsFunc(s.blkRootToPendingAtts[root], isEqual) {
return
}

pendingAttCount.Inc()
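The `savePending` hunk swaps a hand-written scan-and-return loop for `slices.ContainsFunc` from the standard library (Go 1.21+). A toy example of the same call, with an illustrative struct standing in for Prysm's pending attestation type:

```go
package main

import (
	"fmt"
	"slices"
)

// pendingAtt stands in for Prysm's pending attestation type.
type pendingAtt struct{ validatorIndex uint64 }

func main() {
	pending := []pendingAtt{{validatorIndex: 3}, {validatorIndex: 7}}

	// One call replaces the old "loop over the slice and return on the first
	// match" pattern.
	isDuplicate := slices.ContainsFunc(pending, func(a pendingAtt) bool {
		return a.validatorIndex == 7
	})
	fmt.Println(isDuplicate) // true
}
```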
@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"

@@ -58,6 +59,17 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
return errors.Wrapf(err, "unexpected error computing min valid blob request slot, current_slot=%d", cs)
}

// Extract all needed roots.
roots := make([][fieldparams.RootLength]byte, 0, len(blobIdents))
for _, ident := range blobIdents {
root := bytesutil.ToBytes32(ident.BlockRoot)
roots = append(roots, root)
}

// Filter all available roots in block storage.
availableRoots := s.cfg.beaconDB.AvailableBlocks(ctx, roots)

// Serve each requested blob sidecar.
for i := range blobIdents {
if err := ctx.Err(); err != nil {
closeStream(stream, log)

@@ -69,7 +81,15 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
<-ticker.C
}
s.rateLimiter.add(stream, 1)

root, idx := bytesutil.ToBytes32(blobIdents[i].BlockRoot), blobIdents[i].Index

// Do not serve a blob sidecar if the corresponding block is not available.
if !availableRoots[root] {
log.Trace("Peer requested blob sidecar by root but corresponding block not found in db")
continue
}

sc, err := s.cfg.blobStorage.Get(root, idx)
if err != nil {
log := log.WithFields(logrus.Fields{

@@ -113,19 +133,19 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
}

func validateBlobByRootRequest(blobIdents types.BlobSidecarsByRootReq, slot primitives.Slot) error {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
epoch := slots.ToEpoch(slot)
blobIdentCount := uint64(len(blobIdents))

if epoch >= beaconConfig.ElectraForkEpoch {
if blobIdentCount > beaconConfig.MaxRequestBlobSidecarsElectra {
if epoch >= cfg.ElectraForkEpoch {
if blobIdentCount > cfg.MaxRequestBlobSidecarsElectra {
return types.ErrMaxBlobReqExceeded
}

return nil
}

if blobIdentCount > beaconConfig.MaxRequestBlobSidecars {
if blobIdentCount > cfg.MaxRequestBlobSidecars {
return types.ErrMaxBlobReqExceeded
}
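The added code in `blobSidecarByRootRPCHandler` batch-resolves which requested roots have a known block (`AvailableBlocks`) and then silently skips sidecars whose block is missing rather than failing the whole stream. A small sketch of that filter-then-serve pattern, using hypothetical names rather than Prysm's types:

```go
package main

import "fmt"

// serveAvailable sketches the pattern: availability for all requested roots
// is resolved once up front, and requests whose block is not known are
// skipped instead of aborting the whole response.
func serveAvailable(requested []string, available map[string]bool) []string {
	served := make([]string, 0, len(requested))
	for _, root := range requested {
		if !available[root] {
			continue // block not in storage: skip this sidecar silently
		}
		served = append(served, root)
	}
	return served
}

func main() {
	available := map[string]bool{"0xaa": true, "0xcc": true}
	fmt.Println(serveAvailable([]string{"0xaa", "0xbb", "0xcc"}, available)) // [0xaa 0xcc]
}
```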
@@ -38,8 +38,8 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
defer cancel()

SetRPCStreamDeadlines(stream)
beaconConfig := params.BeaconConfig()
maxRequestDataColumnSidecars := beaconConfig.MaxRequestDataColumnSidecars
cfg := params.BeaconConfig()
maxRequestDataColumnSidecars := cfg.MaxRequestDataColumnSidecars
remotePeer := stream.Conn().RemotePeer()

log := log.WithFields(logrus.Fields{

@@ -102,7 +102,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i

// Once the quota is reached, we're done serving the request.
if maxRequestDataColumnSidecars == 0 {
log.WithField("initialQuota", beaconConfig.MaxRequestDataColumnSidecars).Trace("Reached quota for data column sidecars by range request")
log.WithField("initialQuota", cfg.MaxRequestDataColumnSidecars).Trace("Reached quota for data column sidecars by range request")
break
}
}

@@ -31,9 +31,9 @@ import (

func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctx := context.Background()
t.Run("wrong message type", func(t *testing.T) {
@@ -56,18 +56,6 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
return errors.Wrap(err, "validate data columns by root request")
}

requestedColumnsByRoot := make(map[[fieldparams.RootLength]byte][]uint64)
for _, columnIdent := range requestedColumnIdents {
var root [fieldparams.RootLength]byte
copy(root[:], columnIdent.BlockRoot)
requestedColumnsByRoot[root] = append(requestedColumnsByRoot[root], columnIdent.Columns...)
}

// Sort by column index for each root.
for _, columns := range requestedColumnsByRoot {
slices.Sort(columns)
}

// Compute the oldest slot we'll allow a peer to request, based on the current slot.
minReqSlot, err := dataColumnsRPCMinValidSlot(s.cfg.clock.CurrentSlot())
if err != nil {

@@ -84,6 +72,12 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
}

if log.Logger.Level >= logrus.TraceLevel {
requestedColumnsByRoot := make(map[[fieldparams.RootLength]byte][]uint64)
for _, ident := range requestedColumnIdents {
root := bytesutil.ToBytes32(ident.BlockRoot)
requestedColumnsByRoot[root] = append(requestedColumnsByRoot[root], ident.Columns...)
}

// We optimistially assume the peer requests the same set of columns for all roots,
// pre-sizing the map accordingly.
requestedRootsByColumnSet := make(map[string][]string, 1)

@@ -96,6 +90,17 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
log.WithField("requested", requestedRootsByColumnSet).Trace("Serving data column sidecars by root")
}

// Extract all requested roots.
roots := make([][fieldparams.RootLength]byte, 0, len(requestedColumnIdents))
for _, ident := range requestedColumnIdents {
root := bytesutil.ToBytes32(ident.BlockRoot)
roots = append(roots, root)
}

// Filter all available roots in block storage.
availableRoots := s.cfg.beaconDB.AvailableBlocks(ctx, roots)

// Serve each requested data column sidecar.
count := 0
for _, ident := range requestedColumnIdents {
if err := ctx.Err(); err != nil {

@@ -117,6 +122,12 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int

s.rateLimiter.add(stream, int64(len(columns)))

// Do not serve a blob sidecar if the corresponding block is not available.
if !availableRoots[root] {
log.Trace("Peer requested blob sidecar by root but corresponding block not found in db")
continue
}

// Retrieve the requested sidecars from the store.
verifiedRODataColumns, err := s.cfg.dataColumnStorage.Get(root, columns)
if err != nil {

@@ -163,9 +174,9 @@ func dataColumnsRPCMinValidSlot(currentSlot primitives.Slot) (primitives.Slot, e
return primitives.Slot(math.MaxUint64), nil
}

beaconConfig := params.BeaconConfig()
minReqEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := beaconConfig.FuluForkEpoch
cfg := params.BeaconConfig()
minReqEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := cfg.FuluForkEpoch

currEpoch := slots.ToEpoch(currentSlot)
if currEpoch > minReqEpochs && currEpoch-minReqEpochs > minStartEpoch {
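The condition at the end of `dataColumnsRPCMinValidSlot` encodes the retention window for data column sidecars: requests may reach back at most `MinEpochsForDataColumnSidecarsRequest` epochs, but never before the Fulu fork epoch. A sketch of that boundary calculation with made-up numbers:

```go
package main

import "fmt"

// minValidEpoch sketches the boundary check: serve requests at most
// minReqEpochs back from the current epoch, but never before the Fulu fork
// epoch. Constants below are illustrative, not mainnet values.
func minValidEpoch(currentEpoch, minReqEpochs, fuluForkEpoch uint64) uint64 {
	if currentEpoch > minReqEpochs && currentEpoch-minReqEpochs > fuluForkEpoch {
		return currentEpoch - minReqEpochs
	}
	return fuluForkEpoch
}

func main() {
	fmt.Println(minValidEpoch(10000, 4096, 500)) // 5904: the retention window governs
	fmt.Println(minValidEpoch(600, 4096, 500))   // 500: clamped to the fork epoch
}
```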
@@ -10,6 +10,7 @@ import (

chainMock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"

@@ -19,6 +20,7 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/libp2p/go-libp2p/core/network"

@@ -28,9 +30,9 @@ import (

func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)

@@ -43,9 +45,9 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {

t.Run("invalid request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 1
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.MaxRequestDataColumnSidecars = 1
params.OverrideBeaconConfig(cfg)

localP2P := p2ptest.NewTestP2P(t)
service := &Service{cfg: &config{p2p: localP2P}}

@@ -96,30 +98,54 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
}()

params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 1
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)

localP2P := p2ptest.NewTestP2P(t)
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})

params := []util.DataColumnParam{
{Slot: 10, Index: 1}, {Slot: 10, Index: 2}, {Slot: 10, Index: 3},
{Slot: 40, Index: 4}, {Slot: 40, Index: 6},
{Slot: 45, Index: 7}, {Slot: 45, Index: 8}, {Slot: 45, Index: 9},
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{
{Slot: 10, Index: 1}, {Slot: 10, Index: 2}, {Slot: 10, Index: 3},
{Slot: 40, Index: 4}, {Slot: 40, Index: 6},
{Slot: 45, Index: 7}, {Slot: 45, Index: 8}, {Slot: 45, Index: 9},
{Slot: 46, Index: 10}, // Corresponding block won't be saved in DB
},
)

dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
err := dataColumnStorage.Save(verifiedRODataColumns)
require.NoError(t, err)

beaconDB := testDB.SetupDB(t)
indices := [...]int{0, 3, 5}

roBlocks := make([]blocks.ROBlock, 0, len(indices))
for _, i := range indices {
blockPb := util.NewBeaconBlock()

signedBeaconBlock, err := blocks.NewSignedBeaconBlock(blockPb)
require.NoError(t, err)

// Here the block root has to match the sidecar's block root.
// (However, the block root does not match the actual root of the block, but we don't care for this test.)
roBlock, err := blocks.NewROBlockWithRoot(signedBeaconBlock, verifiedRODataColumns[i].BlockRoot())
require.NoError(t, err)

roBlocks = append(roBlocks, roBlock)
}

_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, params)

storage := filesystem.NewEphemeralDataColumnStorage(t)
err := storage.Save(verifiedRODataColumns)
err = beaconDB.SaveROBlocks(ctx, roBlocks, false /*cache*/)
require.NoError(t, err)

service := &Service{
cfg: &config{
p2p: localP2P,
beaconDB: beaconDB,
clock: clock,
dataColumnStorage: storage,
dataColumnStorage: dataColumnStorage,
chain: &chainMock.ChainService{},
},
rateLimiter: newRateLimiter(localP2P),

@@ -134,6 +160,7 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
root0 := verifiedRODataColumns[0].BlockRoot()
root3 := verifiedRODataColumns[3].BlockRoot()
root5 := verifiedRODataColumns[5].BlockRoot()
root8 := verifiedRODataColumns[8].BlockRoot()

remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()

@@ -147,22 +174,22 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
break
}

require.NoError(t, err)
assert.NoError(t, err)
sidecars = append(sidecars, sidecar)
}

require.Equal(t, 5, len(sidecars))
require.Equal(t, root3, sidecars[0].BlockRoot())
require.Equal(t, root3, sidecars[1].BlockRoot())
require.Equal(t, root5, sidecars[2].BlockRoot())
require.Equal(t, root5, sidecars[3].BlockRoot())
require.Equal(t, root5, sidecars[4].BlockRoot())
assert.Equal(t, 5, len(sidecars))
assert.Equal(t, root3, sidecars[0].BlockRoot())
assert.Equal(t, root3, sidecars[1].BlockRoot())
assert.Equal(t, root5, sidecars[2].BlockRoot())
assert.Equal(t, root5, sidecars[3].BlockRoot())
assert.Equal(t, root5, sidecars[4].BlockRoot())

require.Equal(t, uint64(4), sidecars[0].Index)
require.Equal(t, uint64(6), sidecars[1].Index)
require.Equal(t, uint64(7), sidecars[2].Index)
require.Equal(t, uint64(8), sidecars[3].Index)
require.Equal(t, uint64(9), sidecars[4].Index)
assert.Equal(t, uint64(4), sidecars[0].Index)
assert.Equal(t, uint64(6), sidecars[1].Index)
assert.Equal(t, uint64(7), sidecars[2].Index)
assert.Equal(t, uint64(8), sidecars[3].Index)
assert.Equal(t, uint64(9), sidecars[4].Index)
})

localP2P.Connect(remoteP2P)

@@ -182,6 +209,10 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
BlockRoot: root5[:],
Columns: []uint64{7, 8, 9},
},
{
BlockRoot: root8[:],
Columns: []uint64{10},
},
}

err = service.dataColumnSidecarByRootRPCHandler(ctx, msg, stream)
@@ -465,8 +465,8 @@ func SendDataColumnSidecarsByRangeRequest(
return nil, nil
}

beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
maxRequestDataColumnSidecars := params.BeaconConfig().MaxRequestDataColumnSidecars

// Check if we do not request too many sidecars.

@@ -889,9 +889,9 @@ func TestErrInvalidFetchedDataDistinction(t *testing.T) {

func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)

@@ -923,9 +923,9 @@ func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {

t.Run("too many columns in request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.MaxRequestDataColumnSidecars = 0
params.OverrideBeaconConfig(cfg)

request := &ethpb.DataColumnSidecarsByRangeRequest{Count: 1, Columns: []uint64{1, 2, 3}}
_, err := SendDataColumnSidecarsByRangeRequest(DataColumnSidecarsParams{Ctx: t.Context()}, "", request)

@@ -1193,9 +1193,9 @@ func TestIsSidecarIndexRequested(t *testing.T) {

func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)

@@ -1223,9 +1223,9 @@ func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {

t.Run("too many columns in request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 4
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.MaxRequestDataColumnSidecars = 4
params.OverrideBeaconConfig(cfg)

request := p2ptypes.DataColumnsByRootIdentifiers{
{Columns: []uint64{1, 2, 3}},
@@ -445,7 +445,7 @@ func TestStatusRPCRequest_RequestSent(t *testing.T) {
custodyGroupCount = uint64(4)
)

beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
ctx := t.Context()

testCases := []struct {

@@ -456,7 +456,7 @@ func TestStatusRPCRequest_RequestSent(t *testing.T) {
}{
{
name: "before fulu",
fuluForkEpoch: beaconConfig.FarFutureEpoch,
fuluForkEpoch: cfg.FarFutureEpoch,
topic: "/eth2/beacon_chain/req/status/1/ssz_snappy",
streamHandler: func(service *Service, stream network.Stream, genesisState beaconState.BeaconState, beaconRoot, headRoot, finalizedRoot []byte) {
out := &ethpb.Status{}
@@ -695,10 +695,10 @@ func (s *Service) dataColumnSubnetIndices(primitives.Slot) map[uint64]bool {
// the validators custody requirement, and whether the node is subscribed to all data subnets.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#custody-sampling
func (s *Service) samplingSize() (uint64, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()

if flags.Get().SubscribeAllDataSubnets {
return beaconConfig.DataColumnSidecarSubnetCount, nil
return cfg.DataColumnSidecarSubnetCount, nil
}

// Compute the validators custody requirement.

@@ -712,14 +712,10 @@ func (s *Service) samplingSize() (uint64, error) {
return 0, errors.Wrap(err, "custody group count")
}

return max(beaconConfig.SamplesPerSlot, validatorsCustodyRequirement, custodyGroupCount), nil
return max(cfg.SamplesPerSlot, validatorsCustodyRequirement, custodyGroupCount), nil
}

func (s *Service) persistentAndAggregatorSubnetIndices(currentSlot primitives.Slot) map[uint64]bool {
if flags.Get().SubscribeToAllSubnets {
return mapFromCount(params.BeaconConfig().AttestationSubnetCount)
}

persistentSubnetIndices := persistentSubnetIndices()
aggregatorSubnetIndices := aggregatorSubnetIndices(currentSlot)
@@ -224,6 +224,8 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
}

if len(unseenIndices) > 0 {
dataColumnsRecoveredFromELTotal.Inc()

log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),