Mirror of https://github.com/OffchainLabs/prysm.git, synced 2026-01-10 22:07:59 -05:00

Compare commits: fix-valida...debug-log- (16 commits)

- 7e719770d4
- 96b31a9f64
- a7c3004115
- 30d5749ef6
- bc69ab8a44
- ed7b511949
- 0b7c005d7d
- 65e8c37b48
- 689015ff01
- 08c14f02f6
- 4bb0b44f16
- 29237cb0bc
- 2b25ede641
- b7de64a340
- 11aa51e033
- fa0dc09ce0
CHANGELOG.md (55 changed lines)
@@ -4,7 +4,56 @@ All notable changes to this project will be documented in this file.
 
 The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
 
-## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...HEAD)
+## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.2.0...HEAD)
+
+### Added
+
+- Added proper gas limit check for header from the builder.
+- Added an error field to log `Finished building block`.
+- Implemented a new `EmptyExecutionPayloadHeader` function.
+- `Finished building block`: Display error only if not nil.
+- Added support to update target and max blob count to different values per hard fork config.
+- Log before blob filesystem cache warm-up.
+- Debug log when downscoring a peer for bad response reason.
+
+### Changed
+
+- Process light client finality updates only for new finalized epochs instead of doing it for every block.
+- Refactor subnets subscriptions.
+- Refactor RPC handlers subscriptions.
+- Go deps upgrade, from `ioutil` to `io`.
+- Move successfully registered validator(s) on builder log to debug.
+
+### Deprecated
+
+### Removed
+
+### Fixed
+
+- Added check to prevent nil pointer dereference or out of bounds array access when validating the BLSToExecutionChange on an impossibly nil validator.
+
+### Security
+
+## [v5.2.0](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...v5.2.0)
+
+Updating to this release is highly recommended, especially for users running v5.1.1 or v5.1.2.
+This release is **mandatory** for all validator clients using mev-boost with a gas limit increase.
+Without upgrading to this release, validator clients will default to using local execution blocks
+when the gas limit starts to increase.
+
+This release has several fixes and new features. In this release, we have enabled the QUIC protocol by
+default, which uses port 13000 for `--p2p-quic-port`. This may be a [breaking change](https://github.com/prysmaticlabs/prysm/pull/14688#issuecomment-2516713826)
+if you are already using port 13000. This release has some improvements for raising the gas limit,
+but there are [known issues](https://hackmd.io/@ttsao/prysm-gas-limit) with the gas limit provided in the
+proposer settings file not being respected for mev-boost outsourced blocks. Signalling an increase
+in the gas limit works perfectly for local block production as of this release. See [pumpthegas.org](https://pumpthegas.org) for more info on raising the gas limit on L1.
+
+Notable features:
+
+- Prysm can reuse blobs from the EL via `engine_getBlobsV1`, [potentially saving bandwidth](https://hackmd.io/@ttsao/get-blobs-early-results).
+- QUIC is enabled by default. This is a UDP-based networking protocol with default port 13000.
+
 ### Added
 
@@ -33,8 +82,7 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
 - Added a Prometheus error counter metric for SSE requests.
 - Save light client updates and bootstraps in DB.
 - Added more comprehensive tests for `BlockToLightClientHeader`. [PR](https://github.com/prysmaticlabs/prysm/pull/14699)
-- Added an error field to log `Finished building block`.
-- Implemented a new `EmptyExecutionPayloadHeader` function.
+- Added light client feature flag check to RPC handlers. [PR](https://github.com/prysmaticlabs/prysm/pull/14736)
 
 ### Changed
 
@@ -79,7 +127,6 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
 - Check kzg commitments align with blobs and proofs for beacon api end point.
 - Revert "Proposer checks gas limit before accepting builder's bid".
 - Updated quic-go to v0.48.2 .
-- Process light client finality updates only for new finalized epochs instead of doing it for every block.
 
 ### Deprecated
 
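The changelog entry above about per-hard-fork target and max blob counts describes a slot-keyed lookup replacing a compile-time constant. A minimal sketch of the idea, using hypothetical fork epochs and blob counts rather than Prysm's actual config API:

```go
package main

import "fmt"

const slotsPerEpoch = 32

// forkBlobLimit pairs a hypothetical fork activation epoch with its max blob count.
type forkBlobLimit struct {
	epoch    uint64 // first epoch of the fork
	maxBlobs int    // max blobs per block once the fork is active
}

// Ordered oldest to newest; values are illustrative, not mainnet parameters.
var limits = []forkBlobLimit{
	{epoch: 0, maxBlobs: 6},    // pre-upgrade default
	{epoch: 1000, maxBlobs: 9}, // hypothetical later fork raises the cap
}

// maxBlobsPerBlock returns the limit for the fork active at the given slot,
// scanning from the newest fork backwards.
func maxBlobsPerBlock(slot uint64) int {
	epoch := slot / slotsPerEpoch
	for i := len(limits) - 1; i >= 0; i-- {
		if epoch >= limits[i].epoch {
			return limits[i].maxBlobs
		}
	}
	return limits[0].maxBlobs
}

func main() {
	fmt.Println(maxBlobsPerBlock(0))         // 6
	fmt.Println(maxBlobsPerBlock(1000 * 32)) // 9
}
```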
@@ -129,7 +129,7 @@ If your change is user facing, you must include a CHANGELOG.md entry. See the [M
 
 **17. Create a pull request.**
 
-Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base master”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
+Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base develop”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
 
 **18. Respond to comments by Core Contributors.**
@@ -15,6 +15,7 @@ go_library(
         "//api/client:go_default_library",
         "//api/server/structs:go_default_library",
         "//config/fieldparams:go_default_library",
+        "//config/params:go_default_library",
         "//consensus-types:go_default_library",
         "//consensus-types/blocks:go_default_library",
         "//consensus-types/interfaces:go_default_library",
@@ -282,7 +282,7 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
 	if err != nil {
 		return err
 	}
-	log.WithField("num_registrations", len(svr)).Info("successfully registered validator(s) on builder")
+	log.WithField("registrationCount", len(svr)).Debug("Successfully registered validator(s) on builder")
 	return nil
 }
@@ -9,6 +9,7 @@ import (
 	"github.com/ethereum/go-ethereum/common/hexutil"
 	"github.com/pkg/errors"
 	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+	"github.com/prysmaticlabs/prysm/v5/config/params"
 	consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
 	types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"

@@ -1013,7 +1014,7 @@ func (bb *BuilderBidDeneb) ToProto() (*eth.BuilderBidDeneb, error) {
 	if err != nil {
 		return nil, err
 	}
-	if len(bb.BlobKzgCommitments) > fieldparams.MaxBlobsPerBlock {
+	if len(bb.BlobKzgCommitments) > params.BeaconConfig().DeprecatedMaxBlobsPerBlock {
 		return nil, fmt.Errorf("too many blob commitments: %d", len(bb.BlobKzgCommitments))
 	}
 	kzgCommitments := make([][]byte, len(bb.BlobKzgCommitments))
@@ -140,6 +140,7 @@ go_test(
         "//beacon-chain/core/blocks:go_default_library",
        "//beacon-chain/core/feed/state:go_default_library",
         "//beacon-chain/core/helpers:go_default_library",
+        "//beacon-chain/core/light-client:go_default_library",
         "//beacon-chain/core/signing:go_default_library",
         "//beacon-chain/core/transition:go_default_library",
         "//beacon-chain/das:go_default_library",
@@ -15,7 +15,6 @@ import (
 	forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
 	"github.com/prysmaticlabs/prysm/v5/config/features"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"

@@ -496,14 +495,15 @@ func (s *Service) runLateBlockTasks() {
 // It returns a map where each key represents a missing BlobSidecar index.
 // An empty map means we have all indices; a non-empty map can be used to compare incoming
 // BlobSidecars against the set of known missing sidecars.
-func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte) (map[uint64]struct{}, error) {
+func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte, slot primitives.Slot) (map[uint64]struct{}, error) {
+	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
 	if len(expected) == 0 {
 		return nil, nil
 	}
-	if len(expected) > fieldparams.MaxBlobsPerBlock {
+	if len(expected) > maxBlobsPerBlock {
 		return nil, errMaxBlobsExceeded
 	}
-	indices, err := bs.Indices(root)
+	indices, err := bs.Indices(root, slot)
 	if err != nil {
 		return nil, err
 	}

@@ -552,7 +552,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 		return nil
 	}
 	// get a map of BlobSidecar indices that are not currently available.
-	missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
+	missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
 	if err != nil {
 		return err
 	}

@@ -563,7 +563,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 
 	// The gossip handler for blobs writes the index of each verified blob referencing the given
 	// root to the channel returned by blobNotifiers.forRoot.
-	nc := s.blobNotifiers.forRoot(root)
+	nc := s.blobNotifiers.forRoot(root, block.Slot())
 
 	// Log for DA checks that cross over into the next slot; helpful for debugging.
 	nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
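The `missingIndices` contract in the hunk above (a nil result means nothing is outstanding, otherwise a set of still-missing sidecar indices) can be illustrated in isolation. A sketch assuming plain in-memory inputs instead of Prysm's `BlobStorage`:

```go
package main

import "fmt"

// missingIndices mirrors the documented contract: given the number of expected
// sidecars and the set of indices already on disk, return the indices still missing.
// A nil/empty result means nothing is outstanding.
func missingIndices(expected int, present map[uint64]struct{}) map[uint64]struct{} {
	if expected == 0 {
		return nil
	}
	missing := make(map[uint64]struct{})
	for i := uint64(0); i < uint64(expected); i++ {
		if _, ok := present[i]; !ok {
			missing[i] = struct{}{}
		}
	}
	return missing
}

func main() {
	present := map[uint64]struct{}{1: {}, 2: {}}
	fmt.Println(missingIndices(3, present)) // map[0:{}], index 0 still missing
}
```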
@@ -14,6 +14,7 @@ import (
 	"github.com/pkg/errors"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
+	lightClient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -2205,23 +2206,23 @@ func TestMissingIndices(t *testing.T) {
 		},
 		{
 			name:     "expected exceeds max",
-			expected: fakeCommitments(fieldparams.MaxBlobsPerBlock + 1),
+			expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0) + 1),
 			err:      errMaxBlobsExceeded,
 		},
 		{
 			name:     "first missing",
-			expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
+			expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
 			present:  []uint64{1, 2, 3, 4, 5},
 			result:   fakeResult([]uint64{0}),
 		},
 		{
 			name:     "all missing",
-			expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
+			expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
 			result:   fakeResult([]uint64{0, 1, 2, 3, 4, 5}),
 		},
 		{
 			name:     "none missing",
-			expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
+			expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
 			present:  []uint64{0, 1, 2, 3, 4, 5},
 			result:   fakeResult([]uint64{}),
 		},

@@ -2255,7 +2256,7 @@ func TestMissingIndices(t *testing.T) {
 	bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
 	t.Run(c.name, func(t *testing.T) {
 		require.NoError(t, bm.CreateFakeIndices(c.root, c.present...))
-		missing, err := missingIndices(bs, c.root, c.expected)
+		missing, err := missingIndices(bs, c.root, c.expected, 0)
 		if c.err != nil {
 			require.ErrorIs(t, err, c.err)
 			return
@@ -2505,173 +2506,500 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
 }
 
 func TestSaveLightClientUpdate(t *testing.T) {
+	featCfg := &features.Flags{}
+	featCfg.EnableLightClient = true
+	reset := features.InitWithReset(featCfg)
+
 	s, tr := minimalTestService(t)
 	ctx := tr.ctx
 
 	t.Run("Altair", func(t *testing.T) {
-		featCfg := &features.Flags{}
-		featCfg.EnableLightClient = true
-		reset := features.InitWithReset(featCfg)
+		t.Run("No old update", func(t *testing.T) {
 			l := util.NewTestLightClient(t).SetupTestAltair()
 
 			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
 
 			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
 			require.NoError(t, err)
 			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
 			require.NoError(t, err)
 			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
 			require.NoError(t, err)
 
 			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
 			require.NoError(t, err)
 			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
 			require.NoError(t, err)
 
 			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
 			require.NoError(t, err)
 			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
 			require.NoError(t, err)
 
 			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
 			require.NoError(t, err)
 
 			cfg := &postBlockProcessConfig{
 				ctx:            ctx,
 				roblock:        roblock,
 				postState:      l.State,
 				isValidPayload: true,
 			}
 
 			s.saveLightClientUpdate(cfg)
 
 			// Check that the light client update is saved
 			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
 			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 			require.NoError(t, err)
 			require.NotNil(t, u)
 			attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
 			require.NoError(t, err)
 			require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
 			require.Equal(t, u.Version(), version.Altair)
+		})
 
-		reset()
+		t.Run("New update is better", func(t *testing.T) {
+			l := util.NewTestLightClient(t).SetupTestAltair()
+
+			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+
+			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
+			require.NoError(t, err)
+			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
+			require.NoError(t, err)
+
+			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
+			require.NoError(t, err)
+			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
+			require.NoError(t, err)
+
+			cfg := &postBlockProcessConfig{
+				ctx:            ctx,
+				roblock:        roblock,
+				postState:      l.State,
+				isValidPayload: true,
+			}
+
+			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
+
+			// create and save old update
+			oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
+			require.NoError(t, err)
+
+			s.saveLightClientUpdate(cfg)
+
+			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
+			require.NoError(t, err)
+			require.NotNil(t, u)
+			attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
+			require.NoError(t, err)
+			require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
+			require.Equal(t, u.Version(), version.Altair)
+		})
+
+		t.Run("Old update is better", func(t *testing.T) {
+			l := util.NewTestLightClient(t).SetupTestAltair()
+
+			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+
+			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
+			require.NoError(t, err)
+			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
+			require.NoError(t, err)
+
+			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
+			require.NoError(t, err)
+			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
+			require.NoError(t, err)
+
+			cfg := &postBlockProcessConfig{
+				ctx:            ctx,
+				roblock:        roblock,
+				postState:      l.State,
+				isValidPayload: true,
+			}
+
+			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
+
+			// create and save old update
+			oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
+			require.NoError(t, err)
+
+			scb := make([]byte, 64)
+			for i := 0; i < 5; i++ {
+				scb[i] = 0x01
+			}
+			oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
+				SyncCommitteeBits:      scb,
+				SyncCommitteeSignature: make([]byte, 96),
+			})
+
+			err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
+			require.NoError(t, err)
+
+			s.saveLightClientUpdate(cfg)
+
+			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
+			require.NoError(t, err)
+			require.NotNil(t, u)
+			require.DeepEqual(t, oldUpdate, u)
+			require.Equal(t, u.Version(), version.Altair)
+		})
 	})
 
 	t.Run("Capella", func(t *testing.T) {
-		featCfg := &features.Flags{}
-		featCfg.EnableLightClient = true
-		reset := features.InitWithReset(featCfg)
+		t.Run("No old update", func(t *testing.T) {
 			l := util.NewTestLightClient(t).SetupTestCapella(false)
 
 			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
 
 			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
 			require.NoError(t, err)
 			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
 			require.NoError(t, err)
 			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
 			require.NoError(t, err)
 
 			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
 			require.NoError(t, err)
 			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
 			require.NoError(t, err)
 
 			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
 			require.NoError(t, err)
 			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
 			require.NoError(t, err)
 
 			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
 			require.NoError(t, err)
 
 			cfg := &postBlockProcessConfig{
 				ctx:            ctx,
 				roblock:        roblock,
 				postState:      l.State,
 				isValidPayload: true,
 			}
 
 			s.saveLightClientUpdate(cfg)
 
 			// Check that the light client update is saved
 			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
 			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 			require.NoError(t, err)
 			require.NotNil(t, u)
 			attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
 			require.NoError(t, err)
 			require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
 			require.Equal(t, u.Version(), version.Capella)
+		})
 
-		reset()
+		t.Run("New update is better", func(t *testing.T) {
+			l := util.NewTestLightClient(t).SetupTestCapella(false)
+
+			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+
+			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
+			require.NoError(t, err)
+			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
+			require.NoError(t, err)
+
+			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
+			require.NoError(t, err)
+			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
+			require.NoError(t, err)
+
+			cfg := &postBlockProcessConfig{
+				ctx:            ctx,
+				roblock:        roblock,
+				postState:      l.State,
+				isValidPayload: true,
+			}
+
+			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
+
+			// create and save old update
+			oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
+			require.NoError(t, err)
+
+			s.saveLightClientUpdate(cfg)
+
+			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
+			require.NoError(t, err)
+			require.NotNil(t, u)
+			attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
+			require.NoError(t, err)
+			require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
+			require.Equal(t, u.Version(), version.Capella)
+		})
+
+		t.Run("Old update is better", func(t *testing.T) {
+			l := util.NewTestLightClient(t).SetupTestCapella(false)
+
+			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+
+			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
+			require.NoError(t, err)
+			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
+			require.NoError(t, err)
+
+			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
+			require.NoError(t, err)
+			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
+			require.NoError(t, err)
+
+			cfg := &postBlockProcessConfig{
+				ctx:            ctx,
+				roblock:        roblock,
+				postState:      l.State,
+				isValidPayload: true,
+			}
+
+			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
+
+			// create and save old update
+			oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
+			require.NoError(t, err)
+
+			scb := make([]byte, 64)
+			for i := 0; i < 5; i++ {
+				scb[i] = 0x01
+			}
+			oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
+				SyncCommitteeBits:      scb,
+				SyncCommitteeSignature: make([]byte, 96),
+			})
+
+			err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
+			require.NoError(t, err)
+
+			s.saveLightClientUpdate(cfg)
+
+			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
+			require.NoError(t, err)
+			require.NotNil(t, u)
+			require.DeepEqual(t, oldUpdate, u)
+			require.Equal(t, u.Version(), version.Capella)
+		})
 	})
 
 	t.Run("Deneb", func(t *testing.T) {
-		featCfg := &features.Flags{}
-		featCfg.EnableLightClient = true
-		reset := features.InitWithReset(featCfg)
+		t.Run("No old update", func(t *testing.T) {
 			l := util.NewTestLightClient(t).SetupTestDeneb(false)
 
 			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
 
 			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
 			require.NoError(t, err)
 			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
 			require.NoError(t, err)
 			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
 			require.NoError(t, err)
 
 			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
 			require.NoError(t, err)
 			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
 			require.NoError(t, err)
 
 			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
 			require.NoError(t, err)
 			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
 			require.NoError(t, err)
 
 			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
 			require.NoError(t, err)
 
 			cfg := &postBlockProcessConfig{
 				ctx:            ctx,
 				roblock:        roblock,
 				postState:      l.State,
 				isValidPayload: true,
 			}
 
 			s.saveLightClientUpdate(cfg)
 
 			// Check that the light client update is saved
 			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
 			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
 			require.NoError(t, err)
 			require.NotNil(t, u)
 			attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
 			require.NoError(t, err)
 			require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
 			require.Equal(t, u.Version(), version.Deneb)
+		})
 
-		reset()
+		t.Run("New update is better", func(t *testing.T) {
+			l := util.NewTestLightClient(t).SetupTestDeneb(false)
+
+			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+
+			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
+			require.NoError(t, err)
+			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
+			require.NoError(t, err)
+
+			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
+			require.NoError(t, err)
+			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
+			require.NoError(t, err)
+
+			cfg := &postBlockProcessConfig{
+				ctx:            ctx,
+				roblock:        roblock,
+				postState:      l.State,
+				isValidPayload: true,
+			}
+
+			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
+
+			// create and save old update
+			oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
+			require.NoError(t, err)
+
+			s.saveLightClientUpdate(cfg)
+
+			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
+			require.NoError(t, err)
+			require.NotNil(t, u)
+			attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
+			require.NoError(t, err)
+			require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
+			require.Equal(t, u.Version(), version.Deneb)
+		})
+
+		t.Run("Old update is better", func(t *testing.T) {
+			l := util.NewTestLightClient(t).SetupTestDeneb(false)
+
+			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+
+			err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
+			require.NoError(t, err)
+			attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
+			require.NoError(t, err)
+
+			currentBlockRoot, err := l.Block.Block().HashTreeRoot()
+			require.NoError(t, err)
+			roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
+			require.NoError(t, err)
+			err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
+			require.NoError(t, err)
+
+			err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
+			require.NoError(t, err)
+
+			cfg := &postBlockProcessConfig{
+				ctx:            ctx,
+				roblock:        roblock,
+				postState:      l.State,
+				isValidPayload: true,
+			}
+
+			period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
+
+			// create and save old update
+			oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
+			require.NoError(t, err)
+
+			scb := make([]byte, 64)
+			for i := 0; i < 5; i++ {
+				scb[i] = 0x01
+			}
+			oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
+				SyncCommitteeBits:      scb,
+				SyncCommitteeSignature: make([]byte, 96),
+			})
+
+			err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
+			require.NoError(t, err)
+
+			s.saveLightClientUpdate(cfg)
+
+			u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
+			require.NoError(t, err)
+			require.NotNil(t, u)
+			require.DeepEqual(t, oldUpdate, u)
+			require.Equal(t, u.Version(), version.Deneb)
+		})
 	})
+
+	reset()
 }
 
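The three subtests per fork above encode an overwrite rule: a freshly built update replaces the stored one by default, and the stored update is kept when it ranks better under the comparison (the tests tweak the old update's sync-committee bits to flip the outcome). A rough standalone sketch of such a comparison, counting set participation bits; the `update` type and field name are hypothetical, not Prysm's `LightClientUpdate` API:

```go
package main

import (
	"fmt"
	"math/bits"
)

type update struct {
	syncCommitteeBits []byte // one bit per participating sync committee member
}

// participation counts the set bits in the sync committee bitfield.
func participation(u update) int {
	n := 0
	for _, b := range u.syncCommitteeBits {
		n += bits.OnesCount8(b)
	}
	return n
}

// shouldReplace reports whether the new update should overwrite the stored one,
// keeping the stored update when it has strictly more participation.
func shouldReplace(stored, fresh update) bool {
	return participation(fresh) >= participation(stored)
}

func main() {
	stored := update{syncCommitteeBits: []byte{0x1f}} // 5 participants
	fresh := update{syncCommitteeBits: []byte{0x00}}  // 0 participants
	fmt.Println(shouldReplace(stored, fresh))         // false: old update is better
}
```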
 func TestSaveLightClientBootstrap(t *testing.T) {
+	featCfg := &features.Flags{}
+	featCfg.EnableLightClient = true
+	reset := features.InitWithReset(featCfg)
+
 	s, tr := minimalTestService(t)
 	ctx := tr.ctx
 
 	t.Run("Altair", func(t *testing.T) {
-		featCfg := &features.Flags{}
-		featCfg.EnableLightClient = true
-		reset := features.InitWithReset(featCfg)
-
 		l := util.NewTestLightClient(t).SetupTestAltair()
 
 		s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2704,15 +3032,9 @@ func TestSaveLightClientBootstrap(t *testing.T) {
 		require.NoError(t, err)
 		require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
 		require.Equal(t, b.Version(), version.Altair)
-
-		reset()
 	})
 
 	t.Run("Capella", func(t *testing.T) {
-		featCfg := &features.Flags{}
-		featCfg.EnableLightClient = true
-		reset := features.InitWithReset(featCfg)
-
 		l := util.NewTestLightClient(t).SetupTestCapella(false)
 
 		s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2745,15 +3067,9 @@ func TestSaveLightClientBootstrap(t *testing.T) {
 		require.NoError(t, err)
 		require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
 		require.Equal(t, b.Version(), version.Capella)
-
-		reset()
 	})
 
 	t.Run("Deneb", func(t *testing.T) {
-		featCfg := &features.Flags{}
-		featCfg.EnableLightClient = true
-		reset := features.InitWithReset(featCfg)
-
 		l := util.NewTestLightClient(t).SetupTestDeneb(false)
 
 		s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2786,7 +3102,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
 		require.NoError(t, err)
 		require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
 		require.Equal(t, b.Version(), version.Deneb)
-
-		reset()
 	})
+
+	reset()
 }
@@ -4,12 +4,13 @@ import (
 	"context"
 
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
+	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
 )
 
 // SendNewBlobEvent sends a message to the BlobNotifier channel that the blob
 // for the block root `root` is ready in the database
-func (s *Service) sendNewBlobEvent(root [32]byte, index uint64) {
-	s.blobNotifiers.notifyIndex(root, index)
+func (s *Service) sendNewBlobEvent(root [32]byte, index uint64, slot primitives.Slot) {
+	s.blobNotifiers.notifyIndex(root, index, slot)
 }
 
 // ReceiveBlob saves the blob to database and sends the new event

@@ -18,6 +19,6 @@ func (s *Service) ReceiveBlob(ctx context.Context, b blocks.VerifiedROBlob) erro
 		return err
 	}
 
-	s.sendNewBlobEvent(b.BlockRoot(), b.Index)
+	s.sendNewBlobEvent(b.BlockRoot(), b.Index, b.Slot())
 	return nil
 }
@@ -33,10 +33,10 @@ import (
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
 	"github.com/prysmaticlabs/prysm/v5/config/features"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
 	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
 	"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
 	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"

@@ -104,18 +104,22 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
 type blobNotifierMap struct {
 	sync.RWMutex
 	notifiers map[[32]byte]chan uint64
-	seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
+	seenIndex map[[32]byte][]bool
 }
 
 // notifyIndex notifies a blob by its index for a given root.
 // It uses internal maps to keep track of seen indices and notifier channels.
-func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
-	if idx >= fieldparams.MaxBlobsPerBlock {
+func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitives.Slot) {
+	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
+	if idx >= uint64(maxBlobsPerBlock) {
 		return
 	}
 
 	bn.Lock()
 	seen := bn.seenIndex[root]
+	if seen == nil {
+		seen = make([]bool, maxBlobsPerBlock)
+	}
 	if seen[idx] {
 		bn.Unlock()
 		return

@@ -126,7 +130,7 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
 	// Retrieve or create the notifier channel for the given root.
 	c, ok := bn.notifiers[root]
 	if !ok {
-		c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
+		c = make(chan uint64, maxBlobsPerBlock)
 		bn.notifiers[root] = c
 	}

@@ -135,12 +139,13 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
 	c <- idx
 }
 
-func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
+func (bn *blobNotifierMap) forRoot(root [32]byte, slot primitives.Slot) chan uint64 {
+	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
 	bn.Lock()
 	defer bn.Unlock()
 	c, ok := bn.notifiers[root]
 	if !ok {
-		c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
+		c = make(chan uint64, maxBlobsPerBlock)
 		bn.notifiers[root] = c
 	}
 	return c

@@ -166,7 +171,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
 	ctx, cancel := context.WithCancel(ctx)
 	bn := &blobNotifierMap{
 		notifiers: make(map[[32]byte]chan uint64),
-		seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
+		seenIndex: make(map[[32]byte][]bool),
 	}
 	srv := &Service{
 		ctx: ctx,
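The `blobNotifierMap` diff above switches the seen-index tracking from a fixed-size array to a lazily sized slice so the capacity can vary by slot. The underlying pattern, a per-root buffered channel plus a seen set that deduplicates notifications, can be shown in isolation. A simplified sketch without the slot-dependent sizing:

```go
package main

import (
	"fmt"
	"sync"
)

// notifier delivers each (root, index) pair at most once to the
// per-root channel, mirroring the dedupe logic in blobNotifierMap.
type notifier struct {
	mu        sync.Mutex
	notifiers map[[32]byte]chan uint64
	seen      map[[32]byte]map[uint64]bool
}

func newNotifier() *notifier {
	return &notifier{
		notifiers: make(map[[32]byte]chan uint64),
		seen:      make(map[[32]byte]map[uint64]bool),
	}
}

func (n *notifier) notify(root [32]byte, idx uint64, capacity int) {
	n.mu.Lock()
	defer n.mu.Unlock()
	seen := n.seen[root]
	if seen == nil {
		seen = make(map[uint64]bool)
		n.seen[root] = seen
	}
	if seen[idx] {
		return // already delivered for this root
	}
	seen[idx] = true
	c, ok := n.notifiers[root]
	if !ok {
		c = make(chan uint64, capacity) // buffered so senders never block
		n.notifiers[root] = c
	}
	c <- idx
}

// forRoot retrieves or creates the notification channel for a root.
func (n *notifier) forRoot(root [32]byte, capacity int) chan uint64 {
	n.mu.Lock()
	defer n.mu.Unlock()
	c, ok := n.notifiers[root]
	if !ok {
		c = make(chan uint64, capacity)
		n.notifiers[root] = c
	}
	return c
}

func main() {
	var root [32]byte
	n := newNotifier()
	n.notify(root, 1, 6)
	n.notify(root, 1, 6) // deduplicated: channel still holds one message
	fmt.Println(len(n.forRoot(root, 6))) // 1
	fmt.Println(<-n.forRoot(root, 6))    // 1
}
```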
@@ -587,7 +587,7 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
 func TestNotifyIndex(t *testing.T) {
 	// Initialize a blobNotifierMap
 	bn := &blobNotifierMap{
-		seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
+		seenIndex: make(map[[32]byte][]bool),
 		notifiers: make(map[[32]byte]chan uint64),
 	}

@@ -596,7 +596,7 @@ func TestNotifyIndex(t *testing.T) {
 	copy(root[:], "exampleRoot")
 
 	// Test notifying a new index
-	bn.notifyIndex(root, 1)
+	bn.notifyIndex(root, 1, 1)
 	if !bn.seenIndex[root][1] {
 		t.Errorf("Index was not marked as seen")
 	}

@@ -607,13 +607,13 @@ func TestNotifyIndex(t *testing.T) {
 	}
 
 	// Test notifying an already seen index
-	bn.notifyIndex(root, 1)
+	bn.notifyIndex(root, 1, 1)
 	if len(bn.notifiers[root]) > 1 {
 		t.Errorf("Notifier channel should not receive multiple messages for the same index")
 	}
 
 	// Test notifying a new index again
-	bn.notifyIndex(root, 2)
+	bn.notifyIndex(root, 2, 1)
 	if !bn.seenIndex[root][2] {
 		t.Errorf("Index was not marked as seen")
 	}
@@ -1,5 +1,5 @@
 package blocks
 
 var ProcessBLSToExecutionChange = processBLSToExecutionChange
-
+var ErrInvalidBLSPrefix = errInvalidBLSPrefix
 var VerifyBlobCommitmentCount = verifyBlobCommitmentCount
@@ -8,10 +8,11 @@ import (
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
 	field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+	"github.com/prysmaticlabs/prysm/v5/config/params"
 	consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
 	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
 	"github.com/prysmaticlabs/prysm/v5/runtime/version"
 	"github.com/prysmaticlabs/prysm/v5/time/slots"

@@ -210,7 +211,7 @@ func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBod
 	if err != nil {
 		return err
 	}
-	if err := verifyBlobCommitmentCount(body); err != nil {
+	if err := verifyBlobCommitmentCount(st.Slot(), body); err != nil {
 		return err
 	}
 	if err := ValidatePayloadWhenMergeCompletes(st, payload); err != nil {

@@ -225,7 +226,7 @@ func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBod
 	return nil
 }
 
-func verifyBlobCommitmentCount(body interfaces.ReadOnlyBeaconBlockBody) error {
+func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBeaconBlockBody) error {
 	if body.Version() < version.Deneb {
 		return nil
 	}

@@ -233,7 +234,8 @@ func verifyBlobCommitmentCount(body interfaces.ReadOnlyBeaconBlockBody) error {
 	if err != nil {
 		return err
 	}
-	if len(kzgs) > field_params.MaxBlobsPerBlock {
+	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
+	if len(kzgs) > maxBlobsPerBlock {
 		return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
 	}
 	return nil
@@ -9,6 +9,7 @@ import (
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
 	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+	"github.com/prysmaticlabs/prysm/v5/config/params"
 	consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
 	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"

@@ -923,10 +924,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
 	b := &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}
 	rb, err := consensusblocks.NewBeaconBlock(b)
 	require.NoError(t, err)
-	require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Body()))
+	require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
 
-	b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, fieldparams.MaxBlobsPerBlock+1)}}
+	b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
 	rb, err = consensusblocks.NewBeaconBlock(b)
 	require.NoError(t, err)
-	require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", fieldparams.MaxBlobsPerBlock+1), blocks.VerifyBlobCommitmentCount(rb.Body()))
+	require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
 }
@@ -100,8 +100,11 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
 	if err != nil {
 		return nil, err
 	}
+	if val == nil {
+		return nil, errors.Wrap(errInvalidWithdrawalCredentials, "validator is nil") // This should not be possible.
+	}
 	cred := val.WithdrawalCredentials
-	if cred[0] != params.BeaconConfig().BLSWithdrawalPrefixByte {
+	if len(cred) < 2 || cred[0] != params.BeaconConfig().BLSWithdrawalPrefixByte {
 		return nil, errInvalidBLSPrefix
 	}
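The two guards added above (nil validator, short credential slice) follow the usual defensive pattern of checking existence and length before indexing. A standalone sketch of the same shape, with a hypothetical prefix constant and error values rather than Prysm's:

```go
package main

import (
	"errors"
	"fmt"
)

const blsWithdrawalPrefixByte = 0x00 // illustrative prefix byte, as in the BLS credential scheme

var (
	errNilValidator     = errors.New("validator is nil")
	errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
)

type validator struct {
	withdrawalCredentials []byte
}

// checkBLSPrefix indexes the credential bytes only after confirming the
// validator exists and the slice is long enough, avoiding a panic path.
func checkBLSPrefix(val *validator) error {
	if val == nil {
		return errNilValidator
	}
	cred := val.withdrawalCredentials
	if len(cred) < 2 || cred[0] != blsWithdrawalPrefixByte {
		return errInvalidBLSPrefix
	}
	return nil
}

func main() {
	fmt.Println(checkBLSPrefix(nil))                                    // validator is nil
	fmt.Println(checkBLSPrefix(&validator{withdrawalCredentials: nil})) // invalid prefix
}
```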
@@ -113,7 +113,42 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
 		require.NoError(t, err)
 		require.DeepEqual(t, digest[:], val.WithdrawalCredentials)
 	})
+	t.Run("nil validator does not panic", func(t *testing.T) {
+		priv, err := bls.RandKey()
+		require.NoError(t, err)
+		pubkey := priv.PublicKey().Marshal()
+
+		message := &ethpb.BLSToExecutionChange{
+			ToExecutionAddress: []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13},
+			ValidatorIndex:     0,
+			FromBlsPubkey:      pubkey,
+		}
+
+		registry := []*ethpb.Validator{
+			nil,
+		}
+		st, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
+			Validators: registry,
+			Fork: &ethpb.Fork{
+				CurrentVersion:  params.BeaconConfig().GenesisForkVersion,
+				PreviousVersion: params.BeaconConfig().GenesisForkVersion,
+			},
+			Slot: params.BeaconConfig().SlotsPerEpoch * 5,
+		})
+		require.NoError(t, err)
+
+		signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, priv)
+		require.NoError(t, err)
+
+		signed := &ethpb.SignedBLSToExecutionChange{
+			Message:   message,
+			Signature: signature,
+		}
+		_, err = blocks.ValidateBLSToExecutionChange(st, signed)
+		// The state should return an empty validator, even when the validator object in the registry is
+		// nil. This error should return when the withdrawal credentials are invalid or too short.
+		require.ErrorIs(t, err, blocks.ErrInvalidBLSPrefix)
+	})
 	t.Run("non-existent validator", func(t *testing.T) {
 		priv, err := bls.RandKey()
 		require.NoError(t, err)
@@ -282,56 +282,247 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
 	if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
 		m = &pb.LightClientUpdateAltair{
 			AttestedHeader: &pb.LightClientHeaderAltair{
-				Beacon: &pb.BeaconBlockHeader{},
+				Beacon: &pb.BeaconBlockHeader{
+					ParentRoot: make([]byte, 32),
+					StateRoot:  make([]byte, 32),
+					BodyRoot:   make([]byte, 32),
+				},
 			},
 			NextSyncCommittee:       nextSyncCommittee,
 			NextSyncCommitteeBranch: nextSyncCommitteeBranch,
 			FinalityBranch:          finalityBranch,
+			FinalizedHeader: &pb.LightClientHeaderAltair{
+				Beacon: &pb.BeaconBlockHeader{
+					ParentRoot: make([]byte, 32),
+					StateRoot:  make([]byte, 32),
+					BodyRoot:   make([]byte, 32),
+				},
+			},
+			SyncAggregate: &pb.SyncAggregate{
+				SyncCommitteeBits:      make([]byte, 64),
+				SyncCommitteeSignature: make([]byte, 96),
+			},
 		}
 	} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
 		m = &pb.LightClientUpdateCapella{
 			AttestedHeader: &pb.LightClientHeaderCapella{
-				Beacon: &pb.BeaconBlockHeader{},
-				Execution: &enginev1.ExecutionPayloadHeaderCapella{},
+				Beacon: &pb.BeaconBlockHeader{
+					ParentRoot: make([]byte, 32),
+					StateRoot:  make([]byte, 32),
+					BodyRoot:   make([]byte, 32),
+				},
+				Execution: &enginev1.ExecutionPayloadHeaderCapella{
+					ParentHash:       make([]byte, fieldparams.RootLength),
+					FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
+					StateRoot:        make([]byte, fieldparams.RootLength),
+					ReceiptsRoot:     make([]byte, fieldparams.RootLength),
+					LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
+					PrevRandao:       make([]byte, fieldparams.RootLength),
+					ExtraData:        make([]byte, 0),
+					BaseFeePerGas:    make([]byte, fieldparams.RootLength),
+					BlockHash:        make([]byte, fieldparams.RootLength),
+					TransactionsRoot: make([]byte, fieldparams.RootLength),
+					WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
+				},
 				ExecutionBranch: executionBranch,
 			},
 			NextSyncCommittee:       nextSyncCommittee,
 			NextSyncCommitteeBranch: nextSyncCommitteeBranch,
 			FinalityBranch:          finalityBranch,
+			FinalizedHeader: &pb.LightClientHeaderCapella{
+				Beacon: &pb.BeaconBlockHeader{
+					ParentRoot: make([]byte, 32),
+					StateRoot:  make([]byte, 32),
+					BodyRoot:   make([]byte, 32),
+				},
+				Execution: &enginev1.ExecutionPayloadHeaderCapella{
+					ParentHash:       make([]byte, fieldparams.RootLength),
+					FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
+					StateRoot:        make([]byte, fieldparams.RootLength),
+					ReceiptsRoot:     make([]byte, fieldparams.RootLength),
+					LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
+					PrevRandao:       make([]byte, fieldparams.RootLength),
+					ExtraData:        make([]byte, 0),
+					BaseFeePerGas:    make([]byte, fieldparams.RootLength),
+					BlockHash:        make([]byte, fieldparams.RootLength),
+					TransactionsRoot: make([]byte, fieldparams.RootLength),
+					WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
+				},
+				ExecutionBranch: executionBranch,
+			},
+			SyncAggregate: &pb.SyncAggregate{
+				SyncCommitteeBits:      make([]byte, 64),
+				SyncCommitteeSignature: make([]byte, 96),
+			},
 		}
 	} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
 		m = &pb.LightClientUpdateDeneb{
 			AttestedHeader: &pb.LightClientHeaderDeneb{
-				Beacon: &pb.BeaconBlockHeader{},
-				Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
+				Beacon: &pb.BeaconBlockHeader{
+					ParentRoot: make([]byte, 32),
+					StateRoot:  make([]byte, 32),
+					BodyRoot:   make([]byte, 32),
+				},
+				Execution: &enginev1.ExecutionPayloadHeaderDeneb{
+					ParentHash:       make([]byte, fieldparams.RootLength),
+					FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
+					StateRoot:        make([]byte, fieldparams.RootLength),
+					ReceiptsRoot:     make([]byte, fieldparams.RootLength),
+					LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
+					PrevRandao:       make([]byte, fieldparams.RootLength),
+					ExtraData:        make([]byte, 0),
+					BaseFeePerGas:    make([]byte, fieldparams.RootLength),
+					BlockHash:        make([]byte, fieldparams.RootLength),
+					TransactionsRoot: make([]byte, fieldparams.RootLength),
+					WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
+					GasLimit:         0,
+					GasUsed:          0,
+				},
 				ExecutionBranch: executionBranch,
 			},
 			NextSyncCommittee:       nextSyncCommittee,
 			NextSyncCommitteeBranch: nextSyncCommitteeBranch,
 			FinalityBranch:          finalityBranch,
+			FinalizedHeader: &pb.LightClientHeaderDeneb{
+				Beacon: &pb.BeaconBlockHeader{
+					ParentRoot: make([]byte, 32),
+					StateRoot:  make([]byte, 32),
+					BodyRoot:   make([]byte, 32),
+				},
+				Execution: &enginev1.ExecutionPayloadHeaderDeneb{
+					ParentHash:       make([]byte, fieldparams.RootLength),
+					FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
+					StateRoot:        make([]byte, fieldparams.RootLength),
+					ReceiptsRoot:     make([]byte, fieldparams.RootLength),
+					LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
+					PrevRandao:       make([]byte, fieldparams.RootLength),
+					ExtraData:        make([]byte, 0),
+					BaseFeePerGas:    make([]byte, fieldparams.RootLength),
+					BlockHash:        make([]byte, fieldparams.RootLength),
+					TransactionsRoot: make([]byte, fieldparams.RootLength),
+					WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
+					GasLimit:         0,
+					GasUsed:          0,
+				},
+				ExecutionBranch: executionBranch,
+			},
+			SyncAggregate: &pb.SyncAggregate{
+				SyncCommitteeBits:      make([]byte, 64),
+				SyncCommitteeSignature: make([]byte, 96),
+			},
 		}
 	} else {
 		if attestedState.Version() >= version.Electra {
 			m = &pb.LightClientUpdateElectra{
 				AttestedHeader: &pb.LightClientHeaderDeneb{
-					Beacon: &pb.BeaconBlockHeader{},
-					Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
+					Beacon: &pb.BeaconBlockHeader{
+						ParentRoot: make([]byte, 32),
+						StateRoot:  make([]byte, 32),
+						BodyRoot:   make([]byte, 32),
+					},
+					Execution: &enginev1.ExecutionPayloadHeaderDeneb{
+						ParentHash:       make([]byte, fieldparams.RootLength),
+						FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
+						StateRoot:        make([]byte, fieldparams.RootLength),
+						ReceiptsRoot:     make([]byte, fieldparams.RootLength),
+						LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
+						PrevRandao:       make([]byte, fieldparams.RootLength),
+						ExtraData:        make([]byte, 0),
+						BaseFeePerGas:    make([]byte, fieldparams.RootLength),
+						BlockHash:        make([]byte, fieldparams.RootLength),
+						TransactionsRoot: make([]byte, fieldparams.RootLength),
+						WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
+						GasLimit:         0,
+						GasUsed:          0,
+					},
 					ExecutionBranch: executionBranch,
 				},
 				NextSyncCommittee:       nextSyncCommittee,
 				NextSyncCommitteeBranch: nextSyncCommitteeBranch,
 				FinalityBranch:          finalityBranch,
+				FinalizedHeader: &pb.LightClientHeaderDeneb{
+					Beacon: &pb.BeaconBlockHeader{
+						ParentRoot: make([]byte, 32),
+						StateRoot:  make([]byte, 32),
+						BodyRoot:   make([]byte, 32),
+					},
+					Execution: &enginev1.ExecutionPayloadHeaderDeneb{
+						ParentHash:       make([]byte, fieldparams.RootLength),
+						FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
+						StateRoot:        make([]byte, fieldparams.RootLength),
+						ReceiptsRoot:     make([]byte, fieldparams.RootLength),
+						LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
+						PrevRandao:       make([]byte, fieldparams.RootLength),
+						ExtraData:        make([]byte, 0),
+						BaseFeePerGas:    make([]byte, fieldparams.RootLength),
+						BlockHash:        make([]byte, fieldparams.RootLength),
+						TransactionsRoot: make([]byte, fieldparams.RootLength),
+						WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
+						GasLimit:         0,
+						GasUsed:          0,
+					},
+					ExecutionBranch: executionBranch,
+				},
+				SyncAggregate: &pb.SyncAggregate{
+					SyncCommitteeBits:      make([]byte, 64),
+					SyncCommitteeSignature: make([]byte, 96),
+				},
 			}
 		} else {
 			m = &pb.LightClientUpdateDeneb{
 				AttestedHeader: &pb.LightClientHeaderDeneb{
|
||||
Beacon: &pb.BeaconBlockHeader{},
|
||||
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
|
||||
Beacon: &pb.BeaconBlockHeader{
|
||||
ParentRoot: make([]byte, 32),
|
||||
StateRoot: make([]byte, 32),
|
||||
BodyRoot: make([]byte, 32),
|
||||
},
|
||||
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
|
||||
ParentHash: make([]byte, fieldparams.RootLength),
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
StateRoot: make([]byte, fieldparams.RootLength),
|
||||
ReceiptsRoot: make([]byte, fieldparams.RootLength),
|
||||
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
|
||||
PrevRandao: make([]byte, fieldparams.RootLength),
|
||||
ExtraData: make([]byte, 0),
|
||||
BaseFeePerGas: make([]byte, fieldparams.RootLength),
|
||||
BlockHash: make([]byte, fieldparams.RootLength),
|
||||
TransactionsRoot: make([]byte, fieldparams.RootLength),
|
||||
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
|
||||
GasLimit: 0,
|
||||
GasUsed: 0,
|
||||
},
|
||||
ExecutionBranch: executionBranch,
|
||||
},
|
||||
NextSyncCommittee: nextSyncCommittee,
|
||||
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
|
||||
FinalityBranch: finalityBranch,
|
||||
FinalizedHeader: &pb.LightClientHeaderDeneb{
|
||||
Beacon: &pb.BeaconBlockHeader{
|
||||
ParentRoot: make([]byte, 32),
|
||||
StateRoot: make([]byte, 32),
|
||||
BodyRoot: make([]byte, 32),
|
||||
},
|
||||
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
|
||||
ParentHash: make([]byte, fieldparams.RootLength),
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
StateRoot: make([]byte, fieldparams.RootLength),
|
||||
ReceiptsRoot: make([]byte, fieldparams.RootLength),
|
||||
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
|
||||
PrevRandao: make([]byte, fieldparams.RootLength),
|
||||
ExtraData: make([]byte, 0),
|
||||
BaseFeePerGas: make([]byte, fieldparams.RootLength),
|
||||
BlockHash: make([]byte, fieldparams.RootLength),
|
||||
TransactionsRoot: make([]byte, fieldparams.RootLength),
|
||||
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
|
||||
GasLimit: 0,
|
||||
GasUsed: 0,
|
||||
},
|
||||
ExecutionBranch: executionBranch,
|
||||
},
|
||||
SyncAggregate: &pb.SyncAggregate{
|
||||
SyncCommitteeBits: make([]byte, 64),
|
||||
SyncCommitteeSignature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
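The fork-specific literals above rebuild the same zeroed execution payload header once per branch. A minimal sketch of how that repetition could be factored into a helper; the type and constructor here are illustrative stand-ins, not the protobuf types used above:

// Sketch only: stand-in type, not enginev1.ExecutionPayloadHeaderDeneb.
type executionHeader struct {
    ParentHash, StateRoot, ReceiptsRoot, PrevRandao     []byte
    BlockHash, TransactionsRoot, WithdrawalsRoot        []byte
    BaseFeePerGas, FeeRecipient, LogsBloom, ExtraData   []byte
    GasLimit, GasUsed                                   uint64
}

// emptyExecutionHeader is hypothetical; it mirrors the zeroed fields built
// inline in each branch above (32-byte roots, 20-byte address, 256-byte bloom).
func emptyExecutionHeader() *executionHeader {
    root := func() []byte { return make([]byte, 32) }
    return &executionHeader{
        ParentHash: root(), StateRoot: root(), ReceiptsRoot: root(),
        PrevRandao: root(), BlockHash: root(), TransactionsRoot: root(),
        WithdrawalsRoot: root(), BaseFeePerGas: root(),
        FeeRecipient: make([]byte, 20),
        LogsBloom:    make([]byte, 256),
        ExtraData:    make([]byte, 0),
    }
}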
@@ -13,7 +13,6 @@ go_library(
    deps = [
        "//beacon-chain/db/filesystem:go_default_library",
        "//beacon-chain/verification:go_default_library",
-       "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",
@@ -35,7 +34,6 @@ go_test(
    deps = [
        "//beacon-chain/db/filesystem:go_default_library",
        "//beacon-chain/verification:go_default_library",
-       "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",
@@ -83,10 +83,10 @@ func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.RO
func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
    blockCommitments, err := commitmentsToCheck(b, current)
    if err != nil {
-       return errors.Wrapf(err, "could check data availability for block %#x", b.Root())
+       return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
    }
    // Return early for blocks that are pre-deneb or which do not have any commitments.
-   if blockCommitments.count() == 0 {
+   if len(blockCommitments) == 0 {
        return nil
    }
@@ -106,7 +106,7 @@ func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current pri
    // Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
    // We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
    // ignore their response and decrease their peer score.
-   sidecars, err := entry.filter(root, blockCommitments)
+   sidecars, err := entry.filter(root, blockCommitments, b.Block().Slot())
    if err != nil {
        return errors.Wrap(err, "incomplete BlobSidecar batch")
    }
@@ -137,22 +137,28 @@ func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current pri
    return nil
}

-func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) (safeCommitmentArray, error) {
-   var ar safeCommitmentArray
+func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) ([][]byte, error) {
    if b.Version() < version.Deneb {
-       return ar, nil
+       return nil, nil
    }
-   // We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
+
+   // We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST
    if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
-       return ar, nil
+       return nil, nil
    }
-   kc, err := b.Block().Body().BlobKzgCommitments()
+
+   kzgCommitments, err := b.Block().Body().BlobKzgCommitments()
    if err != nil {
-       return ar, err
+       return nil, err
    }
-   if len(kc) > len(ar) {
-       return ar, errIndexOutOfBounds
+
+   maxBlobCount := params.BeaconConfig().MaxBlobsPerBlock(b.Block().Slot())
+   if len(kzgCommitments) > maxBlobCount {
+       return nil, errIndexOutOfBounds
    }
-   copy(ar[:], kc)
-   return ar, nil
+
+   result := make([][]byte, len(kzgCommitments))
+   copy(result, kzgCommitments)
+
+   return result, nil
}
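The recurring pattern in this diff is replacing the compile-time fieldparams.MaxBlobsPerBlock constant with the slot-aware params.BeaconConfig().MaxBlobsPerBlock(slot) lookup. A self-contained sketch of the idea, assuming a pre-Electra limit and a higher Electra limit; the field names are illustrative, not Prysm's actual config fields, and the values 6 and 9 come from the spec assertions later in this diff:

package blobconfig

type Slot uint64
type Epoch uint64

const slotsPerEpoch = 32

type config struct {
    ElectraForkEpoch           Epoch
    DeprecatedMaxBlobsPerBlock int // pre-Electra limit, e.g. 6 on mainnet
    MaxBlobsPerBlockElectra    int // raised limit, e.g. 9
}

// MaxBlobsPerBlock picks the limit from the fork the slot falls in.
func (c config) MaxBlobsPerBlock(s Slot) int {
    if Epoch(uint64(s)/slotsPerEpoch) >= c.ElectraForkEpoch {
        return c.MaxBlobsPerBlockElectra
    }
    return c.DeprecatedMaxBlobsPerBlock
}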
@@ -8,7 +8,6 @@ import (
    errors "github.com/pkg/errors"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
-   fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -89,7 +88,7 @@ func Test_commitmentsToCheck(t *testing.T) {
    require.NoError(t, err)
    c, err := rb.Block().Body().BlobKzgCommitments()
    require.NoError(t, err)
-   require.Equal(t, true, len(c) > fieldparams.MaxBlobsPerBlock)
+   require.Equal(t, true, len(c) > params.BeaconConfig().MaxBlobsPerBlock(sb.Block().Slot()))
    return rb
},
slot: windowSlots + 1,
@@ -105,7 +104,7 @@
    } else {
        require.NoError(t, err)
    }
-   require.Equal(t, len(c.commits), co.count())
+   require.Equal(t, len(c.commits), len(co))
    for i := 0; i < len(c.commits); i++ {
        require.Equal(t, true, bytes.Equal(c.commits[i], co[i]))
    }
@@ -5,7 +5,7 @@ import (

    "github.com/pkg/errors"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
-   fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+   "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
@@ -60,7 +60,7 @@ func (c *cache) delete(key cacheKey) {

// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
-   scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
+   scs []*blocks.ROBlob
    diskSummary filesystem.BlobStorageSummary
}
@@ -72,9 +72,13 @@ func (e *cacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
// Only the first BlobSidecar of a given Index will be kept in the cache.
// stash will return an error if the given blob is already in the cache, or if the Index is out of bounds.
func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
-   if sc.Index >= fieldparams.MaxBlobsPerBlock {
+   maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(sc.Slot())
+   if sc.Index >= uint64(maxBlobsPerBlock) {
        return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.Index)
    }
+   if e.scs == nil {
+       e.scs = make([]*blocks.ROBlob, maxBlobsPerBlock)
+   }
    if e.scs[sc.Index] != nil {
        return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
    }
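The lazy make inside stash means the per-slot maximum never has to be threaded through cacheEntry construction; the slice is sized on first use. A toy model of the same semantics, with simplified stand-in types:

import "errors"

var (
    errOutOfBounds = errors.New("blob index out of bounds") // illustrative errors
    errDuplicate   = errors.New("duplicate blob sidecar")
)

type blob struct{}                // stand-in for blocks.ROBlob
type entry struct{ scs []*blob }

func (e *entry) stash(idx, maxForSlot int, sc *blob) error {
    if idx >= maxForSlot {
        return errOutOfBounds
    }
    if e.scs == nil {
        // sized on first use from the slot's limit, not at construction time
        e.scs = make([]*blob, maxForSlot)
    }
    if e.scs[idx] != nil {
        return errDuplicate // only the first sidecar per index is kept
    }
    e.scs[idx] = sc
    return nil
}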
@@ -88,12 +92,13 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
-func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROBlob, error) {
-   if e.diskSummary.AllAvailable(kc.count()) {
+func (e *cacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot) ([]blocks.ROBlob, error) {
+   count := len(kc)
+   if e.diskSummary.AllAvailable(count) {
        return nil, nil
    }
-   scs := make([]blocks.ROBlob, 0, kc.count())
-   for i := uint64(0); i < fieldparams.MaxBlobsPerBlock; i++ {
+   scs := make([]blocks.ROBlob, 0, count)
+   for i := uint64(0); i < uint64(count); i++ {
        // We already have this blob, we don't need to write it or validate it.
        if e.diskSummary.HasIndex(i) {
            continue
@@ -116,16 +121,3 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB

    return scs, nil
}

-// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
-// gratuitous bounds checks.
-type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte
-
-func (s safeCommitmentArray) count() int {
-   for i := range s {
-       if s[i] == nil {
-           return i
-       }
-   }
-   return fieldparams.MaxBlobsPerBlock
-}
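Note the small semantic shift retired with this type: safeCommitmentArray.count() scanned a fixed array for the first nil entry, i.e. the number of leading commitments, while the slice form simply uses len. The two agree because commitmentsToCheck now allocates exactly one entry per commitment with no nil padding; a two-line illustration:

// old: fixed array with nil padding, count() finds the first nil
// new: kc [][]byte holds exactly the commitments, so the count is just:
count := len(kc)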
@@ -29,10 +29,10 @@ func TestCacheEnsureDelete(t *testing.T) {
    require.Equal(t, nilEntry, c.entries[k])
}

-type filterTestCaseSetupFunc func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob)
+type filterTestCaseSetupFunc func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob)

func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
-   return func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
+   return func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
        blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
        commits, err := commitmentsToCheck(blk, blk.Block().Slot())
        require.NoError(t, err)
@@ -44,7 +44,7 @@ func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpe
        entry.setDiskSummary(sum)
    }
    expected := make([]blocks.ROBlob, 0, nBlobs)
-   for i := 0; i < commits.count(); i++ {
+   for i := 0; i < len(commits); i++ {
        if entry.diskSummary.HasIndex(uint64(i)) {
            continue
        }
@@ -113,7 +113,7 @@ func TestFilterDiskSummary(t *testing.T) {
    t.Run(c.name, func(t *testing.T) {
        entry, commits, expected := c.setup(t)
        // first (root) argument doesn't matter, it is just for logs
-       got, err := entry.filter([32]byte{}, commits)
+       got, err := entry.filter([32]byte{}, commits, 100)
        require.NoError(t, err)
        require.Equal(t, len(expected), len(got))
    })
@@ -125,12 +125,12 @@ func TestFilter(t *testing.T) {
    require.NoError(t, err)
    cases := []struct {
        name string
-       setup func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob)
+       setup func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob)
        err error
    }{
        {
            name: "commitments mismatch - extra sidecar",
-           setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
+           setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
                entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
                commits[5] = nil
                return entry, commits, expected
@@ -139,7 +139,7 @@ func TestFilter(t *testing.T) {
        },
        {
            name: "sidecar missing",
-           setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
+           setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
                entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
                entry.scs[5] = nil
                return entry, commits, expected
@@ -148,7 +148,7 @@ func TestFilter(t *testing.T) {
        },
        {
            name: "commitments mismatch - different bytes",
-           setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
+           setup: func(t *testing.T) (*cacheEntry, [][]byte, []blocks.ROBlob) {
                entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
                entry.scs[5].KzgCommitment = []byte("nope")
                return entry, commits, expected
@@ -160,7 +160,7 @@ func TestFilter(t *testing.T) {
    t.Run(c.name, func(t *testing.T) {
        entry, commits, expected := c.setup(t)
        // first (root) argument doesn't matter, it is just for logs
-       got, err := entry.filter([32]byte{}, commits)
+       got, err := entry.filter([32]byte{}, commits, 100)
        if c.err != nil {
            require.ErrorIs(t, err, c.err)
            return
@@ -42,7 +42,7 @@ go_test(
    embed = [":go_default_library"],
    deps = [
        "//beacon-chain/verification:go_default_library",
-       "//config/fieldparams:go_default_library",
+       "//config/params:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//encoding/bytesutil:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
@@ -13,7 +13,7 @@ import (
    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/pkg/errors"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
-   fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+   "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -25,7 +25,7 @@ import (
)

var (
-   errIndexOutOfBounds = errors.New("blob index in file name >= MaxBlobsPerBlock")
+   errIndexOutOfBounds = errors.New("blob index in file name >= DeprecatedMaxBlobsPerBlock")
    errEmptyBlobWritten = errors.New("zero bytes written to disk when saving blob sidecar")
    errSidecarEmptySSZData = errors.New("sidecar marshalled to an empty ssz byte slice")
    errNoBasePath = errors.New("BlobStorage base path not specified in init")
@@ -109,10 +109,11 @@ func (bs *BlobStorage) WarmCache() {
    }
    go func() {
        start := time.Now()
+       log.Info("Blob filesystem cache warm-up started. This may take a few minutes.")
        if err := bs.pruner.warmCache(); err != nil {
            log.WithError(err).Error("Error encountered while warming up blob pruner cache")
        }
-       log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
+       log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete")
    }()
}
@@ -218,6 +219,7 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
    partialMoved = true
    blobsWrittenCounter.Inc()
    blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))

    return nil
}
@@ -255,8 +257,10 @@ func (bs *BlobStorage) Remove(root [32]byte) error {
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.
-func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]bool, error) {
-   var mask [fieldparams.MaxBlobsPerBlock]bool
+func (bs *BlobStorage) Indices(root [32]byte, s primitives.Slot) ([]bool, error) {
+   maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s)
+   mask := make([]bool, maxBlobsPerBlock)

    rootDir := blobNamer{root: root}.dir()
    entries, err := afero.ReadDir(bs.fs, rootDir)
    if err != nil {
@@ -265,6 +269,7 @@ func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]boo
    }
    return mask, err
}

    for i := range entries {
        if entries[i].IsDir() {
            continue
@@ -281,7 +286,7 @@ func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]boo
    if err != nil {
        return mask, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
    }
-   if u >= fieldparams.MaxBlobsPerBlock {
+   if u >= uint64(maxBlobsPerBlock) {
        return mask, errIndexOutOfBounds
    }
    mask[u] = true
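Callers of the new Indices signature receive a slice sized to the slot's limit instead of a fixed-size array. A hypothetical consumer, not a Prysm function, that derives the missing indices from the returned mask:

// missingIndices is illustrative only: it lists the indices a node would
// still need to fetch from the network before declaring data available.
func missingIndices(mask []bool) []uint64 {
    var missing []uint64
    for i, present := range mask {
        if !present {
            missing = append(missing, uint64(i))
        }
    }
    return missing
}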
@@ -10,7 +10,7 @@ import (

    ssz "github.com/prysmaticlabs/fastssz"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
-   fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+   "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
    ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -20,7 +20,7 @@ import (
)

func TestBlobStorage_SaveBlobData(t *testing.T) {
-   _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, fieldparams.MaxBlobsPerBlock)
+   _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, params.BeaconConfig().MaxBlobsPerBlock(1))
    testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
    require.NoError(t, err)

@@ -56,10 +56,10 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
        require.NoError(t, bs.Save(sc))
        actualSc, err := bs.Get(sc.BlockRoot(), sc.Index)
        require.NoError(t, err)
-       expectedIdx := [fieldparams.MaxBlobsPerBlock]bool{false, false, true}
-       actualIdx, err := bs.Indices(actualSc.BlockRoot())
+       expectedIdx := []bool{false, false, true, false, false, false}
+       actualIdx, err := bs.Indices(actualSc.BlockRoot(), 100)
        require.NoError(t, err)
-       require.Equal(t, expectedIdx, actualIdx)
+       require.DeepEqual(t, expectedIdx, actualIdx)
    })

    t.Run("round trip write then read", func(t *testing.T) {
@@ -132,19 +132,19 @@ func TestBlobIndicesBounds(t *testing.T) {
    fs, bs := NewEphemeralBlobStorageWithFs(t)
    root := [32]byte{}

-   okIdx := uint64(fieldparams.MaxBlobsPerBlock - 1)
+   okIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0)) - 1
    writeFakeSSZ(t, fs, root, okIdx)
-   indices, err := bs.Indices(root)
+   indices, err := bs.Indices(root, 100)
    require.NoError(t, err)
-   var expected [fieldparams.MaxBlobsPerBlock]bool
+   expected := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
    expected[okIdx] = true
    for i := range expected {
        require.Equal(t, expected[i], indices[i])
    }

-   oobIdx := uint64(fieldparams.MaxBlobsPerBlock)
+   oobIdx := uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
    writeFakeSSZ(t, fs, root, oobIdx)
-   _, err = bs.Indices(root)
+   _, err = bs.Indices(root, 100)
    require.ErrorIs(t, err, errIndexOutOfBounds)
}
@@ -163,7 +163,7 @@ func TestBlobStoragePrune(t *testing.T) {
    fs, bs := NewEphemeralBlobStorageWithFs(t)

    t.Run("PruneOne", func(t *testing.T) {
-       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 300, fieldparams.MaxBlobsPerBlock)
+       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 300, params.BeaconConfig().MaxBlobsPerBlock(0))
        testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
        require.NoError(t, err)

@@ -178,7 +178,7 @@ func TestBlobStoragePrune(t *testing.T) {
        require.Equal(t, 0, len(remainingFolders))
    })
    t.Run("Prune dangling blob", func(t *testing.T) {
-       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 299, fieldparams.MaxBlobsPerBlock)
+       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 299, params.BeaconConfig().MaxBlobsPerBlock(0))
        testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
        require.NoError(t, err)

@@ -198,7 +198,7 @@ func TestBlobStoragePrune(t *testing.T) {

    for j := 0; j <= blockQty; j++ {
        root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
-       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, fieldparams.MaxBlobsPerBlock)
+       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, params.BeaconConfig().MaxBlobsPerBlock(0))
        testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
        require.NoError(t, err)
        require.NoError(t, bs.Save(testSidecars[0]))
@@ -224,7 +224,7 @@ func BenchmarkPruning(b *testing.B) {

    for j := 0; j <= blockQty; j++ {
        root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
-       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, fieldparams.MaxBlobsPerBlock)
+       _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, params.BeaconConfig().MaxBlobsPerBlock(0))
        testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
        require.NoError(t, err)
        require.NoError(t, bs.Save(testSidecars[0]))
@@ -9,7 +9,7 @@ import (
)

// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
-type blobIndexMask [fieldparams.MaxBlobsPerBlock]bool
+type blobIndexMask []bool

// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
@@ -20,7 +20,11 @@ type BlobStorageSummary struct {
// HasIndex returns true if the BlobSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasIndex(idx uint64) bool {
    // Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
-   if idx >= fieldparams.MaxBlobsPerBlock {
+   maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s.slot)
+   if idx >= uint64(maxBlobsPerBlock) {
        return false
    }
+   if idx >= uint64(len(s.mask)) {
+       return false
+   }
    return s.mask[idx]
@@ -28,7 +32,11 @@ func (s BlobStorageSummary) HasIndex(idx uint64) bool {

// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
-   if count > fieldparams.MaxBlobsPerBlock {
+   maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(s.slot)
+   if count > maxBlobsPerBlock {
        return false
    }
+   if count > len(s.mask) {
+       return false
+   }
    for i := 0; i < count; i++ {
@@ -68,13 +76,17 @@ func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
}

func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64) error {
-   if idx >= fieldparams.MaxBlobsPerBlock {
+   maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
+   if idx >= uint64(maxBlobsPerBlock) {
        return errIndexOutOfBounds
    }
    s.mu.Lock()
    defer s.mu.Unlock()
    v := s.cache[key]
    v.slot = slot
+   if v.mask == nil {
+       v.mask = make(blobIndexMask, maxBlobsPerBlock)
+   }
    if !v.mask[idx] {
        s.updateMetrics(1)
    }
@@ -3,13 +3,17 @@ package filesystem
import (
    "testing"

-   fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+   "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
    "github.com/prysmaticlabs/prysm/v5/testing/require"
)

func TestSlotByRoot_Summary(t *testing.T) {
-   var noneSet, allSet, firstSet, lastSet, oneSet blobIndexMask
+   noneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
+   allSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
+   firstSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
+   lastSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
+   oneSet := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
    firstSet[0] = true
    lastSet[len(lastSet)-1] = true
    oneSet[1] = true
@@ -19,49 +23,49 @@ func TestSlotByRoot_Summary(t *testing.T) {
    cases := []struct {
        name string
        root [32]byte
-       expected *blobIndexMask
+       expected blobIndexMask
    }{
        {
            name: "not found",
        },
        {
            name: "none set",
-           expected: &noneSet,
+           expected: noneSet,
        },
        {
            name: "index 1 set",
-           expected: &oneSet,
+           expected: oneSet,
        },
        {
            name: "all set",
-           expected: &allSet,
+           expected: allSet,
        },
        {
            name: "first set",
-           expected: &firstSet,
+           expected: firstSet,
        },
        {
            name: "last set",
-           expected: &lastSet,
+           expected: lastSet,
        },
    }
    sc := newBlobStorageCache()
    for _, c := range cases {
        if c.expected != nil {
            key := bytesutil.ToBytes32([]byte(c.name))
-           sc.cache[key] = BlobStorageSummary{slot: 0, mask: *c.expected}
+           sc.cache[key] = BlobStorageSummary{slot: 0, mask: c.expected}
        }
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            key := bytesutil.ToBytes32([]byte(c.name))
            sum := sc.Summary(key)
-           for i := range c.expected {
+           for i, has := range c.expected {
                ui := uint64(i)
                if c.expected == nil {
                    require.Equal(t, false, sum.HasIndex(ui))
                } else {
-                   require.Equal(t, c.expected[i], sum.HasIndex(ui))
+                   require.Equal(t, has, sum.HasIndex(ui))
                }
            }
        })
@@ -121,13 +125,13 @@ func TestAllAvailable(t *testing.T) {
        },
        {
            name: "out of bound is safe",
-           count: fieldparams.MaxBlobsPerBlock + 1,
+           count: params.BeaconConfig().MaxBlobsPerBlock(0) + 1,
            aa: false,
        },
        {
            name: "max present",
-           count: fieldparams.MaxBlobsPerBlock,
-           idxSet: idxUpTo(fieldparams.MaxBlobsPerBlock),
+           count: params.BeaconConfig().MaxBlobsPerBlock(0),
+           idxSet: idxUpTo(params.BeaconConfig().MaxBlobsPerBlock(0)),
            aa: true,
        },
        {
@@ -139,7 +143,7 @@ func TestAllAvailable(t *testing.T) {
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
-           var mask blobIndexMask
+           mask := make([]bool, params.BeaconConfig().MaxBlobsPerBlock(0))
            for _, idx := range c.idxSet {
                mask[idx] = true
            }
@@ -12,7 +12,7 @@ import (
    "time"

    "github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
-   fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+   "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
    "github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -25,7 +25,7 @@ func TestTryPruneDir_CachedNotExpired(t *testing.T) {
    pr, err := newBlobPruner(fs, 0)
    require.NoError(t, err)
    slot := pr.windowSize
-   _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, fieldparams.MaxBlobsPerBlock)
+   _, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, params.BeaconConfig().MaxBlobsPerBlock(slot))
    sc, err := verification.BlobSidecarNoop(sidecars[0])
    require.NoError(t, err)
    rootStr := rootString(sc.BlockRoot())
@@ -539,3 +539,231 @@ func createDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s

    return light_client.NewWrappedUpdate(m)
}

func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
    params.SetupTestConfigCleanup(t)
    cfg := params.BeaconConfig()
    cfg.AltairForkEpoch = 0
    cfg.CapellaForkEpoch = 1
    cfg.DenebForkEpoch = 2
    cfg.ElectraForkEpoch = 3
    cfg.EpochsPerSyncCommitteePeriod = 1
    params.OverrideBeaconConfig(cfg)

    db := setupDB(t)
    ctx := context.Background()

    t.Run("Nil", func(t *testing.T) {
        retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("NilBlockRoot"))
        require.NoError(t, err)
        require.IsNil(t, retrievedBootstrap)
    })
t.Run("Altair", func(t *testing.T) {
|
||||
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
|
||||
require.NoError(t, err)
|
||||
|
||||
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
|
||||
require.NoError(t, err)
|
||||
|
||||
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair"), bootstrap)
|
||||
require.NoError(t, err)
|
||||
|
||||
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair"))
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
|
||||
})
|
||||
|
||||
t.Run("Capella", func(t *testing.T) {
|
||||
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().CapellaForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
|
||||
require.NoError(t, err)
|
||||
|
||||
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
|
||||
require.NoError(t, err)
|
||||
|
||||
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootCapella"), bootstrap)
|
||||
require.NoError(t, err)
|
||||
|
||||
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootCapella"))
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
|
||||
})
|
||||
|
||||
t.Run("Deneb", func(t *testing.T) {
|
||||
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().DenebForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
|
||||
require.NoError(t, err)
|
||||
|
||||
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
|
||||
require.NoError(t, err)
|
||||
|
||||
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootDeneb"), bootstrap)
|
||||
require.NoError(t, err)
|
||||
|
||||
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootDeneb"))
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
|
||||
})
|
||||
|
||||
t.Run("Electra", func(t *testing.T) {
|
||||
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().ElectraForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
|
||||
require.NoError(t, err)
|
||||
|
||||
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
|
||||
require.NoError(t, err)
|
||||
|
||||
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootElectra"), bootstrap)
|
||||
require.NoError(t, err)
|
||||
|
||||
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootElectra"))
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
|
||||
})
|
||||
}
|
||||
|
||||
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
    currentEpoch := slots.ToEpoch(currentSlot)
    syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
    pubKeys := make([][]byte, syncCommitteeSize)
    for i := uint64(0); i < syncCommitteeSize; i++ {
        pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
    }
    currentSyncCommittee := &pb.SyncCommittee{
        Pubkeys: pubKeys,
        AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
    }

    var currentSyncCommitteeBranch [][]byte
    if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
        currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
    } else {
        currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
    }
    for i := 0; i < len(currentSyncCommitteeBranch); i++ {
        currentSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
    }

    executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
    for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
        executionBranch[i] = make([]byte, 32)
    }
    // TODO: can this be based on the current epoch?
    var m proto.Message
    if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
        m = &pb.LightClientBootstrapAltair{
            Header: &pb.LightClientHeaderAltair{
                Beacon: &pb.BeaconBlockHeader{
                    ParentRoot: make([]byte, 32),
                    StateRoot: make([]byte, 32),
                    BodyRoot: make([]byte, 32),
                },
            },
            CurrentSyncCommittee: currentSyncCommittee,
            CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
        }
    } else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
        m = &pb.LightClientBootstrapCapella{
            Header: &pb.LightClientHeaderCapella{
                Beacon: &pb.BeaconBlockHeader{
                    ParentRoot: make([]byte, 32),
                    StateRoot: make([]byte, 32),
                    BodyRoot: make([]byte, 32),
                },
                Execution: &enginev1.ExecutionPayloadHeaderCapella{
                    ParentHash: make([]byte, fieldparams.RootLength),
                    FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
                    StateRoot: make([]byte, fieldparams.RootLength),
                    ReceiptsRoot: make([]byte, fieldparams.RootLength),
                    LogsBloom: make([]byte, fieldparams.LogsBloomLength),
                    PrevRandao: make([]byte, fieldparams.RootLength),
                    ExtraData: make([]byte, 0),
                    BaseFeePerGas: make([]byte, fieldparams.RootLength),
                    BlockHash: make([]byte, fieldparams.RootLength),
                    TransactionsRoot: make([]byte, fieldparams.RootLength),
                    WithdrawalsRoot: make([]byte, fieldparams.RootLength),
                },
                ExecutionBranch: executionBranch,
            },
            CurrentSyncCommittee: currentSyncCommittee,
            CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
        }
    } else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
        m = &pb.LightClientBootstrapDeneb{
            Header: &pb.LightClientHeaderDeneb{
                Beacon: &pb.BeaconBlockHeader{
                    ParentRoot: make([]byte, 32),
                    StateRoot: make([]byte, 32),
                    BodyRoot: make([]byte, 32),
                },
                Execution: &enginev1.ExecutionPayloadHeaderDeneb{
                    ParentHash: make([]byte, fieldparams.RootLength),
                    FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
                    StateRoot: make([]byte, fieldparams.RootLength),
                    ReceiptsRoot: make([]byte, fieldparams.RootLength),
                    LogsBloom: make([]byte, fieldparams.LogsBloomLength),
                    PrevRandao: make([]byte, fieldparams.RootLength),
                    ExtraData: make([]byte, 0),
                    BaseFeePerGas: make([]byte, fieldparams.RootLength),
                    BlockHash: make([]byte, fieldparams.RootLength),
                    TransactionsRoot: make([]byte, fieldparams.RootLength),
                    WithdrawalsRoot: make([]byte, fieldparams.RootLength),
                    GasLimit: 0,
                    GasUsed: 0,
                },
                ExecutionBranch: executionBranch,
            },
            CurrentSyncCommittee: currentSyncCommittee,
            CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
        }
    } else {
        m = &pb.LightClientBootstrapElectra{
            Header: &pb.LightClientHeaderDeneb{
                Beacon: &pb.BeaconBlockHeader{
                    ParentRoot: make([]byte, 32),
                    StateRoot: make([]byte, 32),
                    BodyRoot: make([]byte, 32),
                },
                Execution: &enginev1.ExecutionPayloadHeaderDeneb{
                    ParentHash: make([]byte, fieldparams.RootLength),
                    FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
                    StateRoot: make([]byte, fieldparams.RootLength),
                    ReceiptsRoot: make([]byte, fieldparams.RootLength),
                    LogsBloom: make([]byte, fieldparams.LogsBloomLength),
                    PrevRandao: make([]byte, fieldparams.RootLength),
                    ExtraData: make([]byte, 0),
                    BaseFeePerGas: make([]byte, fieldparams.RootLength),
                    BlockHash: make([]byte, fieldparams.RootLength),
                    TransactionsRoot: make([]byte, fieldparams.RootLength),
                    WithdrawalsRoot: make([]byte, fieldparams.RootLength),
                    GasLimit: 0,
                    GasUsed: 0,
                },
                ExecutionBranch: executionBranch,
            },
            CurrentSyncCommittee: currentSyncCommittee,
            CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
        }
    }

    return light_client.NewWrappedBootstrap(m)
}

func createRandomSyncCommittee() *pb.SyncCommittee {
    // random number between 2 and 128
    base := rand.Int()%127 + 2

    syncCom := make([][]byte, params.BeaconConfig().SyncCommitteeSize)
    for i := 0; uint64(i) < params.BeaconConfig().SyncCommitteeSize; i++ {
        if i%base == 0 {
            syncCom[i] = make([]byte, fieldparams.BLSPubkeyLength)
            syncCom[i][0] = 1
            continue
        }
        syncCom[i] = make([]byte, fieldparams.BLSPubkeyLength)
    }

    return &pb.SyncCommittee{
        Pubkeys: syncCom,
        AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
    }
}
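The TODO in createDefaultLightClientBootstrap asks whether the variant can be chosen from the current epoch alone; the if/else ladder already encodes exactly that mapping. A self-contained sketch of the fork selection, with illustrative names and plain uint64 epochs rather than the config types used above:

// bootstrapVersionForEpoch is hypothetical; it restates the branch conditions
// above as a single lookup keyed on the epoch's fork.
func bootstrapVersionForEpoch(epoch, capella, deneb, electra uint64) string {
    switch {
    case epoch < capella:
        return "altair"
    case epoch < deneb:
        return "capella"
    case epoch < electra:
        return "deneb"
    default:
        return "electra"
    }
}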
@@ -349,7 +349,7 @@ func (s *Service) listenForNewNodes() {
    wg.Add(1)
    go func(info *peer.AddrInfo) {
        if err := s.connectWithPeer(s.ctx, *info); err != nil {
-           log.WithError(err).Tracef("Could not connect with peer %s", info.String())
+           log.WithError(err).WithField("peerID", info.ID).Debug("Could not connect with peer")
        }
        wg.Done()
    }(peerInfo)
@@ -214,7 +214,11 @@ func (s *Service) AddDisconnectionHandler(handler func(ctx context.Context, id p
    // Only log disconnections if we were fully connected.
    if priorState == peers.Connected {
        activePeersCount := len(s.peers.Active())
-       log.WithField("remainingActivePeers", activePeersCount).Debug("Peer disconnected")
+       log.WithFields(logrus.Fields{
+           "remainingActivePeers": activePeersCount,
+           "direction": conn.Stat().Direction.String(),
+           "peerID": peerID,
+       }).Debug("Peer disconnected")
    }
}()
},
@@ -101,18 +101,22 @@ func (s *BadResponsesScorer) countNoLock(pid peer.ID) (int, error) {

// Increment increments the number of bad responses we have received from the given remote peer.
// If peer doesn't exist this method is no-op.
-func (s *BadResponsesScorer) Increment(pid peer.ID) {
+func (s *BadResponsesScorer) Increment(pid peer.ID) int {
+   const defaultBadResponses = 1
+
    s.store.Lock()
    defer s.store.Unlock()

    peerData, ok := s.store.PeerData(pid)
    if !ok {
        s.store.SetPeerData(pid, &peerdata.PeerData{
-           BadResponses: 1,
+           BadResponses: defaultBadResponses,
        })
-       return
+       return defaultBadResponses
    }
    peerData.BadResponses++
+
+   return peerData.BadResponses
}

// IsBadPeer states if the peer is to be considered bad.
@@ -443,7 +443,7 @@ func (s *Service) connectWithAllTrustedPeers(multiAddrs []multiaddr.Multiaddr) {
    // make each dial non-blocking
    go func(info peer.AddrInfo) {
        if err := s.connectWithPeer(s.ctx, info); err != nil {
-           log.WithError(err).Tracef("Could not connect with peer %s", info.String())
+           log.WithError(err).WithField("peerID", info.ID).Debug("Could not connect with trusted peer")
        }
    }(info)
}
@@ -459,7 +459,7 @@ func (s *Service) connectWithAllPeers(multiAddrs []multiaddr.Multiaddr) {
    // make each dial non-blocking
    go func(info peer.AddrInfo) {
        if err := s.connectWithPeer(s.ctx, info); err != nil {
-           log.WithError(err).Tracef("Could not connect with peer %s", info.String())
+           log.WithError(err).WithField("peerID", info.ID).Debug("Could not connect with peer")
        }
    }(info)
}
@@ -478,8 +478,8 @@ func (s *Service) connectWithPeer(ctx context.Context, info peer.AddrInfo) error
    ctx, cancel := context.WithTimeout(ctx, maxDialTimeout)
    defer cancel()
    if err := s.host.Connect(ctx, info); err != nil {
-       s.Peers().Scorers().BadResponsesScorer().Increment(info.ID)
-       return err
+       score := s.Peers().Scorers().BadResponsesScorer().Increment(info.ID)
+       return errors.Wrapf(err, "connect to peer %s - new bad responses score: %d", info.ID, score)
    }
    return nil
}
@@ -113,7 +113,7 @@ func (s *Service) dialPeer(ctx context.Context, wg *sync.WaitGroup, node *enode.
    wg.Add(1)
    go func() {
        if err := s.connectWithPeer(ctx, *info); err != nil {
-           log.WithError(err).Tracef("Could not connect with peer %s", info.String())
+           log.WithError(err).WithField("peerID", info.ID).Debug("Could not connect with peer")
        }

        wg.Done()
@@ -177,6 +177,7 @@ func (s *Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
    Blocker: blocker,
    OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
    FinalizationFetcher: s.cfg.FinalizationFetcher,
+   TimeFetcher: s.cfg.GenesisTimeFetcher,
}

const namespace = "blob"
@@ -14,7 +14,9 @@ go_library(
    "//beacon-chain/rpc/core:go_default_library",
    "//beacon-chain/rpc/lookup:go_default_library",
    "//config/fieldparams:go_default_library",
+   "//config/params:go_default_library",
    "//consensus-types/blocks:go_default_library",
+   "//consensus-types/primitives:go_default_library",
    "//monitoring/tracing/trace:go_default_library",
    "//network/httputil:go_default_library",
    "//runtime/version:go_default_library",
@@ -12,7 +12,9 @@ import (
    "github.com/prysmaticlabs/prysm/v5/api/server/structs"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/core"
    field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
+   "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
+   "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
    "github.com/prysmaticlabs/prysm/v5/network/httputil"
    "github.com/prysmaticlabs/prysm/v5/runtime/version"
@@ -23,7 +25,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
    ctx, span := trace.StartSpan(r.Context(), "beacon.Blobs")
    defer span.End()

-   indices, err := parseIndices(r.URL)
+   indices, err := parseIndices(r.URL, s.TimeFetcher.CurrentSlot())
    if err != nil {
        httputil.HandleError(w, err.Error(), http.StatusBadRequest)
        return
@@ -87,9 +89,9 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
}

// parseIndices filters out invalid and duplicate blob indices
-func parseIndices(url *url.URL) ([]uint64, error) {
+func parseIndices(url *url.URL, s primitives.Slot) ([]uint64, error) {
    rawIndices := url.Query()["indices"]
-   indices := make([]uint64, 0, field_params.MaxBlobsPerBlock)
+   indices := make([]uint64, 0, params.BeaconConfig().MaxBlobsPerBlock(s))
    invalidIndices := make([]string, 0)
loop:
    for _, raw := range rawIndices {
@@ -98,7 +100,7 @@ loop:
        invalidIndices = append(invalidIndices, raw)
        continue
    }
-   if ix >= field_params.MaxBlobsPerBlock {
+   if ix >= uint64(params.BeaconConfig().MaxBlobsPerBlock(s)) {
        invalidIndices = append(invalidIndices, raw)
        continue
    }
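A hypothetical call against the new parseIndices signature, as it might appear in a test in the same package; the exact error contents are an assumption:

u := &url.URL{RawQuery: "indices=0,2,2"} // net/url
indices, err := parseIndices(u, 0)       // slot 0, so the pre-Electra limit applies
// expected: indices == []uint64{0, 2} with the duplicate dropped and err == nil;
// a value at or above MaxBlobsPerBlock(0) would instead be reported back as invalid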
@@ -52,6 +52,7 @@ func TestBlobs(t *testing.T) {
    s := &Server{
        OptimisticModeFetcher: mockChainService,
        FinalizationFetcher: mockChainService,
+       TimeFetcher: mockChainService,
    }

    t.Run("genesis", func(t *testing.T) {
@@ -400,7 +401,7 @@ func Test_parseIndices(t *testing.T) {
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
-           got, err := parseIndices(&url.URL{RawQuery: tt.query})
+           got, err := parseIndices(&url.URL{RawQuery: tt.query}, 0)
            if err != nil && tt.wantErr != "" {
                require.StringContains(t, tt.wantErr, err.Error())
                return
@@ -9,4 +9,5 @@ type Server struct {
    Blocker lookup.Blocker
    OptimisticModeFetcher blockchain.OptimisticModeFetcher
    FinalizationFetcher blockchain.FinalizationFetcher
+   TimeFetcher blockchain.TimeFetcher
}
@@ -190,7 +190,7 @@ func TestGetSpec(t *testing.T) {
    data, ok := resp.Data.(map[string]interface{})
    require.Equal(t, true, ok)

-   assert.Equal(t, 156, len(data))
+   assert.Equal(t, 159, len(data))
    for k, v := range data {
        t.Run(k, func(t *testing.T) {
            switch k {
@@ -335,7 +335,7 @@
            case "MAX_VOLUNTARY_EXITS":
                assert.Equal(t, "52", v)
            case "MAX_BLOBS_PER_BLOCK":
-               assert.Equal(t, "4", v)
+               assert.Equal(t, "6", v)
            case "TIMELY_HEAD_FLAG_INDEX":
                assert.Equal(t, "0x35", v)
            case "TIMELY_SOURCE_FLAG_INDEX":
@@ -529,6 +529,10 @@
                assert.Equal(t, "93", v)
            case "MAX_PENDING_DEPOSITS_PER_EPOCH":
                assert.Equal(t, "94", v)
+           case "TARGET_BLOBS_PER_BLOCK_ELECTRA":
+               assert.Equal(t, "6", v)
+           case "MAX_BLOBS_PER_BLOCK_ELECTRA":
+               assert.Equal(t, "9", v)
            default:
                t.Errorf("Incorrect key: %s", k)
            }
@@ -18,6 +18,7 @@ go_library(
    "//beacon-chain/rpc/eth/shared:go_default_library",
    "//beacon-chain/rpc/lookup:go_default_library",
    "//beacon-chain/state:go_default_library",
+   "//config/features:go_default_library",
    "//config/params:go_default_library",
    "//consensus-types/interfaces:go_default_library",
    "//consensus-types/primitives:go_default_library",
@@ -45,6 +46,7 @@ go_test(
    "//beacon-chain/db/testing:go_default_library",
    "//beacon-chain/rpc/testutil:go_default_library",
    "//beacon-chain/state:go_default_library",
+   "//config/features:go_default_library",
    "//config/fieldparams:go_default_library",
    "//config/params:go_default_library",
    "//consensus-types/blocks:go_default_library",
@@ -12,6 +12,7 @@ import (
    "github.com/prysmaticlabs/prysm/v5/api/server/structs"
    lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/shared"
+   "github.com/prysmaticlabs/prysm/v5/config/features"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
    "github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
@@ -22,6 +23,11 @@ import (

// GetLightClientBootstrap - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/bootstrap.yaml
func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Request) {
+   if !features.Get().EnableLightClient {
+       httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
+       return
+   }
+
    // Prepare
    ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientBootstrap")
    defer span.End()
@@ -76,26 +82,21 @@ func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Reques

// GetLightClientUpdatesByRange - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/updates.yaml
func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.Request) {
-   // Prepare
+   if !features.Get().EnableLightClient {
+       httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
+       return
+   }
+
    ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientUpdatesByRange")
    defer span.End()

-   // Determine slots per period
    config := params.BeaconConfig()
    slotsPerPeriod := uint64(config.EpochsPerSyncCommitteePeriod) * uint64(config.SlotsPerEpoch)

-   // Adjust count based on configuration
    _, count, gotCount := shared.UintFromQuery(w, req, "count", true)
    if !gotCount {
        return
    } else if count == 0 {
-       httputil.HandleError(w, fmt.Sprintf("got invalid 'count' query variable '%d': count must be greater than 0", count), http.StatusInternalServerError)
-       return
-   }
-
-   // Determine the start and end periods
-   _, startPeriod, gotStartPeriod := shared.UintFromQuery(w, req, "start_period", true)
-   if !gotStartPeriod {
+       httputil.HandleError(w, fmt.Sprintf("Got invalid 'count' query variable '%d': count must be greater than 0", count), http.StatusBadRequest)
        return
    }
@@ -103,33 +104,13 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
        count = config.MaxRequestLightClientUpdates
    }

-   // max possible slot is current head
-   headState, err := s.HeadFetcher.HeadState(ctx)
-   if err != nil {
-       httputil.HandleError(w, "could not get head state: "+err.Error(), http.StatusInternalServerError)
+   _, startPeriod, gotStartPeriod := shared.UintFromQuery(w, req, "start_period", true)
+   if !gotStartPeriod {
        return
    }

-   maxSlot := uint64(headState.Slot())
-
-   // min possible slot is Altair fork period
-   minSlot := uint64(config.AltairForkEpoch) * uint64(config.SlotsPerEpoch)
-
-   // Adjust startPeriod, the end of start period must be later than Altair fork epoch, otherwise, can not get the sync committee votes
-   startPeriodEndSlot := (startPeriod+1)*slotsPerPeriod - 1
-   if startPeriodEndSlot < minSlot {
-       startPeriod = minSlot / slotsPerPeriod
-   }
-
-   // Get the initial endPeriod, then we will adjust
    endPeriod := startPeriod + count - 1
-
-   // Adjust endPeriod, the end of end period must be earlier than current head slot
-   endPeriodEndSlot := (endPeriod+1)*slotsPerPeriod - 1
-   if endPeriodEndSlot > maxSlot {
-       endPeriod = maxSlot / slotsPerPeriod
-   }

    // get updates
    updatesMap, err := s.BeaconDB.LightClientUpdates(ctx, startPeriod, endPeriod)
    if err != nil {
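The clamping removed above reduces to simple period arithmetic over slotsPerPeriod. A self-contained restatement of what those bounds computed, useful for seeing what behavior the refactor drops; the function name is illustrative:

// clampPeriods mirrors the removed logic: a period's last slot is
// (period+1)*slotsPerPeriod - 1; the start is pulled up to the first period
// ending after minSlot (the Altair fork) and the end is pulled down to the
// period containing maxSlot (the head).
func clampPeriods(startPeriod, count, slotsPerPeriod, minSlot, maxSlot uint64) (uint64, uint64) {
    if (startPeriod+1)*slotsPerPeriod-1 < minSlot {
        startPeriod = minSlot / slotsPerPeriod
    }
    endPeriod := startPeriod + count - 1
    if (endPeriod+1)*slotsPerPeriod-1 > maxSlot {
        endPeriod = maxSlot / slotsPerPeriod
    }
    return startPeriod, endPeriod
}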
@@ -162,6 +143,11 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R

// GetLightClientFinalityUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/finality_update.yaml
func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.Request) {
+   if !features.Get().EnableLightClient {
+       httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
+       return
+   }
+
    ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientFinalityUpdate")
    defer span.End()
@@ -220,6 +206,11 @@ func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.R
|
||||
|
||||
// GetLightClientOptimisticUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/optimistic_update.yaml
|
||||
func (s *Server) GetLightClientOptimisticUpdate(w http.ResponseWriter, req *http.Request) {
|
||||
if !features.Get().EnableLightClient {
|
||||
httputil.HandleError(w, "Light client feature flag is not enabled", http.StatusNotFound)
|
||||
return
|
||||
}
|
||||
|
||||
ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientOptimisticUpdate")
|
||||
defer span.End()
|
||||
|
||||
|
||||
@@ -19,6 +19,7 @@ import (
	dbtesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/testutil"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v5/config/features"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -33,6 +34,11 @@ import (
)
func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	params.SetupTestConfigCleanup(t)
	cfg := params.BeaconConfig()
	cfg.AltairForkEpoch = 0
@@ -252,6 +258,11 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
// GetLightClientByRange tests

func TestLightClientHandler_GetLightClientUpdatesByRangeAltair(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
@@ -301,6 +312,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeAltair(t *testing.T) {
}

func TestLightClientHandler_GetLightClientUpdatesByRangeCapella(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -350,6 +366,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeCapella(t *testing.T) {
}

func TestLightClientHandler_GetLightClientUpdatesByRangeDeneb(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -399,6 +420,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeDeneb(t *testing.T) {
}

func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleAltair(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -458,6 +484,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleAltair(t *testin
}

func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleCapella(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -518,6 +549,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleCapella(t *testi
}

func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleDeneb(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -578,6 +614,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleDeneb(t *testing
}

func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleForksAltairCapella(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -646,6 +687,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleForksAltairCapel
}

func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleForksCapellaDeneb(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -715,6 +761,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeMultipleForksCapellaDene
}

func TestLightClientHandler_GetLightClientUpdatesByRangeCountBiggerThanLimit(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -777,6 +828,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeCountBiggerThanLimit(t *
}

func TestLightClientHandler_GetLightClientUpdatesByRangeCountBiggerThanMax(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -838,35 +894,22 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeCountBiggerThanMax(t *te
}
func TestLightClientHandler_GetLightClientUpdatesByRangeStartPeriodBeforeAltair(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
	config := params.BeaconConfig()
	config.AltairForkEpoch = 1
	config.EpochsPerSyncCommitteePeriod = 1
	params.OverrideBeaconConfig(config)
	slot := primitives.Slot(config.AltairForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)

	st, err := util.NewBeaconStateAltair()
	require.NoError(t, err)
	headSlot := slot.Add(1)
	err = st.SetSlot(headSlot)
	require.NoError(t, err)

	db := dbtesting.SetupDB(t)

	updatePeriod := slot.Div(uint64(config.EpochsPerSyncCommitteePeriod)).Div(uint64(config.SlotsPerEpoch))

	update, err := createUpdate(t, version.Altair)
	require.NoError(t, err)

	err = db.SaveLightClientUpdate(ctx, uint64(updatePeriod), update)
	require.NoError(t, err)

	mockChainService := &mock.ChainService{State: st}
	s := &Server{
		HeadFetcher: mockChainService,
		BeaconDB:    db,
	}
	startPeriod := 0
	url := fmt.Sprintf("http://foo.com/?count=2&start_period=%d", startPeriod)
@@ -878,18 +921,17 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeStartPeriodBeforeAltair(

	require.Equal(t, http.StatusOK, writer.Code)
	var resp structs.LightClientUpdatesByRangeResponse
	err := json.Unmarshal(writer.Body.Bytes(), &resp.Updates)
	require.NoError(t, err)
	require.Equal(t, 0, len(resp.Updates))
}
func TestLightClientHandler_GetLightClientUpdatesByRangeMissingUpdates(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	params.SetupTestConfigCleanup(t)
@@ -996,6 +1038,11 @@ func TestLightClientHandler_GetLightClientUpdatesByRangeMissingUpdates(t *testin
}
func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	config := params.BeaconConfig()
@@ -1108,6 +1155,11 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
}

func TestLightClientHandler_GetLightClientOptimisticUpdateAltair(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	config := params.BeaconConfig()
@@ -1220,6 +1272,11 @@ func TestLightClientHandler_GetLightClientOptimisticUpdateAltair(t *testing.T) {
}

func TestLightClientHandler_GetLightClientOptimisticUpdateCapella(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	config := params.BeaconConfig()
@@ -1332,6 +1389,11 @@ func TestLightClientHandler_GetLightClientOptimisticUpdateCapella(t *testing.T)
}

func TestLightClientHandler_GetLightClientOptimisticUpdateDeneb(t *testing.T) {
	resetFn := features.InitWithReset(&features.Flags{
		EnableLightClient: true,
	})
	defer resetFn()

	helpers.ClearCache()
	ctx := context.Background()
	config := params.BeaconConfig()
@@ -235,7 +235,7 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
		return make([]*blocks.VerifiedROBlob, 0), nil
	}
	if len(indices) == 0 {
		m, err := p.BlobStorage.Indices(bytesutil.ToBytes32(root), b.Block().Slot())
		if err != nil {
			log.WithFields(log.Fields{
				"blockRoot": hexutil.Encode(root),
@@ -244,6 +244,9 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
		}
		for k, v := range m {
			if v {
				if k >= len(commitments) {
					return nil, &core.RpcError{Err: fmt.Errorf("blob index %d exceeds the number of blob kzg commitments: %d", k, len(commitments)), Reason: core.BadRequest}
				}
				indices = append(indices, uint64(k))
			}
		}
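The extra slot argument lets the blob storage size its per-block index lookup by the fork-dependent maximum blob count instead of a compile-time constant. A minimal sketch of the consuming loop above with the storage mocked out (this helper is hypothetical, not the real BlobStorage API):

// indicesFromMap mirrors the loop above: it keeps only the on-disk blob
// indices that are backed by a kzg commitment in the block.
func indicesFromMap(onDisk map[int]bool, commitmentCount int) ([]uint64, error) {
	indices := make([]uint64, 0, commitmentCount)
	for k, present := range onDisk {
		if !present {
			continue
		}
		if k >= commitmentCount {
			return nil, fmt.Errorf("blob index %d exceeds the number of blob kzg commitments: %d", k, commitmentCount)
		}
		indices = append(indices, uint64(k))
	}
	return indices, nil
}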
@@ -99,15 +99,18 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
	}

	resp, err := vs.BuildBlockParallel(ctx, sBlk, head, req.SkipMevBoost, builderBoostFactor)
	log := log.WithFields(logrus.Fields{
		"slot":               req.Slot,
		"sinceSlotStartTime": time.Since(t),
		"validator":          sBlk.Block().ProposerIndex(),
	})

	if err != nil {
		log.WithError(err).Error("Finished building block")
		return nil, errors.Wrap(err, "could not build block in parallel")
	}

	log.Info("Finished building block")
	return resp, nil
}
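The pattern above builds a logrus entry once and reuses it on both paths, so the error field appears only when there actually is an error. A standalone sketch of the same pattern (field values are placeholders):

package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

func finish(err error) {
	// Build the entry once; the shared fields appear on both outcomes.
	entry := logrus.WithFields(logrus.Fields{"slot": 123, "validator": 42})
	if err != nil {
		// Only the failure path carries the error field.
		entry.WithError(err).Error("Finished building block")
		return
	}
	entry.Info("Finished building block")
}

func main() {
	finish(nil)
	finish(errors.New("builder timeout"))
}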
@@ -237,7 +240,12 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
	// There's no reason to try to get a builder bid if local override is true.
	var builderBid builderapi.Bid
	if !(local.OverrideBuilder || skipMevBoost) {
		latestHeader, err := head.LatestExecutionPayloadHeader()
		if err != nil {
			return nil, status.Errorf(codes.Internal, "Could not get latest execution payload header: %v", err)
		}
		parentGasLimit := latestHeader.GasLimit()
		builderBid, err = vs.getBuilderPayloadAndBlobs(ctx, sBlk.Block().Slot(), sBlk.Block().ProposerIndex(), parentGasLimit)
		if err != nil {
			builderGetPayloadMissCount.Inc()
			log.WithError(err).Error("Could not get builder payload")
@@ -51,6 +51,7 @@ var emptyTransactionsRoot = [32]byte{127, 254, 36, 30, 166, 1, 135, 253, 176, 24
// blockBuilderTimeout is the maximum amount of time allowed for a block builder to respond to a
// block request. This value is known as `BUILDER_PROPOSAL_DELAY_TOLERANCE` in the builder spec.
const blockBuilderTimeout = 1 * time.Second
const gasLimitAdjustmentFactor = 1024

// Sets the execution data for the block. Execution data can come from the local EL client or a remote builder, depending on validator registration and circuit breaker conditions.
func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, local *blocks.GetPayloadResponse, bid builder.Bid, builderBoostFactor primitives.Gwei) (primitives.Wei, *enginev1.BlobsBundle, error) {
@@ -170,7 +171,11 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc

// This function retrieves the payload header and kzg commitments given the slot number and the validator index.
// It's a no-op if the latest head block is older than Bellatrix.
func (vs *Server) getPayloadHeaderFromBuilder(
	ctx context.Context,
	slot primitives.Slot,
	idx primitives.ValidatorIndex,
	parentGasLimit uint64) (builder.Bid, error) {
	ctx, span := trace.StartSpan(ctx, "ProposerServer.getPayloadHeaderFromBuilder")
	defer span.End()
@@ -243,6 +248,16 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot primitiv
		return nil, fmt.Errorf("incorrect parent hash %#x != %#x", header.ParentHash(), h.BlockHash())
	}

	reg, err := vs.BlockBuilder.RegistrationByValidatorID(ctx, idx)
	if err != nil {
		log.WithError(err).Warn("Proposer: failed to get registration by validator ID, could not check gas limit")
	} else {
		gasLimit := expectedGasLimit(parentGasLimit, reg.GasLimit)
		if gasLimit != header.GasLimit() {
			return nil, fmt.Errorf("incorrect header gas limit %d != %d", gasLimit, header.GasLimit())
		}
	}

	t, err := slots.ToTime(uint64(vs.TimeFetcher.GenesisTime().Unix()), slot)
	if err != nil {
		return nil, err
@@ -255,13 +270,14 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot primitiv
		return nil, errors.Wrap(err, "could not validate builder signature")
	}

	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
	var kzgCommitments [][]byte
	if bid.Version() >= version.Deneb {
		kzgCommitments, err = bid.BlobKzgCommitments()
		if err != nil {
			return nil, errors.Wrap(err, "could not get blob kzg commitments")
		}
		if len(kzgCommitments) > maxBlobsPerBlock {
			return nil, fmt.Errorf("builder returned too many kzg commitments: %d", len(kzgCommitments))
		}
		for _, c := range kzgCommitments {
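params.BeaconConfig().MaxBlobsPerBlock(slot) makes the blob ceiling a function of the slot's fork rather than a fixed fieldparams constant. A toy version of such a lookup (6 is the Deneb-era mainnet maximum; the post-fork value of 9 matches the EIP-7691 proposal but is only an assumption here, as is the function shape):

// maxBlobsPerBlockSketch returns a fork-dependent blob ceiling.
// The fork boundary and the counts are illustrative placeholders.
func maxBlobsPerBlockSketch(slot, slotsPerEpoch, electraForkEpoch uint64) int {
	if slot/slotsPerEpoch >= electraForkEpoch {
		return 9 // hypothetical post-Electra maximum
	}
	return 6 // Deneb-era maximum
}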
@@ -393,3 +409,32 @@ func setExecution(blk interfaces.SignedBeaconBlock, execution interfaces.Executi

	return nil
}

// Calculates expected gas limit based on parent gas limit and target gas limit.
// Spec code:
//
//	def expected_gas_limit(parent_gas_limit, target_gas_limit, adjustment_factor):
//	    max_gas_limit_difference = (parent_gas_limit // adjustment_factor) - 1
//	    if target_gas_limit > parent_gas_limit:
//	        gas_diff = target_gas_limit - parent_gas_limit
//	        return parent_gas_limit + min(gas_diff, max_gas_limit_difference)
//	    else:
//	        gas_diff = parent_gas_limit - target_gas_limit
//	        return parent_gas_limit - min(gas_diff, max_gas_limit_difference)
func expectedGasLimit(parentGasLimit, proposerGasLimit uint64) uint64 {
	maxGasLimitDiff := uint64(0)
	if parentGasLimit > gasLimitAdjustmentFactor {
		maxGasLimitDiff = parentGasLimit/gasLimitAdjustmentFactor - 1
	}
	if proposerGasLimit > parentGasLimit {
		if proposerGasLimit-parentGasLimit > maxGasLimitDiff {
			return parentGasLimit + maxGasLimitDiff
		}
		return proposerGasLimit
	}

	if parentGasLimit-proposerGasLimit > maxGasLimitDiff {
		return parentGasLimit - maxGasLimitDiff
	}
	return proposerGasLimit
}
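To see the clamp in action: with a parent gas limit of 30,000,000 and a registered target of 36,000,000, the maximum step is 30000000/1024 - 1 = 29295, so the builder's header must carry exactly 30,029,295. A quick sketch exercising the function above (this helper is illustrative, not part of the source):

// printExpectedGasLimits is a hypothetical sanity check of the clamp arithmetic.
func printExpectedGasLimits() {
	fmt.Println(expectedGasLimit(30_000_000, 36_000_000)) // 30029295: clamped increase
	fmt.Println(expectedGasLimit(30_000_000, 29_999_000)) // 29999000: within bounds, taken as-is
	fmt.Println(expectedGasLimit(30_000_000, 30_000_000)) // 30000000: no change requested
}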
@@ -94,14 +94,14 @@ func TestServer_setExecutionData(t *testing.T) {
		ForkchoiceFetcher:      &blockchainTest.ChainService{},
		TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
	}

	gasLimit := uint64(30000000)
	t.Run("No builder configured. Use local block", func(t *testing.T) {
		blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockCapella())
		require.NoError(t, err)
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		require.IsNil(t, builderBid)
		_, bundle, err := setExecutionData(context.Background(), blk, res, builderBid, defaultBuilderBoostFactor)
@@ -115,7 +115,11 @@ func TestServer_setExecutionData(t *testing.T) {
		blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockCapella())
		require.NoError(t, err)
		require.NoError(t, vs.BeaconDB.SaveRegistrationsByValidatorIDs(ctx, []primitives.ValidatorIndex{blk.Block().ProposerIndex()},
			[]*ethpb.ValidatorRegistrationV1{{
				FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
				Timestamp:    uint64(time.Now().Unix()),
				GasLimit:     gasLimit,
				Pubkey:       make([]byte, fieldparams.BLSPubkeyLength)}}))
		ti, err := slots.ToTime(uint64(time.Now().Unix()), 0)
		require.NoError(t, err)
		sk, err := bls.RandKey()
@@ -135,6 +139,7 @@ func TestServer_setExecutionData(t *testing.T) {
			BlockHash:        make([]byte, fieldparams.RootLength),
			TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
			WithdrawalsRoot:  make([]byte, fieldparams.RootLength),
			GasLimit:         gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo([]byte{1}, 32),
@@ -164,7 +169,7 @@ func TestServer_setExecutionData(t *testing.T) {

		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		_, err = builderBid.Header()
		require.NoError(t, err)
@@ -184,7 +189,11 @@ func TestServer_setExecutionData(t *testing.T) {
		blk, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockCapella())
		require.NoError(t, err)
		require.NoError(t, vs.BeaconDB.SaveRegistrationsByValidatorIDs(ctx, []primitives.ValidatorIndex{blk.Block().ProposerIndex()},
			[]*ethpb.ValidatorRegistrationV1{{
				FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
				Timestamp:    uint64(time.Now().Unix()),
				GasLimit:     gasLimit,
				Pubkey:       make([]byte, fieldparams.BLSPubkeyLength)}}))
		ti, err := slots.ToTime(uint64(time.Now().Unix()), 0)
		require.NoError(t, err)
		sk, err := bls.RandKey()
@@ -207,6 +216,7 @@ func TestServer_setExecutionData(t *testing.T) {
			BlockHash:        make([]byte, fieldparams.RootLength),
			TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
			WithdrawalsRoot:  wr[:],
			GasLimit:         gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo(builderValue, 32),
@@ -236,7 +246,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		_, err = builderBid.Header()
		require.NoError(t, err)
@@ -256,7 +266,11 @@ func TestServer_setExecutionData(t *testing.T) {
		blk, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockCapella())
		require.NoError(t, err)
		require.NoError(t, vs.BeaconDB.SaveRegistrationsByValidatorIDs(ctx, []primitives.ValidatorIndex{blk.Block().ProposerIndex()},
			[]*ethpb.ValidatorRegistrationV1{{
				FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
				Timestamp:    uint64(time.Now().Unix()),
				GasLimit:     gasLimit,
				Pubkey:       make([]byte, fieldparams.BLSPubkeyLength)}}))
		ti, err := slots.ToTime(uint64(time.Now().Unix()), 0)
		require.NoError(t, err)
		sk, err := bls.RandKey()
@@ -278,6 +292,7 @@ func TestServer_setExecutionData(t *testing.T) {
			Timestamp:       uint64(ti.Unix()),
			BlockNumber:     2,
			WithdrawalsRoot: wr[:],
			GasLimit:        gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo(builderValue, 32),
@@ -307,7 +322,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		_, err = builderBid.Header()
		require.NoError(t, err)
@@ -327,7 +342,11 @@ func TestServer_setExecutionData(t *testing.T) {
		blk, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockCapella())
		require.NoError(t, err)
		require.NoError(t, vs.BeaconDB.SaveRegistrationsByValidatorIDs(ctx, []primitives.ValidatorIndex{blk.Block().ProposerIndex()},
			[]*ethpb.ValidatorRegistrationV1{{
				FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
				Timestamp:    uint64(time.Now().Unix()),
				GasLimit:     gasLimit,
				Pubkey:       make([]byte, fieldparams.BLSPubkeyLength)}}))
		ti, err := slots.ToTime(uint64(time.Now().Unix()), 0)
		require.NoError(t, err)
		sk, err := bls.RandKey()
@@ -349,6 +368,7 @@ func TestServer_setExecutionData(t *testing.T) {
			Timestamp:       uint64(ti.Unix()),
			BlockNumber:     2,
			WithdrawalsRoot: wr[:],
			GasLimit:        gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo(builderValue, 32),
@@ -378,7 +398,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		_, err = builderBid.Header()
		require.NoError(t, err)
@@ -404,7 +424,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		_, err = builderBid.Header()
		require.NoError(t, err)
@@ -436,7 +456,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		_, err = builderBid.Header()
		require.NoError(t, err)
@@ -471,7 +491,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.NoError(t, err)
		builderKzgCommitments, err := builderBid.BlobKzgCommitments()
		if builderBid.Version() >= version.Deneb {
@@ -503,7 +523,7 @@ func TestServer_setExecutionData(t *testing.T) {
		b := blk.Block()
		res, err := vs.getLocalPayload(ctx, b, capellaTransitionState)
		require.NoError(t, err)
		builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, b.Slot(), b.ProposerIndex(), gasLimit)
		require.ErrorIs(t, consensus_types.ErrNilObjectWrapped, err) // Builder returns fault. Use local block.
		require.IsNil(t, builderBid)
		_, bundle, err := setExecutionData(context.Background(), blk, res, nil, defaultBuilderBoostFactor)
@@ -578,6 +598,7 @@ func TestServer_setExecutionData(t *testing.T) {
			WithdrawalsRoot: wr[:],
			BlobGasUsed:     123,
			ExcessBlobGas:   456,
			GasLimit:        gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo(builderValue, 32),
@@ -599,7 +620,11 @@ func TestServer_setExecutionData(t *testing.T) {
		Cfg: &builderTest.Config{BeaconDB: beaconDB},
	}
	require.NoError(t, beaconDB.SaveRegistrationsByValidatorIDs(ctx, []primitives.ValidatorIndex{blk.Block().ProposerIndex()},
		[]*ethpb.ValidatorRegistrationV1{{
			FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
			Timestamp:    uint64(time.Now().Unix()),
			GasLimit:     gasLimit,
			Pubkey:       make([]byte, fieldparams.BLSPubkeyLength)}}))

	wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockDeneb())
	require.NoError(t, err)
@@ -619,7 +644,7 @@ func TestServer_setExecutionData(t *testing.T) {
	require.NoError(t, err)
	blk.SetSlot(primitives.Slot(params.BeaconConfig().DenebForkEpoch) * params.BeaconConfig().SlotsPerEpoch)
	require.NoError(t, err)
	builderBid, err := vs.getBuilderPayloadAndBlobs(ctx, blk.Block().Slot(), blk.Block().ProposerIndex(), gasLimit)
	require.NoError(t, err)
	builderPayload, err := builderBid.Header()
	require.NoError(t, err)
@@ -660,6 +685,8 @@ func TestServer_getPayloadHeader(t *testing.T) {

	sk, err := bls.RandKey()
	require.NoError(t, err)

	gasLimit := uint64(30000000)
	bid := &ethpb.BuilderBid{
		Header: &v1.ExecutionPayloadHeader{
			FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
@@ -672,6 +699,7 @@ func TestServer_getPayloadHeader(t *testing.T) {
			TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
			ParentHash:       params.BeaconConfig().ZeroHash[:],
			Timestamp:        uint64(ti.Unix()),
			GasLimit:         gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo([]byte{1, 2, 3}, 32),
@@ -709,6 +737,7 @@ func TestServer_getPayloadHeader(t *testing.T) {
			ParentHash:      params.BeaconConfig().ZeroHash[:],
			Timestamp:       uint64(tiCapella.Unix()),
			WithdrawalsRoot: wr[:],
			GasLimit:        gasLimit,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo([]byte{1, 2, 3}, 32),
@@ -720,7 +749,29 @@ func TestServer_getPayloadHeader(t *testing.T) {
		Signature: sk.Sign(srCapella[:]).Marshal(),
	}

	require.NoError(t, err)
	incorrectGasLimitBid := &ethpb.BuilderBid{
		Header: &v1.ExecutionPayloadHeader{
			FeeRecipient:     make([]byte, fieldparams.FeeRecipientLength),
			StateRoot:        make([]byte, fieldparams.RootLength),
			ReceiptsRoot:     make([]byte, fieldparams.RootLength),
			LogsBloom:        make([]byte, fieldparams.LogsBloomLength),
			PrevRandao:       make([]byte, fieldparams.RootLength),
			BaseFeePerGas:    make([]byte, fieldparams.RootLength),
			BlockHash:        make([]byte, fieldparams.RootLength),
			TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
			ParentHash:       params.BeaconConfig().ZeroHash[:],
			Timestamp:        uint64(tiCapella.Unix()),
			GasLimit:         31000000,
		},
		Pubkey: sk.PublicKey().Marshal(),
		Value:  bytesutil.PadTo([]byte{1, 2, 3}, 32),
	}
	signedIncorrectGasLimitBid := &ethpb.SignedBuilderBid{
		Message:   incorrectGasLimitBid,
		Signature: sk.Sign(srCapella[:]).Marshal(),
	}

	tests := []struct {
		name string
		head interfaces.ReadOnlySignedBeaconBlock
@@ -847,15 +898,39 @@ func TestServer_getPayloadHeader(t *testing.T) {
			},
			returnedHeaderCapella: bidCapella.Header,
		},
		{
			name: "incorrect gas limit",
			mock: &builderTest.MockBuilderService{
				Bid: signedIncorrectGasLimitBid,
			},
			fetcher: &blockchainTest.ChainService{
				Block: func() interfaces.ReadOnlySignedBeaconBlock {
					wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
					require.NoError(t, err)
					wb.SetSlot(primitives.Slot(params.BeaconConfig().BellatrixForkEpoch) * params.BeaconConfig().SlotsPerEpoch)
					return wb
				}(),
			},
			err: "incorrect header gas limit 30000000 != 31000000",
		},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			vs := &Server{BeaconDB: dbTest.SetupDB(t), BlockBuilder: tc.mock, HeadFetcher: tc.fetcher, TimeFetcher: &blockchainTest.ChainService{
				Genesis: genesis,
			}}
			regCache := cache.NewRegistrationCache()
			regCache.UpdateIndexToRegisteredMap(context.Background(), map[primitives.ValidatorIndex]*ethpb.ValidatorRegistrationV1{
				0: {
					GasLimit:     gasLimit,
					FeeRecipient: make([]byte, 20),
					Pubkey:       make([]byte, 48),
				},
			})
			tc.mock.RegistrationCache = regCache
			hb, err := vs.HeadFetcher.HeadBlock(context.Background())
			require.NoError(t, err)
			bid, err := vs.getPayloadHeaderFromBuilder(context.Background(), hb.Block().Slot(), 0, 30000000)
			if tc.err != "" {
				require.ErrorContains(t, tc.err, err)
			} else {
@@ -971,3 +1046,87 @@ func TestEmptyTransactionsRoot(t *testing.T) {
	require.NoError(t, err)
	require.DeepEqual(t, r, emptyTransactionsRoot)
}
func Test_expectedGasLimit(t *testing.T) {
	type args struct {
		parentGasLimit uint64
		targetGasLimit uint64
	}
	tests := []struct {
		name string
		args args
		want uint64
	}{
		{
			name: "Increase within limit",
			args: args{
				parentGasLimit: 15000000,
				targetGasLimit: 15000100,
			},
			want: 15000100,
		},
		{
			name: "Increase exceeding limit",
			args: args{
				parentGasLimit: 15000000,
				targetGasLimit: 16000000,
			},
			want: 15014647, // maxGasLimitDiff = (15000000 / 1024) - 1 = 14647
		},
		{
			name: "Decrease within limit",
			args: args{
				parentGasLimit: 15000000,
				targetGasLimit: 14999990,
			},
			want: 14999990,
		},
		{
			name: "Decrease exceeding limit",
			args: args{
				parentGasLimit: 15000000,
				targetGasLimit: 14000000,
			},
			want: 14985353, // maxGasLimitDiff = (15000000 / 1024) - 1 = 14647
		},
		{
			name: "Target equals parent",
			args: args{
				parentGasLimit: 15000000,
				targetGasLimit: 15000000,
			},
			want: 15000000, // No change
		},
		{
			name: "Very small parent gas limit",
			args: args{
				parentGasLimit: 1025,
				targetGasLimit: 2000,
			},
			want: 1025 + ((1025 / 1024) - 1),
		},
		{
			name: "Target far below parent but limited",
			args: args{
				parentGasLimit: 20000000,
				targetGasLimit: 10000000,
			},
			want: 19980470, // maxGasLimitDiff = (20000000 / 1024) - 1 = 19530
		},
		{
			name: "Parent gas limit underflows",
			args: args{
				parentGasLimit: 1023,
				targetGasLimit: 30000000,
			},
			want: 1023,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := expectedGasLimit(tt.args.parentGasLimit, tt.args.targetGasLimit); got != tt.want {
				t.Errorf("expectedGasLimit() = %v, want %v", got, tt.want)
			}
		})
	}
}
@@ -239,7 +239,8 @@ func (vs *Server) getTerminalBlockHashIfExists(ctx context.Context, transitionTi

func (vs *Server) getBuilderPayloadAndBlobs(ctx context.Context,
	slot primitives.Slot,
	vIdx primitives.ValidatorIndex,
	parentGasLimit uint64) (builder.Bid, error) {
	ctx, span := trace.StartSpan(ctx, "ProposerServer.getBuilderPayloadAndBlobs")
	defer span.End()

@@ -255,7 +256,7 @@ func (vs *Server) getBuilderPayloadAndBlobs(ctx context.Context,
		return nil, nil
	}

	return vs.getPayloadHeaderFromBuilder(ctx, slot, vIdx, parentGasLimit)
}

var errActivationNotReached = errors.New("activation epoch not reached")
@@ -244,6 +244,7 @@ go_test(
	"//proto/prysm/v1alpha1:go_default_library",
	"//proto/prysm/v1alpha1/attestation:go_default_library",
	"//proto/prysm/v1alpha1/metadata:go_default_library",
	"//runtime/version:go_default_library",
	"//testing/assert:go_default_library",
	"//testing/require:go_default_library",
	"//testing/util:go_default_library",
@@ -9,6 +9,7 @@ import (
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -44,7 +45,7 @@ func newBlobSync(current primitives.Slot, vbs verifiedROBlocks, cfg *blobSyncCon
	return &blobSync{current: current, expected: expected, bbv: bbv, store: as}, nil
}

type blobVerifierMap map[[32]byte][]verification.BlobVerifier

type blobSync struct {
	store das.AvailabilityStore
@@ -106,7 +107,10 @@ type blobBatchVerifier struct {
}

func (bbv *blobBatchVerifier) newVerifier(rb blocks.ROBlob) verification.BlobVerifier {
	m, ok := bbv.verifiers[rb.BlockRoot()]
	if !ok {
		m = make([]verification.BlobVerifier, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot()))
	}
	m[rb.Index] = bbv.newBlobVerifier(rb, verification.BackfillBlobSidecarRequirements)
	bbv.verifiers[rb.BlockRoot()] = m
	return m[rb.Index]
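Switching the map value from a fixed-size array to a slice lets the per-block verifier list be sized at runtime from the fork-aware MaxBlobsPerBlock. One Go subtlety worth noting: map values are not addressable, so with either value type the element must be written back after mutation. A small standalone sketch of the same pattern (types simplified; the blob count of 6 is an assumed Deneb-era value):

package main

import "fmt"

func main() {
	// Simplified stand-in for blobVerifierMap: block root -> per-index slots.
	verifiers := map[string][]string{}

	// Lazily size the slice per key, mirroring MaxBlobsPerBlock(slot).
	maxBlobs := 6
	m, ok := verifiers["root"]
	if !ok {
		m = make([]string, maxBlobs)
	}
	m[2] = "verifier-2"
	verifiers["root"] = m // write-back is required: map values aren't addressable

	fmt.Println(len(verifiers["root"]), verifiers["root"][2]) // 6 verifier-2
}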
@@ -208,8 +208,8 @@ func (s *Service) importBatches(ctx context.Context) {
		}
		_, err := s.batchImporter(ctx, current, ib, s.store)
		if err != nil {
			score := s.p2p.Peers().Scorers().BadResponsesScorer().Increment(ib.blockPid)
			log.WithError(err).WithFields(ib.logFields()).WithField("newBlockPidBadResponsesScore", score).Debug("Backfill batch failed to import")
			s.batchSeq.update(ib.withState(batchErrRetryable))
			// If a batch fails, the subsequent batches are no longer considered importable.
			break
@@ -330,10 +330,6 @@ func (s *Service) initBatches() error {
	return nil
}

func (s *Service) downscore(b batch) {
	s.p2p.Peers().Scorers().BadResponsesScorer().Increment(b.blockPid)
}

func (*Service) Stop() error {
	return nil
}
@@ -62,7 +62,7 @@ func (s *Service) validateWithBatchVerifier(ctx context.Context, message string,
	// If verification fails, we fall back to individual verification
	// of each signature set.
	if resErr != nil {
		log.WithError(resErr).Debugf("Could not perform batch verification of %s", message)
		verified, err := set.Verify()
		if err != nil {
			verErr := errors.Wrapf(err, "Could not verify %s", message)
@@ -180,7 +180,7 @@ func (c *blobsTestCase) setup(t *testing.T) (*Service, []blocks.ROBlob, func())
	cleanup := func() {
		params.OverrideBeaconConfig(cfg)
	}
	maxBlobs := int(params.BeaconConfig().MaxBlobsPerBlock(0))
	chain, clock := defaultMockChain(t)
	if c.chain == nil {
		c.chain = chain
@@ -218,8 +218,8 @@ func (c *blobsTestCase) setup(t *testing.T) (*Service, []blocks.ROBlob, func())
		rateLimiter: newRateLimiter(client),
	}

	byRootRate := params.BeaconConfig().MaxRequestBlobSidecars * uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
	byRangeRate := params.BeaconConfig().MaxRequestBlobSidecars * uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
	s.setRateCollector(p2p.RPCBlobSidecarsByRootTopicV1, leakybucket.NewCollector(0.000001, int64(byRootRate), time.Second, false))
	s.setRateCollector(p2p.RPCBlobSidecarsByRangeTopicV1, leakybucket.NewCollector(0.000001, int64(byRangeRate), time.Second, false))

@@ -310,7 +310,7 @@ func TestTestcaseSetup_BlocksAndBlobs(t *testing.T) {
	req := blobRootRequestFromSidecars(sidecars)
	expect := c.filterExpectedByRoot(t, sidecars, req)
	defer cleanup()
	maxed := nblocks * params.BeaconConfig().MaxBlobsPerBlock(0)
	require.Equal(t, maxed, len(sidecars))
	require.Equal(t, maxed, len(expect))
	for _, sc := range sidecars {
@@ -1,6 +1,7 @@
package sync

import (
	"github.com/libp2p/go-libp2p/core/protocol"
	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
	"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -39,87 +40,128 @@ func (s *Service) forkWatcher() {
	}
}

// registerForUpcomingFork registers the appropriate gossip and RPC topics if there is a fork in the next epoch.
func (s *Service) registerForUpcomingFork(currentEpoch primitives.Epoch) error {
	// Get the genesis validators root.
	genesisValidatorsRoot := s.cfg.clock.GenesisValidatorsRoot()

	// Check if there is a fork in the next epoch.
	isForkNextEpoch, err := forks.IsForkNextEpoch(s.cfg.clock.GenesisTime(), genesisValidatorsRoot[:])
	if err != nil {
		return errors.Wrap(err, "could not retrieve next fork epoch")
	}

	// Exit early if there is no fork in the next epoch.
	if !isForkNextEpoch {
		return nil
	}

	beforeForkEpoch := currentEpoch
	forkEpoch := beforeForkEpoch + 1

	// Get the fork digest for the next epoch.
	afterForkDigest, err := forks.ForkDigestFromEpoch(forkEpoch, genesisValidatorsRoot[:])
	if err != nil {
		return errors.Wrap(err, "could not retrieve fork digest")
	}

	// Exit early if the topics for the next epoch are already registered.
	// This is likely to be the case for all slots of the epoch other than the first one.
	if s.subHandler.digestExists(afterForkDigest) {
		return nil
	}

	// Register the subscribers (gossipsub) for the next epoch.
	s.registerSubscribers(forkEpoch, afterForkDigest)

	// Get the handlers for the current and next fork.
	beforeForkHandlerByTopic, err := s.rpcHandlerByTopicFromEpoch(beforeForkEpoch)
	if err != nil {
		return errors.Wrap(err, "RPC handler by topic from before fork epoch")
	}

	forkHandlerByTopic, err := s.rpcHandlerByTopicFromEpoch(forkEpoch)
	if err != nil {
		return errors.Wrap(err, "RPC handler by topic from fork epoch")
	}

	// Compute the newly added topics.
	newRPCHandlerByTopic := addedRPCHandlerByTopic(beforeForkHandlerByTopic, forkHandlerByTopic)

	// Register the new RPC handlers.
	for topic, handler := range newRPCHandlerByTopic {
		s.registerRPC(topic, handler)
	}

	return nil
}

// deregisterFromPastFork deregisters the appropriate gossip and RPC topics if there was a fork in the previous epoch.
func (s *Service) deregisterFromPastFork(currentEpoch primitives.Epoch) error {
	// Extract the genesis validators root.
	genesisValidatorsRoot := s.cfg.clock.GenesisValidatorsRoot()

	// Get the fork.
	currentFork, err := forks.Fork(currentEpoch)
	if err != nil {
		return errors.Wrap(err, "could not get fork for current epoch")
	}

	// If we are still in our genesis fork version then exit early.
	if currentFork.Epoch == params.BeaconConfig().GenesisEpoch {
		return nil
	}

	// Get the epoch after the fork epoch.
	afterForkEpoch := currentFork.Epoch + 1

	// Start de-registering only if the current epoch is the epoch right after the fork epoch.
	if currentEpoch != afterForkEpoch {
		return nil
	}

	// Look at the previous fork's digest.
	beforeForkEpoch := currentFork.Epoch - 1

	beforeForkDigest, err := forks.ForkDigestFromEpoch(beforeForkEpoch, genesisValidatorsRoot[:])
	if err != nil {
		return errors.Wrap(err, "fork digest from epoch")
	}

	// Exit early if there are no topics with that particular digest.
	if !s.subHandler.digestExists(beforeForkDigest) {
		return nil
	}

	// Compute the RPC handlers that are no longer needed.
	beforeForkHandlerByTopic, err := s.rpcHandlerByTopicFromEpoch(beforeForkEpoch)
	if err != nil {
		return errors.Wrap(err, "RPC handler by topic from before fork epoch")
	}

	forkHandlerByTopic, err := s.rpcHandlerByTopicFromEpoch(currentFork.Epoch)
	if err != nil {
		return errors.Wrap(err, "RPC handler by topic from fork epoch")
	}

	topicsToRemove := removedRPCTopics(beforeForkHandlerByTopic, forkHandlerByTopic)
	for topic := range topicsToRemove {
		fullTopic := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
		s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullTopic))
	}

	// Run through all our current active topics and see
	// if there are any subscriptions to be removed.
	for _, t := range s.subHandler.allTopics() {
		retDigest, err := p2p.ExtractGossipDigest(t)
		if err != nil {
			log.WithError(err).Error("Could not retrieve digest")
			continue
		}
		if retDigest == beforeForkDigest {
			s.unSubscribeFromTopic(t)
		}
	}

	return nil
}
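The refactor leans on two set-difference helpers over topic-to-handler maps. Their implementations are not shown in this excerpt; the following is a plausible minimal sketch consistent with how they are called (the names come from the source, the bodies are assumptions):

// addedRPCHandlerByTopic returns the handlers present in next but not in previous.
func addedRPCHandlerByTopic(previous, next map[string]rpcHandler) map[string]rpcHandler {
	added := make(map[string]rpcHandler)
	for topic, handler := range next {
		if _, ok := previous[topic]; !ok {
			added[topic] = handler
		}
	}
	return added
}

// removedRPCTopics returns the topics present in previous but not in next.
func removedRPCTopics(previous, next map[string]rpcHandler) map[string]bool {
	removed := make(map[string]bool)
	for topic := range previous {
		if _, ok := next[topic]; !ok {
			removed[topic] = true
		}
	}
	return removed
}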
@@ -14,6 +14,7 @@ import (
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/network/forks"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
	"github.com/prysmaticlabs/prysm/v5/testing/assert"
)

@@ -230,7 +231,8 @@ func TestService_CheckForPreviousEpochFork(t *testing.T) {
				chainStarted: abool.New(),
				subHandler:   newSubTopicHandler(),
			}
			err := r.registerRPCHandlers()
			assert.NoError(t, err)
			return r
		},
		currEpoch: 10,
@@ -278,10 +280,21 @@ func TestService_CheckForPreviousEpochFork(t *testing.T) {
			prevGenesis := chainService.Genesis
			// To allow registration of v1 handlers
			chainService.Genesis = time.Now().Add(-1 * oneEpoch())
			err := r.registerRPCHandlers()
			assert.NoError(t, err)

			chainService.Genesis = prevGenesis
			previous, err := r.rpcHandlerByTopicFromFork(version.Phase0)
			assert.NoError(t, err)

			next, err := r.rpcHandlerByTopicFromFork(version.Altair)
			assert.NoError(t, err)

			handlerByTopic := addedRPCHandlerByTopic(previous, next)

			for topic, handler := range handlerByTopic {
				r.registerRPC(topic, handler)
			}

			genRoot := r.cfg.clock.GenesisValidatorsRoot()
			digest, err := forks.ForkDigestFromEpoch(0, genRoot[:])
@@ -85,7 +85,6 @@ go_test(
	"//beacon-chain/verification:go_default_library",
	"//cmd/beacon-chain/flags:go_default_library",
	"//config/features:go_default_library",
	"//config/fieldparams:go_default_library",
	"//config/params:go_default_library",
	"//consensus-types/blocks:go_default_library",
	"//consensus-types/interfaces:go_default_library",
@@ -20,7 +20,6 @@ import (
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
	beaconsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
	"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -1083,7 +1082,7 @@ func TestCommitmentCountList(t *testing.T) {
			name: "nil bss, sparse slots",
			cc: []commitmentCount{
				{slot: 11235, count: 1},
				{slot: 11240, count: params.BeaconConfig().MaxBlobsPerBlock(0)},
				{slot: 11250, count: 3},
			},
			expected: &blobRange{low: 11235, high: 11250},
@@ -1100,7 +1099,7 @@ func TestCommitmentCountList(t *testing.T) {
			},
			cc: []commitmentCount{
				{slot: 0, count: 3, root: bytesutil.ToBytes32([]byte("0"))},
				{slot: 5, count: params.BeaconConfig().MaxBlobsPerBlock(0), root: bytesutil.ToBytes32([]byte("1"))},
				{slot: 15, count: 3},
			},
			expected: &blobRange{low: 0, high: 15},
@@ -1118,7 +1117,7 @@ func TestCommitmentCountList(t *testing.T) {
			cc: []commitmentCount{
				{slot: 0, count: 2, root: bytesutil.ToBytes32([]byte("0"))},
				{slot: 5, count: 3},
				{slot: 15, count: params.BeaconConfig().MaxBlobsPerBlock(0), root: bytesutil.ToBytes32([]byte("2"))},
			},
			expected: &blobRange{low: 5, high: 5},
			request:  &ethpb.BlobSidecarsByRangeRequest{StartSlot: 5, Count: 1},
@@ -1136,7 +1135,7 @@ func TestCommitmentCountList(t *testing.T) {
				{slot: 0, count: 2, root: bytesutil.ToBytes32([]byte("0"))},
				{slot: 5, count: 3},
				{slot: 6, count: 3},
				{slot: 15, count: params.BeaconConfig().MaxBlobsPerBlock(0), root: bytesutil.ToBytes32([]byte("2"))},
			},
			expected: &blobRange{low: 5, high: 6},
			request:  &ethpb.BlobSidecarsByRangeRequest{StartSlot: 5, Count: 2},
@@ -1155,7 +1154,7 @@ func TestCommitmentCountList(t *testing.T) {
				{slot: 0, count: 2, root: bytesutil.ToBytes32([]byte("0"))},
				{slot: 5, count: 3, root: bytesutil.ToBytes32([]byte("1"))},
				{slot: 10, count: 3},
				{slot: 15, count: params.BeaconConfig().MaxBlobsPerBlock(0), root: bytesutil.ToBytes32([]byte("2"))},
			},
			expected: &blobRange{low: 5, high: 10},
			request:  &ethpb.BlobSidecarsByRangeRequest{StartSlot: 5, Count: 6},
@@ -337,8 +337,10 @@ func (q *blocksQueue) onDataReceivedEvent(ctx context.Context) eventHandlerFn {
		}
		if errors.Is(response.err, beaconsync.ErrInvalidFetchedData) {
			// Peer returned invalid data, penalize.
			score := q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(m.pid)
			log.
				WithFields(logrus.Fields{"pid": response.pid, "newBadResponsesScore": score}).
				Debug("Peer is penalized for invalid blocks")
		}
		return m.state, response.err
	}
@@ -292,7 +292,7 @@ func missingBlobRequest(blk blocks.ROBlock, store *filesystem.BlobStorage) (p2pt
	if len(cmts) == 0 {
		return nil, nil
	}
	onDisk, err := store.Indices(r, blk.Block().Slot())
	if err != nil {
		return nil, errors.Wrapf(err, "error checking existing blobs for checkpoint sync block root %#x", r)
	}
@@ -333,7 +333,7 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
	}
	shufflePeers(pids)
	for i := range pids {
		sidecars, err := sync.SendBlobSidecarByRoot(s.ctx, s.clock, s.cfg.P2P, pids[i], s.ctxMap, &req, rob.Block().Slot())
		if err != nil {
			continue
		}
@@ -112,9 +112,8 @@ func (l *limiter) validateRequest(stream network.Stream, amt uint64) error {
 		amt = 1
 	}
 	if amt > uint64(remaining) {
-		l.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
-		writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrRateLimited.Error(), stream, l.p2p)
-		return p2ptypes.ErrRateLimited
+		score := l.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
+		return errors.Wrapf(p2ptypes.ErrRateLimited, "new bad responses score: %d", score)
 	}
 	return nil
 }
@@ -135,9 +134,9 @@ func (l *limiter) validateRawRpcRequest(stream network.Stream) error {
 	// Treat each request as a minimum of 1.
 	amt := int64(1)
 	if amt > remaining {
-		l.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		score := l.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
 		writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrRateLimited.Error(), stream, l.p2p)
-		return p2ptypes.ErrRateLimited
+		return errors.Wrapf(p2ptypes.ErrRateLimited, "new bad responses score: %d", score)
 	}
 	return nil
 }
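Wrapping the sentinel with `errors.Wrapf` from github.com/pkg/errors keeps the score in the message while preserving sentinel checks, since pkg/errors wrappers support unwrapping with the standard library. A small self-contained sketch, using a local `ErrRateLimited` sentinel in place of `p2ptypes.ErrRateLimited`:

```go
package main

import (
	stderrors "errors"
	"fmt"

	"github.com/pkg/errors"
)

// ErrRateLimited stands in for p2ptypes.ErrRateLimited.
var ErrRateLimited = stderrors.New("rate limited")

func main() {
	score := 3
	err := errors.Wrapf(ErrRateLimited, "new bad responses score: %d", score)

	// Callers that match on the sentinel still work after wrapping.
	fmt.Println(stderrors.Is(err, ErrRateLimited)) // true
	fmt.Println(err)                               // new bad responses score: 3: rate limited
}
```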
@@ -9,117 +9,153 @@ import (

 	libp2pcore "github.com/libp2p/go-libp2p/core"
 	"github.com/libp2p/go-libp2p/core/network"
-	"github.com/libp2p/go-libp2p/core/protocol"
+	"github.com/libp2p/go-libp2p/core/peer"
 	"github.com/pkg/errors"
 	ssz "github.com/prysmaticlabs/fastssz"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
 	p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
 	"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
 	"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
+	"github.com/prysmaticlabs/prysm/v5/runtime/version"
 	"github.com/prysmaticlabs/prysm/v5/time/slots"
+	"github.com/sirupsen/logrus"
 )

-// Time to first byte timeout. The maximum time to wait for first byte of
-// request response (time-to-first-byte). The client is expected to give up if
-// they don't receive the first byte within 5 seconds.
-var ttfbTimeout = params.BeaconConfig().TtfbTimeoutDuration()
+var (
+	// Time to first byte timeout. The maximum time to wait for first byte of
+	// request response (time-to-first-byte). The client is expected to give up if
+	// they don't receive the first byte within 5 seconds.
+	ttfbTimeout = params.BeaconConfig().TtfbTimeoutDuration()

-// respTimeout is the maximum time for complete response transfer.
-var respTimeout = params.BeaconConfig().RespTimeoutDuration()
+	// respTimeout is the maximum time for complete response transfer.
+	respTimeout = params.BeaconConfig().RespTimeoutDuration()
+)

 // rpcHandler is responsible for handling and responding to any incoming message.
 // This method may return an error to internal monitoring, but the error will
 // not be relayed to the peer.
 type rpcHandler func(context.Context, interface{}, libp2pcore.Stream) error

-// registerRPCHandlers for p2p RPC.
-func (s *Service) registerRPCHandlers() {
-	currEpoch := slots.ToEpoch(s.cfg.clock.CurrentSlot())
-	// Register V2 handlers if we are past altair fork epoch.
-	if currEpoch >= params.BeaconConfig().AltairForkEpoch {
-		s.registerRPC(
-			p2p.RPCStatusTopicV1,
-			s.statusRPCHandler,
-		)
-		s.registerRPC(
-			p2p.RPCGoodByeTopicV1,
-			s.goodbyeRPCHandler,
-		)
-		s.registerRPC(
-			p2p.RPCPingTopicV1,
-			s.pingHandler,
-		)
-		s.registerRPCHandlersAltair()
+// rpcHandlerByTopicFromFork returns the RPC handlers for a given fork index.
+func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandler, error) {
+	switch forkIndex {
+	// Phase0: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#messages
+	case version.Phase0:
+		return map[string]rpcHandler{
+			p2p.RPCStatusTopicV1:        s.statusRPCHandler,
+			p2p.RPCGoodByeTopicV1:       s.goodbyeRPCHandler,
+			p2p.RPCBlocksByRangeTopicV1: s.beaconBlocksByRangeRPCHandler,
+			p2p.RPCBlocksByRootTopicV1:  s.beaconBlocksRootRPCHandler,
+			p2p.RPCPingTopicV1:          s.pingHandler,
+			p2p.RPCMetaDataTopicV1:      s.metaDataHandler,
+		}, nil

-		if currEpoch >= params.BeaconConfig().DenebForkEpoch {
-			s.registerRPCHandlersDeneb()
-		}
-		return
+	// Altair: https://github.com/ethereum/consensus-specs/tree/dev/specs/altair#messages
+	// Bellatrix: https://github.com/ethereum/consensus-specs/tree/dev/specs/bellatrix#messages
+	// Capella: https://github.com/ethereum/consensus-specs/tree/dev/specs/capella#messages
+	case version.Altair, version.Bellatrix, version.Capella:
+		return map[string]rpcHandler{
+			p2p.RPCStatusTopicV1:        s.statusRPCHandler,
+			p2p.RPCGoodByeTopicV1:       s.goodbyeRPCHandler,
+			p2p.RPCBlocksByRangeTopicV2: s.beaconBlocksByRangeRPCHandler, // Modified in Altair
+			p2p.RPCBlocksByRootTopicV2:  s.beaconBlocksRootRPCHandler,    // Modified in Altair
+			p2p.RPCPingTopicV1:          s.pingHandler,
+			p2p.RPCMetaDataTopicV2:      s.metaDataHandler, // Modified in Altair
+		}, nil

+	// Deneb: https://github.com/ethereum/consensus-specs/blob/dev/specs/deneb/p2p-interface.md#messages
+	// Electra: https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/p2p-interface.md#messages
+	case version.Deneb, version.Electra:
+		return map[string]rpcHandler{
+			p2p.RPCStatusTopicV1:              s.statusRPCHandler,
+			p2p.RPCGoodByeTopicV1:             s.goodbyeRPCHandler,
+			p2p.RPCBlocksByRangeTopicV2:       s.beaconBlocksByRangeRPCHandler,
+			p2p.RPCBlocksByRootTopicV2:        s.beaconBlocksRootRPCHandler,
+			p2p.RPCPingTopicV1:                s.pingHandler,
+			p2p.RPCMetaDataTopicV2:            s.metaDataHandler,
+			p2p.RPCBlobSidecarsByRootTopicV1:  s.blobSidecarByRootRPCHandler,   // Added in Deneb
+			p2p.RPCBlobSidecarsByRangeTopicV1: s.blobSidecarsByRangeRPCHandler, // Added in Deneb
+		}, nil

+	default:
+		return nil, errors.Errorf("RPC handler not found for fork index %d", forkIndex)
 	}
-	s.registerRPC(
-		p2p.RPCStatusTopicV1,
-		s.statusRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCGoodByeTopicV1,
-		s.goodbyeRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCBlocksByRangeTopicV1,
-		s.beaconBlocksByRangeRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCBlocksByRootTopicV1,
-		s.beaconBlocksRootRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCPingTopicV1,
-		s.pingHandler,
-	)
-	s.registerRPC(
-		p2p.RPCMetaDataTopicV1,
-		s.metaDataHandler,
-	)
 }

-// registerRPCHandlers for altair.
-func (s *Service) registerRPCHandlersAltair() {
-	s.registerRPC(
-		p2p.RPCBlocksByRangeTopicV2,
-		s.beaconBlocksByRangeRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCBlocksByRootTopicV2,
-		s.beaconBlocksRootRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCMetaDataTopicV2,
-		s.metaDataHandler,
-	)
+// rpcHandlerByTopicFromEpoch returns the RPC handlers for a given epoch.
+func (s *Service) rpcHandlerByTopicFromEpoch(epoch primitives.Epoch) (map[string]rpcHandler, error) {
+	// Get the beacon config.
+	beaconConfig := params.BeaconConfig()
+
+	if epoch >= beaconConfig.ElectraForkEpoch {
+		return s.rpcHandlerByTopicFromFork(version.Electra)
+	}
+
+	if epoch >= beaconConfig.DenebForkEpoch {
+		return s.rpcHandlerByTopicFromFork(version.Deneb)
+	}
+
+	if epoch >= beaconConfig.CapellaForkEpoch {
+		return s.rpcHandlerByTopicFromFork(version.Capella)
+	}
+
+	if epoch >= beaconConfig.BellatrixForkEpoch {
+		return s.rpcHandlerByTopicFromFork(version.Bellatrix)
+	}
+
+	if epoch >= beaconConfig.AltairForkEpoch {
+		return s.rpcHandlerByTopicFromFork(version.Altair)
+	}
+
+	return s.rpcHandlerByTopicFromFork(version.Phase0)
 }

-func (s *Service) registerRPCHandlersDeneb() {
-	s.registerRPC(
-		p2p.RPCBlobSidecarsByRangeTopicV1,
-		s.blobSidecarsByRangeRPCHandler,
-	)
-	s.registerRPC(
-		p2p.RPCBlobSidecarsByRootTopicV1,
-		s.blobSidecarByRootRPCHandler,
-	)
+// addedRPCHandlerByTopic returns the RPC handlers present in the next map that are not present in the previous map.
+func addedRPCHandlerByTopic(previous, next map[string]rpcHandler) map[string]rpcHandler {
+	added := make(map[string]rpcHandler)
+
+	for topic, handler := range next {
+		if _, ok := previous[topic]; !ok {
+			added[topic] = handler
+		}
+	}
+
+	return added
 }

-// Remove all v1 Stream handlers that are no longer supported
-// from altair onwards.
-func (s *Service) unregisterPhase0Handlers() {
-	fullBlockRangeTopic := p2p.RPCBlocksByRangeTopicV1 + s.cfg.p2p.Encoding().ProtocolSuffix()
-	fullBlockRootTopic := p2p.RPCBlocksByRootTopicV1 + s.cfg.p2p.Encoding().ProtocolSuffix()
-	fullMetadataTopic := p2p.RPCMetaDataTopicV1 + s.cfg.p2p.Encoding().ProtocolSuffix()
-
-	s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullBlockRangeTopic))
-	s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullBlockRootTopic))
-	s.cfg.p2p.Host().RemoveStreamHandler(protocol.ID(fullMetadataTopic))
+// removedRPCTopics returns the topics present in the previous map that are no longer present in the next map.
+func removedRPCTopics(previous, next map[string]rpcHandler) map[string]bool {
+	removed := make(map[string]bool)
+
+	for topic := range previous {
+		if _, ok := next[topic]; !ok {
+			removed[topic] = true
+		}
+	}
+
+	return removed
 }

+// registerRPCHandlers for p2p RPC.
+func (s *Service) registerRPCHandlers() error {
+	// Get the current epoch.
+	currentSlot := s.cfg.clock.CurrentSlot()
+	currentEpoch := slots.ToEpoch(currentSlot)
+
+	// Get the RPC handlers for the current epoch.
+	handlerByTopic, err := s.rpcHandlerByTopicFromEpoch(currentEpoch)
+	if err != nil {
+		return errors.Wrap(err, "rpc handler by topic from epoch")
+	}
+
+	// Register the RPC handlers for the current epoch.
+	for topic, handler := range handlerByTopic {
+		s.registerRPC(topic, handler)
+	}
+
+	return nil
+}

 // registerRPC for a given topic with an expected protobuf message type.
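The two map-diff helpers above make fork transitions incremental: compute the next fork's handler map, register only what is new, and tear down only what is gone. A hedged sketch of how they might be wired together; this builds on the `rpcHandler` type and helpers defined in the hunk above, while `register`/`unregister` are hypothetical stand-ins (the actual caller is not shown in this diff):

```go
// Sketch only: swap RPC handlers at a fork boundary using the map-diff
// helpers from this change. register/unregister are hypothetical stand-ins
// for s.registerRPC and stream-handler removal.
func swapHandlersAtFork(prev, next map[string]rpcHandler,
	register func(topic string, h rpcHandler),
	unregister func(topic string)) {
	// Register only topics introduced by the upcoming fork.
	for topic, handler := range addedRPCHandlerByTopic(prev, next) {
		register(topic, handler)
	}
	// Drop topics the upcoming fork no longer serves.
	for topic := range removedRPCTopics(prev, next) {
		unregister(topic)
	}
}
```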
@@ -218,9 +254,9 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
 				return
 			}
 			if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
-				logStreamErrors(err, topic)
 				tracing.AnnotateError(span, err)
-				s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+				score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
+				logStreamErrors(err, topic, remotePeer, score)
 				return
 			}
 			if err := handle(ctx, msg, stream); err != nil {
@@ -238,9 +274,9 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
 				return
 			}
 			if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
-				logStreamErrors(err, topic)
+				score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
+				logStreamErrors(err, topic, remotePeer, score)
 				tracing.AnnotateError(span, err)
-				s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
 				return
 			}
 			if err := handle(ctx, nTyp.Elem().Interface(), stream); err != nil {
@@ -254,13 +290,20 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
 	})
 }

-func logStreamErrors(err error, topic string) {
+func logStreamErrors(err error, topic string, remotePeer peer.ID, badResponsesScore int) {
+	log := log.WithFields(logrus.Fields{
+		"topic":                topic,
+		"peer":                 remotePeer.String(),
+		"newBadResponsesScore": badResponsesScore,
+	})
 	if isUnwantedError(err) {
 		log.WithError(err).Debug("Unwanted error")
 		return
 	}

 	if strings.Contains(topic, p2p.RPCGoodByeTopicV1) {
-		log.WithError(err).WithField("topic", topic).Trace("Could not decode goodbye stream message")
+		log.WithError(err).Debug("Could not decode goodbye stream message")
 		return
 	}
-	log.WithError(err).WithField("topic", topic).Debug("Could not decode stream message")
+	log.WithError(err).Debug("Could not decode stream message")
 }

@@ -43,13 +43,13 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
 	rp, err := validateRangeRequest(m, s.cfg.clock.CurrentSlot())
 	if err != nil {
 		s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
 		tracing.AnnotateError(span, err)
-		return err
+		return errors.Wrapf(err, "new bad responses score: %d", score)
 	}
 	available := s.validateRangeAvailability(rp)
 	if !available {
-		log.Debug("error in validating range availability")
+		log.Debug("Error in validating range availability")
 		s.writeErrorResponseToStream(responseCodeResourceUnavailable, p2ptypes.ErrResourceUnavailable.Error(), stream)
 		tracing.AnnotateError(span, err)
 		return nil

@@ -11,10 +11,10 @@ import (
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
+	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
 	eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
 	"github.com/prysmaticlabs/prysm/v5/runtime/version"
 	"github.com/prysmaticlabs/prysm/v5/time/slots"
@@ -94,9 +94,9 @@ func (s *Service) beaconBlocksRootRPCHandler(ctx context.Context, msg interface{

 	currentEpoch := slots.ToEpoch(s.cfg.clock.CurrentSlot())
 	if uint64(len(blockRoots)) > params.MaxRequestBlock(currentEpoch) {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
 		s.writeErrorResponseToStream(responseCodeInvalidRequest, "requested more than the max block limit", stream)
-		return errors.New("requested more than the max block limit")
+		return errors.Errorf("requested more than the max block limit - new bad responses score: %d", score)
 	}
 	s.rateLimiter.add(stream, int64(len(blockRoots)))

@@ -139,7 +139,7 @@ func (s *Service) sendAndSaveBlobSidecars(ctx context.Context, request types.Blo
 		return nil
 	}

-	sidecars, err := SendBlobSidecarByRoot(ctx, s.cfg.clock, s.cfg.p2p, peerID, s.ctxMap, &request)
+	sidecars, err := SendBlobSidecarByRoot(ctx, s.cfg.clock, s.cfg.p2p, peerID, s.ctxMap, &request, block.Block().Slot())
 	if err != nil {
 		return err
 	}
@@ -181,15 +181,15 @@ func (s *Service) pendingBlobsRequestForBlock(root [32]byte, b interfaces.ReadOn
 	if len(cc) == 0 {
 		return nil, nil
 	}
-	return s.constructPendingBlobsRequest(root, len(cc))
+	return s.constructPendingBlobsRequest(root, len(cc), b.Block().Slot())
 }

 // constructPendingBlobsRequest creates a request for BlobSidecars by root, considering blobs already in DB.
-func (s *Service) constructPendingBlobsRequest(root [32]byte, commitments int) (types.BlobSidecarsByRootReq, error) {
+func (s *Service) constructPendingBlobsRequest(root [32]byte, commitments int, slot primitives.Slot) (types.BlobSidecarsByRootReq, error) {
 	if commitments == 0 {
 		return nil, nil
 	}
-	stored, err := s.cfg.blobStorage.Indices(root)
+	stored, err := s.cfg.blobStorage.Indices(root, slot)
 	if err != nil {
 		return nil, err
 	}
@@ -200,7 +200,7 @@ func (s *Service) constructPendingBlobsRequest(root [32]byte, commitments int) (
 // requestsForMissingIndices constructs a slice of BlobIdentifiers that are missing from
 // local storage, based on a mapping that represents which indices are locally stored,
 // and the highest expected index.
-func requestsForMissingIndices(storedIndices [fieldparams.MaxBlobsPerBlock]bool, commitments int, root [32]byte) []*eth.BlobIdentifier {
+func requestsForMissingIndices(storedIndices []bool, commitments int, root [32]byte) []*eth.BlobIdentifier {
 	var ids []*eth.BlobIdentifier
 	for i := uint64(0); i < uint64(commitments); i++ {
 		if !storedIndices[i] {
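With the stored-index map now a plain `[]bool`, `requestsForMissingIndices` simply walks indices `0..commitments-1` and requests whatever is not on disk. A self-contained worked example mirroring that logic; the `blobIdentifier` type here is a simplified stand-in for `eth.BlobIdentifier`:

```go
package main

import "fmt"

// blobIdentifier is a simplified stand-in for eth.BlobIdentifier.
type blobIdentifier struct {
	Index     uint64
	BlockRoot [32]byte
}

// missingIndices mirrors requestsForMissingIndices above: request every
// commitment index that is not already stored locally.
func missingIndices(stored []bool, commitments int, root [32]byte) []blobIdentifier {
	var ids []blobIdentifier
	for i := uint64(0); i < uint64(commitments); i++ {
		if !stored[i] {
			ids = append(ids, blobIdentifier{Index: i, BlockRoot: root})
		}
	}
	return ids
}

func main() {
	stored := []bool{false, true, false} // index 1 already on disk
	fmt.Println(missingIndices(stored, 3, [32]byte{1})) // identifiers for indices 0 and 2
}
```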
@@ -424,7 +424,7 @@ func TestConstructPendingBlobsRequest(t *testing.T) {
 	// No unknown indices.
 	root := [32]byte{1}
 	count := 3
-	actual, err := s.constructPendingBlobsRequest(root, count)
+	actual, err := s.constructPendingBlobsRequest(root, count, 100)
 	require.NoError(t, err)
 	require.Equal(t, 3, len(actual))
 	for i, id := range actual {
@@ -454,14 +454,14 @@ func TestConstructPendingBlobsRequest(t *testing.T) {
 	expected := []*eth.BlobIdentifier{
 		{Index: 1, BlockRoot: root[:]},
 	}
-	actual, err = s.constructPendingBlobsRequest(root, count)
+	actual, err = s.constructPendingBlobsRequest(root, count, 100)
 	require.NoError(t, err)
 	require.Equal(t, expected[0].Index, actual[0].Index)
 	require.DeepEqual(t, expected[0].BlockRoot, actual[0].BlockRoot)
 }

 func TestFilterUnknownIndices(t *testing.T) {
-	haveIndices := [fieldparams.MaxBlobsPerBlock]bool{true, true, true, false, false, false}
+	haveIndices := []bool{true, true, true, false, false, false}

 	blockRoot := [32]byte{}
 	count := 5

@@ -10,7 +10,6 @@ import (
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
 	p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
 	"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
 	"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
@@ -28,7 +27,7 @@ func (s *Service) streamBlobBatch(ctx context.Context, batch blockBatch, wQuota
 	defer span.End()
 	for _, b := range batch.canonical() {
 		root := b.Root()
-		idxs, err := s.cfg.blobStorage.Indices(b.Root())
+		idxs, err := s.cfg.blobStorage.Indices(b.Root(), b.Block().Slot())
 		if err != nil {
 			s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
 			return wQuota, errors.Wrapf(err, "could not retrieve sidecars for block root %#x", root)
@@ -82,9 +81,9 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
 	rp, err := validateBlobsByRange(r, s.cfg.chain.CurrentSlot())
 	if err != nil {
 		s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
 		tracing.AnnotateError(span, err)
-		return err
+		return errors.Wrapf(err, "new bad responses score: %d", score)
 	}

 	// Ticker to stagger out large requests.
@@ -146,8 +145,9 @@ func BlobRPCMinValidSlot(current primitives.Slot) (primitives.Slot, error) {
 	return slots.EpochStart(minStart)
 }

-func blobBatchLimit() uint64 {
-	return uint64(flags.Get().BlockBatchLimit / fieldparams.MaxBlobsPerBlock)
+func blobBatchLimit(slot primitives.Slot) uint64 {
+	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
+	return uint64(flags.Get().BlockBatchLimit / maxBlobsPerBlock)
 }

 func validateBlobsByRange(r *pb.BlobSidecarsByRangeRequest, current primitives.Slot) (rangeParams, error) {
@@ -200,7 +200,7 @@ func validateBlobsByRange(r *pb.BlobSidecarsByRangeRequest, current primitives.S
 		rp.end = rp.start
 	}

-	limit := blobBatchLimit()
+	limit := blobBatchLimit(current)
 	if limit > maxRequest {
 		limit = maxRequest
 	}
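`blobBatchLimit` now derives the batch size from the per-fork blob count rather than a compile-time constant. The arithmetic, with illustrative numbers (the real values come from `flags.Get().BlockBatchLimit` and the fork config's `MaxBlobsPerBlock`):

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only, standing in for the flag and config values.
	blockBatchLimit := 64
	maxBlobsPerBlock := 6
	limit := uint64(blockBatchLimit / maxBlobsPerBlock)
	fmt.Println(limit) // 10 (integer division): blocks' worth of blobs per batch
}
```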
@@ -4,7 +4,6 @@ import (
 	"testing"

 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -25,7 +24,7 @@ func (c *blobsTestCase) defaultOldestSlotByRange(t *testing.T) types.Slot {
 }

 func blobRangeRequestFromSidecars(scs []blocks.ROBlob) interface{} {
-	maxBlobs := fieldparams.MaxBlobsPerBlock
+	maxBlobs := params.BeaconConfig().MaxBlobsPerBlock(scs[0].Slot())
 	count := uint64(len(scs) / maxBlobs)
 	return &ethpb.BlobSidecarsByRangeRequest{
 		StartSlot: scs[0].Slot(),
@@ -135,7 +134,7 @@ func TestBlobByRangeOK(t *testing.T) {
 					Count: 20,
 				}
 			},
-			total: func() *int { x := fieldparams.MaxBlobsPerBlock * 10; return &x }(), // 10 blocks * 4 blobs = 40
+			total: func() *int { x := params.BeaconConfig().MaxBlobsPerBlock(0) * 10; return &x }(), // 10 blocks * 4 blobs = 40
 		},
 		{
 			name: "when request count > MAX_REQUEST_BLOCKS_DENEB, MAX_REQUEST_BLOBS_SIDECARS sidecars in response",
@@ -233,7 +232,7 @@ func TestBlobsByRangeValidation(t *testing.T) {
 			},
 			start: defaultMinStart,
 			end:   defaultMinStart + 9,
-			batch: blobBatchLimit(),
+			batch: blobBatchLimit(100),
 		},
 		{
 			name: "count > MAX_REQUEST_BLOB_SIDECARS",
@@ -245,7 +244,7 @@ func TestBlobsByRangeValidation(t *testing.T) {
 			start: defaultMinStart,
 			end:   defaultMinStart - 10 + 999,
 			// a large count is ok, we just limit the amount of actual responses
-			batch: blobBatchLimit(),
+			batch: blobBatchLimit(100),
 		},
 		{
 			name: "start + count > current",
@@ -267,7 +266,7 @@ func TestBlobsByRangeValidation(t *testing.T) {
 			},
 			start: denebSlot,
 			end:   denebSlot + 89,
-			batch: blobBatchLimit(),
+			batch: blobBatchLimit(100),
 		},
 	}
 	for _, c := range cases {

@@ -35,9 +35,9 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface

 	blobIdents := *ref
 	if err := validateBlobByRootRequest(blobIdents); err != nil {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
 		s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
-		return err
+		return errors.Wrapf(err, "new bad responses score: %d", score)
 	}
 	// Sort the identifiers so that requests for the same blob root will be adjacent, minimizing db lookups.
 	sort.Sort(blobIdents)

@@ -8,7 +8,6 @@ import (
 	"github.com/libp2p/go-libp2p/core/network"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
 	p2pTypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -223,7 +222,7 @@ func TestBlobsByRootValidation(t *testing.T) {
 			name:    "block with all indices missing between 2 full blocks",
 			nblocks: 3,
 			missing: map[int]bool{1: true},
-			total:   func(i int) *int { return &i }(2 * fieldparams.MaxBlobsPerBlock),
+			total:   func(i int) *int { return &i }(2 * int(params.BeaconConfig().MaxBlobsPerBlock(0))),
 		},
 		{
 			name: "exceeds req max",

@@ -139,13 +139,13 @@ func (s *Service) sendMetaDataRequest(ctx context.Context, peerID peer.ID) (meta
 	// Read the METADATA response from the peer.
 	code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
 	if err != nil {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
-		return nil, errors.Wrap(err, "read status code")
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+		return nil, errors.Wrapf(err, "read status code for metadata request, new bad responses score: %d", score)
 	}

 	if code != 0 {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
-		return nil, errors.New(errMsg)
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+		return nil, errors.Errorf("%s, new bad responses score: %d", errMsg, score)
 	}

 	// Get the genesis validators root.
@@ -179,8 +179,8 @@ func (s *Service) sendMetaDataRequest(ctx context.Context, peerID peer.ID) (meta

 	// Decode the metadata from the peer.
 	if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
-		return nil, err
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		return nil, errors.Wrapf(err, "decode metadata, new bad responses score: %d", score)
 	}

 	return msg, nil

@@ -43,7 +43,8 @@ func (s *Service) pingHandler(_ context.Context, msg interface{}, stream libp2pc
 	if err != nil {
 		// Descore peer for giving us a bad sequence number.
 		if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
-			s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+			score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+			err = errors.Wrapf(err, "new bad responses score: %d", score)
 			s.writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrInvalidSequenceNum.Error(), stream)
 		}

@@ -141,8 +142,8 @@ func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {

 	// If the peer responded with an error, increment the bad responses scorer.
 	if code != 0 {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
-		return errors.Errorf("code: %d - %s", code, errMsg)
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+		return errors.Errorf("code: %d, new bad responses score: %d - %s", code, score, errMsg)
 	}

 	// Decode the sequence number from the peer.
@@ -156,7 +157,8 @@ func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {
 	if err != nil {
 		// Descore peer for giving us a bad sequence number.
 		if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
-			s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+			score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
+			err = errors.Wrapf(err, "new bad responses score: %d", score)
 		}

 		return errors.Wrap(err, "validate sequence number")

@@ -12,7 +12,6 @@ import (
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
 	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
 	p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
-	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
 	"github.com/prysmaticlabs/prysm/v5/config/params"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
 	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -170,9 +169,10 @@ func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle,
 	}
 	defer closeStream(stream, log)

+	maxBlobsPerBlock := uint64(params.BeaconConfig().MaxBlobsPerBlock(req.StartSlot))
 	max := params.BeaconConfig().MaxRequestBlobSidecars
-	if max > req.Count*fieldparams.MaxBlobsPerBlock {
-		max = req.Count * fieldparams.MaxBlobsPerBlock
+	if max > req.Count*maxBlobsPerBlock {
+		max = req.Count * maxBlobsPerBlock
 	}
 	vfuncs := []BlobResponseValidation{blobValidatorFromRangeReq(req), newSequentialBlobValidator()}
 	if len(bvs) > 0 {
@@ -183,7 +183,7 @@ func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle,

 func SendBlobSidecarByRoot(
 	ctx context.Context, tor blockchain.TemporalOracle, p2pApi p2p.P2P, pid peer.ID,
-	ctxMap ContextByteVersions, req *p2ptypes.BlobSidecarsByRootReq,
+	ctxMap ContextByteVersions, req *p2ptypes.BlobSidecarsByRootReq, slot primitives.Slot,
 ) ([]blocks.ROBlob, error) {
 	if uint64(len(*req)) > params.BeaconConfig().MaxRequestBlobSidecars {
 		return nil, errors.Wrapf(p2ptypes.ErrMaxBlobReqExceeded, "length=%d", len(*req))
@@ -201,8 +201,9 @@ func SendBlobSidecarByRoot(
 	defer closeStream(stream, log)

 	max := params.BeaconConfig().MaxRequestBlobSidecars
-	if max > uint64(len(*req))*fieldparams.MaxBlobsPerBlock {
-		max = uint64(len(*req)) * fieldparams.MaxBlobsPerBlock
+	maxBlobCount := params.BeaconConfig().MaxBlobsPerBlock(slot)
+	if max > uint64(len(*req)*maxBlobCount) {
+		max = uint64(len(*req) * maxBlobCount)
 	}
 	return readChunkEncodedBlobs(stream, p2pApi.Encoding(), ctxMap, blobValidatorFromRootReq(req), max)
 }
@@ -227,7 +228,8 @@ type seqBlobValid struct {
 }

 func (sbv *seqBlobValid) nextValid(blob blocks.ROBlob) error {
-	if blob.Index >= fieldparams.MaxBlobsPerBlock {
+	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(blob.Slot())
+	if blob.Index >= uint64(maxBlobsPerBlock) {
 		return errBlobIndexOutOfBounds
 	}
 	if sbv.prev == nil {
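The response ceiling for a by-root request is the smaller of the protocol-wide `MaxRequestBlobSidecars` and `len(req) * MaxBlobsPerBlock(slot)`. The same clamp, worked through with illustrative stand-in values:

```go
package main

import "fmt"

func main() {
	// Illustrative numbers only, standing in for
	// params.BeaconConfig().MaxRequestBlobSidecars and MaxBlobsPerBlock(slot).
	max := uint64(768)
	requested, perBlock := uint64(5), uint64(6)
	if max > requested*perBlock {
		max = requested * perBlock
	}
	fmt.Println(max) // 30: never read more sidecars than the request can yield
}
```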
@@ -619,7 +619,7 @@ func TestSeqBlobValid(t *testing.T) {
 	wrongRoot, err := blocks.NewROBlobWithRoot(oops[2].BlobSidecar, bytesutil.ToBytes32([]byte("parentderp")))
 	require.NoError(t, err)
 	oob := oops[3]
-	oob.Index = fieldparams.MaxBlobsPerBlock
+	oob.Index = uint64(params.BeaconConfig().MaxBlobsPerBlock(0))

 	cases := []struct {
 		name string

@@ -62,8 +62,12 @@ func (s *Service) maintainPeerStatuses() {
 			}
 			if prysmTime.Now().After(lastUpdated.Add(interval)) {
 				if err := s.reValidatePeer(s.ctx, id); err != nil {
-					log.WithField("peer", id).WithError(err).Debug("Could not revalidate peer")
-					s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(id)
+					score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(id)
+					log.
+						WithFields(logrus.Fields{
+							"peer":                 id,
+							"newBadResponsesScore": score,
+						}).WithError(err).Debug("Could not revalidate peer")
 				}
 			}
 		}(pid)
@@ -161,18 +165,18 @@ func (s *Service) sendRPCStatusRequest(ctx context.Context, id peer.ID) error {

 	code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
 	if err != nil {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
-		return err
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		return errors.Wrapf(err, "read status code for status request, new bad responses score: %d", score)
 	}

 	if code != 0 {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(id)
-		return errors.New(errMsg)
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(id)
+		return errors.Errorf(errMsg+" new bad responses score: %d", score)
 	}
 	msg := &pb.Status{}
 	if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
-		return err
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
+		return errors.Wrapf(err, "decode with max length, new bad responses score: %d", score)
 	}

 	// If validation fails, validation error is logged, and peer status scorer will mark peer as bad.
@@ -187,7 +191,7 @@ func (s *Service) sendRPCStatusRequest(ctx context.Context, id peer.ID) error {
 func (s *Service) reValidatePeer(ctx context.Context, id peer.ID) error {
 	s.cfg.p2p.Peers().Scorers().PeerStatusScorer().SetHeadSlot(s.cfg.chain.HeadSlot())
 	if err := s.sendRPCStatusRequest(ctx, id); err != nil {
-		return err
+		return errors.Wrap(err, "revalidate peer")
 	}
 	// Do not return an error for ping requests.
 	if err := s.sendPingRequest(ctx, id); err != nil && !isUnwantedError(err) {
@@ -237,7 +241,11 @@ func (s *Service) statusRPCHandler(ctx context.Context, msg interface{}, stream
 		return nil
 	default:
 		respCode = responseCodeInvalidRequest
-		s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
+		score := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
+		log.WithError(err).WithFields(logrus.Fields{
+			"peer":                 remotePeer,
+			"newBadResponsesScore": score,
+		}).Debug("Could not validate status message")
 	}

 	originalErr := err

@@ -303,14 +303,21 @@ func (s *Service) waitForChainStart() {

 	ctxMap, err := ContextByteVersionsForValRoot(clock.GenesisValidatorsRoot())
 	if err != nil {
-		log.WithError(err).WithField("genesisValidatorRoot", clock.GenesisValidatorsRoot()).
+		log.
+			WithError(err).
+			WithField("genesisValidatorRoot", clock.GenesisValidatorsRoot()).
 			Error("sync service failed to initialize context version map")
 		return
 	}
 	s.ctxMap = ctxMap

 	// Register respective rpc handlers at state initialized event.
-	s.registerRPCHandlers()
+	err = s.registerRPCHandlers()
+	if err != nil {
+		log.WithError(err).Error("Could not register rpc handlers")
+		return
+	}

 	// Wait for chainstart in separate routine.
 	if startTime.After(prysmTime.Now()) {
 		time.Sleep(prysmTime.Until(startTime))

@@ -53,6 +53,30 @@ func (s *Service) noopValidator(_ context.Context, _ peer.ID, msg *pubsub.Messag
 	return pubsub.ValidationAccept, nil
 }

+func sliceFromCount(count uint64) []uint64 {
+	result := make([]uint64, 0, count)
+
+	for item := range count {
+		result = append(result, item)
+	}
+
+	return result
+}
+
+func (s *Service) activeSyncSubnetIndices(currentSlot primitives.Slot) []uint64 {
+	if flags.Get().SubscribeToAllSubnets {
+		return sliceFromCount(params.BeaconConfig().SyncCommitteeSubnetCount)
+	}
+
+	// Get the current epoch.
+	currentEpoch := slots.ToEpoch(currentSlot)
+
+	// Retrieve the subnets we want to subscribe to.
+	subs := cache.SyncSubnetIDs.GetAllSubnets(currentEpoch)
+
+	return slice.SetUint64(subs)
+}
+
 // Register PubSub subscribers
 func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
 	s.subscribe(
@@ -85,49 +109,34 @@ func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
 		s.attesterSlashingSubscriber,
 		digest,
 	)
-	if flags.Get().SubscribeToAllSubnets {
-		s.subscribeStaticWithSubnets(
-			p2p.AttestationSubnetTopicFormat,
-			s.validateCommitteeIndexBeaconAttestation,   /* validator */
-			s.committeeIndexBeaconAttestationSubscriber, /* message handler */
-			digest,
-			params.BeaconConfig().AttestationSubnetCount,
-		)
-	} else {
-		s.subscribeDynamicWithSubnets(
-			p2p.AttestationSubnetTopicFormat,
-			s.validateCommitteeIndexBeaconAttestation,   /* validator */
-			s.committeeIndexBeaconAttestationSubscriber, /* message handler */
-			digest,
-		)
-	}
+	s.subscribeWithParameters(
+		p2p.AttestationSubnetTopicFormat,
+		s.validateCommitteeIndexBeaconAttestation,
+		s.committeeIndexBeaconAttestationSubscriber,
+		digest,
+		s.persistentAndAggregatorSubnetIndices,
+		s.attesterSubnetIndices,
+	)
 	// Altair Fork Version
-	if epoch >= params.BeaconConfig().AltairForkEpoch {
+	if params.BeaconConfig().AltairForkEpoch <= epoch {
 		s.subscribe(
 			p2p.SyncContributionAndProofSubnetTopicFormat,
 			s.validateSyncContributionAndProof,
 			s.syncContributionAndProofSubscriber,
 			digest,
 		)
-		if flags.Get().SubscribeToAllSubnets {
-			s.subscribeStaticWithSyncSubnets(
-				p2p.SyncCommitteeSubnetTopicFormat,
-				s.validateSyncCommitteeMessage,   /* validator */
-				s.syncCommitteeMessageSubscriber, /* message handler */
-				digest,
-			)
-		} else {
-			s.subscribeDynamicWithSyncSubnets(
-				p2p.SyncCommitteeSubnetTopicFormat,
-				s.validateSyncCommitteeMessage,   /* validator */
-				s.syncCommitteeMessageSubscriber, /* message handler */
-				digest,
-			)
-		}
+		s.subscribeWithParameters(
+			p2p.SyncCommitteeSubnetTopicFormat,
+			s.validateSyncCommitteeMessage,
+			s.syncCommitteeMessageSubscriber,
+			digest,
+			s.activeSyncSubnetIndices,
+			func(currentSlot primitives.Slot) []uint64 { return []uint64{} },
+		)
 	}

 	// New Gossip Topic in Capella
-	if epoch >= params.BeaconConfig().CapellaForkEpoch {
+	if params.BeaconConfig().CapellaForkEpoch <= epoch {
 		s.subscribe(
 			p2p.BlsToExecutionChangeSubnetTopicFormat,
 			s.validateBlsToExecutionChange,
@@ -137,13 +146,14 @@ func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
 	}

 	// New Gossip Topic in Deneb
-	if epoch >= params.BeaconConfig().DenebForkEpoch {
-		s.subscribeStaticWithSubnets(
+	if params.BeaconConfig().DenebForkEpoch <= epoch {
+		s.subscribeWithParameters(
 			p2p.BlobSubnetTopicFormat,
-			s.validateBlob,   /* validator */
-			s.blobSubscriber, /* message handler */
+			s.validateBlob,
+			s.blobSubscriber,
 			digest,
-			params.BeaconConfig().BlobsidecarSubnetCount,
+			func(primitives.Slot) []uint64 { return sliceFromCount(params.BeaconConfig().BlobsidecarSubnetCount) },
+			func(currentSlot primitives.Slot) []uint64 { return []uint64{} },
 		)
 	}
 }
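`sliceFromCount` relies on Go 1.22's range-over-integer to enumerate `0..count-1`; the two provider functions passed to `subscribeWithParameters` then return either every subnet index or a computed subset. A quick self-contained check of the helper's behavior:

```go
package main

import "fmt"

// sliceFromCount as introduced above: enumerate 0..count-1
// (requires Go 1.22+ for range over an integer).
func sliceFromCount(count uint64) []uint64 {
	result := make([]uint64, 0, count)
	for item := range count {
		result = append(result, item)
	}
	return result
}

func main() {
	fmt.Println(sliceFromCount(4)) // [0 1 2 3]
}
```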
@@ -324,132 +334,6 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
 	}
 }

-// subscribe to a static subnet with the given topic and index. A given validator and subscription handler is
-// used to handle messages from the subnet. The base protobuf message is used to initialize new messages for decoding.
-func (s *Service) subscribeStaticWithSubnets(topic string, validator wrappedVal, handle subHandler, digest [4]byte, subnetCount uint64) {
-	genRoot := s.cfg.clock.GenesisValidatorsRoot()
-	_, e, err := forks.RetrieveForkDataFromDigest(digest, genRoot[:])
-	if err != nil {
-		// Impossible condition as it would mean digest does not exist.
-		panic(err)
-	}
-	base := p2p.GossipTopicMappings(topic, e)
-	if base == nil {
-		// Impossible condition as it would mean topic does not exist.
-		panic(fmt.Sprintf("%s is not mapped to any message in GossipTopicMappings", topic))
-	}
-	for i := uint64(0); i < subnetCount; i++ {
-		s.subscribeWithBase(s.addDigestAndIndexToTopic(topic, digest, i), validator, handle)
-	}
-	genesis := s.cfg.clock.GenesisTime()
-	ticker := slots.NewSlotTicker(genesis, params.BeaconConfig().SecondsPerSlot)
-
-	go func() {
-		for {
-			select {
-			case <-s.ctx.Done():
-				ticker.Done()
-				return
-			case <-ticker.C():
-				if s.chainStarted.IsSet() && s.cfg.initialSync.Syncing() {
-					continue
-				}
-				valid, err := isDigestValid(digest, genesis, genRoot)
-				if err != nil {
-					log.Error(err)
-					continue
-				}
-				if !valid {
-					log.Warnf("Attestation subnets with digest %#x are no longer valid, unsubscribing from all of them.", digest)
-					// Unsubscribes from all our current subnets.
-					for i := uint64(0); i < subnetCount; i++ {
-						fullTopic := fmt.Sprintf(topic, digest, i) + s.cfg.p2p.Encoding().ProtocolSuffix()
-						s.unSubscribeFromTopic(fullTopic)
-					}
-					ticker.Done()
-					return
-				}
-				// Check every slot that there are enough peers
-				for i := uint64(0); i < subnetCount; i++ {
-					if !s.enoughPeersAreConnected(s.addDigestAndIndexToTopic(topic, digest, i)) {
-						_, err := s.cfg.p2p.FindPeersWithSubnet(
-							s.ctx,
-							s.addDigestAndIndexToTopic(topic, digest, i),
-							i,
-							flags.Get().MinimumPeersPerSubnet,
-						)
-						if err != nil {
-							log.WithError(err).Debug("Could not search for peers")
-							return
-						}
-					}
-				}
-			}
-		}
-	}()
-}

-// subscribe to a dynamically changing list of subnets. This method expects a fmt compatible
-// string for the topic name and the list of subnets for subscribed topics that should be
-// maintained.
-func (s *Service) subscribeDynamicWithSubnets(
-	topicFormat string,
-	validate wrappedVal,
-	handle subHandler,
-	digest [4]byte,
-) {
-	genRoot := s.cfg.clock.GenesisValidatorsRoot()
-	_, e, err := forks.RetrieveForkDataFromDigest(digest, genRoot[:])
-	if err != nil {
-		// Impossible condition as it would mean digest does not exist.
-		panic(err)
-	}
-	base := p2p.GossipTopicMappings(topicFormat, e)
-	if base == nil {
-		panic(fmt.Sprintf("%s is not mapped to any message in GossipTopicMappings", topicFormat))
-	}
-	subscriptions := make(map[uint64]*pubsub.Subscription, params.BeaconConfig().MaxCommitteesPerSlot)
-	genesis := s.cfg.clock.GenesisTime()
-	ticker := slots.NewSlotTicker(genesis, params.BeaconConfig().SecondsPerSlot)
-
-	go func() {
-		for {
-			select {
-			case <-s.ctx.Done():
-				ticker.Done()
-				return
-			case currentSlot := <-ticker.C():
-				if s.chainStarted.IsSet() && s.cfg.initialSync.Syncing() {
-					continue
-				}
-				valid, err := isDigestValid(digest, genesis, genRoot)
-				if err != nil {
-					log.Error(err)
-					continue
-				}
-				if !valid {
-					log.Warnf("Attestation subnets with digest %#x are no longer valid, unsubscribing from all of them.", digest)
-					// Unsubscribes from all our current subnets.
-					s.reValidateSubscriptions(subscriptions, []uint64{}, topicFormat, digest)
-					ticker.Done()
-					return
-				}
-				wantedSubs := s.retrievePersistentSubs(currentSlot)
-				s.reValidateSubscriptions(subscriptions, wantedSubs, topicFormat, digest)
-
-				for _, idx := range wantedSubs {
-					s.subscribeAggregatorSubnet(subscriptions, idx, digest, validate, handle)
-				}
-				// find desired subs for attesters
-				attesterSubs := s.attesterSubnetIndices(currentSlot)
-				for _, idx := range attesterSubs {
-					s.lookupAttesterSubnets(digest, idx)
-				}
-			}
-		}
-	}()
-}

 // reValidateSubscriptions unsubscribe from topics we are currently subscribed to but that are
 // not in the list of wanted subnets.
 // TODO: Rename this function as it does not only revalidate subscriptions.
@@ -477,96 +361,44 @@ func (s *Service) reValidateSubscriptions(
 	}
 }

-// subscribe missing subnets for our aggregators.
-func (s *Service) subscribeAggregatorSubnet(
-	subscriptions map[uint64]*pubsub.Subscription,
-	idx uint64,
+// searchForPeers searches for peers in the given subnets.
+func (s *Service) searchForPeers(
+	ctx context.Context,
+	topicFormat string,
 	digest [4]byte,
-	validate wrappedVal,
-	handle subHandler,
+	currentSlot primitives.Slot,
+	getSubnetsToSubscribe func(currentSlot primitives.Slot) []uint64,
+	getSubnetsToFindPeersOnly func(currentSlot primitives.Slot) []uint64,
 ) {
-	// do not subscribe if we have no peers in the same
-	// subnet
-	topic := p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.Attestation{})]
-	subnetTopic := fmt.Sprintf(topic, digest, idx)
-	// check if subscription exists and if not subscribe the relevant subnet.
-	if _, exists := subscriptions[idx]; !exists {
-		subscriptions[idx] = s.subscribeWithBase(subnetTopic, validate, handle)
-	}
-	if !s.enoughPeersAreConnected(subnetTopic) {
-		_, err := s.cfg.p2p.FindPeersWithSubnet(s.ctx, subnetTopic, idx, flags.Get().MinimumPeersPerSubnet)
+	// Retrieve the subnets we want to subscribe to.
+	subnetsToSubscribeIndex := getSubnetsToSubscribe(currentSlot)
+
+	// Retrieve the subnets we want to find peers for.
+	subnetsToFindPeersOnlyIndex := getSubnetsToFindPeersOnly(currentSlot)
+
+	// Combine the subnets to subscribe and the subnets to find peers for.
+	subnetsToFindPeersIndex := slice.SetUint64(append(subnetsToSubscribeIndex, subnetsToFindPeersOnlyIndex...))
+
+	// Find new peers for wanted subnets if needed.
+	for _, subnetIndex := range subnetsToFindPeersIndex {
+		topic := fmt.Sprintf(topicFormat, digest, subnetIndex)
+
+		// Check if we have enough peers in the subnet. Skip if we do.
+		if s.enoughPeersAreConnected(topic) {
+			continue
+		}
+
+		// Not enough peers in the subnet, we need to search for more.
+		_, err := s.cfg.p2p.FindPeersWithSubnet(ctx, topic, subnetIndex, flags.Get().MinimumPeersPerSubnet)
 		if err != nil {
 			log.WithError(err).Debug("Could not search for peers")
 		}
 	}
 }

-// subscribe to a static subnet with the given topic and index. A given validator and subscription handler is
-// used to handle messages from the subnet. The base protobuf message is used to initialize new messages for decoding.
-func (s *Service) subscribeStaticWithSyncSubnets(topic string, validator wrappedVal, handle subHandler, digest [4]byte) {
-	genRoot := s.cfg.clock.GenesisValidatorsRoot()
-	_, e, err := forks.RetrieveForkDataFromDigest(digest, genRoot[:])
-	if err != nil {
-		panic(err)
-	}
-	base := p2p.GossipTopicMappings(topic, e)
-	if base == nil {
-		panic(fmt.Sprintf("%s is not mapped to any message in GossipTopicMappings", topic))
-	}
-	for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
-		s.subscribeWithBase(s.addDigestAndIndexToTopic(topic, digest, i), validator, handle)
-	}
-	genesis := s.cfg.clock.GenesisTime()
-	ticker := slots.NewSlotTicker(genesis, params.BeaconConfig().SecondsPerSlot)
-
-	go func() {
-		for {
-			select {
-			case <-s.ctx.Done():
-				ticker.Done()
-				return
-			case <-ticker.C():
-				if s.chainStarted.IsSet() && s.cfg.initialSync.Syncing() {
-					continue
-				}
-				valid, err := isDigestValid(digest, genesis, genRoot)
-				if err != nil {
-					log.Error(err)
-					continue
-				}
-				if !valid {
-					log.Warnf("Sync subnets with digest %#x are no longer valid, unsubscribing from all of them.", digest)
-					// Unsubscribes from all our current subnets.
-					for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
-						fullTopic := fmt.Sprintf(topic, digest, i) + s.cfg.p2p.Encoding().ProtocolSuffix()
-						s.unSubscribeFromTopic(fullTopic)
-					}
-					ticker.Done()
-					return
-				}
-				// Check every slot that there are enough peers
-				for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
-					if !s.enoughPeersAreConnected(s.addDigestAndIndexToTopic(topic, digest, i)) {
-						_, err := s.cfg.p2p.FindPeersWithSubnet(
-							s.ctx,
-							s.addDigestAndIndexToTopic(topic, digest, i),
-							i,
-							flags.Get().MinimumPeersPerSubnet,
-						)
-						if err != nil {
-							log.WithError(err).Debug("Could not search for peers")
-							return
-						}
-					}
-				}
-			}
-		}
-	}()
-}

-// subscribeToSyncSubnets subscribes to needed sync subnets, unsubscribe from unneeded ones and search for more peers if needed.
+// subscribeToSubnets subscribes to needed subnets, unsubscribes from unneeded ones and searches for more peers if needed.
 // Returns `true` if the digest is valid (wrt. the current epoch), `false` otherwise.
-func (s *Service) subscribeToSyncSubnets(
+func (s *Service) subscribeToSubnets(
 	topicFormat string,
 	digest [4]byte,
 	genesisValidatorsRoot [fieldparams.RootLength]byte,
@@ -575,16 +407,15 @@ func (s *Service) subscribeToSyncSubnets(
 	currentSlot primitives.Slot,
 	validate wrappedVal,
 	handle subHandler,
+	getSubnetsToSubscribe func(currentSlot primitives.Slot) []uint64,
+	getSubnetsToFindPeersOnly func(currentSlot primitives.Slot) []uint64,
 ) bool {
-	// Get sync subnets topic.
-	topic := p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.SyncCommitteeMessage{})]
-
 	// Do not subscribe if not synced.
 	if s.chainStarted.IsSet() && s.cfg.initialSync.Syncing() {
 		return true
 	}

-	// Do not subscribe is the digest is not valid.
+	// Check the validity of the digest.
 	valid, err := isDigestValid(digest, genesisTime, genesisValidatorsRoot)
 	if err != nil {
 		log.Error(err)
@@ -593,23 +424,25 @@ func (s *Service) subscribeToSyncSubnets(

 	// Unsubscribe from all subnets if the digest is not valid. It's likely to be the case after a hard fork.
 	if !valid {
-		log.WithField("digest", fmt.Sprintf("%#x", digest)).Warn("Sync subnets with this digest are no longer valid, unsubscribing from all of them.")
+		description := topicFormat
+		if pos := strings.LastIndex(topicFormat, "/"); pos != -1 {
+			description = topicFormat[pos+1:]
+		}
+
+		log.WithField("digest", fmt.Sprintf("%#x", digest)).Warningf("%s subnets with this digest are no longer valid, unsubscribing from all of them.", description)
 		s.reValidateSubscriptions(subscriptions, []uint64{}, topicFormat, digest)
 		return false
 	}

-	// Get the current epoch.
-	currentEpoch := slots.ToEpoch(currentSlot)
-
 	// Retrieve the subnets we want to subscribe to.
-	wantedSubnetsIndex := s.retrieveActiveSyncSubnets(currentEpoch)
+	subnetsToSubscribeIndex := getSubnetsToSubscribe(currentSlot)

 	// Remove subscriptions that are no longer wanted.
-	s.reValidateSubscriptions(subscriptions, wantedSubnetsIndex, topicFormat, digest)
+	s.reValidateSubscriptions(subscriptions, subnetsToSubscribeIndex, topicFormat, digest)

 	// Subscribe to wanted subnets.
-	for _, subnetIndex := range wantedSubnetsIndex {
-		subnetTopic := fmt.Sprintf(topic, digest, subnetIndex)
+	for _, subnetIndex := range subnetsToSubscribeIndex {
+		subnetTopic := fmt.Sprintf(topicFormat, digest, subnetIndex)

 		// Check if subscription exists.
 		if _, exists := subscriptions[subnetIndex]; exists {
@@ -620,38 +453,20 @@ func (s *Service) subscribeToSyncSubnets(
 		subscription := s.subscribeWithBase(subnetTopic, validate, handle)
 		subscriptions[subnetIndex] = subscription
 	}

-	// Find new peers for wanted subnets if needed.
-	for _, subnetIndex := range wantedSubnetsIndex {
-		subnetTopic := fmt.Sprintf(topic, digest, subnetIndex)
-
-		// Check if we have enough peers in the subnet. Skip if we do.
-		if s.enoughPeersAreConnected(subnetTopic) {
-			continue
-		}
-
-		// Not enough peers in the subnet, we need to search for more.
-		_, err := s.cfg.p2p.FindPeersWithSubnet(s.ctx, subnetTopic, subnetIndex, flags.Get().MinimumPeersPerSubnet)
-		if err != nil {
-			log.WithError(err).Debug("Could not search for peers")
-		}
-	}
-
 	return true
 }

-// subscribeDynamicWithSyncSubnets subscribes to a dynamically changing list of subnets.
-func (s *Service) subscribeDynamicWithSyncSubnets(
+// subscribeWithParameters subscribes to a list of subnets.
+func (s *Service) subscribeWithParameters(
 	topicFormat string,
 	validate wrappedVal,
 	handle subHandler,
 	digest [4]byte,
+	getSubnetsToSubscribe func(currentSlot primitives.Slot) []uint64,
+	getSubnetsToFindPeersOnly func(currentSlot primitives.Slot) []uint64,
 ) {
-	// Retrieve the number of committee subnets we need to subscribe to.
-	syncCommiteeSubnetsCount := params.BeaconConfig().SyncCommitteeSubnetCount
-
 	// Initialize the subscriptions map.
-	subscriptions := make(map[uint64]*pubsub.Subscription, syncCommiteeSubnetsCount)
+	subscriptions := make(map[uint64]*pubsub.Subscription)

 	// Retrieve the genesis validators root.
 	genesisValidatorsRoot := s.cfg.clock.GenesisValidatorsRoot()
@@ -678,14 +493,20 @@ func (s *Service) subscribeDynamicWithSyncSubnets(
 	// Retrieve the current slot.
 	currentSlot := s.cfg.clock.CurrentSlot()

+	// Subscribe to subnets.
+	s.subscribeToSubnets(topicFormat, digest, genesisValidatorsRoot, genesisTime, subscriptions, currentSlot, validate, handle, getSubnetsToSubscribe, getSubnetsToFindPeersOnly)
+
+	// Derive a new context and cancel function.
+	ctx, cancel := context.WithCancel(s.ctx)
+
 	go func() {
-		// Subscribe to the sync subnets.
-		s.subscribeToSyncSubnets(topicFormat, digest, genesisValidatorsRoot, genesisTime, subscriptions, currentSlot, validate, handle)
+		// Search for peers.
+		s.searchForPeers(ctx, topicFormat, digest, currentSlot, getSubnetsToSubscribe, getSubnetsToFindPeersOnly)

 		for {
 			select {
 			case currentSlot := <-ticker.C():
-				isDigestValid := s.subscribeToSyncSubnets(topicFormat, digest, genesisValidatorsRoot, genesisTime, subscriptions, currentSlot, validate, handle)
+				isDigestValid := s.subscribeToSubnets(topicFormat, digest, genesisValidatorsRoot, genesisTime, subscriptions, currentSlot, validate, handle, getSubnetsToSubscribe, getSubnetsToFindPeersOnly)

 				// Stop the ticker if the digest is not valid. Likely to happen after a hard fork.
 				if !isDigestValid {
@@ -693,7 +514,11 @@ func (s *Service) subscribeDynamicWithSyncSubnets(
 					return
 				}

+				// Search for peers.
+				s.searchForPeers(ctx, topicFormat, digest, currentSlot, getSubnetsToSubscribe, getSubnetsToFindPeersOnly)
+
 			case <-s.ctx.Done():
+				cancel()
 				ticker.Done()
 				return
 			}
@@ -701,21 +526,8 @@ func (s *Service) subscribeDynamicWithSyncSubnets(
 	}()
 }

-// lookup peers for attester specific subnets.
-func (s *Service) lookupAttesterSubnets(digest [4]byte, idx uint64) {
-	topic := p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.Attestation{})]
-	subnetTopic := fmt.Sprintf(topic, digest, idx)
-	if !s.enoughPeersAreConnected(subnetTopic) {
-		// perform a search for peers with the desired committee index.
-		_, err := s.cfg.p2p.FindPeersWithSubnet(s.ctx, subnetTopic, idx, flags.Get().MinimumPeersPerSubnet)
-		if err != nil {
-			log.WithError(err).Debug("Could not search for peers")
-		}
-	}
-}
-
 func (s *Service) unSubscribeFromTopic(topic string) {
-	log.WithField("topic", topic).Debug("Unsubscribing from topic")
+	log.WithField("topic", topic).Info("Unsubscribed from")
 	if err := s.cfg.p2p.PubSub().UnregisterTopicValidator(topic); err != nil {
 		log.WithError(err).Error("Could not unregister topic validator")
 	}
@@ -740,19 +552,16 @@ func (s *Service) enoughPeersAreConnected(subnetTopic string) bool {
|
||||
return peersWithSubnetCount >= threshold
|
||||
}
|
||||
|
||||
func (s *Service) retrievePersistentSubs(currSlot primitives.Slot) []uint64 {
|
||||
// Persistent subscriptions from validators
|
||||
persistentSubs := s.persistentSubnetIndices()
|
||||
// Update desired topic indices for aggregator
|
||||
wantedSubs := s.aggregatorSubnetIndices(currSlot)
|
||||
func (s *Service) persistentAndAggregatorSubnetIndices(currentSlot primitives.Slot) []uint64 {
|
||||
if flags.Get().SubscribeToAllSubnets {
|
||||
return sliceFromCount(params.BeaconConfig().AttestationSubnetCount)
|
||||
}
|
||||
|
||||
// Combine subscriptions to get all requested subscriptions
|
||||
return slice.SetUint64(append(persistentSubs, wantedSubs...))
|
||||
}
|
||||
persistentSubnetIndices := s.persistentSubnetIndices()
|
||||
aggregatorSubnetIndices := s.aggregatorSubnetIndices(currentSlot)
|
||||
|
||||
func (*Service) retrieveActiveSyncSubnets(currEpoch primitives.Epoch) []uint64 {
|
||||
subs := cache.SyncSubnetIDs.GetAllSubnets(currEpoch)
|
||||
return slice.SetUint64(subs)
|
||||
// Combine subscriptions to get all requested subscriptions.
|
||||
return slice.SetUint64(append(persistentSubnetIndices, aggregatorSubnetIndices...))
|
||||
}
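
The renamed helper spells out the union semantics: persistent (validator-registered) indices and per-slot aggregator indices are concatenated, then deduplicated through slice.SetUint64. A runnable toy of that dedup step (setUint64 re-implements the deduplication just for this example; the subnet values are made up):

    package main

    import "fmt"

    // setUint64 mirrors the deduplication performed by Prysm's slice.SetUint64:
    // it returns the input with duplicate values removed.
    func setUint64(in []uint64) []uint64 {
        seen := make(map[uint64]bool, len(in))
        out := make([]uint64, 0, len(in))
        for _, v := range in {
            if !seen[v] {
                seen[v] = true
                out = append(out, v)
            }
        }
        return out
    }

    func main() {
        persistent := []uint64{3, 17}     // hypothetical long-lived subnets
        aggregator := []uint64{17, 42, 5} // hypothetical aggregator duties
        fmt.Println(setUint64(append(persistent, aggregator...))) // [3 17 42 5]
    }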

// filters out required peers for the node to function, not
@@ -768,7 +577,7 @@ func (s *Service) filterNeededPeers(pids []peer.ID) []peer.ID {
        return pids
    }
    currSlot := s.cfg.clock.CurrentSlot()
    wantedSubs := s.retrievePersistentSubs(currSlot)
    wantedSubs := s.persistentAndAggregatorSubnetIndices(currSlot)
    wantedSubs = slice.SetUint64(append(wantedSubs, s.attesterSubnetIndices(currSlot)...))
    topic := p2p.GossipTypeMapping[reflect.TypeOf(&ethpb.Attestation{})]

@@ -81,7 +81,7 @@ func (s *Service) reconstructAndBroadcastBlobs(ctx context.Context, block interf
    if s.cfg.blobStorage == nil {
        return
    }
    indices, err := s.cfg.blobStorage.Indices(blockRoot)
    indices, err := s.cfg.blobStorage.Indices(blockRoot, block.Block().Slot())
    if err != nil {
        log.WithError(err).Error("Failed to retrieve indices for block")
        return
@@ -93,7 +93,7 @@ func (s *Service) reconstructAndBroadcastBlobs(ctx context.Context, block interf
    }

    // Reconstruct blob sidecars from the EL
    blobSidecars, err := s.cfg.executionReconstructor.ReconstructBlobSidecars(ctx, block, blockRoot, indices[:])
    blobSidecars, err := s.cfg.executionReconstructor.ReconstructBlobSidecars(ctx, block, blockRoot, indices)
    if err != nil {
        log.WithError(err).Error("Failed to reconstruct blob sidecars")
        return
@@ -103,7 +103,7 @@ func (s *Service) reconstructAndBroadcastBlobs(ctx context.Context, block interf
    }

    // Refresh indices as new blobs may have been added to the db
    indices, err = s.cfg.blobStorage.Indices(blockRoot)
    indices, err = s.cfg.blobStorage.Indices(blockRoot, block.Block().Slot())
    if err != nil {
        log.WithError(err).Error("Failed to retrieve indices for block")
        return

@@ -312,37 +312,6 @@ func TestRevalidateSubscription_CorrectlyFormatsTopic(t *testing.T) {
    require.LogsDoNotContain(t, hook, "Could not unregister topic validator")
}

func TestStaticSubnets(t *testing.T) {
    p := p2ptest.NewTestP2P(t)
    ctx, cancel := context.WithCancel(context.Background())
    chain := &mockChain.ChainService{
        Genesis:        time.Now(),
        ValidatorsRoot: [32]byte{'A'},
    }
    r := Service{
        ctx: ctx,
        cfg: &config{
            chain: chain,
            clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
            p2p:   p,
        },
        chainStarted: abool.New(),
        subHandler:   newSubTopicHandler(),
    }
    defaultTopic := "/eth2/%x/beacon_attestation_%d"
    d, err := r.currentForkDigest()
    assert.NoError(t, err)
    r.subscribeStaticWithSubnets(defaultTopic, r.noopValidator, func(_ context.Context, msg proto.Message) error {
        // no-op
        return nil
    }, d, params.BeaconConfig().AttestationSubnetCount)
    topics := r.cfg.p2p.PubSub().GetTopics()
    if uint64(len(topics)) != params.BeaconConfig().AttestationSubnetCount {
        t.Errorf("Wanted the number of subnet topics registered to be %d but got %d", params.BeaconConfig().AttestationSubnetCount, len(topics))
    }
    cancel()
}

func Test_wrapAndReportValidation(t *testing.T) {
    mChain := &mockChain.ChainService{
        Genesis: time.Now(),
@@ -539,37 +508,6 @@ func TestFilterSubnetPeers(t *testing.T) {
    assert.Equal(t, 1, len(recPeers), "expected at least 1 suitable peer to prune")
}

func TestSubscribeWithSyncSubnets_StaticOK(t *testing.T) {
    params.SetupTestConfigCleanup(t)
    cfg := params.MainnetTestConfig().Copy()
    cfg.SecondsPerSlot = 1
    params.OverrideBeaconConfig(cfg)

    p := p2ptest.NewTestP2P(t)
    ctx, cancel := context.WithCancel(context.Background())
    chain := &mockChain.ChainService{
        Genesis:        time.Now(),
        ValidatorsRoot: [32]byte{'A'},
    }
    r := Service{
        ctx: ctx,
        cfg: &config{
            chain: chain,
            clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
            p2p:   p,
        },
        chainStarted: abool.New(),
        subHandler:   newSubTopicHandler(),
    }
    // Empty cache at the end of the test.
    defer cache.SyncSubnetIDs.EmptyAllCaches()
    digest, err := r.currentForkDigest()
    assert.NoError(t, err)
    r.subscribeStaticWithSyncSubnets(p2p.SyncCommitteeSubnetTopicFormat, nil, nil, digest)
    assert.Equal(t, int(params.BeaconConfig().SyncCommitteeSubnetCount), len(r.cfg.p2p.PubSub().GetTopics()))
    cancel()
}

func TestSubscribeWithSyncSubnets_DynamicOK(t *testing.T) {
    params.SetupTestConfigCleanup(t)
    cfg := params.MainnetConfig().Copy()
@@ -600,7 +538,7 @@ func TestSubscribeWithSyncSubnets_DynamicOK(t *testing.T) {
    cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte("pubkey"), currEpoch, []uint64{0, 1}, 10*time.Second)
    digest, err := r.currentForkDigest()
    assert.NoError(t, err)
    r.subscribeDynamicWithSyncSubnets(p2p.SyncCommitteeSubnetTopicFormat, nil, nil, digest)
    r.subscribeWithParameters(p2p.SyncCommitteeSubnetTopicFormat, nil, nil, digest, r.activeSyncSubnetIndices, func(currentSlot primitives.Slot) []uint64 { return []uint64{} })
    time.Sleep(2 * time.Second)
    assert.Equal(t, 2, len(r.cfg.p2p.PubSub().GetTopics()))
    topicMap := map[string]bool{}
@@ -615,46 +553,6 @@ func TestSubscribeWithSyncSubnets_DynamicOK(t *testing.T) {
    cancel()
}

func TestSubscribeWithSyncSubnets_StaticSwitchFork(t *testing.T) {
    p := p2ptest.NewTestP2P(t)
    params.SetupTestConfigCleanup(t)
    cfg := params.BeaconConfig()
    cfg.AltairForkEpoch = 1
    cfg.SecondsPerSlot = 1
    params.OverrideBeaconConfig(cfg)
    params.BeaconConfig().InitializeForkSchedule()
    ctx, cancel := context.WithCancel(context.Background())
    currSlot := primitives.Slot(100)
    chain := &mockChain.ChainService{
        Genesis:        time.Now().Add(-time.Duration(uint64(params.BeaconConfig().SlotsPerEpoch)*params.BeaconConfig().SecondsPerSlot) * time.Second),
        ValidatorsRoot: [32]byte{'A'},
        Slot:           &currSlot,
    }
    r := Service{
        ctx: ctx,
        cfg: &config{
            chain: chain,
            clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
            p2p:   p,
        },
        chainStarted: abool.New(),
        subHandler:   newSubTopicHandler(),
    }
    // Empty cache at the end of the test.
    defer cache.SyncSubnetIDs.EmptyAllCaches()
    genRoot := r.cfg.clock.GenesisValidatorsRoot()
    digest, err := signing.ComputeForkDigest(params.BeaconConfig().GenesisForkVersion, genRoot[:])
    assert.NoError(t, err)
    r.subscribeStaticWithSyncSubnets(p2p.SyncCommitteeSubnetTopicFormat, nil, nil, digest)
    assert.Equal(t, int(params.BeaconConfig().SyncCommitteeSubnetCount), len(r.cfg.p2p.PubSub().GetTopics()))

    // Expect that all old topics will be unsubscribed.
    time.Sleep(2 * time.Second)
    assert.Equal(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))

    cancel()
}

func TestSubscribeWithSyncSubnets_DynamicSwitchFork(t *testing.T) {
    params.SetupTestConfigCleanup(t)
    p := p2ptest.NewTestP2P(t)
@@ -689,7 +587,7 @@ func TestSubscribeWithSyncSubnets_DynamicSwitchFork(t *testing.T) {
    digest, err := signing.ComputeForkDigest(params.BeaconConfig().GenesisForkVersion, genRoot[:])
    assert.NoError(t, err)

    r.subscribeDynamicWithSyncSubnets(p2p.SyncCommitteeSubnetTopicFormat, nil, nil, digest)
    r.subscribeWithParameters(p2p.SyncCommitteeSubnetTopicFormat, nil, nil, digest, r.activeSyncSubnetIndices, func(currentSlot primitives.Slot) []uint64 { return []uint64{} })
    time.Sleep(2 * time.Second)
    assert.Equal(t, 2, len(r.cfg.p2p.PubSub().GetTopics()))
    topicMap := map[string]bool{}

@@ -16,7 +16,6 @@ import (
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
    "github.com/prysmaticlabs/prysm/v5/config/features"
    fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -303,8 +302,10 @@ func validateDenebBeaconBlock(blk interfaces.ReadOnlyBeaconBlock) error {
    }
    // [REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer
    // -- i.e. validate that len(body.signed_beacon_block.message.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK
    if len(commits) > fieldparams.MaxBlobsPerBlock {
        return errors.Wrapf(errRejectCommitmentLen, "%d > %d", len(commits), fieldparams.MaxBlobsPerBlock)

    maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(blk.Slot())
    if len(commits) > maxBlobsPerBlock {
        return errors.Wrapf(errRejectCommitmentLen, "%d > %d", len(commits), maxBlobsPerBlock)
    }
    return nil
}
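
The hunk above swaps the compile-time fieldparams.MaxBlobsPerBlock constant for a per-slot config lookup, so the commitment-length limit can differ across hard forks. A self-contained sketch of that rule under stated assumptions (the constants below are illustrative stand-ins mirroring mainnet's Deneb/Electra limits of 6 and 9; the real check lives in validateDenebBeaconBlock above):

    package main

    import "fmt"

    // Illustrative stand-ins for params.BeaconConfig() values; slotsPerEpoch
    // is 32 on mainnet.
    const (
        slotsPerEpoch    = 32
        electraForkEpoch = 10
        maxBlobsDeneb    = 6
        maxBlobsElectra  = 9
    )

    func maxBlobsPerBlock(slot uint64) int {
        if slot/slotsPerEpoch >= electraForkEpoch {
            return maxBlobsElectra
        }
        return maxBlobsDeneb
    }

    // validateCommitmentLen mirrors the [REJECT] rule: a block is invalid if
    // it carries more KZG commitments than the limit active at its slot.
    func validateCommitmentLen(numCommitments int, slot uint64) error {
        if limit := maxBlobsPerBlock(slot); numCommitments > limit {
            return fmt.Errorf("too many KZG commitments: %d > %d", numCommitments, limit)
        }
        return nil
    }

    func main() {
        fmt.Println(validateCommitmentLen(8, 319)) // pre-Electra slot: error (8 > 6)
        fmt.Println(validateCommitmentLen(8, 320)) // post-Electra slot: <nil> (8 <= 9)
    }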

@@ -6,7 +6,7 @@ go_library(
    importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify",
    visibility = ["//visibility:public"],
    deps = [
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//encoding/bytesutil:go_default_library",
        "//runtime/version:go_default_library",

@@ -2,7 +2,7 @@ package verify

import (
    "github.com/pkg/errors"
    fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
    "github.com/prysmaticlabs/prysm/v5/runtime/version"
@@ -20,8 +20,9 @@ func BlobAlignsWithBlock(blob blocks.ROBlob, block blocks.ROBlock) error {
    if block.Version() < version.Deneb {
        return nil
    }
    if blob.Index >= fieldparams.MaxBlobsPerBlock {
        return errors.Wrapf(ErrIncorrectBlobIndex, "index %d exceeds MAX_BLOBS_PER_BLOCK %d", blob.Index, fieldparams.MaxBlobsPerBlock)
    maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(blob.Slot())
    if blob.Index >= uint64(maxBlobsPerBlock) {
        return errors.Wrapf(ErrIncorrectBlobIndex, "index %d exceeds MAX_BLOBS_PER_BLOCK %d", blob.Index, maxBlobsPerBlock)
    }

    if blob.BlockRoot() != block.Root() {

@@ -26,7 +26,6 @@ go_library(
        "//beacon-chain/startup:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//cache/lru:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",
@@ -60,7 +59,6 @@ go_test(
        "//beacon-chain/forkchoice/types:go_default_library",
        "//beacon-chain/startup:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",

@@ -7,7 +7,6 @@ import (
    "github.com/pkg/errors"
    forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
    fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -129,7 +128,8 @@ func (bv *ROBlobVerifier) recordResult(req Requirement, err *error) {
// [REJECT] The sidecar's index is consistent with MAX_BLOBS_PER_BLOCK -- i.e. blob_sidecar.index < MAX_BLOBS_PER_BLOCK.
func (bv *ROBlobVerifier) BlobIndexInBounds() (err error) {
    defer bv.recordResult(RequireBlobIndexInBounds, &err)
    if bv.blob.Index >= fieldparams.MaxBlobsPerBlock {
    maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(bv.blob.Slot())
    if bv.blob.Index >= uint64(maxBlobsPerBlock) {
        log.WithFields(logging.BlobFields(bv.blob)).Debug("Sidecar index >= MAX_BLOBS_PER_BLOCK")
        return blobErrBuilder(ErrBlobIndexInvalid)
    }

@@ -12,7 +12,6 @@ import (
    forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
    fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -32,7 +31,7 @@ func TestBlobIndexInBounds(t *testing.T) {
    require.Equal(t, true, v.results.executed(RequireBlobIndexInBounds))
    require.NoError(t, v.results.result(RequireBlobIndexInBounds))

    b.Index = fieldparams.MaxBlobsPerBlock
    b.Index = uint64(params.BeaconConfig().MaxBlobsPerBlock(0))
    v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
    require.ErrorIs(t, v.BlobIndexInBounds(), ErrBlobIndexInvalid)
    require.Equal(t, true, v.results.executed(RequireBlobIndexInBounds))

@@ -26,7 +26,6 @@ const (
    SyncCommitteeAggregationBytesLength   = 16     // SyncCommitteeAggregationBytesLength defines the length of sync committee aggregate bytes.
    SyncAggregateSyncCommitteeBytesLength = 64     // SyncAggregateSyncCommitteeBytesLength defines the length of sync committee bytes in a sync aggregate.
    MaxWithdrawalsPerPayload              = 16     // MaxWithdrawalsPerPayloadLength defines the maximum number of withdrawals that can be included in a payload.
    MaxBlobsPerBlock                      = 6      // MaxBlobsPerBlock defines the maximum number of blobs with respect to consensus rule can be included in a block.
    MaxBlobCommitmentsPerBlock            = 4096   // MaxBlobCommitmentsPerBlock defines the theoretical limit of blobs can be included in a block.
    LogMaxBlobCommitments                 = 12     // Log_2 of MaxBlobCommitmentsPerBlock
    BlobLength                            = 131072 // BlobLength defines the byte length of a blob.

@@ -26,7 +26,6 @@ const (
    SyncCommitteeAggregationBytesLength   = 1      // SyncCommitteeAggregationBytesLength defines the sync committee aggregate bytes.
    SyncAggregateSyncCommitteeBytesLength = 4      // SyncAggregateSyncCommitteeBytesLength defines the length of sync committee bytes in a sync aggregate.
    MaxWithdrawalsPerPayload              = 4      // MaxWithdrawalsPerPayloadLength defines the maximum number of withdrawals that can be included in a payload.
    MaxBlobsPerBlock                      = 6      // MaxBlobsPerBlock defines the maximum number of blobs with respect to consensus rule can be included in a block.
    MaxBlobCommitmentsPerBlock            = 16     // MaxBlobCommitmentsPerBlock defines the theoretical limit of blobs can be included in a block.
    LogMaxBlobCommitments                 = 4      // Log_2 of MaxBlobCommitmentsPerBlock
    BlobLength                            = 131072 // BlobLength defines the byte length of a blob.

@@ -280,6 +280,19 @@ type BeaconChainConfig struct {
    AttestationSubnetPrefixBits uint64 `yaml:"ATTESTATION_SUBNET_PREFIX_BITS" spec:"true"` // AttestationSubnetPrefixBits is defined as (ceillog2(ATTESTATION_SUBNET_COUNT) + ATTESTATION_SUBNET_EXTRA_BITS).
    SubnetsPerNode              uint64 `yaml:"SUBNETS_PER_NODE" spec:"true"`               // SubnetsPerNode is the number of long-lived subnets a beacon node should be subscribed to.
    NodeIdBits                  uint64 `yaml:"NODE_ID_BITS" spec:"true"`                   // NodeIdBits defines the bit length of a node id.

    // Blobs Values

    // DeprecatedMaxBlobsPerBlock defines the max blobs that could exist in a block.
    // Deprecated: This field is no longer supported. Avoid using it.
    DeprecatedMaxBlobsPerBlock int `yaml:"MAX_BLOBS_PER_BLOCK" spec:"true"`
    // DeprecatedMaxBlobsPerBlockElectra defines the max blobs that could exist in a block post Electra hard fork.
    // Deprecated: This field is no longer supported. Avoid using it.
    DeprecatedMaxBlobsPerBlockElectra int `yaml:"MAX_BLOBS_PER_BLOCK_ELECTRA" spec:"true"`

    // DeprecatedTargetBlobsPerBlockElectra defines the target number of blobs per block post Electra hard fork.
    // Deprecated: This field is no longer supported. Avoid using it.
    DeprecatedTargetBlobsPerBlockElectra int `yaml:"TARGET_BLOBS_PER_BLOCK_ELECTRA" spec:"true"`
}

// InitializeForkSchedule initializes the schedules forks baked into the config.
@@ -357,6 +370,24 @@ func (b *BeaconChainConfig) MaximumGossipClockDisparityDuration() time.Duration
    return time.Duration(b.MaximumGossipClockDisparity) * time.Millisecond
}

// TargetBlobsPerBlock returns the target number of blobs per block for the given slot,
// accounting for changes introduced by the Electra fork.
func (b *BeaconChainConfig) TargetBlobsPerBlock(slot primitives.Slot) int {
    if primitives.Epoch(slot.DivSlot(32)) >= b.ElectraForkEpoch {
        return b.DeprecatedTargetBlobsPerBlockElectra
    }
    return b.DeprecatedMaxBlobsPerBlock / 2
}

// MaxBlobsPerBlock returns the maximum number of blobs per block for the given slot,
// adjusting for the Electra fork.
func (b *BeaconChainConfig) MaxBlobsPerBlock(slot primitives.Slot) int {
    if primitives.Epoch(slot.DivSlot(32)) >= b.ElectraForkEpoch {
        return b.DeprecatedMaxBlobsPerBlockElectra
    }
    return b.DeprecatedMaxBlobsPerBlock
}
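
TargetBlobsPerBlock keeps the historical convention that the pre-Electra target is half the maximum, while post-Electra the target comes straight from config. Note that both helpers convert slot to epoch with slot.DivSlot(32), which assumes mainnet's 32 slots per epoch. A runnable sketch of the target calculation (the constants are illustrative stand-ins for the deprecated config fields above):

    package main

    import "fmt"

    const (
        slotsPerEpoch                = 32
        electraForkEpoch             = 10
        deprecatedMaxBlobsPerBlock   = 6 // MAX_BLOBS_PER_BLOCK
        deprecatedTargetBlobsElectra = 6 // TARGET_BLOBS_PER_BLOCK_ELECTRA
    )

    // targetBlobsPerBlock mirrors the method above: before Electra the
    // target is derived as half the max; after Electra it is explicit.
    func targetBlobsPerBlock(slot uint64) int {
        if slot/slotsPerEpoch >= electraForkEpoch {
            return deprecatedTargetBlobsElectra
        }
        return deprecatedMaxBlobsPerBlock / 2
    }

    func main() {
        fmt.Println(targetBlobsPerBlock(319)) // 3 (Deneb: 6/2)
        fmt.Println(targetBlobsPerBlock(320)) // 6 (Electra)
    }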

// DenebEnabled centralizes the check to determine if code paths
// that are specific to deneb should be allowed to execute. This will make it easier to find call sites that do this
// kind of check and remove them post-deneb.

@@ -2,6 +2,7 @@ package params_test

import (
    "bytes"
    "math"
    "sync"
    "testing"

@@ -105,3 +106,19 @@ func TestConfigGenesisValidatorRoot(t *testing.T) {
        t.Fatal("mainnet params genesis validator root does not match the mainnet genesis state value")
    }
}

func Test_MaxBlobCount(t *testing.T) {
    cfg := params.MainnetConfig()
    cfg.ElectraForkEpoch = 10
    require.Equal(t, cfg.MaxBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch-1), 6)
    require.Equal(t, cfg.MaxBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch), 9)
    cfg.ElectraForkEpoch = math.MaxUint64
}

func Test_TargetBlobCount(t *testing.T) {
    cfg := params.MainnetConfig()
    cfg.ElectraForkEpoch = 10
    require.Equal(t, cfg.TargetBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch-1), 3)
    require.Equal(t, cfg.TargetBlobsPerBlock(primitives.Slot(cfg.ElectraForkEpoch)*cfg.SlotsPerEpoch), 6)
    cfg.ElectraForkEpoch = math.MaxUint64
}

@@ -35,7 +35,6 @@ var placeholderFields = []string{
    "EIP7732_FORK_VERSION",
    "FIELD_ELEMENTS_PER_BLOB",              // Compile time constant.
    "KZG_COMMITMENT_INCLUSION_PROOF_DEPTH", // Compile time constant on BlobSidecar.commitment_inclusion_proof.
    "MAX_BLOBS_PER_BLOCK",
    "MAX_BLOBS_PER_BLOCK_EIP7594",
    "MAX_BLOB_COMMITMENTS_PER_BLOCK", // Compile time constant on BeaconBlockBodyDeneb.blob_kzg_commitments.
    "MAX_BYTES_PER_TRANSACTION",      // Used for ssz of EL transactions. Unused in Prysm.

@@ -319,6 +319,10 @@ var mainnetBeaconConfig = &BeaconChainConfig{
    AttestationSubnetPrefixBits: 6,
    SubnetsPerNode:              2,
    NodeIdBits:                  256,

    DeprecatedMaxBlobsPerBlock:           6,
    DeprecatedMaxBlobsPerBlockElectra:    9,
    DeprecatedTargetBlobsPerBlockElectra: 6,
}

// MainnetTestConfig provides a version of the mainnet config that has a different name

@@ -29,6 +29,9 @@ func NewWrappedBootstrap(m proto.Message) (interfaces.LightClientBootstrap, erro
    }
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type bootstrapAltair struct {
    p      *pb.LightClientBootstrapAltair
    header interfaces.LightClientHeader
@@ -88,7 +91,7 @@ func (h *bootstrapAltair) Header() interfaces.LightClientHeader {
}

func (h *bootstrapAltair) SetHeader(header interfaces.LightClientHeader) error {
    p, ok := (header.Proto()).(*pb.LightClientHeaderAltair)
    p, ok := header.Proto().(*pb.LightClientHeaderAltair)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderAltair{})
    }
@@ -127,6 +130,9 @@ func (h *bootstrapAltair) CurrentSyncCommitteeBranchElectra() (interfaces.LightC
    return [6][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranchElectra", version.Altair)
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type bootstrapCapella struct {
    p      *pb.LightClientBootstrapCapella
    header interfaces.LightClientHeader
@@ -186,7 +192,7 @@ func (h *bootstrapCapella) Header() interfaces.LightClientHeader {
}

func (h *bootstrapCapella) SetHeader(header interfaces.LightClientHeader) error {
    p, ok := (header.Proto()).(*pb.LightClientHeaderCapella)
    p, ok := header.Proto().(*pb.LightClientHeaderCapella)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderCapella{})
    }
@@ -225,6 +231,9 @@ func (h *bootstrapCapella) CurrentSyncCommitteeBranchElectra() (interfaces.Light
    return [6][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranchElectra", version.Capella)
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type bootstrapDeneb struct {
    p      *pb.LightClientBootstrapDeneb
    header interfaces.LightClientHeader
@@ -284,7 +293,7 @@ func (h *bootstrapDeneb) Header() interfaces.LightClientHeader {
}

func (h *bootstrapDeneb) SetHeader(header interfaces.LightClientHeader) error {
    p, ok := (header.Proto()).(*pb.LightClientHeaderDeneb)
    p, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderDeneb{})
    }
@@ -323,6 +332,9 @@ func (h *bootstrapDeneb) CurrentSyncCommitteeBranchElectra() (interfaces.LightCl
    return [6][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranchElectra", version.Deneb)
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type bootstrapElectra struct {
    p      *pb.LightClientBootstrapElectra
    header interfaces.LightClientHeader
@@ -382,7 +394,7 @@ func (h *bootstrapElectra) Header() interfaces.LightClientHeader {
}

func (h *bootstrapElectra) SetHeader(header interfaces.LightClientHeader) error {
    p, ok := (header.Proto()).(*pb.LightClientHeaderDeneb)
    p, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderDeneb{})
    }

@@ -89,6 +89,9 @@ func NewFinalityUpdateFromUpdate(update interfaces.LightClientUpdate) (interface
    }
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type finalityUpdateAltair struct {
    p              *pb.LightClientFinalityUpdateAltair
    attestedHeader interfaces.LightClientHeader
@@ -188,6 +191,9 @@ func (u *finalityUpdateAltair) SignatureSlot() primitives.Slot {
    return u.p.SignatureSlot
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type finalityUpdateCapella struct {
    p              *pb.LightClientFinalityUpdateCapella
    attestedHeader interfaces.LightClientHeader
@@ -287,6 +293,9 @@ func (u *finalityUpdateCapella) SignatureSlot() primitives.Slot {
    return u.p.SignatureSlot
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type finalityUpdateDeneb struct {
    p              *pb.LightClientFinalityUpdateDeneb
    attestedHeader interfaces.LightClientHeader
@@ -386,6 +395,9 @@ func (u *finalityUpdateDeneb) SignatureSlot() primitives.Slot {
    return u.p.SignatureSlot
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type finalityUpdateElectra struct {
    p              *pb.LightClientFinalityUpdateElectra
    attestedHeader interfaces.LightClientHeader

@@ -70,6 +70,9 @@ func NewOptimisticUpdateFromUpdate(update interfaces.LightClientUpdate) (interfa
    }
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type optimisticUpdateAltair struct {
    p              *pb.LightClientOptimisticUpdateAltair
    attestedHeader interfaces.LightClientHeader
@@ -141,6 +144,9 @@ func (u *optimisticUpdateAltair) SignatureSlot() primitives.Slot {
    return u.p.SignatureSlot
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type optimisticUpdateCapella struct {
    p              *pb.LightClientOptimisticUpdateCapella
    attestedHeader interfaces.LightClientHeader
@@ -212,6 +218,9 @@ func (u *optimisticUpdateCapella) SignatureSlot() primitives.Slot {
    return u.p.SignatureSlot
}

// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type optimisticUpdateDeneb struct {
    p              *pb.LightClientOptimisticUpdateDeneb
    attestedHeader interfaces.LightClientHeader

@@ -113,11 +113,11 @@ func (u *updateAltair) AttestedHeader() interfaces.LightClientHeader {
}

func (u *updateAltair) SetAttestedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderAltair)
    p, ok := header.Proto().(*pb.LightClientHeaderAltair)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderAltair{})
    }
    u.p.AttestedHeader = proto
    u.p.AttestedHeader = p
    u.attestedHeader = header
    return nil
}
@@ -155,11 +155,11 @@ func (u *updateAltair) FinalizedHeader() interfaces.LightClientHeader {
}

func (u *updateAltair) SetFinalizedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderAltair)
    p, ok := header.Proto().(*pb.LightClientHeaderAltair)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderAltair{})
    }
    u.p.FinalizedHeader = proto
    u.p.FinalizedHeader = p
    u.finalizedHeader = header
    return nil
}
@@ -280,11 +280,11 @@ func (u *updateCapella) AttestedHeader() interfaces.LightClientHeader {
}

func (u *updateCapella) SetAttestedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderCapella)
    p, ok := header.Proto().(*pb.LightClientHeaderCapella)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderCapella{})
    }
    u.p.AttestedHeader = proto
    u.p.AttestedHeader = p
    u.attestedHeader = header
    return nil
}
@@ -322,11 +322,11 @@ func (u *updateCapella) FinalizedHeader() interfaces.LightClientHeader {
}

func (u *updateCapella) SetFinalizedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderCapella)
    p, ok := header.Proto().(*pb.LightClientHeaderCapella)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderCapella{})
    }
    u.p.FinalizedHeader = proto
    u.p.FinalizedHeader = p
    u.finalizedHeader = header
    return nil
}
@@ -447,11 +447,11 @@ func (u *updateDeneb) AttestedHeader() interfaces.LightClientHeader {
}

func (u *updateDeneb) SetAttestedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    p, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderDeneb{})
    }
    u.p.AttestedHeader = proto
    u.p.AttestedHeader = p
    u.attestedHeader = header
    return nil
}
@@ -489,11 +489,11 @@ func (u *updateDeneb) FinalizedHeader() interfaces.LightClientHeader {
}

func (u *updateDeneb) SetFinalizedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    p, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderDeneb{})
    }
    u.p.FinalizedHeader = proto
    u.p.FinalizedHeader = p
    u.finalizedHeader = header
    return nil
}
@@ -615,11 +615,11 @@ func (u *updateElectra) AttestedHeader() interfaces.LightClientHeader {
}

func (u *updateElectra) SetAttestedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    p, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderDeneb{})
    }
    u.p.AttestedHeader = proto
    u.p.AttestedHeader = p
    u.attestedHeader = header
    return nil
}
@@ -657,11 +657,11 @@ func (u *updateElectra) FinalizedHeader() interfaces.LightClientHeader {
}

func (u *updateElectra) SetFinalizedHeader(header interfaces.LightClientHeader) error {
    proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    p, ok := header.Proto().(*pb.LightClientHeaderDeneb)
    if !ok {
        return fmt.Errorf("header type %T is not %T", header.Proto(), &pb.LightClientHeaderDeneb{})
    }
    u.p.FinalizedHeader = proto
    u.p.FinalizedHeader = p
    u.finalizedHeader = header
    return nil
}
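
The repeated rename of the local `proto` variable to `p` in these setters plausibly avoids shadowing the `proto` identifier, which is commonly taken by the imported protobuf package in Prysm files (this motivation is inferred from the diff, not stated in it). A runnable toy showing the shadowing hazard:

    package main

    import "fmt"

    // proto stands in for an imported package-level name that the old local
    // variable would shadow (illustrative only).
    var proto = struct{ Name string }{Name: "package-like value"}

    func main() {
        // Old style: a local named "proto" hides the outer identifier for
        // the rest of the enclosing scope.
        {
            proto := "local value"
            fmt.Println(proto) // prints the local; the outer proto is unreachable here
        }
        // New style: a short local name keeps the outer identifier usable.
        p := "local value"
        fmt.Println(p, proto.Name)
    }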

@@ -9,7 +9,6 @@ import (
    "encoding/json"
    "fmt"
    "io"
    "io/ioutil"
    "net"
    "net/http"
    "strings"
@@ -240,7 +239,7 @@ func (p *Proxy) sendHttpRequest(req *http.Request, requestBytes []byte) (*http.R
    }

    // Set the modified request as the proxy request body.
    proxyReq.Body = ioutil.NopCloser(bytes.NewBuffer(requestBytes))
    proxyReq.Body = io.NopCloser(bytes.NewBuffer(requestBytes))

    // Required proxy headers for forwarding JSON-RPC requests to the execution client.
    proxyReq.Header.Set("Host", req.Host)
@@ -261,14 +260,14 @@ func (p *Proxy) sendHttpRequest(req *http.Request, requestBytes []byte) (*http.R

// Peek into the bytes of an HTTP request's body.
func parseRequestBytes(req *http.Request) ([]byte, error) {
    requestBytes, err := ioutil.ReadAll(req.Body)
    requestBytes, err := io.ReadAll(req.Body)
    if err != nil {
        return nil, err
    }
    if err = req.Body.Close(); err != nil {
        return nil, err
    }
    req.Body = ioutil.NopCloser(bytes.NewBuffer(requestBytes))
    req.Body = io.NopCloser(bytes.NewBuffer(requestBytes))
    return requestBytes, nil
}
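
These hunks are part of the `ioutil` to `io` migration noted in the changelog: since Go 1.16, io.ReadAll and io.NopCloser are the supported replacements for the deprecated ioutil.ReadAll and ioutil.NopCloser, with identical behavior. A runnable round-trip:

    package main

    import (
        "bytes"
        "fmt"
        "io"
    )

    func main() {
        // io.NopCloser wraps a Reader as a ReadCloser whose Close is a no-op,
        // the same trick used above to reset an already-consumed request body.
        body := io.NopCloser(bytes.NewBufferString(`{"jsonrpc":"2.0"}`))
        data, err := io.ReadAll(body)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(data))
    }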

@@ -7,7 +7,7 @@ go_library(
    importpath = "github.com/prysmaticlabs/prysm/v5/testing/spectest/shared/common/merkle_proof",
    visibility = ["//visibility:public"],
    deps = [
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
        "//container/trie:go_default_library",
        "//testing/require:go_default_library",

@@ -9,7 +9,7 @@ import (
    "github.com/bazelbuild/rules_go/go/tools/bazel"
    "github.com/golang/snappy"
    fssz "github.com/prysmaticlabs/fastssz"
    field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/container/trie"
    "github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -80,7 +80,7 @@ func runSingleMerkleProofTests(t *testing.T, config, forkOrPhase string, unmarsh
    if err != nil {
        return
    }
    if index < consensus_blocks.KZGOffset || index > consensus_blocks.KZGOffset+field_params.MaxBlobsPerBlock {
    if index < consensus_blocks.KZGOffset || index > uint64(consensus_blocks.KZGOffset+params.BeaconConfig().MaxBlobsPerBlock(0)) {
        return
    }
    localProof, err := consensus_blocks.MerkleProofKZGCommitment(body, int(index-consensus_blocks.KZGOffset))

@@ -3,13 +3,13 @@ package util
import (
    "testing"

    fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
    "github.com/prysmaticlabs/prysm/v5/config/params"
    "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v5/testing/require"
)

func TestInclusionProofs(t *testing.T) {
    _, blobs := GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, fieldparams.MaxBlobsPerBlock)
    _, blobs := GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, params.BeaconConfig().MaxBlobsPerBlock(0))
    for i := range blobs {
        require.NoError(t, blocks.VerifyKZGInclusionProof(blobs[i]))
    }