Compare commits

...

21 Commits

Author SHA1 Message Date
terence tsao
3a56ccece7 Add Capella-Bellatrix bid compatibility to proposer version check 2025-08-06 20:52:05 -07:00
terence tsao
b340d61108 Add performance profiling to capture GetDutiesV2 operations taking over 2s 2025-08-03 07:16:16 -07:00
Bastin
ae4b982a6c Fix finality update bugs & Move broadcast logic to LC Store (#15540)
* fix IsBetterFinalityUpdate and add tests

fix finality update bugs

* Update lightclient.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/light-client/lightclient.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-01 12:35:21 +00:00
Radosław Kapka
f330021785 Do not compare liveness response with LH (#15556)
* Do not compare liveness response with LH

* changelog <3
2025-07-31 14:37:36 +00:00
james-prysm
bd6b4ecd5b wrapping goodbye messages in goroutine to speed up node shutdown (#15542)
* wrapping goodbye messages in a goroutine to speed up node shutdown (see the sketch below)

* fixing requirement
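
A minimal sketch of the pattern this commit describes, assuming a hypothetical per-peer `sendGoodbyeToPeer` helper: each goodbye is sent in its own goroutine so shutdown is not serialized behind slow peers, with a bounded wait before returning.

```go
package p2p

import (
	"context"
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
)

// sendGoodbyes fans out goodbye messages concurrently so that node shutdown
// is not serialized behind slow or unresponsive peers. sendGoodbyeToPeer is a
// hypothetical helper standing in for the real per-peer goodbye RPC.
func sendGoodbyes(ctx context.Context, pids []peer.ID, sendGoodbyeToPeer func(context.Context, peer.ID) error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	var wg sync.WaitGroup
	for _, pid := range pids {
		wg.Add(1)
		go func(pid peer.ID) {
			defer wg.Done()
			// Errors are deliberately ignored: we are shutting down anyway.
			_ = sendGoodbyeToPeer(ctx, pid)
		}(pid)
	}
	wg.Wait()
}
```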
2025-07-31 12:52:31 +00:00
Potuz
d7d8764a91 Trigger payload attribute event on early blocks (#15541)
Currently the payload attribute event is triggered in
`forkchoiceUpdateWithExecution`. However, when we import an early block
we do not call this function; instead we make two calls to FCU. The first one
is on a locked path at the end of `postBlockProcess`, and it is made
without any payload attributes to avoid updating the shuffling caches.

The second call is made in `handleSecondFCUCall`, which calls
`notifyForkchoiceUpdate` directly, bypassing
`forkchoiceUpdateWithExecution`; this is the call that actually
computes the payload attributes. So the event handler is never called
with the new attributes.

This PR moves the event trigger to the same place where we actually call
FCU with the computed payload attributes.

A note on forkchoice locking: since the event calls always run in a
goroutine, the routine will simply wait for forkchoice to be unlocked
before proceeding.
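
A condensed, hypothetical sketch of the shape of this change (the real diff appears further down in this comparison); all types here are simplified stand-ins, not Prysm's actual ones:

```go
package blockchain

import "context"

// Hypothetical, simplified types that only exist to show where the event
// trigger now lives relative to the FCU call carrying payload attributes.
type payloadAttributes struct{}

type fcuConfig struct {
	headRoot   [32]byte
	attributes *payloadAttributes // nil when FCU is sent without attributes
}

type service struct {
	firePayloadAttributesEvent func(headRoot [32]byte) // stands in for the real event trigger
}

// notifyForkchoiceUpdate is the one place where FCU is actually sent with
// payload attributes, so the event now fires here rather than in
// forkchoiceUpdateWithExecution, which early-block imports bypass.
func (s *service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) {
	payloadID := callEngineFCU(ctx, arg) // hypothetical engine_forkchoiceUpdated call
	if arg.attributes != nil && payloadID != nil {
		// Fired in a goroutine: if forkchoice is locked, the goroutine simply
		// waits for the lock to be released before proceeding.
		go s.firePayloadAttributesEvent(arg.headRoot)
	}
}

func callEngineFCU(_ context.Context, arg *fcuConfig) *[8]byte {
	if arg.attributes == nil {
		return nil
	}
	return &[8]byte{} // pretend payload ID
}
```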

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-30 19:34:34 +00:00
Muzry
9b7f91d947 bugfix: submitPoolSyncCommitteeSignatures response inconsistent (#15516)
* fix: submitPoolSyncCommitteeSignatures response inconsistent

* update: bazel build file

* update: add changelog fragment file

* update api/server/structs/BUILD.bazel format

* update the unit test

* update: the error format

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-29 16:08:28 +00:00
terence
57e27199bd Fix builder bid version compatibility to support Electra bids with Fulu blocks (#15536) 2025-07-29 14:16:05 +00:00
Potuz
11ca766ed6 Add timing metric for PublishBlockV2 endpoint (#15539)
This commit adds a Prometheus histogram metric to measure the processing
duration of the PublishBlockV2 beacon API endpoint in milliseconds.

The metric covers the complete request processing time including:
- Request validation and parsing
- Block decoding (SSZ/JSON)
- Broadcast validation checks
- Block proposal through ProposeBeaconBlock
- All synchronous operations and awaited goroutines

Background operations that run in goroutines (block broadcasting, blob
sidecar processing) are included in the timing since the main function
waits for their completion before returning.
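
A sketch of what such instrumentation could look like; the metric name, bucket boundaries, and handler shape are illustrative assumptions rather than the exact Prysm code.

```go
package beacon

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Illustrative histogram definition; name and buckets are assumptions.
var publishBlockV2Duration = promauto.NewHistogram(prometheus.HistogramOpts{
	Namespace: "beacon",
	Name:      "publish_block_v2_duration_milliseconds",
	Help:      "Full processing time of the PublishBlockV2 endpoint in milliseconds.",
	Buckets:   []float64{50, 100, 250, 500, 1000, 2500, 5000},
})

func publishBlockV2(w http.ResponseWriter, r *http.Request) {
	start := time.Now()
	defer func() {
		publishBlockV2Duration.Observe(float64(time.Since(start).Milliseconds()))
	}()

	// ... decode the block (SSZ/JSON), run broadcast validation, and propose it.
	// Goroutines that the handler waits on are naturally included in the timing.
	w.WriteHeader(http.StatusOK)
}
```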

Files changed:
- beacon-chain/rpc/eth/beacon/metrics.go: New metric definition
- beacon-chain/rpc/eth/beacon/handlers.go: Timing instrumentation
- beacon-chain/rpc/eth/beacon/BUILD.bazel: Added metrics.go and Prometheus deps
- changelog/potuz_add_publishv2_metric.md: Changelog entry

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-28 18:20:57 +00:00
terence
cd6cc76d58 Add BLOB_SCHEDULE to eth/v1/config/spec endpoint (#15485)
* Beacon api: fix get config blob schedule

* Numbers should be string instead of float

* more generalized implementation for nested objects (see the sketch after this list)

* removing unused function

* fixing linting

* removing redundant switch fields

* adding additional log for debugging

* Fix build.

* adding skip function based on kasey's recommendation

* fixing test
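
A hedged sketch of the conversion idea in the bullets above (numbers rendered as strings, including inside nested objects such as BLOB_SCHEDULE entries); the function name and type handling are illustrative only, not the exact Prysm code.

```go
package config

import "strconv"

// stringifyValues walks a decoded config value and renders numeric leaves as
// strings, so nested objects in the /eth/v1/config/spec response are emitted
// with string numbers rather than JSON floats. Illustrative only.
func stringifyValues(v any) any {
	switch val := v.(type) {
	case map[string]any:
		out := make(map[string]any, len(val))
		for k, inner := range val {
			out[k] = stringifyValues(inner)
		}
		return out
	case []any:
		out := make([]any, len(val))
		for i, inner := range val {
			out[i] = stringifyValues(inner)
		}
		return out
	case uint64:
		return strconv.FormatUint(val, 10)
	case float64:
		return strconv.FormatFloat(val, 'f', -1, 64)
	default:
		return v
	}
}
```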

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-07-25 19:06:10 +00:00
terence
fc4a1469f0 Include requested state root in StateNotFoundError message for debugging (#15533) 2025-07-25 17:50:26 +00:00
Justin Traglia
f3dc4c283e Improve das-core functions (#15524)
* Improve das-core functions

* Add changelog fragment

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-25 15:35:05 +00:00
raulk
6ddf271688 fix(beacon-api): return syncnets and cgc in Metadata. (#15506)
* fix(beacon-api): return syncnets and cgc in Metadata.

* changelog

* fixing implementation and adding unit tests

* gaz

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-07-25 15:26:56 +00:00
Justin Traglia
af7afba26e Move reconstruction lock to prevent unnecessary work (#15528)
* Move reconstruction lock to prevent unnecessary work

* Add changelog fragment
2025-07-24 21:06:53 +00:00
james-prysm
b740a4ff83 implements the proposer lookahead api (#15525)
* implements the proposer lookahead api

* radek's feedback
2025-07-24 15:00:43 +00:00
Radosław Kapka
385c2224e8 Return zero value for Eth-Consensus-Block-Value on error (#15526) 2025-07-24 13:58:23 +00:00
Justin Traglia
04b39d1a4d Fix some nits associated with data column sidecar verification (#15521)
* Fix some nits associated with data column sidecar verification

* Add changelog fragment
2025-07-23 20:02:32 +00:00
james-prysm
4c40caf7fd persistent enode db to persist seq number (#15519)
* adding a persistent db path to persist node sequence number information (see the sketch after this list)

* fixing typo

* Update changelog/James-prysm_persistent-seq-number.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update beacon-chain/p2p/discovery_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* adding log updated based on manu's suggestion
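
A small sketch of the change, assuming the `discovery` subdirectory used in the diff below: opening the enode database on disk rather than in memory lets the ENR sequence number survive restarts.

```go
package p2p

import (
	"crypto/ecdsa"
	"path/filepath"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// newLocalNode opens the enode database at a directory on disk instead of in
// memory (empty path). Because the DB is persistent, localNode.Seq() resumes
// from its previous value after a restart instead of resetting, so peers do
// not see a stale or duplicated sequence number. Layout is illustrative.
func newLocalNode(dataDir string, key *ecdsa.PrivateKey) (*enode.LocalNode, error) {
	db, err := enode.OpenDB(filepath.Join(dataDir, "discovery"))
	if err != nil {
		return nil, err
	}
	return enode.NewLocalNode(db, key), nil
}
```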

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-23 13:55:05 +00:00
Justin Traglia
bc209cadab Use MinEpochsForDataColumnSidecarsRequest in WithinDAPeriod for Fulu (#15522)
* Use MinEpochsForDataColumnSidecarsRequest in WithinDAPeriod for Fulu

* Make fixes

* Add blank line
2025-07-23 13:35:27 +00:00
Justin Traglia
856742ff68 Update links to consensus-specs to point to master branch (#15523)
* Update links to consensus-specs to point to master branch

* Add changelog fragment
2025-07-23 08:32:09 +00:00
Manu NALEPA
abe16a9cb4 Fix downscore by peers when a node gracefully stops. (#15505)
* Log when downscoring a peer.

* `validateSequenceNumber`: Downscore peer in function, clarify and add logs

* `AddConnectionHandler`: Move the majority of the code to the outer scope (no functional change).

* `disconnectBadPeer`: Improve log.

* `sendRPCStatusRequest`: Improve log.

* `findPeersWithSubnets`: Add preventive peer filtering.
(As done in `s.findPeers`.)

* `Stop`: Use one `defer` for the whole function.
Reminder: `defer`s are executed in reverse (LIFO) order.

* `Stop`: Send a goodbye message to all connected peers when stopping the service.

Before this commit, stopping the service did not send any goodbye message to connected peers. The issue with this approach is that each peer still thinks we are alive and keeps trying to communicate with us. Unfortunately, because we are offline, we cannot respond. Because of that, the peer starts to downscore us and eventually bans us. As a consequence, when we restart, the peer refuses our connection request.

By sending a goodbye message when stopping the service, we ensure the peer stops expecting anything from us. When we restart, everything works as expected.

* `ConnectedF` and `DisconnectedF`: Work around a very probable libp2p bug by preventing outbound connections to very recently disconnected peers (see the sketch after this list).

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* `AddDisconnectionHandler`: Handle multiple close calls to `DisconnectedF` for the same peer.
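
A hedged sketch of the workaround from the `ConnectedF`/`DisconnectedF` bullet above; the real implementation is not shown in this excerpt, so the cache key, expiry window, and helper names here are assumptions.

```go
package p2p

import (
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
	gcache "github.com/patrickmn/go-cache"
	"github.com/pkg/errors"
)

// disconnectGuard remembers peers that disconnected very recently so the
// spurious ConnectedF callback libp2p appears to fire for them can be ignored
// on outbound connections. Illustrative only.
type disconnectGuard struct {
	recentlyDisconnected *gcache.Cache
}

func newDisconnectGuard() *disconnectGuard {
	// Entries expire after a short window; both durations are arbitrary.
	return &disconnectGuard{recentlyDisconnected: gcache.New(30*time.Second, time.Minute)}
}

// markDisconnected would be called from the DisconnectedF handler.
func (g *disconnectGuard) markDisconnected(pid peer.ID) {
	g.recentlyDisconnected.SetDefault(pid.String(), time.Now())
}

// wasDisconnectedTooRecently would be checked for outbound connections in ConnectedF.
func (g *disconnectGuard) wasDisconnectedTooRecently(pid peer.ID) error {
	if when, ok := g.recentlyDisconnected.Get(pid.String()); ok {
		return errors.Errorf("peer disconnected at %v, too recently to reconnect", when)
	}
	return nil
}
```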
2025-07-22 20:15:18 +00:00
126 changed files with 2544 additions and 470 deletions

View File

@@ -36,6 +36,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/consensus-types/validator"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
@@ -699,6 +700,11 @@ func (m *SyncCommitteeMessage) ToConsensus() (*eth.SyncCommitteeMessage, error)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
// Add validation to check if the signature is valid BLS format
_, err = bls.SignatureFromBytes(sig)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SyncCommitteeMessage{
Slot: primitives.Slot(slot),

View File

@@ -283,3 +283,10 @@ type GetPendingPartialWithdrawalsResponse struct {
Finalized bool `json:"finalized"`
Data []*PendingPartialWithdrawal `json:"data"`
}
type GetProposerLookaheadResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []string `json:"data"` // validator indexes
}

View File

@@ -27,6 +27,8 @@ type Identity struct {
type Metadata struct {
SeqNumber string `json:"seq_number"`
Attnets string `json:"attnets"`
Syncnets string `json:"syncnets,omitempty"`
Cgc string `json:"custody_group_count,omitempty"`
}
type GetPeerResponse struct {

View File

@@ -174,6 +174,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.PayloadIDCache.Set(nextSlot, arg.headRoot, pId)
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), arg.headBlock, arg.headRoot, nextSlot)
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),

View File

@@ -102,8 +102,6 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
log.WithError(err).Error("Could not save head")
}
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), args.headBlock, args.headRoot, s.CurrentSlot()+1)
// Only need to prune attestations from pool if the head has changed.
s.pruneAttsFromPool(s.ctx, args.headState, args.headBlock)
return nil

View File

@@ -11,7 +11,7 @@ import (
)
var (
// https://github.com/ethereum/consensus-specs/blob/dev/presets/mainnet/trusted_setups/trusted_setup_4096.json
// https://github.com/ethereum/consensus-specs/blob/master/presets/mainnet/trusted_setups/trusted_setup_4096.json
//go:embed trusted_setup_4096.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context

View File

@@ -36,7 +36,7 @@ func WithMaxGoroutines(x int) Option {
// WithLCStore for light client store access.
func WithLCStore() Option {
return func(s *Service) error {
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB)
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
return nil
}
}

View File

@@ -666,10 +666,9 @@ func (s *Service) areDataColumnsAvailable(
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS.
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"strings"
@@ -198,8 +197,7 @@ func (s *Service) processLightClientFinalityUpdate(
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
// Check if the finalized checkpoint has changed
if finalizedCheckpoint == nil || bytes.Equal(finalizedCheckpoint.GetRoot(), postState.FinalizedCheckpoint().Root) {
if finalizedCheckpoint == nil {
return nil
}
@@ -224,17 +222,7 @@ func (s *Service) processLightClientFinalityUpdate(
return nil
}
log.Debug("Saving new light client finality update")
s.lcStore.SetLastFinalityUpdate(newUpdate)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client finality update")
}
s.lcStore.SetLastFinalityUpdate(newUpdate, true)
return nil
}
@@ -266,17 +254,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
return nil
}
log.Debug("Saving new light client optimistic update")
s.lcStore.SetLastOptimisticUpdate(newUpdate)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client optimistic update")
}
s.lcStore.SetLastOptimisticUpdate(newUpdate, true)
return nil
}

View File

@@ -3170,7 +3170,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = &lightClient.Store{}
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var oldActualUpdate interfaces.LightClientOptimisticUpdate
var err error
@@ -3246,39 +3246,39 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectReplace: true,
},
{
name: "Old update is better - age - no supermajority",
name: "Old update is better - finalized slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "Old update is better - age - both supermajority",
oldOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithSupermajority()},
expectReplace: false,
},
{
name: "Old update is better - supermajority",
oldOptions: []util.LightClientOption{util.WithSupermajority()},
name: "Old update is better - attested slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "New update is better - age - both supermajority",
oldOptions: []util.LightClientOption{util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
name: "Old update is better - signature slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedSignatureSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "New update is better - finalized slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
expectReplace: true,
},
{
name: "New update is better - age - no supermajority",
name: "New update is better - attested slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
newOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
expectReplace: true,
},
{
name: "New update is better - supermajority",
name: "New update is better - signature slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithIncreasedSignatureSlot(1)},
expectReplace: true,
},
}
@@ -3310,7 +3310,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = &lightClient.Store{}
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var actualOldUpdate, actualNewUpdate interfaces.LightClientFinalityUpdate
var err error

View File

@@ -177,7 +177,7 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
for _, a := range atts {
// Based on the spec, don't process the attestation until the subsequent slot.
// This delays consideration in the fork choice until their slot is in the past.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#validate_on_attestation
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/fork-choice.md#validate_on_attestation
nextSlot := a.GetData().Slot + 1
if err := slots.VerifyTime(s.genesisTime, nextSlot, disparity); err != nil {
continue

View File

@@ -15,7 +15,6 @@ func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataC
}
// ReceiveDataColumn receives a single data column.
// (It is only a wrapper around ReceiveDataColumns.)
func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
return errors.Wrap(err, "save data column sidecars")

View File

@@ -206,9 +206,9 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#configuration
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)

View File

@@ -292,7 +292,7 @@ func TestMinEpochsForBlockRequests(t *testing.T) {
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (

View File

@@ -10,8 +10,12 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -39,6 +43,9 @@ go_test(
],
deps = [
":go_default_library",
"//async/event:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -750,7 +750,9 @@ func UpdateHasSupermajority(syncAggregate *pb.SyncAggregate) bool {
return numActiveParticipants*3 >= maxActiveParticipants*2
}
func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
// IsFinalityUpdateValidForBroadcast checks if a finality update needs to be broadcasted.
// It is also used to check if an incoming gossiped finality update is valid for forwarding and saving.
func IsFinalityUpdateValidForBroadcast(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
if oldUpdate == nil {
return true
}
@@ -772,6 +774,35 @@ func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityU
return true
}
// IsBetterFinalityUpdate checks if the new finality update is better than the old one for saving.
// This does not concern broadcasting, but rather the decision of whether to save the new update.
// For broadcasting checks, use IsFinalityUpdateValidForBroadcast.
func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
if oldUpdate == nil {
return true
}
// Full nodes SHOULD provide the LightClientFinalityUpdate with the highest attested_header.beacon.slot (if multiple, highest signature_slot)
newFinalizedSlot := newUpdate.FinalizedHeader().Beacon().Slot
newAttestedSlot := newUpdate.AttestedHeader().Beacon().Slot
oldFinalizedSlot := oldUpdate.FinalizedHeader().Beacon().Slot
oldAttestedSlot := oldUpdate.AttestedHeader().Beacon().Slot
if newFinalizedSlot < oldFinalizedSlot {
return false
}
if newFinalizedSlot == oldFinalizedSlot {
if newAttestedSlot < oldAttestedSlot {
return false
}
if newAttestedSlot == oldAttestedSlot && newUpdate.SignatureSlot() <= oldUpdate.SignatureSlot() {
return false
}
}
return true
}
func IsBetterOptimisticUpdate(newUpdate, oldUpdate interfaces.LightClientOptimisticUpdate) bool {
if oldUpdate == nil {
return true

View File

@@ -4,7 +4,11 @@ import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
@@ -16,13 +20,17 @@ type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
lastFinalityUpdate interfaces.LightClientFinalityUpdate
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate
lastFinalityUpdate interfaces.LightClientFinalityUpdate // tracks the best finality update seen so far
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate // tracks the best optimistic update seen so far
p2p p2p.Accessor
stateFeed event.SubscriberSender
}
func NewLightClientStore(db iface.HeadAccessDatabase) *Store {
func NewLightClientStore(db iface.HeadAccessDatabase, p p2p.Accessor, e event.SubscriberSender) *Store {
return &Store{
beaconDB: db,
beaconDB: db,
p2p: p,
stateFeed: e,
}
}
@@ -143,10 +151,23 @@ func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update
return nil
}
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate) {
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}
s.lastFinalityUpdate = update
log.Debug("Saved new light client finality update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
}
func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
@@ -155,10 +176,23 @@ func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
return s.lastFinalityUpdate
}
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate) {
func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast {
if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client optimistic update")
}
}
s.lastOptimisticUpdate = update
log.Debug("Saved new light client optimistic update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
}
func (s *Store) LastOptimisticUpdate() interfaces.LightClientOptimisticUpdate {

View File

@@ -3,7 +3,10 @@ package light_client_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/async/event"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -21,7 +24,7 @@ func TestLightClientStore(t *testing.T) {
params.OverrideBeaconConfig(cfg)
// Initialize the light client store
lcStore := &lightClient.Store{}
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), &p2pTesting.FakeP2P{}, new(event.Feed))
// Create test light client updates for Capella and Deneb
lCapella := util.NewTestLightClient(t, version.Capella)
@@ -45,24 +48,118 @@ func TestLightClientStore(t *testing.T) {
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get finality with Capella update. Optimistic update should be nil
lcStore.SetLastFinalityUpdate(finUpdateCapella)
lcStore.SetLastFinalityUpdate(finUpdateCapella, false)
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get optimistic with Capella update. Finality update should be Capella
lcStore.SetLastOptimisticUpdate(opUpdateCapella)
lcStore.SetLastOptimisticUpdate(opUpdateCapella, false)
require.Equal(t, opUpdateCapella, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
// Set and get finality and optimistic with Deneb update
lcStore.SetLastFinalityUpdate(finUpdateDeneb)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb)
lcStore.SetLastFinalityUpdate(finUpdateDeneb, false)
lcStore.SetLastOptimisticUpdate(opUpdateDeneb, false)
require.Equal(t, finUpdateDeneb, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.Equal(t, opUpdateDeneb, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
// Set and get finality and optimistic with nil update
lcStore.SetLastFinalityUpdate(nil)
lcStore.SetLastOptimisticUpdate(nil)
require.IsNil(t, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should be nil")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
}
func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), p2p, new(event.Feed))
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update0, lcStore.LastFinalityUpdate()), "update0 should be better than nil")
// update0 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update0, lcStore.LastFinalityUpdate()), "update0 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
l1 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update1, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update1, update0), "update1 should be better than update0")
// update1 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update1, lcStore.LastFinalityUpdate()), "update1 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 2 with same finality slot, increased attested slot, and supermajority - should save and broadcast
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update2, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update2, update1), "update2 should be better than update1")
// update2 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update2, lcStore.LastFinalityUpdate()), "update2 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
l3 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(3), util.WithSupermajority())
update3, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l3.Ctx, l3.State, l3.Block, l3.AttestedState, l3.AttestedBlock, l3.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update3, update2), "update3 should be better than update2")
// update3 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update3, lcStore.LastFinalityUpdate()), "update3 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
l4 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(1), util.WithSupermajority())
update4, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l4.Ctx, l4.State, l4.Block, l4.AttestedState, l4.AttestedBlock, l4.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update4, update3), "update4 should be better than update3")
// update4 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update4, lcStore.LastFinalityUpdate()), "update4 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
l5 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update5, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l5.Ctx, l5.State, l5.Block, l5.AttestedState, l5.AttestedBlock, l5.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update5, update4), "update5 should be better than update4")
// update5 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update5, lcStore.LastFinalityUpdate()), "update5 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
l6 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(3))
update6, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l6.Ctx, l6.State, l6.Block, l6.AttestedState, l6.AttestedBlock, l6.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update6, update5), "update6 should be better than update5")
// update6 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update6, lcStore.LastFinalityUpdate()), "update6 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}

View File

@@ -41,17 +41,17 @@ const (
// CustodyGroups computes the custody groups the node should participate in for custody.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error) {
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
// Check if the custody group count is larger than the number of custody groups.
if custodyGroupCount > numberOfCustodyGroup {
if custodyGroupCount > numberOfCustodyGroups {
return nil, ErrCustodyGroupCountTooLarge
}
// Shortcut if all custody groups are needed.
if custodyGroupCount == numberOfCustodyGroup {
custodyGroups := make([]uint64, 0, numberOfCustodyGroup)
for i := range numberOfCustodyGroup {
if custodyGroupCount == numberOfCustodyGroups {
custodyGroups := make([]uint64, 0, numberOfCustodyGroups)
for i := range numberOfCustodyGroups {
custodyGroups = append(custodyGroups, i)
}
@@ -73,7 +73,7 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the custody group ID.
custodyGroup := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroup
custodyGroup := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroups
// Add the custody group to the map.
if !custodyGroupsMap[custodyGroup] {
@@ -88,9 +88,6 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// Increment the current ID.
currentId.Add(currentId, one)
}
// Sort the custody groups.
slices.Sort[[]uint64](custodyGroups)
}
// Final check.
@@ -98,6 +95,9 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
return nil, errWrongComputedCustodyGroupCount
}
// Sort the custody groups.
slices.Sort[[]uint64](custodyGroups)
return custodyGroups, nil
}
@@ -105,19 +105,19 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
if custodyGroup >= numberOfCustodyGroup {
if custodyGroup >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroup
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
columns := make([]uint64, 0, columnsPerGroup)
for i := range columnsPerGroup {
column := numberOfCustodyGroup*i + custodyGroup
column := numberOfCustodyGroups*i + custodyGroup
columns = append(columns, column)
}
@@ -127,7 +127,7 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// DataColumnSidecars computes the data column sidecars from the signed block, cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alterates `cellsAndProofs` afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.3/specs/fulu/das-core.md#get_data_column_sidecars
// https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/fulu/validator.md#get_data_column_sidecars_from_block
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsAndProofs []kzg.CellsAndProofs) ([]*ethpb.DataColumnSidecar, error) {
if signedBlock == nil || signedBlock.IsNil() || len(cellsAndProofs) == 0 {
return nil, nil
@@ -151,7 +151,7 @@ func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsA
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof ZKG commitments")
return nil, errors.Wrap(err, "merkle proof KZG commitments")
}
dataColumnSidecars, err := dataColumnsSidecars(signedBlockHeader, blobKzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
@@ -219,6 +219,7 @@ func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
// the KZG commitment includion proofs and cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alterates input parameters afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/fulu/validator.md#get_data_column_sidecars
func dataColumnsSidecars(
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
blobKzgCommitments [][]byte,

View File

@@ -17,8 +17,8 @@ func TestCustodyGroups(t *testing.T) {
// --------------------------------------------
// The happy path is unit tested in spec tests.
// --------------------------------------------
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.CustodyGroups(enode.ID{}, numberOfCustodyGroup+1)
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.CustodyGroups(enode.ID{}, numberOfCustodyGroups+1)
require.ErrorIs(t, err, peerdas.ErrCustodyGroupCountTooLarge)
}
@@ -26,8 +26,8 @@ func TestComputeColumnsForCustodyGroup(t *testing.T) {
// --------------------------------------------
// The happy path is unit tested in spec tests.
// --------------------------------------------
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.ComputeColumnsForCustodyGroup(numberOfCustodyGroup)
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.ComputeColumnsForCustodyGroup(numberOfCustodyGroups)
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
}

View File

@@ -123,7 +123,7 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
// ConstructDataColumnSidecars constructs data column sidecars from a block, (un-extended) blobs and
// cell proofs corresponding the extended blobs. The main purpose of this function is to
// construct data columns sidecars from data obtained from the execution client via:
// construct data column sidecars from data obtained from the execution client via:
// - `engine_getBlobsV2` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getblobsv2, or
// - `engine_getPayloadV5` - https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getpayloadv5
// Note: In this function, to stick with the `BlobsBundleV2` format returned by the execution client in `engine_getPayloadV5`,
@@ -222,8 +222,8 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
// Check if the data column sidecars are aligned with the block.
dataColumnSidecars := make([]blocks.RODataColumn, 0, len(verifiedDataColumnSidecars))
for _, verifiedDataColumnSidecar := range verifiedDataColumnSidecars {
dataColumnSicecar := verifiedDataColumnSidecar.RODataColumn
dataColumnSidecars = append(dataColumnSidecars, dataColumnSicecar)
dataColumnSidecar := verifiedDataColumnSidecar.RODataColumn
dataColumnSidecars = append(dataColumnSidecars, dataColumnSidecar)
}
if err := DataColumnsAlignWithBlock(block, dataColumnSidecars); err != nil {
@@ -241,7 +241,7 @@ func ReconstructBlobs(block blocks.ROBlock, verifiedDataColumnSidecars []blocks.
return blobSidecars, nil
}
// We need to reconstruct the blobs.
// We need to reconstruct the data column sidecars.
reconstructedDataColumnSidecars, err := ReconstructDataColumnSidecars(verifiedDataColumnSidecars)
if err != nil {
return nil, errors.Wrap(err, "reconstruct data column sidecars")

View File

@@ -196,6 +196,26 @@ func TestReconstructBlobs(t *testing.T) {
require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
})
t.Run("consecutive duplicates", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
// [0, 1, 1, 3, 4, ...]
verifiedRoSidecars[2] = verifiedRoSidecars[1]
_, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
})
t.Run("non-consecutive duplicates", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
// [0, 1, 2, 1, 4, ...]
verifiedRoSidecars[3] = verifiedRoSidecars[1]
_, err := peerdas.ReconstructBlobs(emptyBlock, verifiedRoSidecars, []int{0})
require.ErrorIs(t, err, peerdas.ErrDataColumnSidecarsNotSortedByIndex)
})
t.Run("not enough columns", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)

View File

@@ -21,10 +21,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroup), nil
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
}

View File

@@ -126,11 +126,11 @@ func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, curre
return errors.Wrap(err, "entry filter")
}
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return errors.Wrap(err, "valid")
return errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
@@ -164,7 +164,7 @@ func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, bl
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current spoch.
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.

View File

@@ -251,7 +251,7 @@ func (dcs *DataColumnStorage) Summary(root [fieldparams.RootLength]byte) DataCol
}
// Save saves data column sidecars into the database and asynchronously performs pruning.
// The returned chanel is closed when the pruning is complete.
// The returned channel is closed when the pruning is complete.
func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataColumn) error {
startTime := time.Now()
@@ -266,8 +266,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
return errWrongNumberOfColumns
}
highestEpoch := primitives.Epoch(0)
dataColumnSidecarsbyRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
dataColumnSidecarsByRoot := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn)
// Group data column sidecars by root.
for _, dataColumnSidecar := range dataColumnSidecars {
@@ -278,23 +277,20 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
// Group data column sidecars by root.
root := dataColumnSidecar.BlockRoot()
dataColumnSidecarsbyRoot[root] = append(dataColumnSidecarsbyRoot[root], dataColumnSidecar)
dataColumnSidecarsByRoot[root] = append(dataColumnSidecarsByRoot[root], dataColumnSidecar)
}
for root, dataColumnSidecars := range dataColumnSidecarsbyRoot {
for root, dataColumnSidecars := range dataColumnSidecarsByRoot {
// Safety check all data column sidecars for this root are from the same slot.
firstSlot := dataColumnSidecars[0].SignedBlockHeader.Header.Slot
slot := dataColumnSidecars[0].Slot()
for _, dataColumnSidecar := range dataColumnSidecars[1:] {
if dataColumnSidecar.SignedBlockHeader.Header.Slot != firstSlot {
if dataColumnSidecar.Slot() != slot {
return errDataColumnSidecarsFromDifferentSlots
}
}
// Set the highest epoch.
epoch := slots.ToEpoch(dataColumnSidecars[0].Slot())
highestEpoch = max(highestEpoch, epoch)
// Save data columns in the filesystem.
epoch := slots.ToEpoch(slot)
if err := dcs.saveFilesystem(root, epoch, dataColumnSidecars); err != nil {
return errors.Wrap(err, "save filesystem")
}
@@ -306,7 +302,7 @@ func (dcs *DataColumnStorage) Save(dataColumnSidecars []blocks.VerifiedRODataCol
}
// Compute the data columns ident.
dataColumnsIdent := DataColumnsIdent{Root: root, Epoch: slots.ToEpoch(dataColumnSidecars[0].Slot()), Indices: indices}
dataColumnsIdent := DataColumnsIdent{Root: root, Epoch: epoch, Indices: indices}
// Set data columns in the cache.
if err := dcs.cache.set(dataColumnsIdent); err != nil {

View File

@@ -20,7 +20,7 @@ File organisation
The remaining 7 bits (from 0 to 127) represent the index of the data column.
This sentinel bit is needed to distinguish between the column with index 0 and no column.
Example: If the column with index 5 is in the 3th position in the file, then indices[5] = 0x80 + 0x03 = 0x83.
- The rest of the file is a repeat of the SSZ encoded data columns sidecars.
- The rest of the file is a repeat of the SSZ encoded data column sidecars.
|------------------------------------------|------------------------------------------------------------------------------------|
@@ -75,7 +75,7 @@ data-columns
Computation of the maximum size of a DataColumnSidecar
------------------------------------------------------
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#datacolumnsidecar
https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#datacolumnsidecar
class DataColumnSidecar(Container):

View File

@@ -236,7 +236,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.finalizedStateAtStartUp = nil
if features.Get().EnableLightClient {
beacon.lcStore = lightclient.NewLightClientStore(beacon.db)
beacon.lcStore = lightclient.NewLightClientStore(beacon.db, beacon.fetchP2P(), beacon.StateFeed())
}
return beacon, nil
@@ -699,6 +699,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
Discv5BootStrapAddrs: p2p.ParseBootStrapAddrs(bootstrapNodeAddrs),
RelayNodeAddr: cliCtx.String(cmd.RelayNode.Name),
DataDir: dataDir,
DiscoveryDir: filepath.Join(dataDir, "discovery"),
LocalIP: cliCtx.String(cmd.P2PIP.Name),
HostAddress: cliCtx.String(cmd.P2PHost.Name),
HostDNS: cliCtx.String(cmd.P2PHostDNS.Name),

View File

@@ -104,6 +104,7 @@ go_library(
"@com_github_libp2p_go_mplex//:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -146,7 +147,6 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
@@ -173,7 +173,6 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
@@ -24,7 +23,6 @@ import (
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
testpb "github.com/OffchainLabs/prysm/v6/proto/testing"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -546,8 +544,7 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
}),
}
l := util.NewTestLightClient(t, version.Altair)
msg, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
msg, err := util.MockOptimisticUpdate()
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
@@ -613,8 +610,7 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
}),
}
l := util.NewTestLightClient(t, version.Altair)
msg, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
msg, err := util.MockFinalityUpdate()
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat

View File

@@ -27,6 +27,7 @@ type Config struct {
HostDNS string
PrivateKey string
DataDir string
DiscoveryDir string
MetaDataDir string
QUICPort uint
TCPPort uint

View File

@@ -25,6 +25,7 @@ import (
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/sirupsen/logrus"
)
type (
@@ -544,7 +545,7 @@ func (s *Service) createLocalNode(
ipAddr net.IP,
udpPort, tcpPort, quicPort int,
) (*enode.LocalNode, error) {
db, err := enode.OpenDB("")
db, err := enode.OpenDB(s.cfg.DiscoveryDir)
if err != nil {
return nil, errors.Wrap(err, "could not open node's peer database")
}
@@ -603,7 +604,10 @@ func (s *Service) createLocalNode(
localNode.SetFallbackIP(firstIP)
}
}
log.WithFields(logrus.Fields{
"seq": localNode.Seq(),
"id": localNode.ID(),
}).Debug("Local node created")
return localNode, nil
}
@@ -619,7 +623,11 @@ func (s *Service) startDiscoveryV5(
return nil, errors.Wrap(err, "could not create listener")
}
record := wrappedListener.Self()
log.WithField("ENR", record.String()).Info("Started discovery v5")
log.WithFields(logrus.Fields{
"ENR": record.String(),
"seq": record.Seq(),
}).Info("Started discovery v5")
return wrappedListener, nil
}

View File

@@ -264,6 +264,7 @@ func TestRebootDiscoveryListener(t *testing.T) {
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
@@ -293,6 +294,7 @@ func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
s := &Service{
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{},
}
node, err := s.createLocalNode(pkey, addr, 0, 0, 0)
require.NoError(t, err)
@@ -495,6 +497,35 @@ func TestMultipleDiscoveryAddresses(t *testing.T) {
assert.Equal(t, true, ipv6Found, "IPv6 discovery address not found")
}
func TestDiscoveryV5_SeqNumber(t *testing.T) {
db, err := enode.OpenDB(t.TempDir())
require.NoError(t, err)
_, key := createAddrAndPrivKey(t)
node := enode.NewLocalNode(db, key)
node.Set(enr.IPv4{127, 0, 0, 1})
currentSeq := node.Seq()
s := &Service{dv5Listener: mockListener{localNode: node}}
_, err = s.DiscoveryAddresses()
require.NoError(t, err)
newSeq := node.Seq()
require.Equal(t, currentSeq+1, newSeq) // node seq should increase when discovery starts
// see that the keys changing, will change the node seq
_, keyTwo := createAddrAndPrivKey(t)
nodeTwo := enode.NewLocalNode(db, keyTwo) // use the same db with different key
nodeTwo.Set(enr.IPv6{0x20, 0x01, 0x48, 0x60, 0, 0, 0x20, 0x01, 0, 0, 0, 0, 0, 0, 0x00, 0x68})
seqTwo := nodeTwo.Seq()
assert.NotEqual(t, seqTwo, newSeq)
sTwo := &Service{dv5Listener: mockListener{localNode: nodeTwo}}
_, err = sTwo.DiscoveryAddresses()
require.NoError(t, err)
assert.Equal(t, seqTwo+1, nodeTwo.Seq())
// verify that reloading the node with the same key and db yields the same seq number
nodeThree := enode.NewLocalNode(db, key)
assert.Equal(t, node.Seq(), nodeThree.Seq())
}
func TestCorrectUDPVersion(t *testing.T) {
assert.Equal(t, udp4, udpVersionFromIP(net.IPv4zero), "incorrect network version")
assert.Equal(t, udp6, udpVersionFromIP(net.IPv6zero), "incorrect network version")

View File

@@ -1,7 +1,7 @@
/*
Package p2p implements the Ethereum consensus networking specification.
Canonical spec reference: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md
Canonical spec reference: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md
Prysm specific implementation design docs
- Networking Design Doc: https://docs.google.com/document/d/1VyhobQRkEjEkEPxmmdWvaHfKWn0j6dEae_wLZlrFtfU/view

View File

@@ -13,6 +13,7 @@ import (
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -79,8 +80,8 @@ func (s *Service) disconnectFromPeerOnError(
// and validating the response from the peer.
func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Context, id peer.ID) error) {
// Peer map and lock to keep track of current connection attempts.
var peerLock sync.Mutex
peerMap := make(map[peer.ID]bool)
peerLock := new(sync.Mutex)
// This is run at the start of each connection attempt, to ensure
// that there aren't multiple inflight connection requests for the
@@ -108,6 +109,19 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
s.host.Network().Notify(&network.NotifyBundle{
ConnectedF: func(_ network.Network, conn network.Conn) {
remotePeer := conn.RemotePeer()
log := log.WithField("peer", remotePeer)
direction := conn.Stat().Direction
// For some reason, this `ConnectedF` callback is called right after a
// disconnection. We want to avoid processing such a connection if the peer
// was disconnected too recently and we are the ones who initiated the new
// connection. This is very likely a bug in libp2p.
if direction == network.DirOutbound {
if err := s.wasDisconnectedTooRecently(remotePeer); err != nil {
log.WithError(err).Debug("Skipping connection handler")
return
}
}
// Connection handler must be non-blocking as part of libp2p design.
go func() {
@@ -133,53 +147,56 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
return
}
// Do not perform handshake on inbound dials.
if conn.Stat().Direction == network.DirInbound {
_, err := s.peers.ChainState(remotePeer)
peerExists := err == nil
currentTime := prysmTime.Now()
// Wait for peer to initiate handshake
time.Sleep(timeForStatus)
// Exit if we are disconnected from the peer.
if s.host.Network().Connectedness(remotePeer) != network.Connected {
if direction != network.DirInbound {
s.peers.SetConnectionState(conn.RemotePeer(), peers.Connecting)
if err := reqFunc(context.TODO(), conn.RemotePeer()); err != nil && !errors.Is(err, io.EOF) {
s.disconnectFromPeerOnError(conn, goodByeFunc, err)
return
}
// If the peer hasn't sent a status request, we disconnect from them
if _, err := s.peers.ChainState(remotePeer); errors.Is(err, peerdata.ErrPeerUnknown) || errors.Is(err, peerdata.ErrNoPeerStatus) {
statusMessageMissing.Inc()
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state"))
return
}
if peerExists {
updated, err := s.peers.ChainStateLastUpdated(remotePeer)
if err != nil {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state last updated"))
return
}
// Exit if we don't receive any current status messages from the peer.
if updated.IsZero() {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("is zero"))
return
}
if updated.Before(currentTime) {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("did not update"))
return
}
}
s.connectToPeer(conn)
return
}
s.peers.SetConnectionState(conn.RemotePeer(), peers.Connecting)
if err := reqFunc(context.TODO(), conn.RemotePeer()); err != nil && !errors.Is(err, io.EOF) {
s.disconnectFromPeerOnError(conn, goodByeFunc, err)
// The connection is inbound.
_, err = s.peers.ChainState(remotePeer)
peerExists := err == nil
currentTime := prysmTime.Now()
// Wait for peer to initiate handshake
time.Sleep(timeForStatus)
// Exit if we are disconnected from the peer.
if s.host.Network().Connectedness(remotePeer) != network.Connected {
return
}
// If the peer hasn't sent a status request, we disconnect from them
if _, err := s.peers.ChainState(remotePeer); errors.Is(err, peerdata.ErrPeerUnknown) || errors.Is(err, peerdata.ErrNoPeerStatus) {
statusMessageMissing.Inc()
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state"))
return
}
if !peerExists {
s.connectToPeer(conn)
return
}
updated, err := s.peers.ChainStateLastUpdated(remotePeer)
if err != nil {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state last updated"))
return
}
// Exit if we don't receive any current status messages from the peer.
if updated.IsZero() {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("is zero"))
return
}
if updated.Before(currentTime) {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("did not update"))
return
}
@@ -220,6 +237,12 @@ func (s *Service) AddDisconnectionHandler(handler func(ctx context.Context, id p
}
s.peers.SetConnectionState(peerID, peers.Disconnected)
if err := s.peerDisconnectionTime.Add(peerID.String(), time.Now(), cache.DefaultExpiration); err != nil {
// The `DisconnectedF` function was already called for this peer less than `cache.DefaultExpiration` ago. Skip.
// (Very probably a bug in libp2p.)
log.WithError(err).Trace("Failed to set peer disconnection time")
return
}
// Only log disconnections if we were fully connected.
if priorState == peers.Connected {
@@ -231,6 +254,28 @@ func (s *Service) AddDisconnectionHandler(handler func(ctx context.Context, id p
})
}
// wasDisconnectedTooRecently checks if the peer was disconnected within the last second.
func (s *Service) wasDisconnectedTooRecently(peerID peer.ID) error {
const disconnectionDurationThreshold = 1 * time.Second
peerDisconnectionTimeObj, ok := s.peerDisconnectionTime.Get(peerID.String())
if !ok {
return nil
}
peerDisconnectionTime, ok := peerDisconnectionTimeObj.(time.Time)
if !ok {
return errors.New("invalid peer disconnection time type")
}
timeSinceDisconnection := time.Since(peerDisconnectionTime)
if timeSinceDisconnection < disconnectionDurationThreshold {
return errors.Errorf("peer %s was disconnected too recently: %s", peerID, timeSinceDisconnection)
}
return nil
}
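The guard above leans on patrickmn/go-cache: Add fails when a non-expired entry already exists, and Get treats expired entries as missing. A small self-contained sketch of the same pattern, using plain strings instead of peer.ID and the same illustrative one-second threshold:

package main

import (
	"fmt"
	"time"

	gocache "github.com/patrickmn/go-cache"
)

type guard struct {
	disconnections *gocache.Cache
}

func newGuard() *guard {
	// Entries expire after one second; the janitor sweeps every minute.
	return &guard{disconnections: gocache.New(1*time.Second, 1*time.Minute)}
}

func (g *guard) recordDisconnect(id string) {
	// Add returns an error when a non-expired entry already exists, which is
	// how the duplicate DisconnectedF callback gets skipped above.
	_ = g.disconnections.Add(id, time.Now(), gocache.DefaultExpiration)
}

func (g *guard) disconnectedTooRecently(id string) bool {
	v, ok := g.disconnections.Get(id)
	if !ok {
		return false
	}
	t, ok := v.(time.Time)
	return ok && time.Since(t) < time.Second
}

func main() {
	g := newGuard()
	g.recordDisconnect("peer-a")
	fmt.Println(g.disconnectedTooRecently("peer-a")) // true right after the disconnect
	time.Sleep(1100 * time.Millisecond)
	fmt.Println(g.disconnectedTooRecently("peer-a")) // false once the entry has expired
}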
func agentString(pid peer.ID, hst host.Host) string {
rawVersion, storeErr := hst.Peerstore().Get(pid, agentVersionKey)

View File

@@ -101,21 +101,24 @@ func (s *BadResponsesScorer) countNoLock(pid peer.ID) (int, error) {
// Increment increments the number of bad responses we have received from the given remote peer
// and returns the updated count. If the peer ID is empty, this method is a no-op that returns 0.
func (s *BadResponsesScorer) Increment(pid peer.ID) {
func (s *BadResponsesScorer) Increment(pid peer.ID) int {
if pid == "" {
return
return 0
}
s.store.Lock()
defer s.store.Unlock()
peerData, ok := s.store.PeerData(pid)
if !ok {
s.store.SetPeerData(pid, &peerdata.PeerData{
BadResponses: 1,
})
return
if ok {
peerData.BadResponses++
return peerData.BadResponses
}
peerData.BadResponses++
const badResponses = 1
peerData = &peerdata.PeerData{BadResponses: badResponses}
s.store.SetPeerData(pid, peerData)
return badResponses
}
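Returning the new count from Increment lets callers act on the score without taking the store lock a second time (see downscorePeer further down in this diff). A minimal sketch of that calling pattern with an illustrative threshold, not Prysm's configured one:

package main

import (
	"fmt"
	"sync"
)

type scorer struct {
	mu     sync.Mutex
	counts map[string]int
}

// Increment bumps the bad-response count for a peer and returns the new value.
func (s *scorer) Increment(id string) int {
	if id == "" {
		return 0
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[id]++
	return s.counts[id]
}

func main() {
	s := &scorer{counts: map[string]int{}}
	const threshold = 6 // illustrative only
	for i := 0; i < 7; i++ {
		if n := s.Increment("peer-a"); n >= threshold {
			fmt.Printf("peer-a has %d bad responses; a caller could log or disconnect here\n", n)
		}
	}
}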
// IsBadPeer states if the peer is to be considered bad.

View File

@@ -31,6 +31,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/multiformats/go-multiaddr"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -86,6 +87,7 @@ type Service struct {
genesisTime time.Time
genesisValidatorsRoot []byte
activeValidatorCount uint64
peerDisconnectionTime *cache.Cache
}
// NewService initializes a new p2p service compatible with shared.Service interface. No
@@ -115,16 +117,17 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ipLimiter := leakybucket.NewCollector(ipLimit, ipBurst, 30*time.Second, true /* deleteEmptyBuckets */)
s := &Service{
ctx: ctx,
cancel: cancel,
cfg: cfg,
addrFilter: addrFilter,
ipLimiter: ipLimiter,
privKey: privKey,
metaData: metaData,
isPreGenesis: true,
joinedTopics: make(map[string]*pubsub.Topic, len(gossipTopicMappings)),
subnetsLock: make(map[uint64]*sync.RWMutex),
ctx: ctx,
cancel: cancel,
cfg: cfg,
addrFilter: addrFilter,
ipLimiter: ipLimiter,
privKey: privKey,
metaData: metaData,
isPreGenesis: true,
joinedTopics: make(map[string]*pubsub.Topic, len(gossipTopicMappings)),
subnetsLock: make(map[uint64]*sync.RWMutex),
peerDisconnectionTime: cache.New(1*time.Second, 1*time.Minute),
}
ipAddr := prysmnetwork.IPAddr()
@@ -408,7 +411,10 @@ func (s *Service) pingPeersAndLogEnr() {
defer s.pingMethodLock.RUnlock()
localENR := s.dv5Listener.Self()
log.WithField("ENR", localENR).Info("New node record")
log.WithFields(logrus.Fields{
"ENR": localENR,
"seq": localENR.Seq(),
}).Info("New node record")
if s.pingMethod == nil {
return
@@ -486,14 +492,17 @@ func (s *Service) connectWithPeer(ctx context.Context, info peer.AddrInfo) error
if info.ID == s.host.ID() {
return nil
}
if err := s.Peers().IsBad(info.ID); err != nil {
return errors.Wrap(err, "refused to connect to bad peer")
return errors.Wrap(err, "bad peer")
}
ctx, cancel := context.WithTimeout(ctx, maxDialTimeout)
defer cancel()
if err := s.host.Connect(ctx, info); err != nil {
s.Peers().Scorers().BadResponsesScorer().Increment(info.ID)
return err
s.downscorePeer(info.ID, "connectionError")
return errors.Wrap(err, "peer connect")
}
return nil
}
@@ -524,3 +533,8 @@ func (s *Service) connectToBootnodes() error {
func (s *Service) isInitialized() bool {
return !s.genesisTime.IsZero() && len(s.genesisValidatorsRoot) == 32
}
func (s *Service) downscorePeer(peerID peer.ID, reason string) {
newScore := s.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
}

View File

@@ -403,7 +403,7 @@ func TestService_connectWithPeer(t *testing.T) {
return ps
}(),
info: peer.AddrInfo{ID: "bad"},
wantErr: "refused to connect to bad peer",
wantErr: "bad peer",
},
}
for _, tt := range tests {

View File

@@ -181,6 +181,11 @@ func (s *Service) findPeersWithSubnets(
// Get all needed subnets that the node is subscribed to.
// Skip nodes that are not subscribed to any of the defective subnets.
node := iterator.Node()
if !s.filterPeer(node) {
continue
}
nodeSubnets, err := filter(node)
if err != nil {
return nil, errors.Wrap(err, "filter node")

View File

@@ -985,6 +985,16 @@ func (s *Service) beaconEndpoints(
handler: server.GetPendingPartialWithdrawals,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/beacon/states/{state_id}/proposer_lookahead",
name: namespace + ".GetProposerLookahead",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.GetProposerLookahead,
methods: []string{http.MethodGet},
},
}
}

View File

@@ -32,6 +32,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/states/{state_id}/pending_deposits": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/pending_partial_withdrawals": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/pending_consolidations": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/proposer_lookahead": {http.MethodGet},
"/eth/v1/beacon/headers": {http.MethodGet},
"/eth/v1/beacon/headers/{block_id}": {http.MethodGet},
"/eth/v1/beacon/blinded_blocks": {http.MethodPost},

View File

@@ -8,6 +8,7 @@ go_library(
"handlers_state.go",
"handlers_validator.go",
"log.go",
"metrics.go",
"server.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/beacon",
@@ -61,6 +62,8 @@ go_library(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -123,6 +126,7 @@ go_test(
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_stretchr_testify//mock:go_default_library",

View File

@@ -9,6 +9,7 @@ import (
"net/http"
"strconv"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
@@ -658,6 +659,12 @@ func (s *Server) PublishBlock(w http.ResponseWriter, r *http.Request) {
// broadcast all given signed blobs. The broadcast behaviour may be adjusted via the
// `broadcast_validation` query parameter.
func (s *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
start := time.Now()
defer func() {
duration := time.Since(start).Milliseconds()
publishBlockV2Duration.Observe(float64(duration))
}()
ctx, span := trace.StartSpan(r.Context(), "beacon.PublishBlockV2")
defer span.End()
if shared.IsSyncing(r.Context(), w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
@@ -1790,6 +1797,63 @@ func (s *Server) GetPendingPartialWithdrawals(w http.ResponseWriter, r *http.Req
}
}
func (s *Server) GetProposerLookahead(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetProposerLookahead")
defer span.End()
stateId := r.PathValue("state_id")
if stateId == "" {
httputil.HandleError(w, "state_id is required in URL params", http.StatusBadRequest)
return
}
st, err := s.Stater.State(ctx, []byte(stateId))
if err != nil {
shared.WriteStateFetchError(w, err)
return
}
if st.Version() < version.Fulu {
httputil.HandleError(w, "state_id is prior to fulu", http.StatusBadRequest)
return
}
pl, err := st.ProposerLookahead()
if err != nil {
httputil.HandleError(w, "Could not get proposer look ahead: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(st.Version()))
if httputil.RespondWithSsz(r) {
sszLen := (*primitives.ValidatorIndex)(nil).SizeSSZ()
sszData := make([]byte, len(pl)*sszLen)
for i, idx := range pl {
copy(sszData[i*sszLen:(i+1)*sszLen], ssz.MarshalUint64([]byte{}, uint64(idx)))
}
httputil.WriteSsz(w, sszData)
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not calculate root of latest block header: "+err.Error(), http.StatusInternalServerError)
return
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
vi := make([]string, len(pl))
for i, v := range pl {
vi[i] = strconv.FormatUint(uint64(v), 10)
}
resp := structs.GetProposerLookaheadResponse{
Version: version.String(st.Version()),
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
Data: vi,
}
httputil.WriteJson(w, resp)
}
}
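On the SSZ path, the handler above emits one 8-byte little-endian uint64 per lookahead slot. A hedged client-side sketch of decoding that response; the localhost:3500 address assumes a locally running beacon node on the default HTTP port and is illustrative only:

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:3500/eth/v1/beacon/states/head/proposer_lookahead", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/octet-stream") // request the SSZ encoding

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if len(body)%8 != 0 {
		panic("unexpected SSZ payload length")
	}
	// Each validator index is a fixed-size 8-byte little-endian integer.
	indices := make([]uint64, 0, len(body)/8)
	for off := 0; off < len(body); off += 8 {
		indices = append(indices, binary.LittleEndian.Uint64(body[off:off+8]))
	}
	fmt.Println("proposer lookahead:", indices)
}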
// serializeItems serializes a slice of items, each of which implements the MarshalSSZ method,
// into a single byte array.
func serializeItems[T interface{ MarshalSSZ() ([]byte, error) }](items []T) ([]byte, error) {

View File

@@ -1103,9 +1103,9 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(msgsInPool))
assert.Equal(t, primitives.Slot(1), msgsInPool[0].Slot)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(msgsInPool[0].BlockRoot))
assert.Equal(t, "0xbacd20f09da907734434f052bd4c9503aa16bab1960e89ea20610d08d064481c", hexutil.Encode(msgsInPool[0].BlockRoot))
assert.Equal(t, primitives.ValidatorIndex(1), msgsInPool[0].ValidatorIndex)
assert.Equal(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505", hexutil.Encode(msgsInPool[0].Signature))
assert.Equal(t, "0xb591bd4ca7d745b6e027879645d7c014fecb8c58631af070f7607acc0c1c948a5102a33267f0e4ba41a85b254b07df91185274375b2e6436e37e81d2fd46cb3751f5a6c86efb7499c1796c0c17e122a54ac067bb0f5ff41f3241659cceb0c21c", hexutil.Encode(msgsInPool[0].Signature))
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
})
t.Run("multiple", func(t *testing.T) {
@@ -2497,23 +2497,23 @@ var (
singleSyncCommitteeMsg = `[
{
"slot": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"beacon_block_root": "0xbacd20f09da907734434f052bd4c9503aa16bab1960e89ea20610d08d064481c",
"validator_index": "1",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
"signature": "0xb591bd4ca7d745b6e027879645d7c014fecb8c58631af070f7607acc0c1c948a5102a33267f0e4ba41a85b254b07df91185274375b2e6436e37e81d2fd46cb3751f5a6c86efb7499c1796c0c17e122a54ac067bb0f5ff41f3241659cceb0c21c"
}
]`
multipleSyncCommitteeMsg = `[
{
"slot": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"beacon_block_root": "0xbacd20f09da907734434f052bd4c9503aa16bab1960e89ea20610d08d064481c",
"validator_index": "1",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
"signature": "0xb591bd4ca7d745b6e027879645d7c014fecb8c58631af070f7607acc0c1c948a5102a33267f0e4ba41a85b254b07df91185274375b2e6436e37e81d2fd46cb3751f5a6c86efb7499c1796c0c17e122a54ac067bb0f5ff41f3241659cceb0c21c"
},
{
"slot": "2",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"beacon_block_root": "0x2757f6fd8590925cd000a86a3e543f98a93eae23781783a33e34504729a8ad0c",
"validator_index": "1",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
"signature": "0x99dfe11b6c8b306d2c72eb891926d37922d226ea8e1e7484d6c30fab746494f192b0daa3e40c13f1e335b35238f3362c113455a329b1fab0bc500bc47f643786f49e151d5b5052afb51af57ba5aa34a6051dc90ee4de83a26eb54a895061d89a"
}
]`
// signature is invalid
@@ -2523,6 +2523,18 @@ var (
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"validator_index": "1",
"signature": "foo"
},
{
"slot": "1121",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"validator_index": "1",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
{
"slot": "1121",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"validator_index": "2",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
]`
// signatures are invalid

View File

@@ -43,6 +43,7 @@ import (
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/stretchr/testify/mock"
@@ -5363,3 +5364,190 @@ func TestGetPendingPartialWithdrawals(t *testing.T) {
require.Equal(t, true, resp.Finalized)
})
}
func TestGetProposerLookahead(t *testing.T) {
numValidators := 50
// Create a Fulu state with proposer lookahead data
st, _ := util.DeterministicGenesisStateFulu(t, uint64(numValidators))
lookaheadSize := int(params.BeaconConfig().MinSeedLookahead+1) * int(params.BeaconConfig().SlotsPerEpoch)
lookahead := make([]primitives.ValidatorIndex, lookaheadSize)
for i := 0; i < lookaheadSize; i++ {
lookahead[i] = primitives.ValidatorIndex(i % numValidators) // Cycle through validators
}
require.NoError(t, st.SetProposerLookahead(lookahead))
chainService := &chainMock.ChainService{
Optimistic: false,
FinalizedRoots: map[[32]byte]bool{},
}
server := &Server{
Stater: &testutil.MockStater{
BeaconState: st,
},
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
}
t.Run("json response", func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
server.GetProposerLookahead(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
require.Equal(t, "fulu", rec.Header().Get(api.VersionHeader))
var resp structs.GetProposerLookaheadResponse
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
expectedVersion := version.String(st.Version())
require.Equal(t, expectedVersion, resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
// Verify the data
require.Equal(t, lookaheadSize, len(resp.Data))
for i := 0; i < lookaheadSize; i++ {
expectedIdx := strconv.FormatUint(uint64(i%numValidators), 10)
require.Equal(t, expectedIdx, resp.Data[i])
}
})
t.Run("ssz response", func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
req.Header.Set("Accept", "application/octet-stream")
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
server.GetProposerLookahead(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
require.Equal(t, "fulu", rec.Header().Get(api.VersionHeader))
responseBytes := rec.Body.Bytes()
validatorIndexSize := (*primitives.ValidatorIndex)(nil).SizeSSZ()
require.Equal(t, len(responseBytes), validatorIndexSize*lookaheadSize)
recoveredIndices := make([]primitives.ValidatorIndex, lookaheadSize)
for i := 0; i < lookaheadSize; i++ {
start := i * validatorIndexSize
end := start + validatorIndexSize
idx := ssz.UnmarshallUint64(responseBytes[start:end])
recoveredIndices[i] = primitives.ValidatorIndex(idx)
}
require.DeepEqual(t, lookahead, recoveredIndices)
})
t.Run("pre fulu state", func(t *testing.T) {
preEplusSt, _ := util.DeterministicGenesisStateElectra(t, 1)
preFuluServer := &Server{
Stater: &testutil.MockStater{
BeaconState: preEplusSt,
},
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
}
// Test JSON request
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
preFuluServer.GetProposerLookahead(rec, req)
require.Equal(t, http.StatusBadRequest, rec.Code)
var errResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &errResp))
require.Equal(t, "state_id is prior to fulu", errResp.Message)
// Test SSZ request
sszReq := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
sszReq.Header.Set("Accept", "application/octet-stream")
sszReq.SetPathValue("state_id", "head")
sszRec := httptest.NewRecorder()
sszRec.Body = new(bytes.Buffer)
preFuluServer.GetProposerLookahead(sszRec, sszReq)
require.Equal(t, http.StatusBadRequest, sszRec.Code)
var sszErrResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
require.NoError(t, json.Unmarshal(sszRec.Body.Bytes(), &sszErrResp))
require.Equal(t, "state_id is prior to fulu", sszErrResp.Message)
})
t.Run("missing state_id parameter", func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
server.GetProposerLookahead(rec, req)
require.Equal(t, http.StatusBadRequest, rec.Code)
var errResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &errResp))
require.Equal(t, "state_id is required in URL params", errResp.Message)
})
t.Run("optimistic node", func(t *testing.T) {
optimisticChainService := &chainMock.ChainService{
Optimistic: true,
FinalizedRoots: map[[32]byte]bool{},
}
optimisticServer := &Server{
Stater: server.Stater,
OptimisticModeFetcher: optimisticChainService,
FinalizationFetcher: optimisticChainService,
}
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
optimisticServer.GetProposerLookahead(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var resp structs.GetProposerLookaheadResponse
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Equal(t, true, resp.ExecutionOptimistic)
})
t.Run("finalized node", func(t *testing.T) {
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
finalizedChainService := &chainMock.ChainService{
Optimistic: false,
FinalizedRoots: map[[32]byte]bool{blockRoot: true},
}
finalizedServer := &Server{
Stater: server.Stater,
OptimisticModeFetcher: finalizedChainService,
FinalizationFetcher: finalizedChainService,
}
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/proposer_lookahead", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
finalizedServer.GetProposerLookahead(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var resp structs.GetProposerLookaheadResponse
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Equal(t, true, resp.Finalized)
})
}

View File

@@ -0,0 +1,16 @@
package beacon
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
publishBlockV2Duration = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "publish_block_v2_duration_milliseconds",
Help: "Duration of publishBlockV2 endpoint processing in milliseconds",
Buckets: []float64{1, 5, 20, 100, 500, 1000, 2000, 5000},
},
)
)

View File

@@ -12,6 +12,7 @@ go_library(
"//network/forks:go_default_library",
"//network/httputil:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -2,6 +2,7 @@ package config
import (
"fmt"
"math"
"net/http"
"reflect"
"strconv"
@@ -13,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/network/httputil"
"github.com/ethereum/go-ethereum/common/hexutil"
log "github.com/sirupsen/logrus"
)
// GetDepositContract retrieves deposit contract address and genesis fork version.
@@ -80,39 +82,108 @@ func GetSpec(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, &structs.GetSpecResponse{Data: data})
}
func prepareConfigSpec() (map[string]string, error) {
data := make(map[string]string)
func convertValueForJSON(v reflect.Value, tag string) interface{} {
// Unwrap pointers / interfaces
for v.Kind() == reflect.Interface || v.Kind() == reflect.Ptr {
if v.IsNil() {
return nil
}
v = v.Elem()
}
switch v.Kind() {
// ===== Single byte → 0xAB =====
case reflect.Uint8:
return hexutil.Encode([]byte{uint8(v.Uint())})
// ===== Other unsigned numbers → "123" =====
case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return strconv.FormatUint(v.Uint(), 10)
// ===== Signed numbers → "123" =====
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return strconv.FormatInt(v.Int(), 10)
// ===== Raw bytes encode to hex =====
case reflect.Slice:
if v.Type().Elem().Kind() == reflect.Uint8 {
return hexutil.Encode(v.Bytes())
}
fallthrough
case reflect.Array:
if v.Type().Elem().Kind() == reflect.Uint8 {
// Need a copy because v.Slice is illegal on arrays directly
tmp := make([]byte, v.Len())
reflect.Copy(reflect.ValueOf(tmp), v)
return hexutil.Encode(tmp)
}
// Generic slice/array handling
n := v.Len()
out := make([]interface{}, n)
for i := 0; i < n; i++ {
out[i] = convertValueForJSON(v.Index(i), tag)
}
return out
// ===== Struct =====
case reflect.Struct:
t := v.Type()
m := make(map[string]interface{}, v.NumField())
for i := 0; i < v.NumField(); i++ {
f := t.Field(i)
if !v.Field(i).CanInterface() {
continue // unexported
}
key := f.Tag.Get("json")
if key == "" || key == "-" {
key = f.Name
}
m[key] = convertValueForJSON(v.Field(i), tag)
}
return m
// ===== Default =====
default:
log.WithFields(log.Fields{
"fn": "prepareConfigSpec",
"tag": tag,
"kind": v.Kind().String(),
"type": v.Type().String(),
}).Error("Unsupported config field kind; value forwarded verbatim")
return v.Interface()
}
}
func prepareConfigSpec() (map[string]interface{}, error) {
data := make(map[string]interface{})
config := *params.BeaconConfig()
t := reflect.TypeOf(config)
v := reflect.ValueOf(config)
for i := 0; i < t.NumField(); i++ {
tField := t.Field(i)
_, isSpecField := tField.Tag.Lookup("spec")
if !isSpecField {
// Field should not be returned from API.
_, isSpec := tField.Tag.Lookup("spec")
if !isSpec {
continue
}
if shouldSkip(tField) {
continue
}
tagValue := strings.ToUpper(tField.Tag.Get("yaml"))
vField := v.Field(i)
switch vField.Kind() {
case reflect.Int:
data[tagValue] = strconv.FormatInt(vField.Int(), 10)
case reflect.Uint64:
data[tagValue] = strconv.FormatUint(vField.Uint(), 10)
case reflect.Slice:
data[tagValue] = hexutil.Encode(vField.Bytes())
case reflect.Array:
data[tagValue] = hexutil.Encode(reflect.ValueOf(&config).Elem().Field(i).Slice(0, vField.Len()).Bytes())
case reflect.String:
data[tagValue] = vField.String()
case reflect.Uint8:
data[tagValue] = hexutil.Encode([]byte{uint8(vField.Uint())})
default:
return nil, fmt.Errorf("unsupported config field type: %s", vField.Kind().String())
}
tag := strings.ToUpper(tField.Tag.Get("yaml"))
val := v.Field(i)
data[tag] = convertValueForJSON(val, tag)
}
return data, nil
}
func shouldSkip(tField reflect.StructField) bool {
// Dynamically skip blob schedule if Fulu is not yet scheduled.
if params.BeaconConfig().FuluForkEpoch == math.MaxUint64 &&
tField.Type == reflect.TypeOf(params.BeaconConfig().BlobSchedule) {
return true
}
return false
}
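As a concrete illustration of the rules in convertValueForJSON, the toy sketch below runs a hypothetical blob-schedule-like slice through the same decimal-string and struct-to-map conventions; the entry type and tags are illustrative, not the real params types:

package main

import (
	"encoding/json"
	"fmt"
	"reflect"
	"strconv"
)

type entry struct {
	Epoch            uint64 `json:"EPOCH"`
	MaxBlobsPerBlock uint64 `json:"MAX_BLOBS_PER_BLOCK"`
}

// convert mirrors a subset of the rules above: unsigned numbers become decimal
// strings, slices recurse element-wise, and structs become maps keyed by json tag.
func convert(v reflect.Value) interface{} {
	switch v.Kind() {
	case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		return strconv.FormatUint(v.Uint(), 10)
	case reflect.Slice, reflect.Array:
		out := make([]interface{}, v.Len())
		for i := 0; i < v.Len(); i++ {
			out[i] = convert(v.Index(i))
		}
		return out
	case reflect.Struct:
		m := make(map[string]interface{}, v.NumField())
		for i := 0; i < v.NumField(); i++ {
			m[v.Type().Field(i).Tag.Get("json")] = convert(v.Field(i))
		}
		return m
	default:
		return v.Interface()
	}
}

func main() {
	schedule := []entry{{Epoch: 100, MaxBlobsPerBlock: 6}, {Epoch: 200, MaxBlobsPerBlock: 9}}
	b, _ := json.Marshal(convert(reflect.ValueOf(schedule)))
	fmt.Println(string(b)) // [{"EPOCH":"100","MAX_BLOBS_PER_BLOCK":"6"},{"EPOCH":"200","MAX_BLOBS_PER_BLOCK":"9"}]
}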

View File

@@ -5,6 +5,7 @@ import (
"encoding/hex"
"encoding/json"
"fmt"
"math"
"net/http"
"net/http/httptest"
"testing"
@@ -200,7 +201,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 175, len(data))
assert.Equal(t, 176, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -577,6 +578,11 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "102", v)
case "BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":
assert.Equal(t, "103", v)
case "BLOB_SCHEDULE":
// BLOB_SCHEDULE should be an empty slice when no schedule is defined
blobSchedule, ok := v.([]interface{})
assert.Equal(t, true, ok)
assert.Equal(t, 0, len(blobSchedule))
default:
t.Errorf("Incorrect key: %s", k)
}
@@ -637,3 +643,86 @@ func TestForkSchedule_Ok(t *testing.T) {
assert.Equal(t, os.Len(), len(resp.Data))
})
}
func TestGetSpec_BlobSchedule(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig().Copy()
config.FuluForkEpoch = 1
// Set up a blob schedule with test data
config.BlobSchedule = []params.BlobScheduleEntry{
{
Epoch: primitives.Epoch(100),
MaxBlobsPerBlock: 6,
},
{
Epoch: primitives.Epoch(200),
MaxBlobsPerBlock: 9,
},
}
params.OverrideBeaconConfig(config)
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/config/spec", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
GetSpec(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := structs.GetSpecResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
// Verify BLOB_SCHEDULE is present and properly formatted
blobScheduleValue, exists := data["BLOB_SCHEDULE"]
require.Equal(t, true, exists)
// Verify it's a slice of maps (actual JSON object, not string)
// The JSON unmarshaling converts it to []interface{} with map[string]interface{} entries
blobScheduleSlice, ok := blobScheduleValue.([]interface{})
require.Equal(t, true, ok)
// Convert to generic interface for easier testing
var blobSchedule []map[string]interface{}
for _, entry := range blobScheduleSlice {
entryMap, ok := entry.(map[string]interface{})
require.Equal(t, true, ok)
blobSchedule = append(blobSchedule, entryMap)
}
// Verify the blob schedule content
require.Equal(t, 2, len(blobSchedule))
// Check first entry - values should be strings for consistent API output
assert.Equal(t, "100", blobSchedule[0]["EPOCH"])
assert.Equal(t, "6", blobSchedule[0]["MAX_BLOBS_PER_BLOCK"])
// Check second entry - values should be strings for consistent API output
assert.Equal(t, "200", blobSchedule[1]["EPOCH"])
assert.Equal(t, "9", blobSchedule[1]["MAX_BLOBS_PER_BLOCK"])
}
func TestGetSpec_BlobSchedule_NotFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig().Copy()
// Fulu not scheduled (default: math.MaxUint64)
config.FuluForkEpoch = math.MaxUint64
config.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: primitives.Epoch(100), MaxBlobsPerBlock: 6},
}
params.OverrideBeaconConfig(config)
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/config/spec", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
GetSpec(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := structs.GetSpecResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
_, exists := data["BLOB_SCHEDULE"]
require.Equal(t, false, exists)
}

View File

@@ -33,9 +33,11 @@ go_test(
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -11,9 +11,11 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
dbtesting "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2ptesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -53,7 +55,7 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
require.NoError(t, err)
db := dbtesting.SetupDB(t)
lcStore := lightclient.NewLightClientStore(db)
lcStore := lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed))
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
@@ -97,7 +99,7 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
require.NoError(t, err)
db := dbtesting.SetupDB(t)
lcStore := lightclient.NewLightClientStore(db)
lcStore := lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed))
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
@@ -141,7 +143,7 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
t.Run("no bootstrap found", func(t *testing.T) {
s := &Server{
LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t)),
LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t), &p2ptesting.FakeP2P{}, new(event.Feed)),
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode([]byte{0x00, 0x01, 0x02}))
@@ -184,7 +186,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
}
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
updatePeriod := startPeriod
@@ -325,7 +327,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
db := dbtesting.SetupDB(t)
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
updates := make([]interfaces.LightClientUpdate, 2)
@@ -445,7 +447,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
db := dbtesting.SetupDB(t)
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
updates := make([]interfaces.LightClientUpdate, 3)
@@ -492,7 +494,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
db := dbtesting.SetupDB(t)
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
updates := make([]interfaces.LightClientUpdate, 3)
@@ -536,7 +538,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("start period before altair", func(t *testing.T) {
db := dbtesting.SetupDB(t)
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
startPeriod := 0
url := fmt.Sprintf("http://foo.com/?count=128&start_period=%d", startPeriod)
@@ -559,7 +561,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("missing update in the middle", func(t *testing.T) {
db := dbtesting.SetupDB(t)
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
updates := make([]interfaces.LightClientUpdate, 3)
@@ -603,7 +605,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("missing update at the beginning", func(t *testing.T) {
db := dbtesting.SetupDB(t)
s := &Server{
LCStore: lightclient.NewLightClientStore(db),
LCStore: lightclient.NewLightClientStore(db, &p2ptesting.FakeP2P{}, new(event.Feed)),
}
updates := make([]interfaces.LightClientUpdate, 3)
@@ -663,8 +665,8 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
s := &Server{LCStore: &lightclient.Store{}}
s.LCStore.SetLastFinalityUpdate(update)
s := &Server{LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t), &p2ptesting.FakeP2P{}, new(event.Feed))}
s.LCStore.SetLastFinalityUpdate(update, false)
request := httptest.NewRequest("GET", "http://foo.com", nil)
writer := httptest.NewRecorder()
@@ -688,8 +690,8 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
s := &Server{LCStore: &lightclient.Store{}}
s.LCStore.SetLastFinalityUpdate(update)
s := &Server{LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t), &p2ptesting.FakeP2P{}, new(event.Feed))}
s.LCStore.SetLastFinalityUpdate(update, false)
request := httptest.NewRequest("GET", "http://foo.com", nil)
request.Header.Add("Accept", "application/octet-stream")
@@ -727,7 +729,7 @@ func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
helpers.ClearCache()
t.Run("no update", func(t *testing.T) {
s := &Server{LCStore: &lightclient.Store{}}
s := &Server{LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t), &p2ptesting.FakeP2P{}, new(event.Feed))}
request := httptest.NewRequest("GET", "http://foo.com", nil)
writer := httptest.NewRecorder()
@@ -743,8 +745,8 @@ func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
s := &Server{LCStore: &lightclient.Store{}}
s.LCStore.SetLastOptimisticUpdate(update)
s := &Server{LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t), &p2ptesting.FakeP2P{}, new(event.Feed))}
s.LCStore.SetLastOptimisticUpdate(update, false)
request := httptest.NewRequest("GET", "http://foo.com", nil)
writer := httptest.NewRecorder()
@@ -767,8 +769,8 @@ func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
s := &Server{LCStore: &lightclient.Store{}}
s.LCStore.SetLastOptimisticUpdate(update)
s := &Server{LCStore: lightclient.NewLightClientStore(dbtesting.SetupDB(t), &p2ptesting.FakeP2P{}, new(event.Feed))}
s.LCStore.SetLastOptimisticUpdate(update, false)
request := httptest.NewRequest("GET", "http://foo.com", nil)
request.Header.Add("Accept", "application/octet-stream")

View File

@@ -19,12 +19,14 @@ go_library(
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/params:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -47,6 +49,7 @@ go_test(
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//network/httputil:go_default_library",

View File

@@ -9,10 +9,12 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/httputil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/eth/v1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
)
@@ -75,17 +77,25 @@ func (s *Server) GetIdentity(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not obtain enr: "+err.Error(), http.StatusInternalServerError)
return
}
currentEpoch := slots.ToEpoch(s.GenesisTimeFetcher.CurrentSlot())
metadata := s.MetadataProvider.Metadata()
md := &structs.Metadata{
SeqNumber: strconv.FormatUint(s.MetadataProvider.MetadataSeq(), 10),
Attnets: hexutil.Encode(metadata.AttnetsBitfield()),
}
if currentEpoch >= params.BeaconConfig().AltairForkEpoch {
md.Syncnets = hexutil.Encode(metadata.SyncnetsBitfield())
}
if currentEpoch >= params.BeaconConfig().FuluForkEpoch {
md.Cgc = strconv.FormatUint(metadata.CustodyGroupCount(), 10)
}
resp := &structs.GetIdentityResponse{
Data: &structs.Identity{
PeerId: peerId,
Enr: "enr:" + serializedEnr,
P2PAddresses: p2pAddresses,
DiscoveryAddresses: discoveryAddresses,
Metadata: &structs.Metadata{
SeqNumber: strconv.FormatUint(s.MetadataProvider.MetadataSeq(), 10),
Attnets: hexutil.Encode(s.MetadataProvider.Metadata().AttnetsBitfield()),
},
Metadata: md,
},
}
httputil.WriteJson(w, resp)
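The fork gating above only fills Syncnets from Altair onwards and Cgc from Fulu onwards. A toy sketch with a stand-in struct (not the real structs.Metadata, and assuming omitempty-style tags) of how unset fields simply drop out of the JSON identity response:

package main

import (
	"encoding/json"
	"fmt"
)

type metadata struct {
	SeqNumber string `json:"seq_number"`
	Attnets   string `json:"attnets"`
	Syncnets  string `json:"syncnets,omitempty"` // left empty before Altair
	Cgc       string `json:"cgc,omitempty"`      // left empty before Fulu
}

func main() {
	prealtair := metadata{SeqNumber: "1", Attnets: "0x0200000000000000"}
	fulu := metadata{SeqNumber: "1", Attnets: "0x0200000000000000", Syncnets: "0x02", Cgc: "2"}
	for _, m := range []metadata{prealtair, fulu} {
		b, _ := json.Marshal(m)
		fmt.Println(string(b))
	}
}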

View File

@@ -15,6 +15,7 @@ import (
mockp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
syncmock "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
"github.com/OffchainLabs/prysm/v6/network/httputil"
@@ -144,7 +145,14 @@ func TestGetIdentity(t *testing.T) {
require.NoError(t, err)
attnets := bitfield.NewBitvector64()
attnets.SetBitAt(1, true)
metadataProvider := &mockp2p.MockMetadataProvider{Data: wrapper.WrappedMetadataV0(&pb.MetaDataV0{SeqNumber: 1, Attnets: attnets})}
syncnets := bitfield.NewBitvector4()
syncnets.SetBitAt(1, true)
metadataProvider := &mockp2p.MockMetadataProvider{Data: wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: 1,
Attnets: attnets,
Syncnets: syncnets,
CustodyGroupCount: 2,
})}
t.Run("OK", func(t *testing.T) {
peerManager := &mockp2p.MockPeerManager{
@@ -154,8 +162,9 @@ func TestGetIdentity(t *testing.T) {
DiscoveryAddr: []ma.Multiaddr{discAddr1, discAddr2},
}
s := &Server{
PeerManager: peerManager,
MetadataProvider: metadataProvider,
PeerManager: peerManager,
MetadataProvider: metadataProvider,
GenesisTimeFetcher: &mock.ChainService{},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/identity", nil)
@@ -187,6 +196,33 @@ func TestGetIdentity(t *testing.T) {
assert.Equal(t, discAddr1.String(), resp.Data.DiscoveryAddresses[0])
assert.Equal(t, discAddr2.String(), resp.Data.DiscoveryAddresses[1])
})
t.Run("OK Fulu", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
peerManager := &mockp2p.MockPeerManager{
Enr: enrRecord,
PID: "foo",
BHost: &mockp2p.MockHost{Addresses: []ma.Multiaddr{p2pAddr}},
DiscoveryAddr: []ma.Multiaddr{discAddr1, discAddr2},
}
s := &Server{
PeerManager: peerManager,
MetadataProvider: metadataProvider,
GenesisTimeFetcher: &mock.ChainService{},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/identity", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetIdentity(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetIdentityResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, "2", resp.Data.Metadata.Cgc)
})
t.Run("ENR failure", func(t *testing.T) {
peerManager := &mockp2p.MockPeerManager{

View File

@@ -118,8 +118,9 @@ func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *h
consensusBlockValue, httpError := getConsensusBlockValue(ctx, s.BlockRewardFetcher, v1alpha1resp.Block)
if httpError != nil {
log.WithError(httpError).Debug("Failed to get consensus block value")
// Having the consensus block value is not critical to block production
consensusBlockValue = ""
// Having the consensus block value is not critical to block production.
// We set it to zero to satisfy the specification, which requires a numeric value.
consensusBlockValue = "0"
}
w.Header().Set(api.ExecutionPayloadBlindedHeader, fmt.Sprintf("%v", v1alpha1resp.IsBlinded))

View File

@@ -15,6 +15,7 @@ import (
rewardtesting "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/rewards/testing"
rpctesting "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared/testing"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/network/httputil"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
mock2 "github.com/OffchainLabs/prysm/v6/testing/mock"
@@ -408,6 +409,82 @@ func TestProduceBlockV3(t *testing.T) {
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Fulu", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.ToUnsigned())
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"fulu","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Fulu", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockFulu
err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.Message)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.Message.ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"fulu","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("invalid query parameter slot empty", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
server := &Server{
@@ -461,6 +538,46 @@ func TestProduceBlockV3(t *testing.T) {
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing and not serving request on that endpoint"))
})
t.Run("0 block value is returned on error", func(t *testing.T) {
rewardFetcher := &rewardtesting.MockBlockRewardFetcher{Error: &httputil.DefaultJsonError{}}
var block *structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.ToUnsigned())
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"fulu","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"0","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "0", writer.Header().Get(api.ConsensusBlockValueHeader))
})
}
func TestProduceBlockV3SSZ(t *testing.T) {
@@ -960,4 +1077,47 @@ func TestProduceBlockV3SSZ(t *testing.T) {
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("0 block value is returned on error", func(t *testing.T) {
rewardFetcher := &rewardtesting.MockBlockRewardFetcher{Error: &httputil.DefaultJsonError{}}
var block *structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
g, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
bl, ok := g.Block.(*eth.GenericBeaconBlock_Fulu)
require.Equal(t, true, ok)
ssz, err := bl.Fulu.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "0", writer.Header().Get(api.ConsensusBlockValueHeader))
})
}

View File

@@ -44,9 +44,9 @@ type StateNotFoundError struct {
}
// NewStateNotFoundError creates a new error instance.
func NewStateNotFoundError(stateRootsSize int) StateNotFoundError {
func NewStateNotFoundError(stateRootsSize int, stateRoot []byte) StateNotFoundError {
return StateNotFoundError{
message: fmt.Sprintf("state not found in the last %d state roots", stateRootsSize),
message: fmt.Sprintf("state not found in the last %d state roots, looking for state root: %#x", stateRootsSize, stateRoot),
}
}
@@ -221,7 +221,7 @@ func (p *BeaconDbStater) stateByRoot(ctx context.Context, stateRoot []byte) (sta
}
}
stateNotFoundErr := NewStateNotFoundError(len(headState.StateRoots()))
stateNotFoundErr := NewStateNotFoundError(len(headState.StateRoots()), stateRoot)
return nil, &stateNotFoundErr
}

View File

@@ -418,8 +418,9 @@ func TestGetStateRoot(t *testing.T) {
}
func TestNewStateNotFoundError(t *testing.T) {
e := NewStateNotFoundError(100)
assert.Equal(t, "state not found in the last 100 state roots", e.message)
stateRoot := []byte{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20}
e := NewStateNotFoundError(100, stateRoot)
assert.Equal(t, "state not found in the last 100 state roots, looking for state root: 0x0102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f20", e.message)
}
func TestStateBySlot_FutureSlot(t *testing.T) {

View File

@@ -315,7 +315,7 @@ func (bs *Server) ListIndexedAttestationsElectra(
// that it was included in a block. The attestation may have expired.
// Refer to the ethereum consensus specification for more details on how
// attestations are processed and when they are no longer valid.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#attestations
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md#attestations
func (bs *Server) AttestationPool(_ context.Context, req *ethpb.AttestationPoolRequest) (*ethpb.AttestationPoolResponse, error) {
var atts []*ethpb.Attestation
var err error

View File

@@ -81,6 +81,7 @@ go_library(
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//io/file:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",

View File

@@ -2,17 +2,26 @@ package validator
import (
"context"
"fmt"
"os"
"path/filepath"
"runtime"
"runtime/pprof"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/io/file"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
@@ -25,7 +34,19 @@ func (vs *Server) GetDutiesV2(ctx context.Context, req *ethpb.DutiesRequest) (*e
if vs.SyncChecker.Syncing() {
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
return vs.dutiesv2(ctx, req)
start := time.Now()
// Start background profiling that captures profiles if this request takes too long
var profileCancel func()
if features.Get().SlowDutiesProfile {
profileCancel = vs.startSlowDutiesProfiler(start, len(req.PublicKeys), req.Epoch)
defer profileCancel()
}
resp, err := vs.dutiesv2(ctx, req)
return resp, err
}
// Compute the validator duties from the head state's corresponding epoch
@@ -270,3 +291,138 @@ func populateCommitteeFields(duty *ethpb.DutiesV2Response_Duty, la *helpers.Lite
duty.ValidatorCommitteeIndex = la.ValidatorCommitteeIndex
duty.AttesterSlot = la.AttesterSlot
}
// startSlowDutiesProfiler starts background profiling that triggers after 2s
// Returns a cancel function that should be called when the operation completes
func (vs *Server) startSlowDutiesProfiler(startTime time.Time, numValidators int, epoch primitives.Epoch) func() {
ctx, cancel := context.WithCancel(context.Background())
go func() {
// Wait for 2 seconds
select {
case <-time.After(2 * time.Second):
// Operation is taking too long, start profiling
vs.captureSlowDutiesProfile(startTime, numValidators, epoch, ctx)
case <-ctx.Done():
// Operation completed before 2s, no profiling needed
return
}
}()
return cancel
}
// captureSlowDutiesProfile captures CPU and mutex profiles when GetDutiesV2 is slow
func (vs *Server) captureSlowDutiesProfile(startTime time.Time, numValidators int, epoch primitives.Epoch, ctx context.Context) {
timestamp := time.Now().Format("20060102-150405")
// Get the data directory from the database path and create a debug subdirectory.
// Type-assert to access the DatabasePath method.
dbWithPath, ok := vs.BeaconDB.(interface{ DatabasePath() string })
if !ok {
log.Error("Cannot access database path for profiling - database does not implement DatabasePath method")
return
}
dbPath := dbWithPath.DatabasePath()
profileDir := filepath.Join(filepath.Dir(dbPath), "debug")
// Create profile directory if it doesn't exist
if err := file.MkdirAll(profileDir); err != nil {
log.WithError(err).Warn("Failed to create profile directory")
return
}
currentDuration := time.Since(startTime)
log.WithFields(logrus.Fields{
"currentDuration": currentDuration,
"numValidators": numValidators,
"epoch": epoch,
"profileDir": profileDir,
}).Warn("GetDutiesV2 taking longer than 2s, capturing profiles")
// Start CPU profiling immediately
cpuFile, err := os.Create(fmt.Sprintf("%s/cpu-duties-%s.prof", profileDir, timestamp))
if err != nil {
log.WithError(err).Warn("Failed to create CPU profile file")
} else {
if err := pprof.StartCPUProfile(cpuFile); err != nil {
log.WithError(err).Warn("Failed to start CPU profile")
if closeErr := cpuFile.Close(); closeErr != nil {
log.WithError(closeErr).Warn("Failed to close CPU profile file")
}
} else {
// Profile for up to 10 seconds or until context is cancelled
go func() {
defer func() {
pprof.StopCPUProfile()
if closeErr := cpuFile.Close(); closeErr != nil {
log.WithError(closeErr).Warn("Failed to close CPU profile file")
}
log.WithField("file", cpuFile.Name()).Info("CPU profile captured")
}()
select {
case <-time.After(10 * time.Second):
// Stop profiling after 10s max
case <-ctx.Done():
// Stop profiling when operation completes
}
}()
}
}
// Enable mutex profiling
runtime.SetMutexProfileFraction(1)
// Capture snapshot profiles immediately
vs.captureSnapshotProfiles(profileDir, timestamp)
}
// captureSnapshotProfiles captures point-in-time profiles
func (vs *Server) captureSnapshotProfiles(profileDir, timestamp string) {
// Capture mutex profile
mutexFile, err := os.Create(fmt.Sprintf("%s/mutex-duties-%s.prof", profileDir, timestamp))
if err != nil {
log.WithError(err).Warn("Failed to create mutex profile file")
} else {
if err := pprof.Lookup("mutex").WriteTo(mutexFile, 0); err != nil {
log.WithError(err).Warn("Failed to write mutex profile")
} else {
log.WithField("file", mutexFile.Name()).Info("Mutex profile captured")
}
if closeErr := mutexFile.Close(); closeErr != nil {
log.WithError(closeErr).Warn("Failed to close mutex profile file")
}
}
// Capture goroutine profile
goroutineFile, err := os.Create(fmt.Sprintf("%s/goroutine-duties-%s.prof", profileDir, timestamp))
if err != nil {
log.WithError(err).Warn("Failed to create goroutine profile file")
} else {
if err := pprof.Lookup("goroutine").WriteTo(goroutineFile, 0); err != nil {
log.WithError(err).Warn("Failed to write goroutine profile")
} else {
log.WithField("file", goroutineFile.Name()).Info("Goroutine profile captured")
}
if closeErr := goroutineFile.Close(); closeErr != nil {
log.WithError(closeErr).Warn("Failed to close goroutine profile file")
}
}
// Capture heap profile
heapFile, err := os.Create(fmt.Sprintf("%s/heap-duties-%s.prof", profileDir, timestamp))
if err != nil {
log.WithError(err).Warn("Failed to create heap profile file")
} else {
runtime.GC() // Force GC before heap profile
if err := pprof.Lookup("heap").WriteTo(heapFile, 0); err != nil {
log.WithError(err).Warn("Failed to write heap profile")
} else {
log.WithField("file", heapFile.Name()).Info("Heap profile captured")
}
if closeErr := heapFile.Close(); closeErr != nil {
log.WithError(closeErr).Warn("Failed to close heap profile file")
}
}
}
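
Aside (not part of the diff): a minimal, self-contained sketch of the same watchdog idiom used by startSlowDutiesProfiler, with hypothetical names and only the standard library assumed. It starts a CPU profile only when a guarded operation overruns a threshold and stops as soon as the operation finishes, or after a hard cap.

package main

import (
	"context"
	"log"
	"os"
	"runtime/pprof"
	"time"
)

// profileIfSlow returns a cancel function that the caller must invoke when the
// guarded operation completes. If the operation is still running after
// threshold, a CPU profile is written to path until cancel is called or a
// 10s hard cap elapses, whichever comes first.
func profileIfSlow(threshold time.Duration, path string) context.CancelFunc {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		select {
		case <-ctx.Done():
			return // operation finished before the threshold, nothing to do
		case <-time.After(threshold):
		}
		f, err := os.Create(path)
		if err != nil {
			log.Printf("create profile file: %v", err)
			return
		}
		defer f.Close()
		if err := pprof.StartCPUProfile(f); err != nil {
			log.Printf("start cpu profile: %v", err)
			return
		}
		defer pprof.StopCPUProfile()
		select {
		case <-ctx.Done(): // operation completed, stop profiling
		case <-time.After(10 * time.Second): // hard cap on profile length
		}
	}()
	return cancel
}

func main() {
	cancel := profileIfSlow(2*time.Second, "cpu-slow-op.prof")
	defer cancel()
	time.Sleep(3 * time.Second) // stand-in for a slow GetDutiesV2-style call
}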

View File

@@ -5,7 +5,6 @@ import (
"context"
"fmt"
"math/big"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/api/client/builder"
@@ -19,7 +18,6 @@ import (
"github.com/OffchainLabs/prysm/v6/encoding/ssz"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -220,16 +218,10 @@ func (vs *Server) getPayloadHeaderFromBuilder(
if signedBid == nil || signedBid.IsNil() {
return nil, errors.New("builder returned nil bid")
}
fork, err := forks.Fork(slots.ToEpoch(slot))
if err != nil {
return nil, errors.Wrap(err, "unable to get fork information")
}
forkName, ok := params.BeaconConfig().ForkVersionNames[bytesutil.ToBytes4(fork.CurrentVersion)]
if !ok {
return nil, errors.New("unable to find current fork in schedule")
}
if !strings.EqualFold(version.String(signedBid.Version()), forkName) {
return nil, fmt.Errorf("builder bid response version: %d is different from head block version: %d for epoch %d", signedBid.Version(), b.Version(), slots.ToEpoch(slot))
bidVersion := signedBid.Version()
headBlockVersion := b.Version()
if !isVersionCompatible(bidVersion, headBlockVersion) {
return nil, fmt.Errorf("builder bid response version: %d is not compatible with head block version: %d for epoch %d", bidVersion, headBlockVersion, slots.ToEpoch(slot))
}
bid, err := signedBid.Message()
@@ -466,3 +458,24 @@ func expectedGasLimit(parentGasLimit, proposerGasLimit uint64) uint64 {
}
return proposerGasLimit
}
// isVersionCompatible checks if a builder bid version is compatible with the head block version.
func isVersionCompatible(bidVersion, headBlockVersion int) bool {
// Exact version match is always compatible
if bidVersion == headBlockVersion {
return true
}
// Allow Electra bids for Fulu blocks - they have compatible payload formats
if bidVersion == version.Electra && headBlockVersion == version.Fulu {
return true
}
// Allow Capella bids for Bellatrix blocks - they have compatible payload formats
if bidVersion == version.Capella && headBlockVersion == version.Bellatrix {
return true
}
// For all other cases, require exact version match
return false
}
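
Aside: only these two cross-fork pairs are treated as compatible; every other mismatch, including the reverse pairings (a Fulu bid with an Electra head block, or a Bellatrix bid with a Capella head block), still fails the check, as the table-driven test added further below exercises.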

View File

@@ -25,6 +25,7 @@ import (
"github.com/OffchainLabs/prysm/v6/encoding/ssz"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -156,7 +157,7 @@ func TestServer_setExecutionData(t *testing.T) {
HasConfigured: true,
Cfg: &builderTest.Config{BeaconDB: beaconDB},
}
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockCapella())
require.NoError(t, err)
chain := &blockchainTest.ChainService{ForkChoiceStore: doublylinkedtree.New(), Genesis: time.Now(), Block: wb}
vs.ForkchoiceFetcher = chain
@@ -973,7 +974,7 @@ func TestServer_getPayloadHeader(t *testing.T) {
return wb
}(),
},
err: "is different from head block version",
err: "builder bid response version: 3 is not compatible with head block version: 2 for epoch 1",
},
{
name: "different bid version during hard fork",
@@ -982,7 +983,7 @@ func TestServer_getPayloadHeader(t *testing.T) {
},
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.ReadOnlySignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockCapella())
require.NoError(t, err)
wb.SetSlot(primitives.Slot(fakeCapellaEpoch) * params.BeaconConfig().SlotsPerEpoch)
return wb
@@ -1005,6 +1006,86 @@ func TestServer_getPayloadHeader(t *testing.T) {
},
err: "incorrect header gas limit 30000000 != 31000000",
},
{
name: "electra bid with fulu head block - compatible",
mock: func() *builderTest.MockBuilderService {
// Create Electra bid
requests := &v1.ExecutionRequests{
Deposits: []*v1.DepositRequest{
{
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
Index: 0,
},
},
Withdrawals: []*v1.WithdrawalRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
Amount: params.BeaconConfig().MinActivationBalance,
},
},
Consolidations: []*v1.ConsolidationRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
},
},
}
electraBid := &ethpb.BuilderBidElectra{
Header: &v1.ExecutionPayloadHeaderDeneb{
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
ParentHash: params.BeaconConfig().ZeroHash[:],
Timestamp: uint64(ti.Unix()),
BlockNumber: 2,
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
BlobGasUsed: 123,
ExcessBlobGas: 456,
GasLimit: gasLimit,
},
Pubkey: sk.PublicKey().Marshal(),
Value: bytesutil.PadTo([]byte{1, 2, 3}, 32),
BlobKzgCommitments: [][]byte{bytesutil.PadTo([]byte{2}, fieldparams.BLSPubkeyLength)},
ExecutionRequests: requests,
}
d := params.BeaconConfig().DomainApplicationBuilder
domain, err := signing.ComputeDomain(d, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(electraBid, domain)
require.NoError(t, err)
sBidElectra := &ethpb.SignedBuilderBidElectra{
Message: electraBid,
Signature: sk.Sign(sr[:]).Marshal(),
}
return &builderTest.MockBuilderService{
BidElectra: sBidElectra,
}
}(),
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.ReadOnlySignedBeaconBlock {
// Create Fulu head block
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockFulu())
require.NoError(t, err)
wb.SetSlot(primitives.Slot(params.BeaconConfig().BellatrixForkEpoch) * params.BeaconConfig().SlotsPerEpoch)
return wb
}(),
},
// Should succeed because Electra bids are compatible with Fulu head blocks
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
@@ -1222,3 +1303,113 @@ func Test_expectedGasLimit(t *testing.T) {
})
}
}
func TestIsVersionCompatible(t *testing.T) {
tests := []struct {
name string
bidVersion int
headBlockVersion int
want bool
}{
{
name: "Exact version match - Bellatrix",
bidVersion: version.Bellatrix,
headBlockVersion: version.Bellatrix,
want: true,
},
{
name: "Exact version match - Capella",
bidVersion: version.Capella,
headBlockVersion: version.Capella,
want: true,
},
{
name: "Exact version match - Deneb",
bidVersion: version.Deneb,
headBlockVersion: version.Deneb,
want: true,
},
{
name: "Exact version match - Electra",
bidVersion: version.Electra,
headBlockVersion: version.Electra,
want: true,
},
{
name: "Exact version match - Fulu",
bidVersion: version.Fulu,
headBlockVersion: version.Fulu,
want: true,
},
{
name: "Electra bid with Fulu head block - Compatible",
bidVersion: version.Electra,
headBlockVersion: version.Fulu,
want: true,
},
{
name: "Fulu bid with Electra head block - Not compatible",
bidVersion: version.Fulu,
headBlockVersion: version.Electra,
want: false,
},
{
name: "Deneb bid with Electra head block - Not compatible",
bidVersion: version.Deneb,
headBlockVersion: version.Electra,
want: false,
},
{
name: "Electra bid with Deneb head block - Not compatible",
bidVersion: version.Electra,
headBlockVersion: version.Deneb,
want: false,
},
{
name: "Capella bid with Deneb head block - Not compatible",
bidVersion: version.Capella,
headBlockVersion: version.Deneb,
want: false,
},
{
name: "Bellatrix bid with Capella head block - Not compatible",
bidVersion: version.Bellatrix,
headBlockVersion: version.Capella,
want: false,
},
{
name: "Capella bid with Bellatrix head block - Compatible",
bidVersion: version.Capella,
headBlockVersion: version.Bellatrix,
want: true,
},
{
name: "Phase0 bid with Altair head block - Not compatible",
bidVersion: version.Phase0,
headBlockVersion: version.Altair,
want: false,
},
{
name: "Deneb bid with Fulu head block - Not compatible",
bidVersion: version.Deneb,
headBlockVersion: version.Fulu,
want: false,
},
{
name: "Capella bid with Fulu head block - Not compatible",
bidVersion: version.Capella,
headBlockVersion: version.Fulu,
want: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := isVersionCompatible(tt.bidVersion, tt.headBlockVersion)
if got != tt.want {
t.Errorf("isVersionCompatible(%d, %d) = %v, want %v", tt.bidVersion, tt.headBlockVersion, got, tt.want)
}
})
}
}

View File

@@ -262,7 +262,7 @@ func (vs *Server) activationStatus(
// It cannot faithfully attest to the head block of the chain, since it has not fully verified that block.
//
// Spec:
// https://github.com/ethereum/consensus-specs/blob/dev/sync/optimistic.md
// https://github.com/ethereum/consensus-specs/blob/master/sync/optimistic.md
func (vs *Server) optimisticStatus(ctx context.Context) error {
if slots.ToEpoch(vs.TimeFetcher.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
return nil

View File

@@ -212,6 +212,7 @@ go_test(
shard_count = 4,
deps = [
"//async/abool:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type Service struct {
@@ -211,7 +212,7 @@ func (s *Service) importBatches(ctx context.Context) {
_, err := s.batchImporter(ctx, current, ib, s.store)
if err != nil {
log.WithError(err).WithFields(ib.logFields()).Debug("Backfill batch failed to import")
s.downscore(ib)
s.downscorePeer(ib.blockPid, "backfillBatchImportError")
s.batchSeq.update(ib.withState(batchErrRetryable))
// If a batch fails, the subsequent batches are no longer considered importable.
break
@@ -336,10 +337,6 @@ func (s *Service) initBatches() error {
return nil
}
func (s *Service) downscore(b batch) {
s.p2p.Peers().Scorers().BadResponsesScorer().Increment(b.blockPid)
}
func (*Service) Stop() error {
return nil
}
@@ -383,3 +380,8 @@ func (s *Service) WaitForCompletion() error {
return nil
}
}
func (s *Service) downscorePeer(peerID peer.ID, reason string) {
newScore := s.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
}

View File

@@ -21,9 +21,9 @@ const (
broadcastMissingDataColumnsSlack = 2 * time.Second
)
// reconstructSaveBroadcastDataColumnSidecars reconstructs if possible and
// needed all data column sidecars. Then, it saves into the store missing
// sidecars. After a delay, it broadcasts in the background not seen via gossip
// reconstructSaveBroadcastDataColumnSidecars reconstructs, if possible,
// all data column sidecars. Then, it saves missing sidecars to the store.
// After a delay, it broadcasts in the background the sidecars that were
// reconstructed but not seen via gossip.
func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
ctx context.Context,
@@ -33,15 +33,15 @@ func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
) error {
startTime := time.Now()
// Lock to prevent concurrent reconstructions.
s.reconstructionLock.Lock()
defer s.reconstructionLock.Unlock()
// Get the columns we store.
storedDataColumns := s.cfg.dataColumnStorage.Summary(root)
storedColumnsCount := storedDataColumns.Count()
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Lock to prevent concurrent reconstructions.
s.reconstructionLock.Lock()
defer s.reconstructionLock.Unlock()
// If reconstruction is not possible or if all columns are already stored, exit early.
if storedColumnsCount < peerdas.MinimumColumnsCountToReconstruct() || storedColumnsCount == numberOfColumns {
return nil
@@ -55,7 +55,7 @@ func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
return errors.Wrap(err, "peer info")
}
// Load all the possible data columns sidecars, to minimize reconstruction time.
// Load all the possible data column sidecars, to minimize reconstruction time.
verifiedSidecars, err := s.cfg.dataColumnStorage.Get(root, nil)
if err != nil {
return errors.Wrap(err, "get data column sidecars")
@@ -76,7 +76,7 @@ func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
}
}
// Save the data columns sidecars in the database.
// Save the data column sidecars to the database.
// Note: We do not call `receiveDataColumn`, because it would ignore
// incoming gossip data columns while the reconstructed columns have not yet been broadcast.
if err := s.cfg.dataColumnStorage.Save(toSaveSidecars); err != nil {
@@ -95,7 +95,7 @@ func (s *Service) reconstructSaveBroadcastDataColumnSidecars(
"reconstructionAndSaveDuration": time.Since(startTime),
}).Debug("Data columns reconstructed and saved")
// Update reconstruction metrics
// Update reconstruction metrics.
dataColumnReconstructionHistogram.Observe(float64(time.Since(startTime).Milliseconds()))
dataColumnReconstructionCounter.Add(float64(len(reconstructedSidecars) - len(verifiedSidecars)))

View File

@@ -337,14 +337,15 @@ func (q *blocksQueue) onDataReceivedEvent(ctx context.Context) eventHandlerFn {
}
}
}
if errors.Is(response.err, beaconsync.ErrInvalidFetchedData) {
// Peer returned invalid data, penalize.
q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(response.blocksFrom)
log.WithField("pid", response.blocksFrom).Debug("Peer is penalized for invalid blocks")
} else if errors.Is(response.err, verification.ErrBlobInvalid) {
q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(response.blobsFrom)
log.WithField("pid", response.blobsFrom).Debug("Peer is penalized for invalid blob response")
q.downscorePeer(response.blocksFrom, "invalidBlocks")
}
if errors.Is(response.err, verification.ErrBlobInvalid) {
q.downscorePeer(response.blobsFrom, "invalidBlobs")
}
return m.state, response.err
}
m.fetched = *response
@@ -455,6 +456,11 @@ func (q *blocksQueue) onProcessSkippedEvent(ctx context.Context) eventHandlerFn
}
}
func (q *blocksQueue) downscorePeer(peerID peer.ID, reason string) {
newScore := q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
}
// onCheckStaleEvent is an event that allows to mark stale epochs,
// so that they can be re-processed.
func onCheckStaleEvent(ctx context.Context) eventHandlerFn {

View File

@@ -522,7 +522,7 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
})
assert.ErrorContains(t, beaconsync.ErrInvalidFetchedData.Error(), err)
assert.Equal(t, stateScheduled, updatedState)
assert.LogsContain(t, hook, "msg=\"Peer is penalized for invalid blocks\" pid=ZiCa")
assert.LogsContain(t, hook, "Downscore peer")
})
t.Run("transition ok", func(t *testing.T) {

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/paulbellamy/ratecounter"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -345,12 +346,11 @@ func isPunishableError(err error) bool {
func (s *Service) updatePeerScorerStats(data *blocksQueueFetchedData, count uint64, err error) {
if isPunishableError(err) {
if verification.IsBlobValidationFailure(err) {
log.WithError(err).WithField("peer_id", data.blobsFrom).Warn("Downscoring peer for invalid blobs")
s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(data.blobsFrom)
s.downscorePeer(data.blobsFrom, "invalidBlobs")
} else {
log.WithError(err).WithField("peer_id", data.blocksFrom).Warn("Downscoring peer for invalid blocks")
s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(data.blocksFrom)
s.downscorePeer(data.blocksFrom, "invalidBlocks")
}
// If the error is punishable, exit here so that we don't give them credit for providing bad blocks.
return
}
@@ -376,3 +376,8 @@ func (s *Service) isProcessedBlock(ctx context.Context, blk blocks.ROBlock) bool
}
return false
}
func (s *Service) downscorePeer(peerID peer.ID, reason string) {
newScore := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
}

View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/trailofbits/go-mutexasserts"
@@ -122,7 +123,7 @@ func (l *limiter) validateRequest(stream network.Stream, amt uint64) error {
collector, err := l.retrieveCollector(topic)
if err != nil {
return err
return errors.Wrap(err, "retrieve collector")
}
remaining := collector.Remaining(remotePeer.String())
@@ -131,7 +132,7 @@ func (l *limiter) validateRequest(stream network.Stream, amt uint64) error {
amt = 1
}
if amt > uint64(remaining) {
l.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
l.downscorePeer(remotePeer, topic, "rateLimitExceeded")
writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrRateLimited.Error(), stream, l.p2p)
return p2ptypes.ErrRateLimited
}
@@ -139,22 +140,20 @@ func (l *limiter) validateRequest(stream network.Stream, amt uint64) error {
}
// This is used to validate all incoming rpc streams from external peers.
func (l *limiter) validateRawRpcRequest(stream network.Stream) error {
func (l *limiter) validateRawRpcRequest(stream network.Stream, amt uint64) error {
l.RLock()
defer l.RUnlock()
topic := rpcLimiterTopic
collector, err := l.retrieveCollector(topic)
remotePeer := stream.Conn().RemotePeer()
collector, err := l.retrieveCollector(rpcLimiterTopic)
if err != nil {
return err
}
key := stream.Conn().RemotePeer().String()
remaining := collector.Remaining(key)
// Treat each request as a minimum of 1.
amt := int64(1)
if amt > remaining {
l.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
if amt > uint64(remaining) {
l.downscorePeer(remotePeer, rpcLimiterTopic, "rawRateLimitExceeded")
writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrRateLimited.Error(), stream, l.p2p)
return p2ptypes.ErrRateLimited
}
@@ -233,3 +232,13 @@ func (l *limiter) retrieveCollector(topic string) (*leakybucket.Collector, error
func (_ *limiter) topicLogger(topic string) *logrus.Entry {
return log.WithField("rateLimiter", topic)
}
func (l *limiter) downscorePeer(peerID peer.ID, topic, reason string) {
newScore := l.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{
"peerID": peerID.String(),
"reason": reason,
"newScore": newScore,
"topic": topic,
}).Debug("Downscore peer")
}

View File

@@ -85,16 +85,16 @@ func TestRateLimiter_ExceedRawCapacity(t *testing.T) {
require.NoError(t, err, "could not create stream")
for i := 0; i < 2*defaultBurstLimit; i++ {
err = rlimiter.validateRawRpcRequest(stream)
err = rlimiter.validateRawRpcRequest(stream, 1)
rlimiter.addRawStream(stream)
require.NoError(t, err, "could not validate incoming request")
}
// Triggers rate limit error on burst.
assert.ErrorContains(t, p2ptypes.ErrRateLimited.Error(), rlimiter.validateRawRpcRequest(stream))
assert.ErrorContains(t, p2ptypes.ErrRateLimited.Error(), rlimiter.validateRawRpcRequest(stream, 1))
// Make Peer bad.
for i := 0; i < defaultBurstLimit; i++ {
assert.ErrorContains(t, p2ptypes.ErrRateLimited.Error(), rlimiter.validateRawRpcRequest(stream))
assert.ErrorContains(t, p2ptypes.ErrRateLimited.Error(), rlimiter.validateRawRpcRequest(stream, 1))
}
assert.NotNil(t, p1.Peers().IsBad(p2.PeerID()), "peer is not marked as a bad peer")
require.NoError(t, stream.Close(), "could not close stream")

View File

@@ -20,6 +20,7 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"
)
var (
@@ -39,7 +40,7 @@ type rpcHandler func(context.Context, interface{}, libp2pcore.Stream) error
// rpcHandlerByTopicFromFork returns the RPC handlers for a given fork index.
func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandler, error) {
// Fulu: https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#messages
// Fulu: https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#messages
if forkIndex >= version.Fulu {
return map[string]rpcHandler{
p2p.RPCGoodByeTopicV1: s.goodbyeRPCHandler,
@@ -54,7 +55,7 @@ func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandle
}, nil
}
// Electra: https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/p2p-interface.md#messages
// Electra: https://github.com/ethereum/consensus-specs/blob/master/specs/electra/p2p-interface.md#messages
if forkIndex >= version.Electra {
return map[string]rpcHandler{
p2p.RPCStatusTopicV1: s.statusRPCHandler,
@@ -68,7 +69,7 @@ func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandle
}, nil
}
// Deneb: https://github.com/ethereum/consensus-specs/blob/dev/specs/deneb/p2p-interface.md#messages
// Deneb: https://github.com/ethereum/consensus-specs/blob/master/specs/deneb/p2p-interface.md#messages
if forkIndex >= version.Deneb {
return map[string]rpcHandler{
p2p.RPCStatusTopicV1: s.statusRPCHandler,
@@ -82,9 +83,9 @@ func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandle
}, nil
}
// Capella: https://github.com/ethereum/consensus-specs/blob/dev/specs/capella/p2p-interface.md#messages
// Bellatrix: https://github.com/ethereum/consensus-specs/blob/dev/specs/bellatrix/p2p-interface.md#messages
// Altair: https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/p2p-interface.md#messages
// Capella: https://github.com/ethereum/consensus-specs/blob/master/specs/capella/p2p-interface.md#messages
// Bellatrix: https://github.com/ethereum/consensus-specs/blob/master/specs/bellatrix/p2p-interface.md#messages
// Altair: https://github.com/ethereum/consensus-specs/blob/master/specs/altair/p2p-interface.md#messages
if forkIndex >= version.Altair {
handler := map[string]rpcHandler{
p2p.RPCStatusTopicV1: s.statusRPCHandler,
@@ -105,7 +106,7 @@ func (s *Service) rpcHandlerByTopicFromFork(forkIndex int) (map[string]rpcHandle
return handler, nil
}
// Phase0: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#messages
// Phase0: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#messages
if forkIndex >= version.Phase0 {
return map[string]rpcHandler{
p2p.RPCStatusTopicV1: s.statusRPCHandler,
@@ -238,7 +239,7 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
defer span.End()
span.SetAttributes(trace.StringAttribute("topic", topic))
span.SetAttributes(trace.StringAttribute("peer", remotePeer.String()))
log := log.WithField("peer", stream.Conn().RemotePeer().String()).WithField("topic", string(stream.Protocol()))
log := log.WithFields(logrus.Fields{"peer": remotePeer.String(), "topic": string(stream.Protocol())})
// Check before hand that peer is valid.
if err := s.cfg.p2p.Peers().IsBad(remotePeer); err != nil {
@@ -248,7 +249,7 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
return
}
// Validate request according to peer limits.
if err := s.rateLimiter.validateRawRpcRequest(stream); err != nil {
if err := s.rateLimiter.validateRawRpcRequest(stream, 1); err != nil {
log.WithError(err).Debug("Could not validate rpc request from peer")
return
}
@@ -304,7 +305,7 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
logStreamErrors(err, topic)
tracing.AnnotateError(span, err)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.downscorePeer(remotePeer, "registerRpcError")
return
}
if err := handle(ctx, msg, stream); err != nil {
@@ -324,7 +325,7 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
logStreamErrors(err, topic)
tracing.AnnotateError(span, err)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.downscorePeer(remotePeer, "registerRpcError")
return
}
if err := handle(ctx, nTyp.Elem().Interface(), stream); err != nil {

View File

@@ -15,6 +15,7 @@ import (
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -43,7 +44,7 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
rp, err := validateRangeRequest(m, s.cfg.clock.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
s.downscorePeer(remotePeer, "beaconBlocksByRangeRPCHandlerValidationError")
tracing.AnnotateError(span, err)
return err
}
@@ -201,3 +202,13 @@ func (s *Service) writeBlockBatchToStream(ctx context.Context, batch blockBatch,
return nil
}
func (s *Service) downscorePeer(peerID peer.ID, reason string, fields ...logrus.Fields) {
log := log
for _, field := range fields {
log = log.WithFields(field)
}
newScore := s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
log.WithFields(logrus.Fields{"peerID": peerID, "reason": reason, "newScore": newScore}).Debug("Downscore peer")
}

View File

@@ -92,9 +92,11 @@ func (s *Service) beaconBlocksRootRPCHandler(ctx context.Context, msg interface{
return errors.New("no block roots provided")
}
remotePeer := stream.Conn().RemotePeer()
currentEpoch := slots.ToEpoch(s.cfg.clock.CurrentSlot())
if uint64(len(blockRoots)) > params.MaxRequestBlock(currentEpoch) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.downscorePeer(remotePeer, "beaconBlocksRootRPCHandlerTooManyRoots")
s.writeErrorResponseToStream(responseCodeInvalidRequest, "requested more than the max block limit", stream)
return errors.New("requested more than the max block limit")
}

View File

@@ -74,10 +74,13 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
return err
}
remotePeer := stream.Conn().RemotePeer()
rp, err := validateBlobsByRange(r, s.cfg.chain.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.downscorePeer(remotePeer, "blobSidecarsByRangeRpcHandlerValidationError")
tracing.AnnotateError(span, err)
return err
}
@@ -87,7 +90,7 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
defer ticker.Stop()
batcher, err := newBlockRangeBatcher(rp, s.cfg.beaconDB, s.rateLimiter, s.cfg.chain.IsCanonical, ticker)
if err != nil {
log.WithError(err).Info("error in BlobSidecarsByRange batch")
log.WithError(err).Error("Cannot create new block range batcher")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
return err
@@ -112,7 +115,7 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
}
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in BlobSidecarsByRange batch")
log.WithError(err).Debug("Error in BlobSidecarsByRange batch")
// If a rate limit is hit, it means an error response has already been sent and the stream has been closed.
if !errors.Is(err, p2ptypes.ErrRateLimited) {

View File

@@ -39,7 +39,7 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
cs := s.cfg.clock.CurrentSlot()
remotePeer := stream.Conn().RemotePeer()
if err := validateBlobByRootRequest(blobIdents, cs); err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
s.downscorePeer(remotePeer, "blobSidecarsByRootRpcHandlerValidationError")
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
return err
}

View File

@@ -65,7 +65,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
rangeParameters, err := validateDataColumnsByRange(request, s.cfg.chain.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
s.downscorePeer(remotePeer, "dataColumnSidecarsByRangeRpcHandlerValidationError")
tracing.AnnotateError(span, err)
return errors.Wrap(err, "validate data columns by range")
}

View File

@@ -27,7 +27,7 @@ var (
)
// dataColumnSidecarByRootRPCHandler handles the data column sidecars by root RPC request.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyroot-v1
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyroot-v1
func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
ctx, span := trace.StartSpan(ctx, "sync.dataColumnSidecarByRootRPCHandler")
defer span.End()
@@ -42,7 +42,7 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
}
requestedColumnIdents := *ref
remotePeerId := stream.Conn().RemotePeer()
remotePeer := stream.Conn().RemotePeer()
ctx, cancel := context.WithTimeout(ctx, ttfbTimeout)
defer cancel()
@@ -51,7 +51,7 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
// Penalize peers that send invalid requests.
if err := validateDataColumnsByRootRequest(requestedColumnIdents); err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeerId)
s.downscorePeer(remotePeer, "dataColumnSidecarByRootRPCHandlerValidationError")
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
return errors.Wrap(err, "validate data columns by root request")
}
@@ -85,7 +85,7 @@ func (s *Service) dataColumnSidecarByRootRPCHandler(ctx context.Context, msg int
}
log := log.WithFields(logrus.Fields{
"peer": remotePeerId,
"peer": remotePeer,
"columns": requestedColumnsByRootLog,
})

View File

@@ -13,6 +13,7 @@ import (
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -35,22 +36,40 @@ var backOffTime = map[primitives.SSZUint64]time.Duration{
// goodbyeRPCHandler reads the incoming goodbye rpc message from the peer.
func (s *Service) goodbyeRPCHandler(_ context.Context, msg interface{}, stream libp2pcore.Stream) error {
const amount = 1
SetRPCStreamDeadlines(stream)
peerID := stream.Conn().RemotePeer()
m, ok := msg.(*primitives.SSZUint64)
if !ok {
return fmt.Errorf("wrong message type for goodbye, got %T, wanted *uint64", msg)
}
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
log.WithError(err).Debug("Goodbye message from rate-limited peer")
} else {
isRateLimitedPeer := false
if err := s.rateLimiter.validateRequest(stream, amount); err != nil {
if !errors.Is(err, p2ptypes.ErrRateLimited) {
return errors.Wrap(err, "validate request")
}
isRateLimitedPeer = true
}
if !isRateLimitedPeer {
s.rateLimiter.add(stream, 1)
}
log := log.WithField("Reason", goodbyeMessage(*m))
log.WithField("peer", stream.Conn().RemotePeer()).Trace("Peer has sent a goodbye message")
s.cfg.p2p.Peers().SetNextValidTime(stream.Conn().RemotePeer(), goodByeBackoff(*m))
// closes all streams with the peer
return s.cfg.p2p.Disconnect(stream.Conn().RemotePeer())
log.WithFields(logrus.Fields{
"peer": peerID,
"reason": goodbyeMessage(*m),
"isRateLimited": isRateLimitedPeer,
}).Debug("Received a goodbye message")
s.cfg.p2p.Peers().SetNextValidTime(peerID, goodByeBackoff(*m))
if err := s.cfg.p2p.Disconnect(peerID); err != nil {
return errors.Wrap(err, "disconnect")
}
return nil
}
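Aside: with this rewrite, a goodbye from a rate-limited peer is still honored (the backoff is recorded and the peer is disconnected); it simply is not charged to the rate limiter again, and only unexpected validation errors abort the handler early.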
// disconnectBadPeer checks whether peer is considered bad by some scorer, and tries to disconnect
@@ -70,7 +89,7 @@ func (s *Service) disconnectBadPeer(ctx context.Context, id peer.ID, badPeerErr
"peerID": id,
"agent": agentString(id, s.cfg.p2p.Host()),
}).
Debug("Sent peer disconnection")
Debug("Sent bad peer disconnection")
}
// A custom goodbye method that is used by our connection handler, in the

View File

@@ -26,7 +26,7 @@ func (s *Service) lightClientBootstrapRPCHandler(ctx context.Context, msg interf
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
logger.WithError(err).Error("Cannot validate request")
return err
}
s.rateLimiter.add(stream, 1)
@@ -42,7 +42,7 @@ func (s *Service) lightClientBootstrapRPCHandler(ctx context.Context, msg interf
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("s.cfg.beaconDB.LightClientBootstrap")
logger.WithError(err).Error("Cannot bootstrap light client")
return err
}
if bootstrap == nil {
@@ -74,10 +74,11 @@ func (s *Service) lightClientUpdatesByRangeRPCHandler(ctx context.Context, msg i
defer cancel()
logger := log.WithField("handler", p2p.LightClientUpdatesByRangeName[1:])
remotePeer := stream.Conn().RemotePeer()
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
logger.WithError(err).Error("Cannot validate request")
return err
}
s.rateLimiter.add(stream, 1)
@@ -90,7 +91,8 @@ func (s *Service) lightClientUpdatesByRangeRPCHandler(ctx context.Context, msg i
if r.Count == 0 {
s.writeErrorResponseToStream(responseCodeInvalidRequest, "count is 0", stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.downscorePeer(remotePeer, "lightClientUpdatesByRangeRPCHandlerCount0")
logger.Error("Count is 0")
return nil
}
@@ -102,7 +104,7 @@ func (s *Service) lightClientUpdatesByRangeRPCHandler(ctx context.Context, msg i
endPeriod, err := math.Add64(r.StartPeriod, r.Count-1)
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.downscorePeer(remotePeer, "lightClientUpdatesByRangeRPCHandlerEndPeriodOverflow")
tracing.AnnotateError(span, err)
logger.WithError(err).Error("End period overflows")
return err
@@ -114,7 +116,7 @@ func (s *Service) lightClientUpdatesByRangeRPCHandler(ctx context.Context, msg i
if err != nil {
s.writeErrorResponseToStream(responseCodeServerError, types.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, err)
logger.WithError(err).Error("s.cfg.beaconDB.LightClientUpdates")
logger.WithError(err).Error("Cannot retrieve light client updates")
return err
}
@@ -153,7 +155,7 @@ func (s *Service) lightClientFinalityUpdateRPCHandler(ctx context.Context, _ int
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
logger.WithError(err).Error("Cannot validate request")
return err
}
s.rateLimiter.add(stream, 1)
@@ -191,7 +193,7 @@ func (s *Service) lightClientOptimisticUpdateRPCHandler(ctx context.Context, _ i
SetRPCStreamDeadlines(stream)
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
logger.WithError(err).Error("s.rateLimiter.validateRequest")
logger.WithError(err).Error("Cannot validate request")
return err
}
s.rateLimiter.add(stream, 1)

View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
"github.com/OffchainLabs/prysm/v6/async/event"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
db "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -54,7 +55,7 @@ func TestRPC_LightClientBootstrap(t *testing.T) {
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: lightClient.NewLightClientStore(d),
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
@@ -176,7 +177,7 @@ func TestRPC_LightClientOptimisticUpdate(t *testing.T) {
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
@@ -202,7 +203,7 @@ func TestRPC_LightClientOptimisticUpdate(t *testing.T) {
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
r.lcStore.SetLastOptimisticUpdate(update)
r.lcStore.SetLastOptimisticUpdate(update, false)
var wg sync.WaitGroup
wg.Add(1)
@@ -296,7 +297,7 @@ func TestRPC_LightClientFinalityUpdate(t *testing.T) {
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}
@@ -322,7 +323,7 @@ func TestRPC_LightClientFinalityUpdate(t *testing.T) {
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
r.lcStore.SetLastFinalityUpdate(update)
r.lcStore.SetLastFinalityUpdate(update, false)
var wg sync.WaitGroup
wg.Add(1)
@@ -416,7 +417,7 @@ func TestRPC_LightClientUpdatesByRange(t *testing.T) {
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: lightClient.NewLightClientStore(d),
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
rateLimiter: newRateLimiter(p1),
}

View File

@@ -172,12 +172,12 @@ func (s *Service) sendMetaDataRequest(ctx context.Context, peerID peer.ID) (meta
// Read the METADATA response from the peer.
code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
if err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
s.downscorePeer(peerID, "MetadataReadStatusCodeError")
return nil, errors.Wrap(err, "read status code")
}
if code != 0 {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
s.downscorePeer(peerID, "NonNullMetadataReadStatusCode")
return nil, errors.New(errMsg)
}
@@ -214,8 +214,8 @@ func (s *Service) sendMetaDataRequest(ctx context.Context, peerID peer.ID) (meta
// Decode the metadata from the peer.
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
return nil, err
s.downscorePeer(peerID, "MetadataDecodeError")
return nil, errors.Wrap(err, "decode with max length")
}
return msg, nil

View File

@@ -13,6 +13,7 @@ import (
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// pingHandler reads the incoming ping rpc message from the peer.
@@ -27,6 +28,7 @@ func (s *Service) pingHandler(_ context.Context, msg interface{}, stream libp2pc
if !ok {
return fmt.Errorf("wrong message type for ping, got %T, wanted *uint64", msg)
}
sequenceNumber := uint64(*m)
// Validate the incoming request regarding rate limiting.
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
@@ -39,14 +41,9 @@ func (s *Service) pingHandler(_ context.Context, msg interface{}, stream libp2pc
peerID := stream.Conn().RemotePeer()
// Check if the peer sequence number is higher than the one we have in our store.
valid, err := s.validateSequenceNum(*m, peerID)
valid, err := s.isSequenceNumberUpToDate(sequenceNumber, peerID)
if err != nil {
// Descore peer for giving us a bad sequence number.
if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
s.writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrInvalidSequenceNum.Error(), stream)
}
s.writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrInvalidSequenceNum.Error(), stream)
return errors.Wrap(err, "validate sequence number")
}
@@ -141,7 +138,7 @@ func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {
// If the peer responded with an error, increment the bad responses scorer.
if code != 0 {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
s.downscorePeer(peerID, "NotNullPingReadStatusCode")
return errors.Errorf("code: %d - %s", code, errMsg)
}
@@ -150,15 +147,11 @@ func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
return errors.Wrap(err, "decode sequence number")
}
sequenceNumber := uint64(*msg)
// Determine if the peer's sequence number returned by the peer is higher than the one we have in our store.
valid, err := s.validateSequenceNum(*msg, peerID)
valid, err := s.isSequenceNumberUpToDate(sequenceNumber, peerID)
if err != nil {
// Descore peer for giving us a bad sequence number.
if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
}
return errors.Wrap(err, "validate sequence number")
}
@@ -180,27 +173,36 @@ func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {
return nil
}
// validateSequenceNum validates the peer's sequence number.
// - If the peer's sequence number is greater than the sequence number we have in our store for the peer, return false.
// - If the peer's sequence number is equal to the sequence number we have in our store for the peer, return true.
// - If the peer's sequence number is less than the sequence number we have in our store for the peer, return an error.
func (s *Service) validateSequenceNum(seq primitives.SSZUint64, id peer.ID) (bool, error) {
// isSequenceNumberUpToDate checks whether our stored sequence number for the peer is up to date with respect to the incoming one.
// - If the incoming sequence number is greater than the sequence number we have in our store for the peer, return false.
// - If the incoming sequence number is equal to the sequence number we have in our store for the peer, return true.
// - If the incoming sequence number is less than the sequence number we have in our store for the peer, return an error.
func (s *Service) isSequenceNumberUpToDate(incomingSequenceNumber uint64, peerID peer.ID) (bool, error) {
// Retrieve the metadata for the peer we got in our store.
md, err := s.cfg.p2p.Peers().Metadata(id)
storedMetadata, err := s.cfg.p2p.Peers().Metadata(peerID)
if err != nil {
return false, errors.Wrap(err, "get metadata")
return false, errors.Wrap(err, "peers metadata")
}
// If we have no metadata for the peer, return false.
if md == nil || md.IsNil() {
if storedMetadata == nil || storedMetadata.IsNil() {
return false, nil
}
// The peer's sequence number must be less than or equal to the sequence number we have in our store.
if md.SequenceNumber() > uint64(seq) {
storedSequenceNumber := storedMetadata.SequenceNumber()
if storedSequenceNumber > incomingSequenceNumber {
s.downscorePeer(peerID, "pingInvalidSequenceNumber", logrus.Fields{
"storedSequenceNumber": storedSequenceNumber,
"incomingSequenceNumber": incomingSequenceNumber,
})
return false, p2ptypes.ErrInvalidSequenceNum
}
// Return true if the peer's sequence number is equal to the sequence number we have in our store.
return md.SequenceNumber() == uint64(seq), nil
// If the incoming sequence number is greater, our stored information about the peer is outdated.
if storedSequenceNumber < incomingSequenceNumber {
return false, nil
}
return true, nil
}
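
Aside, to make the three outcomes concrete: with a stored sequence number of 5, an incoming 5 yields (true, nil); an incoming 8 yields (false, nil), meaning our metadata for the peer is stale; an incoming 3 downscores the peer and returns p2ptypes.ErrInvalidSequenceNum, since a peer's sequence number is expected to only increase.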

View File

@@ -35,6 +35,9 @@ func (s *Service) maintainPeerStatuses() {
wg.Add(1)
go func(id peer.ID) {
defer wg.Done()
log := log.WithField("peer", id)
// If our peer status has not been updated correctly, we disconnect the peer here
// and set the connection state ourselves instead.
if s.cfg.p2p.Host().Network().Connectedness(id) != network.Connected {
@@ -43,27 +46,26 @@ func (s *Service) maintainPeerStatuses() {
log.WithError(err).Debug("Error when disconnecting with peer")
}
s.cfg.p2p.Peers().SetConnectionState(id, peers.Disconnected)
log.WithFields(logrus.Fields{
"peer": id,
"reason": "maintain peer statuses - peer is not connected",
}).Debug("Initiate peer disconnection")
log.WithField("reason", "maintainPeerStatusesNotConnectedPeer").Debug("Initiate peer disconnection")
return
}
// Disconnect from peers that are considered bad by any of the registered scorers.
if err := s.cfg.p2p.Peers().IsBad(id); err != nil {
s.disconnectBadPeer(s.ctx, id, err)
return
}
// If the status hasn't been updated in the recent interval time.
lastUpdated, err := s.cfg.p2p.Peers().ChainStateLastUpdated(id)
if err != nil {
// Peer has vanished; nothing to do.
return
}
if prysmTime.Now().After(lastUpdated.Add(interval)) {
if err := s.reValidatePeer(s.ctx, id); err != nil {
log.WithField("peer", id).WithError(err).Debug("Could not revalidate peer")
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(id)
log.WithError(err).Debug("Cannot re-validate peer")
}
}
}(pid)
@@ -128,19 +130,20 @@ func (s *Service) shouldReSync() bool {
}
// sendRPCStatusRequest for a given topic with an expected protobuf message type.
func (s *Service) sendRPCStatusRequest(ctx context.Context, id peer.ID) error {
func (s *Service) sendRPCStatusRequest(ctx context.Context, peer peer.ID) error {
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
headRoot, err := s.cfg.chain.HeadRoot(ctx)
if err != nil {
return err
return errors.Wrap(err, "head root")
}
forkDigest, err := s.currentForkDigest()
if err != nil {
return err
return errors.Wrap(err, "current fork digest")
}
cp := s.cfg.chain.FinalizedCheckpt()
resp := &pb.Status{
ForkDigest: forkDigest[:],
@@ -149,37 +152,42 @@ func (s *Service) sendRPCStatusRequest(ctx context.Context, id peer.ID) error {
HeadRoot: headRoot,
HeadSlot: s.cfg.chain.HeadSlot(),
}
log := log.WithField("peer", peer)
topic, err := p2p.TopicFromMessage(p2p.StatusMessageName, slots.ToEpoch(s.cfg.clock.CurrentSlot()))
if err != nil {
return err
return errors.Wrap(err, "topic from message")
}
stream, err := s.cfg.p2p.Send(ctx, resp, topic, id)
stream, err := s.cfg.p2p.Send(ctx, resp, topic, peer)
if err != nil {
return err
return errors.Wrap(err, "send p2p message")
}
defer closeStream(stream, log)
code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
if err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
return err
s.downscorePeer(peer, "statusRequestReadStatusCodeError")
return errors.Wrap(err, "read status code")
}
if code != 0 {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(id)
s.downscorePeer(peer, "statusRequestNonNullStatusCode")
return errors.New(errMsg)
}
msg := &pb.Status{}
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
return err
s.downscorePeer(peer, "statusRequestDecodeError")
return errors.Wrap(err, "decode status message")
}
// If validation fails, validation error is logged, and peer status scorer will mark peer as bad.
err = s.validateStatusMessage(ctx, msg)
s.cfg.p2p.Peers().Scorers().PeerStatusScorer().SetPeerStatus(id, msg, err)
if err := s.cfg.p2p.Peers().IsBad(id); err != nil {
s.disconnectBadPeer(s.ctx, id, err)
s.cfg.p2p.Peers().Scorers().PeerStatusScorer().SetPeerStatus(peer, msg, err)
if err := s.cfg.p2p.Peers().IsBad(peer); err != nil {
s.disconnectBadPeer(s.ctx, peer, err)
}
return err
}
@@ -238,7 +246,7 @@ func (s *Service) statusRPCHandler(ctx context.Context, msg interface{}, stream
return nil
default:
respCode = responseCodeInvalidRequest
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
s.downscorePeer(remotePeer, "statusRpcHandlerInvalidMessage")
}
originalErr := err

View File

@@ -11,10 +11,12 @@ import (
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
lru "github.com/hashicorp/golang-lru"
pubsub "github.com/libp2p/go-libp2p-pubsub"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
gcache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
@@ -278,19 +280,40 @@ func (s *Service) Start() {
// Stop the regular sync service.
func (s *Service) Stop() error {
defer func() {
s.cancel()
if s.rateLimiter != nil {
s.rateLimiter.free()
}
}()
// Removing RPC Stream handlers.
// Create context with timeout to prevent hanging
goodbyeCtx, cancel := context.WithTimeout(s.ctx, 10*time.Second)
defer cancel()
// Use WaitGroup to ensure all goodbye messages complete
var wg sync.WaitGroup
for _, peerID := range s.cfg.p2p.Peers().Connected() {
if s.cfg.p2p.Host().Network().Connectedness(peerID) == network.Connected {
wg.Add(1)
go func(pid peer.ID) {
defer wg.Done()
if err := s.sendGoodByeAndDisconnect(goodbyeCtx, p2ptypes.GoodbyeCodeClientShutdown, pid); err != nil {
log.WithError(err).WithField("peerID", pid).Error("Failed to send goodbye message")
}
}(peerID)
}
}
wg.Wait()
log.Debug("All goodbye messages sent successfully")
// Now safe to remove handlers / unsubscribe.
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
s.cfg.p2p.Host().RemoveStreamHandler(p)
}
// Deregister Topic Subscribers.
for _, t := range s.cfg.p2p.PubSub().GetTopics() {
s.unSubscribeFromTopic(t)
}
defer s.cancel()
return nil
}

View File

@@ -2,6 +2,7 @@ package sync
import (
"context"
"sync"
"testing"
"time"
@@ -9,16 +10,22 @@ import (
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
dbTest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
gcache "github.com/patrickmn/go-cache"
)
@@ -227,3 +234,212 @@ func TestSyncService_StopCleanly(t *testing.T) {
require.Equal(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.Equal(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
}
func TestService_Stop_SendsGoodbyeMessages(t *testing.T) {
// Create test peers
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p3 := p2ptest.NewTestP2P(t)
// Connect peers
p1.Connect(p2)
p1.Connect(p3)
// Register peers in the peer status
p1.Peers().Add(nil, p2.BHost.ID(), p2.BHost.Addrs()[0], network.DirOutbound)
p1.Peers().Add(nil, p3.BHost.ID(), p3.BHost.Addrs()[0], network.DirOutbound)
p1.Peers().SetConnectionState(p2.BHost.ID(), peers.Connected)
p1.Peers().SetConnectionState(p3.BHost.ID(), peers.Connected)
// Create service with connected peers
d := dbTest.SetupDB(t)
chain := &mockChain.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}}
ctx, cancel := context.WithCancel(context.Background())
r := &Service{
cfg: &config{
beaconDB: d,
p2p: p1,
chain: chain,
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
},
ctx: ctx,
cancel: cancel,
rateLimiter: newRateLimiter(p1),
}
// Initialize context map for RPC
ctxMap, err := ContextByteVersionsForValRoot(chain.ValidatorsRoot)
require.NoError(t, err)
r.ctxMap = ctxMap
// Setup rate limiter for goodbye topic
pcl := protocol.ID("/eth2/beacon_chain/req/goodbye/1/ssz_snappy")
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(1, 1, time.Second, false)
// Track goodbye messages received
var goodbyeMessages sync.Map
var wg sync.WaitGroup
wg.Add(2)
p2.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
out := new(primitives.SSZUint64)
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, out))
goodbyeMessages.Store(p2.BHost.ID().String(), *out)
require.NoError(t, stream.Close())
})
p3.BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
out := new(primitives.SSZUint64)
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, out))
goodbyeMessages.Store(p3.BHost.ID().String(), *out)
require.NoError(t, stream.Close())
})
connectedPeers := r.cfg.p2p.Peers().Connected()
t.Logf("Connected peers before Stop: %d", len(connectedPeers))
assert.Equal(t, 2, len(connectedPeers), "Expected 2 connected peers")
err = r.Stop()
assert.NoError(t, err)
// Wait for goodbye messages
if util.WaitTimeout(&wg, 15*time.Second) {
t.Fatal("Did not receive goodbye messages within timeout")
}
// Verify correct goodbye codes were sent
msg2, ok := goodbyeMessages.Load(p2.BHost.ID().String())
assert.Equal(t, true, ok, "Expected goodbye message to peer 2")
assert.Equal(t, p2ptypes.GoodbyeCodeClientShutdown, msg2)
msg3, ok := goodbyeMessages.Load(p3.BHost.ID().String())
assert.Equal(t, true, ok, "Expected goodbye message to peer 3")
assert.Equal(t, p2ptypes.GoodbyeCodeClientShutdown, msg3)
}
func TestService_Stop_TimeoutHandling(t *testing.T) {
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
p1.Peers().Add(nil, p2.BHost.ID(), p2.BHost.Addrs()[0], network.DirOutbound)
p1.Peers().SetConnectionState(p2.BHost.ID(), peers.Connected)
d := dbTest.SetupDB(t)
chain := &mockChain.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}}
ctx, cancel := context.WithCancel(context.Background())
r := &Service{
cfg: &config{
beaconDB: d,
p2p: p1,
chain: chain,
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
},
ctx: ctx,
cancel: cancel,
rateLimiter: newRateLimiter(p1),
}
// Initialize context map for RPC
ctxMap, err := ContextByteVersionsForValRoot(chain.ValidatorsRoot)
require.NoError(t, err)
r.ctxMap = ctxMap
// Setup rate limiter for goodbye topic
pcl := protocol.ID("/eth2/beacon_chain/req/goodbye/1/ssz_snappy")
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(1, 1, time.Second, false)
// Don't set up stream handler on p2 to simulate unresponsive peer
// Verify peers are connected before stopping
connectedPeers := r.cfg.p2p.Peers().Connected()
t.Logf("Connected peers before Stop: %d", len(connectedPeers))
start := time.Now()
err = r.Stop()
duration := time.Since(start)
t.Logf("Stop completed in %v", duration)
// Stop should complete successfully even when peers don't respond
assert.NoError(t, err)
// Should not hang - completes quickly when goodbye fails
assert.Equal(t, true, duration < 5*time.Second, "Stop() should not hang when peer is unresponsive")
// Test passes - the timeout behavior is working correctly, goodbye attempts fail quickly
}
func TestService_Stop_ConcurrentGoodbyeMessages(t *testing.T) {
// Test that goodbye messages are sent concurrently, not sequentially
const numPeers = 10
p1 := p2ptest.NewTestP2P(t)
testPeers := make([]*p2ptest.TestP2P, numPeers)
// Create and connect multiple peers
for i := 0; i < numPeers; i++ {
testPeers[i] = p2ptest.NewTestP2P(t)
p1.Connect(testPeers[i])
// Register peer in the peer status
p1.Peers().Add(nil, testPeers[i].BHost.ID(), testPeers[i].BHost.Addrs()[0], network.DirOutbound)
p1.Peers().SetConnectionState(testPeers[i].BHost.ID(), peers.Connected)
}
d := dbTest.SetupDB(t)
chain := &mockChain.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}}
ctx, cancel := context.WithCancel(context.Background())
r := &Service{
cfg: &config{
beaconDB: d,
p2p: p1,
chain: chain,
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
},
ctx: ctx,
cancel: cancel,
rateLimiter: newRateLimiter(p1),
}
// Initialize context map for RPC
ctxMap, err := ContextByteVersionsForValRoot(chain.ValidatorsRoot)
require.NoError(t, err)
r.ctxMap = ctxMap
// Setup rate limiter for goodbye topic
pcl := protocol.ID("/eth2/beacon_chain/req/goodbye/1/ssz_snappy")
topic := string(pcl)
r.rateLimiter.limiterMap[topic] = leakybucket.NewCollector(1, 1, time.Second, false)
// Each peer will have artificial delay in processing goodbye
var wg sync.WaitGroup
wg.Add(numPeers)
for i := 0; i < numPeers; i++ {
idx := i // capture loop variable
testPeers[idx].BHost.SetStreamHandler(pcl, func(stream network.Stream) {
defer wg.Done()
time.Sleep(100 * time.Millisecond) // Artificial delay
out := new(primitives.SSZUint64)
require.NoError(t, r.cfg.p2p.Encoding().DecodeWithMaxLength(stream, out))
require.NoError(t, stream.Close())
})
}
start := time.Now()
err = r.Stop()
duration := time.Since(start)
// If messages were sent sequentially, it would take numPeers * 100ms = 1 second
// If concurrent, should be ~100ms
assert.NoError(t, err)
assert.Equal(t, true, duration < 500*time.Millisecond, "Goodbye messages should be sent concurrently")
require.Equal(t, false, util.WaitTimeout(&wg, 2*time.Second))
}

View File

@@ -135,7 +135,7 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, ro
blockSlot := block.Slot()
proposerIndex := block.ProposerIndex()
// Broadcast and save data columns sidecars to custody but not yet received.
// Broadcast and save data column sidecars to custody but not yet received.
sidecarCount := uint64(len(sidecars))
for columnIndex := range info.CustodyColumns {
log := log.WithField("columnIndex", columnIndex)

View File

@@ -28,7 +28,7 @@ func (s *Service) lightClientOptimisticUpdateSubscriber(_ context.Context, msg p
"attestedHeaderRoot": fmt.Sprintf("%x", attestedHeaderRoot),
}).Debug("Saving newly received light client optimistic update.")
s.lcStore.SetLastOptimisticUpdate(update)
s.lcStore.SetLastOptimisticUpdate(update, false)
s.cfg.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
@@ -55,7 +55,7 @@ func (s *Service) lightClientFinalityUpdateSubscriber(_ context.Context, msg pro
"attestedHeaderRoot": fmt.Sprintf("%x", attestedHeaderRoot),
}).Debug("Saving newly received light client finality update.")
s.lcStore.SetLastFinalityUpdate(update)
s.lcStore.SetLastFinalityUpdate(update, false)
s.cfg.stateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,

View File

@@ -9,6 +9,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
"github.com/OffchainLabs/prysm/v6/async/event"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
@@ -684,7 +685,7 @@ func TestSubscribe_ReceivesLCOptimisticUpdate(t *testing.T) {
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
}
topic := p2p.LightClientOptimisticUpdateTopicFormat
@@ -751,7 +752,7 @@ func TestSubscribe_ReceivesLCFinalityUpdate(t *testing.T) {
stateNotifier: &mockChain.MockStateNotifier{},
},
chainStarted: abool.New(),
lcStore: &lightClient.Store{},
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
}
topic := p2p.LightClientFinalityUpdateTopicFormat

View File

@@ -25,7 +25,7 @@ import (
"github.com/sirupsen/logrus"
)
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#the-gossip-domain-gossipsub
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#the-gossip-domain-gossipsub
func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubsub.Message) (pubsub.ValidationResult, error) {
const dataColumnSidecarSubTopic = "/data_column_sidecar_%d/"
@@ -74,7 +74,7 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
verifier := s.newColumnsVerifier(roDataColumns, verification.GossipDataColumnSidecarRequirements)
// Start the verification process.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#the-gossip-domain-gossipsub
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#data_column_sidecar_subnet_id
// [REJECT] The sidecar is valid as verified by `verify_data_column_sidecar(sidecar)`.
if err := verifier.ValidFields(); err != nil {
@@ -86,7 +86,7 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationReject, err
}
// [IGNORE] The sidecar is not from a future slot (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY`` allowance
// [IGNORE] The sidecar is not from a future slot (with a `MAXIMUM_GOSSIP_CLOCK_DISPARITY` allowance)
// -- i.e. validate that `block_header.slot <= current_slot` (a client MAY queue future sidecars for processing at the appropriate slot).
if err := verifier.NotFromFutureSlot(); err != nil {
return pubsub.ValidationIgnore, err
@@ -132,13 +132,13 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationReject, err
}
// [REJECT] The current finalized_checkpoint is an ancestor of the sidecar's block
// [REJECT] The current `finalized_checkpoint` is an ancestor of the sidecar's block
// -- i.e. `get_checkpoint_block(store, block_header.parent_root, store.finalized_checkpoint.epoch) == store.finalized_checkpoint.root`.
if err := verifier.SidecarDescendsFromFinalized(); err != nil {
return pubsub.ValidationReject, err
}
// [REJECT] The sidecar's kzg_commitments field inclusion proof is valid as verified by `verify_data_column_sidecar_inclusion_proof(sidecar)`.
// [REJECT] The sidecar's `kzg_commitments` field inclusion proof is valid as verified by `verify_data_column_sidecar_inclusion_proof(sidecar)`.
if err := verifier.SidecarInclusionProven(); err != nil {
return pubsub.ValidationReject, err
}
@@ -154,7 +154,7 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationIgnore, nil
}
// [REJECT] The sidecar is proposed by the expected `proposer_index` for the block's slot in the context of the current shuffling (defined by block_header.parent_root/block_header.slot).
// [REJECT] The sidecar is proposed by the expected `proposer_index` for the block's slot in the context of the current shuffling (defined by `block_header.parent_root`/`block_header.slot`).
// If the `proposer_index` cannot immediately be verified against the expected shuffling, the sidecar MAY be queued for later processing while proposers for the block's branch are calculated
// -- in such a case do not REJECT, instead IGNORE this message.
if err := verifier.SidecarProposerExpected(ctx); err != nil {

View File

@@ -131,7 +131,7 @@ func (s *Service) validateLightClientFinalityUpdate(ctx context.Context, pid pee
return pubsub.ValidationIgnore, nil
}
if !lightclient.IsBetterFinalityUpdate(newUpdate, s.lcStore.LastFinalityUpdate()) {
if !lightclient.IsFinalityUpdateValidForBroadcast(newUpdate, s.lcStore.LastFinalityUpdate()) {
log.WithFields(logrus.Fields{
"attestedSlot": fmt.Sprintf("%d", newUpdate.AttestedHeader().Beacon().Slot),
"signatureSlot": fmt.Sprintf("%d", newUpdate.SignatureSlot()),

View File

@@ -5,6 +5,7 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/event"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
@@ -102,7 +103,7 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
// drift back appropriate number of epochs based on fork + 2 slots for signature slot + time for gossip propagation + any extra drift
genesisDrift := v*slotsPerEpoch*secondsPerSlot + 2*secondsPerSlot + secondsPerSlot/slotIntervals + test.genesisDrift
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(genesisDrift), 0)}
s := &Service{cfg: &config{p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}, lcStore: &lightClient.Store{}}
s := &Service{cfg: &config{p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}, lcStore: lightClient.NewLightClientStore(nil, &p2ptest.FakeP2P{}, new(event.Feed))}
var oldUpdate interfaces.LightClientOptimisticUpdate
var err error
@@ -111,7 +112,7 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
oldUpdate, err = lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
s.lcStore.SetLastOptimisticUpdate(oldUpdate)
s.lcStore.SetLastOptimisticUpdate(oldUpdate, false)
}
l := util.NewTestLightClient(t, v, test.newUpdateOptions...)
@@ -242,7 +243,7 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
// drift back appropriate number of epochs based on fork + 2 slots for signature slot + time for gossip propagation + any extra drift
genesisDrift := v*slotsPerEpoch*secondsPerSlot + 2*secondsPerSlot + secondsPerSlot/slotIntervals + test.genesisDrift
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(genesisDrift), 0)}
s := &Service{cfg: &config{p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}, lcStore: &lightClient.Store{}}
s := &Service{cfg: &config{p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}, lcStore: lightClient.NewLightClientStore(nil, &p2ptest.FakeP2P{}, new(event.Feed))}
var oldUpdate interfaces.LightClientFinalityUpdate
var err error
@@ -251,7 +252,7 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
oldUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
s.lcStore.SetLastFinalityUpdate(oldUpdate)
s.lcStore.SetLastFinalityUpdate(oldUpdate, false)
}
l := util.NewTestLightClient(t, v, test.newUpdateOptions...)

View File

@@ -22,7 +22,7 @@ import (
var (
// GossipDataColumnSidecarRequirements defines the set of requirements that DataColumnSidecars received on gossip
// must satisfy in order to upgrade an RODataColumn to a VerifiedRODataColumn.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#data_column_sidecar_subnet_id
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#data_column_sidecar_subnet_id
GossipDataColumnSidecarRequirements = []Requirement{
RequireValidFields,
RequireCorrectSubnet,
@@ -40,7 +40,7 @@ var (
// ByRangeRequestDataColumnSidecarRequirements defines the set of requirements that DataColumnSidecars received
// via the by range request must satisfy in order to upgrade an RODataColumn to a VerifiedRODataColumn.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
ByRangeRequestDataColumnSidecarRequirements = []Requirement{
RequireValidFields,
RequireSidecarInclusionProven,
@@ -165,7 +165,7 @@ func (dv *RODataColumnsVerifier) NotFromFutureSlot() (err error) {
// Extract the data column slot.
dataColumnSlot := dataColumn.Slot()
// Skip if the data column slotis the same as the current slot.
// Skip if the data column slot is the same as the current slot.
if currentSlot == dataColumnSlot {
continue
}
@@ -174,7 +174,7 @@ func (dv *RODataColumnsVerifier) NotFromFutureSlot() (err error) {
// We lower the time by MAXIMUM_GOSSIP_CLOCK_DISPARITY in case system time is running slightly behind real time.
earliestStart, err := dv.clock.SlotStart(dataColumnSlot)
if err != nil {
return fmt.Errorf("failed to determine slot start time from clock waiter: %w", err)
return columnErrBuilder(errors.Wrap(err, "failed to determine slot start time from clock waiter"))
}
earliestStart = earliestStart.Add(-maximumGossipClockDisparity)
@@ -204,11 +204,8 @@ func (dv *RODataColumnsVerifier) SlotAboveFinalized() (err error) {
}
for _, dataColumn := range dv.dataColumns {
// Extract the data column slot.
dataColumnSlot := dataColumn.Slot()
// Check if the data column slot is after first slot of the epoch corresponding to the finalized checkpoint.
if dataColumnSlot <= startSlot {
if dataColumn.Slot() <= startSlot {
return columnErrBuilder(errSlotNotAfterFinalized)
}
}
@@ -271,10 +268,8 @@ func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.
defer dv.recordResult(RequireSidecarParentSeen, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Skip if the parent root has been seen.
parentRoot := dataColumn.ParentRoot()
if parentSeen != nil && parentSeen(parentRoot) {
continue
}
@@ -295,10 +290,7 @@ func (dv *RODataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.
defer dv.recordResult(RequireSidecarParentValid, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
if badParent != nil && badParent(parentRoot) {
if badParent != nil && badParent(dataColumn.ParentRoot()) {
return columnErrBuilder(errSidecarParentInvalid)
}
}
@@ -314,21 +306,15 @@ func (dv *RODataColumnsVerifier) SidecarParentSlotLower() (err error) {
defer dv.recordResult(RequireSidecarParentSlotLower, &err)
for _, dataColumn := range dv.dataColumns {
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Compute the slot of the parent block.
parentSlot, err := dv.fc.Slot(parentRoot)
parentSlot, err := dv.fc.Slot(dataColumn.ParentRoot())
if err != nil {
return columnErrBuilder(errors.Wrap(err, "slot"))
}
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
// Check if the data column slot is after the parent slot.
if parentSlot >= dataColumnSlot {
return errSlotNotAfterParent
if parentSlot >= dataColumn.Slot() {
return columnErrBuilder(errSlotNotAfterParent)
}
}
@@ -435,7 +421,7 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e
// Compute the target root for the epoch.
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
if err != nil {
return [fieldparams.RootLength]byte{}, errors.Wrap(err, "target root from epoch")
return [fieldparams.RootLength]byte{}, columnErrBuilder(errors.Wrap(err, "target root from epoch"))
}
// Store the target root in the cache.
@@ -534,7 +520,7 @@ func inclusionProofKey(c blocks.RODataColumn) ([160]byte, error) {
root, err := c.SignedBlockHeader.HashTreeRoot()
if err != nil {
return [160]byte{}, errors.Wrap(err, "hash tree root")
return [160]byte{}, columnErrBuilder(errors.Wrap(err, "hash tree root"))
}
for i := range c.KzgCommitmentsInclusionProof {

View File

@@ -0,0 +1,3 @@
### Changed
- When shutting down the sync service we now send p2p goodbye messages in parallel to maximize the chances of propagating goodbyes to all peers before an unsafe shutdown.
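The shape of this change is the standard fan-out-and-wait idiom: one goroutine per connected peer, a shared timeout context so a slow peer cannot block shutdown, and a WaitGroup so handlers are only torn down after every goodbye attempt has returned. A self-contained sketch of the pattern, where `sendGoodbye` and the peer list are placeholders rather than Prysm APIs:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// sendGoodbye is a placeholder for an RPC that tells one peer we are shutting down.
func sendGoodbye(ctx context.Context, peer string) error {
	select {
	case <-time.After(50 * time.Millisecond): // simulated network round trip
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	peers := []string{"peerA", "peerB", "peerC"}

	// Shared deadline: shutdown never waits longer than this, even for unresponsive peers.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	var wg sync.WaitGroup
	for _, p := range peers {
		wg.Add(1)
		go func(p string) { // one goroutine per peer, so slow peers do not serialize shutdown
			defer wg.Done()
			if err := sendGoodbye(ctx, p); err != nil {
				fmt.Printf("goodbye to %s failed: %v\n", p, err)
			}
		}(p)
	}
	wg.Wait() // proceed with the rest of shutdown only after all attempts finish
	fmt.Println("all goodbye attempts finished")
}
```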

View File

@@ -0,0 +1,3 @@
### Changed
- Changed the discv5 database from in-memory to persistent so that local node information (including the key material backing the record) is kept across restarts, keeping the ENR sequence number deterministic.
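For context, discv5 node records in Go clients are typically managed through go-ethereum's `enode` package, where passing a filesystem path instead of an empty string to `enode.OpenDB` yields a persistent node DB, so the stored ENR state survives restarts. The sketch below is illustrative of that idea only (the path and key handling are assumptions, not Prysm's actual wiring), and it requires the go-ethereum module:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// An empty path gives an in-memory DB; a real path persists node data across restarts.
	db, err := enode.OpenDB("/var/lib/beacon/discv5-nodes") // illustrative path
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// In practice the same private key must be reloaded on restart for the ENR to stay stable.
	key, err := crypto.GenerateKey()
	if err != nil {
		panic(err)
	}

	ln := enode.NewLocalNode(db, key)
	// With a persistent DB and a stable key, the ENR sequence number does not reset on restart.
	fmt.Println("ENR seq:", ln.Node().Seq())
}
```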

View File

@@ -0,0 +1,3 @@
### Added
- Add BLOB_SCHEDULE field to `/eth/v1/config/spec` endpoint response to expose blob scheduling configuration for networks.
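To see the new field, a client can fetch the spec endpoint and look up the `BLOB_SCHEDULE` key; the exact shape of the value is network-dependent, so the sketch below decodes it generically (the node URL and port are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Assumes a beacon node serving the standard REST API locally.
	resp, err := http.Get("http://localhost:3500/eth/v1/config/spec")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body struct {
		Data map[string]interface{} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	// BLOB_SCHEDULE is the newly exposed blob scheduling configuration.
	fmt.Printf("BLOB_SCHEDULE: %v\n", body.Data["BLOB_SCHEDULE"])
}
```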

View File

@@ -0,0 +1,5 @@
### Changed
- Moved the broadcast and event notifier logic for saving LC updates to the store function.
- Fixed the issue where an LC finality update could be broadcast more than twice, and fixed the if-case bug.
- Separated the finality update validation rules for saving and broadcasting.
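The call sites in the diff above now pass a boolean alongside the update (`SetLastFinalityUpdate(update, false)`), which suggests the store itself decides whether to broadcast. A plausible sketch of such a setter, where saving and broadcasting live together in the store; the types and the broadcaster interface here are illustrative, not the actual Prysm definitions:

```go
package lcstore

import (
	"log"
	"sync"
)

// Broadcaster is an illustrative stand-in for the p2p broadcast dependency.
type Broadcaster interface {
	Broadcast(update any) error
}

// Store keeps the latest light-client updates and owns the broadcast decision.
type Store struct {
	mu                 sync.Mutex
	lastFinalityUpdate any
	broadcaster        Broadcaster
}

// SetLastFinalityUpdate saves the update and, when asked to, broadcasts it,
// so callers no longer duplicate the broadcast logic (sketch only).
func (s *Store) SetLastFinalityUpdate(update any, broadcast bool) {
	s.mu.Lock()
	s.lastFinalityUpdate = update
	s.mu.Unlock()

	if broadcast {
		if err := s.broadcaster.Broadcast(update); err != nil {
			log.Printf("broadcast failed: %v", err)
		}
	}
}
```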

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix builder bid version compatibility to support Electra bids with Fulu blocks
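The rule being relaxed here is a version check between a builder bid and the block being proposed: an exact version match always passes, and a specific adjacent-fork pair is additionally allowed. A hedged sketch of that kind of check, using illustrative version constants rather than Prysm's own version package:

```go
package main

import "fmt"

// Illustrative fork ordinals; Prysm defines its own version constants.
const (
	Electra = 6
	Fulu    = 7
)

// bidCompatibleWithBlock sketches the relaxed check: same version always passes,
// and an Electra bid is additionally accepted for a Fulu block.
func bidCompatibleWithBlock(bidVersion, blockVersion int) bool {
	if bidVersion == blockVersion {
		return true
	}
	return blockVersion == Fulu && bidVersion == Electra
}

func main() {
	fmt.Println(bidCompatibleWithBlock(Electra, Fulu))    // true after the fix
	fmt.Println(bidCompatibleWithBlock(Electra, Electra)) // true (exact match)
	fmt.Println(bidCompatibleWithBlock(Fulu, Electra))    // false (still rejected)
}
```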

View File

@@ -0,0 +1,3 @@
### Added
- Implement the `/eth/v1/beacon/states/{state_id}/proposer_lookahead` Beacon API endpoint.
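A quick way to exercise the new endpoint is a plain GET against a running node; the host and port below are assumptions, and the response is decoded generically since the exact schema is not reproduced here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// "head" is a standard state identifier; host/port assume a local beacon node.
	url := "http://localhost:3500/eth/v1/beacon/states/head/proposer_lookahead"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	// The lookahead data (proposer indices for upcoming slots) is expected under "data".
	fmt.Println(body["data"])
}
```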

Some files were not shown because too many files have changed in this diff.