mirror of https://github.com/OffchainLabs/prysm.git, synced 2026-01-10 13:58:09 -05:00

Compare commits: v6.0.0-rc. ... event-data (25 commits)
| SHA1 |
|---|
| e70a7de477 |
| efaf6649e7 |
| a1c1edf285 |
| bde7a57ec9 |
| c223957751 |
| f7eddedd1d |
| 7887ebbc4a |
| 1b13520270 |
| 0936628b72 |
| 478ae81ed1 |
| 93276150e7 |
| 83460c9956 |
| d30bb63d94 |
| ab5505e13e |
| 9c00b06966 |
| 167f719860 |
| d4469d17b7 |
| 8418157f8a |
| e4acab4187 |
| b99399c1f1 |
| c9e8701987 |
| 215dbb8e40 |
| 6180b5a560 |
| cd87082f25 |
| bab898d1d3 |
CHANGELOG.md (60 changed lines)
@@ -4,6 +4,64 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.0.0](https://github.com/prysmaticlabs/prysm/compare/v5.3.2...v6.0.0) - 2025-04-21

This release introduces mainnet support for the upcoming Electra + Prague (Pectra) fork. The fork is scheduled for mainnet epoch 364032 (May 7, 2025, 10:05:11 UTC). You MUST update the Prysm beacon node, the Prysm validator client, and your execution layer client to a Pectra-ready release prior to the fork to stay on the correct chain.

Besides Pectra, we have more light client API support, cleanups, and a few bugfixes. Please review the changelog below and update your client as soon as practical before May 7.

This release is **mandatory** for all operators before May 7.

### Added

- Implemented the validator identities Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15086)
- Add SSZ support to the light client updates by range API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15082)
- Add light client SSZ types to the spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15097)
- Added the ability for execution requests to be tested in e2e with Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14971)
- Add warning messages for gas limit ranges that might be problematic. Low gas limits (≤10% of default) may cause transactions to fail, while high gas limits (>150% of default) could lead to block propagation issues. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15078)
- Add a light client store object to the beacon node object. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15120)
- Add a prysmctl option in the wrapper script to generate devnet SSZ. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15145)
- Add support for the Electra fork epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15132)

### Changed

- The validator client will no longer use the full list of committee values but instead use the committee length and validator committee index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15039)
- Remove the `Content-Disposition` header from the `httputil.WriteSSZ` function. No `filename` parameter is needed anymore. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15092)
- Sort attestations in the proposed block by reward. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15093)
- More efficient query method for stategen to retrieve blocks between a given state and the replay target block. This avoids attempting to look up blocks that are not needed for head replay queries, which may be missing due to a previous rollback bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15063)
- Removed old web3signer metrics in favor of a universal one. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14920)
- Deprecated everything related to the gRPC API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14944)
- Migrated the Prysm repo to the Offchain Labs organization ahead of the Pectra v6 upgrade. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15140)

### Deprecated

- Deprecated and removed the `--trace` and `--cpuprofile` flags in favor of just using `--pprof`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15083)

### Removed

- Remove the /eth/v1/beacon/states/head/committees call when getting duties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15039)
- Removed unused hack scripts. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15157)
- Remove the `disable-committee-aware-packing` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15162)
- Remove deprecated flags for the major release. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15165)
- Removed Beacon API endpoints which had been deprecated at the Deneb fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15166)

### Fixed

- The `--rpc` flag will now properly enable the keymanager APIs without web. The `--web` flag will enable both the validator API endpoints and web. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15080)
- Use the latest state to pack attestations. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15113)
- Clean up dangling block index entries for blocks that were previously deleted by incomplete cleanup code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15040)
- Fixed to use an io stream instead of a stream read. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15089)
- When using a DV, send all aggregations for a slot and committee. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15110)
- Fixed a bug in consolidation request processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15122)
- Fix the state getter for pending withdrawal balance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15123)
- Fixed a bug in checking for attestation lengths in our block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15134)
- Fix the committee index check for aggregates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15146)
- Fix filtering by committee index post-Electra in `ListAttestationsV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15148)
- Peers giving invalid data in range syncing are now downscored. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15149)
- Added a fork guard to attestation API endpoints so that wrong attestation types are not accidentally included in the pool. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15161)
- Fixed a balance underflow in a leaking edge case with expected withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15191)
- Attribute block and blob issues to the correct peers during range syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15173)

## [v5.3.2](https://github.com/prysmaticlabs/prysm/compare/v5.3.1...v5.3.2) - 2025-03-25

This release introduces support for the `Hoodi` testnet.
@@ -3255,4 +3313,4 @@ There are no security updates in this release.

 # Older than v2.0.0

-For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
+For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
@@ -263,6 +263,13 @@ type ChainHead struct {
	OptimisticStatus bool `json:"optimistic_status"`
}

+type GetPendingConsolidationsResponse struct {
+	Version             string                  `json:"version"`
+	ExecutionOptimistic bool                    `json:"execution_optimistic"`
+	Finalized           bool                    `json:"finalized"`
+	Data                []*PendingConsolidation `json:"data"`
+}
+
type GetPendingDepositsResponse struct {
	Version             string `json:"version"`
	ExecutionOptimistic bool   `json:"execution_optimistic"`
@@ -51,6 +51,7 @@ type ForkchoiceFetcher interface {
	ProposerBoost() [32]byte
	RecentBlockSlot(root [32]byte) (primitives.Slot, error)
	IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error)
+	DependentRoot(primitives.Epoch) ([32]byte, error)
}

// TimeFetcher retrieves the Ethereum consensus data that's related to time.
@@ -184,13 +184,13 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
	return payloadID, nil
}

-func firePayloadAttributesEvent(_ context.Context, f event.SubscriberSender, nextSlot primitives.Slot) {
+func firePayloadAttributesEvent(f event.SubscriberSender, block interfaces.ReadOnlySignedBeaconBlock, root [32]byte, nextSlot primitives.Slot) {
	// the fcu args have differing amounts of completeness based on the code path,
	// and there is work we only want to do if a client is actually listening to the events beacon api endpoint.
	// temporary solution: just fire a blank event and fill in the details in the api handler.
	f.Send(&feed.Event{
		Type: statefeed.PayloadAttributes,
-		Data: payloadattribute.EventData{ProposalSlot: nextSlot},
+		Data: payloadattribute.EventData{HeadBlock: block, HeadRoot: root, ProposalSlot: nextSlot},
	})
}
@@ -102,7 +102,7 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
		log.WithError(err).Error("could not save head")
	}

-	go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), s.CurrentSlot()+1)
+	go firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), args.headBlock, args.headRoot, s.CurrentSlot()+1)

	// Only need to prune attestations from pool if the head has changed.
	s.pruneAttsFromPool(s.ctx, args.headState, args.headBlock)
@@ -3,15 +3,19 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")

go_library(
    name = "go_default_library",
    srcs = [
+        "kzg.go",
        "trusted_setup.go",
        "validation.go",
    ],
-    embedsrcs = ["trusted_setup.json"],
+    embedsrcs = ["trusted_setup_4096.json"],
    importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg",
    visibility = ["//visibility:public"],
    deps = [
+        "//consensus-types/blocks:go_default_library",
        "@com_github_crate_crypto_go_kzg_4844//:go_default_library",
        "@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
+        "@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
+        "@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
    ],
)
beacon-chain/blockchain/kzg/kzg.go (new file, 143 lines)
@@ -0,0 +1,143 @@
package kzg

import (
	"github.com/pkg/errors"

	ckzg4844 "github.com/ethereum/c-kzg-4844/v2/bindings/go"
	"github.com/ethereum/go-ethereum/crypto/kzg4844"
)

// BytesPerBlob is the number of bytes in a single blob.
const BytesPerBlob = ckzg4844.BytesPerBlob

// Blob represents a serialized chunk of data.
type Blob [BytesPerBlob]byte

// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.BytesPerCell

// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte

// Commitment represents a KZG commitment to a Blob.
type Commitment [48]byte

// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte

// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48

// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32

// CellsAndProofs represents the Cells and Proofs corresponding to a single blob.
type CellsAndProofs struct {
	Cells  []Cell
	Proofs []Proof
}

// BlobToKZGCommitment computes a KZG commitment from a given blob.
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
	var kzgBlob kzg4844.Blob
	copy(kzgBlob[:], blob[:])

	commitment, err := kzg4844.BlobToCommitment(&kzgBlob)
	if err != nil {
		return Commitment{}, err
	}

	return Commitment(commitment), nil
}

// ComputeCells computes the (extended) cells from a given blob.
func ComputeCells(blob *Blob) ([]Cell, error) {
	var ckzgBlob ckzg4844.Blob
	copy(ckzgBlob[:], blob[:])

	ckzgCells, err := ckzg4844.ComputeCells(&ckzgBlob)
	if err != nil {
		return nil, errors.Wrap(err, "compute cells")
	}

	cells := make([]Cell, len(ckzgCells))
	for i := range ckzgCells {
		cells[i] = Cell(ckzgCells[i])
	}

	return cells, nil
}

// ComputeBlobKZGProof computes the blob KZG proof from a given blob and its commitment.
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
	var kzgBlob kzg4844.Blob
	copy(kzgBlob[:], blob[:])

	proof, err := kzg4844.ComputeBlobProof(&kzgBlob, kzg4844.Commitment(commitment))
	if err != nil {
		return [48]byte{}, err
	}
	return Proof(proof), nil
}

// ComputeCellsAndKZGProofs computes the cells and cell KZG proofs from a given blob.
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
	var ckzgBlob ckzg4844.Blob
	copy(ckzgBlob[:], blob[:])

	ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(&ckzgBlob)
	if err != nil {
		return CellsAndProofs{}, err
	}

	return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}

// VerifyCellKZGProofBatch verifies the KZG proofs for a given slice of commitments, cell indices, cells, and proofs.
// Note: it is far more efficient to call this function once with big slices than to call it multiple times with small slices.
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, cells []Cell, proofsBytes []Bytes48) (bool, error) {
	// Convert `Cell` type to `ckzg4844.Cell`.
	ckzgCells := make([]ckzg4844.Cell, len(cells))
	for i := range cells {
		ckzgCells[i] = ckzg4844.Cell(cells[i])
	}

	return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}

// RecoverCellsAndKZGProofs recovers the complete cells and KZG proofs from a given set of cell indices and partial cells.
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
	// Convert `Cell` type to `ckzg4844.Cell`.
	ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
	for i := range partialCells {
		ckzgPartialCells[i] = ckzg4844.Cell(partialCells[i])
	}

	ckzgCells, ckzgProofs, err := ckzg4844.RecoverCellsAndKZGProofs(cellIndices, ckzgPartialCells)
	if err != nil {
		return CellsAndProofs{}, errors.Wrap(err, "recover cells and KZG proofs")
	}

	return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}

// makeCellsAndProofs converts cells/proofs to the CellsAndProofs type defined in this package.
func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
	if len(ckzgCells) != len(ckzgProofs) {
		return CellsAndProofs{}, errors.New("different number of cells/proofs")
	}

	cells := make([]Cell, 0, len(ckzgCells))
	proofs := make([]Proof, 0, len(ckzgProofs))

	for i := range ckzgCells {
		cells = append(cells, Cell(ckzgCells[i]))
		proofs = append(proofs, Proof(ckzgProofs[i]))
	}

	return CellsAndProofs{
		Cells:  cells,
		Proofs: proofs,
	}, nil
}
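A rough usage sketch of the new package (not part of the diff): compute a commitment and the extended cells/proofs for a blob, then batch-verify every cell against that single commitment. It assumes the package is importable at the `importpath` from the BUILD file above and that `Start()` (defined in `trusted_setup.go`, next diff) succeeds; the zero blob is used as a trivially valid input.

```go
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
)

func main() {
	// Load the embedded trusted setup into both KZG backends.
	if err := kzg.Start(); err != nil {
		panic(err)
	}

	var blob kzg.Blob // the zero blob is a valid input

	commitment, err := kzg.BlobToKZGCommitment(&blob)
	if err != nil {
		panic(err)
	}

	cp, err := kzg.ComputeCellsAndKZGProofs(&blob)
	if err != nil {
		panic(err)
	}

	// Batch verification takes one commitment/index/proof entry per cell.
	n := len(cp.Cells)
	commitments := make([]kzg.Bytes48, n)
	indices := make([]uint64, n)
	proofs := make([]kzg.Bytes48, n)
	for i := 0; i < n; i++ {
		commitments[i] = kzg.Bytes48(commitment)
		indices[i] = uint64(i)
		proofs[i] = kzg.Bytes48(cp.Proofs[i])
	}

	ok, err := kzg.VerifyCellKZGProofBatch(commitments, indices, cp.Cells, proofs)
	if err != nil {
		panic(err)
	}
	fmt.Println("cells verify:", ok) // should print true for a well-formed input
}
```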
@@ -5,24 +5,69 @@ import (
	"encoding/json"

	GoKZG "github.com/crate-crypto/go-kzg-4844"
+	CKZG "github.com/ethereum/c-kzg-4844/v2/bindings/go"
+	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/pkg/errors"
)

var (
-	//go:embed trusted_setup.json
+	// https://github.com/ethereum/consensus-specs/blob/dev/presets/mainnet/trusted_setups/trusted_setup_4096.json
+	//go:embed trusted_setup_4096.json
	embeddedTrustedSetup []byte // 1.2Mb
	kzgContext           *GoKZG.Context
+	kzgLoaded            bool
)

+type TrustedSetup struct {
+	G1Monomial [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_monomial"`
+	G1Lagrange [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_lagrange"`
+	G2Monomial [65]GoKZG.G2CompressedHexStr                   `json:"g2_monomial"`
+}
+
func Start() error {
-	parsedSetup := GoKZG.JSONTrustedSetup{}
-	err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup)
+	trustedSetup := &TrustedSetup{}
+	err := json.Unmarshal(embeddedTrustedSetup, trustedSetup)
	if err != nil {
		return errors.Wrap(err, "could not parse trusted setup JSON")
	}
-	kzgContext, err = GoKZG.NewContext4096(&parsedSetup)
+
+	kzgContext, err = GoKZG.NewContext4096(&GoKZG.JSONTrustedSetup{
+		SetupG2:         trustedSetup.G2Monomial[:],
+		SetupG1Lagrange: trustedSetup.G1Lagrange,
+	})
	if err != nil {
		return errors.Wrap(err, "could not initialize go-kzg context")
	}
+
+	// Length of a G1 point, converted from hex to binary.
+	g1MonomialBytes := make([]byte, len(trustedSetup.G1Monomial)*(len(trustedSetup.G1Monomial[0])-2)/2)
+	for i, g1 := range &trustedSetup.G1Monomial {
+		copy(g1MonomialBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
+	}
+
+	// Length of a G1 point, converted from hex to binary.
+	g1LagrangeBytes := make([]byte, len(trustedSetup.G1Lagrange)*(len(trustedSetup.G1Lagrange[0])-2)/2)
+	for i, g1 := range &trustedSetup.G1Lagrange {
+		copy(g1LagrangeBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
+	}
+
+	// Length of a G2 point, converted from hex to binary.
+	g2MonomialBytes := make([]byte, len(trustedSetup.G2Monomial)*(len(trustedSetup.G2Monomial[0])-2)/2)
+	for i, g2 := range &trustedSetup.G2Monomial {
+		copy(g2MonomialBytes[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
+	}
+
+	if !kzgLoaded {
+		const precompute uint = 8
+
+		kzgLoaded = true
+
+		// Free the current trusted setup before running this method.
+		// CKZG panics if the same setup is run multiple times.
+		if err = CKZG.LoadTrustedSetup(g1MonomialBytes, g1LagrangeBytes, g2MonomialBytes, precompute); err != nil {
+			return errors.Wrap(err, "load trust setup")
+		}
+	}
+
+	return nil
+}
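The repeated `(len(g1)-2)/2` in `Start()` is hex-to-byte length math: each point is a `0x`-prefixed hex string, so stripping the two prefix characters and halving gives the binary length. A standalone sketch of the arithmetic:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// A compressed G1 point is 48 bytes, serialized as "0x" + 96 hex chars.
	g1 := "0x" + strings.Repeat("ab", 48)
	fmt.Println((len(g1) - 2) / 2) // 48 — the per-point byte length used to size g1MonomialBytes

	// A compressed G2 point is 96 bytes, serialized as "0x" + 192 hex chars.
	g2 := "0x" + strings.Repeat("cd", 96)
	fmt.Println((len(g2) - 2) / 2) // 96
}
```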
File diff suppressed because it is too large.
@@ -4,6 +4,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/async/event"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
	statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
+	lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -220,3 +221,10 @@ func WithSlasherEnabled(enabled bool) Option {
		return nil
	}
}
+
+func WithLightClientStore(lcs *lightclient.Store) Option {
+	return func(s *Service) error {
+		s.lcStore = lcs
+		return nil
+	}
+}
@@ -729,9 +729,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
	attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
	// return early if we are not proposing next slot
	if attribute.IsEmpty() {
+		headBlock, err := s.headBlock()
+		if err != nil {
+			log.WithError(err).WithField("head_root", headRoot).Error("unable to retrieve head block to fire payload attributes event")
+		}
		// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
		// call notifyForkchoiceUpdate, so the event is fired here.
-		go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), s.CurrentSlot()+1)
+		go firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), headBlock, headRoot, s.CurrentSlot()+1)
		return
	}
@@ -254,7 +254,7 @@ func (s *Service) processLightClientFinalityUpdate(
		return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
	}

-	update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(
+	newUpdate, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(
		ctx,
		postState.Slot(),
		postState,
@@ -268,9 +268,32 @@ func (s *Service) processLightClientFinalityUpdate(
		return errors.Wrap(err, "could not create light client finality update")
	}

+	lastUpdate := s.lcStore.LastFinalityUpdate()
+	if lastUpdate != nil {
+		// Forward only if the finalized_header.beacon.slot is greater than that of all previously forwarded
+		// finality updates, or if it matches the highest previously forwarded slot and also has a
+		// sync_aggregate indicating supermajority (> 2/3) sync committee participation while the previously
+		// forwarded finality update for that slot did not indicate supermajority.
+		newUpdateSlot := newUpdate.FinalizedHeader().Beacon().Slot
+		newHasSupermajority := lightclient.UpdateHasSupermajority(newUpdate.SyncAggregate())
+
+		lastUpdateSlot := lastUpdate.FinalizedHeader().Beacon().Slot
+		lastHasSupermajority := lightclient.UpdateHasSupermajority(lastUpdate.SyncAggregate())
+
+		if newUpdateSlot < lastUpdateSlot {
+			log.Debug("Skip saving light client finality update: Older than local update")
+			return nil
+		}
+		if newUpdateSlot == lastUpdateSlot && (lastHasSupermajority || !newHasSupermajority) {
+			log.Debug("Skip saving light client finality update: No supermajority advantage")
+			return nil
+		}
+	}
+	log.Debug("Saving new light client finality update")
+	s.lcStore.SetLastFinalityUpdate(newUpdate)
+
	s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
		Type: statefeed.LightClientFinalityUpdate,
-		Data: update,
+		Data: newUpdate,
	})
	return nil
}
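The replacement rule above reduces to a small pure predicate. A minimal sketch (the function name is ours, not Prysm's): keep the update that finalizes the later slot; on a tie, only a supermajority the stored update lacks wins.

```go
package main

import "fmt"

// shouldReplaceFinalityUpdate mirrors the two skip conditions in the hunk above.
func shouldReplaceFinalityUpdate(newSlot, lastSlot uint64, newSuper, lastSuper bool) bool {
	if newSlot != lastSlot {
		return newSlot > lastSlot // later finalized slot wins
	}
	return newSuper && !lastSuper // same slot: supermajority breaks the tie
}

func main() {
	fmt.Println(shouldReplaceFinalityUpdate(101, 100, false, true))  // true: later slot
	fmt.Println(shouldReplaceFinalityUpdate(100, 100, true, false))  // true: new supermajority
	fmt.Println(shouldReplaceFinalityUpdate(100, 100, false, false)) // false: no advantage
}
```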
@@ -287,7 +310,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
		return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
	}

-	update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(
+	newUpdate, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(
		ctx,
		postState.Slot(),
		postState,
@@ -304,9 +327,21 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
		return errors.Wrap(err, "could not create light client optimistic update")
	}

+	lastUpdate := s.lcStore.LastOptimisticUpdate()
+	if lastUpdate != nil {
+		// Forward only if the attested_header.beacon.slot is greater than that of all previously forwarded optimistic updates.
+		if newUpdate.AttestedHeader().Beacon().Slot <= lastUpdate.AttestedHeader().Beacon().Slot {
+			log.Debug("Skip saving light client optimistic update: Older than local update")
+			return nil
+		}
+	}
+
+	log.Debug("Saving new light client optimistic update")
+	s.lcStore.SetLastOptimisticUpdate(newUpdate)
+
	s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
		Type: statefeed.LightClientOptimisticUpdate,
-		Data: update,
+		Data: newUpdate,
	})

	return nil
@@ -2653,7 +2653,7 @@ func TestSaveLightClientUpdate(t *testing.T) {

	t.Run("Altair", func(t *testing.T) {
		t.Run("No old update", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestAltair(0, true)
+			l := util.NewTestLightClient(t, version.Altair)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2699,7 +2699,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
		})

		t.Run("New update is better", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestAltair(0, true)
+			l := util.NewTestLightClient(t, version.Altair)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2751,7 +2751,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
		})

		t.Run("Old update is better", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestAltair(0, false)
+			l := util.NewTestLightClient(t, version.Altair)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2812,7 +2812,7 @@ func TestSaveLightClientUpdate(t *testing.T) {

	t.Run("Capella", func(t *testing.T) {
		t.Run("No old update", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
+			l := util.NewTestLightClient(t, version.Capella)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2857,7 +2857,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
		})

		t.Run("New update is better", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
+			l := util.NewTestLightClient(t, version.Capella)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2909,7 +2909,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
		})

		t.Run("Old update is better", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestCapella(false, 0, false)
+			l := util.NewTestLightClient(t, version.Capella)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -2970,7 +2970,7 @@ func TestSaveLightClientUpdate(t *testing.T) {

	t.Run("Deneb", func(t *testing.T) {
		t.Run("No old update", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
+			l := util.NewTestLightClient(t, version.Deneb)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -3015,7 +3015,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
		})

		t.Run("New update is better", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
+			l := util.NewTestLightClient(t, version.Deneb)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -3067,7 +3067,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
		})

		t.Run("Old update is better", func(t *testing.T) {
-			l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, false)
+			l := util.NewTestLightClient(t, version.Deneb)

			s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -3138,7 +3138,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
	ctx := tr.ctx

	t.Run("Altair", func(t *testing.T) {
-		l := util.NewTestLightClient(t).SetupTestAltair(0, true)
+		l := util.NewTestLightClient(t, version.Altair)

		s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -3173,7 +3173,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
	})

	t.Run("Capella", func(t *testing.T) {
-		l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
+		l := util.NewTestLightClient(t, version.Capella)

		s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -3208,7 +3208,7 @@ func TestSaveLightClientBootstrap(t *testing.T) {
	})

	t.Run("Deneb", func(t *testing.T) {
-		l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
+		l := util.NewTestLightClient(t, version.Deneb)

		s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)

@@ -3244,3 +3244,338 @@ func TestSaveLightClientBootstrap(t *testing.T) {

	reset()
}
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
	var l *util.TestLightClient
	switch v {
	case version.Altair:
		l = util.NewTestLightClient(t, version.Altair, options...)
	case version.Bellatrix:
		l = util.NewTestLightClient(t, version.Bellatrix, options...)
	case version.Capella:
		l = util.NewTestLightClient(t, version.Capella, options...)
	case version.Deneb:
		l = util.NewTestLightClient(t, version.Deneb, options...)
	case version.Electra:
		l = util.NewTestLightClient(t, version.Electra, options...)
	default:
		t.Errorf("Unsupported fork version %s", version.String(v))
		return nil, nil
	}

	err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
	require.NoError(t, err)
	attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
	require.NoError(t, err)
	err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
	require.NoError(t, err)

	currentBlockRoot, err := l.Block.Block().HashTreeRoot()
	require.NoError(t, err)
	roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
	require.NoError(t, err)

	err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
	require.NoError(t, err)
	err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
	require.NoError(t, err)

	err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
	require.NoError(t, err)

	cfg := &postBlockProcessConfig{
		ctx:            ctx,
		roblock:        roblock,
		postState:      l.State,
		isValidPayload: true,
	}

	return l, cfg
}

func TestProcessLightClientOptimisticUpdate(t *testing.T) {
	featCfg := &features.Flags{}
	featCfg.EnableLightClient = true
	reset := features.InitWithReset(featCfg)
	defer reset()

	params.SetupTestConfigCleanup(t)
	beaconCfg := params.BeaconConfig()
	beaconCfg.AltairForkEpoch = 1
	beaconCfg.BellatrixForkEpoch = 2
	beaconCfg.CapellaForkEpoch = 3
	beaconCfg.DenebForkEpoch = 4
	beaconCfg.ElectraForkEpoch = 5
	params.OverrideBeaconConfig(beaconCfg)

	s, tr := minimalTestService(t)
	ctx := tr.ctx

	testCases := []struct {
		name          string
		oldOptions    []util.LightClientOption
		newOptions    []util.LightClientOption
		expectReplace bool
	}{
		{
			name:          "No old update",
			oldOptions:    nil,
			newOptions:    []util.LightClientOption{},
			expectReplace: true,
		},
		{
			name:          "Same age",
			oldOptions:    []util.LightClientOption{},
			newOptions:    []util.LightClientOption{util.WithSupermajority()}, // supermajority does not matter here and is only added to produce two different updates
			expectReplace: false,
		},
		{
			name:          "Old update is better - age",
			oldOptions:    []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
			newOptions:    []util.LightClientOption{},
			expectReplace: false,
		},
		{
			name:          "New update is better - age",
			oldOptions:    []util.LightClientOption{},
			newOptions:    []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
			expectReplace: true,
		},
	}

	for _, tc := range testCases {
		for testVersion := 1; testVersion < 6; testVersion++ { // test all forks
			var forkEpoch uint64
			var expectedVersion int

			switch testVersion {
			case 1:
				forkEpoch = uint64(params.BeaconConfig().AltairForkEpoch)
				expectedVersion = version.Altair
			case 2:
				forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
				expectedVersion = version.Altair
			case 3:
				forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
				expectedVersion = version.Capella
			case 4:
				forkEpoch = uint64(params.BeaconConfig().DenebForkEpoch)
				expectedVersion = version.Deneb
			case 5:
				forkEpoch = uint64(params.BeaconConfig().ElectraForkEpoch)
				expectedVersion = version.Deneb
			default:
				t.Errorf("Unsupported fork version %s", version.String(testVersion))
			}

			t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
				s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
				s.lcStore = &lightClient.Store{}

				var oldActualUpdate interfaces.LightClientOptimisticUpdate
				var err error
				if tc.oldOptions != nil {
					// config for old update
					lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
					require.NoError(t, s.processLightClientOptimisticUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))

					oldActualUpdate, err = lightClient.NewLightClientOptimisticUpdateFromBeaconState(
						lOld.Ctx,
						lOld.State.Slot(),
						lOld.State,
						lOld.Block,
						lOld.AttestedState,
						lOld.AttestedBlock,
					)
					require.NoError(t, err)

					// check that the old update is saved
					oldUpdate := s.lcStore.LastOptimisticUpdate()
					require.NotNil(t, oldUpdate)
					require.DeepEqual(t, oldUpdate, oldActualUpdate, "old update should be saved")
				}

				// config for new update
				lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
				require.NoError(t, s.processLightClientOptimisticUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))

				newActualUpdate, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(
					lNew.Ctx,
					lNew.State.Slot(),
					lNew.State,
					lNew.Block,
					lNew.AttestedState,
					lNew.AttestedBlock,
				)
				require.NoError(t, err)
				require.DeepNotEqual(t, newActualUpdate, oldActualUpdate, "new update should not be equal to old update")

				// check that the new update is saved or skipped
				newUpdate := s.lcStore.LastOptimisticUpdate()
				require.NotNil(t, newUpdate)

				if tc.expectReplace {
					require.DeepEqual(t, newActualUpdate, newUpdate)
					require.Equal(t, expectedVersion, newUpdate.Version())
				} else {
					require.DeepEqual(t, oldActualUpdate, newUpdate)
					require.Equal(t, expectedVersion, newUpdate.Version())
				}
			})
		}
	}
}

func TestProcessLightClientFinalityUpdate(t *testing.T) {
	featCfg := &features.Flags{}
	featCfg.EnableLightClient = true
	reset := features.InitWithReset(featCfg)
	defer reset()

	params.SetupTestConfigCleanup(t)
	beaconCfg := params.BeaconConfig()
	beaconCfg.AltairForkEpoch = 1
	beaconCfg.BellatrixForkEpoch = 2
	beaconCfg.CapellaForkEpoch = 3
	beaconCfg.DenebForkEpoch = 4
	beaconCfg.ElectraForkEpoch = 5
	params.OverrideBeaconConfig(beaconCfg)

	s, tr := minimalTestService(t)
	ctx := tr.ctx

	testCases := []struct {
		name          string
		oldOptions    []util.LightClientOption
		newOptions    []util.LightClientOption
		expectReplace bool
	}{
		{
			name:          "No old update",
			oldOptions:    nil,
			newOptions:    []util.LightClientOption{},
			expectReplace: true,
		},
		{
			name:          "Old update is better - age - no supermajority",
			oldOptions:    []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
			newOptions:    []util.LightClientOption{},
			expectReplace: false,
		},
		{
			name:          "Old update is better - age - both supermajority",
			oldOptions:    []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
			newOptions:    []util.LightClientOption{util.WithSupermajority()},
			expectReplace: false,
		},
		{
			name:          "Old update is better - supermajority",
			oldOptions:    []util.LightClientOption{util.WithSupermajority()},
			newOptions:    []util.LightClientOption{},
			expectReplace: false,
		},
		{
			name:          "New update is better - age - both supermajority",
			oldOptions:    []util.LightClientOption{util.WithSupermajority()},
			newOptions:    []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
			expectReplace: true,
		},
		{
			name:          "New update is better - age - no supermajority",
			oldOptions:    []util.LightClientOption{},
			newOptions:    []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
			expectReplace: true,
		},
		{
			name:          "New update is better - supermajority",
			oldOptions:    []util.LightClientOption{},
			newOptions:    []util.LightClientOption{util.WithSupermajority()},
			expectReplace: true,
		},
	}

	for _, tc := range testCases {
		for testVersion := 1; testVersion < 6; testVersion++ { // test all forks
			var forkEpoch uint64
			var expectedVersion int

			switch testVersion {
			case 1:
				forkEpoch = uint64(params.BeaconConfig().AltairForkEpoch)
				expectedVersion = version.Altair
			case 2:
				forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
				expectedVersion = version.Altair
			case 3:
				forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
				expectedVersion = version.Capella
			case 4:
				forkEpoch = uint64(params.BeaconConfig().DenebForkEpoch)
				expectedVersion = version.Deneb
			case 5:
				forkEpoch = uint64(params.BeaconConfig().ElectraForkEpoch)
				expectedVersion = version.Electra
			default:
				t.Errorf("Unsupported fork version %s", version.String(testVersion))
			}

			t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
				s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
				s.lcStore = &lightClient.Store{}

				var actualOldUpdate, actualNewUpdate interfaces.LightClientFinalityUpdate
				var err error

				if tc.oldOptions != nil {
					// config for old update
					lOld, cfgOld := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.oldOptions...)
					require.NoError(t, s.processLightClientFinalityUpdate(cfgOld.ctx, cfgOld.roblock, cfgOld.postState))

					// check that the old update is saved
					actualOldUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(
						ctx,
						cfgOld.postState.Slot(),
						cfgOld.postState,
						cfgOld.roblock,
						lOld.AttestedState,
						lOld.AttestedBlock,
						lOld.FinalizedBlock,
					)
					require.NoError(t, err)
					oldUpdate := s.lcStore.LastFinalityUpdate()
					require.DeepEqual(t, actualOldUpdate, oldUpdate)
				}

				// config for new update
				lNew, cfgNew := setupLightClientTestRequirements(ctx, t, s, testVersion, tc.newOptions...)
				require.NoError(t, s.processLightClientFinalityUpdate(cfgNew.ctx, cfgNew.roblock, cfgNew.postState))

				// check that the actual old update and the actual new update are different
				actualNewUpdate, err = lightClient.NewLightClientFinalityUpdateFromBeaconState(
					ctx,
					cfgNew.postState.Slot(),
					cfgNew.postState,
					cfgNew.roblock,
					lNew.AttestedState,
					lNew.AttestedBlock,
					lNew.FinalizedBlock,
				)
				require.NoError(t, err)
				require.DeepNotEqual(t, actualOldUpdate, actualNewUpdate)

				// check that the new update is saved or skipped
				newUpdate := s.lcStore.LastFinalityUpdate()

				if tc.expectReplace {
					require.DeepEqual(t, actualNewUpdate, newUpdate)
					require.Equal(t, expectedVersion, newUpdate.Version())
				} else {
					require.DeepEqual(t, actualOldUpdate, newUpdate)
					require.Equal(t, expectedVersion, newUpdate.Version())
				}
			})
		}
	}
}
@@ -15,6 +15,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
	statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
+	lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
	coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -64,6 +65,7 @@ type Service struct {
	blockBeingSynced *currentlySyncingBlock
	blobStorage      *filesystem.BlobStorage
	slasherEnabled   bool
+	lcStore          *lightClient.Store
}

// config options for the service.
@@ -10,6 +10,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
	statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
+	lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -122,6 +123,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
		WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
		WithSyncChecker(mock.MockChecker{}),
		WithExecutionEngineCaller(&mockExecution.EngineClient{}),
+		WithLightClientStore(&lightclient.Store{}),
	}
	// append the variadic opts so they override the defaults by being processed afterwards
	opts = append(defOpts, opts...)
@@ -53,6 +53,7 @@ type ChainService struct {
	InitSyncBlockRoots     map[[32]byte]bool
	DB                     db.Database
	State                  state.BeaconState
+	HeadStateErr           error
	Block                  interfaces.ReadOnlySignedBeaconBlock
	VerifyBlkDescendantErr error
	stateNotifier          statefeed.Notifier
@@ -364,6 +365,9 @@ func (s *ChainService) HeadState(context.Context) (state.BeaconState, error) {

// HeadStateReadOnly mocks HeadStateReadOnly method in chain service.
func (s *ChainService) HeadStateReadOnly(context.Context) (state.ReadOnlyBeaconState, error) {
+	if s.HeadStateErr != nil {
+		return nil, s.HeadStateErr
+	}
	return s.State, nil
}
@@ -448,6 +452,11 @@ func (s *ChainService) IsCanonical(_ context.Context, r [32]byte) (bool, error)
	return true, nil
}

+// DependentRoot mocks the base method in the chain service.
+func (*ChainService) DependentRoot(_ primitives.Epoch) ([32]byte, error) {
+	return [32]byte{}, nil
+}
+
// HasBlock mocks the same method in the chain service.
func (s *ChainService) HasBlock(ctx context.Context, rt [32]byte) bool {
	if s.DB == nil {
@@ -106,7 +106,7 @@ func VerifyAttestationNoVerifySignature(
	if err != nil {
		return err
	}
-	c := helpers.SlotCommitteeCount(activeValidatorCount)
+	committeeCount := helpers.SlotCommitteeCount(activeValidatorCount)

	var indexedAtt ethpb.IndexedAtt

@@ -115,13 +115,14 @@ func VerifyAttestationNoVerifySignature(
			return errors.New("committee index must be 0 post-Electra")
		}

+		aggBits := att.GetAggregationBits()
		committeeIndices := att.CommitteeBitsVal().BitIndices()
		committees := make([][]primitives.ValidatorIndex, len(committeeIndices))
		participantsCount := 0
		var err error
		for i, ci := range committeeIndices {
-			if uint64(ci) >= c {
-				return fmt.Errorf("committee index %d >= committee count %d", ci, c)
+			if uint64(ci) >= committeeCount {
+				return fmt.Errorf("committee index %d >= committee count %d", ci, committeeCount)
			}
			committees[i], err = helpers.BeaconCommitteeFromState(ctx, beaconState, att.GetData().Slot, primitives.CommitteeIndex(ci))
			if err != nil {
@@ -129,16 +130,32 @@ func VerifyAttestationNoVerifySignature(
			}
			participantsCount += len(committees[i])
		}
-		if att.GetAggregationBits().Len() != uint64(participantsCount) {
+		if aggBits.Len() != uint64(participantsCount) {
			return fmt.Errorf("aggregation bits count %d is different than participant count %d", att.GetAggregationBits().Len(), participantsCount)
		}

+		committeeOffset := 0
+		for ci, c := range committees {
+			attesterFound := false
+			for i := range c {
+				if aggBits.BitAt(uint64(committeeOffset + i)) {
+					attesterFound = true
+					break
+				}
+			}
+			if !attesterFound {
+				return fmt.Errorf("no attesting indices found for committee index %d", ci)
+			}
+			committeeOffset += len(c)
+		}
+
		indexedAtt, err = attestation.ConvertToIndexed(ctx, att, committees...)
		if err != nil {
			return err
		}
	} else {
-		if uint64(att.GetData().CommitteeIndex) >= c {
-			return fmt.Errorf("committee index %d >= committee count %d", att.GetData().CommitteeIndex, c)
+		if uint64(att.GetData().CommitteeIndex) >= committeeCount {
+			return fmt.Errorf("committee index %d >= committee count %d", att.GetData().CommitteeIndex, committeeCount)
		}

		// Verify attesting indices are correct.
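The new loop relies on the Electra on-chain aggregate layout: the aggregation bitlist is the concatenation of the per-committee bitlists, in committee-index order, so `committeeOffset` walks it window by window. A standalone sketch of the offset bookkeeping:

```go
package main

import "fmt"

func main() {
	// Suppose the attestation spans two committees of sizes 3 and 2.
	committeeSizes := []int{3, 2}

	// Electra packs their bits back to back: bits 0..2 belong to committee 0,
	// bits 3..4 to committee 1 — the same windows the diff's inner loop scans.
	offset := 0
	for ci, n := range committeeSizes {
		fmt.Printf("committee %d owns aggregation bits [%d..%d]\n", ci, offset, offset+n-1)
		offset += n
	}
	// The check in the diff fails if any of those per-committee windows is all zeros.
}
```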
@@ -296,6 +296,22 @@ func TestVerifyAttestationNoVerifySignature_Electra(t *testing.T) {
		err = blocks.VerifyAttestationNoVerifySignature(context.TODO(), beaconState, att)
		assert.ErrorContains(t, "aggregation bits count 123 is different than participant count 3", err)
	})
+	t.Run("no attester in committee", func(t *testing.T) {
+		aggBits := bitfield.NewBitlist(3)
+		committeeBits := bitfield.NewBitvector64()
+		committeeBits.SetBitAt(0, true)
+		att := &ethpb.AttestationElectra{
+			Data: &ethpb.AttestationData{
+				Source: &ethpb.Checkpoint{Epoch: 0, Root: mockRoot[:]},
+				Target: &ethpb.Checkpoint{Epoch: 0, Root: make([]byte, 32)},
+			},
+			AggregationBits: aggBits,
+			CommitteeBits:   committeeBits,
+		}
+		att.Signature = zeroSig[:]
+		err = blocks.VerifyAttestationNoVerifySignature(context.TODO(), beaconState, att)
+		assert.ErrorContains(t, "no attesting indices found for committee index 0", err)
+	})
}

func TestConvertToIndexed_OK(t *testing.T) {
@@ -16,6 +16,9 @@ import (
	"google.golang.org/protobuf/proto"
)

+// ErrCouldNotVerifyBlockHeader is returned when a block header's signature cannot be verified.
+var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")
+
type slashValidatorFunc func(
	ctx context.Context,
	st state.BeaconState,
@@ -114,7 +117,7 @@ func VerifyProposerSlashing(
	for _, header := range headers {
		if err := signing.ComputeDomainVerifySigningRoot(beaconState, pIdx, slots.ToEpoch(hSlot),
			header.Header, params.BeaconConfig().DomainBeaconProposer, header.Signature); err != nil {
-			return errors.Wrap(err, "could not verify beacon block header")
+			return errors.Wrap(ErrCouldNotVerifyBlockHeader, err.Error())
		}
	}
	return nil
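A small sketch of what the sentinel buys callers: with `errors.Wrap(ErrCouldNotVerifyBlockHeader, err.Error())`, the sentinel becomes the cause, so failures can be classified with `errors.Is` instead of string matching (assuming a pkg/errors version that implements `Unwrap`, as modern releases do).

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")

func main() {
	// Wrap the sentinel with the underlying failure's message, as the diff does.
	underlying := errors.New("signature did not verify")
	err := errors.Wrap(ErrCouldNotVerifyBlockHeader, underlying.Error())

	// Callers can now classify the failure without parsing the message.
	fmt.Println(errors.Is(err, ErrCouldNotVerifyBlockHeader)) // true
}
```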
@@ -11,7 +11,7 @@ import (
)

// UpgradeToFulu upgrades a generic state to a Fulu-version state.
-// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/fork.md#upgrading-the-state
+// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
	currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
	if err != nil {
@@ -69,6 +69,10 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
	if err != nil {
		return nil, err
	}
+	depositRequestsStartIndex, err := beaconState.DepositRequestsStartIndex()
+	if err != nil {
+		return nil, err
+	}
	depositBalanceToConsume, err := beaconState.DepositBalanceToConsume()
	if err != nil {
		return nil, err
@@ -154,7 +158,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
		NextWithdrawalValidatorIndex: vi,
		HistoricalSummaries:          summaries,

-		DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
+		DepositRequestsStartIndex: depositRequestsStartIndex,
		DepositBalanceToConsume:   depositBalanceToConsume,
		ExitBalanceToConsume:      exitBalanceToConsume,
		EarliestExitEpoch:         earliestExitEpoch,
@@ -4,6 +4,7 @@ import (
	"bytes"
	"context"
	"encoding/binary"
+	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
@@ -299,9 +300,12 @@ func ProposerIndexAtSlotFromCheckpoint(c *forkchoicetypes.Checkpoint, slot primi
// BeaconProposerIndexAtSlot returns proposer index at the given slot from the
// point of view of the given state as head state
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
+	pid, err := GetCachedProposerIndex(state, slot)
+	if err == nil {
+		return pid, nil
+	}
+
	e := slots.ToEpoch(slot)
	// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
	// For simplicity, the node will skip caching of genesis epoch.
	if e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
		s, err := slots.EpochEnd(e - 1)
		if err != nil {
@@ -312,14 +316,10 @@ func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconSt
			return 0, err
		}
		if r != nil && !bytes.Equal(r, params.BeaconConfig().ZeroHash[:]) {
-			pid, err := cachedProposerIndexAtSlot(slot, [32]byte(r))
-			if err == nil {
-				return pid, nil
-			}
			if err := UpdateProposerIndicesInCache(ctx, state, e); err != nil {
				return 0, errors.Wrap(err, "could not update proposer index cache")
			}
-			pid, err = cachedProposerIndexAtSlot(slot, [32]byte(r))
+			pid, err := cachedProposerIndexAtSlot(slot, [32]byte(r))
			if err == nil {
				return pid, nil
			}
@@ -342,6 +342,34 @@ func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconSt
	return ComputeProposerIndex(state, indices, seedWithSlotHash)
}

+var ErrProposerNotFound = errors.New("invalid nil or unknown node")
+
+func GetCachedProposerIndex(state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
+	epoch := slots.ToEpoch(slot)
+	if epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
+		return 0, fmt.Errorf("epoch %d is too early to get cached proposer index", epoch)
+	}
+
+	endSlot, err := slots.EpochEnd(epoch - 1)
+	if err != nil {
+		return 0, err
+	}
+
+	root, err := StateRootAtSlot(state, endSlot)
+	if err != nil {
+		return 0, err
+	}
+	if root == nil || bytes.Equal(root, params.BeaconConfig().ZeroHash[:]) {
+		return 0, ErrProposerNotFound
+	}
+
+	proposerIndex, err := cachedProposerIndexAtSlot(slot, [32]byte(root))
+	if err != nil {
+		return 0, ErrProposerNotFound
+	}
+	return proposerIndex, nil
+}
+
// ComputeProposerIndex returns the index sampled by effective balance, which is used to calculate proposer.
//
// nolint:dupword
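The cache-key comment above is easiest to see with concrete numbers. A sketch of the arithmetic, assuming mainnet's 32-slot epochs: for any slot in epoch 1, `GetCachedProposerIndex` keys the lookup on the state root at the last slot of epoch 0.

```go
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32 // mainnet preset

	slot := uint64(40)                 // some slot in epoch 1
	epoch := slot / slotsPerEpoch      // 1
	endSlot := epoch*slotsPerEpoch - 1 // last slot of the previous epoch

	fmt.Printf("slot %d (epoch %d) -> cache key is state root at slot %d\n", slot, epoch, endSlot) // slot 31
}
```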
@@ -45,6 +45,7 @@ go_test(
        "//encoding/ssz:go_default_library",
        "//proto/engine/v1:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
+        "//runtime/version:go_default_library",
        "//testing/require:go_default_library",
        "//testing/util:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
@@ -1013,3 +1013,9 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.

	return light_client.NewWrappedBootstrap(m)
}
+
+func UpdateHasSupermajority(syncAggregate *pb.SyncAggregate) bool {
+	maxActiveParticipants := syncAggregate.SyncCommitteeBits.Len()
+	numActiveParticipants := syncAggregate.SyncCommitteeBits.Count()
+	return numActiveParticipants*3 >= maxActiveParticipants*2
+}
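`UpdateHasSupermajority` does the two-thirds check in integers to avoid any floating-point rounding. A quick sketch with the 512-member sync committee: 341 participants fall just short (1023 < 1024), while 342 clear the bar (1026 >= 1024).

```go
package main

import "fmt"

func main() {
	const committeeSize = 512 // sync committee size on mainnet

	for _, participants := range []uint64{341, 342} {
		// Integer form of the supermajority test used above: participants*3 >= committeeSize*2.
		ok := participants*3 >= committeeSize*2
		fmt.Println(participants, ok) // 341 false, 342 true
	}
}
```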
@@ -8,6 +8,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/config/params"
    light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/runtime/version"

    lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -32,7 +33,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
    params.OverrideBeaconConfig(cfg)

    t.Run("Altair", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestAltair(0, true)
        l := util.NewTestLightClient(t, version.Altair)

        update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
        require.NoError(t, err)
@@ -44,7 +45,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
    })

    t.Run("Capella", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
        l := util.NewTestLightClient(t, version.Capella)

        update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
        require.NoError(t, err)
@@ -57,7 +58,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
    })

    t.Run("Deneb", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
        l := util.NewTestLightClient(t, version.Deneb)

        update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
        require.NoError(t, err)
@@ -70,7 +71,7 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
    })

    t.Run("Electra", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestElectra(false, 0, true)
        l := util.NewTestLightClient(t, version.Electra)

        update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
        require.NoError(t, err)
@@ -94,7 +95,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
    params.OverrideBeaconConfig(cfg)

    t.Run("Altair", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestAltair(0, true)
        l := util.NewTestLightClient(t, version.Altair)

        t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
@@ -131,7 +132,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
    t.Run("Capella", func(t *testing.T) {

        t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
            l := util.NewTestLightClient(t, version.Capella)
            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
            require.NoError(t, err)
            require.NotNil(t, update, "update is nil")
@@ -205,7 +206,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
        })

        t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestCapellaFinalizedBlockAltair(false)
            l := util.NewTestLightClient(t, version.Capella, util.WithFinalizedCheckpointInPrevFork())
            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
            require.NoError(t, err)
            require.NotNil(t, update, "update is nil")
@@ -238,7 +239,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
    t.Run("Deneb", func(t *testing.T) {

        t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
            l := util.NewTestLightClient(t, version.Deneb)

            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
            require.NoError(t, err)
@@ -313,7 +314,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
        })

        t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestDenebFinalizedBlockCapella(false)
            l := util.NewTestLightClient(t, version.Deneb, util.WithFinalizedCheckpointInPrevFork())

            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
            require.NoError(t, err)
@@ -390,7 +391,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {

    t.Run("Electra", func(t *testing.T) {
        t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestElectra(false, 0, true)
            l := util.NewTestLightClient(t, version.Electra)

            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
            require.NoError(t, err)
@@ -465,7 +466,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
        })

        t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestElectraFinalizedBlockDeneb(false)
            l := util.NewTestLightClient(t, version.Electra, util.WithFinalizedCheckpointInPrevFork())

            update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
            require.NoError(t, err)
@@ -542,7 +543,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
func TestLightClient_BlockToLightClientHeader(t *testing.T) {
    t.Run("Altair", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestAltair(0, true)
        l := util.NewTestLightClient(t, version.Altair)

        header, err := lightClient.BlockToLightClientHeader(
            l.Ctx,
@@ -565,7 +566,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
    })

    t.Run("Bellatrix", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestBellatrix(0, true)
        l := util.NewTestLightClient(t, version.Bellatrix)

        header, err := lightClient.BlockToLightClientHeader(
            l.Ctx,
@@ -589,7 +590,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {

    t.Run("Capella", func(t *testing.T) {
        t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
            l := util.NewTestLightClient(t, version.Capella)

            header, err := lightClient.BlockToLightClientHeader(
                l.Ctx,
@@ -650,7 +651,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
        })

        t.Run("Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestCapella(true, 0, true)
            l := util.NewTestLightClient(t, version.Capella, util.WithBlinded())

            header, err := lightClient.BlockToLightClientHeader(
                l.Ctx,
@@ -713,7 +714,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {

    t.Run("Deneb", func(t *testing.T) {
        t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
            l := util.NewTestLightClient(t, version.Deneb)

            header, err := lightClient.BlockToLightClientHeader(
                l.Ctx,
@@ -782,7 +783,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
        })

        t.Run("Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestDeneb(true, 0, true)
            l := util.NewTestLightClient(t, version.Deneb, util.WithBlinded())

            header, err := lightClient.BlockToLightClientHeader(
                l.Ctx,
@@ -853,7 +854,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {

    t.Run("Electra", func(t *testing.T) {
        t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestElectra(false, 0, true)
            l := util.NewTestLightClient(t, version.Electra)

            header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
            require.NoError(t, err)
@@ -918,7 +919,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
        })

        t.Run("Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestElectra(true, 0, true)
            l := util.NewTestLightClient(t, version.Electra, util.WithBlinded())

            header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
            require.NoError(t, err)
@@ -984,7 +985,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
    })

    t.Run("Capella fork with Altair block", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestAltair(0, true)
        l := util.NewTestLightClient(t, version.Altair)

        header, err := lightClient.BlockToLightClientHeader(
            l.Ctx,
@@ -1006,7 +1007,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
    })

    t.Run("Deneb fork with Altair block", func(t *testing.T) {
        l := util.NewTestLightClient(t).SetupTestAltair(0, true)
        l := util.NewTestLightClient(t, version.Altair)

        header, err := lightClient.BlockToLightClientHeader(
            l.Ctx,
@@ -1029,7 +1030,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {

    t.Run("Deneb fork with Capella block", func(t *testing.T) {
        t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
            l := util.NewTestLightClient(t, version.Capella)

            header, err := lightClient.BlockToLightClientHeader(
                l.Ctx,
@@ -1089,7 +1090,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
        })

        t.Run("Blinded Beacon Block", func(t *testing.T) {
            l := util.NewTestLightClient(t).SetupTestCapella(true, 0, true)
            l := util.NewTestLightClient(t, version.Capella, util.WithBlinded())

            header, err := lightClient.BlockToLightClientHeader(
                l.Ctx,
@@ -5,6 +5,7 @@ import (

    lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/runtime/version"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
)
@@ -23,7 +24,7 @@ func TestLightClientStore(t *testing.T) {
    lcStore := &lightClient.Store{}

    // Create test light client updates for Capella and Deneb
    lCapella := util.NewTestLightClient(t).SetupTestCapella(false, 0, true)
    lCapella := util.NewTestLightClient(t, version.Capella)
    opUpdateCapella, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lCapella.Ctx, lCapella.State.Slot(), lCapella.State, lCapella.Block, lCapella.AttestedState, lCapella.AttestedBlock)
    require.NoError(t, err)
    require.NotNil(t, opUpdateCapella, "OptimisticUpdateCapella is nil")
@@ -31,7 +32,7 @@ func TestLightClientStore(t *testing.T) {
    require.NoError(t, err)
    require.NotNil(t, finUpdateCapella, "FinalityUpdateCapella is nil")

    lDeneb := util.NewTestLightClient(t).SetupTestDeneb(false, 0, true)
    lDeneb := util.NewTestLightClient(t, version.Deneb)
    opUpdateDeneb, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(lDeneb.Ctx, lDeneb.State.Slot(), lDeneb.State, lDeneb.Block, lDeneb.AttestedState, lDeneb.AttestedBlock)
    require.NoError(t, err)
    require.NotNil(t, opUpdateDeneb, "OptimisticUpdateDeneb is nil")
@@ -779,6 +779,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
        blockchain.WithPayloadIDCache(b.payloadIDCache),
        blockchain.WithSyncChecker(b.syncChecker),
        blockchain.WithSlasherEnabled(b.slasherEnabled),
        blockchain.WithLightClientStore(b.lcStore),
    )

    blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -1010,6 +1011,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
        BlobStorage:            b.BlobStorage,
        TrackedValidatorsCache: b.trackedValidatorsCache,
        PayloadIDCache:         b.payloadIDCache,
        LCStore:                b.lcStore,
    })

    return b.services.RegisterService(rpcService)
@@ -56,6 +56,7 @@ go_library(
        "//cmd/beacon-chain/flags:go_default_library",
        "//config/features:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/interfaces:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//consensus-types/wrapper:go_default_library",
        "//container/leaky-bucket:go_default_library",
@@ -140,6 +141,7 @@ go_test(
        "//beacon-chain/blockchain/testing:go_default_library",
        "//beacon-chain/cache:go_default_library",
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/light-client:go_default_library",
        "//beacon-chain/core/signing:go_default_library",
        "//beacon-chain/db/testing:go_default_library",
        "//beacon-chain/p2p/encoder:go_default_library",
@@ -152,6 +154,7 @@ go_test(
        "//cmd/beacon-chain/flags:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/interfaces:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//consensus-types/wrapper:go_default_library",
        "//container/leaky-bucket:go_default_library",
@@ -163,10 +166,12 @@ go_test(
        "//proto/eth/v1:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
        "//proto/testing:go_default_library",
        "//runtime/version:go_default_library",
        "//testing/assert:go_default_library",
        "//testing/require:go_default_library",
        "//testing/util:go_default_library",
        "//time:go_default_library",
        "//time/slots:go_default_library",
        "@com_github_ethereum_go_ethereum//crypto:go_default_library",
        "@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
        "@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
@@ -10,9 +10,11 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/crypto/hash"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
    "github.com/OffchainLabs/prysm/v6/network/forks"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
@@ -268,6 +270,58 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
    }
}

func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error {
    ctx, span := trace.StartSpan(ctx, "p2p.BroadcastLightClientOptimisticUpdate")
    defer span.End()

    if update == nil || update.IsNil() {
        return errors.New("attempted to broadcast nil light client optimistic update")
    }

    forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
    if err != nil {
        err := errors.Wrap(err, "could not retrieve fork digest")
        tracing.AnnotateError(span, err)
        return err
    }

    // TODO: should we check if the update is too early or too late to broadcast?

    if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(forkDigest)); err != nil {
        err := errors.Wrap(err, "could not publish message")
        tracing.AnnotateError(span, err)
        return err
    }

    return nil
}

func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error {
    ctx, span := trace.StartSpan(ctx, "p2p.BroadcastLightClientFinalityUpdate")
    defer span.End()

    if update == nil || update.IsNil() {
        return errors.New("attempted to broadcast nil light client finality update")
    }

    forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
    if err != nil {
        err := errors.Wrap(err, "could not retrieve fork digest")
        tracing.AnnotateError(span, err)
        return err
    }

    // TODO: should we check if the update is too early or too late to broadcast?

    if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
        err := errors.Wrap(err, "could not publish message")
        tracing.AnnotateError(span, err)
        return err
    }

    return nil
}

// method to broadcast messages to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
    ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
@@ -308,3 +362,11 @@ func syncCommitteeToTopic(subnet uint64, forkDigest [4]byte) string {
func blobSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
    return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}

func lcOptimisticToTopic(forkDigest [4]byte) string {
    return fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, forkDigest)
}

func lcFinalityToTopic(forkDigest [4]byte) string {
    return fmt.Sprintf(LightClientFinalityUpdateTopicFormat, forkDigest)
}
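Both topic helpers stamp the 4-byte fork digest into the topic name, so light client updates only propagate among peers that agree on the current fork. A small standalone sketch of the resulting string, assuming the usual /eth2/&lt;digest&gt;/ prefix from GossipProtocolAndDigest (the literal format here is an assumption; the canonical constant lives in the topics file below):

```go
package main

import "fmt"

// lcOptimisticTopic mirrors LightClientOptimisticUpdateTopicFormat, assuming
// the "/eth2/%x/" gossip prefix used by the other Prysm topic formats.
func lcOptimisticTopic(forkDigest [4]byte) string {
    return fmt.Sprintf("/eth2/%x/light_client_optimistic_update", forkDigest)
}

func main() {
    fmt.Println(lcOptimisticTopic([4]byte{0x6a, 0x95, 0xa1, 0xa9}))
    // Output: /eth2/6a95a1a9/light_client_optimistic_update
}
```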
@@ -10,17 +10,22 @@ import (
    "time"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
    p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    "github.com/OffchainLabs/prysm/v6/network/forks"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    testpb "github.com/OffchainLabs/prysm/v6/proto/testing"
    "github.com/OffchainLabs/prysm/v6/runtime/version"
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    pubsub "github.com/libp2p/go-libp2p-pubsub"
    "github.com/libp2p/go-libp2p/core/host"
    "github.com/prysmaticlabs/go-bitfield"
@@ -516,3 +521,137 @@ func TestService_BroadcastBlob(t *testing.T) {
    require.NoError(t, p.BroadcastBlob(ctx, subnet, blobSidecar))
    require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}

func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
    p1 := p2ptest.NewTestP2P(t)
    p2 := p2ptest.NewTestP2P(t)
    p1.Connect(p2)
    require.NotEqual(t, 0, len(p1.BHost.Network().Peers()))

    p := &Service{
        host:                  p1.BHost,
        pubsub:                p1.PubSub(),
        joinedTopics:          map[string]*pubsub.Topic{},
        cfg:                   &Config{},
        genesisTime:           time.Now(),
        genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
        subnetsLock:           make(map[uint64]*sync.RWMutex),
        subnetsLockLock:       sync.Mutex{},
        peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
            ScorerParams: &scorers.Config{},
        }),
    }

    l := util.NewTestLightClient(t, version.Altair)
    msg, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
    require.NoError(t, err)

    GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
    digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
    require.NoError(t, err)
    topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, digest)

    // External peer subscribes to the topic.
    topic += p.Encoding().ProtocolSuffix()
    sub, err := p2.SubscribeToTopic(topic)
    require.NoError(t, err)

    time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...

    // Async listen for the pubsub, must be before the broadcast.
    var wg sync.WaitGroup
    wg.Add(1)
    go func(tt *testing.T) {
        defer wg.Done()
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        defer cancel()

        incomingMessage, err := sub.Next(ctx)
        require.NoError(t, err)

        result := &ethpb.LightClientOptimisticUpdateAltair{}
        require.NoError(t, p.Encoding().DecodeGossip(incomingMessage.Data, result))
        if !proto.Equal(result, msg.Proto()) {
            tt.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
        }
    }(t)

    // Broadcasting nil should fail.
    ctx := context.Background()
    require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientOptimisticUpdate(ctx, nil))
    var nilUpdate interfaces.LightClientOptimisticUpdate
    require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientOptimisticUpdate(ctx, nilUpdate))

    // Broadcast to peers and wait.
    require.NoError(t, p.BroadcastLightClientOptimisticUpdate(ctx, msg))
    if util.WaitTimeout(&wg, 1*time.Second) {
        t.Error("Failed to receive pubsub within 1s")
    }
}

func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
    p1 := p2ptest.NewTestP2P(t)
    p2 := p2ptest.NewTestP2P(t)
    p1.Connect(p2)
    require.NotEqual(t, 0, len(p1.BHost.Network().Peers()))

    p := &Service{
        host:                  p1.BHost,
        pubsub:                p1.PubSub(),
        joinedTopics:          map[string]*pubsub.Topic{},
        cfg:                   &Config{},
        genesisTime:           time.Now(),
        genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
        subnetsLock:           make(map[uint64]*sync.RWMutex),
        subnetsLockLock:       sync.Mutex{},
        peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
            ScorerParams: &scorers.Config{},
        }),
    }

    l := util.NewTestLightClient(t, version.Altair)
    msg, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
    require.NoError(t, err)

    GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat
    digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
    require.NoError(t, err)
    topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, digest)

    // External peer subscribes to the topic.
    topic += p.Encoding().ProtocolSuffix()
    sub, err := p2.SubscribeToTopic(topic)
    require.NoError(t, err)

    time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...

    // Async listen for the pubsub, must be before the broadcast.
    var wg sync.WaitGroup
    wg.Add(1)
    go func(tt *testing.T) {
        defer wg.Done()
        ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
        defer cancel()

        incomingMessage, err := sub.Next(ctx)
        require.NoError(t, err)

        result := &ethpb.LightClientFinalityUpdateAltair{}
        require.NoError(t, p.Encoding().DecodeGossip(incomingMessage.Data, result))
        if !proto.Equal(result, msg.Proto()) {
            tt.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
        }
    }(t)

    // Broadcasting nil should fail.
    ctx := context.Background()
    require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientFinalityUpdate(ctx, nil))
    var nilUpdate interfaces.LightClientFinalityUpdate
    require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientFinalityUpdate(ctx, nilUpdate))

    // Broadcast to peers and wait.
    require.NoError(t, p.BroadcastLightClientFinalityUpdate(ctx, msg))
    if util.WaitTimeout(&wg, 1*time.Second) {
        t.Error("Failed to receive pubsub within 1s")
    }
}
@@ -105,7 +105,8 @@ func (l *listenerWrapper) RandomNodes() enode.Iterator {
func (l *listenerWrapper) Ping(node *enode.Node) error {
    l.mu.RLock()
    defer l.mu.RUnlock()
    return l.listener.Ping(node)
    _, err := l.listener.Ping(node)
    return err
}

func (l *listenerWrapper) RequestENR(node *enode.Node) (*enode.Node, error) {
@@ -22,6 +22,8 @@ var gossipTopicMappings = map[string]func() proto.Message{
    SyncCommitteeSubnetTopicFormat:         func() proto.Message { return &ethpb.SyncCommitteeMessage{} },
    BlsToExecutionChangeSubnetTopicFormat:  func() proto.Message { return &ethpb.SignedBLSToExecutionChange{} },
    BlobSubnetTopicFormat:                  func() proto.Message { return &ethpb.BlobSidecar{} },
    LightClientOptimisticUpdateTopicFormat: func() proto.Message { return &ethpb.LightClientOptimisticUpdateAltair{} },
    LightClientFinalityUpdateTopicFormat:   func() proto.Message { return &ethpb.LightClientFinalityUpdateAltair{} },
}

// GossipTopicMappings is a function to return the assigned data type
@@ -63,6 +65,25 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
            return &ethpb.SignedAggregateAttestationAndProofElectra{}
        }
        return gossipMessage(topic)
    case LightClientOptimisticUpdateTopicFormat:
        if epoch >= params.BeaconConfig().DenebForkEpoch {
            return &ethpb.LightClientOptimisticUpdateDeneb{}
        }
        if epoch >= params.BeaconConfig().CapellaForkEpoch {
            return &ethpb.LightClientOptimisticUpdateCapella{}
        }
        return gossipMessage(topic)
    case LightClientFinalityUpdateTopicFormat:
        if epoch >= params.BeaconConfig().ElectraForkEpoch {
            return &ethpb.LightClientFinalityUpdateElectra{}
        }
        if epoch >= params.BeaconConfig().DenebForkEpoch {
            return &ethpb.LightClientFinalityUpdateDeneb{}
        }
        if epoch >= params.BeaconConfig().CapellaForkEpoch {
            return &ethpb.LightClientFinalityUpdateCapella{}
        }
        return gossipMessage(topic)
    default:
        return gossipMessage(topic)
    }
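GossipTopicMappings resolves a topic plus an epoch to the newest fork-specific proto type, falling back to the Altair base types registered in gossipTopicMappings. A hedged sketch of what a caller observes at different epochs (the wrapper function is illustrative only):

```go
package example

import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)

// describeOptimisticType shows the epoch-dependent resolution performed by
// GossipTopicMappings above; the fork boundaries come from the switch in the diff.
func describeOptimisticType(epoch primitives.Epoch) string {
    switch p2p.GossipTopicMappings(p2p.LightClientOptimisticUpdateTopicFormat, epoch).(type) {
    case *ethpb.LightClientOptimisticUpdateDeneb:
        return "deneb" // Deneb fork epoch and later
    case *ethpb.LightClientOptimisticUpdateCapella:
        return "capella" // Capella up to (but excluding) Deneb
    default:
        return "altair" // base type registered in gossipTopicMappings
    }
}
```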
@@ -97,21 +118,28 @@ func init() {

    // Specially handle Altair objects.
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockAltair{})] = BlockSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateAltair{})] = LightClientFinalityUpdateTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientOptimisticUpdateAltair{})] = LightClientOptimisticUpdateTopicFormat

    // Specially handle Bellatrix objects.
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockBellatrix{})] = BlockSubnetTopicFormat

    // Specially handle Capella objects.
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockCapella{})] = BlockSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientOptimisticUpdateCapella{})] = LightClientOptimisticUpdateTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateCapella{})] = LightClientFinalityUpdateTopicFormat

    // Specially handle Deneb objects.
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockDeneb{})] = BlockSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientOptimisticUpdateDeneb{})] = LightClientOptimisticUpdateTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateDeneb{})] = LightClientFinalityUpdateTopicFormat

    // Specially handle Electra objects.
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.SingleAttestation{})] = AttestationSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat
    GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateElectra{})] = LightClientFinalityUpdateTopicFormat

    // Specially handle Fulu objects.
    GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockFulu{})] = BlockSubnetTopicFormat
@@ -102,6 +102,9 @@ func (s *BadResponsesScorer) countNoLock(pid peer.ID) (int, error) {
// Increment increments the number of bad responses we have received from the given remote peer.
// If the peer doesn't exist, this method is a no-op.
func (s *BadResponsesScorer) Increment(pid peer.ID) {
    if pid == "" {
        return
    }
    s.store.Lock()
    defer s.store.Unlock()

@@ -124,6 +124,9 @@ func (s *BlockProviderScorer) Params() *BlockProviderScorerConfig {

// IncrementProcessedBlocks increments the number of blocks that have been successfully processed.
func (s *BlockProviderScorer) IncrementProcessedBlocks(pid peer.ID, cnt uint64) {
    if pid == "" {
        return
    }
    s.store.Lock()
    defer s.store.Unlock()
    defer s.touchNoLock(pid)
@@ -30,6 +30,10 @@ const (
    GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
    // GossipBlobSidecarMessage is the name for the blob sidecar message type.
    GossipBlobSidecarMessage = "blob_sidecar"
    // GossipLightClientFinalityUpdateMessage is the name for the light client finality update message type.
    GossipLightClientFinalityUpdateMessage = "light_client_finality_update"
    // GossipLightClientOptimisticUpdateMessage is the name for the light client optimistic update message type.
    GossipLightClientOptimisticUpdateMessage = "light_client_optimistic_update"
    // Topic Formats
    //
    // AttestationSubnetTopicFormat is the topic format for the attestation subnet.
@@ -52,4 +56,8 @@ const (
    BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
    // BlobSubnetTopicFormat is the topic format for the blob subnet.
    BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
    // LightClientFinalityUpdateTopicFormat is the topic format for the light client finality update subnet.
    LightClientFinalityUpdateTopicFormat = GossipProtocolAndDigest + GossipLightClientFinalityUpdateMessage
    // LightClientOptimisticUpdateTopicFormat is the topic format for the light client optimistic update subnet.
    LightClientOptimisticUpdateTopicFormat = GossipProtocolAndDigest + GossipLightClientOptimisticUpdateMessage
)
@@ -162,9 +162,9 @@ func (b *BlobSidecarsByRootReq) MarshalSSZ() ([]byte, error) {
// BlobSidecarsByRootReq value.
func (b *BlobSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
    bufLen := len(buf)
    maxLength := int(params.BeaconConfig().MaxRequestBlobSidecars) * blobIdSize
    maxLength := int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) * blobIdSize
    if bufLen > maxLength {
        return errors.Errorf("expected buffer with length of up to %d but received length %d", maxLength, bufLen)
        return errors.Wrapf(ssz.ErrIncorrectListSize, "expected buffer with length of up to %d but received length %d", maxLength, bufLen)
    }
    if bufLen%blobIdSize != 0 {
        return errors.Wrapf(ssz.ErrIncorrectByteSize, "size=%d", bufLen)

@@ -43,6 +43,15 @@ func TestBlobSidecarsByRootReq_MarshalSSZ(t *testing.T) {
            name: "10 item list",
            ids:  generateBlobIdentifiers(10),
        },
        {
            name: "max list",
            ids:  generateBlobIdentifiers(int(params.BeaconConfig().MaxRequestBlobSidecarsElectra)),
        },
        {
            name:         "beyond max list",
            ids:          generateBlobIdentifiers(int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) + 1),
            unmarshalErr: ssz.ErrIncorrectListSize,
        },
        {
            name: "wonky unmarshal size",
            ids:  generateBlobIdentifiers(10),
@@ -20,6 +20,7 @@ go_library(
        "//beacon-chain/core/feed/block:go_default_library",
        "//beacon-chain/core/feed/operation:go_default_library",
        "//beacon-chain/core/feed/state:go_default_library",
        "//beacon-chain/core/light-client:go_default_library",
        "//beacon-chain/db:go_default_library",
        "//beacon-chain/db/filesystem:go_default_library",
        "//beacon-chain/execution:go_default_library",
@@ -894,6 +894,15 @@ func (s *Service) beaconEndpoints(
            handler: server.GetPendingDeposits,
            methods: []string{http.MethodGet},
        },
        {
            template: "/eth/v1/beacon/states/{state_id}/pending_consolidations",
            name:     namespace + ".GetPendingConsolidations",
            middleware: []middleware.Middleware{
                middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
            },
            handler: server.GetPendingConsolidations,
            methods: []string{http.MethodGet},
        },
        {
            template: "/eth/v1/beacon/states/{state_id}/pending_partial_withdrawals",
            name:     namespace + ".GetPendingPartialWithdrawals",
@@ -946,6 +955,7 @@ func (s *Service) lightClientEndpoints(blocker lookup.Blocker, stater lookup.Sta
        HeadFetcher:      s.cfg.HeadFetcher,
        ChainInfoFetcher: s.cfg.ChainInfoFetcher,
        BeaconDB:         s.cfg.BeaconDB,
        LCStore:          s.cfg.LCStore,
    }

    const namespace = "lightclient"
@@ -1040,6 +1050,7 @@ func (s *Service) eventsEndpoints() []endpoint {
        HeadFetcher:            s.cfg.HeadFetcher,
        ChainInfoFetcher:       s.cfg.ChainInfoFetcher,
        TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
        StateGen:               s.cfg.StateGen,
    }

    const namespace = "events"

@@ -30,6 +30,7 @@ func Test_endpoints(t *testing.T) {
    "/eth/v1/beacon/states/{state_id}/randao":                      {http.MethodGet},
    "/eth/v1/beacon/states/{state_id}/pending_deposits":            {http.MethodGet},
    "/eth/v1/beacon/states/{state_id}/pending_partial_withdrawals": {http.MethodGet},
    "/eth/v1/beacon/states/{state_id}/pending_consolidations":      {http.MethodGet},
    "/eth/v1/beacon/headers":                                       {http.MethodGet},
    "/eth/v1/beacon/headers/{block_id}":                            {http.MethodGet},
    "/eth/v1/beacon/blinded_blocks":                                {http.MethodPost},
@@ -1613,6 +1613,62 @@ func (s *Server) broadcastSeenBlockSidecars(
    return nil
}

// GetPendingConsolidations returns pending consolidations for state with given 'stateId'.
// Should return 400 if the state retrieved is prior to Electra.
// Supports both JSON and SSZ responses based on Accept header.
func (s *Server) GetPendingConsolidations(w http.ResponseWriter, r *http.Request) {
    ctx, span := trace.StartSpan(r.Context(), "beacon.GetPendingConsolidations")
    defer span.End()

    stateId := r.PathValue("state_id")
    if stateId == "" {
        httputil.HandleError(w, "state_id is required in URL params", http.StatusBadRequest)
        return
    }
    st, err := s.Stater.State(ctx, []byte(stateId))
    if err != nil {
        shared.WriteStateFetchError(w, err)
        return
    }
    if st.Version() < version.Electra {
        httputil.HandleError(w, "state_id is prior to electra", http.StatusBadRequest)
        return
    }
    pd, err := st.PendingConsolidations()
    if err != nil {
        httputil.HandleError(w, "Could not get pending consolidations: "+err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set(api.VersionHeader, version.String(st.Version()))
    if httputil.RespondWithSsz(r) {
        sszData, err := serializeItems(pd)
        if err != nil {
            httputil.HandleError(w, "Failed to serialize pending consolidations: "+err.Error(), http.StatusInternalServerError)
            return
        }
        httputil.WriteSsz(w, sszData)
    } else {
        isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
        if err != nil {
            httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
            return
        }
        blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
        if err != nil {
            httputil.HandleError(w, "Could not calculate root of latest block header: "+err.Error(), http.StatusInternalServerError)
            return
        }
        isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
        resp := structs.GetPendingConsolidationsResponse{
            Version:             version.String(st.Version()),
            ExecutionOptimistic: isOptimistic,
            Finalized:           isFinalized,
            Data:                structs.PendingConsolidationsFromConsensus(pd),
        }
        httputil.WriteJson(w, resp)
    }
}

// GetPendingDeposits returns pending deposits for state with given 'stateId'.
// Should return 400 if the state retrieved is prior to Electra.
// Supports both JSON and SSZ responses based on Accept header.
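The handler negotiates its response encoding from the Accept header: application/octet-stream yields the consolidations as concatenated SSZ, anything else the JSON envelope with version, optimistic, and finality metadata. A minimal client sketch against a local beacon node (host and port are placeholders; "Eth-Consensus-Version" is the standard version header the handler sets via api.VersionHeader):

```go
package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Hypothetical local beacon node; adjust host/port for your setup.
    url := "http://localhost:3500/eth/v1/beacon/states/head/pending_consolidations"
    req, err := http.NewRequest(http.MethodGet, url, nil)
    if err != nil {
        panic(err)
    }
    // Ask for JSON; use "application/octet-stream" to receive SSZ bytes instead.
    req.Header.Set("Accept", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Header.Get("Eth-Consensus-Version")) // e.g. "electra"
    fmt.Println(string(body))
}
```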
@@ -4755,6 +4755,191 @@ func Test_validateBlobSidecars(t *testing.T) {
    require.ErrorContains(t, "could not verify blob proof: can't verify opening proof", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
}

func TestGetPendingConsolidations(t *testing.T) {
    st, _ := util.DeterministicGenesisStateElectra(t, 10)

    cs := make([]*eth.PendingConsolidation, 10)
    for i := 0; i < len(cs); i += 1 {
        cs[i] = &eth.PendingConsolidation{
            SourceIndex: primitives.ValidatorIndex(i),
            TargetIndex: primitives.ValidatorIndex(i + 1),
        }
    }
    require.NoError(t, st.SetPendingConsolidations(cs))

    chainService := &chainMock.ChainService{
        Optimistic:     false,
        FinalizedRoots: map[[32]byte]bool{},
    }
    server := &Server{
        Stater: &testutil.MockStater{
            BeaconState: st,
        },
        OptimisticModeFetcher: chainService,
        FinalizationFetcher:   chainService,
    }

    t.Run("json response", func(t *testing.T) {
        req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        req.SetPathValue("state_id", "head")
        rec := httptest.NewRecorder()
        rec.Body = new(bytes.Buffer)

        server.GetPendingConsolidations(rec, req)
        require.Equal(t, http.StatusOK, rec.Code)
        require.Equal(t, "electra", rec.Header().Get(api.VersionHeader))

        var resp structs.GetPendingConsolidationsResponse
        require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))

        expectedVersion := version.String(st.Version())
        require.Equal(t, expectedVersion, resp.Version)

        require.Equal(t, false, resp.ExecutionOptimistic)
        require.Equal(t, false, resp.Finalized)

        expectedConsolidations := structs.PendingConsolidationsFromConsensus(cs)
        require.DeepEqual(t, expectedConsolidations, resp.Data)
    })
    t.Run("ssz response", func(t *testing.T) {
        req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        req.Header.Set("Accept", "application/octet-stream")
        req.SetPathValue("state_id", "head")
        rec := httptest.NewRecorder()
        rec.Body = new(bytes.Buffer)

        server.GetPendingConsolidations(rec, req)
        require.Equal(t, http.StatusOK, rec.Code)
        require.Equal(t, "electra", rec.Header().Get(api.VersionHeader))

        responseBytes := rec.Body.Bytes()
        var recoveredConsolidations []*eth.PendingConsolidation

        // Verify total size matches expected number of consolidations
        consolidationSize := (&eth.PendingConsolidation{}).SizeSSZ()
        require.Equal(t, len(responseBytes), consolidationSize*len(cs))

        for i := 0; i < len(cs); i++ {
            start := i * consolidationSize
            end := start + consolidationSize

            var c eth.PendingConsolidation
            require.NoError(t, c.UnmarshalSSZ(responseBytes[start:end]))
            recoveredConsolidations = append(recoveredConsolidations, &c)
        }
        require.DeepEqual(t, cs, recoveredConsolidations)
    })
    t.Run("pre electra state", func(t *testing.T) {
        preElectraSt, _ := util.DeterministicGenesisStateDeneb(t, 1)
        preElectraServer := &Server{
            Stater: &testutil.MockStater{
                BeaconState: preElectraSt,
            },
            OptimisticModeFetcher: chainService,
            FinalizationFetcher:   chainService,
        }

        // Test JSON request
        req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        req.SetPathValue("state_id", "head")
        rec := httptest.NewRecorder()
        rec.Body = new(bytes.Buffer)

        preElectraServer.GetPendingConsolidations(rec, req)
        require.Equal(t, http.StatusBadRequest, rec.Code)

        var errResp struct {
            Code    int    `json:"code"`
            Message string `json:"message"`
        }
        require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &errResp))
        require.Equal(t, "state_id is prior to electra", errResp.Message)

        // Test SSZ request
        sszReq := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        sszReq.Header.Set("Accept", "application/octet-stream")
        sszReq.SetPathValue("state_id", "head")
        sszRec := httptest.NewRecorder()
        sszRec.Body = new(bytes.Buffer)

        preElectraServer.GetPendingConsolidations(sszRec, sszReq)
        require.Equal(t, http.StatusBadRequest, sszRec.Code)

        var sszErrResp struct {
            Code    int    `json:"code"`
            Message string `json:"message"`
        }
        require.NoError(t, json.Unmarshal(sszRec.Body.Bytes(), &sszErrResp))
        require.Equal(t, "state_id is prior to electra", sszErrResp.Message)
    })
    t.Run("missing state_id parameter", func(t *testing.T) {
        req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        // Intentionally not setting state_id
        rec := httptest.NewRecorder()
        rec.Body = new(bytes.Buffer)

        server.GetPendingConsolidations(rec, req)
        require.Equal(t, http.StatusBadRequest, rec.Code)

        var errResp struct {
            Code    int    `json:"code"`
            Message string `json:"message"`
        }
        require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &errResp))
        require.Equal(t, "state_id is required in URL params", errResp.Message)
    })
    t.Run("optimistic node", func(t *testing.T) {
        optimisticChainService := &chainMock.ChainService{
            Optimistic:     true,
            FinalizedRoots: map[[32]byte]bool{},
        }
        optimisticServer := &Server{
            Stater:                server.Stater,
            OptimisticModeFetcher: optimisticChainService,
            FinalizationFetcher:   optimisticChainService,
        }

        req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        req.SetPathValue("state_id", "head")
        rec := httptest.NewRecorder()
        rec.Body = new(bytes.Buffer)

        optimisticServer.GetPendingConsolidations(rec, req)
        require.Equal(t, http.StatusOK, rec.Code)

        var resp structs.GetPendingConsolidationsResponse
        require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
        require.Equal(t, true, resp.ExecutionOptimistic)
    })

    t.Run("finalized node", func(t *testing.T) {
        blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
        require.NoError(t, err)

        finalizedChainService := &chainMock.ChainService{
            Optimistic:     false,
            FinalizedRoots: map[[32]byte]bool{blockRoot: true},
        }
        finalizedServer := &Server{
            Stater:                server.Stater,
            OptimisticModeFetcher: finalizedChainService,
            FinalizationFetcher:   finalizedChainService,
        }

        req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
        req.SetPathValue("state_id", "head")
        rec := httptest.NewRecorder()
        rec.Body = new(bytes.Buffer)

        finalizedServer.GetPendingConsolidations(rec, req)
        require.Equal(t, http.StatusOK, rec.Code)

        var resp structs.GetPendingConsolidationsResponse
        require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
        require.Equal(t, true, resp.Finalized)
    })
}

func TestGetPendingDeposits(t *testing.T) {
    st, _ := util.DeterministicGenesisStateElectra(t, 10)
@@ -20,6 +20,7 @@ go_library(
        "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/core/transition:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//beacon-chain/state/stategen:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/interfaces:go_default_library",
        "//consensus-types/payload-attribute:go_default_library",
@@ -53,6 +54,7 @@ go_test(
        "//beacon-chain/core/feed/operation:go_default_library",
        "//beacon-chain/core/feed/state:go_default_library",
        "//beacon-chain/state:go_default_library",
        "//beacon-chain/state/stategen/mock:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",
@@ -681,53 +681,45 @@ var zeroRoot [32]byte
// needsFill allows tests to provide filled EventData values. An ordinary event data value fired by the blockchain package will have
// all of the checked fields empty, so the logical short circuit should hit immediately.
func needsFill(ev payloadattribute.EventData) bool {
    return ev.HeadState == nil || ev.HeadState.IsNil() || ev.HeadState.LatestBlockHeader() == nil ||
        ev.HeadBlock == nil || ev.HeadBlock.IsNil() ||
        ev.HeadRoot == zeroRoot || len(ev.ParentBlockRoot) == 0 || len(ev.ParentBlockHash) == 0 ||
    return len(ev.ParentBlockHash) == 0 ||
        ev.Attributer == nil || ev.Attributer.IsEmpty()
}

func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.EventData, error) {
    var err error

    if !needsFill(ev) {
        return ev, nil
    }

    ev.HeadState, err = s.HeadFetcher.HeadState(ctx)
    if err != nil {
        return ev, errors.Wrap(err, "could not get head state")
    if ev.HeadBlock == nil || ev.HeadBlock.IsNil() {
        return ev, errors.New("head block is nil")
    }
    if ev.HeadRoot == zeroRoot {
        return ev, errors.New("head root is empty")
    }

    ev.HeadBlock, err = s.HeadFetcher.HeadBlock(ctx)
    if err != nil {
        return ev, errors.Wrap(err, "could not look up head block")
    }
    ev.HeadRoot, err = ev.HeadBlock.Block().HashTreeRoot()
    if err != nil {
        return ev, errors.Wrap(err, "could not compute head block root")
    }
    pr := ev.HeadBlock.Block().ParentRoot()
    ev.ParentBlockRoot = pr[:]
    var err error
    var st state.BeaconState

    hsr, err := ev.HeadState.LatestBlockHeader().HashTreeRoot()
    if err != nil {
        return ev, errors.Wrap(err, "could not compute latest block header root")
    }
    proposalSlot := ev.ProposalSlot

    pse := slots.ToEpoch(ev.ProposalSlot)
    st := ev.HeadState
    if slots.ToEpoch(st.Slot()) != pse {
        st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, hsr[:], ev.ProposalSlot)
    index, err := helpers.GetCachedProposerIndex(st, proposalSlot)
    if err == nil {
        ev.ProposerIndex = index
    } else {
        st, err = s.StateGen.StateByRoot(ctx, ev.HeadRoot)
        if err != nil {
            return ev, errors.Wrap(err, "could not get head state")
        }
        st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, ev.HeadRoot[:], proposalSlot)
        if err != nil {
            return ev, errors.Wrap(err, "could not run process blocks on head state into the proposal slot epoch")
        }
        ev.ProposerIndex, err = helpers.BeaconProposerIndexAtSlot(ctx, st, ev.ProposalSlot)
        if err != nil {
            return ev, errors.Wrap(err, "failed to compute proposer index")
        }
    }
    ev.ProposerIndex, err = helpers.BeaconProposerIndexAtSlot(ctx, st, ev.ProposalSlot)
    if err != nil {
        return ev, errors.Wrap(err, "failed to compute proposer index")
    }
    randao, err := helpers.RandaoMix(st, pse)

    randao, err := helpers.RandaoMix(st, slots.ToEpoch(proposalSlot))
    if err != nil {
        return ev, errors.Wrap(err, "could not get head state randao")
    }
@@ -743,7 +735,7 @@ func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventDat
    if err != nil {
        return ev, errors.Wrap(err, "could not get head state slot time")
    }
    ev.Attributer, err = s.computePayloadAttributes(ctx, st, hsr, ev.ProposerIndex, uint64(t.Unix()), randao)
    ev.Attributer, err = s.computePayloadAttributes(ctx, st, ev.HeadRoot, ev.ProposerIndex, uint64(t.Unix()), randao)
    return ev, err
}
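The reworked fillEventData no longer pulls head data from the HeadFetcher: the event must already carry HeadBlock and HeadRoot, and the proposer index is served from the epoch cache whenever possible, with state regeneration and slot replay only on a miss. The sketch below condenses that fallback into one function under the same assumptions; note it always loads the state first, unlike the diff, which skips regeneration entirely on a cache hit:

```go
package example

import (
    "context"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
)

// proposerForSlot is an illustrative condensation of the fallback above:
// consult the epoch cache first, and only replay slots into the proposal
// epoch when the cache has no entry (sg and headRoot are assumed inputs).
func proposerForSlot(ctx context.Context, sg stategen.StateManager, headRoot [32]byte, slot primitives.Slot) (primitives.ValidatorIndex, error) {
    st, err := sg.StateByRoot(ctx, headRoot)
    if err != nil {
        return 0, err
    }
    if idx, err := helpers.GetCachedProposerIndex(st, slot); err == nil {
        return idx, nil
    }
    st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot[:], slot)
    if err != nil {
        return 0, err
    }
    return helpers.BeaconProposerIndexAtSlot(ctx, st, slot)
}
```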
@@ -772,7 +764,7 @@ func (s *Server) payloadAttributesReader(ctx context.Context, ev payloadattribut
        ProposerIndex:     strconv.FormatUint(uint64(ev.ProposerIndex), 10),
        ProposalSlot:      strconv.FormatUint(uint64(ev.ProposalSlot), 10),
        ParentBlockNumber: strconv.FormatUint(ev.ParentBlockNumber, 10),
        ParentBlockRoot:   hexutil.Encode(ev.ParentBlockRoot),
        ParentBlockRoot:   hexutil.Encode(ev.HeadRoot[:]),
        ParentBlockHash:   hexutil.Encode(ev.ParentBlockHash),
        PayloadAttributes: attributesBytes,
    })
@@ -18,6 +18,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
    statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen/mock"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -528,9 +529,13 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
    Block: b,
    Slot:  &currentSlot,
}
headRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)

stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
stategen := mock.NewService()
stategen.AddStateForRoot(st, headRoot)
s := &Server{
    StateNotifier:     &mockChain.SimpleNotifier{Feed: stn},
    OperationNotifier: &mockChain.SimpleNotifier{Feed: opn},
@@ -538,6 +543,7 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
    ChainInfoFetcher:       mockChainService,
    TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
    EventWriteTimeout:      testEventWriteTimeout,
    StateGen:               stategen,
}
if tc.SetTrackedValidatorsCache != nil {
    tc.SetTrackedValidatorsCache(s.TrackedValidatorsCache)
@@ -553,11 +559,9 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
    ProposerIndex:     0,
    ProposalSlot:      0,
    ParentBlockNumber: 0,
    ParentBlockRoot:   make([]byte, 32),
    ParentBlockHash:   make([]byte, 32),
    HeadState:         st,
    HeadBlock:         b,
    HeadRoot:          [fieldparams.RootLength]byte{},
    HeadRoot:          headRoot,
    },
},
}
@@ -575,8 +579,6 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
func TestFillEventData(t *testing.T) {
    ctx := context.Background()
    t.Run("AlreadyFilledData_ShouldShortCircuitWithoutError", func(t *testing.T) {
        st, err := util.NewBeaconStateBellatrix()
        require.NoError(t, err)
        b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockBellatrix(&eth.SignedBeaconBlockBellatrix{}))
        require.NoError(t, err)
        attributor, err := payloadattribute.New(&enginev1.PayloadAttributes{
@@ -584,11 +586,9 @@ func TestFillEventData(t *testing.T) {
        })
        require.NoError(t, err)
        alreadyFilled := payloadattribute.EventData{
            HeadState:       st,
            HeadBlock:       b,
            HeadRoot:        [32]byte{1, 2, 3},
            Attributer:      attributor,
            ParentBlockRoot: []byte{1, 2, 3},
            ParentBlockHash: []byte{4, 5, 6},
        }
        srv := &Server{} // No real HeadFetcher needed here since it won't be called.
@@ -612,12 +612,14 @@ func TestFillEventData(t *testing.T) {
            Timestamp: uint64(time.Now().Unix()),
        })
        require.NoError(t, err)
        headRoot, err := b.Block().HashTreeRoot()
        require.NoError(t, err)
        // Create an event data object missing certain fields:
        partial := payloadattribute.EventData{
            // The presence of a nil HeadState, nil HeadBlock, zeroed HeadRoot, etc.
            // will cause fillEventData to try to fill the values.
            ProposalSlot: 42,         // different epoch from current slot
            Attributer:   attributor, // Must be Bellatrix or later
            HeadBlock:    b,
            HeadRoot:     headRoot,
        }
        currentSlot := primitives.Slot(0)
        // to avoid slot processing
@@ -629,6 +631,8 @@ func TestFillEventData(t *testing.T) {
            Slot: &currentSlot,
        }

        stategen := mock.NewService()
        stategen.AddStateForRoot(st, headRoot)
        stn := mockChain.NewEventFeedWrapper()
        opn := mockChain.NewEventFeedWrapper()
        srv := &Server{
@@ -638,16 +642,15 @@ func TestFillEventData(t *testing.T) {
            ChainInfoFetcher:       mockChainService,
            TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
            EventWriteTimeout:      testEventWriteTimeout,
            StateGen:               stategen,
        }

        filled, err := srv.fillEventData(ctx, partial)
        require.NoError(t, err, "expected successful fill of partial event data")

        // Verify that fields have been updated from the mock data:
        require.NotNil(t, filled.HeadState, "HeadState should be assigned")
        require.NotNil(t, filled.HeadBlock, "HeadBlock should be assigned")
        require.NotEqual(t, [32]byte{}, filled.HeadRoot, "HeadRoot should no longer be zero")
        require.NotEmpty(t, filled.ParentBlockRoot, "ParentBlockRoot should be filled")
        require.NotEmpty(t, filled.ParentBlockHash, "ParentBlockHash should be filled")
        require.Equal(t, uint64(0), filled.ParentBlockNumber, "ParentBlockNumber must match mock block")
@@ -10,6 +10,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
    opfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
    statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
)

// Server defines a server implementation of the http events service,
@@ -23,4 +24,5 @@ type Server struct {
    KeepAliveInterval time.Duration
    EventFeedDepth    int
    EventWriteTimeout time.Duration
    StateGen          stategen.StateManager
}
@@ -19,7 +19,6 @@ go_library(
"//beacon-chain/rpc/lookup:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
@@ -27,7 +26,6 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
],
)
@@ -45,12 +43,10 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -61,5 +57,6 @@ go_test(
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -1,19 +1,15 @@
package lightclient

import (
"context"
"fmt"
"math"
"net/http"

"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
@@ -21,7 +17,6 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
)
@@ -197,70 +192,32 @@ func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.R
return
}

ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientFinalityUpdate")
_, span := trace.StartSpan(req.Context(), "beacon.GetLightClientFinalityUpdate")
defer span.End()

// Finality update needs super majority of sync committee signatures
minSyncCommitteeParticipants := float64(params.BeaconConfig().MinSyncCommitteeParticipants)
minSignatures := uint64(math.Ceil(minSyncCommitteeParticipants * 2 / 3))

block, err := s.suitableBlock(ctx, minSignatures)
if !shared.WriteBlockFetchError(w, block, err) {
return
}

st, err := s.Stater.StateBySlot(ctx, block.Block().Slot())
if err != nil {
httputil.HandleError(w, "Could not get state: "+err.Error(), http.StatusInternalServerError)
return
}

attestedRoot := block.Block().ParentRoot()
attestedBlock, err := s.Blocker.Block(ctx, attestedRoot[:])
if !shared.WriteBlockFetchError(w, block, errors.Wrap(err, "could not get attested block")) {
return
}
attestedSlot := attestedBlock.Block().Slot()
attestedState, err := s.Stater.StateBySlot(ctx, attestedSlot)
if err != nil {
httputil.HandleError(w, "Could not get attested state: "+err.Error(), http.StatusInternalServerError)
return
}

var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
if finalizedCheckpoint == nil {
httputil.HandleError(w, "Attested state does not have a finalized checkpoint", http.StatusInternalServerError)
return
}
finalizedRoot := bytesutil.ToBytes32(finalizedCheckpoint.Root)
finalizedBlock, err = s.Blocker.Block(ctx, finalizedRoot[:])
if !shared.WriteBlockFetchError(w, block, errors.Wrap(err, "could not get finalized block")) {
return
}

update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, s.ChainInfoFetcher.CurrentSlot(), st, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
httputil.HandleError(w, "Could not get light client finality update: "+err.Error(), http.StatusInternalServerError)
update := s.LCStore.LastFinalityUpdate()
if update == nil {
httputil.HandleError(w, "No light client finality update available", http.StatusNotFound)
return
}

w.Header().Set(api.VersionHeader, version.String(update.Version()))
if httputil.RespondWithSsz(req) {
ssz, err := update.MarshalSSZ()
data, err := update.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal finality update to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, ssz)
httputil.WriteSsz(w, data)
} else {
updateStruct, err := structs.LightClientFinalityUpdateFromConsensus(update)
data, err := structs.LightClientFinalityUpdateFromConsensus(update)
if err != nil {
httputil.HandleError(w, "Could not convert light client finality update to API struct: "+err.Error(), http.StatusInternalServerError)
return
}
response := &structs.LightClientFinalityUpdateResponse{
Version: version.String(attestedState.Version()),
Data: updateStruct,
Version: version.String(update.Version()),
Data: data,
}
httputil.WriteJson(w, response)
}
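The net effect of this hunk is that the handler no longer recomputes the finality update from state on every request; it serves the last update cached in LCStore and returns 404 when none has been produced yet. A minimal sketch of that serve-from-cache shape, with hypothetical store and update types rather than Prysm's:

```go
package main

import "net/http"

type update struct{ ssz []byte }

// store caches the most recent finality update; nil means none produced yet.
type store struct{ lastFinality *update }

func (s *store) LastFinalityUpdate() *update { return s.lastFinality }

func handler(s *store) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		u := s.LastFinalityUpdate()
		if u == nil {
			// Mirrors the new 404 path in the hunk above.
			http.Error(w, "No light client finality update available", http.StatusNotFound)
			return
		}
		w.Write(u.ssz) // write error ignored for brevity in this sketch
	}
}

func main() {
	http.Handle("/finality", handler(&store{}))
}
```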
@@ -273,111 +230,33 @@ func (s *Server) GetLightClientOptimisticUpdate(w http.ResponseWriter, req *http
return
}

ctx, span := trace.StartSpan(req.Context(), "beacon.GetLightClientOptimisticUpdate")
_, span := trace.StartSpan(req.Context(), "beacon.GetLightClientOptimisticUpdate")
defer span.End()

block, err := s.suitableBlock(ctx, params.BeaconConfig().MinSyncCommitteeParticipants)
if !shared.WriteBlockFetchError(w, block, err) {
return
}
st, err := s.Stater.StateBySlot(ctx, block.Block().Slot())
if err != nil {
httputil.HandleError(w, "could not get state: "+err.Error(), http.StatusInternalServerError)
return
}
attestedRoot := block.Block().ParentRoot()
attestedBlock, err := s.Blocker.Block(ctx, attestedRoot[:])
if err != nil {
httputil.HandleError(w, "Could not get attested block: "+err.Error(), http.StatusInternalServerError)
return
}
if attestedBlock == nil {
httputil.HandleError(w, "Attested block is nil", http.StatusInternalServerError)
return
}
attestedSlot := attestedBlock.Block().Slot()
attestedState, err := s.Stater.StateBySlot(ctx, attestedSlot)
if err != nil {
httputil.HandleError(w, "Could not get attested state: "+err.Error(), http.StatusInternalServerError)
return
}

update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, s.ChainInfoFetcher.CurrentSlot(), st, block, attestedState, attestedBlock)
if err != nil {
httputil.HandleError(w, "Could not get light client optimistic update: "+err.Error(), http.StatusInternalServerError)
update := s.LCStore.LastOptimisticUpdate()
if update == nil {
httputil.HandleError(w, "No light client optimistic update available", http.StatusNotFound)
return
}

w.Header().Set(api.VersionHeader, version.String(update.Version()))
if httputil.RespondWithSsz(req) {
ssz, err := update.MarshalSSZ()
data, err := update.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "Could not marshal optimistic update to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, ssz)
httputil.WriteSsz(w, data)
} else {
updateStruct, err := structs.LightClientOptimisticUpdateFromConsensus(update)
data, err := structs.LightClientOptimisticUpdateFromConsensus(update)
if err != nil {
httputil.HandleError(w, "Could not convert light client optimistic update to API struct: "+err.Error(), http.StatusInternalServerError)
return
}
response := &structs.LightClientOptimisticUpdateResponse{
Version: version.String(attestedState.Version()),
Data: updateStruct,
Version: version.String(update.Version()),
Data: data,
}
httputil.WriteJson(w, response)
}
}

// suitableBlock returns the latest block that satisfies all criteria required for creating a new update
func (s *Server) suitableBlock(ctx context.Context, minSignaturesRequired uint64) (interfaces.ReadOnlySignedBeaconBlock, error) {
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}

latestBlockHeader := st.LatestBlockHeader()
stateRoot, err := st.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
latestBlockHeader.StateRoot = stateRoot[:]
latestBlockHeaderRoot, err := latestBlockHeader.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block header root")
}

block, err := s.Blocker.Block(ctx, latestBlockHeaderRoot[:])
if err != nil {
return nil, errors.Wrap(err, "could not get latest block")
}
if block == nil {
return nil, errors.New("latest block is nil")
}

// Loop through the blocks until we find a block that satisfies minSignaturesRequired requirement
var numOfSyncCommitteeSignatures uint64
if syncAggregate, err := block.Block().Body().SyncAggregate(); err == nil {
numOfSyncCommitteeSignatures = syncAggregate.SyncCommitteeBits.Count()
}

for numOfSyncCommitteeSignatures < minSignaturesRequired {
// Get the parent block
parentRoot := block.Block().ParentRoot()
block, err = s.Blocker.Block(ctx, parentRoot[:])
if err != nil {
return nil, errors.Wrap(err, "could not get parent block")
}
if block == nil {
return nil, errors.New("parent block is nil")
}

// Get the number of sync committee signatures
numOfSyncCommitteeSignatures = 0
if syncAggregate, err := block.Block().Body().SyncAggregate(); err == nil {
numOfSyncCommitteeSignatures = syncAggregate.SyncCommitteeBits.Count()
}
}

return block, nil
}
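On the threshold math removed in the finality-update hunk above: the old handler required a two-thirds supermajority of sync committee signatures, computed as ceil(n * 2 / 3) over float64. A standalone worked example (512 below is only an illustrative committee size, not a claim about the configured MinSyncCommitteeParticipants value):

```go
package main

import (
	"fmt"
	"math"
)

// minSignatures reproduces the removed ceil(n * 2 / 3) computation.
func minSignatures(n uint64) uint64 {
	return uint64(math.Ceil(float64(n) * 2 / 3))
}

func main() {
	fmt.Println(minSignatures(512)) // 342
	fmt.Println(minSignatures(1))   // 1
}
```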
File diff suppressed because it is too large
@@ -2,6 +2,7 @@ package lightclient

import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/lookup"
)
@@ -12,4 +13,5 @@ type Server struct {
HeadFetcher blockchain.HeadFetcher
ChainInfoFetcher blockchain.ChainInfoFetcher
BeaconDB db.HeadAccessDatabase
LCStore *lightClient.Store
}
@@ -18,7 +18,7 @@ import (

const errEpoch = "cannot retrieve information about an epoch in the future, current epoch %d, requesting %d"

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListValidatorAssignments retrieves the validator assignments for a given epoch,
// optional validator indices or public keys may be included to filter validator assignments.
@@ -49,7 +49,7 @@ func mapAttestationsByTargetRoot(atts []ethpb.Att) map[[32]byte][]ethpb.Att {
return attsMap
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListAttestations retrieves attestations by block root, slot, or epoch.
// Attestations are sorted by data slot by default.
@@ -115,7 +115,7 @@ func (bs *Server) ListAttestations(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListAttestationsElectra retrieves attestations by block root, slot, or epoch.
// Attestations are sorted by data slot by default.
@@ -180,7 +180,7 @@ func (bs *Server) ListAttestationsElectra(ctx context.Context, req *ethpb.ListAt
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListIndexedAttestations retrieves indexed attestations by block root.
// IndexedAttestationsForEpoch are sorted by data slot by default. Start-end epoch
@@ -242,7 +242,7 @@ func (bs *Server) ListIndexedAttestations(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListIndexedAttestationsElectra retrieves indexed attestations by block root.
// IndexedAttestationsForEpoch are sorted by data slot by default. Start-end epoch
@@ -305,7 +305,7 @@ func (bs *Server) ListIndexedAttestationsElectra(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// AttestationPool retrieves pending attestations.
//
@@ -350,7 +350,7 @@ func (bs *Server) AttestationPool(_ context.Context, req *ethpb.AttestationPoolR
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
func (bs *Server) AttestationPoolElectra(_ context.Context, req *ethpb.AttestationPoolRequest) (*ethpb.AttestationPoolElectraResponse, error) {
var atts []*ethpb.AttestationElectra
var err error
@@ -26,7 +26,7 @@ type blockContainer struct {
isCanonical bool
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListBeaconBlocks retrieves blocks by root, slot, or epoch.
//
@@ -246,7 +246,7 @@ func (bs *Server) listBlocksForGenesis(ctx context.Context, _ *ethpb.ListBlocksR
}}, 1, strconv.Itoa(0), nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetChainHead retrieves information about the head of the beacon chain from
// the view of the beacon chain node.
@@ -15,7 +15,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListBeaconCommittees for a given epoch.
//
@@ -10,7 +10,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBeaconConfig retrieves the current configuration parameters of the beacon chain.
func (_ *Server) GetBeaconConfig(_ context.Context, _ *emptypb.Empty) (*ethpb.BeaconConfig, error) {
@@ -11,7 +11,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitProposerSlashing receives a proposer slashing object via
// RPC and injects it into the beacon node's operations pool.
@@ -38,12 +38,12 @@ func (bs *Server) SubmitProposerSlashing(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
func (bs *Server) SubmitAttesterSlashing(ctx context.Context, req *ethpb.AttesterSlashing) (*ethpb.SubmitSlashingResponse, error) {
return bs.submitAttesterSlashing(ctx, req)
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitAttesterSlashingElectra receives an attester slashing object via
// RPC and injects it into the beacon node's operations pool.
@@ -24,7 +24,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListValidatorBalances retrieves the validator balances for a given set of public keys.
// An optional Epoch parameter is provided to request historical validator balances from
@@ -182,7 +182,7 @@ func (bs *Server) ListValidatorBalances(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListValidators retrieves the current list of active validators with an optional historical epoch flag to
// retrieve validator set in time.
@@ -342,7 +342,7 @@ func (bs *Server) ListValidators(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidator information from any validator in the registry by index or public key.
func (bs *Server) GetValidator(
@@ -388,7 +388,7 @@ func (bs *Server) GetValidator(
return nil, status.Error(codes.NotFound, "No validator matched filter criteria")
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorActiveSetChanges retrieves the active set changes for a given epoch.
//
@@ -416,7 +416,7 @@ func (bs *Server) GetValidatorActiveSetChanges(
return as, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorParticipation retrieves the validator participation information for a given epoch,
// it returns the information about validator's participation rate in voting on the proof of stake
@@ -443,7 +443,7 @@ func (bs *Server) GetValidatorParticipation(
return vp, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorQueue retrieves the current validator queue information.
func (bs *Server) GetValidatorQueue(
@@ -536,7 +536,7 @@ func (bs *Server) GetValidatorQueue(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorPerformance reports the validator's latest balance along with other important metrics on
// rewards and penalties throughout its lifecycle in the beacon chain.
@@ -550,7 +550,7 @@ func (bs *Server) GetValidatorPerformance(
return response, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetIndividualVotes retrieves individual voting status of validators.
func (bs *Server) GetIndividualVotes(
@@ -17,7 +17,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBlock in an ssz-encoded format by block root.
func (ds *Server) GetBlock(
@@ -41,7 +41,7 @@ func (ds *Server) GetBlock(
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetInclusionSlot of an attestation in block.
func (ds *Server) GetInclusionSlot(ctx context.Context, req *pbrpc.InclusionSlotRequest) (*pbrpc.InclusionSlotResponse, error) {
@@ -13,7 +13,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetPeer returns the data known about the peer defined by the provided peer id.
func (ds *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb.DebugPeerResponse, error) {
@@ -24,7 +24,7 @@ func (ds *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb
return ds.getPeer(pid)
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListPeers returns all peers known to the host node, regardless of if they are connected/
// disconnected.
@@ -10,7 +10,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBeaconState retrieves an ssz-encoded beacon state
// from the beacon node by either a slot or block root.
@@ -49,7 +49,7 @@ type Server struct {
BeaconMonitoringPort int
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetHealth checks the health of the node
func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (*empty.Empty, error) {
@@ -80,7 +80,7 @@ func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (
return &empty.Empty{}, status.Errorf(codes.Unavailable, "service unavailable")
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetSyncStatus checks the current network sync status of the node.
func (ns *Server) GetSyncStatus(_ context.Context, _ *empty.Empty) (*ethpb.SyncStatus, error) {
@@ -89,7 +89,7 @@ func (ns *Server) GetSyncStatus(_ context.Context, _ *empty.Empty) (*ethpb.SyncS
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetGenesis fetches genesis chain information of Ethereum. Returns unix timestamp 0
// if a genesis time has yet to be determined.
@@ -115,7 +115,7 @@ func (ns *Server) GetGenesis(ctx context.Context, _ *empty.Empty) (*ethpb.Genesi
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetVersion checks the version information of the beacon node.
func (_ *Server) GetVersion(_ context.Context, _ *empty.Empty) (*ethpb.Version, error) {
@@ -124,7 +124,7 @@ func (_ *Server) GetVersion(_ context.Context, _ *empty.Empty) (*ethpb.Version,
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListImplementedServices lists the services implemented and enabled by this node.
//
@@ -143,7 +143,7 @@ func (ns *Server) ListImplementedServices(_ context.Context, _ *empty.Empty) (*e
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetHost returns the p2p data on the current local and host peer.
func (ns *Server) GetHost(_ context.Context, _ *empty.Empty) (*ethpb.HostData, error) {
@@ -168,7 +168,7 @@ func (ns *Server) GetHost(_ context.Context, _ *empty.Empty) (*ethpb.HostData, e
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetPeer returns the data known about the peer defined by the provided peer id.
func (ns *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb.Peer, error) {
@@ -215,7 +215,7 @@ func (ns *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListPeers lists the peers connected to this node.
func (ns *Server) ListPeers(ctx context.Context, _ *empty.Empty) (*ethpb.Peers, error) {
@@ -270,7 +270,7 @@ func (ns *Server) ListPeers(ctx context.Context, _ *empty.Empty) (*ethpb.Peers,
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetETH1ConnectionStatus gets data about the ETH1 endpoints.
func (ns *Server) GetETH1ConnectionStatus(_ context.Context, _ *empty.Empty) (*ethpb.ETH1ConnectionStatus, error) {
@@ -286,7 +286,7 @@ func (ns *Server) GetETH1ConnectionStatus(_ context.Context, _ *empty.Empty) (*e
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// StreamBeaconLogs from the beacon node via a gRPC server-side stream.
// DEPRECATED: This endpoint doesn't appear to be used and have been marked for deprecation.
@@ -17,7 +17,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitAggregateSelectionProof is called by a validator when its assigned to be an aggregator.
// The aggregator submits the selection proof to obtain the aggregated attestation
@@ -55,7 +55,7 @@ func (vs *Server) SubmitAggregateSelectionProof(ctx context.Context, req *ethpb.
return &ethpb.AggregateSelectionResponse{AggregateAndProof: attAndProof}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitAggregateSelectionProofElectra is called by a validator when its assigned to be an aggregator.
// The aggregator submits the selection proof to obtain the aggregated attestation
@@ -149,7 +149,7 @@ func (vs *Server) processAggregateSelection(ctx context.Context, req *ethpb.Aggr
return indexInCommittee, validatorIndex, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitSignedAggregateSelectionProof is called by a validator to broadcast a signed
// aggregated and proof object.
@@ -163,7 +163,7 @@ func (vs *Server) SubmitSignedAggregateSelectionProof(
return &ethpb.SignedAggregateSubmitResponse{}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitSignedAggregateSelectionProofElectra is called by a validator to broadcast a signed
// aggregated and proof object.
@@ -22,7 +22,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetAttestationData requests that the beacon node produce an attestation data object,
// which the validator acting as an attester will then sign.
@@ -44,7 +44,7 @@ func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.Attestation
return res, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeAttestation is a function called by an attester to vote
// on a block via an attestation object as defined in the Ethereum specification.
@@ -74,7 +74,7 @@ func (vs *Server) ProposeAttestation(ctx context.Context, att *ethpb.Attestation
return resp, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeAttestationElectra is a function called by an attester to vote
// on a block via an attestation object as defined in the Ethereum specification.
@@ -114,7 +114,7 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
return resp, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubscribeCommitteeSubnets subscribes to the committee ID subnet given subscribe request.
func (vs *Server) SubscribeCommitteeSubnets(ctx context.Context, req *ethpb.CommitteeSubnetsSubscribeRequest) (*emptypb.Empty, error) {
@@ -82,7 +82,7 @@ func TestProposeAttestation(t *testing.T) {
config := params.BeaconConfig()
config.ElectraForkEpoch = 0
params.OverrideBeaconConfig(config)

state, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
@@ -9,12 +9,13 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// StreamBlocksAltair to clients every single time a block is received by the beacon node.
func (vs *Server) StreamBlocksAltair(req *ethpb.StreamBlocksRequest, stream ethpb.BeaconNodeValidator_StreamBlocksAltairServer) error {
@@ -49,9 +50,9 @@ func (vs *Server) StreamBlocksAltair(req *ethpb.StreamBlocksRequest, stream ethp
}
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// StreamSlots sends a block's slot to clients every single time a block is received by the beacon node.
// StreamSlots sends a the block's slot and dependent roots to clients every single time a block is received by the beacon node.
func (vs *Server) StreamSlots(req *ethpb.StreamSlotsRequest, stream ethpb.BeaconNodeValidator_StreamSlotsServer) error {
ch := make(chan *feed.Event, 1)
var sub event.Subscription
@@ -85,7 +86,24 @@ func (vs *Server) StreamSlots(req *ethpb.StreamSlotsRequest, stream ethpb.Beacon
}
s = data.SignedBlock.Block().Slot()
}
if err := stream.Send(&ethpb.StreamSlotsResponse{Slot: s}); err != nil {
currEpoch := slots.ToEpoch(s)
currDepRoot, err := vs.ForkchoiceFetcher.DependentRoot(currEpoch)
if err != nil {
return status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
}
prevDepRoot := currDepRoot
if currEpoch > 0 {
prevDepRoot, err = vs.ForkchoiceFetcher.DependentRoot(currEpoch - 1)
if err != nil {
return status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
}
}
if err := stream.Send(
&ethpb.StreamSlotsResponse{
Slot: s,
PreviousDutyDependentRoot: prevDepRoot[:],
CurrentDutyDependentRoot: currDepRoot[:],
}); err != nil {
return status.Errorf(codes.Unavailable, "Could not send over stream: %v", err)
}
case <-sub.Err():
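The new StreamSlots payload attaches the current and previous duty dependent roots alongside the slot. The epoch arithmetic feeding DependentRoot is plain integer division of slot by slots-per-epoch, with the previous epoch clamped at genesis; a standalone sketch of that arithmetic (32 is the mainnet slots-per-epoch value):

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet configuration value

// toEpoch mirrors slots.ToEpoch: the epoch containing a given slot.
func toEpoch(slot uint64) uint64 { return slot / slotsPerEpoch }

func main() {
	s := uint64(123)
	curr := toEpoch(s) // 3
	prev := curr
	if curr > 0 {
		prev = curr - 1 // 2; at epoch 0 both roots resolve from the same epoch
	}
	fmt.Println(curr, prev)
}
```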
@@ -297,15 +297,20 @@ func TestServer_StreamSlots_OnHeadUpdated(t *testing.T) {

chainService := &chainMock.ChainService{}
server := &Server{
Ctx: ctx,
BlockNotifier: chainService.BlockNotifier(),
Ctx: ctx,
ForkchoiceFetcher: chainService,
BlockNotifier: chainService.BlockNotifier(),
}
exitRoutine := make(chan bool)
ctrl := gomock.NewController(t)
defer ctrl.Finish()
mockStream := mock.NewMockBeaconNodeValidator_StreamSlotsServer(ctrl)

mockStream.EXPECT().Send(&ethpb.StreamSlotsResponse{Slot: 123}).Do(func(arg0 interface{}) {
mockStream.EXPECT().Send(&ethpb.StreamSlotsResponse{
Slot: 123,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
}).Do(func(arg0 interface{}) {
exitRoutine <- true
})
mockStream.EXPECT().Context().Return(ctx).AnyTimes()
@@ -329,14 +334,19 @@ func TestServer_StreamSlotsVerified_OnHeadUpdated(t *testing.T) {
ctx := context.Background()
chainService := &chainMock.ChainService{}
server := &Server{
Ctx: ctx,
StateNotifier: chainService.StateNotifier(),
Ctx: ctx,
ForkchoiceFetcher: chainService,
StateNotifier: chainService.StateNotifier(),
}
exitRoutine := make(chan bool)
ctrl := gomock.NewController(t)
defer ctrl.Finish()
mockStream := mock.NewMockBeaconNodeValidator_StreamSlotsServer(ctrl)
mockStream.EXPECT().Send(&ethpb.StreamSlotsResponse{Slot: 123}).Do(func(arg0 interface{}) {
mockStream.EXPECT().Send(&ethpb.StreamSlotsResponse{
Slot: 123,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
}).Do(func(arg0 interface{}) {
exitRoutine <- true
})
mockStream.EXPECT().Context().Return(ctx).AnyTimes()
@@ -16,7 +16,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetDuties returns the duties assigned to a list of validators specified
// in the request object.
@@ -159,13 +159,26 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
validatorAssignments = append(validatorAssignments, assignment)
nextValidatorAssignments = append(nextValidatorAssignments, nextAssignment)
}
currDependentRoot, err := vs.ForkchoiceFetcher.DependentRoot(currentEpoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
}
prevDependentRoot := currDependentRoot
if currDependentRoot != [32]byte{} && currentEpoch > 0 {
prevDependentRoot, err = vs.ForkchoiceFetcher.DependentRoot(currentEpoch - 1)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get previous dependent root: %v", err)
}
}
return &ethpb.DutiesResponse{
CurrentEpochDuties: validatorAssignments,
NextEpochDuties: nextValidatorAssignments,
PreviousDutyDependentRoot: prevDependentRoot[:],
CurrentDutyDependentRoot: currDependentRoot[:],
CurrentEpochDuties: validatorAssignments,
NextEpochDuties: nextValidatorAssignments,
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// AssignValidatorToSubnet checks the status and pubkey of a particular validator
// to discern whether persistent subnets need to be registered for them.
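With DutiesResponse now carrying both dependent roots, a validator client can detect a reorg that invalidates its cached duties by comparing roots between polls instead of refetching every epoch. A hedged sketch of that client-side check (hypothetical helper, not Prysm's validator client code):

```go
package main

import (
	"bytes"
	"fmt"
)

// dutiesStale reports whether cached duties must be refetched because either
// dependent root changed since the last DutiesResponse was observed.
func dutiesStale(cachedPrev, cachedCurr, newPrev, newCurr []byte) bool {
	return !bytes.Equal(cachedPrev, newPrev) || !bytes.Equal(cachedCurr, newCurr)
}

func main() {
	a := []byte{1}
	b := []byte{2}
	fmt.Println(dutiesStale(a, b, a, b)) // false: roots unchanged, duties still valid
	fmt.Println(dutiesStale(a, b, a, a)) // true: current dependent root changed
}
```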
@@ -55,10 +55,11 @@ func TestGetDuties_OK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}

// Test the first validator in registry.
@@ -140,11 +141,12 @@ func TestGetAltairDuties_SyncCommitteeOK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}

// Test the first validator in registry.
@@ -246,11 +248,12 @@ func TestGetBellatrixDuties_SyncCommitteeOK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
HeadFetcher: chain,
TimeFetcher: chain,
ForkchoiceFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}

// Test the first validator in registry.
@@ -338,12 +341,13 @@ func TestGetAltairDuties_UnknownPubkey(t *testing.T) {
require.NoError(t, err)

vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
PayloadIDCache: cache.NewPayloadIDCache(),
HeadFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
PayloadIDCache: cache.NewPayloadIDCache(),
}

unknownPubkey := bytesutil.PadTo([]byte{'u'}, 48)
@@ -361,7 +365,8 @@ func TestGetDuties_SlotOutOfUpperBound(t *testing.T) {
Genesis: time.Now(),
}
vs := &Server{
TimeFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
}
req := &ethpb.DutiesRequest{
Epoch: primitives.Epoch(chain.CurrentSlot()/params.BeaconConfig().SlotsPerEpoch + 2),
@@ -396,10 +401,11 @@ func TestGetDuties_CurrentEpoch_ShouldNotFail(t *testing.T) {
State: bState, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
HeadFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}

// Test the first validator in registry.
@@ -435,10 +441,11 @@ func TestGetDuties_MultipleKeys_OK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
HeadFetcher: chain,
ForkchoiceFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}

pubkey0 := deposits[0].Data.PublicKey
@@ -12,7 +12,7 @@ import (
"google.golang.org/grpc/status"
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeExit proposes an exit for a validator.
func (vs *Server) ProposeExit(ctx context.Context, req *ethpb.SignedVoluntaryExit) (*ethpb.ProposeExitResponse, error) {
@@ -45,7 +45,7 @@ const (
defaultBuilderBoostFactor = primitives.Gwei(100)
)

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBeaconBlock is called by a proposer during its assigned slot to request a block to sign
// by passing in the slot and the signed randao reveal of the slot.
@@ -271,7 +271,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
return vs.constructGenericBeaconBlock(sBlk, bundle, winningBid)
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeBeaconBlock handles the proposal of beacon blocks.
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
@@ -412,7 +412,7 @@ func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethp
return eg.Wait()
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
func (vs *Server) PrepareBeaconProposer(
@@ -449,7 +449,7 @@ func (vs *Server) PrepareBeaconProposer(
return &emptypb.Empty{}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetFeeRecipientByPubKey returns a fee recipient from the beacon node's settings or db based on a given public key
func (vs *Server) GetFeeRecipientByPubKey(ctx context.Context, request *ethpb.FeeRecipientByPubKeyRequest) (*ethpb.FeeRecipientByPubKeyResponse, error) {
@@ -506,7 +506,7 @@ func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.ReadOnl
return root[:], nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitValidatorRegistrations submits validator registrations.
func (vs *Server) SubmitValidatorRegistrations(ctx context.Context, reg *ethpb.SignedValidatorRegistrationsV1) (*emptypb.Empty, error) {
@@ -82,7 +82,7 @@ type Server struct {
AttestationStateFetcher blockchain.AttestationStateFetcher
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// WaitForActivation checks if a validator public key exists in the active validator registry of the current
// beacon state, if not, then it creates a stream which listens for canonical states which contain
@@ -132,7 +132,7 @@ func (vs *Server) WaitForActivation(req *ethpb.ValidatorActivationRequest, strea
}
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ValidatorIndex is called by a validator to get its index location in the beacon state.
func (vs *Server) ValidatorIndex(ctx context.Context, req *ethpb.ValidatorIndexRequest) (*ethpb.ValidatorIndexResponse, error) {
@@ -151,7 +151,7 @@ func (vs *Server) ValidatorIndex(ctx context.Context, req *ethpb.ValidatorIndexR
return &ethpb.ValidatorIndexResponse{Index: index}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// DomainData fetches the current domain version information from the beacon state.
func (vs *Server) DomainData(ctx context.Context, request *ethpb.DomainRequest) (*ethpb.DomainResponse, error) {
@@ -183,7 +183,7 @@ func (vs *Server) DomainData(ctx context.Context, request *ethpb.DomainRequest)
}, nil
}

// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// WaitForChainStart queries the logs of the Deposit Contract in order to verify the beacon chain
// has started its runtime and validators begin their responsibilities. If it has not, it then
@@ -29,7 +29,7 @@ var nonExistentIndex = primitives.ValidatorIndex(^uint64(0))
|
||||
|
||||
var errParticipation = status.Errorf(codes.Internal, "Failed to obtain epoch participation")
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// ValidatorStatus returns the validator status of the current epoch.
|
||||
// The status response can be one of the following:
|
||||
@@ -54,7 +54,7 @@ func (vs *Server) ValidatorStatus(
|
||||
return vStatus, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// MultipleValidatorStatus is the same as ValidatorStatus. Supports retrieval of multiple
|
||||
// validator statuses. Takes a list of public keys or a list of validator indices.
|
||||
@@ -104,7 +104,7 @@ func (vs *Server) MultipleValidatorStatus(
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// CheckDoppelGanger checks if the provided keys are currently active in the network.
|
||||
func (vs *Server) CheckDoppelGanger(ctx context.Context, req *ethpb.DoppelGangerRequest) (*ethpb.DoppelGangerResponse, error) {
|
||||
|
||||
@@ -12,7 +12,7 @@ import (
|
||||
"google.golang.org/protobuf/types/known/emptypb"
|
||||
)
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// GetSyncMessageBlockRoot retrieves the sync committee block root of the beacon chain.
|
||||
func (vs *Server) GetSyncMessageBlockRoot(
|
||||
@@ -34,7 +34,7 @@ func (vs *Server) GetSyncMessageBlockRoot(
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// SubmitSyncMessage submits the sync committee message to the network.
|
||||
// It also saves the sync committee message into the pending pool for block inclusion.
|
||||
@@ -45,7 +45,7 @@ func (vs *Server) SubmitSyncMessage(ctx context.Context, msg *ethpb.SyncCommitte
|
||||
return &emptypb.Empty{}, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// GetSyncSubcommitteeIndex is called by a sync committee participant to get
|
||||
// its subcommittee index for sync message aggregation duty.
|
||||
@@ -63,7 +63,7 @@ func (vs *Server) GetSyncSubcommitteeIndex(
|
||||
return ðpb.SyncSubcommitteeIndexResponse{Indices: indices}, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// GetSyncCommitteeContribution is called by a sync committee aggregator
|
||||
// to retrieve sync committee contribution object.
|
||||
@@ -106,7 +106,7 @@ func (vs *Server) GetSyncCommitteeContribution(
|
||||
return contribution, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// SubmitSignedContributionAndProof is called by a sync committee aggregator
|
||||
// to submit signed contribution and proof object.
|
||||
@@ -120,7 +120,7 @@ func (vs *Server) SubmitSignedContributionAndProof(
|
||||
return &emptypb.Empty{}, nil
|
||||
}
|
||||
|
||||
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
|
||||
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
|
||||
//
|
||||
// AggregatedSigAndAggregationBits returns the aggregated signature and aggregation bits
|
||||
// associated with a particular set of sync committee messages.
|
||||
|
||||
@@ -16,6 +16,7 @@ import (
|
||||
blockfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/block"
|
||||
opfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
|
||||
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
|
||||
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
|
||||
@@ -121,6 +122,7 @@ type Config struct {
|
||||
BlobStorage *filesystem.BlobStorage
|
||||
TrackedValidatorsCache *cache.TrackedValidatorsCache
|
||||
PayloadIDCache *cache.PayloadIDCache
|
||||
LCStore *lightClient.Store
|
||||
}
|
||||
|
||||
// NewService instantiates a new RPC service instance that will
|
||||
|
||||
@@ -62,9 +62,11 @@ func (b *BeaconState) NextWithdrawalValidatorIndex() (primitives.ValidatorIndex,
//
//    validator = state.validators[withdrawal.index]
//    has_sufficient_effective_balance = validator.effective_balance >= MIN_ACTIVATION_BALANCE
-//    has_excess_balance = state.balances[withdrawal.index] > MIN_ACTIVATION_BALANCE
+//    total_withdrawn = sum(w.amount for w in withdrawals if w.validator_index == withdrawal.validator_index)
+//    balance = state.balances[withdrawal.validator_index] - total_withdrawn
+//    has_excess_balance = balance > MIN_ACTIVATION_BALANCE
//    if validator.exit_epoch == FAR_FUTURE_EPOCH and has_sufficient_effective_balance and has_excess_balance:
-//        withdrawable_balance = min(state.balances[withdrawal.index] - MIN_ACTIVATION_BALANCE, withdrawal.amount)
+//        withdrawable_balance = min(balance - MIN_ACTIVATION_BALANCE, withdrawal.amount)
//        withdrawals.append(Withdrawal(
//            index=withdrawal_index,
//            validator_index=withdrawal.index,

@@ -132,9 +134,19 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
            return nil, 0, fmt.Errorf("could not retrieve balance at index %d: %w", w.Index, err)
        }
        hasSufficientEffectiveBalance := v.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
-       hasExcessBalance := vBal > params.BeaconConfig().MinActivationBalance
+       var totalWithdrawn uint64
+       for _, wi := range withdrawals {
+           if wi.ValidatorIndex == w.Index {
+               totalWithdrawn += wi.Amount
+           }
+       }
+       balance, err := mathutil.Sub64(vBal, totalWithdrawn)
+       if err != nil {
+           return nil, 0, errors.Wrapf(err, "failed to subtract balance %d with total withdrawn %d", vBal, totalWithdrawn)
+       }
+       hasExcessBalance := balance > params.BeaconConfig().MinActivationBalance
        if v.ExitEpoch() == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
-           amount := min(vBal-params.BeaconConfig().MinActivationBalance, w.Amount)
+           amount := min(balance-params.BeaconConfig().MinActivationBalance, w.Amount)
            withdrawals = append(withdrawals, &enginev1.Withdrawal{
                Index:          withdrawalIndex,
                ValidatorIndex: w.Index,

@@ -165,7 +177,10 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
                partiallyWithdrawnBalance += w.Amount
            }
        }
-       balance = balance - partiallyWithdrawnBalance
+       balance, err = mathutil.Sub64(balance, partiallyWithdrawnBalance)
+       if err != nil {
+           return nil, 0, errors.Wrapf(err, "could not subtract balance %d with partial withdrawn balance %d", balance, partiallyWithdrawnBalance)
+       }
    }
    if helpers.IsFullyWithdrawableValidator(val, balance, epoch, b.version) {
        withdrawals = append(withdrawals, &enginev1.Withdrawal{

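The substantive change in this hunk is replacing raw uint64 subtraction with a checked one, so a balance reduced by prior withdrawals fails loudly instead of wrapping around. A minimal, self-contained Go sketch of the pattern; sub64 below is a hypothetical stand-in for mathutil.Sub64, whose exact signature is assumed rather than quoted:

package main

import (
    "errors"
    "fmt"
)

// sub64 mirrors checked-subtraction semantics: it returns an error
// instead of wrapping around when b > a.
func sub64(a, b uint64) (uint64, error) {
    if b > a {
        return 0, errors.New("underflow")
    }
    return a - b, nil
}

func main() {
    // Before the fix, `balance - partiallyWithdrawnBalance` on uint64
    // would silently wrap to an enormous value once withdrawals
    // exceeded the remaining balance.
    balance := uint64(2_015_000_000_000)
    withdrawn := uint64(2_016_000_000_000)
    if _, err := sub64(balance, withdrawn); err != nil {
        fmt.Println("caught underflow instead of wrapping:", err)
    }
}
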
@@ -416,3 +416,37 @@ func TestExpectedWithdrawals(t *testing.T) {
        require.DeepEqual(t, withdrawalFull, expected[1])
    })
}

+func TestExpectedWithdrawals_underflow_electra(t *testing.T) {
+   s, err := state_native.InitializeFromProtoUnsafeElectra(&ethpb.BeaconStateElectra{})
+   require.NoError(t, err)
+   vals := make([]*ethpb.Validator, 1)
+   balances := make([]uint64, 1)
+   balances[0] = 2015_000_000_000 // Validator A begins leaking ETH due to inactivity, and over time its balance decreases to 2,015 ETH.
+   val := &ethpb.Validator{
+       WithdrawalCredentials: make([]byte, 32),
+       EffectiveBalance:      params.BeaconConfig().MaxEffectiveBalanceElectra,
+       WithdrawableEpoch:     primitives.Epoch(0),
+       ExitEpoch:             params.BeaconConfig().FarFutureEpoch,
+   }
+   val.WithdrawalCredentials[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
+   val.WithdrawalCredentials[31] = byte(0)
+   vals[0] = val
+
+   require.NoError(t, s.SetValidators(vals))
+   require.NoError(t, s.SetBalances(balances))
+   require.NoError(t, s.AppendPendingPartialWithdrawal(&ethpb.PendingPartialWithdrawal{
+       Amount:            1008_000_000_000,
+       WithdrawableEpoch: primitives.Epoch(0),
+   }))
+   require.NoError(t, s.AppendPendingPartialWithdrawal(&ethpb.PendingPartialWithdrawal{
+       Amount:            1008_000_000_000,
+       WithdrawableEpoch: primitives.Epoch(0),
+   }))
+   expected, _, err := s.ExpectedWithdrawals()
+   require.NoError(t, err)
+   require.Equal(t, 3, len(expected)) // The validator is also fully withdrawable, hence the third entry.
+   require.Equal(t, uint64(1008_000_000_000), expected[0].Amount)
+   require.Equal(t, uint64(975_000_000_000), expected[1].Amount)
+   require.Equal(t, uint64(32_000_000_000), expected[2].Amount)
+}

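The three expected amounts follow directly from the capped, checked arithmetic above: the first 1008 ETH request is withdrawn in full, the second is capped at 975 ETH to preserve the minimum activation balance, and the remaining 32 ETH is swept because the validator is fully withdrawable. A small Go sketch reproducing the arithmetic (the 32 ETH minimum is an assumption drawn from mainnet config, not quoted from the test):

package main

import "fmt"

const gwei = 1_000_000_000

func main() {
    minActivation := uint64(32) * gwei // assumed MIN_ACTIVATION_BALANCE
    balance := uint64(2015) * gwei     // balance after leaking down to 2,015 ETH
    pending := []uint64{1008 * gwei, 1008 * gwei}

    for _, amt := range pending {
        // Each partial withdrawal is capped so the validator keeps
        // at least the minimum activation balance.
        w := min(balance-minActivation, amt)
        fmt.Println("partial withdrawal:", w/gwei, "ETH") // 1008, then 975
        balance -= w
    }
    // WithdrawableEpoch == 0 makes the validator fully withdrawable,
    // so the remainder is swept as a third withdrawal.
    fmt.Println("full withdrawal:", balance/gwei, "ETH") // 32
}
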
@@ -23,8 +23,8 @@ func NewService() *StateManager {
}

// StateByRootIfCachedNoCopy --
-func (_ *StateManager) StateByRootIfCachedNoCopy(_ [32]byte) state.BeaconState {
-   panic("implement me")
+func (m *StateManager) StateByRootIfCachedNoCopy(root [32]byte) state.BeaconState {
+   return m.StatesByRoot[root]
}

// Resume --

@@ -211,6 +211,7 @@ go_test(
        "//beacon-chain/operations/attestations:go_default_library",
        "//beacon-chain/operations/blstoexec:go_default_library",
        "//beacon-chain/operations/slashings:go_default_library",
+       "//beacon-chain/operations/slashings/mock:go_default_library",
        "//beacon-chain/p2p:go_default_library",
        "//beacon-chain/p2p/encoder:go_default_library",
        "//beacon-chain/p2p/peers:go_default_library",

@@ -60,6 +60,7 @@ go_test(
        "blocks_fetcher_test.go",
        "blocks_fetcher_utils_test.go",
        "blocks_queue_test.go",
+       "downscore_test.go",
        "fsm_benchmark_test.go",
        "fsm_test.go",
        "initial_sync_test.go",
@@ -70,6 +71,7 @@ go_test(
    tags = ["CI_race_detection"],
    deps = [
        "//async/abool:go_default_library",
+       "//beacon-chain/blockchain:go_default_library",
        "//beacon-chain/blockchain/testing:go_default_library",
        "//beacon-chain/das:go_default_library",
        "//beacon-chain/db:go_default_library",
@@ -78,6 +80,7 @@ go_test(
        "//beacon-chain/db/testing:go_default_library",
        "//beacon-chain/p2p:go_default_library",
        "//beacon-chain/p2p/peers:go_default_library",
+       "//beacon-chain/p2p/peers/peerdata:go_default_library",
        "//beacon-chain/p2p/peers/scorers:go_default_library",
        "//beacon-chain/p2p/testing:go_default_library",
        "//beacon-chain/p2p/types:go_default_library",
@@ -105,6 +108,7 @@ go_test(
        "@com_github_libp2p_go_libp2p//core/network:go_default_library",
        "@com_github_libp2p_go_libp2p//core/peer:go_default_library",
        "@com_github_paulbellamy_ratecounter//:go_default_library",
+       "@com_github_pkg_errors//:go_default_library",
        "@com_github_sirupsen_logrus//:go_default_library",
        "@com_github_sirupsen_logrus//hooks/test:go_default_library",
    ],

@@ -120,11 +120,20 @@ type fetchRequestParams struct {
// fetchRequestResponse is a combined type to hold the results of both successful executions and errors.
// The valid usage pattern is to check whether the result's `err` is nil before using `blocks`.
type fetchRequestResponse struct {
-   pid   peer.ID
-   start primitives.Slot
-   count uint64
-   bwb   []blocks.BlockWithROBlobs
-   err   error
+   blocksFrom peer.ID
+   blobsFrom  peer.ID
+   start      primitives.Slot
+   count      uint64
+   bwb        []blocks.BlockWithROBlobs
+   err        error
}

+func (r *fetchRequestResponse) blocksQueueFetchedData() *blocksQueueFetchedData {
+   return &blocksQueueFetchedData{
+       blocksFrom: r.blocksFrom,
+       blobsFrom:  r.blobsFrom,
+       bwb:        r.bwb,
+   }
+}
+
// newBlocksFetcher creates a ready-to-use fetcher.

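The doc comment above prescribes the calling discipline: check `err` before touching the payload. A hedged, self-contained sketch of a consumer following that pattern (the struct below is a simplified stand-in for fetchRequestResponse, with peer IDs and blocks reduced to strings):

package main

import (
    "errors"
    "fmt"
)

// response is a trimmed-down stand-in: same shape as fetchRequestResponse,
// illustrative types only.
type response struct {
    blocksFrom string
    blobsFrom  string
    bwb        []string
    err        error
}

func handle(r *response) {
    // The prescribed pattern: check err before touching bwb.
    if r.err != nil {
        // blocksFrom / blobsFrom identify which peer served which part
        // of the batch, so failures can be attributed precisely.
        fmt.Printf("fetch failed (blocks from %s, blobs from %s): %v\n",
            r.blocksFrom, r.blobsFrom, r.err)
        return
    }
    fmt.Println("processing", len(r.bwb), "blocks") // safe: err is nil
}

func main() {
    handle(&response{blocksFrom: "peerA", blobsFrom: "peerB", err: errors.New("timeout")})
    handle(&response{blocksFrom: "peerA", blobsFrom: "peerB", bwb: []string{"b1", "b2"}})
}
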
@@ -314,13 +323,14 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
        }
    }

-   response.bwb, response.pid, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
+   response.bwb, response.blocksFrom, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
    if response.err == nil {
-       bwb, err := f.fetchBlobsFromPeer(ctx, response.bwb, response.pid, peers)
+       pid, bwb, err := f.fetchBlobsFromPeer(ctx, response.bwb, response.blocksFrom, peers)
        if err != nil {
            response.err = err
        }
        response.bwb = bwb
+       response.blobsFrom = pid
    }
    return response
}

@@ -537,20 +547,20 @@ func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) e
}

// fetchBlobsFromPeer fetches blobs for the given blocks from a single randomly selected peer.
-func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROBlobs, pid peer.ID, peers []peer.ID) ([]blocks.BlockWithROBlobs, error) {
+func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROBlobs, pid peer.ID, peers []peer.ID) (peer.ID, []blocks.BlockWithROBlobs, error) {
    ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlobsFromPeer")
    defer span.End()
    if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch {
-       return bwb, nil
+       return "", bwb, nil
    }
    blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot())
    if err != nil {
-       return nil, err
+       return "", nil, err
    }
    // Construct the request message based on the observed interval of blocks in need of blobs.
    req := countCommitments(bwb, blobWindowStart).blobRange(f.bs).Request()
    if req == nil {
-       return bwb, nil
+       return "", bwb, nil
    }
    peers = f.filterPeers(ctx, peers, peersPercentagePerRequest)
    // We dial the initial peer first to ensure that we get the desired set of blobs.
@@ -573,9 +583,9 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.Blo
            log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response")
            continue
        }
-       return robs, err
+       return p, robs, err
    }
-   return nil, errNoPeersAvailable
+   return "", nil, errNoPeersAvailable
}

// requestBlocks is a wrapper for handling BeaconBlocksByRangeRequest requests/streams.

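With the extra return value, callers can record which peer actually served the blobs separately from which peer served the blocks. A simplified, self-contained sketch of the new calling convention (illustrative types only; the real function also takes a context and a candidate peer list):

package main

import "fmt"

// fetchBlobs mimics the new fetchBlobsFromPeer contract: it returns the
// ID of the peer that actually served the blobs alongside the payload.
// An empty peer ID means no blob request was needed.
func fetchBlobs(batch []string, preferred string) (string, []string, error) {
    if len(batch) == 0 {
        return "", batch, nil
    }
    return preferred, batch, nil
}

func main() {
    blobsFrom, bwb, err := fetchBlobs([]string{"block1"}, "peerB")
    if err != nil {
        fmt.Println("fetch failed:", err)
        return
    }
    // blobsFrom can now be penalized independently of the block-serving
    // peer if blob verification later fails.
    fmt.Printf("got %d entries, blobs served by %q\n", len(bwb), blobsFrom)
}
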
@@ -22,8 +22,9 @@ import (
// Blocks are stored in ascending slot order. The first block is guaranteed to have its parent
// either in the DB or in the initial sync cache.
type forkData struct {
-   peer peer.ID
-   bwb  []blocks.BlockWithROBlobs
+   blocksFrom peer.ID
+   blobsFrom  peer.ID
+   bwb        []blocks.BlockWithROBlobs
}

// nonSkippedSlotAfter checks slots after the given one in an attempt to find a non-empty future slot.
@@ -280,13 +281,13 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, slot
        }
        // We need to fetch the blobs for the given alt-chain if any exist, so that we can try to verify and import
        // the blocks.
-       bwb, err := f.fetchBlobsFromPeer(ctx, altBlocks, pid, []peer.ID{pid})
+       bpid, bwb, err := f.fetchBlobsFromPeer(ctx, altBlocks, pid, []peer.ID{pid})
        if err != nil {
            return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findForkWithPeer")
        }
        // The caller will use the blocks with verified blobs in bwb as the starting point for
        // round-robin syncing the alternate chain.
-       return &forkData{peer: pid, bwb: bwb}, nil
+       return &forkData{blocksFrom: pid, blobsFrom: bpid, bwb: bwb}, nil
    }
    return nil, errNoAlternateBlocks
}
@@ -302,13 +303,15 @@ func (f *blocksFetcher) findAncestor(ctx context.Context, pid peer.ID, b interfa
    if err != nil {
        return nil, errors.Wrap(err, "received invalid blocks in findAncestor")
    }
-   bwb, err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid})
+   var bpid peer.ID
+   bpid, bwb, err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid})
    if err != nil {
        return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findAncestor")
    }
    return &forkData{
-       peer: pid,
-       bwb:  bwb,
+       blocksFrom: pid,
+       bwb:        bwb,
+       blobsFrom:  bpid,
    }, nil
}
// Request block's parent.

@@ -263,7 +263,7 @@ func TestBlocksFetcher_findFork(t *testing.T) {
    reqEnd := testForkStartSlot(t, 251) + primitives.Slot(findForkReqRangeSize())
    require.Equal(t, primitives.Slot(len(chain1)), fork.bwb[0].Block.Block().Slot())
    require.Equal(t, int(reqEnd-forkSlot1b), len(fork.bwb))
-   require.Equal(t, curForkMoreBlocksPeer, fork.peer)
+   require.Equal(t, curForkMoreBlocksPeer, fork.blocksFrom)
    // Save all chain1b blocks (so that they do not interfere with the alternative fork).
    for _, blk := range chain1b {
        util.SaveBlock(t, ctx, beaconDB, blk)
@@ -283,7 +283,7 @@ func TestBlocksFetcher_findFork(t *testing.T) {
    alternativePeer := connectPeerHavingBlocks(t, p2p, chain2, finalizedSlot, p2p.Peers())
    fork, err = fetcher.findFork(ctx, 251)
    require.NoError(t, err)
-   assert.Equal(t, alternativePeer, fork.peer)
+   assert.Equal(t, alternativePeer, fork.blocksFrom)
    assert.Equal(t, 65, len(fork.bwb))
    ind := forkSlot
    for _, blk := range fork.bwb {

@@ -10,6 +10,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
    beaconsync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
+   "github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/time/slots"
@@ -93,8 +94,9 @@ type blocksQueue struct {

// blocksQueueFetchedData is a data container that is returned from a queue on each step.
type blocksQueueFetchedData struct {
-   pid peer.ID
-   bwb []blocks.BlockWithROBlobs
+   blocksFrom peer.ID
+   blobsFrom  peer.ID
+   bwb        []blocks.BlockWithROBlobs
}

// newBlocksQueue creates an initialized priority queue.
@@ -337,13 +339,15 @@ func (q *blocksQueue) onDataReceivedEvent(ctx context.Context) eventHandlerFn {
        }
        if errors.Is(response.err, beaconsync.ErrInvalidFetchedData) {
            // Peer returned invalid data, penalize.
-           q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(m.pid)
-           log.WithField("pid", response.pid).Debug("Peer is penalized for invalid blocks")
+           q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(response.blocksFrom)
+           log.WithField("pid", response.blocksFrom).Debug("Peer is penalized for invalid blocks")
+       } else if errors.Is(response.err, verification.ErrBlobInvalid) {
+           q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(response.blobsFrom)
+           log.WithField("pid", response.blobsFrom).Debug("Peer is penalized for invalid blob response")
        }
        return m.state, response.err
    }
-   m.pid = response.pid
-   m.bwb = response.bwb
+   m.fetched = *response
    return stateDataParsed, nil
}
}

@@ -358,19 +362,15 @@ func (q *blocksQueue) onReadyToSendEvent(ctx context.Context) eventHandlerFn {
        return m.state, errInvalidInitialState
    }

-   if len(m.bwb) == 0 {
+   if m.numFetched() == 0 {
        return stateSkipped, nil
    }

    send := func() (stateID, error) {
-       data := &blocksQueueFetchedData{
-           pid: m.pid,
-           bwb: m.bwb,
-       }
        select {
        case <-ctx.Done():
            return m.state, ctx.Err()
-       case q.fetchedData <- data:
+       case q.fetchedData <- m.fetched.blocksQueueFetchedData():
        }
        return stateSent, nil
    }

@@ -472,8 +472,8 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
    updatedState, err := handlerFn(&stateMachine{
        state: stateScheduled,
    }, &fetchRequestResponse{
-       pid: "abc",
-       err: errSlotIsTooHigh,
+       blocksFrom: "abc",
+       err:        errSlotIsTooHigh,
    })
    assert.ErrorContains(t, errSlotIsTooHigh.Error(), err)
    assert.Equal(t, stateScheduled, updatedState)
@@ -495,9 +495,9 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
    updatedState, err := handlerFn(&stateMachine{
        state: stateScheduled,
    }, &fetchRequestResponse{
-       pid:   "abc",
-       err:   errSlotIsTooHigh,
-       start: 256,
+       blocksFrom: "abc",
+       err:        errSlotIsTooHigh,
+       start:      256,
    })
    assert.ErrorContains(t, errSlotIsTooHigh.Error(), err)
    assert.Equal(t, stateScheduled, updatedState)
@@ -517,8 +517,8 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
    updatedState, err := handlerFn(&stateMachine{
        state: stateScheduled,
    }, &fetchRequestResponse{
-       pid: "abc",
-       err: beaconsync.ErrInvalidFetchedData,
+       blocksFrom: "abc",
+       err:        beaconsync.ErrInvalidFetchedData,
    })
    assert.ErrorContains(t, beaconsync.ErrInvalidFetchedData.Error(), err)
    assert.Equal(t, stateScheduled, updatedState)
@@ -537,7 +537,7 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
    wsbCopy, err := wsb.Copy()
    require.NoError(t, err)
    response := &fetchRequestResponse{
-       pid: "abc",
+       blocksFrom: "abc",
        bwb: []blocks.BlockWithROBlobs{
            {Block: blocks.ROBlock{ReadOnlySignedBeaconBlock: wsb}},
            {Block: blocks.ROBlock{ReadOnlySignedBeaconBlock: wsbCopy}},
@@ -546,13 +546,15 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
    fsm := &stateMachine{
        state: stateScheduled,
    }
-   assert.Equal(t, peer.ID(""), fsm.pid)
-   assert.Equal(t, 0, len(fsm.bwb))
+   assert.Equal(t, peer.ID(""), fsm.fetched.blocksFrom)
+   assert.Equal(t, peer.ID(""), fsm.fetched.blobsFrom)
+   assert.Equal(t, 0, fsm.numFetched())
    updatedState, err := handlerFn(fsm, response)
    assert.NoError(t, err)
    assert.Equal(t, stateDataParsed, updatedState)
-   assert.Equal(t, response.pid, fsm.pid)
-   assert.DeepSSZEqual(t, response.bwb, fsm.bwb)
+   assert.Equal(t, response.blocksFrom, fsm.fetched.blocksFrom)
+   assert.Equal(t, response.blobsFrom, fsm.fetched.blobsFrom)
+   assert.DeepSSZEqual(t, response.bwb, fsm.fetched.bwb)
    })
}

@@ -635,10 +637,10 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
    queue.smm.addStateMachine(256)
    queue.smm.addStateMachine(320)
    queue.smm.machines[256].state = stateDataParsed
-   queue.smm.machines[256].pid = pidDataParsed
+   queue.smm.machines[256].fetched.blocksFrom = pidDataParsed
    rwsb, err := blocks.NewROBlock(wsb)
    require.NoError(t, err)
-   queue.smm.machines[256].bwb = []blocks.BlockWithROBlobs{
+   queue.smm.machines[256].fetched.bwb = []blocks.BlockWithROBlobs{
        {Block: rwsb},
    }

@@ -669,10 +671,10 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
    queue.smm.machines[256].state = stateDataParsed
    queue.smm.addStateMachine(320)
    queue.smm.machines[320].state = stateDataParsed
-   queue.smm.machines[320].pid = pidDataParsed
+   queue.smm.machines[320].fetched.blocksFrom = pidDataParsed
    rwsb, err := blocks.NewROBlock(wsb)
    require.NoError(t, err)
-   queue.smm.machines[320].bwb = []blocks.BlockWithROBlobs{
+   queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROBlobs{
        {Block: rwsb},
    }

@@ -700,10 +702,10 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
    queue.smm.machines[256].state = stateSkipped
    queue.smm.addStateMachine(320)
    queue.smm.machines[320].state = stateDataParsed
-   queue.smm.machines[320].pid = pidDataParsed
+   queue.smm.machines[320].fetched.blocksFrom = pidDataParsed
    rwsb, err := blocks.NewROBlock(wsb)
    require.NoError(t, err)
-   queue.smm.machines[320].bwb = []blocks.BlockWithROBlobs{
+   queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROBlobs{
        {Block: rwsb},
    }

@@ -1199,17 +1201,17 @@ func TestBlocksQueue_stuckInUnfavourableFork(t *testing.T) {
    firstFSM, ok := queue.smm.findStateMachine(forkedSlot)
    require.Equal(t, true, ok)
    require.Equal(t, stateDataParsed, firstFSM.state)
-   require.Equal(t, forkedPeer, firstFSM.pid)
+   require.Equal(t, forkedPeer, firstFSM.fetched.blocksFrom)
    reqEnd := testForkStartSlot(t, 251) + primitives.Slot(findForkReqRangeSize())
-   require.Equal(t, int(reqEnd-forkedSlot), len(firstFSM.bwb))
-   require.Equal(t, forkedSlot, firstFSM.bwb[0].Block.Block().Slot())
+   require.Equal(t, int(reqEnd-forkedSlot), len(firstFSM.fetched.bwb))
+   require.Equal(t, forkedSlot, firstFSM.fetched.bwb[0].Block.Block().Slot())

    // Assert that forked data from chain2 is available (within 64 fetched blocks).
    for i, blk := range chain2[forkedSlot:] {
-       if i >= len(firstFSM.bwb) {
+       if i >= len(firstFSM.fetched.bwb) {
            break
        }
-       rootFromFSM := firstFSM.bwb[i].Block.Root()
+       rootFromFSM := firstFSM.fetched.bwb[i].Block.Root()
        blkRoot, err := blk.Block.HashTreeRoot()
        require.NoError(t, err)
        assert.Equal(t, blkRoot, rootFromFSM)
@@ -1217,7 +1219,7 @@ func TestBlocksQueue_stuckInUnfavourableFork(t *testing.T) {

    // Assert that the machines are in the expected state.
    startSlot = forkedEpochStartSlot.Add(1 + blocksPerRequest)
-   require.Equal(t, int(blocksPerRequest)-int(forkedSlot-(forkedEpochStartSlot+1)), len(firstFSM.bwb))
+   require.Equal(t, int(blocksPerRequest)-int(forkedSlot-(forkedEpochStartSlot+1)), len(firstFSM.fetched.bwb))
    for i := startSlot; i < startSlot.Add(blocksPerRequest*(lookaheadSteps-1)); i += primitives.Slot(blocksPerRequest) {
        fsm, ok := queue.smm.findStateMachine(i)
        require.Equal(t, true, ok)

@@ -24,8 +24,8 @@ func (q *blocksQueue) resetFromFork(fork *forkData) error {
        return err
    }
    fsm := q.smm.addStateMachine(firstBlock.Slot())
-   fsm.pid = fork.peer
-   fsm.bwb = fork.bwb
+   fsm.fetched.bwb = fork.bwb
+   fsm.fetched.blocksFrom, fsm.fetched.blobsFrom = fork.blocksFrom, fork.blobsFrom
    fsm.state = stateDataParsed

    // The rest of the machines are in the skipped state.

beacon-chain/sync/initial-sync/downscore_test.go (new file, 219 lines)
@@ -0,0 +1,219 @@
package initialsync

import (
    "context"
    "testing"
    "time"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
    mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/peerdata"
    p2pt "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/libp2p/go-libp2p/core/peer"
    "github.com/pkg/errors"
)

type testDownscorePeer int

const (
    testDownscoreNeither testDownscorePeer = iota
    testDownscoreBlock
    testDownscoreBlob
)

func peerIDForTestDownscore(w testDownscorePeer, name string) peer.ID {
    switch w {
    case testDownscoreBlock:
        return peer.ID("block" + name)
    case testDownscoreBlob:
        return peer.ID("blob" + name)
    default:
        return ""
    }
}

func TestUpdatePeerScorerStats(t *testing.T) {
    cases := []struct {
        name      string
        err       error
        processed uint64
        downPeer  testDownscorePeer
    }{
        {
            name:      "invalid block",
            err:       blockchain.ErrInvalidPayload,
            downPeer:  testDownscoreBlock,
            processed: 10,
        },
        {
            name:      "invalid blob",
            err:       verification.ErrBlobIndexInvalid,
            downPeer:  testDownscoreBlob,
            processed: 3,
        },
        {
            name:      "not validity error",
            err:       errors.New("test"),
            processed: 32,
        },
        {
            name:      "no error",
            processed: 32,
        },
    }
    s := &Service{
        cfg: &Config{
            P2P: p2pt.NewTestP2P(t),
        },
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            data := &blocksQueueFetchedData{
                blocksFrom: peerIDForTestDownscore(testDownscoreBlock, c.name),
                blobsFrom:  peerIDForTestDownscore(testDownscoreBlob, c.name),
            }
            s.updatePeerScorerStats(data, c.processed, c.err)
            if c.err != nil && c.downPeer != testDownscoreNeither {
                switch c.downPeer {
                case testDownscoreBlock:
                    // The block peer should be downscored.
                    blocksCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
                    require.NoError(t, err)
                    require.Equal(t, 1, blocksCount)
                    // The blob peer should not be downscored; we also expect a not-found error since peer scoring did not interact with blobs.
                    blobCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
                    require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
                    require.Equal(t, -1, blobCount)
                case testDownscoreBlob:
                    // The block peer should not be downscored; we also expect a not-found error since peer scoring did not interact with blocks.
                    blocksCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
                    require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
                    require.Equal(t, -1, blocksCount)
                    // The blob peer should be downscored.
                    blobCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
                    require.NoError(t, err)
                    require.Equal(t, 1, blobCount)
                }
                assert.Equal(t, uint64(0), s.cfg.P2P.Peers().Scorers().BlockProviderScorer().ProcessedBlocks(data.blocksFrom))
                return
            }
            // The block peer should not be downscored.
            blocksCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
            // The scorer will know about the block peer because it will have a processed-blocks count.
            require.NoError(t, err)
            require.Equal(t, 0, blocksCount)
            // No downscore, so the scorer doesn't know the blob peer.
            blobCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
            require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
            require.Equal(t, -1, blobCount)

            assert.Equal(t, c.processed, s.cfg.P2P.Peers().Scorers().BlockProviderScorer().ProcessedBlocks(data.blocksFrom))
        })
    }
}

func TestOnDataReceivedDownscore(t *testing.T) {
    cases := []struct {
        name     string
        err      error
        downPeer testDownscorePeer
    }{
        {
            name:     "invalid block",
            err:      sync.ErrInvalidFetchedData,
            downPeer: testDownscoreBlock,
        },
        {
            name:     "invalid blob",
            err:      errors.Wrap(verification.ErrBlobInvalid, "test"),
            downPeer: testDownscoreBlob,
        },
        {
            name: "not validity error",
            err:  errors.New("test"),
        },
        {
            name: "no error",
        },
    }
    for _, c := range cases {
        t.Run(c.name, func(t *testing.T) {
            data := &fetchRequestResponse{
                blocksFrom: peerIDForTestDownscore(testDownscoreBlock, c.name),
                blobsFrom:  peerIDForTestDownscore(testDownscoreBlob, c.name),
                err:        c.err,
            }
            if c.downPeer == testDownscoreBlob {
                require.Equal(t, true, verification.IsBlobValidationFailure(c.err))
            }
            ctx := context.Background()
            p2p := p2pt.NewTestP2P(t)
            mc := &mock.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}}
            fetcher := newBlocksFetcher(ctx, &blocksFetcherConfig{
                chain: mc,
                p2p:   p2p,
                clock: startup.NewClock(mc.Genesis, mc.ValidatorsRoot),
            })
            q := newBlocksQueue(ctx, &blocksQueueConfig{
                p2p:                 p2p,
                blocksFetcher:       fetcher,
                highestExpectedSlot: primitives.Slot(32),
                chain:               mc,
            })
            sm := q.smm.addStateMachine(0)
            sm.state = stateScheduled
            handle := q.onDataReceivedEvent(context.Background())
            endState, err := handle(sm, data)
            if c.err != nil {
                require.ErrorIs(t, err, c.err)
            } else {
                require.NoError(t, err)
            }
            // The state machine should stay in "scheduled" if there's an error
            // and transition to "data parsed" if there's no error.
            if c.err != nil {
                require.Equal(t, stateScheduled, endState)
            } else {
                require.Equal(t, stateDataParsed, endState)
            }
            if c.err != nil && c.downPeer != testDownscoreNeither {
                switch c.downPeer {
                case testDownscoreBlock:
                    // The block peer should be downscored.
                    blocksCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
                    require.NoError(t, err)
                    require.Equal(t, 1, blocksCount)
                    // The blob peer should not be downscored; we also expect a not-found error since peer scoring did not interact with blobs.
                    blobCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
                    require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
                    require.Equal(t, -1, blobCount)
                case testDownscoreBlob:
                    // The block peer should not be downscored; we also expect a not-found error since peer scoring did not interact with blocks.
                    blocksCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
                    require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
                    require.Equal(t, -1, blocksCount)
                    // The blob peer should be downscored.
                    blobCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
                    require.NoError(t, err)
                    require.Equal(t, 1, blobCount)
                }
                assert.Equal(t, uint64(0), p2p.Peers().Scorers().BlockProviderScorer().ProcessedBlocks(data.blocksFrom))
                return
            }
            // Neither peer should be downscored; with no downscore, the scorer doesn't know either peer.
            blocksCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
            require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
            require.Equal(t, -1, blocksCount)
            blobCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
            require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
            require.Equal(t, -1, blobCount)
        })
    }
}

@@ -6,11 +6,9 @@ import (
    "sort"
    "time"

-   "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    prysmTime "github.com/OffchainLabs/prysm/v6/time"
    "github.com/OffchainLabs/prysm/v6/time/slots"
-   "github.com/libp2p/go-libp2p/core/peer"
)

const (
@@ -45,8 +43,7 @@ type stateMachine struct {
    smm     *stateMachineManager
    start   primitives.Slot
    state   stateID
-   pid     peer.ID
-   bwb     []blocks.BlockWithROBlobs
+   fetched fetchRequestResponse
    updated time.Time
}

@@ -78,7 +75,7 @@ func (smm *stateMachineManager) addStateMachine(startSlot primitives.Slot) *stat
        smm:     smm,
        start:   startSlot,
        state:   stateNew,
-       bwb:     []blocks.BlockWithROBlobs{},
+       fetched: fetchRequestResponse{},
        updated: prysmTime.Now(),
    }
    smm.recalculateMachineAttribs()
@@ -90,7 +87,7 @@ func (smm *stateMachineManager) removeStateMachine(startSlot primitives.Slot) er
    if _, ok := smm.machines[startSlot]; !ok {
        return fmt.Errorf("state for machine %v is not found", startSlot)
    }
-   smm.machines[startSlot].bwb = nil
+   smm.machines[startSlot].fetched = fetchRequestResponse{}
    delete(smm.machines, startSlot)
    smm.recalculateMachineAttribs()
    return nil
@@ -187,6 +184,10 @@ func (m *stateMachine) isLast() bool {
    return m.start == m.smm.keys[len(m.smm.keys)-1]
}

+func (m *stateMachine) numFetched() int {
+   return len(m.fetched.bwb)
+}
+
// String returns a human-readable representation of an FSM state.
func (m *stateMachine) String() string {
    return fmt.Sprintf("{%d:%s}", slots.ToEpoch(m.start), m.state)

@@ -14,7 +14,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/time/slots"
-   "github.com/libp2p/go-libp2p/core/peer"
    "github.com/paulbellamy/ratecounter"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
@@ -127,7 +126,7 @@ func (s *Service) syncToNonFinalizedEpoch(ctx context.Context) error {
    }
    for data := range queue.fetchedData {
        count, err := s.processFetchedDataRegSync(ctx, data)
-       s.updatePeerScorerStats(data.pid, count, err)
+       s.updatePeerScorerStats(data, count, err)
    }
    log.WithFields(logrus.Fields{
        "syncedSlot": s.cfg.Chain.HeadSlot(),
@@ -147,7 +146,7 @@ func (s *Service) processFetchedData(ctx context.Context, data *blocksQueueFetch
    if err != nil {
        log.WithError(err).Warn("Skip processing batched blocks")
    }
-   s.updatePeerScorerStats(data.pid, count, err)
+   s.updatePeerScorerStats(data, count, err)
}

// processFetchedDataRegSync processes data received from the queue.
@@ -339,18 +338,19 @@ func isPunishableError(err error) bool {
}

// updatePeerScorerStats adjusts monitored metrics for a peer.
-func (s *Service) updatePeerScorerStats(pid peer.ID, count uint64, err error) {
-   if pid == "" {
-       return
-   }
+func (s *Service) updatePeerScorerStats(data *blocksQueueFetchedData, count uint64, err error) {
    if isPunishableError(err) {
-       log.WithError(err).WithField("peer_id", pid).Warn("Incrementing peers bad response count")
-       s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(pid)
+       if verification.IsBlobValidationFailure(err) {
+           log.WithError(err).WithField("peer_id", data.blobsFrom).Warn("Downscoring peer for invalid blobs")
+           s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(data.blobsFrom)
+       } else {
+           log.WithError(err).WithField("peer_id", data.blocksFrom).Warn("Downscoring peer for invalid blocks")
+           s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(data.blocksFrom)
+       }
+       // If the error is punishable, exit here so that we don't give the peer credit for providing bad blocks.
        return
    }
-   scorer := s.cfg.P2P.Peers().Scorers().BlockProviderScorer()
-   scorer.IncrementProcessedBlocks(pid, count)
+   s.cfg.P2P.Peers().Scorers().BlockProviderScorer().IncrementProcessedBlocks(data.blocksFrom, count)
}

// isProcessedBlock checks the DB and local cache for the presence of a given block, to avoid duplicates.

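The rewritten function routes punishment by error class: blob-validation failures downscore the blob-serving peer, other punishable errors downscore the block-serving peer, and only a clean batch earns processed-block credit. A self-contained sketch of that routing, with the isPunishableError gate collapsed into a plain nil check for brevity and stdlib wrapping standing in for pkg/errors:

package main

import (
    "errors"
    "fmt"
)

var (
    errBlobInvalid  = errors.New("blob invalid")  // stand-in for verification.ErrBlobInvalid
    errInvalidBlock = errors.New("invalid block") // stand-in for a punishable block error
)

func updateScores(blocksFrom, blobsFrom string, processed uint64, err error) {
    if err != nil {
        if errors.Is(err, errBlobInvalid) {
            fmt.Println("downscoring blob peer:", blobsFrom)
        } else {
            fmt.Println("downscoring block peer:", blocksFrom)
        }
        // Exit early: no processed-blocks credit for a bad batch.
        return
    }
    fmt.Printf("crediting %s with %d processed blocks\n", blocksFrom, processed)
}

func main() {
    updateScores("peerA", "peerB", 0, fmt.Errorf("batch: %w", errBlobInvalid)) // blob peer penalized
    updateScores("peerA", "peerB", 0, errInvalidBlock)                         // block peer penalized
    updateScores("peerA", "peerB", 32, nil)                                    // credit on success
}
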
@@ -156,7 +156,7 @@ func readChunkEncodedBlobsLowMax(t *testing.T, s *Service, expect []*expectedBlo
    }
    return func(stream network.Stream) {
        _, err := readChunkEncodedBlobs(stream, encoding, ctxMap, vf, 1)
-       require.ErrorIs(t, err, ErrInvalidFetchedData)
+       require.ErrorIs(t, err, errMaxRequestBlobSidecarsExceeded)
    }
}

@@ -9,6 +9,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
    p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
+   "github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -30,14 +31,14 @@ var errBlobUnmarshal = errors.New("Could not unmarshal chunk-encoded blob")
var (
    // ErrInvalidFetchedData is used to signal that an error occurred which should result in peer downscoring.
    ErrInvalidFetchedData = errors.New("invalid data returned from peer")
-   errBlobIndexOutOfBounds = errors.Wrap(ErrInvalidFetchedData, "blob index out of range")
-   errMaxRequestBlobSidecarsExceeded = errors.Wrap(ErrInvalidFetchedData, "peer exceeded req blob chunk tx limit")
-   errChunkResponseSlotNotAsc = errors.Wrap(ErrInvalidFetchedData, "blob slot not higher than previous block root")
-   errChunkResponseIndexNotAsc = errors.Wrap(ErrInvalidFetchedData, "blob indices for a block must start at 0 and increase by 1")
-   errUnrequested = errors.Wrap(ErrInvalidFetchedData, "received BlobSidecar in response that was not requested")
-   errBlobResponseOutOfBounds = errors.Wrap(ErrInvalidFetchedData, "received BlobSidecar with slot outside BlobSidecarsByRangeRequest bounds")
-   errChunkResponseBlockMismatch = errors.Wrap(ErrInvalidFetchedData, "blob block details do not match")
-   errChunkResponseParentMismatch = errors.Wrap(ErrInvalidFetchedData, "parent root for response element doesn't match previous element root")
+   errBlobIndexOutOfBounds = errors.Wrap(verification.ErrBlobInvalid, "blob index out of range")
+   errMaxRequestBlobSidecarsExceeded = errors.Wrap(verification.ErrBlobInvalid, "peer exceeded req blob chunk tx limit")
+   errChunkResponseSlotNotAsc = errors.Wrap(verification.ErrBlobInvalid, "blob slot not higher than previous block root")
+   errChunkResponseIndexNotAsc = errors.Wrap(verification.ErrBlobInvalid, "blob indices for a block must start at 0 and increase by 1")
+   errUnrequested = errors.Wrap(verification.ErrBlobInvalid, "received BlobSidecar in response that was not requested")
+   errBlobResponseOutOfBounds = errors.Wrap(verification.ErrBlobInvalid, "received BlobSidecar with slot outside BlobSidecarsByRangeRequest bounds")
+   errChunkResponseBlockMismatch = errors.Wrap(verification.ErrBlobInvalid, "blob block details do not match")
+   errChunkResponseParentMismatch = errors.Wrap(verification.ErrBlobInvalid, "parent root for response element doesn't match previous element root")
)

// BeaconBlockProcessor defines a block processing function, which allows one to start utilizing

@@ -12,6 +12,7 @@ import (
    p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
    p2pTypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
+   "github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -877,3 +878,7 @@ func TestSendBlobsByRangeRequest(t *testing.T) {
        assert.Equal(t, int(totalElectraBlobs), len(blobs))
    })
}

+func TestErrInvalidFetchedDataDistinction(t *testing.T) {
+   require.Equal(t, false, errors.Is(ErrInvalidFetchedData, verification.ErrBlobInvalid))
+}

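The distinction test holds because errors.Wrap keeps the wrapped sentinel reachable through the error chain, while the two sentinels themselves remain unrelated. A tiny demonstration using the standard library's equivalent wrapping:

package main

import (
    "errors"
    "fmt"
)

var (
    errBlobInvalid        = errors.New("blob invalid")         // plays verification.ErrBlobInvalid
    errInvalidFetchedData = errors.New("invalid fetched data") // plays ErrInvalidFetchedData
)

func main() {
    // Wrapping keeps the sentinel reachable via errors.Is...
    wrapped := fmt.Errorf("peer exceeded req blob chunk tx limit: %w", errBlobInvalid)
    fmt.Println(errors.Is(wrapped, errBlobInvalid)) // true

    // ...while the two sentinels themselves stay distinct, which is
    // exactly what TestErrInvalidFetchedDataDistinction asserts.
    fmt.Println(errors.Is(errInvalidFetchedData, errBlobInvalid)) // false
}
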
@@ -350,10 +350,9 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
    }
}

-// reValidateSubscriptions unsubscribe from topics we are currently subscribed to but that are
+// pruneSubscriptions unsubscribes from topics we are currently subscribed to but that are
// not in the list of wanted subnets.
-// TODO: Rename this functions as it does not only revalidate subscriptions.
-func (s *Service) reValidateSubscriptions(
+func (s *Service) pruneSubscriptions(
    subscriptions map[uint64]*pubsub.Subscription,
    wantedSubs []uint64,
    topicFormat string,
@@ -452,7 +451,7 @@ func (s *Service) subscribeToSubnets(
        "digest":  fmt.Sprintf("%#x", digest),
        "subnets": description,
    }).Debug("Subnets with this digest are no longer valid, unsubscribing from all of them")
-   s.reValidateSubscriptions(subscriptions, []uint64{}, topicFormat, digest)
+   s.pruneSubscriptions(subscriptions, []uint64{}, topicFormat, digest)
    return false
}

@@ -460,7 +459,7 @@ func (s *Service) subscribeToSubnets(
    subnetsToSubscribeIndex := getSubnetsToSubscribe(currentSlot)

    // Remove subscriptions that are no longer wanted.
-   s.reValidateSubscriptions(subscriptions, subnetsToSubscribeIndex, topicFormat, digest)
+   s.pruneSubscriptions(subscriptions, subnetsToSubscribeIndex, topicFormat, digest)

    // Subscribe to wanted subnets.
    for _, subnetIndex := range subnetsToSubscribeIndex {

@@ -308,7 +308,7 @@ func TestRevalidateSubscription_CorrectlyFormatsTopic(t *testing.T) {
    subscriptions[2], err = r.cfg.p2p.SubscribeToTopic(fullTopic)
    require.NoError(t, err)

-   r.reValidateSubscriptions(subscriptions, []uint64{2}, defaultTopic, digest)
+   r.pruneSubscriptions(subscriptions, []uint64{2}, defaultTopic, digest)
    require.LogsDoNotContain(t, hook, "Could not unregister topic validator")
}

@@ -21,6 +21,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
+   ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/runtime/version"
    prysmTime "github.com/OffchainLabs/prysm/v6/time"
    "github.com/OffchainLabs/prysm/v6/time/slots"
@@ -31,8 +32,9 @@ import (
)

var (
-   ErrOptimisticParent    = errors.New("parent of the block is optimistic")
-   errRejectCommitmentLen = errors.New("[REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer")
+   ErrOptimisticParent         = errors.New("parent of the block is optimistic")
+   errRejectCommitmentLen      = errors.New("[REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer")
+   ErrSlashingSignatureFailure = errors.New("proposer slashing signature verification failed")
)

// validateBeaconBlockPubSub checks that the incoming block has a valid BLS signature.
@@ -109,6 +111,16 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms

    // Verify the block is the first block received for the proposer for the slot.
    if s.hasSeenBlockIndexSlot(blk.Block().Slot(), blk.Block().ProposerIndex()) {
+       // Attempt to detect and broadcast an equivocation before ignoring.
+       err = s.detectAndBroadcastEquivocation(ctx, blk)
+       if err != nil {
+           // If signature verification fails, reject the block.
+           if errors.Is(err, ErrSlashingSignatureFailure) {
+               return pubsub.ValidationReject, err
+           }
+           // In case there is some other error, log it but don't reject.
+           log.WithError(err).Debug("Could not detect/broadcast equivocation")
+       }
        return pubsub.ValidationIgnore, nil
    }

@@ -469,3 +481,74 @@ func getBlockFields(b interfaces.ReadOnlySignedBeaconBlock) logrus.Fields {
        "version": b.Block().Version(),
    }
}

+// detectAndBroadcastEquivocation checks if the given block is an equivocating block by comparing it with
+// the head block. If the blocks are from the same slot and proposer but have different signatures,
+// it creates and broadcasts a proposer slashing object after verification.
+func (s *Service) detectAndBroadcastEquivocation(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) error {
+   slot := blk.Block().Slot()
+   proposerIndex := blk.Block().ProposerIndex()
+
+   // Get the head block for comparison.
+   headBlock, err := s.cfg.chain.HeadBlock(ctx)
+   if err != nil {
+       return errors.Wrap(err, "could not get head block")
+   }
+
+   // Only proceed if this block is from the same slot and proposer as the head.
+   if headBlock.Block().Slot() != slot || headBlock.Block().ProposerIndex() != proposerIndex {
+       return nil
+   }
+
+   // Compare signatures.
+   sig1 := blk.Signature()
+   sig2 := headBlock.Signature()
+
+   // If the signatures match, these are the same block.
+   if sig1 == sig2 {
+       return nil
+   }
+
+   // Extract headers for the slashing.
+   header1, err := blk.Header()
+   if err != nil {
+       return errors.Wrap(err, "could not get header from new block")
+   }
+   header2, err := headBlock.Header()
+   if err != nil {
+       return errors.Wrap(err, "could not get header from head block")
+   }
+
+   slashing := &ethpb.ProposerSlashing{
+       Header_1: header1,
+       Header_2: header2,
+   }
+
+   // Get the state for verification.
+   headState, err := s.cfg.chain.HeadStateReadOnly(ctx)
+   if err != nil {
+       return errors.Wrap(err, "could not get head state")
+   }
+
+   // Verify the slashing against the current state.
+   if err := blocks.VerifyProposerSlashing(headState, slashing); err != nil {
+       if errors.Is(err, blocks.ErrCouldNotVerifyBlockHeader) {
+           return errors.Wrap(ErrSlashingSignatureFailure, err.Error())
+       }
+       return errors.Wrap(err, "could not verify proposer slashing")
+   }
+
+   // Broadcast if verification passes.
+   if !features.Get().DisableBroadcastSlashings {
+       if err := s.cfg.p2p.Broadcast(ctx, slashing); err != nil {
+           return errors.Wrap(err, "could not broadcast slashing object")
+       }
+   }
+
+   // Insert into the slashing pool.
+   if err := s.cfg.slashingPool.InsertProposerSlashing(ctx, headState, slashing); err != nil {
+       return errors.Wrap(err, "could not insert proposer slashing into pool")
+   }
+
+   return nil
+}

@@ -18,6 +18,7 @@ import (
|
||||
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
|
||||
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
|
||||
slashingsmock "github.com/OffchainLabs/prysm/v6/beacon-chain/operations/slashings/mock"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
|
||||
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
|
||||
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
|
||||
```diff
@@ -713,8 +714,21 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
 	msg.Signature, err = signing.ComputeDomainAndSign(beaconState, 0, msg.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[proposerIdx])
 	require.NoError(t, err)
 
-	chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0),
-		State: beaconState,
+	// Create a clone of the same block (same signature, not an equivocation)
+	msgClone := util.NewBeaconBlock()
+	msgClone.Block.Slot = 1
+	msgClone.Block.ProposerIndex = proposerIdx
+	msgClone.Block.ParentRoot = bRoot[:]
+	msgClone.Signature = msg.Signature // Use the same signature
+
+	signedBlock, err := blocks.NewSignedBeaconBlock(msg)
+	require.NoError(t, err)
+
+	slashingPool := &slashingsmock.PoolMock{}
+	chainService := &mock.ChainService{
+		Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0),
+		State:   beaconState,
+		Block:   signedBlock, // Set the first block as the head block
 		FinalizedCheckPoint: &ethpb.Checkpoint{
 			Epoch: 0,
 			Root:  make([]byte, 32),
@@ -728,6 +742,7 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
 			chain:         chainService,
 			clock:         startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
 			blockNotifier: chainService.BlockNotifier(),
+			slashingPool:  slashingPool,
 		},
 		seenBlockCache: lruwrpr.New(10),
 		badBlockCache:  lruwrpr.New(10),
@@ -735,10 +750,15 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
 		seenPendingBlocks: make(map[[32]byte]bool),
 	}
 
+	// Mark the proposer/slot as seen
+	r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
+	time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers
+
+	// Prepare and validate the second message (clone)
 	buf := new(bytes.Buffer)
-	_, err = p.Encoding().EncodeGossip(buf, msg)
+	_, err = p.Encoding().EncodeGossip(buf, msgClone)
 	require.NoError(t, err)
-	topic := p2p.GossipTypeMapping[reflect.TypeOf(msg)]
+	topic := p2p.GossipTypeMapping[reflect.TypeOf(msgClone)]
 	digest, err := r.currentForkDigest()
 	assert.NoError(t, err)
 	topic = r.addDigestToTopic(topic, digest)
@@ -748,11 +768,14 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
 			Topic: &topic,
 		},
 	}
-	r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
-	time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers.
 
+	// Since this is not an equivocation (same signature), it should be ignored
 	res, err := r.validateBeaconBlockPubSub(ctx, "", m)
 	assert.NoError(t, err)
-	assert.Equal(t, res, pubsub.ValidationIgnore, "seen proposer block should be ignored")
+	assert.Equal(t, pubsub.ValidationIgnore, res, "block with same signature should be ignored")
+
+	// Verify no slashings were created
+	assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")
 }
 
 func TestValidateBeaconBlockPubSub_FilterByFinalizedEpoch(t *testing.T) {
```
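The slashing assertions above go through `slashingsmock.PoolMock`. As a mental model, a minimal sketch of such a mock (field name and method follow the test's usage; the import paths and exact signature are assumptions, not the mock's verbatim source) only needs to record insertions:

```go
// Sketch of a slashings pool mock sufficient for these tests: record every
// inserted proposer slashing so assertions can count PendingPropSlashings.
package slashingsmock

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)

type PoolMock struct {
	PendingPropSlashings []*ethpb.ProposerSlashing
}

func (m *PoolMock) InsertProposerSlashing(_ context.Context, _ state.ReadOnlyBeaconState, slashing *ethpb.ProposerSlashing) error {
	m.PendingPropSlashings = append(m.PendingPropSlashings, slashing)
	return nil
}
```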
```diff
@@ -1495,3 +1518,218 @@ func Test_validateDenebBeaconBlock(t *testing.T) {
 	require.NoError(t, err)
 	require.ErrorIs(t, validateDenebBeaconBlock(bdb.Block()), errRejectCommitmentLen)
 }
+
+func TestDetectAndBroadcastEquivocation(t *testing.T) {
+	ctx := context.Background()
+	p := p2ptest.NewTestP2P(t)
+	beaconState, privKeys := util.DeterministicGenesisState(t, 100)
+
+	t.Run("no equivocation", func(t *testing.T) {
+		block := util.NewBeaconBlock()
+		block.Block.Slot = 1
+		block.Block.ProposerIndex = 0
+
+		sig, err := signing.ComputeDomainAndSign(beaconState, 0, block.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		block.Signature = sig
+
+		// Create head block with different slot/proposer
+		headBlock := util.NewBeaconBlock()
+		headBlock.Block.Slot = 2          // Different slot
+		headBlock.Block.ProposerIndex = 1 // Different proposer
+		signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
+		require.NoError(t, err)
+
+		chainService := &mock.ChainService{
+			State:   beaconState,
+			Genesis: time.Now(),
+			Block:   signedHeadBlock,
+		}
+
+		slashingPool := &slashingsmock.PoolMock{}
+		r := &Service{
+			cfg: &config{
+				p2p:          p,
+				chain:        chainService,
+				slashingPool: slashingPool,
+			},
+			seenBlockCache: lruwrpr.New(10),
+		}
+
+		signedBlock, err := blocks.NewSignedBeaconBlock(block)
+		require.NoError(t, err)
+
+		err = r.detectAndBroadcastEquivocation(ctx, signedBlock)
+		require.NoError(t, err)
+		assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings")
+	})
+
+	t.Run("equivocation detected", func(t *testing.T) {
+		// Create head block
+		headBlock := util.NewBeaconBlock()
+		headBlock.Block.Slot = 1
+		headBlock.Block.ProposerIndex = 0
+		headBlock.Block.ParentRoot = bytesutil.PadTo([]byte("parent1"), 32)
+		sig1, err := signing.ComputeDomainAndSign(beaconState, 0, headBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		headBlock.Signature = sig1
+
+		// Create second block with same slot/proposer but different contents
+		newBlock := util.NewBeaconBlock()
+		newBlock.Block.Slot = 1
+		newBlock.Block.ProposerIndex = 0
+		newBlock.Block.ParentRoot = bytesutil.PadTo([]byte("parent2"), 32)
+		sig2, err := signing.ComputeDomainAndSign(beaconState, 0, newBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		newBlock.Signature = sig2
+
+		signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
+		require.NoError(t, err)
+
+		slashingPool := &slashingsmock.PoolMock{}
+		chainService := &mock.ChainService{
+			State:   beaconState,
+			Genesis: time.Now(),
+			Block:   signedHeadBlock,
+		}
+
+		r := &Service{
+			cfg: &config{
+				p2p:          p,
+				chain:        chainService,
+				slashingPool: slashingPool,
+			},
+			seenBlockCache: lruwrpr.New(10),
+		}
+
+		signedNewBlock, err := blocks.NewSignedBeaconBlock(newBlock)
+		require.NoError(t, err)
+
+		err = r.detectAndBroadcastEquivocation(ctx, signedNewBlock)
+		require.NoError(t, err)
+
+		// Verify slashing was inserted
+		require.Equal(t, 1, len(slashingPool.PendingPropSlashings), "Expected a slashing to be inserted")
+		slashing := slashingPool.PendingPropSlashings[0]
+		assert.Equal(t, primitives.ValidatorIndex(0), slashing.Header_1.Header.ProposerIndex, "Wrong proposer index")
+		assert.Equal(t, primitives.Slot(1), slashing.Header_1.Header.Slot, "Wrong slot")
+	})
+
+	t.Run("same signature", func(t *testing.T) {
+		// Create block
+		block := util.NewBeaconBlock()
+		block.Block.Slot = 1
+		block.Block.ProposerIndex = 0
+		sig, err := signing.ComputeDomainAndSign(beaconState, 0, block.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		block.Signature = sig
+
+		signedBlock, err := blocks.NewSignedBeaconBlock(block)
+		require.NoError(t, err)
+
+		slashingPool := &slashingsmock.PoolMock{}
+		chainService := &mock.ChainService{
+			State:   beaconState,
+			Genesis: time.Now(),
+			Block:   signedBlock,
+		}
+
+		r := &Service{
+			cfg: &config{
+				p2p:          p,
+				chain:        chainService,
+				slashingPool: slashingPool,
+			},
+			seenBlockCache: lruwrpr.New(10),
+		}
+
+		err = r.detectAndBroadcastEquivocation(ctx, signedBlock)
+		require.NoError(t, err)
+		assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")
+	})
+
+	t.Run("head state error", func(t *testing.T) {
+		block := util.NewBeaconBlock()
+		block.Block.Slot = 1
+		block.Block.ProposerIndex = 0
+		block.Block.ParentRoot = bytesutil.PadTo([]byte("parent1"), 32)
+		sig1, err := signing.ComputeDomainAndSign(beaconState, 0, block.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		block.Signature = sig1
+
+		headBlock := util.NewBeaconBlock()
+		headBlock.Block.Slot = 1                                            // Same slot
+		headBlock.Block.ProposerIndex = 0                                   // Same proposer
+		headBlock.Block.ParentRoot = bytesutil.PadTo([]byte("parent2"), 32) // Different parent root
+		sig2, err := signing.ComputeDomainAndSign(beaconState, 0, headBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		headBlock.Signature = sig2
+
+		signedBlock, err := blocks.NewSignedBeaconBlock(block)
+		require.NoError(t, err)
+
+		signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
+		require.NoError(t, err)
+
+		chainService := &mock.ChainService{
+			State:        nil,
+			Block:        signedHeadBlock,
+			HeadStateErr: errors.New("could not get head state"),
+		}
+
+		r := &Service{
+			cfg: &config{
+				p2p:          p,
+				chain:        chainService,
+				slashingPool: &slashingsmock.PoolMock{},
+			},
+			seenBlockCache: lruwrpr.New(10),
+		}
+
+		err = r.detectAndBroadcastEquivocation(ctx, signedBlock)
+		require.ErrorContains(t, "could not get head state", err)
+	})
+	t.Run("signature verification failure", func(t *testing.T) {
+		// Create head block
+		headBlock := util.NewBeaconBlock()
+		headBlock.Block.Slot = 1
+		headBlock.Block.ProposerIndex = 0
+		sig1, err := signing.ComputeDomainAndSign(beaconState, 0, headBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
+		require.NoError(t, err)
+		headBlock.Signature = sig1
+
+		// Create test block with invalid signature
+		newBlock := util.NewBeaconBlock()
+		newBlock.Block.Slot = 1
+		newBlock.Block.ProposerIndex = 0
+		newBlock.Block.ParentRoot = bytesutil.PadTo([]byte("different"), 32)
+		// generate invalid signature
+		invalidSig := make([]byte, 96)
+		copy(invalidSig, []byte("invalid signature"))
+		newBlock.Signature = invalidSig
+
+		signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
+		require.NoError(t, err)
+		signedNewBlock, err := blocks.NewSignedBeaconBlock(newBlock)
+		require.NoError(t, err)
+
+		slashingPool := &slashingsmock.PoolMock{}
+		chainService := &mock.ChainService{
+			State:   beaconState,
+			Genesis: time.Now(),
+			Block:   signedHeadBlock,
+		}
+
+		r := &Service{
+			cfg: &config{
+				p2p:          p,
+				chain:        chainService,
+				slashingPool: slashingPool,
+			},
+			seenBlockCache: lruwrpr.New(10),
+		}
+
+		err = r.detectAndBroadcastEquivocation(ctx, signedNewBlock)
+		require.ErrorIs(t, err, ErrSlashingSignatureFailure)
+	})
+}
```
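For reference, the evidence shape these subtests assert on is a pair of signed headers from the same proposer and slot with differing contents. A minimal illustrative sketch (field names follow the assertions above; values are placeholders, not taken from the test):

```go
// Two conflicting signed headers for slot 1 by proposer 0 form the
// slashing evidence; signatures and body roots are elided here.
evidence := &ethpb.ProposerSlashing{
	Header_1: &ethpb.SignedBeaconBlockHeader{
		Header: &ethpb.BeaconBlockHeader{Slot: 1, ProposerIndex: 0},
	},
	Header_2: &ethpb.SignedBeaconBlockHeader{
		Header: &ethpb.BeaconBlockHeader{Slot: 1, ProposerIndex: 0},
	},
}
_ = evidence
```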
```diff
@@ -16,6 +16,11 @@ func AsVerificationFailure(err error) error {
 	return errors.Join(ErrInvalid, err)
 }
 
+// IsBlobValidationFailure checks if the given error is a blob validation failure.
+func IsBlobValidationFailure(err error) bool {
+	return errors.Is(err, ErrBlobInvalid)
+}
+
 var (
 	// ErrBlobInvalid is joined with all other blob verification errors. This enables other packages to check for any sort of
 	// verification error at one point, like sync code checking for peer scoring purposes.
```
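`IsBlobValidationFailure` leans on Go's sentinel-join pattern: `errors.Join` (Go 1.20+) combines the sentinel with the concrete failure, and `errors.Is` later finds the sentinel through the join. A standalone sketch of the pattern, not Prysm code:

```go
// Illustration of the errors.Join / errors.Is sentinel pattern.
package main

import (
	"errors"
	"fmt"
)

var errBlobInvalid = errors.New("blob failed verification")

// asBlobValidationFailure tags any concrete failure with the sentinel.
func asBlobValidationFailure(err error) error {
	return errors.Join(errBlobInvalid, err)
}

func main() {
	err := asBlobValidationFailure(errors.New("bad KZG proof"))
	fmt.Println(errors.Is(err, errBlobInvalid)) // true: Is traverses the join
}
```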
changelog/bastin_add-lc-p2p-broadcasters.md (new file, 3 lines)
```diff
@@ -0,0 +1,3 @@
+### Added
+
+- Add light client p2p broadcaster functions.
```

Four changelog fragment files are deleted (their file names are not shown in this view):

```diff
@@ -1,3 +0,0 @@
-### Added
-
-- Add light client ssz types to the spec test
@@ -1,3 +0,0 @@
-### Added
-
-- Add light client store object to the beacon node object.
@@ -1,3 +0,0 @@
-### Ignored
-
-- add two parameters `increaseAttestedSlotBy` and `supermajority` to the lc test utils.
@@ -1,3 +0,0 @@
-### Added
-
-- Add SSZ support to light client updates by range API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15082)
```