Compare commits

...

35 Commits

Author SHA1 Message Date
Manu NALEPA
66d1d3e248 Use finalized state for validator custody instead of head state. (#15243)
* `finalizedState` ==> `FinalizedState`.
We'll need it in another package later.

* `setTargetValidatorsCustodyRequirement`: Use finalized state instead of head state.

* Fix James's comment.
2025-05-05 21:13:49 +02:00
Manu NALEPA
99933678ea Peerdas fix get blobs v2 (#15234)
* `reconstructAndBroadcastDataColumnSidecars`: Improve logging.

* `ReconstructDataColumnSidecars`: Add comments and return early if needed.

* `reconstructAndBroadcastDataColumnSidecars`: Return early if no blobs are retrieved from the EL.

* `filterPeerWhichCustodyAtLeastOneDataColumn`: Remove unneeded log field.

* Fix Terence's comment.
2025-05-02 17:34:32 +02:00
Manu NALEPA
34f8e1e92b Data columns by range: Use all possible peers, then filter them. (#15242) 2025-05-02 12:15:02 +02:00
terence
a6a41a8755 Add column sidecar inclusion proof cache (#15217) 2025-04-29 13:46:32 +02:00
terence
f110b94fac Add flag to subscribe to all blob column subnets (#15197)
* Separate subscribing to data columns from attestation and sync committee subnets

* Fix test

* Rename to subscribe-data-subnets

* Update to subscribe-all-data-subnets

* `--subscribe-all-data-subnets`: Add `.` at the end of help, since it seems to be the consensus.

* `ConfigureGlobalFlags`: Fix log.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-04-29 11:59:17 +02:00
Manu NALEPA
33023aa282 Merge branch 'develop' into peerDAS 2025-04-29 11:13:27 +02:00
Manu NALEPA
1298dc3a46 PeerDAS: Add needed proto files and corresponding generated code. (#15187)
* PeerDAS: Add needed proto files and corresponding generated code.

* Fix Nishant's comment.

* `max_cell_proofs_length.size`: Set to `CELLS_PER_EXT_BLOB * MAX_BLOB_COMMITMENTS_PER_BLOCK`.

* `BlobsBundleV2`: Add comment.
2025-04-28 16:08:28 +00:00
Potuz
3c463d8171 Pass dependent roots in block events (#15227)
* Pass dependent roots in block events

* Check for empty roots
2025-04-28 00:00:48 +00:00
kasey
0a48fafc71 extend payload attribute deadline to proposal slot (#15230)
* extend payload attribute deadline to proposal slot

* Update beacon-chain/blockchain/execution_engine.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* Terence's feedback

* skip past proposal slots

* lint and skip events during init sync

* finagle test to not trigger old event error

* fix blockchain tests that panic without SyncChecker

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-04-27 23:59:37 +00:00
terence
28cd59c9b7 Fix payload attribute proposer calculation (#15228)
Process slots if epoch is greater than head state epoch
2025-04-27 17:52:09 +00:00
james-prysm
efaf6649e7 Update grpc deprecation message (#15222)
* comment updates and changelog

* some missed comments
2025-04-25 19:50:34 +00:00
Potuz
a1c1edf285 Fix deadlines (#15221)
* Fix deadlines

* use current slot in update duties

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-04-25 19:10:20 +00:00
Radosław Kapka
bde7a57ec9 Implement pending consolidations Beacon API endpoint (#15219)
* get block

* Implement pending consolidations API endpoint

* changelog

* fixing test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-04-25 18:27:58 +00:00
kasey
c223957751 include block in attr event and use stategen (#15213)
* include block in attr event and use stategen

* use no-copy state cache for proposal in same epoch

* only advance to the start of epoch

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
2025-04-25 15:11:38 +00:00
Preston Van Loon
f7eddedd1d update geth v1.15.9 (#15216)
* Update go-ethereum to v1.15.9

* Fix go-ethereum secp256k1 build after https://github.com/ethereum/go-ethereum/pull/31242

* Fix Ping API change

* Changelog fragment
2025-04-25 12:40:19 +00:00
kira
7887ebbc4a Broadcast Proposer Slashing on equivocation (#14693)
* Add equivocation detection logic; broadcast slashing immediately on equivocation

* nit: comments

* move equivocation detection to validateBeaconBlockPubSub

* include broadcasting logic within the helper function

* fix lint

* Add unit tests for equivocation detection

* remove comments that are not required

* Add changelog file

* Add descriptive comment for detectAndBroadcastEquivocation

* use head block instead of block cache for equivocation detection

* add more equivocation unit tests; update a mock to include HeadState error

* update the order of the checks

* move slashing before state fetch; update Tests

* update changelog

* use verifyProposerSlashing to verify and reject block; remove verifySlashableBlock; update tests

* Update changelog

* nit: cleaner error check

* nit: clean up

* revert code logic; update string check; add a unit test

* improve errors; merge tests

* Update a unit test

* fix lint

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-04-24 20:27:34 +00:00
Nishant Das
1b13520270 Fix Unmarshalling of BlobSidecarsByRoot Requests (#15209)
* Handle Electra Lists

* Changelog
2025-04-23 14:04:46 +00:00
Tronica
0936628b72 refactor(sync): rename reValidateSubscriptions to pruneSubscriptions (#15160)
* Update subscriber.go

* Update subscriber_test.go

* Create pronoss_refactor_subscriber_rename.md

* Update pronoss_refactor_subscriber_rename.md

* Update pronoss_refactor_subscriber_rename.md

* Update pronoss_refactor_subscriber_rename.md

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-04-22 13:26:57 +00:00
Nishant Das
478ae81ed1 Update Insecure Dependencies (#15204) 2025-04-22 09:10:03 +00:00
Nishant Das
93276150e7 Fix Generated SSZ Files (#15199)
* Regenerate Files

* Changelog
2025-04-22 04:52:27 +00:00
Preston Van Loon
83460c9956 Changelog v6 (#15203)
* Run unclog for v6.0.0

* Add changelog note for v6 release

* Changelog fragment
2025-04-22 01:03:42 +00:00
Bastin
d30bb63d94 Add lc p2p broadcaster functions (#15175)
* add lc broadcasters

* gazelle

* changelog entry

* remove subnet

* implement and use IsNil()

* address comments

* Apply suggestions from code review

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* address comments

* address comments

* deps

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-04-18 12:28:12 +00:00
Manu NALEPA
eeb3cdc99e Merge branch 'develop' into peerDAS 2025-04-18 08:37:33 +02:00
Manu NALEPA
ab5505e13e Implement all needed KZG wrappers for peerDAS in the kzg package. (#15186)
* Implement all needed KZG wrappers for peerDAS in the `kzg` package.

This way, if we need to change the KZG backend, the only package to
modify is the `kzg` package.

* `.bazelrc`: Add `build --compilation_mode=opt`

* Remove --compilation_mode=opt, use supranational blst headers.

* Fix Terence's comment.

* Fix Terence's comments.

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-04-17 22:43:41 +00:00
Preston Van Loon
1e7147f060 Remove --compilation_mode=opt, use supranational blst headers. 2025-04-17 20:53:54 +02:00
james-prysm
9c00b06966 fix expected withdrawals (#15191)
* fixed underflow with expected withdrawals

* update comment

* Revert "update comment"

This reverts commit e07da541ac.

* attempting to fix comment indents

* fixing another missed tab in comments

* trying tabs one more time for fixing tabs

* adding underflow safety

* fixing error typo

* missed wrapping the error
2025-04-17 18:17:53 +00:00
Manu NALEPA
8936beaff3 Merge branch 'develop' into peerDAS 2025-04-17 16:49:22 +02:00
Manu NALEPA
167f719860 Upgrade to fulu: Fix and add spectests (#15190)
* `UpgradeToFulu`: Fix.

* `UpgradeToFulu`: Add spectests.
2025-04-17 14:13:32 +00:00
Manu NALEPA
c00283f247 UpgradeToFulu: Add spec tests. (#15189) 2025-04-17 15:17:27 +02:00
Manu NALEPA
a4269cf308 Add tests (#15188) 2025-04-17 13:12:46 +02:00
Manu NALEPA
91f3c8a4d0 c-kzg-4844 lib: Update to v2.1.1. (#15185) 2025-04-17 01:25:36 +02:00
terence
30c7ee9c7b Validate parent block exists before signature (#15184)
* Validate parent block exists before signature

* `ValidProposerSignature`: Add comment

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-04-17 00:40:48 +02:00
Manu NALEPA
456d8b9eb9 Merge branch 'develop' into peerDAS-do-not-merge 2025-04-16 22:58:38 +02:00
mmsqe
d4469d17b7 Problem: nondeterministic default fork value when generating genesis (#15151)
* Problem: nondeterministic default fork value when generating genesis

add sort versions

* add doc

* Apply suggestions from code review

* lint

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-04-16 16:33:48 +00:00
kasey
8418157f8a improve peer scoring code in range sync (#15173)
* separate block/blob peer scoring

* Preston's test coverage feedback

* test to ensure we don't combine distinct errors

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
2025-04-16 13:47:58 +00:00
217 changed files with 2665 additions and 1207 deletions

View File

@@ -22,7 +22,6 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true

View File

@@ -4,6 +4,64 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.0.0](https://github.com/prysmaticlabs/prysm/compare/v5.3.2...v6.0.0) - 2025-04-21
This release introduces Mainnet support for the upcoming Electra + Prague (Pectra) fork. The fork is scheduled for mainnet epoch 364032 (May 7, 2025, 10:05:11 UTC). You MUST update Prysm Beacon Node, Prysm Validator Client, and your execution layer client to a Pectra-ready release prior to the fork to stay on the correct chain.
Besides Pectra, we have more light client API support, cleanups, and a few bugfixes. Please review the changelog below and update your client as soon as practical before May 7.
This release is **mandatory** for all operators before May 7.
### Added
- Implemented validator identities Beacon API endpoint. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15086)
- Add SSZ support to light client updates by range API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15082)
- Add light client ssz types to the spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15097)
- Added the ability for execution requests to be tested in e2e with electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14971)
- Add warning messages for gas limit ranges that might be problematic. Low gas limits (≤10% of default) may cause transactions to fail, while high gas limits (>150% of default) could lead to block propagation issues (a sketch of this check follows this list). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15078)
- Add light client store object to the beacon node object. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15120)
- Added a prysmctl option in the wrapper script to generate devnet SSZ. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15145)
- Add support for Electra fork epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15132)
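As a rough illustration of the thresholds mentioned in the gas limit warning entry above, here is a minimal Go sketch. The function and variable names are illustrative only (not Prysm identifiers); only the 10% / 150% thresholds come from the entry.

package main

import "log"

// warnOnGasLimit is a minimal sketch of the threshold check described in the
// changelog entry above. Only the 10% / 150% thresholds come from the entry;
// everything else is illustrative.
func warnOnGasLimit(gasLimit, defaultGasLimit uint64) {
    switch {
    case gasLimit*10 <= defaultGasLimit:
        // At or below 10% of the default: transactions may fail to fit in blocks.
        log.Printf("gas limit %d is very low (<=10%% of default %d)", gasLimit, defaultGasLimit)
    case gasLimit*2 > defaultGasLimit*3:
        // Above 150% of the default: blocks may propagate poorly.
        log.Printf("gas limit %d is very high (>150%% of default %d)", gasLimit, defaultGasLimit)
    }
}

func main() {
    warnOnGasLimit(2_000_000, 30_000_000)  // low: warns
    warnOnGasLimit(60_000_000, 30_000_000) // high: warns
    warnOnGasLimit(30_000_000, 30_000_000) // within range: silent
}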
### Changed
- The validator client will no longer use the full list of committee values but instead use the committee length and validator committee index. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15039)
- Remove the header `Content-Disposition` from the `httputil.WriteSSZ` function. No `filename` parameter is needed anymore. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15092)
- Sort attestations in proposer block by reward. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15093)
- More efficient query method for stategen to retrieve blocks between a given state and the replay target block. This avoids attempting to look up blocks that are not needed for head replay queries, which may be missing due to a previous rollback bug. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15063)
- Removed old web3signer metrics in favor of a universal one. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14920)
- Deprecated everything related to the gRPC API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/14944)
- Migrate Prysm repo to Offchain Labs organization ahead of Pectra upgrade v6. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15140)
### Deprecated
- Deprecated and removed usage of the `--trace` and `--cpuprofile` flags in favor of just using `--pprof`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15083)
### Removed
- Remove /eth/v1/beacon/states/head/committees call when getting duties. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15039)
- Removed unused hack scripts. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15157)
- Remove `disable-committee-aware-packing` flag. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15162)
- Remove deprecated flags for the major release. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15165)
- Removed Beacon API endpoints which have been deprecated at the Deneb fork. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15166)
### Fixed
- The `--rpc` flag will now properly enable the keymanager APIs without web. The `--web` flag will enable both the validator API endpoints and the web UI. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15080)
- Use latest state to pack attestation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15113)
- Clean up dangling block index entries for blocks that were previously deleted by incomplete cleanup code. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15040)
- Fixed to use io stream instead of stream read. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15089)
- When using a DV, send all aggregations for a slot and committee. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15110)
- Fixed a bug in consolidation request processing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15122)
- Fix State Getter for pending withdrawal balance. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15123)
- Fixed a bug in checking for attestation lengths in our block. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15134)
- Fix Committee Index Check For Aggregates. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15146)
- Fix filtering by committee index post-Electra in `ListAttestationsV2`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15148)
- Peers giving invalid data in range syncing are now downscored. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15149)
- Added a fork guard to attestation API endpoints so they don't accidentally include the wrong attestation types in the pool. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15161)
- Fixed an underflow with balances in a leaking edge case with expected withdrawals. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15191)
- Attribute block and blob issues to correct peers during range syncing. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15173)
## [v5.3.2](https://github.com/prysmaticlabs/prysm/compare/v5.3.1...v5.3.2) - 2025-03-25
This release introduces support for the `Hoodi` testnet.
@@ -3255,4 +3313,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -263,6 +263,13 @@ type ChainHead struct {
OptimisticStatus bool `json:"optimistic_status"`
}
type GetPendingConsolidationsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*PendingConsolidation `json:"data"`
}
type GetPendingDepositsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`

View File

@@ -184,13 +184,17 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
return payloadID, nil
}
func firePayloadAttributesEvent(_ context.Context, f event.SubscriberSender, nextSlot primitives.Slot) {
func (s *Service) firePayloadAttributesEvent(f event.SubscriberSender, block interfaces.ReadOnlySignedBeaconBlock, root [32]byte, nextSlot primitives.Slot) {
// If we're syncing a block in the past and init-sync is still running, we shouldn't fire this event.
if !s.cfg.SyncChecker.Synced() {
return
}
// the fcu args have differing amounts of completeness based on the code path,
// and there is work we only want to do if a client is actually listening to the events beacon api endpoint.
// temporary solution: just fire a blank event and fill in the details in the api handler.
f.Send(&feed.Event{
Type: statefeed.PayloadAttributes,
Data: payloadattribute.EventData{ProposalSlot: nextSlot},
Data: payloadattribute.EventData{HeadBlock: block, HeadRoot: root, ProposalSlot: nextSlot},
})
}

View File

@@ -102,7 +102,7 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
log.WithError(err).Error("could not save head")
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), s.CurrentSlot()+1)
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), args.headBlock, args.headRoot, s.CurrentSlot()+1)
// Only need to prune attestations from pool if the head has changed.
s.pruneAttsFromPool(s.ctx, args.headState, args.headBlock)

View File

@@ -7,7 +7,7 @@ go_library(
"trusted_setup.go",
"validation.go",
],
embedsrcs = ["trusted_setup.json"],
embedsrcs = ["trusted_setup_4096.json"],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg",
visibility = ["//visibility:public"],
deps = [

View File

@@ -31,25 +31,31 @@ type Bytes48 = ckzg4844.Bytes48
// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32
// CellsAndProofs represents the Cells and Proofs corresponding to
// a single blob.
// CellsAndProofs represents the Cells and Proofs corresponding to a single blob.
type CellsAndProofs struct {
Cells []Cell
Proofs []Proof
}
// BlobToKZGCommitment computes a KZG commitment from a given blob.
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
kzgBlob := kzg4844.Blob(*blob)
comm, err := kzg4844.BlobToCommitment(&kzgBlob)
var kzgBlob kzg4844.Blob
copy(kzgBlob[:], blob[:])
commitment, err := kzg4844.BlobToCommitment(&kzgBlob)
if err != nil {
return Commitment{}, err
}
return Commitment(comm), nil
return Commitment(commitment), nil
}
// ComputeCells computes the (extended) cells from a given blob.
func ComputeCells(blob *Blob) ([]Cell, error) {
ckzgBlob := (*ckzg4844.Blob)(blob)
ckzgCells, err := ckzg4844.ComputeCells(ckzgBlob)
var ckzgBlob ckzg4844.Blob
copy(ckzgBlob[:], blob[:])
ckzgCells, err := ckzg4844.ComputeCells(&ckzgBlob)
if err != nil {
return nil, errors.Wrap(err, "compute cells")
}
@@ -58,11 +64,15 @@ func ComputeCells(blob *Blob) ([]Cell, error) {
for i := range ckzgCells {
cells[i] = Cell(ckzgCells[i])
}
return cells, nil
}
// ComputeBlobKZGProof computes the blob KZG proof from a given blob and its commitment.
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
kzgBlob := kzg4844.Blob(*blob)
var kzgBlob kzg4844.Blob
copy(kzgBlob[:], blob[:])
proof, err := kzg4844.ComputeBlobProof(&kzgBlob, kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
@@ -70,9 +80,12 @@ func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
return Proof(proof), nil
}
// ComputeCellsAndKZGProofs computes the cells and cells KZG proofs from a given blob.
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
ckzgBlob := (*ckzg4844.Blob)(blob)
ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(ckzgBlob)
var ckzgBlob ckzg4844.Blob
copy(ckzgBlob[:], blob[:])
ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(&ckzgBlob)
if err != nil {
return CellsAndProofs{}, err
}
@@ -80,9 +93,12 @@ func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
// VerifyCellKZGProofBatch verifies the KZG proofs for a given slice of commitments, cells indices, cells and proofs.
// Note: It is way more efficient to call this function once with big slices than to call it multiple times with small slices.
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, cells []Cell, proofsBytes []Bytes48) (bool, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(cells))
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
@@ -90,6 +106,7 @@ func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, c
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}
// RecoverCellsAndKZGProofs recovers the complete cells and KZG proofs from a given set of cell indices and partial cells.
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
@@ -105,14 +122,15 @@ func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsA
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
// Convert cells/proofs to the CellsAndProofs type defined in this package.
// makeCellsAndProofs converts cells/proofs to the CellsAndProofs type defined in this package.
func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
if len(ckzgCells) != len(ckzgProofs) {
return CellsAndProofs{}, errors.New("different number of cells/proofs")
}
var cells []Cell
var proofs []Proof
cells := make([]Cell, 0, len(ckzgCells))
proofs := make([]Proof, 0, len(ckzgProofs))
for i := range ckzgCells {
cells = append(cells, Cell(ckzgCells[i]))
proofs = append(proofs, Proof(ckzgProofs[i]))
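The batching note on `VerifyCellKZGProofBatch` above is the reason callers aggregate inputs from many sidecars before verifying. A minimal caller-side sketch follows, assuming only the `kzg` package import path and the function signature shown in this diff; the `sidecarInput` type and `verifyAll` helper are illustrative names, not Prysm code.

package example

import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
)

// sidecarInput groups the verification inputs extracted from one data column
// sidecar. It is an illustrative type for this sketch, not a Prysm type.
type sidecarInput struct {
    Commitments []kzg.Bytes48
    CellIndices []uint64
    Cells       []kzg.Cell
    Proofs      []kzg.Bytes48
}

// verifyAll flattens the inputs from many sidecars and performs a single
// VerifyCellKZGProofBatch call, which is far cheaper than verifying each
// sidecar separately.
func verifyAll(inputs []sidecarInput) (bool, error) {
    var commitments, proofs []kzg.Bytes48
    var indices []uint64
    var cells []kzg.Cell
    for _, in := range inputs {
        commitments = append(commitments, in.Commitments...)
        indices = append(indices, in.CellIndices...)
        cells = append(cells, in.Cells...)
        proofs = append(proofs, in.Proofs...)
    }
    return kzg.VerifyCellKZGProofBatch(commitments, indices, cells, proofs)
}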

View File

@@ -11,7 +11,8 @@ import (
)
var (
//go:embed trusted_setup.json
// https://github.com/ethereum/consensus-specs/blob/dev/presets/mainnet/trusted_setups/trusted_setup_4096.json
//go:embed trusted_setup_4096.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
kzgLoaded bool
@@ -29,9 +30,11 @@ func Start() error {
if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON")
}
kzgContext, err = GoKZG.NewContext4096(&GoKZG.JSONTrustedSetup{
SetupG2: trustedSetup.G2Monomial[:],
SetupG1Lagrange: trustedSetup.G1Lagrange})
SetupG1Lagrange: trustedSetup.G1Lagrange,
})
if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context")
}
@@ -41,26 +44,30 @@ func Start() error {
for i, g1 := range &trustedSetup.G1Monomial {
copy(g1MonomialBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G1 point, converted from hex to binary.
g1LagrangeBytes := make([]byte, len(trustedSetup.G1Lagrange)*(len(trustedSetup.G1Lagrange[0])-2)/2)
for i, g1 := range &trustedSetup.G1Lagrange {
copy(g1LagrangeBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G2 point, converted from hex to binary.
g2MonomialBytes := make([]byte, len(trustedSetup.G2Monomial)*(len(trustedSetup.G2Monomial[0])-2)/2)
for i, g2 := range &trustedSetup.G2Monomial {
copy(g2MonomialBytes[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// TODO: Provide a configuration option for this.
var precompute uint = 8
// Free the current trusted setup before running this method. CKZG
// panics if the same setup is run multiple times.
if !kzgLoaded {
const precompute uint = 8
kzgLoaded = true
// Free the current trusted setup before running this method.
// CKZG panics if the same setup is run multiple times.
if err = CKZG.LoadTrustedSetup(g1MonomialBytes, g1LagrangeBytes, g2MonomialBytes, precompute); err != nil {
return errors.Wrap(err, "load trust setup")
}
}
kzgLoaded = true
return nil
}

View File

@@ -20,6 +20,7 @@ func testServiceOptsWithDB(t *testing.T) []Option {
WithForkChoiceStore(fcs),
WithClockSynchronizer(cs),
WithStateNotifier(&mock.MockStateNotifier{RecordEvents: true}),
WithSyncChecker(&mock.MockSyncChecker{}),
}
}

View File

@@ -924,9 +924,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).WithField("head_root", headRoot).Error("unable to retrieve head block to fire payload attributes event")
}
// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
// call notifyForkchoiceUpdate, so the event is fired here.
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), s.CurrentSlot()+1)
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), headBlock, headRoot, s.CurrentSlot()+1)
return
}

View File

@@ -103,15 +103,29 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
currEpoch := slots.ToEpoch(s.CurrentSlot())
currDependentRoot, err := s.cfg.ForkChoiceStore.DependentRoot(currEpoch)
if err != nil {
log.WithError(err).Debug("Could not get dependent root")
}
prevDependentRoot := [32]byte{}
if currEpoch > 0 {
prevDependentRoot, err = s.cfg.ForkChoiceStore.DependentRoot(currEpoch - 1)
if err != nil {
log.WithError(err).Debug("Could not get previous dependent root")
}
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: cfg.roblock.Block().Slot(),
BlockRoot: cfg.roblock.Root(),
SignedBlock: cfg.roblock,
Verified: true,
Optimistic: optimistic,
Slot: cfg.roblock.Block().Slot(),
BlockRoot: cfg.roblock.Root(),
SignedBlock: cfg.roblock,
CurrDependentRoot: currDependentRoot,
PrevDependentRoot: prevDependentRoot,
Verified: true,
Optimistic: optimistic,
},
})
}

View File

@@ -53,6 +53,7 @@ type ChainService struct {
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
HeadStateErr error
Block interfaces.ReadOnlySignedBeaconBlock
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
@@ -365,6 +366,9 @@ func (s *ChainService) HeadState(context.Context) (state.BeaconState, error) {
// HeadStateReadOnly mocks HeadStateReadOnly method in chain service.
func (s *ChainService) HeadStateReadOnly(context.Context) (state.ReadOnlyBeaconState, error) {
if s.HeadStateErr != nil {
return nil, s.HeadStateErr
}
return s.State, nil
}
@@ -727,3 +731,14 @@ func (*ChainService) ReceiveDataColumns(_ []blocks.VerifiedRODataColumn) error {
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil
}
// MockSyncChecker is a mock implementation of blockchain.Checker.
// We can't add a compile-time assertion of that here because it would create a circular dependency.
type MockSyncChecker struct {
synced bool
}
// Synced satisfies the blockchain.Checker interface.
func (m *MockSyncChecker) Synced() bool {
return m.synced
}

View File

@@ -16,6 +16,9 @@ import (
"google.golang.org/protobuf/proto"
)
// ErrCouldNotVerifyBlockHeader is returned when a block header's signature cannot be verified.
var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")
type slashValidatorFunc func(
ctx context.Context,
st state.BeaconState,
@@ -114,7 +117,7 @@ func VerifyProposerSlashing(
for _, header := range headers {
if err := signing.ComputeDomainVerifySigningRoot(beaconState, pIdx, slots.ToEpoch(hSlot),
header.Header, params.BeaconConfig().DomainBeaconProposer, header.Signature); err != nil {
return errors.Wrap(err, "could not verify beacon block header")
return errors.Wrap(ErrCouldNotVerifyBlockHeader, err.Error())
}
}
return nil

View File

@@ -11,6 +11,8 @@ const (
// ReceivedBlockData is the data sent with ReceivedBlock events.
type ReceivedBlockData struct {
SignedBlock interfaces.ReadOnlySignedBeaconBlock
IsOptimistic bool
SignedBlock interfaces.ReadOnlySignedBeaconBlock
CurrDependentRoot [32]byte
PrevDependentRoot [32]byte
IsOptimistic bool
}

View File

@@ -43,6 +43,10 @@ type BlockProcessedData struct {
BlockRoot [32]byte
// SignedBlock is the physical processed block.
SignedBlock interfaces.ReadOnlySignedBeaconBlock
// CurrDependentRoot is the current dependent root
CurrDependentRoot [32]byte
// PrevDependentRoot is the previous dependent root
PrevDependentRoot [32]byte
// Verified is true if the block's BLS contents have been verified.
Verified bool
// Optimistic is true if the block is optimistic.

View File

@@ -11,7 +11,7 @@ import (
)
// UpgradeToFulu upgrades an input generic state and returns the Fulu-version state.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/fork.md#upgrading-the-state
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
@@ -69,7 +69,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
depostiRequestsStartIndex, err := beaconState.DepositRequestsStartIndex()
depositRequestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
@@ -158,7 +158,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: depostiRequestsStartIndex,
DepositRequestsStartIndex: depositRequestsStartIndex,
DepositBalanceToConsume: depositBalanceToConsume,
ExitBalanceToConsume: exitBalanceToConsume,
EarliestExitEpoch: earliestExitEpoch,

View File

@@ -10,6 +10,7 @@ go_library(
"peer_sampling.go",
"reconstruction.go",
"util.go",
"validator.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas",
visibility = ["//visibility:public"],
@@ -47,6 +48,7 @@ go_test(
"peer_sampling_test.go",
"reconstruction_test.go",
"utils_test.go",
"validator_test.go",
],
deps = [
":go_default_library",

View File

@@ -7,12 +7,10 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
beaconState "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -23,6 +21,7 @@ import (
var (
// Custom errors
ErrCustodyGroupTooLarge = errors.New("custody group too large")
errCustodyGroupCountTooLarge = errors.New("custody group count too large")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
@@ -38,7 +37,7 @@ const (
)
// CustodyGroups computes the custody groups the node should participate in for custody.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#get_custody_groups
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) (map[uint64]bool, error) {
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
@@ -81,7 +80,7 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) (map[uint64]bool,
}
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#compute_columns_for_custody_group
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
@@ -176,67 +175,6 @@ func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsA
return sidecars, nil
}
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
func (custodyInfo *CustodyInfo) CustodyGroupSamplingSize(ct CustodyType) uint64 {
custodyGroupCount := custodyInfo.TargetGroupCount.Get()
if ct == Actual {
custodyGroupCount = custodyInfo.ActualGroupCount()
}
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
return max(samplesPerSlot, custodyGroupCount)
}
// CustodyColumns computes the custody columns from the custody groups.
func CustodyColumns(custodyGroups map[uint64]bool) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
custodyGroupCount := len(custodyGroups)
// Compute the columns for each custody group.
columns := make(map[uint64]bool, custodyGroupCount)
for group := range custodyGroups {
if group >= numberOfCustodyGroups {
return nil, errCustodyGroupCountTooLarge
}
groupColumns, err := ComputeColumnsForCustodyGroup(group)
if err != nil {
return nil, errors.Wrap(err, "compute columns for custody group")
}
for _, column := range groupColumns {
columns[column] = true
}
}
return columns, nil
}
// ValidatorsCustodyRequirement returns the number of custody groups regarding the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
totalNodeBalance := uint64(0)
for index := range validatorsIndex {
balance, err := state.BalanceAtIndex(index)
if err != nil {
return 0, errors.Wrapf(err, "balance at index for validator index %v", index)
}
totalNodeBalance += balance
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroup), nil
}
// Blobs extracts blobs from `dataColumnsSidecar`.
// This can be seen as the reciprocal function of DataColumnSidecars.
// `dataColumnsSidecar` needs to contain the data columns corresponding to the non-extended matrix,
@@ -344,6 +282,45 @@ func Blobs(indices map[uint64]bool, dataColumnsSidecar []*ethpb.DataColumnSideca
return verifiedROBlobs, nil
}
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#custody-sampling
func (custodyInfo *CustodyInfo) CustodyGroupSamplingSize(ct CustodyType) uint64 {
custodyGroupCount := custodyInfo.TargetGroupCount.Get()
if ct == Actual {
custodyGroupCount = custodyInfo.ActualGroupCount()
}
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
return max(samplesPerSlot, custodyGroupCount)
}
// CustodyColumns computes the custody columns from the custody groups.
func CustodyColumns(custodyGroups map[uint64]bool) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
custodyGroupCount := len(custodyGroups)
// Compute the columns for each custody group.
columns := make(map[uint64]bool, custodyGroupCount)
for group := range custodyGroups {
if group >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
groupColumns, err := ComputeColumnsForCustodyGroup(group)
if err != nil {
return nil, errors.Wrap(err, "compute columns for custody group")
}
for _, column := range groupColumns {
columns[column] = true
}
}
return columns, nil
}
// populateAndFilterIndices returns a sorted slice of indices, setting all indices if none are provided,
// and filtering out indices higher than the blob count.
func populateAndFilterIndices(indices map[uint64]bool, blobCount uint64) []uint64 {

View File

@@ -5,16 +5,19 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/pkg/errors"
)
// ---------------------------------------------------------------
// ( CustodyGroups is unit tested in spec tests. )
// ( ComputeColumnsForCustodyGroup is unit tested in spec tests. )
// ---------------------------------------------------------------
func TestDataColumnSidecars(t *testing.T) {
var expected []*ethpb.DataColumnSidecar = nil
actual, err := peerdas.DataColumnSidecars(nil, []kzg.CellsAndProofs{})
@@ -151,46 +154,6 @@ func TestDataColumnsSidecarsBlobsRoundtrip(t *testing.T) {
require.DeepSSZEqual(t, verifiedROBlobs, roundtripBlobs)
}
func TestValidatorsCustodyRequirement(t *testing.T) {
testCases := []struct {
name string
count uint64
expected uint64
}{
{name: "0 validators", count: 0, expected: 8},
{name: "1 validator", count: 1, expected: 8},
{name: "8 validators", count: 8, expected: 8},
{name: "9 validators", count: 9, expected: 9},
{name: "100 validators", count: 100, expected: 100},
{name: "128 validators", count: 128, expected: 128},
{name: "129 validators", count: 129, expected: 128},
{name: "1000 validators", count: 1000, expected: 128},
}
const balance = uint64(32_000_000_000)
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
balances := make([]uint64, 0, tc.count)
for range tc.count {
balances = append(balances, balance)
}
validatorsIndex := make(map[primitives.ValidatorIndex]bool)
for i := range tc.count {
validatorsIndex[primitives.ValidatorIndex(i)] = true
}
beaconState, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateElectra{Balances: balances})
require.NoError(t, err)
actual, err := peerdas.ValidatorsCustodyRequirement(beaconState, validatorsIndex)
require.NoError(t, err)
require.Equal(t, tc.expected, actual)
})
}
}
func TestCustodyGroupSamplingSize(t *testing.T) {
testCases := []struct {
name string
@@ -246,3 +209,22 @@ func TestCustodyGroupSamplingSize(t *testing.T) {
})
}
}
func TestCustodyColumns(t *testing.T) {
t.Run("group too large", func(t *testing.T) {
_, err := peerdas.CustodyColumns(map[uint64]bool{1_000_000: true})
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
})
t.Run("nominal", func(t *testing.T) {
input := map[uint64]bool{1: true, 2: true}
expected := map[uint64]bool{1: true, 2: true}
actual, err := peerdas.CustodyColumns(input)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for i := range actual {
require.Equal(t, expected[i], actual[i])
}
})
}

View File

@@ -103,6 +103,61 @@ func Info(nodeID enode.ID, custodyGroupCount uint64) (*info, bool, error) {
return result, false, nil
}
// ActualGroupCount returns the actual custody group count.
func (custodyInfo *CustodyInfo) ActualGroupCount() uint64 {
return min(custodyInfo.TargetGroupCount.Get(), custodyInfo.ToAdvertiseGroupCount.Get())
}
// Get returns the number of custody groups we should participate in for custody.
func (tcgc *targetCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeAllDataSubnetsubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
tcgc.mut.RLock()
defer tcgc.mut.RUnlock()
// If no validators are tracked, return the default custody requirement.
if tcgc.validatorsCustodyRequirement == 0 {
return params.BeaconConfig().CustodyRequirement
}
// Return the validators custody requirement.
return tcgc.validatorsCustodyRequirement
}
// SetValidatorsCustodyRequirement sets the validators custody requirement.
func (tcgc *targetCustodyGroupCount) SetValidatorsCustodyRequirement(value uint64) {
tcgc.mut.Lock()
defer tcgc.mut.Unlock()
tcgc.validatorsCustodyRequirement = value
}
// Get returns the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeAllDataSubnetsubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
custodyRequirement := params.BeaconConfig().CustodyRequirement
tacgc.mut.RLock()
defer tacgc.mut.RUnlock()
return max(tacgc.value, custodyRequirement)
}
// Set sets the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Set(value uint64) {
tacgc.mut.Lock()
defer tacgc.mut.Unlock()
tacgc.value = value
}
// createInfoCacheIfNeeded creates a new cache if it doesn't exist.
func createInfoCacheIfNeeded() error {
nodeInfoCacheMut.Lock()
@@ -129,58 +184,3 @@ func computeInfoCacheKey(nodeID enode.ID, custodyGroupCount uint64) [nodeInfoCac
return key
}
// setValidatorsCustodyRequirement sets the validators custody requirement.
func (tcgc *targetCustodyGroupCount) SetValidatorsCustodyRequirement(value uint64) {
tcgc.mut.Lock()
defer tcgc.mut.Unlock()
tcgc.validatorsCustodyRequirement = value
}
// CustodyGroupCount returns the number of groups we should participate in for custody.
func (tcgc *targetCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeToAllSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
tcgc.mut.RLock()
defer tcgc.mut.RUnlock()
// If no validators are tracked, return the default custody requirement.
if tcgc.validatorsCustodyRequirement == 0 {
return params.BeaconConfig().CustodyRequirement
}
// Return the validators custody requirement.
return tcgc.validatorsCustodyRequirement
}
// Set sets the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Set(value uint64) {
tacgc.mut.Lock()
defer tacgc.mut.Unlock()
tacgc.value = value
}
// Get returns the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeToAllSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
custodyRequirement := params.BeaconConfig().CustodyRequirement
tacgc.mut.RLock()
defer tacgc.mut.RUnlock()
return max(tacgc.value, custodyRequirement)
}
// ActualGroupCount returns the actual custody group count.
func (custodyInfo *CustodyInfo) ActualGroupCount() uint64 {
return min(custodyInfo.TargetGroupCount.Get(), custodyInfo.ToAdvertiseGroupCount.Get())
}

View File

@@ -30,25 +30,25 @@ func TestInfo(t *testing.T) {
func TestTargetCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllSubnets bool
subscribeToAllColumns bool
validatorsCustodyRequirement uint64
expected uint64
}{
{
name: "subscribed to all subnets",
subscribeToAllSubnets: true,
subscribeToAllColumns: true,
validatorsCustodyRequirement: 100,
expected: 128,
},
{
name: "no validators attached",
subscribeToAllSubnets: false,
subscribeToAllColumns: false,
validatorsCustodyRequirement: 0,
expected: 4,
},
{
name: "some validators attached",
subscribeToAllSubnets: false,
subscribeToAllColumns: false,
validatorsCustodyRequirement: 100,
expected: 100,
},
@@ -57,10 +57,10 @@ func TestTargetCustodyGroupCount(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllSubnets {
if tc.subscribeToAllColumns {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
gFlags.SubscribeAllDataSubnetsubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
@@ -82,25 +82,25 @@ func TestTargetCustodyGroupCount(t *testing.T) {
func TestToAdvertiseCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllSubnets bool
subscribeToAllColumns bool
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "subscribed to all subnets",
subscribeToAllSubnets: true,
subscribeToAllColumns: true,
toAdvertiseCustodyGroupCount: 100,
expected: 128,
},
{
name: "higher than custody requirement",
subscribeToAllSubnets: false,
subscribeToAllColumns: false,
toAdvertiseCustodyGroupCount: 100,
expected: 100,
},
{
name: "lower than custody requirement",
subscribeToAllSubnets: false,
subscribeToAllColumns: false,
toAdvertiseCustodyGroupCount: 1,
expected: 4,
},
@@ -109,10 +109,10 @@ func TestToAdvertiseCustodyGroupCount(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllSubnets {
if tc.subscribeToAllColumns {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
gFlags.SubscribeAllDataSubnetsubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}

View File

@@ -27,13 +27,13 @@ var (
ErrCannotLoadCustodyGroupCount = errors.New("cannot load the custody group count from peer")
)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/p2p-interface.md#the-discovery-domain-discv5
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#custody-group-count
type Cgc uint64
func (Cgc) ENRKey() string { return CustodyGroupCountEnrKey }
// VerifyDataColumnSidecar verifies if the data column sidecar is valid.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#verify_data_column_sidecar
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
numberOfColumns := params.BeaconConfig().NumberOfColumns
@@ -60,7 +60,7 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// while we are verifying all the KZG proofs from multiple sidecars in a batch.
// This is done to improve performance since the internal KZG library is way more
// efficient when verifying in batch.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
// Compute the total count.
count := 0
@@ -96,7 +96,7 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
}
// VerifyDataColumnSidecarInclusionProof verifies if the given KZG commitments are included in the given beacon block.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
return ErrNilBlockHeader
@@ -128,7 +128,7 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
}
// ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
return columnIndex % dataColumnSidecarSubnetCount

View File

@@ -16,28 +16,6 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
)
func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgProofs [][]byte) blocks.RODataColumn {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
require.NoError(t, err)
signedBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
sidecar := &ethpb.DataColumnSidecar{
Index: index,
Column: column,
KzgCommitments: kzgCommitments,
KzgProofs: kzgProofs,
SignedBlockHeader: signedBlockHeader,
}
roSidecar, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
return roSidecar
}
func TestVerifyDataColumnSidecar(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
@@ -332,3 +310,25 @@ func TestCustodyGroupCountFromRecord(t *testing.T) {
require.Equal(t, expected, actual)
})
}
func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgProofs [][]byte) blocks.RODataColumn {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
require.NoError(t, err)
signedBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
sidecar := &ethpb.DataColumnSidecar{
Index: index,
Column: column,
KzgCommitments: kzgCommitments,
KzgProofs: kzgProofs,
SignedBlockHeader: signedBlockHeader,
}
roSidecar, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
return roSidecar
}

View File

@@ -8,7 +8,7 @@ import (
// ExtendedSampleCount computes, for a given number of samples per slot and allowed failures, the
// number of samples we should actually query from peers.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/peer-sampling.md#get_extended_sample_count
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/peer-sampling.md#get_extended_sample_count
func ExtendedSampleCount(samplesPerSlot, allowedFailures uint64) uint64 {
// Retrieve the columns count
columnsCount := params.BeaconConfig().NumberOfColumns

View File

@@ -0,0 +1,30 @@
package peerdas
import (
beaconState "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/pkg/errors"
)
// ValidatorsCustodyRequirement returns the number of custody groups required for the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
totalNodeBalance := uint64(0)
for index := range validatorsIndex {
balance, err := state.BalanceAtIndex(index)
if err != nil {
return 0, errors.Wrapf(err, "balance at index for validator index %v", index)
}
totalNodeBalance += balance
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroup), nil
}

View File

@@ -0,0 +1,51 @@
package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestValidatorsCustodyRequirement(t *testing.T) {
testCases := []struct {
name string
count uint64
expected uint64
}{
{name: "0 validators", count: 0, expected: 8},
{name: "1 validator", count: 1, expected: 8},
{name: "8 validators", count: 8, expected: 8},
{name: "9 validators", count: 9, expected: 9},
{name: "100 validators", count: 100, expected: 100},
{name: "128 validators", count: 128, expected: 128},
{name: "129 validators", count: 129, expected: 128},
{name: "1000 validators", count: 1000, expected: 128},
}
const balance = uint64(32_000_000_000)
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
balances := make([]uint64, 0, tc.count)
for range tc.count {
balances = append(balances, balance)
}
validatorsIndex := make(map[primitives.ValidatorIndex]bool)
for i := range tc.count {
validatorsIndex[primitives.ValidatorIndex(i)] = true
}
beaconState, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateElectra{Balances: balances})
require.NoError(t, err)
actual, err := peerdas.ValidatorsCustodyRequirement(beaconState, validatorsIndex)
require.NoError(t, err)
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -228,7 +228,7 @@ func TestFullCommitmentsToCheck(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
gFlags.SubscribeAllDataSubnetsubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)

View File

@@ -655,8 +655,16 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
// ReconstructDataColumnSidecars reconstructs the verified data column sidecars for a given beacon block.
// It retrieves the KZG commitments from the block body, fetches the associated blobs and cell proofs from the EL,
// and constructs the corresponding verified read-only data column sidecars.
func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) ([]blocks.VerifiedRODataColumn, error) {
blockBody := block.Block().Body()
func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, signedROBlock interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error) {
block := signedROBlock.Block()
blockBody := block.Body()
blockSlot := block.Slot()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", blockRoot),
"slot": blockSlot,
})
kzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "blob KZG commitments")
@@ -674,30 +682,44 @@ func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, block inter
return nil, wrapWithBlockRoot(err, blockRoot, "get blobs V2")
}
var cellsAndProofs []kzg.CellsAndProofs
// Return early if nothing is returned from the EL.
if len(blobAndProofV2s) == 0 {
log.Debug("No blobs returned from EL")
return nil, nil
}
// Compute all cells and proofs.
// Reminder: `engine_getBlobsV2` returns the (non extended) blob (64 cells),
// and the proofs corresponding to the extended blob (128 proofs, one per extended cell).
// We need to reconstruct the cells corresponding to the blob extension.
// https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getblobsv2
maxBlobCount := params.BeaconConfig().MaxBlobsPerBlock(blockSlot)
cellsAndProofs := make([]kzg.CellsAndProofs, 0, maxBlobCount)
for _, blobAndProof := range blobAndProofV2s {
if blobAndProof == nil {
return nil, wrapWithBlockRoot(errors.New("unable to reconstruct data column sidecars, did not get all blobs from EL"), blockRoot, "")
return nil, errors.Errorf("unable to reconstruct data column sidecars, did not get all blobs from EL for block %#x", blockRoot)
}
var blob kzg.Blob
copy(blob[:], blobAndProof.Blob)
cells, err := kzg.ComputeCells(&blob)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "could not compute cells")
return nil, wrapWithBlockRoot(err, blockRoot, "compute cells")
}
proofs := make([]kzg.Proof, len(blobAndProof.KzgProofs))
for i, proof := range blobAndProof.KzgProofs {
proofs[i] = kzg.Proof(proof)
proofs := make([]kzg.Proof, 0, len(blobAndProof.KzgProofs))
for _, proof := range blobAndProof.KzgProofs {
proofs = append(proofs, kzg.Proof(proof))
}
cellsAndProofs = append(cellsAndProofs, kzg.CellsAndProofs{
Cells: cells,
Proofs: proofs,
})
}
header, err := block.Header()
header, err := signedROBlock.Header()
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "could not get header")
}
@@ -723,10 +745,7 @@ func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, block inter
verifiedRODataColumns[i] = blocks.NewVerifiedRODataColumn(roDataColumn)
}
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", blockRoot),
"slot": block.Block().Slot(),
}).Debug("Data columns successfully reconstructed from EL")
log.Debug("Data columns successfully reconstructed from EL")
return verifiedRODataColumns, nil
}
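For orientation, a hedged caller-side sketch of how the new early return can be handled; the surrounding wiring (service receiver, broadcast step) is hypothetical and not part of this diff:
// Hypothetical caller: reconstruct columns from the EL and stop quietly when the EL
// returned no blobs (ReconstructDataColumnSidecars now returns nil, nil in that case).
columns, err := s.ReconstructDataColumnSidecars(ctx, signedBlock, blockRoot)
if err != nil {
	return errors.Wrap(err, "reconstruct data column sidecars")
}
if len(columns) == 0 {
	return nil // nothing to broadcast or import
}
// ... broadcast or import the verified columns here.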

View File

@@ -2587,6 +2587,18 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
require.ErrorContains(t, "get blobs V2 for block", err)
})
t.Run("nothing received", func(t *testing.T) {
srv := createBlobServerV2(t, 0, []bool{})
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 0, len(dataColumns))
})
t.Run("receiving all blobs", func(t *testing.T) {
blobMasks := []bool{true, true, true, true, true, true}
srv := createBlobServerV2(t, 6, blobMasks)

View File

@@ -59,6 +59,7 @@ go_library(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",
@@ -145,6 +146,7 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
@@ -159,6 +161,7 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",
@@ -171,10 +174,12 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -11,9 +11,11 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -279,6 +281,58 @@ func (s *Service) internalBroadcastBlob(
}
}
func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error {
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastLightClientOptimisticUpdate")
defer span.End()
if update == nil || update.IsNil() {
return errors.New("attempted to broadcast nil light client optimistic update")
}
forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
// TODO: should we check if the update is too early or too late to broadcast?
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(forkDigest)); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
return nil
}
func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error {
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastLightClientFinalityUpdate")
defer span.End()
if update == nil || update.IsNil() {
return errors.New("attempted to broadcast nil light client finality update")
}
forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
// TODO: should we check if the update is too early or too late to broadcast?
if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
return nil
}
// BroadcastDataColumn broadcasts a data column to the p2p network. The message is assumed to be
// broadcast on the current fork and to the input column subnet.
// TODO: Add tests
@@ -434,6 +488,14 @@ func blobSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
func lcOptimisticToTopic(forkDigest [4]byte) string {
return fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, forkDigest)
}
func lcFinalityToTopic(forkDigest [4]byte) string {
return fmt.Sprintf(LightClientFinalityUpdateTopicFormat, forkDigest)
}
func dataColumnSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(DataColumnSubnetTopicFormat, forkDigest, subnet)
}
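As a usage illustration for the two new light client broadcast methods above, a minimal hedged sketch; `p2pService` and `update` are placeholder names, since the real call sites are not shown in this diff:
// Hypothetical call site: gossip a finality update and log (rather than fail) on error.
if err := p2pService.BroadcastLightClientFinalityUpdate(ctx, update); err != nil {
	log.WithError(err).Error("Could not broadcast light client finality update")
}
// Optimistic updates follow the same shape via BroadcastLightClientOptimisticUpdate.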

View File

@@ -11,19 +11,24 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
testpb "github.com/OffchainLabs/prysm/v6/proto/testing"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/prysmaticlabs/go-bitfield"
@@ -520,6 +525,140 @@ func TestService_BroadcastBlob(t *testing.T) {
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}
func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()))
p := &Service{
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
l := util.NewTestLightClient(t, version.Altair)
msg, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
require.NoError(t, err)
topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, digest)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
wg.Add(1)
go func(tt *testing.T) {
defer wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
incomingMessage, err := sub.Next(ctx)
require.NoError(t, err)
result := &ethpb.LightClientOptimisticUpdateAltair{}
require.NoError(t, p.Encoding().DecodeGossip(incomingMessage.Data, result))
if !proto.Equal(result, msg.Proto()) {
tt.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
}
}(t)
// Broadcasting nil should fail.
ctx := context.Background()
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientOptimisticUpdate(ctx, nil))
var nilUpdate interfaces.LightClientOptimisticUpdate
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientOptimisticUpdate(ctx, nilUpdate))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastLightClientOptimisticUpdate(ctx, msg))
if util.WaitTimeout(&wg, 1*time.Second) {
t.Error("Failed to receive pubsub within 1s")
}
}
func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()))
p := &Service{
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
l := util.NewTestLightClient(t, version.Altair)
msg, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
require.NoError(t, err)
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, digest)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
wg.Add(1)
go func(tt *testing.T) {
defer wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
incomingMessage, err := sub.Next(ctx)
require.NoError(t, err)
result := &ethpb.LightClientFinalityUpdateAltair{}
require.NoError(t, p.Encoding().DecodeGossip(incomingMessage.Data, result))
if !proto.Equal(result, msg.Proto()) {
tt.Errorf("Did not receive expected message, got %+v, wanted %+v", result, msg)
}
}(t)
// Broadcasting nil should fail.
ctx := context.Background()
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientFinalityUpdate(ctx, nil))
var nilUpdate interfaces.LightClientFinalityUpdate
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastLightClientFinalityUpdate(ctx, nilUpdate))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastLightClientFinalityUpdate(ctx, msg))
if util.WaitTimeout(&wg, 1*time.Second) {
t.Error("Failed to receive pubsub within 1s")
}
}
func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, kzg.Start())
p1 := p2ptest.NewTestP2P(t)

View File

@@ -106,7 +106,8 @@ func (l *listenerWrapper) RandomNodes() enode.Iterator {
func (l *listenerWrapper) Ping(node *enode.Node) error {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.Ping(node)
_, err := l.listener.Ping(node)
return err
}
func (l *listenerWrapper) RequestENR(node *enode.Node) (*enode.Node, error) {

View File

@@ -22,6 +22,8 @@ var gossipTopicMappings = map[string]func() proto.Message{
SyncCommitteeSubnetTopicFormat: func() proto.Message { return &ethpb.SyncCommitteeMessage{} },
BlsToExecutionChangeSubnetTopicFormat: func() proto.Message { return &ethpb.SignedBLSToExecutionChange{} },
BlobSubnetTopicFormat: func() proto.Message { return &ethpb.BlobSidecar{} },
LightClientOptimisticUpdateTopicFormat: func() proto.Message { return &ethpb.LightClientOptimisticUpdateAltair{} },
LightClientFinalityUpdateTopicFormat: func() proto.Message { return &ethpb.LightClientFinalityUpdateAltair{} },
DataColumnSubnetTopicFormat: func() proto.Message { return &ethpb.DataColumnSidecar{} },
}
@@ -64,6 +66,25 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
return &ethpb.SignedAggregateAttestationAndProofElectra{}
}
return gossipMessage(topic)
case LightClientOptimisticUpdateTopicFormat:
if epoch >= params.BeaconConfig().DenebForkEpoch {
return &ethpb.LightClientOptimisticUpdateDeneb{}
}
if epoch >= params.BeaconConfig().CapellaForkEpoch {
return &ethpb.LightClientOptimisticUpdateCapella{}
}
return gossipMessage(topic)
case LightClientFinalityUpdateTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.LightClientFinalityUpdateElectra{}
}
if epoch >= params.BeaconConfig().DenebForkEpoch {
return &ethpb.LightClientFinalityUpdateDeneb{}
}
if epoch >= params.BeaconConfig().CapellaForkEpoch {
return &ethpb.LightClientFinalityUpdateCapella{}
}
return gossipMessage(topic)
default:
return gossipMessage(topic)
}
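A hedged example of resolving the fork-specific light client message type through this switch; which container is returned at a given epoch depends on the active chain config:
// Illustrative lookup: at a Deneb-or-later (pre-Electra) epoch the finality update
// topic maps to the Deneb container; earlier epochs fall back to Capella or Altair.
msg := GossipTopicMappings(LightClientFinalityUpdateTopicFormat, params.BeaconConfig().DenebForkEpoch)
if _, ok := msg.(*ethpb.LightClientFinalityUpdateDeneb); !ok {
	// A different fork's container was selected for this epoch.
}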
@@ -98,21 +119,28 @@ func init() {
// Specially handle Altair objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockAltair{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateAltair{})] = LightClientFinalityUpdateTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientOptimisticUpdateAltair{})] = LightClientOptimisticUpdateTopicFormat
// Specially handle Bellatrix objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockBellatrix{})] = BlockSubnetTopicFormat
// Specially handle Capella objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockCapella{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientOptimisticUpdateCapella{})] = LightClientOptimisticUpdateTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateCapella{})] = LightClientFinalityUpdateTopicFormat
// Specially handle Deneb objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockDeneb{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientOptimisticUpdateDeneb{})] = LightClientOptimisticUpdateTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateDeneb{})] = LightClientFinalityUpdateTopicFormat
// Specially handle Electra objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SingleAttestation{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.LightClientFinalityUpdateElectra{})] = LightClientFinalityUpdateTopicFormat
// Specially handle Fulu objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockFulu{})] = BlockSubnetTopicFormat

View File

@@ -101,6 +101,9 @@ func (s *BadResponsesScorer) countNoLock(pid peer.ID) (int, error) {
// Increment increments the number of bad responses we have received from the given remote peer.
// If peer doesn't exist this method is no-op.
func (s *BadResponsesScorer) Increment(pid peer.ID) {
if pid == "" {
return
}
s.store.Lock()
defer s.store.Unlock()

View File

@@ -124,6 +124,9 @@ func (s *BlockProviderScorer) Params() *BlockProviderScorerConfig {
// IncrementProcessedBlocks increments the number of blocks that have been successfully processed.
func (s *BlockProviderScorer) IncrementProcessedBlocks(pid peer.ID, cnt uint64) {
if pid == "" {
return
}
s.store.Lock()
defer s.store.Unlock()
defer s.touchNoLock(pid)

View File

@@ -30,6 +30,10 @@ const (
GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
// GossipBlobSidecarMessage is the name for the blob sidecar message type.
GossipBlobSidecarMessage = "blob_sidecar"
// GossipLightClientFinalityUpdateMessage is the name for the light client finality update message type.
GossipLightClientFinalityUpdateMessage = "light_client_finality_update"
// GossipLightClientOptimisticUpdateMessage is the name for the light client optimistic update message type.
GossipLightClientOptimisticUpdateMessage = "light_client_optimistic_update"
// GossipDataColumnSidecarMessage is the name for the data column sidecar message type.
GossipDataColumnSidecarMessage = "data_column_sidecar"
@@ -55,6 +59,10 @@ const (
BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
// BlobSubnetTopicFormat is the topic format for the blob subnet.
BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
// LightClientFinalityUpdateTopicFormat is the topic format for the light client finality update subnet.
LightClientFinalityUpdateTopicFormat = GossipProtocolAndDigest + GossipLightClientFinalityUpdateMessage
// LightClientOptimisticUpdateTopicFormat is the topic format for the light client optimistic update subnet.
LightClientOptimisticUpdateTopicFormat = GossipProtocolAndDigest + GossipLightClientOptimisticUpdateMessage
// DataColumnSubnetTopicFormat is the topic format for the data column subnet.
DataColumnSubnetTopicFormat = GossipProtocolAndDigest + GossipDataColumnSidecarMessage + "_%d"
)
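For a concrete sense of the resulting topic names, a hedged example; it assumes GossipProtocolAndDigest expands to the usual /eth2/<fork-digest>/ prefix, which is not shown in this diff:
// Assumed expansion, with fork digest 0x12345678 used purely for illustration:
//   /eth2/12345678/light_client_finality_update
//   /eth2/12345678/light_client_optimistic_update
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, [4]byte{0x12, 0x34, 0x56, 0x78})
_ = topic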

View File

@@ -162,9 +162,9 @@ func (b *BlobSidecarsByRootReq) MarshalSSZ() ([]byte, error) {
// BlobSidecarsByRootReq value.
func (b *BlobSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
bufLen := len(buf)
maxLength := int(params.BeaconConfig().MaxRequestBlobSidecars) * blobIdSize
maxLength := int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) * blobIdSize
if bufLen > maxLength {
return errors.Errorf("expected buffer with length of up to %d but received length %d", maxLength, bufLen)
return errors.Wrapf(ssz.ErrIncorrectListSize, "expected buffer with length of up to %d but received length %d", maxLength, bufLen)
}
if bufLen%blobIdSize != 0 {
return errors.Wrapf(ssz.ErrIncorrectByteSize, "size=%d", bufLen)
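Because the size failures now wrap ssz sentinel errors instead of returning opaque formatted errors, callers can branch on them; a minimal hedged sketch (buf is a placeholder request payload):
// Hypothetical caller: distinguish an over-long identifier list from other decode failures.
var req BlobSidecarsByRootReq
if err := req.UnmarshalSSZ(buf); errors.Is(err, ssz.ErrIncorrectListSize) {
	// The peer requested more sidecars than MaxRequestBlobSidecarsElectra allows.
}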

View File

@@ -43,6 +43,15 @@ func TestBlobSidecarsByRootReq_MarshalSSZ(t *testing.T) {
name: "10 item list",
ids: generateBlobIdentifiers(10),
},
{
name: "max list",
ids: generateBlobIdentifiers(int(params.BeaconConfig().MaxRequestBlobSidecarsElectra)),
},
{
name: "beyond max list",
ids: generateBlobIdentifiers(int(params.BeaconConfig().MaxRequestBlobSidecarsElectra) + 1),
unmarshalErr: ssz.ErrIncorrectListSize,
},
{
name: "wonky unmarshal size",
ids: generateBlobIdentifiers(10),

View File

@@ -894,6 +894,15 @@ func (s *Service) beaconEndpoints(
handler: server.GetPendingDeposits,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/beacon/states/{state_id}/pending_consolidations",
name: namespace + ".GetPendingConsolidations",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetPendingConsolidations,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/beacon/states/{state_id}/pending_partial_withdrawals",
name: namespace + ".GetPendingPartialWithdrawals",
@@ -1041,6 +1050,7 @@ func (s *Service) eventsEndpoints() []endpoint {
HeadFetcher: s.cfg.HeadFetcher,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
StateGen: s.cfg.StateGen,
}
const namespace = "events"

View File

@@ -30,6 +30,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/states/{state_id}/randao": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/pending_deposits": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/pending_partial_withdrawals": {http.MethodGet},
"/eth/v1/beacon/states/{state_id}/pending_consolidations": {http.MethodGet},
"/eth/v1/beacon/headers": {http.MethodGet},
"/eth/v1/beacon/headers/{block_id}": {http.MethodGet},
"/eth/v1/beacon/blinded_blocks": {http.MethodPost},

View File

@@ -1613,6 +1613,62 @@ func (s *Server) broadcastSeenBlockSidecars(
return nil
}
// GetPendingConsolidations returns pending consolidations for state with given 'stateId'.
// Should return 400 if the state retrieved is prior to Electra.
// Supports both JSON and SSZ responses based on Accept header.
func (s *Server) GetPendingConsolidations(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetPendingConsolidations")
defer span.End()
stateId := r.PathValue("state_id")
if stateId == "" {
httputil.HandleError(w, "state_id is required in URL params", http.StatusBadRequest)
return
}
st, err := s.Stater.State(ctx, []byte(stateId))
if err != nil {
shared.WriteStateFetchError(w, err)
return
}
if st.Version() < version.Electra {
httputil.HandleError(w, "state_id is prior to electra", http.StatusBadRequest)
return
}
pd, err := st.PendingConsolidations()
if err != nil {
httputil.HandleError(w, "Could not get pending consolidations: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(st.Version()))
if httputil.RespondWithSsz(r) {
sszData, err := serializeItems(pd)
if err != nil {
httputil.HandleError(w, "Failed to serialize pending consolidations: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, sszData)
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not calculate root of latest block header: "+err.Error(), http.StatusInternalServerError)
return
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
resp := structs.GetPendingConsolidationsResponse{
Version: version.String(st.Version()),
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
Data: structs.PendingConsolidationsFromConsensus(pd),
}
httputil.WriteJson(w, resp)
}
}
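A hedged client-side sketch of calling the new endpoint with an SSZ Accept header; the host and port are placeholders:
// Hypothetical request: fetch pending consolidations for the head state as SSZ.
req, err := http.NewRequest(http.MethodGet,
	"http://localhost:3500/eth/v1/beacon/states/head/pending_consolidations", nil)
if err != nil {
	return err
}
req.Header.Set("Accept", "application/octet-stream")
resp, err := http.DefaultClient.Do(req)
// The body is a flat concatenation of fixed-size PendingConsolidation SSZ entries.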
// GetPendingDeposits returns pending deposits for state with given 'stateId'.
// Should return 400 if the state retrieved is prior to Electra.
// Supports both JSON and SSZ responses based on Accept header.

View File

@@ -4755,6 +4755,191 @@ func Test_validateBlobSidecars(t *testing.T) {
require.ErrorContains(t, "could not verify blob proof: can't verify opening proof", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
}
func TestGetPendingConsolidations(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 10)
cs := make([]*eth.PendingConsolidation, 10)
for i := 0; i < len(cs); i += 1 {
cs[i] = &eth.PendingConsolidation{
SourceIndex: primitives.ValidatorIndex(i),
TargetIndex: primitives.ValidatorIndex(i + 1),
}
}
require.NoError(t, st.SetPendingConsolidations(cs))
chainService := &chainMock.ChainService{
Optimistic: false,
FinalizedRoots: map[[32]byte]bool{},
}
server := &Server{
Stater: &testutil.MockStater{
BeaconState: st,
},
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
}
t.Run("json response", func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
server.GetPendingConsolidations(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
require.Equal(t, "electra", rec.Header().Get(api.VersionHeader))
var resp structs.GetPendingConsolidationsResponse
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
expectedVersion := version.String(st.Version())
require.Equal(t, expectedVersion, resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
expectedConsolidations := structs.PendingConsolidationsFromConsensus(cs)
require.DeepEqual(t, expectedConsolidations, resp.Data)
})
t.Run("ssz response", func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
req.Header.Set("Accept", "application/octet-stream")
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
server.GetPendingConsolidations(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
require.Equal(t, "electra", rec.Header().Get(api.VersionHeader))
responseBytes := rec.Body.Bytes()
var recoveredConsolidations []*eth.PendingConsolidation
// Verify total size matches expected number of consolidations
consolidationSize := (&eth.PendingConsolidation{}).SizeSSZ()
require.Equal(t, len(responseBytes), consolidationSize*len(cs))
for i := 0; i < len(cs); i++ {
start := i * consolidationSize
end := start + consolidationSize
var c eth.PendingConsolidation
require.NoError(t, c.UnmarshalSSZ(responseBytes[start:end]))
recoveredConsolidations = append(recoveredConsolidations, &c)
}
require.DeepEqual(t, cs, recoveredConsolidations)
})
t.Run("pre electra state", func(t *testing.T) {
preElectraSt, _ := util.DeterministicGenesisStateDeneb(t, 1)
preElectraServer := &Server{
Stater: &testutil.MockStater{
BeaconState: preElectraSt,
},
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
}
// Test JSON request
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
preElectraServer.GetPendingConsolidations(rec, req)
require.Equal(t, http.StatusBadRequest, rec.Code)
var errResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &errResp))
require.Equal(t, "state_id is prior to electra", errResp.Message)
// Test SSZ request
sszReq := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
sszReq.Header.Set("Accept", "application/octet-stream")
sszReq.SetPathValue("state_id", "head")
sszRec := httptest.NewRecorder()
sszRec.Body = new(bytes.Buffer)
preElectraServer.GetPendingConsolidations(sszRec, sszReq)
require.Equal(t, http.StatusBadRequest, sszRec.Code)
var sszErrResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
require.NoError(t, json.Unmarshal(sszRec.Body.Bytes(), &sszErrResp))
require.Equal(t, "state_id is prior to electra", sszErrResp.Message)
})
t.Run("missing state_id parameter", func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
// Intentionally not setting state_id
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
server.GetPendingConsolidations(rec, req)
require.Equal(t, http.StatusBadRequest, rec.Code)
var errResp struct {
Code int `json:"code"`
Message string `json:"message"`
}
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &errResp))
require.Equal(t, "state_id is required in URL params", errResp.Message)
})
t.Run("optimistic node", func(t *testing.T) {
optimisticChainService := &chainMock.ChainService{
Optimistic: true,
FinalizedRoots: map[[32]byte]bool{},
}
optimisticServer := &Server{
Stater: server.Stater,
OptimisticModeFetcher: optimisticChainService,
FinalizationFetcher: optimisticChainService,
}
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
optimisticServer.GetPendingConsolidations(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var resp structs.GetPendingConsolidationsResponse
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Equal(t, true, resp.ExecutionOptimistic)
})
t.Run("finalized node", func(t *testing.T) {
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
finalizedChainService := &chainMock.ChainService{
Optimistic: false,
FinalizedRoots: map[[32]byte]bool{blockRoot: true},
}
finalizedServer := &Server{
Stater: server.Stater,
OptimisticModeFetcher: finalizedChainService,
FinalizationFetcher: finalizedChainService,
}
req := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/states/{state_id}/pending_consolidations", nil)
req.SetPathValue("state_id", "head")
rec := httptest.NewRecorder()
rec.Body = new(bytes.Buffer)
finalizedServer.GetPendingConsolidations(rec, req)
require.Equal(t, http.StatusOK, rec.Code)
var resp structs.GetPendingConsolidationsResponse
require.NoError(t, json.Unmarshal(rec.Body.Bytes(), &resp))
require.Equal(t, true, resp.Finalized)
})
}
func TestGetPendingDeposits(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 10)

View File

@@ -20,6 +20,7 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
@@ -53,6 +54,7 @@ go_test(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -37,7 +37,6 @@ import (
)
const DefaultEventFeedDepth = 1000
const payloadAttributeTimeout = 2 * time.Second
const (
InvalidTopic = "__invalid__"
@@ -627,6 +626,7 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
}
var errUnsupportedPayloadAttribute = errors.New("cannot compute payload attributes pre-Bellatrix")
var errPayloadAttributeExpired = errors.New("skipping payload attribute event for past slot")
func (s *Server) computePayloadAttributes(ctx context.Context, st state.ReadOnlyBeaconState, root [32]byte, proposer primitives.ValidatorIndex, timestamp uint64, randao []byte) (payloadattribute.Attributer, error) {
v := st.Version()
@@ -681,48 +681,48 @@ var zeroRoot [32]byte
// needsFill allows tests to provide filled EventData values. An ordinary event data value fired by the blockchain package will have
// all of the checked fields empty, so the logical short circuit should hit immediately.
func needsFill(ev payloadattribute.EventData) bool {
return ev.HeadState == nil || ev.HeadState.IsNil() || ev.HeadState.LatestBlockHeader() == nil ||
ev.HeadBlock == nil || ev.HeadBlock.IsNil() ||
ev.HeadRoot == zeroRoot || len(ev.ParentBlockRoot) == 0 || len(ev.ParentBlockHash) == 0 ||
return len(ev.ParentBlockHash) == 0 ||
ev.Attributer == nil || ev.Attributer.IsEmpty()
}
func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.EventData, error) {
var err error
if !needsFill(ev) {
return ev, nil
}
ev.HeadState, err = s.HeadFetcher.HeadState(ctx)
if err != nil {
return ev, errors.Wrap(err, "could not get head state")
if ev.HeadBlock == nil || ev.HeadBlock.IsNil() {
return ev, errors.New("head block is nil")
}
if ev.HeadRoot == zeroRoot {
return ev, errors.New("head root is empty")
}
ev.HeadBlock, err = s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return ev, errors.Wrap(err, "could not look up head block")
}
ev.HeadRoot, err = ev.HeadBlock.Block().HashTreeRoot()
if err != nil {
return ev, errors.Wrap(err, "could not compute head block root")
}
pr := ev.HeadBlock.Block().ParentRoot()
ev.ParentBlockRoot = pr[:]
hsr, err := ev.HeadState.LatestBlockHeader().HashTreeRoot()
if err != nil {
return ev, errors.Wrap(err, "could not compute latest block header root")
}
var err error
var st state.BeaconState
// If head is in the same block as the proposal slot, we can use the "read only" state cache.
pse := slots.ToEpoch(ev.ProposalSlot)
st := ev.HeadState
if slots.ToEpoch(st.Slot()) != pse {
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, hsr[:], ev.ProposalSlot)
if slots.ToEpoch(ev.HeadBlock.Block().Slot()) == pse {
st = s.StateGen.StateByRootIfCachedNoCopy(ev.HeadRoot)
}
// If st is nil, we couldn't get the state from the cache, or it isn't in the same epoch.
if st == nil || st.IsNil() {
st, err = s.StateGen.StateByRoot(ctx, ev.HeadRoot)
if err != nil {
return ev, errors.Wrap(err, "could not run process blocks on head state into the proposal slot epoch")
return ev, errors.Wrap(err, "could not get head state")
}
// double check that we need to process_slots, just in case we got here via a hot state cache miss.
if slots.ToEpoch(st.Slot()) < pse {
start, err := slots.EpochStart(pse)
if err != nil {
return ev, errors.Wrap(err, "invalid state slot; could not compute epoch start")
}
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, ev.HeadRoot[:], start)
if err != nil {
return ev, errors.Wrap(err, "could not run process blocks on head state into the proposal slot epoch")
}
}
}
ev.ProposerIndex, err = helpers.BeaconProposerIndexAtSlot(ctx, st, ev.ProposalSlot)
if err != nil {
return ev, errors.Wrap(err, "failed to compute proposer index")
@@ -743,14 +743,18 @@ func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventDat
if err != nil {
return ev, errors.Wrap(err, "could not get head state slot time")
}
ev.Attributer, err = s.computePayloadAttributes(ctx, st, hsr, ev.ProposerIndex, uint64(t.Unix()), randao)
ev.Attributer, err = s.computePayloadAttributes(ctx, st, ev.HeadRoot, ev.ProposerIndex, uint64(t.Unix()), randao)
return ev, err
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on the state at N_{current_slot}, while the rest of the fields are based on the state at N_{current_slot + 1}
func (s *Server) payloadAttributesReader(ctx context.Context, ev payloadattribute.EventData) (lazyReader, error) {
ctx, cancel := context.WithTimeout(ctx, payloadAttributeTimeout)
deadline := slots.BeginsAt(ev.ProposalSlot, s.ChainInfoFetcher.GenesisTime())
if deadline.Before(time.Now()) {
return nil, errors.Wrapf(errPayloadAttributeExpired, "proposal slot time %d", deadline.Unix())
}
ctx, cancel := context.WithDeadline(ctx, deadline)
edc := make(chan asyncPayloadAttrData)
go func() {
d := asyncPayloadAttrData{}
@@ -772,7 +776,7 @@ func (s *Server) payloadAttributesReader(ctx context.Context, ev payloadattribut
ProposerIndex: strconv.FormatUint(uint64(ev.ProposerIndex), 10),
ProposalSlot: strconv.FormatUint(uint64(ev.ProposalSlot), 10),
ParentBlockNumber: strconv.FormatUint(ev.ParentBlockNumber, 10),
ParentBlockRoot: hexutil.Encode(ev.ParentBlockRoot),
ParentBlockRoot: hexutil.Encode(ev.HeadRoot[:]),
ParentBlockHash: hexutil.Encode(ev.ParentBlockHash),
PayloadAttributes: attributesBytes,
})

View File

@@ -18,6 +18,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen/mock"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -522,15 +523,22 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
// to avoid slot processing
require.NoError(t, st.SetSlot(currentSlot+1))
b := tc.getBlock()
genesis := time.Now()
require.NoError(t, st.SetGenesisTime(uint64(genesis.Unix())))
mockChainService := &mockChain.ChainService{
Root: make([]byte, 32),
State: st,
Block: b,
Slot: &currentSlot,
Root: make([]byte, 32),
State: st,
Block: b,
Slot: &currentSlot,
Genesis: genesis,
}
headRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
stategen := mock.NewService()
stategen.AddStateForRoot(st, headRoot)
s := &Server{
StateNotifier: &mockChain.SimpleNotifier{Feed: stn},
OperationNotifier: &mockChain.SimpleNotifier{Feed: opn},
@@ -538,6 +546,7 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
ChainInfoFetcher: mockChainService,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
EventWriteTimeout: testEventWriteTimeout,
StateGen: stategen,
}
if tc.SetTrackedValidatorsCache != nil {
tc.SetTrackedValidatorsCache(s.TrackedValidatorsCache)
@@ -551,13 +560,11 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
Type: statefeed.PayloadAttributes,
Data: payloadattribute.EventData{
ProposerIndex: 0,
ProposalSlot: 0,
ProposalSlot: mockChainService.CurrentSlot() + 1,
ParentBlockNumber: 0,
ParentBlockRoot: make([]byte, 32),
ParentBlockHash: make([]byte, 32),
HeadState: st,
HeadBlock: b,
HeadRoot: [fieldparams.RootLength]byte{},
HeadRoot: headRoot,
},
},
}
@@ -575,8 +582,6 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
func TestFillEventData(t *testing.T) {
ctx := context.Background()
t.Run("AlreadyFilledData_ShouldShortCircuitWithoutError", func(t *testing.T) {
st, err := util.NewBeaconStateBellatrix()
require.NoError(t, err)
b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockBellatrix(&eth.SignedBeaconBlockBellatrix{}))
require.NoError(t, err)
attributor, err := payloadattribute.New(&enginev1.PayloadAttributes{
@@ -584,11 +589,9 @@ func TestFillEventData(t *testing.T) {
})
require.NoError(t, err)
alreadyFilled := payloadattribute.EventData{
HeadState: st,
HeadBlock: b,
HeadRoot: [32]byte{1, 2, 3},
Attributer: attributor,
ParentBlockRoot: []byte{1, 2, 3},
ParentBlockHash: []byte{4, 5, 6},
}
srv := &Server{} // No real HeadFetcher needed here since it won't be called.
@@ -612,12 +615,14 @@ func TestFillEventData(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
})
require.NoError(t, err)
headRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
// Create an event data object missing certain fields:
partial := payloadattribute.EventData{
// The presence of a nil HeadState, nil HeadBlock, zeroed HeadRoot, etc.
// will cause fillEventData to try to fill the values.
ProposalSlot: 42, // different epoch from current slot
Attributer: attributor, // Must be Bellatrix or later
HeadBlock: b,
HeadRoot: headRoot,
}
currentSlot := primitives.Slot(0)
// to avoid slot processing
@@ -629,6 +634,8 @@ func TestFillEventData(t *testing.T) {
Slot: &currentSlot,
}
stategen := mock.NewService()
stategen.AddStateForRoot(st, headRoot)
stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
srv := &Server{
@@ -638,16 +645,15 @@ func TestFillEventData(t *testing.T) {
ChainInfoFetcher: mockChainService,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
EventWriteTimeout: testEventWriteTimeout,
StateGen: stategen,
}
filled, err := srv.fillEventData(ctx, partial)
require.NoError(t, err, "expected successful fill of partial event data")
// Verify that fields have been updated from the mock data:
require.NotNil(t, filled.HeadState, "HeadState should be assigned")
require.NotNil(t, filled.HeadBlock, "HeadBlock should be assigned")
require.NotEqual(t, [32]byte{}, filled.HeadRoot, "HeadRoot should no longer be zero")
require.NotEmpty(t, filled.ParentBlockRoot, "ParentBlockRoot should be filled")
require.NotEmpty(t, filled.ParentBlockHash, "ParentBlockHash should be filled")
require.Equal(t, uint64(0), filled.ParentBlockNumber, "ParentBlockNumber must match mock block")

View File

@@ -10,6 +10,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
opfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
)
// Server defines a server implementation of the http events service,
@@ -23,4 +24,5 @@ type Server struct {
KeepAliveInterval time.Duration
EventFeedDepth int
EventWriteTimeout time.Duration
StateGen stategen.StateManager
}

View File

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"math"
"slices"
"strconv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
@@ -136,16 +135,6 @@ func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.Read
return blk, nil
}
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
// blobsFromStoredBlobs retrieves blobs corresponding to `indices` and `root` from the store.
// This function expects blobs to be stored directly (aka. no data columns).
func (p *BeaconDbBlocker) blobsFromStoredBlobs(
@@ -342,7 +331,7 @@ func (p *BeaconDbBlocker) blobsFromStoredDataColumns(indices map[uint64]bool, ro
if !canReconstruct {
// There is no way to reconstruct the data columns.
return nil, &core.RpcError{
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs. Please start the beacon node with the `--%s` flag to ensure this call to success, or retry later if it already the case.", flags.SubscribeToAllSubnets.Name),
Err: errors.Errorf("the node does not custody enough data columns to reconstruct blobs. Please start the beacon node with the `--%s` flag to ensure this call to success, or retry later if it already the case.", flags.SubscribeAllDataSubnets.Name),
Reason: core.NotFound,
}
}

View File

@@ -18,7 +18,7 @@ import (
const errEpoch = "cannot retrieve information about an epoch in the future, current epoch %d, requesting %d"
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListValidatorAssignments retrieves the validator assignments for a given epoch,
// optional validator indices or public keys may be included to filter validator assignments.

View File

@@ -49,7 +49,7 @@ func mapAttestationsByTargetRoot(atts []ethpb.Att) map[[32]byte][]ethpb.Att {
return attsMap
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListAttestations retrieves attestations by block root, slot, or epoch.
// Attestations are sorted by data slot by default.
@@ -115,7 +115,7 @@ func (bs *Server) ListAttestations(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListAttestationsElectra retrieves attestations by block root, slot, or epoch.
// Attestations are sorted by data slot by default.
@@ -180,7 +180,7 @@ func (bs *Server) ListAttestationsElectra(ctx context.Context, req *ethpb.ListAt
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListIndexedAttestations retrieves indexed attestations by block root.
// IndexedAttestationsForEpoch are sorted by data slot by default. Start-end epoch
@@ -242,7 +242,7 @@ func (bs *Server) ListIndexedAttestations(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListIndexedAttestationsElectra retrieves indexed attestations by block root.
// IndexedAttestationsForEpoch are sorted by data slot by default. Start-end epoch
@@ -305,7 +305,7 @@ func (bs *Server) ListIndexedAttestationsElectra(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// AttestationPool retrieves pending attestations.
//
@@ -350,7 +350,7 @@ func (bs *Server) AttestationPool(_ context.Context, req *ethpb.AttestationPoolR
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
func (bs *Server) AttestationPoolElectra(_ context.Context, req *ethpb.AttestationPoolRequest) (*ethpb.AttestationPoolElectraResponse, error) {
var atts []*ethpb.AttestationElectra
var err error

View File

@@ -26,7 +26,7 @@ type blockContainer struct {
isCanonical bool
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListBeaconBlocks retrieves blocks by root, slot, or epoch.
//
@@ -246,7 +246,7 @@ func (bs *Server) listBlocksForGenesis(ctx context.Context, _ *ethpb.ListBlocksR
}}, 1, strconv.Itoa(0), nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetChainHead retrieves information about the head of the beacon chain from
// the view of the beacon chain node.

View File

@@ -15,7 +15,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListBeaconCommittees for a given epoch.
//

View File

@@ -10,7 +10,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBeaconConfig retrieves the current configuration parameters of the beacon chain.
func (_ *Server) GetBeaconConfig(_ context.Context, _ *emptypb.Empty) (*ethpb.BeaconConfig, error) {

View File

@@ -11,7 +11,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitProposerSlashing receives a proposer slashing object via
// RPC and injects it into the beacon node's operations pool.
@@ -38,12 +38,12 @@ func (bs *Server) SubmitProposerSlashing(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
func (bs *Server) SubmitAttesterSlashing(ctx context.Context, req *ethpb.AttesterSlashing) (*ethpb.SubmitSlashingResponse, error) {
return bs.submitAttesterSlashing(ctx, req)
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitAttesterSlashingElectra receives an attester slashing object via
// RPC and injects it into the beacon node's operations pool.

View File

@@ -24,7 +24,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListValidatorBalances retrieves the validator balances for a given set of public keys.
// An optional Epoch parameter is provided to request historical validator balances from
@@ -182,7 +182,7 @@ func (bs *Server) ListValidatorBalances(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListValidators retrieves the current list of active validators with an optional historical epoch flag to
// retrieve validator set in time.
@@ -342,7 +342,7 @@ func (bs *Server) ListValidators(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidator information from any validator in the registry by index or public key.
func (bs *Server) GetValidator(
@@ -388,7 +388,7 @@ func (bs *Server) GetValidator(
return nil, status.Error(codes.NotFound, "No validator matched filter criteria")
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorActiveSetChanges retrieves the active set changes for a given epoch.
//
@@ -416,7 +416,7 @@ func (bs *Server) GetValidatorActiveSetChanges(
return as, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorParticipation retrieves the validator participation information for a given epoch,
// it returns the information about validator's participation rate in voting on the proof of stake
@@ -443,7 +443,7 @@ func (bs *Server) GetValidatorParticipation(
return vp, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorQueue retrieves the current validator queue information.
func (bs *Server) GetValidatorQueue(
@@ -536,7 +536,7 @@ func (bs *Server) GetValidatorQueue(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetValidatorPerformance reports the validator's latest balance along with other important metrics on
// rewards and penalties throughout its lifecycle in the beacon chain.
@@ -550,7 +550,7 @@ func (bs *Server) GetValidatorPerformance(
return response, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetIndividualVotes retrieves individual voting status of validators.
func (bs *Server) GetIndividualVotes(

View File

@@ -17,7 +17,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBlock in an ssz-encoded format by block root.
func (ds *Server) GetBlock(
@@ -41,7 +41,7 @@ func (ds *Server) GetBlock(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetInclusionSlot of an attestation in block.
func (ds *Server) GetInclusionSlot(ctx context.Context, req *pbrpc.InclusionSlotRequest) (*pbrpc.InclusionSlotResponse, error) {

View File

@@ -13,7 +13,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetPeer returns the data known about the peer defined by the provided peer id.
func (ds *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb.DebugPeerResponse, error) {
@@ -24,7 +24,7 @@ func (ds *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb
return ds.getPeer(pid)
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListPeers returns all peers known to the host node, regardless of if they are connected/
// disconnected.

View File

@@ -10,7 +10,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBeaconState retrieves an ssz-encoded beacon state
// from the beacon node by either a slot or block root.

View File

@@ -49,7 +49,7 @@ type Server struct {
BeaconMonitoringPort int
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetHealth checks the health of the node
func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (*empty.Empty, error) {
@@ -80,7 +80,7 @@ func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (
return &empty.Empty{}, status.Errorf(codes.Unavailable, "service unavailable")
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetSyncStatus checks the current network sync status of the node.
func (ns *Server) GetSyncStatus(_ context.Context, _ *empty.Empty) (*ethpb.SyncStatus, error) {
@@ -89,7 +89,7 @@ func (ns *Server) GetSyncStatus(_ context.Context, _ *empty.Empty) (*ethpb.SyncS
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetGenesis fetches genesis chain information of Ethereum. Returns unix timestamp 0
// if a genesis time has yet to be determined.
@@ -115,7 +115,7 @@ func (ns *Server) GetGenesis(ctx context.Context, _ *empty.Empty) (*ethpb.Genesi
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetVersion checks the version information of the beacon node.
func (_ *Server) GetVersion(_ context.Context, _ *empty.Empty) (*ethpb.Version, error) {
@@ -124,7 +124,7 @@ func (_ *Server) GetVersion(_ context.Context, _ *empty.Empty) (*ethpb.Version,
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListImplementedServices lists the services implemented and enabled by this node.
//
@@ -143,7 +143,7 @@ func (ns *Server) ListImplementedServices(_ context.Context, _ *empty.Empty) (*e
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetHost returns the p2p data on the current local and host peer.
func (ns *Server) GetHost(_ context.Context, _ *empty.Empty) (*ethpb.HostData, error) {
@@ -168,7 +168,7 @@ func (ns *Server) GetHost(_ context.Context, _ *empty.Empty) (*ethpb.HostData, e
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetPeer returns the data known about the peer defined by the provided peer id.
func (ns *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb.Peer, error) {
@@ -215,7 +215,7 @@ func (ns *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ListPeers lists the peers connected to this node.
func (ns *Server) ListPeers(ctx context.Context, _ *empty.Empty) (*ethpb.Peers, error) {
@@ -270,7 +270,7 @@ func (ns *Server) ListPeers(ctx context.Context, _ *empty.Empty) (*ethpb.Peers,
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetETH1ConnectionStatus gets data about the ETH1 endpoints.
func (ns *Server) GetETH1ConnectionStatus(_ context.Context, _ *empty.Empty) (*ethpb.ETH1ConnectionStatus, error) {
@@ -286,7 +286,7 @@ func (ns *Server) GetETH1ConnectionStatus(_ context.Context, _ *empty.Empty) (*e
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// StreamBeaconLogs from the beacon node via a gRPC server-side stream.
// DEPRECATED: This endpoint doesn't appear to be used and has been marked for deprecation.

View File

@@ -17,7 +17,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitAggregateSelectionProof is called by a validator when it is assigned to be an aggregator.
// The aggregator submits the selection proof to obtain the aggregated attestation
@@ -55,7 +55,7 @@ func (vs *Server) SubmitAggregateSelectionProof(ctx context.Context, req *ethpb.
return &ethpb.AggregateSelectionResponse{AggregateAndProof: attAndProof}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitAggregateSelectionProofElectra is called by a validator when it is assigned to be an aggregator.
// The aggregator submits the selection proof to obtain the aggregated attestation
@@ -149,7 +149,7 @@ func (vs *Server) processAggregateSelection(ctx context.Context, req *ethpb.Aggr
return indexInCommittee, validatorIndex, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitSignedAggregateSelectionProof is called by a validator to broadcast a signed
// aggregate attestation and proof object.
@@ -163,7 +163,7 @@ func (vs *Server) SubmitSignedAggregateSelectionProof(
return &ethpb.SignedAggregateSubmitResponse{}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitSignedAggregateSelectionProofElectra is called by a validator to broadcast a signed
// aggregate attestation and proof object.

View File

@@ -22,7 +22,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetAttestationData requests that the beacon node produce an attestation data object,
// which the validator acting as an attester will then sign.
@@ -44,7 +44,7 @@ func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.Attestation
return res, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeAttestation is a function called by an attester to vote
// on a block via an attestation object as defined in the Ethereum specification.
@@ -74,7 +74,7 @@ func (vs *Server) ProposeAttestation(ctx context.Context, att *ethpb.Attestation
return resp, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeAttestationElectra is a function called by an attester to vote
// on a block via an attestation object as defined in the Ethereum specification.
@@ -114,7 +114,7 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
return resp, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubscribeCommitteeSubnets subscribes to the committee ID subnet given subscribe request.
func (vs *Server) SubscribeCommitteeSubnets(ctx context.Context, req *ethpb.CommitteeSubnetsSubscribeRequest) (*emptypb.Empty, error) {

View File

@@ -82,7 +82,7 @@ func TestProposeAttestation(t *testing.T) {
config := params.BeaconConfig()
config.ElectraForkEpoch = 0
params.OverrideBeaconConfig(config)
state, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))

View File

@@ -9,13 +9,12 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// StreamBlocksAltair streams blocks to clients every single time a block is received by the beacon node.
func (vs *Server) StreamBlocksAltair(req *ethpb.StreamBlocksRequest, stream ethpb.BeaconNodeValidator_StreamBlocksAltairServer) error {
@@ -50,7 +49,7 @@ func (vs *Server) StreamBlocksAltair(req *ethpb.StreamBlocksRequest, stream ethp
}
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// StreamSlots sends the block's slot and dependent roots to clients every single time a block is received by the beacon node.
func (vs *Server) StreamSlots(req *ethpb.StreamSlotsRequest, stream ethpb.BeaconNodeValidator_StreamSlotsServer) error {
@@ -67,6 +66,7 @@ func (vs *Server) StreamSlots(req *ethpb.StreamSlotsRequest, stream ethpb.Beacon
select {
case ev := <-ch:
var s primitives.Slot
var currDependentRoot, prevDependentRoot [32]byte
if req.VerifiedOnly {
if ev.Type != statefeed.BlockProcessed {
continue
@@ -76,6 +76,8 @@ func (vs *Server) StreamSlots(req *ethpb.StreamSlotsRequest, stream ethpb.Beacon
continue
}
s = data.Slot
currDependentRoot = data.CurrDependentRoot
prevDependentRoot = data.PrevDependentRoot
} else {
if ev.Type != blockfeed.ReceivedBlock {
continue
@@ -85,24 +87,14 @@ func (vs *Server) StreamSlots(req *ethpb.StreamSlotsRequest, stream ethpb.Beacon
continue
}
s = data.SignedBlock.Block().Slot()
}
currEpoch := slots.ToEpoch(s)
currDepRoot, err := vs.ForkchoiceFetcher.DependentRoot(currEpoch)
if err != nil {
return status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
}
prevDepRoot := currDepRoot
if currEpoch > 0 {
prevDepRoot, err = vs.ForkchoiceFetcher.DependentRoot(currEpoch - 1)
if err != nil {
return status.Errorf(codes.Internal, "Could not get dependent root: %v", err)
}
currDependentRoot = data.CurrDependentRoot
prevDependentRoot = data.PrevDependentRoot
}
if err := stream.Send(
&ethpb.StreamSlotsResponse{
Slot: s,
PreviousDutyDependentRoot: prevDepRoot[:],
CurrentDutyDependentRoot: currDepRoot[:],
PreviousDutyDependentRoot: prevDependentRoot[:],
CurrentDutyDependentRoot: currDependentRoot[:],
}); err != nil {
return status.Errorf(codes.Unavailable, "Could not send over stream: %v", err)
}

View File

@@ -16,7 +16,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetDuties returns the duties assigned to a list of validators specified
// in the request object.
@@ -178,7 +178,7 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// AssignValidatorToSubnet checks the status and pubkey of a particular validator
// to discern whether persistent subnets need to be registered for them.

View File

@@ -12,7 +12,7 @@ import (
"google.golang.org/grpc/status"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeExit proposes an exit for a validator.
func (vs *Server) ProposeExit(ctx context.Context, req *ethpb.SignedVoluntaryExit) (*ethpb.ProposeExitResponse, error) {

View File

@@ -49,7 +49,7 @@ const (
defaultBuilderBoostFactor = primitives.Gwei(100)
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetBeaconBlock is called by a proposer during its assigned slot to request a block to sign
// by passing in the slot and the signed randao reveal of the slot.
@@ -279,7 +279,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
return vs.constructGenericBeaconBlock(sBlk, bundle, winningBid)
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ProposeBeaconBlock handles the proposal of beacon blocks.
// TODO: Add tests
@@ -525,7 +525,7 @@ func (vs *Server) broadcastAndReceiveDataColumns(
return nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
func (vs *Server) PrepareBeaconProposer(
@@ -562,7 +562,7 @@ func (vs *Server) PrepareBeaconProposer(
return &emptypb.Empty{}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetFeeRecipientByPubKey returns a fee recipient from the beacon node's settings or db based on a given public key
func (vs *Server) GetFeeRecipientByPubKey(ctx context.Context, request *ethpb.FeeRecipientByPubKeyRequest) (*ethpb.FeeRecipientByPubKeyResponse, error) {
@@ -619,7 +619,7 @@ func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.ReadOnl
return root[:], nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitValidatorRegistrations submits validator registrations.
func (vs *Server) SubmitValidatorRegistrations(ctx context.Context, reg *ethpb.SignedValidatorRegistrationsV1) (*emptypb.Empty, error) {

View File

@@ -83,7 +83,7 @@ type Server struct {
AttestationStateFetcher blockchain.AttestationStateFetcher
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// WaitForActivation checks if a validator public key exists in the active validator registry of the current
// beacon state, if not, then it creates a stream which listens for canonical states which contain
@@ -133,7 +133,7 @@ func (vs *Server) WaitForActivation(req *ethpb.ValidatorActivationRequest, strea
}
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ValidatorIndex is called by a validator to get its index location in the beacon state.
func (vs *Server) ValidatorIndex(ctx context.Context, req *ethpb.ValidatorIndexRequest) (*ethpb.ValidatorIndexResponse, error) {
@@ -152,7 +152,7 @@ func (vs *Server) ValidatorIndex(ctx context.Context, req *ethpb.ValidatorIndexR
return &ethpb.ValidatorIndexResponse{Index: index}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// DomainData fetches the current domain version information from the beacon state.
func (vs *Server) DomainData(ctx context.Context, request *ethpb.DomainRequest) (*ethpb.DomainResponse, error) {
@@ -184,7 +184,7 @@ func (vs *Server) DomainData(ctx context.Context, request *ethpb.DomainRequest)
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// WaitForChainStart queries the logs of the Deposit Contract in order to verify the beacon chain
// has started its runtime and validators begin their responsibilities. If it has not, it then

View File

@@ -29,7 +29,7 @@ var nonExistentIndex = primitives.ValidatorIndex(^uint64(0))
var errParticipation = status.Errorf(codes.Internal, "Failed to obtain epoch participation")
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// ValidatorStatus returns the validator status of the current epoch.
// The status response can be one of the following:
@@ -54,7 +54,7 @@ func (vs *Server) ValidatorStatus(
return vStatus, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// MultipleValidatorStatus is the same as ValidatorStatus. Supports retrieval of multiple
// validator statuses. Takes a list of public keys or a list of validator indices.
@@ -104,7 +104,7 @@ func (vs *Server) MultipleValidatorStatus(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// CheckDoppelGanger checks if the provided keys are currently active in the network.
func (vs *Server) CheckDoppelGanger(ctx context.Context, req *ethpb.DoppelGangerRequest) (*ethpb.DoppelGangerResponse, error) {

View File

@@ -12,7 +12,7 @@ import (
"google.golang.org/protobuf/types/known/emptypb"
)
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetSyncMessageBlockRoot retrieves the sync committee block root of the beacon chain.
func (vs *Server) GetSyncMessageBlockRoot(
@@ -34,7 +34,7 @@ func (vs *Server) GetSyncMessageBlockRoot(
}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitSyncMessage submits the sync committee message to the network.
// It also saves the sync committee message into the pending pool for block inclusion.
@@ -45,7 +45,7 @@ func (vs *Server) SubmitSyncMessage(ctx context.Context, msg *ethpb.SyncCommitte
return &emptypb.Empty{}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetSyncSubcommitteeIndex is called by a sync committee participant to get
// its subcommittee index for sync message aggregation duty.
@@ -63,7 +63,7 @@ func (vs *Server) GetSyncSubcommitteeIndex(
return &ethpb.SyncSubcommitteeIndexResponse{Indices: indices}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// GetSyncCommitteeContribution is called by a sync committee aggregator
// to retrieve sync committee contribution object.
@@ -106,7 +106,7 @@ func (vs *Server) GetSyncCommitteeContribution(
return contribution, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// SubmitSignedContributionAndProof is called by a sync committee aggregator
// to submit signed contribution and proof object.
@@ -120,7 +120,7 @@ func (vs *Server) SubmitSignedContributionAndProof(
return &emptypb.Empty{}, nil
}
// Deprecated: gRPC API will still be supported for some time, most likely until v8 in 2026, but will be eventually removed in favor of REST API.
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
//
// AggregatedSigAndAggregationBits returns the aggregated signature and aggregation bits
// associated with a particular set of sync committee messages.

View File

@@ -62,9 +62,11 @@ func (b *BeaconState) NextWithdrawalValidatorIndex() (primitives.ValidatorIndex,
//
// validator = state.validators[withdrawal.index]
// has_sufficient_effective_balance = validator.effective_balance >= MIN_ACTIVATION_BALANCE
// has_excess_balance = state.balances[withdrawal.index] > MIN_ACTIVATION_BALANCE
// total_withdrawn = sum(w.amount for w in withdrawals if w.validator_index == withdrawal.validator_index)
// balance = state.balances[withdrawal.validator_index] - total_withdrawn
// has_excess_balance = balance > MIN_ACTIVATION_BALANCE
// if validator.exit_epoch == FAR_FUTURE_EPOCH and has_sufficient_effective_balance and has_excess_balance:
// withdrawable_balance = min(state.balances[withdrawal.index] - MIN_ACTIVATION_BALANCE, withdrawal.amount)
// withdrawable_balance = min(balance - MIN_ACTIVATION_BALANCE, withdrawal.amount)
// withdrawals.append(Withdrawal(
// index=withdrawal_index,
// validator_index=withdrawal.index,
@@ -132,9 +134,19 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
return nil, 0, fmt.Errorf("could not retrieve balance at index %d: %w", w.Index, err)
}
hasSufficientEffectiveBalance := v.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
hasExcessBalance := vBal > params.BeaconConfig().MinActivationBalance
var totalWithdrawn uint64
for _, wi := range withdrawals {
if wi.ValidatorIndex == w.Index {
totalWithdrawn += wi.Amount
}
}
balance, err := mathutil.Sub64(vBal, totalWithdrawn)
if err != nil {
return nil, 0, errors.Wrapf(err, "failed to subtract balance %d with total withdrawn %d", vBal, totalWithdrawn)
}
hasExcessBalance := balance > params.BeaconConfig().MinActivationBalance
if v.ExitEpoch() == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
amount := min(vBal-params.BeaconConfig().MinActivationBalance, w.Amount)
amount := min(balance-params.BeaconConfig().MinActivationBalance, w.Amount)
withdrawals = append(withdrawals, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: w.Index,
@@ -165,7 +177,10 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
partiallyWithdrawnBalance += w.Amount
}
}
balance = balance - partiallyWithdrawnBalance
balance, err = mathutil.Sub64(balance, partiallyWithdrawnBalance)
if err != nil {
return nil, 0, errors.Wrapf(err, "could not subtract balance %d with partial withdrawn balance %d", balance, partiallyWithdrawnBalance)
}
}
if helpers.IsFullyWithdrawableValidator(val, balance, epoch, b.version) {
withdrawals = append(withdrawals, &enginev1.Withdrawal{

View File
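The hunks above replace the direct use of `vBal` with a running balance: amounts already queued for the same validator are summed and subtracted with a checked subtraction (mathutil.Sub64 in the diff) so the uint64 balance cannot wrap around when pending partial withdrawals exceed what is left. A minimal, self-contained sketch of that bookkeeping (simplified types and a local sub64 helper; not the actual state-native code):

package main

import "fmt"

const minActivationBalance = 32_000_000_000 // Gwei (32 ETH, mainnet value)

type withdrawal struct {
	validatorIndex uint64
	amount         uint64
}

// sub64 subtracts b from a and reports an error instead of wrapping on underflow.
func sub64(a, b uint64) (uint64, error) {
	if b > a {
		return 0, fmt.Errorf("underflow: %d - %d", a, b)
	}
	return a - b, nil
}

// pendingPartialAmount returns how much of a pending partial withdrawal can be honored,
// given the validator's current balance and what has already been queued for it.
func pendingPartialAmount(balance uint64, queued []withdrawal, valIdx, requested uint64) (uint64, error) {
	var totalWithdrawn uint64
	for _, w := range queued {
		if w.validatorIndex == valIdx {
			totalWithdrawn += w.amount
		}
	}
	remaining, err := sub64(balance, totalWithdrawn)
	if err != nil {
		return 0, err
	}
	if remaining <= minActivationBalance {
		return 0, nil // no excess balance left to withdraw
	}
	return min(remaining-minActivationBalance, requested), nil // built-in min needs Go 1.21+
}

func main() {
	queued := []withdrawal{{validatorIndex: 0, amount: 1_008_000_000_000}}
	amt, err := pendingPartialAmount(2_015_000_000_000, queued, 0, 1_008_000_000_000)
	fmt.Println(amt, err) // 975000000000 <nil>
}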

@@ -416,3 +416,37 @@ func TestExpectedWithdrawals(t *testing.T) {
require.DeepEqual(t, withdrawalFull, expected[1])
})
}
func TestExpectedWithdrawals_underflow_electra(t *testing.T) {
s, err := state_native.InitializeFromProtoUnsafeElectra(&ethpb.BeaconStateElectra{})
require.NoError(t, err)
vals := make([]*ethpb.Validator, 1)
balances := make([]uint64, 1)
balances[0] = 2015_000_000_000 // Validator A begins leaking ETH due to inactivity, and over time, its balance decreases to 2,015 ETH
val := &ethpb.Validator{
WithdrawalCredentials: make([]byte, 32),
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra,
WithdrawableEpoch: primitives.Epoch(0),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
val.WithdrawalCredentials[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
val.WithdrawalCredentials[31] = byte(0)
vals[0] = val
require.NoError(t, s.SetValidators(vals))
require.NoError(t, s.SetBalances(balances))
require.NoError(t, s.AppendPendingPartialWithdrawal(&ethpb.PendingPartialWithdrawal{
Amount: 1008_000_000_000,
WithdrawableEpoch: primitives.Epoch(0),
}))
require.NoError(t, s.AppendPendingPartialWithdrawal(&ethpb.PendingPartialWithdrawal{
Amount: 1008_000_000_000,
WithdrawableEpoch: primitives.Epoch(0),
}))
expected, _, err := s.ExpectedWithdrawals()
require.NoError(t, err)
require.Equal(t, 3, len(expected)) // two pending partial withdrawals plus the full withdrawal of the remaining balance
require.Equal(t, uint64(1008_000_000_000), expected[0].Amount)
require.Equal(t, uint64(975_000_000_000), expected[1].Amount)
require.Equal(t, uint64(32_000_000_000), expected[2].Amount)
}
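For reference, the three expected amounts in this test follow directly from its inputs, assuming the mainnet MIN_ACTIVATION_BALANCE of 32 ETH:

starting balance           = 2015 ETH
first partial withdrawal   = min(2015 - 32, 1008)          = 1008 ETH
second partial withdrawal  = min((2015 - 1008) - 32, 1008) =  975 ETH
full withdrawal (leftover) = 2015 - 1008 - 975             =   32 ETH

Without the running-balance fix, both partial withdrawals would be computed from the untouched 2015 ETH balance (1008 ETH each, 2016 ETH total), and the later balance subtraction would underflow.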

View File

@@ -259,7 +259,7 @@ func (s *State) latestAncestor(ctx context.Context, blockRoot [32]byte) (state.B
defer span.End()
if s.isFinalizedRoot(blockRoot) {
finalizedState := s.finalizedState()
finalizedState := s.FinalizedState()
if finalizedState != nil {
return finalizedState, nil
}
@@ -297,7 +297,7 @@ func (s *State) latestAncestor(ctx context.Context, blockRoot [32]byte) (state.B
// Does the state exist in finalized info cache.
if s.isFinalizedRoot(parentRoot) {
return s.finalizedState(), nil
return s.FinalizedState(), nil
}
// Does the state exist in epoch boundary cache.

View File

@@ -23,8 +23,8 @@ func NewService() *StateManager {
}
// StateByRootIfCachedNoCopy --
func (_ *StateManager) StateByRootIfCachedNoCopy(_ [32]byte) state.BeaconState {
panic("implement me")
func (m *StateManager) StateByRootIfCachedNoCopy(root [32]byte) state.BeaconState {
return m.StatesByRoot[root]
}
// Resume --

View File

@@ -196,7 +196,7 @@ func (s *State) isFinalizedRoot(r [32]byte) bool {
}
// FinalizedState returns the cached and copied finalized state.
func (s *State) finalizedState() state.BeaconState {
func (s *State) FinalizedState() state.BeaconState {
s.finalizedInfo.lock.RLock()
defer s.finalizedInfo.lock.RUnlock()
return s.finalizedInfo.state.Copy()

View File
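Exporting the accessor keeps the same discipline as the unexported version: take the read lock and hand back a copy, so callers outside the package cannot mutate the cached finalized state. A generic sketch of that copy-under-RLock pattern (stand-in types, not Prysm's):

package main

import (
	"fmt"
	"sync"
)

type snapshot struct{ slot uint64 }

func (s *snapshot) Copy() *snapshot {
	if s == nil {
		return nil
	}
	c := *s
	return &c
}

type cache struct {
	mu        sync.RWMutex
	finalized *snapshot
}

// FinalizedState returns a copy of the cached finalized snapshot (nil if unset),
// so callers can read it without holding the lock or racing writers.
func (c *cache) FinalizedState() *snapshot {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.finalized.Copy()
}

func main() {
	c := &cache{finalized: &snapshot{slot: 64}}
	got := c.FinalizedState()
	got.slot = 0 // mutating the copy does not touch the cache
	fmt.Println(c.FinalizedState().slot) // 64
}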

@@ -33,5 +33,5 @@ func TestResume(t *testing.T) {
require.DeepSSZEqual(t, beaconState.ToProtoUnsafe(), resumeState.ToProtoUnsafe())
assert.Equal(t, params.BeaconConfig().SlotsPerEpoch, service.finalizedInfo.slot, "Did not get wanted slot")
assert.Equal(t, service.finalizedInfo.root, root, "Did not get wanted root")
assert.NotNil(t, service.finalizedState(), "Wanted a non nil finalized state")
assert.NotNil(t, service.FinalizedState(), "Wanted a non nil finalized state")
}

View File

@@ -229,6 +229,7 @@ go_test(
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/slashings/mock:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",

View File

@@ -204,7 +204,6 @@ func RequestMissingDataColumnsByRange(
rateLimiter *leakybucket.Collector,
groupCount uint64,
dataColumnsStorage filesystem.DataColumnStorageSummarizer,
peers []peer.ID,
blks []blocks.ROBlock,
batchSize int,
) (map[[fieldparams.RootLength]byte][]blocks.RODataColumn, error) {
@@ -285,7 +284,7 @@ func RequestMissingDataColumnsByRange(
// Requests data column sidecars from peers.
retrievedDataColumnsByRoot := make(map[[fieldparams.RootLength]byte][]blocks.RODataColumn)
for _, request := range requests {
roDataColumns, err := fetchDataColumnsFromPeers(ctx, clock, p2p, rateLimiter, ctxMap, peers, request)
roDataColumns, err := fetchDataColumnsFromPeers(ctx, clock, p2p, rateLimiter, ctxMap, request)
if err != nil {
return nil, errors.Wrap(err, "fetch data columns from peers")
}
@@ -576,17 +575,9 @@ func custodyColumnsFromPeers(peers []peer.ID, p2p p2p.P2P) (map[peer.ID]map[uint
// `filterPeerWhichCustodyAtLeastOneDataColumn` filters peers which custody at least one data column
// specified in `neededDataColumns`. It also returns a list of descriptions for non-admissible peers.
func filterPeerWhichCustodyAtLeastOneDataColumn(neededDataColumns []uint64, inputDataColumnsByPeer map[peer.ID]map[uint64]bool) (map[peer.ID]map[uint64]bool, []string) {
// Get the count of needed data columns.
neededDataColumnsCount := uint64(len(neededDataColumns))
// Create pretty needed data columns for logs.
var neededDataColumnsLog interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
if neededDataColumnsCount < numberOfColumns {
neededDataColumnsLog = neededDataColumns
}
outputDataColumnsByPeer := make(map[peer.ID]map[uint64]bool, len(inputDataColumnsByPeer))
descriptions := make([]string, 0)
@@ -607,11 +598,7 @@ outerLoop:
peerCustodyColumnsLog = uint64MapToSortedSlice(peerCustodyDataColumns)
}
description := fmt.Sprintf(
"peer %s: does not custody any needed column, custody columns: %v, needed columns: %v",
peer, peerCustodyColumnsLog, neededDataColumnsLog,
)
description := fmt.Sprintf("peer %s: does not custody any needed column, custody columns: %v", peer, peerCustodyColumnsLog)
descriptions = append(descriptions, description)
}
@@ -720,7 +707,6 @@ func fetchDataColumnsFromPeers(
p2p p2p.P2P,
rateLimiter *leakybucket.Collector,
ctxMap ContextByteVersions,
peers []peer.ID,
targetRequest *eth.DataColumnSidecarsByRangeRequest,
) ([]blocks.RODataColumn, error) {
// Filter out requests with no data columns.
@@ -729,7 +715,7 @@ func fetchDataColumnsFromPeers(
}
// Get all admissible peers with the data columns they custody.
dataColumnsByAdmissiblePeer, err := waitForPeersForDataColumns(p2p, rateLimiter, peers, targetRequest)
dataColumnsByAdmissiblePeer, err := waitForPeersForDataColumns(p2p, rateLimiter, targetRequest)
if err != nil {
return nil, errors.Wrap(err, "wait for peers for data columns")
}
@@ -766,7 +752,7 @@ func fetchDataColumnsFromPeers(
// - synced up to `lastSlot`, and
// - have bandwidth to serve `blockCount` blocks.
// It waits until at least one peer per data column is available.
func waitForPeersForDataColumns(p2p p2p.P2P, rateLimiter *leakybucket.Collector, peers []peer.ID, request *eth.DataColumnSidecarsByRangeRequest) (map[peer.ID]map[uint64]bool, error) {
func waitForPeersForDataColumns(p2p p2p.P2P, rateLimiter *leakybucket.Collector, request *eth.DataColumnSidecarsByRangeRequest) (map[peer.ID]map[uint64]bool, error) {
const delay = 5 * time.Second
numberOfColumns := params.BeaconConfig().NumberOfColumns
@@ -788,7 +774,7 @@ func waitForPeersForDataColumns(p2p p2p.P2P, rateLimiter *leakybucket.Collector,
// Keep only peers with head epoch greater than or equal to the epoch corresponding to the target slot, and
// keep only peers with enough bandwidth.
filteredPeers, descriptions, err := filterPeersByTargetSlotAndBandwidth(p2p, rateLimiter, peers, lastSlot, request.Count)
filteredPeers, descriptions, err := filterPeersByTargetSlotAndBandwidth(p2p, rateLimiter, lastSlot, request.Count)
if err != nil {
return nil, errors.Wrap(err, "filter eers by target slot and bandwidth")
}
@@ -834,7 +820,7 @@ func waitForPeersForDataColumns(p2p p2p.P2P, rateLimiter *leakybucket.Collector,
time.Sleep(delay)
// Filter for peers with head epoch greater than or equal to our target epoch for ByRange requests.
filteredPeers, descriptions, err = filterPeersByTargetSlotAndBandwidth(p2p, rateLimiter, peers, lastSlot, request.Count)
filteredPeers, descriptions, err = filterPeersByTargetSlotAndBandwidth(p2p, rateLimiter, lastSlot, request.Count)
if err != nil {
return nil, errors.Wrap(err, "filter peers by target slot and bandwidth")
}
@@ -855,10 +841,8 @@ func waitForPeersForDataColumns(p2p p2p.P2P, rateLimiter *leakybucket.Collector,
}
// Filter peers to ensure they are synced to the target slot and have sufficient bandwidth to serve the request.
func filterPeersByTargetSlotAndBandwidth(p2p p2p.P2P, rateLimiter *leakybucket.Collector, peers []peer.ID, lastSlot primitives.Slot, blockCount uint64) ([]peer.ID, []string, error) {
if len(peers) == 0 {
peers = p2p.Peers().Connected()
}
func filterPeersByTargetSlotAndBandwidth(p2p p2p.P2P, rateLimiter *leakybucket.Collector, lastSlot primitives.Slot, blockCount uint64) ([]peer.ID, []string, error) {
peers := p2p.Peers().Connected()
slotPeers, descriptions, err := filterPeersByTargetSlot(p2p, peers, lastSlot)
if err != nil {

View File
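The custody filter shown a few hunks above keeps only peers whose advertised custody set intersects the needed columns, and records a short description for every peer it drops. A simplified, self-contained version of that selection (plain maps instead of the real peer and column types):

package main

import "fmt"

// filterCustodians keeps peers that custody at least one needed column and
// returns a note for every peer that custodies none of them.
func filterCustodians(needed []uint64, custodyByPeer map[string]map[uint64]bool) (map[string]map[uint64]bool, []string) {
	kept := make(map[string]map[uint64]bool, len(custodyByPeer))
	var dropped []string
	for pid, columns := range custodyByPeer {
		ok := false
		for _, c := range needed {
			if columns[c] {
				ok = true
				break
			}
		}
		if ok {
			kept[pid] = columns
			continue
		}
		dropped = append(dropped, fmt.Sprintf("peer %s: does not custody any needed column", pid))
	}
	return kept, dropped
}

func main() {
	kept, notes := filterCustodians([]uint64{3, 7}, map[string]map[uint64]bool{
		"peerA": {3: true},
		"peerB": {1: true},
	})
	fmt.Println(len(kept), notes) // 1 [peer peerB: does not custody any needed column]
}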

@@ -585,7 +585,11 @@ func TestRequestDataColumnSidecarsByRoot(t *testing.T) {
ctxMap := map[[4]byte]int{{245, 165, 253, 66}: version.Fulu}
verifier := func(cols []blocks.RODataColumn, reqs []verification.Requirement) verification.DataColumnsVerifier {
initializer := &verification.Initializer{}
clockSync := startup.NewClockSynchronizer()
require.NoError(t, clockSync.SetClock(clock))
w := verification.NewInitializerWaiter(clockSync, nil, nil)
initializer, err := w.WaitForInitializer(context.Background())
require.NoError(t, err)
return initializer.NewDataColumnsVerifier(cols, reqs)
}
@@ -1472,7 +1476,7 @@ func TestFetchDataColumnsFromPeers(t *testing.T) {
rateLimiter := leakybucket.NewCollector(1_000, 1_000, 1*time.Hour, false)
// Fetch the data columns from the peers.
fetchedRoDataColumnsByRoot, err := RequestMissingDataColumnsByRange(ctx, clock, ctxMap, p2pSvc, rateLimiter, 4, dataColumnStorageSummarizer, peersID, roBlocks, tc.batchSize)
fetchedRoDataColumnsByRoot, err := RequestMissingDataColumnsByRange(ctx, clock, ctxMap, p2pSvc, rateLimiter, 4, dataColumnStorageSummarizer, roBlocks, tc.batchSize)
if !tc.isError {
require.NoError(t, err)
} else {

View File

@@ -61,6 +61,7 @@ go_test(
"blocks_fetcher_test.go",
"blocks_fetcher_utils_test.go",
"blocks_queue_test.go",
"downscore_test.go",
"fsm_benchmark_test.go",
"fsm_test.go",
"initial_sync_test.go",
@@ -71,6 +72,7 @@ go_test(
tags = ["CI_race_detection"],
deps = [
"//async/abool:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/das:go_default_library",
@@ -80,6 +82,7 @@ go_test(
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/p2p/types:go_default_library",
@@ -107,6 +110,7 @@ go_test(
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_paulbellamy_ratecounter//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],

View File

@@ -129,11 +129,20 @@ type fetchRequestParams struct {
// fetchRequestResponse is a combined type to hold results of both successful executions and errors.
// The valid usage pattern is to check whether the result's `err` is nil before using `bwb`.
type fetchRequestResponse struct {
pid peer.ID
start primitives.Slot
count uint64
bwb []blocks.BlockWithROSidecars
err error
blocksFrom peer.ID
blobsFrom peer.ID
start primitives.Slot
count uint64
bwb []blocks.BlockWithROSidecars
err error
}
func (r *fetchRequestResponse) blocksQueueFetchedData() *blocksQueueFetchedData {
return &blocksQueueFetchedData{
blocksFrom: r.blocksFrom,
blobsFrom: r.blobsFrom,
bwb: r.bwb,
}
}
// newBlocksFetcher creates ready to use fetcher.
@@ -330,20 +339,22 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
}
}
response.bwb, response.pid, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
if response.err != nil {
return response
}
response.bwb, response.blocksFrom, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
if response.err == nil {
pid, err := f.fetchSidecars(ctx, response.blocksFrom, peers, response.bwb)
if err != nil {
response.err = err
}
// Fetch sidecars for blocks in `response.bwb`.
response.err = f.fetchSidecars(ctx, response.pid, peers, response.bwb)
response.blobsFrom = pid
}
return response
}
// fetchSidecars fetches sidecars corresponding to blocks in `bwScs`.
// It mutates the `Blobs` and `Columns` fields of `bwScs` with the fetched sidecars.
func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []peer.ID, bwScs []blocks.BlockWithROSidecars) error {
func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []peer.ID, bwScs []blocks.BlockWithROSidecars) (peer.ID, error) {
const batchSize = 32
// Find the first block with a slot greater than or equal to the first Fulu slot.
@@ -355,15 +366,25 @@ func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []
blocksWithBlobs := bwScs[:firstFuluIndex]
blocksWithDataColumns := bwScs[firstFuluIndex:]
if len(blocksWithBlobs) == 0 && len(blocksWithDataColumns) == 0 {
return "", nil
}
var (
blobsPid peer.ID
err error
)
if len(blocksWithBlobs) > 0 {
// Fetch blob sidecars.
if err := f.fetchBlobsFromPeer(ctx, blocksWithBlobs, pid, peers); err != nil {
return errors.Wrap(err, "fetch blobs from peer")
blobsPid, err = f.fetchBlobsFromPeer(ctx, blocksWithBlobs, pid, peers)
if err != nil {
return "", errors.Wrap(err, "fetch blobs from peer")
}
}
if len(blocksWithDataColumns) == 0 {
return nil
return blobsPid, nil
}
// Extract blocks.
@@ -375,9 +396,9 @@ func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []
// Fetch data column sidecars.
actualGroupCount := f.custodyInfo.ActualGroupCount()
fetchedDataColumnsByRoot, err := prysmsync.RequestMissingDataColumnsByRange(ctx, f.clock, f.ctxMap, f.p2p, f.rateLimiter, actualGroupCount, f.dcs, peers, dataColumnBlocks, batchSize)
fetchedDataColumnsByRoot, err := prysmsync.RequestMissingDataColumnsByRange(ctx, f.clock, f.ctxMap, f.p2p, f.rateLimiter, actualGroupCount, f.dcs, dataColumnBlocks, batchSize)
if err != nil {
return errors.Wrap(err, "fetch missing data columns from peers")
return blobsPid, errors.Wrap(err, "fetch missing data columns from peers")
}
// Populate the response.
@@ -389,7 +410,8 @@ func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []
}
}
return nil
// TODO: Return the (multiple) peer IDs that provided the data columns and not only the one for blobs.
return blobsPid, nil
}
// fetchBlocksFromPeer fetches blocks from a single randomly selected peer, sorted by slot.
@@ -613,24 +635,24 @@ func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) e
// fetchBlobsFromPeer fetches blocks from a single randomly selected peer.
// This function mutates the input `bwb` argument.
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROSidecars, pid peer.ID, peers []peer.ID) error {
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.BlockWithROSidecars, pid peer.ID, peers []peer.ID) (peer.ID, error) {
if len(bwb) == 0 {
return nil
return "", nil
}
ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlobsFromPeer")
defer span.End()
if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch {
return nil
return "", nil
}
blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot())
if err != nil {
return err
return "", err
}
// Construct request message based on observed interval of blocks in need of blobs.
req := countCommitments(bwb, blobWindowStart).blobRange(f.bs).Request()
if req == nil {
return nil
return "", nil
}
peers = f.filterPeers(ctx, peers, peersPercentagePerRequest)
// We dial the initial peer first to ensure that we get the desired set of blobs.
@@ -652,9 +674,9 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.Blo
log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response")
continue
}
return err
return p, err
}
return errNoPeersAvailable
return "", errNoPeersAvailable
}
// sortedSliceFromMap returns a sorted slice of keys from a map.

View File

@@ -22,8 +22,9 @@ import (
// Blocks are stored in ascending slot order. The first block is guaranteed to have a parent
// either in the DB or in the initial sync cache.
type forkData struct {
peer peer.ID
bwb []blocks.BlockWithROSidecars
blocksFrom peer.ID
blobsFrom peer.ID
bwb []blocks.BlockWithROSidecars
}
// nonSkippedSlotAfter checks slots after the given one in an attempt to find a non-empty future slot.
@@ -279,16 +280,16 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, peers
return nil, errors.Wrap(err, "invalid blocks received in findForkWithPeer")
}
if err := f.fetchSidecars(ctx, pid, peers, bwb); err != nil {
sidecarsPid, err := f.fetchSidecars(ctx, pid, peers, bwb)
if err != nil {
return nil, errors.Wrap(err, "fetch sidecars")
}
// We need to fetch the blobs for the given alt-chain if any exist, so that we can try to verify and import
// the blocks.
// The caller will use the blocks with verified blobs in bwb as the starting point for
// round-robin syncing the alternate chain.
return &forkData{peer: pid, bwb: bwb}, nil
return &forkData{blocksFrom: pid, blobsFrom: sidecarsPid, bwb: bwb}, nil
}
return nil, errNoAlternateBlocks
}
@@ -304,12 +305,14 @@ func (f *blocksFetcher) findAncestor(ctx context.Context, pid peer.ID, peers []p
if err != nil {
return nil, errors.Wrap(err, "received invalid blocks in findAncestor")
}
if err := f.fetchSidecars(ctx, pid, peers, bwb); err != nil {
sidecarsPid, err := f.fetchSidecars(ctx, pid, peers, bwb)
if err != nil {
return nil, errors.Wrap(err, "fetch sidecars")
}
return &forkData{
peer: pid,
bwb: bwb,
blocksFrom: pid,
bwb: bwb,
blobsFrom: sidecarsPid,
}, nil
}
// Request block's parent.

View File

@@ -265,7 +265,7 @@ func TestBlocksFetcher_findFork(t *testing.T) {
reqEnd := testForkStartSlot(t, 251) + primitives.Slot(findForkReqRangeSize())
require.Equal(t, primitives.Slot(len(chain1)), fork.bwb[0].Block.Block().Slot())
require.Equal(t, int(reqEnd-forkSlot1b), len(fork.bwb))
require.Equal(t, curForkMoreBlocksPeer, fork.peer)
require.Equal(t, curForkMoreBlocksPeer, fork.blocksFrom)
// Save all chain1b blocks (so that they do not interfere with alternative fork)
for _, blk := range chain1b {
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -285,7 +285,7 @@ func TestBlocksFetcher_findFork(t *testing.T) {
alternativePeer := connectPeerHavingBlocks(t, p2p, chain2, finalizedSlot, p2p.Peers())
fork, err = fetcher.findFork(ctx, 251)
require.NoError(t, err)
assert.Equal(t, alternativePeer, fork.peer)
assert.Equal(t, alternativePeer, fork.blocksFrom)
assert.Equal(t, 65, len(fork.bwb))
ind := forkSlot
for _, blk := range fork.bwb {

View File

@@ -99,8 +99,9 @@ type blocksQueue struct {
// blocksQueueFetchedData is a data container that is returned from a queue on each step.
type blocksQueueFetchedData struct {
pid peer.ID
bwb []blocks.BlockWithROSidecars
blocksFrom peer.ID
blobsFrom peer.ID
bwb []blocks.BlockWithROSidecars
}
// newBlocksQueue creates initialized priority queue.
@@ -347,13 +348,15 @@ func (q *blocksQueue) onDataReceivedEvent(ctx context.Context) eventHandlerFn {
}
if errors.Is(response.err, beaconsync.ErrInvalidFetchedData) {
// Peer returned invalid data, penalize.
q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(m.pid)
log.WithField("pid", response.pid).Debug("Peer is penalized for invalid blocks")
q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(response.blocksFrom)
log.WithField("pid", response.blocksFrom).Debug("Peer is penalized for invalid blocks")
} else if errors.Is(response.err, verification.ErrBlobInvalid) {
q.blocksFetcher.p2p.Peers().Scorers().BadResponsesScorer().Increment(response.blobsFrom)
log.WithField("pid", response.blobsFrom).Debug("Peer is penalized for invalid blob response")
}
return m.state, response.err
}
m.pid = response.pid
m.bwb = response.bwb
m.fetched = *response
return stateDataParsed, nil
}
}
@@ -368,19 +371,15 @@ func (q *blocksQueue) onReadyToSendEvent(ctx context.Context) eventHandlerFn {
return m.state, errInvalidInitialState
}
if len(m.bwb) == 0 {
if m.numFetched() == 0 {
return stateSkipped, nil
}
send := func() (stateID, error) {
data := &blocksQueueFetchedData{
pid: m.pid,
bwb: m.bwb,
}
select {
case <-ctx.Done():
return m.state, ctx.Err()
case q.fetchedData <- data:
case q.fetchedData <- m.fetched.blocksQueueFetchedData():
}
return stateSent, nil
}

View File
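With the queue now tracking which peer served the blocks and which served the blobs, the penalty can be attributed to the right one based on the error class. A stripped-down sketch of that dispatch (sentinel errors and string peer IDs are stand-ins for the real sync/verification types):

package main

import (
	"errors"
	"fmt"
)

var (
	errInvalidFetchedData = errors.New("invalid fetched data") // stand-in for beaconsync.ErrInvalidFetchedData
	errBlobInvalid        = errors.New("blob invalid")         // stand-in for verification.ErrBlobInvalid
)

type response struct {
	blocksFrom, blobsFrom string
	err                   error
}

// peerToPenalize returns which peer (if any) should be downscored for the response error.
func peerToPenalize(r response) string {
	switch {
	case errors.Is(r.err, errInvalidFetchedData):
		return r.blocksFrom // the block provider sent invalid blocks
	case errors.Is(r.err, errBlobInvalid):
		return r.blobsFrom // the blob provider sent an invalid blob
	default:
		return "" // other errors are not attributed to a peer
	}
}

func main() {
	fmt.Println(peerToPenalize(response{blocksFrom: "peerA", blobsFrom: "peerB", err: errBlobInvalid})) // peerB
}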

@@ -477,8 +477,8 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
updatedState, err := handlerFn(&stateMachine{
state: stateScheduled,
}, &fetchRequestResponse{
pid: "abc",
err: errSlotIsTooHigh,
blocksFrom: "abc",
err: errSlotIsTooHigh,
})
assert.ErrorContains(t, errSlotIsTooHigh.Error(), err)
assert.Equal(t, stateScheduled, updatedState)
@@ -500,9 +500,9 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
updatedState, err := handlerFn(&stateMachine{
state: stateScheduled,
}, &fetchRequestResponse{
pid: "abc",
err: errSlotIsTooHigh,
start: 256,
blocksFrom: "abc",
err: errSlotIsTooHigh,
start: 256,
})
assert.ErrorContains(t, errSlotIsTooHigh.Error(), err)
assert.Equal(t, stateScheduled, updatedState)
@@ -522,8 +522,8 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
updatedState, err := handlerFn(&stateMachine{
state: stateScheduled,
}, &fetchRequestResponse{
pid: "abc",
err: beaconsync.ErrInvalidFetchedData,
blocksFrom: "abc",
err: beaconsync.ErrInvalidFetchedData,
})
assert.ErrorContains(t, beaconsync.ErrInvalidFetchedData.Error(), err)
assert.Equal(t, stateScheduled, updatedState)
@@ -542,7 +542,7 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
wsbCopy, err := wsb.Copy()
require.NoError(t, err)
response := &fetchRequestResponse{
pid: "abc",
blocksFrom: "abc",
bwb: []blocks.BlockWithROSidecars{
{Block: blocks.ROBlock{ReadOnlySignedBeaconBlock: wsb}},
{Block: blocks.ROBlock{ReadOnlySignedBeaconBlock: wsbCopy}},
@@ -551,13 +551,15 @@ func TestBlocksQueue_onDataReceivedEvent(t *testing.T) {
fsm := &stateMachine{
state: stateScheduled,
}
assert.Equal(t, peer.ID(""), fsm.pid)
assert.Equal(t, 0, len(fsm.bwb))
assert.Equal(t, peer.ID(""), fsm.fetched.blocksFrom)
assert.Equal(t, peer.ID(""), fsm.fetched.blobsFrom)
assert.Equal(t, 0, fsm.numFetched())
updatedState, err := handlerFn(fsm, response)
assert.NoError(t, err)
assert.Equal(t, stateDataParsed, updatedState)
assert.Equal(t, response.pid, fsm.pid)
assert.DeepSSZEqual(t, response.bwb, fsm.bwb)
assert.Equal(t, response.blocksFrom, fsm.fetched.blocksFrom)
assert.Equal(t, response.blobsFrom, fsm.fetched.blobsFrom)
assert.DeepSSZEqual(t, response.bwb, fsm.fetched.bwb)
})
}
@@ -642,10 +644,10 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
queue.smm.addStateMachine(256)
queue.smm.addStateMachine(320)
queue.smm.machines[256].state = stateDataParsed
queue.smm.machines[256].pid = pidDataParsed
queue.smm.machines[256].fetched.blocksFrom = pidDataParsed
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
queue.smm.machines[256].bwb = []blocks.BlockWithROSidecars{
queue.smm.machines[256].fetched.bwb = []blocks.BlockWithROSidecars{
{Block: rwsb},
}
@@ -677,10 +679,10 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
queue.smm.machines[256].state = stateDataParsed
queue.smm.addStateMachine(320)
queue.smm.machines[320].state = stateDataParsed
queue.smm.machines[320].pid = pidDataParsed
queue.smm.machines[320].fetched.blocksFrom = pidDataParsed
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
queue.smm.machines[320].bwb = []blocks.BlockWithROSidecars{
queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROSidecars{
{Block: rwsb},
}
@@ -709,10 +711,10 @@ func TestBlocksQueue_onReadyToSendEvent(t *testing.T) {
queue.smm.machines[256].state = stateSkipped
queue.smm.addStateMachine(320)
queue.smm.machines[320].state = stateDataParsed
queue.smm.machines[320].pid = pidDataParsed
queue.smm.machines[320].fetched.blocksFrom = pidDataParsed
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
queue.smm.machines[320].bwb = []blocks.BlockWithROSidecars{
queue.smm.machines[320].fetched.bwb = []blocks.BlockWithROSidecars{
{Block: rwsb},
}
@@ -1213,17 +1215,17 @@ func TestBlocksQueue_stuckInUnfavourableFork(t *testing.T) {
firstFSM, ok := queue.smm.findStateMachine(forkedSlot)
require.Equal(t, true, ok)
require.Equal(t, stateDataParsed, firstFSM.state)
require.Equal(t, forkedPeer, firstFSM.pid)
require.Equal(t, forkedPeer, firstFSM.fetched.blocksFrom)
reqEnd := testForkStartSlot(t, 251) + primitives.Slot(findForkReqRangeSize())
require.Equal(t, int(reqEnd-forkedSlot), len(firstFSM.bwb))
require.Equal(t, forkedSlot, firstFSM.bwb[0].Block.Block().Slot())
require.Equal(t, int(reqEnd-forkedSlot), len(firstFSM.fetched.bwb))
require.Equal(t, forkedSlot, firstFSM.fetched.bwb[0].Block.Block().Slot())
// Assert that forked data from chain2 is available (within 64 fetched blocks).
for i, blk := range chain2[forkedSlot:] {
if i >= len(firstFSM.bwb) {
if i >= len(firstFSM.fetched.bwb) {
break
}
rootFromFSM := firstFSM.bwb[i].Block.Root()
rootFromFSM := firstFSM.fetched.bwb[i].Block.Root()
blkRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, blkRoot, rootFromFSM)
@@ -1231,7 +1233,7 @@ func TestBlocksQueue_stuckInUnfavourableFork(t *testing.T) {
// Assert that machines are in the expected state.
startSlot = forkedEpochStartSlot.Add(1 + blocksPerRequest)
require.Equal(t, int(blocksPerRequest)-int(forkedSlot-(forkedEpochStartSlot+1)), len(firstFSM.bwb))
require.Equal(t, int(blocksPerRequest)-int(forkedSlot-(forkedEpochStartSlot+1)), len(firstFSM.fetched.bwb))
for i := startSlot; i < startSlot.Add(blocksPerRequest*(lookaheadSteps-1)); i += primitives.Slot(blocksPerRequest) {
fsm, ok := queue.smm.findStateMachine(i)
require.Equal(t, true, ok)

View File

@@ -24,8 +24,8 @@ func (q *blocksQueue) resetFromFork(fork *forkData) error {
return err
}
fsm := q.smm.addStateMachine(firstBlock.Slot())
fsm.pid = fork.peer
fsm.bwb = fork.bwb
fsm.fetched.bwb = fork.bwb
fsm.fetched.blocksFrom, fsm.fetched.blobsFrom = fork.blocksFrom, fork.blobsFrom
fsm.state = stateDataParsed
// The rest of machines are in skipped state.

View File

@@ -0,0 +1,219 @@
package initialsync
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/peerdata"
p2pt "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
type testDownscorePeer int
const (
testDownscoreNeither testDownscorePeer = iota
testDownscoreBlock
testDownscoreBlob
)
func peerIDForTestDownscore(w testDownscorePeer, name string) peer.ID {
switch w {
case testDownscoreBlock:
return peer.ID("block" + name)
case testDownscoreBlob:
return peer.ID("blob" + name)
default:
return ""
}
}
func TestUpdatePeerScorerStats(t *testing.T) {
cases := []struct {
name string
err error
processed uint64
downPeer testDownscorePeer
}{
{
name: "invalid block",
err: blockchain.ErrInvalidPayload,
downPeer: testDownscoreBlock,
processed: 10,
},
{
name: "invalid blob",
err: verification.ErrBlobIndexInvalid,
downPeer: testDownscoreBlob,
processed: 3,
},
{
name: "not validity error",
err: errors.New("test"),
processed: 32,
},
{
name: "no error",
processed: 32,
},
}
s := &Service{
cfg: &Config{
P2P: p2pt.NewTestP2P(t),
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
data := &blocksQueueFetchedData{
blocksFrom: peerIDForTestDownscore(testDownscoreBlock, c.name),
blobsFrom: peerIDForTestDownscore(testDownscoreBlob, c.name),
}
s.updatePeerScorerStats(data, c.processed, c.err)
if c.err != nil && c.downPeer != testDownscoreNeither {
switch c.downPeer {
case testDownscoreBlock:
// block should be downscored
blocksCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
require.NoError(t, err)
require.Equal(t, 1, blocksCount)
// blob should not be downscored - also we expect a not found error since peer scoring did not interact with blobs
blobCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blobCount)
case testDownscoreBlob:
// block should not be downscored - also we expect a not found error since peer scoring did not interact with blocks
blocksCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blocksCount)
// blob should be downscored
blobCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
require.NoError(t, err)
require.Equal(t, 1, blobCount)
}
assert.Equal(t, uint64(0), s.cfg.P2P.Peers().Scorers().BlockProviderScorer().ProcessedBlocks(data.blocksFrom))
return
}
// block should not be downscored
blocksCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
// The scorer will know about the block peer because it will have a processed blocks count
require.NoError(t, err)
require.Equal(t, 0, blocksCount)
// no downscore, so scorer doesn't know the peer
blobCount, err := s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blobCount)
assert.Equal(t, c.processed, s.cfg.P2P.Peers().Scorers().BlockProviderScorer().ProcessedBlocks(data.blocksFrom))
})
}
}
func TestOnDataReceivedDownscore(t *testing.T) {
cases := []struct {
name string
err error
downPeer testDownscorePeer
}{
{
name: "invalid block",
err: sync.ErrInvalidFetchedData,
downPeer: testDownscoreBlock,
},
{
name: "invalid blob",
err: errors.Wrap(verification.ErrBlobInvalid, "test"),
downPeer: testDownscoreBlob,
},
{
name: "not validity error",
err: errors.New("test"),
},
{
name: "no error",
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
data := &fetchRequestResponse{
blocksFrom: peerIDForTestDownscore(testDownscoreBlock, c.name),
blobsFrom: peerIDForTestDownscore(testDownscoreBlob, c.name),
err: c.err,
}
if c.downPeer == testDownscoreBlob {
require.Equal(t, true, verification.IsBlobValidationFailure(c.err))
}
ctx := context.Background()
p2p := p2pt.NewTestP2P(t)
mc := &mock.ChainService{Genesis: time.Now(), ValidatorsRoot: [32]byte{}}
fetcher := newBlocksFetcher(ctx, &blocksFetcherConfig{
chain: mc,
p2p: p2p,
clock: startup.NewClock(mc.Genesis, mc.ValidatorsRoot),
})
q := newBlocksQueue(ctx, &blocksQueueConfig{
p2p: p2p,
blocksFetcher: fetcher,
highestExpectedSlot: primitives.Slot(32),
chain: mc})
sm := q.smm.addStateMachine(0)
sm.state = stateScheduled
handle := q.onDataReceivedEvent(context.Background())
endState, err := handle(sm, data)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
}
// state machine should stay in "scheduled" if there's an error
// and transition to "data parsed" if there's no error
if c.err != nil {
require.Equal(t, stateScheduled, endState)
} else {
require.Equal(t, stateDataParsed, endState)
}
if c.err != nil && c.downPeer != testDownscoreNeither {
switch c.downPeer {
case testDownscoreBlock:
// block should be downscored
blocksCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
require.NoError(t, err)
require.Equal(t, 1, blocksCount)
// blob should not be downscored - also we expect a not found error since peer scoring did not interact with blobs
blobCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blobCount)
case testDownscoreBlob:
// block should not be downscored - also we expect a not found error since peer scoring did not interact with blocks
blocksCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blocksCount)
// blob should be downscored
blobCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
require.NoError(t, err)
require.Equal(t, 1, blobCount)
}
assert.Equal(t, uint64(0), p2p.Peers().Scorers().BlockProviderScorer().ProcessedBlocks(data.blocksFrom))
return
}
// block should not be downscored - also we expect a not found error since peer scoring did not interact with blocks
blocksCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blocksFrom)
// no downscore, so scorer doesn't know the peer
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blocksCount)
blobCount, err := p2p.Peers().Scorers().BadResponsesScorer().Count(data.blobsFrom)
// no downscore, so scorer doesn't know the peer
require.ErrorIs(t, err, peerdata.ErrPeerUnknown)
require.Equal(t, -1, blobCount)
})
}
}

View File

@@ -6,11 +6,9 @@ import (
"sort"
"time"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
)
const (
@@ -45,8 +43,7 @@ type stateMachine struct {
smm *stateMachineManager
start primitives.Slot
state stateID
pid peer.ID
bwb []blocks.BlockWithROSidecars
fetched fetchRequestResponse
updated time.Time
}
@@ -78,7 +75,7 @@ func (smm *stateMachineManager) addStateMachine(startSlot primitives.Slot) *stat
smm: smm,
start: startSlot,
state: stateNew,
bwb: []blocks.BlockWithROSidecars{},
fetched: fetchRequestResponse{},
updated: prysmTime.Now(),
}
smm.recalculateMachineAttribs()
@@ -90,7 +87,7 @@ func (smm *stateMachineManager) removeStateMachine(startSlot primitives.Slot) er
if _, ok := smm.machines[startSlot]; !ok {
return fmt.Errorf("state for machine %v is not found", startSlot)
}
smm.machines[startSlot].bwb = nil
smm.machines[startSlot].fetched = fetchRequestResponse{}
delete(smm.machines, startSlot)
smm.recalculateMachineAttribs()
return nil
@@ -187,6 +184,10 @@ func (m *stateMachine) isLast() bool {
return m.start == m.smm.keys[len(m.smm.keys)-1]
}
func (m *stateMachine) numFetched() int {
return len(m.fetched.bwb)
}
// String returns human-readable representation of a FSM state.
func (m *stateMachine) String() string {
return fmt.Sprintf("{%d:%s}", slots.ToEpoch(m.start), m.state)
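For reference, the new fetched field bundles what the removed pid and bwb fields carried, plus per-data-type peer attribution. A minimal sketch of its shape, inferred only from the fields referenced in these diffs (fsm.fetched.bwb, fsm.fetched.blocksFrom, fsm.fetched.blobsFrom); the real fetchRequestResponse is defined in the blocks fetcher and may carry additional fields:
// Sketch of the inferred shape only, not the actual definition.
package initialsync // assumed placement
import (
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/libp2p/go-libp2p/core/peer"
)
type fetchRequestResponseSketch struct {
	blocksFrom peer.ID                      // peer that served the blocks for this batch
	blobsFrom  peer.ID                      // peer that served the blob / data column sidecars
	bwb        []blocks.BlockWithROSidecars // fetched blocks with their sidecars
	err        error                        // fetch or validation error, if any
}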

View File

@@ -16,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/paulbellamy/ratecounter"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -132,7 +131,7 @@ func (s *Service) syncToNonFinalizedEpoch(ctx context.Context) error {
}
for data := range queue.fetchedData {
count, err := s.processFetchedDataRegSync(ctx, data)
s.updatePeerScorerStats(data.pid, count, err)
s.updatePeerScorerStats(data, count, err)
}
log.WithFields(logrus.Fields{
"syncedSlot": s.cfg.Chain.HeadSlot(),
@@ -152,7 +151,7 @@ func (s *Service) processFetchedData(ctx context.Context, data *blocksQueueFetch
if err != nil {
log.WithError(err).Warn("Skip processing batched blocks")
}
s.updatePeerScorerStats(data.pid, count, err)
s.updatePeerScorerStats(data, count, err)
}
// processFetchedDataRegSync processes data received from queue.
@@ -169,7 +168,7 @@ func (s *Service) processFetchedDataRegSync(ctx context.Context, data *blocksQue
nodeID := s.cfg.P2P.NodeID()
// Seperate blocks with blobs from blocks with data columns.
// Separate blocks with blobs from blocks with data columns.
fistDataColumnIndex := sort.Search(len(bwb), func(i int) bool {
return bwb[i].Block.Version() >= version.Fulu
})
@@ -372,7 +371,7 @@ func (s *Service) processBatchedBlocks(ctx context.Context, bwb []blocks.BlockWi
errParentDoesNotExist, firstBlock.Block().ParentRoot(), firstBlock.Block().Slot())
}
// Seperate blocks with blobs from blocks with data columns.
// Separate blocks with blobs from blocks with data columns.
fistDataColumnIndex := sort.Search(len(bwb), func(i int) bool {
return bwb[i].Block.Version() >= version.Fulu
})
@@ -455,18 +454,19 @@ func isPunishableError(err error) bool {
}
// updatePeerScorerStats adjusts monitored metrics for a peer.
func (s *Service) updatePeerScorerStats(pid peer.ID, count uint64, err error) {
if pid == "" {
return
}
func (s *Service) updatePeerScorerStats(data *blocksQueueFetchedData, count uint64, err error) {
if isPunishableError(err) {
log.WithError(err).WithField("peer_id", pid).Warn("Incrementing peers bad response count")
s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(pid)
if verification.IsBlobValidationFailure(err) {
log.WithError(err).WithField("peer_id", data.blobsFrom).Warn("Downscoring peer for invalid blobs")
s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(data.blobsFrom)
} else {
log.WithError(err).WithField("peer_id", data.blocksFrom).Warn("Downscoring peer for invalid blocks")
s.cfg.P2P.Peers().Scorers().BadResponsesScorer().Increment(data.blocksFrom)
}
// If the error is punishable, exit here so that we don't give them credit for providing bad blocks.
return
}
scorer := s.cfg.P2P.Peers().Scorers().BlockProviderScorer()
scorer.IncrementProcessedBlocks(pid, count)
s.cfg.P2P.Peers().Scorers().BlockProviderScorer().IncrementProcessedBlocks(data.blocksFrom, count)
}
// isProcessedBlock checks DB and local cache for presence of a given block, to avoid duplicates.
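The body of isPunishableError is not shown in this diff. As a hypothetical sketch only, assuming it simply matches the validity sentinels exercised by the tests above, it could look like the following; updatePeerScorerStats then routes the downscore to data.blobsFrom for blob validation failures and to data.blocksFrom for any other punishable error, while non-punishable errors still credit the block peer's processed-blocks count.
// Hypothetical sketch, not the actual isPunishableError body (elided from this diff):
// an error is punishable only when it signals that a peer served invalid data.
package initialsync // assumed placement, alongside the service code above
import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
	"github.com/pkg/errors"
)
func isPunishableErrorSketch(err error) bool {
	return errors.Is(err, blockchain.ErrInvalidPayload) ||
		errors.Is(err, sync.ErrInvalidFetchedData) ||
		verification.IsBlobValidationFailure(err)
}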

View File

@@ -156,7 +156,7 @@ func readChunkEncodedBlobsLowMax(t *testing.T, s *Service, expect []*expectedBlo
}
return func(stream network.Stream) {
_, err := readChunkEncodedBlobs(stream, encoding, ctxMap, vf, 1)
require.ErrorIs(t, err, ErrInvalidFetchedData)
require.ErrorIs(t, err, errMaxRequestBlobSidecarsExceeded)
}
}

View File

@@ -12,6 +12,7 @@ import (
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
p2pTypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -877,3 +878,7 @@ func TestSendBlobsByRangeRequest(t *testing.T) {
assert.Equal(t, int(totalElectraBlobs), len(blobs))
})
}
func TestErrInvalidFetchedDataDistinction(t *testing.T) {
require.Equal(t, false, errors.Is(ErrInvalidFetchedData, verification.ErrBlobInvalid))
}
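This distinction is what keeps the downscore on the right peer: a blob validation failure wrapped with context still matches the blob sentinel but not the generic fetch error. A small illustrative snippet, written as a hypothetical standalone program and assuming the Prysm packages are available as module dependencies:
// Illustrative only: shows how the wrapped sentinels used in the downscore tests behave.
package main
import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/sync"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
	"github.com/pkg/errors"
)
func main() {
	wrapped := errors.Wrap(verification.ErrBlobInvalid, "bad KZG proof")
	fmt.Println(errors.Is(wrapped, verification.ErrBlobInvalid)) // true: blame the blob peer
	fmt.Println(errors.Is(wrapped, sync.ErrInvalidFetchedData))  // false: block peer untouched
}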

View File

@@ -365,10 +365,9 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
}
}
// reValidateSubscriptions unsubscribe from topics we are currently subscribed to but that are
// pruneSubscriptions unsubscribes from topics we are currently subscribed to but that are
// not in the list of wanted subnets.
// TODO: Rename this functions as it does not only revalidate subscriptions.
func (s *Service) reValidateSubscriptions(
func (s *Service) pruneSubscriptions(
subscriptions map[uint64]*pubsub.Subscription,
wantedSubs []uint64,
topicFormat string,
@@ -467,7 +466,7 @@ func (s *Service) subscribeToSubnets(
"digest": fmt.Sprintf("%#x", digest),
"subnets": description,
}).Debug("Subnets with this digest are no longer valid, unsubscribing from all of them")
s.reValidateSubscriptions(subscriptions, []uint64{}, topicFormat, digest)
s.pruneSubscriptions(subscriptions, []uint64{}, topicFormat, digest)
return false
}
@@ -475,7 +474,7 @@ func (s *Service) subscribeToSubnets(
subnetsToSubscribeIndex := getSubnetsToSubscribe(currentSlot)
// Remove subscriptions that are no longer wanted.
s.reValidateSubscriptions(subscriptions, subnetsToSubscribeIndex, topicFormat, digest)
s.pruneSubscriptions(subscriptions, subnetsToSubscribeIndex, topicFormat, digest)
// Subscribe to wanted subnets.
for _, subnetIndex := range subnetsToSubscribeIndex {

View File

@@ -15,6 +15,7 @@ import (
"github.com/OffchainLabs/prysm/v6/io/file"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/sirupsen/logrus"
"google.golang.org/protobuf/proto"
)
@@ -78,6 +79,11 @@ func (s *Service) reconstructAndBroadcastSidecars(ctx context.Context, block int
func (s *Service) reconstructAndBroadcastDataColumnSidecars(ctx context.Context, roSignedBlock interfaces.ReadOnlySignedBeaconBlock) {
block := roSignedBlock.Block()
log := log.WithFields(logrus.Fields{
"slot": block.Slot(),
"proposerIndex": block.ProposerIndex(),
})
kzgCommitments, err := block.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to read commitments from block")
@@ -95,6 +101,8 @@ func (s *Service) reconstructAndBroadcastDataColumnSidecars(ctx context.Context,
return
}
log = log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot))
if s.cfg.dataColumnStorage == nil {
log.Warning("Data column storage is not enabled, skip saving data column, but continue to reconstruct and broadcast data column")
}
@@ -106,6 +114,11 @@ func (s *Service) reconstructAndBroadcastDataColumnSidecars(ctx context.Context,
return
}
// Return early if no blobs are retrieved from the EL.
if len(sidecars) == 0 {
return
}
nodeID := s.cfg.p2p.NodeID()
s.cfg.custodyInfo.Mut.RLock()
@@ -124,8 +137,9 @@ func (s *Service) reconstructAndBroadcastDataColumnSidecars(ctx context.Context,
// Broadcast and save data column sidecars that are in our custody set but not yet received.
sidecarCount := uint64(len(sidecars))
for columnIndex := range info.CustodyColumns {
log := log.WithField("columnIndex", columnIndex)
if columnIndex >= sidecarCount {
log.WithField("index", columnIndex).Error("Sidecar index out of range - should never happen")
log.Error("Column custody index out of range - should never happen")
continue
}
@@ -136,11 +150,11 @@ func (s *Service) reconstructAndBroadcastDataColumnSidecars(ctx context.Context,
sidecar := sidecars[columnIndex]
if err := s.cfg.p2p.BroadcastDataColumn(ctx, blockRoot, sidecar.Index, sidecar.DataColumnSidecar); err != nil {
log.WithFields(dataColumnFields(sidecar.RODataColumn)).WithError(err).Error("Failed to broadcast data column")
log.WithError(err).Error("Failed to broadcast data column")
}
if err := s.receiveDataColumn(ctx, sidecar); err != nil {
log.WithFields(dataColumnFields(sidecar.RODataColumn)).WithError(err).Error("Failed to receive data column")
log.WithError(err).Error("Failed to receive data column")
}
}
}

View File

@@ -308,7 +308,7 @@ func TestRevalidateSubscription_CorrectlyFormatsTopic(t *testing.T) {
subscriptions[2], err = r.cfg.p2p.SubscribeToTopic(fullTopic)
require.NoError(t, err)
r.reValidateSubscriptions(subscriptions, []uint64{2}, defaultTopic, digest)
r.pruneSubscriptions(subscriptions, []uint64{2}, defaultTopic, digest)
require.LogsDoNotContain(t, hook, "Could not unregister topic validator")
}

View File

@@ -21,6 +21,7 @@ import (
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -31,8 +32,9 @@ import (
)
var (
ErrOptimisticParent = errors.New("parent of the block is optimistic")
errRejectCommitmentLen = errors.New("[REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer")
ErrOptimisticParent = errors.New("parent of the block is optimistic")
errRejectCommitmentLen = errors.New("[REJECT] The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer")
ErrSlashingSignatureFailure = errors.New("proposer slashing signature verification failed")
)
// validateBeaconBlockPubSub checks that the incoming block has a valid BLS signature.
@@ -109,6 +111,16 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
// Verify the block is the first block received for the proposer for the slot.
if s.hasSeenBlockIndexSlot(blk.Block().Slot(), blk.Block().ProposerIndex()) {
// Attempt to detect and broadcast equivocation before ignoring
err = s.detectAndBroadcastEquivocation(ctx, blk)
if err != nil {
// If signature verification fails, reject the block
if errors.Is(err, ErrSlashingSignatureFailure) {
return pubsub.ValidationReject, err
}
// In case there is some other error, log it but don't reject
log.WithError(err).Debug("Could not detect/broadcast equivocation")
}
return pubsub.ValidationIgnore, nil
}
@@ -469,3 +481,74 @@ func getBlockFields(b interfaces.ReadOnlySignedBeaconBlock) logrus.Fields {
"version": b.Block().Version(),
}
}
// detectAndBroadcastEquivocation checks if the given block is an equivocating block by comparing it with
// the head block. If the blocks are from the same slot and proposer but have different signatures,
// it creates and broadcasts a proposer slashing object after verification.
func (s *Service) detectAndBroadcastEquivocation(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) error {
slot := blk.Block().Slot()
proposerIndex := blk.Block().ProposerIndex()
// Get head block for comparison
headBlock, err := s.cfg.chain.HeadBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not get head block")
}
// Only proceed if this block is from the same slot and proposer as the head
if headBlock.Block().Slot() != slot || headBlock.Block().ProposerIndex() != proposerIndex {
return nil
}
// Compare signatures
sig1 := blk.Signature()
sig2 := headBlock.Signature()
// If signatures match, these are the same block
if sig1 == sig2 {
return nil
}
// Extract headers for slashing
header1, err := blk.Header()
if err != nil {
return errors.Wrap(err, "could not get header from new block")
}
header2, err := headBlock.Header()
if err != nil {
return errors.Wrap(err, "could not get header from head block")
}
slashing := &ethpb.ProposerSlashing{
Header_1: header1,
Header_2: header2,
}
// Get state for verification
headState, err := s.cfg.chain.HeadStateReadOnly(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
// Verify the slashing against current state
if err := blocks.VerifyProposerSlashing(headState, slashing); err != nil {
if errors.Is(err, blocks.ErrCouldNotVerifyBlockHeader) {
return errors.Wrap(ErrSlashingSignatureFailure, err.Error())
}
return errors.Wrap(err, "could not verify proposer slashing")
}
// Broadcast if verification passes
if !features.Get().DisableBroadcastSlashings {
if err := s.cfg.p2p.Broadcast(ctx, slashing); err != nil {
return errors.Wrap(err, "could not broadcast slashing object")
}
}
// Insert into slashing pool
if err := s.cfg.slashingPool.InsertProposerSlashing(ctx, headState, slashing); err != nil {
return errors.Wrap(err, "could not insert proposer slashing into pool")
}
return nil
}

View File

@@ -18,6 +18,7 @@ import (
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations"
slashingsmock "github.com/OffchainLabs/prysm/v6/beacon-chain/operations/slashings/mock"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
@@ -713,8 +714,21 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
msg.Signature, err = signing.ComputeDomainAndSign(beaconState, 0, msg.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[proposerIdx])
require.NoError(t, err)
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0),
State: beaconState,
// Create a clone of the same block (same signature, not an equivocation)
msgClone := util.NewBeaconBlock()
msgClone.Block.Slot = 1
msgClone.Block.ProposerIndex = proposerIdx
msgClone.Block.ParentRoot = bRoot[:]
msgClone.Signature = msg.Signature // Use the same signature
signedBlock, err := blocks.NewSignedBeaconBlock(msg)
require.NoError(t, err)
slashingPool := &slashingsmock.PoolMock{}
chainService := &mock.ChainService{
Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0),
State: beaconState,
Block: signedBlock, // Set the first block as the head block
FinalizedCheckPoint: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
@@ -728,6 +742,7 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
chain: chainService,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
blockNotifier: chainService.BlockNotifier(),
slashingPool: slashingPool,
},
seenBlockCache: lruwrpr.New(10),
badBlockCache: lruwrpr.New(10),
@@ -735,10 +750,15 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
seenPendingBlocks: make(map[[32]byte]bool),
}
// Mark the proposer/slot as seen
r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers
// Prepare and validate the second message (clone)
buf := new(bytes.Buffer)
_, err = p.Encoding().EncodeGossip(buf, msg)
_, err = p.Encoding().EncodeGossip(buf, msgClone)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(msg)]
topic := p2p.GossipTypeMapping[reflect.TypeOf(msgClone)]
digest, err := r.currentForkDigest()
assert.NoError(t, err)
topic = r.addDigestToTopic(topic, digest)
@@ -748,11 +768,14 @@ func TestValidateBeaconBlockPubSub_SeenProposerSlot(t *testing.T) {
Topic: &topic,
},
}
r.setSeenBlockIndexSlot(msg.Block.Slot, msg.Block.ProposerIndex)
time.Sleep(10 * time.Millisecond) // Wait for cached value to pass through buffers.
// Since this is not an equivocation (same signature), it should be ignored
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
assert.NoError(t, err)
assert.Equal(t, res, pubsub.ValidationIgnore, "seen proposer block should be ignored")
assert.Equal(t, pubsub.ValidationIgnore, res, "block with same signature should be ignored")
// Verify no slashings were created
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")
}
func TestValidateBeaconBlockPubSub_FilterByFinalizedEpoch(t *testing.T) {
@@ -1495,3 +1518,218 @@ func Test_validateDenebBeaconBlock(t *testing.T) {
require.NoError(t, err)
require.ErrorIs(t, validateDenebBeaconBlock(bdb.Block()), errRejectCommitmentLen)
}
func TestDetectAndBroadcastEquivocation(t *testing.T) {
ctx := context.Background()
p := p2ptest.NewTestP2P(t)
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
t.Run("no equivocation", func(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Slot = 1
block.Block.ProposerIndex = 0
sig, err := signing.ComputeDomainAndSign(beaconState, 0, block.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
block.Signature = sig
// Create head block with different slot/proposer
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = 2 // Different slot
headBlock.Block.ProposerIndex = 1 // Different proposer
signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
require.NoError(t, err)
chainService := &mock.ChainService{
State: beaconState,
Genesis: time.Now(),
Block: signedHeadBlock,
}
slashingPool := &slashingsmock.PoolMock{}
r := &Service{
cfg: &config{
p2p: p,
chain: chainService,
slashingPool: slashingPool,
},
seenBlockCache: lruwrpr.New(10),
}
signedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
err = r.detectAndBroadcastEquivocation(ctx, signedBlock)
require.NoError(t, err)
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings")
})
t.Run("equivocation detected", func(t *testing.T) {
// Create head block
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = 1
headBlock.Block.ProposerIndex = 0
headBlock.Block.ParentRoot = bytesutil.PadTo([]byte("parent1"), 32)
sig1, err := signing.ComputeDomainAndSign(beaconState, 0, headBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
headBlock.Signature = sig1
// Create second block with same slot/proposer but different contents
newBlock := util.NewBeaconBlock()
newBlock.Block.Slot = 1
newBlock.Block.ProposerIndex = 0
newBlock.Block.ParentRoot = bytesutil.PadTo([]byte("parent2"), 32)
sig2, err := signing.ComputeDomainAndSign(beaconState, 0, newBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
newBlock.Signature = sig2
signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
require.NoError(t, err)
slashingPool := &slashingsmock.PoolMock{}
chainService := &mock.ChainService{
State: beaconState,
Genesis: time.Now(),
Block: signedHeadBlock,
}
r := &Service{
cfg: &config{
p2p: p,
chain: chainService,
slashingPool: slashingPool,
},
seenBlockCache: lruwrpr.New(10),
}
signedNewBlock, err := blocks.NewSignedBeaconBlock(newBlock)
require.NoError(t, err)
err = r.detectAndBroadcastEquivocation(ctx, signedNewBlock)
require.NoError(t, err)
// Verify slashing was inserted
require.Equal(t, 1, len(slashingPool.PendingPropSlashings), "Expected a slashing to be inserted")
slashing := slashingPool.PendingPropSlashings[0]
assert.Equal(t, primitives.ValidatorIndex(0), slashing.Header_1.Header.ProposerIndex, "Wrong proposer index")
assert.Equal(t, primitives.Slot(1), slashing.Header_1.Header.Slot, "Wrong slot")
})
t.Run("same signature", func(t *testing.T) {
// Create block
block := util.NewBeaconBlock()
block.Block.Slot = 1
block.Block.ProposerIndex = 0
sig, err := signing.ComputeDomainAndSign(beaconState, 0, block.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
block.Signature = sig
signedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
slashingPool := &slashingsmock.PoolMock{}
chainService := &mock.ChainService{
State: beaconState,
Genesis: time.Now(),
Block: signedBlock,
}
r := &Service{
cfg: &config{
p2p: p,
chain: chainService,
slashingPool: slashingPool,
},
seenBlockCache: lruwrpr.New(10),
}
err = r.detectAndBroadcastEquivocation(ctx, signedBlock)
require.NoError(t, err)
assert.Equal(t, 0, len(slashingPool.PendingPropSlashings), "Expected no slashings for same signature")
})
t.Run("head state error", func(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Slot = 1
block.Block.ProposerIndex = 0
block.Block.ParentRoot = bytesutil.PadTo([]byte("parent1"), 32)
sig1, err := signing.ComputeDomainAndSign(beaconState, 0, block.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
block.Signature = sig1
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = 1 // Same slot
headBlock.Block.ProposerIndex = 0 // Same proposer
headBlock.Block.ParentRoot = bytesutil.PadTo([]byte("parent2"), 32) // Different parent root
sig2, err := signing.ComputeDomainAndSign(beaconState, 0, headBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
headBlock.Signature = sig2
signedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
require.NoError(t, err)
chainService := &mock.ChainService{
State: nil,
Block: signedHeadBlock,
HeadStateErr: errors.New("could not get head state"),
}
r := &Service{
cfg: &config{
p2p: p,
chain: chainService,
slashingPool: &slashingsmock.PoolMock{},
},
seenBlockCache: lruwrpr.New(10),
}
err = r.detectAndBroadcastEquivocation(ctx, signedBlock)
require.ErrorContains(t, "could not get head state", err)
})
t.Run("signature verification failure", func(t *testing.T) {
// Create head block
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = 1
headBlock.Block.ProposerIndex = 0
sig1, err := signing.ComputeDomainAndSign(beaconState, 0, headBlock.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[0])
require.NoError(t, err)
headBlock.Signature = sig1
// Create test block with invalid signature
newBlock := util.NewBeaconBlock()
newBlock.Block.Slot = 1
newBlock.Block.ProposerIndex = 0
newBlock.Block.ParentRoot = bytesutil.PadTo([]byte("different"), 32)
// generate invalid signature
invalidSig := make([]byte, 96)
copy(invalidSig, []byte("invalid signature"))
newBlock.Signature = invalidSig
signedHeadBlock, err := blocks.NewSignedBeaconBlock(headBlock)
require.NoError(t, err)
signedNewBlock, err := blocks.NewSignedBeaconBlock(newBlock)
require.NoError(t, err)
slashingPool := &slashingsmock.PoolMock{}
chainService := &mock.ChainService{
State: beaconState,
Genesis: time.Now(),
Block: signedHeadBlock,
}
r := &Service{
cfg: &config{
p2p: p,
chain: chainService,
slashingPool: slashingPool,
},
seenBlockCache: lruwrpr.New(10),
}
err = r.detectAndBroadcastEquivocation(ctx, signedNewBlock)
require.ErrorIs(t, err, ErrSlashingSignatureFailure)
})
}

View File

@@ -169,15 +169,6 @@ func blobFields(b blocks.ROBlob) logrus.Fields {
}
}
func dataColumnFields(b blocks.RODataColumn) logrus.Fields {
return logrus.Fields{
"slot": b.Slot(),
"proposerIndex": b.ProposerIndex(),
"blockRoot": fmt.Sprintf("%#x", b.BlockRoot()),
"columnIndex": b.Index,
}
}
func computeSubnetForBlobSidecar(index uint64, slot primitives.Slot) uint64 {
subnetCount := params.BeaconConfig().BlobsidecarSubnetCount
if slots.ToEpoch(slot) >= params.BeaconConfig().ElectraForkEpoch {

View File

@@ -106,11 +106,6 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationIgnore, err
}
// [REJECT] The proposer signature of `sidecar.signed_block_header`, is valid with respect to the `block_header.proposer_index` pubkey.
if err := verifier.ValidProposerSignature(ctx); err != nil {
return pubsub.ValidationReject, err
}
// [IGNORE] The sidecar's block's parent (defined by `block_header.parent_root`) has been seen (via gossip or non-gossip sources)
// (a client MAY queue sidecars for processing once the parent block is retrieved).
if err := verifier.SidecarParentSeen(s.hasBadBlock); err != nil {
@@ -133,6 +128,13 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationReject, err
}
// [REJECT] The proposer signature of `sidecar.signed_block_header`, is valid with respect to the `block_header.proposer_index` pubkey.
// We do not strictly respect the spec ordering here. This is necessary because signature verification depends on the parent root,
// which is only available if the parent block is known.
if err := verifier.ValidProposerSignature(ctx); err != nil {
return pubsub.ValidationReject, err
}
// [REJECT] The sidecar is from a higher slot than the sidecar's block's parent (defined by `block_header.parent_root`).
if err := verifier.SidecarParentSlotLower(); err != nil {
return pubsub.ValidationReject, err

View File

@@ -35,14 +35,14 @@ func (s *Service) setTargetValidatorsCustodyRequirement() {
}
// Retrieve the finalized state.
headState, err := s.cfg.chain.HeadStateReadOnly(s.ctx)
if err != nil || headState == nil {
log.WithError(err).Error("Failed to get head state")
finalizedState := s.cfg.stateGen.FinalizedState()
if finalizedState == nil || finalizedState.IsNil() {
log.Error("Finalized state is nil")
return
}
// Get the validators custody requirement.
validatorsCustodyRequirement, err := peerdas.ValidatorsCustodyRequirement(headState, indices)
validatorsCustodyRequirement, err := peerdas.ValidatorsCustodyRequirement(finalizedState, indices)
if err != nil {
log.WithError(err).Error("Failed to get validators custody requirement")
return

View File

@@ -1,10 +1,8 @@
package sync
import (
"context"
"testing"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -102,21 +100,18 @@ func TestSetTargetValidatorsCustodyRequirement(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
stateGen := stategen.New(beaconDB, doublylinkedtree.New())
state, _ := util.DeterministicGenesisState(t, 32)
err := state.SetBalances(tc.validatorsBalance)
require.NoError(t, err)
err = stateGen.SaveState(ctx, [32]byte{}, state)
require.NoError(t, err)
stateGen.SaveFinalizedState(0, [32]byte{}, state)
service := &Service{
trackedValidatorsCache: cache.NewTrackedValidatorsCache(),
cfg: &config{
chain: &mock.ChainService{
State: state,
},
stateGen: stateGen,
custodyInfo: &peerdas.CustodyInfo{},
},
}

Some files were not shown because too many files have changed in this diff.