Compare commits

..

50 Commits

Author SHA1 Message Date
Manu NALEPA
6e0dbe57c2 Implement fetchDataColumnsFromPeers. 2024-07-14 22:11:11 +02:00
Manu NALEPA
72f237cc7e BlobStorageSummary: Implement HasDataColumnIndex and AllDataColumnsAvailable. 2024-07-14 21:39:42 +02:00
Manu NALEPA
8a6593b64f SendDataColumnsByRangeRequest: Add some new fields in logs. 2024-07-14 21:36:03 +02:00
Manu NALEPA
2177aeb47f Make deepsource happy. 2024-07-14 21:35:01 +02:00
kevaundray
9572c165e0 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-07-09 16:20:21 +02:00
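The consolidation above means callers no longer materialize an intermediate blob themselves. A minimal sketch of the new call shape, using the kzg package signatures added in the diffs below (the wrapper name recoverRow is illustrative, and kzg.Start() must have loaded the trusted setup first):

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
)

// recoverRow rebuilds the full cell/proof set of one blob from any subset of
// at least half of its extended cells. cellIds[i] is the position of cells[i]
// in the extended blob. Previously this required chaining RecoverAllCells ->
// CellsToBlob -> ComputeCellsAndKZGProofs; those steps are now private.
func recoverRow(cellIds []uint64, cells []kzg.Cell) (kzg.CellsAndProofs, error) {
	return kzg.RecoverCellsAndKZGProofs(cellIds, cells)
}

func main() {
	// Real callers pass >= 64 of the 128 cells; empty input just errors.
	_, err := recoverRow(nil, nil)
	fmt.Println(err)
}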
Nishant Das
80f3ccab9b Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-07-09 16:08:18 +02:00
Manu NALEPA
60652c4f2a PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-07-09 11:15:02 +02:00
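A hedged sketch of the batch verification this commit wires into sampling, using kzg.VerifyCellKZGProofBatch exactly as declared in the diffs below (the wrapper and its parameters are illustrative, not Prysm's actual sidecar API):

package sketch

import "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"

// verifyColumn checks every cell of one sampled data column in a single
// batch call. Row i is the i-th blob commitment in the block; every cell
// shares the same column index.
func verifyColumn(columnIndex uint64, commitments []kzg.Bytes48, cells []kzg.Cell, proofs []kzg.Bytes48) (bool, error) {
	rows := make([]uint64, len(cells))
	cols := make([]uint64, len(cells))
	for i := range cells {
		rows[i] = uint64(i)
		cols[i] = columnIndex
	}
	return kzg.VerifyCellKZGProofBatch(commitments, rows, cols, cells, proofs)
}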
kevaundray
08b3cccdaf chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to BUILD.bazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells be flattened sequence of bytes

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-07-09 11:05:03 +02:00
Manu NALEPA
b19d3dc487 Move log from error to debug. (#14194)
Reason: If a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-07-08 16:52:47 +02:00
Nishant Das
d57db9b9c6 Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-07-08 16:03:43 +08:00
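The activation gate added here boils down to an epoch comparison. A hypothetical sketch of the predicate that appears later in this diff as coreTime.PeerDASIsActive (the config field name Eip7594ForkEpoch is an assumption):

package coretime

import (
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/time/slots"
)

// peerDASIsActive reports whether PeerDAS is active at the given slot.
// Assumption: the fork epoch is exposed as Eip7594ForkEpoch in the config.
func peerDASIsActive(slot primitives.Slot) bool {
	return slots.ToEpoch(slot) >= params.BeaconConfig().Eip7594ForkEpoch
}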
kevaundray
6387b51d25 chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-07-03 17:58:05 +02:00
kevaundray
b8f43c0522 chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to BUILD.bazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commitments and blobs
2024-07-03 14:17:47 +02:00
Manu NALEPA
913e84d243 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and uses `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test.

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-06-21 15:22:52 +02:00
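IncrementalDAS decides whether enough columns were sampled using a hypergeometric tail bound (the HypergeomCDF helper named above). A self-contained sketch of such a CDF with exact integer arithmetic; the name and parameters here are illustrative, not Prysm's actual signature:

package main

import (
	"fmt"
	"math/big"
)

// hypergeomCDF returns P(X <= k) for X ~ Hypergeometric(N, K, n): draw n
// items without replacement from N items, K of which are "successes".
func hypergeomCDF(k, N, K, n uint64) float64 {
	denom := new(big.Int).Binomial(int64(N), int64(n))
	sum := new(big.Int)
	for i := uint64(0); i <= k && i <= n; i++ {
		a := new(big.Int).Binomial(int64(K), int64(i))
		b := new(big.Int).Binomial(int64(N-K), int64(n-i))
		sum.Add(sum, a.Mul(a, b))
	}
	f, _ := new(big.Rat).SetFrac(sum, denom).Float64()
	return f
}

func main() {
	// e.g. the chance of hitting at most 2 unavailable columns when sampling
	// 16 of 128 columns, half of which are unavailable.
	fmt.Println(hypergeomCDF(2, 128, 64, 16))
}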
Francis Li
11f56ca0c0 PeerDAS: add data column batch config (#14122) 2024-06-21 13:28:34 +02:00
Francis Li
40924e5b28 PeerDAS: move custody subnet count into helper function (#14117) 2024-06-20 18:12:49 +02:00
Manu NALEPA
8f65ff29a8 Fix columns sampling (#14118) 2024-06-18 15:36:41 +02:00
Francis Li
5f8aad4057 [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-06-14 11:46:51 +02:00
Francis Li
b25c9186a2 [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-06-13 10:38:39 +02:00
Francis Li
2aede8baad [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-06-13 00:49:19 +02:00
Manu NALEPA
30f507acad PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Fix Sammy's comment.
2024-06-13 00:49:19 +02:00
Manu NALEPA
f61517ba0f PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast reconstructed data columns that were not seen via gossip.

* Address Nishant's comment.
2024-06-13 00:49:19 +02:00
Manu NALEPA
1d3399fdba PeerDAS: Only save custodied columns, even after reconstruction. (#14083) 2024-06-13 00:49:19 +02:00
Manu NALEPA
82af2d6270 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-06-13 00:49:19 +02:00
Manu NALEPA
8f783b7d37 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-06-13 00:49:19 +02:00
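The withholding knob is just a hidden CLI flag. A sketch of how such a flag might be declared with urfave/cli; Prysm's actual flag wiring may differ:

package flags

import "github.com/urfave/cli/v2"

// Hidden debug flag: withhold this many data columns on purpose,
// to exercise peers' sampling and reconstruction paths.
var dataColumnsWithholdCount = &cli.Uint64Flag{
	Name:   "data-columns-withhold-count",
	Usage:  "Number of data columns to withhold on purpose (testing only)",
	Hidden: true,
}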
Manu NALEPA
f5c46cf059 PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns.

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-06-13 00:49:19 +02:00
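The notifier replaces polling (the removed time.Sleep) with a feed of stored root/index pairs. A minimal sketch of the pattern, assuming go-ethereum's event.Feed and a RootIndexPair shaped like the one this diff adds to the filesystem package:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/event"
)

// RootIndexPair mirrors the payload the data column feed carries:
// the block root plus the column index that was just persisted.
type RootIndexPair struct {
	Root  [32]byte
	Index uint64
}

func main() {
	var feed event.Feed
	ch := make(chan RootIndexPair, 1)
	sub := feed.Subscribe(ch)
	defer sub.Unsubscribe()

	// Storage sends after persisting a column; waiters select on the channel.
	feed.Send(RootIndexPair{Index: 7})
	fmt.Println((<-ch).Index)
}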
Manu NALEPA
e0f2fc89d5 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

Previously, if we attempted to save an already existing data column sidecar,
a debug log was printed.

This case could be quite common now that data column reconstruction is enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-06-13 00:49:19 +02:00
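Reconstruction relies on the 2x Reed-Solomon extension: any half of a blob's 128 extended cells suffices to rebuild the rest. A sketch of the per-blob rule, reusing the kzg package from this diff (the guard and wrapper name are illustrative):

package sketch

import (
	"errors"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
)

// reconstructRow rebuilds one blob's full cells and proofs once at least
// half of its extended cells are in custody.
func reconstructRow(storedIds []uint64, stored []kzg.Cell) (kzg.CellsAndProofs, error) {
	if len(stored) < kzg.CellsPerExtBlob/2 {
		return kzg.CellsAndProofs{}, errors.New("need at least 50% of cells")
	}
	return kzg.RecoverCellsAndKZGProofs(storedIds, stored)
}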
Nishant Das
78abcc3635 Fix Custody Columns (#14021) 2024-06-13 00:49:19 +02:00
Nishant Das
1f63cec825 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-06-13 00:49:19 +02:00
Nishant Das
a5717132e7 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Fix Incorrect Unmarshaling
2024-06-13 00:49:19 +02:00
Manu NALEPA
87f838bb89 Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-06-13 00:49:19 +02:00
Manu NALEPA
df524a8afc Fix beacon chain config. (#14017) 2024-06-13 00:49:19 +02:00
Nishant Das
b48870942f Adding Back Generated Objects (#14009) 2024-06-13 00:49:19 +02:00
Nishant Das
1f1dac934e Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-06-13 00:49:07 +02:00
Manu NALEPA
299209f2ae Sample from peers some data columns. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-06-13 00:49:07 +02:00
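A sketch of the selection step behind sampling: pick distinct random column indices outside the node's own custody set (illustrative only; it assumes sampleCount leaves enough non-custodied columns to pick from):

package sketch

import (
	"math/rand"

	"github.com/prysmaticlabs/prysm/v5/config/params"
)

// selectSampleColumns returns sampleCount distinct column indices that the
// node does not already custody.
func selectSampleColumns(custody map[uint64]bool, sampleCount uint64) []uint64 {
	numberOfColumns := params.BeaconConfig().NumberOfColumns
	picked := make([]uint64, 0, sampleCount)
	seen := make(map[uint64]bool, sampleCount)
	for uint64(len(picked)) < sampleCount {
		col := uint64(rand.Intn(int(numberOfColumns)))
		if custody[col] || seen[col] {
			continue // already held locally or already chosen
		}
		seen[col] = true
		picked = append(picked, col)
	}
	return picked
}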
Nishant Das
4620559500 Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-06-13 00:49:07 +02:00
Nishant Das
d8bd4baa1a Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-06-13 00:49:07 +02:00
Justin Traglia
9678b508d0 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-06-13 00:49:07 +02:00
Manu NALEPA
85fa2a2b4d SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-06-13 00:49:07 +02:00
Manu NALEPA
f0b5d15ccc Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-06-13 00:49:07 +02:00
Nishant Das
08db57ca2a Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-06-13 00:49:07 +02:00
Manu NALEPA
4e219d8f90 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself is remaining,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-06-13 00:49:07 +02:00
Nishant Das
020fb06380 Update .bazelrc (#13931) 2024-06-13 00:49:07 +02:00
Manu NALEPA
d022651ca0 Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-06-13 00:49:07 +02:00
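A runnable sketch of the ENR round trip this commit implements, using go-ethereum's enr package and a Csc entry type like the one defined in the peerdas helpers below:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enr"
)

// Csc mirrors the entry added in this diff: a uint64 value stored under the
// "csc" key, advertising the node's custody subnet count.
type Csc uint64

func (Csc) ENRKey() string { return "csc" }

func main() {
	var record enr.Record
	record.Set(Csc(4)) // advertise custody of 4 column subnets

	var got Csc
	if err := record.Load(&got); err != nil {
		fmt.Println("peer does not advertise csc:", err) // nothing we can do
		return
	}
	fmt.Println("custody subnet count:", got)
}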
Manu NALEPA
890cb23040 Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Remove `peerdas.CustodyColumns` out of the for loop.
2024-06-13 00:49:07 +02:00
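Downstream code derives the custody set directly from the node ID, as the DA check later in this diff does. A brief usage sketch (the zero node ID is for illustration; real callers use the P2P service's NodeID):

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enode"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
)

func main() {
	var nodeID enode.ID // zero ID, illustration only
	cols, err := peerdas.CustodyColumns(nodeID, peerdas.CustodySubnetCount())
	if err != nil {
		panic(err)
	}
	fmt.Printf("custodying %d columns\n", len(cols))
}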
Nishant Das
5128cd4888 Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-06-13 00:49:07 +02:00
Nishant Das
95071278dc Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Validator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-06-13 00:49:07 +02:00
Nishant Das
435ee12bd5 Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-06-13 00:49:07 +02:00
Nishant Das
ae59700f22 add in networking params (#13866) 2024-06-13 00:49:07 +02:00
Nishant Das
f6294f134b add it (#13865) 2024-06-13 00:49:07 +02:00
Nishant Das
8cdb386ddf Add in column sidecars protos (#13862) 2024-06-13 00:49:07 +02:00
280 changed files with 29940 additions and 22370 deletions

View File

@@ -22,6 +22,7 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true

View File

@@ -1,27 +0,0 @@
name: Find stale PRs
on:
schedule:
- cron: '0 13 * * 1'
jobs:
fetch-PRs:
runs-on: ubuntu-latest
steps:
- name: Fetch pull requests from here
id: local
uses: paritytech/stale-pr-finder@main
with:
GITHUB_TOKEN: ${{ github.token }}
repo: prysm
- name: Post to a Slack channel
id: slack
uses: slackapi/slack-github-action@v1.23.0
with:
channel-id: ${{ secrets.CHANNEL }}
slack-message: |
Stale PRs this week:
${{ steps.local.outputs.message }}
env:
SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
LOCAL_PR: ${{ steps.local.outputs.message }}"

View File

@@ -224,7 +224,6 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
"@org_golang_x_tools//go/analysis/passes/errorsas:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",

View File

@@ -31,7 +31,7 @@ func TestNormalizeQueryValuesHandler(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, http.NoBody)
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, nil)
if err != nil {
t.Fatal(err)
}

View File

@@ -26,6 +26,7 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_data_column.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -48,6 +49,7 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -157,6 +159,7 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",

View File

@@ -33,6 +33,7 @@ var (
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
var errMaxDataColumnsExceeded = errors.New("Expected data columns for node exceeds NUMBER_OF_COLUMNS")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"kzg.go",
"trusted_setup.go",
"validation.go",
],
@@ -12,6 +13,9 @@ go_library(
deps = [
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
@@ -19,6 +23,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"kzg_test.go",
"trusted_setup_test.go",
"validation_test.go",
],

View File

@@ -0,0 +1,178 @@
package kzg
import (
"errors"
ckzg4844 "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
)
// Blob represents a serialized chunk of data.
type Blob [BytesPerBlob]byte
// Commitment represent a KZG commitment to a Blob.
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48
// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32
// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.FieldElementsPerCell * ckzg4844.BytesPerFieldElement
// BytesPerBlob is the number of bytes in a single blob.
const BytesPerBlob = ckzg4844.BytesPerBlob
// fieldElementsPerCell is the number of field elements in a single cell.
const fieldElementsPerCell = ckzg4844.FieldElementsPerCell
// CellsPerExtBlob is the number of cells that we generate for a single blob.
// This is equivalent to the number of columns in the data matrix.
const CellsPerExtBlob = ckzg4844.CellsPerExtBlob
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
// CellsAndProofs represents the Cells and Proofs corresponding to
// a single blob.
type CellsAndProofs struct {
Cells [ckzg4844.CellsPerExtBlob]Cell
Proofs [ckzg4844.CellsPerExtBlob]Proof
}
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
comm, err := kzg4844.BlobToCommitment(kzg4844.Blob(*blob))
if err != nil {
return Commitment{}, err
}
return Commitment(comm), nil
}
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
proof, err := kzg4844.ComputeBlobProof(kzg4844.Blob(*blob), kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
}
return Proof(proof), nil
}
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
ckzgBlob := ckzg4844.Blob(*blob)
_cells, _proofs, err := ckzg4844.ComputeCellsAndKZGProofs(&ckzgBlob)
if err != nil {
return CellsAndProofs{}, err
}
// Convert Cells and Proofs to types defined in this package
var cells [ckzg4844.CellsPerExtBlob]Cell
for i := range _cells {
cells[i] = ckzgCellToCell(&_cells[i])
}
var proofs [ckzg4844.CellsPerExtBlob]Proof
for i := range _proofs {
proofs[i] = Proof(_proofs[i])
}
return CellsAndProofs{
Cells: cells,
Proofs: proofs,
}, nil
}
// VerifyCellKZGProof is unused. TODO: We can check when the batch size for `VerifyCellKZGProofBatch` is 1
// and call this, though I think it's better if the cryptography library handles this.
func VerifyCellKZGProof(commitmentBytes Bytes48, cellId uint64, cell *Cell, proofBytes Bytes48) (bool, error) {
return ckzg4844.VerifyCellKZGProof(commitmentBytes, cellId, cellToCKZGCell(cell), proofBytes)
}
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, rowIndices, columnIndices []uint64, _cells []Cell, proofsBytes []Bytes48) (bool, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(_cells))
for i := range _cells {
ckzgCells[i] = cellToCKZGCell(&_cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, rowIndices, columnIndices, ckzgCells, proofsBytes)
}
func recoverAllCells(cellIds []uint64, _cells []Cell) ([ckzg4844.CellsPerExtBlob]Cell, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(_cells))
for i := range _cells {
ckzgCells[i] = cellToCKZGCell(&_cells[i])
}
recoveredCells, err := ckzg4844.RecoverAllCells(cellIds, ckzgCells)
if err != nil {
return [ckzg4844.CellsPerExtBlob]Cell{}, err
}
// This should never happen; we return an error instead of panicking.
if len(recoveredCells) != ckzg4844.CellsPerExtBlob {
return [ckzg4844.CellsPerExtBlob]Cell{}, errors.New("recovered cells length is not equal to CellsPerExtBlob")
}
// Convert `ckzg4844.Cell` type to `Cell`
var ret [ckzg4844.CellsPerExtBlob]Cell
for i := range recoveredCells {
ret[i] = ckzgCellToCell(&recoveredCells[i])
}
return ret, nil
}
// RecoverCellsAndKZGProofs recovers the cells and computes the KZG proofs associated with them.
func RecoverCellsAndKZGProofs(cellIds []uint64, _cells []Cell) (CellsAndProofs, error) {
// First recover all of the cells
recoveredCells, err := recoverAllCells(cellIds, _cells)
if err != nil {
return CellsAndProofs{}, err
}
// Extract the Blob from all of the Cells
blob, err := cellsToBlob(&recoveredCells)
if err != nil {
return CellsAndProofs{}, err
}
// Compute all of the cells and KZG proofs
return ComputeCellsAndKZGProofs(&blob)
}
func cellsToBlob(_cells *[ckzg4844.CellsPerExtBlob]Cell) (Blob, error) {
// Convert `Cell` type to `ckzg4844.Cell`
var ckzgCells [ckzg4844.CellsPerExtBlob]ckzg4844.Cell
for i := range _cells {
ckzgCells[i] = cellToCKZGCell(&_cells[i])
}
blob, err := ckzg4844.CellsToBlob(ckzgCells)
if err != nil {
return Blob{}, err
}
return Blob(blob), nil
}
// The correct type for Cell is [BytesPerCell]byte
// c-kzg currently uses [BytesPerFieldElement]Bytes32
// so we have these helper methods to convert between the two.
func cellToCKZGCell(flattened *Cell) ckzg4844.Cell {
var cell ckzg4844.Cell
for i := 0; i < fieldElementsPerCell; i++ {
copy(cell[i][:], flattened[i*32:(i+1)*32])
}
return cell
}
func ckzgCellToCell(cell *ckzg4844.Cell) Cell {
var flattened Cell
for i, fieldElement := range cell {
copy(flattened[i*32:(i+1)*32], fieldElement[:])
}
return flattened
}

View File

@@ -0,0 +1,21 @@
package kzg
import "testing"
func TestCellFlattenedChunked(t *testing.T) {
cell := makeCell()
chunkedCell := cellToCKZGCell(&cell)
flattenedCell := ckzgCellToCell(&chunkedCell)
if cell != flattenedCell {
t.Errorf("cell != flattenedCell")
}
}
func makeCell() Cell {
var cell Cell
for i := 0; i < fieldElementsPerCell; i++ {
rand32 := deterministicRandomness(int64(i))
copy(cell[i*32:], rand32[:])
}
return cell
}

View File

@@ -5,6 +5,8 @@ import (
"encoding/json"
GoKZG "github.com/crate-crypto/go-kzg-4844"
CKZG "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
@@ -12,17 +14,38 @@ var (
//go:embed trusted_setup.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
kzgLoaded bool
)
func Start() error {
parsedSetup := GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup)
parsedSetup := &GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, parsedSetup)
if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON")
}
kzgContext, err = GoKZG.NewContext4096(&parsedSetup)
kzgContext, err = GoKZG.NewContext4096(parsedSetup)
if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context")
}
g1Lagrange := &parsedSetup.SetupG1Lagrange
// Length of a G1 point, converted from hex to binary.
g1s := make([]byte, len(g1Lagrange)*(len(g1Lagrange[0])-2)/2)
for i, g1 := range g1Lagrange {
copy(g1s[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G2 point, converted from hex to binary.
g2s := make([]byte, len(parsedSetup.SetupG2)*(len(parsedSetup.SetupG2[0])-2)/2)
for i, g2 := range parsedSetup.SetupG2 {
copy(g2s[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// Free the current trusted setup before running this method. CKZG
// panics if the same setup is run multiple times.
if err = CKZG.LoadTrustedSetup(g1s, g2s); err != nil {
panic(err)
}
}
kzgLoaded = true
return nil
}

View File

@@ -118,9 +118,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
func WithP2PBroadcaster(p p2p.Acceser) Option {
return func(s *Service) error {
s.cfg.P2p = p
s.cfg.P2P = p
return nil
}
}

View File

@@ -13,6 +13,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -500,7 +501,7 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
}
indices, err := bs.Indices(root)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "indices")
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
@@ -514,12 +515,35 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
return missing, nil
}
func missingDataColumns(bs *filesystem.BlobStorage, root [32]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > int(params.BeaconConfig().NumberOfColumns) {
return nil, errMaxDataColumnsExceeded
}
indices, err := bs.ColumnIndices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]bool, len(expected))
for col := range expected {
if !indices[col] {
missing[col] = true
}
}
return missing, nil
}
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if coreTime.PeerDASIsActive(signed.Block().Slot()) {
return s.isDataAvailableDataColumns(ctx, root, signed)
}
if signed.Version() < version.Deneb {
return nil
}
@@ -549,7 +573,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
if err != nil {
return err
return errors.Wrap(err, "missing indices")
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
@@ -568,8 +592,13 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if len(missing) == 0 {
return
}
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": len(missing),
}).Error("Still waiting for blobs DA check at slot end.")
})
defer nst.Stop()
}
@@ -591,12 +620,100 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
}
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
func (s *Service) isDataAvailableDataColumns(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
return nil
}
block := signed.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get KZG commitments")
}
// If the block has no commitments, there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
colMap, err := peerdas.CustodyColumns(s.cfg.P2P.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
return err
}
// Expected is the number of custodied data columns a node is expected to have.
expected := len(colMap)
if expected == 0 {
return nil
}
// Subscribe to newly stored data columns in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
// Get a map of data column indices that are not currently available.
missing, err := missingDataColumns(s.blobStorage, root, colMap)
if err != nil {
return err
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missing) == 0 {
return nil
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": len(missing),
}).Error("Still waiting for data columns DA check at slot end.")
})
defer nst.Stop()
}
for {
select {
case rootIndex := <-rootIndexChan:
if rootIndex.Root != root {
// This is not the root we are looking for.
continue
}
// Remove the index from the missing map.
delete(missing, rootIndex.Index)
// Exit if there are no more missing data columns.
if len(missing) == 0 {
return nil
}
case <-ctx.Done():
missingIndexes := make([]uint64, 0, len(missing))
for val := range missing {
copiedVal := val
missingIndexes = append(missingIndexes, copiedVal)
}
return errors.Wrapf(ctx.Err(), "context deadline waiting for data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndexes)
}
}
}

View File

@@ -50,6 +50,12 @@ type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(context.Context, blocks.VerifiedRODataColumn) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)

View File

@@ -0,0 +1,16 @@
package blockchain
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
)
func (s *Service) ReceiveDataColumn(ctx context.Context, ds blocks.VerifiedRODataColumn) error {
if err := s.blobStorage.SaveDataColumn(ds); err != nil {
return errors.Wrap(err, "save data column")
}
return nil
}

View File

@@ -82,7 +82,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2p p2p.Broadcaster
P2P p2p.Acceser
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -107,15 +107,17 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
if idx >= fieldparams.MaxBlobsPerBlock {
return
}
// TODO: Separate Data Columns from blobs
/*
if idx >= fieldparams.MaxBlobsPerBlock {
return
}*/
bn.Lock()
seen := bn.seenIndex[root]
@@ -129,7 +131,7 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
@@ -143,7 +145,7 @@ func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
return c
@@ -169,7 +171,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
}
srv := &Service{
ctx: ctx,

View File

@@ -95,7 +95,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithP2PBroadcaster(&mockAccesser{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
@@ -518,7 +518,7 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
notifiers: make(map[[32]byte]chan uint64),
}

View File

@@ -19,6 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -45,6 +46,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}
type mockAccesser struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -65,6 +71,11 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ context.Context, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}

View File

@@ -628,6 +628,11 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}
// ReceiveDataColumn implements the same method in chain service
func (c *ChainService) ReceiveDataColumn(_ context.Context, _ blocks.VerifiedRODataColumn) error {
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

View File

@@ -8,6 +8,7 @@ go_library(
"attestation_data.go",
"balance_cache_key.go",
"checkpoint_state.go",
"column_subnet_ids.go",
"committee.go",
"committee_disabled.go", # keep
"committees.go",

beacon-chain/cache/column_subnet_ids.go (new file, 65 lines)
View File

@@ -0,0 +1,65 @@
package cache
import (
"sync"
"time"
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
type columnSubnetIDs struct {
colSubCache *cache.Cache
colSubLock sync.RWMutex
}
// ColumnSubnetIDs for column subnet participants
var ColumnSubnetIDs = newColumnSubnetIDs()
const columnKey = "columns"
func newColumnSubnetIDs() *columnSubnetIDs {
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
// Set the default duration of a column subnet subscription as the column expiry period.
subLength := epochDuration * time.Duration(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
persistentCache := cache.New(subLength*time.Second, epochDuration*time.Second)
return &columnSubnetIDs{colSubCache: persistentCache}
}
// GetColumnSubnets retrieves the data column subnets.
func (s *columnSubnetIDs) GetColumnSubnets() ([]uint64, bool, time.Time) {
s.colSubLock.RLock()
defer s.colSubLock.RUnlock()
id, duration, ok := s.colSubCache.GetWithExpiration(columnKey)
if !ok {
return nil, false, time.Time{}
}
// Retrieve indices from the cache.
idxs, ok := id.([]uint64)
if !ok {
return nil, false, time.Time{}
}
return idxs, ok, duration
}
// AddColumnSubnets adds the relevant data column subnets.
func (s *columnSubnetIDs) AddColumnSubnets(colIdx []uint64) {
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Set(columnKey, colIdx, 0)
}
// EmptyAllCaches empties out all the related caches and flushes any stored
// entries on them. This should only ever be used for testing, in normal
// production, handling of the relevant subnets for each role is done
// separately.
func (s *columnSubnetIDs) EmptyAllCaches() {
// Clear the cache.
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Flush()
}

View File

@@ -96,6 +96,24 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
// VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.

View File

@@ -40,7 +40,7 @@ import (
//
// state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) error {
_, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
ctx, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
defer span.End()
if st == nil || st.IsNil() {
@@ -68,7 +68,7 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
break
}
if err := SwitchToCompoundingValidator(st, pc.TargetIndex); err != nil {
if err := SwitchToCompoundingValidator(ctx, st, pc.TargetIndex); err != nil {
return err
}

View File

@@ -295,14 +295,14 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
for _, index := range preActivationIndices {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
if err := helpers.QueueEntireBalanceAndResetValidator(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := QueueExcessActiveBalance(post, index); err != nil {
if err := helpers.QueueExcessActiveBalance(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}

View File

@@ -1,6 +1,7 @@
package electra
import (
"context"
"errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
@@ -18,7 +19,7 @@ import (
// if has_eth1_withdrawal_credential(validator):
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
v, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
@@ -31,12 +32,12 @@ func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorI
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return QueueExcessActiveBalance(s, idx)
return queueExcessActiveBalance(ctx, s, idx)
}
return nil
}
// QueueExcessActiveBalance
// queueExcessActiveBalance
//
// Spec definition:
//
@@ -48,7 +49,7 @@ func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorI
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
func queueExcessActiveBalance(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
@@ -79,7 +80,7 @@ func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex
// )
//
//nolint:dupword
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
func QueueEntireBalanceAndResetValidator(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err

View File

@@ -2,6 +2,7 @@ package electra_test
import (
"bytes"
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
@@ -10,7 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestSwitchToCompoundingValidator(t *testing.T) {
@@ -34,10 +34,10 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
})
// Test that a validator with no withdrawal credentials cannot be switched to compounding.
require.NoError(t, err)
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(s, 0))
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(context.TODO(), s, 0))
// Test that a validator with withdrawal credentials can be switched to compounding.
require.NoError(t, electra.SwitchToCompoundingValidator(s, 1))
require.NoError(t, electra.SwitchToCompoundingValidator(context.TODO(), s, 1))
v, err := s.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, true, bytes.HasPrefix(v.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}), "withdrawal credentials were not updated")
@@ -50,7 +50,7 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
require.Equal(t, 0, len(pbd), "pending balance deposits should be empty")
// Test that a validator with excess balance can be switched to compounding, excess balance is queued.
require.NoError(t, electra.SwitchToCompoundingValidator(s, 2))
require.NoError(t, electra.SwitchToCompoundingValidator(context.TODO(), s, 2))
b, err = s.BalanceAtIndex(2)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was not changed")
@@ -74,7 +74,7 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
},
})
require.NoError(t, err)
require.NoError(t, electra.QueueEntireBalanceAndResetValidator(s, 0))
require.NoError(t, electra.QueueEntireBalanceAndResetValidator(context.TODO(), s, 0))
b, err := s.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), b, "balance was not changed")
@@ -88,57 +88,3 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
require.Equal(t, params.BeaconConfig().MinActivationBalance+100_000, pbd[0].Amount, "pending balance deposit amount is incorrect")
require.Equal(t, primitives.ValidatorIndex(0), pbd[0].Index, "pending balance deposit index is incorrect")
}
func TestSwitchToCompoundingValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
vals := st.Validators()
vals[0].WithdrawalCredentials = []byte{params.BeaconConfig().ETH1AddressWithdrawalPrefixByte}
require.NoError(t, st.SetValidators(vals))
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1010
require.NoError(t, st.SetBalances(bals))
require.NoError(t, electra.SwitchToCompoundingValidator(st, 0))
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1010), pbd[0].Amount) // appends it at the end
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
bytes.HasPrefix(val.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte})
}
func TestQueueExcessActiveBalance_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
err := electra.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount) // appends it at the end
bals = st.Balances()
require.Equal(t, params.BeaconConfig().MinActivationBalance, bals[0])
}
func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
// need to manually set this to 0 as after 6110 these balances are now 0 and instead populates pending balance deposits
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance - 1000
require.NoError(t, st.SetBalances(bals))
err := electra.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, params.BeaconConfig().MinActivationBalance-1000, pbd[0].Amount)
bal, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), bal)
}

View File

@@ -32,6 +32,9 @@ const (
// AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
AttesterSlashingReceived = 8
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 9
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -77,3 +80,7 @@ type ProposerSlashingReceivedData struct {
type AttesterSlashingReceivedData struct {
AttesterSlashing ethpb.AttSlashing
}
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}

View File

@@ -10,7 +10,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -92,14 +91,6 @@ func IsAggregated(attestation ethpb.Att) bool {
//
// return uint64((committees_since_epoch_start + committee_index) % ATTESTATION_SUBNET_COUNT)
func ComputeSubnetForAttestation(activeValCount uint64, att ethpb.Att) uint64 {
if att.Version() >= version.Electra {
committeeIndex := 0
committeeIndices := att.CommitteeBitsVal().BitIndices()
if len(committeeIndices) > 0 {
committeeIndex = committeeIndices[0]
}
return ComputeSubnetFromCommitteeAndSlot(activeValCount, primitives.CommitteeIndex(committeeIndex), att.GetData().Slot)
}
return ComputeSubnetFromCommitteeAndSlot(activeValCount, att.GetData().CommitteeIndex, att.GetData().Slot)
}

View File

@@ -73,37 +73,21 @@ func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
valCount, err := helpers.ActiveValidatorCount(context.Background(), state, slots.ToEpoch(34))
att := &ethpb.Attestation{
AggregationBits: []byte{'A'},
Data: &ethpb.AttestationData{
Slot: 34,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{'C'},
Source: nil,
Target: nil,
},
Signature: []byte{'B'},
}
valCount, err := helpers.ActiveValidatorCount(context.Background(), state, slots.ToEpoch(att.Data.Slot))
require.NoError(t, err)
t.Run("Phase 0", func(t *testing.T) {
att := &ethpb.Attestation{
AggregationBits: []byte{'A'},
Data: &ethpb.AttestationData{
Slot: 34,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{'C'},
},
Signature: []byte{'B'},
}
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
})
t.Run("Electra", func(t *testing.T) {
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(4, true)
att := &ethpb.AttestationElectra{
AggregationBits: []byte{'A'},
CommitteeBits: cb,
Data: &ethpb.AttestationData{
Slot: 34,
BeaconBlockRoot: []byte{'C'},
},
Signature: []byte{'B'},
}
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
})
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
}
func Test_ValidateAttestationTime(t *testing.T) {

View File

@@ -674,3 +674,68 @@ func ValidatorMaxEffectiveBalance(val *ethpb.Validator) uint64 {
}
return params.BeaconConfig().MinActivationBalance
}
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending balance deposit.
//
// Spec definition:
//
// def queue_excess_active_balance(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// if balance > MIN_ACTIVATION_BALANCE:
// excess_balance = balance - MIN_ACTIVATION_BALANCE
// state.balances[index] = MIN_ACTIVATION_BALANCE
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
}
if bal > params.BeaconConfig().MinActivationBalance {
excessBalance := bal - params.BeaconConfig().MinActivationBalance
if err := s.UpdateBalancesAtIndex(idx, params.BeaconConfig().MinActivationBalance); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, excessBalance)
}
return nil
}
// QueueEntireBalanceAndResetValidator queues the entire balance and resets the validator. This is used in electra fork logic.
//
// Spec definition:
//
// def queue_entire_balance_and_reset_validator(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// validator = state.validators[index]
// state.balances[index] = 0
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=balance)
// )
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
}
if err := s.UpdateBalancesAtIndex(idx, 0); err != nil {
return err
}
v, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
}
v.EffectiveBalance = 0
v.ActivationEligibilityEpoch = params.BeaconConfig().FarFutureEpoch
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, bal)
}

View File

@@ -18,6 +18,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestIsActiveValidator_OK(t *testing.T) {
@@ -1119,3 +1120,40 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
// Sanity check that MinActivationBalance equals (pre-electra) MaxEffectiveBalance
assert.Equal(t, params.BeaconConfig().MinActivationBalance, params.BeaconConfig().MaxEffectiveBalance)
}
func TestQueueExcessActiveBalance_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
err := helpers.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount)
bals = st.Balances()
require.Equal(t, params.BeaconConfig().MinActivationBalance, bals[0])
}
func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(pbd))
err = helpers.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err = st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
val, err = st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), val.EffectiveBalance)
}

View File

@@ -0,0 +1,37 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["helpers.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["helpers_test.go"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -0,0 +1,359 @@
package peerdas
import (
"encoding/binary"
"math"
"math/big"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/holiman/uint256"
errors "github.com/pkg/errors"
kzg "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
const (
CustodySubnetCountEnrKey = "csc"
)
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
type Csc uint64
func (Csc) ENRKey() string { return CustodySubnetCountEnrKey }
var (
// Custom errors
errCustodySubnetCountTooLarge = errors.New("custody subnet count larger than data column sidecar subnet count")
errIndexTooLarge = errors.New("column index is larger than the specified columns count")
errMismatchLength = errors.New("mismatch in the length of the commitments and proofs")
errRecordNil = errors.New("record is nil")
errCannotLoadCustodySubnetCount = errors.New("cannot load the custody subnet count from peer")
// maxUint256 is the maximum value of a uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
// CustodyColumnSubnets computes the subnets the node should participate in for custody.
func CustodyColumnSubnets(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Check if the custody subnet count is larger than the data column sidecar subnet count.
if custodySubnetCount > dataColumnSidecarSubnetCount {
return nil, errCustodySubnetCountTooLarge
}
// First, compute the subnet IDs that the node should participate in.
subnetIds := make(map[uint64]bool, custodySubnetCount)
one := uint256.NewInt(1)
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(subnetIds)) < custodySubnetCount; currentId.Add(currentId, one) {
// Convert to big endian bytes.
currentIdBytesBigEndian := currentId.Bytes32()
// Convert to little endian.
currentIdBytesLittleEndian := bytesutil.ReverseByteOrder(currentIdBytesBigEndian[:])
// Hash the result.
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the subnet ID.
subnetId := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % dataColumnSidecarSubnetCount
// Add the subnet to the map.
subnetIds[subnetId] = true
// Overflow prevention.
if currentId.Cmp(maxUint256) == 0 {
currentId = uint256.NewInt(0)
}
}
return subnetIds, nil
}
// CustodyColumns computes the columns the node should custody.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#helper-functions
func CustodyColumns(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Compute the custodied subnets.
subnetIds, err := CustodyColumnSubnets(nodeId, custodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody subnets")
}
columnsPerSubnet := kzg.CellsPerExtBlob / dataColumnSidecarSubnetCount
// Knowing the subnet ID and the number of columns per subnet, select all the columns the node should custody.
// Columns belonging to the same subnet are spaced dataColumnSidecarSubnetCount indices apart.
columnIndices := make(map[uint64]bool, custodySubnetCount*columnsPerSubnet)
for i := uint64(0); i < columnsPerSubnet; i++ {
for subnetId := range subnetIds {
columnIndex := dataColumnSidecarSubnetCount*i + subnetId
columnIndices[columnIndex] = true
}
}
return columnIndices, nil
}
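
A minimal usage sketch of the helper above; `localNode` (a *enode.LocalNode) is an illustrative name, not part of this change:

// Sketch: derive the set of column indices this node must custody.
custodyCount := peerdas.CustodySubnetCount()
columns, err := peerdas.CustodyColumns(localNode.ID(), custodyCount)
if err != nil {
	return errors.Wrap(err, "custody columns")
}
for column := range columns {
	_ = column // subscribe to / serve this column index
}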
// DataColumnSidecars computes the data column sidecars from the signed block and blobs.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#recover_matrix
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, blobs []kzg.Blob) ([]*ethpb.DataColumnSidecar, error) {
blobsCount := len(blobs)
if blobsCount == 0 {
return nil, nil
}
// Get the signed block header.
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
// Get the block body.
block := signedBlock.Block()
blockBody := block.Body()
// Get the blob KZG commitments.
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Compute the KZG commitments inclusion proof.
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof ZKG commitments")
}
// Compute cells and proofs.
cellsAndProofs := make([]kzg.CellsAndProofs, 0, blobsCount)
for i := range blobs {
blob := &blobs[i]
blobCellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(blob)
if err != nil {
return nil, errors.Wrap(err, "compute cells and KZG proofs")
}
cellsAndProofs = append(cellsAndProofs, blobCellsAndProofs)
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, kzg.CellsPerExtBlob)
for columnIndex := uint64(0); columnIndex < kzg.CellsPerExtBlob; columnIndex++ {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
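
The loop above effectively transposes the blob-by-cell matrix: sidecar c holds, for every blob row r, the cell at column c. A hedged sketch of that invariant for some row r and column c, assuming `sidecars` was built from `blobs` as above (bytes import assumed):

// Sketch: cell (r, c) of the extended matrix ends up in sidecar c, row r.
cp, err := kzg.ComputeCellsAndKZGProofs(&blobs[r])
if err != nil {
	return err
}
if !bytes.Equal(sidecars[c].DataColumn[r], cp.Cells[c][:]) {
	panic("transpose invariant violated")
}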
// DataColumnSidecarsForReconstruct is a TEMPORARY function until there is an official specification for it.
// It is scheduled for deletion.
func DataColumnSidecarsForReconstruct(
blobKzgCommitments [][]byte,
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
kzgCommitmentsInclusionProof [][]byte,
cellsAndProofs []kzg.CellsAndProofs,
) ([]*ethpb.DataColumnSidecar, error) {
// Each CellsAndProofs corresponds to a Blob
// So we can get the BlobCount by checking the length of CellsAndProofs
blobsCount := len(cellsAndProofs)
if blobsCount == 0 {
return nil, nil
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, kzg.CellsPerExtBlob)
for columnIndex := uint64(0); columnIndex < kzg.CellsPerExtBlob; columnIndex++ {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
// VerifyDataColumnSidecarKZGProofs verifies the provided KZG Proofs for the particular
// data column.
func VerifyDataColumnSidecarKZGProofs(sc *ethpb.DataColumnSidecar) (bool, error) {
if sc.ColumnIndex >= params.BeaconConfig().NumberOfColumns {
return false, errIndexTooLarge
}
if len(sc.DataColumn) != len(sc.KzgCommitments) || len(sc.KzgCommitments) != len(sc.KzgProof) {
return false, errMismatchLength
}
blobsCount := len(sc.DataColumn)
rowIdx := make([]uint64, 0, blobsCount)
colIdx := make([]uint64, 0, blobsCount)
for i := 0; i < len(sc.DataColumn); i++ {
copiedI := uint64(i)
rowIdx = append(rowIdx, copiedI)
colI := sc.ColumnIndex
colIdx = append(colIdx, colI)
}
ckzgComms := make([]kzg.Bytes48, 0, len(sc.KzgCommitments))
for _, com := range sc.KzgCommitments {
ckzgComms = append(ckzgComms, kzg.Bytes48(com))
}
var cells []kzg.Cell
for _, cell := range sc.DataColumn {
cells = append(cells, kzg.Cell(cell))
}
var proofs []kzg.Bytes48
for _, p := range sc.KzgProof {
proofs = append(proofs, kzg.Bytes48(p))
}
return kzg.VerifyCellKZGProofBatch(ckzgComms, rowIdx, colIdx, cells, proofs)
}
// CustodySubnetCount returns the number of subnets the node should participate in for custody.
func CustodySubnetCount() uint64 {
count := params.BeaconConfig().CustodyRequirement
if flags.Get().SubscribeToAllSubnets {
count = params.BeaconConfig().DataColumnSidecarSubnetCount
}
return count
}
// HypergeomCDF computes the hypergeometric cumulative distribution function P[X <= k]
// for a population of size M containing n successes, from which N items are sampled.
// https://en.wikipedia.org/wiki/Hypergeometric_distribution
func HypergeomCDF(k, M, n, N uint64) float64 {
denominatorInt := new(big.Int).Binomial(int64(M), int64(N)) // lint:ignore uintcast
denominator := new(big.Float).SetInt(denominatorInt)
rBig := big.NewFloat(0)
for i := uint64(0); i < k+1; i++ {
a := new(big.Int).Binomial(int64(n), int64(i)) // lint:ignore uintcast
b := new(big.Int).Binomial(int64(M-n), int64(N-i))
numeratorInt := new(big.Int).Mul(a, b)
numerator := new(big.Float).SetInt(numeratorInt)
item := new(big.Float).Quo(numerator, denominator)
rBig.Add(rBig, item)
}
r, _ := rBig.Float64()
return r
}
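
A worked example using the same vector as the unit test further below: sampling N=16 of M=128 columns when n=65 are missing, the probability of drawing at most k=5 missing ones:

p := peerdas.HypergeomCDF(5, 128, 65, 16)
fmt.Printf("%.10f\n", p) // prints ≈ 0.0796665913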
// ExtendedSampleCount computes, for a given number of samples per slot and
// allowed failures, the number of samples we should actually query from peers.
// TODO: Add link to the specification once it is available.
func ExtendedSampleCount(samplesPerSlot, allowedFailures uint64) uint64 {
// Retrieve the columns count
columnsCount := params.BeaconConfig().NumberOfColumns
// If half of the columns are missing, we are still able to reconstruct the data.
// If half of the columns + 1 are missing, we are not.
// This is therefore the smallest number of missing columns in the worst case.
worstCaseMissing := columnsCount/2 + 1
// Compute the false positive threshold.
falsePositiveThreshold := HypergeomCDF(0, columnsCount, worstCaseMissing, samplesPerSlot)
var sampleCount uint64
// Finally, compute the extended sample count.
for sampleCount = samplesPerSlot; sampleCount < columnsCount+1; sampleCount++ {
if HypergeomCDF(allowedFailures, columnsCount, worstCaseMissing, sampleCount) <= falsePositiveThreshold {
break
}
}
return sampleCount
}
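
For example, per the unit test table further below, with 16 samples per slot and 2 allowed failures the node should extend its query to 24 columns:

count := peerdas.ExtendedSampleCount(16, 2)
fmt.Println(count) // 24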
// CustodyCountFromRecord extracts the custody subnet count from an ENR record.
func CustodyCountFromRecord(record *enr.Record) (uint64, error) {
if record == nil {
return 0, errRecordNil
}
// Load the `custody_subnet_count`
var csc Csc
if err := record.Load(&csc); err != nil {
return 0, errCannotLoadCustodySubnetCount
}
return uint64(csc), nil
}
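
A hedged round-trip sketch, assuming `record` is a *enr.Record for the local node (Set and Load are go-ethereum's standard ENR accessors):

// Advertise our custody subnet count in the ENR...
record.Set(peerdas.Csc(peerdas.CustodySubnetCount()))
// ...and read it back the way we would for a peer.
count, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
	return err // peer did not expose a csc entry
}
_ = count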

View File

@@ -0,0 +1,144 @@
package peerdas_test
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"fmt"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
)
func deterministicRandomness(seed int64) [32]byte {
// Convert the int64 seed to big-endian bytes.
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
seedBytes := buf.Bytes()
return sha256.Sum256(seedBytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
randBytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(randBytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(&blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(&blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
dbBlock := util.NewBeaconBlockDeneb()
require.NoError(t, kzg.Start())
var (
comms [][]byte
blobs []kzg.Blob
)
for i := int64(0); i < 6; i++ {
blob := GetRandBlob(i)
commitment, _, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
comms = append(comms, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = comms
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
sCars, err := peerdas.DataColumnSidecars(sBlock, blobs)
require.NoError(t, err)
for i, sidecar := range sCars {
verified, err := peerdas.VerifyDataColumnSidecarKZGProofs(sidecar)
require.NoError(t, err)
require.Equal(t, true, verified, fmt.Sprintf("sidecar %d failed", i))
}
}
func TestHypergeomCDF(t *testing.T) {
// Hypergeometric CDF (https://en.wikipedia.org/wiki/Hypergeometric_distribution)
// with k=5, M=128 (population size), n=65 (successes in population), N=16 (sample size).
// Expected result: ≈ 0.0796665913283742
const (
expected = 0.0796665913283742
margin = 0.000001
)
actual := peerdas.HypergeomCDF(5, 128, 65, 16)
require.Equal(t, true, expected-margin <= actual && actual <= expected+margin)
}
func TestExtendedSampleCount(t *testing.T) {
const samplesPerSlot = 16
testCases := []struct {
name string
allowedMissings uint64
extendedSampleCount uint64
}{
{name: "allowedMissings=0", allowedMissings: 0, extendedSampleCount: 16},
{name: "allowedMissings=1", allowedMissings: 1, extendedSampleCount: 20},
{name: "allowedMissings=2", allowedMissings: 2, extendedSampleCount: 24},
{name: "allowedMissings=3", allowedMissings: 3, extendedSampleCount: 27},
{name: "allowedMissings=4", allowedMissings: 4, extendedSampleCount: 29},
{name: "allowedMissings=5", allowedMissings: 5, extendedSampleCount: 32},
{name: "allowedMissings=6", allowedMissings: 6, extendedSampleCount: 35},
{name: "allowedMissings=7", allowedMissings: 7, extendedSampleCount: 37},
{name: "allowedMissings=8", allowedMissings: 8, extendedSampleCount: 40},
{name: "allowedMissings=9", allowedMissings: 9, extendedSampleCount: 42},
{name: "allowedMissings=10", allowedMissings: 10, extendedSampleCount: 44},
{name: "allowedMissings=11", allowedMissings: 11, extendedSampleCount: 47},
{name: "allowedMissings=12", allowedMissings: 12, extendedSampleCount: 49},
{name: "allowedMissings=13", allowedMissings: 13, extendedSampleCount: 51},
{name: "allowedMissings=14", allowedMissings: 14, extendedSampleCount: 53},
{name: "allowedMissings=15", allowedMissings: 15, extendedSampleCount: 55},
{name: "allowedMissings=16", allowedMissings: 16, extendedSampleCount: 57},
{name: "allowedMissings=17", allowedMissings: 17, extendedSampleCount: 59},
{name: "allowedMissings=18", allowedMissings: 18, extendedSampleCount: 61},
{name: "allowedMissings=19", allowedMissings: 19, extendedSampleCount: 63},
{name: "allowedMissings=20", allowedMissings: 20, extendedSampleCount: 65},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := peerdas.ExtendedSampleCount(samplesPerSlot, tc.allowedMissings)
require.Equal(t, tc.extendedSampleCount, result)
})
}
}

View File

@@ -53,6 +53,11 @@ func HigherEqualThanAltairVersionAndEpoch(s state.BeaconState, e primitives.Epoc
return s.Version() >= version.Altair && e >= params.BeaconConfig().AltairForkEpoch
}
// PeerDASIsActive checks whether peerDAS is active at the provided slot.
func PeerDASIsActive(slot primitives.Slot) bool {
return params.PeerDASEnabled() && slots.ToEpoch(slot) >= params.BeaconConfig().Eip7594ForkEpoch
}
// CanUpgradeToAltair returns true if the input `slot` can upgrade to Altair.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == ALTAIR_FORK_EPOCH

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"availability.go",
"availability_columns.go",
"cache.go",
"iface.go",
"mock.go",
@@ -20,6 +21,7 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -0,0 +1,151 @@
package das
import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to PersistColumns until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.BlobStorage
cache *cache
verifier ColumnBatchVerifier
nodeID enode.ID
}
type ColumnBatchVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, sc []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
func NewLazilyPersistentStoreColumn(store *filesystem.BlobStorage, verifier ColumnBatchVerifier, id enode.ID) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newCache(),
verifier: verifier,
nodeID: id,
}
}
// Persist is a no-op for the column store; it exists only to satisfy the AvailabilityStore interface.
// TODO: Very ugly, change interface to allow for columns and blobs.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
return nil
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) PersistColumns(current primitives.Slot, sc ...blocks.RODataColumn) error {
if len(sc) == 0 {
return nil
}
if len(sc) > 1 {
first := sc[0].BlockRoot()
for i := 1; i < len(sc); i++ {
if first != sc[i].BlockRoot() {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromColumn(sc[0])
entry := s.cache.ensure(key)
for i := range sc {
if err := entry.stashColumns(&sc[i]); err != nil {
return err
}
}
return nil
}
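
A sketch of the intended batch-sync flow; `blobStorage`, `columnVerifier`, `nodeID`, `roColumns`, and `roBlock` are illustrative names, not part of this change:

avs := das.NewLazilyPersistentStoreColumn(blobStorage, columnVerifier, nodeID)
if err := avs.PersistColumns(currentSlot, roColumns...); err != nil {
	return err
}
// Later, when importing the block, verify and persist everything at once.
if err := avs.IsDataAvailable(ctx, currentSlot, roBlock); err != nil {
	return err // columns missing, inconsistent, or failed verification
}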
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := fullCommitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could check data availability for block %#x", b.Root())
}
// Return early for blocks that are pre-deneb or which do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
key := keyFromBlock(b)
entry := s.cache.ensure(key)
defer s.cache.delete(key)
root := b.Root()
sumz, err := s.store.WaitForSummarizer(ctx)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", b.Root())).
WithError(err).
Debug("Failed to receive BlobStorageSummarizer within IsDataAvailable")
} else {
entry.setDiskSummary(sumz.Summary(root))
}
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
sidecars, err := entry.filterColumns(root, blockCommitments)
if err != nil {
return errors.Wrap(err, "incomplete BlobSidecar batch")
}
// Do thorough verifications of each DataColumnSidecar for the block.
// Same as above, we don't save DataColumnSidecars if there are any problems with the batch.
vscs, err := s.verifier.VerifiedRODataColumns(ctx, b, sidecars)
if err != nil {
var me verification.VerificationMultiError
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(log.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
log.WithFields(lf).
Debug("Invalid ColumnSidecars received")
}
return errors.Wrapf(err, "invalid ColumnSidecars received for block %#x", root)
}
// Ensure that each column sidecar is written to disk.
for i := range vscs {
if err := s.store.SaveDataColumn(vscs[i]); err != nil {
return errors.Wrapf(err, "failed to save ColumnSidecar index %d for block %#x", vscs[i].ColumnIndex, root)
}
}
// All ColumnSidecars are persisted - da check succeeds.
return nil
}
func fullCommitmentsToCheck(b blocks.ROBlock, current primitives.Slot) (safeCommitmentsArray, error) {
var ar safeCommitmentsArray
if b.Version() < version.Deneb {
return ar, nil
}
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return ar, nil
}
kc, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return ar, err
}
// Assign the full commitments list to every column slot. Using copy here
// would write into nil slices and silently drop the commitments.
for i := range ar {
ar[i] = kc
}
return ar, nil
}

View File

@@ -2,6 +2,7 @@ package das
import (
"bytes"
"reflect"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -38,6 +39,10 @@ func keyFromSidecar(sc blocks.ROBlob) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
func keyFromColumn(sc blocks.RODataColumn) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
// keyFromBlock is a convenience method for constructing a cacheKey from a ROBlock value.
func keyFromBlock(b blocks.ROBlock) cacheKey {
return cacheKey{slot: b.Block().Slot(), root: b.Root()}
@@ -61,6 +66,7 @@ func (c *cache) delete(key cacheKey) {
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
colScs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.BlobStorageSummary
}
@@ -82,6 +88,17 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
return nil
}
func (e *cacheEntry) stashColumns(sc *blocks.RODataColumn) error {
if sc.ColumnIndex >= fieldparams.NumberOfColumns {
return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.ColumnIndex)
}
if e.colScs[sc.ColumnIndex] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.ColumnIndex, sc.KzgCommitments)
}
e.colScs[sc.ColumnIndex] = sc
return nil
}
// filter evicts sidecars that are not committed to by the block and returns custom
// errors if the cache is missing any of the commitments, or if the commitments in
// the cache do not match those found in the block. If err is nil, then all expected
@@ -117,6 +134,35 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
return scs, nil
}
func (e *cacheEntry) filterColumns(root [32]byte, kc safeCommitmentsArray) ([]blocks.RODataColumn, error) {
if e.diskSummary.AllAvailable(kc.count()) {
return nil, nil
}
scs := make([]blocks.RODataColumn, 0, kc.count())
for i := uint64(0); i < fieldparams.NumberOfColumns; i++ {
// We already have this column, we don't need to write it or validate it.
if e.diskSummary.HasIndex(i) {
continue
}
if kc[i] == nil {
if e.colScs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.colScs[i].KzgCommitments)
}
continue
}
if e.colScs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !reflect.DeepEqual(kc[i], e.colScs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.colScs[i].KzgCommitments, kc[i])
}
scs = append(scs, *e.colScs[i])
}
return scs, nil
}
// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// gratuitous bounds checks.
type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte
@@ -129,3 +175,14 @@ func (s safeCommitmentArray) count() int {
}
return fieldparams.MaxBlobsPerBlock
}
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
func (s safeCommitmentsArray) count() int {
for i := range s {
if s[i] == nil {
return i
}
}
return fieldparams.NumberOfColumns
}

View File

@@ -13,6 +13,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -12,6 +12,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -39,8 +40,15 @@ const (
directoryPermissions = 0700
)
// BlobStorageOption is a functional option for configuring a BlobStorage.
type BlobStorageOption func(*BlobStorage) error
type (
// BlobStorageOption is a functional option for configuring a BlobStorage.
BlobStorageOption func(*BlobStorage) error
RootIndexPair struct {
Root [fieldparams.RootLength]byte
Index uint64
}
)
// WithBasePath is a required option that sets the base path of blob storage.
func WithBasePath(base string) BlobStorageOption {
@@ -70,7 +78,10 @@ func WithSaveFsync(fsync bool) BlobStorageOption {
// attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
// initialized once per beacon node.
func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
b := &BlobStorage{}
b := &BlobStorage{
DataColumnFeed: new(event.Feed),
}
for _, o := range opts {
if err := o(b); err != nil {
return nil, errors.Wrap(err, "failed to create blob storage")
@@ -99,6 +110,7 @@ type BlobStorage struct {
fsync bool
fs afero.Fs
pruner *blobPruner
DataColumnFeed *event.Feed
}
// WarmCache runs the prune routine with an expiration of slot of 0, so nothing will be pruned, but the pruner's cache
@@ -221,6 +233,110 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
return nil
}
// SaveDataColumn saves a data column to our local filesystem.
func (bs *BlobStorage) SaveDataColumn(column blocks.VerifiedRODataColumn) error {
startTime := time.Now()
fname := namerForDataColumn(column)
sszPath := fname.path()
exists, err := afero.Exists(bs.fs, sszPath)
if err != nil {
return err
}
if exists {
log.Trace("Ignoring a duplicate data column sidecar save attempt")
return nil
}
if bs.pruner != nil {
hRoot, err := column.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
return err
}
if err := bs.pruner.notify(hRoot, column.SignedBlockHeader.Header.Slot, column.ColumnIndex); err != nil {
return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", hRoot)
}
}
// Serialize the ethpb.DataColumnSidecar to binary data using SSZ.
sidecarData, err := column.MarshalSSZ()
if err != nil {
return errors.Wrap(err, "failed to serialize sidecar data")
} else if len(sidecarData) == 0 {
return errSidecarEmptySSZData
}
if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
return err
}
partPath := fname.partPath(fmt.Sprintf("%p", sidecarData))
partialMoved := false
// Ensure the partial file is deleted.
defer func() {
if partialMoved {
return
}
// Remove is expected to error if the save succeeded, because the partial file was already renamed.
err = bs.fs.Remove(partPath)
if err == nil {
log.WithFields(logrus.Fields{
"partPath": partPath,
}).Debug("Removed partial file")
}
}()
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(partPath)
if err != nil {
return errors.Wrap(err, "failed to create partial file")
}
n, err := partialFile.Write(sidecarData)
if err != nil {
closeErr := partialFile.Close()
if closeErr != nil {
return closeErr
}
return errors.Wrap(err, "failed to write to partial file")
}
if bs.fsync {
if err := partialFile.Sync(); err != nil {
return err
}
}
if err := partialFile.Close(); err != nil {
return err
}
if n != len(sidecarData) {
return fmt.Errorf("failed to write the full bytes of sidecarData, wrote only %d of %d bytes", n, len(sidecarData))
}
if n == 0 {
return errEmptyBlobWritten
}
// Atomically rename the partial file to its final name.
err = bs.fs.Rename(partPath, sszPath)
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
partialMoved = true
// Notify the data column notifier that a new data column has been saved.
bs.DataColumnFeed.Send(RootIndexPair{
Root: column.BlockRoot(),
Index: column.ColumnIndex,
})
// TODO: Use new metrics for data columns
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
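
Downstream consumers can watch the feed for newly persisted columns. A hedged sketch, assuming the feed follows the usual Subscribe/Unsubscribe event API:

ch := make(chan filesystem.RootIndexPair, 16)
sub := bs.DataColumnFeed.Subscribe(ch)
defer sub.Unsubscribe()
for pair := range ch {
	_ = pair // pair.Root / pair.Index identify the column that just hit disk
}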
// Get retrieves a single BlobSidecar by its root and index.
// Since BlobStorage only writes blobs that have undergone full verification, the return
// value is always a VerifiedROBlob.
@@ -246,6 +362,20 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return verification.BlobSidecarNoop(ro)
}
// GetColumn retrieves a single DataColumnSidecar by its root and index.
func (bs *BlobStorage) GetColumn(root [32]byte, idx uint64) (*ethpb.DataColumnSidecar, error) {
expected := blobNamer{root: root, index: idx}
encoded, err := afero.ReadFile(bs.fs, expected.path())
if err != nil {
return nil, err
}
s := &ethpb.DataColumnSidecar{}
if err := s.UnmarshalSSZ(encoded); err != nil {
return nil, err
}
return s, nil
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
@@ -289,6 +419,61 @@ func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]boo
return mask, nil
}
// ColumnIndices retrieves the stored column indices from our filesystem.
func (bs *BlobStorage) ColumnIndices(root [32]byte) (map[uint64]bool, error) {
custody := make(map[uint64]bool, fieldparams.NumberOfColumns)
// Get all the files in the directory.
rootDir := blobNamer{root: root}.dir()
entries, err := afero.ReadDir(bs.fs, rootDir)
if err != nil {
// If the directory does not exist, we do not custody any columns.
if os.IsNotExist(err) {
return nil, nil
}
return nil, errors.Wrap(err, "read directory")
}
// Iterate over all the entries in the directory.
for _, entry := range entries {
// If the entry is a directory, skip it.
if entry.IsDir() {
continue
}
// If the entry does not have the correct extension, skip it.
name := entry.Name()
if !strings.HasSuffix(name, sszExt) {
continue
}
// The file should be in the `<index>.<extension>` format.
// Skip the file if it does not match the format.
parts := strings.Split(name, ".")
if len(parts) != 2 {
continue
}
// Get the column index from the file name.
columnIndexStr := parts[0]
columnIndex, err := strconv.ParseUint(columnIndexStr, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
}
// If the column index is out of bounds, return an error.
if columnIndex >= fieldparams.NumberOfColumns {
return nil, errors.Wrapf(errIndexOutOfBounds, "invalid index %d", columnIndex)
}
// Mark the column index as in custody.
custody[columnIndex] = true
}
return custody, nil
}
// Clear deletes all files on the filesystem.
func (bs *BlobStorage) Clear() error {
dirs, err := listDir(bs.fs, ".")
@@ -321,6 +506,10 @@ func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
return blobNamer{root: sc.BlockRoot(), index: sc.Index}
}
func namerForDataColumn(col blocks.VerifiedRODataColumn) blobNamer {
return blobNamer{root: col.BlockRoot(), index: col.ColumnIndex}
}
func (p blobNamer) dir() string {
return rootString(p.root)
}

View File

@@ -9,7 +9,7 @@ import (
)
// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
type blobIndexMask [fieldparams.MaxBlobsPerBlock]bool
type blobIndexMask [fieldparams.NumberOfColumns]bool
// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
@@ -26,6 +26,15 @@ func (s BlobStorageSummary) HasIndex(idx uint64) bool {
return s.mask[idx]
}
// HasDataColumnIndex returns true if the DataColumnSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasDataColumnIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
if idx >= fieldparams.NumberOfColumns {
return false
}
return s.mask[idx]
}
// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
if count > fieldparams.MaxBlobsPerBlock {
@@ -39,6 +48,21 @@ func (s BlobStorageSummary) AllAvailable(count int) bool {
return true
}
// AllDataColumnsAvailable returns true if we have all data columns for the corresponding indices.
func (s BlobStorageSummary) AllDataColumnsAvailable(indices map[uint64]bool) bool {
if uint64(len(indices)) > fieldparams.NumberOfColumns {
return false
}
for idx := range indices {
if !s.mask[idx] {
return false
}
}
return true
}
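
Combined with the peerdas helpers, this gives a simple availability check; `summarizer`, `nodeID`, and `blockRoot` are illustrative names:

custody, err := peerdas.CustodyColumns(nodeID, peerdas.CustodySubnetCount())
if err != nil {
	return err
}
if !summarizer.Summary(blockRoot).AllDataColumnsAvailable(custody) {
	// Some custodied columns are missing; fetch them from peers.
}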
// BlobStorageSummarizer can be used to receive a summary of metadata about blobs on disk for a given root.
// The BlobStorageSummary can be used to check which indices (if any) are available for a given block by root.
type BlobStorageSummarizer interface {
@@ -68,9 +92,12 @@ func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
}
func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64) error {
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
// TODO: Separate blob index checks from data column index checks
/*
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
*/
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[key]

View File

@@ -9,6 +9,7 @@ import (
)
func TestSlotByRoot_Summary(t *testing.T) {
t.Skip("Use new test for data columns")
var noneSet, allSet, firstSet, lastSet, oneSet blobIndexMask
firstSet[0] = true
lastSet[len(lastSet)-1] = true

View File

@@ -23,10 +23,10 @@ import (
"go.opencensus.io/trace"
)
// used to represent errors for inconsistent slot ranges.
// Used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root.
// Block retrieval by root. Returns nil if the block is not found.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End()

View File

@@ -69,6 +69,7 @@ go_library(
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"github.com/ethereum/go-ethereum/common"
fastssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/cmd"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -196,3 +197,7 @@ func configureExecutionSetting(cliCtx *cli.Context) error {
" Default fee recipient will be used as a fall back", checksumAddress.Hex())
return params.SetActive(c)
}
func configureFastSSZHashingAlgorithm() {
fastssz.EnableVectorizedHTR = true
}

View File

@@ -277,6 +277,8 @@ func configureBeacon(cliCtx *cli.Context) error {
return errors.Wrap(err, "could not configure execution setting")
}
configureFastSSZHashingAlgorithm()
return nil
}
@@ -988,6 +990,7 @@ func (b *BeaconNode) registerRPCService(router *mux.Router) error {
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
DataColumnReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,

View File

@@ -21,13 +21,12 @@ go_library(
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -16,10 +16,9 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
@@ -40,15 +39,14 @@ go_test(
embed = [":go_default_library"],
deps = [
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -9,9 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
attaggregation "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -34,28 +32,28 @@ func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedA
_, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAtts")
defer span.End()
attsByVerAndDataRoot := make(map[attestation.Id][]ethpb.Att, len(unaggregatedAtts))
attsByDataRoot := make(map[[32]byte][]ethpb.Att, len(unaggregatedAtts))
for _, att := range unaggregatedAtts {
id, err := attestation.NewId(att, attestation.Data)
attDataRoot, err := att.GetData().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return err
}
attsByVerAndDataRoot[id] = append(attsByVerAndDataRoot[id], att)
attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
}
// Aggregate unaggregated attestations from the pool and save them in the pool.
// Track the unaggregated attestations that aren't able to aggregate.
leftOverUnaggregatedAtt := make(map[attestation.Id]bool)
leftOverUnaggregatedAtt := make(map[[32]byte]bool)
leftOverUnaggregatedAtt = c.aggregateParallel(attsByVerAndDataRoot, leftOverUnaggregatedAtt)
leftOverUnaggregatedAtt = c.aggregateParallel(attsByDataRoot, leftOverUnaggregatedAtt)
// Remove the unaggregated attestations from the pool that were successfully aggregated.
for _, att := range unaggregatedAtts {
id, err := attestation.NewId(att, attestation.Full)
h, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return err
}
if leftOverUnaggregatedAtt[id] {
if leftOverUnaggregatedAtt[h] {
continue
}
if err := c.DeleteUnaggregatedAttestation(att); err != nil {
@@ -68,7 +66,7 @@ func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedA
// aggregateParallel aggregates attestations in parallel for `atts` and saves them in the pool,
// returns the unaggregated attestations that weren't able to aggregate.
// Given `n` CPU cores, it creates a channel of size `n` and spawns `n` goroutines to aggregate attestations.
func (c *AttCaches) aggregateParallel(atts map[attestation.Id][]ethpb.Att, leftOver map[attestation.Id]bool) map[attestation.Id]bool {
func (c *AttCaches) aggregateParallel(atts map[[32]byte][]ethpb.Att, leftOver map[[32]byte]bool) map[[32]byte]bool {
var leftoverLock sync.Mutex
wg := sync.WaitGroup{}
@@ -94,13 +92,13 @@ func (c *AttCaches) aggregateParallel(atts map[attestation.Id][]ethpb.Att, leftO
continue
}
} else {
id, err := attestation.NewId(aggregated, attestation.Full)
h, err := hashFn(aggregated)
if err != nil {
log.WithError(err).Error("Could not create attestation ID")
log.WithError(err).Error("could not hash attestation")
continue
}
leftoverLock.Lock()
leftOver[id] = true
leftOver[h] = true
leftoverLock.Unlock()
}
}
@@ -141,18 +139,17 @@ func (c *AttCaches) SaveAggregatedAttestation(att ethpb.Att) error {
return nil
}
id, err := attestation.NewId(att, attestation.Data)
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
copiedAtt := att.Copy()
c.aggregatedAttLock.Lock()
defer c.aggregatedAttLock.Unlock()
atts, ok := c.aggregatedAtt[id]
atts, ok := c.aggregatedAtt[r]
if !ok {
atts := []ethpb.Att{copiedAtt}
c.aggregatedAtt[id] = atts
c.aggregatedAtt[r] = atts
return nil
}
@@ -160,7 +157,7 @@ func (c *AttCaches) SaveAggregatedAttestation(att ethpb.Att) error {
if err != nil {
return err
}
c.aggregatedAtt[id] = atts
c.aggregatedAtt[r] = atts
return nil
}
@@ -194,56 +191,17 @@ func (c *AttCaches) AggregatedAttestations() []ethpb.Att {
// AggregatedAttestationsBySlotIndex returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndex(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.Attestation {
func (c *AttCaches) AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]*ethpb.Attestation, 0)
atts := make([]ethpb.Att, 0)
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Phase0 && slot == as[0].GetData().Slot && committeeIndex == as[0].GetData().CommitteeIndex {
for _, a := range as {
att, ok := a.(*ethpb.Attestation)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
}
return atts
}
// AggregatedAttestationsBySlotIndexElectra returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndexElectra(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.AttestationElectra {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndexElectra")
defer span.End()
atts := make([]*ethpb.AttestationElectra, 0)
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Electra && slot == as[0].GetData().Slot && as[0].CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
for _, a := range as {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
for _, a := range c.aggregatedAtt {
if slot == a[0].GetData().Slot && committeeIndex == a[0].GetData().CommitteeIndex {
atts = append(atts, a...)
}
}
@@ -258,19 +216,18 @@ func (c *AttCaches) DeleteAggregatedAttestation(att ethpb.Att) error {
if !helpers.IsAggregated(att) {
return errors.New("attestation is not aggregated")
}
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not tree hash attestation data")
}
if err := c.insertSeenBit(att); err != nil {
return err
}
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
}
c.aggregatedAttLock.Lock()
defer c.aggregatedAttLock.Unlock()
attList, ok := c.aggregatedAtt[id]
attList, ok := c.aggregatedAtt[r]
if !ok {
return nil
}
@@ -284,9 +241,9 @@ func (c *AttCaches) DeleteAggregatedAttestation(att ethpb.Att) error {
}
}
if len(filtered) == 0 {
delete(c.aggregatedAtt, id)
delete(c.aggregatedAtt, r)
} else {
c.aggregatedAtt[id] = filtered
c.aggregatedAtt[r] = filtered
}
return nil
@@ -297,15 +254,14 @@ func (c *AttCaches) HasAggregatedAttestation(att ethpb.Att) (bool, error) {
if err := helpers.ValidateNilAttestation(att); err != nil {
return false, err
}
id, err := attestation.NewId(att, attestation.Data)
r, err := hashFn(att.GetData())
if err != nil {
return false, errors.Wrap(err, "could not create attestation ID")
return false, errors.Wrap(err, "could not tree hash attestation")
}
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
if atts, ok := c.aggregatedAtt[id]; ok {
if atts, ok := c.aggregatedAtt[r]; ok {
for _, a := range atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return false, err
@@ -317,7 +273,7 @@ func (c *AttCaches) HasAggregatedAttestation(att ethpb.Att) (bool, error) {
c.blockAttLock.RLock()
defer c.blockAttLock.RUnlock()
if atts, ok := c.blockAtt[id]; ok {
if atts, ok := c.blockAtt[r]; ok {
for _, a := range atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return false, err

View File

@@ -7,11 +7,10 @@ import (
c "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -70,7 +69,7 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
}),
AggregationBits: bitfield.Bitlist{0b10111},
},
wantErrString: "could not create attestation ID",
wantErrString: "could not tree hash attestation: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")",
},
{
name: "already seen",
@@ -93,13 +92,15 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
count: 1,
},
}
id, err := attestation.NewId(util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100}}), attestation.Data)
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{
Slot: 100,
}))
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewAttCaches()
cache.seenAtt.Set(id.String(), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
cache.seenAtt.Set(string(r[:]), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
assert.Equal(t, 0, len(cache.unAggregatedAtt), "Invalid start pool, atts: %d", len(cache.unAggregatedAtt))
err := cache.SaveAggregatedAttestation(tt.att)
@@ -229,7 +230,7 @@ func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
},
}
err := cache.DeleteAggregatedAttestation(att)
wantErr := "could not create attestation ID"
wantErr := "could not tree hash attestation data: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")"
assert.ErrorContains(t, wantErr, err)
})
@@ -499,49 +500,3 @@ func TestKV_Aggregated_DuplicateAggregatedAttestations(t *testing.T) {
assert.DeepSSZEqual(t, att2, returned[0], "Did not receive correct aggregated atts")
assert.Equal(t, 1, len(returned), "Did not receive correct aggregated atts")
}
func TestKV_Aggregated_AggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b1011}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveAggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.AggregatedAttestationsBySlotIndex(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att1}, returned)
returned = cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.Attestation{att2}, returned)
returned = cache.AggregatedAttestationsBySlotIndex(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att3}, returned)
}
func TestKV_Aggregated_AggregatedAttestationsBySlotIndexElectra(t *testing.T) {
cache := NewAttCaches()
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1011}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(2, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att3 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}, CommitteeBits: committeeBits})
atts := []*ethpb.AttestationElectra{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveAggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.AggregatedAttestationsBySlotIndexElectra(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att1}, returned)
returned = cache.AggregatedAttestationsBySlotIndexElectra(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att2}, returned)
returned = cache.AggregatedAttestationsBySlotIndexElectra(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att3}, returned)
}

View File

@@ -3,7 +3,6 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
// SaveBlockAttestation saves a block attestation in cache.
@@ -11,15 +10,14 @@ func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
id, err := attestation.NewId(att, attestation.Data)
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
c.blockAttLock.Lock()
defer c.blockAttLock.Unlock()
atts, ok := c.blockAtt[id]
atts, ok := c.blockAtt[r]
if !ok {
atts = make([]ethpb.Att, 0, 1)
}
@@ -33,7 +31,7 @@ func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
}
}
c.blockAtt[id] = append(atts, att.Copy())
c.blockAtt[r] = append(atts, att.Copy())
return nil
}
@@ -56,15 +54,14 @@ func (c *AttCaches) DeleteBlockAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
id, err := attestation.NewId(att, attestation.Data)
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
c.blockAttLock.Lock()
defer c.blockAttLock.Unlock()
delete(c.blockAtt, id)
delete(c.blockAtt, r)
return nil
}

View File

@@ -3,7 +3,6 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
// SaveForkchoiceAttestation saves a forkchoice attestation in cache.
@@ -11,15 +10,14 @@ func (c *AttCaches) SaveForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
id, err := attestation.NewId(att, attestation.Full)
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
c.forkchoiceAttLock.Lock()
defer c.forkchoiceAttLock.Unlock()
c.forkchoiceAtt[id] = att
c.forkchoiceAtt[r] = att.Copy()
return nil
}
@@ -53,15 +51,14 @@ func (c *AttCaches) DeleteForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
id, err := attestation.NewId(att, attestation.Full)
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
c.forkchoiceAttLock.Lock()
defer c.forkchoiceAttLock.Unlock()
delete(c.forkchoiceAtt, id)
delete(c.forkchoiceAtt, r)
return nil
}

View File

@@ -9,22 +9,24 @@ import (
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
var hashFn = hash.Proto
// AttCaches defines the caches used to satisfy attestation pool interface.
// These caches are KV store for various attestations
// such are unaggregated, aggregated or attestations within a block.
type AttCaches struct {
aggregatedAttLock sync.RWMutex
aggregatedAtt map[attestation.Id][]ethpb.Att
aggregatedAtt map[[32]byte][]ethpb.Att
unAggregateAttLock sync.RWMutex
unAggregatedAtt map[attestation.Id]ethpb.Att
unAggregatedAtt map[[32]byte]ethpb.Att
forkchoiceAttLock sync.RWMutex
forkchoiceAtt map[attestation.Id]ethpb.Att
forkchoiceAtt map[[32]byte]ethpb.Att
blockAttLock sync.RWMutex
blockAtt map[attestation.Id][]ethpb.Att
blockAtt map[[32]byte][]ethpb.Att
seenAtt *cache.Cache
}
@@ -34,10 +36,10 @@ func NewAttCaches() *AttCaches {
secsInEpoch := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
c := cache.New(secsInEpoch*time.Second, 2*secsInEpoch*time.Second)
pool := &AttCaches{
unAggregatedAtt: make(map[attestation.Id]ethpb.Att),
aggregatedAtt: make(map[attestation.Id][]ethpb.Att),
forkchoiceAtt: make(map[attestation.Id]ethpb.Att),
blockAtt: make(map[attestation.Id][]ethpb.Att),
unAggregatedAtt: make(map[[32]byte]ethpb.Att),
aggregatedAtt: make(map[[32]byte][]ethpb.Att),
forkchoiceAtt: make(map[[32]byte]ethpb.Att),
blockAtt: make(map[[32]byte][]ethpb.Att),
seenAtt: c,
}

View File

@@ -1,20 +1,21 @@
package kv
import (
"fmt"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
id, err := attestation.NewId(att, attestation.Data)
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return err
}
v, ok := c.seenAtt.Get(id.String())
v, ok := c.seenAtt.Get(string(r[:]))
if ok {
seenBits, ok := v.([]bitfield.Bitlist)
if !ok {
@@ -23,7 +24,7 @@ func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
alreadyExists := false
for _, bit := range seenBits {
if c, err := bit.Contains(att.GetAggregationBits()); err != nil {
return err
return fmt.Errorf("failed to check seen bits on attestation when inserting bit: %w", err)
} else if c {
alreadyExists = true
break
@@ -32,21 +33,21 @@ func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
if !alreadyExists {
seenBits = append(seenBits, att.GetAggregationBits())
}
c.seenAtt.Set(id.String(), seenBits, cache.DefaultExpiration /* one epoch */)
c.seenAtt.Set(string(r[:]), seenBits, cache.DefaultExpiration /* one epoch */)
return nil
}
c.seenAtt.Set(id.String(), []bitfield.Bitlist{att.GetAggregationBits()}, cache.DefaultExpiration /* one epoch */)
c.seenAtt.Set(string(r[:]), []bitfield.Bitlist{att.GetAggregationBits()}, cache.DefaultExpiration /* one epoch */)
return nil
}
func (c *AttCaches) hasSeenBit(att ethpb.Att) (bool, error) {
id, err := attestation.NewId(att, attestation.Data)
r, err := hashFn(att.GetData())
if err != nil {
return false, errors.Wrap(err, "could not create attestation ID")
return false, err
}
v, ok := c.seenAtt.Get(id.String())
v, ok := c.seenAtt.Get(string(r[:]))
if ok {
seenBits, ok := v.([]bitfield.Bitlist)
if !ok {
@@ -54,7 +55,7 @@ func (c *AttCaches) hasSeenBit(att ethpb.Att) (bool, error) {
}
for _, bit := range seenBits {
if c, err := bit.Contains(att.GetAggregationBits()); err != nil {
return false, err
return false, fmt.Errorf("failed to check seen bits on attestation when reading bit: %w", err)
} else if c {
return true, nil
}
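
The seen-bit logic above deduplicates by checking whether an incoming attestation's aggregation bits are already covered by a stored bitlist. A small sketch of that containment check using go-bitfield:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	// The high bit of the last byte is the bitlist length marker, so both
	// lists below have length 7; bits 0 and 1 are set in `seen`.
	seen := bitfield.Bitlist{0b10000011}
	incoming := bitfield.Bitlist{0b10000001} // only bit 0 set

	// Contains reports whether every bit set in `incoming` is also set in `seen`.
	covered, err := seen.Contains(incoming)
	if err != nil {
		panic(err) // returned when the bitlist lengths differ
	}
	fmt.Println(covered) // true: the incoming attestation adds no new bits
}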

View File

@@ -5,7 +5,6 @@ import (
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
@@ -40,18 +39,18 @@ func TestAttCaches_hasSeenBit(t *testing.T) {
func TestAttCaches_insertSeenBitDuplicates(t *testing.T) {
c := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
id, err := attestation.NewId(att1, attestation.Data)
r, err := hashFn(att1.Data)
require.NoError(t, err)
require.NoError(t, c.insertSeenBit(att1))
require.Equal(t, 1, c.seenAtt.ItemCount())
_, expirationTime1, ok := c.seenAtt.GetWithExpiration(id.String())
_, expirationTime1, ok := c.seenAtt.GetWithExpiration(string(r[:]))
require.Equal(t, true, ok)
// Make sure that duplicates are not inserted, but expiration time gets updated.
require.NoError(t, c.insertSeenBit(att1))
require.Equal(t, 1, c.seenAtt.ItemCount())
_, expirationTime2, ok := c.seenAtt.GetWithExpiration(id.String())
_, expirationTime2, ok := c.seenAtt.GetWithExpiration(string(r[:]))
require.Equal(t, true, ok)
require.Equal(t, true, expirationTime2.After(expirationTime1), "Expiration time is not updated")
}

View File

@@ -7,8 +7,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"go.opencensus.io/trace"
)
@@ -29,14 +27,13 @@ func (c *AttCaches) SaveUnaggregatedAttestation(att ethpb.Att) error {
return nil
}
id, err := attestation.NewId(att, attestation.Full)
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
c.unAggregateAttLock.Lock()
defer c.unAggregateAttLock.Unlock()
c.unAggregatedAtt[id] = att
c.unAggregatedAtt[r] = att.Copy()
return nil
}
@@ -72,56 +69,19 @@ func (c *AttCaches) UnaggregatedAttestations() ([]ethpb.Att, error) {
// UnaggregatedAttestationsBySlotIndex returns the unaggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) UnaggregatedAttestationsBySlotIndex(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.Attestation {
func (c *AttCaches) UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.UnaggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]*ethpb.Attestation, 0)
atts := make([]ethpb.Att, 0)
c.unAggregateAttLock.RLock()
defer c.unAggregateAttLock.RUnlock()
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if a.Version() == version.Phase0 && slot == a.GetData().Slot && committeeIndex == a.GetData().CommitteeIndex {
att, ok := a.(*ethpb.Attestation)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
return atts
}
// UnaggregatedAttestationsBySlotIndexElectra returns the unaggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.AttestationElectra {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.UnaggregatedAttestationsBySlotIndexElectra")
defer span.End()
atts := make([]*ethpb.AttestationElectra, 0)
c.unAggregateAttLock.RLock()
defer c.unAggregateAttLock.RUnlock()
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if a.Version() == version.Electra && slot == a.GetData().Slot && a.CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
if slot == a.GetData().Slot && committeeIndex == a.GetData().CommitteeIndex {
atts = append(atts, a)
}
}
@@ -141,14 +101,14 @@ func (c *AttCaches) DeleteUnaggregatedAttestation(att ethpb.Att) error {
return err
}
id, err := attestation.NewId(att, attestation.Full)
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return errors.Wrap(err, "could not tree hash attestation")
}
c.unAggregateAttLock.Lock()
defer c.unAggregateAttLock.Unlock()
delete(c.unAggregatedAtt, id)
delete(c.unAggregatedAtt, r)
return nil
}
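
With the Electra-specific accessor removed, one loop over the interface-typed map serves both attestation versions. A stripped-down sketch of the slot-and-committee filter, assuming a simplified attestation shape:

package main

import "fmt"

type attData struct{ Slot, CommitteeIndex uint64 }

// bySlotIndex mirrors UnaggregatedAttestationsBySlotIndex: scan the keyed map
// and keep entries matching the requested slot and committee.
func bySlotIndex(pool map[[32]byte]attData, slot, committee uint64) []attData {
	out := make([]attData, 0)
	for _, a := range pool {
		if a.Slot == slot && a.CommitteeIndex == committee {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	pool := map[[32]byte]attData{
		{1}: {Slot: 1, CommitteeIndex: 1},
		{2}: {Slot: 1, CommitteeIndex: 2},
		{3}: {Slot: 2, CommitteeIndex: 1},
	}
	fmt.Println(len(bySlotIndex(pool, 1, 1))) // 1
}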

View File

@@ -7,11 +7,10 @@ import (
"testing"
c "github.com/patrickmn/go-cache"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -40,7 +39,7 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
BeaconBlockRoot: []byte{0b0},
},
},
wantErrString: "could not create attestation ID",
wantErrString: fssz.ErrBytesLength.Error(),
},
{
name: "normal save",
@@ -58,13 +57,13 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
count: 0,
},
}
id, err := attestation.NewId(util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100}}), attestation.Data)
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{Slot: 100}))
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewAttCaches()
cache.seenAtt.Set(id.String(), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
cache.seenAtt.Set(string(r[:]), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
assert.Equal(t, 0, len(cache.unAggregatedAtt), "Invalid start pool, atts: %d", len(cache.unAggregatedAtt))
if tt.att != nil && tt.att.GetSignature() == nil {
@@ -247,35 +246,9 @@ func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndex(t *testing.T) {
}
ctx := context.Background()
returned := cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att1}, returned)
assert.DeepEqual(t, []ethpb.Att{att1}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.Attestation{att2}, returned)
assert.DeepEqual(t, []ethpb.Att{att2}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att3}, returned)
}
func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndexElectra(t *testing.T) {
cache := NewAttCaches()
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b101}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(2, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b110}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att3 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110}, CommitteeBits: committeeBits})
atts := []*ethpb.AttestationElectra{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveUnaggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att1}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att2}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att3}, returned)
assert.DeepEqual(t, []ethpb.Att{att3}, returned)
}

View File

@@ -18,8 +18,7 @@ type Pool interface {
SaveAggregatedAttestation(att ethpb.Att) error
SaveAggregatedAttestations(atts []ethpb.Att) error
AggregatedAttestations() []ethpb.Att
AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation
AggregatedAttestationsBySlotIndexElectra(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.AttestationElectra
AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att
DeleteAggregatedAttestation(att ethpb.Att) error
HasAggregatedAttestation(att ethpb.Att) (bool, error)
AggregatedAttestationCount() int
@@ -27,8 +26,7 @@ type Pool interface {
SaveUnaggregatedAttestation(att ethpb.Att) error
SaveUnaggregatedAttestations(atts []ethpb.Att) error
UnaggregatedAttestations() ([]ethpb.Att, error)
UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation
UnaggregatedAttestationsBySlotIndexElectra(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.AttestationElectra
UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att
DeleteUnaggregatedAttestation(att ethpb.Att) error
DeleteSeenUnaggregatedAttestations() (int, error)
UnaggregatedAttestationCount() int

View File

@@ -3,14 +3,14 @@ package attestations
import (
"bytes"
"context"
"errors"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
attaggregation "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
@@ -67,7 +67,7 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
atts := append(s.cfg.Pool.AggregatedAttestations(), s.cfg.Pool.BlockAttestations()...)
atts = append(atts, s.cfg.Pool.ForkchoiceAttestations()...)
attsByVerAndDataRoot := make(map[attestation.Id][]ethpb.Att, len(atts))
attsByDataRoot := make(map[[32]byte][]ethpb.Att, len(atts))
// Consolidate attestations by aggregating them by similar data root.
for _, att := range atts {
@@ -79,14 +79,14 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
continue
}
id, err := attestation.NewId(att, attestation.Data)
attDataRoot, err := att.GetData().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
return err
}
attsByVerAndDataRoot[id] = append(attsByVerAndDataRoot[id], att)
attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
}
for _, atts := range attsByVerAndDataRoot {
for _, atts := range attsByDataRoot {
if err := s.aggregateAndSaveForkChoiceAtts(atts); err != nil {
return err
}
@@ -119,12 +119,12 @@ func (s *Service) aggregateAndSaveForkChoiceAtts(atts []ethpb.Att) error {
// This checks if the attestation has previously been aggregated for fork choice
// return true if yes, false if no.
func (s *Service) seen(att ethpb.Att) (bool, error) {
id, err := attestation.NewId(att, attestation.Data)
attRoot, err := hash.Proto(att.GetData())
if err != nil {
return false, errors.Wrap(err, "could not create attestation ID")
return false, err
}
incomingBits := att.GetAggregationBits()
savedBits, ok := s.forkChoiceProcessedAtts.Get(id)
savedBits, ok := s.forkChoiceProcessedRoots.Get(attRoot)
if ok {
savedBitlist, ok := savedBits.(bitfield.Bitlist)
if !ok {
@@ -149,6 +149,6 @@ func (s *Service) seen(att ethpb.Att) (bool, error) {
}
}
s.forkChoiceProcessedAtts.Add(id, incomingBits)
s.forkChoiceProcessedRoots.Add(attRoot, incomingBits)
return false, nil
}
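
batchForkChoiceAtts above groups attestations by the hash tree root of their data before aggregating, so only attestations voting for identical data are merged. A compact sketch of the grouping step:

package main

import (
	"crypto/sha256"
	"fmt"
)

type att struct {
	dataRoot [32]byte
	bits     byte
}

func main() {
	root := func(s string) [32]byte { return sha256.Sum256([]byte(s)) }
	atts := []att{{root("a"), 0b01}, {root("a"), 0b10}, {root("b"), 0b01}}

	// Group attestations sharing a data root; each group is then aggregated together.
	byRoot := make(map[[32]byte][]att, len(atts))
	for _, a := range atts {
		byRoot[a.dataRoot] = append(byRoot[a.dataRoot], a)
	}
	fmt.Println(len(byRoot), len(byRoot[root("a")])) // 2 groups; root "a" holds 2 atts
}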

View File

@@ -13,16 +13,16 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
)
var forkChoiceProcessedAttsSize = 1 << 16
var forkChoiceProcessedRootsSize = 1 << 16
// Service of attestation pool operations.
type Service struct {
cfg *Config
ctx context.Context
cancel context.CancelFunc
err error
forkChoiceProcessedAtts *lru.Cache
genesisTime uint64
cfg *Config
ctx context.Context
cancel context.CancelFunc
err error
forkChoiceProcessedRoots *lru.Cache
genesisTime uint64
}
// Config options for the service.
@@ -35,7 +35,7 @@ type Config struct {
// NewService instantiates a new attestation pool service instance that will
// be registered into a running beacon node.
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
cache := lruwrpr.New(forkChoiceProcessedAttsSize)
cache := lruwrpr.New(forkChoiceProcessedRootsSize)
if cfg.pruneInterval == 0 {
// Prune expired attestations from the pool every slot interval.
@@ -44,10 +44,10 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
return &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
forkChoiceProcessedAtts: cache,
cfg: cfg,
ctx: ctx,
cancel: cancel,
forkChoiceProcessedRoots: cache,
}, nil
}

View File

@@ -7,6 +7,7 @@ go_library(
"broadcaster.go",
"config.go",
"connection_gater.go",
"custody.go",
"dial_relay_node.go",
"discovery.go",
"doc.go",
@@ -46,6 +47,7 @@ go_library(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
@@ -56,6 +58,7 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
@@ -74,6 +77,8 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
@@ -115,6 +120,7 @@ go_test(
"addr_factory_test.go",
"broadcaster_test.go",
"connection_gater_test.go",
"custody_test.go",
"dial_relay_node_test.go",
"discovery_test.go",
"fork_test.go",
@@ -136,9 +142,11 @@ go_test(
flaky = True,
tags = ["requires-network"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
@@ -151,6 +159,7 @@ go_test(
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",

View File

@@ -9,16 +9,18 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
)
// ErrMessageNotMapped occurs on a Broadcast attempt when a message has not been defined in the
@@ -96,7 +98,12 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
return nil
}
func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att, forkDigest [4]byte) {
func (s *Service) internalBroadcastAttestation(
ctx context.Context,
subnet uint64,
att ethpb.Att,
forkDigest [fieldparams.VersionLength]byte,
) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -152,7 +159,7 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [4]byte) {
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [fieldparams.VersionLength]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -228,7 +235,12 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
return nil
}
func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
func (s *Service) internalBroadcastBlob(
ctx context.Context,
subnet uint64,
blobSidecar *ethpb.BlobSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -243,7 +255,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
s.subnetLocker(wrappedSubIdx).RUnlock()
if !hasPeer {
blobSidecarCommitteeBroadcastAttempts.Inc()
blobSidecarBroadcastAttempts.Inc()
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
@@ -252,7 +264,7 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
return err
}
if ok {
blobSidecarCommitteeBroadcasts.Inc()
blobSidecarBroadcasts.Inc()
return nil
}
return errors.New("failed to find peers for subnet")
@@ -268,6 +280,99 @@ func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blob
}
}
// BroadcastDataColumn broadcasts a data column sidecar to the p2p network. The message is
// broadcast on the current fork to the input column subnet.
// TODO: Add tests
func (s *Service) BroadcastDataColumn(ctx context.Context, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastDataColumn")
defer span.End()
// Ensure the data column sidecar is not nil.
if dataColumnSidecar == nil {
return errors.Errorf("attempted to broadcast nil data column sidecar at subnet %d", columnSubnet)
}
// Retrieve the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
err := errors.Wrap(err, "current fork digest")
tracing.AnnotateError(span, err)
return err
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumn(ctx, columnSubnet, dataColumnSidecar, forkDigest)
return nil
}
func (s *Service) internalBroadcastDataColumn(
ctx context.Context,
columnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumn")
defer span.End()
// Increase the number of broadcast attempts.
dataColumnSidecarBroadcastAttempts.Inc()
// Clear parent context / deadline.
ctx = trace.NewContext(context.Background(), span)
// Define a one-slot length context timeout.
oneSlot := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
ctx, cancel := context.WithTimeout(ctx, oneSlot)
defer cancel()
// Build the topic corresponding to this column subnet and this fork digest.
topic := dataColumnSubnetToTopic(columnSubnet, forkDigest)
// Compute the wrapped subnet index.
wrappedSubIdx := columnSubnet + dataColumnSubnetVal
// Check if we have peers with this subnet.
hasPeer := func() bool {
s.subnetLocker(wrappedSubIdx).RLock()
defer s.subnetLocker(wrappedSubIdx).RUnlock()
return s.hasPeerWithSubnet(topic)
}()
// If no peers are found, attempt to find peers with this subnet.
if !hasPeer {
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
ok, err := s.FindPeersWithSubnet(ctx, topic, columnSubnet, 1 /*threshold*/)
if err != nil {
return errors.Wrap(err, "find peers for subnet")
}
if ok {
return nil
}
return errors.New("failed to find peers for subnet")
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
tracing.AnnotateError(span, err)
}
}
// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, dataColumnSidecar, topic); err != nil {
log.WithError(err).Error("Failed to broadcast data column sidecar")
tracing.AnnotateError(span, err)
}
// Increase the number of successful broadcasts.
dataColumnSidecarBroadcasts.Inc()
}
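
internalBroadcastDataColumn detaches from the caller's deadline and then bounds the whole find-peers-and-publish attempt to a single slot. A minimal sketch of that timeout pattern, assuming 12-second slots:

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// Detach from any parent deadline, then bound the whole attempt to one slot.
	secondsPerSlot := 12 * time.Second // assumed; the real value comes from the beacon config
	ctx, cancel := context.WithTimeout(context.Background(), secondsPerSlot)
	defer cancel()

	select {
	case <-time.After(10 * time.Millisecond): // stand-in for a successful publish
		fmt.Println("published")
	case <-ctx.Done():
		fmt.Println("gave up:", ctx.Err())
	}
}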
// method to broadcast messages to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
@@ -297,14 +402,18 @@ func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic
return nil
}
func attestationToTopic(subnet uint64, forkDigest [4]byte) string {
func attestationToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(AttestationSubnetTopicFormat, forkDigest, subnet)
}
func syncCommitteeToTopic(subnet uint64, forkDigest [4]byte) string {
func syncCommitteeToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
func blobSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
func dataColumnSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(DataColumnSubnetTopicFormat, forkDigest, subnet)
}
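
Each *ToTopic helper above is a Sprintf over a format string carrying the fork digest and subnet. A self-contained sketch with a hypothetical format string (the real DataColumnSubnetTopicFormat constant lives in the p2p package):

package main

import "fmt"

// Hypothetical format string mirroring the shape of the real
// DataColumnSubnetTopicFormat constant.
const dataColumnSubnetTopicFormat = "/eth2/%x/data_column_sidecar_%d"

func dataColumnSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
	return fmt.Sprintf(dataColumnSubnetTopicFormat, forkDigest, subnet)
}

func main() {
	fmt.Println(dataColumnSubnetToTopic(3, [4]byte{0xde, 0xad, 0xbe, 0xef}))
	// Output: /eth2/deadbeef/data_column_sidecar_3
}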

View File

@@ -13,11 +13,16 @@ import (
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/prysmaticlabs/go-bitfield"
"google.golang.org/protobuf/proto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
p2ptest "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -25,7 +30,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"google.golang.org/protobuf/proto"
)
func TestService_Broadcast(t *testing.T) {
@@ -520,3 +524,70 @@ func TestService_BroadcastBlob(t *testing.T) {
require.NoError(t, p.BroadcastBlob(ctx, subnet, blobSidecar))
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}
func TestService_BroadcastDataColumn(t *testing.T) {
require.NoError(t, kzg.Start())
p1 := p2ptest.NewTestP2P(t)
p2 := p2ptest.NewTestP2P(t)
p1.Connect(p2)
require.NotEqual(t, 0, len(p1.BHost.Network().Peers()), "No peers")
p := &Service{
host: p1.BHost,
pubsub: p1.PubSub(),
joinedTopics: map[string]*pubsub.Topic{},
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockElectra())
require.NoError(t, err)
blobs := make([]kzg.Blob, fieldparams.MaxBlobsPerBlock)
sidecars, err := peerdas.DataColumnSidecars(b, blobs)
require.NoError(t, err)
sidecar := sidecars[0]
subnet := uint64(0)
topic := DataColumnSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(sidecar)] = topic
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic = fmt.Sprintf(topic, digest, subnet)
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
sub, err := p2.SubscribeToTopic(topic)
require.NoError(t, err)
time.Sleep(50 * time.Millisecond) // libp2p fails without this delay...
// Async listen for the pubsub, must be before the broadcast.
var wg sync.WaitGroup
wg.Add(1)
go func(tt *testing.T) {
defer wg.Done()
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()
msg, err := sub.Next(ctx)
require.NoError(t, err)
result := &ethpb.DataColumnSidecar{}
require.NoError(t, p.Encoding().DecodeGossip(msg.Data, result))
require.DeepEqual(t, result, sidecar)
}(t)
// Attempt to broadcast nil object should fail.
ctx := context.Background()
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastDataColumn(ctx, subnet, nil))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastDataColumn(ctx, subnet, sidecar))
require.Equal(t, false, util.WaitTimeout(&wg, 1*time.Second), "Failed to receive pubsub within 1s")
}

View File

@@ -336,7 +336,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
}),
}
var err error
// test with public filter
//test with public filter
cidr := "public"
ip := "212.67.10.122"
s.addrFilter, err = configureFilter(&Config{AllowListCIDR: cidr})
@@ -348,7 +348,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
t.Errorf("Expected multiaddress with ip %s to not be rejected since we allow public addresses", ip)
}
ip = "192.168.1.0" // this is private and should fail
ip = "192.168.1.0" //this is private and should fail
multiAddress, err = ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ip, 3000))
require.NoError(t, err)
valid = s.InterceptAddrDial("", multiAddress)
@@ -356,7 +356,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
t.Errorf("Expected multiaddress with ip %s to be rejected since we are only allowing public addresses", ip)
}
// test with public allow filter, with a public address added to the deny list
//test with public allow filter, with a public address added to the deny list
invalidPublicIp := "212.67.10.122"
validPublicIp := "91.65.69.69"
s.addrFilter, err = configureFilter(&Config{AllowListCIDR: "public", DenyListCIDR: []string{"212.67.89.112/16"}})
@@ -384,7 +384,7 @@ func TestService_InterceptAddrDial_Private(t *testing.T) {
}),
}
var err error
// test with private filter
//test with private filter
cidr := "private"
s.addrFilter, err = configureFilter(&Config{DenyListCIDR: []string{cidr}})
require.NoError(t, err)
@@ -413,7 +413,7 @@ func TestService_InterceptAddrDial_AllowPrivate(t *testing.T) {
}),
}
var err error
// test with private filter
//test with private filter
cidr := "private"
s.addrFilter, err = configureFilter(&Config{AllowListCIDR: cidr})
require.NoError(t, err)
@@ -442,7 +442,7 @@ func TestService_InterceptAddrDial_DenyPublic(t *testing.T) {
}),
}
var err error
// test with private filter
//test with private filter
cidr := "public"
s.addrFilter, err = configureFilter(&Config{DenyListCIDR: []string{cidr}})
require.NoError(t, err)
@@ -471,7 +471,7 @@ func TestService_InterceptAddrDial_AllowConflict(t *testing.T) {
}),
}
var err error
// test with private filter
//test with private filter
cidr := "public"
s.addrFilter, err = configureFilter(&Config{DenyListCIDR: []string{cidr, "192.168.0.0/16"}})
require.NoError(t, err)

beacon-chain/p2p/custody.go (new file, 103 lines)
View File

@@ -0,0 +1,103 @@
package p2p
import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
// GetValidCustodyPeers returns a list of peers that custody a superset of the local node's custody columns.
func (s *Service) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
// Get the total number of columns.
numberOfColumns := params.BeaconConfig().NumberOfColumns
localCustodySubnetCount := peerdas.CustodySubnetCount()
localCustodyColumns, err := peerdas.CustodyColumns(s.NodeID(), localCustodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns for local node")
}
localCustodyColumnsCount := uint64(len(localCustodyColumns))
// Find the valid peers.
validPeers := make([]peer.ID, 0, len(peers))
loop:
for _, pid := range peers {
// Get the custody subnets count of the remote peer.
remoteCustodySubnetCount := s.CustodyCountFromRemotePeer(pid)
// Get the remote node ID from the peer ID.
remoteNodeID, err := ConvertPeerIDToNodeID(pid)
if err != nil {
return nil, errors.Wrap(err, "convert peer ID to node ID")
}
// Get the custody columns of the remote peer.
remoteCustodyColumns, err := peerdas.CustodyColumns(remoteNodeID, remoteCustodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
remoteCustodyColumnsCount := uint64(len(remoteCustodyColumns))
// If the remote peer custodies fewer columns than the local node, skip it.
if remoteCustodyColumnsCount < localCustodyColumnsCount {
continue
}
// If the remote peer custodies all the possible columns, add it to the list.
if remoteCustodyColumnsCount == numberOfColumns {
copiedId := pid
validPeers = append(validPeers, copiedId)
continue
}
// Filter out invalid peers.
for c := range localCustodyColumns {
if !remoteCustodyColumns[c] {
continue loop
}
}
copiedId := pid
// Add valid peer to list
validPeers = append(validPeers, copiedId)
}
return validPeers, nil
}
// CustodyCountFromRemotePeer retrieves the custody count from a remote peer.
func (s *Service) CustodyCountFromRemotePeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
record, err := s.peers.ENR(pid)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Error("Failed to retrieve ENR for peer, falling back to the default value")
return custodyRequirement
}
// Retrieve the custody subnets count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Debug("Failed to retrieve custody count from ENR for peer, falling back to the default value")
return custodyRequirement
}
return custodyCount
}
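
The peer filter above accepts a peer only if its custody columns form a superset of the local node's, with a fast path when the peer custodies every column. A standalone sketch of the superset check on the map[uint64]bool sets returned by peerdas.CustodyColumns:

package main

import "fmt"

// supersetOf reports whether `remote` custodies every column in `local`,
// mirroring the filtering loop in GetValidCustodyPeers.
func supersetOf(remote, local map[uint64]bool) bool {
	for c := range local {
		if !remote[c] {
			return false
		}
	}
	return true
}

func main() {
	local := map[uint64]bool{1: true, 5: true}
	remote := map[uint64]bool{1: true, 5: true, 9: true}
	fmt.Println(supersetOf(remote, local))                   // true
	fmt.Println(supersetOf(map[uint64]bool{1: true}, local)) // false
}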

View File

@@ -0,0 +1,163 @@
package p2p
import (
"context"
"crypto/ecdsa"
"net"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/config/params"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
prysmNetwork "github.com/prysmaticlabs/prysm/v5/network"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func createPeer(t *testing.T, privateKeyOffset int, custodyCount uint64) (*enr.Record, peer.ID, *ecdsa.PrivateKey) {
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(privateKeyOffset + i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
peerID, err := peer.IDFromPrivateKey(unmarshalledPrivateKey)
require.NoError(t, err)
record := &enr.Record{}
record.Set(peerdas.Csc(custodyCount))
record.Set(enode.Secp256k1(privateKey.PublicKey))
return record, peerID, privateKey
}
func TestGetValidCustodyPeers(t *testing.T) {
genesisValidatorRoot := make([]byte, 32)
for i := 0; i < 32; i++ {
genesisValidatorRoot[i] = byte(i)
}
service := &Service{
cfg: &Config{},
genesisTime: time.Now(),
genesisValidatorsRoot: genesisValidatorRoot,
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
}
ipAddrString, err := prysmNetwork.ExternalIPv4()
require.NoError(t, err)
ipAddr := net.ParseIP(ipAddrString)
custodyRequirement := params.BeaconConfig().CustodyRequirement
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Peer 1 custodies exactly the same columns as we do.
// (We use the same key pair as ours for simplicity.)
peer1Record, peer1ID, localPrivateKey := createPeer(t, 1, custodyRequirement)
// Peer 2 custodies all the columns.
peer2Record, peer2ID, _ := createPeer(t, 2, dataColumnSidecarSubnetCount)
// Peer 3 custodies different columns than we do (but the same count).
// (We use the same public key as peer 2 for simplicity.)
peer3Record, peer3ID, _ := createPeer(t, 3, custodyRequirement)
// Peer 4 custodies fewer columns than we do.
peer4Record, peer4ID, _ := createPeer(t, 4, custodyRequirement-1)
listener, err := service.createListener(ipAddr, localPrivateKey)
require.NoError(t, err)
service.dv5Listener = listener
service.peers.Add(peer1Record, peer1ID, nil, network.DirOutbound)
service.peers.Add(peer2Record, peer2ID, nil, network.DirOutbound)
service.peers.Add(peer3Record, peer3ID, nil, network.DirOutbound)
service.peers.Add(peer4Record, peer4ID, nil, network.DirOutbound)
actual, err := service.GetValidCustodyPeers([]peer.ID{peer1ID, peer2ID, peer3ID, peer4ID})
require.NoError(t, err)
expected := []peer.ID{peer1ID, peer2ID}
require.DeepSSZEqual(t, expected, actual)
}
func TestCustodyCountFromRemotePeer(t *testing.T) {
const (
expected uint64 = 7
pid = "test-id"
)
csc := peerdas.Csc(expected)
// Define a nil record
var nilRecord *enr.Record = nil
// Define an empty record (record with no `csc` entry)
emptyRecord := &enr.Record{}
// Define a nominal record
nominalRecord := &enr.Record{}
nominalRecord.Set(csc)
testCases := []struct {
name string
record *enr.Record
expected uint64
}{
{
name: "nominal",
record: nominalRecord,
expected: expected,
},
{
name: "nil",
record: nilRecord,
expected: params.BeaconConfig().CustodyRequirement,
},
{
name: "empty",
record: emptyRecord,
expected: params.BeaconConfig().CustodyRequirement,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create peers status.
peers := peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
// Add a new peer with the record.
peers.Add(tc.record, pid, nil, network.DirOutbound)
// Create a new service.
service := &Service{
peers: peers,
}
// Retrieve the custody count from the remote peer.
actual := service.CustodyCountFromRemotePeer(pid)
// Verify the result.
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -15,7 +15,9 @@ import (
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -42,16 +44,18 @@ const (
udp6
)
const quickProtocolEnrKey = "quic"
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return "quic" }
func (quicProtocol) ENRKey() string { return quickProtocolEnrKey }
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
func (s *Service) RefreshENR() {
// return early if discv5 isn't running
// RefreshPersistentSubnets checks that we are tracking our local persistent subnets for a variety of gossip topics.
// This routine checks for our attestation, sync committee and data column subnets and updates them if they have
// been rotated.
func (s *Service) RefreshPersistentSubnets() {
// return early if discv5 isn't running
if s.dv5Listener == nil || !s.isInitialized() {
return
}
@@ -60,6 +64,10 @@ func (s *Service) RefreshENR() {
log.WithError(err).Error("Could not initialize persistent subnets")
return
}
if err := initializePersistentColumnSubnets(s.dv5Listener.LocalNode().ID()); err != nil {
log.WithError(err).Error("Could not initialize persistent column subnets")
return
}
bitV := bitfield.NewBitvector64()
committees := cache.SubnetIDs.GetAllSubnets()
@@ -258,6 +266,10 @@ func (s *Service) createLocalNode(
localNode.Set(quicEntry)
}
if params.PeerDASEnabled() {
localNode.Set(peerdas.Csc(peerdas.CustodySubnetCount()))
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
@@ -346,6 +358,8 @@ func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
// Constantly update enr for known peers
s.peers.UpdateENR(node.Record(), peerData.ID)
return false
}
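
The csc entry added to the local node above is a plain ENR key/value pair: any type with an ENRKey method can be stored and loaded. A minimal sketch with go-ethereum's enr package, using a custody-subnet-count-style uint64 entry:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enr"
)

// Csc mirrors the custody-subnet-count entry: a uint64 stored under the "csc" key.
type Csc uint64

func (Csc) ENRKey() string { return "csc" }

func main() {
	record := &enr.Record{}
	record.Set(Csc(4))

	var got uint64
	if err := record.Load(enr.WithEntry("csc", &got)); err != nil {
		panic(err)
	}
	fmt.Println(got) // 4
}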

View File

@@ -20,8 +20,11 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/go-bitfield"
logTest "github.com/sirupsen/logrus/hooks/test"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
@@ -37,7 +40,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
var discoveryWaitTime = 1 * time.Second
@@ -131,6 +133,10 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
}
func TestCreateLocalNode(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.Eip7594ForkEpoch = 1
params.OverrideBeaconConfig(cfg)
testCases := []struct {
name string
cfg *Config
@@ -227,6 +233,11 @@ func TestCreateLocalNode(t *testing.T) {
syncSubnets := new([]byte)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, syncSubnets)))
require.DeepSSZEqual(t, []byte{0}, *syncSubnets)
// Check custody_subnet_count config.
custodySubnetCount := new(uint64)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(peerdas.CustodySubnetCountEnrKey, custodySubnetCount)))
require.Equal(t, uint64(1), *custodySubnetCount)
})
}
}
@@ -601,7 +612,7 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcBuilder(t)
s.RefreshENR()
s.RefreshPersistentSubnets()
tt.postValidation(t, s)
s.dv5Listener.Close()
cache.SubnetIDs.EmptyAllCaches()

View File

@@ -121,7 +121,7 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage):
case strings.Contains(topic, GossipBlobSidecarMessage), strings.Contains(topic, GossipDataColumnSidecarMessage):
// TODO(Deneb): Using the default block scoring. But this should be updated.
return defaultBlockTopicParams(), nil
default:

View File

@@ -22,13 +22,13 @@ var gossipTopicMappings = map[string]proto.Message{
SyncCommitteeSubnetTopicFormat: &ethpb.SyncCommitteeMessage{},
BlsToExecutionChangeSubnetTopicFormat: &ethpb.SignedBLSToExecutionChange{},
BlobSubnetTopicFormat: &ethpb.BlobSidecar{},
DataColumnSubnetTopicFormat: &ethpb.DataColumnSidecar{},
}
// GossipTopicMappings is a function to return the assigned data type
// versioned by epoch.
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
switch topic {
case BlockSubnetTopicFormat:
if topic == BlockSubnetTopicFormat {
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedBeaconBlockElectra{}
}
@@ -44,25 +44,8 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if epoch >= params.BeaconConfig().AltairForkEpoch {
return &ethpb.SignedBeaconBlockAltair{}
}
return gossipTopicMappings[topic]
case AttestationSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.AttestationElectra{}
}
return gossipTopicMappings[topic]
case AttesterSlashingSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.AttesterSlashingElectra{}
}
return gossipTopicMappings[topic]
case AggregateAndProofSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedAggregateAttestationAndProofElectra{}
}
return gossipTopicMappings[topic]
default:
return gossipTopicMappings[topic]
}
return gossipTopicMappings[topic]
}
// AllTopics returns all topics stored in our
@@ -93,7 +76,4 @@ func init() {
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockDeneb{})] = BlockSubnetTopicFormat
// Specially handle Electra objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttestationElectra{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat
}
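
After the simplification, only the block topic is versioned by epoch; every other topic resolves through the static map (the Electra attestation types are handled elsewhere on this branch). A reduced sketch of the epoch-gated type selection, with hypothetical fork epochs and only two of the forks shown:

package main

import "fmt"

type epoch uint64

// Hypothetical fork epochs; the real values come from params.BeaconConfig().
const (
	altairEpoch  epoch = 100
	electraEpoch epoch = 500
)

// blockTypeFor mirrors the remaining epoch-gated branch in GossipTopicMappings:
// only the block topic's message type varies with the epoch.
func blockTypeFor(e epoch) string {
	switch {
	case e >= electraEpoch:
		return "SignedBeaconBlockElectra"
	case e >= altairEpoch:
		return "SignedBeaconBlockAltair"
	default:
		return "SignedBeaconBlock"
	}
}

func main() {
	fmt.Println(blockTypeFor(0), blockTypeFor(250), blockTypeFor(600))
	// SignedBeaconBlock SignedBeaconBlockAltair SignedBeaconBlockElectra
}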

View File

@@ -22,20 +22,20 @@ func TestMappingHasNoDuplicates(t *testing.T) {
}
}
func TestGossipTopicMappings_CorrectType(t *testing.T) {
func TestGossipTopicMappings_CorrectBlockType(t *testing.T) {
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig().Copy()
altairForkEpoch := primitives.Epoch(100)
bellatrixForkEpoch := primitives.Epoch(200)
capellaForkEpoch := primitives.Epoch(300)
denebForkEpoch := primitives.Epoch(400)
electraForkEpoch := primitives.Epoch(500)
BellatrixForkEpoch := primitives.Epoch(200)
CapellaForkEpoch := primitives.Epoch(300)
DenebForkEpoch := primitives.Epoch(400)
ElectraForkEpoch := primitives.Epoch(500)
bCfg.AltairForkEpoch = altairForkEpoch
bCfg.BellatrixForkEpoch = bellatrixForkEpoch
bCfg.CapellaForkEpoch = capellaForkEpoch
bCfg.DenebForkEpoch = denebForkEpoch
bCfg.ElectraForkEpoch = electraForkEpoch
bCfg.BellatrixForkEpoch = BellatrixForkEpoch
bCfg.CapellaForkEpoch = CapellaForkEpoch
bCfg.DenebForkEpoch = DenebForkEpoch
bCfg.ElectraForkEpoch = ElectraForkEpoch
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = primitives.Epoch(100)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.BellatrixForkVersion)] = primitives.Epoch(200)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.CapellaForkVersion)] = primitives.Epoch(300)
@@ -47,83 +47,29 @@ func TestGossipTopicMappings_CorrectType(t *testing.T) {
pMessage := GossipTopicMappings(BlockSubnetTopicFormat, 0)
_, ok := pMessage.(*ethpb.SignedBeaconBlock)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Altair Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockAltair)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Bellatrix Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, bellatrixForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, BellatrixForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockBellatrix)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Capella Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, capellaForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, CapellaForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockCapella)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Deneb Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, denebForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, DenebForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockDeneb)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Electra Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, electraForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, ElectraForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttestationElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashingElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProofElectra)
assert.Equal(t, true, ok)
}

View File

@@ -3,6 +3,7 @@ package p2p
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/connmgr"
@@ -28,6 +29,12 @@ type P2P interface {
ConnectionHandler
PeersProvider
MetadataProvider
CustodyHandler
}
type Acceser interface {
Broadcaster
PeerManager
}
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
@@ -36,6 +43,7 @@ type Broadcaster interface {
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastDataColumn(ctx context.Context, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.
@@ -81,8 +89,9 @@ type PeerManager interface {
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshENR()
RefreshPersistentSubnets()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}
@@ -102,3 +111,8 @@ type MetadataProvider interface {
Metadata() metadata.Metadata
MetadataSeq() uint64
}
type CustodyHandler interface {
CustodyCountFromRemotePeer(peer.ID) uint64
GetValidCustodyPeers([]peer.ID) ([]peer.ID, error)
}

View File

@@ -60,17 +60,25 @@ var (
"the subnet. The beacon node increments this counter when the broadcast is blocked " +
"until a subnet peer can be found.",
})
blobSidecarCommitteeBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
blobSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_broadcasts",
Help: "The number of blob sidecar committee messages that were broadcast with no peer on.",
Help: "The number of blob sidecar messages that were broadcast with no peer on.",
})
syncCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_sync_committee_subnet_attempted_broadcasts",
Help: "The number of sync committee that were attempted to be broadcast.",
})
blobSidecarCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
blobSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_attempted_broadcasts",
Help: "The number of blob sidecar committee messages that were attempted to be broadcast.",
Help: "The number of blob sidecar messages that were attempted to be broadcast.",
})
dataColumnSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_broadcasts",
Help: "The number of data column sidecar messages that were broadcasted.",
})
dataColumnSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_attempted_broadcasts",
Help: "The number of data column sidecar messages that were attempted to be broadcast.",
})
// Gossip Tracer Metrics

View File

@@ -159,6 +159,14 @@ func (p *Status) Add(record *enr.Record, pid peer.ID, address ma.Multiaddr, dire
p.addIpToTracker(pid)
}
func (p *Status) UpdateENR(record *enr.Record, pid peer.ID) {
p.store.Lock()
defer p.store.Unlock()
if peerData, ok := p.store.PeerData(pid); ok {
peerData.Enr = record
}
}
// Address returns the multiaddress of the given remote peer.
// This will error if the peer does not exist.
func (p *Status) Address(pid peer.ID) (ma.Multiaddr, error) {

View File

@@ -165,14 +165,14 @@ func (s *Service) pubsubOptions() []pubsub.Option {
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %v", err)
return nil, fmt.Errorf("cannot convert peers raw ENRs into multiaddresses: %v", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
return nil, fmt.Errorf("converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %v", err)
return nil, fmt.Errorf("cannot convert peers multiaddresses into AddrInfos: %v", err)
}
return directAddrInfos, nil
}

View File

@@ -90,7 +90,7 @@ func TestService_CanSubscribe(t *testing.T) {
formatting := []interface{}{digest}
// Special case for attestation subnets which have a second formatting placeholder.
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat || topic == BlobSubnetTopicFormat {
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat || topic == BlobSubnetTopicFormat || topic == DataColumnSubnetTopicFormat {
formatting = append(formatting, 0 /* some subnet ID */)
}

View File

@@ -43,6 +43,12 @@ const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
// BlobSidecarsByRootName is the name for the BlobSidecarsByRoot v1 message topic.
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
// DataColumnSidecarsByRootName is the name for the DataColumnSidecarsByRoot v1 message topic.
const DataColumnSidecarsByRootName = "/data_column_sidecars_by_root"
// DataColumnSidecarsByRangeName is the name for the DataColumnSidecarsByRange v1 message topic.
const DataColumnSidecarsByRangeName = "/data_column_sidecars_by_range"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -65,6 +71,12 @@ const (
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root. New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root. New in PeerDAS.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1
RPCDataColumnSidecarsByRootTopicV1 = protocolPrefix + DataColumnSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot. New in PeerDAS.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1/
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
@@ -101,6 +113,10 @@ var RPCTopicMappings = map[string]interface{}{
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
// DataColumnSidecarsByRange v1 Message
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: new(p2ptypes.DataColumnSidecarsByRootReq),
}
// Maps all registered protocol prefixes.
@@ -119,6 +135,8 @@ var messageMapping = map[string]bool{
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
DataColumnSidecarsByRootName: true,
DataColumnSidecarsByRangeName: true,
}
// Maps all the RPC messages which are to be updated in Altair.
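A composition sketch (assuming the encoder appends the usual "/ssz_snappy" suffix when the protocol ID is negotiated):
// protocolPrefix is "/eth2/beacon_chain/req" in this package.
topic := RPCDataColumnSidecarsByRangeTopicV1 // "/eth2/beacon_chain/req/data_column_sidecars_by_range/1"
fullProtocolID := topic + "/ssz_snappy"      // the string peers actually negotiate on the wire
_ = fullProtocolID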

View File

@@ -226,7 +226,7 @@ func (s *Service) Start() {
}
// Initialize metadata according to the
// current epoch.
s.RefreshENR()
s.RefreshPersistentSubnets()
// Periodic functions.
async.RunEvery(s.ctx, params.BeaconConfig().TtfbTimeoutDuration(), func() {
@@ -234,7 +234,7 @@ func (s *Service) Start() {
})
async.RunEvery(s.ctx, 30*time.Minute, s.Peers().Prune)
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshENR)
async.RunEvery(s.ctx, refreshRate, s.RefreshPersistentSubnets)
async.RunEvery(s.ctx, 1*time.Minute, func() {
inboundQUICCount := len(s.peers.InboundConnectedWithProtocol(peers.QUIC))
inboundTCPCount := len(s.peers.InboundConnectedWithProtocol(peers.TCP))
@@ -358,6 +358,15 @@ func (s *Service) ENR() *enr.Record {
return s.dv5Listener.Self().Record()
}
// NodeID returns the local node's node ID
// for discovery.
func (s *Service) NodeID() enode.ID {
if s.dv5Listener == nil {
return [32]byte{}
}
return s.dv5Listener.Self().ID()
}
// DiscoveryAddresses represents our enr addresses as multiaddresses.
func (s *Service) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if s.dv5Listener == nil {

View File

@@ -11,17 +11,18 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/config/params"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -34,8 +35,8 @@ var syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
// The value used with the subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// sync subnets from attestation subnets. This is deliberately
// chosen as more than 64(attestation subnet count).
// sync subnets from others. This is deliberately
// chosen as more than 64 (attestation subnet count).
const syncLockerVal = 100
// The value used with the blob sidecar subnet, in order
@@ -45,6 +46,13 @@ const syncLockerVal = 100
// chosen to be more than the sync and attestation subnets combined.
const blobSubnetLockerVal = 110
// The value used with the data column sidecar subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// data column subnets from others. This is deliberately
// chosen to be more than the sync, attestation and blob subnets (6) combined.
const dataColumnSubnetVal = 150
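A hypothetical helper (not in the diff) that makes the key scheme explicit: each subnet family is shifted into a disjoint range so a single map of mutexes can serve them all.
func lockerKey(family string, subnet uint64) uint64 {
	switch family {
	case "attestation":
		return subnet // 0..63, used directly
	case "sync":
		return subnet + syncLockerVal // 100..
	case "blob":
		return subnet + blobSubnetLockerVal // 110..
	case "dataColumn":
		return subnet + dataColumnSubnetVal // 150..
	default:
		return subnet
	}
}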
// FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet. Then it tries to connect
// with those peers. This method will block until either:
@@ -207,6 +215,25 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
return nil
}
func initializePersistentColumnSubnets(id enode.ID) error {
_, ok, expTime := cache.ColumnSubnetIDs.GetColumnSubnets()
if ok && expTime.After(time.Now()) {
return nil
}
subsMap, err := peerdas.CustodyColumnSubnets(id, peerdas.CustodySubnetCount())
if err != nil {
return err
}
subs := make([]uint64, 0, len(subsMap))
for sub := range subsMap {
subs = append(subs, sub)
}
cache.ColumnSubnetIDs.AddColumnSubnets(subs)
return nil
}
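A consumer sketch (assumed wiring; digest stands in for the current fork digest): each cached subnet ID maps onto a data column gossip topic.
subnets, ok, _ := cache.ColumnSubnetIDs.GetColumnSubnets()
if ok {
	for _, subnet := range subnets {
		topic := fmt.Sprintf(DataColumnSubnetTopicFormat, digest, subnet)
		_ = topic // joined/subscribed elsewhere
	}
}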
// Spec pseudocode definition:
//
// def compute_subscribed_subnets(node_id: NodeID, epoch: Epoch) -> Sequence[SubnetID]:
@@ -356,10 +383,11 @@ func syncBitvector(record *enr.Record) (bitfield.Bitvector4, error) {
// The subnet locker is a map which keeps track of all
// mutexes stored per subnet. This locker is re-used
// between both the attestation and sync subnets. In
// order to differentiate between attestation and sync
// subnets. Sync subnets are stored by (subnet+syncLockerVal). This
// is to prevent conflicts while allowing both subnets
// between the attestation, sync, blob and data column subnets.
// Sync subnets are stored by (subnet+syncLockerVal).
// Blob subnets are stored by (subnet+blobSubnetLockerVal).
// Data column subnets are stored by (subnet+dataColumnSubnetVal).
// This is to prevent conflicts while allowing subnets
// to use a single locker.
func (s *Service) subnetLocker(i uint64) *sync.RWMutex {
s.subnetsLockLock.Lock()

View File

@@ -81,7 +81,7 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
bootNodeENR := bootListener.Self().String()
// Create 3 nodes, each subscribed to a different subnet.
// Each node is connected to the bootstrap node.
services := make([]*Service, 0, 3)
for i := 1; i <= 3; i++ {

View File

@@ -17,9 +17,11 @@ go_library(
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//config/params:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",

View File

@@ -3,6 +3,7 @@ package testing
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/control"
@@ -27,148 +28,166 @@ func NewFuzzTestP2P() *FakeP2P {
}
// Encoding -- fake.
func (_ *FakeP2P) Encoding() encoder.NetworkEncoding {
func (*FakeP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
// AddConnectionHandler -- fake.
func (_ *FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
}
// AddDisconnectionHandler -- fake.
func (_ *FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
}
// AddPingMethod -- fake.
func (_ *FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
}
// PeerID -- fake.
func (_ *FakeP2P) PeerID() peer.ID {
func (*FakeP2P) PeerID() peer.ID {
return "fake"
}
// ENR returns the enr of the local peer.
func (_ *FakeP2P) ENR() *enr.Record {
func (*FakeP2P) ENR() *enr.Record {
return new(enr.Record)
}
// NodeID returns the node id of the local peer.
func (*FakeP2P) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses -- fake
func (_ *FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// FindPeersWithSubnet mocks the p2p func.
func (_ *FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (_ *FakeP2P) RefreshENR() {}
func (*FakeP2P) RefreshPersistentSubnets() {}
// LeaveTopic -- fake.
func (_ *FakeP2P) LeaveTopic(_ string) error {
func (*FakeP2P) LeaveTopic(_ string) error {
return nil
}
// Metadata -- fake.
func (_ *FakeP2P) Metadata() metadata.Metadata {
func (*FakeP2P) Metadata() metadata.Metadata {
return nil
}
// Peers -- fake.
func (_ *FakeP2P) Peers() *peers.Status {
func (*FakeP2P) Peers() *peers.Status {
return nil
}
// PublishToTopic -- fake.
func (_ *FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil
}
// Send -- fake.
func (_ *FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
func (*FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
return nil, nil
}
// PubSub -- fake.
func (_ *FakeP2P) PubSub() *pubsub.PubSub {
func (*FakeP2P) PubSub() *pubsub.PubSub {
return nil
}
// MetadataSeq -- fake.
func (_ *FakeP2P) MetadataSeq() uint64 {
func (*FakeP2P) MetadataSeq() uint64 {
return 0
}
// SetStreamHandler -- fake.
func (_ *FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
func (*FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
}
// SubscribeToTopic -- fake.
func (_ *FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
func (*FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
return nil, nil
}
// JoinTopic -- fake.
func (_ *FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
func (*FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
return nil, nil
}
// Host -- fake.
func (_ *FakeP2P) Host() host.Host {
func (*FakeP2P) Host() host.Host {
return nil
}
// Disconnect -- fake.
func (_ *FakeP2P) Disconnect(_ peer.ID) error {
func (*FakeP2P) Disconnect(_ peer.ID) error {
return nil
}
// Broadcast -- fake.
func (_ *FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
// BroadcastAttestation -- fake.
func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
func (*FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
func (*FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil
}
// BroadcastBlob -- fake.
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
return nil
}
// BroadcastDataColumn -- fake.
func (*FakeP2P) BroadcastDataColumn(_ context.Context, _ uint64, _ *ethpb.DataColumnSidecar) error {
return nil
}
// InterceptPeerDial -- fake.
func (_ *FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial -- fake.
func (_ *FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (*FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept -- fake.
func (_ *FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured -- fake.
func (_ *FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded -- fake.
func (_ *FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}
func (*FakeP2P) CustodyCountFromRemotePeer(peer.ID) uint64 {
return 0
}
func (*FakeP2P) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -48,6 +48,12 @@ func (m *MockBroadcaster) BroadcastBlob(context.Context, uint64, *ethpb.BlobSide
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumn(context.Context, uint64, *ethpb.DataColumnSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}
// NumMessages returns the number of messages broadcasted.
func (m *MockBroadcaster) NumMessages() int {
m.msgLock.Lock()

View File

@@ -18,12 +18,12 @@ type MockHost struct {
}
// ID --
func (_ *MockHost) ID() peer.ID {
func (*MockHost) ID() peer.ID {
return ""
}
// Peerstore --
func (_ *MockHost) Peerstore() peerstore.Peerstore {
func (*MockHost) Peerstore() peerstore.Peerstore {
return nil
}
@@ -33,46 +33,46 @@ func (m *MockHost) Addrs() []ma.Multiaddr {
}
// Network --
func (_ *MockHost) Network() network.Network {
func (*MockHost) Network() network.Network {
return nil
}
// Mux --
func (_ *MockHost) Mux() protocol.Switch {
func (*MockHost) Mux() protocol.Switch {
return nil
}
// Connect --
func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (*MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
return nil
}
// SetStreamHandler --
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
func (*MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
func (*MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}
func (*MockHost) RemoveStreamHandler(_ protocol.ID) {}
// NewStream --
func (_ *MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
func (*MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
return nil, nil
}
// Close --
func (_ *MockHost) Close() error {
func (*MockHost) Close() error {
return nil
}
// ConnManager --
func (_ *MockHost) ConnManager() connmgr.ConnManager {
func (*MockHost) ConnManager() connmgr.ConnManager {
return nil
}
// EventBus --
func (_ *MockHost) EventBus() event.Bus {
func (*MockHost) EventBus() event.Bus {
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"errors"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
@@ -20,7 +21,7 @@ type MockPeerManager struct {
}
// Disconnect .
func (_ *MockPeerManager) Disconnect(peer.ID) error {
func (*MockPeerManager) Disconnect(peer.ID) error {
return nil
}
@@ -39,6 +40,11 @@ func (m MockPeerManager) ENR() *enr.Record {
return m.Enr
}
// NodeID .
func (m MockPeerManager) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses .
func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if m.FailDiscoveryAddr {
@@ -48,12 +54,12 @@ func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
}
// RefreshENR .
func (_ MockPeerManager) RefreshENR() {}
func (MockPeerManager) RefreshPersistentSubnets() {}
// FindPeersWithSubnet .
func (_ MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return true, nil
}
// AddPingMethod .
func (_ MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}
func (MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}

View File

@@ -10,6 +10,7 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
core "github.com/libp2p/go-libp2p/core"
@@ -22,9 +23,11 @@ import (
swarmt "github.com/libp2p/go-libp2p/p2p/net/swarm/testing"
"github.com/multiformats/go-multiaddr"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/config/params"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/metadata"
"github.com/sirupsen/logrus"
@@ -50,9 +53,10 @@ type TestP2P struct {
}
// NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T) *TestP2P {
func NewTestP2P(t *testing.T, opts ...swarmt.Option) *TestP2P {
opts = append(opts, swarmt.OptDisableQUIC)
ctx := context.Background()
h := bhost.NewBlankHost(swarmt.GenSwarm(t, swarmt.OptDisableQUIC))
h := bhost.NewBlankHost(swarmt.GenSwarm(t, opts...))
ps, err := pubsub.NewFloodSub(ctx, h,
pubsub.WithMessageSigning(false),
pubsub.WithStrictSignatureVerification(false),
@@ -183,6 +187,12 @@ func (p *TestP2P) BroadcastBlob(context.Context, uint64, *ethpb.BlobSidecar) err
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumn(context.Context, uint64, *ethpb.DataColumnSidecar) error {
p.BroadcastCalled.Store(true)
return nil
}
// SetStreamHandler for RPC.
func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler) {
p.BHost.SetStreamHandler(protocol.ID(topic), handler)
@@ -232,7 +242,7 @@ func (p *TestP2P) LeaveTopic(topic string) error {
}
// Encoding returns ssz encoding.
func (_ *TestP2P) Encoding() encoder.NetworkEncoding {
func (*TestP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
@@ -259,12 +269,17 @@ func (p *TestP2P) Host() host.Host {
}
// ENR returns the enr of the local peer.
func (_ *TestP2P) ENR() *enr.Record {
func (*TestP2P) ENR() *enr.Record {
return new(enr.Record)
}
// NodeID returns the node id of the local peer.
func (*TestP2P) NodeID() enode.ID {
return [32]byte{}
}
// DiscoveryAddresses --
func (_ *TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (*TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
@@ -346,7 +361,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
}
// Started always returns true.
func (_ *TestP2P) Started() bool {
func (*TestP2P) Started() bool {
return true
}
@@ -356,12 +371,12 @@ func (p *TestP2P) Peers() *peers.Status {
}
// FindPeersWithSubnet mocks the p2p func.
func (_ *TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (_ *TestP2P) RefreshENR() {}
func (*TestP2P) RefreshPersistentSubnets() {}
// ForkDigest mocks the p2p func.
func (p *TestP2P) ForkDigest() ([4]byte, error) {
@@ -379,31 +394,54 @@ func (p *TestP2P) MetadataSeq() uint64 {
}
// AddPingMethod mocks the p2p func.
func (_ *TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (*TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
// no-op
}
// InterceptPeerDial .
func (_ *TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (*TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial .
func (_ *TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (*TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept .
func (_ *TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (*TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured .
func (_ *TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (*TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded .
func (_ *TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}
func (s *TestP2P) CustodyCountFromRemotePeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
record, err := s.peers.ENR(pid)
if err != nil {
return custodyRequirement
}
// Retrieve the custody subnets count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
if err != nil {
return custodyRequirement
}
return custodyCount
}
func (*TestP2P) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -30,6 +30,9 @@ const (
GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
// GossipBlobSidecarMessage is the name for the blob sidecar message type.
GossipBlobSidecarMessage = "blob_sidecar"
// GossipDataColumnSidecarMessage is the name for the data column sidecar message type.
GossipDataColumnSidecarMessage = "data_column_sidecar"
// Topic Formats
//
// AttestationSubnetTopicFormat is the topic format for the attestation subnet.
@@ -52,4 +55,6 @@ const (
BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
// BlobSubnetTopicFormat is the topic format for the blob subnet.
BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
// DataColumnSubnetTopicFormat is the topic format for the data column subnet.
DataColumnSubnetTopicFormat = GossipProtocolAndDigest + GossipDataColumnSidecarMessage + "_%d"
)

View File

@@ -43,7 +43,6 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",

View File

@@ -26,13 +26,7 @@ var (
BlockMap map[[4]byte]func() (interfaces.ReadOnlySignedBeaconBlock, error)
// MetaDataMap maps the fork-version to the underlying data type for that
// particular fork period.
MetaDataMap map[[4]byte]func() (metadata.Metadata, error)
// AttestationMap maps the fork-version to the underlying data type for that
// particular fork period.
AttestationMap map[[4]byte]func() (ethpb.Att, error)
// AggregateAttestationMap maps the fork-version to the underlying data type for that
// particular fork period.
AggregateAttestationMap map[[4]byte]func() (ethpb.SignedAggregateAttAndProof, error)
MetaDataMap map[[4]byte]func() metadata.Metadata
)
// InitializeDataMaps initializes all the relevant object maps. This function is called to
@@ -73,68 +67,24 @@ func InitializeDataMaps() {
}
// Reset our metadata map.
MetaDataMap = map[[4]byte]func() (metadata.Metadata, error){
bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV0(&ethpb.MetaDataV0{}), nil
MetaDataMap = map[[4]byte]func() metadata.Metadata{
bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV0(&ethpb.MetaDataV0{})
},
bytesutil.ToBytes4(params.BeaconConfig().AltairForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
bytesutil.ToBytes4(params.BeaconConfig().AltairForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
},
}
// Reset our attestation map.
AttestationMap = map[[4]byte]func() (ethpb.Att, error){
bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion): func() (ethpb.Att, error) {
return &ethpb.Attestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().AltairForkVersion): func() (ethpb.Att, error) {
return &ethpb.Attestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion): func() (ethpb.Att, error) {
return &ethpb.Attestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion): func() (ethpb.Att, error) {
return &ethpb.Attestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (ethpb.Att, error) {
return &ethpb.Attestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (ethpb.Att, error) {
return &ethpb.AttestationElectra{}, nil
},
}
// Reset our aggregate attestation map.
AggregateAttestationMap = map[[4]byte]func() (ethpb.SignedAggregateAttAndProof, error){
bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProof{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().AltairForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProof{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProof{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProof{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProof{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
}
}
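With the refactor the lookup is infallible; a minimal usage sketch:
newMetadata, ok := MetaDataMap[bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion)]
if ok {
	md := newMetadata() // a wrapped &ethpb.MetaDataV1{}; no error value to thread through anymore
	_ = md
}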

View File

@@ -5,9 +5,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestInitializeDataMaps(t *testing.T) {
@@ -46,36 +44,8 @@ func TestInitializeDataMaps(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.action()
bFunc, ok := BlockMap[bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion)]
_, ok := BlockMap[bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion)]
assert.Equal(t, tt.exists, ok)
if tt.exists {
b, err := bFunc()
require.NoError(t, err)
generic, err := b.PbGenericBlock()
require.NoError(t, err)
assert.NotNil(t, generic.GetPhase0())
}
mdFunc, ok := MetaDataMap[bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion)]
if tt.exists {
md, err := mdFunc()
require.NoError(t, err)
assert.NotNil(t, md.MetadataObjV0())
}
assert.Equal(t, tt.exists, ok)
attFunc, ok := AttestationMap[bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion)]
if tt.exists {
att, err := attFunc()
require.NoError(t, err)
assert.Equal(t, version.Phase0, att.Version())
}
assert.Equal(t, tt.exists, ok)
aggFunc, ok := AggregateAttestationMap[bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion)]
assert.Equal(t, tt.exists, ok)
if tt.exists {
agg, err := aggFunc()
require.NoError(t, err)
assert.Equal(t, version.Phase0, agg.Version())
}
})
}
}

View File

@@ -9,10 +9,15 @@ var (
ErrInvalidSequenceNum = errors.New("invalid sequence number provided")
ErrGeneric = errors.New("internal service error")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob slot < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob epoch < minimum_request_epoch")
ErrDataColumnLTMinRequest = errors.New("data column epoch < minimum_request_epoch")
ErrMaxBlobReqExceeded = errors.New("requested more than MAX_REQUEST_BLOB_SIDECARS")
ErrMaxDataColumnReqExceeded = errors.New("requested more than MAX_REQUEST_DATA_COLUMN_SIDECARS")
ErrResourceUnavailable = errors.New("resource requested unavailable")
ErrInvalidColumnIndex = errors.New("invalid column index requested")
)

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/config/params"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -207,7 +208,94 @@ func (s BlobSidecarsByRootReq) Len() int {
return len(s)
}
func init() {
sizer := &eth.BlobIdentifier{}
blobIdSize = sizer.SizeSSZ()
// =====================================
// DataColumnSidecarsByRootReq section
// =====================================
var _ ssz.Marshaler = (*DataColumnSidecarsByRootReq)(nil)
var _ ssz.Unmarshaler = (*DataColumnSidecarsByRootReq)(nil)
var _ sort.Interface = (*DataColumnSidecarsByRootReq)(nil)
// DataColumnSidecarsByRootReq is used to specify a list of data column targets (root+index) in a DataColumnSidecarsByRoot RPC request.
type DataColumnSidecarsByRootReq []*eth.DataColumnIdentifier
// DataColumnIdentifier is a fixed size value, so we can compute its fixed size at start time (see init below)
var dataColumnIdSize int
// UnmarshalSSZ implements ssz.Unmarshaler. It unmarshals the provided bytes buffer into the DataColumnSidecarsByRootReq value.
func (d *DataColumnSidecarsByRootReq) UnmarshalSSZ(buf []byte) error {
bufLen := len(buf)
maxLen := int(params.BeaconConfig().MaxRequestDataColumnSidecars) * dataColumnIdSize
if bufLen > maxLen {
return errors.Errorf("expected buffer with length of up to %d but received length %d", maxLen, bufLen)
}
if bufLen%dataColumnIdSize != 0 {
return errors.Wrapf(ssz.ErrIncorrectByteSize, "size=%d", bufLen)
}
count := bufLen / dataColumnIdSize
*d = make([]*eth.DataColumnIdentifier, count)
for i := 0; i < count; i++ {
id := &eth.DataColumnIdentifier{}
err := id.UnmarshalSSZ(buf[i*dataColumnIdSize : (i+1)*dataColumnIdSize])
if err != nil {
return err
}
(*d)[i] = id
}
return nil
}
// MarshalSSZ implements ssz.Marshaler. It serializes the DataColumnSidecarsByRootReq value to a byte slice.
func (d *DataColumnSidecarsByRootReq) MarshalSSZ() ([]byte, error) {
buf := make([]byte, d.SizeSSZ())
for i, id := range *d {
bytes, err := id.MarshalSSZ()
if err != nil {
return nil, err
}
copy(buf[i*dataColumnIdSize:(i+1)*dataColumnIdSize], bytes)
}
return buf, nil
}
// MarshalSSZTo implements ssz.Marshaler. It appends the serialized DataColumnSidecarsByRootReq value to the provided byte slice.
func (d *DataColumnSidecarsByRootReq) MarshalSSZTo(dst []byte) ([]byte, error) {
mobj, err := d.MarshalSSZ()
if err != nil {
return nil, err
}
return append(dst, mobj...), nil
}
// SizeSSZ implements ssz.Marshaler. It returns the size of the serialized representation.
func (d *DataColumnSidecarsByRootReq) SizeSSZ() int {
return len(*d) * dataColumnIdSize
}
// Len implements sort.Interface. It returns the number of elements in the collection.
func (d DataColumnSidecarsByRootReq) Len() int {
return len(d)
}
// Less implements sort.Interface. It reports whether the element with index i must sort before the element with index j.
func (d DataColumnSidecarsByRootReq) Less(i int, j int) bool {
rootCmp := bytes.Compare(d[i].BlockRoot, d[j].BlockRoot)
if rootCmp != 0 {
return rootCmp < 0
}
return d[i].ColumnIndex < d[j].ColumnIndex
}
// Swap implements sort.Interface. It swaps the elements with indexes i and j.
func (d DataColumnSidecarsByRootReq) Swap(i int, j int) {
d[i], d[j] = d[j], d[i]
}
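Because Less orders by block root first and then by column index, a caller can canonicalize a request before serving it:
sort.Sort(req) // req is a DataColumnSidecarsByRootReq; identifiers end up grouped by root, columns ascending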
func init() {
blobSizer := &eth.BlobIdentifier{}
blobIdSize = blobSizer.SizeSSZ()
dataColumnSizer := &eth.DataColumnIdentifier{}
dataColumnIdSize = dataColumnSizer.SizeSSZ()
}

View File

@@ -5,6 +5,7 @@ import (
"testing"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -194,3 +195,136 @@ func hexDecodeOrDie(t *testing.T, str string) []byte {
require.NoError(t, err)
return decoded
}
// =====================================
// DataColumnSidecarsByRootReq section
// =====================================
func generateDataColumnIdentifiers(n int) []*eth.DataColumnIdentifier {
r := make([]*eth.DataColumnIdentifier, n)
for i := 0; i < n; i++ {
r[i] = &eth.DataColumnIdentifier{
BlockRoot: bytesutil.PadTo([]byte{byte(i)}, 32),
ColumnIndex: uint64(i),
}
}
return r
}
func TestDataColumnSidecarsByRootReq_MarshalUnmarshal(t *testing.T) {
cases := []struct {
name string
ids []*eth.DataColumnIdentifier
marshalErr error
unmarshalErr string
unmarshalMod func([]byte) []byte
}{
{
name: "empty list",
},
{
name: "single item list",
ids: generateDataColumnIdentifiers(1),
},
{
name: "10 item list",
ids: generateDataColumnIdentifiers(10),
},
{
name: "wonky unmarshal size",
ids: generateDataColumnIdentifiers(10),
unmarshalMod: func(in []byte) []byte {
in = append(in, byte(0))
return in
},
unmarshalErr: ssz.ErrIncorrectByteSize.Error(),
},
{
name: "size too big",
ids: generateDataColumnIdentifiers(1),
unmarshalMod: func(in []byte) []byte {
maxLen := params.BeaconConfig().MaxRequestDataColumnSidecars * uint64(dataColumnIdSize)
add := make([]byte, maxLen)
in = append(in, add...)
return in
},
unmarshalErr: "expected buffer with length of up to",
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
req := DataColumnSidecarsByRootReq(c.ids)
bytes, err := req.MarshalSSZ()
if c.marshalErr != nil {
require.ErrorIs(t, err, c.marshalErr)
return
}
require.NoError(t, err)
if c.unmarshalMod != nil {
bytes = c.unmarshalMod(bytes)
}
got := &DataColumnSidecarsByRootReq{}
err = got.UnmarshalSSZ(bytes)
if c.unmarshalErr != "" {
require.ErrorContains(t, c.unmarshalErr, err)
return
}
require.NoError(t, err)
for i, id := range *got {
require.DeepEqual(t, c.ids[i], id)
}
})
}
// Test MarshalSSZTo
req := DataColumnSidecarsByRootReq(generateDataColumnIdentifiers(10))
buf := make([]byte, 0)
buf, err := req.MarshalSSZTo(buf)
require.NoError(t, err)
require.Equal(t, len(buf), int(req.SizeSSZ()))
var unmarshalled DataColumnSidecarsByRootReq
err = unmarshalled.UnmarshalSSZ(buf)
require.NoError(t, err)
require.DeepEqual(t, req, unmarshalled)
}
func TestDataColumnSidecarsByRootReq_Sort(t *testing.T) {
ids := []*eth.DataColumnIdentifier{
{
BlockRoot: bytesutil.PadTo([]byte{3}, 32),
ColumnIndex: 0,
},
{
BlockRoot: bytesutil.PadTo([]byte{2}, 32),
ColumnIndex: 2,
},
{
BlockRoot: bytesutil.PadTo([]byte{2}, 32),
ColumnIndex: 1,
},
{
BlockRoot: bytesutil.PadTo([]byte{1}, 32),
ColumnIndex: 2,
},
{
BlockRoot: bytesutil.PadTo([]byte{0}, 32),
ColumnIndex: 3,
},
}
req := DataColumnSidecarsByRootReq(ids)
require.Equal(t, true, req.Less(4, 3))
require.Equal(t, true, req.Less(3, 2))
require.Equal(t, true, req.Less(2, 1))
require.Equal(t, true, req.Less(1, 0))
require.Equal(t, 5, req.Len())
ids = []*eth.DataColumnIdentifier{
{
BlockRoot: bytesutil.PadTo([]byte{0}, 32),
ColumnIndex: 3,
},
}
req = DataColumnSidecarsByRootReq(ids)
require.Equal(t, 1, req.Len())
}

View File

@@ -12,10 +12,15 @@ import (
"path"
"time"
"github.com/btcsuite/btcd/btcec/v2"
gCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/io/file"
@@ -62,6 +67,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath)
}
@@ -71,8 +77,8 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
// If the StaticPeerID flag is not set, return the private key.
if !cfg.StaticPeerID {
// If the StaticPeerID flag is not set and peerDAS is not enabled, return the private key.
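// (With peerDAS, the custody column subnets are derived from the node ID, so the key
// must be persisted to keep the node ID, and hence the custody assignment, stable.)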
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
@@ -89,7 +95,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
log.Info("Wrote network key to file")
log.WithField("path", defaultKeyPath).Info("Wrote network key to file")
// Read the key from the defaultKeyPath file just written
// for the strongest guarantee that the next start will be the same as this one.
return privKeyFromFile(defaultKeyPath)
@@ -173,3 +179,23 @@ func verifyConnectivity(addr string, port uint, protocol string) {
}
}
}
func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
// Retrieve the public key object of the peer under "crypto" form.
pubkeyObjCrypto, err := pid.ExtractPublicKey()
if err != nil {
return [32]byte{}, errors.Wrap(err, "extract public key")
}
// Extract the bytes representation of the public key.
compressedPubKeyBytes, err := pubkeyObjCrypto.Raw()
if err != nil {
return [32]byte{}, errors.Wrap(err, "public key raw")
}
// Retrieve the public key object of the peer under "SECP256K1" form.
pubKeyObjSecp256k1, err := btcec.ParsePubKey(compressedPubKeyBytes)
if err != nil {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
return enode.PubkeyToIDV4(newPubkey), nil
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -64,3 +65,19 @@ func TestSerializeENR(t *testing.T) {
assert.ErrorContains(t, "could not serialize nil record", err)
})
}
func TestConvertPeerIDToNodeID(t *testing.T) {
const (
peerIDStr = "16Uiu2HAmRrhnqEfybLYimCiAYer2AtZKDGamQrL1VwRCyeh2YiFc"
expectedNodeIDStr = "eed26c5d2425ab95f57246a5dca87317c41cacee4bcafe8bbe57e5965527c290"
)
peerID, err := peer.Decode(peerIDStr)
require.NoError(t, err)
actualNodeID, err := ConvertPeerIDToNodeID(peerID)
require.NoError(t, err)
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}

View File

@@ -70,7 +70,7 @@ func (s *Service) endpoints(
endpoints = append(endpoints, s.eventsEndpoints()...)
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater)...)
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(coreService)...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(coreService, stater)...)
if enableDebug {
endpoints = append(endpoints, s.debugEndpoints(stater)...)
}
@@ -143,7 +143,7 @@ func (s *Service) builderEndpoints(stater lookup.Stater) []endpoint {
}
}
func (*Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
func (s *Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
server := &blob.Server{
Blocker: blocker,
}
@@ -777,7 +777,7 @@ func (s *Service) beaconEndpoints(
}
}
func (*Service) configEndpoints() []endpoint {
func (s *Service) configEndpoints() []endpoint {
const namespace = "config"
return []endpoint{
{
@@ -1045,7 +1045,7 @@ func (s *Service) prysmNodeEndpoints() []endpoint {
}
}
func (*Service) prysmValidatorEndpoints(coreService *core.Service) []endpoint {
func (s *Service) prysmValidatorEndpoints(coreService *core.Service, stater lookup.Stater) []endpoint {
server := &validatorprysm.Server{
CoreService: coreService,
}

View File

@@ -80,6 +80,7 @@ func TestGetSpec(t *testing.T) {
config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107
config.Eip7594ForkEpoch = 109
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24
@@ -193,7 +194,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 155, len(data))
assert.Equal(t, 156, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -271,6 +272,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v)
case "EIP7594_FORK_EPOCH":
assert.Equal(t, "109", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"handlers.go",
"handlers_block.go",
"log.go",
"server.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/validator",

View File

@@ -31,7 +31,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@@ -592,7 +592,7 @@ func (s *Server) PrepareBeaconProposer(w http.ResponseWriter, r *http.Request) {
if len(validatorIndices) == 0 {
return
}
log.WithFields(logrus.Fields{
log.WithFields(log.Fields{
"validatorIndices": validatorIndices,
}).Info("Updated fee recipient addresses")
}

View File

@@ -220,9 +220,8 @@ func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *h
consensusBlockValue, httpError := getConsensusBlockValue(ctx, s.BlockRewardFetcher, v1alpha1resp.Block)
if httpError != nil {
log.WithError(httpError).Debug("Failed to get consensus block value")
// Having the consensus block value is not critical to block production
consensusBlockValue = ""
httputil.WriteError(w, httpError)
return
}
w.Header().Set(api.ExecutionPayloadBlindedHeader, fmt.Sprintf("%v", v1alpha1resp.IsBlinded))
@@ -298,7 +297,7 @@ func getConsensusBlockValue(ctx context.Context, blockRewardsFetcher rewards.Blo
}
}
if bb.Version() == version.Phase0 {
// Getting the block value for Phase 0 is very hard, so we ignore it
// ignore for phase 0
return "", nil
}
// Get consensus payload value which is the same as the total from the block rewards api.

View File

@@ -1,5 +0,0 @@
package validator
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "beacon-api")

Some files were not shown because too many files have changed in this diff.