Compare commits

...

12 Commits

Author SHA1 Message Date
terence
b4a66a0993 Increase sepolia gas limit to 60m (#15253) 2025-05-08 16:30:50 +00:00
Preston Van Loon
d38064f181 Add tracing spans to GetDuties (#15258)
* Add additional spans for tracing GetDuties

* Changelog fragment
2025-05-08 14:50:02 +00:00
Nishant Das
7f89bb3c6f Increase Limit For Rebuilding Field Trie (#15252)
* Increase Limit

* Fix Test
2025-05-08 04:42:30 +00:00
terence
a69c033e35 Update spec tests to v1.5.0 (#15256) 2025-05-07 22:38:39 +00:00
james-prysm
35151c7bc8 deduplicating rest propose block (#15147)
* deduplicating rest propose block

* gaz

* linting

* gaz and linting

* remove unneeded import

* gofmt
2025-05-07 18:09:22 +00:00
Potuz
c07479b99a remove error return from HistoricalRoots (#15255)
* remove error return from HistoricalRoots

* Radek's review
2025-05-07 16:06:30 +00:00
james-prysm
0d3b7f0ade fixing alias (#15254) 2025-05-07 15:37:46 +00:00
Potuz
dd9a5fba59 Force duties update on received blocks. (#15251)
* Force duties update on received blocks.

- Change the context on UpdateDuties to be passed by the calling
  function.
- Change the context passed to UpdateDuties to not be dependent on a
  slot context.
- Change the deadlines to be forced to be an entire epoch.
- Force duties to be initialized when receiving a HeadEvent if they
  aren't already.
- Adds a read lock on the event handling

* review

* Add deadlines at start and healthyagain

* cancel once
2025-05-07 00:49:22 +00:00
Manu NALEPA
7da7019a20 PeerDAS: Implement core. (#15192)
* Fulu: Implement params.

* KZG tests: Re-implement `getRandBlob` to avoid cyclical test dependencies.

Not ideal, but any better idea welcome.

* Fulu testing util: Implement `GenerateCellsAndProofs`.

* Create `RODataColumn`.

* Implement `MerkleProofKZGCommitments`.

* Export `leavesFromCommitments`.

* Implement peerDAS core.

* Add changelog.

* Update beacon-chain/core/peerdas/das_core.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* Fix Terence's comment: Use `IsNil`.

* Fix Terence's comment: Avoid useless `filteredIndices`.

* Fix Terence's comment: Simplify odd/even cases.

* Fix Terence's comment: Use `IsNil`.

* Spectests: Add Fulu networking

* Fix Terence's comment: `CustodyGroups`: Stick to the spec by returning a (sorted) slice.

* Fix Terence's comment: `CustodyGroups`: Handle correctly the `maxUint256` case.

* Update beacon-chain/core/peerdas/das_core.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* Fix Terence's comment: `ComputeColumnsForCustodyGroup`: Add test if `custodyGroup == numberOfCustodyGroup`

* `CustodyGroups`: Test if `custodyGroupCount > numberOfCustodyGroup`.

* `CustodyGroups`: Add a shortcut if all custody groups are needed.

* `ComputeCustodyGroupForColumn`: Move from `p2p_interface.go` to `das_core.go`.

* Fix Terence's comment: Fix `ComputeCustodyGroupForColumn`.

* Fix Terence's comment: Remove `constructCellsAndProofs` function.

* Fix Terence's comment: `ValidatorsCustodyRequirement`: Use effective balance instead of balance.

* `MerkleProofKZGCommitments`: Add tests

* Remove peer sampling.

* `DataColumnSidecars`: Add missing tests.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2025-05-06 21:37:07 +00:00
Leonardo Arias
24cf930952 Upgrade ristretto to v2.2.0 (#15170)
* Upgrade ristretto to v2.2.0

* Added the changelog

* gazelle

* Run goimports and gofmt

* Fix build

* Fix tests

* fix some golangci-lint violations

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-05-06 01:51:05 +00:00
Preston Van Loon
97a95dddfc Use otelgrpc for tracing grpc server and client (#15237)
* Use otelgrpc for tracing grpc server and client.

* Changelog fragment

* gofmt

* Use context in prometheus service

* Remove async start of prometheus service

* Use random port to reduce the probability of concurrent tests using the same port

* Remove comment

* fix lint error

---------

Co-authored-by: Bastin <bastin.m@proton.me>
2025-05-05 18:46:33 +00:00
Bastin
6df476835c Turn on lc gossip (#15220)
* add and pass lcstore to sync service

* validator for optimistic updates

* validator for finality updates

* subscribers

* gossip scorings

* tmp - add validation test

* optimistic update validation tests

* finality update validation tests

* tests for subscribers

* deps

* changelog entry

* play around with config cleanup

* turn on gossip

* add logs

* fix typo

* mock p2p

* deps

* better logs

* turn on gossip

* add logs

* fix typo

* update geth v1.15.9 (#15216)

* Update go-ethereum to v1.15.9

* Fix go-ethereum secp256k1 build after https://github.com/ethereum/go-ethereum/pull/31242

* Fix Ping API change

* Changelog fragment

* mock p2p

* deps

* fix merge problems

* changelog entry

* Update broadcaster.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update broadcaster.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-05-05 11:36:36 +00:00
117 changed files with 3445 additions and 1293 deletions

View File

@@ -255,7 +255,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-beta.5"
consensus_spec_version = "v1.5.0"
bls_test_version = "v0.1.1"
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-H+Pt4z+HCVDnEBAv814yvsjR7f5l1IpumjFoTj2XnLE=",
integrity = "sha256-JljxS/if/t0qvGWcf5CgsX+72fj90yGTg/uEgC56y7U=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -287,7 +287,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Dqiwf5BG7yYyURGf+i87AIdArAyztvcgjoi2kSxrGvo=",
integrity = "sha256-NRba2h4zqb2LAXyDPglHTtkT4gVyuwpY708XmwXKXV8=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -303,7 +303,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-xrmsFF243pzXHAjh1EQYKS9gtcwmtHK3wRZDSLlVVRk=",
integrity = "sha256-hpbtKUbc3NHtVcUPk/Zm+Hn57G2ijI9qvXJwl9hc/tM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -318,7 +318,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-c+gGapqifCvFtmtxfhOwieBDO2Syxp13GECWEpWM/Ho=",
integrity = "sha256-Wy3YcJxoXiKQwrGgJecrtjtdokc4X/VUNBmyQXJf0Oc=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -57,6 +57,7 @@ go_test(
deps = [
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",

View File

@@ -26,10 +26,7 @@ func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error)
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -116,10 +113,7 @@ func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAlt
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -225,10 +219,7 @@ func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconState
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -347,10 +338,7 @@ func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCa
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -488,10 +476,7 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -629,10 +614,7 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
@@ -815,10 +797,7 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
srcHr := st.HistoricalRoots()
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
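The same mechanical change repeats across every fork variant of the state conversion above: HistoricalRoots no longer returns an error, so each call site collapses from four lines to one. A minimal sketch of the accessor signature before and after (the interface names here are illustrative; only the signature change is visible in this diff):

package statesketch

// Before this change, every caller had to handle an error that the
// state-native implementation never actually produced.
type historicalRootsBefore interface {
	HistoricalRoots() ([][]byte, error)
}

// After this change, the getter simply returns the roots slice.
type historicalRootsAfter interface {
	HistoricalRoots() [][]byte
}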

View File

@@ -4,7 +4,9 @@ import (
"testing"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common/hexutil"
)
func TestDepositSnapshotFromConsensus(t *testing.T) {
@@ -102,12 +104,266 @@ func TestProposerSlashing_ToConsensus(t *testing.T) {
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestProposerSlashing_FromConsensus(t *testing.T) {
input := []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 2,
ParentRoot: []byte{3},
StateRoot: []byte{4},
BodyRoot: []byte{5},
},
Signature: []byte{6},
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 7,
ProposerIndex: 8,
ParentRoot: []byte{9},
StateRoot: []byte{10},
BodyRoot: []byte{11},
},
Signature: []byte{12},
},
},
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 13,
ProposerIndex: 14,
ParentRoot: []byte{15},
StateRoot: []byte{16},
BodyRoot: []byte{17},
},
Signature: []byte{18},
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 19,
ProposerIndex: 20,
ParentRoot: []byte{21},
StateRoot: []byte{22},
BodyRoot: []byte{23},
},
Signature: []byte{24},
},
},
}
expectedResult := []*ProposerSlashing{
{
SignedHeader1: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "1",
ProposerIndex: "2",
ParentRoot: hexutil.Encode([]byte{3}),
StateRoot: hexutil.Encode([]byte{4}),
BodyRoot: hexutil.Encode([]byte{5}),
},
Signature: hexutil.Encode([]byte{6}),
},
SignedHeader2: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "7",
ProposerIndex: "8",
ParentRoot: hexutil.Encode([]byte{9}),
StateRoot: hexutil.Encode([]byte{10}),
BodyRoot: hexutil.Encode([]byte{11}),
},
Signature: hexutil.Encode([]byte{12}),
},
},
{
SignedHeader1: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "13",
ProposerIndex: "14",
ParentRoot: hexutil.Encode([]byte{15}),
StateRoot: hexutil.Encode([]byte{16}),
BodyRoot: hexutil.Encode([]byte{17}),
},
Signature: hexutil.Encode([]byte{18}),
},
SignedHeader2: &SignedBeaconBlockHeader{
Message: &BeaconBlockHeader{
Slot: "19",
ProposerIndex: "20",
ParentRoot: hexutil.Encode([]byte{21}),
StateRoot: hexutil.Encode([]byte{22}),
BodyRoot: hexutil.Encode([]byte{23}),
},
Signature: hexutil.Encode([]byte{24}),
},
},
}
result := ProposerSlashingsFromConsensus(input)
assert.DeepEqual(t, expectedResult, result)
}
func TestAttesterSlashing_ToConsensus(t *testing.T) {
a := &AttesterSlashing{Attestation1: nil, Attestation2: nil}
_, err := a.ToConsensus()
require.ErrorContains(t, errNilValue.Error(), err)
}
func TestAttesterSlashing_FromConsensus(t *testing.T) {
input := []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1, 2},
Data: &eth.AttestationData{
Slot: 3,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{5},
Source: &eth.Checkpoint{
Epoch: 6,
Root: []byte{7},
},
Target: &eth.Checkpoint{
Epoch: 8,
Root: []byte{9},
},
},
Signature: []byte{10},
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{11, 12},
Data: &eth.AttestationData{
Slot: 13,
CommitteeIndex: 14,
BeaconBlockRoot: []byte{15},
Source: &eth.Checkpoint{
Epoch: 16,
Root: []byte{17},
},
Target: &eth.Checkpoint{
Epoch: 18,
Root: []byte{19},
},
},
Signature: []byte{20},
},
},
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{21, 22},
Data: &eth.AttestationData{
Slot: 23,
CommitteeIndex: 24,
BeaconBlockRoot: []byte{25},
Source: &eth.Checkpoint{
Epoch: 26,
Root: []byte{27},
},
Target: &eth.Checkpoint{
Epoch: 28,
Root: []byte{29},
},
},
Signature: []byte{30},
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{31, 32},
Data: &eth.AttestationData{
Slot: 33,
CommitteeIndex: 34,
BeaconBlockRoot: []byte{35},
Source: &eth.Checkpoint{
Epoch: 36,
Root: []byte{37},
},
Target: &eth.Checkpoint{
Epoch: 38,
Root: []byte{39},
},
},
Signature: []byte{40},
},
},
}
expectedResult := []*AttesterSlashing{
{
Attestation1: &IndexedAttestation{
AttestingIndices: []string{"1", "2"},
Data: &AttestationData{
Slot: "3",
CommitteeIndex: "4",
BeaconBlockRoot: hexutil.Encode([]byte{5}),
Source: &Checkpoint{
Epoch: "6",
Root: hexutil.Encode([]byte{7}),
},
Target: &Checkpoint{
Epoch: "8",
Root: hexutil.Encode([]byte{9}),
},
},
Signature: hexutil.Encode([]byte{10}),
},
Attestation2: &IndexedAttestation{
AttestingIndices: []string{"11", "12"},
Data: &AttestationData{
Slot: "13",
CommitteeIndex: "14",
BeaconBlockRoot: hexutil.Encode([]byte{15}),
Source: &Checkpoint{
Epoch: "16",
Root: hexutil.Encode([]byte{17}),
},
Target: &Checkpoint{
Epoch: "18",
Root: hexutil.Encode([]byte{19}),
},
},
Signature: hexutil.Encode([]byte{20}),
},
},
{
Attestation1: &IndexedAttestation{
AttestingIndices: []string{"21", "22"},
Data: &AttestationData{
Slot: "23",
CommitteeIndex: "24",
BeaconBlockRoot: hexutil.Encode([]byte{25}),
Source: &Checkpoint{
Epoch: "26",
Root: hexutil.Encode([]byte{27}),
},
Target: &Checkpoint{
Epoch: "28",
Root: hexutil.Encode([]byte{29}),
},
},
Signature: hexutil.Encode([]byte{30}),
},
Attestation2: &IndexedAttestation{
AttestingIndices: []string{"31", "32"},
Data: &AttestationData{
Slot: "33",
CommitteeIndex: "34",
BeaconBlockRoot: hexutil.Encode([]byte{35}),
Source: &Checkpoint{
Epoch: "36",
Root: hexutil.Encode([]byte{37}),
},
Target: &Checkpoint{
Epoch: "38",
Root: hexutil.Encode([]byte{39}),
},
},
Signature: hexutil.Encode([]byte{40}),
},
},
}
result := AttesterSlashingsFromConsensus(input)
assert.DeepEqual(t, expectedResult, result)
}
func TestIndexedAttestation_ToConsensus(t *testing.T) {
a := &IndexedAttestation{
AttestingIndices: []string{"1"},

View File

@@ -253,7 +253,7 @@ type PendingDeposit struct {
}
type PendingPartialWithdrawal struct {
Index string `json:"index"`
Index string `json:"validator_index"`
Amount string `json:"amount"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
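The tag rename only affects how the first field is marshalled. A small self-contained sketch of the resulting JSON, assuming standard encoding/json and the struct exactly as shown above (field values are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

type PendingPartialWithdrawal struct {
	Index             string `json:"validator_index"`
	Amount            string `json:"amount"`
	WithdrawableEpoch string `json:"withdrawable_epoch"`
}

func main() {
	w := PendingPartialWithdrawal{Index: "7", Amount: "32000000000", WithdrawableEpoch: "1024"}
	out, _ := json.Marshal(w)
	// With the old `json:"index"` tag the first key was "index"; it now
	// serializes as "validator_index".
	fmt.Println(string(out)) // {"validator_index":"7","amount":"32000000000","withdrawable_epoch":"1024"}
}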

View File

@@ -163,6 +163,7 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",

View File

@@ -30,7 +30,8 @@ go_test(
deps = [
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,12 +1,16 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/sirupsen/logrus"
)
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
@@ -37,7 +41,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := util.GetRandBlob(123)
blob := getRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
@@ -45,3 +49,36 @@ func TestGenerateCommitmentAndProof(t *testing.T) {
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
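Because the expected commitment and proof in TestGenerateCommitmentAndProof are hard-coded, the seed-based generator above has to be fully deterministic. A small sketch of a test that would pin that property down (not part of this diff; it would live alongside the kzg test file above, where getRandBlob and require are in scope):

func TestGetRandBlobIsDeterministic(t *testing.T) {
	// Identical seeds must produce identical blobs, otherwise hard-coded
	// expected commitments would break between runs.
	require.DeepEqual(t, getRandBlob(42), getRandBlob(42))
}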

View File

@@ -309,6 +309,11 @@ func (s *Service) processLightClientFinalityUpdate(
Type: statefeed.LightClientFinalityUpdate,
Data: newUpdate,
})
if err = s.cfg.P2p.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client finality update")
}
return nil
}
@@ -358,6 +363,10 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
Data: newUpdate,
})
if err = s.cfg.P2p.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client optimistic update")
}
return nil
}

View File

@@ -24,6 +24,7 @@ import (
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/attestations/kv"
mockp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -3309,6 +3310,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
params.OverrideBeaconConfig(beaconCfg)
s, tr := minimalTestService(t)
s.cfg.P2p = &mockp2p.FakeP2P{}
ctx := tr.ctx
testCases := []struct {
@@ -3444,6 +3446,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
params.OverrideBeaconConfig(beaconCfg)
s, tr := minimalTestService(t)
s.cfg.P2p = &mockp2p.FakeP2P{}
ctx := tr.ctx
testCases := []struct {

View File

@@ -22,6 +22,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"google.golang.org/protobuf/proto"
@@ -66,6 +67,16 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil
}
func (mb *mockBroadcaster) BroadcastLightClientOptimisticUpdate(_ context.Context, _ interfaces.LightClientOptimisticUpdate) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfaces.LightClientFinalityUpdate) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}
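These mock methods mirror two methods the p2p Broadcaster interface presumably gains in this change (the interface file itself is not part of this excerpt). A hedged sketch of the additions, with signatures inferred from the mock above and from the blockchain diff that calls them:

package p2psketch

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
)

// lightClientBroadcaster sketches the two broadcast methods used by
// processLightClientFinalityUpdate and processLightClientOptimisticUpdate above.
type lightClientBroadcaster interface {
	BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
	BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
}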

View File

@@ -67,10 +67,6 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
hrs, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateAltair{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
@@ -83,7 +79,7 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: hrs,
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),

View File

@@ -82,10 +82,8 @@ func TestUpgradeToAltair(t *testing.T) {
require.DeepSSZEqual(t, preForkState.LatestBlockHeader(), aState.LatestBlockHeader())
require.DeepSSZEqual(t, preForkState.BlockRoots(), aState.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), aState.StateRoots())
r1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
r2, err := aState.HistoricalRoots()
require.NoError(t, err)
r1 := preForkState.HistoricalRoots()
r2 := aState.HistoricalRoots()
require.DeepSSZEqual(t, r1, r2)
require.DeepSSZEqual(t, preForkState.Eth1Data(), aState.Eth1Data())
require.DeepSSZEqual(t, preForkState.Eth1DataVotes(), aState.Eth1DataVotes())

View File

@@ -42,10 +42,6 @@ func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
return nil, err
}
hrs, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateCapella{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
@@ -58,7 +54,7 @@ func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: hrs,
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),

View File

@@ -57,10 +57,6 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
historicalRoots, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
@@ -74,7 +70,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: historicalRoots,
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),

View File

@@ -47,10 +47,8 @@ func TestUpgradeToDeneb(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
hr1 := preForkState.HistoricalRoots()
hr2 := mSt.HistoricalRoots()
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()

View File

@@ -170,10 +170,6 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
if err != nil {
return nil, err
}
historicalRoots, err := beaconState.HistoricalRoots()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
@@ -223,7 +219,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: historicalRoots,
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),

View File

@@ -77,10 +77,8 @@ func TestUpgradeToElectra(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
hr1 := preForkState.HistoricalRoots()
hr2 := mSt.HistoricalRoots()
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()

View File

@@ -148,8 +148,7 @@ func TestProcessFinalUpdates_CanProcess(t *testing.T) {
assert.DeepNotEqual(t, params.BeaconConfig().ZeroHash[:], mix, "latest RANDAO still zero hashes")
// Verify historical root accumulator was appended.
roots, err := newS.HistoricalRoots()
require.NoError(t, err)
roots := newS.HistoricalRoots()
assert.Equal(t, 1, len(roots), "Unexpected slashed balance")
currAtt, err := newS.CurrentEpochAttestations()
require.NoError(t, err)
@@ -379,8 +378,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
return st
},
verifier: func(st state.BeaconState) {
roots, err := st.HistoricalRoots()
require.NoError(t, err)
roots := st.HistoricalRoots()
require.Equal(t, 0, len(roots))
},
},
@@ -393,8 +391,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
return st
},
verifier: func(st state.BeaconState) {
roots, err := st.HistoricalRoots()
require.NoError(t, err)
roots := st.HistoricalRoots()
require.Equal(t, 1, len(roots))
b := &ethpb.HistoricalBatch{
@@ -431,8 +428,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
StateSummaryRoot: sr[:],
}
require.DeepEqual(t, b, summaries[0])
hrs, err := st.HistoricalRoots()
require.NoError(t, err)
hrs := st.HistoricalRoots()
require.DeepEqual(t, hrs, [][]byte{})
},
},

View File

@@ -35,10 +35,6 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
return nil, err
}
hrs, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateBellatrix{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
@@ -51,7 +47,7 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: hrs,
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),

View File

@@ -24,10 +24,8 @@ func TestUpgradeToBellatrix(t *testing.T) {
require.DeepSSZEqual(t, preForkState.LatestBlockHeader(), mSt.LatestBlockHeader())
require.DeepSSZEqual(t, preForkState.BlockRoots(), mSt.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), mSt.StateRoots())
r1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
r2, err := mSt.HistoricalRoots()
require.NoError(t, err)
r1 := preForkState.HistoricalRoots()
r2 := mSt.HistoricalRoots()
require.DeepSSZEqual(t, r1, r2)
require.DeepSSZEqual(t, preForkState.Eth1Data(), mSt.Eth1Data())
require.DeepSSZEqual(t, preForkState.Eth1DataVotes(), mSt.Eth1DataVotes())

View File

@@ -57,10 +57,6 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
historicalRoots, err := beaconState.HistoricalRoots()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
@@ -118,7 +114,7 @@ func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: historicalRoots,
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),

View File

@@ -43,10 +43,8 @@ func TestUpgradeToFulu(t *testing.T) {
require.DeepSSZEqual(t, preForkState.BlockRoots(), mSt.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), mSt.StateRoots())
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
hr1 := preForkState.HistoricalRoots()
hr2 := mSt.HistoricalRoots()
require.DeepEqual(t, hr1, hr2)
require.DeepSSZEqual(t, preForkState.Eth1Data(), mSt.Eth1Data())

View File

@@ -18,6 +18,7 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -297,6 +298,9 @@ func verifyAssignmentEpoch(epoch primitives.Epoch, state state.BeaconState) erro
// It verifies the validity of the epoch, then iterates through each slot in the epoch to determine the
// proposer for that slot and assigns them accordingly.
func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch primitives.Epoch) (map[primitives.ValidatorIndex][]primitives.Slot, error) {
ctx, span := trace.StartSpan(ctx, "helpers.ProposerAssignments")
defer span.End()
// Verify if the epoch is valid for assignment based on the provided state.
if err := verifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
@@ -345,6 +349,9 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
// It retrieves active validator indices, determines the number of committees per slot, and computes
// assignments for each validator based on their presence in the provided validators slice.
func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch primitives.Epoch, validators []primitives.ValidatorIndex) (map[primitives.ValidatorIndex]*CommitteeAssignment, error) {
ctx, span := trace.StartSpan(ctx, "helpers.CommitteeAssignments")
defer span.End()
// Verify if the epoch is valid for assignment based on the provided state.
if err := verifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
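For reference, this is the span pattern the hunks above add, sketched in a hypothetical caller (the GetDuties wiring itself is not part of this excerpt, and the span name below is illustrative):

package rpcsketch

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
)

// getDutiesSketch opens a span, defers its end, and passes the derived
// context down so helpers.ProposerAssignments and helpers.CommitteeAssignments
// nest their spans under it.
func getDutiesSketch(ctx context.Context) error {
	ctx, span := trace.StartSpan(ctx, "validator.GetDuties")
	defer span.End()

	// ... call helpers.ProposerAssignments(ctx, state, epoch) and
	// helpers.CommitteeAssignments(ctx, state, epoch, validators) with ctx.
	_ = ctx
	return nil
}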

View File

@@ -0,0 +1,71 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"das_core.go",
"info.go",
"metrics.go",
"p2p_interface.go",
"reconstruction.go",
"util.go",
"validator.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"das_core_test.go",
"info_test.go",
"p2p_interface_test.go",
"reconstruction_test.go",
"utils_test.go",
"validator_test.go",
],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -0,0 +1,402 @@
package peerdas
import (
"encoding/binary"
"math"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/holiman/uint256"
"github.com/pkg/errors"
)
var (
// Custom errors
ErrCustodyGroupTooLarge = errors.New("custody group too large")
ErrCustodyGroupCountTooLarge = errors.New("custody group count too large")
ErrMismatchSize = errors.New("mismatch in the number of blob KZG commitments and cellsAndProofs")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
// maxUint256 is the maximum value of a uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
type CustodyType int
const (
Target CustodyType = iota
Actual
)
// CustodyGroups computes the custody groups the node should participate in for custody.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error) {
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
// Check if the custody group count is larger than the number of custody groups.
if custodyGroupCount > numberOfCustodyGroup {
return nil, ErrCustodyGroupCountTooLarge
}
// Shortcut if all custody groups are needed.
if custodyGroupCount == numberOfCustodyGroup {
custodyGroups := make([]uint64, 0, numberOfCustodyGroup)
for i := range numberOfCustodyGroup {
custodyGroups = append(custodyGroups, i)
}
return custodyGroups, nil
}
one := uint256.NewInt(1)
custodyGroupsMap := make(map[uint64]bool, custodyGroupCount)
custodyGroups := make([]uint64, 0, custodyGroupCount)
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(custodyGroups)) < custodyGroupCount; {
// Convert to big endian bytes.
currentIdBytesBigEndian := currentId.Bytes32()
// Convert to little endian.
currentIdBytesLittleEndian := bytesutil.ReverseByteOrder(currentIdBytesBigEndian[:])
// Hash the result.
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the custody group ID.
custodyGroup := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroup
// Add the custody group to the map.
if !custodyGroupsMap[custodyGroup] {
custodyGroupsMap[custodyGroup] = true
custodyGroups = append(custodyGroups, custodyGroup)
}
if currentId.Cmp(maxUint256) == 0 {
// Overflow prevention.
currentId = uint256.NewInt(0)
} else {
// Increment the current ID.
currentId.Add(currentId, one)
}
// Sort the custody groups.
slices.Sort[[]uint64](custodyGroups)
}
// Final check.
if uint64(len(custodyGroups)) != custodyGroupCount {
return nil, errWrongComputedCustodyGroupCount
}
return custodyGroups, nil
}
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
if custodyGroup >= numberOfCustodyGroup {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroup
columns := make([]uint64, 0, columnsPerGroup)
for i := range columnsPerGroup {
column := numberOfCustodyGroup*i + custodyGroup
columns = append(columns, column)
}
return columns, nil
}
// DataColumnSidecars computes the data column sidecars from the signed block, cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters `cellsAndProofs` afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.3/specs/fulu/das-core.md#get_data_column_sidecars
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsAndProofs []kzg.CellsAndProofs) ([]*ethpb.DataColumnSidecar, error) {
if signedBlock == nil || signedBlock.IsNil() || len(cellsAndProofs) == 0 {
return nil, nil
}
block := signedBlock.Block()
blockBody := block.Body()
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
if len(blobKzgCommitments) != len(cellsAndProofs) {
return nil, ErrMismatchSize
}
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof ZKG commitments")
}
dataColumnSidecars, err := DataColumnsSidecarsFromItems(signedBlockHeader, blobKzgCommitments, kzgCommitmentsInclusionProof, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars from items")
}
return dataColumnSidecars, nil
}
// DataColumnsSidecarsFromItems computes the data column sidecars from the signed block header, the blob KZG commitments,
// the KZG commitment inclusion proofs, and the cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters input parameters afterwards, the returned value will be modified as well.)
func DataColumnsSidecarsFromItems(
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
blobKzgCommitments [][]byte,
kzgCommitmentsInclusionProof [][]byte,
cellsAndProofs []kzg.CellsAndProofs,
) ([]*ethpb.DataColumnSidecar, error) {
start := time.Now()
if len(blobKzgCommitments) != len(cellsAndProofs) {
return nil, ErrMismatchSize
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
blobsCount := len(cellsAndProofs)
sidecars := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns)
for columnIndex := range numberOfColumns {
column := make([]kzg.Cell, 0, blobsCount)
kzgProofOfColumn := make([]kzg.Proof, 0, blobsCount)
for rowIndex := range blobsCount {
cellsForRow := cellsAndProofs[rowIndex].Cells
proofsForRow := cellsAndProofs[rowIndex].Proofs
cell := cellsForRow[columnIndex]
column = append(column, cell)
kzgProof := proofsForRow[columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
columnBytes = append(columnBytes, column[i][:])
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, kzgProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
Index: columnIndex,
Column: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProofs: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
dataColumnComputationTime.Observe(float64(time.Since(start).Milliseconds()))
return sidecars, nil
}
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {
return 0, ErrIndexTooLarge
}
return columnIndex % numberOfCustodyGroups, nil
}
// Blobs extracts blobs from `dataColumnsSidecar`.
// This can be seen as the reciprocal function of DataColumnSidecars.
// `dataColumnsSidecar` needs to contain the data columns corresponding to the non-extended matrix,
// else an error will be returned.
// (`dataColumnsSidecar` can contain extra columns, but they will be ignored.)
func Blobs(indices map[uint64]bool, dataColumnsSidecar []*ethpb.DataColumnSidecar) ([]*blocks.VerifiedROBlob, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
// Compute the number of needed columns, including the case where the number of columns is odd.
neededColumnCount := (numberOfColumns + 1) / 2
// Check if all needed columns are present.
sliceIndexFromColumnIndex := make(map[uint64]int, len(dataColumnsSidecar))
for i := range dataColumnsSidecar {
dataColumnSideCar := dataColumnsSidecar[i]
index := dataColumnSideCar.Index
if index < neededColumnCount {
sliceIndexFromColumnIndex[index] = i
}
}
actualColumnCount := uint64(len(sliceIndexFromColumnIndex))
// Get missing columns.
if actualColumnCount < neededColumnCount {
var missingColumnsSlice []uint64
for i := range neededColumnCount {
if _, ok := sliceIndexFromColumnIndex[i]; !ok {
missingColumnsSlice = append(missingColumnsSlice, i)
}
}
slices.Sort[[]uint64](missingColumnsSlice)
return nil, errors.Errorf("some columns are missing: %v", missingColumnsSlice)
}
// It is safe to retrieve the first column since we already checked that `dataColumnsSidecar` is not empty.
firstDataColumnSidecar := dataColumnsSidecar[0]
blobCount := uint64(len(firstDataColumnSidecar.Column))
// Check all columns have the same length.
for i := range dataColumnsSidecar {
if uint64(len(dataColumnsSidecar[i].Column)) != blobCount {
return nil, errors.Errorf("mismatch in the length of the data columns, expected %d, got %d", blobCount, len(dataColumnsSidecar[i].Column))
}
}
// Reconstruct verified RO blobs from columns.
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, blobCount)
// Populate and filter indices.
indicesSlice := populateAndFilterIndices(indices, blobCount)
for _, blobIndex := range indicesSlice {
var blob kzg.Blob
// Compute the content of the blob.
for columnIndex := range neededColumnCount {
sliceIndex, ok := sliceIndexFromColumnIndex[columnIndex]
if !ok {
return nil, errors.Errorf("missing column %d, this should never happen", columnIndex)
}
dataColumnSideCar := dataColumnsSidecar[sliceIndex]
cell := dataColumnSideCar.Column[blobIndex]
for i := range cell {
blob[columnIndex*kzg.BytesPerCell+uint64(i)] = cell[i]
}
}
// Retrieve the blob KZG commitment.
blobKZGCommitment := kzg.Commitment(firstDataColumnSidecar.KzgCommitments[blobIndex])
// Compute the blob KZG proof.
blobKzgProof, err := kzg.ComputeBlobKZGProof(&blob, blobKZGCommitment)
if err != nil {
return nil, errors.Wrap(err, "compute blob KZG proof")
}
blobSidecar := &ethpb.BlobSidecar{
Index: blobIndex,
Blob: blob[:],
KzgCommitment: blobKZGCommitment[:],
KzgProof: blobKzgProof[:],
SignedBlockHeader: firstDataColumnSidecar.SignedBlockHeader,
CommitmentInclusionProof: firstDataColumnSidecar.KzgCommitmentsInclusionProof,
}
roBlob, err := blocks.NewROBlob(blobSidecar)
if err != nil {
return nil, errors.Wrap(err, "new RO blob")
}
verifiedROBlob := blocks.NewVerifiedROBlob(roBlob)
verifiedROBlobs = append(verifiedROBlobs, &verifiedROBlob)
}
return verifiedROBlobs, nil
}
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#custody-sampling
func (custodyInfo *CustodyInfo) CustodyGroupSamplingSize(ct CustodyType) uint64 {
custodyGroupCount := custodyInfo.TargetGroupCount.Get()
if ct == Actual {
custodyGroupCount = custodyInfo.ActualGroupCount()
}
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
return max(samplesPerSlot, custodyGroupCount)
}
// CustodyColumns computes the custody columns from the custody groups.
func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
custodyGroupCount := len(custodyGroups)
// Compute the columns for each custody group.
columns := make(map[uint64]bool, custodyGroupCount)
for _, group := range custodyGroups {
if group >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
groupColumns, err := ComputeColumnsForCustodyGroup(group)
if err != nil {
return nil, errors.Wrap(err, "compute columns for custody group")
}
for _, column := range groupColumns {
columns[column] = true
}
}
return columns, nil
}
// populateAndFilterIndices returns a sorted slice of indices, setting all indices if none are provided,
// and filtering out indices at or above the blob count.
func populateAndFilterIndices(indices map[uint64]bool, blobCount uint64) []uint64 {
// If no indices are provided, provide all blobs.
if len(indices) == 0 {
for i := range blobCount {
indices[i] = true
}
}
// Filter out indices at or above the blob count.
indicesSlice := make([]uint64, 0, len(indices))
for i := range indices {
if i < blobCount {
indicesSlice = append(indicesSlice, i)
}
}
// Sort the indices.
slices.Sort[[]uint64](indicesSlice)
return indicesSlice
}
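A minimal usage sketch of the custody helpers defined above (the node ID is illustrative; a real node derives it from its p2p identity, and the custody group count follows the node's configuration):

package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// Illustrative zero node ID.
	var nodeID enode.ID

	// Deterministically derive the custody groups for this node at the
	// minimum custody requirement, then expand them to column indices.
	groups, err := peerdas.CustodyGroups(nodeID, params.BeaconConfig().CustodyRequirement)
	if err != nil {
		panic(err)
	}
	columns, err := peerdas.CustodyColumns(groups)
	if err != nil {
		panic(err)
	}
	fmt.Println("custody groups:", groups)
	fmt.Println("custody column count:", len(columns))
}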

View File

@@ -0,0 +1,311 @@
package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/pkg/errors"
)
func TestCustodyGroups(t *testing.T) {
// The happy path is unit tested in spec tests.
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.CustodyGroups(enode.ID{}, numberOfCustodyGroup+1)
require.ErrorIs(t, err, peerdas.ErrCustodyGroupCountTooLarge)
}
func TestComputeColumnsForCustodyGroup(t *testing.T) {
// The happy path is unit tested in spec tests.
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
_, err := peerdas.ComputeColumnsForCustodyGroup(numberOfCustodyGroup)
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
}
func TestDataColumnSidecars(t *testing.T) {
t.Run("nil signed block", func(t *testing.T) {
var expected []*ethpb.DataColumnSidecar = nil
actual, err := peerdas.DataColumnSidecars(nil, []kzg.CellsAndProofs{})
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("empty cells and proofs", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
actual, err := peerdas.DataColumnSidecars(signedBeaconBlock, []kzg.CellsAndProofs{})
require.NoError(t, err)
require.IsNil(t, actual)
})
t.Run("sizes mismatch", func(t *testing.T) {
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs.
cellsAndProofs := make([]kzg.CellsAndProofs, 1)
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorIs(t, err, peerdas.ErrMismatchSize)
})
}
// --------------------------------------------------------------------------------------------------------------------------------------
// DataColumnsSidecarsFromItems is tested as part of the DataColumnSidecars tests, in the TestDataColumnsSidecarsBlobsRoundtrip function.
// --------------------------------------------------------------------------------------------------------------------------------------
func TestComputeCustodyGroupForColumn(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.NumberOfColumns = 128
config.NumberOfCustodyGroups = 64
params.OverrideBeaconConfig(config)
t.Run("index too large", func(t *testing.T) {
_, err := peerdas.ComputeCustodyGroupForColumn(1_000_000)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
t.Run("nominal", func(t *testing.T) {
expected := uint64(2)
actual, err := peerdas.ComputeCustodyGroupForColumn(2)
require.NoError(t, err)
require.Equal(t, expected, actual)
expected = uint64(3)
actual, err = peerdas.ComputeCustodyGroupForColumn(3)
require.NoError(t, err)
require.Equal(t, expected, actual)
expected = uint64(2)
actual, err = peerdas.ComputeCustodyGroupForColumn(66)
require.NoError(t, err)
require.Equal(t, expected, actual)
expected = uint64(3)
actual, err = peerdas.ComputeCustodyGroupForColumn(67)
require.NoError(t, err)
require.Equal(t, expected, actual)
})
}
func TestBlobs(t *testing.T) {
blobsIndice := map[uint64]bool{}
numberOfColumns := params.BeaconConfig().NumberOfColumns
almostAllColumns := make([]*ethpb.DataColumnSidecar, 0, numberOfColumns/2)
for i := uint64(2); i < numberOfColumns/2+2; i++ {
almostAllColumns = append(almostAllColumns, &ethpb.DataColumnSidecar{
Index: i,
})
}
testCases := []struct {
name string
input []*ethpb.DataColumnSidecar
expected []*blocks.VerifiedROBlob
err error
}{
{
name: "empty input",
input: []*ethpb.DataColumnSidecar{},
expected: nil,
err: errors.New("some columns are missing: [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63]"),
},
{
name: "missing columns",
input: almostAllColumns,
expected: nil,
err: errors.New("some columns are missing: [0 1]"),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
actual, err := peerdas.Blobs(blobsIndice, tc.input)
if tc.err != nil {
require.Equal(t, tc.err.Error(), err.Error())
} else {
require.NoError(t, err)
}
require.DeepSSZEqual(t, tc.expected, actual)
})
}
}
func TestDataColumnsSidecarsBlobsRoundtrip(t *testing.T) {
const blobCount = 5
blobsIndex := map[uint64]bool{}
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create a protobuf signed beacon block.
signedBeaconBlockPb := util.NewBeaconBlockDeneb()
// Generate random blobs and their corresponding commitments and proofs.
blobs := make([]kzg.Blob, 0, blobCount)
blobKzgCommitments := make([]*kzg.Commitment, 0, blobCount)
blobKzgProofs := make([]*kzg.Proof, 0, blobCount)
for blobIndex := range blobCount {
// Create a random blob.
blob := getRandBlob(int64(blobIndex))
blobs = append(blobs, blob)
// Generate a blobKZGCommitment for the blob.
blobKZGCommitment, proof, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
blobKzgCommitments = append(blobKzgCommitments, blobKZGCommitment)
blobKzgProofs = append(blobKzgProofs, proof)
}
// Set the commitments into the block.
blobZkgCommitmentsBytes := make([][]byte, 0, blobCount)
for _, blobKZGCommitment := range blobKzgCommitments {
blobZkgCommitmentsBytes = append(blobZkgCommitmentsBytes, blobKZGCommitment[:])
}
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = blobZkgCommitmentsBytes
// Generate verified RO blobs.
verifiedROBlobs := make([]*blocks.VerifiedROBlob, 0, blobCount)
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
commitmentInclusionProof, err := blocks.MerkleProofKZGCommitments(signedBeaconBlock.Block().Body())
require.NoError(t, err)
for blobIndex := range blobCount {
blob := blobs[blobIndex]
blobKZGCommitment := blobKzgCommitments[blobIndex]
blobKzgProof := blobKzgProofs[blobIndex]
// Get the signed beacon block header.
signedBeaconBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
blobSidecar := &ethpb.BlobSidecar{
Index: uint64(blobIndex),
Blob: blob[:],
KzgCommitment: blobKZGCommitment[:],
KzgProof: blobKzgProof[:],
SignedBlockHeader: signedBeaconBlockHeader,
CommitmentInclusionProof: commitmentInclusionProof,
}
roBlob, err := blocks.NewROBlob(blobSidecar)
require.NoError(t, err)
verifiedROBlob := blocks.NewVerifiedROBlob(roBlob)
verifiedROBlobs = append(verifiedROBlobs, &verifiedROBlob)
}
// Compute data columns sidecars from the signed beacon block and from the blobs.
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
dataColumnsSidecar, err := peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.NoError(t, err)
// Compute the blobs from the data columns sidecar.
roundtripBlobs, err := peerdas.Blobs(blobsIndex, dataColumnsSidecar)
require.NoError(t, err)
// Check that the blobs are the same.
require.DeepSSZEqual(t, verifiedROBlobs, roundtripBlobs)
}
func TestCustodyGroupSamplingSize(t *testing.T) {
testCases := []struct {
name string
custodyType peerdas.CustodyType
validatorsCustodyRequirement uint64
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "target, lower than samples per slot",
custodyType: peerdas.Target,
validatorsCustodyRequirement: 2,
expected: 8,
},
{
name: "target, higher than samples per slot",
custodyType: peerdas.Target,
validatorsCustodyRequirement: 100,
expected: 100,
},
{
name: "actual, lower than samples per slot",
custodyType: peerdas.Actual,
validatorsCustodyRequirement: 3,
toAdvertiseCustodyGroupCount: 4,
expected: 8,
},
{
name: "actual, higher than samples per slot",
custodyType: peerdas.Actual,
validatorsCustodyRequirement: 100,
toAdvertiseCustodyGroupCount: 101,
expected: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a custody info.
custodyInfo := peerdas.CustodyInfo{}
// Set the validators custody requirement for target custody group count.
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(tc.validatorsCustodyRequirement)
// Set the to advertise custody group count.
custodyInfo.ToAdvertiseGroupCount.Set(tc.toAdvertiseCustodyGroupCount)
// Compute the custody group sampling size.
actual := custodyInfo.CustodyGroupSamplingSize(tc.custodyType)
// Check the result.
require.Equal(t, tc.expected, actual)
})
}
}
func TestCustodyColumns(t *testing.T) {
t.Run("group too large", func(t *testing.T) {
_, err := peerdas.CustodyColumns([]uint64{1_000_000})
require.ErrorIs(t, err, peerdas.ErrCustodyGroupTooLarge)
})
t.Run("nominal", func(t *testing.T) {
input := []uint64{1, 2}
expected := map[uint64]bool{1: true, 2: true}
actual, err := peerdas.CustodyColumns(input)
require.NoError(t, err)
require.Equal(t, len(expected), len(actual))
for i := range actual {
require.Equal(t, expected[i], actual[i])
}
})
}

View File

@@ -0,0 +1,192 @@
package peerdas
import (
"encoding/binary"
"sync"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/ethereum/go-ethereum/p2p/enode"
lru "github.com/hashicorp/golang-lru"
"github.com/pkg/errors"
)
// info contains all useful peerDAS related information regarding a peer.
type (
info struct {
CustodyGroups map[uint64]bool
CustodyColumns map[uint64]bool
DataColumnsSubnets map[uint64]bool
}
targetCustodyGroupCount struct {
mut sync.RWMutex
validatorsCustodyRequirement uint64
}
toAdverstiseCustodyGroupCount struct {
mut sync.RWMutex
value uint64
}
CustodyInfo struct {
// Mut is a mutex to be used by the caller to ensure that neither
// TargetGroupCount nor ToAdvertiseGroupCount is being modified.
// (Using this mutex is not necessary for any data protection.)
Mut sync.RWMutex
// TargetGroupCount represents the target number of custody groups we should custody
// regarding the validators we are tracking.
TargetGroupCount targetCustodyGroupCount
// ToAdvertiseGroupCount represents the number of custody groups to advertise to the network.
ToAdvertiseGroupCount toAdvertiseCustodyGroupCount
}
)
const (
nodeInfoCacheSize = 200
nodeInfoCacheKeySize = 32 + 8
)
var (
nodeInfoCacheMut sync.Mutex
nodeInfoCache *lru.Cache
)
// Info returns the peerDAS information for a given nodeID and custodyGroupCount.
// It returns a boolean indicating if the peer info was already in the cache and an error if any.
func Info(nodeID enode.ID, custodyGroupCount uint64) (*info, bool, error) {
// Create a new cache if it doesn't exist.
if err := createInfoCacheIfNeeded(); err != nil {
return nil, false, errors.Wrap(err, "create cache if needed")
}
// Compute the key.
key := computeInfoCacheKey(nodeID, custodyGroupCount)
// If the value is already in the cache, return it.
if value, ok := nodeInfoCache.Get(key); ok {
peerInfo, ok := value.(*info)
if !ok {
return nil, false, errors.New("failed to cast peer info (should never happen)")
}
return peerInfo, true, nil
}
// The peer info is not in the cache, compute it.
// Compute custody groups.
custodyGroups, err := CustodyGroups(nodeID, custodyGroupCount)
if err != nil {
return nil, false, errors.Wrap(err, "custody groups")
}
// Compute custody columns.
custodyColumns, err := CustodyColumns(custodyGroups)
if err != nil {
return nil, false, errors.Wrap(err, "custody columns")
}
// Compute data columns subnets.
dataColumnsSubnets := DataColumnSubnets(custodyColumns)
// Convert the custody groups to a map.
custodyGroupsMap := make(map[uint64]bool, len(custodyGroups))
for _, group := range custodyGroups {
custodyGroupsMap[group] = true
}
result := &info{
CustodyGroups: custodyGroupsMap,
CustodyColumns: custodyColumns,
DataColumnsSubnets: dataColumnsSubnets,
}
// Add the result to the cache.
nodeInfoCache.Add(key, result)
return result, false, nil
}
// ActualGroupCount returns the actual custody group count.
func (custodyInfo *CustodyInfo) ActualGroupCount() uint64 {
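// The actual count is the smaller of the validator-driven target and the count advertised to the network.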
return min(custodyInfo.TargetGroupCount.Get(), custodyInfo.ToAdvertiseGroupCount.Get())
}
// Get returns the target number of custody groups we should participate in.
func (tcgc *targetCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeToAllSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
tcgc.mut.RLock()
defer tcgc.mut.RUnlock()
// If no validators are tracked, return the default custody requirement.
if tcgc.validatorsCustodyRequirement == 0 {
return params.BeaconConfig().CustodyRequirement
}
// Return the validators custody requirement.
return tcgc.validatorsCustodyRequirement
}
// SetValidatorsCustodyRequirement sets the validators custody requirement.
func (tcgc *targetCustodyGroupCount) SetValidatorsCustodyRequirement(value uint64) {
tcgc.mut.Lock()
defer tcgc.mut.Unlock()
tcgc.validatorsCustodyRequirement = value
}
// Get returns the custody group count to advertise.
func (tacgc *toAdvertiseCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeToAllSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
custodyRequirement := params.BeaconConfig().CustodyRequirement
tacgc.mut.RLock()
defer tacgc.mut.RUnlock()
return max(tacgc.value, custodyRequirement)
}
// Set sets the custody group count to advertise.
func (tacgc *toAdvertiseCustodyGroupCount) Set(value uint64) {
tacgc.mut.Lock()
defer tacgc.mut.Unlock()
tacgc.value = value
}
// createInfoCacheIfNeeded creates a new cache if it doesn't exist.
func createInfoCacheIfNeeded() error {
nodeInfoCacheMut.Lock()
defer nodeInfoCacheMut.Unlock()
if nodeInfoCache == nil {
c, err := lru.New(nodeInfoCacheSize)
if err != nil {
return errors.Wrap(err, "lru new")
}
nodeInfoCache = c
}
return nil
}
// computeInfoCacheKey returns a unique key for a node and its custodyGroupCount.
func computeInfoCacheKey(nodeID enode.ID, custodyGroupCount uint64) [nodeInfoCacheKeySize]byte {
var key [nodeInfoCacheKeySize]byte
copy(key[:32], nodeID[:])
binary.BigEndian.PutUint64(key[32:], custodyGroupCount)
return key
}

View File

@@ -0,0 +1,133 @@
package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
func TestInfo(t *testing.T) {
nodeID := enode.ID{}
custodyGroupCount := uint64(7)
expectedCustodyGroup := map[uint64]bool{1: true, 17: true, 19: true, 42: true, 75: true, 87: true, 102: true}
expectedCustodyColumns := map[uint64]bool{1: true, 17: true, 19: true, 42: true, 75: true, 87: true, 102: true}
expectedDataColumnsSubnets := map[uint64]bool{1: true, 17: true, 19: true, 42: true, 75: true, 87: true, 102: true}
for _, cached := range []bool{false, true} {
actual, ok, err := peerdas.Info(nodeID, custodyGroupCount)
require.NoError(t, err)
require.Equal(t, cached, ok)
require.DeepEqual(t, expectedCustodyGroup, actual.CustodyGroups)
require.DeepEqual(t, expectedCustodyColumns, actual.CustodyColumns)
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestTargetCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllSubnets bool
validatorsCustodyRequirement uint64
expected uint64
}{
{
name: "subscribed to all subnets",
subscribeToAllSubnets: true,
validatorsCustodyRequirement: 100,
expected: 128,
},
{
name: "no validators attached",
subscribeToAllSubnets: false,
validatorsCustodyRequirement: 0,
expected: 4,
},
{
name: "some validators attached",
subscribeToAllSubnets: false,
validatorsCustodyRequirement: 100,
expected: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllSubnets {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
var custodyInfo peerdas.CustodyInfo
// Set the validators custody requirement.
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(tc.validatorsCustodyRequirement)
// Get the target custody group count.
actual := custodyInfo.TargetGroupCount.Get()
// Compare the expected and actual values.
require.Equal(t, tc.expected, actual)
})
}
}
func TestToAdvertiseCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllSubnets bool
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "subscribed to all subnets",
subscribeToAllSubnets: true,
toAdvertiseCustodyGroupCount: 100,
expected: 128,
},
{
name: "higher than custody requirement",
subscribeToAllSubnets: false,
toAdvertiseCustodyGroupCount: 100,
expected: 100,
},
{
name: "lower than custody requirement",
subscribeToAllSubnets: false,
toAdvertiseCustodyGroupCount: 1,
expected: 4,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllSubnets {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
// Create a custody info.
var custodyInfo peerdas.CustodyInfo
// Set the to advertise custody group count.
custodyInfo.ToAdvertiseGroupCount.Set(tc.toAdvertiseCustodyGroupCount)
// Get the to advertise custody group count.
actual := custodyInfo.ToAdvertiseGroupCount.Get()
// Compare the expected and actual values.
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -0,0 +1,14 @@
package peerdas
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)

View File

@@ -0,0 +1,162 @@
package peerdas
import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/container/trie"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/pkg/errors"
)
const (
CustodyGroupCountEnrKey = "cgc"
kzgPosition = 11 // The index of the KZG commitment list in the Body
)
var (
ErrIndexTooLarge = errors.New("column index is larger than the specified columns count")
ErrNoKzgCommitments = errors.New("no KZG commitments found")
ErrMismatchLength = errors.New("mismatch in the length of the column, commitments or proofs")
ErrInvalidKZGProof = errors.New("invalid KZG proof")
ErrBadRootLength = errors.New("bad root length")
ErrInvalidInclusionProof = errors.New("invalid inclusion proof")
ErrRecordNil = errors.New("record is nil")
ErrNilBlockHeader = errors.New("nil beacon block header")
ErrCannotLoadCustodyGroupCount = errors.New("cannot load the custody group count from peer")
)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#custody-group-count
type Cgc uint64
func (Cgc) ENRKey() string { return CustodyGroupCountEnrKey }
// VerifyDataColumnSidecar verifies that the data column sidecar is valid.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
numberOfColumns := params.BeaconConfig().NumberOfColumns
if sidecar.Index >= numberOfColumns {
return ErrIndexTooLarge
}
// A sidecar for zero blobs is invalid.
if len(sidecar.KzgCommitments) == 0 {
return ErrNoKzgCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
return ErrMismatchLength
}
return nil
}
// VerifyDataColumnsSidecarKZGProofs verifies that the KZG proofs are correct.
// Note: We deviate slightly from the specification here:
// the specification verifies the KZG proofs of each sidecar separately,
// while we verify the KZG proofs of multiple sidecars in a single batch.
// This is done to improve performance, since the internal KZG library is significantly
// more efficient when verifying in batch.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
// Compute the total count.
count := 0
for _, sidecar := range sidecars {
count += len(sidecar.Column)
}
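// Flatten the commitments, column indices, cells and proofs of all sidecars into parallel slices for a single batch verification call.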
commitments := make([]kzg.Bytes48, 0, count)
indices := make([]uint64, 0, count)
cells := make([]kzg.Cell, 0, count)
proofs := make([]kzg.Bytes48, 0, count)
for _, sidecar := range sidecars {
for i := range sidecar.Column {
commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
indices = append(indices, sidecar.Index)
cells = append(cells, kzg.Cell(sidecar.Column[i]))
proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
}
}
// Batch verify that the cells match the corresponding commitments and proofs.
verified, err := kzg.VerifyCellKZGProofBatch(commitments, indices, cells, proofs)
if err != nil {
return errors.Wrap(err, "verify cell KZG proof batch")
}
if !verified {
return ErrInvalidKZGProof
}
return nil
}
// VerifyDataColumnSidecarInclusionProof verifies that the given KZG commitments are included in the given beacon block.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
return ErrNilBlockHeader
}
root := sidecar.SignedBlockHeader.Header.BodyRoot
if len(root) != fieldparams.RootLength {
return ErrBadRootLength
}
leaves := blocks.LeavesFromCommitments(sidecar.KzgCommitments)
sparse, err := trie.GenerateTrieFromItems(leaves, fieldparams.LogMaxBlobCommitments)
if err != nil {
return errors.Wrap(err, "generate trie from items")
}
hashTreeRoot, err := sparse.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "hash tree root")
}
verified := trie.VerifyMerkleProof(root, hashTreeRoot[:], kzgPosition, sidecar.KzgCommitmentsInclusionProof)
if !verified {
return ErrInvalidInclusionProof
}
return nil
}
// ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
return columnIndex % dataColumnSidecarSubnetCount
}
// DataColumnSubnets computes the subnets for the data columns.
func DataColumnSubnets(dataColumns map[uint64]bool) map[uint64]bool {
subnets := make(map[uint64]bool, len(dataColumns))
for column := range dataColumns {
subnet := ComputeSubnetForDataColumnSidecar(column)
subnets[subnet] = true
}
return subnets
}
// CustodyGroupCountFromRecord extracts the custody group count from an ENR record.
func CustodyGroupCountFromRecord(record *enr.Record) (uint64, error) {
if record == nil {
return 0, ErrRecordNil
}
// Load the `cgc`
var cgc Cgc
if err := record.Load(&cgc); err != nil {
return 0, ErrCannotLoadCustodyGroupCount
}
return uint64(cgc), nil
}
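For orientation, a hypothetical helper (not part of this diff) showing how the three checks above might be chained for a single incoming sidecar; it only reuses functions and imports already defined in this file, and the helper name is illustrative:
// validateIncomingSidecar is a hypothetical helper chaining the structural check,
// the batch KZG proof check and the inclusion proof check for one sidecar.
func validateIncomingSidecar(sidecar blocks.RODataColumn) error {
if err := VerifyDataColumnSidecar(sidecar); err != nil {
return errors.Wrap(err, "verify data column sidecar")
}
if err := VerifyDataColumnsSidecarKZGProofs([]blocks.RODataColumn{sidecar}); err != nil {
return errors.Wrap(err, "verify data column sidecar KZG proofs")
}
if err := VerifyDataColumnSidecarInclusionProof(sidecar); err != nil {
return errors.Wrap(err, "verify data column sidecar inclusion proof")
}
return nil
}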

View File

@@ -0,0 +1,304 @@
package peerdas_test
import (
"crypto/rand"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/p2p/enr"
)
func TestVerifyDataColumnSidecar(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
t.Run("no commitments", func(t *testing.T) {
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
})
t.Run("KZG commitments size mismatch", func(t *testing.T) {
kzgCommitments := make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
t.Run("KZG proofs size mismatch", func(t *testing.T) {
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
t.Run("nominal", func(t *testing.T) {
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.NoError(t, err)
})
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
generateSidecars := func(t *testing.T) []*ethpb.DataColumnSidecar {
const blobCount = int64(6)
dbBlock := util.NewBeaconBlockDeneb()
commitments := make([][]byte, 0, blobCount)
blobs := make([]kzg.Blob, 0, blobCount)
for i := range blobCount {
blob := getRandBlob(i)
commitment, _, err := generateCommitmentAndProof(&blob)
require.NoError(t, err)
commitments = append(commitments, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = commitments
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
cellsAndProofs := util.GenerateCellsAndProofs(t, blobs)
sidecars, err := peerdas.DataColumnSidecars(sBlock, cellsAndProofs)
require.NoError(t, err)
return sidecars
}
generateRODataColumnSidecars := func(t *testing.T, sidecars []*ethpb.DataColumnSidecar) []blocks.RODataColumn {
roDataColumnSidecars := make([]blocks.RODataColumn, 0, len(sidecars))
for _, sidecar := range sidecars {
roCol, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
roDataColumnSidecars = append(roDataColumnSidecars, roCol)
}
return roDataColumnSidecars
}
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateSidecars(t)
sidecars[0].Column[0][0]++ // It is OK to overflow
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
require.ErrorIs(t, err, peerdas.ErrInvalidKZGProof)
})
t.Run("nominal", func(t *testing.T) {
sidecars := generateSidecars(t)
roDataColumnSidecars := generateRODataColumnSidecars(t, sidecars)
err := peerdas.VerifyDataColumnsSidecarKZGProofs(roDataColumnSidecars)
require.NoError(t, err)
})
}
func Test_VerifyKZGInclusionProofColumn(t *testing.T) {
const (
blobCount = 3
columnIndex = 0
)
// Generate random KZG commitments for `blobCount` blobs.
kzgCommitments := make([][]byte, blobCount)
for i := 0; i < blobCount; i++ {
kzgCommitments[i] = make([]byte, 48)
_, err := rand.Read(kzgCommitments[i])
require.NoError(t, err)
}
pbBody := &ethpb.BeaconBlockBodyDeneb{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncAggregateSyncCommitteeBytesLength),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
},
BlobKzgCommitments: kzgCommitments,
}
root, err := pbBody.HashTreeRoot()
require.NoError(t, err)
body, err := blocks.NewBeaconBlockBody(pbBody)
require.NoError(t, err)
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(body)
require.NoError(t, err)
testCases := []struct {
name string
expectedError error
dataColumnSidecar *ethpb.DataColumnSidecar
}{
{
name: "nilSignedBlockHeader",
expectedError: peerdas.ErrNilBlockHeader,
dataColumnSidecar: &ethpb.DataColumnSidecar{},
},
{
name: "nilHeader",
expectedError: peerdas.ErrNilBlockHeader,
dataColumnSidecar: &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{},
},
},
{
name: "invalidBodyRoot",
expectedError: peerdas.ErrBadRootLength,
dataColumnSidecar: &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{},
},
},
},
{
name: "unverifiedMerkleProof",
expectedError: peerdas.ErrInvalidInclusionProof,
dataColumnSidecar: &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
BodyRoot: make([]byte, 32),
},
},
KzgCommitments: kzgCommitments,
},
},
{
name: "nominal",
expectedError: nil,
dataColumnSidecar: &ethpb.DataColumnSidecar{
KzgCommitments: kzgCommitments,
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
BodyRoot: root[:],
},
},
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
roDataColumn := blocks.RODataColumn{DataColumnSidecar: tc.dataColumnSidecar}
err = peerdas.VerifyDataColumnSidecarInclusionProof(roDataColumn)
if tc.expectedError == nil {
require.NoError(t, err)
return
}
require.ErrorIs(t, tc.expectedError, err)
})
}
}
func TestComputeSubnetForDataColumnSidecar(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DataColumnSidecarSubnetCount = 128
params.OverrideBeaconConfig(config)
require.Equal(t, uint64(0), peerdas.ComputeSubnetForDataColumnSidecar(0))
require.Equal(t, uint64(1), peerdas.ComputeSubnetForDataColumnSidecar(1))
require.Equal(t, uint64(0), peerdas.ComputeSubnetForDataColumnSidecar(128))
require.Equal(t, uint64(1), peerdas.ComputeSubnetForDataColumnSidecar(129))
}
func TestDataColumnSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DataColumnSidecarSubnetCount = 128
params.OverrideBeaconConfig(config)
input := map[uint64]bool{0: true, 1: true, 128: true, 129: true, 131: true}
expected := map[uint64]bool{0: true, 1: true, 3: true}
actual := peerdas.DataColumnSubnets(input)
require.Equal(t, len(expected), len(actual))
for k, v := range expected {
require.Equal(t, v, actual[k])
}
}
func TestCustodyGroupCountFromRecord(t *testing.T) {
t.Run("nil record", func(t *testing.T) {
_, err := peerdas.CustodyGroupCountFromRecord(nil)
require.ErrorIs(t, err, peerdas.ErrRecordNil)
})
t.Run("no cgc", func(t *testing.T) {
_, err := peerdas.CustodyGroupCountFromRecord(&enr.Record{})
require.ErrorIs(t, err, peerdas.ErrCannotLoadCustodyGroupCount)
})
t.Run("nominal", func(t *testing.T) {
const expected uint64 = 7
record := &enr.Record{}
record.Set(peerdas.Cgc(expected))
actual, err := peerdas.CustodyGroupCountFromRecord(record)
require.NoError(t, err)
require.Equal(t, expected, actual)
})
}
func createTestSidecar(t *testing.T, index uint64, column, kzgCommitments, kzgProofs [][]byte) blocks.RODataColumn {
pbSignedBeaconBlock := util.NewBeaconBlockDeneb()
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbSignedBeaconBlock)
require.NoError(t, err)
signedBlockHeader, err := signedBeaconBlock.Header()
require.NoError(t, err)
sidecar := &ethpb.DataColumnSidecar{
Index: index,
Column: column,
KzgCommitments: kzgCommitments,
KzgProofs: kzgProofs,
SignedBlockHeader: signedBlockHeader,
}
roSidecar, err := blocks.NewRODataColumn(sidecar)
require.NoError(t, err)
return roSidecar
}

View File

@@ -0,0 +1,76 @@
package peerdas
import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/config/params"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
)
// CanSelfReconstruct returns true if the node can self-reconstruct all the data columns from its custody group count.
func CanSelfReconstruct(custodyGroupCount uint64) bool {
total := params.BeaconConfig().NumberOfCustodyGroups
// If total is odd, then we need total / 2 + 1 columns to reconstruct.
// If total is even, then we need total / 2 columns to reconstruct.
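// For example, with 128 custody groups at least 64 are needed; with 65 groups at least 33 are needed.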
return custodyGroupCount >= (total+1)/2
}
// RecoverCellsAndProofs recovers the cells and proofs from the data column sidecars.
func RecoverCellsAndProofs(dataColumnSideCars []*ethpb.DataColumnSidecar) ([]kzg.CellsAndProofs, error) {
var wg errgroup.Group
dataColumnSideCarsCount := len(dataColumnSideCars)
if dataColumnSideCarsCount == 0 {
return nil, errors.New("no data column sidecars")
}
// Check if all columns have the same length.
blobCount := len(dataColumnSideCars[0].Column)
for _, sidecar := range dataColumnSideCars {
length := len(sidecar.Column)
if length != blobCount {
return nil, errors.New("columns do not have the same length")
}
}
// Recover cells and compute proofs in parallel.
recoveredCellsAndProofs := make([]kzg.CellsAndProofs, blobCount)
for blobIndex := 0; blobIndex < blobCount; blobIndex++ {
bIndex := blobIndex
wg.Go(func() error {
cellsIndices := make([]uint64, 0, dataColumnSideCarsCount)
cells := make([]kzg.Cell, 0, dataColumnSideCarsCount)
for _, sidecar := range dataColumnSideCars {
// Build the cell indices.
cellsIndices = append(cellsIndices, sidecar.Index)
// Get the cell.
column := sidecar.Column
cell := column[bIndex]
cells = append(cells, kzg.Cell(cell))
}
// Recover the cells and proofs for the corresponding blob
cellsAndProofs, err := kzg.RecoverCellsAndKZGProofs(cellsIndices, cells)
if err != nil {
return errors.Wrapf(err, "recover cells and KZG proofs for blob %d", bIndex)
}
recoveredCellsAndProofs[bIndex] = cellsAndProofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, err
}
return recoveredCellsAndProofs, nil
}
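For orientation, a minimal hypothetical sketch (not part of this diff) combining the two functions above; the helper name is illustrative only:
// reconstructIfPossible attempts a full recovery only when the node custodies enough groups.
func reconstructIfPossible(custodyGroupCount uint64, sidecars []*ethpb.DataColumnSidecar) ([]kzg.CellsAndProofs, error) {
if !CanSelfReconstruct(custodyGroupCount) {
return nil, errors.New("not enough custody groups to self-reconstruct")
}
return RecoverCellsAndProofs(sidecars)
}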

View File

@@ -0,0 +1,57 @@
package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestCanSelfReconstruct(t *testing.T) {
testCases := []struct {
name string
totalNumberOfCustodyGroups uint64
custodyNumberOfGroups uint64
expected bool
}{
{
name: "totalNumberOfCustodyGroups=64, custodyNumberOfGroups=31",
totalNumberOfCustodyGroups: 64,
custodyNumberOfGroups: 31,
expected: false,
},
{
name: "totalNumberOfCustodyGroups=64, custodyNumberOfGroups=32",
totalNumberOfCustodyGroups: 64,
custodyNumberOfGroups: 32,
expected: true,
},
{
name: "totalNumberOfCustodyGroups=65, custodyNumberOfGroups=32",
totalNumberOfCustodyGroups: 65,
custodyNumberOfGroups: 32,
expected: false,
},
{
name: "totalNumberOfCustodyGroups=65, custodyNumberOfGroups=33",
totalNumberOfCustodyGroups: 65,
custodyNumberOfGroups: 33,
expected: true,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Set the total number of columns.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.NumberOfCustodyGroups = tc.totalNumberOfCustodyGroups
params.OverrideBeaconConfig(cfg)
// Check if reconstruction is possible.
actual := peerdas.CanSelfReconstruct(tc.custodyNumberOfGroups)
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -0,0 +1,54 @@
package peerdas
import (
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/pkg/errors"
)
// ConstructDataColumnSidecars constructs data column sidecars from a block, blobs and their cell proofs.
// This is a convenience method, as blobs and cell proofs are common inputs.
func ConstructDataColumnSidecars(block interfaces.SignedBeaconBlock, blobs [][]byte, cellProofs [][]byte) ([]*ethpb.DataColumnSidecar, error) {
// Check if the block is at least a Fulu block.
if block.Version() < version.Fulu {
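// Data column sidecars only exist from Fulu onwards, so there is nothing to construct for earlier blocks.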
return nil, nil
}
numberOfColumns := params.BeaconConfig().NumberOfColumns
if uint64(len(blobs))*numberOfColumns != uint64(len(cellProofs)) {
return nil, fmt.Errorf("number of blobs and cell proofs do not match: %d * %d != %d", len(blobs), numberOfColumns, len(cellProofs))
}
cellsAndProofs := make([]kzg.CellsAndProofs, 0, len(blobs))
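// For each blob, compute its extended cells and pair them with the corresponding slice of cell proofs.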
for i, blob := range blobs {
var b kzg.Blob
copy(b[:], blob)
cells, err := kzg.ComputeCells(&b)
if err != nil {
return nil, err
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
proofs = append(proofs, kzg.Proof(cellProofs[idx]))
}
cellsAndProofs = append(cellsAndProofs, kzg.CellsAndProofs{
Cells: cells,
Proofs: proofs,
})
}
dataColumnSidecars, err := DataColumnSidecars(block, cellsAndProofs)
if err != nil {
return nil, errors.Wrap(err, "data column sidecars")
}
return dataColumnSidecars, nil
}

View File

@@ -0,0 +1,57 @@
package peerdas_test
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/sirupsen/logrus"
)
func generateCommitmentAndProof(blob *kzg.Blob) (*kzg.Commitment, *kzg.Proof, error) {
commitment, err := kzg.BlobToKZGCommitment(blob)
if err != nil {
return nil, nil, err
}
proof, err := kzg.ComputeBlobKZGProof(blob, commitment)
if err != nil {
return nil, nil, err
}
return &commitment, &proof, err
}
// Returns a random blob using the passed seed as entropy
func getRandBlob(seed int64) kzg.Blob {
var blob kzg.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := getRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
// Returns a serialized random field element in big-endian
func getRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}

View File

@@ -0,0 +1,30 @@
package peerdas
import (
beaconState "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/pkg/errors"
)
// ValidatorsCustodyRequirement returns the number of custody groups required for the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
totalNodeBalance := uint64(0)
for index := range validatorsIndex {
validator, err := state.ValidatorAtIndexReadOnly(index)
if err != nil {
return 0, errors.Wrapf(err, "validator at index %v", index)
}
totalNodeBalance += validator.EffectiveBalance()
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
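// One custody group is required per BalancePerAdditionalCustodyGroup of total effective balance,
// clamped between the validator custody requirement and the total number of custody groups.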
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroup), nil
}

View File

@@ -0,0 +1,55 @@
package peerdas_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestValidatorsCustodyRequirement(t *testing.T) {
testCases := []struct {
name string
count uint64
expected uint64
}{
{name: "0 validators", count: 0, expected: 8},
{name: "1 validator", count: 1, expected: 8},
{name: "8 validators", count: 8, expected: 8},
{name: "9 validators", count: 9, expected: 9},
{name: "100 validators", count: 100, expected: 100},
{name: "128 validators", count: 128, expected: 128},
{name: "129 validators", count: 129, expected: 128},
{name: "1000 validators", count: 1000, expected: 128},
}
const balance = uint64(32_000_000_000)
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
validators := make([]*ethpb.Validator, 0, tc.count)
for range tc.count {
validator := &ethpb.Validator{
EffectiveBalance: balance,
}
validators = append(validators, validator)
}
validatorsIndex := make(map[primitives.ValidatorIndex]bool)
for i := range tc.count {
validatorsIndex[primitives.ValidatorIndex(i)] = true
}
beaconState, err := state_native.InitializeFromProtoFulu(&ethpb.BeaconStateElectra{Validators: validators})
require.NoError(t, err)
actual, err := peerdas.ValidatorsCustodyRequirement(beaconState, validatorsIndex)
require.NoError(t, err)
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -59,7 +59,7 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_dgraph_io_ristretto//:go_default_library",
"@com_github_dgraph_io_ristretto_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_snappy//:go_default_library",

View File

@@ -40,7 +40,7 @@ func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadO
func (s *Store) getBlock(ctx context.Context, blockRoot [32]byte, tx *bolt.Tx) (interfaces.ReadOnlySignedBeaconBlock, error) {
if v, ok := s.blockCache.Get(string(blockRoot[:])); v != nil && ok {
return v.(interfaces.ReadOnlySignedBeaconBlock), nil
return v, nil
}
// This method allows the caller to pass in its tx if one is already open.
// Or if a nil value is used, a transaction will be managed internally.

View File

@@ -13,8 +13,10 @@ import (
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/io/file"
"github.com/dgraph-io/ristretto"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/dgraph-io/ristretto/v2"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -86,8 +88,8 @@ var blockedBuckets = [][]byte{
type Store struct {
db *bolt.DB
databasePath string
blockCache *ristretto.Cache
validatorEntryCache *ristretto.Cache
blockCache *ristretto.Cache[string, interfaces.ReadOnlySignedBeaconBlock]
validatorEntryCache *ristretto.Cache[[]byte, *ethpb.Validator]
stateSummaryCache *stateSummaryCache
ctx context.Context
}
@@ -156,7 +158,7 @@ func NewKVStore(ctx context.Context, dirPath string, opts ...KVStoreOption) (*St
return nil, err
}
boltDB.AllocSize = boltAllocSize
blockCache, err := ristretto.NewCache(&ristretto.Config{
blockCache, err := ristretto.NewCache(&ristretto.Config[string, interfaces.ReadOnlySignedBeaconBlock]{
NumCounters: 1000, // number of keys to track frequency of (1000).
MaxCost: BlockCacheSize, // maximum cost of cache (1000 Blocks).
BufferItems: 64, // number of keys per Get buffer.
@@ -165,7 +167,7 @@ func NewKVStore(ctx context.Context, dirPath string, opts ...KVStoreOption) (*St
return nil, err
}
validatorCache, err := ristretto.NewCache(&ristretto.Config{
validatorCache, err := ristretto.NewCache(&ristretto.Config[[]byte, *ethpb.Validator]{
NumCounters: NumOfValidatorEntries, // number of entries in cache (2 Million).
MaxCost: ValidatorEntryMaxCost, // maximum size of the cache (64Mb)
BufferItems: 64, // number of keys per Get buffer.

View File

@@ -744,14 +744,9 @@ func (s *Store) validatorEntries(ctx context.Context, blockRoot [32]byte) ([]*et
// get the entry bytes from the cache or from the DB.
v, ok := s.validatorEntryCache.Get(key)
if ok {
valEntry, vType := v.(*ethpb.Validator)
if vType {
validatorEntries = append(validatorEntries, valEntry)
validatorEntryCacheHit.Inc()
} else {
// this should never happen, but anyway it's good to bail out if one happens.
return errors.New("validator cache does not have proper object type")
}
valEntry := v
validatorEntries = append(validatorEntries, valEntry)
validatorEntryCacheHit.Inc()
} else {
// not in cache, so get it from the DB, decode it and add to the entry list.
valEntryBytes := valBkt.Get(key)

View File

@@ -321,15 +321,11 @@ func TestState_CanSaveRetrieveValidatorEntriesFromCache(t *testing.T) {
hash, hashErr := stateValidators[i].HashTreeRoot()
assert.NoError(t, hashErr)
data, ok := db.validatorEntryCache.Get(string(hash[:]))
data, ok := db.validatorEntryCache.Get(hash[:])
assert.Equal(t, true, ok)
require.NotNil(t, data)
valEntry, vType := data.(*ethpb.Validator)
assert.Equal(t, true, vType)
require.NotNil(t, valEntry)
require.DeepSSZEqual(t, stateValidators[i], valEntry, "validator entry is not matching")
require.DeepSSZEqual(t, stateValidators[i], data, "validator entry is not matching")
}
// check if all the validator entries are still intact in the validator entry bucket.
@@ -447,7 +443,7 @@ func TestState_DeleteState(t *testing.T) {
assert.NoError(t, hashErr)
v, found := db.validatorEntryCache.Get(hash[:])
require.Equal(t, false, found)
require.Equal(t, nil, v)
require.IsNil(t, v)
}
// check if the index of the first state is deleted.

View File

@@ -1032,6 +1032,7 @@ func (b *BeaconNode) registerPrometheusService(_ *cli.Context) error {
}
service := prometheus.NewService(
b.cliCtx.Context,
fmt.Sprintf("%s:%d", b.cliCtx.String(cmd.MonitoringHostFlag.Name), b.cliCtx.Int(flags.MonitoringPortFlag.Name)),
b.services,
additionalHandlers...,

View File

@@ -285,13 +285,13 @@ func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, upda
return err
}
// TODO: should we check if the update is too early or too late to broadcast?
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(forkDigest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client optimistic update")
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
log.Debug("Successfully broadcast light client optimistic update")
return nil
}
@@ -311,13 +311,13 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
return err
}
// TODO: should we check if the update is too early or too late to broadcast?
if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client finality update")
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
return err
}
log.Debug("Successfully broadcast light client finality update")
return nil
}

View File

@@ -5,6 +5,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -36,6 +37,8 @@ type Broadcaster interface {
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.

View File

@@ -20,6 +20,7 @@ go_library(
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//testing/require:go_default_library",

View File

@@ -5,6 +5,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -148,6 +149,16 @@ func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar)
return nil
}
// BroadcastLightClientOptimisticUpdate -- fake.
func (*FakeP2P) BroadcastLightClientOptimisticUpdate(_ context.Context, _ interfaces.LightClientOptimisticUpdate) error {
return nil
}
// BroadcastLightClientFinalityUpdate -- fake.
func (*FakeP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfaces.LightClientFinalityUpdate) error {
return nil
}
// InterceptPeerDial -- fake.
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true

View File

@@ -5,6 +5,7 @@ import (
"sync"
"sync/atomic"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
)
@@ -48,6 +49,18 @@ func (m *MockBroadcaster) BroadcastBlob(context.Context, uint64, *ethpb.BlobSide
return nil
}
// BroadcastLightClientOptimisticUpdate records a broadcast occurred.
func (m *MockBroadcaster) BroadcastLightClientOptimisticUpdate(_ context.Context, _ interfaces.LightClientOptimisticUpdate) error {
m.BroadcastCalled.Store(true)
return nil
}
// BroadcastLightClientFinalityUpdate records a broadcast occurred.
func (m *MockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfaces.LightClientFinalityUpdate) error {
m.BroadcastCalled.Store(true)
return nil
}
// NumMessages returns the number of messages broadcasted.
func (m *MockBroadcaster) NumMessages() int {
m.msgLock.Lock()

View File

@@ -13,6 +13,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -207,6 +208,18 @@ func (p *TestP2P) BroadcastBlob(context.Context, uint64, *ethpb.BlobSidecar) err
return nil
}
// BroadcastLightClientOptimisticUpdate broadcasts an optimistic update for mock.
func (p *TestP2P) BroadcastLightClientOptimisticUpdate(_ context.Context, _ interfaces.LightClientOptimisticUpdate) error {
p.BroadcastCalled.Store(true)
return nil
}
// BroadcastLightClientFinalityUpdate broadcasts a finality update for mock.
func (p *TestP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfaces.LightClientFinalityUpdate) error {
p.BroadcastCalled.Store(true)
return nil
}
// SetStreamHandler for RPC.
func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler) {
p.BHost.SetStreamHandler(protocol.ID(topic), handler)

View File

@@ -77,8 +77,9 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
// If the StaticPeerID flag is not set and the Fulu epoch is not set, return the private key.
// Starting at Fulu, we don't want to generate a new key every time, to avoid custody columns changes.
if !(cfg.StaticPeerID || params.FuluEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}

View File

@@ -66,7 +66,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promhttp:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//plugin/ocgrpc:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_google_golang_org_grpc_otelgrpc//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//peer:go_default_library",

View File

@@ -200,7 +200,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 169, len(data))
assert.Equal(t, 175, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -545,6 +545,18 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "9", v)
case "MAX_REQUEST_BLOB_SIDECARS_ELECTRA":
assert.Equal(t, "1152", v)
case "NUMBER_OF_CUSTODY_GROUPS":
assert.Equal(t, "128", v)
case "BALANCE_PER_ADDITIONAL_CUSTODY_GROUP":
assert.Equal(t, "32000000000", v)
case "CUSTODY_REQUIREMENT":
assert.Equal(t, "4", v)
case "SAMPLES_PER_SLOT":
assert.Equal(t, "8", v)
case "VALIDATOR_CUSTODY_REQUIREMENT":
assert.Equal(t, "8", v)
case "MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
case "MAX_BLOB_COMMITMENTS_PER_BLOCK":
assert.Equal(t, "95", v)
case "MAX_BYTES_PER_TRANSACTION":
@@ -559,6 +571,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "100", v)
case "KZG_COMMITMENT_INCLUSION_PROOF_DEPTH":
assert.Equal(t, "101", v)
case "MAX_BLOBS_PER_BLOCK_FULU":
assert.Equal(t, "12", v)
case "BLOB_SIDECAR_SUBNET_COUNT":
assert.Equal(t, "102", v)
case "BLOB_SIDECAR_SUBNET_COUNT_ELECTRA":

View File

@@ -103,7 +103,6 @@ go_library(
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",

View File

@@ -9,6 +9,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"google.golang.org/grpc/codes"
@@ -80,9 +81,11 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
return nil, status.Errorf(codes.Internal, "Could not compute proposer slots: %v", err)
}
ctx, span := trace.StartSpan(ctx, "getDuties.BuildResponse")
defer span.End()
validatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
nextValidatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
for _, pubKey := range req.PublicKeys {
if ctx.Err() != nil {
return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())

View File

@@ -47,7 +47,7 @@ import (
grpcprometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/plugin/ocgrpc"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/peer"
@@ -146,7 +146,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
log.WithField("address", address).Info("gRPC server listening on port")
opts := []grpc.ServerOption{
grpc.StatsHandler(&ocgrpc.ServerHandler{}),
grpc.StatsHandler(otelgrpc.NewServerHandler()),
grpc.StreamInterceptor(middleware.ChainStreamServer(
recovery.StreamServerInterceptor(
recovery.WithRecoveryHandlerContext(tracing.RecoveryHandlerFunc),

View File

@@ -68,7 +68,7 @@ type ReadOnlyBeaconState interface {
Slot() primitives.Slot
Fork() *ethpb.Fork
LatestBlockHeader() *ethpb.BeaconBlockHeader
HistoricalRoots() ([][]byte, error)
HistoricalRoots() [][]byte
HistoricalSummaries() ([]*ethpb.HistoricalSummary, error)
Slashings() []uint64
FieldReferencesCount() map[string]uint64

View File

@@ -73,15 +73,15 @@ func (b *BeaconState) forkVal() *ethpb.Fork {
}
// HistoricalRoots based on epochs stored in the beacon state.
func (b *BeaconState) HistoricalRoots() ([][]byte, error) {
func (b *BeaconState) HistoricalRoots() [][]byte {
if b.historicalRoots == nil {
return nil, nil
return nil
}
b.lock.RLock()
defer b.lock.RUnlock()
return b.historicalRoots.Slice(), nil
return b.historicalRoots.Slice()
}
// HistoricalSummaries of the beacon state.

View File

@@ -39,7 +39,7 @@ const (
// This specifies the limit till which we process all dirty indices for a certain field.
// If we have more dirty indices than the threshold, then we rebuild the whole trie. This
// comes due to the fact that O(alogn) > O(n) beyond a certain value of a.
indicesLimit = 8000
indicesLimit = 20000
)
// SetGenesisTime for the beacon state.

View File

@@ -400,11 +400,11 @@ func TestDuplicateDirtyIndices(t *testing.T) {
newState.dirtyIndices[types.Balances] = append(newState.dirtyIndices[types.Balances], []uint64{0, 1, 2, 3, 4}...)
// We would remove the duplicates and stay under the threshold
newState.addDirtyIndices(types.Balances, []uint64{9997, 9998})
newState.addDirtyIndices(types.Balances, []uint64{20997, 20998})
assert.Equal(t, false, newState.rebuildTrie[types.Balances])
// We would trigger above the threshold.
newState.addDirtyIndices(types.Balances, []uint64{10000, 10001, 10002, 10003})
newState.addDirtyIndices(types.Balances, []uint64{21000, 21001, 21002, 21003})
assert.Equal(t, true, newState.rebuildTrie[types.Balances])
}

View File

@@ -133,6 +133,20 @@ func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
s.activeSyncSubnetIndices,
func(currentSlot primitives.Slot) []uint64 { return []uint64{} },
)
if features.Get().EnableLightClient {
s.subscribe(
p2p.LightClientOptimisticUpdateTopicFormat,
s.validateLightClientOptimisticUpdate,
s.lightClientOptimisticUpdateSubscriber,
digest,
)
s.subscribe(
p2p.LightClientFinalityUpdateTopicFormat,
s.validateLightClientFinalityUpdate,
s.lightClientFinalityUpdateSubscriber,
digest,
)
}
}
// New gossip topic in Capella

View File

@@ -0,0 +1,3 @@
### Added
- Enable light client gossip for optimistic and finality updates.

View File

@@ -0,0 +1,3 @@
### Changed
- Upgraded ristretto to v2.2.0 for RISC-V support.

View File

@@ -0,0 +1,7 @@
### Fixed
- Fixed gocognit complexity on the propose block REST path.
### Ignored
- Removed jsonify functions that duplicated FromConsensus functions in the structs package for propose block REST calls.

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed the wrong field name being returned for pending partial withdrawals in the state JSON representation, as described in https://github.com/ethereum/consensus-specs/blob/dev/specs/electra/beacon-chain.md#pendingpartialwithdrawal

View File

@@ -0,0 +1,2 @@
### Added
- Implement peerDAS core functions.

View File

@@ -0,0 +1,3 @@
### Changed
- Increase indices limit in field trie rebuilding.

View File

@@ -0,0 +1,3 @@
### Added
- Force duties start on received blocks.

View File

@@ -0,0 +1,2 @@
### Ignored
- HistoricalRoots should not return an error.

View File

@@ -0,0 +1,3 @@
### Added
- Added additional tracing spans for the GetDuties routine

changelog/pvl_otel.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Use otelgrpc for tracing the gRPC server and client.

changelog/tt_rice.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Increase sepolia gas limit to 60M.

changelog/tt_udon.md Normal file
View File

@@ -0,0 +1,3 @@
### Changed
- Update spec to v1.5.0 compliance, which changes the minimal execution requests size.

View File

@@ -242,7 +242,7 @@ type BeaconChainConfig struct {
MaxPerEpochActivationChurnLimit uint64 `yaml:"MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT" spec:"true"` // MaxPerEpochActivationChurnLimit is the maximum amount of churn allotted for validator activation.
MinEpochsForBlobsSidecarsRequest primitives.Epoch `yaml:"MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS" spec:"true"` // MinEpochsForBlobsSidecarsRequest is the minimum number of epochs the node will keep the blobs for.
MaxRequestBlobSidecars uint64 `yaml:"MAX_REQUEST_BLOB_SIDECARS" spec:"true"` // MaxRequestBlobSidecars is the maximum number of blobs to request in a single request.
MaxRequestBlobSidecarsElectra uint64 `yaml:"MAX_REQUEST_BLOB_SIDECARS_ELECTRA" spec:"true"` // MaxRequestBlobSidecarsElectra is the maximum number of blobs to request in a single request.
MaxRequestBlobSidecarsElectra uint64 `yaml:"MAX_REQUEST_BLOB_SIDECARS_ELECTRA" spec:"true"` // MaxRequestBlobSidecarsElectra is the maximum number of blobs to request in a single request after the electra epoch.
MaxRequestBlocksDeneb uint64 `yaml:"MAX_REQUEST_BLOCKS_DENEB" spec:"true"` // MaxRequestBlocksDeneb is the maximum number of blocks in a single request after the deneb epoch.
FieldElementsPerBlob uint64 `yaml:"FIELD_ELEMENTS_PER_BLOB" spec:"true"` // FieldElementsPerBlob is the number of field elements that constitute a single blob.
MaxBlobCommitmentsPerBlock uint64 `yaml:"MAX_BLOB_COMMITMENTS_PER_BLOCK" spec:"true"` // MaxBlobCommitmentsPerBlock is the maximum number of KZG commitments that a block can have.
@@ -265,14 +265,17 @@ type BeaconChainConfig struct {
MaxDepositRequestsPerPayload uint64 `yaml:"MAX_DEPOSIT_REQUESTS_PER_PAYLOAD" spec:"true"` // MaxDepositRequestsPerPayload is the maximum number of execution layer deposits in each payload
UnsetDepositRequestsStartIndex uint64 `yaml:"UNSET_DEPOSIT_REQUESTS_START_INDEX" spec:"true"` // UnsetDepositRequestsStartIndex is used to check the start index for eip6110
// PeerDAS Values
SamplesPerSlot uint64 `yaml:"SAMPLES_PER_SLOT"` // SamplesPerSlot refers to the number of random samples a node queries per slot.
CustodyRequirement uint64 `yaml:"CUSTODY_REQUIREMENT"` // CustodyRequirement refers to the minimum amount of subnets a peer must custody and serve samples from.
MinEpochsForDataColumnSidecarsRequest primitives.Epoch `yaml:"MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS"` // MinEpochsForDataColumnSidecarsRequest is the minimum number of epochs the node will keep the data columns for.
MaxRequestDataColumnSidecars uint64 `yaml:"MAX_REQUEST_DATA_COLUMN_SIDECARS" spec:"true"` // MaxRequestDataColumnSidecars is the maximum number of data column sidecars in a single request
MaxCellsInExtendedMatrix uint64 `yaml:"MAX_CELLS_IN_EXTENDED_MATRIX" spec:"true"` // MaxCellsInExtendedMatrix is the full data of one-dimensional erasure coding extended blobs (in row major format).
NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns in the extended data matrix.
DataColumnSidecarSubnetCount uint64 `yaml:"DATA_COLUMN_SIDECAR_SUBNET_COUNT" spec:"true"` // DataColumnSidecarSubnetCount is the number of data column sidecar subnets used in the gossipsub protocol
// Values introduced in Fulu upgrade
NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns in the extended data matrix.
SamplesPerSlot uint64 `yaml:"SAMPLES_PER_SLOT" spec:"true"` // SamplesPerSlot refers to the number of random samples a node queries per slot.
NumberOfCustodyGroups uint64 `yaml:"NUMBER_OF_CUSTODY_GROUPS" spec:"true"` // NumberOfCustodyGroups available for nodes to custody.
CustodyRequirement uint64 `yaml:"CUSTODY_REQUIREMENT" spec:"true"` // CustodyRequirement refers to the minimum amount of subnets a peer must custody and serve samples from.
MinEpochsForDataColumnSidecarsRequest primitives.Epoch `yaml:"MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS" spec:"true"` // MinEpochsForDataColumnSidecarsRequest is the minimum number of epochs the node will keep the data columns for.
MaxCellsInExtendedMatrix uint64 `yaml:"MAX_CELLS_IN_EXTENDED_MATRIX"` // MaxCellsInExtendedMatrix is the full data of one-dimensional erasure coding extended blobs (in row major format).
DataColumnSidecarSubnetCount uint64 `yaml:"DATA_COLUMN_SIDECAR_SUBNET_COUNT" spec:"true"` // DataColumnSidecarSubnetCount is the number of data column sidecar subnets used in the gossipsub protocol
MaxRequestDataColumnSidecars uint64 `yaml:"MAX_REQUEST_DATA_COLUMN_SIDECARS" spec:"true"` // MaxRequestDataColumnSidecars is the maximum number of data column sidecars in a single request
ValidatorCustodyRequirement uint64 `yaml:"VALIDATOR_CUSTODY_REQUIREMENT" spec:"true"` // ValidatorCustodyRequirement is the minimum number of custody groups an honest node with validators attached custodies and serves samples from
BalancePerAdditionalCustodyGroup uint64 `yaml:"BALANCE_PER_ADDITIONAL_CUSTODY_GROUP" spec:"true"` // BalancePerAdditionalCustodyGroup is the balance increment corresponding to one additional group to custody.
// Networking Specific Parameters
MaxPayloadSize uint64 `yaml:"MAX_PAYLOAD_SIZE" spec:"true"` // MAX_PAYLOAD_SIZE is the maximum allowed size of uncompressed payload in gossip messages and rpc chunks.
@@ -304,6 +307,10 @@ type BeaconChainConfig struct {
// DeprecatedTargetBlobsPerBlockElectra defines the target number of blobs per block post Electra hard fork.
// Deprecated: This field is no longer supported. Avoid using it.
DeprecatedTargetBlobsPerBlockElectra int `yaml:"TARGET_BLOBS_PER_BLOCK_ELECTRA" spec:"true"`
// DeprecatedMaxBlobsPerBlockFulu defines the max blobs that could exist in a block post Fulu hard fork.
// Deprecated: This field is no longer supported. Avoid using it.
DeprecatedMaxBlobsPerBlockFulu int `yaml:"MAX_BLOBS_PER_BLOCK_FULU" spec:"true"`
}
// InitializeForkSchedule initializes the schedules forks baked into the config.
@@ -389,32 +396,49 @@ func (b *BeaconChainConfig) TargetBlobsPerBlock(slot primitives.Slot) int {
if primitives.Epoch(slot.DivSlot(b.SlotsPerEpoch)) >= b.ElectraForkEpoch {
return b.DeprecatedTargetBlobsPerBlockElectra
}
return b.DeprecatedMaxBlobsPerBlock / 2
}
// MaxBlobsPerBlock returns the maximum number of blobs per block for the given slot,
// adjusting for the Electra fork.
// MaxBlobsPerBlock returns the maximum number of blobs per block for the given slot.
func (b *BeaconChainConfig) MaxBlobsPerBlock(slot primitives.Slot) int {
if primitives.Epoch(slot.DivSlot(b.SlotsPerEpoch)) >= b.ElectraForkEpoch {
epoch := primitives.Epoch(slot.DivSlot(b.SlotsPerEpoch))
if epoch >= b.FuluForkEpoch {
return b.DeprecatedMaxBlobsPerBlockFulu
}
if epoch >= b.ElectraForkEpoch {
return b.DeprecatedMaxBlobsPerBlockElectra
}
return b.DeprecatedMaxBlobsPerBlock
}
// MaxBlobsPerBlockByVersion returns the maximum number of blobs per block for the given fork version
func (b *BeaconChainConfig) MaxBlobsPerBlockByVersion(v int) int {
if v >= version.Fulu {
return b.DeprecatedMaxBlobsPerBlockFulu
}
if v >= version.Electra {
return b.DeprecatedMaxBlobsPerBlockElectra
}
return b.DeprecatedMaxBlobsPerBlock
}
// MaxBlobsPerBlockAtEpoch returns the maximum number of blobs per block for the given epoch,
// adjusting for the Electra and Fulu forks.
func (b *BeaconChainConfig) MaxBlobsPerBlockAtEpoch(epoch primitives.Epoch) int {
if epoch >= b.FuluForkEpoch {
return b.DeprecatedMaxBlobsPerBlockFulu
}
if epoch >= b.ElectraForkEpoch {
return b.DeprecatedMaxBlobsPerBlockElectra
}
return b.DeprecatedMaxBlobsPerBlock
}
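Taken together, these accessors resolve the blob cap newest fork first (Fulu, then Electra, then the pre-Electra default). A minimal usage sketch follows; the import paths are assumed from the Prysm v6 packages referenced elsewhere in this diff, and the 6/9/12 values are the mainnet defaults set below.
import (
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/runtime/version"
)
// blobCaps is a hypothetical helper showing the three ways to look up the per-block blob limit.
func blobCaps(slot primitives.Slot) (bySlot, byEpoch, byVersion int) {
	cfg := params.BeaconConfig()
	bySlot = cfg.MaxBlobsPerBlock(slot)                                                      // 6, 9, or 12 on mainnet depending on the slot's fork
	byEpoch = cfg.MaxBlobsPerBlockAtEpoch(primitives.Epoch(slot.DivSlot(cfg.SlotsPerEpoch))) // same answer, keyed by epoch
	byVersion = cfg.MaxBlobsPerBlockByVersion(version.Fulu)                                  // keyed by fork version, here always 12
	return bySlot, byEpoch, byVersion
}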
@@ -432,9 +456,9 @@ func ElectraEnabled() bool {
return BeaconConfig().ElectraForkEpoch < math.MaxUint64
}
// PeerDASEnabled centralizes the check to determine if code paths
// that are specific to peerdas should be allowed to execute.
func PeerDASEnabled() bool {
// FuluEnabled centralizes the check to determine if code paths that are specific to Fulu should be allowed to execute.
// This will make it easier to find call sites that do this kind of check and remove them post-fulu.
func FuluEnabled() bool {
return BeaconConfig().FuluForkEpoch < math.MaxUint64
}
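A hedged sketch of the intended call-site pattern, so Fulu/peerDAS-only branches stay easy to find and delete once the fork is permanent (the surrounding function is hypothetical):
// Hypothetical call site: gate peerDAS-only work behind the Fulu check.
func maybeProcessDataColumns() {
	if !params.FuluEnabled() {
		return // Fulu fork epoch not configured; skip peerDAS paths.
	}
	// peerDAS-specific work goes here.
}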


@@ -141,9 +141,14 @@ func TestMaxBlobsPerBlockByVersion(t *testing.T) {
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockElectra,
},
{
name: "Version above Electra",
v: version.Electra + 1,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockElectra,
name: "Version equal to Fulu",
v: version.Fulu,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockFulu,
},
{
name: "Version above Fulu",
v: version.Fulu + 1,
want: params.BeaconConfig().DeprecatedMaxBlobsPerBlockFulu,
},
}


@@ -241,6 +241,7 @@ func ConfigToYaml(cfg *BeaconChainConfig) []byte {
fmt.Sprintf("MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA: %d", cfg.MinPerEpochChurnLimitElectra),
fmt.Sprintf("MAX_BLOBS_PER_BLOCK: %d", cfg.DeprecatedMaxBlobsPerBlock),
fmt.Sprintf("MAX_BLOBS_PER_BLOCK_ELECTRA: %d", cfg.DeprecatedMaxBlobsPerBlockElectra),
fmt.Sprintf("MAX_BLOBS_PER_BLOCK_FULU: %d", cfg.DeprecatedMaxBlobsPerBlockFulu),
}
yamlFile := []byte(strings.Join(lines, "\n"))


@@ -25,7 +25,6 @@ import (
// IMPORTANT: Use one field per line and sort these alphabetically to reduce conflicts.
var placeholderFields = []string{
"ATTESTATION_DEADLINE",
"BALANCE_PER_ADDITIONAL_CUSTODY_GROUP",
"BLOB_SIDECAR_SUBNET_COUNT_FULU",
"EIP6110_FORK_EPOCH",
"EIP6110_FORK_VERSION",
@@ -38,15 +37,15 @@ var placeholderFields = []string{
"EIP7805_FORK_EPOCH",
"EIP7805_FORK_VERSION",
"EPOCHS_PER_SHUFFLING_PHASE",
"MAX_BLOBS_PER_BLOCK_FULU",
"MAX_BYTES_PER_INCLUSION_LIST",
"MAX_REQUEST_BLOB_SIDECARS_FULU",
"MAX_REQUEST_INCLUSION_LIST",
"MAX_REQUEST_PAYLOADS", // Compile time constant on BeaconBlockBody.ExecutionRequests
"NUMBER_OF_CUSTODY_GROUPS",
"PROPOSER_INCLUSION_LIST_CUT_OFF",
"PROPOSER_SCORE_BOOST_EIP7732",
"PROPOSER_SELECTION_GAP",
"TARGET_NUMBER_OF_PEERS",
"UPDATE_TIMEOUT",
"VALIDATOR_CUSTODY_REQUIREMENT",
"VIEW_FREEZE_DEADLINE",
"WHISK_EPOCHS_PER_SHUFFLING_PHASE",
"WHISK_FORK_EPOCH",


@@ -37,6 +37,7 @@ var mainnetNetworkConfig = &NetworkConfig{
ETH2Key: "eth2",
AttSubnetKey: "attnets",
SyncCommsSubnetKey: "syncnets",
CustodyGroupCountKey: "cgc",
MinimumPeersInSubnetSearch: 20,
ContractDeploymentBlock: 11184524, // Note: contract was deployed in block 11052984 but no transactions were sent until 11184524.
BootstrapNodes: []string{
@@ -286,10 +287,9 @@ var mainnetBeaconConfig = &BeaconChainConfig{
FieldElementsPerBlob: 4096,
MaxBlobCommitmentsPerBlock: 4096,
KzgCommitmentInclusionProofDepth: 17,
DeprecatedMaxBlobsPerBlock: 6,
// Values related to electra
MaxRequestDataColumnSidecars: 16384,
DataColumnSidecarSubnetCount: 128,
MinPerEpochChurnLimitElectra: 128_000_000_000,
MaxPerEpochActivationExitChurnLimit: 256_000_000_000,
MaxEffectiveBalanceElectra: 2048_000_000_000,
@@ -306,13 +306,22 @@ var mainnetBeaconConfig = &BeaconChainConfig{
MaxWithdrawalRequestsPerPayload: 16,
MaxDepositRequestsPerPayload: 8192, // 2**13 (= 8192)
UnsetDepositRequestsStartIndex: math.MaxUint64,
DeprecatedMaxBlobsPerBlockElectra: 9,
DeprecatedTargetBlobsPerBlockElectra: 6,
MaxRequestBlobSidecarsElectra: 1152,
// PeerDAS
// Values related to fulu
MaxRequestDataColumnSidecars: 16384,
DataColumnSidecarSubnetCount: 128,
NumberOfColumns: 128,
MaxCellsInExtendedMatrix: 768,
SamplesPerSlot: 8,
NumberOfCustodyGroups: 128,
CustodyRequirement: 4,
MinEpochsForDataColumnSidecarsRequest: 4096,
MaxCellsInExtendedMatrix: 768,
ValidatorCustodyRequirement: 8,
BalancePerAdditionalCustodyGroup: 32_000_000_000,
DeprecatedMaxBlobsPerBlockFulu: 12,
// Values related to networking parameters.
MaxPayloadSize: 10 * 1 << 20, // 10 MiB
@@ -330,11 +339,6 @@ var mainnetBeaconConfig = &BeaconChainConfig{
AttestationSubnetPrefixBits: 6,
SubnetsPerNode: 2,
NodeIdBits: 256,
DeprecatedMaxBlobsPerBlock: 6,
DeprecatedMaxBlobsPerBlockElectra: 9,
DeprecatedTargetBlobsPerBlockElectra: 6,
MaxRequestBlobSidecarsElectra: 1152,
}
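For orientation, a back-of-envelope reading of the Fulu custody values above; the group-to-column relation and the validator custody formula are assumptions taken from the Fulu consensus spec, not from Prysm code shown in this diff.
func custodyBackOfEnvelope() (minColumns, validatorGroups uint64) {
	columnsPerGroup := uint64(128) / uint64(128) // NumberOfColumns / NumberOfCustodyGroups = 1 on mainnet
	minColumns = uint64(4) * columnsPerGroup     // CustodyRequirement groups -> at least 4 columns for a non-staking node
	// A node with 640 ETH (640e9 Gwei) of attached effective balance (assumed spec relation):
	// max(ValidatorCustodyRequirement, balance / BalancePerAdditionalCustodyGroup) = max(8, 20) = 20 groups.
	validatorGroups = max(uint64(8), uint64(640_000_000_000)/uint64(32_000_000_000))
	return minColumns, validatorGroups
}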
// MainnetTestConfig provides a version of the mainnet config that has a different name


@@ -112,8 +112,6 @@ func MinimalSpecConfig() *BeaconChainConfig {
minimalConfig.MaxPerEpochActivationExitChurnLimit = 128000000000
minimalConfig.PendingConsolidationsLimit = 64
minimalConfig.MaxPartialWithdrawalsPerPayload = 1
minimalConfig.MaxWithdrawalRequestsPerPayload = 2
minimalConfig.MaxDepositRequestsPerPayload = 4
minimalConfig.PendingPartialWithdrawalsLimit = 64
minimalConfig.MaxPendingPartialsPerWithdrawalsSweep = 2
minimalConfig.PendingDepositsLimit = 134217728


@@ -8,9 +8,10 @@ import (
// NetworkConfig defines the spec based network parameters.
type NetworkConfig struct {
// DiscoveryV5 Config
ETH2Key string // ETH2Key is the ENR key of the Ethereum consensus object in an enr.
AttSubnetKey string // AttSubnetKey is the ENR key of the subnet bitfield in the enr.
SyncCommsSubnetKey string // SyncCommsSubnetKey is the ENR key of the sync committee subnet bitfield in the enr.
ETH2Key string // ETH2Key is the ENR key of the Ethereum consensus object.
AttSubnetKey string // AttSubnetKey is the ENR key of the subnet bitfield.
SyncCommsSubnetKey string // SyncCommsSubnetKey is the ENR key of the sync committee subnet bitfield.
CustodyGroupCountKey string // CustodyGroupCountKey is the ENR key of the custody group count.
MinimumPeersInSubnetSearch uint64 // PeersInSubnetSearch is the required amount of peers that we need to be able to lookup in a subnet search.
// Chain Network Config


@@ -50,6 +50,7 @@ func SepoliaConfig() *BeaconChainConfig {
cfg.FuluForkVersion = []byte{0x90, 0x00, 0x00, 0x75} // TODO: Define sepolia fork version for fulu. This is a placeholder value.
cfg.TerminalTotalDifficulty = "17000000000000000"
cfg.DepositContractAddress = "0x7f02C3E3c98b133055B8B348B2Ac625669Ed295D"
cfg.DefaultBuilderGasLimit = uint64(60000000)
cfg.InitializeForkSchedule()
return cfg
}


@@ -12,6 +12,7 @@ go_library(
"proto.go",
"roblob.go",
"roblock.go",
"rodatacolumn.go",
"setters.go",
"types.go",
],
@@ -51,6 +52,7 @@ go_test(
"proto_test.go",
"roblob_test.go",
"roblock_test.go",
"rodatacolumn_test.go",
],
embed = [":go_default_library"],
deps = [


@@ -80,8 +80,38 @@ func MerkleProofKZGCommitment(body interfaces.ReadOnlyBeaconBlockBody, index int
return proof, nil
}
// leavesFromCommitments hashes each commitment to construct a slice of roots
func leavesFromCommitments(commitments [][]byte) [][]byte {
// MerkleProofKZGCommitments constructs a Merkle proof of inclusion of the KZG
// commitments into the Beacon Block with the given `body`
func MerkleProofKZGCommitments(body interfaces.ReadOnlyBeaconBlockBody) ([][]byte, error) {
bodyVersion := body.Version()
if bodyVersion < version.Deneb {
return nil, errUnsupportedBeaconBlockBody
}
membersRoots, err := topLevelRoots(body)
if err != nil {
return nil, errors.Wrap(err, "top level roots")
}
sparse, err := trie.GenerateTrieFromItems(membersRoots, logBodyLength)
if err != nil {
return nil, errors.Wrap(err, "generate trie from items")
}
proof, err := sparse.MerkleProof(kzgPosition)
if err != nil {
return nil, errors.Wrap(err, "merkle proof")
}
// Remove the last element as it is a mix in with the number of
// elements in the trie.
proof = proof[:len(proof)-1]
return proof, nil
}
// LeavesFromCommitments hashes each commitment to construct a slice of roots
func LeavesFromCommitments(commitments [][]byte) [][]byte {
leaves := make([][]byte, len(commitments))
for i, kzg := range commitments {
chunk := makeChunk(kzg)
@@ -105,7 +135,7 @@ func bodyProof(commitments [][]byte, index int) ([][]byte, error) {
if index < 0 || index >= len(commitments) {
return nil, errInvalidIndex
}
leaves := leavesFromCommitments(commitments)
leaves := LeavesFromCommitments(commitments)
sparse, err := trie.GenerateTrieFromItems(leaves, field_params.LogMaxBlobCommitments)
if err != nil {
return nil, err


@@ -6,6 +6,7 @@ import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/container/trie"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -32,7 +33,7 @@ func Test_MerkleProofKZGCommitment_Altair(t *testing.T) {
require.ErrorIs(t, errUnsupportedBeaconBlockBody, err)
}
func Test_MerkleProofKZGCommitment(t *testing.T) {
func buildTestKzgsAndBody(t *testing.T) ([][]byte, interfaces.ReadOnlyBeaconBlockBody) {
kzgs := make([][]byte, 3)
kzgs[0] = make([]byte, 48)
_, err := rand.Read(kzgs[0])
@@ -69,8 +70,15 @@ func Test_MerkleProofKZGCommitment(t *testing.T) {
body, err := NewBeaconBlockBody(pbBody)
require.NoError(t, err)
index := 1
_, err = MerkleProofKZGCommitment(body, 10)
return kzgs, body
}
func Test_MerkleProofKZGCommitment(t *testing.T) {
const index = 1
kzgs, body := buildTestKzgsAndBody(t)
_, err := MerkleProofKZGCommitment(body, 10)
require.ErrorIs(t, errInvalidIndex, err)
proof, err := MerkleProofKZGCommitment(body, index)
require.NoError(t, err)
@@ -104,6 +112,40 @@ func Test_MerkleProofKZGCommitment(t *testing.T) {
require.Equal(t, true, trie.VerifyMerkleProof(root[:], chunk[0][:], uint64(index+KZGOffset), proof))
}
func TestMerkleProofKZGCommitments(t *testing.T) {
t.Run("invalid version", func(t *testing.T) {
pbBody := &ethpb.BeaconBlockBodyAltair{}
body, err := NewBeaconBlockBody(pbBody)
require.NoError(t, err)
_, err = MerkleProofKZGCommitments(body)
require.ErrorIs(t, errUnsupportedBeaconBlockBody, err)
})
t.Run("nominal", func(t *testing.T) {
kzgs, body := buildTestKzgsAndBody(t)
proof, err := MerkleProofKZGCommitments(body)
require.NoError(t, err)
commitmentsRoot, err := getBlobKzgCommitmentsRoot(kzgs)
require.NoError(t, err)
bodyMembersRoots, err := topLevelRoots(body)
require.NoError(t, err, "Failed to get top level roots")
bodySparse, err := trie.GenerateTrieFromItems(bodyMembersRoots, logBodyLength)
require.NoError(t, err, "Failed to generate trie from member roots")
require.Equal(t, bodyLength, bodySparse.NumOfItems())
root, err := body.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, trie.VerifyMerkleProof(root[:], commitmentsRoot[:], kzgPosition, proof))
})
}
// This test explains the calculation of the KZG commitment root's Merkle index
// in the Body's Merkle tree based on the index of the KZG commitment list in the Body.
func Test_KZGRootIndex(t *testing.T) {
@@ -139,7 +181,7 @@ func ceilLog2(x uint32) (uint32, error) {
}
func getBlobKzgCommitmentsRoot(commitments [][]byte) ([32]byte, error) {
commitmentsLeaves := leavesFromCommitments(commitments)
commitmentsLeaves := LeavesFromCommitments(commitments)
commitmentsSparse, err := trie.GenerateTrieFromItems(
commitmentsLeaves,
fieldparams.LogMaxBlobCommitments,


@@ -0,0 +1,68 @@
package blocks
import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)
// RODataColumn represents a read-only data column sidecar with its block root.
type RODataColumn struct {
*ethpb.DataColumnSidecar
root [fieldparams.RootLength]byte
}
func roDataColumnNilCheck(dc *ethpb.DataColumnSidecar) error {
// Check if the data column is nil.
if dc == nil {
return errNilDataColumn
}
// Check if the data column header is nil.
if dc.SignedBlockHeader == nil || dc.SignedBlockHeader.Header == nil {
return errNilBlockHeader
}
// Check if the data column signature is nil.
if len(dc.SignedBlockHeader.Signature) == 0 {
return errMissingBlockSignature
}
return nil
}
// NewRODataColumn creates a new RODataColumn by computing the HashTreeRoot of the header.
func NewRODataColumn(dc *ethpb.DataColumnSidecar) (RODataColumn, error) {
if err := roDataColumnNilCheck(dc); err != nil {
return RODataColumn{}, err
}
root, err := dc.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
return RODataColumn{}, err
}
return RODataColumn{DataColumnSidecar: dc, root: root}, nil
}
// NewRODataColumnWithRoot creates a new RODataColumn with a given root.
func NewRODataColumnWithRoot(dc *ethpb.DataColumnSidecar, root [fieldparams.RootLength]byte) (RODataColumn, error) {
// Check if the data column is nil.
if err := roDataColumnNilCheck(dc); err != nil {
return RODataColumn{}, err
}
return RODataColumn{DataColumnSidecar: dc, root: root}, nil
}
// BlockRoot returns the root of the block.
func (dc *RODataColumn) BlockRoot() [fieldparams.RootLength]byte {
return dc.root
}
// VerifiedRODataColumn represents an RODataColumn that has undergone full verification (eg block sig, inclusion proof, commitment check).
type VerifiedRODataColumn struct {
RODataColumn
}
// NewVerifiedRODataColumn "upgrades" an RODataColumn to a VerifiedRODataColumn. This method should only be used by the verification package.
func NewVerifiedRODataColumn(roDataColumn RODataColumn) VerifiedRODataColumn {
return VerifiedRODataColumn{RODataColumn: roDataColumn}
}
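A short usage sketch for the new read-only wrapper; the consensus-types/blocks import path and the helper name wrapSidecar are assumptions, and construction mirrors the test below.
import (
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)
// wrapSidecar is a hypothetical caller of the new constructors.
func wrapSidecar(sidecar *ethpb.DataColumnSidecar) (blocks.VerifiedRODataColumn, error) {
	roCol, err := blocks.NewRODataColumn(sidecar) // root is derived from the signed block header
	if err != nil {
		return blocks.VerifiedRODataColumn{}, err
	}
	_ = roCol.BlockRoot() // cached [32]byte header root
	// Only the verification package should promote a column, after checking the
	// block signature, inclusion proof, and KZG commitments.
	return blocks.NewVerifiedRODataColumn(roCol), nil
}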


@@ -0,0 +1,125 @@
package blocks
import (
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestNewRODataColumnWithAndWithoutRoot(t *testing.T) {
cases := []struct {
name string
dcFunc func(t *testing.T) *ethpb.DataColumnSidecar
err error
root []byte
}{
{
name: "nil signed data column",
dcFunc: func(t *testing.T) *ethpb.DataColumnSidecar {
return nil
},
err: errNilDataColumn,
root: bytesutil.PadTo([]byte("sup"), fieldparams.RootLength),
},
{
name: "nil signed block header",
dcFunc: func(t *testing.T) *ethpb.DataColumnSidecar {
return &ethpb.DataColumnSidecar{
SignedBlockHeader: nil,
}
},
err: errNilBlockHeader,
root: bytesutil.PadTo([]byte("sup"), fieldparams.RootLength),
},
{
name: "nil inner header",
dcFunc: func(t *testing.T) *ethpb.DataColumnSidecar {
return &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: nil,
},
}
},
err: errNilBlockHeader,
root: bytesutil.PadTo([]byte("sup"), fieldparams.RootLength),
},
{
name: "nil signature",
dcFunc: func(t *testing.T) *ethpb.DataColumnSidecar {
return &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: nil,
},
}
},
err: errMissingBlockSignature,
root: bytesutil.PadTo([]byte("sup"), fieldparams.RootLength),
},
{
name: "nominal",
dcFunc: func(t *testing.T) *ethpb.DataColumnSidecar {
return &ethpb.DataColumnSidecar{
SignedBlockHeader: &ethpb.SignedBeaconBlockHeader{
Header: &ethpb.BeaconBlockHeader{
ParentRoot: make([]byte, fieldparams.RootLength),
StateRoot: make([]byte, fieldparams.RootLength),
BodyRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
}
},
root: bytesutil.PadTo([]byte("sup"), fieldparams.RootLength),
},
}
for _, c := range cases {
t.Run(c.name+" NewRODataColumn", func(t *testing.T) {
dataColumnSidecar := c.dcFunc(t)
roDataColumnSidecar, err := NewRODataColumn(dataColumnSidecar)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
hr, err := dataColumnSidecar.SignedBlockHeader.Header.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, hr, roDataColumnSidecar.BlockRoot())
})
if len(c.root) == 0 {
continue
}
t.Run(c.name+" NewRODataColumnWithRoot", func(t *testing.T) {
b := c.dcFunc(t)
// We want the same validation when specifying a root.
bl, err := NewRODataColumnWithRoot(b, bytesutil.ToBytes32(c.root))
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
assert.Equal(t, bytesutil.ToBytes32(c.root), bl.BlockRoot())
})
}
}
func TestDataColumn_BlockRoot(t *testing.T) {
root := [fieldparams.RootLength]byte{1}
dataColumn := &RODataColumn{
root: root,
}
assert.Equal(t, root, dataColumn.BlockRoot())
}


@@ -29,6 +29,7 @@ var (
// ErrUnsupportedVersion for beacon block methods.
ErrUnsupportedVersion = errors.New("unsupported beacon block version")
errNilBlob = errors.New("received nil blob sidecar")
errNilDataColumn = errors.New("received nil data column sidecar")
errNilBlock = errors.New("received nil beacon block")
errNilBlockBody = errors.New("received nil beacon block body")
errIncorrectBlockVersion = errors.New(incorrectBlockVersion)

deps.bzl

@@ -367,8 +367,8 @@ def prysm_deps():
go_repository(
name = "com_github_census_instrumentation_opencensus_proto",
importpath = "github.com/census-instrumentation/opencensus-proto",
sum = "h1:iKLQ0xPNFxR/2hzXZMrBo8f1j86j5WHzznCCQxV/b8g=",
version = "v0.4.1",
sum = "h1:glEXhBS5PSLLv4IXzLA5yPRVX4bilULVyxxbrfOtDAk=",
version = "v0.2.1",
)
go_repository(
name = "com_github_cespare_cp",
@@ -451,8 +451,8 @@ def prysm_deps():
go_repository(
name = "com_github_cncf_xds_go",
importpath = "github.com/cncf/xds/go",
sum = "h1:QVw89YDxXxEe+l8gU8ETbOasdwEV+avkR75ZzsVV9WI=",
version = "v0.0.0-20240905190251-b4127c9b8d78",
sum = "h1:boJj011Hh+874zpIySeApCX4GeOjPl9qhRF3QuIZq+Q=",
version = "v0.0.0-20241223141626-cff3c89139a3",
)
go_repository(
name = "com_github_cockroachdb_datadriven",
@@ -639,8 +639,14 @@ def prysm_deps():
go_repository(
name = "com_github_dgraph_io_ristretto",
importpath = "github.com/dgraph-io/ristretto",
sum = "h1:cNcG4c2n5xanQzp2hMyxDxPYVQmZ91y4WN6fJFlndLo=",
version = "v0.0.4-0.20210318174700-74754f61e018",
sum = "h1:a5WaUrDa0qm0YrAAS1tUykT5El3kt62KNZZeMxQn3po=",
version = "v0.0.2",
)
go_repository(
name = "com_github_dgraph_io_ristretto_v2",
importpath = "github.com/dgraph-io/ristretto/v2",
sum = "h1:bkY3XzJcXoMuELV8F+vS8kzNgicwQFAaGINAEJdWGOM=",
version = "v2.2.0",
)
go_repository(
name = "com_github_dgrijalva_jwt_go",
@@ -651,8 +657,8 @@ def prysm_deps():
go_repository(
name = "com_github_dgryski_go_farm",
importpath = "github.com/dgryski/go-farm",
sum = "h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA=",
version = "v0.0.0-20190423205320-6a90982ecee2",
sum = "h1:aIftn67I1fkbMa512G+w+Pxci9hJPB8oMnkcP3iZF38=",
version = "v0.0.0-20240924180020-3414d57e47da",
)
go_repository(
name = "com_github_dlclark_regexp2",
@@ -687,8 +693,8 @@ def prysm_deps():
go_repository(
name = "com_github_dustin_go_humanize",
importpath = "github.com/dustin/go-humanize",
sum = "h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=",
version = "v1.0.0",
sum = "h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=",
version = "v1.0.1",
)
go_repository(
name = "com_github_eapache_go_resiliency",
@@ -741,14 +747,26 @@ def prysm_deps():
go_repository(
name = "com_github_envoyproxy_go_control_plane",
importpath = "github.com/envoyproxy/go-control-plane",
sum = "h1:vPfJZCkob6yTMEgS+0TwfTUfbHjfy/6vOJ8hUWX/uXE=",
version = "v0.13.1",
sum = "h1:zEqyPVyku6IvWCFwux4x9RxkLOMUL+1vC9xUFv5l2/M=",
version = "v0.13.4",
)
go_repository(
name = "com_github_envoyproxy_go_control_plane_envoy",
importpath = "github.com/envoyproxy/go-control-plane/envoy",
sum = "h1:jb83lalDRZSpPWW2Z7Mck/8kXZ5CQAFYVjQcdVIr83A=",
version = "v1.32.4",
)
go_repository(
name = "com_github_envoyproxy_go_control_plane_ratelimit",
importpath = "github.com/envoyproxy/go-control-plane/ratelimit",
sum = "h1:/G9QYbddjL25KvtKTv3an9lx6VBE2cnb8wp1vEGNYGI=",
version = "v0.1.0",
)
go_repository(
name = "com_github_envoyproxy_protoc_gen_validate",
importpath = "github.com/envoyproxy/protoc-gen-validate",
sum = "h1:tntQDh69XqOCOZsDz0lVJQez/2L6Uu2PdjCQwWCJ3bM=",
version = "v1.1.0",
sum = "h1:DEo3O99U8j4hBFwbJfrz9VtgcDfUKS7KJ7spH3d86P8=",
version = "v1.2.1",
)
go_repository(
name = "com_github_ethereum_c_kzg_4844",
@@ -1150,8 +1168,8 @@ def prysm_deps():
go_repository(
name = "com_github_golang_glog",
importpath = "github.com/golang/glog",
sum = "h1:1+mZ9upx1Dh6FmUTFR1naJ77miKiXgALjWOZ3NVFPmY=",
version = "v1.2.2",
sum = "h1:CNNw5U8lSiiBk7druxtSHHTsRWcxKoac6kZKm2peBBc=",
version = "v1.2.4",
)
go_repository(
name = "com_github_golang_groupcache",
@@ -1212,8 +1230,8 @@ def prysm_deps():
go_repository(
name = "com_github_google_go_cmp",
importpath = "github.com/google/go-cmp",
sum = "h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=",
version = "v0.6.0",
sum = "h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=",
version = "v0.7.0",
)
go_repository(
name = "com_github_google_go_github",
@@ -1296,8 +1314,8 @@ def prysm_deps():
go_repository(
name = "com_github_googlecloudplatform_opentelemetry_operations_go_detectors_gcp",
importpath = "github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp",
sum = "h1:cZpsGsWTIFKymTA0je7IIvi1O7Es7apb9CF3EQlOcfE=",
version = "v1.24.2",
sum = "h1:3c8yed4lgqTt+oTQ+JNMDo+F4xprBf+O/il4ZC0nRLw=",
version = "v1.25.0",
)
go_repository(
name = "com_github_gopherjs_gopherjs",
@@ -2455,12 +2473,6 @@ def prysm_deps():
sum = "h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=",
version = "v0.0.5",
)
go_repository(
name = "com_github_oneofone_xxhash",
importpath = "github.com/OneOfOne/xxhash",
sum = "h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=",
version = "v1.2.2",
)
go_repository(
name = "com_github_onsi_ginkgo",
importpath = "github.com/onsi/ginkgo",
@@ -3726,8 +3738,8 @@ def prysm_deps():
go_repository(
name = "com_google_cloud_go_compute_metadata",
importpath = "cloud.google.com/go/compute/metadata",
sum = "h1:UxK4uu/Tn+I3p2dYWTfiX4wva7aYlKixAHn3fyqngqo=",
version = "v0.5.2",
sum = "h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I=",
version = "v0.6.0",
)
go_repository(
name = "com_google_cloud_go_contactcenterinsights",
@@ -4356,8 +4368,8 @@ def prysm_deps():
go_repository(
name = "dev_cel_expr",
importpath = "cel.dev/expr",
sum = "h1:RwRhoH17VhAu9U5CMvMhH1PDVgf0tuz9FT+24AfMLfU=",
version = "v0.16.2",
sum = "h1:NciYrtDRIR0lNCnH1LFJegdjspNx9fI59O7TWcua/W4=",
version = "v0.19.1",
)
go_repository(
name = "in_gopkg_alecthomas_kingpin_v2",
@@ -4570,8 +4582,8 @@ def prysm_deps():
go_repository(
name = "io_opencensus_go",
importpath = "go.opencensus.io",
sum = "h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=",
version = "v0.24.0",
sum = "h1:dntmOdLpSpHlVqbW5Eay97DelsZHe+55D+xC6i0dDS0=",
version = "v0.22.5",
)
go_repository(
name = "io_opentelemetry_go_auto_sdk",
@@ -4582,8 +4594,14 @@ def prysm_deps():
go_repository(
name = "io_opentelemetry_go_contrib_detectors_gcp",
importpath = "go.opentelemetry.io/contrib/detectors/gcp",
sum = "h1:G1JQOreVrfhRkner+l4mrGxmfqYCAuy76asTDAo0xsA=",
version = "v1.31.0",
sum = "h1:JRxssobiPg23otYU5SbWtQC//snGVIM3Tx6QRzlQBao=",
version = "v1.34.0",
)
go_repository(
name = "io_opentelemetry_go_contrib_instrumentation_google_golang_org_grpc_otelgrpc",
importpath = "go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc",
sum = "h1:x7wzEgXfnzJcHDwStJT+mxOz4etr2EcexjqhBvmoakw=",
version = "v0.60.0",
)
go_repository(
name = "io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp",
@@ -4594,8 +4612,8 @@ def prysm_deps():
go_repository(
name = "io_opentelemetry_go_otel",
importpath = "go.opentelemetry.io/otel",
sum = "h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=",
version = "v1.34.0",
sum = "h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=",
version = "v1.35.0",
)
go_repository(
name = "io_opentelemetry_go_otel_exporters_otlp_otlptrace",
@@ -4612,8 +4630,8 @@ def prysm_deps():
go_repository(
name = "io_opentelemetry_go_otel_metric",
importpath = "go.opentelemetry.io/otel/metric",
sum = "h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=",
version = "v1.34.0",
sum = "h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=",
version = "v1.35.0",
)
go_repository(
name = "io_opentelemetry_go_otel_sdk",
@@ -4624,14 +4642,14 @@ def prysm_deps():
go_repository(
name = "io_opentelemetry_go_otel_sdk_metric",
importpath = "go.opentelemetry.io/otel/sdk/metric",
sum = "h1:i9hxxLJF/9kkvfHppyLL55aW7iIJz4JjxTeYusH7zMc=",
version = "v1.31.0",
sum = "h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=",
version = "v1.34.0",
)
go_repository(
name = "io_opentelemetry_go_otel_trace",
importpath = "go.opentelemetry.io/otel/trace",
sum = "h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=",
version = "v1.34.0",
sum = "h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=",
version = "v1.35.0",
)
go_repository(
name = "io_opentelemetry_go_proto_otlp",
@@ -4708,21 +4726,21 @@ def prysm_deps():
go_repository(
name = "org_golang_google_genproto_googleapis_rpc",
importpath = "google.golang.org/genproto/googleapis/rpc",
sum = "h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=",
version = "v0.0.0-20250115164207-1a7da9e5054f",
sum = "h1:51aaUVRocpvUOSQKM6Q7VuoaktNIaMCLuhZB6DKksq4=",
version = "v0.0.0-20250218202821-56aae31c358a",
)
go_repository(
name = "org_golang_google_grpc",
build_file_proto_mode = "disable",
importpath = "google.golang.org/grpc",
sum = "h1:MF5TftSMkd8GLw/m0KM6V8CMOCY6NZ1NQDPGFgbTt4A=",
version = "v1.69.4",
sum = "h1:kF77BGdPTQ4/JZWMlb9VpJ5pa25aqvVqogsxNHHdeBg=",
version = "v1.71.0",
)
go_repository(
name = "org_golang_google_protobuf",
importpath = "google.golang.org/protobuf",
sum = "h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=",
version = "v1.36.3",
sum = "h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=",
version = "v1.36.5",
)
go_repository(
name = "org_golang_x_build",
@@ -4781,8 +4799,8 @@ def prysm_deps():
go_repository(
name = "org_golang_x_oauth2",
importpath = "golang.org/x/oauth2",
sum = "h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=",
version = "v0.24.0",
sum = "h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=",
version = "v0.25.0",
)
go_repository(
name = "org_golang_x_perf",

go.mod

@@ -11,8 +11,8 @@ require (
github.com/consensys/gnark-crypto v0.14.0
github.com/crate-crypto/go-kzg-4844 v1.1.0
github.com/d4l3k/messagediff v1.2.1
github.com/dgraph-io/ristretto v0.0.4-0.20210318174700-74754f61e018
github.com/dustin/go-humanize v1.0.0
github.com/dgraph-io/ristretto/v2 v2.2.0
github.com/dustin/go-humanize v1.0.1
github.com/emicklei/dot v0.11.0
github.com/ethereum/c-kzg-4844/v2 v2.1.1
github.com/ethereum/go-ethereum v1.15.9
@@ -24,7 +24,7 @@ require (
github.com/golang/gddo v0.0.0-20200528160355-8d077c1d8f4c
github.com/golang/protobuf v1.5.4
github.com/golang/snappy v0.0.5-0.20231225225746-43d5d4cd4e0e
github.com/google/go-cmp v0.6.0
github.com/google/go-cmp v0.7.0
github.com/google/gofuzz v1.2.0
github.com/google/uuid v1.6.0
github.com/gostaticanalysis/comment v1.4.2
@@ -80,12 +80,12 @@ require (
github.com/wealdtech/go-eth2-util v1.6.3
github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4 v1.1.3
go.etcd.io/bbolt v1.3.6
go.opencensus.io v0.24.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0
go.opentelemetry.io/otel v1.34.0
go.opentelemetry.io/otel v1.35.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0
go.opentelemetry.io/otel/sdk v1.34.0
go.opentelemetry.io/otel/trace v1.34.0
go.opentelemetry.io/otel/trace v1.35.0
go.uber.org/automaxprocs v1.5.2
go.uber.org/mock v0.4.0
golang.org/x/crypto v0.36.0
@@ -93,8 +93,8 @@ require (
golang.org/x/sync v0.12.0
golang.org/x/tools v0.30.0
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1
google.golang.org/grpc v1.69.4
google.golang.org/protobuf v1.36.3
google.golang.org/grpc v1.71.0
google.golang.org/protobuf v1.36.5
gopkg.in/d4l3k/messagediff.v1 v1.2.1
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
@@ -113,7 +113,6 @@ require (
github.com/bits-and-blooms/bitset v1.17.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/cp v1.1.1 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chzyer/readline v1.5.1 // indirect
github.com/cockroachdb/errors v1.11.3 // indirect
@@ -150,7 +149,6 @@ require (
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
@@ -253,7 +251,7 @@ require (
github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.uber.org/dig v1.18.0 // indirect
go.uber.org/fx v1.22.2 // indirect
@@ -262,7 +260,7 @@ require (
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 // indirect
golang.org/x/mod v0.23.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/oauth2 v0.24.0 // indirect
golang.org/x/oauth2 v0.25.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/time v0.9.0 // indirect

go.sum

@@ -57,8 +57,6 @@ github.com/MariusVanDerWijden/tx-fuzz v1.4.0 h1:Tq4lXivsR8mtoP4RpasUDIUpDLHfN1Yh
github.com/MariusVanDerWijden/tx-fuzz v1.4.0/go.mod h1:gmOVECg7o5FY5VU3DQ/fY0zTk/ExBdMkUGz0vA8qqms=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/sarama v1.26.1/go.mod h1:NbSGBSSndYaIhRcBtY9V0U7AyH+x71bG668AuWys/yU=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
@@ -114,8 +112,6 @@ github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyY
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/cp v1.1.1 h1:nCb6ZLdB7NRaqsm91JtQTAme2SKJzXVsdPIPkyJr1MU=
github.com/cespare/cp v1.1.1/go.mod h1:SOGHArjBr4JWaSDEVpWpo/hNg6RoKrls6Oh40hiwW+s=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
@@ -197,11 +193,11 @@ github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0/go.mod h1:v57UDF4pDQJcEfFUCRop3
github.com/deepmap/oapi-codegen v1.6.0/go.mod h1:ryDa9AgbELGeB+YEXE1dR53yAjHwFvE9iAUlWl9Al3M=
github.com/deepmap/oapi-codegen v1.8.2 h1:SegyeYGcdi0jLLrpbCMoJxnUUn8GBXHsvr4rbzjuhfU=
github.com/deepmap/oapi-codegen v1.8.2/go.mod h1:YLgSKSDv/bZQB7N4ws6luhozi3cEdRktEqrX88CvjIw=
github.com/dgraph-io/ristretto v0.0.4-0.20210318174700-74754f61e018 h1:cNcG4c2n5xanQzp2hMyxDxPYVQmZ91y4WN6fJFlndLo=
github.com/dgraph-io/ristretto v0.0.4-0.20210318174700-74754f61e018/go.mod h1:MIonLggsKgZLUSt414ExgwNtlOL5MuEoAJP514mwGe8=
github.com/dgraph-io/ristretto/v2 v2.2.0 h1:bkY3XzJcXoMuELV8F+vS8kzNgicwQFAaGINAEJdWGOM=
github.com/dgraph-io/ristretto/v2 v2.2.0/go.mod h1:RZrm63UmcBAaYWC1DotLYBmTvgkrs0+XhBd7Npn7/zI=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da h1:aIftn67I1fkbMa512G+w+Pxci9hJPB8oMnkcP3iZF38=
github.com/dgryski/go-farm v0.0.0-20240924180020-3414d57e47da/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dlclark/regexp2 v1.4.1-0.20201116162257-a2a8dda75c91/go.mod h1:2pZnwuY/m+8K6iRw6wQdMtk+rH5tNGR1i55kozfMjCc=
github.com/dlclark/regexp2 v1.7.0 h1:7lJfhqlPssTb1WQx4yvTHN0uElPEv52sbaECrAQxjAo=
github.com/dlclark/regexp2 v1.7.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
@@ -214,8 +210,9 @@ github.com/dop251/goja v0.0.0-20230806174421-c933cf95e127/go.mod h1:QMWlm50DNe14
github.com/dop251/goja_nodejs v0.0.0-20210225215109-d91c329300e7/go.mod h1:hn7BA7c8pLvoGndExHudxTDKZ84Pyvv+90pbBjbTz0Y=
github.com/dop251/goja_nodejs v0.0.0-20211022123610-8dd9abb0616d/go.mod h1:DngW8aVqWbuLRMHItjPUyqdj+HWPvnQe8V8y1nDpIbM=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-resiliency v1.2.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
@@ -344,8 +341,6 @@ github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4er
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/lint v0.0.0-20170918230701-e5d664eb928e/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -392,12 +387,11 @@ github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -983,7 +977,6 @@ github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4k
github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
github.com/sourcegraph/annotate v0.0.0-20160123013949-f4cad6c6324d/go.mod h1:UdhH50NIW0fCiwBSr0co2m7BnFLdv4fQTgdqdJTHFeE=
github.com/sourcegraph/syntaxhighlight v0.0.0-20170531221838-bd320f5d308e/go.mod h1:HuIsMU8RRBOtsCgI77wP899iHVBQpCmg4ErYMZB+2IA=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v0.0.0-20170901052352-ee1bd8ee15a1/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
@@ -1015,7 +1008,6 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
@@ -1100,26 +1092,26 @@ go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 h1:x7wzEgXfnzJcHDwStJT+mxOz4etr2EcexjqhBvmoakw=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0/go.mod h1:rg+RlpR5dKwaS95IyyZqj5Wd4E13lk/msnTS0Xl9lJM=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0 h1:CV7UdSGJt/Ao6Gp4CXckLxVRRsRgDHoI8XjbL3PDl8s=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.59.0/go.mod h1:FRmFuRJfag1IZ2dPkHnEoSFVgTVPUd2qf5Vi69hLb8I=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 h1:OeNbIYk/2C15ckl7glBlOBp5+WlYsOElzTNmiPW/x60=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0/go.mod h1:7Bept48yIeqxP2OZ9/AqIpYS94h2or0aB4FypJTc8ZM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0 h1:BEj3SPM81McUZHYjRS5pEgNgnmzGJ5tRpU5krWnV8Bs=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0/go.mod h1:9cKLGBDzI/F3NoHLQGm4ZrYdIHsvGt6ej6hUowxY0J4=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.31.0 h1:i9hxxLJF/9kkvfHppyLL55aW7iIJz4JjxTeYusH7zMc=
go.opentelemetry.io/otel/sdk/metric v1.31.0/go.mod h1:CRInTMVvNhUKgSAMbKyTMxqOBC0zgyxzW55lZzX43Y8=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU/3i4=
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
@@ -1263,7 +1255,6 @@ golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
@@ -1290,8 +1281,8 @@ golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.24.0 h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=
golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=
golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/sync v0.0.0-20170517211232-f52d1811a629/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1607,8 +1598,8 @@ google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.69.4 h1:MF5TftSMkd8GLw/m0KM6V8CMOCY6NZ1NQDPGFgbTt4A=
google.golang.org/grpc v1.69.4/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
google.golang.org/grpc v1.71.0 h1:kF77BGdPTQ4/JZWMlb9VpJ5pa25aqvVqogsxNHHdeBg=
google.golang.org/grpc v1.71.0/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -1621,8 +1612,8 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.36.3 h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=
google.golang.org/protobuf v1.36.3/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/bsm/ratelimit.v1 v1.0.0-20160220154919-db14e161995a/go.mod h1:KF9sEfUPAXdG8Oev9e99iLGnl2uJMjc5B+4y3O7x610=
gopkg.in/cenkalti/backoff.v1 v1.1.0 h1:Arh75ttbsvlpVA7WtVpH4u9h6Zl46xuptxqLxPiSo4Y=


@@ -3,6 +3,7 @@ package prometheus_test
import (
"fmt"
"io"
"math/rand"
"net/http"
"strconv"
"strings"
@@ -15,8 +16,6 @@ import (
log "github.com/sirupsen/logrus"
)
const addr = "127.0.0.1:8989"
type logger interface {
Info(args ...interface{})
Warn(args ...interface{})
@@ -24,10 +23,11 @@ type logger interface {
}
func TestLogrusCollector(t *testing.T) {
service := prometheus.NewService(addr, nil)
addr := fmt.Sprintf("0.0.0.0:%d", 1000+rand.Intn(1000))
service := prometheus.NewService(t.Context(), addr, nil)
hook := prometheus.NewLogrusCollector()
log.AddHook(hook)
go service.Start()
service.Start()
defer func() {
err := service.Stop()
require.NoError(t, err)
@@ -60,8 +60,8 @@ func TestLogrusCollector(t *testing.T) {
}
logExampleMessage(log.StandardLogger(), tt.level)
}
time.Sleep(time.Millisecond)
metrics := metrics(t)
time.Sleep(time.Second)
metrics := metrics(t, addr)
count := valueFor(t, metrics, prefix, tt.level)
if count != tt.want {
t.Errorf("Expecting %d and receive %d", tt.want, count)
@@ -70,7 +70,7 @@ func TestLogrusCollector(t *testing.T) {
}
}
func metrics(t *testing.T) []string {
func metrics(t *testing.T, addr string) []string {
resp, err := http.Get(fmt.Sprintf("http://%s/metrics", addr))
require.NoError(t, err)
body, err := io.ReadAll(resp.Body)


@@ -23,6 +23,7 @@ var log = logrus.WithField("prefix", "prometheus")
// Service provides Prometheus metrics via the /metrics route. This route will
// show all the metrics registered with the Prometheus DefaultRegisterer.
type Service struct {
ctx context.Context
server *http.Server
svcRegistry *runtime.ServiceRegistry
failStatus error
@@ -36,8 +37,8 @@ type Handler struct {
// NewService sets up a new instance for a given address host:port.
// An empty host will match with any IP so an address like ":2121" is perfectly acceptable.
func NewService(addr string, svcRegistry *runtime.ServiceRegistry, additionalHandlers ...Handler) *Service {
s := &Service{svcRegistry: svcRegistry}
func NewService(ctx context.Context, addr string, svcRegistry *runtime.ServiceRegistry, additionalHandlers ...Handler) *Service {
s := &Service{ctx: ctx, svcRegistry: svcRegistry}
mux := http.NewServeMux()
mux.Handle("/metrics", promhttp.HandlerFor(prometheus.DefaultGatherer, promhttp.HandlerOpts{
@@ -148,7 +149,7 @@ func (s *Service) Start() {
// Stop the service gracefully.
func (s *Service) Stop() error {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
ctx, cancel := context.WithTimeout(s.ctx, 2*time.Second)
defer cancel()
return s.server.Shutdown(ctx)
}
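With NewService now taking a context, a caller wires it roughly like this (sketch only; ctx, the address, and svcRegistry are placeholders; the ctx given at construction is the parent of the 2-second shutdown timeout used in Stop):
// Hypothetical caller after this change.
svc := prometheus.NewService(ctx, ":8080", svcRegistry)
svc.Start()
defer func() {
	if err := svc.Stop(); err != nil {
		// handle shutdown error
	}
}()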


@@ -2,7 +2,9 @@ package prometheus
import (
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"net/http/httptest"
"strings"
@@ -21,13 +23,14 @@ func init() {
}
func TestLifecycle(t *testing.T) {
prometheusService := NewService(":2112", nil)
port := 1000 + rand.Intn(1000)
prometheusService := NewService(t.Context(), fmt.Sprintf(":%d", port), nil)
prometheusService.Start()
// Give service time to start.
time.Sleep(time.Second)
// Query the service to ensure it really started.
resp, err := http.Get("http://localhost:2112/metrics")
resp, err := http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
require.NoError(t, err)
assert.NotEqual(t, uint64(0), resp.ContentLength, "Unexpected content length 0")
@@ -37,7 +40,7 @@ func TestLifecycle(t *testing.T) {
time.Sleep(time.Second)
// Query the service to ensure it really stopped.
_, err = http.Get("http://localhost:2112/metrics")
_, err = http.Get(fmt.Sprintf("http://localhost:%d/metrics", port))
assert.NotNil(t, err, "Service still running after Stop()")
}
@@ -60,7 +63,7 @@ func TestHealthz(t *testing.T) {
registry := runtime.NewServiceRegistry()
m := &mockService{}
require.NoError(t, registry.RegisterService(m), "Failed to register service")
s := NewService("" /*addr*/, registry)
s := NewService(t.Context(), "" /*addr*/, registry)
req, err := http.NewRequest("GET", "/healthz", nil /*reader*/)
require.NoError(t, err)
@@ -112,7 +115,7 @@ func TestContentNegotiation(t *testing.T) {
registry := runtime.NewServiceRegistry()
m := &mockService{}
require.NoError(t, registry.RegisterService(m), "Failed to register service")
s := NewService("", registry)
s := NewService(t.Context(), "", registry)
req, err := http.NewRequest("GET", "/healthz", nil /* body */)
require.NoError(t, err)
@@ -143,7 +146,7 @@ func TestContentNegotiation(t *testing.T) {
m := &mockService{}
m.status = errors.New("something is wrong")
require.NoError(t, registry.RegisterService(m), "Failed to register service")
s := NewService("", registry)
s := NewService(t.Context(), "", registry)
req, err := http.NewRequest("GET", "/healthz", nil /* body */)
require.NoError(t, err)


@@ -65,8 +65,8 @@ minimal = {
"max_blob_commitments.size": "32",
"max_cell_proofs_length.size": "524288", # CELLS_PER_EXT_BLOB * MAX_BLOB_COMMITMENTS_PER_BLOCK
"kzg_commitment_inclusion_proof_depth.size": "10",
"max_withdrawal_requests_per_payload.size": "2",
"max_deposit_requests_per_payload.size": "4",
"max_withdrawal_requests_per_payload.size": "16",
"max_deposit_requests_per_payload.size": "8192",
"max_attesting_indices.size": "8192",
"max_committees_per_slot.size": "4",
"committee_bits.size": "1",


@@ -0,0 +1,12 @@
load("@prysm//tools/go:def.bzl", "go_test")
go_test(
name = "go_default_test",
size = "small",
srcs = ["custody_groups_test.go"],
data = glob(["*.yaml"]) + [
"@consensus_spec_tests_mainnet//:test_data",
],
tags = ["spectest"],
deps = ["//testing/spectest/shared/fulu/networking:go_default_library"],
)


@@ -0,0 +1,15 @@
package networking
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/spectest/shared/fulu/networking"
)
func TestMainnet_Fulu_Networking_CustodyGroups(t *testing.T) {
networking.RunCustodyGroupsTest(t, "mainnet")
}
func TestMainnet_Fulu_Networking_ComputeCustodyColumnsForCustodyGroup(t *testing.T) {
networking.RunComputeColumnsForCustodyGroupTest(t, "mainnet")
}


@@ -0,0 +1,12 @@
load("@prysm//tools/go:def.bzl", "go_test")
go_test(
name = "go_default_test",
size = "small",
srcs = ["custody_columns_test.go"],
data = glob(["*.yaml"]) + [
"@consensus_spec_tests_minimal//:test_data",
],
tags = ["spectest"],
deps = ["//testing/spectest/shared/fulu/networking:go_default_library"],
)


@@ -0,0 +1,15 @@
package networking
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/spectest/shared/fulu/networking"
)
func TestMinimal_Fulu_Networking_CustodyGroups(t *testing.T) {
networking.RunCustodyGroupsTest(t, "minimal")
}
func TestMinimal_Fulu_Networking_ComputeCustodyColumnsForCustodyGroup(t *testing.T) {
networking.RunComputeColumnsForCustodyGroupTest(t, "minimal")
}


@@ -0,0 +1,17 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["custody_groups.go"],
importpath = "github.com/OffchainLabs/prysm/v6/testing/spectest/shared/fulu/networking",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//testing/require:go_default_library",
"//testing/spectest/utils:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@in_gopkg_yaml_v3//:go_default_library",
],
)


@@ -0,0 +1,107 @@
package networking
import (
"math/big"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/spectest/utils"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/ethereum/go-ethereum/p2p/enode"
"gopkg.in/yaml.v3"
)
// RunCustodyGroupsTest executes custody groups spec tests.
func RunCustodyGroupsTest(t *testing.T, config string) {
type configuration struct {
NodeId *big.Int `yaml:"node_id"`
CustodyGroupCount uint64 `yaml:"custody_group_count"`
Expected []uint64 `yaml:"result"`
}
err := utils.SetConfig(t, config)
require.NoError(t, err, "failed to set config")
// Retrieve the test vector folders.
testFolders, testsFolderPath := utils.TestFolders(t, config, "fulu", "networking/get_custody_groups/pyspec_tests")
if len(testFolders) == 0 {
t.Fatalf("no test folders found for %s", testsFolderPath)
}
for _, folder := range testFolders {
t.Run(folder.Name(), func(t *testing.T) {
var (
config configuration
nodeIdBytes32 [32]byte
)
// Load the test vector.
file, err := util.BazelFileBytes(testsFolderPath, folder.Name(), "meta.yaml")
require.NoError(t, err, "failed to retrieve the `meta.yaml` YAML file")
// Unmarshal the test vector.
err = yaml.Unmarshal(file, &config)
require.NoError(t, err, "failed to unmarshal the YAML file")
// Get the node ID.
nodeIdBytes := make([]byte, 32)
config.NodeId.FillBytes(nodeIdBytes)
copy(nodeIdBytes32[:], nodeIdBytes)
nodeId := enode.ID(nodeIdBytes32)
// Compute the custody groups.
actual, err := peerdas.CustodyGroups(nodeId, config.CustodyGroupCount)
require.NoError(t, err, "failed to compute the custody groups")
// Compare the results.
require.Equal(t, len(config.Expected), len(actual))
for i := range config.Expected {
require.Equal(t, config.Expected[i], actual[i], "at position %d", i)
}
})
}
}
// RunComputeColumnsForCustodyGroupTest executes compute columns for custody group spec tests.
func RunComputeColumnsForCustodyGroupTest(t *testing.T, config string) {
type configuration struct {
CustodyGroup uint64 `yaml:"custody_group"`
Expected []uint64 `yaml:"result"`
}
err := utils.SetConfig(t, config)
require.NoError(t, err, "failed to set config")
// Retrieve the test vector folders.
testFolders, testsFolderPath := utils.TestFolders(t, config, "fulu", "networking/compute_columns_for_custody_group/pyspec_tests")
if len(testFolders) == 0 {
t.Fatalf("no test folders found for %s", testsFolderPath)
}
for _, folder := range testFolders {
t.Run(folder.Name(), func(t *testing.T) {
var config configuration
// Load the test vector.
file, err := util.BazelFileBytes(testsFolderPath, folder.Name(), "meta.yaml")
require.NoError(t, err, "failed to retrieve the `meta.yaml` YAML file")
// Unmarshal the test vector.
err = yaml.Unmarshal(file, &config)
require.NoError(t, err, "failed to unmarshal the YAML file")
// Compute the custody columns.
actual, err := peerdas.ComputeColumnsForCustodyGroup(config.CustodyGroup)
require.NoError(t, err, "failed to compute the custody columns")
// Compare the results.
require.Equal(t, len(config.Expected), len(actual), "expected %d custody columns, got %d", len(config.Expected), len(actual))
for i := range config.Expected {
require.Equal(t, config.Expected[i], actual[i], "expected column at index %d differs from actual column", i)
}
})
}
}
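Outside the spec tests, the two peerdas helpers exercised above can be combined to derive a node's custody columns; the []uint64 return types and the helper below are assumptions based on the call sites shown in this file.
import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/ethereum/go-ethereum/p2p/enode"
)
// nodeCustodyColumns is a hypothetical helper: the minimum custody for a node
// without attached validators is CustodyRequirement groups.
func nodeCustodyColumns(nodeID enode.ID) ([]uint64, error) {
	groups, err := peerdas.CustodyGroups(nodeID, params.BeaconConfig().CustodyRequirement)
	if err != nil {
		return nil, err
	}
	var columns []uint64
	for _, group := range groups {
		cols, err := peerdas.ComputeColumnsForCustodyGroup(group)
		if err != nil {
			return nil, err
		}
		columns = append(columns, cols...)
	}
	return columns, nil // on mainnet (128 columns / 128 groups) each group maps to one column
}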

Some files were not shown because too many files have changed in this diff.