Compare commits

..

21 Commits

Author SHA1 Message Date
nisdas
764aab3822 add it 2024-11-28 14:01:34 +08:00
Potuz
f27092fa91 Check if validator exists when applying pending deposit (#14666)
* Check if validator exists when applying pending deposit

* Add test TestProcessPendingDepositsMultiplesSameDeposits

* keep a map of added pubkeys

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-11-25 20:31:02 +00:00
Radosław Kapka
67cef41cbf Better attestation packing for Electra (#14534)
* Better attestation packing for Electra

* changelog <3

* bzl

* sort before constructing on-chain aggregates

* move ctx to top

* extract Electra logic and add comments

* benchmark
2024-11-25 18:41:51 +00:00
Manu NALEPA
258908d50e Diverse log improvements, comment additions and small refactors. (#14658)
* `logProposedBlock`: Fix log.

Before, the value of the pointer to the function were printed for `blockNumber`
instead of the block number itself.

* Add blob prefix before sidecars.

In order to prepare for data columns sidecars.

* Verification: Add log prefix.

* `validate_aggregate_proof.go`: Add comments.

* `blobSubscriber`: Fix error message.

* `registerHandlers`: Rename, add comments and little refactor.

* Remove duplicate `pb` vs. `ethpb` import.

* `rpc_ping.go`: Factorize / Add comments.

* `blobSidecarsByRangeRPCHandler`: Do not write error response if rate limited.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

The function itself does not know anything about the age of the beacon block.

* `beaconBlocksByRangeRPCHandler`: Refactor and add logs.

* `retentionSeconds` ==> `retentionDuration`.

* `oneEpoch`: Add documentation.

* `TestProposer_ProposeBlock_OK`: Improve error message.

* `getLocalPayloadFromEngine`: Tiny refactor.

* `eth1DataMajorityVote`: Improve log message.

* Implement `ConvertPeerIDToNodeID`and do note generate random private key if peerDAS is enabled.

* Remove useless `_`.

* `parsePeersEnr`: Fix error mesages.

* `ShouldOverrideFCU`: Fix error message.

* `blocks.go`: Minor comments improvements.

* CI: Upgrade golanci and enable spancheck.

* `ConvertPeerIDToNodeID`: Add godoc comment.

* Update CHANGELOG.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/initial-sync/service_test.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_beacon_blocks_by_range.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_blob_sidecars_by_range.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_ping.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Remove trailing whitespace in godoc.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-25 09:22:33 +00:00
Manu NALEPA
415a42a4aa Add proto for DataColumnIdentifier, DataColumnSidecar, DataColumnSidecarsByRangeRequest and MetadataV2. (#14649)
* Add data column sidecars proto.

* Fix Terence's comment.

* Re-add everything.
2024-11-22 09:50:06 +00:00
kasey
25eae3acda Fix eventstream electra atts (#14655)
* fix handler for electra atts

* same fix for attestation_slashing

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-11-22 03:04:00 +00:00
Rupam Dey
956d9d108c Update light-client consensus types (#14652)
* update diff

* deps

* changelog

* remove `SetNextSyncCommitteeBranchElectra`
2024-11-21 12:28:44 +00:00
Sammy Rosso
c285715f9f Add missing Eth-Consensus-Version headers (#14647)
* add missing Eth-Consensus-Version headers

* changelog

* fix header return value
2024-11-20 22:16:33 +00:00
james-prysm
9382ae736d validator REST: attestation v2 (#14633)
* wip

* fixing tests

* adding unit tests

* fixing tests

* adding back v1 usage

* changelog

* rolling back test and adding placeholder

* adding electra tests

* adding attestation nil check based on review

* reduce code duplication

* linting

* fixing tests

* based on sammy review

* radek feedback

* adding fall back for pre electra and updated tests

* fixing api calls and associated tests

* gaz

* Update validator/client/beacon-api/propose_attestation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* review feedback

* add missing fallback

* fixing tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-11-20 17:13:57 +00:00
Radosław Kapka
f16ff45a6b Update light client protobufs (#14650)
* Update light client protobufs

* changelog <3
2024-11-20 14:47:54 +00:00
kasey
8d6577be84 defer payload attribute computation (#14644)
* defer payload attribute computation

* fire payload event on skipped slots

* changelog

* fix test and missing version attr

* fix lint

* deepsource

* mv head block lookup for missed slots to streamer

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-11-19 16:49:52 +00:00
james-prysm
9de75b5376 reorganizing p2p and backfill service registration for consistency (#14640)
* reorganizing for consistency

* Update beacon-chain/node/node.go

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>

* kasey's feedback

---------

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
2024-11-19 16:29:59 +00:00
james-prysm
a7ba11df37 adding nil checks on attestation interface (#14638)
* adding nil checks on interface

* changelog

* add linting

* adding missed checks

* review feedback

* attestation bits should not be in nil check

* fixing nil checks

* simplifying function

* fixing some missed items

* more missed items

* fixing more tests

* reverting some changes and fixing more tests

* adding in source check back in

* missed test

* sammy's review

* radek feedback
2024-11-18 17:51:17 +00:00
Stefano
00aeea3656 feat(issue-12348): add validator index label to validator_statuses me… (#14473)
* feat(issue-12348): add validator index label to validator_statuses metric

* fix: epochDuties added label on emission of metric

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-18 16:35:05 +00:00
james-prysm
9dbf979e77 move get data after nil check for attestations (#14642)
* move getData to after validations

* changelog
2024-11-15 18:28:35 +00:00
james-prysm
be60504512 Validator REST api: adding in check for empty keys changed (#14637)
* adding in check for empty keys changed

* changelog

* kasey feedback

* fixing unit tests

* Update CHANGELOG.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-13 16:09:11 +00:00
james-prysm
1857496159 Electra: unskipping merkle spec tests: (#14635)
* unskipping spec tests

* changelog
2024-11-12 15:41:44 +00:00
Justin Traglia
ccf61e1700 Rename remaining "deposit receipt" to "deposit request" (#14629)
* Rename remaining "deposit receipt" to "deposit request"

* Add changelog entry

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-08 21:15:43 +00:00
Justin Traglia
4edbd2f9ef Remove outdated spectest exclusions for EIP-6110 (#14630)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-08 20:41:02 +00:00
james-prysm
5179af1438 validator REST API: block v2 and Electra support (#14623)
* adding electra to validator client rest for get and post, also migrates to use the v2 endpoints

* changelog

* fixing test

* fixing linting
2024-11-08 18:24:51 +00:00
Sammy Rosso
c0f9689e30 Add POST /eth/v2/beacon/pool/attestations endpoint (#14621)
* modify v1 and add v2

* test

* changelog

* small fixes

* fix tests

* simplify functions + remove duplication

* Radek' review + group V2 tests

* better errors

* fix tests
2024-11-08 11:33:27 +00:00
157 changed files with 6771 additions and 1448 deletions

View File

@@ -54,7 +54,7 @@ jobs:
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.55.2
version: v1.56.1
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:

View File

@@ -73,6 +73,7 @@ linters:
- promlinter
- protogetter
- revive
- spancheck
- staticcheck
- stylecheck
- tagalign

View File

@@ -8,7 +8,7 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
### Added
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430).
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
@@ -22,7 +22,12 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Added benchmarks for process slots for Capella, Deneb, Electra.
- Add helper to cast bytes to string without allocating memory.
- Added GetAggregatedAttestationV2 endpoint.
- Added testnet config for [mekong](https://blog.ethereum.org/2024/11/07/introducing-mekong-testnet)
- Added SubmitAttestationsV2 endpoint.
- Validator REST mode Electra block support.
- Added validator index label to `validator_statuses` metric.
- Added Validator REST mode use of Attestation V2 endpoints and Electra attestations.
- PeerDAS: Added proto for `DataColumnIdentifier`, `DataColumnSidecar`, `DataColumnSidecarsByRangeRequest` and `MetadataV2`.
- Better attestation packing for Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14534)
### Changed
@@ -50,7 +55,14 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Only Build the Protobuf state once during serialization.
- Capella blocks are execution.
- Fixed panic when http request to subscribe to event stream fails.
- Return early for blob reconstructor during capella fork
- Return early for blob reconstructor during capella fork.
- Updated block endpoint from V1 to V2.
- Rename instances of "deposit receipts" to "deposit requests".
- Non-blocking payload attribute event handling in beacon api [pr](https://github.com/prysmaticlabs/prysm/pull/14644).
- Updated light client protobufs. [PR](https://github.com/prysmaticlabs/prysm/pull/14650)
- Added `Eth-Consensus-Version` header to `ListAttestationsV2` and `GetAggregateAttestationV2` endpoints.
- Updated light client consensus types. [PR](https://github.com/prysmaticlabs/prysm/pull/14652)
- Fixed pending deposits processing on Electra.
### Deprecated
@@ -60,6 +72,7 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Removed finalized validator index cache, no longer needed.
- Removed validator queue position log on key reload and wait for activation.
- Removed outdated spectest exclusions for EIP-6110.
### Fixed
@@ -74,6 +87,12 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Fix keymanager API so that get keys returns an empty response instead of a 500 error when using an unsupported keystore.
- Small log imporvement, removing some redundant or duplicate logs
- EIP7521 - Fixes withdrawal bug by accounting for pending partial withdrawals and deducting already withdrawn amounts from the sweep balance. [PR](https://github.com/prysmaticlabs/prysm/pull/14578)
- unskip electra merkle spec test
- Fix panic in validator REST mode when checking status after removing all keys
- Fix panic on attestation interface since we call data before validation
- corrects nil check on some interface attestation types
- temporary solution to handling electra attesation and attester_slashing events. [pr](14655)
- Diverse log improvements and comment additions.
### Security

View File

@@ -358,22 +358,6 @@ filegroup(
url = "https://github.com/eth-clients/sepolia/archive/f2c219a93c4491cee3d90c18f2f8e82aed850eab.tar.gz", # 2024-09-19
)
http_archive(
name = "mekong_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"network-configs/devnet-0/metadata/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-y8id4VtAmk+geH52V77+UjR4NCTmGzXdtGpAUMkwvPM=",
strip_prefix = "mekong-devnets-c144c729c3cb898e1d6bb299d42eeb595809252c",
url = "https://github.com/ethpandaops/mekong-devnets/archive/c144c729c3cb898e1d6bb299d42eeb595809252c.tar.gz", # 2024-11-07
)
http_archive(
name = "com_google_protobuf",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",

View File

@@ -26,7 +26,7 @@ type ListAttestationsResponse struct {
}
type SubmitAttestationsRequest struct {
Data []*Attestation `json:"data"`
Data json.RawMessage `json:"data"`
}
type ListVoluntaryExitsResponse struct {

View File

@@ -6,8 +6,11 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -69,6 +72,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), arg)
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch {
@@ -167,6 +171,38 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
return payloadID, nil
}
func firePayloadAttributesEvent(ctx context.Context, f event.SubscriberSender, cfg *fcuConfig) {
pidx, err := helpers.BeaconProposerIndex(ctx, cfg.headState)
if err != nil {
log.WithError(err).
WithField("head_root", cfg.headRoot[:]).
Error("Could not get proposer index for PayloadAttributes event")
return
}
evd := payloadattribute.EventData{
ProposerIndex: pidx,
ProposalSlot: cfg.headState.Slot(),
ParentBlockRoot: cfg.headRoot[:],
Attributer: cfg.attributes,
HeadRoot: cfg.headRoot,
HeadState: cfg.headState,
HeadBlock: cfg.headBlock,
}
if cfg.headBlock != nil && !cfg.headBlock.IsNil() {
headPayload, err := cfg.headBlock.Block().Body().Execution()
if err != nil {
log.WithError(err).Error("Could not get execution payload for head block")
return
}
evd.ParentBlockHash = headPayload.BlockHash()
evd.ParentBlockNumber = headPayload.BlockNumber()
}
f.Send(&feed.Event{
Type: statefeed.PayloadAttributes,
Data: evd,
})
}
// getPayloadHash returns the payload hash given the block root.
// if the block is before bellatrix fork epoch, it returns the zero hash.
func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, error) {

View File

@@ -92,12 +92,12 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation can't be nil",
wantedErr: "attestation is nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation's data can't be nil",
wantedErr: "attestation is nil",
},
{
name: "process nil field (a.Target) in attestation",

View File

@@ -7,8 +7,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -620,9 +618,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -650,6 +645,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: nil,
attributes: attribute,
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), fcuArgs)
return
}

View File

@@ -448,6 +448,7 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
Target: &ethpb.Checkpoint{
Epoch: primitives.Epoch(i),
},
Source: &ethpb.Checkpoint{},
}
}
@@ -489,6 +490,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
Target: &ethpb.Checkpoint{
Root: []byte{},
},
Source: &ethpb.Checkpoint{},
},
Signature: sig.Marshal(),
AggregationBits: list,

View File

@@ -386,8 +386,14 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -405,9 +411,16 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
if found {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
}
}
}
@@ -560,7 +573,7 @@ func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState,
return beaconState, nil
}
// processDepositRequest processes the specific deposit receipt
// processDepositRequest processes the specific deposit request
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
//
// # Set deposit request start index

View File

@@ -22,6 +22,40 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingDepositsMultiplesSameDeposits(t *testing.T) {
st := stateWithActiveBalanceETH(t, 1000)
deps := make([]*eth.PendingDeposit, 2) // Make same deposit twice
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
for i := 0; i < len(deps); i += 1 {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, 32, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetPendingDeposits(deps))
err = electra.ProcessPendingDeposits(context.TODO(), st, 10000)
require.NoError(t, err)
val := st.Validators()
seenPubkeys := make(map[string]struct{})
for i := 0; i < len(val); i += 1 {
if len(val[i].PublicKey) == 0 {
continue
}
_, ok := seenPubkeys[string(val[i].PublicKey)]
if ok {
t.Fatalf("duplicated pubkeys")
} else {
seenPubkeys[string(val[i].PublicKey)] = struct{}{}
}
}
}
func TestProcessPendingDeposits(t *testing.T) {
tests := []struct {
name string
@@ -285,7 +319,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{}
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
// have a combination of valid and invalid deposits
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))

View File

@@ -84,11 +84,11 @@ func ProcessOperations(
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
return nil, errors.Wrap(err, "could not process deposit requests")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
return nil, errors.Wrap(err, "could not process withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)

View File

@@ -31,6 +31,8 @@ const (
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
// PayloadAttributes events are fired upon a missed slot or new head.
PayloadAttributes
)
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -23,11 +23,8 @@ var (
// Access to these nil fields will result in run time panic,
// it is recommended to run these checks as first line of defense.
func ValidateNilAttestation(attestation ethpb.Att) error {
if attestation == nil {
return errors.New("attestation can't be nil")
}
if attestation.GetData() == nil {
return errors.New("attestation's data can't be nil")
if attestation == nil || attestation.IsNil() {
return errors.New("attestation is nil")
}
if attestation.GetData().Source == nil {
return errors.New("attestation's source can't be nil")

View File

@@ -260,12 +260,12 @@ func TestValidateNilAttestation(t *testing.T) {
{
name: "nil attestation",
attestation: nil,
errString: "attestation can't be nil",
errString: "attestation is nil",
},
{
name: "nil attestation data",
attestation: &ethpb.Attestation{},
errString: "attestation's data can't be nil",
errString: "attestation is nil",
},
{
name: "nil attestation source",

View File

@@ -23,10 +23,10 @@ import (
bolt "go.etcd.io/bbolt"
)
// used to represent errors for inconsistent slot ranges.
// Used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root.
// Block retrieval by root. Return nil if block is not found.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End()

View File

@@ -688,7 +688,7 @@ func decodeSlasherChunk(enc []byte) ([]uint16, error) {
// Encode attestation record to bytes.
// The output encoded attestation record consists in the signing root concatenated with the compressed attestation record.
func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byte, error) {
if att == nil || att.IndexedAttestation == nil {
if att == nil || att.IndexedAttestation == nil || att.IndexedAttestation.IsNil() {
return []byte{}, errors.New("nil proposal record")
}

View File

@@ -53,7 +53,7 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// Only reorg blocks that arrive late
early, err := head.arrivedEarly(f.store.genesisTime)
if err != nil {
log.WithError(err).Error("could not check if block arrived early")
log.WithError(err).Error("Could not check if block arrived early")
return
}
if early {

View File

@@ -192,20 +192,13 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen)
pa := peers.NewAssigner(beacon.fetchP2P().Peers(), beacon.forkChoicer)
beacon.BackfillOpts = append(
beacon.BackfillOpts,
backfill.WithVerifierWaiter(beacon.verifyInitWaiter),
backfill.WithInitSyncWaiter(initSyncWaiter(ctx, beacon.initialSyncComplete)),
)
bf, err := backfill.NewService(ctx, bfs, beacon.BlobStorage, beacon.clockWaiter, beacon.fetchP2P(), pa, beacon.BackfillOpts...)
if err != nil {
return nil, errors.Wrap(err, "error initializing backfill service")
}
if err := registerServices(cliCtx, beacon, synchronizer, bf, bfs); err != nil {
if err := registerServices(cliCtx, beacon, synchronizer, bfs); err != nil {
return nil, errors.Wrap(err, "could not register services")
}
@@ -292,11 +285,6 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
return nil, errors.Wrap(err, "could not start slashing DB")
}
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not register P2P service")
}
bfs, err := backfill.NewUpdater(ctx, beacon.db)
if err != nil {
return nil, errors.Wrap(err, "could not create backfill updater")
@@ -315,9 +303,15 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
return bfs, nil
}
func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *startup.ClockSynchronizer, bf *backfill.Service, bfs *backfill.Store) error {
if err := beacon.services.RegisterService(bf); err != nil {
return errors.Wrap(err, "could not register backfill service")
func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *startup.ClockSynchronizer, bfs *backfill.Store) error {
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return errors.Wrap(err, "could not register P2P service")
}
log.Debugln("Registering Backfill Service")
if err := beacon.RegisterBackfillService(cliCtx, bfs); err != nil {
return errors.Wrap(err, "could not register Back Fill service")
}
log.Debugln("Registering POW Chain Service")
@@ -1136,6 +1130,16 @@ func (b *BeaconNode) registerBuilderService(cliCtx *cli.Context) error {
return b.services.RegisterService(svc)
}
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")
}
return b.services.RegisterService(bf)
}
func hasNetworkFlag(cliCtx *cli.Context) bool {
for _, flag := range features.NetworkFlags {
for _, name := range flag.Names() {

View File

@@ -49,12 +49,12 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
{
name: "nil attestation",
att: nil,
wantErrString: "attestation can't be nil",
wantErrString: "attestation is nil",
},
{
name: "nil attestation data",
att: &ethpb.Attestation{},
wantErrString: "attestation's data can't be nil",
wantErrString: "attestation is nil",
},
{
name: "not aggregated",
@@ -206,7 +206,7 @@ func TestKV_Aggregated_AggregatedAttestations(t *testing.T) {
func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
t.Run("nil attestation", func(t *testing.T) {
cache := NewAttCaches()
assert.ErrorContains(t, "attestation can't be nil", cache.DeleteAggregatedAttestation(nil))
assert.ErrorContains(t, "attestation is nil", cache.DeleteAggregatedAttestation(nil))
att := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10101}, Data: &ethpb.AttestationData{Slot: 2}})
assert.NoError(t, cache.DeleteAggregatedAttestation(att))
})
@@ -288,7 +288,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "nil attestation",
input: nil,
want: false,
err: errors.New("can't be nil"),
err: errors.New("is nil"),
},
{
name: "nil attestation data",
@@ -296,7 +296,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
AggregationBits: bitfield.Bitlist{0b1111},
},
want: false,
err: errors.New("can't be nil"),
err: errors.New("is nil"),
},
{
name: "empty cache aggregated",

View File

@@ -8,7 +8,7 @@ import (
// SaveBlockAttestation saves an block attestation in cache.
func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
@@ -53,10 +53,9 @@ func (c *AttCaches) BlockAttestations() []ethpb.Att {
// DeleteBlockAttestation deletes a block attestation in cache.
func (c *AttCaches) DeleteBlockAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")

View File

@@ -8,7 +8,7 @@ import (
// SaveForkchoiceAttestation saves an forkchoice attestation in cache.
func (c *AttCaches) SaveForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
@@ -50,7 +50,7 @@ func (c *AttCaches) ForkchoiceAttestations() []ethpb.Att {
// DeleteForkchoiceAttestation deletes a forkchoice attestation in cache.
func (c *AttCaches) DeleteForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}

View File

@@ -14,7 +14,7 @@ import (
// SaveUnaggregatedAttestation saves an unaggregated attestation in cache.
func (c *AttCaches) SaveUnaggregatedAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
if helpers.IsAggregated(att) {
@@ -130,9 +130,10 @@ func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
// DeleteUnaggregatedAttestation deletes the unaggregated attestations in cache.
func (c *AttCaches) DeleteUnaggregatedAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
if helpers.IsAggregated(att) {
return errors.New("attestation is aggregated")
}
@@ -161,7 +162,7 @@ func (c *AttCaches) DeleteSeenUnaggregatedAttestations() (int, error) {
count := 0
for r, att := range c.unAggregatedAtt {
if att == nil || helpers.IsAggregated(att) {
if att == nil || att.IsNil() || helpers.IsAggregated(att) {
continue
}
if seen, err := c.hasSeenBit(att); err == nil && seen {

View File

@@ -7,6 +7,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/operations/attestations:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],

View File

@@ -3,13 +3,17 @@ package mock
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var _ attestations.Pool = &PoolMock{}
// PoolMock --
type PoolMock struct {
AggregatedAtts []*ethpb.Attestation
AggregatedAtts []ethpb.Att
UnaggregatedAtts []ethpb.Att
}
// AggregateUnaggregatedAttestations --
@@ -23,18 +27,18 @@ func (*PoolMock) AggregateUnaggregatedAttestationsBySlotIndex(_ context.Context,
}
// SaveAggregatedAttestation --
func (*PoolMock) SaveAggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveAggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveAggregatedAttestations --
func (m *PoolMock) SaveAggregatedAttestations(atts []*ethpb.Attestation) error {
func (m *PoolMock) SaveAggregatedAttestations(atts []ethpb.Att) error {
m.AggregatedAtts = append(m.AggregatedAtts, atts...)
return nil
}
// AggregatedAttestations --
func (m *PoolMock) AggregatedAttestations() []*ethpb.Attestation {
func (m *PoolMock) AggregatedAttestations() []ethpb.Att {
return m.AggregatedAtts
}
@@ -43,13 +47,18 @@ func (*PoolMock) AggregatedAttestationsBySlotIndex(_ context.Context, _ primitiv
panic("implement me")
}
// AggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) AggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteAggregatedAttestation --
func (*PoolMock) DeleteAggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteAggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// HasAggregatedAttestation --
func (*PoolMock) HasAggregatedAttestation(_ *ethpb.Attestation) (bool, error) {
func (*PoolMock) HasAggregatedAttestation(_ ethpb.Att) (bool, error) {
panic("implement me")
}
@@ -59,18 +68,19 @@ func (*PoolMock) AggregatedAttestationCount() int {
}
// SaveUnaggregatedAttestation --
func (*PoolMock) SaveUnaggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveUnaggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveUnaggregatedAttestations --
func (*PoolMock) SaveUnaggregatedAttestations(_ []*ethpb.Attestation) error {
panic("implement me")
func (m *PoolMock) SaveUnaggregatedAttestations(atts []ethpb.Att) error {
m.UnaggregatedAtts = append(m.UnaggregatedAtts, atts...)
return nil
}
// UnaggregatedAttestations --
func (*PoolMock) UnaggregatedAttestations() ([]*ethpb.Attestation, error) {
panic("implement me")
func (m *PoolMock) UnaggregatedAttestations() ([]ethpb.Att, error) {
return m.UnaggregatedAtts, nil
}
// UnaggregatedAttestationsBySlotIndex --
@@ -78,8 +88,13 @@ func (*PoolMock) UnaggregatedAttestationsBySlotIndex(_ context.Context, _ primit
panic("implement me")
}
// UnaggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) UnaggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteUnaggregatedAttestation --
func (*PoolMock) DeleteUnaggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteUnaggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
@@ -94,42 +109,42 @@ func (*PoolMock) UnaggregatedAttestationCount() int {
}
// SaveBlockAttestation --
func (*PoolMock) SaveBlockAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveBlockAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveBlockAttestations --
func (*PoolMock) SaveBlockAttestations(_ []*ethpb.Attestation) error {
func (*PoolMock) SaveBlockAttestations(_ []ethpb.Att) error {
panic("implement me")
}
// BlockAttestations --
func (*PoolMock) BlockAttestations() []*ethpb.Attestation {
func (*PoolMock) BlockAttestations() []ethpb.Att {
panic("implement me")
}
// DeleteBlockAttestation --
func (*PoolMock) DeleteBlockAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteBlockAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveForkchoiceAttestation --
func (*PoolMock) SaveForkchoiceAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveForkchoiceAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveForkchoiceAttestations --
func (*PoolMock) SaveForkchoiceAttestations(_ []*ethpb.Attestation) error {
func (*PoolMock) SaveForkchoiceAttestations(_ []ethpb.Att) error {
panic("implement me")
}
// ForkchoiceAttestations --
func (*PoolMock) ForkchoiceAttestations() []*ethpb.Attestation {
func (*PoolMock) ForkchoiceAttestations() []ethpb.Att {
panic("implement me")
}
// DeleteForkchoiceAttestation --
func (*PoolMock) DeleteForkchoiceAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteForkchoiceAttestation(_ ethpb.Att) error {
panic("implement me")
}

View File

@@ -75,6 +75,8 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",

View File

@@ -165,14 +165,14 @@ func (s *Service) pubsubOptions() []pubsub.Option {
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %w", err)
return nil, fmt.Errorf("cannot convert peers raw ENRs into multiaddresses: %w", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
return nil, fmt.Errorf("converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %w", err)
return nil, fmt.Errorf("cannot convert peers multiaddresses into AddrInfos: %w", err)
}
return directAddrInfos, nil
}

View File

@@ -27,148 +27,148 @@ func NewFuzzTestP2P() *FakeP2P {
}
// Encoding -- fake.
func (_ *FakeP2P) Encoding() encoder.NetworkEncoding {
func (*FakeP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
// AddConnectionHandler -- fake.
func (_ *FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
}
// AddDisconnectionHandler -- fake.
func (_ *FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
}
// AddPingMethod -- fake.
func (_ *FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
}
// PeerID -- fake.
func (_ *FakeP2P) PeerID() peer.ID {
func (*FakeP2P) PeerID() peer.ID {
return "fake"
}
// ENR returns the enr of the local peer.
func (_ *FakeP2P) ENR() *enr.Record {
func (*FakeP2P) ENR() *enr.Record {
return new(enr.Record)
}
// DiscoveryAddresses -- fake
func (_ *FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// FindPeersWithSubnet mocks the p2p func.
func (_ *FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (_ *FakeP2P) RefreshENR() {}
func (*FakeP2P) RefreshENR() {}
// LeaveTopic -- fake.
func (_ *FakeP2P) LeaveTopic(_ string) error {
func (*FakeP2P) LeaveTopic(_ string) error {
return nil
}
// Metadata -- fake.
func (_ *FakeP2P) Metadata() metadata.Metadata {
func (*FakeP2P) Metadata() metadata.Metadata {
return nil
}
// Peers -- fake.
func (_ *FakeP2P) Peers() *peers.Status {
func (*FakeP2P) Peers() *peers.Status {
return nil
}
// PublishToTopic -- fake.
func (_ *FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil
}
// Send -- fake.
func (_ *FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
func (*FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
return nil, nil
}
// PubSub -- fake.
func (_ *FakeP2P) PubSub() *pubsub.PubSub {
func (*FakeP2P) PubSub() *pubsub.PubSub {
return nil
}
// MetadataSeq -- fake.
func (_ *FakeP2P) MetadataSeq() uint64 {
func (*FakeP2P) MetadataSeq() uint64 {
return 0
}
// SetStreamHandler -- fake.
func (_ *FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
func (*FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
}
// SubscribeToTopic -- fake.
func (_ *FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
func (*FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
return nil, nil
}
// JoinTopic -- fake.
func (_ *FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
func (*FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
return nil, nil
}
// Host -- fake.
func (_ *FakeP2P) Host() host.Host {
func (*FakeP2P) Host() host.Host {
return nil
}
// Disconnect -- fake.
func (_ *FakeP2P) Disconnect(_ peer.ID) error {
func (*FakeP2P) Disconnect(_ peer.ID) error {
return nil
}
// Broadcast -- fake.
func (_ *FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
// BroadcastAttestation -- fake.
func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
func (*FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
func (*FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil
}
// BroadcastBlob -- fake.
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
return nil
}
// InterceptPeerDial -- fake.
func (_ *FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial -- fake.
func (_ *FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (*FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept -- fake.
func (_ *FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured -- fake.
func (_ *FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded -- fake.
func (_ *FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}

View File

@@ -18,12 +18,12 @@ type MockHost struct {
}
// ID --
func (_ *MockHost) ID() peer.ID {
func (*MockHost) ID() peer.ID {
return ""
}
// Peerstore --
func (_ *MockHost) Peerstore() peerstore.Peerstore {
func (*MockHost) Peerstore() peerstore.Peerstore {
return nil
}
@@ -33,46 +33,46 @@ func (m *MockHost) Addrs() []ma.Multiaddr {
}
// Network --
func (_ *MockHost) Network() network.Network {
func (*MockHost) Network() network.Network {
return nil
}
// Mux --
func (_ *MockHost) Mux() protocol.Switch {
func (*MockHost) Mux() protocol.Switch {
return nil
}
// Connect --
func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (*MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
return nil
}
// SetStreamHandler --
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
func (*MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
func (*MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}
func (*MockHost) RemoveStreamHandler(_ protocol.ID) {}
// NewStream --
func (_ *MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
func (*MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
return nil, nil
}
// Close --
func (_ *MockHost) Close() error {
func (*MockHost) Close() error {
return nil
}
// ConnManager --
func (_ *MockHost) ConnManager() connmgr.ConnManager {
func (*MockHost) ConnManager() connmgr.ConnManager {
return nil
}
// EventBus --
func (_ *MockHost) EventBus() event.Bus {
func (*MockHost) EventBus() event.Bus {
return nil
}

View File

@@ -12,10 +12,15 @@ import (
"path"
"time"
"github.com/btcsuite/btcd/btcec/v2"
gCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/io/file"
@@ -62,6 +67,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath)
}
@@ -71,8 +77,8 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
// If the StaticPeerID flag is not set, return the private key.
if !cfg.StaticPeerID {
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
@@ -89,7 +95,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
log.Info("Wrote network key to file")
log.WithField("path", defaultKeyPath).Info("Wrote network key to file")
// Read the key from the defaultKeyPath file just written
// for the strongest guarantee that the next start will be the same as this one.
return privKeyFromFile(defaultKeyPath)
@@ -173,3 +179,27 @@ func verifyConnectivity(addr string, port uint, protocol string) {
}
}
}
// ConvertPeerIDToNodeID converts a peer ID (libp2p) to a node ID (devp2p).
func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
// Retrieve the public key object of the peer under "crypto" form.
pubkeyObjCrypto, err := pid.ExtractPublicKey()
if err != nil {
return [32]byte{}, errors.Wrapf(err, "extract public key from peer ID `%s`", pid)
}
// Extract the bytes representation of the public key.
compressedPubKeyBytes, err := pubkeyObjCrypto.Raw()
if err != nil {
return [32]byte{}, errors.Wrap(err, "public key raw")
}
// Retrieve the public key object of the peer under "SECP256K1" form.
pubKeyObjSecp256k1, err := btcec.ParsePubKey(compressedPubKeyBytes)
if err != nil {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
return enode.PubkeyToIDV4(newPubkey), nil
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -64,3 +65,19 @@ func TestSerializeENR(t *testing.T) {
assert.ErrorContains(t, "could not serialize nil record", err)
})
}
func TestConvertPeerIDToNodeID(t *testing.T) {
const (
peerIDStr = "16Uiu2HAmRrhnqEfybLYimCiAYer2AtZKDGamQrL1VwRCyeh2YiFc"
expectedNodeIDStr = "eed26c5d2425ab95f57246a5dca87317c41cacee4bcafe8bbe57e5965527c290"
)
peerID, err := peer.Decode(peerIDStr)
require.NoError(t, err)
actualNodeID, err := ConvertPeerIDToNodeID(peerID)
require.NoError(t, err)
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}

View File

@@ -381,21 +381,12 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedAggregateSelectionProof")
defer span.End()
if agg == nil {
if agg == nil || agg.IsNil() {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
attAndProof := agg.AggregateAttestationAndProof()
if attAndProof == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
att := attAndProof.AggregateVal()
if att == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
data := att.GetData()
if data == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
emptySig := make([]byte, fieldparams.BLSSignatureLength)
if bytes.Equal(agg.GetSignature(), emptySig) || bytes.Equal(attAndProof.GetSelectionProof(), emptySig) {
return &RpcError{Err: errors.New("signed signatures can't be zero hashes"), Reason: BadRequest}

View File

@@ -659,6 +659,16 @@ func (s *Service) beaconEndpoints(
handler: server.SubmitAttestations,
methods: []string{http.MethodPost},
},
{
template: "/eth/v2/beacon/pool/attestations",
name: namespace + ".SubmitAttestationsV2",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.SubmitAttestationsV2,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/beacon/pool/voluntary_exits",
name: namespace + ".ListVoluntaryExits",

View File

@@ -41,7 +41,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/deposit_snapshot": {http.MethodGet},
"/eth/v1/beacon/blinded_blocks/{block_id}": {http.MethodGet},
"/eth/v1/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attestations": {http.MethodGet},
"/eth/v2/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/proposer_slashings": {http.MethodGet, http.MethodPost},

View File

@@ -3,13 +3,14 @@ package beacon
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
@@ -148,6 +149,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
httputil.WriteJson(w, &structs.ListAttestationsResponse{
Version: version.String(headState.Version()),
Data: attsData,
@@ -189,70 +191,13 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(req.Data) == 0 {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
attFailures, failedBroadcasts, err := s.handleAttestations(ctx, req.Data)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
var validAttestations []*eth.Attestation
var attFailures []*server.IndexedVerificationFailure
for i, sourceAtt := range req.Data {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
validAttestations = append(validAttestations, att)
}
failedBroadcasts := make([]string, 0)
for i, att := range validAttestations {
// Determine subnet to broadcast attestation to
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
httputil.HandleError(w, "Could not get head validator indices: "+err.Error(), http.StatusInternalServerError)
return
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
@@ -272,6 +217,213 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
}
}
// SubmitAttestationsV2 submits an attestation object to node. If the attestation passes all validation
// constraints, node MUST publish the attestation on an appropriate subnet.
func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
if err != nil {
httputil.HandleError(w, "Invalid version: "+err.Error(), http.StatusBadRequest)
return
}
var req structs.SubmitAttestationsRequest
err = json.NewDecoder(r.Body).Decode(&req.Data)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
var attFailures []*server.IndexedVerificationFailure
var failedBroadcasts []string
if v >= version.Electra {
attFailures, failedBroadcasts, err = s.handleAttestationsElectra(ctx, req.Data)
} else {
attFailures, failedBroadcasts, err = s.handleAttestations(ctx, req.Data)
}
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Failed to handle attestations: %v", err), http.StatusBadRequest)
return
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
}
}
func (s *Server) handleAttestationsElectra(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.AttestationElectra
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
}
if len(sourceAttestations) == 0 {
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.AttestationElectra
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
validAttestations = append(validAttestations, att)
}
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
committeeIndex, err := att.GetCommitteeIndex()
if err != nil {
return nil, nil, errors.Wrap(err, "failed to retrieve attestation committee index")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), committeeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}
return attFailures, failedBroadcasts, nil
}
func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.Attestation
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
}
if len(sourceAttestations) == 0 {
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.Attestation
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
validAttestations = append(validAttestations, att)
}
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}
return attFailures, failedBroadcasts, nil
}
// ListVoluntaryExits retrieves voluntary exits known by the node but
// not necessarily incorporated into any block.
func (s *Server) ListVoluntaryExits(w http.ResponseWriter, r *http.Request) {

View File

@@ -500,95 +500,292 @@ func TestSubmitAttestations(t *testing.T) {
ChainInfoFetcher: chainService,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
t.Run("V1", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
t.Run("V2", func(t *testing.T) {
t.Run("pre-electra", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("post-electra", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAttsElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
}
func TestListVoluntaryExits(t *testing.T) {
@@ -2063,6 +2260,85 @@ var (
}
}
}
]`
singleAttElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
multipleAttsElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7431000000000000000000000000000000000000000000"
}
}
},
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7432000000000000000000000000000000000000000000"
}
}
}
]`
// signature is invalid
invalidAttElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
exit1 = `{
"message": {

View File

@@ -79,6 +79,7 @@ func TestGetSpec(t *testing.T) {
config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107
config.Eip7594ForkEpoch = 109
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24
@@ -189,7 +190,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 155, len(data))
assert.Equal(t, 156, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -267,6 +268,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v)
case "EIP7594_FORK_EPOCH":
assert.Equal(t, "109", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":

View File

@@ -19,11 +19,12 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//config/params:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -52,6 +53,7 @@ go_test(
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -18,11 +19,12 @@ import (
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
chaintime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/config/params"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
engine "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -31,6 +33,7 @@ import (
)
const DefaultEventFeedDepth = 1000
const payloadAttributeTimeout = 2 * time.Second
const (
InvalidTopic = "__invalid__"
@@ -89,12 +92,12 @@ var opsFeedEventTopics = map[feed.EventType]string{
var stateFeedEventTopics = map[feed.EventType]string{
statefeed.NewHead: HeadTopic,
statefeed.MissedSlot: PayloadAttributesTopic,
statefeed.FinalizedCheckpoint: FinalizedCheckpointTopic,
statefeed.LightClientFinalityUpdate: LightClientFinalityUpdateTopic,
statefeed.LightClientOptimisticUpdate: LightClientOptimisticUpdateTopic,
statefeed.Reorg: ChainReorgTopic,
statefeed.BlockProcessed: BlockTopic,
statefeed.PayloadAttributes: PayloadAttributesTopic,
}
var topicsForStateFeed = topicsForFeed(stateFeedEventTopics)
@@ -418,10 +421,9 @@ func topicForEvent(event *feed.Event) string {
return ChainReorgTopic
case *statefeed.BlockProcessedData:
return BlockTopic
case payloadattribute.EventData:
return PayloadAttributesTopic
default:
if event.Type == statefeed.MissedSlot {
return PayloadAttributesTopic
}
return InvalidTopic
}
}
@@ -431,31 +433,17 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
if !topics.requested(eventName) {
return nil, errNotRequested
}
if eventName == PayloadAttributesTopic {
return s.currentPayloadAttributes(ctx)
}
if event == nil || event.Data == nil {
return nil, errors.New("event or event data is nil")
}
switch v := event.Data.(type) {
case payloadattribute.EventData:
return s.payloadAttributesReader(ctx, v)
case *ethpb.EventHead:
// The head event is a special case because, if the client requested the payload attributes topic,
// we send two event messages in reaction; the head event and the payload attributes.
headReader := func() io.Reader {
return jsonMarshalReader(eventName, structs.HeadEventFromV1(v))
}
// Don't do the expensive attr lookup unless the client requested it.
if !topics.requested(PayloadAttributesTopic) {
return headReader, nil
}
// Since payload attributes could change before the outbox is written, we need to do a blocking operation to
// get the current payload attributes right here.
attrReader, err := s.currentPayloadAttributes(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get payload attributes for head event")
}
return func() io.Reader {
return io.MultiReader(headReader(), attrReader())
return jsonMarshalReader(eventName, structs.HeadEventFromV1(v))
}, nil
case *operation.AggregatedAttReceivedData:
return func() io.Reader {
@@ -463,14 +451,20 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
return jsonMarshalReader(eventName, att)
}, nil
case *operation.UnAggregatedAttReceivedData:
att, ok := v.Attestation.(*eth.Attestation)
if !ok {
switch att := v.Attestation.(type) {
case *eth.Attestation:
return func() io.Reader {
att := structs.AttFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
case *eth.AttestationElectra:
return func() io.Reader {
att := structs.AttElectraFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
default:
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .Attestation field of UnAggregatedAttReceivedData", v.Attestation)
}
return func() io.Reader {
att := structs.AttFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
case *operation.ExitReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.SignedExitFromConsensus(v.Exit))
@@ -495,13 +489,18 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
})
}, nil
case *operation.AttesterSlashingReceivedData:
slashing, ok := v.AttesterSlashing.(*eth.AttesterSlashing)
if !ok {
switch slashing := v.AttesterSlashing.(type) {
case *eth.AttesterSlashing:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
}, nil
case *eth.AttesterSlashingElectra:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingElectraFromConsensus(slashing))
}, nil
default:
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .AttesterSlashing field of AttesterSlashingReceivedData", v.AttesterSlashing)
}
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
}, nil
case *operation.ProposerSlashingReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.ProposerSlashingFromConsensus(v.ProposerSlashing))
@@ -556,115 +555,202 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
}
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on state at N_{current_slot}, while the rest of fields are based on state of N_{current_slot + 1}
func (s *Server) currentPayloadAttributes(ctx context.Context) (lazyReader, error) {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head root")
}
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}
// advance the head state
headState, err := transition.ProcessSlotsIfPossible(ctx, st, s.ChainInfoFetcher.CurrentSlot()+1)
if err != nil {
return nil, errors.Wrap(err, "could not advance head state")
var errUnsupportedPayloadAttribute = errors.New("cannot compute payload attributes pre-Bellatrix")
func (s *Server) computePayloadAttributes(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.Attributer, error) {
v := ev.HeadState.Version()
if v < version.Bellatrix {
return nil, errors.Wrapf(errUnsupportedPayloadAttribute, "%s is not supported", version.String(v))
}
headBlock, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head block")
}
headPayload, err := headBlock.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
t, err := slots.ToTime(headState.GenesisTime(), headState.Slot())
t, err := slots.ToTime(ev.HeadState.GenesisTime(), ev.HeadState.Slot())
if err != nil {
return nil, errors.Wrap(err, "could not get head state slot time")
}
prevRando, err := helpers.RandaoMix(headState, chaintime.CurrentEpoch(headState))
timestamp := uint64(t.Unix())
prevRando, err := helpers.RandaoMix(ev.HeadState, chaintime.CurrentEpoch(ev.HeadState))
if err != nil {
return nil, errors.Wrap(err, "could not get head state randao mix")
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, headState)
proposerIndex, err := helpers.BeaconProposerIndex(ctx, ev.HeadState)
if err != nil {
return nil, errors.Wrap(err, "could not get head state proposer index")
}
feeRecipient := params.BeaconConfig().DefaultFeeRecipient.Bytes()
feeRecpt := params.BeaconConfig().DefaultFeeRecipient.Bytes()
tValidator, exists := s.TrackedValidatorsCache.Validator(proposerIndex)
if exists {
feeRecipient = tValidator.FeeRecipient[:]
}
var attributes interface{}
switch headState.Version() {
case version.Bellatrix:
attributes = &structs.PayloadAttributesV1{
Timestamp: fmt.Sprintf("%d", t.Unix()),
PrevRandao: hexutil.Encode(prevRando),
SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
}
case version.Capella:
withdrawals, _, err := headState.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get head state expected withdrawals")
}
attributes = &structs.PayloadAttributesV2{
Timestamp: fmt.Sprintf("%d", t.Unix()),
PrevRandao: hexutil.Encode(prevRando),
SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
Withdrawals: structs.WithdrawalsFromConsensus(withdrawals),
}
case version.Deneb, version.Electra:
withdrawals, _, err := headState.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get head state expected withdrawals")
}
parentRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get head block root")
}
attributes = &structs.PayloadAttributesV3{
Timestamp: fmt.Sprintf("%d", t.Unix()),
PrevRandao: hexutil.Encode(prevRando),
SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
Withdrawals: structs.WithdrawalsFromConsensus(withdrawals),
ParentBeaconBlockRoot: hexutil.Encode(parentRoot[:]),
}
default:
return nil, errors.Wrapf(err, "Payload version %s is not supported", version.String(headState.Version()))
feeRecpt = tValidator.FeeRecipient[:]
}
attributesBytes, err := json.Marshal(attributes)
if err != nil {
return nil, errors.Wrap(err, "errors marshaling payload attributes to json")
}
eventData := structs.PayloadAttributesEventData{
ProposerIndex: fmt.Sprintf("%d", proposerIndex),
ProposalSlot: fmt.Sprintf("%d", headState.Slot()),
ParentBlockNumber: fmt.Sprintf("%d", headPayload.BlockNumber()),
ParentBlockRoot: hexutil.Encode(headRoot),
ParentBlockHash: hexutil.Encode(headPayload.BlockHash()),
PayloadAttributes: attributesBytes,
}
eventDataBytes, err := json.Marshal(eventData)
if err != nil {
return nil, errors.Wrap(err, "errors marshaling payload attributes event data to json")
}
return func() io.Reader {
return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: version.String(headState.Version()),
Data: eventDataBytes,
if v == version.Bellatrix {
return payloadattribute.New(&engine.PayloadAttributes{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
})
}
w, _, err := ev.HeadState.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals from head state")
}
if v == version.Capella {
return payloadattribute.New(&engine.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: w,
})
}
pr, err := ev.HeadBlock.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not compute head block root")
}
return payloadattribute.New(&engine.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: w,
ParentBeaconBlockRoot: pr[:],
})
}
type asyncPayloadAttrData struct {
data json.RawMessage
version string
err error
}
func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.EventData, error) {
if ev.HeadBlock == nil || ev.HeadBlock.IsNil() {
hb, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return ev, errors.Wrap(err, "Could not look up head block")
}
root, err := hb.Block().HashTreeRoot()
if err != nil {
return ev, errors.Wrap(err, "Could not compute head block root")
}
if ev.HeadRoot != root {
return ev, errors.Wrap(err, "head root changed before payload attribute event handler execution")
}
ev.HeadBlock = hb
payload, err := hb.Block().Body().Execution()
if err != nil {
return ev, errors.Wrap(err, "Could not get execution payload for head block")
}
ev.ParentBlockHash = payload.BlockHash()
ev.ParentBlockNumber = payload.BlockNumber()
}
attr := ev.Attributer
if attr == nil || attr.IsEmpty() {
attr, err := s.computePayloadAttributes(ctx, ev)
if err != nil {
return ev, errors.Wrap(err, "Could not compute payload attributes")
}
ev.Attributer = attr
}
return ev, nil
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on state at N_{current_slot}, while the rest of fields are based on state of N_{current_slot + 1}
func (s *Server) payloadAttributesReader(ctx context.Context, ev payloadattribute.EventData) (lazyReader, error) {
ctx, cancel := context.WithTimeout(ctx, payloadAttributeTimeout)
edc := make(chan asyncPayloadAttrData)
go func() {
d := asyncPayloadAttrData{
version: version.String(ev.HeadState.Version()),
}
defer func() {
edc <- d
}()
ev, err := s.fillEventData(ctx, ev)
if err != nil {
d.err = errors.Wrap(err, "Could not fill event data")
return
}
attributesBytes, err := marshalAttributes(ev.Attributer)
if err != nil {
d.err = errors.Wrap(err, "errors marshaling payload attributes to json")
return
}
d.data, d.err = json.Marshal(structs.PayloadAttributesEventData{
ProposerIndex: strconv.FormatUint(uint64(ev.ProposerIndex), 10),
ProposalSlot: strconv.FormatUint(uint64(ev.ProposalSlot), 10),
ParentBlockNumber: strconv.FormatUint(ev.ParentBlockNumber, 10),
ParentBlockRoot: hexutil.Encode(ev.ParentBlockRoot),
ParentBlockHash: hexutil.Encode(ev.ParentBlockHash),
PayloadAttributes: attributesBytes,
})
if d.err != nil {
d.err = errors.Wrap(d.err, "errors marshaling payload attributes event data to json")
}
}()
return func() io.Reader {
defer cancel()
select {
case <-ctx.Done():
log.WithError(ctx.Err()).Warn("Context canceled while waiting for payload attributes event data")
return nil
case ed := <-edc:
if ed.err != nil {
log.WithError(ed.err).Warn("Error while marshaling payload attributes event data")
return nil
}
return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: ed.version,
Data: ed.data,
})
}
}, nil
}
func marshalAttributes(attr payloadattribute.Attributer) ([]byte, error) {
v := attr.Version()
if v < version.Bellatrix {
return nil, errors.Wrapf(errUnsupportedPayloadAttribute, "Payload version %s is not supported", version.String(v))
}
timestamp := strconv.FormatUint(attr.Timestamp(), 10)
prevRandao := hexutil.Encode(attr.PrevRandao())
feeRecpt := hexutil.Encode(attr.SuggestedFeeRecipient())
if v == version.Bellatrix {
return json.Marshal(&structs.PayloadAttributesV1{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
})
}
w, err := attr.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals from payload attributes event")
}
withdrawals := structs.WithdrawalsFromConsensus(w)
if v == version.Capella {
return json.Marshal(&structs.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: withdrawals,
})
}
parentRoot, err := attr.ParentBeaconBlockRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get parent beacon block root from payload attributes event")
}
return json.Marshal(&structs.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: withdrawals,
ParentBeaconBlockRoot: hexutil.Encode(parentRoot),
})
}
func newStreamingResponseController(rw http.ResponseWriter, timeout time.Duration) *streamingResponseWriterController {
rc := http.NewResponseController(rw)
return &streamingResponseWriterController{

View File

@@ -21,6 +21,7 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -489,7 +490,21 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
request := topics.testHttpRequest(testSync.ctx, t)
w := NewStreamingResponseWriterRecorder(testSync.ctx)
events := []*feed.Event{&feed.Event{Type: statefeed.MissedSlot}}
events := []*feed.Event{
&feed.Event{
Type: statefeed.PayloadAttributes,
Data: payloadattribute.EventData{
ProposerIndex: 0,
ProposalSlot: 0,
ParentBlockNumber: 0,
ParentBlockRoot: make([]byte, 32),
ParentBlockHash: make([]byte, 32),
HeadState: st,
HeadBlock: b,
HeadRoot: [fieldparams.RootLength]byte{},
},
},
}
go func() {
s.StreamEvents(w, request)

View File

@@ -75,7 +75,7 @@ func (s *Server) GetAggregateAttestation(w http.ResponseWriter, r *http.Request)
// GetAggregateAttestationV2 aggregates all attestations matching the given attestation data root and slot, returning the aggregated result.
func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "validator.GetAggregateAttestationV2")
ctx, span := trace.StartSpan(r.Context(), "validator.GetAggregateAttestationV2")
defer span.End()
_, attDataRoot, ok := shared.HexFromQuery(w, r, "attestation_data_root", fieldparams.RootLength, true)
@@ -123,6 +123,12 @@ func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Reques
}
resp.Data = data
}
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
httputil.WriteJson(w, resp)
}

View File

@@ -262,7 +262,10 @@ func TestGetAggregateAttestation(t *testing.T) {
require.NoError(t, pool.SaveAggregatedAttestations([]ethpbalpha.Att{aggSlot1_Root1_1, aggSlot1_Root1_2, aggSlot1_Root2, aggSlot2}), "Failed to save aggregated attestations")
agg := pool.AggregatedAttestations()
require.Equal(t, 4, len(agg), "Expected 4 aggregated attestations")
bs, err := util.NewBeaconState()
require.NoError(t, err)
s := &Server{
ChainInfoFetcher: &mockChain.ChainService{State: bs},
AttestationsPool: pool,
}
t.Run("non-matching attestation request", func(t *testing.T) {

View File

@@ -212,7 +212,9 @@ go_test(
embed = [":go_default_library"],
eth_network = "minimal",
tags = ["minimal"],
deps = common_deps,
deps = common_deps + [
"//beacon-chain/operations/attestations/mock:go_default_library",
],
)
go_test(

View File

@@ -339,7 +339,7 @@ func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.Signe
sidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, errors.Wrap(err, "unblind sidecars failed")
return nil, nil, errors.Wrap(err, "unblind blobs sidecars: commitment value doesn't match block")
}
return copiedBlock, sidecars, nil

View File

@@ -91,14 +91,7 @@ func (vs *Server) packAttestations(ctx context.Context, latestState state.Beacon
var attsForInclusion proposerAtts
if postElectra {
// TODO: hack for Electra devnet-1, take only one aggregate per ID
// (which essentially means one aggregate for an attestation_data+committee combination
topAggregates := make([]ethpb.Att, 0)
for _, v := range attsById {
topAggregates = append(topAggregates, v[0])
}
attsForInclusion, err = computeOnChainAggregate(topAggregates)
attsForInclusion, err = onChainAggregates(attsById)
if err != nil {
return nil, err
}
@@ -113,14 +106,68 @@ func (vs *Server) packAttestations(ctx context.Context, latestState state.Beacon
if err != nil {
return nil, err
}
sorted, err := deduped.sort()
if err != nil {
return nil, err
var sorted proposerAtts
if postElectra {
sorted, err = deduped.sortOnChainAggregates()
if err != nil {
return nil, err
}
} else {
sorted, err = deduped.sort()
if err != nil {
return nil, err
}
}
atts = sorted.limitToMaxAttestations()
return vs.filterAttestationBySignature(ctx, atts, latestState)
}
func onChainAggregates(attsById map[attestation.Id][]ethpb.Att) (proposerAtts, error) {
var result proposerAtts
var err error
// When constructing on-chain aggregates, we want to combine the most profitable
// aggregate for each ID, then the second most profitable, and so on and so forth.
// Because of this we sort attestations at the beginning.
for id, as := range attsById {
attsById[id], err = proposerAtts(as).sort()
if err != nil {
return nil, err
}
}
// We construct the first on-chain aggregate by taking the first aggregate for each ID.
// We construct the second on-chain aggregate by taking the second aggregate for each ID.
// We continue doing this until we run out of aggregates.
idx := 0
for {
topAggregates := make([]ethpb.Att, 0, len(attsById))
for _, as := range attsById {
// In case there are no more aggregates for an ID, we skip that ID.
if len(as) > idx {
topAggregates = append(topAggregates, as[idx])
}
}
// Once there are no more aggregates for any ID, we are done.
if len(topAggregates) == 0 {
break
}
onChainAggs, err := computeOnChainAggregate(topAggregates)
if err != nil {
return nil, err
}
result = append(result, onChainAggs...)
idx++
}
return result, nil
}
// filter separates attestation list into two groups: valid and invalid attestations.
// The first group passes the all the required checks for attestation to be considered for proposing.
// And attestations from the second group should be deleted.
@@ -223,6 +270,14 @@ func (a proposerAtts) sort() (proposerAtts, error) {
return a.sortBySlotAndCommittee()
}
func (a proposerAtts) sortOnChainAggregates() (proposerAtts, error) {
if len(a) < 2 {
return a, nil
}
return a.sortByProfitabilityUsingMaxCover()
}
// Separate attestations by slot, as slot number takes higher precedence when sorting.
// Also separate by committee index because maxcover will prefer attestations for the same
// committee with disjoint bits over attestations for different committees with overlapping
@@ -231,7 +286,6 @@ func (a proposerAtts) sortBySlotAndCommittee() (proposerAtts, error) {
type slotAtts struct {
candidates map[primitives.CommitteeIndex]proposerAtts
selected map[primitives.CommitteeIndex]proposerAtts
leftover map[primitives.CommitteeIndex]proposerAtts
}
var slots []primitives.Slot
@@ -250,7 +304,6 @@ func (a proposerAtts) sortBySlotAndCommittee() (proposerAtts, error) {
var err error
for _, sa := range attsBySlot {
sa.selected = make(map[primitives.CommitteeIndex]proposerAtts)
sa.leftover = make(map[primitives.CommitteeIndex]proposerAtts)
for ci, committeeAtts := range sa.candidates {
sa.selected[ci], err = committeeAtts.sortByProfitabilityUsingMaxCover_committeeAwarePacking()
if err != nil {
@@ -266,9 +319,6 @@ func (a proposerAtts) sortBySlotAndCommittee() (proposerAtts, error) {
for _, slot := range slots {
sortedAtts = append(sortedAtts, sortSlotAttestations(attsBySlot[slot].selected)...)
}
for _, slot := range slots {
sortedAtts = append(sortedAtts, sortSlotAttestations(attsBySlot[slot].leftover)...)
}
return sortedAtts, nil
}
@@ -287,15 +337,11 @@ func (a proposerAtts) sortByProfitabilityUsingMaxCover_committeeAwarePacking() (
return nil, err
}
}
// Add selected candidates on top, those that are not selected - append at bottom.
selectedKeys, _, err := aggregation.MaxCover(candidates, len(candidates), true /* allowOverlaps */)
if err != nil {
log.WithError(err).Debug("MaxCover aggregation failed")
return a, nil
}
// Pick selected attestations first, leftover attestations will be appended at the end.
// Both lists will be sorted by number of bits set.
selected := make(proposerAtts, selectedKeys.Count())
for i, key := range selectedKeys.BitIndices() {
selected[i] = a[key]

View File

@@ -13,6 +13,9 @@ import (
// computeOnChainAggregate constructs a final aggregate form a list of network aggregates with equal attestation data.
// It assumes that each network aggregate has exactly one committee bit set.
//
// Our implementation allows to pass aggregates for different attestation data, in which case the function will return
// one final aggregate per attestation data.
//
// Spec definition:
//
// def compute_on_chain_aggregate(network_aggregates: Sequence[Attestation]) -> Attestation:

View File

@@ -3,16 +3,21 @@ package validator
import (
"bytes"
"context"
"math/rand"
"sort"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
chainMock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/blst"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -680,6 +685,212 @@ func Test_packAttestations(t *testing.T) {
})
}
func Test_packAttestations_ElectraOnChainAggregates(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.ElectraForkEpoch = 1
params.OverrideBeaconConfig(cfg)
key, err := blst.RandKey()
require.NoError(t, err)
sig := key.Sign([]byte{'X'})
cb0 := primitives.NewAttestationCommitteeBits()
cb0.SetBitAt(0, true)
cb1 := primitives.NewAttestationCommitteeBits()
cb1.SetBitAt(1, true)
data0 := util.HydrateAttestationData(&ethpb.AttestationData{BeaconBlockRoot: bytesutil.PadTo([]byte{'0'}, 32)})
data1 := util.HydrateAttestationData(&ethpb.AttestationData{BeaconBlockRoot: bytesutil.PadTo([]byte{'1'}, 32)})
// Glossary:
// - Single Aggregate: aggregate with exactly one committee bit set, from which an On-Chain Aggregate is constructed
// - On-Chain Aggregate: final aggregate packed into a block
//
// We construct the following number of single aggregates:
// - data_root_0 and committee_index_0: 3 single aggregates
// - data_root_0 and committee_index_1: 2 single aggregates
// - data_root_1 and committee_index_0: 1 single aggregate
// - data_root_1 and committee_index_1: 3 single aggregates
//
// Because the function tries to aggregate attestations, we have to create attestations which are not aggregatable
// and are not redundant when using MaxCover.
// The function should also sort attestation by ID before computing the On-Chain Aggregate, so we want unsorted aggregation bits
// to test the sorting part.
//
// The result should be the following six on-chain aggregates:
// - for data_root_0 combining the most profitable aggregate for each committee
// - for data_root_0 combining the second most profitable aggregate for each committee
// - for data_root_0 constructed from the single aggregate at index 2 for committee_index_0
// - for data_root_1 combining the most profitable aggregate for each committee
// - for data_root_1 constructed from the single aggregate at index 1 for committee_index_1
// - for data_root_1 constructed from the single aggregate at index 2 for committee_index_1
d0_c0_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1000011},
CommitteeBits: cb0,
Data: data0,
Signature: sig.Marshal(),
}
d0_c0_a2 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1100101},
CommitteeBits: cb0,
Data: data0,
Signature: sig.Marshal(),
}
d0_c0_a3 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111000},
CommitteeBits: cb0,
Data: data0,
Signature: sig.Marshal(),
}
d0_c1_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111100},
CommitteeBits: cb1,
Data: data0,
Signature: sig.Marshal(),
}
d0_c1_a2 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1001111},
CommitteeBits: cb1,
Data: data0,
Signature: sig.Marshal(),
}
d1_c0_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111111},
CommitteeBits: cb0,
Data: data1,
Signature: sig.Marshal(),
}
d1_c1_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1000011},
CommitteeBits: cb1,
Data: data1,
Signature: sig.Marshal(),
}
d1_c1_a2 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1100101},
CommitteeBits: cb1,
Data: data1,
Signature: sig.Marshal(),
}
d1_c1_a3 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111000},
CommitteeBits: cb1,
Data: data1,
Signature: sig.Marshal(),
}
pool := &mock.PoolMock{}
require.NoError(t, pool.SaveAggregatedAttestations([]ethpb.Att{d0_c0_a1, d0_c0_a2, d0_c0_a3, d0_c1_a1, d0_c1_a2, d1_c0_a1, d1_c1_a1, d1_c1_a2, d1_c1_a3}))
slot := primitives.Slot(1)
s := &Server{AttPool: pool, HeadFetcher: &chainMock.ChainService{}, TimeFetcher: &chainMock.ChainService{Slot: &slot}}
// We need the correct number of validators so that there are at least 2 committees per slot
// and each committee has exactly 6 validators (this is because we have 6 aggregation bits).
st, _ := util.DeterministicGenesisStateElectra(t, 192)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
atts, err := s.packAttestations(ctx, st, params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
require.Equal(t, 6, len(atts))
assert.Equal(t, true,
atts[0].GetAggregationBits().Count() >= atts[1].GetAggregationBits().Count() &&
atts[1].GetAggregationBits().Count() >= atts[2].GetAggregationBits().Count() &&
atts[2].GetAggregationBits().Count() >= atts[3].GetAggregationBits().Count() &&
atts[3].GetAggregationBits().Count() >= atts[4].GetAggregationBits().Count() &&
atts[4].GetAggregationBits().Count() >= atts[5].GetAggregationBits().Count(),
"on-chain aggregates are not sorted by aggregation bit count",
)
t.Run("slot takes precedence", func(t *testing.T) {
moreRecentAtt := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1100000}, // we set only one bit for committee_index_0
CommitteeBits: cb1,
Data: util.HydrateAttestationData(&ethpb.AttestationData{Slot: 1, BeaconBlockRoot: bytesutil.PadTo([]byte{'0'}, 32)}),
Signature: sig.Marshal(),
}
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpb.Att{moreRecentAtt}))
atts, err = s.packAttestations(ctx, st, params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
require.Equal(t, 7, len(atts))
assert.Equal(t, true, atts[0].GetData().Slot == 1)
})
}
func Benchmark_packAttestations_Electra(b *testing.B) {
ctx := context.Background()
params.SetupTestConfigCleanup(b)
cfg := params.MainnetConfig().Copy()
cfg.ElectraForkEpoch = 1
params.OverrideBeaconConfig(cfg)
valCount := uint64(1048576)
committeeCount := helpers.SlotCommitteeCount(valCount)
valsPerCommittee := valCount / committeeCount / uint64(params.BeaconConfig().SlotsPerEpoch)
st, _ := util.DeterministicGenesisStateElectra(b, valCount)
key, err := blst.RandKey()
require.NoError(b, err)
sig := key.Sign([]byte{'X'})
r := rand.New(rand.NewSource(123))
var atts []ethpb.Att
for c := uint64(0); c < committeeCount; c++ {
for a := uint64(0); a < params.BeaconConfig().TargetAggregatorsPerCommittee; a++ {
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(c, true)
var att *ethpb.AttestationElectra
// Last two aggregators send aggregates for some random block root with only a few bits set.
if a >= params.BeaconConfig().TargetAggregatorsPerCommittee-2 {
root := bytesutil.PadTo([]byte("root_"+strconv.Itoa(r.Intn(100))), 32)
att = &ethpb.AttestationElectra{
Data: util.HydrateAttestationData(&ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch - 1, BeaconBlockRoot: root}),
AggregationBits: bitfield.NewBitlist(valsPerCommittee),
CommitteeBits: cb,
Signature: sig.Marshal(),
}
for bit := uint64(0); bit < valsPerCommittee; bit++ {
att.AggregationBits.SetBitAt(bit, r.Intn(100) < 2) // 2% chance that the bit is set
}
} else {
att = &ethpb.AttestationElectra{
Data: util.HydrateAttestationData(&ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch - 1, BeaconBlockRoot: bytesutil.PadTo([]byte("root"), 32)}),
AggregationBits: bitfield.NewBitlist(valsPerCommittee),
CommitteeBits: cb,
Signature: sig.Marshal(),
}
for bit := uint64(0); bit < valsPerCommittee; bit++ {
att.AggregationBits.SetBitAt(bit, r.Intn(100) < 98) // 98% chance that the bit is set
}
}
atts = append(atts, att)
}
}
pool := &mock.PoolMock{}
require.NoError(b, pool.SaveAggregatedAttestations(atts))
slot := primitives.Slot(1)
s := &Server{AttPool: pool, HeadFetcher: &chainMock.ChainService{}, TimeFetcher: &chainMock.ChainService{Slot: &slot}}
require.NoError(b, st.SetSlot(params.BeaconConfig().SlotsPerEpoch))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err = s.packAttestations(ctx, st, params.BeaconConfig().SlotsPerEpoch+1)
require.NoError(b, err)
}
}
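For a sense of scale: with 1,048,576 validators, the spec's mainnet committee sizing yields 64 committees per slot of 512 validators each, and TARGET_AGGREGATORS_PER_COMMITTEE is 16, so the nested loops above build 64 × 16 = 1,024 aggregates — the last two per committee sparse (~2% of bits set) for random roots, the rest near-full (~98%) for a shared root. These counts are inferred from the mainnet presets, not stated in the benchmark itself.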
func Test_limitToMaxAttestations(t *testing.T) {
t.Run("Phase 0", func(t *testing.T) {
atts := make([]ethpb.Att, params.BeaconConfig().MaxAttestations+1)


@@ -54,7 +54,7 @@ func (vs *Server) eth1DataMajorityVote(ctx context.Context, beaconState state.Be
// by ETH1_FOLLOW_DISTANCE. The head state should maintain the same ETH1Data until this condition has passed, so
// trust the existing head for the right eth1 vote until we can get a meaningful value from the deposit contract.
if latestValidTime < genesisTime+followDistanceSeconds {
log.WithField("genesisTime", genesisTime).WithField("latestValidTime", latestValidTime).Warn("voting period before genesis + follow distance, using eth1data from head")
log.WithField("genesisTime", genesisTime).WithField("latestValidTime", latestValidTime).Warn("Voting period before genesis + follow distance, using eth1data from head")
return vs.HeadFetcher.HeadETH1Data(), nil
}
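For a sense of the cutoff: with the spec's mainnet values (ETH1_FOLLOW_DISTANCE = 2048 blocks, SECONDS_PER_ETH1_BLOCK = 14 — assumptions here rather than anything shown in the hunk), followDistanceSeconds comes to 2048 × 14 = 28,672 seconds, so voting periods in roughly the first eight hours after genesis fall back to the head's eth1data.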


@@ -84,7 +84,6 @@ func (vs *Server) getLocalPayloadFromEngine(
}
setFeeRecipientIfBurnAddress(&val)
var err error
if ok && payloadId != [8]byte{} {
// Payload ID is cache hit. Return the cached payload ID.
var pid primitives.PayloadID
@@ -102,7 +101,7 @@ func (vs *Server) getLocalPayloadFromEngine(
return nil, errors.Wrap(err, "could not get cached payload from execution client")
}
}
log.WithFields(logFields).Debug("payload ID cache miss")
log.WithFields(logFields).Debug("Payload ID cache miss")
parentHash, err := vs.getParentBlockHash(ctx, st, slot)
switch {
case errors.Is(err, errActivationNotReached) || errors.Is(err, errNoTerminalBlockHash):
@@ -191,7 +190,7 @@ func (vs *Server) getLocalPayloadFromEngine(
}
warnIfFeeRecipientDiffers(val.FeeRecipient[:], res.ExecutionData.FeeRecipient())
log.WithField("value", res.Bid).Debug("received execution payload from local engine")
log.WithField("value", res.Bid).Debug("Received execution payload from local engine")
return res, nil
}


@@ -912,7 +912,7 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
err: "unblind sidecars failed: commitment value doesn't match block",
err: "unblind blobs sidecars: commitment value doesn't match block",
},
{
name: "electra block no blob",


@@ -287,6 +287,9 @@ func (vs *Server) validatorStatus(
Status: ethpb.ValidatorStatus_UNKNOWN_STATUS,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
}
if len(pubKey) == 0 {
return resp, nonExistentIndex
}
vStatus, idx, err := statusForPubKey(headState, pubKey)
if err != nil && !errors.Is(err, errPubkeyDoesNotExist) {
tracing.AnnotateError(span, err)


@@ -97,10 +97,7 @@ func (s *Service) filterAttestations(
// detection (except for the genesis epoch).
func validateAttestationIntegrity(att ethpb.IndexedAtt) bool {
// If an attestation is malformed, we drop it.
if att == nil ||
att.GetData() == nil ||
att.GetData().Source == nil ||
att.GetData().Target == nil {
if att == nil || att.IsNil() || att.GetData().Source == nil || att.GetData().Target == nil {
return false
}


@@ -4,7 +4,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// DepositRequestsStartIndex is used for returning the deposit receipts start index which is used for eip6110
// DepositRequestsStartIndex is used for returning the deposit requests start index which is used for eip6110
func (b *BeaconState) DepositRequestsStartIndex() (uint64, error) {
if b.version < version.Electra {
return 0, errNotSupported("DepositRequestsStartIndex", b.version)


@@ -107,7 +107,7 @@ type blobBatchVerifier struct {
func (bbv *blobBatchVerifier) newVerifier(rb blocks.ROBlob) verification.BlobVerifier {
m := bbv.verifiers[rb.BlockRoot()]
m[rb.Index] = bbv.newBlobVerifier(rb, verification.BackfillSidecarRequirements)
m[rb.Index] = bbv.newBlobVerifier(rb, verification.BackfillBlobSidecarRequirements)
bbv.verifiers[rb.BlockRoot()] = m
return m[rb.Index]
}


@@ -388,6 +388,7 @@ func TestService_CheckForPreviousEpochFork(t *testing.T) {
}
}
// oneEpoch returns the duration of one epoch.
func oneEpoch() time.Duration {
return time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
}
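With mainnet presets (32 slots per epoch, 12 seconds per slot), oneEpoch() works out to 32 × 12 = 384 seconds, i.e. 6.4 minutes.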


@@ -172,7 +172,7 @@ func (s *Service) processFetchedDataRegSync(
if len(bwb) == 0 {
return
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
batchFields := logrus.Fields{
"firstSlot": data.bwb[0].Block.Block().Slot(),
@@ -331,7 +331,7 @@ func (s *Service) processBatchedBlocks(ctx context.Context, genesis time.Time,
errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot())
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
s.logBatchSyncStatus(genesis, first, len(bwb))
for _, bb := range bwb {


@@ -340,7 +340,7 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
if len(sidecars) != len(req) {
continue
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
current := s.clock.CurrentSlot()
if err := avs.Persist(current, sidecars...); err != nil {


@@ -495,8 +495,8 @@ func TestOriginOutsideRetention(t *testing.T) {
bdb := dbtest.SetupDB(t)
genesis := time.Unix(0, 0)
secsPerEpoch := params.BeaconConfig().SecondsPerSlot * uint64(params.BeaconConfig().SlotsPerEpoch)
retentionSeconds := time.Second * time.Duration(uint64(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest+1)*secsPerEpoch)
outsideRetention := genesis.Add(retentionSeconds)
retentionPeriod := time.Second * time.Duration(uint64(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest+1)*secsPerEpoch)
outsideRetention := genesis.Add(retentionPeriod)
now := func() time.Time {
return outsideRetention
}


@@ -315,7 +315,7 @@ func (s *Service) sendBatchRootRequest(ctx context.Context, roots [][32]byte, ra
if uint64(len(roots)) > maxReqBlock {
req = roots[:maxReqBlock]
}
if err := s.sendRecentBeaconBlocksRequest(ctx, &req, pid); err != nil {
if err := s.sendBeaconBlocksRequest(ctx, &req, pid); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Debug("Could not send recent block request")
}


@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
)
// beaconBlocksByRangeRPCHandler looks up the requested blocks from the database from a given start block.
@@ -26,15 +27,23 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
defer cancel()
SetRPCStreamDeadlines(stream)
remotePeer := stream.Conn().RemotePeer()
m, ok := msg.(*pb.BeaconBlocksByRangeRequest)
if !ok {
return errors.New("message is not type *pb.BeaconBlocksByRangeRequest")
}
log.WithField("startSlot", m.StartSlot).WithField("count", m.Count).Debug("Serving block by range request")
log.WithFields(logrus.Fields{
"startSlot": m.StartSlot,
"count": m.Count,
"peer": remotePeer,
}).Debug("Serving block by range request")
rp, err := validateRangeRequest(m, s.cfg.clock.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
tracing.AnnotateError(span, err)
return err
}
@@ -50,12 +59,12 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
if err != nil {
return err
}
remainingBucketCapacity := blockLimiter.Remaining(stream.Conn().RemotePeer().String())
remainingBucketCapacity := blockLimiter.Remaining(remotePeer.String())
span.SetAttributes(
trace.Int64Attribute("start", int64(rp.start)), // lint:ignore uintcast -- This conversion is OK for tracing.
trace.Int64Attribute("end", int64(rp.end)), // lint:ignore uintcast -- This conversion is OK for tracing.
trace.Int64Attribute("count", int64(m.Count)),
trace.StringAttribute("peer", stream.Conn().RemotePeer().String()),
trace.StringAttribute("peer", remotePeer.String()),
trace.Int64Attribute("remaining_capacity", remainingBucketCapacity),
)
@@ -82,12 +91,19 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
}
rpcBlocksByRangeResponseLatency.Observe(float64(time.Since(batchStart).Milliseconds()))
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in BlocksByRange batch")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
log.WithError(err).Debug("Serving block by range request - BlocksByRange batch")
// If a rate limit is hit, it means an error response has already been sent and the stream has been closed.
if !errors.Is(err, p2ptypes.ErrRateLimited) {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
}
tracing.AnnotateError(span, err)
return err
}
closeStream(stream, log)
return nil
}


@@ -20,9 +20,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// sendRecentBeaconBlocksRequest sends a recent beacon blocks request to a peer to get
// sendBeaconBlocksRequest sends a beacon blocks-by-root request to a peer to get
// those corresponding blocks from that peer.
func (s *Service) sendRecentBeaconBlocksRequest(ctx context.Context, requests *types.BeaconBlockByRootsReq, id peer.ID) error {
func (s *Service) sendBeaconBlocksRequest(ctx context.Context, requests *types.BeaconBlockByRootsReq, id peer.ID) error {
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
@@ -151,7 +151,7 @@ func (s *Service) sendAndSaveBlobSidecars(ctx context.Context, request types.Blo
if len(sidecars) != len(request) {
return fmt.Errorf("received %d blob sidecars, expected %d for RPC", len(sidecars), len(request))
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.PendingQueueSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.PendingQueueBlobSidecarRequirements)
for _, sidecar := range sidecars {
if err := verify.BlobAlignsWithBlock(sidecar, RoBlock); err != nil {
return err


@@ -253,7 +253,7 @@ func TestRecentBeaconBlocks_RPCRequestSent(t *testing.T) {
})
p1.Connect(p2)
require.NoError(t, r.sendRecentBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
require.NoError(t, r.sendBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
@@ -328,7 +328,7 @@ func TestRecentBeaconBlocks_RPCRequestSent_IncorrectRoot(t *testing.T) {
})
p1.Connect(p2)
require.ErrorContains(t, "received unexpected block with root", r.sendRecentBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
require.ErrorContains(t, "received unexpected block with root", r.sendBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
}
func TestRecentBeaconBlocksRPCHandler_HandleZeroBlocks(t *testing.T) {


@@ -99,6 +99,7 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
}
var batch blockBatch
wQuota := params.BeaconConfig().MaxRequestBlobSidecars
for batch, ok = batcher.next(ctx, stream); ok; batch, ok = batcher.next(ctx, stream) {
batchStart := time.Now()
@@ -114,7 +115,12 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in BlobSidecarsByRange batch")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
// If a rate limit is hit, it means an error response has already been sent and the stream has been closed.
if !errors.Is(err, p2ptypes.ErrRateLimited) {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
}
tracing.AnnotateError(span, err)
return err
}


@@ -2,12 +2,12 @@ package sync
import (
"context"
"errors"
"fmt"
"strings"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -16,127 +16,191 @@ import (
)
// pingHandler reads the incoming ping rpc message from the peer.
// If the peer's sequence number is higher than the one stored locally,
// a METADATA request is sent to the peer to retrieve and update the latest metadata.
// Note: This function is misnamed, as it performs more than just reading a ping message.
func (s *Service) pingHandler(_ context.Context, msg interface{}, stream libp2pcore.Stream) error {
SetRPCStreamDeadlines(stream)
// Convert the message to SSZ Uint64 type.
m, ok := msg.(*primitives.SSZUint64)
if !ok {
return fmt.Errorf("wrong message type for ping, got %T, wanted *primitives.SSZUint64", msg)
}
// Validate the incoming request regarding rate limiting.
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
return err
return errors.Wrap(err, "validate request")
}
s.rateLimiter.add(stream, 1)
valid, err := s.validateSequenceNum(*m, stream.Conn().RemotePeer())
// Retrieve the peer ID.
peerID := stream.Conn().RemotePeer()
// Check if the peer sequence number is higher than the one we have in our store.
valid, err := s.validateSequenceNum(*m, peerID)
if err != nil {
// Descore peer for giving us a bad sequence number.
if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
s.writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrInvalidSequenceNum.Error(), stream)
}
return err
return errors.Wrap(err, "validate sequence number")
}
// We can already prepare a success response to the peer.
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
return errors.Wrap(err, "write response")
}
sq := primitives.SSZUint64(s.cfg.p2p.MetadataSeq())
if _, err := s.cfg.p2p.Encoding().EncodeWithMaxLength(stream, &sq); err != nil {
// Retrieve our own sequence number.
seqNumber := s.cfg.p2p.MetadataSeq()
// SSZ encode our sequence number.
seqNumberSSZ := primitives.SSZUint64(seqNumber)
// Send our sequence number back to the peer.
if _, err := s.cfg.p2p.Encoding().EncodeWithMaxLength(stream, &seqNumberSSZ); err != nil {
return err
}
closeStream(stream, log)
if valid {
// If the sequence number was valid we're done.
// If the peer's sequence number was valid, we're done.
return nil
}
// The sequence number was not valid. Start our own ping back to the peer.
// The peer's sequence number was not valid. We ask the peer for its metadata.
go func() {
// New context so the calling function doesn't cancel on us.
// Define a new context so the calling function doesn't cancel on us.
ctx, cancel := context.WithTimeout(context.Background(), ttfbTimeout)
defer cancel()
md, err := s.sendMetaDataRequest(ctx, stream.Conn().RemotePeer())
// Send a METADATA request to the peer.
peerMetadata, err := s.sendMetaDataRequest(ctx, peerID)
if err != nil {
// We cannot compare errors directly as the stream muxer error
// type isn't compatible with the error we have, so a direct
// equality checks fails.
if !strings.Contains(err.Error(), p2ptypes.ErrIODeadline.Error()) {
log.WithField("peer", stream.Conn().RemotePeer()).WithError(err).Debug("Could not send metadata request")
log.WithField("peer", peerID).WithError(err).Debug("Could not send metadata request")
}
return
}
// update metadata if there is no error
s.cfg.p2p.Peers().SetMetadata(stream.Conn().RemotePeer(), md)
// Update peer's metadata.
s.cfg.p2p.Peers().SetMetadata(peerID, peerMetadata)
}()
return nil
}
func (s *Service) sendPingRequest(ctx context.Context, id peer.ID) error {
// sendPingRequest first sends a PING request to the peer.
// If the peer responds with a sequence number higher than the latest one we have in our store for it,
// then this function sends a METADATA request to the peer and stores the metadata received.
// This function is actually poorly named, since it does more than just send a ping request.
func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
metadataSeq := primitives.SSZUint64(s.cfg.p2p.MetadataSeq())
topic, err := p2p.TopicFromMessage(p2p.PingMessageName, slots.ToEpoch(s.cfg.clock.CurrentSlot()))
// Get the current epoch.
currentSlot := s.cfg.clock.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
// SSZ encode our metadata sequence number.
metadataSeq := s.cfg.p2p.MetadataSeq()
encodedMetadataSeq := primitives.SSZUint64(metadataSeq)
// Get the PING topic for the current epoch.
topic, err := p2p.TopicFromMessage(p2p.PingMessageName, currentEpoch)
if err != nil {
return err
return errors.Wrap(err, "topic from message")
}
stream, err := s.cfg.p2p.Send(ctx, &metadataSeq, topic, id)
// Send the PING request to the peer.
stream, err := s.cfg.p2p.Send(ctx, &encodedMetadataSeq, topic, peerID)
if err != nil {
return err
return errors.Wrap(err, "send ping request")
}
currentTime := time.Now()
defer closeStream(stream, log)
startTime := time.Now()
// Read the response from the peer.
code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
if err != nil {
return err
return errors.Wrap(err, "read status code")
}
// Records the latency of the ping request for that peer.
s.cfg.p2p.Host().Peerstore().RecordLatency(id, time.Now().Sub(currentTime))
// Record the latency of the ping request for that peer.
s.cfg.p2p.Host().Peerstore().RecordLatency(peerID, time.Now().Sub(startTime))
// If the peer responded with an error, increment the bad responses scorer.
if code != 0 {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
return errors.New(errMsg)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
return errors.Errorf("code: %d - %s", code, errMsg)
}
// Decode the sequence number from the peer.
msg := new(primitives.SSZUint64)
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
return err
return errors.Wrap(err, "decode sequence number")
}
valid, err := s.validateSequenceNum(*msg, stream.Conn().RemotePeer())
// Determine whether the sequence number returned by the peer is higher than the one we have in our store.
valid, err := s.validateSequenceNum(*msg, peerID)
if err != nil {
// Descore peer for giving us a bad sequence number.
if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
}
return err
return errors.Wrap(err, "validate sequence number")
}
// The sequence number we have in our store for this peer is the same as the one returned by the peer, all good.
if valid {
return nil
}
md, err := s.sendMetaDataRequest(ctx, stream.Conn().RemotePeer())
// We need to send a METADATA request to the peer to get its latest metadata.
md, err := s.sendMetaDataRequest(ctx, peerID)
if err != nil {
// do not increment bad responses, as its
// already done in the request method.
return err
// Do not increment bad responses, as it's already done in the request method.
return errors.Wrap(err, "send metadata request")
}
s.cfg.p2p.Peers().SetMetadata(stream.Conn().RemotePeer(), md)
// Update the metadata for the peer.
s.cfg.p2p.Peers().SetMetadata(peerID, md)
return nil
}
// validates the peer's sequence number.
// validateSequenceNum validates the peer's sequence number.
// - If the peer's sequence number is greater than the sequence number we have in our store for the peer, return false.
// - If the peer's sequence number is equal to the sequence number we have in our store for the peer, return true.
// - If the peer's sequence number is less than the sequence number we have in our store for the peer, return an error.
func (s *Service) validateSequenceNum(seq primitives.SSZUint64, id peer.ID) (bool, error) {
// Retrieve the metadata for the peer we got in our store.
md, err := s.cfg.p2p.Peers().Metadata(id)
if err != nil {
return false, err
return false, errors.Wrap(err, "get metadata")
}
// If we have no metadata for the peer, return false.
if md == nil || md.IsNil() {
return false, nil
}
// Return error on invalid sequence number.
// The sequence number sent by the peer must be greater than or equal to the one we have in our store.
if md.SequenceNumber() > uint64(seq) {
return false, p2ptypes.ErrInvalidSequenceNum
}
// Return true if the peer's sequence number is equal to the sequence number we have in our store.
return md.SequenceNumber() == uint64(seq), nil
}
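A minimal standalone sketch of the three documented outcomes, with a bare uint64 standing in for the stored peer metadata (hypothetical names, not the real service wiring):

package main

import (
	"errors"
	"fmt"
)

var errInvalidSequenceNum = errors.New("invalid sequence number provided by peer")

// validate mirrors the contract above: peer ahead of our store -> (false, nil),
// equal -> (true, nil), peer behind our store -> error.
func validate(stored, fromPeer uint64) (bool, error) {
	if stored > fromPeer {
		return false, errInvalidSequenceNum
	}
	return stored == fromPeer, nil
}

func main() {
	fmt.Println(validate(3, 5)) // false <nil> -> ask the peer for fresh METADATA
	fmt.Println(validate(5, 5)) // true <nil>  -> our store is up to date
	fmt.Println(validate(7, 5)) // false, invalid sequence number -> descore the peer
}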


@@ -19,7 +19,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
@@ -49,7 +48,7 @@ type BeaconBlockProcessor func(block interfaces.ReadOnlySignedBeaconBlock) error
// SendBeaconBlocksByRangeRequest sends BeaconBlocksByRange and returns fetched blocks, if any.
func SendBeaconBlocksByRangeRequest(
ctx context.Context, tor blockchain.TemporalOracle, p2pProvider p2p.SenderEncoder, pid peer.ID,
req *pb.BeaconBlocksByRangeRequest, blockProcessor BeaconBlockProcessor,
req *ethpb.BeaconBlocksByRangeRequest, blockProcessor BeaconBlockProcessor,
) ([]interfaces.ReadOnlySignedBeaconBlock, error) {
topic, err := p2p.TopicFromMessage(p2p.BeaconBlocksByRangeMessageName, slots.ToEpoch(tor.CurrentSlot()))
if err != nil {
@@ -155,7 +154,7 @@ func SendBeaconBlocksByRootRequest(
return blocks, nil
}
func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle, p2pApi p2p.SenderEncoder, pid peer.ID, ctxMap ContextByteVersions, req *pb.BlobSidecarsByRangeRequest, bvs ...BlobResponseValidation) ([]blocks.ROBlob, error) {
func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle, p2pApi p2p.SenderEncoder, pid peer.ID, ctxMap ContextByteVersions, req *ethpb.BlobSidecarsByRangeRequest, bvs ...BlobResponseValidation) ([]blocks.ROBlob, error) {
topic, err := p2p.TopicFromMessage(p2p.BlobSidecarsByRangeName, slots.ToEpoch(tor.CurrentSlot()))
if err != nil {
return nil, err
@@ -298,7 +297,7 @@ func blobValidatorFromRootReq(req *p2ptypes.BlobSidecarsByRootReq) BlobResponseV
}
}
func blobValidatorFromRangeReq(req *pb.BlobSidecarsByRangeRequest) BlobResponseValidation {
func blobValidatorFromRangeReq(req *ethpb.BlobSidecarsByRangeRequest) BlobResponseValidation {
end := req.StartSlot + primitives.Slot(req.Count)
return func(sc blocks.ROBlob) error {
if sc.Slot() < req.StartSlot || sc.Slot() >= end {


@@ -15,6 +15,8 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
gcache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/trailofbits/go-mutexasserts"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/async/abool"
"github.com/prysmaticlabs/prysm/v5/async/event"
@@ -44,22 +46,24 @@ import (
"github.com/prysmaticlabs/prysm/v5/runtime"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/trailofbits/go-mutexasserts"
)
var _ runtime.Service = (*Service)(nil)
const rangeLimit uint64 = 1024
const seenBlockSize = 1000
const seenBlobSize = seenBlockSize * 4 // Each block can have max 4 blobs. Worst case 164kB for cache.
const seenUnaggregatedAttSize = 20000
const seenAggregatedAttSize = 16384
const seenSyncMsgSize = 1000 // Maximum of 512 sync committee members, 1000 is a safe amount.
const seenSyncContributionSize = 512 // Maximum of SYNC_COMMITTEE_SIZE as specified by the spec.
const seenExitSize = 100
const seenProposerSlashingSize = 100
const badBlockSize = 1000
const syncMetricsInterval = 10 * time.Second
const (
rangeLimit uint64 = 1024
seenBlockSize = 1000
seenBlobSize = seenBlockSize * 6 // Each block can have max 6 blobs.
seenDataColumnSize = seenBlockSize * 128 // Each block can have max 128 data columns.
seenUnaggregatedAttSize = 20000
seenAggregatedAttSize = 16384
seenSyncMsgSize = 1000 // Maximum of 512 sync committee members, 1000 is a safe amount.
seenSyncContributionSize = 512 // Maximum of SYNC_COMMITTEE_SIZE as specified by the spec.
seenExitSize = 100
seenProposerSlashingSize = 100
badBlockSize = 1000
syncMetricsInterval = 10 * time.Second
)
var (
// Seconds in one epoch.
@@ -162,18 +166,18 @@ type Service struct {
// NewService initializes new regular sync service.
func NewService(ctx context.Context, opts ...Option) *Service {
c := gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */)
ctx, cancel := context.WithCancel(ctx)
r := &Service{
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: c,
slotToPendingBlocks: gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */),
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]ethpb.SignedAggregateAttAndProof),
signatureChan: make(chan *signatureVerifier, verifierLimit),
}
for _, opt := range opts {
if err := opt(r); err != nil {
return nil
@@ -224,7 +228,7 @@ func (s *Service) Start() {
s.newBlobVerifier = newBlobVerifierFromInitializer(v)
go s.verifierRoutine()
go s.registerHandlers()
go s.startTasksPostInitialSync()
s.cfg.p2p.AddConnectionHandler(s.reValidatePeer, s.sendGoodbye)
s.cfg.p2p.AddDisconnectionHandler(func(_ context.Context, _ peer.ID) error {
@@ -315,23 +319,31 @@ func (s *Service) waitForChainStart() {
s.markForChainStart()
}
func (s *Service) registerHandlers() {
func (s *Service) startTasksPostInitialSync() {
// Wait for the chain to start.
s.waitForChainStart()
select {
case <-s.initialSyncComplete:
// Register respective pubsub handlers at state synced event.
digest, err := s.currentForkDigest()
// Compute the current epoch.
currentSlot := slots.CurrentSlot(uint64(s.cfg.clock.GenesisTime().Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
// Compute the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
log.WithError(err).Error("Could not retrieve current fork digest")
return
}
currentEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.cfg.clock.GenesisTime().Unix())))
s.registerSubscribers(currentEpoch, digest)
// Register respective pubsub handlers at state synced event.
s.registerSubscribers(currentEpoch, forkDigest)
// Start the fork watcher.
go s.forkWatcher()
return
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
}
}


@@ -62,7 +62,7 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
}
topic := "/eth2/%x/beacon_block"
go r.registerHandlers()
go r.startTasksPostInitialSync()
time.Sleep(100 * time.Millisecond)
var vr [32]byte
@@ -143,7 +143,7 @@ func TestSyncHandlers_WaitTillSynced(t *testing.T) {
syncCompleteCh := make(chan bool)
go func() {
r.registerHandlers()
r.startTasksPostInitialSync()
syncCompleteCh <- true
}()
@@ -200,7 +200,7 @@ func TestSyncService_StopCleanly(t *testing.T) {
initialSyncComplete: make(chan struct{}),
}
go r.registerHandlers()
go r.startTasksPostInitialSync()
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
r.waitForChainStart()


@@ -13,7 +13,7 @@ import (
func (s *Service) blobSubscriber(ctx context.Context, msg proto.Message) error {
b, ok := msg.(blocks.VerifiedROBlob)
if !ok {
return fmt.Errorf("message was not type blocks.ROBlob, type=%T", msg)
return fmt.Errorf("message was not type blocks.VerifiedROBlob, type=%T", msg)
}
return s.subscribeBlob(ctx, b)


@@ -57,11 +57,10 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
}
aggregate := m.AggregateAttestationAndProof().AggregateVal()
data := aggregate.GetData()
if err := helpers.ValidateNilAttestation(aggregate); err != nil {
return pubsub.ValidationReject, err
}
data := aggregate.GetData()
// Do not process slot 0 aggregates.
if data.Slot == 0 {
return pubsub.ValidationIgnore, nil
@@ -118,6 +117,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
if seen {
return pubsub.ValidationIgnore, nil
}
// Verify the block being voted on is in the beacon chain.
// If not, store this attestation in the map of pending attestations.
if !s.validateBlockInAttestation(ctx, m) {
return pubsub.ValidationIgnore, nil
}
@@ -223,6 +225,8 @@ func (s *Service) validateAggregatedAtt(ctx context.Context, signed ethpb.Signed
return s.validateWithBatchVerifier(ctx, "aggregate", set)
}
// validateBlockInAttestation checks if the block being voted on is in the beaconDB.
// If not, it stores this attestation in the map of pending attestations.
func (s *Service) validateBlockInAttestation(ctx context.Context, satt ethpb.SignedAggregateAttAndProof) bool {
// Verify the block being voted on and the processed state are in beaconDB. The block should have passed validation if it's in the beaconDB.
blockRoot := bytesutil.ToBytes32(satt.AggregateAttestationAndProof().AggregateVal().GetData().BeaconBlockRoot)


@@ -62,12 +62,11 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(ctx context.Context, p
if !ok {
return pubsub.ValidationReject, errWrongMessage
}
data := att.GetData()
if err := helpers.ValidateNilAttestation(att); err != nil {
return pubsub.ValidationReject, err
}
data := att.GetData()
// Do not process slot 0 attestations.
if data.Slot == 0 {
return pubsub.ValidationIgnore, nil


@@ -211,11 +211,16 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
// Log the arrival time of the accepted block
graffiti := blk.Block().Body().Graffiti()
exec, err := blk.Block().Body().Execution()
if err != nil {
log.WithError(err).Error("Could not get execution data from block")
}
startTime, err := slots.ToTime(genesisTime, blk.Block().Slot())
logFields := logrus.Fields{
"blockSlot": blk.Block().Slot(),
"proposerIndex": blk.Block().ProposerIndex(),
"graffiti": string(graffiti[:]),
"extraData": string(exec.ExtraData()),
}
if err != nil {
log.WithError(err).WithFields(logFields).Warn("Received block, could not report timing information.")


@@ -51,7 +51,7 @@ func (s *Service) validateBlob(ctx context.Context, pid peer.ID, msg *pubsub.Mes
if err != nil {
return pubsub.ValidationReject, errors.Wrap(err, "roblob conversion failure")
}
vf := s.newBlobVerifier(blob, verification.GossipSidecarRequirements)
vf := s.newBlobVerifier(blob, verification.GossipBlobSidecarRequirements)
if err := vf.BlobIndexInBounds(); err != nil {
return pubsub.ValidationReject, err


@@ -10,6 +10,7 @@ go_library(
"fake.go",
"initializer.go",
"interface.go",
"log.go",
"metrics.go",
"mock.go",
"result.go",


@@ -169,7 +169,7 @@ func TestBatchVerifier(t *testing.T) {
blk, blbs := c.bandb(t, c.nblobs)
reqs := c.reqs
if reqs == nil {
reqs = InitsyncSidecarRequirements
reqs = InitsyncBlobSidecarRequirements
}
bbv := NewBlobBatchVerifier(c.nv(), reqs)
if c.cv == nil {


@@ -2,6 +2,7 @@ package verification
import (
"context"
goError "errors"
"github.com/pkg/errors"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
@@ -12,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/runtime/logging"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
const (
@@ -29,7 +29,7 @@ const (
RequireSidecarProposerExpected
)
var allSidecarRequirements = []Requirement{
var allBlobSidecarRequirements = []Requirement{
RequireBlobIndexInBounds,
RequireNotFromFutureSlot,
RequireSlotAboveFinalized,
@@ -43,21 +43,21 @@ var allSidecarRequirements = []Requirement{
RequireSidecarProposerExpected,
}
// GossipSidecarRequirements defines the set of requirements that BlobSidecars received on gossip
// GossipBlobSidecarRequirements defines the set of requirements that BlobSidecars received on gossip
// must satisfy in order to upgrade an ROBlob to a VerifiedROBlob.
var GossipSidecarRequirements = requirementList(allSidecarRequirements).excluding()
var GossipBlobSidecarRequirements = requirementList(allBlobSidecarRequirements).excluding()
// SpectestSidecarRequirements is used by the forkchoice spectests when verifying blobs used in the on_block tests.
// SpectestBlobSidecarRequirements is used by the forkchoice spectests when verifying blobs used in the on_block tests.
// The only requirements we exclude for these tests are the parent validity and seen tests, as these are specific to
// gossip processing and require the bad block cache that we only use there.
var SpectestSidecarRequirements = requirementList(GossipSidecarRequirements).excluding(
var SpectestBlobSidecarRequirements = requirementList(GossipBlobSidecarRequirements).excluding(
RequireSidecarParentSeen, RequireSidecarParentValid)
// InitsyncSidecarRequirements is the list of verification requirements to be used by the init-sync service
// InitsyncBlobSidecarRequirements is the list of verification requirements to be used by the init-sync service
// for batch-mode syncing. Because we only perform batch verification as part of the IsDataAvailable method
// for blobs after the block has been verified, and the blobs to be verified are keyed in the cache by the
// block root, the list of required verifications is much shorter than gossip.
var InitsyncSidecarRequirements = requirementList(GossipSidecarRequirements).excluding(
var InitsyncBlobSidecarRequirements = requirementList(GossipBlobSidecarRequirements).excluding(
RequireNotFromFutureSlot,
RequireSlotAboveFinalized,
RequireSidecarParentSeen,
@@ -71,36 +71,16 @@ var InitsyncSidecarRequirements = requirementList(GossipSidecarRequirements).exc
// execution layer mempool. Only the KZG proof verification is required.
var ELMemPoolRequirements = []Requirement{RequireSidecarKzgProofVerified}
// BackfillSidecarRequirements is the same as InitsyncSidecarRequirements.
var BackfillSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
// BackfillBlobSidecarRequirements is the same as InitsyncBlobSidecarRequirements.
var BackfillBlobSidecarRequirements = requirementList(InitsyncBlobSidecarRequirements).excluding()
// PendingQueueSidecarRequirements is the same as InitsyncSidecarRequirements, used by the pending blocks queue.
var PendingQueueSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
// PendingQueueBlobSidecarRequirements is the same as InitsyncBlobSidecarRequirements, used by the pending blocks queue.
var PendingQueueBlobSidecarRequirements = requirementList(InitsyncBlobSidecarRequirements).excluding()
var (
ErrBlobInvalid = errors.New("blob failed verification")
// ErrBlobIndexInvalid means RequireBlobIndexInBounds failed.
ErrBlobIndexInvalid = errors.Wrap(ErrBlobInvalid, "incorrect blob sidecar index")
// ErrFromFutureSlot means RequireSlotNotTooEarly failed.
ErrFromFutureSlot = errors.Wrap(ErrBlobInvalid, "slot is too far in the future")
// ErrSlotNotAfterFinalized means RequireSlotAboveFinalized failed.
ErrSlotNotAfterFinalized = errors.Wrap(ErrBlobInvalid, "slot <= finalized checkpoint")
// ErrInvalidProposerSignature means RequireValidProposerSignature failed.
ErrInvalidProposerSignature = errors.Wrap(ErrBlobInvalid, "proposer signature could not be verified")
// ErrSidecarParentNotSeen means RequireSidecarParentSeen failed.
ErrSidecarParentNotSeen = errors.Wrap(ErrBlobInvalid, "parent root has not been seen")
// ErrSidecarParentInvalid means RequireSidecarParentValid failed.
ErrSidecarParentInvalid = errors.Wrap(ErrBlobInvalid, "parent block is not valid")
// ErrSlotNotAfterParent means RequireSidecarParentSlotLower failed.
ErrSlotNotAfterParent = errors.Wrap(ErrBlobInvalid, "slot <= slot")
// ErrSidecarNotFinalizedDescendent means RequireSidecarDescendsFromFinalized failed.
ErrSidecarNotFinalizedDescendent = errors.Wrap(ErrBlobInvalid, "blob parent is not descended from the finalized block")
// ErrSidecarInclusionProofInvalid means RequireSidecarInclusionProven failed.
ErrSidecarInclusionProofInvalid = errors.Wrap(ErrBlobInvalid, "sidecar inclusion proof verification failed")
// ErrSidecarKzgProofInvalid means RequireSidecarKzgProofVerified failed.
ErrSidecarKzgProofInvalid = errors.Wrap(ErrBlobInvalid, "sidecar kzg commitment proof verification failed")
// ErrSidecarUnexpectedProposer means RequireSidecarProposerExpected failed.
ErrSidecarUnexpectedProposer = errors.Wrap(ErrBlobInvalid, "sidecar was not proposed by the expected proposer_index")
ErrBlobIndexInvalid = errors.New("incorrect blob sidecar index")
)
type ROBlobVerifier struct {
@@ -149,7 +129,7 @@ func (bv *ROBlobVerifier) BlobIndexInBounds() (err error) {
defer bv.recordResult(RequireBlobIndexInBounds, &err)
if bv.blob.Index >= fieldparams.MaxBlobsPerBlock {
log.WithFields(logging.BlobFields(bv.blob)).Debug("Sidecar index >= MAX_BLOBS_PER_BLOCK")
return ErrBlobIndexInvalid
return blobErrBuilder(ErrBlobIndexInvalid)
}
return nil
}
@@ -168,7 +148,7 @@ func (bv *ROBlobVerifier) NotFromFutureSlot() (err error) {
// If the system time is still before earliestStart, we consider the blob from a future slot and return an error.
if bv.clock.Now().Before(earliestStart) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("sidecar slot is too far in the future")
return ErrFromFutureSlot
return blobErrBuilder(ErrFromFutureSlot)
}
return nil
}
@@ -181,11 +161,11 @@ func (bv *ROBlobVerifier) SlotAboveFinalized() (err error) {
fcp := bv.fc.FinalizedCheckpoint()
fSlot, err := slots.EpochStart(fcp.Epoch)
if err != nil {
return errors.Wrapf(ErrSlotNotAfterFinalized, "error computing epoch start slot for finalized checkpoint (%d) %s", fcp.Epoch, err.Error())
return errors.Wrapf(blobErrBuilder(ErrSlotNotAfterFinalized), "error computing epoch start slot for finalized checkpoint (%d) %s", fcp.Epoch, err.Error())
}
if bv.blob.Slot() <= fSlot {
log.WithFields(logging.BlobFields(bv.blob)).Debug("sidecar slot is not after finalized checkpoint")
return ErrSlotNotAfterFinalized
return blobErrBuilder(ErrSlotNotAfterFinalized)
}
return nil
}
@@ -203,7 +183,7 @@ func (bv *ROBlobVerifier) ValidProposerSignature(ctx context.Context) (err error
if err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("reusing failed proposer signature validation from cache")
blobVerificationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
return ErrInvalidProposerSignature
return blobErrBuilder(ErrInvalidProposerSignature)
}
return nil
}
@@ -213,12 +193,12 @@ func (bv *ROBlobVerifier) ValidProposerSignature(ctx context.Context) (err error
parent, err := bv.parentState(ctx)
if err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("could not replay parent state for blob signature verification")
return ErrInvalidProposerSignature
return blobErrBuilder(ErrInvalidProposerSignature)
}
// Full verification, which will subsequently be cached for anything sharing the signature cache.
if err = bv.sc.VerifySignature(sd, parent); err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("signature verification failed")
return ErrInvalidProposerSignature
return blobErrBuilder(ErrInvalidProposerSignature)
}
return nil
}
@@ -235,7 +215,7 @@ func (bv *ROBlobVerifier) SidecarParentSeen(parentSeen func([32]byte) bool) (err
return nil
}
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root has not been seen")
return ErrSidecarParentNotSeen
return blobErrBuilder(ErrSidecarParentNotSeen)
}
// SidecarParentValid represents the spec verification:
@@ -244,7 +224,7 @@ func (bv *ROBlobVerifier) SidecarParentValid(badParent func([32]byte) bool) (err
defer bv.recordResult(RequireSidecarParentValid, &err)
if badParent != nil && badParent(bv.blob.ParentRoot()) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root is invalid")
return ErrSidecarParentInvalid
return blobErrBuilder(ErrSidecarParentInvalid)
}
return nil
}
@@ -255,10 +235,10 @@ func (bv *ROBlobVerifier) SidecarParentSlotLower() (err error) {
defer bv.recordResult(RequireSidecarParentSlotLower, &err)
parentSlot, err := bv.fc.Slot(bv.blob.ParentRoot())
if err != nil {
return errors.Wrap(ErrSlotNotAfterParent, "parent root not in forkchoice")
return errors.Wrap(blobErrBuilder(ErrSlotNotAfterParent), "parent root not in forkchoice")
}
if parentSlot >= bv.blob.Slot() {
return ErrSlotNotAfterParent
return blobErrBuilder(ErrSlotNotAfterParent)
}
return nil
}
@@ -270,7 +250,7 @@ func (bv *ROBlobVerifier) SidecarDescendsFromFinalized() (err error) {
defer bv.recordResult(RequireSidecarDescendsFromFinalized, &err)
if !bv.fc.HasNode(bv.blob.ParentRoot()) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root not in forkchoice")
return ErrSidecarNotFinalizedDescendent
return blobErrBuilder(ErrSidecarNotFinalizedDescendent)
}
return nil
}
@@ -281,7 +261,7 @@ func (bv *ROBlobVerifier) SidecarInclusionProven() (err error) {
defer bv.recordResult(RequireSidecarInclusionProven, &err)
if err = blocks.VerifyKZGInclusionProof(bv.blob); err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("sidecar inclusion proof verification failed")
return ErrSidecarInclusionProofInvalid
return blobErrBuilder(ErrSidecarInclusionProofInvalid)
}
return nil
}
@@ -293,7 +273,7 @@ func (bv *ROBlobVerifier) SidecarKzgProofVerified() (err error) {
defer bv.recordResult(RequireSidecarKzgProofVerified, &err)
if err = bv.verifyBlobCommitment(bv.blob); err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("kzg commitment proof verification failed")
return ErrSidecarKzgProofInvalid
return blobErrBuilder(ErrSidecarKzgProofInvalid)
}
return nil
}
@@ -311,7 +291,7 @@ func (bv *ROBlobVerifier) SidecarProposerExpected(ctx context.Context) (err erro
}
r, err := bv.fc.TargetRootForEpoch(bv.blob.ParentRoot(), e)
if err != nil {
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
c := &forkchoicetypes.Checkpoint{Root: r, Epoch: e}
idx, cached := bv.pc.Proposer(c, bv.blob.Slot())
@@ -319,19 +299,19 @@ func (bv *ROBlobVerifier) SidecarProposerExpected(ctx context.Context) (err erro
pst, err := bv.parentState(ctx)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("state replay to parent_root failed")
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
idx, err = bv.pc.ComputeProposer(ctx, bv.blob.ParentRoot(), bv.blob.Slot(), pst)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("error computing proposer index from parent state")
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
}
if idx != bv.blob.ProposerIndex() {
log.WithError(ErrSidecarUnexpectedProposer).
log.WithError(blobErrBuilder(ErrSidecarUnexpectedProposer)).
WithFields(logging.BlobFields(bv.blob)).WithField("expectedProposer", idx).
Debug("unexpected blob proposer")
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
return nil
}
@@ -357,3 +337,7 @@ func blobToSignatureData(b blocks.ROBlob) SignatureData {
Slot: b.Slot(),
}
}
func blobErrBuilder(baseErr error) error {
return goError.Join(ErrBlobInvalid, baseErr)
}
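A hedged illustration of what the errors.Join-based blobErrBuilder buys over the removed pre-wrapped vars: callers can match either the broad sentinel or the specific failure with errors.Is. Standalone sketch, Go 1.20+:

package main

import (
	"errors"
	"fmt"
)

var (
	ErrBlobInvalid      = errors.New("blob failed verification")
	ErrBlobIndexInvalid = errors.New("incorrect blob sidecar index")
)

// blobErrBuilder joins the broad sentinel with the specific failure,
// so errors.Is matches both.
func blobErrBuilder(baseErr error) error {
	return errors.Join(ErrBlobInvalid, baseErr)
}

func main() {
	err := blobErrBuilder(ErrBlobIndexInvalid)
	fmt.Println(errors.Is(err, ErrBlobInvalid))      // true
	fmt.Println(errors.Is(err, ErrBlobIndexInvalid)) // true
}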


@@ -27,13 +27,13 @@ func TestBlobIndexInBounds(t *testing.T) {
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, 1)
b := blobs[0]
// set Index to a value that is out of bounds
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.BlobIndexInBounds())
require.Equal(t, true, v.results.executed(RequireBlobIndexInBounds))
require.NoError(t, v.results.result(RequireBlobIndexInBounds))
b.Index = fieldparams.MaxBlobsPerBlock
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.BlobIndexInBounds(), ErrBlobIndexInvalid)
require.Equal(t, true, v.results.executed(RequireBlobIndexInBounds))
require.NotNil(t, v.results.result(RequireBlobIndexInBounds))
@@ -52,7 +52,7 @@ func TestSlotNotTooEarly(t *testing.T) {
// This clock will give a current slot of 1 on the nose
happyClock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return now }))
ini := Initializer{shared: &sharedResources{clock: happyClock}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.NotFromFutureSlot())
require.Equal(t, true, v.results.executed(RequireNotFromFutureSlot))
require.NoError(t, v.results.result(RequireNotFromFutureSlot))
@@ -61,7 +61,7 @@ func TestSlotNotTooEarly(t *testing.T) {
// but still in the previous slot.
closeClock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return now.Add(-1 * params.BeaconConfig().MaximumGossipClockDisparityDuration() / 2) }))
ini = Initializer{shared: &sharedResources{clock: closeClock}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.NotFromFutureSlot())
// This clock will give a current slot of 0, with now coming more than max clock disparity before slot 1
@@ -69,7 +69,7 @@ func TestSlotNotTooEarly(t *testing.T) {
dispClock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return disparate }))
// Set up initializer to use the clock that will set now to a little to far before slot 1
ini = Initializer{shared: &sharedResources{clock: dispClock}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.NotFromFutureSlot(), ErrFromFutureSlot)
require.Equal(t, true, v.results.executed(RequireNotFromFutureSlot))
require.NotNil(t, v.results.result(RequireNotFromFutureSlot))
@@ -114,7 +114,7 @@ func TestSlotAboveFinalized(t *testing.T) {
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, 1)
b := blobs[0]
b.SignedBlockHeader.Header.Slot = c.slot
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
err := v.SlotAboveFinalized()
require.Equal(t, true, v.results.executed(RequireSlotAboveFinalized))
if c.err == nil {
@@ -146,7 +146,7 @@ func TestValidProposerSignature_Cached(t *testing.T) {
},
}
ini := Initializer{shared: &sharedResources{sc: sc, sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.ValidProposerSignature(ctx))
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NoError(t, v.results.result(RequireValidProposerSignature))
@@ -159,7 +159,7 @@ func TestValidProposerSignature_Cached(t *testing.T) {
return true, errors.New("derp")
}
ini = Initializer{shared: &sharedResources{sc: sc, sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.ValidProposerSignature(ctx), ErrInvalidProposerSignature)
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NotNil(t, v.results.result(RequireValidProposerSignature))
@@ -182,14 +182,14 @@ func TestValidProposerSignature_CacheMiss(t *testing.T) {
},
}
ini := Initializer{shared: &sharedResources{sc: sc, sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{})}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.ValidProposerSignature(ctx))
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NoError(t, v.results.result(RequireValidProposerSignature))
// simulate state not found
ini = Initializer{shared: &sharedResources{sc: sc, sr: sbrNotFound(t, expectedSd.Parent)}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.ValidProposerSignature(ctx), ErrInvalidProposerSignature)
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NotNil(t, v.results.result(RequireValidProposerSignature))
@@ -206,7 +206,7 @@ func TestValidProposerSignature_CacheMiss(t *testing.T) {
},
}
ini = Initializer{shared: &sharedResources{sc: sc, sr: sbr}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
// make sure all the histories are clean before calling the method
// so we don't get polluted by previous usages
@@ -255,14 +255,14 @@ func TestSidecarParentSeen(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcHas}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarParentSeen(nil))
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NoError(t, v.results.result(RequireSidecarParentSeen))
})
t.Run("HasNode false, no badParent cb, expected error", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcLacks}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarParentSeen(nil), ErrSidecarParentNotSeen)
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NotNil(t, v.results.result(RequireSidecarParentSeen))
@@ -270,14 +270,14 @@ func TestSidecarParentSeen(t *testing.T) {
t.Run("HasNode false, badParent true", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcLacks}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarParentSeen(badParentCb(t, b.ParentRoot(), true)))
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NoError(t, v.results.result(RequireSidecarParentSeen))
})
t.Run("HasNode false, badParent false", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcLacks}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarParentSeen(badParentCb(t, b.ParentRoot(), false)), ErrSidecarParentNotSeen)
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NotNil(t, v.results.result(RequireSidecarParentSeen))
@@ -289,14 +289,14 @@ func TestSidecarParentValid(t *testing.T) {
b := blobs[0]
t.Run("parent valid", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarParentValid(badParentCb(t, b.ParentRoot(), false)))
require.Equal(t, true, v.results.executed(RequireSidecarParentValid))
require.NoError(t, v.results.result(RequireSidecarParentValid))
})
t.Run("parent not valid", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarParentValid(badParentCb(t, b.ParentRoot(), true)), ErrSidecarParentInvalid)
require.Equal(t, true, v.results.executed(RequireSidecarParentValid))
require.NotNil(t, v.results.result(RequireSidecarParentValid))
@@ -340,7 +340,7 @@ func TestSidecarParentSlotLower(t *testing.T) {
}
return c.fcSlot, c.fcErr
}}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
err := v.SidecarParentSlotLower()
require.Equal(t, true, v.results.executed(RequireSidecarParentSlotLower))
if c.err == nil {
@@ -364,7 +364,7 @@ func TestSidecarDescendsFromFinalized(t *testing.T) {
}
return false
}}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarDescendsFromFinalized(), ErrSidecarNotFinalizedDescendent)
require.Equal(t, true, v.results.executed(RequireSidecarDescendsFromFinalized))
require.NotNil(t, v.results.result(RequireSidecarDescendsFromFinalized))
@@ -376,7 +376,7 @@ func TestSidecarDescendsFromFinalized(t *testing.T) {
}
return true
}}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarDescendsFromFinalized())
require.Equal(t, true, v.results.executed(RequireSidecarDescendsFromFinalized))
require.NoError(t, v.results.result(RequireSidecarDescendsFromFinalized))
@@ -389,7 +389,7 @@ func TestSidecarInclusionProven(t *testing.T) {
b := blobs[0]
ini := Initializer{}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarInclusionProven())
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NoError(t, v.results.result(RequireSidecarInclusionProven))
@@ -397,7 +397,7 @@ func TestSidecarInclusionProven(t *testing.T) {
// Invert bits of the first byte of the body root to mess up the proof
byte0 := b.SignedBlockHeader.Header.BodyRoot[0]
b.SignedBlockHeader.Header.BodyRoot[0] = byte0 ^ 255
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarInclusionProven(), ErrSidecarInclusionProofInvalid)
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NotNil(t, v.results.result(RequireSidecarInclusionProven))
@@ -409,7 +409,7 @@ func TestSidecarInclusionProvenElectra(t *testing.T) {
b := blobs[0]
ini := Initializer{}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarInclusionProven())
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NoError(t, v.results.result(RequireSidecarInclusionProven))
@@ -417,7 +417,7 @@ func TestSidecarInclusionProvenElectra(t *testing.T) {
// Invert bits of the first byte of the body root to mess up the proof
byte0 := b.SignedBlockHeader.Header.BodyRoot[0]
b.SignedBlockHeader.Header.BodyRoot[0] = byte0 ^ 255
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarInclusionProven(), ErrSidecarInclusionProofInvalid)
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NotNil(t, v.results.result(RequireSidecarInclusionProven))
@@ -452,21 +452,21 @@ func TestSidecarProposerExpected(t *testing.T) {
b := blobs[0]
t.Run("cached, matches", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex())}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarProposerExpected(ctx))
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NoError(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("cached, does not match", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex() + 1)}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("not cached, state lookup failure", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{sr: sbrNotFound(t, b.ParentRoot()), pc: &mockProposerCache{ProposerCB: pcReturnsNotFound()}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
@@ -475,14 +475,14 @@ func TestSidecarProposerExpected(t *testing.T) {
t.Run("not cached, proposer matches", func(t *testing.T) {
pc := &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return b.ProposerIndex(), nil
},
}
ini := Initializer{shared: &sharedResources{sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{}), pc: pc, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarProposerExpected(ctx))
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NoError(t, v.results.result(RequireSidecarProposerExpected))
@@ -490,14 +490,14 @@ func TestSidecarProposerExpected(t *testing.T) {
t.Run("not cached, proposer does not match", func(t *testing.T) {
pc := &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return b.ProposerIndex() + 1, nil
},
}
ini := Initializer{shared: &sharedResources{sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{}), pc: pc, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
@@ -505,14 +505,14 @@ func TestSidecarProposerExpected(t *testing.T) {
t.Run("not cached, ComputeProposer fails", func(t *testing.T) {
pc := &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return 0, errors.New("ComputeProposer failed")
},
}
ini := Initializer{shared: &sharedResources{sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{}), pc: pc, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
@@ -523,7 +523,7 @@ func TestRequirementSatisfaction(t *testing.T) {
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 1)
b := blobs[0]
ini := Initializer{}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
_, err := v.VerifiedROBlob()
require.ErrorIs(t, err, ErrBlobInvalid)
@@ -537,7 +537,7 @@ func TestRequirementSatisfaction(t *testing.T) {
}
// satisfy everything through the backdoor and ensure we get the verified ro blob at the end
for _, r := range GossipSidecarRequirements {
for _, r := range GossipBlobSidecarRequirements {
v.results.record(r, nil)
}
require.Equal(t, true, v.results.allSatisfied())

View File

@@ -17,7 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"github.com/sirupsen/logrus"
)
const (
@@ -50,8 +50,8 @@ type SignatureData struct {
Slot primitives.Slot
}
func (d SignatureData) logFields() log.Fields {
return log.Fields{
func (d SignatureData) logFields() logrus.Fields {
return logrus.Fields{
"root": fmt.Sprintf("%#x", d.Root),
"parentRoot": fmt.Sprintf("%#x", d.Parent),
"signature": fmt.Sprintf("%#x", d.Signature),

View File

@@ -2,8 +2,40 @@ package verification
import "github.com/pkg/errors"
// ErrMissingVerification indicates that the given verification function was never performed on the value.
var ErrMissingVerification = errors.New("verification was not performed for requirement")
var (
// ErrFromFutureSlot means RequireSlotNotTooEarly failed.
ErrFromFutureSlot = errors.New("slot is too far in the future")
// ErrSlotNotAfterFinalized means RequireSlotAboveFinalized failed.
ErrSlotNotAfterFinalized = errors.New("slot <= finalized checkpoint")
// ErrInvalidProposerSignature means RequireValidProposerSignature failed.
ErrInvalidProposerSignature = errors.New("proposer signature could not be verified")
// ErrSidecarParentNotSeen means RequireSidecarParentSeen failed.
ErrSidecarParentNotSeen = errors.New("parent root has not been seen")
// ErrSidecarParentInvalid means RequireSidecarParentValid failed.
ErrSidecarParentInvalid = errors.New("parent block is not valid")
// ErrSlotNotAfterParent means RequireSidecarParentSlotLower failed.
ErrSlotNotAfterParent = errors.New("slot <= slot")
// ErrSidecarNotFinalizedDescendent means RequireSidecarDescendsFromFinalized failed.
ErrSidecarNotFinalizedDescendent = errors.New("parent is not descended from the finalized block")
// ErrSidecarInclusionProofInvalid means RequireSidecarInclusionProven failed.
ErrSidecarInclusionProofInvalid = errors.New("sidecar inclusion proof verification failed")
// ErrSidecarKzgProofInvalid means RequireSidecarKzgProofVerified failed.
ErrSidecarKzgProofInvalid = errors.New("sidecar kzg commitment proof verification failed")
// ErrSidecarUnexpectedProposer means RequireSidecarProposerExpected failed.
ErrSidecarUnexpectedProposer = errors.New("sidecar was not proposed by the expected proposer_index")
// ErrMissingVerification indicates that the given verification function was never performed on the value.
ErrMissingVerification = errors.New("verification was not performed for requirement")
)
// VerificationMultiError is a custom error that can be used to access individual verification failures.
type VerificationMultiError struct {

View File

@@ -0,0 +1,5 @@
package verification
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "verification")

View File

@@ -39,7 +39,7 @@ func TestResultList(t *testing.T) {
func TestExportedBlobSanityCheck(t *testing.T) {
// make sure all requirement lists contain the bare minimum checks
sanity := []Requirement{RequireValidProposerSignature, RequireSidecarKzgProofVerified, RequireBlobIndexInBounds, RequireSidecarInclusionProven}
reqs := [][]Requirement{GossipSidecarRequirements, SpectestSidecarRequirements, InitsyncSidecarRequirements, BackfillSidecarRequirements, PendingQueueSidecarRequirements}
reqs := [][]Requirement{GossipBlobSidecarRequirements, SpectestBlobSidecarRequirements, InitsyncBlobSidecarRequirements, BackfillBlobSidecarRequirements, PendingQueueBlobSidecarRequirements}
for i := range reqs {
r := reqs[i]
reqMap := make(map[Requirement]struct{})
@@ -51,13 +51,13 @@ func TestExportedBlobSanityCheck(t *testing.T) {
require.Equal(t, true, ok)
}
}
require.DeepEqual(t, allSidecarRequirements, GossipSidecarRequirements)
require.DeepEqual(t, allBlobSidecarRequirements, GossipBlobSidecarRequirements)
}
func TestAllBlobRequirementsHaveStrings(t *testing.T) {
var derp Requirement = math.MaxInt
require.Equal(t, unknownRequirementName, derp.String())
for i := range allSidecarRequirements {
require.NotEqual(t, unknownRequirementName, allSidecarRequirements[i].String())
for i := range allBlobSidecarRequirements {
require.NotEqual(t, unknownRequirementName, allBlobSidecarRequirements[i].String())
}
}

View File

@@ -141,12 +141,6 @@ func configureTestnet(ctx *cli.Context) error {
}
applyHoleskyFeatureFlags(ctx)
params.UseHoleskyNetworkConfig()
} else if ctx.Bool(MekongTestnet.Name) {
log.Info("Running on the Mekong Beacon Chain Testnet")
if err := params.SetActive(params.MekongConfig().Copy()); err != nil {
return err
}
params.UseMekongNetworkConfig()
} else {
if ctx.IsSet(cmd.ChainConfigFileFlag.Name) {
log.Warn("Running on custom Ethereum network specified in a chain configuration yaml file")

View File

@@ -18,11 +18,6 @@ var (
Name: "holesky",
Usage: "Runs Prysm configured for the Holesky test network.",
}
// MekongTestnet flag for the multiclient Ethereum consensus testnet.
MekongTestnet = &cli.BoolFlag{
Name: "mekong",
Usage: "Runs Prysm configured for the Mekong test network.",
}
// Mainnet flag for easier tooling, no-op
Mainnet = &cli.BoolFlag{
Value: true,
@@ -192,7 +187,6 @@ var ValidatorFlags = append(deprecatedFlags, []cli.Flag{
writeWalletPasswordOnWebOnboarding,
HoleskyTestnet,
SepoliaTestnet,
MekongTestnet,
Mainnet,
dynamicKeyReloadDebounceInterval,
attestTimely,
@@ -217,7 +211,6 @@ var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []c
disableGRPCConnectionLogging,
HoleskyTestnet,
SepoliaTestnet,
MekongTestnet,
Mainnet,
disablePeerScorer,
disableBroadcastSlashingFlag,
@@ -251,5 +244,4 @@ var NetworkFlags = []cli.Flag{
Mainnet,
SepoliaTestnet,
HoleskyTestnet,
MekongTestnet,
}

View File

@@ -37,6 +37,7 @@ const (
SyncCommitteeBranchDepth = 5 // SyncCommitteeBranchDepth defines the number of leaves in a merkle proof of a sync committee.
SyncCommitteeBranchDepthElectra = 6 // SyncCommitteeBranchDepthElectra defines the number of leaves in a merkle proof of a sync committee.
FinalityBranchDepth = 6 // FinalityBranchDepth defines the number of leaves in a merkle proof of the finalized checkpoint root.
FinalityBranchDepthElectra = 7 // FinalityBranchDepthElectra defines the number of leaves in a merkle proof of the finalized checkpoint root.
PendingDepositsLimit = 134217728 // Maximum number of pending balance deposits in the beacon state.
PendingPartialWithdrawalsLimit = 134217728 // Maximum number of pending partial withdrawals in the beacon state.
PendingConsolidationsLimit = 262144 // Maximum number of pending consolidations in the beacon state.

View File

@@ -37,6 +37,7 @@ const (
SyncCommitteeBranchDepth = 5 // SyncCommitteeBranchDepth defines the number of leaves in a merkle proof of a sync committee.
SyncCommitteeBranchDepthElectra = 6 // SyncCommitteeBranchDepthElectra defines the number of leaves in a merkle proof of a sync committee.
FinalityBranchDepth = 6 // FinalityBranchDepth defines the number of leaves in a merkle proof of the finalized checkpoint root.
FinalityBranchDepthElectra = 7 // FinalityBranchDepthElectra defines the number of leaves in a merkle proof of the finalized checkpoint root.
PendingDepositsLimit = 134217728 // Maximum number of pending balance deposits in the beacon state.
PendingPartialWithdrawalsLimit = 64 // Maximum number of pending partial withdrawals in the beacon state.
PendingConsolidationsLimit = 64 // Maximum number of pending consolidations in the beacon state.

View File

@@ -16,7 +16,6 @@ go_library(
"network_config.go",
"testnet_e2e_config.go",
"testnet_holesky_config.go",
"testnet_mekong_config.go",
"testnet_sepolia_config.go",
"testutils.go",
"testutils_develop.go", # keep
@@ -31,7 +30,6 @@ go_library(
"//math:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//params:go_default_library",
"@com_github_mohae_deepcopy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -51,7 +49,6 @@ go_test(
"mainnet_config_test.go",
"testnet_config_test.go",
"testnet_holesky_config_test.go",
"testnet_mekong_config_test.go",
"testnet_sepolia_config_test.go",
],
data = glob(["*.yaml"]) + [
@@ -61,7 +58,6 @@ go_test(
"@consensus_spec_tests_minimal//:test_data",
"@eth2_networks//:configs",
"@holesky_testnet//:configs",
"@mekong_testnet//:configs",
"@sepolia_testnet//:configs",
],
embed = [":go_default_library"],

View File

@@ -166,6 +166,7 @@ type BeaconChainConfig struct {
DenebForkEpoch primitives.Epoch `yaml:"DENEB_FORK_EPOCH" spec:"true"` // DenebForkEpoch is used to represent the assigned fork epoch for deneb.
ElectraForkVersion []byte `yaml:"ELECTRA_FORK_VERSION" spec:"true"` // ElectraForkVersion is used to represent the fork version for electra.
ElectraForkEpoch primitives.Epoch `yaml:"ELECTRA_FORK_EPOCH" spec:"true"` // ElectraForkEpoch is used to represent the assigned fork epoch for electra.
Eip7594ForkEpoch primitives.Epoch `yaml:"EIP7594_FORK_EPOCH" spec:"true"` // EIP7594ForkEpoch is used to represent the assigned fork epoch for peer das.
ForkVersionSchedule map[[fieldparams.VersionLength]byte]primitives.Epoch // Schedule of fork epochs by version.
ForkVersionNames map[[fieldparams.VersionLength]byte]string // Human-readable names of fork versions.
@@ -255,6 +256,13 @@ type BeaconChainConfig struct {
MaxDepositRequestsPerPayload uint64 `yaml:"MAX_DEPOSIT_REQUESTS_PER_PAYLOAD" spec:"true"` // MaxDepositRequestsPerPayload is the maximum number of execution layer deposits in each payload
UnsetDepositRequestsStartIndex uint64 `yaml:"UNSET_DEPOSIT_REQUESTS_START_INDEX" spec:"true"` // UnsetDepositRequestsStartIndex is used to check the start index for eip6110
// PeerDAS Values
SamplesPerSlot uint64 `yaml:"SAMPLES_PER_SLOT"` // SamplesPerSlot refers to the number of random samples a node queries per slot.
CustodyRequirement uint64 `yaml:"CUSTODY_REQUIREMENT"` // CustodyRequirement refers to the minimum amount of subnets a peer must custody and serve samples from.
MinEpochsForDataColumnSidecarsRequest primitives.Epoch `yaml:"MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS"` // MinEpochsForDataColumnSidecarsRequest is the minimum number of epochs the node will keep the data columns for.
MaxCellsInExtendedMatrix uint64 `yaml:"MAX_CELLS_IN_EXTENDED_MATRIX" spec:"true"` // MaxCellsInExtendedMatrix is the full data of one-dimensional erasure coding extended blobs (in row major format).
NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns in the extended data matrix.
// Networking Specific Parameters
GossipMaxSize uint64 `yaml:"GOSSIP_MAX_SIZE" spec:"true"` // GossipMaxSize is the maximum allowed size of uncompressed gossip messages.
MaxChunkSize uint64 `yaml:"MAX_CHUNK_SIZE" spec:"true"` // MaxChunkSize is the maximum allowed size of uncompressed req/resp chunked responses.
@@ -272,10 +280,6 @@ type BeaconChainConfig struct {
AttestationSubnetPrefixBits uint64 `yaml:"ATTESTATION_SUBNET_PREFIX_BITS" spec:"true"` // AttestationSubnetPrefixBits is defined as (ceillog2(ATTESTATION_SUBNET_COUNT) + ATTESTATION_SUBNET_EXTRA_BITS).
SubnetsPerNode uint64 `yaml:"SUBNETS_PER_NODE" spec:"true"` // SubnetsPerNode is the number of long-lived subnets a beacon node should be subscribed to.
NodeIdBits uint64 `yaml:"NODE_ID_BITS" spec:"true"` // NodeIdBits defines the bit length of a node id.
// PeerDAS
NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns in the extended data matrix.
MaxCellsInExtendedMatrix uint64 `yaml:"MAX_CELLS_IN_EXTENDED_MATRIX" spec:"true"` // MaxCellsInExtendedMatrix is the full data of one-dimensional erasure coding extended blobs (in row major format).
}
// InitializeForkSchedule initializes the schedules forks baked into the config.
@@ -360,6 +364,12 @@ func DenebEnabled() bool {
return BeaconConfig().DenebForkEpoch < math.MaxUint64
}
// PeerDASEnabled centralizes the check to determine if code paths
// that are specific to peerdas should be allowed to execute.
func PeerDASEnabled() bool {
return BeaconConfig().Eip7594ForkEpoch < math.MaxUint64
}
// WithinDAPeriod checks if the block epoch is within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS of the given current epoch.
func WithinDAPeriod(block, current primitives.Epoch) bool {
return block+BeaconConfig().MinEpochsForBlobsSidecarsRequest >= current

View File

@@ -25,19 +25,15 @@ import (
// IMPORTANT: Use one field per line and sort these alphabetically to reduce conflicts.
var placeholderFields = []string{
"BYTES_PER_LOGS_BLOOM", // Compile time constant on ExecutionPayload.logs_bloom.
"CUSTODY_REQUIREMENT",
"EIP6110_FORK_EPOCH",
"EIP6110_FORK_VERSION",
"EIP7002_FORK_EPOCH",
"EIP7002_FORK_VERSION",
"EIP7594_FORK_EPOCH",
"EIP7594_FORK_VERSION",
"EIP7732_FORK_EPOCH",
"EIP7732_FORK_VERSION",
"FIELD_ELEMENTS_PER_BLOB", // Compile time constant.
"KZG_COMMITMENT_INCLUSION_PROOF_DEPTH", // Compile time constant on BlobSidecar.commitment_inclusion_proof.
"FULU_FORK_EPOCH",
"FULU_FORK_VERSION",
"MAX_BLOBS_PER_BLOCK",
"MAX_BLOB_COMMITMENTS_PER_BLOCK", // Compile time constant on BeaconBlockBodyDeneb.blob_kzg_commitments.
"MAX_BYTES_PER_TRANSACTION", // Used for ssz of EL transactions. Unused in Prysm.
@@ -45,7 +41,6 @@ var placeholderFields = []string{
"MAX_REQUEST_PAYLOADS", // Compile time constant on BeaconBlockBody.ExecutionRequests
"MAX_TRANSACTIONS_PER_PAYLOAD", // Compile time constant on ExecutionPayload.transactions.
"REORG_HEAD_WEIGHT_THRESHOLD",
"SAMPLES_PER_SLOT",
"TARGET_NUMBER_OF_PEERS",
"UPDATE_TIMEOUT",
"WHISK_EPOCHS_PER_SHUFFLING_PHASE",

View File

@@ -216,6 +216,7 @@ var mainnetBeaconConfig = &BeaconChainConfig{
DenebForkEpoch: mainnetDenebForkEpoch,
ElectraForkVersion: []byte{5, 0, 0, 0},
ElectraForkEpoch: mainnetElectraForkEpoch,
Eip7594ForkEpoch: math.MaxUint64,
// New values introduced in Altair hard fork 1.
// Participation flag indices.
@@ -295,8 +296,11 @@ var mainnetBeaconConfig = &BeaconChainConfig{
UnsetDepositRequestsStartIndex: math.MaxUint64,
// PeerDAS
NumberOfColumns: 128,
MaxCellsInExtendedMatrix: 768,
NumberOfColumns: 128,
MaxCellsInExtendedMatrix: 768,
SamplesPerSlot: 8,
CustodyRequirement: 4,
MinEpochsForDataColumnSidecarsRequest: 4096,
// Values related to networking parameters.
GossipMaxSize: 10 * 1 << 20, // 10 MiB

View File

@@ -1,166 +0,0 @@
package params
import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
)
// UseMekongNetworkConfig uses the Mekong beacon chain specific network config.
func UseMekongNetworkConfig() {
cfg := BeaconNetworkConfig().Copy()
cfg.ContractDeploymentBlock = 0
cfg.BootstrapNodes = []string{
"enr:-Iq4QB2ny1q6gkBjqNRU_e-GTbpcJQcI4i3cIZDea0mnAzGgbUTKH8j81g9PRl_-m40F1V4GFBlqZElrcbGnUj9AjGeGAZL8bgmtgmlkgnY0gmlwhJ3m4Z6Jc2VjcDI1NmsxoQJJ3h8aUO3GJHv-bdvHtsQZ2OEisutelYfGjXO4lSg8BYN1ZHCCIzI",
"enr:-LK4QF2XD_Fe5H9QMVVwBoDs6P_37eURcFvNTcLzOc60p_XlDKIBleMgudA7nltZ7TyAiOuY0BSQzHsdv5iUs7sFyWQEh2F0dG5ldHOIAwAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ3m4Z6Jc2VjcDI1NmsxoQJJ7y6LF_to7NYQd3BVRW1840gm5r1Lm3lfAfC9Wqmw8YN0Y3CCIyiDdWRwgiMo",
"enr:-Mm4QMpfvuUXcMcRx0sCtnzvE8zTJEm7BAwhFtU6CvXCv1i5Wsksx0P7ocBEJLPHULf_O3w159cnbUoB-XZuyDBfME0Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEQ82iEYRxdWljgiMpiXNlY3AyNTZrMaECDdViaSTH6xjxrJT_gIaha3_CzJ64OQDQwTdbcN84BGuIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QNLiswecY50ofHVF3twuOmJCqqdzOviXI9xAmKU3SBmbdDUr_v-dpxcP_AHoYMBw62yEcpPRsKzY-yes3wMoJnUBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEp2MADIRxdWljgiMpiXNlY3AyNTZrMaEDgRLt4r0C6K63co6eoGRvi55-viwcW_ijblPrlNVulAqIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QMM5lT_k074lQ094vF8rGQXkM4sLmti9kTbwuZF-6fZ5KhTlTxEAWI8x5-EKRduIMoCaz6z0SfaYeOsfS35jr0EBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEoSM7S4RxdWljgiMpiXNlY3AyNTZrMaECW3vJ2OZ6yjQfrDB9AW_5J_YZUuQRFsp3U5z7VvCdu9GIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QPw3-nzQ6CXpRWLR_eo4KYcoshqRtaTlmqYTlwqcW5AtWMDuF6o9oVyUhYJ9BdqP7x-vi4D5C83wJ2N-sRX0D6YBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEjl3-GYRxdWljgiMpiXNlY3AyNTZrMaED9NVIDXAwA2kDI9wU2y1KV7oGALUrc6h6LQ2zqcWD7UWIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QO3f8z6h8uhY2fOr4iOmUkfGN8Q_ej-DOwcHJ6P4dCqWa20aA6Jxac5Ta6dcfnwdvSfWUz3ZFWUU3n3mmkNvjuEBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEsoBQvoRxdWljgiMpiXNlY3AyNTZrMaEDMSbjjMae81uRoMG_hZjfbXnLLuI9eEKUkKShaTBkfEaIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QFJKhdNYDyWgtK1rM_QsbocVqvkQGVrecxbcMSMzw58JN7Cw5uJoUVJJxxlY1bWzZZ6Y6fuL0MxVKVj2jQS-F-MBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEj26p9YRxdWljgiMpiXNlY3AyNTZrMaEDx6X-X_2EgQ2AUWkwmm1GFB2SYrm7XihW-IUlx5u0-FuIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QJkcqvThmP8DfQ7T-kwsIc_7J9VZovhRWNJJ_3mWksaiJPT0YYsLwIRMxhZaIoQ96umQDyXaAnYccJHXo4UWqdQBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEisVoUYRxdWljgiMpiXNlY3AyNTZrMaECcdygVaepRRqO6BpryxhbQCCZNRxJ6C4pUujAXUdM1E-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QH8zBo5Jxm8c6Tindg5gE6Ju94Zqik8Jli6t7p8gelwJX9hacP-9UteVI-PnQZNnXuTQ8MEWfkl_zyklzF4dP9IBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEp2MnPIRxdWljgiMpiXNlY3AyNTZrMaECRHo92MBzVuUg9l0XPGpT7SUm060N40IShmk39pg_TWmIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QCaIOIsJoVLQfna3OK6gMN0YBBpAhjrD6wh1zKb9SRA8GsJAQMREOg6uK137hV2IkqfieKvIEPiYUOsJ8LsaBf8Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEibgchYRxdWljgiMpiXNlY3AyNTZrMaEDbe9GtmMNbJRzijDmQfDQQN2YkGFTr7ImH9yPyveaxOqIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QDkVL9-sXu44WnfRdLee3nlGAiDFE0mqcahIKzPcQx5IMDcWsKddzjk7sIa1t_A3OrbgKZ-myxp-9Ft98lc7jYEBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEvKb94YRxdWljgiMpiXNlY3AyNTZrMaEDz6UjXFkPJYpRmVYPyJVGefekeG1Qxkg_AcDKIc6KNEaIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QPqcZ_jT7ZUOkLi-BgCExPlUytIP6jIvhV56nPV1A5pyMzKDnk4e-6a5_1seXtyQQluHkfAjZn9_H1NVXVdkA94Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEkH7L54RxdWljgiMpiXNlY3AyNTZrMaED0EyYkEKk31vfdXMLYneEyiCNimlOgdm0qcoI5YVZ7b-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QFjWfPILQklP6hndcbtDNhl3vGWQkXf7pR8Kun039DRpClIEwvxg3MW6-JUcVNjmD3MtJFsS8702ZG3fB8YHEJsBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEp2MG04RxdWljgiMpiXNlY3AyNTZrMaECYV9wARPI7Y69DK_yoe0D58LPTF3WaIHZmkTzzC-4PZyIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QANhzPcE163m_6hv99UOWiKNmD_qvUkZOhBP1jezm-wdfji1fL0L1yue7l5kb8TiSs_6dSEeaykst-bZ4OpChQoBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEGJB6D4RxdWljgiMpiXNlY3AyNTZrMaECOcOVxvzmKm5yFLyZBA50bzAWCi4wHzlYBIEujIFt46SIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QFGHCpBCazbxQbjG-QQXVNt0XKjOSpRXaseRQ7eCciBpWfT7VrRJ1btLOuNMzyKpQcRMfmywxH7LwK5awZhdpS4Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEQ82n3IRxdWljgiMpiXNlY3AyNTZrMaEC8WOCzbdKFeg3dYciZ7QKfXxNss29CqaHG5iwm4EHQcWIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QCGRQd2a8-pIgsH3PuqA5NZjcwIjVH9H7Dh6cPJJRsEzNojxFtDzY8rRebHKb1xrVAZPXneMuw5K3okA1k2KOOwBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEsoBRjIRxdWljgiMpiXNlY3AyNTZrMaEDMOw8ey6EZ_4GyR0kNDoB55it-p7PzFPpsXyc4TFMPRGIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QLaFtJfw-TGxUNAHQO2xmnvYDnmKgjfJve2MgrWG9OPaMnVDlLIAIz9v1_SdDTS1FH3vt10iU7YI5t7CIA7sKLsBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEGJBi-4RxdWljgiMpiXNlY3AyNTZrMaED4PMVrXwoLyISWzFzYU4cd76TnpxIZwIK1SwyFyN9rWSIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QPUpI9m3garHkV47o8jKj8VGTaJTOwpIWnUb4GlD5r-vK6xC06h-wPtUzQngBFk_FOUfuw1mcqAP5L0yTUozdHsBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEmCrsz4RxdWljgiMpiXNlY3AyNTZrMaECfWt_Vji8tocNvFs6orlyMLsoNkQa19BNqsbXN_tUhkyIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QHJGsttZD0_WxO_CjaP1s_gqJcb9gCYLnYI8G4qUi3t4FYGX3oJvkZD49kNfRx38a48GjpuBCxnHpd78OSxxgnUBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEpeNaNoRxdWljgiMpiXNlY3AyNTZrMaECseT38XZgU025QL32df-blQfLLYdrby3pO-d7axe0zF-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QPtT8J4rpYkixx-COebnEPreuWv9OpgOGOvM01hqZ19eeySxCxOEEVHl2r2c0BYwBuct_yZhvkLqUQatRORlIP4Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEibhIf4RxdWljgiMpiXNlY3AyNTZrMaEDjight_62uShKNt4IorH13hfqm7kZzVyFxXKI_qDlsTGIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QIW31dzBPtPxCAKvj-T29sFb9wCDM509ANmDizLq5ys1YJCcILx6PotmJxF4g829FlDiqSsREm4da7smcPcAalMBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCE0SYdtIRxdWljgiMpiXNlY3AyNTZrMaECNWg27r6a05oup0qlmM9iSuw2sj1ulbCSGGPKDYjvGDKIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QBwdkGDZH5GMc49ZkabCo4PbLDxKUxjy9AdUZSe6zGK4NCrGq27i4bkEXwg7IrZtzCJXhfgMocUz7a7uBWq_jrQBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEp2ObP4RxdWljgiMpiXNlY3AyNTZrMaEDRDkl8RqHIqP0kgd1-bhZY18QRV9nfpWPr8FEKdRRwZaIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QFCmA4GNnGBuIdevzwZ4-5GFN0cYDqGQucQ_zZyjr3m1Srqg1eN6RuRj3gKbxChAcAp6hVOX-wl2fFzqRpsZJ-UBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEneYo34RxdWljgiMpiXNlY3AyNTZrMaEDX6gmy0PDR51SEtytWsA0IxCE_LZdY_x9FiMT0WIYJV-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QBOzKDeXEfUTFsEBk3tX2-OvfpU7uPdQRl6iex1r-N-oJaB2Xw9Bfney1x4k8ZMsHlHFPBCRrcR2Lzmu2ghHqVEBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEsj4zroRxdWljgiMpiXNlY3AyNTZrMaECpp1D7pHZbF8l6vl4_DWdm4NJVDNMxGxvti-8ZqHrHCuIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QBL6auezk-Zi385j0PyjkzGwQJW7TdOFZKGZMKTGRkI4fxTSTiHLe7kTvdjhBq4kgjPXvUnFiXR6AisA8a0w2lQBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEmCr3YYRxdWljgiMpiXNlY3AyNTZrMaEDJ4xl2Our0Y7OKsSDX9f908HznXm3PKzmC9zD8OB2d0mIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QDOEqa8lWRsJXIoFqGGytT8xBJuGu_eE1p8Nob_QQI1-B4S0_6JjVoyH3pwfezShdUj9RdgdETUon3bXM06UVSUBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEikSL4oRxdWljgiMpiXNlY3AyNTZrMaEDe4qmuHzM_TfUgfRc_0nuiQRz7S2_UhphHbgC9ilzgnGIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QIPjOKp5Y9Gr2gS3OnAr3_1GxNU2yqs5Nwv-3r44yPriBJv5pIDm0vyJe7Pi9FQCBmikt00IXoO_pF0zwWHv5b0Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEhtEjL4RxdWljgiMpiXNlY3AyNTZrMaEDeCId8CCbXbvI53xmkqjFgJgsqmxbnWE76f90OiFsmYqIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QBOCEcm86USjdb2n8zRwzDfdEo9T_rKvRGLJ25y2PL9Zbc4RKEi8yOWk-vJ4QRZtgQ9PwgBUi0Zi478X_Wc6nlsBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEgMcixoRxdWljgiMpiXNlY3AyNTZrMaEDBTPknXjsIhdfxDYucKiVm6SgYTC9zohYC5s0YV_6EeyIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QEgoppH7VZWNAb398z96aK2gDlWc0x_EQlDjOKcp6DJMahC6LSf9yjXHmnHzdii_r9ztNMXw0_b9gsfmIG6so14Bh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEpeOpZYRxdWljgiMpiXNlY3AyNTZrMaECR7znvBtmDQ80qnoAeO_FFgwB5_tALE_aZiub5YgHJx-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QCnjdBwJ6xUGvP0wOAizMDUtFJ--0Zq37c6plEGsiuu3XUk_JMytx7I_LjQHsFx54GXxSP5T1x9-tJFMKq-mBbEBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEpeMtMYRxdWljgiMpiXNlY3AyNTZrMaECm_3lxmnovulk3kxkt9SRc8CJX591ufqSGLDUl2vFulWIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QJRpkRrnmIUApBcQsRl8my3qx_ThNeMuY7PZTRzgqDqpIOrISZOFpCcXRTMRo_va_AzAMNSLCt21xWsvc4iX_FUBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEhtGam4RxdWljgiMpiXNlY3AyNTZrMaECTsHuqtUTyvnbr_-bBlPQrDjJ2fK_fo6EI8cGBlkT1taIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QHF_2oiGaRM13lElGu9KVq6DcllqoHVbXKnT9BMZsQHDeQYJFrzRl2GZuzzjdzMsb_Qa0_fpwPqacrwKA1hCbaoBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEpeiA-oRxdWljgiMpiXNlY3AyNTZrMaEC5n6EKmWkFr_F_xhKlyL04Q1R9l6bPX-ew3l4_aX3tw-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QLZkxDCSlFHNHrELwY3Wbck4uEBfDe--Oe-ymSSdb54fajSDO91ILmm5UWJbiEpgaQ-tOaRBI_mzkzrBK9vOLiABh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEGJBqxIRxdWljgiMpiXNlY3AyNTZrMaED4YrrKCAARO7iCBkBCKg18IRChlSsrdgLXUFlWN7cAVSIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QM__NZQmR1Ujykum0flKS6es2lXTCsKMnGC4s4J3g_m-ZEk8TADO5s8BtdJMnBSJIXsqQsPvGZRCOrSye7UrcWMBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEnfXIRYRxdWljgiMpiXNlY3AyNTZrMaED-132tNKgfT5Kvdtyoy08SWPtnmqbKZYCTcw8F4HL6iWIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QIDMKaGp33io88FS5RJXLftryhiGmne8QSjP2ZlCqB2mAQu3uOEOoeeNQYrd6ZJaA16xDvcO1PnqLCEwzuOuoiEBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEizulmoRxdWljgiMpiXNlY3AyNTZrMaEC9Oi3dcFrr0j1P39XIVsXdoNc9AeRZG-UVkEG1C6j8aOIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QD3COs2fIKQrvFG4cphKA0U4Bg7IltWfnnAzxrONjqkhWjbrXP14d3EGUIEuOd7FzfZaIL6bK7CwBSQRH_dGm6ABh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEn0Hy14RxdWljgiMpiXNlY3AyNTZrMaEC02svkGF3_0dOYuBvvHKP5WqW8Vp8hfh6FOTPgREZWZeIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QPyFixsMFYX5FEqsq6Su1kZF0qUlR9WSdSlXGki2a1NSNdKPYx7teRVfnny22rCgjWQzEJrh9tqI_WcFNg_9wWgBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEaPjC2YRxdWljgiMpiXNlY3AyNTZrMaEDg8vFkCPwfrszMGSNp9O1d24cq_A-4XIxFlMAaBNP10WIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QLTvtxuCz_GEJclTPTbY2PFtq4sD3LD4HWglNJIO_tUcQ6QrtgHThuyKz_sIqE1_aW24aedXXpC2p_zPWO_WV9MBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEQOJWSIRxdWljgiMpiXNlY3AyNTZrMaEDZpd0NBAzCF2-LHUVu9_g0wJKoQvnEH3PfX-rz-WoC-yIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QDZSQ1JfQl9jqm4kIEzOTlvO9KMjG5J8xDB6cKRBGWtuaYItPaRDzXJybMr0yV-XmE3Vp24s5jks22dCE-nlRyQBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEpRbtA4RxdWljgiMpiXNlY3AyNTZrMaEC2YFYSOh3-zoIqHNAN3dzKXxZcfzaTcIO2eubX3BiEQ-Ic3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QNti3RuTSPH3aneI9m5R-4c5SUZqaOH_njvk2ezJUj2UQDFkgm2deWLOGkn0jMUjRyMOxXAluMvIS_-fDcySS8EBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEnfVSpIRxdWljgiMpiXNlY3AyNTZrMaECAZcUD06u8vG4odGwNWwnqd5YvLQzWpZWbcNJrUXl5UqIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QFtC2SkZ1Fmyfjg2r-lew5YX6AJU43EmOhvGTKLjDclPEqx22L809fuZibYzdUC4QziO4lw1NfvW7q3fGbmobCgBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEkr7eV4RxdWljgiMpiXNlY3AyNTZrMaECWM-19HsETwrty153VQK2Qf0lggex_NP-RUhkOgepnBWIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-Mm4QLaSJFtlYxdSqJvJFn7K1CSD5ctr8EC113_aQYB4V4aCEn1h1rxw4z1WFUcHhsLVvtXiRQT8E7rEuNA-zJYDe1sBh2F0dG5ldHOIAAAAAAAAAACDY3NjBIRldGgykNjdQwZgY3YkAAEAAAAAAACCaWSCdjSCaXCEp0fSfYRxdWljgiMpiXNlY3AyNTZrMaEDJzRMI6aBykFmWsQZ2TFFjQmZ2rbHqTZ-LYzbvOobl8eIc3luY25ldHMAg3RjcIIjKIN1ZHCCIyg",
"enr:-MS4QL7la6R4Sp8mD6nbUto7mXQpCPIucMXYonbxaihETbddQGirx9-Qqvtx4Ngw1g4_mleYM_I0H8i0-KPQQkbQjjMEh2F0dG5ldHOIABgAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI_Gf0iJc2VjcDI1NmsxoQI4KtS1lao9CxXhT-dthQGovzUEnODDPFYl7SBfEA08R4hzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QFSt__5LhjnRp9F-Q0v93uvlKpYyeb42V5qvRNSFkHxDPTHJ84j31_HZVJtCfeIyfnKpScbEklmAC3O8D75hdE8Eh2F0dG5ldHOIAIABAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJO2s4qJc2VjcDI1NmsxoQPDA64YfJmZDlj1LFjnJczhWpTEl32kc4RXy0cMiA6hcYhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QBqSz2fdFrHnpZnrjpOXXX7DRhO8IZiDM_Du2M_I6uYwEEcwfBPsa9v2D9Pxz2eTXJPKQav_hHMzeO3-59Nz-VIEh2F0dG5ldHOIAAAAAAAADACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhES3e-OJc2VjcDI1NmsxoQJ6gbvJoChFGM3fziLl6TfKD4Ddt_DKmIpA3F-nijLMHIhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QMdKhukqEi_AnWG-_qI_t3CrFQ8p-uGOEYBbXuJnbDxBSqdPnbpx1K7b42H8TQpxr3bJeOZuZ4vz1ret7VkJPocEh2F0dG5ldHOIAABgAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ31WCWJc2VjcDI1NmsxoQIEdDMa4eWOEzJPjVoElGm5CKpvUdD2VF-Q-o25AyV43YhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QLB-DtV5j-GdScoGse3R0ZsEXQYkh3pZzgLElGH__jN7cIaFuFfyw5vrTbxWMKJbHIYaaHfVcPrz2YFGSIJ_hXQEh2F0dG5ldHOIAAAAwAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhM69VfuJc2VjcDI1NmsxoQL2OAp25wYmzzzJKubNowvK5I3nHC4I5eEVS6R43Z4C-ohzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QE3-f8qUWYbx1zNdoh0WMjYaHuRvdo33xP5dNJQ56pBaMI8GpGwvLeaWun2UbF7OexlrL8evl39yhuUOPrepD0oEh2F0dG5ldHOIAAAAAAAAAAaEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhLKAJmyJc2VjcDI1NmsxoQNE22g9OW0JX2MsomMWm64rCbsiSrZ4rB3cWB2-kT1oWYhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QHYLVxO7IRuJpcYeQJK81tow8_P58XnIfYkVoQwOBDnLU2QiLTNtzDVHBLTLwpMrth5XwOJhjlC4z8L6Y3ffUBgEh2F0dG5ldHOIAAAwAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIrFV6KJc2VjcDI1NmsxoQKA5AkBxLZePDZUsYHLgsf10Ib7x_PdYZ_ORAvLNHVOoohzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QNJOA_EYx3fSHmfWFz5ffCCfRZW5n3C9gVAn64-1Vq4vZzYsmtF2fm6l4GQd53bIbDJO5XXwr-Ltaow2PbbNxq8Eh2F0dG5ldHOIAAAAAwAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIbRzgOJc2VjcDI1NmsxoQLownRcAKPtJvJ24jFhNL-XVkf0CUKRpaQCMaOX1DdbMYhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QKjT4PY-Bf8hyxURb8ONCVH4CPTc8DSp1mTlEDyPcr6rTR1G3AbDuFEDcL5z97K2OfeKwMwp9V82HmwRwC6UWMoEh2F0dG5ldHOIAAAAAAYAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhES3H5uJc2VjcDI1NmsxoQJjQFB0Q6Z8mh1dbe8DsbvVvfgX7na14JR8OnzOVongfohzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QAhMlMk0Sr4K29zo0qVw1ef4ufyealQRdIAXqbgtZVh6ab3o7WKx5Hvr3MK3sc60nBQB3hNaMpt--isKv09IM2EEh2F0dG5ldHOIAAAAAAAwAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ31NHGJc2VjcDI1NmsxoQP0uJtV8gdLm995mg98Gfp9L9TP4AA7wCZKSZcl0aYeLohzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QOu9DJTE50sYY-z_lU6xozdB5LJwhfCIPCUNozdhjeihLAA9CHyL1GBeJqJjvoIm5RIO6iiNZFp3ZB4uWB4hzuwEh2F0dG5ldHOIAAAAAAwAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIs7uVSJc2VjcDI1NmsxoQNWqxvo4uv00UaK_ETex9kqaB3xX17slXTxdvMqeHWUIIhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QGFtyo4ejnaQuOf6w62R7pWAkVQs02zZqiKB06ARhNsRJGL4TIewlqV0-cPSdSguuZYPSRJddrhHMRSnIa6kcvUEh2F0dG5ldHOIAAAAAAAAADCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhEDjHhOJc2VjcDI1NmsxoQJiLwrZNj46tPKMIaXyZp2D_dExmxjHMH_W0bDlSw4xEIhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-MS4QDa4tn2FRJLfuAPGrIIEYcPMaTlJlUqoHnZZBN7VX95Rbwn9cFN93mdYq5klQRfL3pSNo-iYCmN1SqIkm3B_fGkEh2F0dG5ldHOIAAYAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ_fqkuJc2VjcDI1NmsxoQJX1m_Dyt2QJekkL2O3f44GeJvqGXLPLLevk5oM5rNjpYhzeW5jbmV0c4gAAAAAAAAAAIN0Y3CCIyiDdWRwgiMo",
"enr:-Ly4QOxOkGvFqhKllD4vfg4xfwf1OH555pTPxd9n2IKddj9cC3v6XpALXuQOoFe3MeXuKcRSWx61LcTn-8kzkmFEIEADh2F0dG5ldHOIAAAAAAAAAAyEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhES3cBaJc2VjcDI1NmsxoQLi9EnRObjSmOysxaGITFdArRXVmpJCMKRkMX8KSc4rS4hzeW5jbmV0cwaDdGNwgiMog3VkcIIjKA",
"enr:-Ly4QH89XtIf-D3uX6m0YjkoTH7eGRhBGSZiu39ib2jXX6V5fa0QxUYabZdPl-5Fdew-rUPJ8sYuDQXYdEVeDjKwmKQDh2F0dG5ldHOIGAAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI_GpWiJc2VjcDI1NmsxoQM2u9Syt4BkQn-WSycLUXp7ajuESEUtVdtsjNMJ7n7Me4hzeW5jbmV0cwmDdGNwgiMog3VkcIIjKA",
"enr:-LK4QAH0JWA-ME1EiLIcJmQ7h9mStB0pkcV4R8d4BkG8t1hOe47k6vBNtEkMV9rBlRXlFIFfKwMcqH6h8sw7rgK0ORMCh2F0dG5ldHOIAAwAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhEPNoYaJc2VjcDI1NmsxoQK-pYgRudAjqNz7OVA5a2qmtxtsx_ZmDKwZm_Vub7lWPIN0Y3CCIyiDdWRwgiMo",
"enr:-Ly4QDVEJYjwK3H8Tj857ZmNZRo_20B7fYPR53keYS8pPIrHeiPFCwOBok6fvWr4457kOwXoszf41IpJEmFNXingyN8Dh2F0dG5ldHOIAAAAAAYAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKEjPAeJc2VjcDI1NmsxoQP6kCCPnIwk94L0eTPSyEnqtvM1yQZzg1WkDkZKjLdaTIhzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-Ly4QETJyo6kF40C8v3SYt_49v2tFHxuJiGNGFtr2GUHtn_pdHIEXYEFCAkooMLstcQY20qlLk_EYzs4nPx66swzZnQDh2F0dG5ldHOIADAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKUW-6KJc2VjcDI1NmsxoQKIjpnE7-XBEpSfEM3OTxwMirrmlWmuWxYTtuwQDaJ2XYhzeW5jbmV0cwKDdGNwgiMog3VkcIIjKA",
"enr:-Ly4QPJX60M6LiSApuKzABdZGcRZdXeNs9YK_RXkOi7mu-XBeT4p6ws6TV6UOWaQNghtOQBGt2dwDhA75oK_ZoktMCMDh2F0dG5ldHOIAACAAQAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhC5lURmJc2VjcDI1NmsxoQPyysL6sOJtdbbxCZ5R1aJjUfQmJhmE-I_xYqO9-i90johzeW5jbmV0cwqDdGNwgiMog3VkcIIjKA",
"enr:-Ly4QAw204L45p1Mp3ryCSdVvsHlBHMVJrwScuqXELpp7k6WQ8Vt7BypqMOSOVd3uL4ROT4R_paqzfrMIR-fFKItNMsDh2F0dG5ldHOIAAAYAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhGj4ByyJc2VjcDI1NmsxoQMJdFDFq8q3DhC943QMFSxkkluGHNsNvbrTTWKD-LYr94hzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-Ly4QKJJaj_g8L8Okg4S3MJmQ9ZcNLytmtrPx_FY60TD_Pn2cVdm-PaIE9p5csITOvIpu6TYY4zBwc1uIE5FC4Vmh8YDh2F0dG5ldHOIAAAAAAAAwACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhLymYWaJc2VjcDI1NmsxoQLB7079lklX_9vsCAj2LZx71-4Dzj-MlLn2qmj6UiDjd4hzeW5jbmV0cw2DdGNwgiMog3VkcIIjKA",
"enr:-Ly4QMZw5ndFHQP8xRF46D_k1eyibF7FAypRq_oPkQioIJPBB2VdI43v2UV8Y_A0Ll0Rr1IoVkxGUW6JSP_naF3oStYDh2F0dG5ldHOIAAAAAIABAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhEPPX46Jc2VjcDI1NmsxoQI2VB9Qwv9Azwm5oM2MXDd-wILue3jE2DxVsVv3HKoBNohzeW5jbmV0cw6DdGNwgiMog3VkcIIjKA",
"enr:-Ly4QLpIqVziOnAZ-U2-uUgNq9EiEukP-Hsmr1rX5q9RN6bbOaMEJfgiQftGVje33Xz1z0YOqRR136VEn8r0jS5IFd8Dh2F0dG5ldHOIAAAAAAAAABiEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJK-Uy2Jc2VjcDI1NmsxoQLH5B5DL6IO9nhqNoj4sVyyYteRN1j012_LZ5HDGFaLvIhzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-Ly4QKYuxsuhgcWTcwC-LfLKwBQLVTDmMg8iGNwa6Fu462j2IMCNKx1I-Vb77JxpDQs6JIq23O0sgvAv2sGkxnDAf9wDh2F0dG5ldHOIAAAAwAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIbRuLmJc2VjcDI1NmsxoQJoV-IcjTx1TkZqUDsKULphwU9maN4y7uLmAzQFycAAtohzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-LK4QDJp2U08hakweMKge9h1AHrsr28cIAMeY18b8-7PAathFBuUuO6sXyJABhloJ94uWqKS26YC2UgA6H3Moju7Bb4Ch2F0dG5ldHOIDAAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKesmW2Jc2VjcDI1NmsxoQI_jFdc5GSJ0TnwA5DlKK-doN89Tr6oXDmB4KxWwzYoWIN0Y3CCIyiDdWRwgiMo",
"enr:-Ly4QH42VUJjJxw24sBTppaF190Pcn0laRqa3UUxcR5EoA-kLDPl0EAnH7hqnD4JBJ4BjkJ4IQGgDjquHUoWJpneAWcDh2F0dG5ldHOIAAAYAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJO21QqJc2VjcDI1NmsxoQKU8u3T4sCtMUkucKyRFXMGebCffrMnss3P_52W55RJIohzeW5jbmV0cwSDdGNwgiMog3VkcIIjKA",
"enr:-MK4QHMmDt1EuRPEgjoUJ5-leNqpUqIZBwWmzBrR-a_2AO5eFYukPyiE_2VHH8rVTu-CXpPwX2KBkSJBeacgF5VYCwiGAZL8lfxmh2F0dG5ldHOIAIABAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXjybOJc2VjcDI1NmsxoQJS3R2VkqpdkwyZkZQMDMU_IF04dLxQt6NAf70kWcXYvohzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-MK4QItySKXmpWDl3tLnVqAP00A5rFrlzBqfYbEXriD1gnIpWpiQIQ5fPLyUQq3leYmC-GpbtV6VQ2Iz9E_0hfKWaQKGAZL8lgN5h2F0dG5ldHOIAAAAAAAAAGCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ_fSeGJc2VjcDI1NmsxoQPMIZKzBRt9YGP7hlJHA1LzFAm8URxH98D4RDmQhKc-yIhzeW5jbmV0cwmDdGNwgiMog3VkcIIjKA",
"enr:-MK4QBlCr5bxnEMn3xk0GsS48gxy66ZVUBu6hJKvCJGTy6NhM2d8M4l08S-bqkN43z6swzN57bJ86Lcq9LXkNd-xLC-GAZL8lgKnh2F0dG5ldHOIAAAMAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI5dy46Jc2VjcDI1NmsxoQJoK-aOOTD9lMwTCcO37Ihmcnknp9FxO9lsFYBuTZRfGohzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-MK4QFP-ZBOIUZKl24rwQ2MDBiWioMBQ4Oi_rMBXRikKX0XDMV9-nXe9ITJJPQlWPFcSNrVIwcah3Trw5ImLKblk3ymGAZL8lgu_h2F0dG5ldHOIAAAAAAAAMACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhES34myJc2VjcDI1NmsxoQNTVdI6Vw-YubsEVxTl5-JECJHcsESDoq60UruR5JwjcohzeW5jbmV0cweDdGNwgiMog3VkcIIjKA",
"enr:-MK4QFEPaQEsgH8YL5Gu9Xw1HdSaCjfHBaavRcuwRlEq3QulQdU3xLoPXV0l_zRoEMkdMGh-5e3AY842xFamYub7ojGGAZL8lgByh2F0dG5ldHOIAAAAAAAAAMCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhBjHXU2Jc2VjcDI1NmsxoQIUCDXAKfrPVgTp5yIYkGgneVdc5FaYQjnRSAMD50_LIYhzeW5jbmV0cwSDdGNwgiMog3VkcIIjKA",
"enr:-MK4QNOSfKR0-_DO35S3ogEDB0HyS5O1KVG3AG56YUsSdTbvDq7nLS_BQBtYF3W_JwHjOLidbyCdsSpEZ73iMJuFhEeGAZL8lf2nh2F0dG5ldHOIAAAAADAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhMbHVWCJc2VjcDI1NmsxoQJmKRg0hEM-TnwPj2ilnshK9hGixG55MaGHv_vnYASWBYhzeW5jbmV0cweDdGNwgiMog3VkcIIjKA",
"enr:-MK4QABR7d4ppxIxFOvg5jUTJ2mBzkPjvsFAmx4ijBOlYgk7AU41gNphw78hzxmeCEw0Wq7LK5YTuHUahjz8WM7i3tmGAZL8lgVIh2F0dG5ldHOIAAAAAAAAgAGEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKpAyyGJc2VjcDI1NmsxoQM9MtxjAx2y-mL3gQD25yWl3Gks7nrEHGEM5DvSpX1rLohzeW5jbmV0cw2DdGNwgiMog3VkcIIjKA",
"enr:-MK4QCfAmFbSe4QLoUDERuSZ7aE74lIfXnXRkvm7cjjayHFLMVy0XRQxxvxRLBoLpZV_-SS-lQyaVDkjhmNuv-igbTeGAZL8lfv7h2F0dG5ldHOIGAAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI_GdHKJc2VjcDI1NmsxoQKl9J1dyW1CnoiAN9is5uNLnk9op4wIIXWVahx6DsLmDYhzeW5jbmV0cweDdGNwgiMog3VkcIIjKA",
"enr:-MK4QEjbxm5EwlJUp062pFTwpbnyOHUZJOE0BnxBuQYro80aWwzPxuYsj9S1zdOEhtMCadpHaueKsoJtD6x2be06XwyGAZL8lg22h2F0dG5ldHOIAAwAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ3mJJeJc2VjcDI1NmsxoQP68-24KkowwXNVIM5A5BlStNX8dUR636iyStKqFuXDIYhzeW5jbmV0cwiDdGNwgiMog3VkcIIjKA",
"enr:-MK4QPTbd5CgBXLtFu34aJw8TR4Mpk55PBJj6RQTZP_HPfSFOhOpLI4cLbR39tBBShjc43GGKhANt2RK0hHYUrMUNIeGAZL8lf3ch2F0dG5ldHOIAAAAAAAwAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXoaVWJc2VjcDI1NmsxoQK9VrHPA291SJwcKGe4cdSJdnRc2imeTGuUscFIlzIRSIhzeW5jbmV0cwuDdGNwgiMog3VkcIIjKA",
"enr:-MK4QM6qzQ2kwTRRe9dRGEGqpvdN_6czu0t3gSAO7ojsuennXtY2y2h3HlNoI8lFvQtjXFVz-TDJTia1aabby113GgSGAZL8lgc-h2F0dG5ldHOIAAAAAAAAAGCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhES37GeJc2VjcDI1NmsxoQLGSNIMObiRYK12eYlHVqXrRtkjld2qnfqeymyqS09vNYhzeW5jbmV0cwSDdGNwgiMog3VkcIIjKA",
"enr:-MK4QC1IbLWxtbrF4aiHmvelea2gn0DUjRQLbOVjfDvIg7NeCDkmgHcAULDfyv8nokHVadeHU_nZtXx5KFSWMRThXpeGAZL8lfxSh2F0dG5ldHOIAAAAAABgAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXj6aWJc2VjcDI1NmsxoQIRdEG4LqcWFxuGL8rKnuloHMaXaWcyTAW36bQF-vq43ohzeW5jbmV0cwaDdGNwgiMog3VkcIIjKA",
"enr:-MK4QB2VcgCAj5ZZW8ItWHT7kT115pGK9U2jQt9e6A-_blQCGgZYCVfsAoUPnFoatzpolSqfq5yDqaOUZVRqfble4-6GAZL8lf37h2F0dG5ldHOIAAAAADAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhGiDI7CJc2VjcDI1NmsxoQP1gXjy89Aq_qC3q4h6YpMAQ7ArRyOAkNSdt2uxR95aRYhzeW5jbmV0cwiDdGNwgiMog3VkcIIjKA",
"enr:-MK4QE5FCzl2lOL1yLeUI9sOKol9sMDIP_2_829ONxfEKJ0RCMJ5fVoVCgAzkQeCB3Qy5cTc-J_xvV0BTUGwVncUeSuGAZL8lf3ah2F0dG5ldHOIAACAAQAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhLKA-BqJc2VjcDI1NmsxoQOsPITil6o4aZGLJU4QVOouhwA_asQYKzx73WQrX9mKJ4hzeW5jbmV0cwuDdGNwgiMog3VkcIIjKA",
"enr:-MK4QDTSMAJnlzM5ge0zFDYIf3Dm1jo-QlW16bGEK1c5V4XqROkjICrItNtTHn_rWZ7sFLJ-HEJrEXuelLoNRq4Mk7KGAZL8lf41h2F0dG5ldHOIAAAAAAAAABiEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhEDjdKaJc2VjcDI1NmsxoQLd_yMbZ703FvFi-XREC6rLHnbD_UWUCza99qjnAmrv4IhzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-MK4QM6zcrPDsArscPtgE1vIUYUttaaaoy9KK1UVJt_v2vltRucEOoJf1Y0YhSmEjVTnxtquUnNsthr1xEbzlINJMaCGAZL8lgbDh2F0dG5ldHOIAAAYAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJO2m8KJc2VjcDI1NmsxoQP-z75gbAYqR6_LCL2rpPTZmrOz8NnS1UFhqjsmwVbw9IhzeW5jbmV0cweDdGNwgiMog3VkcIIjKA",
"enr:-MK4QL18SZHDapmjGWInRUZYBuU3CTkqVLyhxpovPLcnMgsiPx6b2uf2jHQnj3lK8y0mLxv74y6Ww2kBFdrLKFLxZTmGAZL8lgfth2F0dG5ldHOIAAAAAAYAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIs7BZaJc2VjcDI1NmsxoQM1pgLIjSoQCDzGaH6k-xiL_H9FP_aO62iZXpnRFU_6zohzeW5jbmV0cw2DdGNwgiMog3VkcIIjKA",
"enr:-MK4QAKZrFxKHiZlNYqq3a3t5SSbg36XL1ipI93wPvj6nhmfVSi5iNK9qRxQmhRt8b6S5iEAcPhnHNiO4gtaZdGg3DKGAZL8lgRah2F0dG5ldHOIAAAAAADAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI_GaKKJc2VjcDI1NmsxoQMYWC2DzvQ8_K9V6kU3g2Bfr7qkp0MkkpLcc04szxTQ_ohzeW5jbmV0cwKDdGNwgiMog3VkcIIjKA",
"enr:-MK4QAeVsVSVHkpiv2wpX4JovFW-8zlNJGQvso-cu25VIRIkYDs1uokbt_XexoabwaneyuftPbDFnNbuTVFjOiixIAqGAZL8lgJOh2F0dG5ldHOIAQAAAAAAAICEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIm4xiSJc2VjcDI1NmsxoQNj5zT1FZCoZp2vu0xyPvIn3nTGjbIweVwEkAAeqdMJ9YhzeW5jbmV0cwWDdGNwgiMog3VkcIIjKA",
"enr:-MK4QIejUXUsOX2Zt0gaZ0L0nNjE40rCu5S3H_lNaug79Gk2aNObUAorRsTXKslKTc2oFRFd68vn_294oKtOjKQ6lP2GAZL8lgrih2F0dG5ldHOIABgAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKdjQ9WJc2VjcDI1NmsxoQNBNJ9KuHDBEp0PnogcADqdijjfYEI94yP91iLwFNM6johzeW5jbmV0cw2DdGNwgiMog3VkcIIjKA",
"enr:-MK4QN0Jin_u-UobugNNP9ux3Fin5tk93ax5tCwNTWhIQNmkf4hKHVQc7n39zaYTauxHg44byC_F1XAroTedeudWKMCGAZL8lgHwh2F0dG5ldHOIAAAAABgAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXj6zuJc2VjcDI1NmsxoQJKQP5WBCZz89oRMDF0ZK657hmXuScPIKk22SQMBXBseYhzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-MK4QAySvod_fX8760QzGJ6pMeZGGY05vUG7HY2IndLnkvJBYR48NdvtvqWWTXmScVEy4blZoxDuRIKiu1FX3ym2XjaGAZL8lgNyh2F0dG5ldHOIAAAAAADAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ_LaLyJc2VjcDI1NmsxoQLVROfxZ1332xSDedlu954b4dQKw3stV5r7PMKWS0VD94hzeW5jbmV0cwWDdGNwgiMog3VkcIIjKA",
"enr:-MK4QKThqho0pK2Ez22RIJkCqWasVhyFkXxQKTq1B_o05Yx9elbWfAMvoyl15XfE1u5oNRP6JGLSNoBGPwfAHDQT_XmGAZL8lgFxh2F0dG5ldHOIAAAAYAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhGj4WPmJc2VjcDI1NmsxoQIt33J7LcbYU-dhqYzMYlK9NNZoGq6Pd6Gkei5LITPYHYhzeW5jbmV0cwODdGNwgiMog3VkcIIjKA",
"enr:-MK4QPvlUxfKTEuGcjtgBmh6dmMVlJ9kXCuvEJVxhXLpHokLcQO70_HIiPhTifc7nL9uVYf-aLsR9i9_Ju0juMl2obqGAZL8lgRih2F0dG5ldHOIAIABAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXoREOJc2VjcDI1NmsxoQP8EniEQ3S7NIc7G0eCJ-4fmsBmPEsXLLAIBTqhiEr1dohzeW5jbmV0cwODdGNwgiMog3VkcIIjKA",
"enr:-MK4QPVLA2XuDDDHyJv6a6vs1zKf-QucEokYhtG5pJtwPWBza6TXV8q1SgwU_FdkK-ngvd4VX4VkqFxgTc0Rjc9xrASGAZL8lgHph2F0dG5ldHOIAAAAAAAAAwCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI9u1yKJc2VjcDI1NmsxoQMdrx7mbyTnW_cSUJci7F6zPiIRtn99q5F6l_m0jQqtA4hzeW5jbmV0cwqDdGNwgiMog3VkcIIjKA",
"enr:-MK4QE7dJR9k1mKgRM8N5i_MGxWwqGRX3jUD_ODVlu3ai9zEd59l9Cc1LcUzcrNhnsxsY7ESwvGA1WsR-uvbpwMNz4OGAZL8lfirh2F0dG5ldHOIAAAAAAAAYACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ_fp9qJc2VjcDI1NmsxoQIslFIGQRB_VFWMPevzyxhMDX9CkCnJp4PEFRG5n7jk2YhzeW5jbmV0cwSDdGNwgiMog3VkcIIjKA",
"enr:-MK4QEuke5XW36g1t9fIhgBBdTsBsbVxUy6BPT5I-QMZ0gylMLKTO8OWrxprpOL_bqFM24x4YbpKzCcGlJet1m8P6D6GAZL8lfmLh2F0dG5ldHOIgAEAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIm4z1uJc2VjcDI1NmsxoQLsGvwvgwBKlErW3e4Wxlp_C-2LoemsgxNh4YXhzYh184hzeW5jbmV0cweDdGNwgiMog3VkcIIjKA",
"enr:-MK4QOKr0la5UiDsWR0svdmz5bRU2cAO8q2gbVGFVP_5zUjUTAyS482sVR2EEMU84R5Vag1rmKN2B50N62SAeFDvmUmGAZL8lgXAh2F0dG5ldHOIAAAAAAAAAwCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIs7dg6Jc2VjcDI1NmsxoQI_Abpl-E4Sm5BYS-Rq19nAYTj3xG2t1GwsR6azMDvjjYhzeW5jbmV0cw-DdGNwgiMog3VkcIIjKA",
"enr:-LK4QGKKpJY2mO2wwRWE1GrSOo0YbIhvGU3Q-jCp89LHJRZKObPECBJFrkx55lBHmBdofOU42tjHtcdaDEuz1y3mWOkEh2F0dG5ldHOIAAAAAMAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ9Z7GWJc2VjcDI1NmsxoQMcuYnhiF7_6Qlli7BHk-HtRJAJF1sAQIjBMMZFhHibLYN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QDE7XQf1xXn_1svSTXMAGEfMCB06HojK0hGLFufHu65SBXvlV1oPxy9O_Ww4hqSwK_Ttn9RujfEpAl4p2Yp3Ox8Eh2F0dG5ldHOIAwAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKUW-E2Jc2VjcDI1NmsxoQO1c8OIsBWo_BTB2AcTOi3qr8PvnYK1mM5kIfuyJZ5ptIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QHbSqmyPO4e25hz5dAjD9R70Am45I757aI1VOivcheRnCL0IGR1srnCizSDmV0DAN-pgPPB7v4D_e8WIPpCKtWIEh2F0dG5ldHOIMAAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI_GfsCJc2VjcDI1NmsxoQPRD80hT3L5trkyN7YR6dTa0eY8m8-BPEQ9VzVHr-KCm4N0Y3CCIyiDdWRwgiMo",
"enr:-LK4QDiQXp7rnmRd32bGVtboxwh-MFBpIAWTSUXAaxWhjVE9UERQUvGLsMYMcOHSRTND2Q7ViZ-t-VdnWG37q-ModXQEh2F0dG5ldHOIAABgAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKdH2zWJc2VjcDI1NmsxoQLlA-CJY9lvScDxC3c0gbqgv0fVcK8__JhCLi-OWqjLooN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QF4azbnoYa11BniDYhO9-U14PRHaGGgU9VkUBXxiaQQLBoEXnE5PitCXZksrjr6IXZs1Rtk_rYat6Kg56rVl7A8Eh2F0dG5ldHOIAABgAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXjWVmJc2VjcDI1NmsxoQJ-BjW6tpjmiDKUW-KNDVRL_avDPmK_yRo5HSGvJYgt04N0Y3CCIyiDdWRwgiMo",
"enr:-LK4QFPSLoXEbUBBQTlZEwJtNqX8hVq1GccfOqUYPY7UEggpGrC0IFuIpZGi1yLetMUd9gc055On116W36EaRtlxVeYEh2F0dG5ldHOIAAAAAAwAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhBjHXyKJc2VjcDI1NmsxoQI47lLtfQTztWdm3pVhY6giqC1PKudRwODxTQZEpGpcY4N0Y3CCIyiDdWRwgiMo",
"enr:-LK4QFFn8IQKKhPTcIos-3CSClVm7r102YpoTnjUvSZ6syyNWyJGeIDF6Uvm8SwzXKjfKmdV7g5TGyCgA0hO0-PNRFoEh2F0dG5ldHOIAAAADAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKpAkzeJc2VjcDI1NmsxoQPk4qNUMWmRxQKwDA99a2CjqktjXKi5wqCmNHQmFtegDYN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QL2JLllEpB8OyarfTP9CxjpgsHAHsPZoY19qBjR85HYTI76d64iTA0RxusXCiJdTLRJWgE6DE8ArpbkRvAWYJ6sEh2F0dG5ldHOIAAAAAwAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhM69nQ6Jc2VjcDI1NmsxoQJIB8SxOEE4zHpubxYwAEFTFPvCvH1PK-aAawd5X_IfeoN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QAW2bNcn_byPRldddp4zOJQrdZ0Tm_rmPLNIOUC0VUFjInLcNFLdxfUgWouL3VNN0sfs7zT0nfcSTWL4joj_0IMEh2F0dG5ldHOIAAAwAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhLI-E7uJc2VjcDI1NmsxoQLBU9O0JOoFxULrs225uuzI--xb3VzeV-Ub_i7ylzWp2YN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QF83C8OV4vbbr1HS0LFL1Fy4ZpJKOIi9QlSEJkfrDyRqIyy-D-Dp8A_vRVcJbbV5SbjBq1g-pQh7j1zzctlBXWoEh2F0dG5ldHOIAAAAAAAMAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKEji72Jc2VjcDI1NmsxoQPEmmzHJQH9TRjb_K5yanwJ6Cakw3ki3ChgFXYOqLR-3IN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QDKtIFH1vZaG2pMhLPrgY-4g0zFk2iXOVs3WySC3MikxAhBdDEttTKSH1opVZOhnI_5Ivxp8raDNhMhntvJVTkUEh2F0dG5ldHOIAAAAAAwAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ_f11yJc2VjcDI1NmsxoQLdayDMFPEnWPa_1MkL0YDmom_3zQKH2D3atBAIIt9rpIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QAt1xPThwZtp4fFdytsWwzuBuN_Kh_rlV0aJjkjWQUbddvBYctYLrzHAo7mFWktctMsGUO0VoBQdnm3ZiMoo3lEEh2F0dG5ldHOIAAAAAAAYAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhEDiYIuJc2VjcDI1NmsxoQOkRH_IBZwuZmpxh9ldiwDYUqChz6_ei6yXAxqQ9z8QyYN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QPYpTeJc1dtO_CB2Lvp7TCg6iQOU_WfpKPrTt_e3fHVKRZnmkGMA7aUzA-lxEgQQgb72TyPECN4z_vbO_XbJCL8Eh2F0dG5ldHOIAAAAAAAAgAGEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKXjJ-CJc2VjcDI1NmsxoQOuuqf6k-s_yOnNLcDNv0tIhesiYvvdphMIgyOBLDNeKoN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QG0SdN74uXa3ZthPN7B-1egIGMBzW5wlR7QX3Y0eoJwRAetxow_pyIBPsTRkPY2FHLhv8jEqna_kPHt7eXe1A_AEh2F0dG5ldHOIAAwAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKUW3BOJc2VjcDI1NmsxoQJKVmvTZgOjFKGc3ShLgTA1H192A5PLYAPaFW3CVxEmHIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QFBL2CvVr5Sz-P3nkE6jvAQdArjrLkYkxlhAxNrOHPauOsaBaa5m5OF2IzIkw09hosGwOs4XOnhVPHuYfFdV2YIEh2F0dG5ldHOIAAAAAAAAAMCEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ_fxziJc2VjcDI1NmsxoQPfWXddZQ4UAiu178G3WlyghgoIpgrZATEBNsApnA9b5YN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QPsih3HzKvodMe13_S7G7-LdHARWSWqGrfjBwpld7uBZAmE5509YBXunP5YOVfN7-65FjBRkR4HmImaVTW-3ILIEh2F0dG5ldHOIAAYAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhEPNp5mJc2VjcDI1NmsxoQPKHJGGjFBqh9a3p0xyykr1iyzYG1YGPujEkx_hilG9tIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QHRInI7TYE6LVBAAsU1ahJONNnYjGpJEVZMOgLH3hEPzQsEWzYAMWTIU_d9mlL4SAT3jMHi0RUOQ1tNvKpdFo5cEh2F0dG5ldHOIAAAwAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIs75jmJc2VjcDI1NmsxoQJTVNZQIGja0Btyt1pPX21sQ25PAIs16JYqRpc2OIOHMIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QJOiEnrs_7W2xD5ni1S4HGGXKhrzuluvt3XGMM-4HZl2OM5gtKY3OtkKKUPYYTFbkz8-IZh27fimVhiYHAr7rwwEh2F0dG5ldHOIAADAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKdjX6-Jc2VjcDI1NmsxoQLDUVY1B_Y7uKNnNuFBqVjsXXZPF4UcHYwlRIX8bOqwDIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QGJlf3p3S3hfJTs2Bmj2YoidPP_QWT14t6eMySxYa22cDWdNPDY3gmFkevT_C_f1J1DAbQCCEELcjh2Y2gQ9U6sEh2F0dG5ldHOIAAAAAAAwAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhIbRrmKJc2VjcDI1NmsxoQND9l9iIY92LxR5Yg0JThjZt2atGz9Wu7V-_Zn1JFsPgoN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QIzc1HoQowg-CyrkruCbNxJ19dcn_UpWh07zKhyl_b3TBhjju0QVnwMC0HBUo-mFPlLqVI-v4dvS7pIp5llr1cAEh2F0dG5ldHOIgAEAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhKEjnAqJc2VjcDI1NmsxoQJI2HA4O9X4qs7K2Tgh637dz876s8f40wanbnnPxJcBKYN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QG9_qJS-qov-VWaRJB7NDD1XJExA2zaTqSUsXS87mqsGdLbNlMEUcUl6-Jjn2h5JCrgjdekJajxIDBSozs2FLPIEh2F0dG5ldHOIAAAAAAAwAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhI5deB6Jc2VjcDI1NmsxoQL0hy3fyx2QhB-qxvdBIB1RqBIXwEcpNAspkyMESe-Ds4N0Y3CCIyiDdWRwgiMo",
"enr:-LK4QMjMMPjyls4JEnYlNcQi7k9j66Z6aDvmyLGh-MmK6Do_CncBF2dTEvrXJPcbDgIrpFrImsyw6tF4YUzO0OhHaGYEh2F0dG5ldHOIwAAAAAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJ3m4YuJc2VjcDI1NmsxoQLP3iDQVX979MeG4JUG5TEEOfcBDq4DL6IwV_vMO-iMsIN0Y3CCIyiDdWRwgiMo",
"enr:-LK4QPNn6P_ydDEeOPMp77RH7NHZtEdFstujjbM_mg9N9mHDEGaAim7qZo5jWgMC1HZpf0c4-tf0Y6eFkDdw9EbhPlAEh2F0dG5ldHOIAAAAwAAAAACEZXRoMpDY3UMGYGN2JAABAAAAAAAAgmlkgnY0gmlwhJgq582Jc2VjcDI1NmsxoQN2tk0LwZ9dacFxiUApBGnqfHwLTiOjtC8Kstlel29Xw4N0Y3CCIyiDdWRwgiMo",
}
OverrideBeaconNetworkConfig(cfg)
}
// MekongConfig defines the config for the Mekong beacon chain testnet.
func MekongConfig() *BeaconChainConfig {
cfg := MainnetConfig().Copy()
cfg.MinGenesisTime = 1730822340
cfg.GenesisDelay = 60
cfg.ConfigName = MekongName
cfg.GenesisValidatorsRoot = bytesutil.ToBytes32(hexutil.MustDecode("0x9838240bca889c52818d7502179b393a828f61f15119d9027827c36caeb67db7"))
cfg.GenesisForkVersion = []byte{0x10, 0x63, 0x76, 0x24}
cfg.SecondsPerETH1Block = 12
cfg.DepositChainID = 7078815900
cfg.DepositNetworkID = 7078815900
cfg.AltairForkEpoch = 0
cfg.AltairForkVersion = []byte{0x20, 0x63, 0x76, 0x24}
cfg.BellatrixForkEpoch = 0
cfg.BellatrixForkVersion = []byte{0x30, 0x63, 0x76, 0x24}
cfg.CapellaForkEpoch = 0
cfg.CapellaForkVersion = []byte{0x40, 0x63, 0x76, 0x24}
cfg.DenebForkEpoch = 0
cfg.DenebForkVersion = []byte{0x50, 0x63, 0x76, 0x24}
cfg.ElectraForkEpoch = 256
cfg.ElectraForkVersion = []byte{0x60, 0x63, 0x76, 0x24}
cfg.TerminalTotalDifficulty = "0"
cfg.DepositContractAddress = "0x4242424242424242424242424242424242424242"
cfg.EjectionBalance = 30000000000
cfg.ChurnLimitQuotient = 128
cfg.MinGenesisActiveValidatorCount = 100000
cfg.MinValidatorWithdrawabilityDelay = 2
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -1,28 +0,0 @@
package params_test
import (
"path"
"testing"
"github.com/bazelbuild/rules_go/go/tools/bazel"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestMekongConfigMatchesUpstreamYaml(t *testing.T) {
presetFPs := presetsFilePath(t, "mainnet")
mn, err := params.ByName(params.MainnetName)
require.NoError(t, err)
cfg := mn.Copy()
for _, fp := range presetFPs {
cfg, err = params.UnmarshalConfigFile(fp, cfg)
require.NoError(t, err)
}
fPath, err := bazel.Runfile("external/mekong_testnet")
require.NoError(t, err)
configFP := path.Join(fPath, "network-configs/devnet-0/metadata", "config.yaml")
pcfg, err := params.UnmarshalConfigFile(configFP, nil)
require.NoError(t, err)
fields := fieldsFromYamls(t, append(presetFPs, configFP))
assertYamlFieldsMatch(t, "mekong", fields, pcfg, params.MekongConfig())
}

View File

@@ -10,5 +10,4 @@ const (
MinimalName = "minimal"
SepoliaName = "sepolia"
HoleskyName = "holesky"
MekongName = "mekong"
)

View File

@@ -5,15 +5,18 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
)
type LightClientExecutionBranch = [fieldparams.ExecutionBranchDepth][fieldparams.RootLength]byte
type LightClientSyncCommitteeBranch = [fieldparams.SyncCommitteeBranchDepth][fieldparams.RootLength]byte
type LightClientSyncCommitteeBranchElectra = [fieldparams.SyncCommitteeBranchDepthElectra][fieldparams.RootLength]byte
type LightClientFinalityBranch = [fieldparams.FinalityBranchDepth][fieldparams.RootLength]byte
type LightClientFinalityBranchElectra = [fieldparams.FinalityBranchDepthElectra][fieldparams.RootLength]byte
type LightClientHeader interface {
ssz.Marshaler
Proto() proto.Message
Version() int
Beacon() *pb.BeaconBlockHeader
Execution() (ExecutionData, error)
@@ -31,29 +34,41 @@ type LightClientBootstrap interface {
type LightClientUpdate interface {
ssz.Marshaler
Proto() proto.Message
Version() int
AttestedHeader() LightClientHeader
SetAttestedHeader(header LightClientHeader) error
NextSyncCommittee() *pb.SyncCommittee
SetNextSyncCommittee(sc *pb.SyncCommittee)
NextSyncCommitteeBranch() (LightClientSyncCommitteeBranch, error)
SetNextSyncCommitteeBranch(branch [][]byte) error
NextSyncCommitteeBranchElectra() (LightClientSyncCommitteeBranchElectra, error)
FinalizedHeader() LightClientHeader
FinalityBranch() LightClientFinalityBranch
SetFinalizedHeader(header LightClientHeader) error
FinalityBranch() (LightClientFinalityBranch, error)
FinalityBranchElectra() (LightClientFinalityBranchElectra, error)
SetFinalityBranch(branch [][]byte) error
SyncAggregate() *pb.SyncAggregate
SetSyncAggregate(sa *pb.SyncAggregate)
SignatureSlot() primitives.Slot
SetSignatureSlot(slot primitives.Slot)
}
type LightClientFinalityUpdate interface {
ssz.Marshaler
Proto() proto.Message
Version() int
AttestedHeader() LightClientHeader
FinalizedHeader() LightClientHeader
FinalityBranch() LightClientFinalityBranch
FinalityBranch() (LightClientFinalityBranch, error)
FinalityBranchElectra() (LightClientFinalityBranchElectra, error)
SyncAggregate() *pb.SyncAggregate
SignatureSlot() primitives.Slot
}
type LightClientOptimisticUpdate interface {
ssz.Marshaler
Proto() proto.Message
Version() int
AttestedHeader() LightClientHeader
SyncAggregate() *pb.SyncAggregate

View File

@@ -14,6 +14,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -21,6 +22,7 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -41,7 +41,7 @@ func NewWrappedBootstrapAltair(p *pb.LightClientBootstrapAltair) (interfaces.Lig
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
header, err := NewWrappedHeaderAltair(p.Header)
header, err := NewWrappedHeader(p.Header)
if err != nil {
return nil, err
}
@@ -105,7 +105,7 @@ func NewWrappedBootstrapCapella(p *pb.LightClientBootstrapCapella) (interfaces.L
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
header, err := NewWrappedHeaderCapella(p.Header)
header, err := NewWrappedHeader(p.Header)
if err != nil {
return nil, err
}
@@ -169,7 +169,7 @@ func NewWrappedBootstrapDeneb(p *pb.LightClientBootstrapDeneb) (interfaces.Light
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
header, err := NewWrappedHeaderDeneb(p.Header)
header, err := NewWrappedHeader(p.Header)
if err != nil {
return nil, err
}
@@ -233,7 +233,7 @@ func NewWrappedBootstrapElectra(p *pb.LightClientBootstrapElectra) (interfaces.L
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
header, err := NewWrappedHeaderDeneb(p.Header)
header, err := NewWrappedHeader(p.Header)
if err != nil {
return nil, err
}

View File

@@ -23,11 +23,72 @@ func NewWrappedFinalityUpdate(m proto.Message) (interfaces.LightClientFinalityUp
return NewWrappedFinalityUpdateCapella(t)
case *pb.LightClientFinalityUpdateDeneb:
return NewWrappedFinalityUpdateDeneb(t)
case *pb.LightClientFinalityUpdateElectra:
return NewWrappedFinalityUpdateElectra(t)
default:
return nil, fmt.Errorf("cannot construct light client finality update from type %T", t)
}
}
func NewFinalityUpdateFromUpdate(update interfaces.LightClientUpdate) (interfaces.LightClientFinalityUpdate, error) {
switch t := update.(type) {
case *updateAltair:
return &finalityUpdateAltair{
p: &pb.LightClientFinalityUpdateAltair{
AttestedHeader: t.p.AttestedHeader,
FinalizedHeader: t.p.FinalizedHeader,
FinalityBranch: t.p.FinalityBranch,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
finalizedHeader: t.finalizedHeader,
finalityBranch: t.finalityBranch,
}, nil
case *updateCapella:
return &finalityUpdateCapella{
p: &pb.LightClientFinalityUpdateCapella{
AttestedHeader: t.p.AttestedHeader,
FinalizedHeader: t.p.FinalizedHeader,
FinalityBranch: t.p.FinalityBranch,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
finalizedHeader: t.finalizedHeader,
finalityBranch: t.finalityBranch,
}, nil
case *updateDeneb:
return &finalityUpdateDeneb{
p: &pb.LightClientFinalityUpdateDeneb{
AttestedHeader: t.p.AttestedHeader,
FinalizedHeader: t.p.FinalizedHeader,
FinalityBranch: t.p.FinalityBranch,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
finalizedHeader: t.finalizedHeader,
finalityBranch: t.finalityBranch,
}, nil
case *updateElectra:
return &finalityUpdateElectra{
p: &pb.LightClientFinalityUpdateElectra{
AttestedHeader: t.p.AttestedHeader,
FinalizedHeader: t.p.FinalizedHeader,
FinalityBranch: t.p.FinalityBranch,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
finalizedHeader: t.finalizedHeader,
finalityBranch: t.finalityBranch,
}, nil
default:
return nil, fmt.Errorf("unsupported type %T", t)
}
}
type finalityUpdateAltair struct {
p *pb.LightClientFinalityUpdateAltair
attestedHeader interfaces.LightClientHeader
@@ -41,11 +102,11 @@ func NewWrappedFinalityUpdateAltair(p *pb.LightClientFinalityUpdateAltair) (inte
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderAltair(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderAltair(p.FinalizedHeader)
finalizedHeader, err := NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
@@ -78,6 +139,10 @@ func (u *finalityUpdateAltair) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *finalityUpdateAltair) Proto() proto.Message {
return u.p
}
func (u *finalityUpdateAltair) Version() int {
return version.Altair
}
@@ -90,8 +155,12 @@ func (u *finalityUpdateAltair) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *finalityUpdateAltair) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *finalityUpdateAltair) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return u.finalityBranch, nil
}
func (u *finalityUpdateAltair) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return interfaces.LightClientFinalityBranchElectra{}, consensustypes.ErrNotSupported("FinalityBranchElectra", u.Version())
}
func (u *finalityUpdateAltair) SyncAggregate() *pb.SyncAggregate {
@@ -115,11 +184,11 @@ func NewWrappedFinalityUpdateCapella(p *pb.LightClientFinalityUpdateCapella) (in
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderCapella(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderCapella(p.FinalizedHeader)
finalizedHeader, err := NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
@@ -152,6 +221,10 @@ func (u *finalityUpdateCapella) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *finalityUpdateCapella) Proto() proto.Message {
return u.p
}
func (u *finalityUpdateCapella) Version() int {
return version.Capella
}
@@ -164,8 +237,12 @@ func (u *finalityUpdateCapella) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *finalityUpdateCapella) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *finalityUpdateCapella) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return u.finalityBranch, nil
}
func (u *finalityUpdateCapella) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return interfaces.LightClientFinalityBranchElectra{}, consensustypes.ErrNotSupported("FinalityBranchElectra", u.Version())
}
func (u *finalityUpdateCapella) SyncAggregate() *pb.SyncAggregate {
@@ -189,11 +266,11 @@ func NewWrappedFinalityUpdateDeneb(p *pb.LightClientFinalityUpdateDeneb) (interf
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderDeneb(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderDeneb(p.FinalizedHeader)
finalizedHeader, err := NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
@@ -226,6 +303,10 @@ func (u *finalityUpdateDeneb) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *finalityUpdateDeneb) Proto() proto.Message {
return u.p
}
func (u *finalityUpdateDeneb) Version() int {
return version.Deneb
}
@@ -238,8 +319,12 @@ func (u *finalityUpdateDeneb) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *finalityUpdateDeneb) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *finalityUpdateDeneb) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return u.finalityBranch, nil
}
func (u *finalityUpdateDeneb) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return interfaces.LightClientFinalityBranchElectra{}, consensustypes.ErrNotSupported("FinalityBranchElectra", u.Version())
}
func (u *finalityUpdateDeneb) SyncAggregate() *pb.SyncAggregate {
@@ -249,3 +334,86 @@ func (u *finalityUpdateDeneb) SyncAggregate() *pb.SyncAggregate {
func (u *finalityUpdateDeneb) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
type finalityUpdateElectra struct {
p *pb.LightClientFinalityUpdateElectra
attestedHeader interfaces.LightClientHeader
finalizedHeader interfaces.LightClientHeader
finalityBranch interfaces.LightClientFinalityBranchElectra
}
var _ interfaces.LightClientFinalityUpdate = &finalityUpdateElectra{}
func NewWrappedFinalityUpdateElectra(p *pb.LightClientFinalityUpdateElectra) (interfaces.LightClientFinalityUpdate, error) {
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
finalityBranch, err := createBranch[interfaces.LightClientFinalityBranchElectra](
"finality",
p.FinalityBranch,
fieldparams.FinalityBranchDepthElectra,
)
if err != nil {
return nil, err
}
return &finalityUpdateElectra{
p: p,
attestedHeader: attestedHeader,
finalizedHeader: finalizedHeader,
finalityBranch: finalityBranch,
}, nil
}
func (u *finalityUpdateElectra) MarshalSSZTo(dst []byte) ([]byte, error) {
return u.p.MarshalSSZTo(dst)
}
func (u *finalityUpdateElectra) MarshalSSZ() ([]byte, error) {
return u.p.MarshalSSZ()
}
func (u *finalityUpdateElectra) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *finalityUpdateElectra) Proto() proto.Message {
return u.p
}
func (u *finalityUpdateElectra) Version() int {
return version.Electra
}
func (u *finalityUpdateElectra) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *finalityUpdateElectra) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *finalityUpdateElectra) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return interfaces.LightClientFinalityBranch{}, consensustypes.ErrNotSupported("FinalityBranch", u.Version())
}
func (u *finalityUpdateElectra) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return u.finalityBranch, nil
}
func (u *finalityUpdateElectra) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *finalityUpdateElectra) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}

View File

@@ -4,11 +4,13 @@ import (
"fmt"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensustypes "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -22,6 +24,9 @@ func NewWrappedHeader(m proto.Message) (interfaces.LightClientHeader, error) {
case *pb.LightClientHeaderCapella:
return NewWrappedHeaderCapella(t)
case *pb.LightClientHeaderDeneb:
if slots.ToEpoch(t.Beacon.Slot) >= params.BeaconConfig().ElectraForkEpoch {
return NewWrappedHeaderElectra(t)
}
return NewWrappedHeaderDeneb(t)
default:
return nil, fmt.Errorf("cannot construct light client header from type %T", t)
@@ -53,6 +58,10 @@ func (h *headerAltair) SizeSSZ() int {
return h.p.SizeSSZ()
}
func (h *headerAltair) Proto() proto.Message {
return h.p
}
func (h *headerAltair) Version() int {
return version.Altair
}
@@ -62,11 +71,11 @@ func (h *headerAltair) Beacon() *pb.BeaconBlockHeader {
}
func (h *headerAltair) Execution() (interfaces.ExecutionData, error) {
return nil, consensustypes.ErrNotSupported("Execution", version.Altair)
return nil, consensustypes.ErrNotSupported("Execution", h.Version())
}
func (h *headerAltair) ExecutionBranch() (interfaces.LightClientExecutionBranch, error) {
return interfaces.LightClientExecutionBranch{}, consensustypes.ErrNotSupported("ExecutionBranch", version.Altair)
return interfaces.LightClientExecutionBranch{}, consensustypes.ErrNotSupported("ExecutionBranch", h.Version())
}
type headerCapella struct {
@@ -114,6 +123,10 @@ func (h *headerCapella) SizeSSZ() int {
return h.p.SizeSSZ()
}
func (h *headerCapella) Proto() proto.Message {
return h.p
}
func (h *headerCapella) Version() int {
return version.Capella
}
@@ -175,6 +188,10 @@ func (h *headerDeneb) SizeSSZ() int {
return h.p.SizeSSZ()
}
func (h *headerDeneb) Proto() proto.Message {
return h.p
}
func (h *headerDeneb) Version() int {
return version.Deneb
}
@@ -190,3 +207,68 @@ func (h *headerDeneb) Execution() (interfaces.ExecutionData, error) {
func (h *headerDeneb) ExecutionBranch() (interfaces.LightClientExecutionBranch, error) {
return h.executionBranch, nil
}
type headerElectra struct {
p *pb.LightClientHeaderDeneb
execution interfaces.ExecutionData
executionBranch interfaces.LightClientExecutionBranch
}
var _ interfaces.LightClientHeader = &headerElectra{}
func NewWrappedHeaderElectra(p *pb.LightClientHeaderDeneb) (interfaces.LightClientHeader, error) {
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
execution, err := blocks.WrappedExecutionPayloadHeaderDeneb(p.Execution)
if err != nil {
return nil, err
}
branch, err := createBranch[interfaces.LightClientExecutionBranch](
"execution",
p.ExecutionBranch,
fieldparams.ExecutionBranchDepth,
)
if err != nil {
return nil, err
}
return &headerElectra{
p: p,
execution: execution,
executionBranch: branch,
}, nil
}
func (h *headerElectra) MarshalSSZTo(dst []byte) ([]byte, error) {
return h.p.MarshalSSZTo(dst)
}
func (h *headerElectra) MarshalSSZ() ([]byte, error) {
return h.p.MarshalSSZ()
}
func (h *headerElectra) SizeSSZ() int {
return h.p.SizeSSZ()
}
func (h *headerElectra) Proto() proto.Message {
return h.p
}
func (h *headerElectra) Version() int {
return version.Electra
}
func (h *headerElectra) Beacon() *pb.BeaconBlockHeader {
return h.p.Beacon
}
func (h *headerElectra) Execution() (interfaces.ExecutionData, error) {
return h.execution, nil
}
func (h *headerElectra) ExecutionBranch() (interfaces.LightClientExecutionBranch, error) {
return h.executionBranch, nil
}

View File

@@ -4,12 +4,11 @@ import (
"fmt"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
)
type branchConstraint interface {
~interfaces.LightClientExecutionBranch | ~interfaces.LightClientSyncCommitteeBranch | ~interfaces.LightClientFinalityBranch
[4][fieldparams.RootLength]byte | [5][fieldparams.RootLength]byte | [6][fieldparams.RootLength]byte | [7][fieldparams.RootLength]byte
}
func createBranch[T branchConstraint](name string, input [][]byte, depth int) (T, error) {

View File

@@ -27,12 +27,55 @@ func NewWrappedOptimisticUpdate(m proto.Message) (interfaces.LightClientOptimist
}
}
type OptimisticUpdateAltair struct {
func NewOptimisticUpdateFromUpdate(update interfaces.LightClientUpdate) (interfaces.LightClientOptimisticUpdate, error) {
switch t := update.(type) {
case *updateAltair:
return &optimisticUpdateAltair{
p: &pb.LightClientOptimisticUpdateAltair{
AttestedHeader: t.p.AttestedHeader,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
}, nil
case *updateCapella:
return &optimisticUpdateCapella{
p: &pb.LightClientOptimisticUpdateCapella{
AttestedHeader: t.p.AttestedHeader,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
}, nil
case *updateDeneb:
return &optimisticUpdateDeneb{
p: &pb.LightClientOptimisticUpdateDeneb{
AttestedHeader: t.p.AttestedHeader,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
}, nil
case *updateElectra:
return &optimisticUpdateDeneb{
p: &pb.LightClientOptimisticUpdateDeneb{
AttestedHeader: t.p.AttestedHeader,
SyncAggregate: t.p.SyncAggregate,
SignatureSlot: t.p.SignatureSlot,
},
attestedHeader: t.attestedHeader,
}, nil
default:
return nil, fmt.Errorf("unsupported type %T", t)
}
}
type optimisticUpdateAltair struct {
p *pb.LightClientOptimisticUpdateAltair
attestedHeader interfaces.LightClientHeader
}
var _ interfaces.LightClientOptimisticUpdate = &OptimisticUpdateAltair{}
var _ interfaces.LightClientOptimisticUpdate = &optimisticUpdateAltair{}
func NewWrappedOptimisticUpdateAltair(p *pb.LightClientOptimisticUpdateAltair) (interfaces.LightClientOptimisticUpdate, error) {
if p == nil {
@@ -43,46 +86,50 @@ func NewWrappedOptimisticUpdateAltair(p *pb.LightClientOptimisticUpdateAltair) (
return nil, err
}
return &OptimisticUpdateAltair{
return &optimisticUpdateAltair{
p: p,
attestedHeader: attestedHeader,
}, nil
}
func (u *OptimisticUpdateAltair) MarshalSSZTo(dst []byte) ([]byte, error) {
func (u *optimisticUpdateAltair) MarshalSSZTo(dst []byte) ([]byte, error) {
return u.p.MarshalSSZTo(dst)
}
func (u *OptimisticUpdateAltair) MarshalSSZ() ([]byte, error) {
func (u *optimisticUpdateAltair) MarshalSSZ() ([]byte, error) {
return u.p.MarshalSSZ()
}
func (u *OptimisticUpdateAltair) SizeSSZ() int {
func (u *optimisticUpdateAltair) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *OptimisticUpdateAltair) Version() int {
func (u *optimisticUpdateAltair) Proto() proto.Message {
return u.p
}
func (u *optimisticUpdateAltair) Version() int {
return version.Altair
}
func (u *OptimisticUpdateAltair) AttestedHeader() interfaces.LightClientHeader {
func (u *optimisticUpdateAltair) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *OptimisticUpdateAltair) SyncAggregate() *pb.SyncAggregate {
func (u *optimisticUpdateAltair) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *OptimisticUpdateAltair) SignatureSlot() primitives.Slot {
func (u *optimisticUpdateAltair) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
type OptimisticUpdateCapella struct {
type optimisticUpdateCapella struct {
p *pb.LightClientOptimisticUpdateCapella
attestedHeader interfaces.LightClientHeader
}
var _ interfaces.LightClientOptimisticUpdate = &OptimisticUpdateCapella{}
var _ interfaces.LightClientOptimisticUpdate = &optimisticUpdateCapella{}
func NewWrappedOptimisticUpdateCapella(p *pb.LightClientOptimisticUpdateCapella) (interfaces.LightClientOptimisticUpdate, error) {
if p == nil {
@@ -93,46 +140,50 @@ func NewWrappedOptimisticUpdateCapella(p *pb.LightClientOptimisticUpdateCapella)
return nil, err
}
return &OptimisticUpdateCapella{
return &optimisticUpdateCapella{
p: p,
attestedHeader: attestedHeader,
}, nil
}
func (u *OptimisticUpdateCapella) MarshalSSZTo(dst []byte) ([]byte, error) {
func (u *optimisticUpdateCapella) MarshalSSZTo(dst []byte) ([]byte, error) {
return u.p.MarshalSSZTo(dst)
}
func (u *OptimisticUpdateCapella) MarshalSSZ() ([]byte, error) {
func (u *optimisticUpdateCapella) MarshalSSZ() ([]byte, error) {
return u.p.MarshalSSZ()
}
func (u *OptimisticUpdateCapella) SizeSSZ() int {
func (u *optimisticUpdateCapella) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *OptimisticUpdateCapella) Version() int {
func (u *optimisticUpdateCapella) Proto() proto.Message {
return u.p
}
func (u *optimisticUpdateCapella) Version() int {
return version.Capella
}
func (u *OptimisticUpdateCapella) AttestedHeader() interfaces.LightClientHeader {
func (u *optimisticUpdateCapella) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *OptimisticUpdateCapella) SyncAggregate() *pb.SyncAggregate {
func (u *optimisticUpdateCapella) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *OptimisticUpdateCapella) SignatureSlot() primitives.Slot {
func (u *optimisticUpdateCapella) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
type OptimisticUpdateDeneb struct {
type optimisticUpdateDeneb struct {
p *pb.LightClientOptimisticUpdateDeneb
attestedHeader interfaces.LightClientHeader
}
var _ interfaces.LightClientOptimisticUpdate = &OptimisticUpdateDeneb{}
var _ interfaces.LightClientOptimisticUpdate = &optimisticUpdateDeneb{}
func NewWrappedOptimisticUpdateDeneb(p *pb.LightClientOptimisticUpdateDeneb) (interfaces.LightClientOptimisticUpdate, error) {
if p == nil {
@@ -143,36 +194,40 @@ func NewWrappedOptimisticUpdateDeneb(p *pb.LightClientOptimisticUpdateDeneb) (in
return nil, err
}
return &OptimisticUpdateDeneb{
return &optimisticUpdateDeneb{
p: p,
attestedHeader: attestedHeader,
}, nil
}
func (u *OptimisticUpdateDeneb) MarshalSSZTo(dst []byte) ([]byte, error) {
func (u *optimisticUpdateDeneb) MarshalSSZTo(dst []byte) ([]byte, error) {
return u.p.MarshalSSZTo(dst)
}
func (u *OptimisticUpdateDeneb) MarshalSSZ() ([]byte, error) {
func (u *optimisticUpdateDeneb) MarshalSSZ() ([]byte, error) {
return u.p.MarshalSSZ()
}
func (u *OptimisticUpdateDeneb) SizeSSZ() int {
func (u *optimisticUpdateDeneb) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *OptimisticUpdateDeneb) Version() int {
func (u *optimisticUpdateDeneb) Proto() proto.Message {
return u.p
}
func (u *optimisticUpdateDeneb) Version() int {
return version.Deneb
}
func (u *OptimisticUpdateDeneb) AttestedHeader() interfaces.LightClientHeader {
func (u *optimisticUpdateDeneb) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *OptimisticUpdateDeneb) SyncAggregate() *pb.SyncAggregate {
func (u *optimisticUpdateDeneb) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *OptimisticUpdateDeneb) SignatureSlot() primitives.Slot {
func (u *optimisticUpdateDeneb) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}

View File

@@ -23,11 +23,17 @@ func NewWrappedUpdate(m proto.Message) (interfaces.LightClientUpdate, error) {
return NewWrappedUpdateCapella(t)
case *pb.LightClientUpdateDeneb:
return NewWrappedUpdateDeneb(t)
case *pb.LightClientUpdateElectra:
return NewWrappedUpdateElectra(t)
default:
return nil, fmt.Errorf("cannot construct light client update from type %T", t)
}
}
// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type updateAltair struct {
p *pb.LightClientUpdateAltair
attestedHeader interfaces.LightClientHeader
@@ -42,14 +48,20 @@ func NewWrappedUpdateAltair(p *pb.LightClientUpdateAltair) (interfaces.LightClie
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderAltair(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderAltair(p.FinalizedHeader)
if err != nil {
return nil, err
var finalizedHeader interfaces.LightClientHeader
if p.FinalizedHeader != nil {
finalizedHeader, err = NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
}
scBranch, err := createBranch[interfaces.LightClientSyncCommitteeBranch](
"sync committee",
p.NextSyncCommitteeBranch,
@@ -88,6 +100,10 @@ func (u *updateAltair) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *updateAltair) Proto() proto.Message {
return u.p
}
func (u *updateAltair) Version() int {
return version.Altair
}
@@ -96,14 +112,40 @@ func (u *updateAltair) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *updateAltair) SetAttestedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderAltair)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderAltair{})
}
u.p.AttestedHeader = proto
u.attestedHeader = header
return nil
}
func (u *updateAltair) NextSyncCommittee() *pb.SyncCommittee {
return u.p.NextSyncCommittee
}
func (u *updateAltair) SetNextSyncCommittee(sc *pb.SyncCommittee) {
u.p.NextSyncCommittee = sc
}
func (u *updateAltair) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
return u.nextSyncCommitteeBranch, nil
}
func (u *updateAltair) SetNextSyncCommitteeBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientSyncCommitteeBranch]("sync committee", branch, fieldparams.SyncCommitteeBranchDepth)
if err != nil {
return err
}
u.nextSyncCommitteeBranch = b
u.p.NextSyncCommitteeBranch = branch
return nil
}
func (u *updateAltair) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
return [6][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranchElectra", version.Altair)
}
@@ -112,18 +154,53 @@ func (u *updateAltair) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *updateAltair) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *updateAltair) SetFinalizedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderAltair)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderAltair{})
}
u.p.FinalizedHeader = proto
u.finalizedHeader = header
return nil
}
func (u *updateAltair) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return u.finalityBranch, nil
}
func (u *updateAltair) SetFinalityBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientFinalityBranch]("finality", branch, fieldparams.FinalityBranchDepth)
if err != nil {
return err
}
u.finalityBranch = b
u.p.FinalityBranch = branch
return nil
}
func (u *updateAltair) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return interfaces.LightClientFinalityBranchElectra{}, consensustypes.ErrNotSupported("FinalityBranchElectra", version.Altair)
}
func (u *updateAltair) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *updateAltair) SetSyncAggregate(sa *pb.SyncAggregate) {
u.p.SyncAggregate = sa
}
func (u *updateAltair) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
func (u *updateAltair) SetSignatureSlot(slot primitives.Slot) {
u.p.SignatureSlot = slot
}
// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type updateCapella struct {
p *pb.LightClientUpdateCapella
attestedHeader interfaces.LightClientHeader
@@ -138,14 +215,20 @@ func NewWrappedUpdateCapella(p *pb.LightClientUpdateCapella) (interfaces.LightCl
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderCapella(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderCapella(p.FinalizedHeader)
if err != nil {
return nil, err
var finalizedHeader interfaces.LightClientHeader
if p.FinalizedHeader != nil {
finalizedHeader, err = NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
}
scBranch, err := createBranch[interfaces.LightClientSyncCommitteeBranch](
"sync committee",
p.NextSyncCommitteeBranch,
@@ -184,6 +267,10 @@ func (u *updateCapella) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *updateCapella) Proto() proto.Message {
return u.p
}
func (u *updateCapella) Version() int {
return version.Capella
}
@@ -192,14 +279,40 @@ func (u *updateCapella) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *updateCapella) SetAttestedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderCapella)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderCapella{})
}
u.p.AttestedHeader = proto
u.attestedHeader = header
return nil
}
func (u *updateCapella) NextSyncCommittee() *pb.SyncCommittee {
return u.p.NextSyncCommittee
}
func (u *updateCapella) SetNextSyncCommittee(sc *pb.SyncCommittee) {
u.p.NextSyncCommittee = sc
}
func (u *updateCapella) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
return u.nextSyncCommitteeBranch, nil
}
func (u *updateCapella) SetNextSyncCommitteeBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientSyncCommitteeBranch]("sync committee", branch, fieldparams.SyncCommitteeBranchDepth)
if err != nil {
return err
}
u.nextSyncCommitteeBranch = b
u.p.NextSyncCommitteeBranch = branch
return nil
}
func (u *updateCapella) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
return [6][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranchElectra", version.Capella)
}
@@ -208,18 +321,53 @@ func (u *updateCapella) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *updateCapella) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *updateCapella) SetFinalizedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderCapella)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderCapella{})
}
u.p.FinalizedHeader = proto
u.finalizedHeader = header
return nil
}
func (u *updateCapella) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return u.finalityBranch, nil
}
func (u *updateCapella) SetFinalityBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientFinalityBranch]("finality", branch, fieldparams.FinalityBranchDepth)
if err != nil {
return err
}
u.finalityBranch = b
u.p.FinalityBranch = branch
return nil
}
func (u *updateCapella) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return interfaces.LightClientFinalityBranchElectra{}, consensustypes.ErrNotSupported("FinalityBranchElectra", u.Version())
}
func (u *updateCapella) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *updateCapella) SetSyncAggregate(sa *pb.SyncAggregate) {
u.p.SyncAggregate = sa
}
func (u *updateCapella) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
func (u *updateCapella) SetSignatureSlot(slot primitives.Slot) {
u.p.SignatureSlot = slot
}
// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type updateDeneb struct {
p *pb.LightClientUpdateDeneb
attestedHeader interfaces.LightClientHeader
@@ -234,14 +382,20 @@ func NewWrappedUpdateDeneb(p *pb.LightClientUpdateDeneb) (interfaces.LightClient
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderDeneb(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderDeneb(p.FinalizedHeader)
if err != nil {
return nil, err
var finalizedHeader interfaces.LightClientHeader
if p.FinalizedHeader != nil {
finalizedHeader, err = NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
}
scBranch, err := createBranch[interfaces.LightClientSyncCommitteeBranch](
"sync committee",
p.NextSyncCommitteeBranch,
@@ -280,6 +434,10 @@ func (u *updateDeneb) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *updateDeneb) Proto() proto.Message {
return u.p
}
func (u *updateDeneb) Version() int {
return version.Deneb
}
@@ -288,14 +446,40 @@ func (u *updateDeneb) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *updateDeneb) SetAttestedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderDeneb{})
}
u.p.AttestedHeader = proto
u.attestedHeader = header
return nil
}
func (u *updateDeneb) NextSyncCommittee() *pb.SyncCommittee {
return u.p.NextSyncCommittee
}
func (u *updateDeneb) SetNextSyncCommittee(sc *pb.SyncCommittee) {
u.p.NextSyncCommittee = sc
}
func (u *updateDeneb) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
return u.nextSyncCommitteeBranch, nil
}
func (u *updateDeneb) SetNextSyncCommitteeBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientSyncCommitteeBranch]("sync committee", branch, fieldparams.SyncCommitteeBranchDepth)
if err != nil {
return err
}
u.nextSyncCommitteeBranch = b
u.p.NextSyncCommitteeBranch = branch
return nil
}
func (u *updateDeneb) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
return [6][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranchElectra", version.Deneb)
}
@@ -304,24 +488,59 @@ func (u *updateDeneb) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *updateDeneb) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *updateDeneb) SetFinalizedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderDeneb{})
}
u.p.FinalizedHeader = proto
u.finalizedHeader = header
return nil
}
func (u *updateDeneb) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return u.finalityBranch, nil
}
func (u *updateDeneb) SetFinalityBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientFinalityBranch]("finality", branch, fieldparams.FinalityBranchDepth)
if err != nil {
return err
}
u.finalityBranch = b
u.p.FinalityBranch = branch
return nil
}
func (u *updateDeneb) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return interfaces.LightClientFinalityBranchElectra{}, consensustypes.ErrNotSupported("FinalityBranchElectra", u.Version())
}
func (u *updateDeneb) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *updateDeneb) SetSyncAggregate(sa *pb.SyncAggregate) {
u.p.SyncAggregate = sa
}
func (u *updateDeneb) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
func (u *updateDeneb) SetSignatureSlot(slot primitives.Slot) {
u.p.SignatureSlot = slot
}
// In addition to the proto object being wrapped, we store some fields that have to be
// constructed from the proto, so that we don't have to reconstruct them every time
// in getters.
type updateElectra struct {
p *pb.LightClientUpdateElectra
attestedHeader interfaces.LightClientHeader
nextSyncCommitteeBranch interfaces.LightClientSyncCommitteeBranchElectra
finalizedHeader interfaces.LightClientHeader
finalityBranch interfaces.LightClientFinalityBranch
finalityBranch interfaces.LightClientFinalityBranchElectra
}
var _ interfaces.LightClientUpdate = &updateElectra{}
@@ -330,14 +549,20 @@ func NewWrappedUpdateElectra(p *pb.LightClientUpdateElectra) (interfaces.LightCl
if p == nil {
return nil, consensustypes.ErrNilObjectWrapped
}
attestedHeader, err := NewWrappedHeaderDeneb(p.AttestedHeader)
attestedHeader, err := NewWrappedHeader(p.AttestedHeader)
if err != nil {
return nil, err
}
finalizedHeader, err := NewWrappedHeaderDeneb(p.FinalizedHeader)
if err != nil {
return nil, err
var finalizedHeader interfaces.LightClientHeader
if p.FinalizedHeader != nil {
finalizedHeader, err = NewWrappedHeader(p.FinalizedHeader)
if err != nil {
return nil, err
}
}
scBranch, err := createBranch[interfaces.LightClientSyncCommitteeBranchElectra](
"sync committee",
p.NextSyncCommitteeBranch,
@@ -346,10 +571,11 @@ func NewWrappedUpdateElectra(p *pb.LightClientUpdateElectra) (interfaces.LightCl
if err != nil {
return nil, err
}
finalityBranch, err := createBranch[interfaces.LightClientFinalityBranch](
finalityBranch, err := createBranch[interfaces.LightClientFinalityBranchElectra](
"finality",
p.FinalityBranch,
fieldparams.FinalityBranchDepth,
fieldparams.FinalityBranchDepthElectra,
)
if err != nil {
return nil, err
@@ -376,6 +602,10 @@ func (u *updateElectra) SizeSSZ() int {
return u.p.SizeSSZ()
}
func (u *updateElectra) Proto() proto.Message {
return u.p
}
func (u *updateElectra) Version() int {
return version.Electra
}
@@ -384,14 +614,40 @@ func (u *updateElectra) AttestedHeader() interfaces.LightClientHeader {
return u.attestedHeader
}
func (u *updateElectra) SetAttestedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderDeneb{})
}
u.p.AttestedHeader = proto
u.attestedHeader = header
return nil
}
func (u *updateElectra) NextSyncCommittee() *pb.SyncCommittee {
return u.p.NextSyncCommittee
}
func (u *updateElectra) SetNextSyncCommittee(sc *pb.SyncCommittee) {
u.p.NextSyncCommittee = sc
}
func (u *updateElectra) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
return [5][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranch", version.Electra)
}
func (u *updateElectra) SetNextSyncCommitteeBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientSyncCommitteeBranchElectra]("sync committee", branch, fieldparams.SyncCommitteeBranchDepthElectra)
if err != nil {
return err
}
u.nextSyncCommitteeBranch = b
u.p.NextSyncCommitteeBranch = branch
return nil
}
func (u *updateElectra) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
return u.nextSyncCommitteeBranch, nil
}
@@ -400,14 +656,46 @@ func (u *updateElectra) FinalizedHeader() interfaces.LightClientHeader {
return u.finalizedHeader
}
func (u *updateElectra) FinalityBranch() interfaces.LightClientFinalityBranch {
return u.finalityBranch
func (u *updateElectra) SetFinalizedHeader(header interfaces.LightClientHeader) error {
proto, ok := header.Proto().(*pb.LightClientHeaderDeneb)
if !ok {
return fmt.Errorf("header type %T is not %T", proto, &pb.LightClientHeaderDeneb{})
}
u.p.FinalizedHeader = proto
u.finalizedHeader = header
return nil
}
func (u *updateElectra) FinalityBranch() (interfaces.LightClientFinalityBranch, error) {
return interfaces.LightClientFinalityBranch{}, consensustypes.ErrNotSupported("FinalityBranch", u.Version())
}
func (u *updateElectra) SetFinalityBranch(branch [][]byte) error {
b, err := createBranch[interfaces.LightClientFinalityBranchElectra]("finality", branch, fieldparams.FinalityBranchDepthElectra)
if err != nil {
return err
}
u.finalityBranch = b
u.p.FinalityBranch = branch
return nil
}
func (u *updateElectra) FinalityBranchElectra() (interfaces.LightClientFinalityBranchElectra, error) {
return u.finalityBranch, nil
}
func (u *updateElectra) SyncAggregate() *pb.SyncAggregate {
return u.p.SyncAggregate
}
func (u *updateElectra) SetSyncAggregate(sa *pb.SyncAggregate) {
u.p.SyncAggregate = sa
}
func (u *updateElectra) SignatureSlot() primitives.Slot {
return u.p.SignatureSlot
}
func (u *updateElectra) SetSignatureSlot(slot primitives.Slot) {
u.p.SignatureSlot = slot
}

View File

@@ -10,8 +10,12 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_pkg_errors//:go_default_library",

Some files were not shown because too many files have changed in this diff Show More