Compare commits

..

5 Commits

Author       SHA1        Message                                              Date
Sammy Rosso  3a1023d32a  Merge branch 'develop' into otel-tracing-middleware  2024-10-15 17:07:09 +02:00
Saolyn       3008c67e76  changelog                                            2024-10-15 17:06:16 +02:00
Sammy Rosso  ef8e705f96  Merge branch 'develop' into otel-tracing-middleware  2024-10-15 14:30:36 +02:00
Saolyn       85fc558f0c  gaz                                                  2024-10-14 17:02:23 +02:00
Saolyn       ae1b4ebcff  Add otel middleware to client                        2024-10-03 18:21:23 +02:00
114 changed files with 1086 additions and 3043 deletions

View File

@@ -4,76 +4,7 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...HEAD)
### Added
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Light client support: Consensus types for Electra
- Added SubmitPoolAttesterSlashingV2 endpoint.
- Added SubmitAggregateAndProofsRequestV2 endpoint.
- Updated the `beacon-chain/monitor` package to Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14562)
### Changed
- Electra EIP6110: Queue deposit requests changes from consensus spec pr #3818
- Reversed the boolean return on `BatchVerifyDepositsSignatures`: it now reports that all keys were successfully verified rather than that verification is needed.
- Fix `engine_exchangeCapabilities` implementation.
- Updated the default `scrape-interval` in `Client-stats` to 2 minutes to accommodate Beaconcha.in API rate limits.
- Switch to compounding when consolidating with source==target.
- Revert block db save when saving state fails.
- Return false from HasBlock if the block is being synced.
- Cleanup forkchoice on failed insertions.
- Use read only validator for core processing to avoid unnecessary copying.
### Deprecated
- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)
### Removed
- Removed finalized validator index cache, no longer needed.
### Fixed
- Fixed mesh size by appending `gParams.Dhi = gossipSubDhi`
- Fix skipping partial withdrawals count.
### Security
## [v5.1.2](https://github.com/prysmaticlabs/prysm/compare/v5.1.1...v5.1.2) - 2024-10-16
This is a hotfix release with one change.
Prysm v5.1.1 contains an updated implementation of the beacon api streaming events endpoint. This
new implementation contains a bug that can cause a panic in certain conditions. The issue is
difficult to reproduce reliably and we are still trying to determine the root cause, but in the
meantime we are issuing a patch that recovers from the panic to prevent the node from crashing.
This only impacts the v5.1.1 release beacon api event stream endpoints. This endpoint is used by the
prysm REST mode validator (a feature which requires the validator to be configured to use the beacon
api instead of prysm's stock grpc endpoints) or accessory software that connects to the events api,
like https://github.com/ethpandaops/ethereum-metrics-exporter
### Fixed
- Recover from panics when writing the event stream [#14545](https://github.com/prysmaticlabs/prysm/pull/14545)
## [v5.1.1](https://github.com/prysmaticlabs/prysm/compare/v5.1.0...v5.1.1) - 2024-10-15
This release has a number of features and improvements. Most notably, the feature flag
`--enable-experimental-state` has been flipped to "opt out" via `--disable-experimental-state`.
The experimental state management design has shown significant improvements in memory usage at
runtime. Updates to libp2p's gossipsub have some bandwidth stability improvements with support for
IDONTWANT control messages.
The gRPC gateway has been deprecated from Prysm in this release. If you need JSON data, consider the
standardized beacon-APIs.
Updating to this release is recommended at your convenience.
## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.1.0...HEAD)
### Added
@@ -82,18 +13,23 @@ Updating to this release is recommended at your convenience.
- Light client support: Implement `ComputeFieldRootsForBlockBody`.
- Light client support: Add light client database changes.
- Light client support: Implement capella and deneb changes.
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Light client support: Implement `BlockToLightClientHeader` function.
- Light client support: Consensus types.
- GetBeaconStateV2: add Electra case.
- Implement [consensus-specs/3875](https://github.com/ethereum/consensus-specs/pull/3875).
- Tests to ensure sepolia config matches the official upstream yaml.
- `engine_newPayloadV4`,`engine_getPayloadV4` used for electra payload communication with execution client. [pr](https://github.com/prysmaticlabs/prysm/pull/14492)
- HTTP endpoint for PublishBlobs.
- GetBlockV2, GetBlindedBlock, ProduceBlockV2, ProduceBlockV3: add Electra case.
- Add Electra support and tests for light client functions.
- fastssz version bump (better error messages).
- SSE implementation that sheds stuck clients. [pr](https://github.com/prysmaticlabs/prysm/pull/14413)
- Added GetPoolAttesterSlashingsV2 endpoint.
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Light client support: Consensus types for Electra.
- Added SubmitPoolAttesterSlashingV2 endpoint.
- Add OpenTelemetry HTTP tracing middleware.
### Changed
@@ -112,6 +48,7 @@ Updating to this release is recommended at your convenience.
- `grpc-gateway-corsdomain` is renamed to `http-cors-domain`. The old name can still be used as an alias.
- `api-timeout` is changed from int flag to duration flag, default value updated.
- Light client support: abstracted out the light client headers with different versions.
- Electra EIP6110: Queue deposit requests changes from consensus spec pr #3818
- `ApplyToEveryValidator` has been changed to prevent misuse bugs, it takes a closure that takes a `ReadOnlyValidator` and returns a raw pointer to a `Validator`.
- Removed gorilla mux library and replaced it with net/http updates in go 1.22.
- Clean up `ProposeBlock` for validator client to reduce cognitive scoring and enable further changes.
@@ -123,18 +60,19 @@ Updating to this release is recommended at your convenience.
- Updated Sepolia bootnodes.
- Make committee aware packing the default by deprecating `--enable-committee-aware-packing`.
- Moved `ConvertKzgCommitmentToVersionedHash` to the `primitives` package.
- Updated correlation penalty for EIP-7251.
- Reversed the boolean return on `BatchVerifyDepositsSignatures`: it now reports that all keys were successfully verified rather than that verification is needed.
- Fix `engine_exchangeCapabilities` implementation.
### Deprecated
- `--disable-grpc-gateway` flag is deprecated due to grpc gateway removal.
- `--enable-experimental-state` flag is deprecated. This feature is now on by default. Opt-out with `--disable-experimental-state`.
- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)
### Removed
- Removed gRPC Gateway.
- Removed unused blobs bundle cache.
- removed gRPC Gateway
- Removed unused blobs bundle cache
- Removed consolidation signing domain from params. The Electra design changed such that EL handles consolidation signature verification.
- Remove engine_getPayloadBodiesBy{Hash|Range}V2
### Fixed
@@ -153,11 +91,11 @@ Updating to this release is recommended at your convenience.
- Light client support: fix light client attested header execution fields' wrong version bug.
- Testing: added custom matcher for better push settings testing.
- Registered `GetDepositSnapshot` Beacon API endpoint.
- Fixed mesh size by appending `gParams.Dhi = gossipSubDhi`
- Fix skipping partial withdrawals count.
### Security
No notable security updates.
## [v5.1.0](https://github.com/prysmaticlabs/prysm/compare/v5.0.4...v5.1.0) - 2024-08-20
This release contains 171 new changes and many of these are related to Electra! Alongside the Electra changes, there

View File

@@ -9,7 +9,10 @@ go_library(
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client",
visibility = ["//visibility:public"],
deps = ["@com_github_pkg_errors//:go_default_library"],
deps = [
"@com_github_pkg_errors//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
],
)
go_test(

View File

@@ -8,6 +8,7 @@ import (
"net/url"
"github.com/pkg/errors"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
// Client is a wrapper object around the HTTP client.
@@ -26,7 +27,9 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
return nil, err
}
c := &Client{
hc: &http.Client{},
hc: &http.Client{
Transport: otelhttp.NewTransport(http.DefaultTransport),
},
baseURL: u,
}
for _, o := range opts {
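
Editor's aside: a minimal, hedged sketch of the pattern this hunk introduces, assuming only the otelhttp package imported in the diff; the package and function names below are illustrative and not taken from the repository. Wrapping http.DefaultTransport with otelhttp.NewTransport means every request sent through the client is recorded as a span by the globally registered OpenTelemetry tracer provider, while the default transport still performs the actual round trip.

package example

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// newTracedHTTPClient mirrors the construction shown above: the otelhttp
// transport creates a client span per outgoing request and then delegates
// to http.DefaultTransport.
func newTracedHTTPClient() *http.Client {
	return &http.Client{
		Transport: otelhttp.NewTransport(http.DefaultTransport),
	}
}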

View File

@@ -176,8 +176,7 @@ type BLSToExecutionChangesPoolResponse struct {
}
type GetAttesterSlashingsResponse struct {
Version string `json:"version,omitempty"`
Data json.RawMessage `json:"data"` // Accepts both `[]*AttesterSlashing` and `[]*AttesterSlashingElectra` types
Data []*AttesterSlashing `json:"data"`
}
type GetProposerSlashingsResponse struct {
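
Editor's aside: with Data changed to json.RawMessage, a caller is expected to pick the concrete slice type from Version before unmarshaling. The helper below is a hypothetical sketch that assumes it lives in the same package as the response structs (so it can reference AttesterSlashing and AttesterSlashingElectra, the two types named in the field comment) and uses encoding/json.

func decodeAttesterSlashings(resp *GetAttesterSlashingsResponse) (interface{}, error) {
	// Electra responses carry []*AttesterSlashingElectra; earlier forks carry
	// []*AttesterSlashing. The lowercase fork name in Version is an assumption.
	if resp.Version == "electra" {
		var slashings []*AttesterSlashingElectra
		if err := json.Unmarshal(resp.Data, &slashings); err != nil {
			return nil, err
		}
		return slashings, nil
	}
	var slashings []*AttesterSlashing
	if err := json.Unmarshal(resp.Data, &slashings); err != nil {
		return nil, err
	}
	return slashings, nil
}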

View File

@@ -15,7 +15,7 @@ type SubmitContributionAndProofsRequest struct {
}
type SubmitAggregateAndProofsRequest struct {
Data []json.RawMessage `json:"data"`
Data []*SignedAggregateAttestationAndProof `json:"data"`
}
type SubmitSyncCommitteeSubscriptionsRequest struct {

View File

@@ -216,25 +216,17 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
}
var lastValidHash []byte
var parentRoot *common.Hash
var versionedHashes []common.Hash
var requests *enginev1.ExecutionRequests
if blk.Version() >= version.Deneb {
var versionedHashes []common.Hash
versionedHashes, err = kzgCommitmentsToVersionedHashes(blk.Block().Body())
if err != nil {
return false, errors.Wrap(err, "could not get versioned hashes to feed the engine")
}
prh := common.Hash(blk.Block().ParentRoot())
parentRoot = &prh
pr := common.Hash(blk.Block().ParentRoot())
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &pr)
} else {
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, []common.Hash{}, &common.Hash{} /*empty version hashes and root before Deneb*/)
}
if blk.Version() >= version.Electra {
requests, err = blk.Block().Body().ExecutionRequests()
if err != nil {
return false, errors.Wrap(err, "could not get execution requests")
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
switch {
case err == nil:
newPayloadValidNodeCount.Inc()

View File

@@ -20,17 +20,16 @@ func Verify(sidecars ...blocks.ROBlob) error {
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs[i] = *bytesToBlob(sidecar.Blob)
blobs[i] = bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
func bytesToBlob(blob []byte) *GoKZG.Blob {
var ret GoKZG.Blob
func bytesToBlob(blob []byte) (ret GoKZG.Blob) {
copy(ret[:], blob)
return &ret
return
}
func bytesToCommitment(commitment []byte) (ret GoKZG.KZGCommitment) {

View File

@@ -47,11 +47,11 @@ func GetRandBlob(seed int64) GoKZG.Blob {
}
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
commitment, err := kzgContext.BlobToKZGCommitment(&blob, 0)
commitment, err := kzgContext.BlobToKZGCommitment(blob, 0)
if err != nil {
return GoKZG.KZGCommitment{}, GoKZG.KZGProof{}, err
}
proof, err := kzgContext.ComputeBlobKZGProof(&blob, commitment, 0)
proof, err := kzgContext.ComputeBlobKZGProof(blob, commitment, 0)
if err != nil {
return GoKZG.KZGCommitment{}, GoKZG.KZGProof{}, err
}
@@ -68,7 +68,7 @@ func TestBytesToAny(t *testing.T) {
blob := GoKZG.Blob{0x01, 0x02}
commitment := GoKZG.KZGCommitment{0x01, 0x02}
proof := GoKZG.KZGProof{0x01, 0x02}
require.DeepEqual(t, blob, *bytesToBlob(bytes))
require.DeepEqual(t, blob, bytesToBlob(bytes))
require.DeepEqual(t, commitment, bytesToCommitment(bytes))
require.DeepEqual(t, proof, bytesToKZGProof(bytes))
}

View File

@@ -404,10 +404,6 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
return errors.Wrapf(err, "could not save block from slot %d", b.Block().Slot())
}
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
log.Warnf("Rolling back insertion of block with root %#x", r)
if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
log.WithError(err).Errorf("Could not delete block with block root %#x", r)
}
return errors.Wrap(err, "could not save state")
}
return nil

View File

@@ -273,6 +273,7 @@ func (s *Service) reportPostBlockProcessing(
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
go func() {
finalizedState.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
s.sendNewFinalizedEvent(ctx, finalizedState)
}()
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
@@ -349,9 +350,6 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock
// HasBlock returns true if the block of the input root exists in initial sync blocks cache or DB.
func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
if s.BlockBeingSynced(root) {
return false
}
return s.hasBlockInInitSyncOrDB(ctx, root)
}

View File

@@ -278,8 +278,6 @@ func TestService_HasBlock(t *testing.T) {
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, s.HasBlock(context.Background(), r))
s.blockBeingSynced.set(r)
require.Equal(t, false, s.HasBlock(context.Background(), r))
}
func TestCheckSaveHotStateDB_Enabling(t *testing.T) {

View File

@@ -329,6 +329,8 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
return errors.Wrap(err, "failed to initialize blockchain service")
}
saved.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
return nil
}

View File

@@ -15,7 +15,6 @@ import (
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
// ProcessPendingConsolidations implements the spec definition below. This method makes mutating
@@ -23,28 +22,25 @@ import (
//
// Spec definition:
//
// def process_pending_consolidations(state: BeaconState) -> None:
// def process_pending_consolidations(state: BeaconState) -> None:
// next_pending_consolidation = 0
// for pending_consolidation in state.pending_consolidations:
// source_validator = state.validators[pending_consolidation.source_index]
// if source_validator.slashed:
// next_pending_consolidation += 1
// continue
// if source_validator.withdrawable_epoch > get_current_epoch(state):
// break
//
// next_epoch = Epoch(get_current_epoch(state) + 1)
// next_pending_consolidation = 0
// for pending_consolidation in state.pending_consolidations:
// source_validator = state.validators[pending_consolidation.source_index]
// if source_validator.slashed:
// # Churn any target excess active balance of target and raise its max
// switch_to_compounding_validator(state, pending_consolidation.target_index)
// # Move active balance to target. Excess balance is withdrawable.
// active_balance = get_active_balance(state, pending_consolidation.source_index)
// decrease_balance(state, pending_consolidation.source_index, active_balance)
// increase_balance(state, pending_consolidation.target_index, active_balance)
// next_pending_consolidation += 1
// continue
// if source_validator.withdrawable_epoch > next_epoch:
// break
//
// # Calculate the consolidated balance
// max_effective_balance = get_max_effective_balance(source_validator)
// source_effective_balance = min(state.balances[pending_consolidation.source_index], max_effective_balance)
//
// # Move active balance to target. Excess balance is withdrawable.
// decrease_balance(state, pending_consolidation.source_index, source_effective_balance)
// increase_balance(state, pending_consolidation.target_index, source_effective_balance)
// next_pending_consolidation += 1
//
// state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
// state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) error {
_, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
defer span.End()
@@ -55,34 +51,37 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
nextEpoch := slots.ToEpoch(st.Slot()) + 1
var nextPendingConsolidation uint64
pendingConsolidations, err := st.PendingConsolidations()
if err != nil {
return err
}
var nextPendingConsolidation uint64
for _, pc := range pendingConsolidations {
sourceValidator, err := st.ValidatorAtIndexReadOnly(pc.SourceIndex)
sourceValidator, err := st.ValidatorAtIndex(pc.SourceIndex)
if err != nil {
return err
}
if sourceValidator.Slashed() {
if sourceValidator.Slashed {
nextPendingConsolidation++
continue
}
if sourceValidator.WithdrawableEpoch() > nextEpoch {
if sourceValidator.WithdrawableEpoch > nextEpoch {
break
}
validatorBalance, err := st.BalanceAtIndex(pc.SourceIndex)
if err := SwitchToCompoundingValidator(st, pc.TargetIndex); err != nil {
return err
}
activeBalance, err := st.ActiveBalanceAtIndex(pc.SourceIndex)
if err != nil {
return err
}
b := min(validatorBalance, helpers.ValidatorMaxEffectiveBalance(sourceValidator))
if err := helpers.DecreaseBalance(st, pc.SourceIndex, b); err != nil {
if err := helpers.DecreaseBalance(st, pc.SourceIndex, activeBalance); err != nil {
return err
}
if err := helpers.IncreaseBalance(st, pc.TargetIndex, b); err != nil {
if err := helpers.IncreaseBalance(st, pc.TargetIndex, activeBalance); err != nil {
return err
}
nextPendingConsolidation++
@@ -102,16 +101,6 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// state: BeaconState,
// consolidation_request: ConsolidationRequest
// ) -> None:
// if is_valid_switch_to_compounding_request(state, consolidation_request):
// validator_pubkeys = [v.pubkey for v in state.validators]
// request_source_pubkey = consolidation_request.source_pubkey
// source_index = ValidatorIndex(validator_pubkeys.index(request_source_pubkey))
// switch_to_compounding_validator(state, source_index)
// return
//
// # Verify that source != target, so a consolidation cannot be used as an exit.
// if consolidation_request.source_pubkey == consolidation_request.target_pubkey:
// return
// # If the pending consolidations queue is full, consolidation requests are ignored
// if len(state.pending_consolidations) == PENDING_CONSOLIDATIONS_LIMIT:
// return
@@ -132,6 +121,10 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// source_validator = state.validators[source_index]
// target_validator = state.validators[target_index]
//
// # Verify that source != target, so a consolidation cannot be used as an exit.
// if source_index == target_index:
// return
//
// # Verify source withdrawal credentials
// has_correct_credential = has_execution_withdrawal_credential(source_validator)
// is_correct_source_address = (
@@ -167,14 +160,19 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// source_index=source_index,
// target_index=target_index
// ))
//
// # Churn any target excess active balance of target and raise its max
// if has_eth1_withdrawal_credential(target_validator):
// switch_to_compounding_validator(state, target_index)
func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, reqs []*enginev1.ConsolidationRequest) error {
if len(reqs) == 0 || st == nil {
return nil
}
activeBal, err := helpers.TotalActiveBalance(st)
if err != nil {
return err
}
churnLimit := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
if churnLimit <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
return nil
}
curEpoch := slots.ToEpoch(st.Slot())
ffe := params.BeaconConfig().FarFutureEpoch
minValWithdrawDelay := params.BeaconConfig().MinValidatorWithdrawabilityDelay
@@ -184,47 +182,25 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if ctx.Err() != nil {
return fmt.Errorf("cannot process consolidation requests: %w", ctx.Err())
}
if IsValidSwitchToCompoundingRequest(st, cr) {
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
if !ok {
log.Error("failed to find source validator index")
continue
}
if err := SwitchToCompoundingValidator(st, srcIdx); err != nil {
log.WithError(err).Error("failed to switch to compounding validator")
}
continue
}
sourcePubkey := bytesutil.ToBytes48(cr.SourcePubkey)
targetPubkey := bytesutil.ToBytes48(cr.TargetPubkey)
if sourcePubkey == targetPubkey {
continue
}
if npc, err := st.NumPendingConsolidations(); err != nil {
return fmt.Errorf("failed to fetch number of pending consolidations: %w", err) // This should never happen.
} else if npc >= pcLimit {
return nil
}
activeBal, err := helpers.TotalActiveBalance(st)
if err != nil {
return err
}
churnLimit := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
if churnLimit <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
return nil
}
srcIdx, ok := st.ValidatorIndexByPubkey(sourcePubkey)
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
if !ok {
continue
}
tgtIdx, ok := st.ValidatorIndexByPubkey(targetPubkey)
tgtIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.TargetPubkey))
if !ok {
continue
}
if srcIdx == tgtIdx {
continue
}
srcV, err := st.ValidatorAtIndex(srcIdx)
if err != nil {
return fmt.Errorf("failed to fetch source validator: %w", err) // This should never happen.
@@ -261,8 +237,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {
log.WithError(err).Error("failed to compute consolidation epoch")
continue
return fmt.Errorf("failed to compute consolidation epoch: %w", err)
}
srcV.ExitEpoch = exitEpoch
srcV.WithdrawableEpoch = exitEpoch + minValWithdrawDelay
@@ -273,95 +248,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if err := st.AppendPendingConsolidation(&eth.PendingConsolidation{SourceIndex: srcIdx, TargetIndex: tgtIdx}); err != nil {
return fmt.Errorf("failed to append pending consolidation: %w", err) // This should never happen.
}
if helpers.HasETH1WithdrawalCredential(tgtV) {
if err := SwitchToCompoundingValidator(st, tgtIdx); err != nil {
log.WithError(err).Error("failed to switch to compounding validator")
continue
}
}
}
return nil
}
// IsValidSwitchToCompoundingRequest returns true if the given consolidation request is valid for switching to compounding.
//
// Spec code:
//
// def is_valid_switch_to_compounding_request(
//
// state: BeaconState,
// consolidation_request: ConsolidationRequest
//
// ) -> bool:
//
// # Switch to compounding requires source and target be equal
// if consolidation_request.source_pubkey != consolidation_request.target_pubkey:
// return False
//
// # Verify pubkey exists
// source_pubkey = consolidation_request.source_pubkey
// validator_pubkeys = [v.pubkey for v in state.validators]
// if source_pubkey not in validator_pubkeys:
// return False
//
// source_validator = state.validators[ValidatorIndex(validator_pubkeys.index(source_pubkey))]
//
// # Verify request has been authorized
// if source_validator.withdrawal_credentials[12:] != consolidation_request.source_address:
// return False
//
// # Verify source withdrawal credentials
// if not has_eth1_withdrawal_credential(source_validator):
// return False
//
// # Verify the source is active
// current_epoch = get_current_epoch(state)
// if not is_active_validator(source_validator, current_epoch):
// return False
//
// # Verify exit for source has not been initiated
// if source_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return False
//
// return True
func IsValidSwitchToCompoundingRequest(st state.BeaconState, req *enginev1.ConsolidationRequest) bool {
if req.SourcePubkey == nil || req.TargetPubkey == nil {
return false
}
if !bytes.Equal(req.SourcePubkey, req.TargetPubkey) {
return false
}
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(req.SourcePubkey))
if !ok {
return false
}
// As per the consensus specification, this error is not considered an assertion.
// Therefore, if the source_pubkey is not found in validator_pubkeys, we simply return false.
srcV, err := st.ValidatorAtIndexReadOnly(srcIdx)
if err != nil {
return false
}
sourceAddress := req.SourceAddress
withdrawalCreds := srcV.GetWithdrawalCredentials()
if len(withdrawalCreds) != 32 || len(sourceAddress) != 20 || !bytes.HasSuffix(withdrawalCreds, sourceAddress) {
return false
}
if !helpers.HasETH1WithdrawalCredential(srcV) {
return false
}
curEpoch := slots.ToEpoch(st.Slot())
if !helpers.IsActiveValidatorUsingTrie(srcV, curEpoch) {
return false
}
if srcV.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
return false
}
return true
}

View File

@@ -13,7 +13,6 @@ import (
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingConsolidations(t *testing.T) {
@@ -81,10 +80,10 @@ func TestProcessPendingConsolidations(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(0), num)
// v1 withdrawal credentials should not be updated.
// v1 is switched to compounding validator.
v1, err := st.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().ETH1AddressWithdrawalPrefixByte, v1.WithdrawalCredentials[0])
require.Equal(t, params.BeaconConfig().CompoundingWithdrawalPrefixByte, v1.WithdrawalCredentials[0])
},
wantErr: false,
},
@@ -397,87 +396,3 @@ func TestProcessConsolidationRequests(t *testing.T) {
})
}
}
func TestIsValidSwitchToCompoundingRequest(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
t.Run("nil source pubkey", func(t *testing.T) {
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
SourcePubkey: nil,
TargetPubkey: []byte{'a'},
})
require.Equal(t, false, ok)
})
t.Run("nil target pubkey", func(t *testing.T) {
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
TargetPubkey: nil,
SourcePubkey: []byte{'a'},
})
require.Equal(t, false, ok)
})
t.Run("different source and target pubkey", func(t *testing.T) {
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
TargetPubkey: []byte{'a'},
SourcePubkey: []byte{'b'},
})
require.Equal(t, false, ok)
})
t.Run("source validator not found in state", func(t *testing.T) {
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
SourceAddress: make([]byte, 20),
TargetPubkey: []byte{'a'},
SourcePubkey: []byte{'a'},
})
require.Equal(t, false, ok)
})
t.Run("incorrect source address", func(t *testing.T) {
v, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
pubkey := v.PublicKey
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
SourceAddress: make([]byte, 20),
TargetPubkey: pubkey,
SourcePubkey: pubkey,
})
require.Equal(t, false, ok)
})
t.Run("incorrect eth1 withdrawal credential", func(t *testing.T) {
v, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
pubkey := v.PublicKey
wc := v.WithdrawalCredentials
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
SourceAddress: wc[12:],
TargetPubkey: pubkey,
SourcePubkey: pubkey,
})
require.Equal(t, false, ok)
})
t.Run("is valid compounding request", func(t *testing.T) {
v, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
pubkey := v.PublicKey
wc := v.WithdrawalCredentials
v.WithdrawalCredentials[0] = 1
require.NoError(t, st.UpdateValidatorAtIndex(0, v))
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
SourceAddress: wc[12:],
TargetPubkey: pubkey,
SourcePubkey: pubkey,
})
require.Equal(t, true, ok)
})
t.Run("already has an exit epoch", func(t *testing.T) {
v, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
pubkey := v.PublicKey
wc := v.WithdrawalCredentials
v.ExitEpoch = 100
require.NoError(t, st.UpdateValidatorAtIndex(0, v))
ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
SourceAddress: wc[12:],
TargetPubkey: pubkey,
SourcePubkey: pubkey,
})
require.Equal(t, false, ok)
})
}

View File

@@ -8,7 +8,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/contracts/deposit"
@@ -475,10 +474,7 @@ func ApplyPendingDeposit(ctx context.Context, st state.BeaconState, deposit *eth
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.inactivity_scores, index, uint64(0))
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val, err := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err != nil {
return errors.Wrap(err, "could not get validator from deposit")
}
val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
@@ -520,7 +516,7 @@ func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdr
// validator.effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, max_effective_balance)
//
// return validator
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) (*ethpb.Validator, error) {
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
validator := &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
@@ -530,13 +526,9 @@ func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 0,
}
v, err := state_native.NewValidator(validator)
if err != nil {
return nil, err
}
maxEffectiveBalance := helpers.ValidatorMaxEffectiveBalance(v)
maxEffectiveBalance := helpers.ValidatorMaxEffectiveBalance(validator)
validator.EffectiveBalance = min(amount-(amount%params.BeaconConfig().EffectiveBalanceIncrement), maxEffectiveBalance)
return validator, nil
return validator
}
// ProcessDepositRequests is a function as part of electra to process execution layer deposits

View File

@@ -177,6 +177,7 @@ func TestProcessPendingDeposits(t *testing.T) {
dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
dep.Signature = common.InfiniteSignature[:]
require.NoError(t, st.SetValidators(validators))
st.SaveValidatorIndices()
require.NoError(t, st.SetPendingDeposits([]*eth.PendingDeposit{dep}))
return st
}(),
@@ -507,6 +508,7 @@ func stateWithPendingDeposits(t *testing.T, balETH uint64, numDeposits, amount u
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, amount, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetValidators(validators))
st.SaveValidatorIndices()
require.NoError(t, st.SetPendingDeposits(deps))
return st
}
@@ -525,6 +527,7 @@ func TestApplyPendingDeposit_TopUp(t *testing.T) {
dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
dep.Signature = common.InfiniteSignature[:]
require.NoError(t, st.SetValidators(validators))
st.SaveValidatorIndices()
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))

View File

@@ -3,6 +3,7 @@ package electra
import (
"errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -17,8 +18,9 @@ import (
// def switch_to_compounding_validator(state: BeaconState, index: ValidatorIndex) -> None:
//
// validator = state.validators[index]
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
// if has_eth1_withdrawal_credential(validator):
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
v, err := s.ValidatorAtIndex(idx)
if err != nil {
@@ -27,12 +29,14 @@ func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorI
if len(v.WithdrawalCredentials) == 0 {
return errors.New("validator has no withdrawal credentials")
}
v.WithdrawalCredentials[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
if helpers.HasETH1WithdrawalCredential(v) {
v.WithdrawalCredentials[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return QueueExcessActiveBalance(s, idx)
}
return QueueExcessActiveBalance(s, idx)
return nil
}
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending deposit.
@@ -64,14 +68,13 @@ func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex
return err
}
excessBalance := bal - params.BeaconConfig().MinActivationBalance
val, err := s.ValidatorAtIndexReadOnly(idx)
val, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
}
pk := val.PublicKey()
return s.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: pk[:],
WithdrawalCredentials: val.GetWithdrawalCredentials(),
PublicKey: val.PublicKey,
WithdrawalCredentials: val.WithdrawalCredentials,
Amount: excessBalance,
Signature: common.InfiniteSignature[:],
Slot: params.BeaconConfig().GenesisSlot,

View File

@@ -111,31 +111,30 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
log.Debugf("Skipping execution layer withdrawal request, validator index for %s not found\n", hexutil.Encode(wr.ValidatorPubkey))
continue
}
validator, err := st.ValidatorAtIndexReadOnly(vIdx)
validator, err := st.ValidatorAtIndex(vIdx)
if err != nil {
return nil, err
}
// Verify withdrawal credentials
hasCorrectCredential := helpers.HasExecutionWithdrawalCredentials(validator)
wc := validator.GetWithdrawalCredentials()
isCorrectSourceAddress := bytes.Equal(wc[12:], wr.SourceAddress)
isCorrectSourceAddress := bytes.Equal(validator.WithdrawalCredentials[12:], wr.SourceAddress)
if !hasCorrectCredential || !isCorrectSourceAddress {
log.Debugln("Skipping execution layer withdrawal request, wrong withdrawal credentials")
continue
}
// Verify the validator is active.
if !helpers.IsActiveValidatorUsingTrie(validator, currentEpoch) {
if !helpers.IsActiveValidator(validator, currentEpoch) {
log.Debugln("Skipping execution layer withdrawal request, validator not active")
continue
}
// Verify the validator has not yet submitted an exit.
if validator.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
log.Debugln("Skipping execution layer withdrawal request, validator has submitted an exit already")
continue
}
// Verify the validator has been active long enough.
if currentEpoch < validator.ActivationEpoch().AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
if currentEpoch < validator.ActivationEpoch.AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
log.Debugln("Skipping execution layer withdrawal request, validator has not been active long enough")
continue
}
@@ -157,7 +156,7 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
continue
}
hasSufficientEffectiveBalance := validator.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
hasSufficientEffectiveBalance := validator.EffectiveBalance >= params.BeaconConfig().MinActivationBalance
vBal, err := st.BalanceAtIndex(vIdx)
if err != nil {
return nil, err

View File

@@ -147,17 +147,11 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
// epoch = get_current_epoch(state)
// total_balance = get_total_active_balance(state)
// adjusted_total_slashing_balance = min(sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER, total_balance)
// if state.version == electra:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from total balance to avoid uint64 overflow
// penalty_per_effective_balance_increment = adjusted_total_slashing_balance // (total_balance // increment)
// for index, validator in enumerate(state.validators):
// if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from penalty numerator to avoid uint64 overflow
// penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
// penalty = penalty_numerator // total_balance * increment
// if state.version == electra:
// effective_balance_increments = validator.effective_balance // increment
// penalty = penalty_per_effective_balance_increment * effective_balance_increments
// decrease_balance(state, ValidatorIndex(index), penalty)
func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(st)
@@ -183,26 +177,13 @@ func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.Be
// below equally.
increment := params.BeaconConfig().EffectiveBalanceIncrement
minSlashing := math.Min(totalSlashing*slashingMultiplier, totalBalance)
// Modified in Electra:EIP7251
var penaltyPerEffectiveBalanceIncrement uint64
if st.Version() >= version.Electra {
penaltyPerEffectiveBalanceIncrement = minSlashing / (totalBalance / increment)
}
bals := st.Balances()
changed := false
err = st.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
correctEpoch := (currentEpoch + exitLength/2) == val.WithdrawableEpoch()
if val.Slashed() && correctEpoch {
var penalty uint64
if st.Version() >= version.Electra {
effectiveBalanceIncrements := val.EffectiveBalance() / increment
penalty = penaltyPerEffectiveBalanceIncrement * effectiveBalanceIncrements
} else {
penaltyNumerator := val.EffectiveBalance() / increment * minSlashing
penalty = penaltyNumerator / totalBalance * increment
}
penaltyNumerator := val.EffectiveBalance() / increment * minSlashing
penalty := penaltyNumerator / totalBalance * increment
bals[idx] = helpers.DecreaseBalanceWithVal(bals[idx], penalty)
changed = true
}
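
Editor's aside: the two penalty formulas that appear in this hunk, written out side by side as a hedged sketch; function names are illustrative. Per the spec comments in the hunk, the EFFECTIVE_BALANCE_INCREMENT is factored out of the total balance in the Electra variant to avoid uint64 overflow.

// penaltyPreElectra: penalty_numerator = effective_balance // increment * adjusted_total_slashing_balance
//                    penalty           = penalty_numerator // total_balance * increment
func penaltyPreElectra(effectiveBalance, increment, adjustedTotalSlashing, totalBalance uint64) uint64 {
	penaltyNumerator := effectiveBalance / increment * adjustedTotalSlashing
	return penaltyNumerator / totalBalance * increment
}

// penaltyElectra: penalty_per_increment = adjusted_total_slashing_balance // (total_balance // increment)
//                 penalty               = penalty_per_increment * (effective_balance // increment)
func penaltyElectra(effectiveBalance, increment, adjustedTotalSlashing, totalBalance uint64) uint64 {
	penaltyPerIncrement := adjustedTotalSlashing / (totalBalance / increment)
	return penaltyPerIncrement * (effectiveBalance / increment)
}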

View File

@@ -448,75 +448,3 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
})
}
}
func TestProcessSlashings_SlashedElectra(t *testing.T) {
tests := []struct {
state *ethpb.BeaconStateElectra
want uint64
}{
{
state: &ethpb.BeaconStateElectra{
Validators: []*ethpb.Validator{
{Slashed: true,
WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance}},
Balances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
Slashings: []uint64{0, 1e9},
},
want: uint64(29000000000),
},
{
state: &ethpb.BeaconStateElectra{
Validators: []*ethpb.Validator{
{Slashed: true,
WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
},
Balances: []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
Slashings: []uint64{0, 1e9},
},
want: uint64(30500000000),
},
{
state: &ethpb.BeaconStateElectra{
Validators: []*ethpb.Validator{
{Slashed: true,
WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
},
Balances: []uint64{params.BeaconConfig().MaxEffectiveBalance * 10, params.BeaconConfig().MaxEffectiveBalance * 20},
Slashings: []uint64{0, 2 * 1e9},
},
want: uint64(317000001536),
},
{
state: &ethpb.BeaconStateElectra{
Validators: []*ethpb.Validator{
{Slashed: true,
WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement},
{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement}},
Balances: []uint64{params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement},
Slashings: []uint64{0, 1e9},
},
want: uint64(2044000000727),
},
}
for i, tt := range tests {
t.Run(fmt.Sprint(i), func(t *testing.T) {
original := proto.Clone(tt.state)
s, err := state_native.InitializeFromProtoElectra(tt.state)
require.NoError(t, err)
helpers.ClearCache()
newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplierBellatrix)
require.NoError(t, err)
assert.Equal(t, tt.want, newState.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, newState.Balances()[0])
})
}
}

View File

@@ -63,7 +63,6 @@ go_test(
"validators_test.go",
"weak_subjectivity_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
shard_count = 2,
tags = ["CI_race_detection"],

View File

@@ -69,16 +69,15 @@ func IsNextPeriodSyncCommittee(
}
indices, err := syncCommitteeCache.NextPeriodIndexPosition(root, valIdx)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndexReadOnly(valIdx)
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return false, err
}
pk := val.PublicKey()
committee, err := st.NextSyncCommittee()
if err != nil {
return false, err
}
return len(findSubCommitteeIndices(pk[:], committee.Pubkeys)) > 0, nil
return len(findSubCommitteeIndices(val.PublicKey, committee.Pubkeys)) > 0, nil
}
if err != nil {
return false, err
@@ -97,11 +96,10 @@ func CurrentPeriodSyncSubcommitteeIndices(
}
indices, err := syncCommitteeCache.CurrentPeriodIndexPosition(root, valIdx)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndexReadOnly(valIdx)
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return nil, err
}
pk := val.PublicKey()
committee, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -114,7 +112,7 @@ func CurrentPeriodSyncSubcommitteeIndices(
}
}()
return findSubCommitteeIndices(pk[:], committee.Pubkeys), nil
return findSubCommitteeIndices(val.PublicKey, committee.Pubkeys), nil
}
if err != nil {
return nil, err
@@ -132,16 +130,15 @@ func NextPeriodSyncSubcommitteeIndices(
}
indices, err := syncCommitteeCache.NextPeriodIndexPosition(root, valIdx)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
val, err := st.ValidatorAtIndexReadOnly(valIdx)
val, err := st.ValidatorAtIndex(valIdx)
if err != nil {
return nil, err
}
pk := val.PublicKey()
committee, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
return findSubCommitteeIndices(pk[:], committee.Pubkeys), nil
return findSubCommitteeIndices(val.PublicKey, committee.Pubkeys), nil
}
if err != nil {
return nil, err

View File

@@ -584,23 +584,23 @@ func IsSameWithdrawalCredentials(a, b *ethpb.Validator) bool {
// and validator.withdrawable_epoch <= epoch
// and balance > 0
// )
func IsFullyWithdrawableValidator(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch, fork int) bool {
func IsFullyWithdrawableValidator(val *ethpb.Validator, balance uint64, epoch primitives.Epoch, fork int) bool {
if val == nil || balance <= 0 {
return false
}
// Electra / EIP-7251 logic
if fork >= version.Electra {
return HasExecutionWithdrawalCredentials(val) && val.WithdrawableEpoch() <= epoch
return HasExecutionWithdrawalCredentials(val) && val.WithdrawableEpoch <= epoch
}
return HasETH1WithdrawalCredential(val) && val.WithdrawableEpoch() <= epoch
return HasETH1WithdrawalCredential(val) && val.WithdrawableEpoch <= epoch
}
// IsPartiallyWithdrawableValidator returns whether the validator is able to perform a
// partial withdrawal. This function assumes that the caller has a lock on the state.
// This method conditionally calls the fork appropriate implementation based on the epoch argument.
func IsPartiallyWithdrawableValidator(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch, fork int) bool {
func IsPartiallyWithdrawableValidator(val *ethpb.Validator, balance uint64, epoch primitives.Epoch, fork int) bool {
if val == nil {
return false
}
@@ -630,9 +630,9 @@ func IsPartiallyWithdrawableValidator(val state.ReadOnlyValidator, balance uint6
// and has_max_effective_balance
// and has_excess_balance
// )
func isPartiallyWithdrawableValidatorElectra(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch) bool {
func isPartiallyWithdrawableValidatorElectra(val *ethpb.Validator, balance uint64, epoch primitives.Epoch) bool {
maxEB := ValidatorMaxEffectiveBalance(val)
hasMaxBalance := val.EffectiveBalance() == maxEB
hasMaxBalance := val.EffectiveBalance == maxEB
hasExcessBalance := balance > maxEB
return HasExecutionWithdrawalCredentials(val) &&
@@ -652,8 +652,8 @@ func isPartiallyWithdrawableValidatorElectra(val state.ReadOnlyValidator, balanc
// has_max_effective_balance = validator.effective_balance == MAX_EFFECTIVE_BALANCE
// has_excess_balance = balance > MAX_EFFECTIVE_BALANCE
// return has_eth1_withdrawal_credential(validator) and has_max_effective_balance and has_excess_balance
func isPartiallyWithdrawableValidatorCapella(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch) bool {
hasMaxBalance := val.EffectiveBalance() == params.BeaconConfig().MaxEffectiveBalance
func isPartiallyWithdrawableValidatorCapella(val *ethpb.Validator, balance uint64, epoch primitives.Epoch) bool {
hasMaxBalance := val.EffectiveBalance == params.BeaconConfig().MaxEffectiveBalance
hasExcessBalance := balance > params.BeaconConfig().MaxEffectiveBalance
return HasETH1WithdrawalCredential(val) && hasExcessBalance && hasMaxBalance
}
@@ -670,7 +670,7 @@ func isPartiallyWithdrawableValidatorCapella(val state.ReadOnlyValidator, balanc
// return MAX_EFFECTIVE_BALANCE_ELECTRA
// else:
// return MIN_ACTIVATION_BALANCE
func ValidatorMaxEffectiveBalance(val state.ReadOnlyValidator) uint64 {
func ValidatorMaxEffectiveBalance(val *ethpb.Validator) uint64 {
if HasCompoundingWithdrawalCredential(val) {
return params.BeaconConfig().MaxEffectiveBalanceElectra
}

View File

@@ -974,6 +974,13 @@ func TestIsFullyWithdrawableValidator(t *testing.T) {
fork int
want bool
}{
{
name: "Handles nil case",
validator: nil,
balance: 0,
epoch: 0,
want: false,
},
{
name: "No ETH1 prefix",
validator: &ethpb.Validator{
@@ -1029,9 +1036,7 @@ func TestIsFullyWithdrawableValidator(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
v, err := state_native.NewValidator(tt.validator)
require.NoError(t, err)
assert.Equal(t, tt.want, helpers.IsFullyWithdrawableValidator(v, tt.balance, tt.epoch, tt.fork))
assert.Equal(t, tt.want, helpers.IsFullyWithdrawableValidator(tt.validator, tt.balance, tt.epoch, tt.fork))
})
}
}
@@ -1045,6 +1050,13 @@ func TestIsPartiallyWithdrawableValidator(t *testing.T) {
fork int
want bool
}{
{
name: "Handles nil case",
validator: nil,
balance: 0,
epoch: 0,
want: false,
},
{
name: "No ETH1 prefix",
validator: &ethpb.Validator{
@@ -1101,9 +1113,7 @@ func TestIsPartiallyWithdrawableValidator(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
v, err := state_native.NewValidator(tt.validator)
require.NoError(t, err)
assert.Equal(t, tt.want, helpers.IsPartiallyWithdrawableValidator(v, tt.balance, tt.epoch, tt.fork))
assert.Equal(t, tt.want, helpers.IsPartiallyWithdrawableValidator(tt.validator, tt.balance, tt.epoch, tt.fork))
})
}
}
@@ -1157,12 +1167,15 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
validator: &ethpb.Validator{WithdrawalCredentials: []byte{params.BeaconConfig().ETH1AddressWithdrawalPrefixByte, 0xCC}},
want: params.BeaconConfig().MinActivationBalance,
},
{
"Handles nil case",
nil,
params.BeaconConfig().MinActivationBalance,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
v, err := state_native.NewValidator(tt.validator)
require.NoError(t, err)
assert.Equal(t, tt.want, helpers.ValidatorMaxEffectiveBalance(v))
assert.Equal(t, tt.want, helpers.ValidatorMaxEffectiveBalance(tt.validator))
})
}
// Sanity check that MinActivationBalance equals (pre-electra) MaxEffectiveBalance

View File

@@ -216,19 +216,16 @@ func TestService_BlockNumberByTimestamp(t *testing.T) {
require.NoError(t, err)
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
wantedTime := time.Now().Unix() + 100*10
for i := 0; i < 200; i++ {
// Throw away error in case adjustment wasn't successful.
err = testAcc.Backend.AdjustTime(10 * time.Second)
_ = err
testAcc.Backend.Commit()
}
ctx := context.Background()
hd, err := testAcc.Backend.HeaderByNumber(ctx, nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockTime = hd.Time
web3Service.latestEth1Data.BlockHeight = hd.Number.Uint64()
blk, err := web3Service.BlockByTimestamp(ctx, uint64(wantedTime) /* time */)
blk, err := web3Service.BlockByTimestamp(ctx, 1000 /* time */)
require.NoError(t, err)
if blk.Number.Cmp(big.NewInt(0)) == 0 {
t.Error("Returned a block with zero number, expected to be non zero")

View File

@@ -44,6 +44,8 @@ var (
GetPayloadMethodV4,
GetPayloadBodiesByHashV1,
GetPayloadBodiesByRangeV1,
GetPayloadBodiesByHashV2,
GetPayloadBodiesByRangeV2,
}
)
@@ -75,8 +77,12 @@ const (
BlockByNumberMethod = "eth_getBlockByNumber"
// GetPayloadBodiesByHashV1 is the engine_getPayloadBodiesByHashX JSON-RPC method for pre-Electra payloads.
GetPayloadBodiesByHashV1 = "engine_getPayloadBodiesByHashV1"
// GetPayloadBodiesByHashV2 is the engine_getPayloadBodiesByHashX JSON-RPC method introduced by Electra.
GetPayloadBodiesByHashV2 = "engine_getPayloadBodiesByHashV2"
// GetPayloadBodiesByRangeV1 is the engine_getPayloadBodiesByRangeX JSON-RPC method for pre-Electra payloads.
GetPayloadBodiesByRangeV1 = "engine_getPayloadBodiesByRangeV1"
// GetPayloadBodiesByRangeV2 is the engine_getPayloadBodiesByRangeX JSON-RPC method introduced by Electra.
GetPayloadBodiesByRangeV2 = "engine_getPayloadBodiesByRangeV2"
// ExchangeCapabilities request string for JSON-RPC.
ExchangeCapabilities = "engine_exchangeCapabilities"
// Defines the seconds before timing out engine endpoints with non-block execution semantics.
@@ -108,7 +114,7 @@ type PayloadReconstructor interface {
// EngineCaller defines a client that can interact with an Ethereum
// execution node's engine service via JSON-RPC.
type EngineCaller interface {
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error)
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash) ([]byte, error)
ForkchoiceUpdated(
ctx context.Context, state *pb.ForkchoiceState, attrs payloadattribute.Attributer,
) (*pb.PayloadIDBytes, []byte, error)
@@ -119,8 +125,8 @@ type EngineCaller interface {
var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
// NewPayload request calls the engine_newPayloadVX method via JSON-RPC.
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error) {
// NewPayload calls the engine_newPayloadVX method via JSON-RPC.
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
defer span.End()
start := time.Now()
@@ -157,20 +163,9 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
if !ok {
return nil, errors.New("execution data must be a Deneb execution payload")
}
if executionRequests == nil {
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb, versionedHashes, parentBlockRoot)
if err != nil {
return nil, handleRPCError(err)
}
} else {
flattenedRequests, err := pb.EncodeExecutionRequests(executionRequests)
if err != nil {
return nil, errors.Wrap(err, "failed to encode execution requests")
}
err = s.rpcClient.CallContext(ctx, result, NewPayloadMethodV4, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
if err != nil {
return nil, handleRPCError(err)
}
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb, versionedHashes, parentBlockRoot)
if err != nil {
return nil, handleRPCError(err)
}
default:
return nil, errors.New("unknown execution data type")
@@ -264,9 +259,6 @@ func (s *Service) ForkchoiceUpdated(
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
pe := slots.ToEpoch(slot)
if pe >= params.BeaconConfig().ElectraForkEpoch {
return GetPayloadMethodV4, &pb.ExecutionBundleElectra{}
}
if pe >= params.BeaconConfig().DenebForkEpoch {
return GetPayloadMethodV3, &pb.ExecutionPayloadDenebWithValueAndBlobsBundle{}
}

View File

@@ -123,7 +123,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayload(req)
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
@@ -134,7 +134,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(req)
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
@@ -163,7 +163,7 @@ func TestClient_HTTP(t *testing.T) {
cfg := params.BeaconConfig().Copy()
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.ElectraForkEpoch = 2
params.OverrideBeaconConfig(cfg)
t.Run(GetPayloadMethod, func(t *testing.T) {
@@ -320,89 +320,6 @@ func TestClient_HTTP(t *testing.T) {
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
})
t.Run(GetPayloadMethodV4, func(t *testing.T) {
payloadId := [8]byte{1}
want, ok := fix["ExecutionBundleElectra"].(*pb.GetPayloadV4ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := io.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
reqArg, err := json.Marshal(pb.PayloadIDBytes(payloadId))
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(reqArg),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
client := &Service{}
client.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.GetPayload(ctx, payloadId, 3*params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
require.Equal(t, true, resp.OverrideBuilder)
g, err := resp.ExecutionData.ExcessBlobGas()
require.NoError(t, err)
require.DeepEqual(t, uint64(3), g)
g, err = resp.ExecutionData.BlobGasUsed()
require.NoError(t, err)
require.DeepEqual(t, uint64(2), g)
commitments := [][]byte{bytesutil.PadTo([]byte("commitment1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("commitment2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, commitments, resp.BlobsBundle.KzgCommitments)
proofs := [][]byte{bytesutil.PadTo([]byte("proof1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("proof2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, proofs, resp.BlobsBundle.Proofs)
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
Index: 0,
},
},
Withdrawals: []*pb.WithdrawalRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
Amount: params.BeaconConfig().MinActivationBalance,
},
},
Consolidations: []*pb.ConsolidationRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
},
},
}
require.DeepEqual(t, requests, resp.ExecutionRequests)
})
t.Run(ForkchoiceUpdatedMethod+" VALID status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
@@ -553,7 +470,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -567,7 +484,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -581,46 +498,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
t.Run(NewPayloadMethodV4+" VALID status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
require.Equal(t, true, ok)
want, ok := fix["ValidPayloadStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
Index: 0,
},
},
Withdrawals: []*pb.WithdrawalRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
Amount: params.BeaconConfig().MinActivationBalance,
},
},
Consolidations: []*pb.ConsolidationRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
},
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.NoError(t, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -634,7 +512,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -648,7 +526,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -662,46 +540,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(NewPayloadMethodV4+" SYNCING status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
require.Equal(t, true, ok)
want, ok := fix["SyncingStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
Index: 0,
},
},
Withdrawals: []*pb.WithdrawalRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
Amount: params.BeaconConfig().MinActivationBalance,
},
},
Consolidations: []*pb.ConsolidationRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
},
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -715,7 +554,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -729,7 +568,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -743,45 +582,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(NewPayloadMethodV4+" INVALID_BLOCK_HASH status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
require.Equal(t, true, ok)
want, ok := fix["InvalidBlockHashStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
Index: 0,
},
},
Withdrawals: []*pb.WithdrawalRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
Amount: params.BeaconConfig().MinActivationBalance,
},
},
Consolidations: []*pb.ConsolidationRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
},
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -795,7 +596,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -809,7 +610,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -823,46 +624,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
t.Run(NewPayloadMethodV4+" INVALID status", func(t *testing.T) {
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
require.Equal(t, true, ok)
want, ok := fix["InvalidStatus"].(*pb.PayloadStatus)
require.Equal(t, true, ok)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
require.NoError(t, err)
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
Amount: params.BeaconConfig().MinActivationBalance,
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
Index: 0,
},
},
Withdrawals: []*pb.WithdrawalRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
Amount: params.BeaconConfig().MinActivationBalance,
},
},
Consolidations: []*pb.ConsolidationRequest{
{
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
},
},
}
client := newPayloadV4Setup(t, want, execPayload, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
require.DeepEqual(t, want.LatestValidHash, resp)
})
@@ -876,7 +638,7 @@ func TestClient_HTTP(t *testing.T) {
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrUnknownPayloadStatus, err)
require.DeepEqual(t, []uint8(nil), resp)
})
@@ -1535,7 +1297,6 @@ func fixtures() map[string]interface{} {
"ExecutionPayloadDeneb": s.ExecutionPayloadDeneb,
"ExecutionPayloadCapellaWithValue": s.ExecutionPayloadWithValueCapella,
"ExecutionPayloadDenebWithValue": s.ExecutionPayloadWithValueDeneb,
"ExecutionBundleElectra": s.ExecutionBundleElectra,
"ValidPayloadStatus": s.ValidPayloadStatus,
"InvalidBlockHashStatus": s.InvalidBlockHashStatus,
"AcceptedStatus": s.AcceptedStatus,
@@ -1722,53 +1483,6 @@ func fixturesStruct() *payloadFixtures {
Blobs: []hexutil.Bytes{{'a'}, {'b'}},
},
}
depositRequestBytes, err := hexutil.Decode("0x610000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"620000000000000000000000000000000000000000000000000000000000000000" +
"4059730700000063000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"00000000000000000000000000000000000000000000000000000000000000000000000000000000")
if err != nil {
panic("failed to decode deposit request")
}
withdrawalRequestBytes, err := hexutil.Decode("0x6400000000000000000000000000000000000000" +
"6500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040597307000000")
if err != nil {
panic("failed to decode withdrawal request")
}
consolidationRequestBytes, err := hexutil.Decode("0x6600000000000000000000000000000000000000" +
"670000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
"680000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
if err != nil {
panic("failed to decode consolidation request")
}
executionBundleFixtureElectra := &pb.GetPayloadV4ResponseJson{
ShouldOverrideBuilder: true,
ExecutionPayload: &pb.ExecutionPayloadDenebJSON{
ParentHash: &common.Hash{'a'},
FeeRecipient: &common.Address{'b'},
StateRoot: &common.Hash{'c'},
ReceiptsRoot: &common.Hash{'d'},
LogsBloom: &hexutil.Bytes{'e'},
PrevRandao: &common.Hash{'f'},
BaseFeePerGas: "0x123",
BlockHash: &common.Hash{'g'},
Transactions: []hexutil.Bytes{{'h'}},
Withdrawals: []*pb.Withdrawal{},
BlockNumber: &hexUint,
GasLimit: &hexUint,
GasUsed: &hexUint,
Timestamp: &hexUint,
BlobGasUsed: &bgu,
ExcessBlobGas: &ebg,
},
BlockValue: "0x11fffffffff",
BlobsBundle: &pb.BlobBundleJSON{
Commitments: []hexutil.Bytes{[]byte("commitment1"), []byte("commitment2")},
Proofs: []hexutil.Bytes{[]byte("proof1"), []byte("proof2")},
Blobs: []hexutil.Bytes{{'a'}, {'b'}},
},
ExecutionRequests: []hexutil.Bytes{depositRequestBytes, withdrawalRequestBytes, consolidationRequestBytes},
}
parent := bytesutil.PadTo([]byte("parentHash"), fieldparams.RootLength)
sha3Uncles := bytesutil.PadTo([]byte("sha3Uncles"), fieldparams.RootLength)
miner := bytesutil.PadTo([]byte("miner"), fieldparams.FeeRecipientLength)
@@ -1862,7 +1576,6 @@ func fixturesStruct() *payloadFixtures {
EmptyExecutionPayloadDeneb: emptyExecutionPayloadDeneb,
ExecutionPayloadWithValueCapella: executionPayloadWithValueFixtureCapella,
ExecutionPayloadWithValueDeneb: executionPayloadWithValueFixtureDeneb,
ExecutionBundleElectra: executionBundleFixtureElectra,
ValidPayloadStatus: validStatus,
InvalidBlockHashStatus: inValidBlockHashStatus,
AcceptedStatus: acceptedStatus,
@@ -1886,7 +1599,6 @@ type payloadFixtures struct {
ExecutionPayloadDeneb *pb.ExecutionPayloadDeneb
ExecutionPayloadWithValueCapella *pb.GetPayloadV2ResponseJson
ExecutionPayloadWithValueDeneb *pb.GetPayloadV3ResponseJson
ExecutionBundleElectra *pb.GetPayloadV4ResponseJson
ValidPayloadStatus *pb.PayloadStatus
InvalidBlockHashStatus *pb.PayloadStatus
AcceptedStatus *pb.PayloadStatus
@@ -2245,54 +1957,6 @@ func newPayloadV3Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.Execu
return service
}
func newPayloadV4Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayloadDeneb, requests *pb.ExecutionRequests) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := io.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
require.Equal(t, true, strings.Contains(
jsonRequestString, string("engine_newPayloadV4"),
))
reqPayload, err := json.Marshal(payload)
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(reqPayload),
))
reqRequests, err := pb.EncodeExecutionRequests(requests)
require.NoError(t, err)
jsonRequests, err := json.Marshal(reqRequests)
require.NoError(t, err)
require.Equal(t, true, strings.Contains(
jsonRequestString, string(jsonRequests),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": status,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
return service
}
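The setup helpers above follow a common pattern: stand up an httptest server that asserts on the outgoing JSON-RPC body, then dial it with go-ethereum's rpc client. A stripped-down, hypothetical version of that pattern follows; the method name "engine_demoMethod" and the string result are made up, while httptest and the rpc package are real APIs.

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer r.Body.Close()
		body, _ := io.ReadAll(r.Body)
		// A real test would assert on body here, as newPayloadV4Setup does.
		_ = body
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(map[string]interface{}{
			"jsonrpc": "2.0",
			"id":      1,
			"result":  "0x01",
		})
	}))
	defer srv.Close()

	client, err := rpc.DialHTTP(srv.URL)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var result string
	if err := client.CallContext(context.Background(), &result, "engine_demoMethod"); err != nil {
		panic(err)
	}
	fmt.Println(result) // 0x01
}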
func TestReconstructBlindedBlockBatch(t *testing.T) {
t.Run("empty response works", func(t *testing.T) {
ctx := context.Background()

View File

@@ -405,10 +405,8 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.httpLogger = testAcc.Backend
web3Service.latestEth1Data.LastRequestedBlock = 0
hdr, err := testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
bConfig := params.MinimalSpecConfig().Copy()
bConfig.MinGenesisTime = 0
bConfig.SecondsPerETH1Block = 10
@@ -417,7 +415,7 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
nConfig.ContractDeploymentBlock = 0
params.OverrideBeaconNetworkConfig(nConfig)
assert.NoError(t, testAcc.Backend.AdjustTime(10*time.Second))
testAcc.Backend.Commit()
totalNumOfDeposits := depositsReqForChainStart + 30
@@ -440,22 +438,14 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
// 5
if (i+1)%8 == depositOffset {
testAcc.Backend.Commit()
// Throw away error in case adjustment wasn't successful.
err = testAcc.Backend.AdjustTime(10 * time.Second)
_ = err
}
}
// Forward the chain to account for the follow distance
for i := uint64(0); i < params.BeaconConfig().Eth1FollowDistance; i++ {
testAcc.Backend.Commit()
// Throw away error in case adjustment wasn't successful.
err = testAcc.Backend.AdjustTime(10 * time.Second)
_ = err
}
hdr, err = testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
// Set up our subscriber now to listen for the chain started event.
stateChannel := make(chan *feed.Event, 1)
@@ -512,10 +502,8 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.httpLogger = testAcc.Backend
web3Service.latestEth1Data.LastRequestedBlock = 0
hdr, err := testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
bConfig := params.MinimalSpecConfig().Copy()
bConfig.SecondsPerETH1Block = 10
params.OverrideBeaconConfig(bConfig)
@@ -552,18 +540,14 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
for i := uint64(0); i < 1500; i++ {
testAcc.Backend.Commit()
}
hdr, err = testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
wantedGenesisTime := hdr.Time
wantedGenesisTime := testAcc.Backend.Blockchain().CurrentBlock().Time
// Forward the chain to account for the follow distance
for i := uint64(0); i < params.BeaconConfig().Eth1FollowDistance; i++ {
testAcc.Backend.Commit()
}
hdr, err = testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
// Set the genesis time 500 blocks ahead of the last
// deposit log.

View File

@@ -3,6 +3,7 @@ package execution
import (
"context"
"encoding/json"
"math"
"net/http"
"net/http/httptest"
"testing"
@@ -130,10 +131,21 @@ func TestParseRequest(t *testing.T) {
strToHexBytes(t, "0x66756c6c00000000000000000000000000000000000000000000000000000000"),
},
},
{
method: GetPayloadBodiesByHashV2,
byteArgs: []hexutil.Bytes{
strToHexBytes(t, "0x656d707479000000000000000000000000000000000000000000000000000000"),
strToHexBytes(t, "0x66756c6c00000000000000000000000000000000000000000000000000000000"),
},
},
{
method: GetPayloadBodiesByRangeV1,
hexArgs: []string{hexutil.EncodeUint64(0), hexutil.EncodeUint64(1)},
},
{
method: GetPayloadBodiesByRangeV2,
hexArgs: []string{hexutil.EncodeUint64(math.MaxUint64), hexutil.EncodeUint64(1)},
},
}
for _, c := range cases {
t.Run(c.method, func(t *testing.T) {
@@ -179,7 +191,9 @@ func TestParseRequest(t *testing.T) {
func TestCallCount(t *testing.T) {
methods := []string{
GetPayloadBodiesByHashV1,
GetPayloadBodiesByHashV2,
GetPayloadBodiesByRangeV1,
GetPayloadBodiesByRangeV2,
}
cases := []struct {
method string
@@ -187,8 +201,10 @@ func TestCallCount(t *testing.T) {
}{
{method: GetPayloadBodiesByHashV1, count: 1},
{method: GetPayloadBodiesByHashV1, count: 2},
{method: GetPayloadBodiesByHashV2, count: 1},
{method: GetPayloadBodiesByRangeV1, count: 1},
{method: GetPayloadBodiesByRangeV1, count: 2},
{method: GetPayloadBodiesByRangeV2, count: 1},
}
for _, c := range cases {
t.Run(c.method, func(t *testing.T) {

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
pb "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"google.golang.org/protobuf/proto"
)
@@ -86,7 +87,10 @@ func (r *blindedBlockReconstructor) addToBatch(b interfaces.ReadOnlySignedBeacon
return nil
}
func payloadBodyMethodForBlock(_ interface{ Version() int }) string {
func payloadBodyMethodForBlock(b interface{ Version() int }) string {
if b.Version() > version.Deneb {
return GetPayloadBodiesByHashV2
}
return GetPayloadBodiesByHashV1
}
@@ -239,6 +243,9 @@ func (r *blindedBlockReconstructor) unblinded() ([]interfaces.SignedBeaconBlock,
return unblinded, nil
}
func rangeMethodForHashMethod(_ string) string {
func rangeMethodForHashMethod(method string) string {
if method == GetPayloadBodiesByHashV2 {
return GetPayloadBodiesByRangeV2
}
return GetPayloadBodiesByRangeV1
}
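Because payloadBodyMethodForBlock and rangeMethodForHashMethod above decide the method purely from the block version (Electra and later use the V2 engine methods), a reconstructor can bucket blinded blocks by method before issuing batched calls. The sketch below is hypothetical: the version numbering and the grouping loop are illustrative, and only the engine method names come from this diff.

package main

import "fmt"

const (
	GetPayloadBodiesByHashV1 = "engine_getPayloadBodiesByHashV1"
	GetPayloadBodiesByHashV2 = "engine_getPayloadBodiesByHashV2"
)

// deneb is a stand-in for version.Deneb; the real constant lives in runtime/version.
const deneb = 4

// methodForVersion mirrors the cutover above: anything newer than Deneb uses V2.
func methodForVersion(v int) string {
	if v > deneb {
		return GetPayloadBodiesByHashV2
	}
	return GetPayloadBodiesByHashV1
}

func main() {
	byMethod := map[string][]int{}
	for _, v := range []int{2, 3, 4, 5, 6} { // illustrative version numbers only
		m := methodForVersion(v)
		byMethod[m] = append(byMethod[m], v)
	}
	fmt.Println(byMethod)
}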

View File

@@ -25,6 +25,33 @@ func (v versioner) Version() int {
return v.version
}
func TestPayloadBodyMethodForBlock(t *testing.T) {
cases := []struct {
versions []int
want string
}{
{
versions: []int{version.Phase0, version.Altair, version.Bellatrix, version.Capella, version.Deneb},
want: GetPayloadBodiesByHashV1,
},
{
versions: []int{version.Electra},
want: GetPayloadBodiesByHashV2,
},
}
for _, c := range cases {
for _, v := range c.versions {
t.Run(version.String(v), func(t *testing.T) {
v := versioner{version: v}
require.Equal(t, c.want, payloadBodyMethodForBlock(v))
})
}
}
t.Run("post-electra", func(t *testing.T) {
require.Equal(t, GetPayloadBodiesByHashV2, payloadBodyMethodForBlock(versioner{version: version.Electra + 1}))
})
}
func payloadToBody(t *testing.T, ed interfaces.ExecutionData) *pb.ExecutionPayloadBody {
body := &pb.ExecutionPayloadBody{}
txs, err := ed.Transactions()
@@ -320,6 +347,22 @@ func TestReconstructBlindedBlockBatchFallbackToRange(t *testing.T) {
}
mockWriteResult(t, w, msg, executionPayloadBodies)
})
// Separate methods for the electra block.
srv.register(GetPayloadBodiesByHashV2, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
executionPayloadBodies := []*pb.ExecutionPayloadBody{nil}
mockWriteResult(t, w, msg, executionPayloadBodies)
})
srv.register(GetPayloadBodiesByRangeV2, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
p := mockParseUintList(t, msg.Params)
require.Equal(t, 2, len(p))
start, count := p[0], p[1]
require.Equal(t, fx.electra.blinded.header.BlockNumber(), start)
require.Equal(t, uint64(1), count)
executionPayloadBodies := []*pb.ExecutionPayloadBody{
payloadToBody(t, fx.electra.blinded.header),
}
mockWriteResult(t, w, msg, executionPayloadBodies)
})
blind := []interfaces.ReadOnlySignedBeaconBlock{
fx.denebBlock.blinded.block,
fx.emptyDenebBlock.blinded.block,
@@ -338,8 +381,13 @@ func TestReconstructBlindedBlockBatchDenebAndElectra(t *testing.T) {
t.Run("deneb and electra", func(t *testing.T) {
cli, srv := newMockEngine(t)
fx := testBlindedBlockFixtures(t)
// The reconstructor should make separate calls for the deneb (v1) and electra (v2) blocks.

srv.register(GetPayloadBodiesByHashV1, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.denebBlock.blinded.header), payloadToBody(t, fx.electra.blinded.header)}
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.denebBlock.blinded.header)}
mockWriteResult(t, w, msg, executionPayloadBodies)
})
srv.register(GetPayloadBodiesByHashV2, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.electra.blinded.header)}
mockWriteResult(t, w, msg, executionPayloadBodies)
})
blinded := []interfaces.ReadOnlySignedBeaconBlock{

View File

@@ -180,9 +180,7 @@ func TestService_Eth1Synced(t *testing.T) {
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
require.NoError(t, err)
hdr, err := testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
currTime := hdr.Time
currTime := testAcc.Backend.Blockchain().CurrentHeader().Time
now := time.Now()
assert.NoError(t, testAcc.Backend.AdjustTime(now.Sub(time.Unix(int64(currTime), 0))))
testAcc.Backend.Commit()
@@ -213,20 +211,14 @@ func TestFollowBlock_OK(t *testing.T) {
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
hdr, err := testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
baseHeight := hdr.Number.Uint64()
baseHeight := testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
// process follow_distance blocks
for i := 0; i < int(params.BeaconConfig().Eth1FollowDistance); i++ {
// Throw away error in case adjustment wasn't successful.
err = testAcc.Backend.AdjustTime(10 * time.Second)
_ = err
testAcc.Backend.Commit()
}
hdr, err = testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
// set current height
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
h, err := web3Service.followedBlockHeight(context.Background())
require.NoError(t, err)
@@ -235,15 +227,11 @@ func TestFollowBlock_OK(t *testing.T) {
expectedHeight := numToForward + baseHeight
// forward 2 blocks
for i := uint64(0); i < numToForward; i++ {
// Throw away error in case adjustment wasn't successful.
err = testAcc.Backend.AdjustTime(10 * time.Second)
_ = err
testAcc.Backend.Commit()
}
hdr, err = testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
// set current height
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
h, err = web3Service.followedBlockHeight(context.Background())
require.NoError(t, err)
@@ -488,20 +476,15 @@ func TestNewService_EarliestVotingBlock(t *testing.T) {
for i := 0; i < numToForward; i++ {
testAcc.Backend.Commit()
}
hdr, err := testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
currTime := hdr.Time
currTime := testAcc.Backend.Blockchain().CurrentHeader().Time
now := time.Now()
err = testAcc.Backend.AdjustTime(now.Sub(time.Unix(int64(currTime), 0)))
require.NoError(t, err)
testAcc.Backend.Commit()
hdr, err = testAcc.Backend.Client.HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
currTime = hdr.Time
web3Service.latestEth1Data.BlockHeight = hdr.Number.Uint64()
web3Service.latestEth1Data.BlockTime = hdr.Time
currTime = testAcc.Backend.Blockchain().CurrentHeader().Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentHeader().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentHeader().Time
web3Service.chainStartData.GenesisTime = currTime
// With a current slot of zero, only request follow_blocks behind.

View File

@@ -39,7 +39,7 @@ type EngineClient struct {
}
// NewPayload --
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash, _ *pb.ExecutionRequests) ([]byte, error) {
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash) ([]byte, error) {
return e.NewPayloadResp, e.ErrNewPayload
}
@@ -54,7 +54,7 @@ func (e *EngineClient) ForkchoiceUpdated(
}
// GetPayload --
func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte, _ primitives.Slot) (*blocks.GetPayloadResponse, error) {
func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte, s primitives.Slot) (*blocks.GetPayloadResponse, error) {
return e.GetPayloadResponse, e.ErrGetPayload
}

View File

@@ -141,14 +141,7 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, ro
}
jc, fc = f.store.pullTips(state, node, jc, fc)
if err := f.updateCheckpoints(ctx, jc, fc); err != nil {
_, remErr := f.store.removeNode(ctx, node)
if remErr != nil {
log.WithError(remErr).Error("could not remove node")
}
return errors.Wrap(err, "could not update checkpoints")
}
return nil
return f.updateCheckpoints(ctx, jc, fc)
}
// updateCheckpoints updates the checkpoints when inserting a new node.
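One side of this hunk rolls the insertion back when the checkpoint update fails, so forkchoice never keeps a node whose bookkeeping is incomplete. A minimal, hypothetical sketch of that rollback shape follows, using toy store and node types rather than Prysm's forkchoice internals.

package main

import (
	"context"
	"errors"
	"fmt"
)

type node struct{ root string }

type store struct{ nodes map[string]*node }

func (s *store) removeNode(_ context.Context, n *node) error {
	delete(s.nodes, n.root)
	return nil
}

// insertAndUpdate removes the freshly inserted node again if the follow-up
// update fails, so the store never retains a half-initialized node.
func (s *store) insertAndUpdate(ctx context.Context, n *node, update func() error) error {
	s.nodes[n.root] = n
	if err := update(); err != nil {
		if remErr := s.removeNode(ctx, n); remErr != nil {
			fmt.Println("could not remove node:", remErr)
		}
		return errors.New("could not update checkpoints: " + err.Error())
	}
	return nil
}

func main() {
	s := &store{nodes: map[string]*node{}}
	err := s.insertAndUpdate(context.Background(), &node{root: "a"}, func() error {
		return errors.New("mock err")
	})
	fmt.Println(err, "nodes left:", len(s.nodes))
}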

View File

@@ -3,7 +3,6 @@ package doublylinkedtree
import (
"context"
"encoding/binary"
"errors"
"testing"
"time"
@@ -888,16 +887,3 @@ func TestForkchoiceParentRoot(t *testing.T) {
require.NoError(t, err)
require.Equal(t, zeroHash, root)
}
func TestForkChoice_CleanupInserting(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
st, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 2, 2)
f.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) {
return f.justifiedBalances, errors.New("mock err")
})
require.NoError(t, err)
require.NotNil(t, f.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, f.HasNode(blkRoot))
}

View File

@@ -107,9 +107,7 @@ func (s *Store) insert(ctx context.Context,
s.headNode = n
s.highestReceivedNode = n
} else {
delete(s.nodeByRoot, root)
delete(s.nodeByPayload, payloadHash)
return nil, errInvalidParentRoot
return n, errInvalidParentRoot
}
} else {
parent.children = append(parent.children, n)
@@ -130,11 +128,7 @@ func (s *Store) insert(ctx context.Context,
jEpoch := s.justifiedCheckpoint.Epoch
fEpoch := s.finalizedCheckpoint.Epoch
if err := s.treeRootNode.updateBestDescendant(ctx, jEpoch, fEpoch, slots.ToEpoch(currentSlot)); err != nil {
_, remErr := s.removeNode(ctx, n)
if remErr != nil {
log.WithError(remErr).Error("could not remove node")
}
return nil, errors.Wrap(err, "could not update best descendants")
return n, err
}
}
// Update metrics.

View File

@@ -525,12 +525,3 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, root4, target)
}
func TestStore_CleanupInserting(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
st, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NotNil(t, f.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, f.HasNode(blkRoot))
}

View File

@@ -185,42 +185,42 @@ func (s *Service) processUnaggregatedAttestation(ctx context.Context, att ethpb.
}
// processAggregatedAttestation logs when the beacon node observes an aggregated attestation from a tracked validator.
func (s *Service) processAggregatedAttestation(ctx context.Context, att ethpb.AggregateAttAndProof) {
func (s *Service) processAggregatedAttestation(ctx context.Context, att *ethpb.AggregateAttestationAndProof) {
s.Lock()
defer s.Unlock()
if s.trackedIndex(att.GetAggregatorIndex()) {
if s.trackedIndex(att.AggregatorIndex) {
log.WithFields(logrus.Fields{
"aggregatorIndex": att.GetAggregatorIndex(),
"slot": att.AggregateVal().GetData().Slot,
"aggregatorIndex": att.AggregatorIndex,
"slot": att.Aggregate.Data.Slot,
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(
att.AggregateVal().GetData().BeaconBlockRoot)),
att.Aggregate.Data.BeaconBlockRoot)),
"sourceRoot": fmt.Sprintf("%#x", bytesutil.Trunc(
att.AggregateVal().GetData().Source.Root)),
att.Aggregate.Data.Source.Root)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(
att.AggregateVal().GetData().Target.Root)),
att.Aggregate.Data.Target.Root)),
}).Info("Processed attestation aggregation")
aggregatedPerf := s.aggregatedPerformance[att.GetAggregatorIndex()]
aggregatedPerf := s.aggregatedPerformance[att.AggregatorIndex]
aggregatedPerf.totalAggregations++
s.aggregatedPerformance[att.GetAggregatorIndex()] = aggregatedPerf
aggregationCounter.WithLabelValues(fmt.Sprintf("%d", att.GetAggregatorIndex())).Inc()
s.aggregatedPerformance[att.AggregatorIndex] = aggregatedPerf
aggregationCounter.WithLabelValues(fmt.Sprintf("%d", att.AggregatorIndex)).Inc()
}
var root [32]byte
copy(root[:], att.AggregateVal().GetData().BeaconBlockRoot)
copy(root[:], att.Aggregate.Data.BeaconBlockRoot)
st := s.config.StateGen.StateByRootIfCachedNoCopy(root)
if st == nil {
log.WithField("beaconBlockRoot", fmt.Sprintf("%#x", bytesutil.Trunc(root[:]))).Debug(
"Skipping aggregated attestation due to state not found in cache")
return
}
attestingIndices, err := attestingIndices(ctx, st, att.AggregateVal())
attestingIndices, err := attestingIndices(ctx, st, att.Aggregate)
if err != nil {
log.WithError(err).Error("Could not get attesting indices")
return
}
for _, idx := range attestingIndices {
if s.canUpdateAttestedValidator(primitives.ValidatorIndex(idx), att.AggregateVal().GetData().Slot) {
logFields := logMessageTimelyFlagsForIndex(primitives.ValidatorIndex(idx), att.AggregateVal().GetData())
if s.canUpdateAttestedValidator(primitives.ValidatorIndex(idx), att.Aggregate.Data.Slot) {
logFields := logMessageTimelyFlagsForIndex(primitives.ValidatorIndex(idx), att.Aggregate.Data)
log.WithFields(logFields).Info("Processed aggregated attestation")
}
}
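The accessor-style reads above (GetAggregatorIndex, AggregateVal().GetData()) let the monitor handle different aggregate types behind one interface. Below is a toy stand-in for that shape; the interface and structs are simplified illustrations, not Prysm's real ethpb types.

package main

import "fmt"

type attData struct {
	Slot            uint64
	BeaconBlockRoot []byte
}

type aggregate struct{ data *attData }

func (a *aggregate) GetData() *attData { return a.data }

// aggregateAttAndProof mimics the interface accessors used in the hunk above.
type aggregateAttAndProof interface {
	GetAggregatorIndex() uint64
	AggregateVal() *aggregate
}

type demoAggregate struct {
	idx uint64
	agg *aggregate
}

func (d *demoAggregate) GetAggregatorIndex() uint64 { return d.idx }
func (d *demoAggregate) AggregateVal() *aggregate   { return d.agg }

func logAggregate(att aggregateAttAndProof) {
	fmt.Printf("aggregatorIndex=%d slot=%d\n",
		att.GetAggregatorIndex(), att.AggregateVal().GetData().Slot)
}

func main() {
	logAggregate(&demoAggregate{idx: 7, agg: &aggregate{data: &attData{Slot: 42}}})
}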

View File

@@ -209,7 +209,7 @@ func TestService_BroadcastAttestation(t *testing.T) {
func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
// Setup bootnode.
cfg := &Config{PingInterval: testPingInterval}
cfg := &Config{}
port := 2000
cfg.UDPPort = uint(port)
_, pkey := createAddrAndPrivKey(t)
@@ -241,16 +241,12 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
cfg = &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
MaxPeers: 30,
PingInterval: testPingInterval,
}
// Setup 2 different hosts
for i := 1; i <= 2; i++ {
h, pkey, ipAddr := createHost(t, port+i)
cfg.UDPPort = uint(port + i)
cfg.TCPPort = uint(port + i)
if len(listeners) > 0 {
cfg.Discv5BootStrapAddrs = append(cfg.Discv5BootStrapAddrs, listeners[len(listeners)-1].Self().String())
}
s := &Service{
cfg: cfg,
genesisTime: genesisTime,

View File

@@ -1,8 +1,6 @@
package p2p
import (
"time"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
@@ -17,7 +15,6 @@ type Config struct {
NoDiscovery bool
EnableUPnP bool
StaticPeerID bool
DisableLivenessCheck bool
StaticPeers []string
Discv5BootStrapAddrs []string
RelayNodeAddr string
@@ -30,7 +27,6 @@ type Config struct {
QUICPort uint
TCPPort uint
UDPPort uint
PingInterval time.Duration
MaxPeers uint
QueueSize uint
AllowListCIDR string

View File

@@ -330,10 +330,8 @@ func (s *Service) createListener(
}
dv5Cfg := discover.Config{
PrivateKey: privKey,
Bootnodes: bootNodes,
PingInterval: s.cfg.PingInterval,
NoFindnodeLivenessCheck: s.cfg.DisableLivenessCheck,
PrivateKey: privKey,
Bootnodes: bootNodes,
}
listener, err := discover.ListenV5(conn, localNode, dv5Cfg)

View File

@@ -85,7 +85,7 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -93,10 +93,6 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
var listeners []*listenerWrapper
@@ -105,8 +101,6 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
ipAddr, pkey := createAddrAndPrivKey(t)
s = &Service{

View File

@@ -34,10 +34,8 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
s := &Service{
cfg: &Config{
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
@@ -46,17 +44,11 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
@@ -132,7 +124,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -140,16 +132,10 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
@@ -157,6 +143,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
c := params.BeaconConfig().Copy()
nextForkEpoch := primitives.Epoch(i)
c.ForkVersionSchedule[[4]byte{'A', 'B', 'C', 'D'}] = nextForkEpoch

View File

@@ -28,8 +28,6 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
const testPingInterval = 100 * time.Millisecond
type mockListener struct {
localNode *enode.LocalNode
}
@@ -188,7 +186,7 @@ func TestListenForNewNodes(t *testing.T) {
params.SetupTestConfigCleanup(t)
// Setup bootnode.
notifier := &mock.MockStateNotifier{}
cfg := &Config{StateNotifier: notifier, PingInterval: testPingInterval, DisableLivenessCheck: true}
cfg := &Config{StateNotifier: notifier}
port := 2000
cfg.UDPPort = uint(port)
_, pkey := createAddrAndPrivKey(t)
@@ -204,10 +202,6 @@ func TestListenForNewNodes(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
// Use shorter period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
@@ -223,8 +217,6 @@ func TestListenForNewNodes(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg = &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
MaxPeers: 30,
ClockWaiter: cs,
}

View File

@@ -66,7 +66,7 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
genesisTime := time.Now()
bootNodeService := &Service{
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000, DisableLivenessCheck: true, PingInterval: testPingInterval},
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -78,10 +78,6 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNodeENR := bootListener.Self().String()
// Create 3 nodes, each subscribed to a different subnet.
@@ -96,8 +92,6 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
UDPPort: uint(2000 + i),
TCPPort: uint(3000 + i),
QUICPort: uint(3000 + i),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
})
require.NoError(t, err)
@@ -139,8 +133,6 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
MaxPeers: 30,
UDPPort: 2010,
TCPPort: 3010,

View File

@@ -219,16 +219,6 @@ func (s *Service) validatorEndpoints(
handler: server.SubmitAggregateAndProofs,
methods: []string{http.MethodPost},
},
{
template: "/eth/v2/validator/aggregate_and_proofs",
name: namespace + ".SubmitAggregateAndProofsV2",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.SubmitAggregateAndProofsV2,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/validator/sync_committee_contribution",
name: namespace + ".ProduceSyncCommitteeContribution",
@@ -698,15 +688,6 @@ func (s *Service) beaconEndpoints(
handler: server.GetAttesterSlashings,
methods: []string{http.MethodGet},
},
{
template: "/eth/v2/beacon/pool/attester_slashings",
name: namespace + ".GetAttesterSlashingsV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetAttesterSlashingsV2,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/beacon/pool/attester_slashings",
name: namespace + ".SubmitAttesterSlashings",

View File

@@ -42,7 +42,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/blinded_blocks/{block_id}": {http.MethodGet},
"/eth/v1/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attester_slashings": {http.MethodPost},
"/eth/v1/beacon/pool/proposer_slashings": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/sync_committees": {http.MethodPost},
"/eth/v1/beacon/pool/voluntary_exits": {http.MethodGet, http.MethodPost},
@@ -101,7 +101,6 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/validator/attestation_data": {http.MethodGet},
"/eth/v1/validator/aggregate_attestation": {http.MethodGet},
"/eth/v1/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v2/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v1/validator/beacon_committee_subscriptions": {http.MethodPost},
"/eth/v1/validator/sync_committee_subscriptions": {http.MethodPost},
"/eth/v1/validator/beacon_committee_selections": {http.MethodPost},

View File

@@ -468,65 +468,19 @@ func (s *Server) GetAttesterSlashings(w http.ResponseWriter, r *http.Request) {
return
}
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
slashings := make([]*structs.AttesterSlashing, len(sourceSlashings))
for i, slashing := range sourceSlashings {
as, ok := slashing.(*eth.AttesterSlashing)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T", slashing), http.StatusInternalServerError)
ss := make([]*eth.AttesterSlashing, 0, len(sourceSlashings))
for _, slashing := range sourceSlashings {
s, ok := slashing.(*eth.AttesterSlashing)
if ok {
ss = append(ss, s)
} else {
httputil.HandleError(w, fmt.Sprintf("unable to convert slashing of type %T", slashing), http.StatusInternalServerError)
return
}
slashings[i] = structs.AttesterSlashingFromConsensus(as)
}
attBytes, err := json.Marshal(slashings)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Failed to marshal slashings: %v", err), http.StatusInternalServerError)
return
}
httputil.WriteJson(w, &structs.GetAttesterSlashingsResponse{Data: attBytes})
}
slashings := structs.AttesterSlashingsFromConsensus(ss)
// GetAttesterSlashingsV2 retrieves attester slashings known by the node but
// not necessarily incorporated into any block, supporting both AttesterSlashing and AttesterSlashingElectra.
func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetAttesterSlashingsV2")
defer span.End()
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
var attStructs []interface{}
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
for _, slashing := range sourceSlashings {
if slashing.Version() >= version.Electra {
a, ok := slashing.(*eth.AttesterSlashingElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert electra slashing of type %T to an Electra slashing", slashing), http.StatusInternalServerError)
return
}
attStruct := structs.AttesterSlashingElectraFromConsensus(a)
attStructs = append(attStructs, attStruct)
} else {
a, ok := slashing.(*eth.AttesterSlashing)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T to a Phase0 slashing", slashing), http.StatusInternalServerError)
return
}
attStruct := structs.AttesterSlashingFromConsensus(a)
attStructs = append(attStructs, attStruct)
}
}
attBytes, err := json.Marshal(attStructs)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Failed to marshal slashing: %v", err), http.StatusInternalServerError)
return
}
resp := &structs.GetAttesterSlashingsResponse{
Version: version.String(sourceSlashings[0].Version()),
Data: attBytes,
}
httputil.WriteJson(w, resp)
httputil.WriteJson(w, &structs.GetAttesterSlashingsResponse{Data: slashings})
}
// SubmitAttesterSlashings submits an attester slashing object to node's pool and
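For callers of the v2 pool endpoint shown above, the response carries a version string alongside the raw slashings payload, so clients should branch on the version before decoding further. A hedged client-side sketch follows; the JSON field names ("version", "data") and the localhost URL are assumptions, and only the endpoint path comes from this diff.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type attesterSlashingsResponse struct {
	Version string          `json:"version"`
	Data    json.RawMessage `json:"data"`
}

func main() {
	resp, err := http.Get("http://localhost:3500/eth/v2/beacon/pool/attester_slashings")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body attesterSlashingsResponse
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	// Electra slashings carry a different attestation shape, so branch on the
	// version string before unmarshaling Data into concrete structs.
	fmt.Println("version:", body.Version, "payload bytes:", len(body.Data))
}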

View File

@@ -985,7 +985,9 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
}
func TestGetAttesterSlashings(t *testing.T) {
slashing1PreElectra := &ethpbv1alpha1.AttesterSlashing{
bs, err := util.NewBeaconState()
require.NoError(t, err)
slashing1 := &ethpbv1alpha1.AttesterSlashing{
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
AttestingIndices: []uint64{1, 10},
Data: &ethpbv1alpha1.AttestationData{
@@ -1021,7 +1023,7 @@ func TestGetAttesterSlashings(t *testing.T) {
Signature: bytesutil.PadTo([]byte("signature2"), 96),
},
}
slashing2PreElectra := &ethpbv1alpha1.AttesterSlashing{
slashing2 := &ethpbv1alpha1.AttesterSlashing{
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
AttestingIndices: []uint64{3, 30},
Data: &ethpbv1alpha1.AttestationData{
@@ -1057,168 +1059,23 @@ func TestGetAttesterSlashings(t *testing.T) {
Signature: bytesutil.PadTo([]byte("signature4"), 96),
},
}
slashing1PostElectra := &ethpbv1alpha1.AttesterSlashingElectra{
Attestation_1: &ethpbv1alpha1.IndexedAttestationElectra{
AttestingIndices: []uint64{1, 10},
Data: &ethpbv1alpha1.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature1"), 96),
},
Attestation_2: &ethpbv1alpha1.IndexedAttestationElectra{
AttestingIndices: []uint64{2, 20},
Data: &ethpbv1alpha1.AttestationData{
Slot: 2,
CommitteeIndex: 2,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 2,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 20,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature2"), 96),
},
}
slashing2PostElectra := &ethpbv1alpha1.AttesterSlashingElectra{
Attestation_1: &ethpbv1alpha1.IndexedAttestationElectra{
AttestingIndices: []uint64{3, 30},
Data: &ethpbv1alpha1.AttestationData{
Slot: 3,
CommitteeIndex: 3,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot3"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 3,
Root: bytesutil.PadTo([]byte("sourceroot3"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 30,
Root: bytesutil.PadTo([]byte("targetroot3"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature3"), 96),
},
Attestation_2: &ethpbv1alpha1.IndexedAttestationElectra{
AttestingIndices: []uint64{4, 40},
Data: &ethpbv1alpha1.AttestationData{
Slot: 4,
CommitteeIndex: 4,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot4"), 32),
Source: &ethpbv1alpha1.Checkpoint{
Epoch: 4,
Root: bytesutil.PadTo([]byte("sourceroot4"), 32),
},
Target: &ethpbv1alpha1.Checkpoint{
Epoch: 40,
Root: bytesutil.PadTo([]byte("targetroot4"), 32),
},
},
Signature: bytesutil.PadTo([]byte("signature4"), 96),
},
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1, slashing2}},
}
t.Run("V1", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodGet, "http://example.com/beacon/pool/attester_slashings", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PreElectra, slashing2PreElectra}},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/pool/attester_slashings", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterSlashings(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
var slashings []*structs.AttesterSlashing
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
ss, err := structs.AttesterSlashingsToConsensus(slashings)
require.NoError(t, err)
require.DeepEqual(t, slashing1PreElectra, ss[0])
require.DeepEqual(t, slashing2PreElectra, ss[1])
})
t.Run("V2-post-electra", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PostElectra, slashing2PostElectra}},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/beacon/pool/attester_slashings", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterSlashingsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, "electra", resp.Version)
// Unmarshal resp.Data into a slice of slashings
var slashings []*structs.AttesterSlashingElectra
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
ss, err := structs.AttesterSlashingsElectraToConsensus(slashings)
require.NoError(t, err)
require.DeepEqual(t, slashing1PostElectra, ss[0])
require.DeepEqual(t, slashing2PostElectra, ss[1])
})
t.Run("V2-pre-electra", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PreElectra, slashing2PreElectra}},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/pool/attester_slashings", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterSlashingsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
var slashings []*structs.AttesterSlashing
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
ss, err := structs.AttesterSlashingsToConsensus(slashings)
require.NoError(t, err)
require.DeepEqual(t, slashing1PreElectra, ss[0])
require.DeepEqual(t, slashing2PreElectra, ss[1])
})
s.GetAttesterSlashings(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, 2, len(resp.Data))
}
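The version dispatch exercised above also determines how a consumer of GetAttesterSlashingsV2 must decode the response: Data stays a raw JSON blob until the caller picks the struct matching the reported Version. A minimal client-side sketch of that dispatch, using only the structs helpers already referenced in these tests (decodeAttesterSlashings is a hypothetical name):

import (
	"encoding/json"

	"github.com/prysmaticlabs/prysm/v5/api/server/structs"
)

// decodeAttesterSlashings converts a V2 pool response into consensus types,
// choosing the Electra or phase0 representation based on the version string.
func decodeAttesterSlashings(resp *structs.GetAttesterSlashingsResponse) (any, error) {
	if resp.Version == "electra" {
		var slashings []*structs.AttesterSlashingElectra
		if err := json.Unmarshal(resp.Data, &slashings); err != nil {
			return nil, err
		}
		return structs.AttesterSlashingsElectraToConsensus(slashings)
	}
	var slashings []*structs.AttesterSlashing
	if err := json.Unmarshal(resp.Data, &slashings); err != nil {
		return nil, err
	}
	return structs.AttesterSlashingsToConsensus(slashings)
}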
func TestGetProposerSlashings(t *testing.T) {



@@ -71,7 +71,6 @@ var (
errSlowReader = errors.New("client failed to read fast enough to keep outgoing buffer below threshold")
errNotRequested = errors.New("event not requested by client")
errUnhandledEventData = errors.New("unable to represent event data in the event stream")
errWriterUnusable = errors.New("http response writer is unusable")
)
// StreamingResponseWriter defines a type that can be used by the eventStreamer.
@@ -310,21 +309,10 @@ func (es *eventStreamer) outboxWriteLoop(ctx context.Context, cancel context.Can
}
}
func writeLazyReaderWithRecover(w StreamingResponseWriter, lr lazyReader) (err error) {
defer func() {
if r := recover(); r != nil {
log.WithField("panic", r).Error("Recovered from panic while writing event to client.")
err = errWriterUnusable
}
}()
_, err = io.Copy(w, lr())
return err
}
func (es *eventStreamer) writeOutbox(ctx context.Context, w StreamingResponseWriter, first lazyReader) error {
needKeepAlive := true
if first != nil {
if err := writeLazyReaderWithRecover(w, first); err != nil {
if _, err := io.Copy(w, first()); err != nil {
return err
}
needKeepAlive = false
@@ -337,13 +325,13 @@ func (es *eventStreamer) writeOutbox(ctx context.Context, w StreamingResponseWri
case <-ctx.Done():
return ctx.Err()
case rf := <-es.outbox:
if err := writeLazyReaderWithRecover(w, rf); err != nil {
if _, err := io.Copy(w, rf()); err != nil {
return err
}
needKeepAlive = false
default:
if needKeepAlive {
if err := writeLazyReaderWithRecover(w, newlineReader); err != nil {
if _, err := io.Copy(w, newlineReader()); err != nil {
return err
}
}
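The hunk above shows both variants of the outbox write path: a plain io.Copy and the writeLazyReaderWithRecover wrapper that converts a panicking response writer into an ordinary error. The recovery idiom in isolation, as a minimal sketch with hypothetical names:

import (
	"fmt"
	"io"
)

// copyWithRecover demonstrates the deferred-recover-into-named-error pattern:
// a panic raised while writing is turned into an error instead of taking down
// the goroutine that services the event stream.
func copyWithRecover(dst io.Writer, src io.Reader) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered from panic while writing: %v", r)
		}
	}()
	_, err = io.Copy(dst, src)
	return err
}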


@@ -85,7 +85,6 @@ go_test(
"//encoding/bytesutil:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/mock:go_default_library",
"//testing/require:go_default_library",


@@ -13,7 +13,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
@@ -32,7 +31,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
@@ -120,34 +118,30 @@ func (s *Server) SubmitContributionAndProofs(w http.ResponseWriter, r *http.Requ
ctx, span := trace.StartSpan(r.Context(), "validator.SubmitContributionAndProofs")
defer span.End()
var reqData []json.RawMessage
if err := json.NewDecoder(r.Body).Decode(&reqData); err != nil {
if errors.Is(err, io.EOF) {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
} else {
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
}
var req structs.SubmitContributionAndProofsRequest
err := json.NewDecoder(r.Body).Decode(&req.Data)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(reqData) == 0 {
if len(req.Data) == 0 {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
for _, item := range reqData {
var contribution structs.SignedContributionAndProof
if err := json.Unmarshal(item, &contribution); err != nil {
httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
return
}
consensusItem, err := contribution.ToConsensus()
for _, item := range req.Data {
consensusItem, err := item.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert contribution to consensus format: "+err.Error(), http.StatusBadRequest)
httputil.HandleError(w, "Could not convert request contribution to consensus contribution: "+err.Error(), http.StatusBadRequest)
return
}
if rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem); rpcError != nil {
rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem)
if rpcError != nil {
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
}
}
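Both submit handlers in this file follow the same decode-and-validate shape: read the JSON array from the request body, treat io.EOF as an empty submission, and reject empty arrays before converting items to consensus types. A generic sketch of that shape, under the assumption that the element type carries its own ToConsensus-style conversion elsewhere (decodeItems is a hypothetical helper):

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
)

// decodeItems decodes a JSON array request body into a slice of T, mapping an
// empty body and an empty array to the same "no data submitted" failure.
func decodeItems[T any](r *http.Request) ([]T, error) {
	var items []T
	err := json.NewDecoder(r.Body).Decode(&items)
	switch {
	case errors.Is(err, io.EOF):
		return nil, errors.New("no data submitted")
	case err != nil:
		return nil, fmt.Errorf("could not decode request body: %w", err)
	}
	if len(items) == 0 {
		return nil, errors.New("no data submitted")
	}
	return items, nil
}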
@@ -174,13 +168,7 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request
broadcastFailed := false
for _, item := range req.Data {
var signedAggregate structs.SignedAggregateAttestationAndProof
err := json.Unmarshal(item, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
return
}
consensusItem, err := signedAggregate.ToConsensus()
consensusItem, err := item.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
@@ -203,81 +191,6 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request
}
}
// SubmitAggregateAndProofsV2 verifies given aggregate and proofs and publishes them on appropriate gossipsub topic.
func (s *Server) SubmitAggregateAndProofsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.SubmitAggregateAndProofsV2")
defer span.End()
var reqData []json.RawMessage
if err := json.NewDecoder(r.Body).Decode(&reqData); err != nil {
if errors.Is(err, io.EOF) {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
} else {
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
}
return
}
if len(reqData) == 0 {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
}
v, err := version.FromString(versionHeader)
if err != nil {
httputil.HandleError(w, "Invalid version: "+err.Error(), http.StatusBadRequest)
return
}
broadcastFailed := false
var rpcError *core.RpcError
for _, raw := range reqData {
if v >= version.Electra {
var signedAggregate structs.SignedAggregateAttestationAndProofElectra
err = json.Unmarshal(raw, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
return
}
consensusItem, err := signedAggregate.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
}
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
} else {
var signedAggregate structs.SignedAggregateAttestationAndProof
err = json.Unmarshal(raw, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
return
}
consensusItem, err := signedAggregate.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
}
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
}
if rpcError != nil {
var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
if errors.As(rpcError.Err, &aggregateBroadcastFailedError) {
broadcastFailed = true
} else {
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
}
}
if broadcastFailed {
httputil.HandleError(w, "Could not broadcast one or more signed aggregated attestations", http.StatusInternalServerError)
}
}
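The SubmitAggregateAndProofsV2 handler shown in this hunk keys its parsing off the consensus version carried in the api.VersionHeader request header. Condensed into a single helper for clarity (unmarshalAggregate is a hypothetical name; both struct shapes and the ToConsensus calls appear in the handler above):

import (
	"encoding/json"

	"github.com/prysmaticlabs/prysm/v5/api/server/structs"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

// unmarshalAggregate picks the Electra aggregate shape at or past the Electra
// fork and the phase0 shape otherwise, then converts to the consensus type.
func unmarshalAggregate(raw json.RawMessage, v int) (any, error) {
	if v >= version.Electra {
		var agg structs.SignedAggregateAttestationAndProofElectra
		if err := json.Unmarshal(raw, &agg); err != nil {
			return nil, err
		}
		return agg.ToConsensus()
	}
	var agg structs.SignedAggregateAttestationAndProof
	if err := json.Unmarshal(raw, &agg); err != nil {
		return nil, err
	}
	return agg.ToConsensus()
}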
// SubmitSyncCommitteeSubscription subscribe to a number of sync committee subnets.
//
// Subscribing to sync committee subnets is an action performed by VC to enable


@@ -14,7 +14,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
builderTest "github.com/prysmaticlabs/prysm/v5/beacon-chain/builder/testing"
@@ -38,7 +37,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -429,238 +427,85 @@ func TestSubmitContributionAndProofs(t *testing.T) {
}
func TestSubmitAggregateAndProofs(t *testing.T) {
s := &Server{
CoreService: &core.Service{GenesisTimeFetcher: &mockChain.ChainService{}},
c := &core.Service{
GenesisTimeFetcher: &mockChain.ChainService{},
}
t.Run("V1", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
s.CoreService.Broadcaster = broadcaster
var body bytes.Buffer
_, err := body.WriteString(singleAggregate)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s := &Server{
CoreService: c,
}
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
s.CoreService.Broadcaster = broadcaster
s.CoreService.SyncCommitteePool = synccommittee.NewStore()
t.Run("single", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
c.Broadcaster = broadcaster
var body bytes.Buffer
_, err := body.WriteString(multipleAggregates)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(singleAggregate)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAggregate)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
})
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
})
t.Run("V2", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
s.CoreService.Broadcaster = broadcaster
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
c.Broadcaster = broadcaster
c.SyncCommitteePool = synccommittee.NewStore()
var body bytes.Buffer
_, err := body.WriteString(singleAggregateElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(multipleAggregates)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
})
t.Run("single-pre-electra", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
s.CoreService.Broadcaster = broadcaster
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(singleAggregate)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
s.CoreService.Broadcaster = broadcaster
s.CoreService.SyncCommitteePool = synccommittee.NewStore()
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAggregate)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(multipleAggregatesElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
})
t.Run("multiple-pre-electra", func(t *testing.T) {
broadcaster := &p2pmock.MockBroadcaster{}
s.CoreService.Broadcaster = broadcaster
s.CoreService.SyncCommitteePool = synccommittee.NewStore()
var body bytes.Buffer
_, err := body.WriteString(multipleAggregates)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("no body-pre-electra", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty-pre-electra", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Altair))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAggregateElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
})
t.Run("invalid-pre-electra", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAggregate)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Deneb))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAggregateAndProofsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
})
s.SubmitAggregateAndProofs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
})
}
@@ -2921,115 +2766,6 @@ var (
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
]`
singleAggregateElectra = `[
{
"message": {
"aggregator_index": "1",
"aggregate": {
"aggregation_bits": "0x01",
"committee_bits": "0x01",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
]`
multipleAggregatesElectra = `[
{
"message": {
"aggregator_index": "1",
"aggregate": {
"aggregation_bits": "0x01",
"committee_bits": "0x01",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
{
"message": {
"aggregator_index": "1",
"aggregate": {
"aggregation_bits": "0x01",
"committee_bits": "0x01",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
]
`
// aggregator_index is invalid
invalidAggregateElectra = `[
{
"message": {
"aggregator_index": "foo",
"aggregate": {
"aggregation_bits": "0x01",
"committee_bits": "0x01",
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
"data": {
"slot": "1",
"index": "1",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "1",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
},
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
]`
singleSyncCommitteeSubscription = `[
{
"validator_index": "1",


@@ -56,7 +56,9 @@ func (_ *Server) SetLoggingLevel(_ context.Context, req *pbrpc.LoggingLevelReque
// Libp2p specific logging.
golog.SetAllLoggers(golog.LevelDebug)
// Geth specific logging.
gethlog.SetDefault(gethlog.NewLogger(gethlog.NewTerminalHandlerWithLevel(os.Stderr, gethlog.LvlTrace, true)))
glogger := gethlog.NewGlogHandler(gethlog.StreamHandler(os.Stderr, gethlog.TerminalFormat(true)))
glogger.Verbosity(gethlog.LvlTrace)
gethlog.Root().SetHandler(glogger)
}
return &empty.Empty{}, nil
}


@@ -77,7 +77,6 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
log.WithError(err).Warn("Proposer: failed to retrieve header from BuilderBid")
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
}
//TODO: add builder execution requests here.
if bid.Version() >= version.Deneb {
builderKzgCommitments, err = bid.BlobKzgCommitments()
if err != nil {
@@ -354,19 +353,12 @@ func setLocalExecution(blk interfaces.SignedBeaconBlock, local *blocks.GetPayloa
if local.BlobsBundle != nil {
kzgCommitments = local.BlobsBundle.KzgCommitments
}
if local.ExecutionRequests != nil {
if err := blk.SetExecutionRequests(local.ExecutionRequests); err != nil {
return errors.Wrap(err, "could not set execution requests")
}
}
return setExecution(blk, local.ExecutionData, false, kzgCommitments)
}
// setBuilderExecution sets the execution context for a builder's beacon block.
// It delegates to setExecution for the actual work.
func setBuilderExecution(blk interfaces.SignedBeaconBlock, execution interfaces.ExecutionData, builderKzgCommitments [][]byte) error {
// TODO #14344: add execution requests for electra
return setExecution(blk, execution, true, builderKzgCommitments)
}


@@ -105,6 +105,7 @@ type WriteOnlyBeaconState interface {
AppendHistoricalRoots(root [32]byte) error
AppendHistoricalSummaries(*ethpb.HistoricalSummary) error
SetLatestExecutionPayloadHeader(payload interfaces.ExecutionData) error
SaveValidatorIndices()
}
// ReadOnlyValidator defines a struct which only has read access to validator methods.
@@ -140,6 +141,7 @@ type ReadOnlyBalances interface {
Balances() []uint64
BalanceAtIndex(idx primitives.ValidatorIndex) (uint64, error)
BalancesLength() int
ActiveBalanceAtIndex(idx primitives.ValidatorIndex) (uint64, error)
}
// ReadOnlyCheckpoint defines a struct which only has read access to checkpoint methods.


@@ -46,6 +46,7 @@ go_library(
"ssz.go",
"state_trie.go",
"types.go",
"validator_index_cache.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native",
visibility = ["//visibility:public"],
@@ -120,6 +121,7 @@ go_test(
"state_test.go",
"state_trie_test.go",
"types_test.go",
"validator_index_cache_test.go",
],
data = glob(["testdata/**"]) + [
"@consensus_spec_tests_mainnet//:test_data",
@@ -135,6 +137,7 @@ go_test(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",


@@ -76,6 +76,7 @@ type BeaconState struct {
stateFieldLeaves map[types.FieldIndex]*fieldtrie.FieldTrie
rebuildTrie map[types.FieldIndex]bool
valMapHandler *stateutil.ValidatorMapHandler
validatorIndexCache *finalizedValidatorIndexCache
merkleLayers [][][]byte
sharedFieldReferences map[types.FieldIndex]*stateutil.Reference
}


@@ -2,6 +2,7 @@ package state_native
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
@@ -150,10 +151,6 @@ func (b *BeaconState) ValidatorAtIndexReadOnly(idx primitives.ValidatorIndex) (s
b.lock.RLock()
defer b.lock.RUnlock()
return b.validatorAtIndexReadOnly(idx)
}
func (b *BeaconState) validatorAtIndexReadOnly(idx primitives.ValidatorIndex) (state.ReadOnlyValidator, error) {
if features.Get().EnableExperimentalState {
if b.validatorsMultiValue == nil {
return nil, state.ErrNilValidatorsInState
@@ -183,6 +180,10 @@ func (b *BeaconState) ValidatorIndexByPubkey(key [fieldparams.BLSPubkeyLength]by
b.lock.RLock()
defer b.lock.RUnlock()
if b.Version() >= version.Electra {
return b.getValidatorIndex(key)
}
var numOfVals int
if features.Get().EnableExperimentalState {
numOfVals = b.validatorsMultiValue.Len(b)
@@ -445,6 +446,34 @@ func (b *BeaconState) inactivityScoresVal() []uint64 {
return res
}
// ActiveBalanceAtIndex returns the active balance for the given validator.
//
// Spec definition:
//
// def get_active_balance(state: BeaconState, validator_index: ValidatorIndex) -> Gwei:
// max_effective_balance = get_validator_max_effective_balance(state.validators[validator_index])
// return min(state.balances[validator_index], max_effective_balance)
func (b *BeaconState) ActiveBalanceAtIndex(i primitives.ValidatorIndex) (uint64, error) {
if b.version < version.Electra {
return 0, errNotSupported("ActiveBalanceAtIndex", b.version)
}
b.lock.RLock()
defer b.lock.RUnlock()
v, err := b.validatorAtIndex(i)
if err != nil {
return 0, err
}
bal, err := b.balanceAtIndex(i)
if err != nil {
return 0, err
}
return min(bal, helpers.ValidatorMaxEffectiveBalance(v)), nil
}
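Reading the spec formula concretely, the cap chosen by ValidatorMaxEffectiveBalance depends on the withdrawal credential type, which is exactly what the Electra test later in this diff exercises. A tiny sketch of the two outcomes, assuming the mainnet config values referenced elsewhere in these hunks (activeBalanceCap and isCompounding are hypothetical names):

import "github.com/prysmaticlabs/prysm/v5/config/params"

// activeBalanceCap mirrors get_active_balance for the two credential types:
// compounding (0x02) credentials cap at MaxEffectiveBalanceElectra, while BLS
// (0x00) credentials cap at MinActivationBalance.
func activeBalanceCap(balance uint64, isCompounding bool) uint64 {
	cfg := params.BeaconConfig()
	if isCompounding {
		return min(balance, cfg.MaxEffectiveBalanceElectra)
	}
	return min(balance, cfg.MinActivationBalance)
}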
// PendingBalanceToWithdraw returns the sum of all pending withdrawals for the given validator.
//
// Spec definition:


@@ -1,12 +1,15 @@
package state_native_test
import (
"math"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
statenative "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
testtmpl "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
@@ -69,6 +72,60 @@ func TestValidatorIndexes(t *testing.T) {
})
}
func TestActiveBalanceAtIndex(t *testing.T) {
	// Test setup: a state with 4 validators.
	// Validators 0 & 1 have compounding withdrawal credentials, while validators 2 & 3 have BLS withdrawal credentials.
pb := &ethpb.BeaconStateElectra{
Validators: []*ethpb.Validator{
{
WithdrawalCredentials: []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte},
},
{
WithdrawalCredentials: []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte},
},
{
WithdrawalCredentials: []byte{params.BeaconConfig().BLSWithdrawalPrefixByte},
},
{
WithdrawalCredentials: []byte{params.BeaconConfig().BLSWithdrawalPrefixByte},
},
},
Balances: []uint64{
55,
math.MaxUint64,
55,
math.MaxUint64,
},
}
state, err := statenative.InitializeFromProtoUnsafeElectra(pb)
require.NoError(t, err)
ab, err := state.ActiveBalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(55), ab)
ab, err = state.ActiveBalanceAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalanceElectra, ab)
ab, err = state.ActiveBalanceAtIndex(2)
require.NoError(t, err)
require.Equal(t, uint64(55), ab)
ab, err = state.ActiveBalanceAtIndex(3)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, ab)
// Accessing a validator index out of bounds should error.
_, err = state.ActiveBalanceAtIndex(4)
require.ErrorIs(t, err, consensus_types.ErrOutOfBounds)
	// Accessing a validator where the balance slice is out of bounds should also error.
require.NoError(t, state.SetBalances([]uint64{}))
_, err = state.ActiveBalanceAtIndex(0)
require.ErrorIs(t, err, consensus_types.ErrOutOfBounds)
}
func TestPendingBalanceToWithdraw(t *testing.T) {
pb := &ethpb.BeaconStateElectra{
PendingPartialWithdrawals: []*ethpb.PendingPartialWithdrawal{


@@ -120,7 +120,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
break
}
v, err := b.validatorAtIndexReadOnly(w.Index)
v, err := b.validatorAtIndex(w.Index)
if err != nil {
return nil, 0, fmt.Errorf("failed to determine withdrawals at index %d: %w", w.Index, err)
}
@@ -128,14 +128,14 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
if err != nil {
return nil, 0, fmt.Errorf("could not retrieve balance at index %d: %w", w.Index, err)
}
hasSufficientEffectiveBalance := v.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
hasSufficientEffectiveBalance := v.EffectiveBalance >= params.BeaconConfig().MinActivationBalance
hasExcessBalance := vBal > params.BeaconConfig().MinActivationBalance
if v.ExitEpoch() == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
if v.ExitEpoch == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
amount := min(vBal-params.BeaconConfig().MinActivationBalance, w.Amount)
withdrawals = append(withdrawals, &enginev1.Withdrawal{
Index: withdrawalIndex,
ValidatorIndex: w.Index,
Address: v.GetWithdrawalCredentials()[12:],
Address: v.WithdrawalCredentials[12:],
Amount: amount,
})
withdrawalIndex++
@@ -147,7 +147,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
validatorsLen := b.validatorsLen()
bound := mathutil.Min(uint64(validatorsLen), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
for i := uint64(0); i < bound; i++ {
val, err := b.validatorAtIndexReadOnly(validatorIndex)
val, err := b.validatorAtIndex(validatorIndex)
if err != nil {
return nil, 0, errors.Wrapf(err, "could not retrieve validator at index %d", validatorIndex)
}
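The pending-partial-withdrawal branch above reduces to a single eligibility-plus-cap rule. Restated as a standalone helper for clarity (partialWithdrawalAmount is a hypothetical name; the field access matches the raw validator used in this hunk):

import (
	"github.com/prysmaticlabs/prysm/v5/config/params"
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)

// partialWithdrawalAmount returns the withdrawable amount for a pending
// partial withdrawal request, or false when the validator is exiting, lacks
// MinActivationBalance of effective balance, or has no excess balance.
func partialWithdrawalAmount(val *ethpb.Validator, balance, requested uint64) (uint64, bool) {
	cfg := params.BeaconConfig()
	eligible := val.ExitEpoch == cfg.FarFutureEpoch &&
		val.EffectiveBalance >= cfg.MinActivationBalance &&
		balance > cfg.MinActivationBalance
	if !eligible {
		return 0, false
	}
	return min(balance-cfg.MinActivationBalance, requested), true
}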


@@ -70,6 +70,11 @@ func (v readOnlyValidator) PublicKey() [fieldparams.BLSPubkeyLength]byte {
return pubkey
}
// publicKeySlice returns the public key in slice form for the read only validator.
func (v readOnlyValidator) publicKeySlice() []byte {
return v.validator.PublicKey
}
// WithdrawalCredentials returns the withdrawal credentials of the
// read only validator.
func (v readOnlyValidator) GetWithdrawalCredentials() []byte {


@@ -107,6 +107,22 @@ func (b *BeaconState) SetHistoricalRoots(val [][]byte) error {
return nil
}
// SaveValidatorIndices saves the beacon state's validator indices to the cache.
func (b *BeaconState) SaveValidatorIndices() {
if b.Version() < version.Electra {
return
}
b.lock.Lock()
defer b.lock.Unlock()
if b.validatorIndexCache == nil {
b.validatorIndexCache = newFinalizedValidatorIndexCache()
}
b.saveValidatorIndices()
}
// AppendHistoricalRoots for the beacon state. Appends the new value
// to the end of list.
func (b *BeaconState) AppendHistoricalRoots(root [32]byte) error {


@@ -189,11 +189,12 @@ func InitializeFromProtoUnsafePhase0(st *ethpb.BeaconState) (state.BeaconState,
id: types.Enumerator.Inc(),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
validatorIndexCache: newFinalizedValidatorIndexCache(),
}
if features.Get().EnableExperimentalState {
@@ -295,11 +296,12 @@ func InitializeFromProtoUnsafeAltair(st *ethpb.BeaconStateAltair) (state.BeaconS
id: types.Enumerator.Inc(),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
validatorIndexCache: newFinalizedValidatorIndexCache(),
}
if features.Get().EnableExperimentalState {
@@ -405,11 +407,12 @@ func InitializeFromProtoUnsafeBellatrix(st *ethpb.BeaconStateBellatrix) (state.B
id: types.Enumerator.Inc(),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
validatorIndexCache: newFinalizedValidatorIndexCache(),
}
if features.Get().EnableExperimentalState {
@@ -519,11 +522,12 @@ func InitializeFromProtoUnsafeCapella(st *ethpb.BeaconStateCapella) (state.Beaco
id: types.Enumerator.Inc(),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
validatorIndexCache: newFinalizedValidatorIndexCache(),
}
if features.Get().EnableExperimentalState {
@@ -632,11 +636,12 @@ func InitializeFromProtoUnsafeDeneb(st *ethpb.BeaconStateDeneb) (state.BeaconSta
nextWithdrawalValidatorIndex: st.NextWithdrawalValidatorIndex,
historicalSummaries: st.HistoricalSummaries,
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
validatorIndexCache: newFinalizedValidatorIndexCache(),
}
if features.Get().EnableExperimentalState {
@@ -754,11 +759,12 @@ func InitializeFromProtoUnsafeElectra(st *ethpb.BeaconStateElectra) (state.Beaco
pendingPartialWithdrawals: st.PendingPartialWithdrawals,
pendingConsolidations: st.PendingConsolidations,
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
validatorIndexCache: newFinalizedValidatorIndexCache(), // only used post-Electra; the cache is populated at finalization, otherwise lookups fall back to scanning the full validator set
}
if features.Get().EnableExperimentalState {
@@ -919,7 +925,8 @@ func (b *BeaconState) Copy() state.BeaconState {
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
// Share the reference to validator index map.
valMapHandler: b.valMapHandler,
valMapHandler: b.valMapHandler,
validatorIndexCache: b.validatorIndexCache,
}
if features.Get().EnableExperimentalState {


@@ -0,0 +1,96 @@
package state_native
import (
"bytes"
"sync"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// finalizedValidatorIndexCache maintains a mapping from validator public keys to their indices within the beacon state.
// The size of the index map doubles as a marker of how many finalized validators have been indexed so far,
// and a mutex guards concurrent read/write access to the cache.
type finalizedValidatorIndexCache struct {
indexMap map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex // Maps finalized BLS public keys to validator indices.
sync.RWMutex
}
// newFinalizedValidatorIndexCache initializes a new validator index cache with an empty index map.
func newFinalizedValidatorIndexCache() *finalizedValidatorIndexCache {
return &finalizedValidatorIndexCache{
indexMap: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
}
}
// getValidatorIndex retrieves the validator index for a given public key from the cache.
// If the public key is not found in the cache, it searches through the state starting from the last finalized index.
func (b *BeaconState) getValidatorIndex(pubKey [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
b.validatorIndexCache.RLock()
defer b.validatorIndexCache.RUnlock()
index, found := b.validatorIndexCache.indexMap[pubKey]
if found {
return index, true
}
validatorCount := len(b.validatorIndexCache.indexMap)
vals := b.validatorsReadOnlySinceIndex(validatorCount)
for i, val := range vals {
if bytes.Equal(bytesutil.PadTo(val.publicKeySlice(), 48), pubKey[:]) {
index := primitives.ValidatorIndex(validatorCount + i)
return index, true
}
}
return 0, false
}
// saveValidatorIndices updates the validator index cache with new indices.
// It processes validators appended after the last finalized index and adds their indices to the map.
func (b *BeaconState) saveValidatorIndices() {
b.validatorIndexCache.Lock()
defer b.validatorIndexCache.Unlock()
validatorCount := len(b.validatorIndexCache.indexMap)
vals := b.validatorsReadOnlySinceIndex(validatorCount)
for i, val := range vals {
b.validatorIndexCache.indexMap[val.PublicKey()] = primitives.ValidatorIndex(validatorCount + i)
}
}
// validatorsReadOnlySinceIndex constructs a list of read only validator references after a specified index.
// The indices in the returned list correspond to their respective validator indices in the state.
// It returns nil if the specified index is out of bounds. This function is read-only and does not use locks.
func (b *BeaconState) validatorsReadOnlySinceIndex(index int) []readOnlyValidator {
totalValidators := b.validatorsLen()
if index >= totalValidators {
return nil
}
var v []*ethpb.Validator
if features.Get().EnableExperimentalState {
if b.validatorsMultiValue == nil {
return nil
}
v = b.validatorsMultiValue.Value(b)
} else {
if b.validators == nil {
return nil
}
v = b.validators
}
result := make([]readOnlyValidator, totalValidators-index)
for i := 0; i < len(result); i++ {
val := v[i+index]
if val == nil {
continue
}
result[i] = readOnlyValidator{
validator: val,
}
}
return result
}
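Putting the new cache together with the accessors touched elsewhere in this diff, the intended flow looks roughly like the sketch below (lookupAndCache is a hypothetical name; it assumes an Electra state, since earlier forks skip the cache entirely):

import (
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
)

// lookupAndCache shows the two halves of the cache's lifecycle: lookups hit
// the map or fall back to scanning only the validators appended since the
// last finalized snapshot, and SaveValidatorIndices folds that tail into the
// map once the state is finalized.
func lookupAndCache(st state.BeaconState, pubkey [fieldparams.BLSPubkeyLength]byte) {
	if idx, ok := st.ValidatorIndexByPubkey(pubkey); ok {
		_ = idx // cache hit, or found by scanning the unfinalized tail
	}
	st.SaveValidatorIndices() // call at finalization; no-op before Electra
}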


@@ -0,0 +1,105 @@
package state_native
import (
"testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func Test_FinalizedValidatorIndexCache(t *testing.T) {
c := newFinalizedValidatorIndexCache()
b := &BeaconState{validatorIndexCache: c}
// Calling getValidatorIndex with a public key that is in neither the cache nor the state returns 0 and false.
i, exists := b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{0})
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, false, exists)
// Validators are added to the state. They are [0, 1, 2]
b.validators = []*ethpb.Validator{
{PublicKey: []byte{1}},
{PublicKey: []byte{2}},
{PublicKey: []byte{3}},
}
// We should be able to retrieve these validators by public key even when they are not in the cache
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{1})
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{2})
require.Equal(t, primitives.ValidatorIndex(1), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{3})
require.Equal(t, primitives.ValidatorIndex(2), i)
require.Equal(t, true, exists)
// State is finalized. We save [0, 1, 2] to the cache.
b.saveValidatorIndices()
require.Equal(t, 3, len(b.validatorIndexCache.indexMap))
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{1})
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{2})
require.Equal(t, primitives.ValidatorIndex(1), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{3})
require.Equal(t, primitives.ValidatorIndex(2), i)
require.Equal(t, true, exists)
// New validators are added to the state. They are [4, 5]
b.validators = []*ethpb.Validator{
{PublicKey: []byte{1}},
{PublicKey: []byte{2}},
{PublicKey: []byte{3}},
{PublicKey: []byte{4}},
{PublicKey: []byte{5}},
}
// We should be able to retrieve these validators by public key even when they are not in the cache
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{4})
require.Equal(t, primitives.ValidatorIndex(3), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{5})
require.Equal(t, primitives.ValidatorIndex(4), i)
require.Equal(t, true, exists)
// State is finalized. We save [4, 5] to the cache.
b.saveValidatorIndices()
require.Equal(t, 5, len(b.validatorIndexCache.indexMap))
// New validators are added to the state. They are [6]
b.validators = []*ethpb.Validator{
{PublicKey: []byte{1}},
{PublicKey: []byte{2}},
{PublicKey: []byte{3}},
{PublicKey: []byte{4}},
{PublicKey: []byte{5}},
{PublicKey: []byte{6}},
}
// We should be able to retrieve these validators by public key even when they are not in the cache
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{6})
require.Equal(t, primitives.ValidatorIndex(5), i)
require.Equal(t, true, exists)
// State is finalized. We save [6] to the cache.
b.saveValidatorIndices()
require.Equal(t, 6, len(b.validatorIndexCache.indexMap))
// Save a few more times.
b.saveValidatorIndices()
b.saveValidatorIndices()
require.Equal(t, 6, len(b.validatorIndexCache.indexMap))
// Can still retrieve the validators from the cache
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{1})
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{2})
require.Equal(t, primitives.ValidatorIndex(1), i)
require.Equal(t, true, exists)
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{3})
require.Equal(t, primitives.ValidatorIndex(2), i)
require.Equal(t, true, exists)
}


@@ -230,6 +230,7 @@ func TestReplayBlocks_ProcessEpoch_Electra(t *testing.T) {
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
},
}))
beaconState.SaveValidatorIndices()
require.NoError(t, beaconState.SetPendingDeposits([]*ethpb.PendingDeposit{
stateTesting.GeneratePendingDeposit(t, sk, uint64(amountAvailForProcessing)/10, bytesutil.ToBytes32(withdrawalCredentials), genesisBlock.Block.Slot),


@@ -305,12 +305,11 @@ func validateSelectionIndex(
domain := params.BeaconConfig().DomainSelectionProof
epoch := slots.ToEpoch(slot)
v, err := bs.ValidatorAtIndexReadOnly(validatorIndex)
v, err := bs.ValidatorAtIndex(validatorIndex)
if err != nil {
return nil, err
}
pk := v.PublicKey()
publicKey, err := bls.PublicKeyFromBytes(pk[:])
publicKey, err := bls.PublicKeyFromBytes(v.PublicKey)
if err != nil {
return nil, err
}
@@ -336,12 +335,11 @@ func validateSelectionIndex(
func aggSigSet(s state.ReadOnlyBeaconState, a ethpb.SignedAggregateAttAndProof) (*bls.SignatureBatch, error) {
aggregateAndProof := a.AggregateAttestationAndProof()
v, err := s.ValidatorAtIndexReadOnly(aggregateAndProof.GetAggregatorIndex())
v, err := s.ValidatorAtIndex(aggregateAndProof.GetAggregatorIndex())
if err != nil {
return nil, err
}
pk := v.PublicKey()
publicKey, err := bls.PublicKeyFromBytes(pk[:])
publicKey, err := bls.PublicKeyFromBytes(v.PublicKey)
if err != nil {
return nil, err
}


@@ -369,7 +369,7 @@ func (s *Service) verifyPendingBlockSignature(ctx context.Context, blk interface
return pubsub.ValidationIgnore, err
}
// Ignore block in the event of non-existent proposer.
_, err = roState.ValidatorAtIndexReadOnly(blk.Block().ProposerIndex())
_, err = roState.ValidatorAtIndex(blk.Block().ProposerIndex())
if err != nil {
return pubsub.ValidationIgnore, err
}


@@ -285,7 +285,9 @@ func startNode(ctx *cli.Context, cancel context.CancelFunc) error {
// libp2p specific logging.
golog.SetAllLoggers(golog.LevelDebug)
// Geth specific logging.
gethlog.SetDefault(gethlog.NewLogger(gethlog.NewTerminalHandlerWithLevel(os.Stderr, gethlog.LvlTrace, true)))
glogger := gethlog.NewGlogHandler(gethlog.StreamHandler(os.Stderr, gethlog.TerminalFormat(true)))
glogger.Verbosity(gethlog.LvlTrace)
gethlog.Root().SetHandler(glogger)
}
blockchainFlagOpts, err := blockchaincmd.FlagOptions(ctx)


@@ -28,6 +28,6 @@ var (
ScrapeIntervalFlag = &cli.DurationFlag{
Name: "scrape-interval",
Usage: "Frequency of scraping expressed as a duration, eg 2m or 1m5s.",
Value: 120 * time.Second,
Value: 60 * time.Second,
}
)


@@ -180,6 +180,7 @@ func TestModifiedE2E(t *testing.T) {
func TestLoadConfigFile(t *testing.T) {
t.Run("mainnet", func(t *testing.T) {
t.Skip("TODO: add back in after all spec test features are in.")
mn := params.MainnetConfig().Copy()
mainnetPresetsFiles := presetsFilePath(t, "mainnet")
var err error
@@ -198,6 +199,7 @@ func TestLoadConfigFile(t *testing.T) {
})
t.Run("minimal", func(t *testing.T) {
t.Skip("TODO: add back in after all spec test features are in.")
min := params.MinimalSpecConfig().Copy()
minimalPresetsFiles := presetsFilePath(t, "minimal")
var err error


@@ -110,7 +110,7 @@ func MinimalSpecConfig() *BeaconChainConfig {
minimalConfig.MaxWithdrawalRequestsPerPayload = 2
minimalConfig.MaxDepositRequestsPerPayload = 4
minimalConfig.PendingPartialWithdrawalsLimit = 64
minimalConfig.MaxPendingPartialsPerWithdrawalsSweep = 2
minimalConfig.MaxPendingPartialsPerWithdrawalsSweep = 1
minimalConfig.PendingDepositLimit = 134217728
minimalConfig.MaxPendingDepositsPerEpoch = 16


@@ -37,9 +37,6 @@ func NewWrappedExecutionData(v proto.Message) (interfaces.ExecutionData, error)
return WrappedExecutionPayloadDeneb(pbStruct)
case *enginev1.ExecutionPayloadDenebWithValueAndBlobsBundle:
return WrappedExecutionPayloadDeneb(pbStruct.Payload)
case *enginev1.ExecutionBundleElectra:
// note: no payload changes in electra so using deneb
return WrappedExecutionPayloadDeneb(pbStruct.Payload)
default:
return nil, ErrUnsupportedVersion
}


@@ -14,8 +14,7 @@ type GetPayloadResponse struct {
BlobsBundle *pb.BlobsBundle
OverrideBuilder bool
// todo: should we convert this to Gwei up front?
Bid primitives.Wei
ExecutionRequests *pb.ExecutionRequests
Bid primitives.Wei
}
// bundleGetter is an interface satisfied by get payload responses that have a blobs bundle.
@@ -32,10 +31,6 @@ type shouldOverrideBuilderGetter interface {
GetShouldOverrideBuilder() bool
}
type executionRequestsGetter interface {
GetDecodedExecutionRequests() (*pb.ExecutionRequests, error)
}
func NewGetPayloadResponse(msg proto.Message) (*GetPayloadResponse, error) {
r := &GetPayloadResponse{}
bundleGetter, hasBundle := msg.(bundleGetter)
@@ -43,7 +38,6 @@ func NewGetPayloadResponse(msg proto.Message) (*GetPayloadResponse, error) {
r.BlobsBundle = bundleGetter.GetBlobsBundle()
}
bidValueGetter, hasBid := msg.(bidValueGetter)
executionRequestsGetter, hasExecutionRequests := msg.(executionRequestsGetter)
wei := primitives.ZeroWei()
if hasBid {
// The protobuf types that engine api responses unmarshal into store their values in little endian form.
@@ -62,12 +56,5 @@ func NewGetPayloadResponse(msg proto.Message) (*GetPayloadResponse, error) {
return nil, err
}
r.ExecutionData = ed
if hasExecutionRequests {
requests, err := executionRequestsGetter.GetDecodedExecutionRequests()
if err != nil {
return nil, err
}
r.ExecutionRequests = requests
}
return r, nil
}
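NewGetPayloadResponse discovers optional fields on the engine response through narrow getter interfaces and type assertions rather than a fork switch. The pattern in miniature, with hypothetical names standing in for the getters above:

import pb "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"

// blobsGetter mirrors the bundleGetter shape: any payload response message
// that carries a blobs bundle satisfies it; others are silently skipped.
type blobsGetter interface {
	GetBlobsBundle() *pb.BlobsBundle
}

// maybeBlobs returns the blobs bundle when the message exposes one, nil otherwise.
func maybeBlobs(msg any) *pb.BlobsBundle {
	if g, ok := msg.(blobsGetter); ok {
		return g.GetBlobsBundle()
	}
	return nil
}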


@@ -6,7 +6,6 @@ import (
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
@@ -173,12 +172,3 @@ func (b *SignedBeaconBlock) SetBlobKzgCommitments(c [][]byte) error {
b.block.body.blobKzgCommitments = c
return nil
}
// SetExecutionRequests sets the execution requests in the block.
func (b *SignedBeaconBlock) SetExecutionRequests(req *enginev1.ExecutionRequests) error {
if b.version < version.Electra {
return consensus_types.ErrNotSupported("SetExecutionRequests", b.version)
}
b.block.body.executionRequests = req
return nil
}
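
SetExecutionRequests is gated on the block's fork version, returning a not-supported error for pre-Electra blocks. A small self-contained sketch of the same gating, with illustrative constants and types rather than Prysm's:

    package main

    import (
        "errors"
        "fmt"
    )

    const (
        versionDeneb = iota
        versionElectra
    )

    var errNotSupported = errors.New("field not supported before Electra")

    type block struct {
        version           int
        executionRequests []byte
    }

    // SetExecutionRequests refuses to set a post-Electra field on earlier versions.
    func (b *block) SetExecutionRequests(req []byte) error {
        if b.version < versionElectra {
            return fmt.Errorf("SetExecutionRequests: %w", errNotSupported)
        }
        b.executionRequests = req
        return nil
    }

    func main() {
        old := &block{version: versionDeneb}
        fmt.Println(old.SetExecutionRequests([]byte{1})) // SetExecutionRequests: field not supported before Electra
        cur := &block{version: versionElectra}
        fmt.Println(cur.SetExecutionRequests([]byte{1})) // <nil>
    }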

View File

@@ -91,7 +91,6 @@ type SignedBeaconBlock interface {
SetProposerIndex(idx primitives.ValidatorIndex)
SetSlot(slot primitives.Slot)
SetSignature(sig []byte)
SetExecutionRequests(er *enginev1.ExecutionRequests) error
Unblind(e ExecutionData) error
}

deps.bzl (193 changes)
View File

@@ -295,8 +295,8 @@ def prysm_deps():
go_repository(
name = "com_github_bits_and_blooms_bitset",
importpath = "github.com/bits-and-blooms/bitset",
sum = "h1:bAQ9OPNFYbGHV6Nez0tmNI0RiEu7/hxlYJRUA0wFAVE=",
version = "v1.13.0",
sum = "h1:RMyy2mBBShArUAhfVRZJ2xyBO58KCBCtZFShw3umo6k=",
version = "v1.11.0",
)
go_repository(
name = "com_github_bradfitz_go_smtpd",
@@ -313,8 +313,8 @@ def prysm_deps():
go_repository(
name = "com_github_btcsuite_btcd_btcec_v2",
importpath = "github.com/btcsuite/btcd/btcec/v2",
sum = "h1:3EJjcN70HCu/mwqlUsGK8GcNVyLVxFDlWurTXGPFfiQ=",
version = "v2.3.4",
sum = "h1:5n0X6hX0Zk+6omWcihdYvdAlGf2DfasC0GMf7DClJ3U=",
version = "v2.3.2",
)
go_repository(
name = "com_github_btcsuite_btcd_chaincfg_chainhash",
@@ -452,14 +452,8 @@ def prysm_deps():
name = "com_github_cockroachdb_errors",
build_file_proto_mode = "disable_global",
importpath = "github.com/cockroachdb/errors",
sum = "h1:5bA+k2Y6r+oz/6Z/RFlNeVCesGARKuC6YymtcDrbC/I=",
version = "v1.11.3",
)
go_repository(
name = "com_github_cockroachdb_fifo",
importpath = "github.com/cockroachdb/fifo",
sum = "h1:giXvy4KSc/6g/esnpM7Geqxka4WSqI1SZc7sMJFd3y4=",
version = "v0.0.0-20240606204812-0bbfbd93a7ce",
sum = "h1:xSEW75zKaKCWzR3OfxXUxgrk/NtT4G1MiOv5lWZazG8=",
version = "v1.11.1",
)
go_repository(
name = "com_github_cockroachdb_logtags",
@@ -470,8 +464,8 @@ def prysm_deps():
go_repository(
name = "com_github_cockroachdb_pebble",
importpath = "github.com/cockroachdb/pebble",
sum = "h1:CUh2IPtR4swHlEj48Rhfzw6l/d0qA31fItcIszQVIsA=",
version = "v1.1.2",
sum = "h1:aPEJyR4rPBvDmeyi+l/FS/VtA00IWvjeFvjen1m1l1A=",
version = "v0.0.0-20230928194634-aa077af62593",
)
go_repository(
name = "com_github_cockroachdb_redact",
@@ -479,6 +473,12 @@ def prysm_deps():
sum = "h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30=",
version = "v1.1.5",
)
go_repository(
name = "com_github_cockroachdb_sentry_go",
importpath = "github.com/cockroachdb/sentry-go",
sum = "h1:IKgmqgMQlVJIZj19CdocBeSfSaiCbEBZGKODaixqtHM=",
version = "v0.6.1-cockroachdb.2",
)
go_repository(
name = "com_github_cockroachdb_tokenbucket",
importpath = "github.com/cockroachdb/tokenbucket",
@@ -549,14 +549,14 @@ def prysm_deps():
go_repository(
name = "com_github_crate_crypto_go_ipa",
importpath = "github.com/crate-crypto/go-ipa",
sum = "h1:uQYC5Z1mdLRPrZhHjHxufI8+2UG/i25QG92j0Er9p6I=",
version = "v0.0.0-20240223125850-b1e8a79f509c",
sum = "h1:DuBDHVjgGMPki7bAyh91+3cF1Vh34sAEdH8JQgbc2R0=",
version = "v0.0.0-20230601170251-1830d0757c80",
)
go_repository(
name = "com_github_crate_crypto_go_kzg_4844",
importpath = "github.com/crate-crypto/go-kzg-4844",
sum = "h1:TsSgHwrkTKecKJ4kadtHi4b3xHW5dCFUDFnUp1TsawI=",
version = "v1.0.0",
sum = "h1:C0vgZRk4q4EZ/JgPfzuSoxdCq3C3mOZMBShovmncxvA=",
version = "v0.7.0",
)
go_repository(
name = "com_github_creack_pty",
@@ -597,8 +597,8 @@ def prysm_deps():
go_repository(
name = "com_github_deckarep_golang_set_v2",
importpath = "github.com/deckarep/golang-set/v2",
sum = "h1:XfcQbWM1LlMB8BsJ8N9vW5ehnnPVIw0je80NsVHagjM=",
version = "v2.6.0",
sum = "h1:hn6cEZtQ0h3J8kFrHR/NrzyOoTnjgW1+FmNJzQ7y/sA=",
version = "v2.5.0",
)
go_repository(
name = "com_github_decred_dcrd_crypto_blake256",
@@ -654,12 +654,6 @@ def prysm_deps():
sum = "h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=",
version = "v0.5.0",
)
go_repository(
name = "com_github_donovanhide_eventsource",
importpath = "github.com/donovanhide/eventsource",
sum = "h1:C7t6eeMaEQVy6e8CarIhscYQlNmw5e3G36y7l7Y21Ao=",
version = "v0.0.0-20210830082556-c59027999da0",
)
go_repository(
name = "com_github_dop251_goja",
importpath = "github.com/dop251/goja",
@@ -746,28 +740,21 @@ def prysm_deps():
importpath = "github.com/ethereum/c-kzg-4844",
patch_args = ["-p1"],
patches = ["//third_party:com_github_ethereum_c_kzg_4844.patch"],
sum = "h1:0X1LBXxaEtYD9xsyj9B9ctQEZIpnvVDeoBx8aHEwTNA=",
version = "v1.0.0",
sum = "h1:3MS1s4JtA868KpJxroZoepdV0ZKBp3u/O5HcZ7R3nlY=",
version = "v0.4.0",
)
go_repository(
name = "com_github_ethereum_go_ethereum",
build_directives = [
"gazelle:resolve go github.com/karalabe/usb @prysm//third_party/usb:go_default_library",
"gazelle:resolve go github.com/karalabe/hid @prysm//third_party/hid:go_default_library",
],
importpath = "github.com/ethereum/go-ethereum",
patch_args = ["-p1"],
patches = [
"//third_party:com_github_ethereum_go_ethereum_secp256k1.patch",
],
sum = "h1:8nFDCUUE67rPc6AKxFj7JKaOa2W/W1Rse3oS6LvvxEY=",
version = "v1.14.11",
)
go_repository(
name = "com_github_ethereum_go_verkle",
importpath = "github.com/ethereum/go-verkle",
sum = "h1:8NfxH2iXvJ60YRB8ChToFTUzl8awsc3cJ8CbLjGIl/A=",
version = "v0.1.1-0.20240829091221-dffa7562dbe9",
sum = "h1:U6TCRciCqZRe4FPXmy1sMGxTfuk8P7u2UoinF3VbaFk=",
version = "v1.13.5",
)
go_repository(
name = "com_github_evanphx_json_patch",
@@ -778,8 +765,8 @@ def prysm_deps():
go_repository(
name = "com_github_fatih_color",
importpath = "github.com/fatih/color",
sum = "h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=",
version = "v1.16.0",
sum = "h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=",
version = "v1.13.0",
)
go_repository(
name = "com_github_fatih_structs",
@@ -793,11 +780,17 @@ def prysm_deps():
sum = "h1:VvyZxILNuCiUCSXtPtYmmtGvb65nqXh2QFWc0Wpf2/g=",
version = "v0.9.3",
)
go_repository(
name = "com_github_felixge_httpsnoop",
importpath = "github.com/felixge/httpsnoop",
sum = "h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=",
version = "v1.0.4",
)
go_repository(
name = "com_github_ferranbt_fastssz",
importpath = "github.com/ferranbt/fastssz",
sum = "h1:ZI+z3JH05h4kgmFXdHuR1aWYsgrg7o+Fw7/NCzM16Mo=",
version = "v0.1.3",
sum = "h1:9VDpsWq096+oGMDTT/SgBD/VgZYf4pTF+KTPmZ+OaKM=",
version = "v0.0.0-20210120143747-11b9eff30ea9",
)
go_repository(
name = "com_github_fjl_gencodec",
@@ -805,6 +798,12 @@ def prysm_deps():
sum = "h1:bBLctRc7kr01YGvaDfgLbTwjFNW5jdp5y5rj8XXBHfY=",
version = "v0.0.0-20230517082657-f9840df7b83e",
)
go_repository(
name = "com_github_fjl_memsize",
importpath = "github.com/fjl/memsize",
sum = "h1:FtmdgXiUlNeRsoNMFlKLDt+S+6hbjVMEW6RGQ7aUf7c=",
version = "v0.0.0-20190710130421-bcb5799ab5e5",
)
go_repository(
name = "com_github_flosch_pongo2_v4",
importpath = "github.com/flosch/pongo2/v4",
@@ -886,8 +885,8 @@ def prysm_deps():
go_repository(
name = "com_github_gballet_go_verkle",
importpath = "github.com/gballet/go-verkle",
sum = "h1:BAIP2GihuqhwdILrV+7GJel5lyPV3u1+PgzrWLc0TkE=",
version = "v0.1.1-0.20231031103413-a67434b50f46",
sum = "h1:vMT47RYsrftsHSTQhqXwC3BYflo38OLC3Y4LtXtLyU0=",
version = "v0.0.0-20230607174250-df487255f46b",
)
go_repository(
name = "com_github_gdamore_encoding",
@@ -898,8 +897,8 @@ def prysm_deps():
go_repository(
name = "com_github_gdamore_tcell_v2",
importpath = "github.com/gdamore/tcell/v2",
sum = "h1:I5LiGTQuwrysAt1KS9wg1yFfOI3arI3ucFrxtd/xqaA=",
version = "v2.7.0",
sum = "h1:OKbluoP9VYmJwZwq/iLb4BxwKcwGthaa1YNBJIyCySg=",
version = "v2.6.0",
)
go_repository(
name = "com_github_getkin_kin_openapi",
@@ -910,8 +909,8 @@ def prysm_deps():
go_repository(
name = "com_github_getsentry_sentry_go",
importpath = "github.com/getsentry/sentry-go",
sum = "h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps=",
version = "v0.27.0",
sum = "h1:q6Eo+hS+yoJlTO3uu/azhQadsD8V+jQn2D8VvX1eOyI=",
version = "v0.25.0",
)
go_repository(
name = "com_github_ghemawat_stream",
@@ -1170,8 +1169,8 @@ def prysm_deps():
go_repository(
name = "com_github_golang_snappy",
importpath = "github.com/golang/snappy",
sum = "h1:4bw4WeyTYPp0smaXiJZCNnLrvVBqirQVreixayXezGc=",
version = "v0.0.5-0.20231225225746-43d5d4cd4e0e",
sum = "h1:PBC98N2aIaM3XXiurYmW7fx4GZkL8feAMVq7nEjURHk=",
version = "v0.0.5-0.20220116011046-fa5810519dcb",
)
go_repository(
name = "com_github_golangci_lint_1",
@@ -1500,14 +1499,14 @@ def prysm_deps():
go_repository(
name = "com_github_herumi_bls_eth_go_binary",
importpath = "github.com/herumi/bls-eth-go-binary",
sum = "h1:9eeW3EA4epCb7FIHt2luENpAW69MvKGL5jieHlBiP+w=",
version = "v1.31.0",
sum = "h1:wCMygKUQhmcQAjlk2Gquzq6dLmyMv2kF+llRspoRgrk=",
version = "v0.0.0-20210917013441-d37c07cfda4e",
)
go_repository(
name = "com_github_holiman_billy",
importpath = "github.com/holiman/billy",
sum = "h1:X4egAf/gcS1zATw6wn4Ej8vjuVGxeHdan+bRb2ebyv4=",
version = "v0.0.0-20240216141850-2abb0c79d3c4",
sum = "h1:3JQNjnMRil1yD0IfZKHF9GxxWKDJGj8I0IqOUol//sw=",
version = "v0.0.0-20230718173358-1c7e68d277a7",
)
go_repository(
name = "com_github_holiman_bloomfilter_v2",
@@ -1518,14 +1517,14 @@ def prysm_deps():
go_repository(
name = "com_github_holiman_goevmlab",
importpath = "github.com/holiman/goevmlab",
sum = "h1:MnqrjbCnYO6ImEJiqza1vTcSFJno9cErjWG9ONXJgoc=",
version = "v0.0.0-20240515165425-8414a52dc9d4",
sum = "h1:J973NLskKmFIj3EGfpQ1ztUQKdwyJ+fG34638ief0eA=",
version = "v0.0.0-20231201084119-c73b3c97929c",
)
go_repository(
name = "com_github_holiman_uint256",
importpath = "github.com/holiman/uint256",
sum = "h1:JfTzmih28bittyHM8z360dCjIA9dbPIBlcTI6lmctQs=",
version = "v1.3.1",
sum = "h1:jUc4Nk8fm9jZabQuqr2JzednajVmBpC+oiTiXZJEApU=",
version = "v1.2.4",
)
go_repository(
name = "com_github_hpcloud_tail",
@@ -1752,10 +1751,10 @@ def prysm_deps():
version = "v0.0.0-20180517002512-3bf9e2903213",
)
go_repository(
name = "com_github_karalabe_hid",
importpath = "github.com/karalabe/hid",
sum = "h1:msKODTL1m0wigztaqILOtla9HeW1ciscYG4xjLtvk5I=",
version = "v1.0.1-0.20240306101548-573246063e52",
name = "com_github_karalabe_usb",
importpath = "github.com/karalabe/usb",
sum = "h1:AqsttAyEyIEsNz5WLRwuRwjiT5CMDUfLk6cFJDVPebs=",
version = "v0.0.3-0.20230711191512-61db3e06439c",
)
go_repository(
name = "com_github_kataras_blocks",
@@ -2071,14 +2070,14 @@ def prysm_deps():
go_repository(
name = "com_github_mariusvanderwijden_fuzzyvm",
importpath = "github.com/MariusVanDerWijden/FuzzyVM",
sum = "h1:RQtzNvriR3Yu5CvVBTJPwDmfItBT90TWZ3fFondhc08=",
version = "v0.0.0-20240516070431-7828990cad7d",
sum = "h1:BwEuC3xavrv4HTUDH2fUrKgKooiH3Q/nSVnFGtnzpN0=",
version = "v0.0.0-20240209103030-ec53fa766bf8",
)
go_repository(
name = "com_github_mariusvanderwijden_tx_fuzz",
importpath = "github.com/MariusVanDerWijden/tx-fuzz",
sum = "h1:Tq4lXivsR8mtoP4RpasUDIUpDLHfN1YhFge/kzrzK78=",
version = "v1.4.0",
sum = "h1:QDTh0xHorSykJ4+2VccBADMeRAVUbnHaWrCPIMtN+Vc=",
version = "v1.3.3-0.20240227085032-f70dd7c85c97",
)
go_repository(
name = "com_github_marten_seemann_tcp",
@@ -2137,8 +2136,8 @@ def prysm_deps():
go_repository(
name = "com_github_microsoft_go_winio",
importpath = "github.com/Microsoft/go-winio",
sum = "h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=",
version = "v0.6.2",
sum = "h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=",
version = "v0.6.1",
)
go_repository(
name = "com_github_miekg_dns",
@@ -2782,20 +2781,8 @@ def prysm_deps():
go_repository(
name = "com_github_protolambda_bls12_381_util",
importpath = "github.com/protolambda/bls12-381-util",
sum = "h1:05DU2wJN7DTU7z28+Q+zejXkIsA/MF8JZQGhtBZZiWk=",
version = "v0.1.0",
)
go_repository(
name = "com_github_protolambda_zrnt",
importpath = "github.com/protolambda/zrnt",
sum = "h1:KZ48T+3UhsPXNdtE/5QEvGc9DGjUaRI17nJaoznoIaM=",
version = "v0.32.2",
)
go_repository(
name = "com_github_protolambda_ztyp",
importpath = "github.com/protolambda/ztyp",
sum = "h1:rVcL3vBu9W/aV646zF6caLS/dyn9BN8NYiuJzicLNyY=",
version = "v0.2.2",
sum = "h1:cZC+usqsYgHtlBaGulVnZ1hfKAi8iWtujBnRLQE698c=",
version = "v0.0.0-20220416220906-d8552aa452c7",
)
go_repository(
name = "com_github_prysmaticlabs_fastssz",
@@ -2873,8 +2860,8 @@ def prysm_deps():
go_repository(
name = "com_github_rivo_tview",
importpath = "github.com/rivo/tview",
sum = "h1:QLKAX9JLJ9RJVjnywcVg/U8nKNZvdftCtJRv1qzALYI=",
version = "v0.0.0-20240118093911-742cf086196e",
sum = "h1:7UMY2qN9VlcY+x9jlhpYe5Bf1zrdhvmfZyLMk2u65BM=",
version = "v0.0.0-20231126152417-33a1d271f2b6",
)
go_repository(
name = "com_github_rivo_uniseg",
@@ -3123,8 +3110,8 @@ def prysm_deps():
go_repository(
name = "com_github_sirupsen_logrus",
importpath = "github.com/sirupsen/logrus",
sum = "h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=",
version = "v1.9.3",
sum = "h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=",
version = "v1.9.0",
)
go_repository(
name = "com_github_smartystreets_assertions",
@@ -3336,12 +3323,6 @@ def prysm_deps():
sum = "h1:YPXUKf7fYbp/y8xloBqZOw2qaVggbfwMlI8WM3wZUJ0=",
version = "v1.2.7",
)
go_repository(
name = "com_github_umbracle_gohashtree",
importpath = "github.com/umbracle/gohashtree",
sum = "h1:CQh33pStIp/E30b7TxDlXfM0145bn2e8boI30IxAhTg=",
version = "v0.0.2-alpha.0.20230207094856-5b775a815c10",
)
go_repository(
name = "com_github_urfave_cli",
importpath = "github.com/urfave/cli",
@@ -3351,8 +3332,8 @@ def prysm_deps():
go_repository(
name = "com_github_urfave_cli_v2",
importpath = "github.com/urfave/cli/v2",
sum = "h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=",
version = "v2.27.1",
sum = "h1:3f3AMg3HpThFNT4I++TKOejZO8yU55t3JnnSr4S4QEI=",
version = "v2.26.0",
)
go_repository(
name = "com_github_urfave_negroni",
@@ -3432,8 +3413,8 @@ def prysm_deps():
"gazelle:resolve go github.com/herumi/bls-eth-go-binary/bls @herumi_bls_eth_go_binary//:go_default_library",
],
importpath = "github.com/wealdtech/go-eth2-types/v2",
sum = "h1:b5aXlNBLKgjAg/Fft9VvGlqAUCQMP5LzYhlHRrr4yPg=",
version = "v2.8.2",
sum = "h1:tiA6T88M6XQIbrV5Zz53l1G5HtRERcxQfmET225V4Ls=",
version = "v2.5.2",
)
go_repository(
name = "com_github_wealdtech_go_eth2_util",
@@ -4422,8 +4403,8 @@ def prysm_deps():
go_repository(
name = "in_gopkg_natefinch_lumberjack_v2",
importpath = "gopkg.in/natefinch/lumberjack.v2",
sum = "h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=",
version = "v2.2.1",
sum = "h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8=",
version = "v2.0.0",
)
go_repository(
name = "in_gopkg_redis_v4",
@@ -4537,11 +4518,17 @@ def prysm_deps():
sum = "h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=",
version = "v0.24.0",
)
go_repository(
name = "io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp",
importpath = "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",
sum = "h1:ZIg3ZT/aQ7AfKqdwp7ECpOK6vHqquXXuyTjIO8ZdmPs=",
version = "v0.55.0",
)
go_repository(
name = "io_opentelemetry_go_otel",
importpath = "go.opentelemetry.io/otel",
sum = "h1:PdomN/Al4q/lN6iBJEN3AwPvUiHPMlt93c8bqTG5Llw=",
version = "v1.29.0",
sum = "h1:F2t8sK4qf1fAmY9ua4ohFS/K+FUuOPemHUIXHtktrts=",
version = "v1.30.0",
)
go_repository(
name = "io_opentelemetry_go_otel_exporters_jaeger",
@@ -4552,8 +4539,8 @@ def prysm_deps():
go_repository(
name = "io_opentelemetry_go_otel_metric",
importpath = "go.opentelemetry.io/otel/metric",
sum = "h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=",
version = "v1.29.0",
sum = "h1:4xNulvn9gjzo4hjg+wzIKG7iNFEaBMX00Qd4QIZs7+w=",
version = "v1.30.0",
)
go_repository(
name = "io_opentelemetry_go_otel_sdk",
@@ -4564,8 +4551,8 @@ def prysm_deps():
go_repository(
name = "io_opentelemetry_go_otel_trace",
importpath = "go.opentelemetry.io/otel/trace",
sum = "h1:J/8ZNK4XgR7a21DZUAsbF8pZ5Jcw1VhACmnYt39JTi4=",
version = "v1.29.0",
sum = "h1:7UBkkYzeg3C7kQX8VAidWh2biiQbtAKjyIML8dQ9wmc=",
version = "v1.30.0",
)
go_repository(
name = "io_rsc_binaryregexp",

go.mod (63 changes)
View File

@@ -5,18 +5,19 @@ go 1.22.0
toolchain go1.22.4
require (
github.com/MariusVanDerWijden/FuzzyVM v0.0.0-20240516070431-7828990cad7d
github.com/MariusVanDerWijden/tx-fuzz v1.4.0
github.com/MariusVanDerWijden/FuzzyVM v0.0.0-20240209103030-ec53fa766bf8
github.com/MariusVanDerWijden/tx-fuzz v1.3.3-0.20240227085032-f70dd7c85c97
github.com/aristanetworks/goarista v0.0.0-20200805130819-fd197cf57d96
github.com/bazelbuild/rules_go v0.23.2
github.com/btcsuite/btcd/btcec/v2 v2.3.4
github.com/btcsuite/btcd/btcec/v2 v2.3.2
github.com/consensys/gnark-crypto v0.12.1
github.com/crate-crypto/go-kzg-4844 v1.0.0
github.com/crate-crypto/go-kzg-4844 v0.7.0
github.com/d4l3k/messagediff v1.2.1
github.com/dgraph-io/ristretto v0.0.4-0.20210318174700-74754f61e018
github.com/dustin/go-humanize v1.0.0
github.com/emicklei/dot v0.11.0
github.com/ethereum/go-ethereum v1.14.11
github.com/ethereum/go-ethereum v1.13.5
github.com/fjl/memsize v0.0.0-20190710130421-bcb5799ab5e5
github.com/fsnotify/fsnotify v1.6.0
github.com/ghodss/yaml v1.0.0
github.com/go-yaml/yaml v2.1.0+incompatible
@@ -24,7 +25,7 @@ require (
github.com/golang-jwt/jwt/v4 v4.5.0
github.com/golang/gddo v0.0.0-20200528160355-8d077c1d8f4c
github.com/golang/protobuf v1.5.4
github.com/golang/snappy v0.0.5-0.20231225225746-43d5d4cd4e0e
github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb
github.com/google/go-cmp v0.6.0
github.com/google/gofuzz v1.2.0
github.com/google/uuid v1.6.0
@@ -32,8 +33,8 @@ require (
github.com/grpc-ecosystem/go-grpc-middleware v1.2.2
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d
github.com/herumi/bls-eth-go-binary v1.31.0
github.com/holiman/uint256 v1.3.1
github.com/herumi/bls-eth-go-binary v0.0.0-20210917013441-d37c07cfda4e
github.com/holiman/uint256 v1.2.4
github.com/ianlancetaylor/cgosymbolizer v0.0.0-20200424224625-be1b05b0b279
github.com/ipfs/go-log/v2 v2.5.1
github.com/jedib0t/go-pretty/v6 v6.5.4
@@ -68,25 +69,26 @@ require (
github.com/r3labs/sse/v2 v2.10.0
github.com/rs/cors v1.7.0
github.com/schollz/progressbar/v3 v3.3.4
github.com/sirupsen/logrus v1.9.3
github.com/sirupsen/logrus v1.9.0
github.com/spf13/afero v1.10.0
github.com/status-im/keycard-go v0.2.0
github.com/stretchr/testify v1.9.0
github.com/supranational/blst v0.3.13
github.com/supranational/blst v0.3.11
github.com/thomaso-mirodin/intmath v0.0.0-20160323211736-5dc6d854e46e
github.com/trailofbits/go-mutexasserts v0.0.0-20230328101604-8cdbc5f3d279
github.com/tyler-smith/go-bip39 v1.1.0
github.com/urfave/cli/v2 v2.27.1
github.com/urfave/cli/v2 v2.26.0
github.com/uudashr/gocognit v1.0.5
github.com/wealdtech/go-bytesutil v1.1.1
github.com/wealdtech/go-eth2-util v1.6.3
github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4 v1.1.3
go.etcd.io/bbolt v1.3.6
go.opencensus.io v0.24.0
go.opentelemetry.io/otel v1.29.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0
go.opentelemetry.io/otel v1.30.0
go.opentelemetry.io/otel/exporters/jaeger v1.17.0
go.opentelemetry.io/otel/sdk v1.29.0
go.opentelemetry.io/otel/trace v1.29.0
go.opentelemetry.io/otel/trace v1.30.0
go.uber.org/automaxprocs v1.5.2
go.uber.org/mock v0.4.0
golang.org/x/crypto v0.26.0
@@ -108,45 +110,44 @@ require (
require (
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/DataDog/zstd v1.5.5 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/VictoriaMetrics/fastcache v1.12.2 // indirect
github.com/benbjohnson/clock v1.3.5 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bits-and-blooms/bitset v1.13.0 // indirect
github.com/bits-and-blooms/bitset v1.11.0 // indirect
github.com/cespare/cp v1.1.1 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chzyer/readline v1.5.1 // indirect
github.com/cockroachdb/errors v1.11.3 // indirect
github.com/cockroachdb/fifo v0.0.0-20240606204812-0bbfbd93a7ce // indirect
github.com/cockroachdb/errors v1.11.1 // indirect
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b // indirect
github.com/cockroachdb/pebble v1.1.2 // indirect
github.com/cockroachdb/pebble v0.0.0-20230928194634-aa077af62593 // indirect
github.com/cockroachdb/redact v1.1.5 // indirect
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 // indirect
github.com/consensys/bavard v0.1.13 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/crate-crypto/go-ipa v0.0.0-20240223125850-b1e8a79f509c // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/deckarep/golang-set/v2 v2.6.0 // indirect
github.com/deckarep/golang-set/v2 v2.5.0 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 // indirect
github.com/deepmap/oapi-codegen v1.8.2 // indirect
github.com/dlclark/regexp2 v1.7.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dop251/goja v0.0.0-20230806174421-c933cf95e127 // indirect
github.com/elastic/gosigar v0.14.3 // indirect
github.com/ethereum/c-kzg-4844 v1.0.0 // indirect
github.com/ethereum/go-verkle v0.1.1-0.20240829091221-dffa7562dbe9 // indirect
github.com/ferranbt/fastssz v0.1.3 // indirect
github.com/ethereum/c-kzg-4844 v0.4.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/ferranbt/fastssz v0.0.0-20210120143747-11b9eff30ea9 // indirect
github.com/flynn/noise v1.1.0 // indirect
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/getsentry/sentry-go v0.27.0 // indirect
github.com/getsentry/sentry-go v0.25.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-sourcemap/sourcemap v2.1.3+incompatible // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gofrs/flock v0.8.1 // indirect
@@ -157,9 +158,9 @@ require (
github.com/graph-gophers/graphql-go v1.3.0 // indirect
github.com/hashicorp/go-bexpr v0.1.10 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/holiman/billy v0.0.0-20240216141850-2abb0c79d3c4 // indirect
github.com/holiman/billy v0.0.0-20230718173358-1c7e68d277a7 // indirect
github.com/holiman/bloomfilter/v2 v2.0.3 // indirect
github.com/holiman/goevmlab v0.0.0-20240515165425-8414a52dc9d4 // indirect
github.com/holiman/goevmlab v0.0.0-20231201084119-c73b3c97929c // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/influxdata/influxdb-client-go/v2 v2.4.0 // indirect
github.com/influxdata/influxdb1-client v0.0.0-20220302092344-a9ab5670611c // indirect
@@ -168,7 +169,7 @@ require (
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/juju/ansiterm v0.0.0-20180109212912-720a0952cc2a // indirect
github.com/karalabe/hid v1.0.1-0.20240306101548-573246063e52 // indirect
github.com/karalabe/usb v0.0.3-0.20230711191512-61db3e06439c // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/klauspost/cpuid/v2 v2.2.8 // indirect
github.com/koron/go-ssdp v0.0.4 // indirect
@@ -246,11 +247,11 @@ require (
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect
github.com/tklauser/go-sysconf v0.3.13 // indirect
github.com/tklauser/numcpus v0.7.0 // indirect
github.com/wealdtech/go-eth2-types/v2 v2.8.2 // indirect
github.com/wealdtech/go-eth2-types/v2 v2.5.2 // indirect
github.com/wlynxg/anet v0.0.4 // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.opentelemetry.io/otel/metric v1.29.0 // indirect
go.opentelemetry.io/otel/metric v1.30.0 // indirect
go.uber.org/dig v1.18.0 // indirect
go.uber.org/fx v1.22.2 // indirect
go.uber.org/multierr v1.11.0 // indirect
@@ -263,7 +264,7 @@ require (
golang.org/x/time v0.5.0 // indirect
gopkg.in/cenkalti/backoff.v1 v1.1.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
lukechampine.com/blake3 v1.3.0 // indirect
rsc.io/tmplfunc v0.0.3 // indirect
@@ -274,7 +275,7 @@ require (
require (
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/fatih/color v1.16.0 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/gballet/go-libpcsclite v0.0.0-20191108122812-4678299bea08 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect

go.sum (128 changes)
View File

@@ -51,12 +51,12 @@ github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym
github.com/DataDog/zstd v1.5.5 h1:oWf5W7GtOLgp6bciQYDmhHHjdhYkALu6S/5Ni9ZgSvQ=
github.com/DataDog/zstd v1.5.5/go.mod h1:g4AWEaM3yOg3HYfnJ3YIawPnVdXJh9QME85blwSAmyw=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/MariusVanDerWijden/FuzzyVM v0.0.0-20240516070431-7828990cad7d h1:RQtzNvriR3Yu5CvVBTJPwDmfItBT90TWZ3fFondhc08=
github.com/MariusVanDerWijden/FuzzyVM v0.0.0-20240516070431-7828990cad7d/go.mod h1:gWTykV/ZinShgltWofTEJY4TsletuvGhB6l4+Ai2F+E=
github.com/MariusVanDerWijden/tx-fuzz v1.4.0 h1:Tq4lXivsR8mtoP4RpasUDIUpDLHfN1YhFge/kzrzK78=
github.com/MariusVanDerWijden/tx-fuzz v1.4.0/go.mod h1:gmOVECg7o5FY5VU3DQ/fY0zTk/ExBdMkUGz0vA8qqms=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/MariusVanDerWijden/FuzzyVM v0.0.0-20240209103030-ec53fa766bf8 h1:BwEuC3xavrv4HTUDH2fUrKgKooiH3Q/nSVnFGtnzpN0=
github.com/MariusVanDerWijden/FuzzyVM v0.0.0-20240209103030-ec53fa766bf8/go.mod h1:L1QpLBqXlboJMOC2hndG95d1eiElzKsBhjzcuy8pxeM=
github.com/MariusVanDerWijden/tx-fuzz v1.3.3-0.20240227085032-f70dd7c85c97 h1:QDTh0xHorSykJ4+2VccBADMeRAVUbnHaWrCPIMtN+Vc=
github.com/MariusVanDerWijden/tx-fuzz v1.3.3-0.20240227085032-f70dd7c85c97/go.mod h1:xcjGtET6+7KeDHcwLQp3sIfyFALtoTjzZgY8Y+RUozM=
github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/OneOfOne/xxhash v1.2.2 h1:KMrpdQIwFcEqXDklaen+P1axHaj9BSKzvpUUfnHldSE=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
@@ -100,12 +100,12 @@ github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+Ce
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bits-and-blooms/bitset v1.13.0 h1:bAQ9OPNFYbGHV6Nez0tmNI0RiEu7/hxlYJRUA0wFAVE=
github.com/bits-and-blooms/bitset v1.13.0/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6a/7QIWpPxHddWR8=
github.com/bits-and-blooms/bitset v1.11.0 h1:RMyy2mBBShArUAhfVRZJ2xyBO58KCBCtZFShw3umo6k=
github.com/bits-and-blooms/bitset v1.11.0/go.mod h1:7hO7Gc7Pp1vODcmWvKMRA9BNmbv6a/7QIWpPxHddWR8=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/bradfitz/gomemcache v0.0.0-20170208213004-1952afaa557d/go.mod h1:PmM6Mmwb0LSuEubjR8N7PtNe1KxZLtOUHtbeikc5h60=
github.com/btcsuite/btcd/btcec/v2 v2.3.4 h1:3EJjcN70HCu/mwqlUsGK8GcNVyLVxFDlWurTXGPFfiQ=
github.com/btcsuite/btcd/btcec/v2 v2.3.4/go.mod h1:zYzJ8etWJQIv1Ogk7OzpWjowwOdXY1W/17j2MW85J04=
github.com/btcsuite/btcd/btcec/v2 v2.3.2 h1:5n0X6hX0Zk+6omWcihdYvdAlGf2DfasC0GMf7DClJ3U=
github.com/btcsuite/btcd/btcec/v2 v2.3.2/go.mod h1:zYzJ8etWJQIv1Ogk7OzpWjowwOdXY1W/17j2MW85J04=
github.com/btcsuite/btcd/chaincfg/chainhash v1.0.1 h1:q0rUy8C/TYNBQS1+CGKw68tLOFYSNEs0TFnxxnS9+4U=
github.com/btcsuite/btcd/chaincfg/chainhash v1.0.1/go.mod h1:7SFka0XMvUgj3hfZtydOrQY2mwhPclbT2snogU7SQQc=
github.com/buger/jsonparser v0.0.0-20181115193947-bf1c66bbce23/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
@@ -141,14 +141,12 @@ github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnht
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f h1:otljaYPt5hWxV3MUfO5dFPFiOXg9CyG5/kCfayTqsJ4=
github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU=
github.com/cockroachdb/errors v1.11.3 h1:5bA+k2Y6r+oz/6Z/RFlNeVCesGARKuC6YymtcDrbC/I=
github.com/cockroachdb/errors v1.11.3/go.mod h1:m4UIW4CDjx+R5cybPsNrRbreomiFqt8o1h1wUVazSd8=
github.com/cockroachdb/fifo v0.0.0-20240606204812-0bbfbd93a7ce h1:giXvy4KSc/6g/esnpM7Geqxka4WSqI1SZc7sMJFd3y4=
github.com/cockroachdb/fifo v0.0.0-20240606204812-0bbfbd93a7ce/go.mod h1:9/y3cnZ5GKakj/H4y9r9GTjCvAFta7KLgSHPJJYc52M=
github.com/cockroachdb/errors v1.11.1 h1:xSEW75zKaKCWzR3OfxXUxgrk/NtT4G1MiOv5lWZazG8=
github.com/cockroachdb/errors v1.11.1/go.mod h1:8MUxA3Gi6b25tYlFEBGLf+D8aISL+M4MIpiWMSNRfxw=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b h1:r6VH0faHjZeQy818SGhaone5OnYfxFR/+AzdY3sf5aE=
github.com/cockroachdb/logtags v0.0.0-20230118201751-21c54148d20b/go.mod h1:Vz9DsVWQQhf3vs21MhPMZpMGSht7O/2vFW2xusFUVOs=
github.com/cockroachdb/pebble v1.1.2 h1:CUh2IPtR4swHlEj48Rhfzw6l/d0qA31fItcIszQVIsA=
github.com/cockroachdb/pebble v1.1.2/go.mod h1:4exszw1r40423ZsmkG/09AFEG83I0uDgfujJdbL6kYU=
github.com/cockroachdb/pebble v0.0.0-20230928194634-aa077af62593 h1:aPEJyR4rPBvDmeyi+l/FS/VtA00IWvjeFvjen1m1l1A=
github.com/cockroachdb/pebble v0.0.0-20230928194634-aa077af62593/go.mod h1:6hk1eMY/u5t+Cf18q5lFMUA1Rc+Sm5I6Ra1QuPyxXCo=
github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30=
github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAKVxetITBuuhv3BI9cMrmStnpT18zmgmTxunpo=
@@ -174,10 +172,8 @@ github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:ma
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/crate-crypto/go-ipa v0.0.0-20240223125850-b1e8a79f509c h1:uQYC5Z1mdLRPrZhHjHxufI8+2UG/i25QG92j0Er9p6I=
github.com/crate-crypto/go-ipa v0.0.0-20240223125850-b1e8a79f509c/go.mod h1:geZJZH3SzKCqnz5VT0q/DyIG/tvu/dZk+VIfXicupJs=
github.com/crate-crypto/go-kzg-4844 v1.0.0 h1:TsSgHwrkTKecKJ4kadtHi4b3xHW5dCFUDFnUp1TsawI=
github.com/crate-crypto/go-kzg-4844 v1.0.0/go.mod h1:1kMhvPgI0Ky3yIa+9lFySEBUBXkYxeOi8ZF1sYioxhc=
github.com/crate-crypto/go-kzg-4844 v0.7.0 h1:C0vgZRk4q4EZ/JgPfzuSoxdCq3C3mOZMBShovmncxvA=
github.com/crate-crypto/go-kzg-4844 v0.7.0/go.mod h1:1kMhvPgI0Ky3yIa+9lFySEBUBXkYxeOi8ZF1sYioxhc=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cyberdelia/templates v0.0.0-20141128023046-ca7fffd4298c/go.mod h1:GyV+0YP4qX0UQ7r2MoYZ+AvYDp12OF5yg4q8rGnyNh4=
@@ -188,8 +184,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c h1:pFUpOrbxDR6AkioZ1ySsx5yxlDQZ8stG2b88gTPxgJU=
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c/go.mod h1:6UhI8N9EjYm1c2odKpFpAYeR8dsBeM7PtzQhRgxRr9U=
github.com/deckarep/golang-set/v2 v2.6.0 h1:XfcQbWM1LlMB8BsJ8N9vW5ehnnPVIw0je80NsVHagjM=
github.com/deckarep/golang-set/v2 v2.6.0/go.mod h1:VAky9rY/yGXJOLEDv3OMci+7wtDpOF4IN+y82NBOac4=
github.com/deckarep/golang-set/v2 v2.5.0 h1:hn6cEZtQ0h3J8kFrHR/NrzyOoTnjgW1+FmNJzQ7y/sA=
github.com/deckarep/golang-set/v2 v2.5.0/go.mod h1:VAky9rY/yGXJOLEDv3OMci+7wtDpOF4IN+y82NBOac4=
github.com/decred/dcrd/crypto/blake256 v1.0.1 h1:7PltbUIQB7u/FfZ39+DGa/ShuMyJ5ilcvdfma9wOH6Y=
github.com/decred/dcrd/crypto/blake256 v1.0.1/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 h1:rpfIENRNNilwHwZeG5+P150SMrnNEcHYvcCuK6dPZSg=
@@ -235,18 +231,19 @@ github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1m
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/ethereum/c-kzg-4844 v1.0.0 h1:0X1LBXxaEtYD9xsyj9B9ctQEZIpnvVDeoBx8aHEwTNA=
github.com/ethereum/c-kzg-4844 v1.0.0/go.mod h1:VewdlzQmpT5QSrVhbBuGoCdFJkpaJlO1aQputP83wc0=
github.com/ethereum/go-ethereum v1.14.11 h1:8nFDCUUE67rPc6AKxFj7JKaOa2W/W1Rse3oS6LvvxEY=
github.com/ethereum/go-ethereum v1.14.11/go.mod h1:+l/fr42Mma+xBnhefL/+z11/hcmJ2egl+ScIVPjhc7E=
github.com/ethereum/go-verkle v0.1.1-0.20240829091221-dffa7562dbe9 h1:8NfxH2iXvJ60YRB8ChToFTUzl8awsc3cJ8CbLjGIl/A=
github.com/ethereum/go-verkle v0.1.1-0.20240829091221-dffa7562dbe9/go.mod h1:M3b90YRnzqKyyzBEWJGqj8Qff4IDeXnzFw0P9bFw3uk=
github.com/ethereum/c-kzg-4844 v0.4.0 h1:3MS1s4JtA868KpJxroZoepdV0ZKBp3u/O5HcZ7R3nlY=
github.com/ethereum/c-kzg-4844 v0.4.0/go.mod h1:VewdlzQmpT5QSrVhbBuGoCdFJkpaJlO1aQputP83wc0=
github.com/ethereum/go-ethereum v1.13.5 h1:U6TCRciCqZRe4FPXmy1sMGxTfuk8P7u2UoinF3VbaFk=
github.com/ethereum/go-ethereum v1.13.5/go.mod h1:yMTu38GSuyxaYzQMViqNmQ1s3cE84abZexQmTgenWk0=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/ferranbt/fastssz v0.0.0-20210120143747-11b9eff30ea9 h1:9VDpsWq096+oGMDTT/SgBD/VgZYf4pTF+KTPmZ+OaKM=
github.com/ferranbt/fastssz v0.0.0-20210120143747-11b9eff30ea9/go.mod h1:DyEu2iuLBnb/T51BlsiO3yLYdJC6UbGMrIkqK1KmQxM=
github.com/ferranbt/fastssz v0.1.3 h1:ZI+z3JH05h4kgmFXdHuR1aWYsgrg7o+Fw7/NCzM16Mo=
github.com/ferranbt/fastssz v0.1.3/go.mod h1:0Y9TEd/9XuFlh7mskMPfXiI2Dkw4Ddg9EyXt1W7MRvE=
github.com/fjl/memsize v0.0.0-20190710130421-bcb5799ab5e5 h1:FtmdgXiUlNeRsoNMFlKLDt+S+6hbjVMEW6RGQ7aUf7c=
github.com/fjl/memsize v0.0.0-20190710130421-bcb5799ab5e5/go.mod h1:VvhXpOYNQvB+uIk2RvXzuaQtkQJzzIx6lSBe1xv7hi0=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
@@ -267,8 +264,8 @@ github.com/gballet/go-libpcsclite v0.0.0-20191108122812-4678299bea08 h1:f6D9Hr8x
github.com/gballet/go-libpcsclite v0.0.0-20191108122812-4678299bea08/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww=
github.com/getkin/kin-openapi v0.53.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4=
github.com/getkin/kin-openapi v0.61.0/go.mod h1:7Yn5whZr5kJi6t+kShccXS8ae1APpYTW6yheSwk8Yi4=
github.com/getsentry/sentry-go v0.27.0 h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps=
github.com/getsentry/sentry-go v0.27.0/go.mod h1:lc76E2QywIyW8WuBnwl8Lc4bkmQH4+w1gwTf25trprY=
github.com/getsentry/sentry-go v0.25.0 h1:q6Eo+hS+yoJlTO3uu/azhQadsD8V+jQn2D8VvX1eOyI=
github.com/getsentry/sentry-go v0.25.0/go.mod h1:lc76E2QywIyW8WuBnwl8Lc4bkmQH4+w1gwTf25trprY=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
@@ -314,6 +311,8 @@ github.com/go-sourcemap/sourcemap v2.1.3+incompatible/go.mod h1:F8jJfvm2KbVjc5Nq
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.6.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-stack/stack v1.8.1 h1:ntEHSVwIt7PNXNpgPmVfMrNhLtgjlmnZha2kOpuRiDw=
github.com/go-stack/stack v1.8.1/go.mod h1:dcoOX6HbPZSZptuspn9bctJ+N/CnF5gGygcUP3XYfe4=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
@@ -373,8 +372,8 @@ github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8l
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.5-0.20231225225746-43d5d4cd4e0e h1:4bw4WeyTYPp0smaXiJZCNnLrvVBqirQVreixayXezGc=
github.com/golang/snappy v0.0.5-0.20231225225746-43d5d4cd4e0e/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb h1:PBC98N2aIaM3XXiurYmW7fx4GZkL8feAMVq7nEjURHk=
github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangci/lint-1 v0.0.0-20181222135242-d2cdd8c08219/go.mod h1:/X8TswGSh1pIozq4ZwCfxS0WA5JGXguxk94ar/4c87Y=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -483,16 +482,16 @@ github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0m
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/herumi/bls-eth-go-binary v0.0.0-20210130185500-57372fb27371/go.mod h1:luAnRm3OsMQeokhGzpYmc0ZKwawY7o87PUEP11Z7r7U=
github.com/herumi/bls-eth-go-binary v1.31.0 h1:9eeW3EA4epCb7FIHt2luENpAW69MvKGL5jieHlBiP+w=
github.com/herumi/bls-eth-go-binary v1.31.0/go.mod h1:luAnRm3OsMQeokhGzpYmc0ZKwawY7o87PUEP11Z7r7U=
github.com/holiman/billy v0.0.0-20240216141850-2abb0c79d3c4 h1:X4egAf/gcS1zATw6wn4Ej8vjuVGxeHdan+bRb2ebyv4=
github.com/holiman/billy v0.0.0-20240216141850-2abb0c79d3c4/go.mod h1:5GuXa7vkL8u9FkFuWdVvfR5ix8hRB7DbOAaYULamFpc=
github.com/herumi/bls-eth-go-binary v0.0.0-20210917013441-d37c07cfda4e h1:wCMygKUQhmcQAjlk2Gquzq6dLmyMv2kF+llRspoRgrk=
github.com/herumi/bls-eth-go-binary v0.0.0-20210917013441-d37c07cfda4e/go.mod h1:luAnRm3OsMQeokhGzpYmc0ZKwawY7o87PUEP11Z7r7U=
github.com/holiman/billy v0.0.0-20230718173358-1c7e68d277a7 h1:3JQNjnMRil1yD0IfZKHF9GxxWKDJGj8I0IqOUol//sw=
github.com/holiman/billy v0.0.0-20230718173358-1c7e68d277a7/go.mod h1:5GuXa7vkL8u9FkFuWdVvfR5ix8hRB7DbOAaYULamFpc=
github.com/holiman/bloomfilter/v2 v2.0.3 h1:73e0e/V0tCydx14a0SCYS/EWCxgwLZ18CZcZKVu0fao=
github.com/holiman/bloomfilter/v2 v2.0.3/go.mod h1:zpoh+gs7qcpqrHr3dB55AMiJwo0iURXE7ZOP9L9hSkA=
github.com/holiman/goevmlab v0.0.0-20240515165425-8414a52dc9d4 h1:MnqrjbCnYO6ImEJiqza1vTcSFJno9cErjWG9ONXJgoc=
github.com/holiman/goevmlab v0.0.0-20240515165425-8414a52dc9d4/go.mod h1:3JURK6FtzpaEjCVxopnYN37phjHNI5gW9fhsKFF51Us=
github.com/holiman/uint256 v1.3.1 h1:JfTzmih28bittyHM8z360dCjIA9dbPIBlcTI6lmctQs=
github.com/holiman/uint256 v1.3.1/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E=
github.com/holiman/goevmlab v0.0.0-20231201084119-c73b3c97929c h1:J973NLskKmFIj3EGfpQ1ztUQKdwyJ+fG34638ief0eA=
github.com/holiman/goevmlab v0.0.0-20231201084119-c73b3c97929c/go.mod h1:K6KFgcQq1U9ksldcRyLYcwtj4nUTPn4rEaZtX4Gjofc=
github.com/holiman/uint256 v1.2.4 h1:jUc4Nk8fm9jZabQuqr2JzednajVmBpC+oiTiXZJEApU=
github.com/holiman/uint256 v1.2.4/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
@@ -537,11 +536,12 @@ github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfV
github.com/juju/ansiterm v0.0.0-20180109212912-720a0952cc2a h1:FaWFmfWdAUKbSCtOU2QjDaorUexogfaMgbipgYATUMU=
github.com/juju/ansiterm v0.0.0-20180109212912-720a0952cc2a/go.mod h1:UJSiEoRfvx3hP73CvoARgeLjaIOjybY9vj8PUPPFGeU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0 h1:U0609e9tgbseu3rBINet9P48AI/D3oJs4dN7jwJOQ1U=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213 h1:qGQQKEcAR99REcMpsXCp3lJ03zYT1PkRd3kQGPn9GVg=
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213/go.mod h1:vNUNkEQ1e29fT/6vq2aBdFsgNPmy8qMdSay1npru+Sw=
github.com/karalabe/hid v1.0.1-0.20240306101548-573246063e52 h1:msKODTL1m0wigztaqILOtla9HeW1ciscYG4xjLtvk5I=
github.com/karalabe/hid v1.0.1-0.20240306101548-573246063e52/go.mod h1:qk1sX/IBgppQNcGCRoj90u6EGC056EBoIc1oEjCWla8=
github.com/karalabe/usb v0.0.3-0.20230711191512-61db3e06439c h1:AqsttAyEyIEsNz5WLRwuRwjiT5CMDUfLk6cFJDVPebs=
github.com/karalabe/usb v0.0.3-0.20230711191512-61db3e06439c/go.mod h1:Od972xHfMJowv7NGVDiWVxk2zxnWgjLlJzE+F4F7AGU=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
@@ -630,6 +630,7 @@ github.com/mattn/go-colorable v0.0.10-0.20170816031813-ad5389df28cd/go.mod h1:9v
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.2/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
@@ -972,8 +973,8 @@ github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPx
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
@@ -1018,8 +1019,8 @@ github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXl
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/supranational/blst v0.3.13 h1:AYeSxdOMacwu7FBmpfloBz5pbFXDmJL33RuwnKtmTjk=
github.com/supranational/blst v0.3.13/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/supranational/blst v0.3.11 h1:LyU6FolezeWAhvQk0k6O/d49jqgO52MSDDfYgbeoEm4=
github.com/supranational/blst v0.3.11/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
@@ -1039,13 +1040,11 @@ github.com/trailofbits/go-mutexasserts v0.0.0-20230328101604-8cdbc5f3d279 h1:+Ly
github.com/trailofbits/go-mutexasserts v0.0.0-20230328101604-8cdbc5f3d279/go.mod h1:GA3+Mq3kt3tYAfM0WZCu7ofy+GW9PuGysHfhr+6JX7s=
github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8=
github.com/tyler-smith/go-bip39 v1.1.0/go.mod h1:gUYDtqQw1JS3ZJ8UWVcGTGqqr6YIN3CWg+kkNaLt55U=
github.com/umbracle/gohashtree v0.0.2-alpha.0.20230207094856-5b775a815c10 h1:CQh33pStIp/E30b7TxDlXfM0145bn2e8boI30IxAhTg=
github.com/umbracle/gohashtree v0.0.2-alpha.0.20230207094856-5b775a815c10/go.mod h1:x/Pa0FF5Te9kdrlZKJK82YmAkvL8+f989USgz6Jiw7M=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli/v2 v2.27.1 h1:8xSQ6szndafKVRmfyeUMxkNUJQMjL1F2zmsZ+qHpfho=
github.com/urfave/cli/v2 v2.27.1/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/urfave/cli/v2 v2.26.0 h1:3f3AMg3HpThFNT4I++TKOejZO8yU55t3JnnSr4S4QEI=
github.com/urfave/cli/v2 v2.26.0/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/uudashr/gocognit v1.0.5 h1:rrSex7oHr3/pPLQ0xoWq108XMU8s678FJcQ+aSfOHa4=
github.com/uudashr/gocognit v1.0.5/go.mod h1:wgYz0mitoKOTysqxTDMOUXg+Jb5SvtihkfmugIZYpEA=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
@@ -1055,9 +1054,8 @@ github.com/viant/assertly v0.4.8/go.mod h1:aGifi++jvCrUaklKEKT0BU95igDNaqkvz+49u
github.com/viant/toolbox v0.24.0/go.mod h1:OxMCG57V0PXuIP2HNQrtJf2CjqdmbrOx5EkMILuUhzM=
github.com/wealdtech/go-bytesutil v1.1.1 h1:ocEg3Ke2GkZ4vQw5lp46rmO+pfqCCTgq35gqOy8JKVc=
github.com/wealdtech/go-bytesutil v1.1.1/go.mod h1:jENeMqeTEU8FNZyDFRVc7KqBdRKSnJ9CCh26TcuNb9s=
github.com/wealdtech/go-eth2-types/v2 v2.5.2 h1:tiA6T88M6XQIbrV5Zz53l1G5HtRERcxQfmET225V4Ls=
github.com/wealdtech/go-eth2-types/v2 v2.5.2/go.mod h1:8lkNUbgklSQ4LZ2oMSuxSdR7WwJW3L9ge1dcoCVyzws=
github.com/wealdtech/go-eth2-types/v2 v2.8.2 h1:b5aXlNBLKgjAg/Fft9VvGlqAUCQMP5LzYhlHRrr4yPg=
github.com/wealdtech/go-eth2-types/v2 v2.8.2/go.mod h1:IAz9Lz1NVTaHabQa+4zjk2QDKMv8LVYo0n46M9o/TXw=
github.com/wealdtech/go-eth2-util v1.6.3 h1:2INPeOR35x5LdFFpSzyw954WzTD+DFyHe3yKlJnG5As=
github.com/wealdtech/go-eth2-util v1.6.3/go.mod h1:0hFMj/qtio288oZFHmAbCnPQ9OB3c4WFzs5NVPKTY4k=
github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4 v1.1.3 h1:SxrDVSr+oXuT1x8kZt4uWqNCvv5xXEGV9zd7cuSrZS8=
@@ -1098,16 +1096,18 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/otel v1.29.0 h1:PdomN/Al4q/lN6iBJEN3AwPvUiHPMlt93c8bqTG5Llw=
go.opentelemetry.io/otel v1.29.0/go.mod h1:N/WtXPs1CNCUEx+Agz5uouwCba+i+bJGFicT8SR4NP8=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0 h1:ZIg3ZT/aQ7AfKqdwp7ECpOK6vHqquXXuyTjIO8ZdmPs=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0/go.mod h1:DQAwmETtZV00skUwgD6+0U89g80NKsJE3DCKeLLPQMI=
go.opentelemetry.io/otel v1.30.0 h1:F2t8sK4qf1fAmY9ua4ohFS/K+FUuOPemHUIXHtktrts=
go.opentelemetry.io/otel v1.30.0/go.mod h1:tFw4Br9b7fOS+uEao81PJjVMjW/5fvNCbpsDIXqP0pc=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0 h1:D7UpUy2Xc2wsi1Ras6V40q806WM07rqoCWzXu7Sqy+4=
go.opentelemetry.io/otel/exporters/jaeger v1.17.0/go.mod h1:nPCqOnEH9rNLKqH/+rrUjiMzHJdV1BlpKcTwRTyKkKI=
go.opentelemetry.io/otel/metric v1.29.0 h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=
go.opentelemetry.io/otel/metric v1.29.0/go.mod h1:auu/QWieFVWx+DmQOUMgj0F8LHWdgalxXqvp7BII/W8=
go.opentelemetry.io/otel/metric v1.30.0 h1:4xNulvn9gjzo4hjg+wzIKG7iNFEaBMX00Qd4QIZs7+w=
go.opentelemetry.io/otel/metric v1.30.0/go.mod h1:aXTfST94tswhWEb+5QjlSqG+cZlmyXy/u8jFpor3WqQ=
go.opentelemetry.io/otel/sdk v1.29.0 h1:vkqKjk7gwhS8VaWb0POZKmIEDimRCMsopNYnriHyryo=
go.opentelemetry.io/otel/sdk v1.29.0/go.mod h1:pM8Dx5WKnvxLCb+8lG1PRNIDxu9g9b9g59Qr7hfAAok=
go.opentelemetry.io/otel/trace v1.29.0 h1:J/8ZNK4XgR7a21DZUAsbF8pZ5Jcw1VhACmnYt39JTi4=
go.opentelemetry.io/otel/trace v1.29.0/go.mod h1:eHl3w0sp3paPkYstJOmAimxhiFXPg+MMTlEh3nsQgWQ=
go.opentelemetry.io/otel/trace v1.30.0 h1:7UBkkYzeg3C7kQX8VAidWh2biiQbtAKjyIML8dQ9wmc=
go.opentelemetry.io/otel/trace v1.30.0/go.mod h1:5EyKqTzzmyqB9bwtCCq6pDLktPK6fmGf/Dph+8VI02o=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
@@ -1634,8 +1634,8 @@ gopkg.in/jcmturner/dnsutils.v1 v1.0.1/go.mod h1:m3v+5svpVOhtFAP/wSz+yzh4Mc0Fg7eR
gopkg.in/jcmturner/goidentity.v3 v3.0.0/go.mod h1:oG2kH0IvSYNIu80dVAyu/yoefjq1mNfM5bm88whjWx4=
gopkg.in/jcmturner/gokrb5.v7 v7.5.0/go.mod h1:l8VISx+WGYp+Fp7KRbsiUuXTTOnxIc3Tuvyavf11/WM=
gopkg.in/jcmturner/rpc.v1 v1.1.0/go.mod h1:YIdkC4XfD6GXbzje11McwsDuOlZQSb9W4vfLvuNnlv8=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXLknAOE8=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/redis.v4 v4.2.4/go.mod h1:8KREHdypkCEojGKQcjMqAODMICIVwZAONWq8RowTITA=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=

View File

@@ -48,7 +48,7 @@ ssz_gen_marshal(
"WithdrawalRequest",
"DepositRequest",
"ConsolidationRequest",
"ExecutionRequests",
"ExecutionRequests",
],
)
@@ -74,7 +74,7 @@ go_proto_library(
go_library(
name = "go_default_library",
srcs = [
"electra.go",
"electra.go",
"execution_engine.go",
"json_marshal_unmarshal.go",
":ssz_generated_files", # keep

View File

@@ -1,118 +1,4 @@
package enginev1
import (
"fmt"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
type ExecutionPayloadElectra = ExecutionPayloadDeneb
type ExecutionPayloadHeaderElectra = ExecutionPayloadHeaderDeneb
var (
drExample = &DepositRequest{}
drSize = drExample.SizeSSZ()
wrExample = &WithdrawalRequest{}
wrSize = wrExample.SizeSSZ()
crExample = &ConsolidationRequest{}
crSize = crExample.SizeSSZ()
)
const LenExecutionRequestsElectra = 3
func (ebe *ExecutionBundleElectra) GetDecodedExecutionRequests() (*ExecutionRequests, error) {
requests := &ExecutionRequests{}
if len(ebe.ExecutionRequests) != LenExecutionRequestsElectra /* types of requests */ {
return nil, errors.Errorf("invalid execution request size: %d", len(ebe.ExecutionRequests))
}
// deposit requests
drs, err := unmarshalItems(ebe.ExecutionRequests[0], drSize, func() *DepositRequest { return &DepositRequest{} })
if err != nil {
return nil, err
}
requests.Deposits = drs
// withdrawal requests
wrs, err := unmarshalItems(ebe.ExecutionRequests[1], wrSize, func() *WithdrawalRequest { return &WithdrawalRequest{} })
if err != nil {
return nil, err
}
requests.Withdrawals = wrs
// consolidation requests
crs, err := unmarshalItems(ebe.ExecutionRequests[2], crSize, func() *ConsolidationRequest { return &ConsolidationRequest{} })
if err != nil {
return nil, err
}
requests.Consolidations = crs
return requests, nil
}
func EncodeExecutionRequests(requests *ExecutionRequests) ([]hexutil.Bytes, error) {
if requests == nil {
return nil, errors.New("invalid execution requests")
}
drBytes, err := marshalItems(requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "failed to marshal deposit requests")
}
wrBytes, err := marshalItems(requests.Withdrawals)
if err != nil {
return nil, errors.Wrap(err, "failed to marshal withdrawal requests")
}
crBytes, err := marshalItems(requests.Consolidations)
if err != nil {
return nil, errors.Wrap(err, "failed to marshal consolidation requests")
}
return []hexutil.Bytes{drBytes, wrBytes, crBytes}, nil
}
type sszUnmarshaler interface {
UnmarshalSSZ([]byte) error
}
type sszMarshaler interface {
MarshalSSZTo(buf []byte) ([]byte, error)
SizeSSZ() int
}
func marshalItems[T sszMarshaler](items []T) ([]byte, error) {
if len(items) == 0 {
return []byte{}, nil
}
size := items[0].SizeSSZ()
buf := make([]byte, 0, size*len(items))
var err error
for i, item := range items {
buf, err = item.MarshalSSZTo(buf)
if err != nil {
return nil, fmt.Errorf("failed to marshal item at index %d: %w", i, err)
}
}
return buf, nil
}
// Generic function to unmarshal items
func unmarshalItems[T sszUnmarshaler](data []byte, itemSize int, newItem func() T) ([]T, error) {
if len(data)%itemSize != 0 {
return nil, fmt.Errorf("invalid data length: data size (%d) is not a multiple of item size (%d)", len(data), itemSize)
}
numItems := len(data) / itemSize
items := make([]T, numItems)
for i := range items {
itemBytes := data[i*itemSize : (i+1)*itemSize]
item := newItem()
if err := item.UnmarshalSSZ(itemBytes); err != nil {
return nil, fmt.Errorf("failed to unmarshal item at index %d: %w", i, err)
}
items[i] = item
}
return items, nil
}
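
The removed helpers above pack fixed-size SSZ items back to back into a single byte slice and slice them apart again on decode, which is how the deposit, withdrawal, and consolidation request lists in this code are encoded and decoded. A self-contained sketch of that round trip with a toy 12-byte item (not one of Prysm's request types):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    type sszMarshaler interface {
        MarshalSSZTo(buf []byte) ([]byte, error)
        SizeSSZ() int
    }

    type sszUnmarshaler interface{ UnmarshalSSZ([]byte) error }

    // item is a toy fixed-size object: an 8-byte amount and a 4-byte index.
    type item struct {
        amount uint64
        index  uint32
    }

    func (i *item) SizeSSZ() int { return 12 }

    func (i *item) MarshalSSZTo(buf []byte) ([]byte, error) {
        buf = binary.LittleEndian.AppendUint64(buf, i.amount)
        buf = binary.LittleEndian.AppendUint32(buf, i.index)
        return buf, nil
    }

    func (i *item) UnmarshalSSZ(b []byte) error {
        if len(b) != i.SizeSSZ() {
            return fmt.Errorf("expected %d bytes, got %d", i.SizeSSZ(), len(b))
        }
        i.amount = binary.LittleEndian.Uint64(b[:8])
        i.index = binary.LittleEndian.Uint32(b[8:12])
        return nil
    }

    // marshalItems concatenates fixed-size items into one flat byte slice.
    func marshalItems[T sszMarshaler](items []T) ([]byte, error) {
        if len(items) == 0 {
            return []byte{}, nil
        }
        buf := make([]byte, 0, items[0].SizeSSZ()*len(items))
        var err error
        for i, it := range items {
            if buf, err = it.MarshalSSZTo(buf); err != nil {
                return nil, fmt.Errorf("item %d: %w", i, err)
            }
        }
        return buf, nil
    }

    // unmarshalItems splits a flat byte slice back into fixed-size items.
    func unmarshalItems[T sszUnmarshaler](data []byte, itemSize int, newItem func() T) ([]T, error) {
        if len(data)%itemSize != 0 {
            return nil, fmt.Errorf("data size %d is not a multiple of item size %d", len(data), itemSize)
        }
        out := make([]T, len(data)/itemSize)
        for i := range out {
            it := newItem()
            if err := it.UnmarshalSSZ(data[i*itemSize : (i+1)*itemSize]); err != nil {
                return nil, fmt.Errorf("item %d: %w", i, err)
            }
            out[i] = it
        }
        return out, nil
    }

    func main() {
        in := []*item{{amount: 32, index: 1}, {amount: 64, index: 2}}
        buf, err := marshalItems(in)
        if err != nil {
            panic(err)
        }
        out, err := unmarshalItems(buf, (&item{}).SizeSSZ(), func() *item { return &item{} })
        if err != nil {
            panic(err)
        }
        fmt.Println(len(buf), out[0].amount, out[1].index) // 24 32 2
    }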

View File

@@ -290,85 +290,6 @@ func (x *ExecutionRequests) GetConsolidations() []*ConsolidationRequest {
return nil
}
type ExecutionBundleElectra struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Payload *ExecutionPayloadDeneb `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
BlobsBundle *BlobsBundle `protobuf:"bytes,3,opt,name=blobs_bundle,json=blobsBundle,proto3" json:"blobs_bundle,omitempty"`
ShouldOverrideBuilder bool `protobuf:"varint,4,opt,name=should_override_builder,json=shouldOverrideBuilder,proto3" json:"should_override_builder,omitempty"`
ExecutionRequests [][]byte `protobuf:"bytes,5,rep,name=execution_requests,json=executionRequests,proto3" json:"execution_requests,omitempty"`
}
func (x *ExecutionBundleElectra) Reset() {
*x = ExecutionBundleElectra{}
if protoimpl.UnsafeEnabled {
mi := &file_proto_engine_v1_electra_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExecutionBundleElectra) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExecutionBundleElectra) ProtoMessage() {}
func (x *ExecutionBundleElectra) ProtoReflect() protoreflect.Message {
mi := &file_proto_engine_v1_electra_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExecutionBundleElectra.ProtoReflect.Descriptor instead.
func (*ExecutionBundleElectra) Descriptor() ([]byte, []int) {
return file_proto_engine_v1_electra_proto_rawDescGZIP(), []int{4}
}
func (x *ExecutionBundleElectra) GetPayload() *ExecutionPayloadDeneb {
if x != nil {
return x.Payload
}
return nil
}
func (x *ExecutionBundleElectra) GetValue() []byte {
if x != nil {
return x.Value
}
return nil
}
func (x *ExecutionBundleElectra) GetBlobsBundle() *BlobsBundle {
if x != nil {
return x.BlobsBundle
}
return nil
}
func (x *ExecutionBundleElectra) GetShouldOverrideBuilder() bool {
if x != nil {
return x.ShouldOverrideBuilder
}
return false
}
func (x *ExecutionBundleElectra) GetExecutionRequests() [][]byte {
if x != nil {
return x.ExecutionRequests
}
return nil
}
var File_proto_engine_v1_electra_proto protoreflect.FileDescriptor
var file_proto_engine_v1_electra_proto_rawDesc = []byte{
@@ -377,85 +298,64 @@ var file_proto_engine_v1_electra_proto_rawDesc = []byte{
0x12, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
0x2e, 0x76, 0x31, 0x1a, 0x1b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x74, 0x68, 0x2f, 0x65,
0x78, 0x74, 0x2f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x1a, 0x26, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76,
0x31, 0x2f, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x65, 0x6e, 0x67, 0x69,
0x6e, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x8d, 0x01, 0x0a, 0x11, 0x57, 0x69, 0x74,
0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2d,
0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x0d,
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x31, 0x0a,
0x10, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65,
0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52,
0x0f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79,
0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04,
0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0xc3, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70,
0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1e, 0x0a, 0x06, 0x70,
0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18,
0x02, 0x34, 0x38, 0x52, 0x06, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x3d, 0x0a, 0x16, 0x77,
0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x5f, 0x63, 0x72, 0x65, 0x64, 0x65, 0x6e,
0x74, 0x69, 0x61, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18,
0x02, 0x33, 0x32, 0x52, 0x15, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x43,
0x72, 0x65, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d,
0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75,
0x6e, 0x74, 0x12, 0x24, 0x0a, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18,
0x04, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x39, 0x36, 0x52, 0x09, 0x73,
0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x69, 0x6e, 0x64, 0x65,
0x78, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x22, 0x9f,
0x01, 0x0a, 0x14, 0x43, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2d, 0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63,
0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42,
0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41,
0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x2b, 0x0a, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a,
0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0c, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x50, 0x75, 0x62,
0x6b, 0x65, 0x79, 0x12, 0x2b, 0x0a, 0x0d, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, 0x70, 0x75,
0x62, 0x6b, 0x65, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02,
0x34, 0x38, 0x52, 0x0c, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79,
0x22, 0x87, 0x02, 0x0a, 0x11, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x12, 0x48, 0x0a, 0x08, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69,
0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72,
0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65,
0x70, 0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x08, 0x92, 0xb5,
0x18, 0x04, 0x38, 0x31, 0x39, 0x32, 0x52, 0x08, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x73,
0x12, 0x4f, 0x0a, 0x0b, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x73, 0x18,
0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d,
0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x57, 0x69, 0x74, 0x68, 0x64,
0x72, 0x61, 0x77, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x06, 0x92, 0xb5,
0x18, 0x02, 0x31, 0x36, 0x52, 0x0b, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c,
0x73, 0x12, 0x57, 0x0a, 0x0e, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x28, 0x2e, 0x65, 0x74, 0x68, 0x65,
0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43,
0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x42, 0x05, 0x92, 0xb5, 0x18, 0x01, 0x31, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x73,
0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x9e, 0x02, 0x0a, 0x16, 0x45,
0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x45, 0x6c,
0x65, 0x63, 0x74, 0x72, 0x61, 0x12, 0x43, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x65, 0x63,
0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x44, 0x65, 0x6e, 0x65,
0x62, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61,
0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
0x12, 0x42, 0x0a, 0x0c, 0x62, 0x6c, 0x6f, 0x62, 0x73, 0x5f, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65,
0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6f, 0x62,
0x73, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x52, 0x0b, 0x62, 0x6c, 0x6f, 0x62, 0x73, 0x42, 0x75,
0x6e, 0x64, 0x6c, 0x65, 0x12, 0x36, 0x0a, 0x17, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x5f, 0x6f,
0x76, 0x65, 0x72, 0x72, 0x69, 0x64, 0x65, 0x5f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x18,
0x04, 0x20, 0x01, 0x28, 0x08, 0x52, 0x15, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x4f, 0x76, 0x65,
0x72, 0x72, 0x69, 0x64, 0x65, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x2d, 0x0a, 0x12,
0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74,
0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x42, 0x8e, 0x01, 0x0a, 0x16,
0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0c, 0x45, 0x6c, 0x65, 0x63, 0x74, 0x72, 0x61, 0x50,
0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3a, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73,
0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f,
0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
0x76, 0x31, 0xaa, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x6e,
0x67, 0x69, 0x6e, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65,
0x75, 0x6d, 0x5c, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x5c, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
0x22, 0x8d, 0x01, 0x0a, 0x11, 0x57, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2d, 0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06,
0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x64,
0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x31, 0x0a, 0x10, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74,
0x6f, 0x72, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42,
0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74,
0x6f, 0x72, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75,
0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74,
0x22, 0xc3, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x12, 0x1e, 0x0a, 0x06, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x06, 0x70, 0x75, 0x62,
0x6b, 0x65, 0x79, 0x12, 0x3d, 0x0a, 0x16, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61,
0x6c, 0x5f, 0x63, 0x72, 0x65, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x73, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x15, 0x77, 0x69, 0x74,
0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x43, 0x72, 0x65, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x61,
0x6c, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01,
0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x24, 0x0a, 0x09, 0x73, 0x69,
0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a,
0xb5, 0x18, 0x02, 0x39, 0x36, 0x52, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65,
0x12, 0x14, 0x0a, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52,
0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x22, 0x9f, 0x01, 0x0a, 0x14, 0x43, 0x6f, 0x6e, 0x73, 0x6f,
0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12,
0x2d, 0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73,
0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52,
0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x2b,
0x0a, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18,
0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0c, 0x73,
0x6f, 0x75, 0x72, 0x63, 0x65, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x2b, 0x0a, 0x0d, 0x74,
0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x03, 0x20, 0x01,
0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0c, 0x74, 0x61, 0x72, 0x67,
0x65, 0x74, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x22, 0x87, 0x02, 0x0a, 0x11, 0x45, 0x78, 0x65,
0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x12, 0x48,
0x0a, 0x08, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b,
0x32, 0x22, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69,
0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x42, 0x08, 0x92, 0xb5, 0x18, 0x04, 0x38, 0x31, 0x39, 0x32, 0x52, 0x08,
0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x73, 0x12, 0x4f, 0x0a, 0x0b, 0x77, 0x69, 0x74, 0x68,
0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e,
0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e,
0x76, 0x31, 0x2e, 0x57, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x42, 0x06, 0x92, 0xb5, 0x18, 0x02, 0x31, 0x36, 0x52, 0x0b, 0x77, 0x69,
0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x73, 0x12, 0x57, 0x0a, 0x0e, 0x63, 0x6f, 0x6e,
0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28,
0x0b, 0x32, 0x28, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61,
0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x05, 0x92, 0xb5, 0x18,
0x01, 0x31, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x73, 0x42, 0x8e, 0x01, 0x0a, 0x16, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72,
0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0c, 0x45,
0x6c, 0x65, 0x63, 0x74, 0x72, 0x61, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3a, 0x67,
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61,
0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76, 0x31,
0x3b, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x76, 0x31, 0xaa, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65,
0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02,
0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65,
0x5c, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -470,27 +370,22 @@ func file_proto_engine_v1_electra_proto_rawDescGZIP() []byte {
return file_proto_engine_v1_electra_proto_rawDescData
}
var file_proto_engine_v1_electra_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_proto_engine_v1_electra_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
var file_proto_engine_v1_electra_proto_goTypes = []interface{}{
(*WithdrawalRequest)(nil), // 0: ethereum.engine.v1.WithdrawalRequest
(*DepositRequest)(nil), // 1: ethereum.engine.v1.DepositRequest
(*ConsolidationRequest)(nil), // 2: ethereum.engine.v1.ConsolidationRequest
(*ExecutionRequests)(nil), // 3: ethereum.engine.v1.ExecutionRequests
(*ExecutionBundleElectra)(nil), // 4: ethereum.engine.v1.ExecutionBundleElectra
(*ExecutionPayloadDeneb)(nil), // 5: ethereum.engine.v1.ExecutionPayloadDeneb
(*BlobsBundle)(nil), // 6: ethereum.engine.v1.BlobsBundle
(*WithdrawalRequest)(nil), // 0: ethereum.engine.v1.WithdrawalRequest
(*DepositRequest)(nil), // 1: ethereum.engine.v1.DepositRequest
(*ConsolidationRequest)(nil), // 2: ethereum.engine.v1.ConsolidationRequest
(*ExecutionRequests)(nil), // 3: ethereum.engine.v1.ExecutionRequests
}
var file_proto_engine_v1_electra_proto_depIdxs = []int32{
1, // 0: ethereum.engine.v1.ExecutionRequests.deposits:type_name -> ethereum.engine.v1.DepositRequest
0, // 1: ethereum.engine.v1.ExecutionRequests.withdrawals:type_name -> ethereum.engine.v1.WithdrawalRequest
2, // 2: ethereum.engine.v1.ExecutionRequests.consolidations:type_name -> ethereum.engine.v1.ConsolidationRequest
5, // 3: ethereum.engine.v1.ExecutionBundleElectra.payload:type_name -> ethereum.engine.v1.ExecutionPayloadDeneb
6, // 4: ethereum.engine.v1.ExecutionBundleElectra.blobs_bundle:type_name -> ethereum.engine.v1.BlobsBundle
5, // [5:5] is the sub-list for method output_type
5, // [5:5] is the sub-list for method input_type
5, // [5:5] is the sub-list for extension type_name
5, // [5:5] is the sub-list for extension extendee
0, // [0:5] is the sub-list for field type_name
3, // [3:3] is the sub-list for method output_type
3, // [3:3] is the sub-list for method input_type
3, // [3:3] is the sub-list for extension type_name
3, // [3:3] is the sub-list for extension extendee
0, // [0:3] is the sub-list for field type_name
}
func init() { file_proto_engine_v1_electra_proto_init() }
@@ -498,7 +393,6 @@ func file_proto_engine_v1_electra_proto_init() {
if File_proto_engine_v1_electra_proto != nil {
return
}
file_proto_engine_v1_execution_engine_proto_init()
if !protoimpl.UnsafeEnabled {
file_proto_engine_v1_electra_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*WithdrawalRequest); i {
@@ -548,18 +442,6 @@ func file_proto_engine_v1_electra_proto_init() {
return nil
}
}
file_proto_engine_v1_electra_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExecutionBundleElectra); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
@@ -567,7 +449,7 @@ func file_proto_engine_v1_electra_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_proto_engine_v1_electra_proto_rawDesc,
NumEnums: 0,
NumMessages: 5,
NumMessages: 4,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -16,7 +16,6 @@ syntax = "proto3";
package ethereum.engine.v1;
import "proto/eth/ext/options.proto";
import "proto/engine/v1/execution_engine.proto";
option csharp_namespace = "Ethereum.Engine.V1";
option go_package = "github.com/prysmaticlabs/prysm/v5/proto/engine/v1;enginev1";
@@ -61,16 +60,7 @@ message ConsolidationRequest {
// ExecutionRequests is a container that contains all the requests from the execution layer to be included in a block
message ExecutionRequests {
repeated DepositRequest deposits = 1 [(ethereum.eth.ext.ssz_max) = "max_deposit_requests_per_payload.size"];
repeated WithdrawalRequest withdrawals = 2 [(ethereum.eth.ext.ssz_max) = "max_withdrawal_requests_per_payload.size"];
repeated ConsolidationRequest consolidations = 3 [(ethereum.eth.ext.ssz_max) = "max_consolidation_requests_per_payload.size"];
}
// ExecutionBundleElectra is a container that builds on Payload V4 and includes execution requests sidecar needed post Electra
message ExecutionBundleElectra {
ExecutionPayloadDeneb payload = 1;
bytes value = 2;
BlobsBundle blobs_bundle = 3;
bool should_override_builder = 4;
repeated bytes execution_requests = 5;
}

View File

@@ -1,56 +0,0 @@
package enginev1_test
import (
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
var depositRequestsSSZHex = "0x706b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000077630000000000000000000000000000000000000000000000000000000000007b00000000000000736967000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c801000000000000706b00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000776300000000000000000000000000000000000000000000000000000000000090010000000000007369670000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000000000000000"
func TestUnmarshalItems_OK(t *testing.T) {
drb, err := hexutil.Decode(depositRequestsSSZHex)
require.NoError(t, err)
exampleRequest := &enginev1.DepositRequest{}
depositRequests, err := enginev1.UnmarshalItems(drb, exampleRequest.SizeSSZ(), func() *enginev1.DepositRequest { return &enginev1.DepositRequest{} })
require.NoError(t, err)
exampleRequest1 := &enginev1.DepositRequest{
Pubkey: bytesutil.PadTo([]byte("pk"), 48),
WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
Amount: 123,
Signature: bytesutil.PadTo([]byte("sig"), 96),
Index: 456,
}
exampleRequest2 := &enginev1.DepositRequest{
Pubkey: bytesutil.PadTo([]byte("pk"), 48),
WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
Amount: 400,
Signature: bytesutil.PadTo([]byte("sig"), 96),
Index: 32,
}
require.DeepEqual(t, depositRequests, []*enginev1.DepositRequest{exampleRequest1, exampleRequest2})
}
func TestMarshalItems_OK(t *testing.T) {
exampleRequest1 := &enginev1.DepositRequest{
Pubkey: bytesutil.PadTo([]byte("pk"), 48),
WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
Amount: 123,
Signature: bytesutil.PadTo([]byte("sig"), 96),
Index: 456,
}
exampleRequest2 := &enginev1.DepositRequest{
Pubkey: bytesutil.PadTo([]byte("pk"), 48),
WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
Amount: 400,
Signature: bytesutil.PadTo([]byte("sig"), 96),
Index: 32,
}
drbs, err := enginev1.MarshalItems([]*enginev1.DepositRequest{exampleRequest1, exampleRequest2})
require.NoError(t, err)
require.DeepEqual(t, depositRequestsSSZHex, hexutil.Encode(drbs))
}
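// A quick size check on the fixture above (my arithmetic, not from the diff), matching the
// paddings used by the test: a DepositRequest is fixed-size SSZ,
// pubkey(48) + withdrawal_credentials(32) + amount(8) + signature(96) + index(8) = 192 bytes,
// so the two encoded requests occupy 2*192 = 384 bytes of SSZ data.
const depositRequestSSZSize = 48 + 32 + 8 + 96 + 8 // 192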

View File

@@ -1,11 +1,3 @@
package enginev1
type Copier[T any] copier[T]
func MarshalItems[T sszMarshaler](items []T) ([]byte, error) {
return marshalItems(items)
}
func UnmarshalItems[T sszUnmarshaler](data []byte, itemSize int, newItem func() T) ([]T, error) {
return unmarshalItems(data, itemSize, newItem)
}

View File

@@ -297,14 +297,6 @@ type GetPayloadV3ResponseJson struct {
ShouldOverrideBuilder bool `json:"shouldOverrideBuilder"`
}
type GetPayloadV4ResponseJson struct {
ExecutionPayload *ExecutionPayloadDenebJSON `json:"executionPayload"`
BlockValue string `json:"blockValue"`
BlobsBundle *BlobBundleJSON `json:"blobsBundle"`
ShouldOverrideBuilder bool `json:"shouldOverrideBuilder"`
ExecutionRequests []hexutil.Bytes `json:"executionRequests"`
}
// ExecutionPayloadBody represents the engine API ExecutionPayloadV1 or ExecutionPayloadV2 type.
type ExecutionPayloadBody struct {
Transactions []hexutil.Bytes `json:"transactions"`
@@ -1120,137 +1112,6 @@ func (e *ExecutionPayloadDenebWithValueAndBlobsBundle) UnmarshalJSON(enc []byte)
return nil
}
func (e *ExecutionBundleElectra) UnmarshalJSON(enc []byte) error {
dec := GetPayloadV4ResponseJson{}
if err := json.Unmarshal(enc, &dec); err != nil {
return err
}
if dec.ExecutionPayload.ParentHash == nil {
return errors.New("missing required field 'parentHash' for ExecutionPayload")
}
if dec.ExecutionPayload.FeeRecipient == nil {
return errors.New("missing required field 'feeRecipient' for ExecutionPayload")
}
if dec.ExecutionPayload.StateRoot == nil {
return errors.New("missing required field 'stateRoot' for ExecutionPayload")
}
if dec.ExecutionPayload.ReceiptsRoot == nil {
return errors.New("missing required field 'receiptsRoot' for ExecutableDataV1")
}
if dec.ExecutionPayload.LogsBloom == nil {
return errors.New("missing required field 'logsBloom' for ExecutionPayload")
}
if dec.ExecutionPayload.PrevRandao == nil {
return errors.New("missing required field 'prevRandao' for ExecutionPayload")
}
if dec.ExecutionPayload.ExtraData == nil {
return errors.New("missing required field 'extraData' for ExecutionPayload")
}
if dec.ExecutionPayload.BlockHash == nil {
return errors.New("missing required field 'blockHash' for ExecutionPayload")
}
if dec.ExecutionPayload.Transactions == nil {
return errors.New("missing required field 'transactions' for ExecutionPayload")
}
if dec.ExecutionPayload.BlockNumber == nil {
return errors.New("missing required field 'blockNumber' for ExecutionPayload")
}
if dec.ExecutionPayload.Timestamp == nil {
return errors.New("missing required field 'timestamp' for ExecutionPayload")
}
if dec.ExecutionPayload.GasUsed == nil {
return errors.New("missing required field 'gasUsed' for ExecutionPayload")
}
if dec.ExecutionPayload.GasLimit == nil {
return errors.New("missing required field 'gasLimit' for ExecutionPayload")
}
if dec.ExecutionPayload.BlobGasUsed == nil {
return errors.New("missing required field 'blobGasUsed' for ExecutionPayload")
}
if dec.ExecutionPayload.ExcessBlobGas == nil {
return errors.New("missing required field 'excessBlobGas' for ExecutionPayload")
}
*e = ExecutionBundleElectra{Payload: &ExecutionPayloadDeneb{}}
e.Payload.ParentHash = dec.ExecutionPayload.ParentHash.Bytes()
e.Payload.FeeRecipient = dec.ExecutionPayload.FeeRecipient.Bytes()
e.Payload.StateRoot = dec.ExecutionPayload.StateRoot.Bytes()
e.Payload.ReceiptsRoot = dec.ExecutionPayload.ReceiptsRoot.Bytes()
e.Payload.LogsBloom = *dec.ExecutionPayload.LogsBloom
e.Payload.PrevRandao = dec.ExecutionPayload.PrevRandao.Bytes()
e.Payload.BlockNumber = uint64(*dec.ExecutionPayload.BlockNumber)
e.Payload.GasLimit = uint64(*dec.ExecutionPayload.GasLimit)
e.Payload.GasUsed = uint64(*dec.ExecutionPayload.GasUsed)
e.Payload.Timestamp = uint64(*dec.ExecutionPayload.Timestamp)
e.Payload.ExtraData = dec.ExecutionPayload.ExtraData
baseFee, err := hexutil.DecodeBig(dec.ExecutionPayload.BaseFeePerGas)
if err != nil {
return err
}
e.Payload.BaseFeePerGas = bytesutil.PadTo(bytesutil.ReverseByteOrder(baseFee.Bytes()), fieldparams.RootLength)
e.Payload.ExcessBlobGas = uint64(*dec.ExecutionPayload.ExcessBlobGas)
e.Payload.BlobGasUsed = uint64(*dec.ExecutionPayload.BlobGasUsed)
e.Payload.BlockHash = dec.ExecutionPayload.BlockHash.Bytes()
transactions := make([][]byte, len(dec.ExecutionPayload.Transactions))
for i, tx := range dec.ExecutionPayload.Transactions {
transactions[i] = tx
}
e.Payload.Transactions = transactions
if dec.ExecutionPayload.Withdrawals == nil {
dec.ExecutionPayload.Withdrawals = make([]*Withdrawal, 0)
}
e.Payload.Withdrawals = dec.ExecutionPayload.Withdrawals
v, err := hexutil.DecodeBig(dec.BlockValue)
if err != nil {
return err
}
e.Value = bytesutil.PadTo(bytesutil.ReverseByteOrder(v.Bytes()), fieldparams.RootLength)
if dec.BlobsBundle == nil {
return nil
}
e.BlobsBundle = &BlobsBundle{}
commitments := make([][]byte, len(dec.BlobsBundle.Commitments))
for i, kzg := range dec.BlobsBundle.Commitments {
k := kzg
commitments[i] = bytesutil.PadTo(k[:], fieldparams.BLSPubkeyLength)
}
e.BlobsBundle.KzgCommitments = commitments
proofs := make([][]byte, len(dec.BlobsBundle.Proofs))
for i, proof := range dec.BlobsBundle.Proofs {
p := proof
proofs[i] = bytesutil.PadTo(p[:], fieldparams.BLSPubkeyLength)
}
e.BlobsBundle.Proofs = proofs
blobs := make([][]byte, len(dec.BlobsBundle.Blobs))
for i, blob := range dec.BlobsBundle.Blobs {
b := make([]byte, fieldparams.BlobLength)
copy(b, blob)
blobs[i] = b
}
e.BlobsBundle.Blobs = blobs
e.ShouldOverrideBuilder = dec.ShouldOverrideBuilder
requests := make([][]byte, len(dec.ExecutionRequests))
for i, request := range dec.ExecutionRequests {
r := make([]byte, len(request))
copy(r, request)
requests[i] = r
}
e.ExecutionRequests = requests
return nil
}
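// A standalone sketch, not part of this diff: blockValue and baseFeePerGas arrive as big-endian
// hex quantities, while the protobuf fields hold 32-byte little-endian values, which is why the
// decoder above combines ReverseByteOrder with PadTo. A plain-Go equivalent, no Prysm helpers:
package main

import (
    "fmt"
    "math/big"
)

// toLittleEndian32 reverses the minimal big-endian big.Int bytes and leaves the remaining
// high-order bytes zero, yielding a 32-byte little-endian representation.
func toLittleEndian32(v *big.Int) []byte {
    be := v.Bytes() // minimal big-endian encoding
    le := make([]byte, 32)
    for i, b := range be {
        le[len(be)-1-i] = b
    }
    return le
}

func main() {
    // e.g. an engine API blockValue of "0x102" (258 wei)
    fmt.Printf("%x\n", toLittleEndian32(big.NewInt(0x102))) // 02010000...00
}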
// RecastHexutilByteSlice converts a []hexutil.Bytes to a [][]byte
func RecastHexutilByteSlice(h []hexutil.Bytes) [][]byte {
r := make([][]byte, len(h))

View File

@@ -607,9 +607,6 @@ func TestJsonMarshalUnmarshal(t *testing.T) {
// Expect no transaction objects in the unmarshaled data.
require.Equal(t, 0, len(payloadPb.Transactions))
})
t.Run("execution bundle electra with deneb payload, blob data, and execution requests", func(t *testing.T) {
// TODO #14351: update this test when geth updates
})
}
func TestPayloadIDBytes_MarshalUnmarshalJSON(t *testing.T) {

View File

@@ -18,6 +18,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/runtime/debug",
visibility = ["//visibility:public"],
deps = [
"@com_github_fjl_memsize//memsizeui:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -37,6 +37,7 @@ import (
"sync"
"time"
"github.com/fjl/memsize/memsizeui"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
)
@@ -44,6 +45,8 @@ import (
// Handler is the global debugging handler.
var Handler = new(HandlerT)
// Memsize is the memsizeui handler serving interactive memory-usage reports.
var Memsize memsizeui.Handler
var (
// PProfFlag to enable pprof HTTP server.
PProfFlag = &cli.BoolFlag{
@@ -348,6 +351,7 @@ func Setup(ctx *cli.Context) error {
}
func startPProf(address string) {
http.Handle("/memsize/", http.StripPrefix("/memsize", &Memsize))
log.WithField("addr", fmt.Sprintf("http://%s/debug/pprof", address)).Info("Starting pprof server")
go func() {
srv := &http.Server{

View File

@@ -138,8 +138,7 @@ func StringContains(loggerFn assertionLoggerFn, expected, actual string, flag bo
// NoError asserts that error is nil.
func NoError(loggerFn assertionLoggerFn, err error, msg ...interface{}) {
// reflect.ValueOf is needed for nil instances of custom types implementing Error
v := reflect.ValueOf(err)
if err != nil && !(v.Kind() == reflect.Ptr && v.IsNil()) {
if err != nil && !reflect.ValueOf(err).IsNil() {
errMsg := parseMsg("Unexpected error", msg...)
_, file, line, _ := runtime.Caller(2)
loggerFn("%s:%d %s: %v", filepath.Base(file), line, errMsg, err)

View File

@@ -342,17 +342,17 @@ func EncodeBlobs(data []byte) ([]kzg4844.Blob, []kzg4844.Commitment, []kzg4844.P
versionedHashes []common.Hash
)
for _, blob := range blobs {
commit, err := kzg4844.BlobToCommitment(&blob)
commit, err := kzg4844.BlobToCommitment(blob)
if err != nil {
return nil, nil, nil, nil, err
}
commits = append(commits, commit)
proof, err := kzg4844.ComputeBlobProof(&blob, commit)
proof, err := kzg4844.ComputeBlobProof(blob, commit)
if err != nil {
return nil, nil, nil, nil, err
}
if err := kzg4844.VerifyBlobProof(&blob, commit, proof); err != nil {
if err := kzg4844.VerifyBlobProof(blob, commit, proof); err != nil {
return nil, nil, nil, nil, err
}
proofs = append(proofs, proof)

View File

@@ -12,6 +12,7 @@ import (
_ "github.com/ethereum/go-ethereum/eth" // Required for go-ethereum e2e.
_ "github.com/ethereum/go-ethereum/eth/downloader" // Required for go-ethereum e2e.
_ "github.com/ethereum/go-ethereum/ethclient" // Required for go-ethereum e2e.
_ "github.com/ethereum/go-ethereum/les" // Required for go-ethereum e2e.
_ "github.com/ethereum/go-ethereum/log" // Required for go-ethereum e2e.
_ "github.com/ethereum/go-ethereum/metrics" // Required for go-ethereum e2e.
_ "github.com/ethereum/go-ethereum/node" // Required for go-ethereum e2e.

View File

@@ -798,7 +798,7 @@ func executableDataToBlock(params engine.ExecutableData, prevBeaconRoot []byte)
pRoot := common.Hash(prevBeaconRoot)
header.ParentBeaconRoot = &pRoot
}
block := gethTypes.NewBlockWithHeader(header).WithBody(gethTypes.Body{Transactions: txs, Uncles: nil, Withdrawals: params.Withdrawals})
block := gethTypes.NewBlockWithHeader(header).WithBody(txs, nil /* uncles */).WithWithdrawals(params.Withdrawals)
return block, nil
}

View File

@@ -7,5 +7,6 @@ import (
)
func TestMainnet_Electra_EpochProcessing_PendingConsolidations(t *testing.T) {
t.Skip("TODO: add back in after all spec test features are in.")
epoch_processing.RunPendingConsolidationsTests(t, "mainnet")
}

View File

@@ -7,5 +7,6 @@ import (
)
func TestMainnet_Electra_EpochProcessing_Slashings(t *testing.T) {
t.Skip("slashing processing missing")
epoch_processing.RunSlashingsTests(t, "mainnet")
}

View File

@@ -7,5 +7,6 @@ import (
)
func TestMainnet_Electra_Operations_Consolidation(t *testing.T) {
t.Skip("TODO: add back in after all spec test features are in.")
operations.RunConsolidationTest(t, "mainnet")
}

View File

@@ -7,5 +7,6 @@ import (
)
func TestMainnet_Electra_Sanity_Blocks(t *testing.T) {
t.Skip("TODO: add back in after all spec test features are in.")
sanity.RunBlockProcessingTest(t, "mainnet", "sanity/blocks/pyspec_tests")
}

View File

@@ -7,5 +7,6 @@ import (
)
func TestMinimal_Electra_EpochProcessing_PendingConsolidations(t *testing.T) {
t.Skip("TODO: add back in after all spec test features are in.")
epoch_processing.RunPendingConsolidationsTests(t, "minimal")
}

View File

@@ -7,5 +7,6 @@ import (
)
func TestMinimal_Electra_EpochProcessing_Slashings(t *testing.T) {
t.Skip("slashing processing missing")
epoch_processing.RunSlashingsTests(t, "minimal")
}

Some files were not shown because too many files have changed in this diff.