Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-10 05:47:59 -05:00)

Compare commits: rm-electra...print-trac (19 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 5de71c532f | |
| | 9ec8c6c4b5 | |
| | d7c68e8f82 | |
| | f11d80456c | |
| | 073cf19b69 | |
| | 6ac8090599 | |
| | 4aa54107e4 | |
| | ffc443b5f2 | |
| | d6c5692dc0 | |
| | 1086bdf2b3 | |
| | 2afa63b442 | |
| | 5a5193c59d | |
| | c8d3ed02cb | |
| | 30fcf5366a | |
| | f776b968ad | |
| | de094b0078 | |
| | 0a4ed8279b | |
| | f307a369a5 | |
| | dc91c963b9 | |
CHANGELOG.md (75 changed lines)
@@ -4,7 +4,60 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.1.0...HEAD)
## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.1.1...HEAD)

### Added

- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Light client support: Consensus types for Electra
- Added SubmitPoolAttesterSlashingV2 endpoint.
- Added SubmitAggregateAndProofsRequestV2 endpoint.

### Changed

- Electra EIP6110: Queue deposit requests changes from consensus spec pr #3818
- reversed the boolean return on `BatchVerifyDepositsSignatures`, from need verification, to all keys successfully verified
- Fix `engine_exchangeCapabilities` implementation.
- Updated the default `scrape-interval` in `Client-stats` to 2 minutes to accommodate Beaconcha.in API rate limits.
- Switch to compounding when consolidating with source==target.
- Revert block db save when saving state fails.
- Return false from HasBlock if the block is being synced.
- Cleanup forkchoice on failed insertions.
- Use read only validator for core processing to avoid unnecessary copying.
- Fee Recipient log also prints total tracked validators cache size.

### Deprecated

- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)

### Removed

- Removed finalized validator index cache, no longer needed.

### Fixed

- Fixed mesh size by appending `gParams.Dhi = gossipSubDhi`
- Fix skipping partial withdrawals count.
- recover from panics when writing the event stream [pr](https://github.com/prysmaticlabs/prysm/pull/14545)

### Security


## [v5.1.1](https://github.com/prysmaticlabs/prysm/compare/v5.1.0...v5.1.1) - 2024-10-15

This release has a number of features and improvements. Most notably, the feature flag
`--enable-experimental-state` has been flipped to "opt out" via `--disable-experimental-state`.
The experimental state management design has shown significant improvements in memory usage at
runtime. Updates to libp2p's gossipsub have some bandwidth stability improvements with support for
IDONTWANT control messages.

The gRPC gateway has been deprecated from Prysm in this release. If you need JSON data, consider the
standardized beacon-APIs.

Updating to this release is recommended at your convenience.

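With the gRPC gateway removed, JSON consumers are pointed at the standard beacon-APIs. As a rough illustration (not taken from this diff), a minimal Go client can hit the REST API directly; the `http://localhost:3500` base URL is an assumption about a default local node, while `/eth/v1/node/version` is a standard beacon-API route:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// versionResponse mirrors the shape of the standard /eth/v1/node/version payload.
type versionResponse struct {
	Data struct {
		Version string `json:"version"`
	} `json:"data"`
}

func main() {
	// Assumes a beacon node serving the standard REST API on localhost:3500.
	resp, err := http.Get("http://localhost:3500/eth/v1/node/version")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var v versionResponse
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node version:", v.Data.Version)
}
```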
### Added

@@ -13,20 +66,18 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve

- Light client support: Implement `ComputeFieldRootsForBlockBody`.
- Light client support: Add light client database changes.
- Light client support: Implement capella and deneb changes.
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Light client support: Implement `BlockToLightClientHeader` function.
- Light client support: Consensus types.
- GetBeaconStateV2: add Electra case.
- Implement [consensus-specs/3875](https://github.com/ethereum/consensus-specs/pull/3875).
- Tests to ensure sepolia config matches the official upstream yaml.
- `engine_newPayloadV4`,`engine_getPayloadV4` used for electra payload communication with execution client. [pr](https://github.com/prysmaticlabs/prysm/pull/14492)
- HTTP endpoint for PublishBlobs.
- GetBlockV2, GetBlindedBlock, ProduceBlockV2, ProduceBlockV3: add Electra case.
- Add Electra support and tests for light client functions.
- fastssz version bump (better error messages).
- SSE implementation that sheds stuck clients. [pr](https://github.com/prysmaticlabs/prysm/pull/14413)
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Added GetPoolAttesterSlashingsV2 endpoint.

### Changed

@@ -45,7 +96,6 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve

- `grpc-gateway-corsdomain` is renamed to http-cors-domain. The old name can still be used as an alias.
- `api-timeout` is changed from int flag to duration flag, default value updated.
- Light client support: abstracted out the light client headers with different versions.
- Electra EIP6110: Queue deposit requests changes from consensus spec pr #3818
- `ApplyToEveryValidator` has been changed to prevent misuse bugs, it takes a closure that takes a `ReadOnlyValidator` and returns a raw pointer to a `Validator`.
- Removed gorilla mux library and replaced it with net/http updates in go 1.22.
- Clean up `ProposeBlock` for validator client to reduce cognitive scoring and enable further changes.
@@ -57,19 +107,18 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve

- Updated Sepolia bootnodes.
- Make committee aware packing the default by deprecating `--enable-committee-aware-packing`.
- Moved `ConvertKzgCommitmentToVersionedHash` to the `primitives` package.
- reversed the boolean return on `BatchVerifyDepositsSignatures`, from need verification, to all keys successfully verified
- Fix `engine_exchangeCapabilities` implementation.
- Updated correlation penalty for EIP-7251.

### Deprecated

- `--disable-grpc-gateway` flag is deprecated due to grpc gateway removal.
- `--enable-experimental-state` flag is deprecated. This feature is now on by default. Opt-out with `--disable-experimental-state`.
- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)

### Removed

- removed gRPC Gateway
- Removed unused blobs bundle cache
- Removed gRPC Gateway.
- Removed unused blobs bundle cache.
- Removed consolidation signing domain from params. The Electra design changed such that EL handles consolidation signature verification.
- Remove engine_getPayloadBodiesBy{Hash|Range}V2

### Fixed

@@ -88,11 +137,11 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve

- Light client support: fix light client attested header execution fields' wrong version bug.
- Testing: added custom matcher for better push settings testing.
- Registered `GetDepositSnapshot` Beacon API endpoint.
- Fixed mesh size by appending `gParams.Dhi = gossipSubDhi`
- Fix skipping partial withdrawals count.

### Security

No notable security updates.

## [v5.1.0](https://github.com/prysmaticlabs/prysm/compare/v5.0.4...v5.1.0) - 2024-08-20

This release contains 171 new changes and many of these are related to Electra! Alongside the Electra changes, there

@@ -176,7 +176,8 @@ type BLSToExecutionChangesPoolResponse struct {
}

type GetAttesterSlashingsResponse struct {
	Data []*AttesterSlashing `json:"data"`
	Version string          `json:"version,omitempty"`
	Data    json.RawMessage `json:"data"` // Accepts both `[]*AttesterSlashing` and `[]*AttesterSlashingElectra` types
}

type GetProposerSlashingsResponse struct {

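A hedged sketch of how a caller might consume the reworked response: because `Data` is now a `json.RawMessage`, the concrete slice type has to be picked from `Version` before the second unmarshal. The `AttesterSlashing`/`AttesterSlashingElectra` types and the `"electra"` version string below are simplified stand-ins, not the real Prysm definitions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified placeholders for the real API types.
type AttesterSlashing struct{}
type AttesterSlashingElectra struct{}

type GetAttesterSlashingsResponse struct {
	Version string          `json:"version,omitempty"`
	Data    json.RawMessage `json:"data"`
}

// decodeSlashings unmarshals the envelope first, then the payload by fork version.
func decodeSlashings(body []byte) (interface{}, error) {
	var resp GetAttesterSlashingsResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	if resp.Version == "electra" { // assumed version label for illustration
		var s []*AttesterSlashingElectra
		if err := json.Unmarshal(resp.Data, &s); err != nil {
			return nil, err
		}
		return s, nil
	}
	var s []*AttesterSlashing
	if err := json.Unmarshal(resp.Data, &s); err != nil {
		return nil, err
	}
	return s, nil
}

func main() {
	out, err := decodeSlashings([]byte(`{"version":"electra","data":[]}`))
	fmt.Println(out, err)
}
```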
@@ -15,7 +15,7 @@ type SubmitContributionAndProofsRequest struct {
}

type SubmitAggregateAndProofsRequest struct {
	Data []*SignedAggregateAttestationAndProof `json:"data"`
	Data []json.RawMessage `json:"data"`
}

type SubmitSyncCommitteeSubscriptionsRequest struct {

@@ -216,17 +216,25 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
	}

	var lastValidHash []byte
	var parentRoot *common.Hash
	var versionedHashes []common.Hash
	var requests *enginev1.ExecutionRequests
	if blk.Version() >= version.Deneb {
		var versionedHashes []common.Hash
		versionedHashes, err = kzgCommitmentsToVersionedHashes(blk.Block().Body())
		if err != nil {
			return false, errors.Wrap(err, "could not get versioned hashes to feed the engine")
		}
		pr := common.Hash(blk.Block().ParentRoot())
		lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &pr)
	} else {
		lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, []common.Hash{}, &common.Hash{} /*empty version hashes and root before Deneb*/)
		prh := common.Hash(blk.Block().ParentRoot())
		parentRoot = &prh
	}
	if blk.Version() >= version.Electra {
		requests, err = blk.Block().Body().ExecutionRequests()
		if err != nil {
			return false, errors.Wrap(err, "could not get execution requests")
		}
	}
	lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)

	switch {
	case err == nil:
		newPayloadValidNodeCount.Inc()

@@ -404,6 +404,10 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
		return errors.Wrapf(err, "could not save block from slot %d", b.Block().Slot())
	}
	if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
		log.Warnf("Rolling back insertion of block with root %#x", r)
		if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
			log.WithError(err).Errorf("Could not delete block with block root %#x", r)
		}
		return errors.Wrap(err, "could not save state")
	}
	return nil

@@ -273,7 +273,6 @@ func (s *Service) reportPostBlockProcessing(
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {
	finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
	go func() {
		finalizedState.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110
		s.sendNewFinalizedEvent(ctx, finalizedState)
	}()
	depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
@@ -350,6 +349,9 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock

// HasBlock returns true if the block of the input root exists in initial sync blocks cache or DB.
func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
	if s.BlockBeingSynced(root) {
		return false
	}
	return s.hasBlockInInitSyncOrDB(ctx, root)
}

@@ -278,6 +278,8 @@ func TestService_HasBlock(t *testing.T) {
	r, err = b.Block.HashTreeRoot()
	require.NoError(t, err)
	require.Equal(t, true, s.HasBlock(context.Background(), r))
	s.blockBeingSynced.set(r)
	require.Equal(t, false, s.HasBlock(context.Background(), r))
}

func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
@@ -329,8 +329,6 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
		return errors.Wrap(err, "failed to initialize blockchain service")
	}

	saved.SaveValidatorIndices() // used to handle Validator index invariant from EIP6110

	return nil
}

beacon-chain/cache/tracked_validators.go (vendored, 6 changed lines)
@@ -47,3 +47,9 @@ func (t *TrackedValidatorsCache) Validating() bool {
	defer t.Unlock()
	return len(t.trackedValidators) > 0
}

func (t *TrackedValidatorsCache) Size() int {
	t.Lock()
	defer t.Unlock()
	return len(t.trackedValidators)
}

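Per the changelog entry "Fee Recipient log also prints total tracked validators cache size", the new `Size()` accessor above presumably feeds a log field. A hedged, self-contained illustration of that pattern; the cache type here is a minimal stand-in and the log field name is hypothetical, not the exact Prysm log line:

```go
package main

import (
	"sync"

	log "github.com/sirupsen/logrus"
)

// trackedValidatorsCache is a minimal stand-in for the cache shown above.
type trackedValidatorsCache struct {
	sync.Mutex
	trackedValidators map[uint64]struct{}
}

func (t *trackedValidatorsCache) Size() int {
	t.Lock()
	defer t.Unlock()
	return len(t.trackedValidators)
}

func main() {
	cache := &trackedValidatorsCache{trackedValidators: map[uint64]struct{}{1: {}, 2: {}}}
	// Hypothetical log line; the real fee-recipient log fields may differ.
	log.WithField("totalTrackedValidators", cache.Size()).Info("Fee recipient configured")
}
```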
@@ -15,6 +15,7 @@ import (
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
	eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/v5/time/slots"
	log "github.com/sirupsen/logrus"
)

// ProcessPendingConsolidations implements the spec definition below. This method makes mutating
@@ -22,25 +23,28 @@ import (
//
// Spec definition:
//
// def process_pending_consolidations(state: BeaconState) -> None:
//	next_pending_consolidation = 0
//	for pending_consolidation in state.pending_consolidations:
//	    source_validator = state.validators[pending_consolidation.source_index]
//	    if source_validator.slashed:
//	        next_pending_consolidation += 1
//	        continue
//	    if source_validator.withdrawable_epoch > get_current_epoch(state):
//	        break
// def process_pending_consolidations(state: BeaconState) -> None:
//
//	        # Churn any target excess active balance of target and raise its max
//	        switch_to_compounding_validator(state, pending_consolidation.target_index)
//	        # Move active balance to target. Excess balance is withdrawable.
//	        active_balance = get_active_balance(state, pending_consolidation.source_index)
//	        decrease_balance(state, pending_consolidation.source_index, active_balance)
//	        increase_balance(state, pending_consolidation.target_index, active_balance)
//	next_epoch = Epoch(get_current_epoch(state) + 1)
//	next_pending_consolidation = 0
//	for pending_consolidation in state.pending_consolidations:
//	    source_validator = state.validators[pending_consolidation.source_index]
//	    if source_validator.slashed:
//	        next_pending_consolidation += 1
//	        continue
//	    if source_validator.withdrawable_epoch > next_epoch:
//	        break
//
//	state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
//	    # Calculate the consolidated balance
//	    max_effective_balance = get_max_effective_balance(source_validator)
//	    source_effective_balance = min(state.balances[pending_consolidation.source_index], max_effective_balance)
//
//	    # Move active balance to target. Excess balance is withdrawable.
//	    decrease_balance(state, pending_consolidation.source_index, source_effective_balance)
//	    increase_balance(state, pending_consolidation.target_index, source_effective_balance)
//	    next_pending_consolidation += 1
//
//	state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) error {
	_, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
	defer span.End()
@@ -51,37 +55,34 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err

	nextEpoch := slots.ToEpoch(st.Slot()) + 1

	var nextPendingConsolidation uint64
	pendingConsolidations, err := st.PendingConsolidations()
	if err != nil {
		return err
	}

	var nextPendingConsolidation uint64
	for _, pc := range pendingConsolidations {
		sourceValidator, err := st.ValidatorAtIndex(pc.SourceIndex)
		sourceValidator, err := st.ValidatorAtIndexReadOnly(pc.SourceIndex)
		if err != nil {
			return err
		}
		if sourceValidator.Slashed {
		if sourceValidator.Slashed() {
			nextPendingConsolidation++
			continue
		}
		if sourceValidator.WithdrawableEpoch > nextEpoch {
		if sourceValidator.WithdrawableEpoch() > nextEpoch {
			break
		}

		if err := SwitchToCompoundingValidator(st, pc.TargetIndex); err != nil {
			return err
		}

		activeBalance, err := st.ActiveBalanceAtIndex(pc.SourceIndex)
		validatorBalance, err := st.BalanceAtIndex(pc.SourceIndex)
		if err != nil {
			return err
		}
		if err := helpers.DecreaseBalance(st, pc.SourceIndex, activeBalance); err != nil {
		b := min(validatorBalance, helpers.ValidatorMaxEffectiveBalance(sourceValidator))

		if err := helpers.DecreaseBalance(st, pc.SourceIndex, b); err != nil {
			return err
		}
		if err := helpers.IncreaseBalance(st, pc.TargetIndex, activeBalance); err != nil {
		if err := helpers.IncreaseBalance(st, pc.TargetIndex, b); err != nil {
			return err
		}
		nextPendingConsolidation++

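To make the balance movement above concrete: under the updated spec, only `min(source balance, source max effective balance)` moves to the target, and anything beyond that stays withdrawable on the source. A small illustrative sketch (the Gwei figures are made up for the example; `min` is the Go 1.21+ builtin already used in the diff):

```go
package main

import "fmt"

const gweiPerEth = 1_000_000_000

// consolidatedBalance splits a source balance into the part moved to the target
// and the remainder that stays withdrawable on the source, per the spec comment above.
func consolidatedBalance(sourceBalance, maxEffectiveBalance uint64) (moved, leftOnSource uint64) {
	moved = min(sourceBalance, maxEffectiveBalance)
	return moved, sourceBalance - moved
}

func main() {
	// Illustrative numbers: a 40 ETH source balance against a 32 ETH max effective balance.
	moved, left := consolidatedBalance(40*gweiPerEth, 32*gweiPerEth)
	fmt.Println("moved to target:", moved, "left withdrawable on source:", left)
}
```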
@@ -101,6 +102,16 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
//	state: BeaconState,
//	consolidation_request: ConsolidationRequest
//	) -> None:
//	if is_valid_switch_to_compounding_request(state, consolidation_request):
//	    validator_pubkeys = [v.pubkey for v in state.validators]
//	    request_source_pubkey = consolidation_request.source_pubkey
//	    source_index = ValidatorIndex(validator_pubkeys.index(request_source_pubkey))
//	    switch_to_compounding_validator(state, source_index)
//	    return
//
//	# Verify that source != target, so a consolidation cannot be used as an exit.
//	if consolidation_request.source_pubkey == consolidation_request.target_pubkey:
//	    return
//	# If the pending consolidations queue is full, consolidation requests are ignored
//	if len(state.pending_consolidations) == PENDING_CONSOLIDATIONS_LIMIT:
//	    return
@@ -121,10 +132,6 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
//	source_validator = state.validators[source_index]
//	target_validator = state.validators[target_index]
//
//	# Verify that source != target, so a consolidation cannot be used as an exit.
//	if source_index == target_index:
//	    return
//
//	# Verify source withdrawal credentials
//	has_correct_credential = has_execution_withdrawal_credential(source_validator)
//	is_correct_source_address = (
@@ -160,19 +167,14 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
//	    source_index=source_index,
//	    target_index=target_index
//	))
//
//	# Churn any target excess active balance of target and raise its max
//	if has_eth1_withdrawal_credential(target_validator):
//	    switch_to_compounding_validator(state, target_index)
func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, reqs []*enginev1.ConsolidationRequest) error {
	if len(reqs) == 0 || st == nil {
		return nil
	}

	activeBal, err := helpers.TotalActiveBalance(st)
	if err != nil {
		return err
	}
	churnLimit := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
	if churnLimit <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
		return nil
	}
	curEpoch := slots.ToEpoch(st.Slot())
	ffe := params.BeaconConfig().FarFutureEpoch
	minValWithdrawDelay := params.BeaconConfig().MinValidatorWithdrawabilityDelay
@@ -182,22 +184,44 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
		if ctx.Err() != nil {
			return fmt.Errorf("cannot process consolidation requests: %w", ctx.Err())
		}
		if IsValidSwitchToCompoundingRequest(st, cr) {
			srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
			if !ok {
				log.Error("failed to find source validator index")
				continue
			}
			if err := SwitchToCompoundingValidator(st, srcIdx); err != nil {
				log.WithError(err).Error("failed to switch to compounding validator")
			}
			continue
		}
		sourcePubkey := bytesutil.ToBytes48(cr.SourcePubkey)
		targetPubkey := bytesutil.ToBytes48(cr.TargetPubkey)
		if sourcePubkey == targetPubkey {
			continue
		}

		if npc, err := st.NumPendingConsolidations(); err != nil {
			return fmt.Errorf("failed to fetch number of pending consolidations: %w", err) // This should never happen.
		} else if npc >= pcLimit {
			return nil
		}

		srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
		if !ok {
			continue
		activeBal, err := helpers.TotalActiveBalance(st)
		if err != nil {
			return err
		}
		tgtIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.TargetPubkey))
		if !ok {
			continue
		churnLimit := helpers.ConsolidationChurnLimit(primitives.Gwei(activeBal))
		if churnLimit <= primitives.Gwei(params.BeaconConfig().MinActivationBalance) {
			return nil
		}

		if srcIdx == tgtIdx {
		srcIdx, ok := st.ValidatorIndexByPubkey(sourcePubkey)
		if !ok {
			continue
		}
		tgtIdx, ok := st.ValidatorIndexByPubkey(targetPubkey)
		if !ok {
			continue
		}

@@ -237,7 +261,8 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
		// Initiate the exit of the source validator.
		exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
		if err != nil {
			return fmt.Errorf("failed to compute consolidaiton epoch: %w", err)
			log.WithError(err).Error("failed to compute consolidation epoch")
			continue
		}
		srcV.ExitEpoch = exitEpoch
		srcV.WithdrawableEpoch = exitEpoch + minValWithdrawDelay
@@ -248,7 +273,95 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
		if err := st.AppendPendingConsolidation(&eth.PendingConsolidation{SourceIndex: srcIdx, TargetIndex: tgtIdx}); err != nil {
			return fmt.Errorf("failed to append pending consolidation: %w", err) // This should never happen.
		}

		if helpers.HasETH1WithdrawalCredential(tgtV) {
			if err := SwitchToCompoundingValidator(st, tgtIdx); err != nil {
				log.WithError(err).Error("failed to switch to compounding validator")
				continue
			}
		}
	}

	return nil
}

// IsValidSwitchToCompoundingRequest returns true if the given consolidation request is valid for switching to compounding.
//
// Spec code:
//
// def is_valid_switch_to_compounding_request(
//
//	state: BeaconState,
//	consolidation_request: ConsolidationRequest
//
// ) -> bool:
//
//	# Switch to compounding requires source and target be equal
//	if consolidation_request.source_pubkey != consolidation_request.target_pubkey:
//	    return False
//
//	# Verify pubkey exists
//	source_pubkey = consolidation_request.source_pubkey
//	validator_pubkeys = [v.pubkey for v in state.validators]
//	if source_pubkey not in validator_pubkeys:
//	    return False
//
//	source_validator = state.validators[ValidatorIndex(validator_pubkeys.index(source_pubkey))]
//
//	# Verify request has been authorized
//	if source_validator.withdrawal_credentials[12:] != consolidation_request.source_address:
//	    return False
//
//	# Verify source withdrawal credentials
//	if not has_eth1_withdrawal_credential(source_validator):
//	    return False
//
//	# Verify the source is active
//	current_epoch = get_current_epoch(state)
//	if not is_active_validator(source_validator, current_epoch):
//	    return False
//
//	# Verify exit for source has not been initiated
//	if source_validator.exit_epoch != FAR_FUTURE_EPOCH:
//	    return False
//
//	return True
func IsValidSwitchToCompoundingRequest(st state.BeaconState, req *enginev1.ConsolidationRequest) bool {
	if req.SourcePubkey == nil || req.TargetPubkey == nil {
		return false
	}

	if !bytes.Equal(req.SourcePubkey, req.TargetPubkey) {
		return false
	}

	srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(req.SourcePubkey))
	if !ok {
		return false
	}
	// As per the consensus specification, this error is not considered an assertion.
	// Therefore, if the source_pubkey is not found in validator_pubkeys, we simply return false.
	srcV, err := st.ValidatorAtIndexReadOnly(srcIdx)
	if err != nil {
		return false
	}
	sourceAddress := req.SourceAddress
	withdrawalCreds := srcV.GetWithdrawalCredentials()
	if len(withdrawalCreds) != 32 || len(sourceAddress) != 20 || !bytes.HasSuffix(withdrawalCreds, sourceAddress) {
		return false
	}

	if !helpers.HasETH1WithdrawalCredential(srcV) {
		return false
	}

	curEpoch := slots.ToEpoch(st.Slot())
	if !helpers.IsActiveValidatorUsingTrie(srcV, curEpoch) {
		return false
	}

	if srcV.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
		return false
	}
	return true
}

@@ -13,6 +13,7 @@ import (
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
	eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/v5/testing/require"
	"github.com/prysmaticlabs/prysm/v5/testing/util"
)

func TestProcessPendingConsolidations(t *testing.T) {
@@ -80,10 +81,10 @@ func TestProcessPendingConsolidations(t *testing.T) {
				require.NoError(t, err)
				require.Equal(t, uint64(0), num)

				// v1 is switched to compounding validator.
				// v1 withdrawal credentials should not be updated.
				v1, err := st.ValidatorAtIndex(1)
				require.NoError(t, err)
				require.Equal(t, params.BeaconConfig().CompoundingWithdrawalPrefixByte, v1.WithdrawalCredentials[0])
				require.Equal(t, params.BeaconConfig().ETH1AddressWithdrawalPrefixByte, v1.WithdrawalCredentials[0])
			},
			wantErr: false,
		},
@@ -396,3 +397,87 @@ func TestProcessConsolidationRequests(t *testing.T) {
		})
	}
}

func TestIsValidSwitchToCompoundingRequest(t *testing.T) {
	st, _ := util.DeterministicGenesisStateElectra(t, 1)
	t.Run("nil source pubkey", func(t *testing.T) {
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			SourcePubkey: nil,
			TargetPubkey: []byte{'a'},
		})
		require.Equal(t, false, ok)
	})
	t.Run("nil target pubkey", func(t *testing.T) {
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			TargetPubkey: nil,
			SourcePubkey: []byte{'a'},
		})
		require.Equal(t, false, ok)
	})
	t.Run("different source and target pubkey", func(t *testing.T) {
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			TargetPubkey: []byte{'a'},
			SourcePubkey: []byte{'b'},
		})
		require.Equal(t, false, ok)
	})
	t.Run("source validator not found in state", func(t *testing.T) {
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			SourceAddress: make([]byte, 20),
			TargetPubkey:  []byte{'a'},
			SourcePubkey:  []byte{'a'},
		})
		require.Equal(t, false, ok)
	})
	t.Run("incorrect source address", func(t *testing.T) {
		v, err := st.ValidatorAtIndex(0)
		require.NoError(t, err)
		pubkey := v.PublicKey
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			SourceAddress: make([]byte, 20),
			TargetPubkey:  pubkey,
			SourcePubkey:  pubkey,
		})
		require.Equal(t, false, ok)
	})
	t.Run("incorrect eth1 withdrawal credential", func(t *testing.T) {
		v, err := st.ValidatorAtIndex(0)
		require.NoError(t, err)
		pubkey := v.PublicKey
		wc := v.WithdrawalCredentials
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			SourceAddress: wc[12:],
			TargetPubkey:  pubkey,
			SourcePubkey:  pubkey,
		})
		require.Equal(t, false, ok)
	})
	t.Run("is valid compounding request", func(t *testing.T) {
		v, err := st.ValidatorAtIndex(0)
		require.NoError(t, err)
		pubkey := v.PublicKey
		wc := v.WithdrawalCredentials
		v.WithdrawalCredentials[0] = 1
		require.NoError(t, st.UpdateValidatorAtIndex(0, v))
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			SourceAddress: wc[12:],
			TargetPubkey:  pubkey,
			SourcePubkey:  pubkey,
		})
		require.Equal(t, true, ok)
	})
	t.Run("already has an exit epoch", func(t *testing.T) {
		v, err := st.ValidatorAtIndex(0)
		require.NoError(t, err)
		pubkey := v.PublicKey
		wc := v.WithdrawalCredentials
		v.ExitEpoch = 100
		require.NoError(t, st.UpdateValidatorAtIndex(0, v))
		ok := electra.IsValidSwitchToCompoundingRequest(st, &enginev1.ConsolidationRequest{
			SourceAddress: wc[12:],
			TargetPubkey:  pubkey,
			SourcePubkey:  pubkey,
		})
		require.Equal(t, false, ok)
	})
}

@@ -8,6 +8,7 @@ import (
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/contracts/deposit"
@@ -474,7 +475,10 @@ func ApplyPendingDeposit(ctx context.Context, st state.BeaconState, deposit *eth
//	set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000))
//	set_or_append_list(state.inactivity_scores, index, uint64(0))
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
	val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
	val, err := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
	if err != nil {
		return errors.Wrap(err, "could not get validator from deposit")
	}
	if err := beaconState.AppendValidator(val); err != nil {
		return err
	}
@@ -516,7 +520,7 @@ func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdr
//	validator.effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, max_effective_balance)
//
//	return validator
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) (*ethpb.Validator, error) {
	validator := &ethpb.Validator{
		PublicKey:             pubKey,
		WithdrawalCredentials: withdrawalCredentials,
@@ -526,9 +530,13 @@ func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount
		WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
		EffectiveBalance:  0,
	}
	maxEffectiveBalance := helpers.ValidatorMaxEffectiveBalance(validator)
	v, err := state_native.NewValidator(validator)
	if err != nil {
		return nil, err
	}
	maxEffectiveBalance := helpers.ValidatorMaxEffectiveBalance(v)
	validator.EffectiveBalance = min(amount-(amount%params.BeaconConfig().EffectiveBalanceIncrement), maxEffectiveBalance)
	return validator
	return validator, nil
}

// ProcessDepositRequests is a function as part of electra to process execution layer deposits

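The spec line quoted in the comment above, `effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, max_effective_balance)`, rounds the deposit down to a whole increment and then caps it. A hedged numeric sketch (the 1 ETH increment and 32 ETH / 2048 ETH caps are the usual spec values, used here only as illustration):

```go
package main

import "fmt"

const gwei = uint64(1_000_000_000)

// effectiveBalance mirrors the spec formula quoted above.
func effectiveBalance(amount, increment, maxEffective uint64) uint64 {
	return min(amount-amount%increment, maxEffective) // Go 1.21+ builtin min
}

func main() {
	// A 33.7 ETH deposit with a 1 ETH increment and a 32 ETH cap (non-compounding credentials).
	fmt.Println(effectiveBalance(33_700_000_000, 1*gwei, 32*gwei)) // 32000000000
	// The same deposit against a compounding-style 2048 ETH cap only rounds down to the increment.
	fmt.Println(effectiveBalance(33_700_000_000, 1*gwei, 2048*gwei)) // 33000000000
}
```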
@@ -177,7 +177,6 @@ func TestProcessPendingDeposits(t *testing.T) {
				dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
				dep.Signature = common.InfiniteSignature[:]
				require.NoError(t, st.SetValidators(validators))
				st.SaveValidatorIndices()
				require.NoError(t, st.SetPendingDeposits([]*eth.PendingDeposit{dep}))
				return st
			}(),
@@ -508,7 +507,6 @@ func stateWithPendingDeposits(t *testing.T, balETH uint64, numDeposits, amount u
		deps[i] = stateTesting.GeneratePendingDeposit(t, sk, amount, bytesutil.ToBytes32(wc), 0)
	}
	require.NoError(t, st.SetValidators(validators))
	st.SaveValidatorIndices()
	require.NoError(t, st.SetPendingDeposits(deps))
	return st
}
@@ -527,7 +525,6 @@ func TestApplyPendingDeposit_TopUp(t *testing.T) {
	dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
	dep.Signature = common.InfiniteSignature[:]
	require.NoError(t, st.SetValidators(validators))
	st.SaveValidatorIndices()

	require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))

@@ -3,7 +3,6 @@ package electra
import (
	"errors"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -18,9 +17,8 @@ import (
// def switch_to_compounding_validator(state: BeaconState, index: ValidatorIndex) -> None:
//
//	validator = state.validators[index]
//	if has_eth1_withdrawal_credential(validator):
//	    validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
//	    queue_excess_active_balance(state, index)
//	validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
//	queue_excess_active_balance(state, index)
func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
	v, err := s.ValidatorAtIndex(idx)
	if err != nil {
@@ -29,14 +27,12 @@ func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorI
	if len(v.WithdrawalCredentials) == 0 {
		return errors.New("validator has no withdrawal credentials")
	}
	if helpers.HasETH1WithdrawalCredential(v) {
		v.WithdrawalCredentials[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
		if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
			return err
		}
		return QueueExcessActiveBalance(s, idx)

	v.WithdrawalCredentials[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
	if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
		return err
	}
	return nil
	return QueueExcessActiveBalance(s, idx)
}

// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending deposit.
@@ -68,13 +64,14 @@ func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex
		return err
	}
	excessBalance := bal - params.BeaconConfig().MinActivationBalance
	val, err := s.ValidatorAtIndex(idx)
	val, err := s.ValidatorAtIndexReadOnly(idx)
	if err != nil {
		return err
	}
	pk := val.PublicKey()
	return s.AppendPendingDeposit(&ethpb.PendingDeposit{
		PublicKey:             val.PublicKey,
		WithdrawalCredentials: val.WithdrawalCredentials,
		PublicKey:             pk[:],
		WithdrawalCredentials: val.GetWithdrawalCredentials(),
		Amount:                excessBalance,
		Signature:             common.InfiniteSignature[:],
		Slot:                  params.BeaconConfig().GenesisSlot,

@@ -111,30 +111,31 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
			log.Debugf("Skipping execution layer withdrawal request, validator index for %s not found\n", hexutil.Encode(wr.ValidatorPubkey))
			continue
		}
		validator, err := st.ValidatorAtIndex(vIdx)
		validator, err := st.ValidatorAtIndexReadOnly(vIdx)
		if err != nil {
			return nil, err
		}
		// Verify withdrawal credentials
		hasCorrectCredential := helpers.HasExecutionWithdrawalCredentials(validator)
		isCorrectSourceAddress := bytes.Equal(validator.WithdrawalCredentials[12:], wr.SourceAddress)
		wc := validator.GetWithdrawalCredentials()
		isCorrectSourceAddress := bytes.Equal(wc[12:], wr.SourceAddress)
		if !hasCorrectCredential || !isCorrectSourceAddress {
			log.Debugln("Skipping execution layer withdrawal request, wrong withdrawal credentials")
			continue
		}

		// Verify the validator is active.
		if !helpers.IsActiveValidator(validator, currentEpoch) {
		if !helpers.IsActiveValidatorUsingTrie(validator, currentEpoch) {
			log.Debugln("Skipping execution layer withdrawal request, validator not active")
			continue
		}
		// Verify the validator has not yet submitted an exit.
		if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
		if validator.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
			log.Debugln("Skipping execution layer withdrawal request, validator has submitted an exit already")
			continue
		}
		// Verify the validator has been active long enough.
		if currentEpoch < validator.ActivationEpoch.AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
		if currentEpoch < validator.ActivationEpoch().AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
			log.Debugln("Skipping execution layer withdrawal request, validator has not been active long enough")
			continue
		}
@@ -156,7 +157,7 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
			continue
		}

		hasSufficientEffectiveBalance := validator.EffectiveBalance >= params.BeaconConfig().MinActivationBalance
		hasSufficientEffectiveBalance := validator.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
		vBal, err := st.BalanceAtIndex(vIdx)
		if err != nil {
			return nil, err

@@ -147,11 +147,17 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
//	epoch = get_current_epoch(state)
//	total_balance = get_total_active_balance(state)
//	adjusted_total_slashing_balance = min(sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER, total_balance)
//	if state.version == electra:
//	    increment = EFFECTIVE_BALANCE_INCREMENT  # Factored out from total balance to avoid uint64 overflow
//	    penalty_per_effective_balance_increment = adjusted_total_slashing_balance // (total_balance // increment)
//	for index, validator in enumerate(state.validators):
//	    if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
//	        increment = EFFECTIVE_BALANCE_INCREMENT  # Factored out from penalty numerator to avoid uint64 overflow
//	        penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
//	        penalty = penalty_numerator // total_balance * increment
//	        if state.version == electra:
//	            effective_balance_increments = validator.effective_balance // increment
//	            penalty = penalty_per_effective_balance_increment * effective_balance_increments
//	        decrease_balance(state, ValidatorIndex(index), penalty)
func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.BeaconState, error) {
	currentEpoch := time.CurrentEpoch(st)
@@ -177,13 +183,26 @@ func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.Be
	// below equally.
	increment := params.BeaconConfig().EffectiveBalanceIncrement
	minSlashing := math.Min(totalSlashing*slashingMultiplier, totalBalance)

	// Modified in Electra:EIP7251
	var penaltyPerEffectiveBalanceIncrement uint64
	if st.Version() >= version.Electra {
		penaltyPerEffectiveBalanceIncrement = minSlashing / (totalBalance / increment)
	}

	bals := st.Balances()
	changed := false
	err = st.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
		correctEpoch := (currentEpoch + exitLength/2) == val.WithdrawableEpoch()
		if val.Slashed() && correctEpoch {
			penaltyNumerator := val.EffectiveBalance() / increment * minSlashing
			penalty := penaltyNumerator / totalBalance * increment
			var penalty uint64
			if st.Version() >= version.Electra {
				effectiveBalanceIncrements := val.EffectiveBalance() / increment
				penalty = penaltyPerEffectiveBalanceIncrement * effectiveBalanceIncrements
			} else {
				penaltyNumerator := val.EffectiveBalance() / increment * minSlashing
				penalty = penaltyNumerator / totalBalance * increment
			}
			bals[idx] = helpers.DecreaseBalanceWithVal(bals[idx], penalty)
			changed = true
		}

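As a back-of-the-envelope check of the two penalty branches above (illustrative numbers only, not taken from the tests that follow): the Electra path computes one shared per-increment penalty and scales it per validator, while the pre-Electra path divides a per-validator numerator by the total balance, which rounds differently.

```go
package main

import "fmt"

func main() {
	// Illustrative figures, all in Gwei: 1,000,000 ETH total active balance,
	// 3,000 ETH adjusted slashing balance, 1 ETH increment, and a slashed
	// validator with a 2,048 ETH effective balance.
	const gwei = uint64(1_000_000_000)
	totalBalance := 1_000_000 * gwei
	adjustedSlashing := 3_000 * gwei
	increment := 1 * gwei
	effectiveBalance := 2_048 * gwei

	// Electra: shared per-increment penalty, scaled by the validator's increments.
	penaltyPerIncrement := adjustedSlashing / (totalBalance / increment)
	penaltyElectra := penaltyPerIncrement * (effectiveBalance / increment)

	// Pre-Electra: per-validator numerator divided by the total balance, then scaled back up.
	penaltyPre := (effectiveBalance / increment * adjustedSlashing) / totalBalance * increment

	fmt.Println("electra penalty (Gwei):", penaltyElectra)     // 6144000000
	fmt.Println("pre-electra penalty (Gwei):", penaltyPre)      // 6000000000
}
```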
@@ -448,3 +448,75 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
		})
	}
}

func TestProcessSlashings_SlashedElectra(t *testing.T) {
	tests := []struct {
		state *ethpb.BeaconStateElectra
		want  uint64
	}{
		{
			state: &ethpb.BeaconStateElectra{
				Validators: []*ethpb.Validator{
					{Slashed: true,
						WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
						EffectiveBalance:  params.BeaconConfig().MaxEffectiveBalance},
					{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance}},
				Balances:  []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
				Slashings: []uint64{0, 1e9},
			},
			want: uint64(29000000000),
		},
		{
			state: &ethpb.BeaconStateElectra{
				Validators: []*ethpb.Validator{
					{Slashed: true,
						WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
						EffectiveBalance:  params.BeaconConfig().MaxEffectiveBalance},
					{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
					{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance},
				},
				Balances:  []uint64{params.BeaconConfig().MaxEffectiveBalance, params.BeaconConfig().MaxEffectiveBalance},
				Slashings: []uint64{0, 1e9},
			},
			want: uint64(30500000000),
		},
		{
			state: &ethpb.BeaconStateElectra{
				Validators: []*ethpb.Validator{
					{Slashed: true,
						WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
						EffectiveBalance:  params.BeaconConfig().MaxEffectiveBalanceElectra},
					{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
					{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra},
				},
				Balances:  []uint64{params.BeaconConfig().MaxEffectiveBalance * 10, params.BeaconConfig().MaxEffectiveBalance * 20},
				Slashings: []uint64{0, 2 * 1e9},
			},
			want: uint64(317000001536),
		},
		{
			state: &ethpb.BeaconStateElectra{
				Validators: []*ethpb.Validator{
					{Slashed: true,
						WithdrawableEpoch: params.BeaconConfig().EpochsPerSlashingsVector / 2,
						EffectiveBalance:  params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement},
					{ExitEpoch: params.BeaconConfig().FarFutureEpoch, EffectiveBalance: params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement}},
				Balances:  []uint64{params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement, params.BeaconConfig().MaxEffectiveBalanceElectra - params.BeaconConfig().EffectiveBalanceIncrement},
				Slashings: []uint64{0, 1e9},
			},
			want: uint64(2044000000727),
		},
	}

	for i, tt := range tests {
		t.Run(fmt.Sprint(i), func(t *testing.T) {
			original := proto.Clone(tt.state)
			s, err := state_native.InitializeFromProtoElectra(tt.state)
			require.NoError(t, err)
			helpers.ClearCache()
			newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplierBellatrix)
			require.NoError(t, err)
			assert.Equal(t, tt.want, newState.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, newState.Balances()[0])
		})
	}
}

@@ -63,6 +63,7 @@ go_test(
        "validators_test.go",
        "weak_subjectivity_test.go",
    ],
    data = glob(["testdata/**"]),
    embed = [":go_default_library"],
    shard_count = 2,
    tags = ["CI_race_detection"],

@@ -69,15 +69,16 @@ func IsNextPeriodSyncCommittee(
	}
	indices, err := syncCommitteeCache.NextPeriodIndexPosition(root, valIdx)
	if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
		val, err := st.ValidatorAtIndex(valIdx)
		val, err := st.ValidatorAtIndexReadOnly(valIdx)
		if err != nil {
			return false, err
		}
		pk := val.PublicKey()
		committee, err := st.NextSyncCommittee()
		if err != nil {
			return false, err
		}
		return len(findSubCommitteeIndices(val.PublicKey, committee.Pubkeys)) > 0, nil
		return len(findSubCommitteeIndices(pk[:], committee.Pubkeys)) > 0, nil
	}
	if err != nil {
		return false, err
@@ -96,10 +97,11 @@ func CurrentPeriodSyncSubcommitteeIndices(
	}
	indices, err := syncCommitteeCache.CurrentPeriodIndexPosition(root, valIdx)
	if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
		val, err := st.ValidatorAtIndex(valIdx)
		val, err := st.ValidatorAtIndexReadOnly(valIdx)
		if err != nil {
			return nil, err
		}
		pk := val.PublicKey()
		committee, err := st.CurrentSyncCommittee()
		if err != nil {
			return nil, err
@@ -112,7 +114,7 @@ func CurrentPeriodSyncSubcommitteeIndices(
			}
		}()

		return findSubCommitteeIndices(val.PublicKey, committee.Pubkeys), nil
		return findSubCommitteeIndices(pk[:], committee.Pubkeys), nil
	}
	if err != nil {
		return nil, err
@@ -130,15 +132,16 @@ func NextPeriodSyncSubcommitteeIndices(
	}
	indices, err := syncCommitteeCache.NextPeriodIndexPosition(root, valIdx)
	if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
		val, err := st.ValidatorAtIndex(valIdx)
		val, err := st.ValidatorAtIndexReadOnly(valIdx)
		if err != nil {
			return nil, err
		}
		pk := val.PublicKey()
		committee, err := st.NextSyncCommittee()
		if err != nil {
			return nil, err
		}
		return findSubCommitteeIndices(val.PublicKey, committee.Pubkeys), nil
		return findSubCommitteeIndices(pk[:], committee.Pubkeys), nil
	}
	if err != nil {
		return nil, err

@@ -584,23 +584,23 @@ func IsSameWithdrawalCredentials(a, b *ethpb.Validator) bool {
//	    and validator.withdrawable_epoch <= epoch
//	    and balance > 0
//	)
func IsFullyWithdrawableValidator(val *ethpb.Validator, balance uint64, epoch primitives.Epoch, fork int) bool {
func IsFullyWithdrawableValidator(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch, fork int) bool {
	if val == nil || balance <= 0 {
		return false
	}

	// Electra / EIP-7251 logic
	if fork >= version.Electra {
		return HasExecutionWithdrawalCredentials(val) && val.WithdrawableEpoch <= epoch
		return HasExecutionWithdrawalCredentials(val) && val.WithdrawableEpoch() <= epoch
	}

	return HasETH1WithdrawalCredential(val) && val.WithdrawableEpoch <= epoch
	return HasETH1WithdrawalCredential(val) && val.WithdrawableEpoch() <= epoch
}

// IsPartiallyWithdrawableValidator returns whether the validator is able to perform a
// partial withdrawal. This function assumes that the caller has a lock on the state.
// This method conditionally calls the fork appropriate implementation based on the epoch argument.
func IsPartiallyWithdrawableValidator(val *ethpb.Validator, balance uint64, epoch primitives.Epoch, fork int) bool {
func IsPartiallyWithdrawableValidator(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch, fork int) bool {
	if val == nil {
		return false
	}
@@ -630,9 +630,9 @@ func IsPartiallyWithdrawableValidator(val *ethpb.Validator, balance uint64, epoc
//	    and has_max_effective_balance
//	    and has_excess_balance
//	)
func isPartiallyWithdrawableValidatorElectra(val *ethpb.Validator, balance uint64, epoch primitives.Epoch) bool {
func isPartiallyWithdrawableValidatorElectra(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch) bool {
	maxEB := ValidatorMaxEffectiveBalance(val)
	hasMaxBalance := val.EffectiveBalance == maxEB
	hasMaxBalance := val.EffectiveBalance() == maxEB
	hasExcessBalance := balance > maxEB

	return HasExecutionWithdrawalCredentials(val) &&
@@ -652,8 +652,8 @@ func isPartiallyWithdrawableValidatorElectra(val *ethpb.Validator, balance uint6
//	has_max_effective_balance = validator.effective_balance == MAX_EFFECTIVE_BALANCE
//	has_excess_balance = balance > MAX_EFFECTIVE_BALANCE
//	return has_eth1_withdrawal_credential(validator) and has_max_effective_balance and has_excess_balance
func isPartiallyWithdrawableValidatorCapella(val *ethpb.Validator, balance uint64, epoch primitives.Epoch) bool {
	hasMaxBalance := val.EffectiveBalance == params.BeaconConfig().MaxEffectiveBalance
func isPartiallyWithdrawableValidatorCapella(val state.ReadOnlyValidator, balance uint64, epoch primitives.Epoch) bool {
	hasMaxBalance := val.EffectiveBalance() == params.BeaconConfig().MaxEffectiveBalance
	hasExcessBalance := balance > params.BeaconConfig().MaxEffectiveBalance
	return HasETH1WithdrawalCredential(val) && hasExcessBalance && hasMaxBalance
}
@@ -670,7 +670,7 @@ func isPartiallyWithdrawableValidatorCapella(val *ethpb.Validator, balance uint6
//	    return MAX_EFFECTIVE_BALANCE_ELECTRA
//	else:
//	    return MIN_ACTIVATION_BALANCE
func ValidatorMaxEffectiveBalance(val *ethpb.Validator) uint64 {
func ValidatorMaxEffectiveBalance(val state.ReadOnlyValidator) uint64 {
	if HasCompoundingWithdrawalCredential(val) {
		return params.BeaconConfig().MaxEffectiveBalanceElectra
	}

@@ -974,13 +974,6 @@ func TestIsFullyWithdrawableValidator(t *testing.T) {
		fork      int
		want      bool
	}{
		{
			name:      "Handles nil case",
			validator: nil,
			balance:   0,
			epoch:     0,
			want:      false,
		},
		{
			name: "No ETH1 prefix",
			validator: &ethpb.Validator{
@@ -1036,7 +1029,9 @@ func TestIsFullyWithdrawableValidator(t *testing.T) {

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, tt.want, helpers.IsFullyWithdrawableValidator(tt.validator, tt.balance, tt.epoch, tt.fork))
			v, err := state_native.NewValidator(tt.validator)
			require.NoError(t, err)
			assert.Equal(t, tt.want, helpers.IsFullyWithdrawableValidator(v, tt.balance, tt.epoch, tt.fork))
		})
	}
}
@@ -1050,13 +1045,6 @@ func TestIsPartiallyWithdrawableValidator(t *testing.T) {
		fork      int
		want      bool
	}{
		{
			name:      "Handles nil case",
			validator: nil,
			balance:   0,
			epoch:     0,
			want:      false,
		},
		{
			name: "No ETH1 prefix",
			validator: &ethpb.Validator{
@@ -1113,7 +1101,9 @@ func TestIsPartiallyWithdrawableValidator(t *testing.T) {

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, tt.want, helpers.IsPartiallyWithdrawableValidator(tt.validator, tt.balance, tt.epoch, tt.fork))
			v, err := state_native.NewValidator(tt.validator)
			require.NoError(t, err)
			assert.Equal(t, tt.want, helpers.IsPartiallyWithdrawableValidator(v, tt.balance, tt.epoch, tt.fork))
		})
	}
}
@@ -1167,15 +1157,12 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
			validator: &ethpb.Validator{WithdrawalCredentials: []byte{params.BeaconConfig().ETH1AddressWithdrawalPrefixByte, 0xCC}},
			want:      params.BeaconConfig().MinActivationBalance,
		},
		{
			"Handles nil case",
			nil,
			params.BeaconConfig().MinActivationBalance,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			assert.Equal(t, tt.want, helpers.ValidatorMaxEffectiveBalance(tt.validator))
			v, err := state_native.NewValidator(tt.validator)
			require.NoError(t, err)
			assert.Equal(t, tt.want, helpers.ValidatorMaxEffectiveBalance(v))
		})
	}
	// Sanity check that MinActivationBalance equals (pre-electra) MaxEffectiveBalance

@@ -44,8 +44,6 @@ var (
		GetPayloadMethodV4,
		GetPayloadBodiesByHashV1,
		GetPayloadBodiesByRangeV1,
		GetPayloadBodiesByHashV2,
		GetPayloadBodiesByRangeV2,
	}
)

@@ -77,12 +75,8 @@ const (
	BlockByNumberMethod = "eth_getBlockByNumber"
	// GetPayloadBodiesByHashV1 is the engine_getPayloadBodiesByHashX JSON-RPC method for pre-Electra payloads.
	GetPayloadBodiesByHashV1 = "engine_getPayloadBodiesByHashV1"
	// GetPayloadBodiesByHashV2 is the engine_getPayloadBodiesByHashX JSON-RPC method introduced by Electra.
	GetPayloadBodiesByHashV2 = "engine_getPayloadBodiesByHashV2"
	// GetPayloadBodiesByRangeV1 is the engine_getPayloadBodiesByRangeX JSON-RPC method for pre-Electra payloads.
	GetPayloadBodiesByRangeV1 = "engine_getPayloadBodiesByRangeV1"
	// GetPayloadBodiesByRangeV2 is the engine_getPayloadBodiesByRangeX JSON-RPC method introduced by Electra.
	GetPayloadBodiesByRangeV2 = "engine_getPayloadBodiesByRangeV2"
	// ExchangeCapabilities request string for JSON-RPC.
	ExchangeCapabilities = "engine_exchangeCapabilities"
	// Defines the seconds before timing out engine endpoints with non-block execution semantics.

@@ -114,7 +108,7 @@ type PayloadReconstructor interface {
|
||||
// EngineCaller defines a client that can interact with an Ethereum
|
||||
// execution node's engine service via JSON-RPC.
|
||||
type EngineCaller interface {
|
||||
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash) ([]byte, error)
|
||||
NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error)
|
||||
ForkchoiceUpdated(
|
||||
ctx context.Context, state *pb.ForkchoiceState, attrs payloadattribute.Attributer,
|
||||
) (*pb.PayloadIDBytes, []byte, error)
|
||||
@@ -125,8 +119,8 @@ type EngineCaller interface {
|
||||
|
||||
var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
|
||||
|
||||
// NewPayload calls the engine_newPayloadVX method via JSON-RPC.
|
||||
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash) ([]byte, error) {
|
||||
// NewPayload request calls the engine_newPayloadVX method via JSON-RPC.
|
||||
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
|
||||
defer span.End()
|
||||
start := time.Now()
|
||||
@@ -163,9 +157,20 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
|
||||
if !ok {
|
||||
return nil, errors.New("execution data must be a Deneb execution payload")
|
||||
}
|
||||
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb, versionedHashes, parentBlockRoot)
|
||||
if err != nil {
|
||||
return nil, handleRPCError(err)
|
||||
if executionRequests == nil {
|
||||
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb, versionedHashes, parentBlockRoot)
|
||||
if err != nil {
|
||||
return nil, handleRPCError(err)
|
||||
}
|
||||
} else {
|
||||
flattenedRequests, err := pb.EncodeExecutionRequests(executionRequests)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to encode execution requests")
|
||||
}
|
||||
err = s.rpcClient.CallContext(ctx, result, NewPayloadMethodV4, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
|
||||
if err != nil {
|
||||
return nil, handleRPCError(err)
|
||||
}
|
||||
}
|
||||
default:
|
||||
return nil, errors.New("unknown execution data type")
|
||||
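The key change in this hunk is that a nil executionRequests keeps the Deneb path while a non-nil value flattens the requests and switches to engine_newPayloadV4. A self-contained sketch of that dispatch, with a fake JSON-RPC caller standing in for the real client (names here are illustrative):

package main

import (
	"context"
	"fmt"
)

// rpcCaller is a stand-in for the JSON-RPC client used in the diff; only the
// call shape matters for this sketch.
type rpcCaller interface {
	CallContext(ctx context.Context, result interface{}, method string, args ...interface{}) error
}

// newPayload sketches the dispatch above: with no execution requests the Deneb
// method engine_newPayloadV3 is used; otherwise the requests are passed as an
// extra argument to engine_newPayloadV4 (Electra).
func newPayload(ctx context.Context, c rpcCaller, payload interface{}, hashes []string, parentRoot string, requests [][]byte) (string, error) {
	var status string
	if requests == nil {
		if err := c.CallContext(ctx, &status, "engine_newPayloadV3", payload, hashes, parentRoot); err != nil {
			return "", err
		}
		return status, nil
	}
	if err := c.CallContext(ctx, &status, "engine_newPayloadV4", payload, hashes, parentRoot, requests); err != nil {
		return "", err
	}
	return status, nil
}

// fakeCaller records the last method it was asked to call.
type fakeCaller struct{ lastMethod string }

func (f *fakeCaller) CallContext(_ context.Context, _ interface{}, method string, _ ...interface{}) error {
	f.lastMethod = method
	return nil
}

func main() {
	c := &fakeCaller{}
	_, _ = newPayload(context.Background(), c, struct{}{}, nil, "0x00", nil)
	fmt.Println(c.lastMethod) // engine_newPayloadV3
	_, _ = newPayload(context.Background(), c, struct{}{}, nil, "0x00", [][]byte{{0x00}})
	fmt.Println(c.lastMethod) // engine_newPayloadV4
}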
@@ -259,6 +264,9 @@ func (s *Service) ForkchoiceUpdated(
|
||||
|
||||
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
|
||||
pe := slots.ToEpoch(slot)
|
||||
if pe >= params.BeaconConfig().ElectraForkEpoch {
|
||||
return GetPayloadMethodV4, &pb.ExecutionBundleElectra{}
|
||||
}
|
||||
if pe >= params.BeaconConfig().DenebForkEpoch {
|
||||
return GetPayloadMethodV3, &pb.ExecutionPayloadDenebWithValueAndBlobsBundle{}
|
||||
}
|
||||
|
||||
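The getPayloadMethodAndMessage hunk above picks the engine_getPayloadVX method from the fork epoch of the requested slot. A compact sketch of the same idea, with made-up fork epochs and earlier forks abbreviated to a single fallback:

package main

import "fmt"

// Illustrative fork epochs; real values come from the beacon chain configuration.
const (
	denebForkEpoch   = uint64(100)
	electraForkEpoch = uint64(200)
)

// getPayloadMethodForEpoch mirrors the dispatch in the hunk above: the highest
// fork whose epoch has been reached decides which engine_getPayloadVX to call.
func getPayloadMethodForEpoch(epoch uint64) string {
	switch {
	case epoch >= electraForkEpoch:
		return "engine_getPayloadV4"
	case epoch >= denebForkEpoch:
		return "engine_getPayloadV3"
	default:
		return "engine_getPayloadV2"
	}
}

func main() {
	fmt.Println(getPayloadMethodForEpoch(150)) // engine_getPayloadV3
	fmt.Println(getPayloadMethodForEpoch(250)) // engine_getPayloadV4
}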
@@ -123,7 +123,7 @@ func TestClient_IPC(t *testing.T) {
|
||||
require.Equal(t, true, ok)
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayload(req)
|
||||
require.NoError(t, err)
|
||||
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
|
||||
})
|
||||
@@ -134,7 +134,7 @@ func TestClient_IPC(t *testing.T) {
|
||||
require.Equal(t, true, ok)
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(req)
|
||||
require.NoError(t, err)
|
||||
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
|
||||
})
|
||||
@@ -163,7 +163,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
cfg := params.BeaconConfig().Copy()
|
||||
cfg.CapellaForkEpoch = 1
|
||||
cfg.DenebForkEpoch = 2
|
||||
cfg.ElectraForkEpoch = 2
|
||||
cfg.ElectraForkEpoch = 3
|
||||
params.OverrideBeaconConfig(cfg)
|
||||
|
||||
t.Run(GetPayloadMethod, func(t *testing.T) {
|
||||
@@ -320,6 +320,89 @@ func TestClient_HTTP(t *testing.T) {
|
||||
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
|
||||
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
|
||||
})
|
||||
t.Run(GetPayloadMethodV4, func(t *testing.T) {
|
||||
payloadId := [8]byte{1}
|
||||
want, ok := fix["ExecutionBundleElectra"].(*pb.GetPayloadV4ResponseJson)
|
||||
require.Equal(t, true, ok)
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
defer func() {
|
||||
require.NoError(t, r.Body.Close())
|
||||
}()
|
||||
enc, err := io.ReadAll(r.Body)
|
||||
require.NoError(t, err)
|
||||
jsonRequestString := string(enc)
|
||||
|
||||
reqArg, err := json.Marshal(pb.PayloadIDBytes(payloadId))
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the JSON string RPC request contains the right arguments.
|
||||
require.Equal(t, true, strings.Contains(
|
||||
jsonRequestString, string(reqArg),
|
||||
))
|
||||
resp := map[string]interface{}{
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"result": want,
|
||||
}
|
||||
err = json.NewEncoder(w).Encode(resp)
|
||||
require.NoError(t, err)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
rpcClient, err := rpc.DialHTTP(srv.URL)
|
||||
require.NoError(t, err)
|
||||
defer rpcClient.Close()
|
||||
|
||||
client := &Service{}
|
||||
client.rpcClient = rpcClient
|
||||
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
resp, err := client.GetPayload(ctx, payloadId, 3*params.BeaconConfig().SlotsPerEpoch)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, resp.OverrideBuilder)
|
||||
g, err := resp.ExecutionData.ExcessBlobGas()
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, uint64(3), g)
|
||||
g, err = resp.ExecutionData.BlobGasUsed()
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, uint64(2), g)
|
||||
|
||||
commitments := [][]byte{bytesutil.PadTo([]byte("commitment1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("commitment2"), fieldparams.BLSPubkeyLength)}
|
||||
require.DeepEqual(t, commitments, resp.BlobsBundle.KzgCommitments)
|
||||
proofs := [][]byte{bytesutil.PadTo([]byte("proof1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("proof2"), fieldparams.BLSPubkeyLength)}
|
||||
require.DeepEqual(t, proofs, resp.BlobsBundle.Proofs)
|
||||
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
|
||||
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
|
||||
requests := &pb.ExecutionRequests{
|
||||
Deposits: []*pb.DepositRequest{
|
||||
{
|
||||
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
|
||||
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
|
||||
Index: 0,
|
||||
},
|
||||
},
|
||||
Withdrawals: []*pb.WithdrawalRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
|
||||
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
},
|
||||
},
|
||||
Consolidations: []*pb.ConsolidationRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
|
||||
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
|
||||
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
require.DeepEqual(t, requests, resp.ExecutionRequests)
|
||||
})
|
||||
|
||||
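For readers tracing the V4 test above: the mocked response is a standard JSON-RPC envelope whose result now also carries executionRequests as hex blobs. A small sketch of decoding such an envelope with the standard library (field names follow the fixture's camelCase convention but should be treated as illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// getPayloadV4Result models only the fields this sketch needs; the real
// response also carries the full execution payload and blobs bundle.
type getPayloadV4Result struct {
	BlockValue            string   `json:"blockValue"`
	ShouldOverrideBuilder bool     `json:"shouldOverrideBuilder"`
	ExecutionRequests     []string `json:"executionRequests"`
}

type rpcEnvelope struct {
	Result getPayloadV4Result `json:"result"`
}

func main() {
	raw := `{"jsonrpc":"2.0","id":1,"result":{"blockValue":"0x11fffffffff","shouldOverrideBuilder":true,"executionRequests":["0x6100","0x6400","0x6600"]}}`
	var env rpcEnvelope
	if err := json.Unmarshal([]byte(raw), &env); err != nil {
		panic(err)
	}
	// One hex blob per request type (deposits, withdrawals, consolidations).
	fmt.Println(env.Result.ShouldOverrideBuilder, len(env.Result.ExecutionRequests))
}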
t.Run(ForkchoiceUpdatedMethod+" VALID status", func(t *testing.T) {
|
||||
forkChoiceState := &pb.ForkchoiceState{
|
||||
HeadBlockHash: []byte("head"),
|
||||
@@ -470,7 +553,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
@@ -484,7 +567,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
@@ -498,7 +581,46 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
t.Run(NewPayloadMethodV4+" VALID status", func(t *testing.T) {
|
||||
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
|
||||
require.Equal(t, true, ok)
|
||||
want, ok := fix["ValidPayloadStatus"].(*pb.PayloadStatus)
|
||||
require.Equal(t, true, ok)
|
||||
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
requests := &pb.ExecutionRequests{
|
||||
Deposits: []*pb.DepositRequest{
|
||||
{
|
||||
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
|
||||
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
|
||||
Index: 0,
|
||||
},
|
||||
},
|
||||
Withdrawals: []*pb.WithdrawalRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
|
||||
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
},
|
||||
},
|
||||
Consolidations: []*pb.ConsolidationRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
|
||||
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
|
||||
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
|
||||
},
|
||||
},
|
||||
}
|
||||
client := newPayloadV4Setup(t, want, execPayload, requests)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
@@ -512,7 +634,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -526,7 +648,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -540,7 +662,46 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
|
||||
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
t.Run(NewPayloadMethodV4+" SYNCING status", func(t *testing.T) {
|
||||
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
|
||||
require.Equal(t, true, ok)
|
||||
want, ok := fix["SyncingStatus"].(*pb.PayloadStatus)
|
||||
require.Equal(t, true, ok)
|
||||
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
requests := &pb.ExecutionRequests{
|
||||
Deposits: []*pb.DepositRequest{
|
||||
{
|
||||
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
|
||||
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
|
||||
Index: 0,
|
||||
},
|
||||
},
|
||||
Withdrawals: []*pb.WithdrawalRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
|
||||
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
},
|
||||
},
|
||||
Consolidations: []*pb.ConsolidationRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
|
||||
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
|
||||
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
|
||||
},
|
||||
},
|
||||
}
|
||||
client := newPayloadV4Setup(t, want, execPayload, requests)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
|
||||
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -554,7 +715,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -568,7 +729,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -582,7 +743,45 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
|
||||
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
t.Run(NewPayloadMethodV4+" INVALID_BLOCK_HASH status", func(t *testing.T) {
|
||||
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
|
||||
require.Equal(t, true, ok)
|
||||
want, ok := fix["InvalidBlockHashStatus"].(*pb.PayloadStatus)
|
||||
require.Equal(t, true, ok)
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
requests := &pb.ExecutionRequests{
|
||||
Deposits: []*pb.DepositRequest{
|
||||
{
|
||||
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
|
||||
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
|
||||
Index: 0,
|
||||
},
|
||||
},
|
||||
Withdrawals: []*pb.WithdrawalRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
|
||||
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
},
|
||||
},
|
||||
Consolidations: []*pb.ConsolidationRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
|
||||
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
|
||||
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
|
||||
},
|
||||
},
|
||||
}
|
||||
client := newPayloadV4Setup(t, want, execPayload, requests)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
|
||||
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -596,7 +795,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
@@ -610,7 +809,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
@@ -624,7 +823,46 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, nil)
|
||||
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
t.Run(NewPayloadMethodV4+" INVALID status", func(t *testing.T) {
|
||||
execPayload, ok := fix["ExecutionPayloadDeneb"].(*pb.ExecutionPayloadDeneb)
|
||||
require.Equal(t, true, ok)
|
||||
want, ok := fix["InvalidStatus"].(*pb.PayloadStatus)
|
||||
require.Equal(t, true, ok)
|
||||
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload)
|
||||
require.NoError(t, err)
|
||||
requests := &pb.ExecutionRequests{
|
||||
Deposits: []*pb.DepositRequest{
|
||||
{
|
||||
Pubkey: bytesutil.PadTo([]byte{byte('a')}, fieldparams.BLSPubkeyLength),
|
||||
WithdrawalCredentials: bytesutil.PadTo([]byte{byte('b')}, fieldparams.RootLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
Signature: bytesutil.PadTo([]byte{byte('c')}, fieldparams.BLSSignatureLength),
|
||||
Index: 0,
|
||||
},
|
||||
},
|
||||
Withdrawals: []*pb.WithdrawalRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('d')}, common.AddressLength),
|
||||
ValidatorPubkey: bytesutil.PadTo([]byte{byte('e')}, fieldparams.BLSPubkeyLength),
|
||||
Amount: params.BeaconConfig().MinActivationBalance,
|
||||
},
|
||||
},
|
||||
Consolidations: []*pb.ConsolidationRequest{
|
||||
{
|
||||
SourceAddress: bytesutil.PadTo([]byte{byte('f')}, common.AddressLength),
|
||||
SourcePubkey: bytesutil.PadTo([]byte{byte('g')}, fieldparams.BLSPubkeyLength),
|
||||
TargetPubkey: bytesutil.PadTo([]byte{byte('h')}, fieldparams.BLSPubkeyLength),
|
||||
},
|
||||
},
|
||||
}
|
||||
client := newPayloadV4Setup(t, want, execPayload, requests)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'}, requests)
|
||||
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
|
||||
require.DeepEqual(t, want.LatestValidHash, resp)
|
||||
})
|
||||
@@ -638,7 +876,7 @@ func TestClient_HTTP(t *testing.T) {
|
||||
// We call the RPC method via HTTP and expect a proper result.
|
||||
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
|
||||
require.NoError(t, err)
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
|
||||
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
|
||||
require.ErrorIs(t, ErrUnknownPayloadStatus, err)
|
||||
require.DeepEqual(t, []uint8(nil), resp)
|
||||
})
|
||||
@@ -1297,6 +1535,7 @@ func fixtures() map[string]interface{} {
|
||||
"ExecutionPayloadDeneb": s.ExecutionPayloadDeneb,
|
||||
"ExecutionPayloadCapellaWithValue": s.ExecutionPayloadWithValueCapella,
|
||||
"ExecutionPayloadDenebWithValue": s.ExecutionPayloadWithValueDeneb,
|
||||
"ExecutionBundleElectra": s.ExecutionBundleElectra,
|
||||
"ValidPayloadStatus": s.ValidPayloadStatus,
|
||||
"InvalidBlockHashStatus": s.InvalidBlockHashStatus,
|
||||
"AcceptedStatus": s.AcceptedStatus,
|
||||
@@ -1483,6 +1722,53 @@ func fixturesStruct() *payloadFixtures {
|
||||
Blobs: []hexutil.Bytes{{'a'}, {'b'}},
|
||||
},
|
||||
}
|
||||
|
||||
depositRequestBytes, err := hexutil.Decode("0x610000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
|
||||
"620000000000000000000000000000000000000000000000000000000000000000" +
|
||||
"4059730700000063000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
|
||||
"00000000000000000000000000000000000000000000000000000000000000000000000000000000")
|
||||
if err != nil {
|
||||
panic("failed to decode deposit request")
|
||||
}
|
||||
withdrawalRequestBytes, err := hexutil.Decode("0x6400000000000000000000000000000000000000" +
|
||||
"6500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040597307000000")
|
||||
if err != nil {
|
||||
panic("failed to decode withdrawal request")
|
||||
}
|
||||
consolidationRequestBytes, err := hexutil.Decode("0x6600000000000000000000000000000000000000" +
|
||||
"670000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" +
|
||||
"680000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000")
|
||||
if err != nil {
|
||||
panic("failed to decode consolidation request")
|
||||
}
|
||||
executionBundleFixtureElectra := &pb.GetPayloadV4ResponseJson{
|
||||
ShouldOverrideBuilder: true,
|
||||
ExecutionPayload: &pb.ExecutionPayloadDenebJSON{
|
||||
ParentHash: &common.Hash{'a'},
|
||||
FeeRecipient: &common.Address{'b'},
|
||||
StateRoot: &common.Hash{'c'},
|
||||
ReceiptsRoot: &common.Hash{'d'},
|
||||
LogsBloom: &hexutil.Bytes{'e'},
|
||||
PrevRandao: &common.Hash{'f'},
|
||||
BaseFeePerGas: "0x123",
|
||||
BlockHash: &common.Hash{'g'},
|
||||
Transactions: []hexutil.Bytes{{'h'}},
|
||||
Withdrawals: []*pb.Withdrawal{},
|
||||
BlockNumber: &hexUint,
|
||||
GasLimit: &hexUint,
|
||||
GasUsed: &hexUint,
|
||||
Timestamp: &hexUint,
|
||||
BlobGasUsed: &bgu,
|
||||
ExcessBlobGas: &ebg,
|
||||
},
|
||||
BlockValue: "0x11fffffffff",
|
||||
BlobsBundle: &pb.BlobBundleJSON{
|
||||
Commitments: []hexutil.Bytes{[]byte("commitment1"), []byte("commitment2")},
|
||||
Proofs: []hexutil.Bytes{[]byte("proof1"), []byte("proof2")},
|
||||
Blobs: []hexutil.Bytes{{'a'}, {'b'}},
|
||||
},
|
||||
ExecutionRequests: []hexutil.Bytes{depositRequestBytes, withdrawalRequestBytes, consolidationRequestBytes},
|
||||
}
|
||||
parent := bytesutil.PadTo([]byte("parentHash"), fieldparams.RootLength)
|
||||
sha3Uncles := bytesutil.PadTo([]byte("sha3Uncles"), fieldparams.RootLength)
|
||||
miner := bytesutil.PadTo([]byte("miner"), fieldparams.FeeRecipientLength)
|
||||
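The hex fixtures above are the flattened encodings of the deposit, withdrawal, and consolidation requests used by engine_getPayloadV4. A hedged sketch of the flattening step, assuming fixed-size per-request encodings (the 192-byte deposit size is inferred from the fixture: 48-byte pubkey, 32-byte credentials, 8-byte amount, 96-byte signature, 8-byte index):

package main

import (
	"encoding/hex"
	"fmt"
)

// depositRequestSize is the fixed size of one encoded deposit request as used
// in the fixture above.
const depositRequestSize = 192

// flattenRequests concatenates a list of fixed-size encoded requests into the
// single byte string passed over JSON-RPC for that request type.
func flattenRequests(requests [][]byte, size int) ([]byte, error) {
	out := make([]byte, 0, len(requests)*size)
	for _, r := range requests {
		if len(r) != size {
			return nil, fmt.Errorf("unexpected request size %d, want %d", len(r), size)
		}
		out = append(out, r...)
	}
	return out, nil
}

func main() {
	// A single zero-valued deposit request, purely to show the shape.
	deposit := make([]byte, depositRequestSize)
	flat, err := flattenRequests([][]byte{deposit}, depositRequestSize)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(flat), "0x"+hex.EncodeToString(flat[:4]))
}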
@@ -1576,6 +1862,7 @@ func fixturesStruct() *payloadFixtures {
|
||||
EmptyExecutionPayloadDeneb: emptyExecutionPayloadDeneb,
|
||||
ExecutionPayloadWithValueCapella: executionPayloadWithValueFixtureCapella,
|
||||
ExecutionPayloadWithValueDeneb: executionPayloadWithValueFixtureDeneb,
|
||||
ExecutionBundleElectra: executionBundleFixtureElectra,
|
||||
ValidPayloadStatus: validStatus,
|
||||
InvalidBlockHashStatus: inValidBlockHashStatus,
|
||||
AcceptedStatus: acceptedStatus,
|
||||
@@ -1599,6 +1886,7 @@ type payloadFixtures struct {
|
||||
ExecutionPayloadDeneb *pb.ExecutionPayloadDeneb
|
||||
ExecutionPayloadWithValueCapella *pb.GetPayloadV2ResponseJson
|
||||
ExecutionPayloadWithValueDeneb *pb.GetPayloadV3ResponseJson
|
||||
ExecutionBundleElectra *pb.GetPayloadV4ResponseJson
|
||||
ValidPayloadStatus *pb.PayloadStatus
|
||||
InvalidBlockHashStatus *pb.PayloadStatus
|
||||
AcceptedStatus *pb.PayloadStatus
|
||||
@@ -1957,6 +2245,54 @@ func newPayloadV3Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.Execu
|
||||
return service
|
||||
}
|
||||
|
||||
func newPayloadV4Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayloadDeneb, requests *pb.ExecutionRequests) *Service {
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
defer func() {
|
||||
require.NoError(t, r.Body.Close())
|
||||
}()
|
||||
enc, err := io.ReadAll(r.Body)
|
||||
require.NoError(t, err)
|
||||
jsonRequestString := string(enc)
|
||||
require.Equal(t, true, strings.Contains(
|
||||
jsonRequestString, string("engine_newPayloadV4"),
|
||||
))
|
||||
|
||||
reqPayload, err := json.Marshal(payload)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the JSON string RPC request contains the right arguments.
|
||||
require.Equal(t, true, strings.Contains(
|
||||
jsonRequestString, string(reqPayload),
|
||||
))
|
||||
|
||||
reqRequests, err := pb.EncodeExecutionRequests(requests)
|
||||
require.NoError(t, err)
|
||||
|
||||
jsonRequests, err := json.Marshal(reqRequests)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, true, strings.Contains(
|
||||
jsonRequestString, string(jsonRequests),
|
||||
))
|
||||
|
||||
resp := map[string]interface{}{
|
||||
"jsonrpc": "2.0",
|
||||
"id": 1,
|
||||
"result": status,
|
||||
}
|
||||
err = json.NewEncoder(w).Encode(resp)
|
||||
require.NoError(t, err)
|
||||
}))
|
||||
|
||||
rpcClient, err := rpc.DialHTTP(srv.URL)
|
||||
require.NoError(t, err)
|
||||
|
||||
service := &Service{}
|
||||
service.rpcClient = rpcClient
|
||||
return service
|
||||
}
|
||||
|
||||
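newPayloadV4Setup above asserts that the outgoing JSON-RPC body names the right method and arguments before replying with a canned status. A generic stand-alone version of that pattern using httptest (a sketch, not Prysm's helper):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// newEngineMock spins up a test server that checks the JSON-RPC body mentions
// the expected method before answering with a canned result.
func newEngineMock(expectMethod string, result interface{}) *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var buf bytes.Buffer
		if _, err := buf.ReadFrom(r.Body); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		if !strings.Contains(buf.String(), expectMethod) {
			http.Error(w, "unexpected method", http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(map[string]interface{}{
			"jsonrpc": "2.0", "id": 1, "result": result,
		})
	}))
}

func main() {
	srv := newEngineMock("engine_newPayloadV4", map[string]string{"status": "VALID"})
	defer srv.Close()

	body := strings.NewReader(`{"jsonrpc":"2.0","id":1,"method":"engine_newPayloadV4","params":[]}`)
	resp, err := http.Post(srv.URL, "application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}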
func TestReconstructBlindedBlockBatch(t *testing.T) {
|
||||
t.Run("empty response works", func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
@@ -3,7 +3,6 @@ package execution
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"math"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
@@ -131,21 +130,10 @@ func TestParseRequest(t *testing.T) {
|
||||
strToHexBytes(t, "0x66756c6c00000000000000000000000000000000000000000000000000000000"),
|
||||
},
|
||||
},
|
||||
{
|
||||
method: GetPayloadBodiesByHashV2,
|
||||
byteArgs: []hexutil.Bytes{
|
||||
strToHexBytes(t, "0x656d707479000000000000000000000000000000000000000000000000000000"),
|
||||
strToHexBytes(t, "0x66756c6c00000000000000000000000000000000000000000000000000000000"),
|
||||
},
|
||||
},
|
||||
{
|
||||
method: GetPayloadBodiesByRangeV1,
|
||||
hexArgs: []string{hexutil.EncodeUint64(0), hexutil.EncodeUint64(1)},
|
||||
},
|
||||
{
|
||||
method: GetPayloadBodiesByRangeV2,
|
||||
hexArgs: []string{hexutil.EncodeUint64(math.MaxUint64), hexutil.EncodeUint64(1)},
|
||||
},
|
||||
}
|
||||
for _, c := range cases {
|
||||
t.Run(c.method, func(t *testing.T) {
|
||||
@@ -191,9 +179,7 @@ func TestParseRequest(t *testing.T) {
|
||||
func TestCallCount(t *testing.T) {
|
||||
methods := []string{
|
||||
GetPayloadBodiesByHashV1,
|
||||
GetPayloadBodiesByHashV2,
|
||||
GetPayloadBodiesByRangeV1,
|
||||
GetPayloadBodiesByRangeV2,
|
||||
}
|
||||
cases := []struct {
|
||||
method string
|
||||
@@ -201,10 +187,8 @@ func TestCallCount(t *testing.T) {
|
||||
}{
|
||||
{method: GetPayloadBodiesByHashV1, count: 1},
|
||||
{method: GetPayloadBodiesByHashV1, count: 2},
|
||||
{method: GetPayloadBodiesByHashV2, count: 1},
|
||||
{method: GetPayloadBodiesByRangeV1, count: 1},
|
||||
{method: GetPayloadBodiesByRangeV1, count: 2},
|
||||
{method: GetPayloadBodiesByRangeV2, count: 1},
|
||||
}
|
||||
for _, c := range cases {
|
||||
t.Run(c.method, func(t *testing.T) {
|
||||
|
||||
@@ -12,7 +12,6 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
pb "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
|
||||
"github.com/prysmaticlabs/prysm/v5/runtime/version"
|
||||
"google.golang.org/protobuf/proto"
|
||||
)
|
||||
|
||||
@@ -87,10 +86,7 @@ func (r *blindedBlockReconstructor) addToBatch(b interfaces.ReadOnlySignedBeacon
|
||||
return nil
|
||||
}
|
||||
|
||||
func payloadBodyMethodForBlock(b interface{ Version() int }) string {
|
||||
if b.Version() > version.Deneb {
|
||||
return GetPayloadBodiesByHashV2
|
||||
}
|
||||
func payloadBodyMethodForBlock(_ interface{ Version() int }) string {
|
||||
return GetPayloadBodiesByHashV1
|
||||
}
|
||||
|
||||
@@ -243,9 +239,6 @@ func (r *blindedBlockReconstructor) unblinded() ([]interfaces.SignedBeaconBlock,
|
||||
return unblinded, nil
|
||||
}
|
||||
|
||||
func rangeMethodForHashMethod(method string) string {
|
||||
if method == GetPayloadBodiesByHashV2 {
|
||||
return GetPayloadBodiesByRangeV2
|
||||
}
|
||||
func rangeMethodForHashMethod(_ string) string {
|
||||
return GetPayloadBodiesByRangeV1
|
||||
}
|
||||
|
||||
@@ -25,33 +25,6 @@ func (v versioner) Version() int {
|
||||
return v.version
|
||||
}
|
||||
|
||||
func TestPayloadBodyMethodForBlock(t *testing.T) {
|
||||
cases := []struct {
|
||||
versions []int
|
||||
want string
|
||||
}{
|
||||
{
|
||||
versions: []int{version.Phase0, version.Altair, version.Bellatrix, version.Capella, version.Deneb},
|
||||
want: GetPayloadBodiesByHashV1,
|
||||
},
|
||||
{
|
||||
versions: []int{version.Electra},
|
||||
want: GetPayloadBodiesByHashV2,
|
||||
},
|
||||
}
|
||||
for _, c := range cases {
|
||||
for _, v := range c.versions {
|
||||
t.Run(version.String(v), func(t *testing.T) {
|
||||
v := versioner{version: v}
|
||||
require.Equal(t, c.want, payloadBodyMethodForBlock(v))
|
||||
})
|
||||
}
|
||||
}
|
||||
t.Run("post-electra", func(t *testing.T) {
|
||||
require.Equal(t, GetPayloadBodiesByHashV2, payloadBodyMethodForBlock(versioner{version: version.Electra + 1}))
|
||||
})
|
||||
}
|
||||
|
||||
func payloadToBody(t *testing.T, ed interfaces.ExecutionData) *pb.ExecutionPayloadBody {
|
||||
body := &pb.ExecutionPayloadBody{}
|
||||
txs, err := ed.Transactions()
|
||||
@@ -347,22 +320,6 @@ func TestReconstructBlindedBlockBatchFallbackToRange(t *testing.T) {
|
||||
}
|
||||
mockWriteResult(t, w, msg, executionPayloadBodies)
|
||||
})
|
||||
// separate methods for the electra block
|
||||
srv.register(GetPayloadBodiesByHashV2, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
|
||||
executionPayloadBodies := []*pb.ExecutionPayloadBody{nil}
|
||||
mockWriteResult(t, w, msg, executionPayloadBodies)
|
||||
})
|
||||
srv.register(GetPayloadBodiesByRangeV2, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
|
||||
p := mockParseUintList(t, msg.Params)
|
||||
require.Equal(t, 2, len(p))
|
||||
start, count := p[0], p[1]
|
||||
require.Equal(t, fx.electra.blinded.header.BlockNumber(), start)
|
||||
require.Equal(t, uint64(1), count)
|
||||
executionPayloadBodies := []*pb.ExecutionPayloadBody{
|
||||
payloadToBody(t, fx.electra.blinded.header),
|
||||
}
|
||||
mockWriteResult(t, w, msg, executionPayloadBodies)
|
||||
})
|
||||
blind := []interfaces.ReadOnlySignedBeaconBlock{
|
||||
fx.denebBlock.blinded.block,
|
||||
fx.emptyDenebBlock.blinded.block,
|
||||
@@ -381,13 +338,8 @@ func TestReconstructBlindedBlockBatchDenebAndElectra(t *testing.T) {
|
||||
t.Run("deneb and electra", func(t *testing.T) {
|
||||
cli, srv := newMockEngine(t)
|
||||
fx := testBlindedBlockFixtures(t)
|
||||
// The reconstructed should make separate calls for the deneb (v1) and electra (v2) blocks.
|
||||
srv.register(GetPayloadBodiesByHashV1, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
|
||||
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.denebBlock.blinded.header)}
|
||||
mockWriteResult(t, w, msg, executionPayloadBodies)
|
||||
})
|
||||
srv.register(GetPayloadBodiesByHashV2, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
|
||||
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.electra.blinded.header)}
|
||||
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.denebBlock.blinded.header), payloadToBody(t, fx.electra.blinded.header)}
|
||||
mockWriteResult(t, w, msg, executionPayloadBodies)
|
||||
})
|
||||
blinded := []interfaces.ReadOnlySignedBeaconBlock{
|
||||
|
||||
@@ -39,7 +39,7 @@ type EngineClient struct {
|
||||
}
|
||||
|
||||
// NewPayload --
|
||||
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash) ([]byte, error) {
|
||||
func (e *EngineClient) NewPayload(_ context.Context, _ interfaces.ExecutionData, _ []common.Hash, _ *common.Hash, _ *pb.ExecutionRequests) ([]byte, error) {
|
||||
return e.NewPayloadResp, e.ErrNewPayload
|
||||
}
|
||||
|
||||
@@ -54,7 +54,7 @@ func (e *EngineClient) ForkchoiceUpdated(
|
||||
}
|
||||
|
||||
// GetPayload --
|
||||
func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte, s primitives.Slot) (*blocks.GetPayloadResponse, error) {
|
||||
func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte, _ primitives.Slot) (*blocks.GetPayloadResponse, error) {
|
||||
return e.GetPayloadResponse, e.ErrGetPayload
|
||||
}
|
||||
|
||||
|
||||
@@ -141,7 +141,14 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, ro
|
||||
}
|
||||
|
||||
jc, fc = f.store.pullTips(state, node, jc, fc)
|
||||
return f.updateCheckpoints(ctx, jc, fc)
|
||||
if err := f.updateCheckpoints(ctx, jc, fc); err != nil {
|
||||
_, remErr := f.store.removeNode(ctx, node)
|
||||
if remErr != nil {
|
||||
log.WithError(remErr).Error("could not remove node")
|
||||
}
|
||||
return errors.Wrap(err, "could not update checkpoints")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// updateCheckpoints update the checkpoints when inserting a new node.
|
||||
|
||||
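The InsertNode hunk above implements the "cleanup forkchoice on failed insertions" change: if updating checkpoints fails, the freshly inserted node is removed again so the store is left untouched. A toy illustration of the rollback pattern:

package main

import (
	"errors"
	"fmt"
)

type store struct {
	nodes map[string]int
}

// insert adds a node and rolls the map back if the follow-up bookkeeping
// fails, mirroring the cleanup-on-failed-insertion change above.
func (s *store) insert(root string, weight int, update func() error) error {
	s.nodes[root] = weight
	if err := update(); err != nil {
		delete(s.nodes, root) // leave the store exactly as it was before the insert
		return fmt.Errorf("could not update checkpoints: %w", err)
	}
	return nil
}

func main() {
	s := &store{nodes: map[string]int{}}
	err := s.insert("0xabc", 1, func() error { return errors.New("mock err") })
	fmt.Println(err != nil, len(s.nodes)) // true 0
}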
@@ -3,6 +3,7 @@ package doublylinkedtree
|
||||
import (
|
||||
"context"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -887,3 +888,16 @@ func TestForkchoiceParentRoot(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, zeroHash, root)
|
||||
}
|
||||
|
||||
func TestForkChoice_CleanupInserting(t *testing.T) {
|
||||
f := setup(0, 0)
|
||||
ctx := context.Background()
|
||||
st, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 2, 2)
|
||||
f.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) {
|
||||
return f.justifiedBalances, errors.New("mock err")
|
||||
})
|
||||
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, f.InsertNode(ctx, st, blkRoot))
|
||||
require.Equal(t, false, f.HasNode(blkRoot))
|
||||
}
|
||||
|
||||
@@ -107,7 +107,9 @@ func (s *Store) insert(ctx context.Context,
|
||||
s.headNode = n
|
||||
s.highestReceivedNode = n
|
||||
} else {
|
||||
return n, errInvalidParentRoot
|
||||
delete(s.nodeByRoot, root)
|
||||
delete(s.nodeByPayload, payloadHash)
|
||||
return nil, errInvalidParentRoot
|
||||
}
|
||||
} else {
|
||||
parent.children = append(parent.children, n)
|
||||
@@ -128,7 +130,11 @@ func (s *Store) insert(ctx context.Context,
|
||||
jEpoch := s.justifiedCheckpoint.Epoch
|
||||
fEpoch := s.finalizedCheckpoint.Epoch
|
||||
if err := s.treeRootNode.updateBestDescendant(ctx, jEpoch, fEpoch, slots.ToEpoch(currentSlot)); err != nil {
|
||||
return n, err
|
||||
_, remErr := s.removeNode(ctx, n)
|
||||
if remErr != nil {
|
||||
log.WithError(remErr).Error("could not remove node")
|
||||
}
|
||||
return nil, errors.Wrap(err, "could not update best descendants")
|
||||
}
|
||||
}
|
||||
// Update metrics.
|
||||
|
||||
@@ -525,3 +525,12 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, root4, target)
|
||||
}
|
||||
|
||||
func TestStore_CleanupInserting(t *testing.T) {
|
||||
f := setup(0, 0)
|
||||
ctx := context.Background()
|
||||
st, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, f.InsertNode(ctx, st, blkRoot))
|
||||
require.Equal(t, false, f.HasNode(blkRoot))
|
||||
}
|
||||
|
||||
@@ -219,6 +219,16 @@ func (s *Service) validatorEndpoints(
|
||||
handler: server.SubmitAggregateAndProofs,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
{
|
||||
template: "/eth/v2/validator/aggregate_and_proofs",
|
||||
name: namespace + ".SubmitAggregateAndProofsV2",
|
||||
middleware: []middleware.Middleware{
|
||||
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
|
||||
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
|
||||
},
|
||||
handler: server.SubmitAggregateAndProofsV2,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
{
|
||||
template: "/eth/v1/validator/sync_committee_contribution",
|
||||
name: namespace + ".ProduceSyncCommitteeContribution",
|
||||
@@ -688,14 +698,33 @@ func (s *Service) beaconEndpoints(
|
||||
handler: server.GetAttesterSlashings,
|
||||
methods: []string{http.MethodGet},
|
||||
},
|
||||
{
|
||||
template: "/eth/v2/beacon/pool/attester_slashings",
|
||||
name: namespace + ".GetAttesterSlashingsV2",
|
||||
middleware: []middleware.Middleware{
|
||||
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
|
||||
},
|
||||
handler: server.GetAttesterSlashingsV2,
|
||||
methods: []string{http.MethodGet},
|
||||
},
|
||||
{
|
||||
template: "/eth/v1/beacon/pool/attester_slashings",
|
||||
name: namespace + ".SubmitAttesterSlashing",
|
||||
name: namespace + ".SubmitAttesterSlashings",
|
||||
middleware: []middleware.Middleware{
|
||||
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
|
||||
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
|
||||
},
|
||||
handler: server.SubmitAttesterSlashing,
|
||||
handler: server.SubmitAttesterSlashings,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
{
|
||||
template: "/eth/v2/beacon/pool/attester_slashings",
|
||||
name: namespace + ".SubmitAttesterSlashingsV2",
|
||||
middleware: []middleware.Middleware{
|
||||
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
|
||||
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
|
||||
},
|
||||
handler: server.SubmitAttesterSlashingsV2,
|
||||
methods: []string{http.MethodPost},
|
||||
},
|
||||
{
|
||||
|
||||
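The endpoint-table hunks register the new /eth/v2 routes alongside the existing v1 ones, each wrapped in the same content-type and accept-header middleware. A minimal stdlib sketch of side-by-side versioned registration (handlers and middleware here are stand-ins):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// requireJSON is a tiny stand-in for the content-type middleware used above.
func requireJSON(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Content-Type") != "application/json" {
			http.Error(w, "unsupported media type", http.StatusUnsupportedMediaType)
			return
		}
		next(w, r)
	}
}

func main() {
	mux := http.NewServeMux()
	// v1 and v2 of the same resource are registered side by side, each with its
	// own handler, just as the endpoint table in the diff does.
	mux.HandleFunc("/eth/v1/beacon/pool/attester_slashings", requireJSON(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "v1")
	}))
	mux.HandleFunc("/eth/v2/beacon/pool/attester_slashings", requireJSON(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "v2")
	}))

	srv := httptest.NewServer(mux)
	defer srv.Close()
	req, _ := http.NewRequest(http.MethodPost, srv.URL+"/eth/v2/beacon/pool/attester_slashings", nil)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode)
}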
@@ -42,6 +42,7 @@ func Test_endpoints(t *testing.T) {
|
||||
"/eth/v1/beacon/blinded_blocks/{block_id}": {http.MethodGet},
|
||||
"/eth/v1/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
|
||||
"/eth/v1/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
|
||||
"/eth/v2/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
|
||||
"/eth/v1/beacon/pool/proposer_slashings": {http.MethodGet, http.MethodPost},
|
||||
"/eth/v1/beacon/pool/sync_committees": {http.MethodPost},
|
||||
"/eth/v1/beacon/pool/voluntary_exits": {http.MethodGet, http.MethodPost},
|
||||
@@ -100,6 +101,7 @@ func Test_endpoints(t *testing.T) {
|
||||
"/eth/v1/validator/attestation_data": {http.MethodGet},
|
||||
"/eth/v1/validator/aggregate_attestation": {http.MethodGet},
|
||||
"/eth/v1/validator/aggregate_and_proofs": {http.MethodPost},
|
||||
"/eth/v2/validator/aggregate_and_proofs": {http.MethodPost},
|
||||
"/eth/v1/validator/beacon_committee_subscriptions": {http.MethodPost},
|
||||
"/eth/v1/validator/sync_committee_subscriptions": {http.MethodPost},
|
||||
"/eth/v1/validator/beacon_committee_selections": {http.MethodPost},
|
||||
|
||||
@@ -10,6 +10,7 @@ import (
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/api"
|
||||
"github.com/prysmaticlabs/prysm/v5/api/server"
|
||||
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
|
||||
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
|
||||
@@ -467,25 +468,71 @@ func (s *Server) GetAttesterSlashings(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
|
||||
ss := make([]*eth.AttesterSlashing, 0, len(sourceSlashings))
|
||||
for _, slashing := range sourceSlashings {
|
||||
s, ok := slashing.(*eth.AttesterSlashing)
|
||||
if ok {
|
||||
ss = append(ss, s)
|
||||
} else {
|
||||
httputil.HandleError(w, fmt.Sprintf("unable to convert slashing of type %T", slashing), http.StatusInternalServerError)
|
||||
slashings := make([]*structs.AttesterSlashing, len(sourceSlashings))
|
||||
for i, slashing := range sourceSlashings {
|
||||
as, ok := slashing.(*eth.AttesterSlashing)
|
||||
if !ok {
|
||||
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T", slashing), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
slashings[i] = structs.AttesterSlashingFromConsensus(as)
|
||||
}
|
||||
slashings := structs.AttesterSlashingsFromConsensus(ss)
|
||||
|
||||
httputil.WriteJson(w, &structs.GetAttesterSlashingsResponse{Data: slashings})
|
||||
attBytes, err := json.Marshal(slashings)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Failed to marshal slashings: %v", err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
httputil.WriteJson(w, &structs.GetAttesterSlashingsResponse{Data: attBytes})
|
||||
}
|
||||
|
||||
// SubmitAttesterSlashing submits an attester slashing object to node's pool and
|
||||
// GetAttesterSlashingsV2 retrieves attester slashings known by the node but
|
||||
// not necessarily incorporated into any block, supporting both AttesterSlashing and AttesterSlashingElectra.
|
||||
func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request) {
|
||||
ctx, span := trace.StartSpan(r.Context(), "beacon.GetAttesterSlashingsV2")
|
||||
defer span.End()
|
||||
|
||||
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
var attStructs []interface{}
|
||||
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
|
||||
for _, slashing := range sourceSlashings {
|
||||
if slashing.Version() >= version.Electra {
|
||||
a, ok := slashing.(*eth.AttesterSlashingElectra)
|
||||
if !ok {
|
||||
httputil.HandleError(w, fmt.Sprintf("Unable to convert electra slashing of type %T to an Electra slashing", slashing), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
attStruct := structs.AttesterSlashingElectraFromConsensus(a)
|
||||
attStructs = append(attStructs, attStruct)
|
||||
} else {
|
||||
a, ok := slashing.(*eth.AttesterSlashing)
|
||||
if !ok {
|
||||
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T to a Phase0 slashing", slashing), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
attStruct := structs.AttesterSlashingFromConsensus(a)
|
||||
attStructs = append(attStructs, attStruct)
|
||||
}
|
||||
}
|
||||
attBytes, err := json.Marshal(attStructs)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, fmt.Sprintf("Failed to marshal slashing: %v", err), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
resp := &structs.GetAttesterSlashingsResponse{
|
||||
Version: version.String(sourceSlashings[0].Version()),
|
||||
Data: attBytes,
|
||||
}
|
||||
httputil.WriteJson(w, resp)
|
||||
}
|
||||
|
||||
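GetAttesterSlashingsV2 above marshals the slashings separately and returns them together with the fork version. A small sketch of that response envelope, assuming the data field is raw JSON as in the handler above:

package main

import (
	"encoding/json"
	"fmt"
)

// response mirrors the shape written above: the slashings are marshaled on
// their own and tagged with the fork version they belong to.
type response struct {
	Version string          `json:"version"`
	Data    json.RawMessage `json:"data"`
}

func main() {
	slashings := []map[string]string{{"note": "illustrative payload only"}}
	data, err := json.Marshal(slashings)
	if err != nil {
		panic(err)
	}
	out, err := json.Marshal(response{Version: "electra", Data: data})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}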
// SubmitAttesterSlashings submits an attester slashing object to node's pool and
|
||||
// if passes validation node MUST broadcast it to network.
|
||||
func (s *Server) SubmitAttesterSlashing(w http.ResponseWriter, r *http.Request) {
|
||||
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttesterSlashing")
|
||||
func (s *Server) SubmitAttesterSlashings(w http.ResponseWriter, r *http.Request) {
|
||||
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttesterSlashings")
|
||||
defer span.End()
|
||||
|
||||
var req structs.AttesterSlashing
|
||||
@@ -504,16 +551,80 @@ func (s *Server) SubmitAttesterSlashing(w http.ResponseWriter, r *http.Request)
|
||||
httputil.HandleError(w, "Could not convert request slashing to consensus slashing: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
s.submitAttesterSlashing(w, ctx, slashing)
|
||||
}
|
||||
|
||||
// SubmitAttesterSlashingsV2 submits an attester slashing object to node's pool and
|
||||
// if passes validation node MUST broadcast it to network.
|
||||
func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Request) {
|
||||
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttesterSlashingsV2")
|
||||
defer span.End()
|
||||
|
||||
versionHeader := r.Header.Get(api.VersionHeader)
|
||||
if versionHeader == "" {
|
||||
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
|
||||
v, err := version.FromString(versionHeader)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Invalid version: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
if v >= version.Electra {
|
||||
var req structs.AttesterSlashingElectra
|
||||
err := json.NewDecoder(r.Body).Decode(&req)
|
||||
switch {
|
||||
case errors.Is(err, io.EOF):
|
||||
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
|
||||
return
|
||||
case err != nil:
|
||||
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
slashing, err := req.ToConsensus()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not convert request slashing to consensus slashing: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
s.submitAttesterSlashing(w, ctx, slashing)
|
||||
} else {
|
||||
var req structs.AttesterSlashing
|
||||
err := json.NewDecoder(r.Body).Decode(&req)
|
||||
switch {
|
||||
case errors.Is(err, io.EOF):
|
||||
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
|
||||
return
|
||||
case err != nil:
|
||||
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
|
||||
slashing, err := req.ToConsensus()
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not convert request slashing to consensus slashing: "+err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
s.submitAttesterSlashing(w, ctx, slashing)
|
||||
}
|
||||
}
|
||||
|
||||
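SubmitAttesterSlashingsV2 above branches on the consensus-version request header to decide which slashing shape to decode. A stand-alone sketch of the same header-driven dispatch (the header name Eth-Consensus-Version and the request shapes are illustrative here):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// submitHandler decodes the body into a different shape depending on the
// version header, in the same spirit as the handler above.
func submitHandler(w http.ResponseWriter, r *http.Request) {
	switch strings.ToLower(r.Header.Get("Eth-Consensus-Version")) {
	case "":
		http.Error(w, "version header is required", http.StatusBadRequest)
	case "electra":
		var req struct {
			Attestation1 json.RawMessage `json:"attestation_1"`
		}
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Fprint(w, "decoded electra slashing")
	default:
		fmt.Fprint(w, "decoded phase0 slashing")
	}
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(submitHandler))
	defer srv.Close()
	req, _ := http.NewRequest(http.MethodPost, srv.URL, strings.NewReader(`{"attestation_1":{}}`))
	req.Header.Set("Eth-Consensus-Version", "electra")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode)
}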
func (s *Server) submitAttesterSlashing(
|
||||
w http.ResponseWriter,
|
||||
ctx context.Context,
|
||||
slashing eth.AttSlashing,
|
||||
) {
|
||||
headState, err := s.ChainInfoFetcher.HeadState(ctx)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, slashing.Attestation_1.Data.Slot)
|
||||
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, slashing.FirstAttestation().GetData().Slot)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Could not process slots: "+err.Error(), http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
|
||||
err = blocks.VerifyAttesterSlashing(ctx, headState, slashing)
|
||||
if err != nil {
|
||||
httputil.HandleError(w, "Invalid attester slashing: "+err.Error(), http.StatusBadRequest)
|
||||
|
||||
@@ -13,6 +13,7 @@ import (
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/prysmaticlabs/go-bitfield"
|
||||
"github.com/prysmaticlabs/prysm/v5/api"
|
||||
"github.com/prysmaticlabs/prysm/v5/api/server"
|
||||
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
|
||||
blockchainmock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
|
||||
@@ -37,6 +38,7 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
|
||||
"github.com/prysmaticlabs/prysm/v5/network/httputil"
|
||||
ethpbv1alpha1 "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/runtime/version"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/util"
|
||||
@@ -983,9 +985,7 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestGetAttesterSlashings(t *testing.T) {
|
||||
bs, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
slashing1 := &ethpbv1alpha1.AttesterSlashing{
slashing1PreElectra := &ethpbv1alpha1.AttesterSlashing{
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
AttestingIndices: []uint64{1, 10},
Data: &ethpbv1alpha1.AttestationData{
|
||||
@@ -1021,7 +1021,7 @@ func TestGetAttesterSlashings(t *testing.T) {
|
||||
Signature: bytesutil.PadTo([]byte("signature2"), 96),
|
||||
},
|
||||
}
|
||||
slashing2 := &ethpbv1alpha1.AttesterSlashing{
slashing2PreElectra := &ethpbv1alpha1.AttesterSlashing{
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
Data: &ethpbv1alpha1.AttestationData{
|
||||
Data: ðpbv1alpha1.AttestationData{
|
||||
@@ -1057,23 +1057,168 @@ func TestGetAttesterSlashings(t *testing.T) {
|
||||
Signature: bytesutil.PadTo([]byte("signature4"), 96),
|
||||
},
|
||||
}
|
||||
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1, slashing2}},
|
||||
slashing1PostElectra := &ethpbv1alpha1.AttesterSlashingElectra{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{1, 10},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: 1,
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
|
||||
},
|
||||
},
|
||||
Signature: bytesutil.PadTo([]byte("signature1"), 96),
|
||||
},
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{2, 20},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: 2,
|
||||
CommitteeIndex: 2,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 2,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 20,
|
||||
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
|
||||
},
|
||||
},
|
||||
Signature: bytesutil.PadTo([]byte("signature2"), 96),
|
||||
},
|
||||
}
|
||||
slashing2PostElectra := &ethpbv1alpha1.AttesterSlashingElectra{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{3, 30},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: 3,
|
||||
CommitteeIndex: 3,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot3"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 3,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot3"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 30,
|
||||
Root: bytesutil.PadTo([]byte("targetroot3"), 32),
|
||||
},
|
||||
},
|
||||
Signature: bytesutil.PadTo([]byte("signature3"), 96),
|
||||
},
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{4, 40},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: 4,
|
||||
CommitteeIndex: 4,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot4"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 4,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot4"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 40,
|
||||
Root: bytesutil.PadTo([]byte("targetroot4"), 32),
|
||||
},
|
||||
},
|
||||
Signature: bytesutil.PadTo([]byte("signature4"), 96),
|
||||
},
|
||||
}
|
||||
|
||||
request := httptest.NewRequest(http.MethodGet, "http://example.com/beacon/pool/attester_slashings", nil)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
t.Run("V1", func(t *testing.T) {
|
||||
bs, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
|
||||
s.GetAttesterSlashings(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
resp := &structs.GetAttesterSlashingsResponse{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
|
||||
require.NotNil(t, resp)
|
||||
require.NotNil(t, resp.Data)
|
||||
assert.Equal(t, 2, len(resp.Data))
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PreElectra, slashing2PreElectra}},
|
||||
}
|
||||
|
||||
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/pool/attester_slashings", nil)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttesterSlashings(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
resp := &structs.GetAttesterSlashingsResponse{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
|
||||
require.NotNil(t, resp)
|
||||
require.NotNil(t, resp.Data)
|
||||
|
||||
var slashings []*structs.AttesterSlashing
|
||||
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
|
||||
|
||||
ss, err := structs.AttesterSlashingsToConsensus(slashings)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.DeepEqual(t, slashing1PreElectra, ss[0])
|
||||
require.DeepEqual(t, slashing2PreElectra, ss[1])
|
||||
})
|
||||
t.Run("V2-post-electra", func(t *testing.T) {
|
||||
bs, err := util.NewBeaconStateElectra()
|
||||
require.NoError(t, err)
|
||||
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PostElectra, slashing2PostElectra}},
|
||||
}
|
||||
|
||||
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/beacon/pool/attester_slashings", nil)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttesterSlashingsV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
resp := &structs.GetAttesterSlashingsResponse{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
|
||||
require.NotNil(t, resp)
|
||||
require.NotNil(t, resp.Data)
|
||||
assert.Equal(t, "electra", resp.Version)
|
||||
|
||||
// Unmarshal resp.Data into a slice of slashings
|
||||
var slashings []*structs.AttesterSlashingElectra
|
||||
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
|
||||
|
||||
ss, err := structs.AttesterSlashingsElectraToConsensus(slashings)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.DeepEqual(t, slashing1PostElectra, ss[0])
|
||||
require.DeepEqual(t, slashing2PostElectra, ss[1])
|
||||
})
|
||||
t.Run("V2-pre-electra", func(t *testing.T) {
|
||||
bs, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PreElectra, slashing2PreElectra}},
|
||||
}
|
||||
|
||||
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/beacon/pool/attester_slashings", nil)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.GetAttesterSlashingsV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
resp := &structs.GetAttesterSlashingsResponse{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
|
||||
require.NotNil(t, resp)
|
||||
require.NotNil(t, resp.Data)
|
||||
|
||||
var slashings []*structs.AttesterSlashing
|
||||
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
|
||||
|
||||
ss, err := structs.AttesterSlashingsToConsensus(slashings)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.DeepEqual(t, slashing1PreElectra, ss[0])
|
||||
require.DeepEqual(t, slashing2PreElectra, ss[1])
|
||||
})
|
||||
}
|
||||
|
||||
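The GetAttesterSlashings tests above read back a version-tagged envelope: resp.Version names the fork and resp.Data is raw JSON that is only then unmarshaled into the fork-specific slice. A minimal sketch of that consumption pattern, using simplified stand-in types rather than the real structs package:

// Sketch only: envelope and countSlashings are illustrative stand-ins.
package main

import (
    "encoding/json"
    "fmt"
)

type envelope struct {
    Version string          `json:"version"`
    Data    json.RawMessage `json:"data"`
}

// countSlashings picks the concrete slice type from the version field
// before decoding the raw data payload.
func countSlashings(body []byte) (int, error) {
    var resp envelope
    if err := json.Unmarshal(body, &resp); err != nil {
        return 0, err
    }
    if resp.Version == "electra" {
        var ss []struct {
            Attestation1 json.RawMessage `json:"attestation_1"`
        }
        if err := json.Unmarshal(resp.Data, &ss); err != nil {
            return 0, err
        }
        return len(ss), nil
    }
    var ss []json.RawMessage
    if err := json.Unmarshal(resp.Data, &ss); err != nil {
        return 0, err
    }
    return len(ss), nil
}

func main() {
    n, err := countSlashings([]byte(`{"version":"electra","data":[{},{}]}`))
    fmt.Println(n, err) // 2 <nil>
}
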
func TestGetProposerSlashings(t *testing.T) {
|
||||
@@ -1142,383 +1287,350 @@ func TestGetProposerSlashings(t *testing.T) {
|
||||
assert.Equal(t, 2, len(resp.Data))
|
||||
}
|
||||
|
||||
func TestSubmitAttesterSlashing_Ok(t *testing.T) {
|
||||
func TestSubmitAttesterSlashings(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
transition.SkipSlotCache.Disable()
|
||||
defer transition.SkipSlotCache.Enable()
|
||||
|
||||
_, keys, err := util.DeterministicDepositsAndKeys(1)
|
||||
require.NoError(t, err)
|
||||
validator := &ethpbv1alpha1.Validator{
|
||||
PublicKey: keys[0].PublicKey().Marshal(),
|
||||
attestationData1 := &ethpbv1alpha1.AttestationData{
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
|
||||
},
|
||||
}
|
||||
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
|
||||
state.Validators = []*ethpbv1alpha1.Validator{validator}
|
||||
return nil
|
||||
attestationData2 := &ethpbv1alpha1.AttestationData{
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
|
||||
},
|
||||
}
|
||||
|
||||
t.Run("V1", func(t *testing.T) {
|
||||
t.Run("ok", func(t *testing.T) {
|
||||
attestationData1.Slot = 1
|
||||
attestationData2.Slot = 1
|
||||
slashing := &ethpbv1alpha1.AttesterSlashing{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData1,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData2,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
|
||||
_, keys, err := util.DeterministicDepositsAndKeys(1)
|
||||
require.NoError(t, err)
|
||||
validator := &ethpbv1alpha1.Validator{
|
||||
PublicKey: keys[0].PublicKey().Marshal(),
|
||||
}
|
||||
|
||||
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
|
||||
state.Validators = []*ethpbv1alpha1.Validator{validator}
|
||||
return nil
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
for _, att := range []*ethpbv1alpha1.IndexedAttestation{slashing.Attestation_1, slashing.Attestation_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(bs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
att.Signature = sig.Marshal()
|
||||
}
|
||||
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
toSubmit := structs.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashings(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashing)
|
||||
assert.Equal(t, true, ok)
|
||||
})
|
||||
t.Run("accross-fork", func(t *testing.T) {
|
||||
attestationData1.Slot = params.BeaconConfig().SlotsPerEpoch
|
||||
attestationData2.Slot = params.BeaconConfig().SlotsPerEpoch
|
||||
slashing := &ethpbv1alpha1.AttesterSlashing{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData1,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData2,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.AltairForkEpoch = 1
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
bs, keys := util.DeterministicGenesisState(t, 1)
|
||||
newBs := bs.Copy()
|
||||
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
for _, att := range []*ethpbv1alpha1.IndexedAttestation{slashing.Attestation_1, slashing.Attestation_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(newBs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
att.Signature = sig.Marshal()
|
||||
}
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
toSubmit := structs.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashings(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashing)
|
||||
assert.Equal(t, true, ok)
|
||||
})
|
||||
t.Run("invalid-slashing", func(t *testing.T) {
|
||||
bs, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err = body.WriteString(invalidAttesterSlashing)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashings(writer, request)
|
||||
require.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.StringContains(t, "Invalid attester slashing", e.Message)
|
||||
})
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
slashing := &ethpbv1alpha1.AttesterSlashing{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: 1,
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
|
||||
t.Run("V2", func(t *testing.T) {
|
||||
t.Run("ok", func(t *testing.T) {
|
||||
attestationData1.Slot = 1
|
||||
attestationData2.Slot = 1
|
||||
electraSlashing := &ethpbv1alpha1.AttesterSlashingElectra{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData1,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData2,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: 1,
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
|
||||
}
|
||||
|
||||
_, keys, err := util.DeterministicDepositsAndKeys(1)
|
||||
require.NoError(t, err)
|
||||
validator := &ethpbv1alpha1.Validator{
|
||||
PublicKey: keys[0].PublicKey().Marshal(),
|
||||
}
|
||||
|
||||
ebs, err := util.NewBeaconStateElectra(func(state *ethpbv1alpha1.BeaconStateElectra) error {
|
||||
state.Validators = []*ethpbv1alpha1.Validator{validator}
|
||||
return nil
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
for _, att := range []*ethpbv1alpha1.IndexedAttestationElectra{electraSlashing.Attestation_1, electraSlashing.Attestation_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(ebs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
att.Signature = sig.Marshal()
|
||||
}
|
||||
|
||||
chainmock := &blockchainmock.ChainService{State: ebs}
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
toSubmit := structs.AttesterSlashingsElectraFromConsensus([]*ethpbv1alpha1.AttesterSlashingElectra{electraSlashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_electras", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashingsV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, ebs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
assert.DeepEqual(t, electraSlashing, pendingSlashings[0])
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashingElectra)
|
||||
assert.Equal(t, true, ok)
|
||||
})
|
||||
t.Run("accross-fork", func(t *testing.T) {
|
||||
attestationData1.Slot = params.BeaconConfig().SlotsPerEpoch
|
||||
attestationData2.Slot = params.BeaconConfig().SlotsPerEpoch
|
||||
slashing := &ethpbv1alpha1.AttesterSlashingElectra{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData1,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestationElectra{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: attestationData2,
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
for _, att := range []*ethpbv1alpha1.IndexedAttestation{slashing.Attestation_1, slashing.Attestation_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(bs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
att.Signature = sig.Marshal()
|
||||
}
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.AltairForkEpoch = 1
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
bs, keys := util.DeterministicGenesisState(t, 1)
|
||||
newBs := bs.Copy()
|
||||
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
toSubmit := structs.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
for _, att := range []*ethpbv1alpha1.IndexedAttestationElectra{slashing.Attestation_1, slashing.Attestation_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(newBs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
att.Signature = sig.Marshal()
|
||||
}
|
||||
|
||||
s.SubmitAttesterSlashing(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashing)
|
||||
assert.Equal(t, true, ok)
|
||||
}
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
func TestSubmitAttesterSlashing_AcrossFork(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
toSubmit := structs.AttesterSlashingsElectraFromConsensus([]*ethpbv1alpha1.AttesterSlashingElectra{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
transition.SkipSlotCache.Disable()
|
||||
defer transition.SkipSlotCache.Enable()
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.AltairForkEpoch = 1
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
bs, keys := util.DeterministicGenesisState(t, 1)
|
||||
|
||||
slashing := &ethpbv1alpha1.AttesterSlashing{
|
||||
Attestation_1: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: params.BeaconConfig().SlotsPerEpoch,
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
|
||||
},
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Attestation_2: &ethpbv1alpha1.IndexedAttestation{
|
||||
AttestingIndices: []uint64{0},
|
||||
Data: &ethpbv1alpha1.AttestationData{
|
||||
Slot: params.BeaconConfig().SlotsPerEpoch,
|
||||
CommitteeIndex: 1,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
|
||||
Source: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 1,
|
||||
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
|
||||
},
|
||||
Target: &ethpbv1alpha1.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
|
||||
},
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
|
||||
newBs := bs.Copy()
|
||||
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
for _, att := range []*ethpbv1alpha1.IndexedAttestation{slashing.Attestation_1, slashing.Attestation_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(newBs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
att.Signature = sig.Marshal()
|
||||
}
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
toSubmit := structs.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashing(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashing)
|
||||
assert.Equal(t, true, ok)
|
||||
}
|
||||
|
||||
func TestSubmitAttesterSlashing_InvalidSlashing(t *testing.T) {
|
||||
bs, err := util.NewBeaconState()
|
||||
require.NoError(t, err)
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err = body.WriteString(invalidAttesterSlashing)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAttesterSlashing(writer, request)
|
||||
require.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.StringContains(t, "Invalid attester slashing", e.Message)
|
||||
}
|
||||
|
||||
func TestSubmitProposerSlashing_Ok(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
transition.SkipSlotCache.Disable()
|
||||
defer transition.SkipSlotCache.Enable()
|
||||
|
||||
_, keys, err := util.DeterministicDepositsAndKeys(1)
|
||||
require.NoError(t, err)
|
||||
validator := &ethpbv1alpha1.Validator{
|
||||
PublicKey: keys[0].PublicKey().Marshal(),
|
||||
WithdrawableEpoch: primitives.Epoch(1),
|
||||
}
|
||||
bs, err := util.NewBeaconState(func(state *ethpbv1alpha1.BeaconState) error {
|
||||
state.Validators = []*ethpbv1alpha1.Validator{validator}
|
||||
return nil
|
||||
s.SubmitAttesterSlashingsV2(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashingElectra)
|
||||
assert.Equal(t, true, ok)
|
||||
})
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
slashing := &ethpbv1alpha1.ProposerSlashing{
|
||||
Header_1: &ethpbv1alpha1.SignedBeaconBlockHeader{
|
||||
Header: &ethpbv1alpha1.BeaconBlockHeader{
|
||||
Slot: 1,
|
||||
ProposerIndex: 0,
|
||||
ParentRoot: bytesutil.PadTo([]byte("parentroot1"), 32),
|
||||
StateRoot: bytesutil.PadTo([]byte("stateroot1"), 32),
|
||||
BodyRoot: bytesutil.PadTo([]byte("bodyroot1"), 32),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Header_2: &ethpbv1alpha1.SignedBeaconBlockHeader{
|
||||
Header: &ethpbv1alpha1.BeaconBlockHeader{
|
||||
Slot: 1,
|
||||
ProposerIndex: 0,
|
||||
ParentRoot: bytesutil.PadTo([]byte("parentroot2"), 32),
|
||||
StateRoot: bytesutil.PadTo([]byte("stateroot2"), 32),
|
||||
BodyRoot: bytesutil.PadTo([]byte("bodyroot2"), 32),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
|
||||
for _, h := range []*ethpbv1alpha1.SignedBeaconBlockHeader{slashing.Header_1, slashing.Header_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(
|
||||
bs,
|
||||
slots.ToEpoch(h.Header.Slot),
|
||||
h.Header,
|
||||
params.BeaconConfig().DomainBeaconProposer,
|
||||
keys[0],
|
||||
)
|
||||
t.Run("invalid-slashing", func(t *testing.T) {
|
||||
bs, err := util.NewBeaconStateElectra()
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err = body.WriteString(invalidAttesterSlashing)
|
||||
require.NoError(t, err)
|
||||
h.Signature = sig.Marshal()
|
||||
}
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
toSubmit := structs.ProposerSlashingsFromConsensus([]*ethpbv1alpha1.ProposerSlashing{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/proposer_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitProposerSlashing(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingProposerSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.ProposerSlashing)
|
||||
assert.Equal(t, true, ok)
|
||||
}
|
||||
|
||||
func TestSubmitProposerSlashing_AcrossFork(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
transition.SkipSlotCache.Disable()
|
||||
defer transition.SkipSlotCache.Enable()
|
||||
|
||||
params.SetupTestConfigCleanup(t)
|
||||
config := params.BeaconConfig()
|
||||
config.AltairForkEpoch = 1
|
||||
params.OverrideBeaconConfig(config)
|
||||
|
||||
bs, keys := util.DeterministicGenesisState(t, 1)
|
||||
|
||||
slashing := &ethpbv1alpha1.ProposerSlashing{
|
||||
Header_1: &ethpbv1alpha1.SignedBeaconBlockHeader{
|
||||
Header: &ethpbv1alpha1.BeaconBlockHeader{
|
||||
Slot: params.BeaconConfig().SlotsPerEpoch,
|
||||
ProposerIndex: 0,
|
||||
ParentRoot: bytesutil.PadTo([]byte("parentroot1"), 32),
|
||||
StateRoot: bytesutil.PadTo([]byte("stateroot1"), 32),
|
||||
BodyRoot: bytesutil.PadTo([]byte("bodyroot1"), 32),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
Header_2: &ethpbv1alpha1.SignedBeaconBlockHeader{
|
||||
Header: &ethpbv1alpha1.BeaconBlockHeader{
|
||||
Slot: params.BeaconConfig().SlotsPerEpoch,
|
||||
ProposerIndex: 0,
|
||||
ParentRoot: bytesutil.PadTo([]byte("parentroot2"), 32),
|
||||
StateRoot: bytesutil.PadTo([]byte("stateroot2"), 32),
|
||||
BodyRoot: bytesutil.PadTo([]byte("bodyroot2"), 32),
|
||||
},
|
||||
Signature: make([]byte, 96),
|
||||
},
|
||||
}
|
||||
|
||||
newBs := bs.Copy()
|
||||
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch)
|
||||
require.NoError(t, err)
|
||||
|
||||
for _, h := range []*ethpbv1alpha1.SignedBeaconBlockHeader{slashing.Header_1, slashing.Header_2} {
|
||||
sb, err := signing.ComputeDomainAndSign(
|
||||
newBs,
|
||||
slots.ToEpoch(h.Header.Slot),
|
||||
h.Header,
|
||||
params.BeaconConfig().DomainBeaconProposer,
|
||||
keys[0],
|
||||
)
|
||||
require.NoError(t, err)
|
||||
sig, err := bls.SignatureFromBytes(sb)
|
||||
require.NoError(t, err)
|
||||
h.Signature = sig.Marshal()
|
||||
}
|
||||
|
||||
broadcaster := &p2pMock.MockBroadcaster{}
|
||||
chainmock := &blockchainmock.ChainService{State: bs}
|
||||
s := &Server{
|
||||
ChainInfoFetcher: chainmock,
|
||||
SlashingsPool: &slashingsmock.PoolMock{},
|
||||
Broadcaster: broadcaster,
|
||||
OperationNotifier: chainmock.OperationNotifier(),
|
||||
}
|
||||
|
||||
toSubmit := structs.ProposerSlashingsFromConsensus([]*ethpbv1alpha1.ProposerSlashing{slashing})
|
||||
b, err := json.Marshal(toSubmit[0])
|
||||
require.NoError(t, err)
|
||||
var body bytes.Buffer
|
||||
_, err = body.Write(b)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/proposer_slashings", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitProposerSlashing(writer, request)
|
||||
require.Equal(t, http.StatusOK, writer.Code)
|
||||
pendingSlashings := s.SlashingsPool.PendingProposerSlashings(ctx, bs, true)
|
||||
require.Equal(t, 1, len(pendingSlashings))
|
||||
assert.DeepEqual(t, slashing, pendingSlashings[0])
|
||||
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
|
||||
require.Equal(t, 1, broadcaster.NumMessages())
|
||||
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.ProposerSlashing)
|
||||
assert.Equal(t, true, ok)
|
||||
s.SubmitAttesterSlashingsV2(writer, request)
|
||||
require.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.StringContains(t, "Invalid attester slashing", e.Message)
|
||||
})
|
||||
}
|
||||
|
||||
func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {
|
||||
|
||||
@@ -71,6 +71,7 @@ var (
    errSlowReader = errors.New("client failed to read fast enough to keep outgoing buffer below threshold")
    errNotRequested = errors.New("event not requested by client")
    errUnhandledEventData = errors.New("unable to represent event data in the event stream")
    errWriterUnusable = errors.New("http response writer is unusable")
)

// StreamingResponseWriter defines a type that can be used by the eventStreamer.
@@ -309,10 +310,21 @@ func (es *eventStreamer) outboxWriteLoop(ctx context.Context, cancel context.Can
    }
}

func writeLazyReaderWithRecover(w StreamingResponseWriter, lr lazyReader) (err error) {
    defer func() {
        if r := recover(); r != nil {
            log.WithField("panic", r).Error("Recovered from panic while writing event to client.")
            err = errWriterUnusable
        }
    }()
    _, err = io.Copy(w, lr())
    return err
}

func (es *eventStreamer) writeOutbox(ctx context.Context, w StreamingResponseWriter, first lazyReader) error {
    needKeepAlive := true
    if first != nil {
        if _, err := io.Copy(w, first()); err != nil {
        if err := writeLazyReaderWithRecover(w, first); err != nil {
            return err
        }
        needKeepAlive = false
@@ -325,13 +337,13 @@ func (es *eventStreamer) writeOutbox(ctx context.Context, w StreamingResponseWri
        case <-ctx.Done():
            return ctx.Err()
        case rf := <-es.outbox:
            if _, err := io.Copy(w, rf()); err != nil {
            if err := writeLazyReaderWithRecover(w, rf); err != nil {
                return err
            }
            needKeepAlive = false
        default:
            if needKeepAlive {
                if _, err := io.Copy(w, newlineReader()); err != nil {
                if err := writeLazyReaderWithRecover(w, newlineReader); err != nil {
                    return err
                }
            }

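The hunk above routes every event write through writeLazyReaderWithRecover, which turns a panic raised by the HTTP response writer into an ordinary error via a deferred recover and a named return value. A minimal, self-contained sketch of that pattern; copyWithRecover and panicWriter are illustrative names only, not part of this change:

// Sketch only, assuming nothing beyond the standard library.
package main

import (
    "errors"
    "fmt"
    "io"
    "strings"
)

var errWriterUnusable = errors.New("writer is unusable")

// copyWithRecover converts a panic from the writer into a normal error
// by combining a deferred recover with a named return value.
func copyWithRecover(w io.Writer, r io.Reader) (err error) {
    defer func() {
        if rec := recover(); rec != nil {
            err = errWriterUnusable
        }
    }()
    _, err = io.Copy(w, r)
    return err
}

type panicWriter struct{}

func (panicWriter) Write([]byte) (int, error) { panic("connection gone") }

func main() {
    fmt.Println(copyWithRecover(io.Discard, strings.NewReader("ok")))   // <nil>
    fmt.Println(copyWithRecover(panicWriter{}, strings.NewReader("x"))) // writer is unusable
}
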
@@ -85,6 +85,7 @@ go_test(
        "//encoding/bytesutil:go_default_library",
        "//network/httputil:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
        "//runtime/version:go_default_library",
        "//testing/assert:go_default_library",
        "//testing/mock:go_default_library",
        "//testing/require:go_default_library",

@@ -13,6 +13,7 @@ import (

    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/pkg/errors"
    "github.com/prysmaticlabs/prysm/v5/api"
    "github.com/prysmaticlabs/prysm/v5/api/server/structs"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
    "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
@@ -31,6 +32,7 @@ import (
    "github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
    "github.com/prysmaticlabs/prysm/v5/network/httputil"
    ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
    "github.com/prysmaticlabs/prysm/v5/runtime/version"
    "github.com/prysmaticlabs/prysm/v5/time/slots"
    "github.com/sirupsen/logrus"
    "google.golang.org/grpc/codes"
@@ -118,30 +120,34 @@ func (s *Server) SubmitContributionAndProofs(w http.ResponseWriter, r *http.Requ
    ctx, span := trace.StartSpan(r.Context(), "validator.SubmitContributionAndProofs")
    defer span.End()

    var req structs.SubmitContributionAndProofsRequest
    err := json.NewDecoder(r.Body).Decode(&req.Data)
    switch {
    case errors.Is(err, io.EOF):
        httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
        return
    case err != nil:
        httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
    var reqData []json.RawMessage
    if err := json.NewDecoder(r.Body).Decode(&reqData); err != nil {
        if errors.Is(err, io.EOF) {
            httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
        } else {
            httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
        }
        return
    }
    if len(req.Data) == 0 {
    if len(reqData) == 0 {
        httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
        return
    }

    for _, item := range req.Data {
        consensusItem, err := item.ToConsensus()
        if err != nil {
            httputil.HandleError(w, "Could not convert request contribution to consensus contribution: "+err.Error(), http.StatusBadRequest)
    for _, item := range reqData {
        var contribution structs.SignedContributionAndProof
        if err := json.Unmarshal(item, &contribution); err != nil {
            httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
            return
        }
        rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem)
        if rpcError != nil {
        consensusItem, err := contribution.ToConsensus()
        if err != nil {
            httputil.HandleError(w, "Could not convert contribution to consensus format: "+err.Error(), http.StatusBadRequest)
            return
        }
        if rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem); rpcError != nil {
            httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
            return
        }
    }
}

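The rewritten handler above decodes the request body as a slice of json.RawMessage and then unmarshals each element on its own, so a single malformed item produces a targeted 400 error rather than failing the whole batch decode. A small, runnable sketch of the same decode-per-item pattern; the contribution type and decodeItems helper are illustrative, not part of the diff:

// Sketch only, standard library JSON handling.
package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

type contribution struct {
    Slot string `json:"slot"`
}

// decodeItems reads the body as raw JSON elements first, then decodes
// each element individually so errors can name the offending item.
func decodeItems(body string) ([]contribution, error) {
    var raw []json.RawMessage
    if err := json.NewDecoder(strings.NewReader(body)).Decode(&raw); err != nil {
        return nil, fmt.Errorf("could not decode request body: %w", err)
    }
    if len(raw) == 0 {
        return nil, fmt.Errorf("no data submitted")
    }
    out := make([]contribution, 0, len(raw))
    for i, item := range raw {
        var c contribution
        if err := json.Unmarshal(item, &c); err != nil {
            return nil, fmt.Errorf("could not decode item %d: %w", i, err)
        }
        out = append(out, c)
    }
    return out, nil
}

func main() {
    items, err := decodeItems(`[{"slot":"1"},{"slot":"2"}]`)
    fmt.Println(items, err) // [{1} {2}] <nil>
}
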
@@ -168,7 +174,13 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request

    broadcastFailed := false
    for _, item := range req.Data {
        consensusItem, err := item.ToConsensus()
        var signedAggregate structs.SignedAggregateAttestationAndProof
        err := json.Unmarshal(item, &signedAggregate)
        if err != nil {
            httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
            return
        }
        consensusItem, err := signedAggregate.ToConsensus()
        if err != nil {
            httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
            return
@@ -191,6 +203,81 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request
    }
}

// SubmitAggregateAndProofsV2 verifies given aggregate and proofs and publishes them on appropriate gossipsub topic.
func (s *Server) SubmitAggregateAndProofsV2(w http.ResponseWriter, r *http.Request) {
    ctx, span := trace.StartSpan(r.Context(), "validator.SubmitAggregateAndProofsV2")
    defer span.End()

    var reqData []json.RawMessage
    if err := json.NewDecoder(r.Body).Decode(&reqData); err != nil {
        if errors.Is(err, io.EOF) {
            httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
        } else {
            httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
        }
        return
    }
    if len(reqData) == 0 {
        httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
        return
    }

    versionHeader := r.Header.Get(api.VersionHeader)
    if versionHeader == "" {
        httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
    }
    v, err := version.FromString(versionHeader)
    if err != nil {
        httputil.HandleError(w, "Invalid version: "+err.Error(), http.StatusBadRequest)
        return
    }

    broadcastFailed := false
    var rpcError *core.RpcError
    for _, raw := range reqData {
        if v >= version.Electra {
            var signedAggregate structs.SignedAggregateAttestationAndProofElectra
            err = json.Unmarshal(raw, &signedAggregate)
            if err != nil {
                httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
                return
            }
            consensusItem, err := signedAggregate.ToConsensus()
            if err != nil {
                httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
                return
            }
            rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
        } else {
            var signedAggregate structs.SignedAggregateAttestationAndProof
            err = json.Unmarshal(raw, &signedAggregate)
            if err != nil {
                httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
                return
            }
            consensusItem, err := signedAggregate.ToConsensus()
            if err != nil {
                httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
                return
            }
            rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
        }

        if rpcError != nil {
            var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
            if errors.As(rpcError.Err, &aggregateBroadcastFailedError) {
                broadcastFailed = true
            } else {
                httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
                return
            }
        }
    }
    if broadcastFailed {
        httputil.HandleError(w, "Could not broadcast one or more signed aggregated attestations", http.StatusInternalServerError)
    }
}

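SubmitAggregateAndProofsV2 above selects the payload type from the consensus-version request header instead of from the URL. A rough client-side sketch of how such a call might look; the path /eth/v2/validator/aggregate_and_proofs and the literal header name Eth-Consensus-Version follow the standardized beacon-API convention and are assumptions of this sketch, not values taken from the diff:

// Sketch only: endpoint path and header name are assumed, not confirmed here.
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

// submitAggregatesV2 posts a JSON array of signed aggregates and conveys
// the fork via the consensus-version header, mirroring the handler above.
func submitAggregatesV2(beaconURL string, payload []byte, fork string) error {
    req, err := http.NewRequest(http.MethodPost,
        beaconURL+"/eth/v2/validator/aggregate_and_proofs", bytes.NewReader(payload))
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", "application/json")
    // The handler reads the fork from this header (api.VersionHeader in the diff).
    req.Header.Set("Eth-Consensus-Version", fork)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("unexpected status: %s", resp.Status)
    }
    return nil
}

func main() {
    // Example call for an Electra-era aggregate (payload elided).
    _ = submitAggregatesV2("http://localhost:3500", []byte(`[]`), "electra")
}
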
// SubmitSyncCommitteeSubscription subscribe to a number of sync committee subnets.
|
||||
//
|
||||
// Subscribing to sync committee subnets is an action performed by VC to enable
|
||||
|
||||
@@ -14,6 +14,7 @@ import (
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prysmaticlabs/prysm/v5/api"
|
||||
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
|
||||
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
|
||||
builderTest "github.com/prysmaticlabs/prysm/v5/beacon-chain/builder/testing"
|
||||
@@ -37,6 +38,7 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
"github.com/prysmaticlabs/prysm/v5/network/httputil"
|
||||
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/runtime/version"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/util"
|
||||
@@ -427,85 +429,238 @@ func TestSubmitContributionAndProofs(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestSubmitAggregateAndProofs(t *testing.T) {
|
||||
c := &core.Service{
|
||||
GenesisTimeFetcher: &mockChain.ChainService{},
|
||||
}
|
||||
|
||||
s := &Server{
|
||||
CoreService: c,
|
||||
CoreService: &core.Service{GenesisTimeFetcher: &mockChain.ChainService{}},
|
||||
}
|
||||
t.Run("V1", func(t *testing.T) {
|
||||
t.Run("single", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
s.CoreService.Broadcaster = broadcaster
|
||||
|
||||
t.Run("single", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
c.Broadcaster = broadcaster
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(singleAggregate)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(singleAggregate)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("multiple", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
s.CoreService.Broadcaster = broadcaster
|
||||
s.CoreService.SyncCommitteePool = synccommittee.NewStore()
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(multipleAggregates)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("no body", func(t *testing.T) {
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("empty", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString("[]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("invalid", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(invalidAggregate)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
})
|
||||
})
|
||||
t.Run("multiple", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
c.Broadcaster = broadcaster
|
||||
c.SyncCommitteePool = synccommittee.NewStore()
|
||||
t.Run("V2", func(t *testing.T) {
|
||||
t.Run("single", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
s.CoreService.Broadcaster = broadcaster
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(multipleAggregates)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(singleAggregateElectra)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("no body", func(t *testing.T) {
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("single-pre-electra", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
s.CoreService.Broadcaster = broadcaster
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("empty", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString("[]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(singleAggregate)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("invalid", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(invalidAggregate)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 1, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("multiple", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
s.CoreService.Broadcaster = broadcaster
|
||||
s.CoreService.SyncCommitteePool = synccommittee.NewStore()
|
||||
|
||||
s.SubmitAggregateAndProofs(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(multipleAggregatesElectra)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("multiple-pre-electra", func(t *testing.T) {
|
||||
broadcaster := &p2pmock.MockBroadcaster{}
|
||||
s.CoreService.Broadcaster = broadcaster
|
||||
s.CoreService.SyncCommitteePool = synccommittee.NewStore()
|
||||
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(multipleAggregates)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusOK, writer.Code)
|
||||
assert.Equal(t, 2, len(broadcaster.BroadcastMessages))
|
||||
})
|
||||
t.Run("no body", func(t *testing.T) {
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("no body-pre-electra", func(t *testing.T) {
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("empty", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString("[]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("empty-pre-electra", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString("[]")
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Altair))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
|
||||
})
|
||||
t.Run("invalid", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(invalidAggregateElectra)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Electra))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
})
|
||||
t.Run("invalid-pre-electra", func(t *testing.T) {
|
||||
var body bytes.Buffer
|
||||
_, err := body.WriteString(invalidAggregate)
|
||||
require.NoError(t, err)
|
||||
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
|
||||
request.Header.Set(api.VersionHeader, version.String(version.Deneb))
|
||||
writer := httptest.NewRecorder()
|
||||
writer.Body = &bytes.Buffer{}
|
||||
|
||||
s.SubmitAggregateAndProofsV2(writer, request)
|
||||
assert.Equal(t, http.StatusBadRequest, writer.Code)
|
||||
e := &httputil.DefaultJsonError{}
|
||||
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
|
||||
assert.Equal(t, http.StatusBadRequest, e.Code)
|
||||
})
|
||||
})
|
||||
}
|
||||
|
||||
@@ -2766,6 +2921,115 @@ var (
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
}
|
||||
]`
|
||||
|
||||
singleAggregateElectra = `[
|
||||
{
|
||||
"message": {
|
||||
"aggregator_index": "1",
|
||||
"aggregate": {
|
||||
"aggregation_bits": "0x01",
|
||||
"committee_bits": "0x01",
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
|
||||
"data": {
|
||||
"slot": "1",
|
||||
"index": "1",
|
||||
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
|
||||
"source": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
},
|
||||
"target": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
}
|
||||
}
|
||||
},
|
||||
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
},
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
}
|
||||
]`
|
||||
multipleAggregatesElectra = `[
|
||||
{
|
||||
"message": {
|
||||
"aggregator_index": "1",
|
||||
"aggregate": {
|
||||
"aggregation_bits": "0x01",
|
||||
"committee_bits": "0x01",
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
|
||||
"data": {
|
||||
"slot": "1",
|
||||
"index": "1",
|
||||
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
|
||||
"source": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
},
|
||||
"target": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
}
|
||||
}
|
||||
},
|
||||
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
},
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
},
|
||||
{
|
||||
"message": {
|
||||
"aggregator_index": "1",
|
||||
"aggregate": {
|
||||
"aggregation_bits": "0x01",
|
||||
"committee_bits": "0x01",
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
|
||||
"data": {
|
||||
"slot": "1",
|
||||
"index": "1",
|
||||
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
|
||||
"source": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
},
|
||||
"target": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
}
|
||||
}
|
||||
},
|
||||
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
},
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
}
|
||||
]
|
||||
`
|
||||
// aggregator_index is invalid
|
||||
invalidAggregateElectra = `[
|
||||
{
|
||||
"message": {
|
||||
"aggregator_index": "foo",
|
||||
"aggregate": {
|
||||
"aggregation_bits": "0x01",
|
||||
"committee_bits": "0x01",
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505",
|
||||
"data": {
|
||||
"slot": "1",
|
||||
"index": "1",
|
||||
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
|
||||
"source": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
},
|
||||
"target": {
|
||||
"epoch": "1",
|
||||
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
|
||||
}
|
||||
}
|
||||
},
|
||||
"selection_proof": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
},
|
||||
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
|
||||
}
|
||||
]`
|
||||
singleSyncCommitteeSubscription = `[
|
||||
{
|
||||
"validator_index": "1",
|
||||
|
||||
@@ -404,7 +404,7 @@ func (vs *Server) PrepareBeaconProposer(
|
||||
_ context.Context, request *ethpb.PrepareBeaconProposerRequest,
|
||||
) (*emptypb.Empty, error) {
|
||||
var validatorIndices []primitives.ValidatorIndex
|
||||
|
||||
beforeCacheSize := vs.TrackedValidatorsCache.Size()
|
||||
for _, r := range request.Recipients {
|
||||
recipient := hexutil.Encode(r.FeeRecipient)
|
||||
if !common.IsHexAddress(recipient) {
|
||||
@@ -428,7 +428,9 @@ func (vs *Server) PrepareBeaconProposer(
|
||||
}
|
||||
if len(validatorIndices) != 0 {
|
||||
log.WithFields(logrus.Fields{
|
||||
"validatorCount": len(validatorIndices),
|
||||
"previousTotalTrackedValidators": beforeCacheSize,
|
||||
"totalTrackedValidators": vs.TrackedValidatorsCache.Size(),
|
||||
}).Debug("Updated fee recipient addresses for validator indices")
|
||||
}
|
||||
return &emptypb.Empty{}, nil
|
||||
|
||||
@@ -77,6 +77,7 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
|
||||
log.WithError(err).Warn("Proposer: failed to retrieve header from BuilderBid")
|
||||
return local.Bid, local.BlobsBundle, setLocalExecution(blk, local)
|
||||
}
|
||||
// TODO: add builder execution requests here.
|
||||
if bid.Version() >= version.Deneb {
|
||||
builderKzgCommitments, err = bid.BlobKzgCommitments()
|
||||
if err != nil {
|
||||
@@ -353,12 +354,19 @@ func setLocalExecution(blk interfaces.SignedBeaconBlock, local *blocks.GetPayloa
|
||||
if local.BlobsBundle != nil {
|
||||
kzgCommitments = local.BlobsBundle.KzgCommitments
|
||||
}
|
||||
if local.ExecutionRequests != nil {
|
||||
if err := blk.SetExecutionRequests(local.ExecutionRequests); err != nil {
|
||||
return errors.Wrap(err, "could not set execution requests")
|
||||
}
|
||||
}
|
||||
|
||||
return setExecution(blk, local.ExecutionData, false, kzgCommitments)
|
||||
}
|
||||
|
||||
// setBuilderExecution sets the execution context for a builder's beacon block.
|
||||
// It delegates to setExecution for the actual work.
|
||||
func setBuilderExecution(blk interfaces.SignedBeaconBlock, execution interfaces.ExecutionData, builderKzgCommitments [][]byte) error {
|
||||
// TODO #14344: add execution requests for electra
|
||||
return setExecution(blk, execution, true, builderKzgCommitments)
|
||||
}
|
||||
|
||||
|
||||
@@ -105,7 +105,6 @@ type WriteOnlyBeaconState interface {
|
||||
AppendHistoricalRoots(root [32]byte) error
|
||||
AppendHistoricalSummaries(*ethpb.HistoricalSummary) error
|
||||
SetLatestExecutionPayloadHeader(payload interfaces.ExecutionData) error
|
||||
SaveValidatorIndices()
|
||||
}
|
||||
|
||||
// ReadOnlyValidator defines a struct which only has read access to validator methods.
|
||||
@@ -141,7 +140,6 @@ type ReadOnlyBalances interface {
|
||||
Balances() []uint64
|
||||
BalanceAtIndex(idx primitives.ValidatorIndex) (uint64, error)
|
||||
BalancesLength() int
|
||||
ActiveBalanceAtIndex(idx primitives.ValidatorIndex) (uint64, error)
|
||||
}
|
||||
|
||||
// ReadOnlyCheckpoint defines a struct which only has read access to checkpoint methods.
|
||||
|
||||
@@ -46,7 +46,6 @@ go_library(
|
||||
"ssz.go",
|
||||
"state_trie.go",
|
||||
"types.go",
|
||||
"validator_index_cache.go",
|
||||
],
|
||||
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native",
|
||||
visibility = ["//visibility:public"],
|
||||
@@ -121,7 +120,6 @@ go_test(
|
||||
"state_test.go",
|
||||
"state_trie_test.go",
|
||||
"types_test.go",
|
||||
"validator_index_cache_test.go",
|
||||
],
|
||||
data = glob(["testdata/**"]) + [
|
||||
"@consensus_spec_tests_mainnet//:test_data",
|
||||
@@ -137,7 +135,6 @@ go_test(
|
||||
"//config/features:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//consensus-types:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//consensus-types/interfaces:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
|
||||
@@ -76,7 +76,6 @@ type BeaconState struct {
|
||||
stateFieldLeaves map[types.FieldIndex]*fieldtrie.FieldTrie
|
||||
rebuildTrie map[types.FieldIndex]bool
|
||||
valMapHandler *stateutil.ValidatorMapHandler
|
||||
validatorIndexCache *finalizedValidatorIndexCache
|
||||
merkleLayers [][][]byte
|
||||
sharedFieldReferences map[types.FieldIndex]*stateutil.Reference
|
||||
}
|
||||
|
||||
@@ -2,7 +2,6 @@ package state_native
|
||||
|
||||
import (
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
|
||||
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/features"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
@@ -151,6 +150,10 @@ func (b *BeaconState) ValidatorAtIndexReadOnly(idx primitives.ValidatorIndex) (s
|
||||
b.lock.RLock()
|
||||
defer b.lock.RUnlock()
|
||||
|
||||
return b.validatorAtIndexReadOnly(idx)
|
||||
}
|
||||
|
||||
func (b *BeaconState) validatorAtIndexReadOnly(idx primitives.ValidatorIndex) (state.ReadOnlyValidator, error) {
|
||||
if features.Get().EnableExperimentalState {
|
||||
if b.validatorsMultiValue == nil {
|
||||
return nil, state.ErrNilValidatorsInState
|
||||
@@ -180,10 +183,6 @@ func (b *BeaconState) ValidatorIndexByPubkey(key [fieldparams.BLSPubkeyLength]by
|
||||
b.lock.RLock()
|
||||
defer b.lock.RUnlock()
|
||||
|
||||
if b.Version() >= version.Electra {
|
||||
return b.getValidatorIndex(key)
|
||||
}
|
||||
|
||||
var numOfVals int
|
||||
if features.Get().EnableExperimentalState {
|
||||
numOfVals = b.validatorsMultiValue.Len(b)
|
||||
@@ -446,34 +445,6 @@ func (b *BeaconState) inactivityScoresVal() []uint64 {
|
||||
return res
|
||||
}
|
||||
|
||||
// ActiveBalanceAtIndex returns the active balance for the given validator.
|
||||
//
|
||||
// Spec definition:
|
||||
//
|
||||
// def get_active_balance(state: BeaconState, validator_index: ValidatorIndex) -> Gwei:
|
||||
// max_effective_balance = get_validator_max_effective_balance(state.validators[validator_index])
|
||||
// return min(state.balances[validator_index], max_effective_balance)
|
||||
func (b *BeaconState) ActiveBalanceAtIndex(i primitives.ValidatorIndex) (uint64, error) {
|
||||
if b.version < version.Electra {
|
||||
return 0, errNotSupported("ActiveBalanceAtIndex", b.version)
|
||||
}
|
||||
|
||||
b.lock.RLock()
|
||||
defer b.lock.RUnlock()
|
||||
|
||||
v, err := b.validatorAtIndex(i)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
bal, err := b.balanceAtIndex(i)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
return min(bal, helpers.ValidatorMaxEffectiveBalance(v)), nil
|
||||
}
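The spec formula quoted above reduces to taking the minimum of the validator's current balance and a per-validator ceiling that depends on the withdrawal credential prefix. The sketch below restates that logic in isolation; the constant values and the 0x02 compounding prefix are assumptions taken from the Electra spec, not read from Prysm's config package.

```go
package main

import "fmt"

// Assumed Electra constants in Gwei; in Prysm these live in params.BeaconConfig().
const (
	minActivationBalance       = 32_000_000_000
	maxEffectiveBalanceElectra = 2_048_000_000_000
	compoundingPrefix          = byte(0x02) // assumed compounding withdrawal prefix
)

// maxEffectiveBalance mirrors get_validator_max_effective_balance.
func maxEffectiveBalance(withdrawalCredentials []byte) uint64 {
	if len(withdrawalCredentials) > 0 && withdrawalCredentials[0] == compoundingPrefix {
		return maxEffectiveBalanceElectra
	}
	return minActivationBalance
}

// activeBalance mirrors get_active_balance from the spec comment above.
func activeBalance(balance uint64, withdrawalCredentials []byte) uint64 {
	return min(balance, maxEffectiveBalance(withdrawalCredentials))
}

func main() {
	fmt.Println(activeBalance(55, []byte{0x02}))    // 55: below the ceiling
	fmt.Println(activeBalance(1<<63, []byte{0x00})) // capped at 32_000_000_000 for BLS credentials
}
```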
|
||||
|
||||
// PendingBalanceToWithdraw returns the sum of all pending withdrawals for the given validator.
|
||||
//
|
||||
// Spec definition:
|
||||
|
||||
@@ -1,15 +1,12 @@
|
||||
package state_native_test
|
||||
|
||||
import (
|
||||
"math"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
|
||||
statenative "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
|
||||
testtmpl "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/testing"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
|
||||
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
@@ -72,60 +69,6 @@ func TestValidatorIndexes(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestActiveBalanceAtIndex(t *testing.T) {
|
||||
// Test setup with a state with 4 validators.
|
||||
// Validators 0 & 1 have compounding withdrawal credentials while validators 2 & 3 have BLS withdrawal credentials.
|
||||
pb := ðpb.BeaconStateElectra{
|
||||
Validators: []*ethpb.Validator{
|
||||
{
|
||||
WithdrawalCredentials: []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte},
|
||||
},
|
||||
{
|
||||
WithdrawalCredentials: []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte},
|
||||
},
|
||||
{
|
||||
WithdrawalCredentials: []byte{params.BeaconConfig().BLSWithdrawalPrefixByte},
|
||||
},
|
||||
{
|
||||
WithdrawalCredentials: []byte{params.BeaconConfig().BLSWithdrawalPrefixByte},
|
||||
},
|
||||
},
|
||||
Balances: []uint64{
|
||||
55,
|
||||
math.MaxUint64,
|
||||
55,
|
||||
math.MaxUint64,
|
||||
},
|
||||
}
|
||||
state, err := statenative.InitializeFromProtoUnsafeElectra(pb)
|
||||
require.NoError(t, err)
|
||||
|
||||
ab, err := state.ActiveBalanceAtIndex(0)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, uint64(55), ab)
|
||||
|
||||
ab, err = state.ActiveBalanceAtIndex(1)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, params.BeaconConfig().MaxEffectiveBalanceElectra, ab)
|
||||
|
||||
ab, err = state.ActiveBalanceAtIndex(2)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, uint64(55), ab)
|
||||
|
||||
ab, err = state.ActiveBalanceAtIndex(3)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, params.BeaconConfig().MinActivationBalance, ab)
|
||||
|
||||
// Accessing a validator index out of bounds should error.
|
||||
_, err = state.ActiveBalanceAtIndex(4)
|
||||
require.ErrorIs(t, err, consensus_types.ErrOutOfBounds)
|
||||
|
||||
// Accessing a validator where the balance slice is out of bounds should also error.
|
||||
require.NoError(t, state.SetBalances([]uint64{}))
|
||||
_, err = state.ActiveBalanceAtIndex(0)
|
||||
require.ErrorIs(t, err, consensus_types.ErrOutOfBounds)
|
||||
}
|
||||
|
||||
func TestPendingBalanceToWithdraw(t *testing.T) {
|
||||
pb := ðpb.BeaconStateElectra{
|
||||
PendingPartialWithdrawals: []*ethpb.PendingPartialWithdrawal{
|
||||
|
||||
@@ -120,7 +120,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
|
||||
break
|
||||
}
|
||||
|
||||
v, err := b.validatorAtIndex(w.Index)
|
||||
v, err := b.validatorAtIndexReadOnly(w.Index)
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("failed to determine withdrawals at index %d: %w", w.Index, err)
|
||||
}
|
||||
@@ -128,14 +128,14 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
|
||||
if err != nil {
|
||||
return nil, 0, fmt.Errorf("could not retrieve balance at index %d: %w", w.Index, err)
|
||||
}
|
||||
hasSufficientEffectiveBalance := v.EffectiveBalance >= params.BeaconConfig().MinActivationBalance
|
||||
hasSufficientEffectiveBalance := v.EffectiveBalance() >= params.BeaconConfig().MinActivationBalance
|
||||
hasExcessBalance := vBal > params.BeaconConfig().MinActivationBalance
|
||||
if v.ExitEpoch == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
|
||||
if v.ExitEpoch() == params.BeaconConfig().FarFutureEpoch && hasSufficientEffectiveBalance && hasExcessBalance {
|
||||
amount := min(vBal-params.BeaconConfig().MinActivationBalance, w.Amount)
|
||||
withdrawals = append(withdrawals, &enginev1.Withdrawal{
|
||||
Index: withdrawalIndex,
|
||||
ValidatorIndex: w.Index,
|
||||
Address: v.WithdrawalCredentials[12:],
|
||||
Address: v.GetWithdrawalCredentials()[12:],
|
||||
Amount: amount,
|
||||
})
|
||||
withdrawalIndex++
|
||||
@@ -147,7 +147,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
|
||||
validatorsLen := b.validatorsLen()
|
||||
bound := mathutil.Min(uint64(validatorsLen), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
|
||||
for i := uint64(0); i < bound; i++ {
|
||||
val, err := b.validatorAtIndex(validatorIndex)
|
||||
val, err := b.validatorAtIndexReadOnly(validatorIndex)
|
||||
if err != nil {
|
||||
return nil, 0, errors.Wrapf(err, "could not retrieve validator at index %d", validatorIndex)
|
||||
}
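The eligibility check in the hunk above can be read as a small pure function: a pending partial withdrawal pays out only if the validator has not initiated an exit, still has the minimum activation balance as its effective balance, and holds an excess balance; the payout is capped by both the excess and the requested amount. A minimal sketch, with the constants assumed rather than read from Prysm's config:

```go
package main

import "fmt"

const (
	minActivationBalance = 32_000_000_000 // Gwei, assumed
	farFutureEpoch       = ^uint64(0)     // assumed sentinel meaning "no exit scheduled"
)

// partialWithdrawalAmount returns how much of a pending partial withdrawal
// request can actually be paid out, or 0 if the validator is ineligible.
func partialWithdrawalAmount(exitEpoch, effectiveBalance, balance, requested uint64) uint64 {
	if exitEpoch != farFutureEpoch {
		return 0 // exiting or exited validators are skipped
	}
	if effectiveBalance < minActivationBalance || balance <= minActivationBalance {
		return 0 // no excess balance to withdraw
	}
	return min(balance-minActivationBalance, requested)
}

func main() {
	fmt.Println(partialWithdrawalAmount(^uint64(0), 32_000_000_000, 33_000_000_000, 5_000_000_000)) // 1000000000
	fmt.Println(partialWithdrawalAmount(123, 32_000_000_000, 33_000_000_000, 5_000_000_000))        // 0
}
```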
|
||||
|
||||
@@ -70,11 +70,6 @@ func (v readOnlyValidator) PublicKey() [fieldparams.BLSPubkeyLength]byte {
|
||||
return pubkey
|
||||
}
|
||||
|
||||
// publicKeySlice returns the public key in the slice form for the read only validator.
|
||||
func (v readOnlyValidator) publicKeySlice() []byte {
|
||||
return v.validator.PublicKey
|
||||
}
|
||||
|
||||
// WithdrawalCredentials returns the withdrawal credentials of the
|
||||
// read only validator.
|
||||
func (v readOnlyValidator) GetWithdrawalCredentials() []byte {
|
||||
|
||||
@@ -107,22 +107,6 @@ func (b *BeaconState) SetHistoricalRoots(val [][]byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// SaveValidatorIndices saves the validator indices of the beacon state to the cache.
|
||||
func (b *BeaconState) SaveValidatorIndices() {
|
||||
if b.Version() < version.Electra {
|
||||
return
|
||||
}
|
||||
|
||||
b.lock.Lock()
|
||||
defer b.lock.Unlock()
|
||||
|
||||
if b.validatorIndexCache == nil {
|
||||
b.validatorIndexCache = newFinalizedValidatorIndexCache()
|
||||
}
|
||||
|
||||
b.saveValidatorIndices()
|
||||
}
|
||||
|
||||
// AppendHistoricalRoots for the beacon state. Appends the new value
|
||||
// to the end of the list.
|
||||
func (b *BeaconState) AppendHistoricalRoots(root [32]byte) error {
|
||||
|
||||
@@ -189,12 +189,11 @@ func InitializeFromProtoUnsafePhase0(st *ethpb.BeaconState) (state.BeaconState,
|
||||
|
||||
id: types.Enumerator.Inc(),
|
||||
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
validatorIndexCache: newFinalizedValidatorIndexCache(),
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
@@ -296,12 +295,11 @@ func InitializeFromProtoUnsafeAltair(st *ethpb.BeaconStateAltair) (state.BeaconS
|
||||
|
||||
id: types.Enumerator.Inc(),
|
||||
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
validatorIndexCache: newFinalizedValidatorIndexCache(),
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
@@ -407,12 +405,11 @@ func InitializeFromProtoUnsafeBellatrix(st *ethpb.BeaconStateBellatrix) (state.B
|
||||
|
||||
id: types.Enumerator.Inc(),
|
||||
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
validatorIndexCache: newFinalizedValidatorIndexCache(),
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
@@ -522,12 +519,11 @@ func InitializeFromProtoUnsafeCapella(st *ethpb.BeaconStateCapella) (state.Beaco
|
||||
|
||||
id: types.Enumerator.Inc(),
|
||||
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
validatorIndexCache: newFinalizedValidatorIndexCache(),
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
@@ -636,12 +632,11 @@ func InitializeFromProtoUnsafeDeneb(st *ethpb.BeaconStateDeneb) (state.BeaconSta
|
||||
nextWithdrawalValidatorIndex: st.NextWithdrawalValidatorIndex,
|
||||
historicalSummaries: st.HistoricalSummaries,
|
||||
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
validatorIndexCache: newFinalizedValidatorIndexCache(),
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
@@ -759,12 +754,11 @@ func InitializeFromProtoUnsafeElectra(st *ethpb.BeaconStateElectra) (state.Beaco
|
||||
pendingPartialWithdrawals: st.PendingPartialWithdrawals,
|
||||
pendingConsolidations: st.PendingConsolidations,
|
||||
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
validatorIndexCache: newFinalizedValidatorIndexCache(), //only used in post-electra and only populates when finalizing, otherwise it falls back to processing the full validator set
|
||||
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
|
||||
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
|
||||
valMapHandler: stateutil.NewValMapHandler(st.Validators),
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
@@ -925,8 +919,7 @@ func (b *BeaconState) Copy() state.BeaconState {
|
||||
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
|
||||
|
||||
// Share the reference to validator index map.
|
||||
valMapHandler: b.valMapHandler,
|
||||
validatorIndexCache: b.validatorIndexCache,
|
||||
valMapHandler: b.valMapHandler,
|
||||
}
|
||||
|
||||
if features.Get().EnableExperimentalState {
|
||||
|
||||
@@ -1,96 +0,0 @@
|
||||
package state_native
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"sync"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/config/features"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
)
|
||||
|
||||
// finalizedValidatorIndexCache maintains a mapping from validator public keys to their indices within the beacon state.
|
||||
// It tracks how far indices have been populated via the size of its index map,
|
||||
// and uses a mutex for concurrent read/write access to the cache.
|
||||
type finalizedValidatorIndexCache struct {
|
||||
indexMap map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex // Maps finalized BLS public keys to validator indices.
|
||||
sync.RWMutex
|
||||
}
|
||||
|
||||
// newFinalizedValidatorIndexCache initializes a new validator index cache with an empty index map.
|
||||
func newFinalizedValidatorIndexCache() *finalizedValidatorIndexCache {
|
||||
return &finalizedValidatorIndexCache{
|
||||
indexMap: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
}
|
||||
}
|
||||
|
||||
// getValidatorIndex retrieves the validator index for a given public key from the cache.
|
||||
// If the public key is not found in the cache, it searches through the state starting from the last finalized index.
|
||||
func (b *BeaconState) getValidatorIndex(pubKey [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
|
||||
b.validatorIndexCache.RLock()
|
||||
defer b.validatorIndexCache.RUnlock()
|
||||
index, found := b.validatorIndexCache.indexMap[pubKey]
|
||||
if found {
|
||||
return index, true
|
||||
}
|
||||
|
||||
validatorCount := len(b.validatorIndexCache.indexMap)
|
||||
vals := b.validatorsReadOnlySinceIndex(validatorCount)
|
||||
for i, val := range vals {
|
||||
if bytes.Equal(bytesutil.PadTo(val.publicKeySlice(), 48), pubKey[:]) {
|
||||
index := primitives.ValidatorIndex(validatorCount + i)
|
||||
return index, true
|
||||
}
|
||||
}
|
||||
return 0, false
|
||||
}
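The lookup above (from the cache file this commit removes) follows a common append-only pattern: consult the map first, and fall back to scanning only the entries added since the map was last populated. A generic sketch of that pattern, using hypothetical names rather than Prysm's types:

```go
package main

import (
	"fmt"
	"sync"
)

// appendOnlyIndex is a toy stand-in for the removed finalizedValidatorIndexCache.
type appendOnlyIndex struct {
	mu  sync.RWMutex
	idx map[string]int // key -> position, populated up to some finalized point
}

// lookup returns the position of key, checking the cache first and then only
// the tail of `all` that has not been cached yet.
func (c *appendOnlyIndex) lookup(key string, all []string) (int, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if i, ok := c.idx[key]; ok {
		return i, true
	}
	for i := len(c.idx); i < len(all); i++ { // scan only the uncached tail
		if all[i] == key {
			return i, true
		}
	}
	return 0, false
}

func main() {
	c := &appendOnlyIndex{idx: map[string]int{"a": 0, "b": 1}}
	fmt.Println(c.lookup("b", []string{"a", "b", "c"})) // 1 true (cache hit)
	fmt.Println(c.lookup("c", []string{"a", "b", "c"})) // 2 true (tail scan)
}
```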
|
||||
|
||||
// saveValidatorIndices updates the validator index cache with new indices.
|
||||
// It processes validator indices starting after the last finalized index and updates the tracker.
|
||||
func (b *BeaconState) saveValidatorIndices() {
|
||||
b.validatorIndexCache.Lock()
|
||||
defer b.validatorIndexCache.Unlock()
|
||||
|
||||
validatorCount := len(b.validatorIndexCache.indexMap)
|
||||
vals := b.validatorsReadOnlySinceIndex(validatorCount)
|
||||
for i, val := range vals {
|
||||
b.validatorIndexCache.indexMap[val.PublicKey()] = primitives.ValidatorIndex(validatorCount + i)
|
||||
}
|
||||
}
|
||||
|
||||
// validatorsReadOnlySinceIndex constructs a list of read only validator references after a specified index.
|
||||
// The indices in the returned list correspond to their respective validator indices in the state.
|
||||
// It returns nil if the specified index is out of bounds. This function is read-only and does not use locks.
|
||||
func (b *BeaconState) validatorsReadOnlySinceIndex(index int) []readOnlyValidator {
|
||||
totalValidators := b.validatorsLen()
|
||||
if index >= totalValidators {
|
||||
return nil
|
||||
}
|
||||
|
||||
var v []*ethpb.Validator
|
||||
if features.Get().EnableExperimentalState {
|
||||
if b.validatorsMultiValue == nil {
|
||||
return nil
|
||||
}
|
||||
v = b.validatorsMultiValue.Value(b)
|
||||
} else {
|
||||
if b.validators == nil {
|
||||
return nil
|
||||
}
|
||||
v = b.validators
|
||||
}
|
||||
|
||||
result := make([]readOnlyValidator, totalValidators-index)
|
||||
for i := 0; i < len(result); i++ {
|
||||
val := v[i+index]
|
||||
if val == nil {
|
||||
continue
|
||||
}
|
||||
result[i] = readOnlyValidator{
|
||||
validator: val,
|
||||
}
|
||||
}
|
||||
return result
|
||||
}
|
||||
@@ -1,105 +0,0 @@
|
||||
package state_native
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
)
|
||||
|
||||
func Test_FinalizedValidatorIndexCache(t *testing.T) {
|
||||
c := newFinalizedValidatorIndexCache()
|
||||
b := &BeaconState{validatorIndexCache: c}
|
||||
|
||||
// Calling getValidatorIndex with a public key that is in neither the cache nor the state
// returns 0 and false.
|
||||
i, exists := b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{0})
|
||||
require.Equal(t, primitives.ValidatorIndex(0), i)
|
||||
require.Equal(t, false, exists)
|
||||
|
||||
// Validators are added to the state. They are [0, 1, 2]
|
||||
b.validators = []*ethpb.Validator{
|
||||
{PublicKey: []byte{1}},
|
||||
{PublicKey: []byte{2}},
|
||||
{PublicKey: []byte{3}},
|
||||
}
|
||||
// We should be able to retrieve these validators by public key even when they are not in the cache
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{1})
|
||||
require.Equal(t, primitives.ValidatorIndex(0), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{2})
|
||||
require.Equal(t, primitives.ValidatorIndex(1), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{3})
|
||||
require.Equal(t, primitives.ValidatorIndex(2), i)
|
||||
require.Equal(t, true, exists)
|
||||
|
||||
// State is finalized. We save [0, 1, 2] to the cache.
|
||||
b.saveValidatorIndices()
|
||||
require.Equal(t, 3, len(b.validatorIndexCache.indexMap))
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{1})
|
||||
require.Equal(t, primitives.ValidatorIndex(0), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{2})
|
||||
require.Equal(t, primitives.ValidatorIndex(1), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{3})
|
||||
require.Equal(t, primitives.ValidatorIndex(2), i)
|
||||
require.Equal(t, true, exists)
|
||||
|
||||
// New validators are added to the state. They are [4, 5]
|
||||
b.validators = []*ethpb.Validator{
|
||||
{PublicKey: []byte{1}},
|
||||
{PublicKey: []byte{2}},
|
||||
{PublicKey: []byte{3}},
|
||||
{PublicKey: []byte{4}},
|
||||
{PublicKey: []byte{5}},
|
||||
}
|
||||
// We should be able to retrieve these validators by public key even when they are not in the cache
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{4})
|
||||
require.Equal(t, primitives.ValidatorIndex(3), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{5})
|
||||
require.Equal(t, primitives.ValidatorIndex(4), i)
|
||||
require.Equal(t, true, exists)
|
||||
|
||||
// State is finalized. We save [4, 5] to the cache.
|
||||
b.saveValidatorIndices()
|
||||
require.Equal(t, 5, len(b.validatorIndexCache.indexMap))
|
||||
|
||||
// New validators are added to the state. They are [6]
|
||||
b.validators = []*ethpb.Validator{
|
||||
{PublicKey: []byte{1}},
|
||||
{PublicKey: []byte{2}},
|
||||
{PublicKey: []byte{3}},
|
||||
{PublicKey: []byte{4}},
|
||||
{PublicKey: []byte{5}},
|
||||
{PublicKey: []byte{6}},
|
||||
}
|
||||
// We should be able to retrieve these validators by public key even when they are not in the cache
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{6})
|
||||
require.Equal(t, primitives.ValidatorIndex(5), i)
|
||||
require.Equal(t, true, exists)
|
||||
|
||||
// State is finalized. We save [6] to the cache.
|
||||
b.saveValidatorIndices()
|
||||
require.Equal(t, 6, len(b.validatorIndexCache.indexMap))
|
||||
|
||||
// Save a few more times.
|
||||
b.saveValidatorIndices()
|
||||
b.saveValidatorIndices()
|
||||
require.Equal(t, 6, len(b.validatorIndexCache.indexMap))
|
||||
|
||||
// Can still retrieve the validators from the cache
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{1})
|
||||
require.Equal(t, primitives.ValidatorIndex(0), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{2})
|
||||
require.Equal(t, primitives.ValidatorIndex(1), i)
|
||||
require.Equal(t, true, exists)
|
||||
i, exists = b.getValidatorIndex([fieldparams.BLSPubkeyLength]byte{3})
|
||||
require.Equal(t, primitives.ValidatorIndex(2), i)
|
||||
require.Equal(t, true, exists)
|
||||
}
|
||||
@@ -230,7 +230,6 @@ func TestReplayBlocks_ProcessEpoch_Electra(t *testing.T) {
|
||||
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
|
||||
},
|
||||
}))
|
||||
beaconState.SaveValidatorIndices()
|
||||
|
||||
require.NoError(t, beaconState.SetPendingDeposits([]*ethpb.PendingDeposit{
|
||||
stateTesting.GeneratePendingDeposit(t, sk, uint64(amountAvailForProcessing)/10, bytesutil.ToBytes32(withdrawalCredentials), genesisBlock.Block.Slot),
|
||||
|
||||
@@ -305,11 +305,12 @@ func validateSelectionIndex(
|
||||
domain := params.BeaconConfig().DomainSelectionProof
|
||||
epoch := slots.ToEpoch(slot)
|
||||
|
||||
v, err := bs.ValidatorAtIndex(validatorIndex)
|
||||
v, err := bs.ValidatorAtIndexReadOnly(validatorIndex)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
publicKey, err := bls.PublicKeyFromBytes(v.PublicKey)
|
||||
pk := v.PublicKey()
|
||||
publicKey, err := bls.PublicKeyFromBytes(pk[:])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -335,11 +336,12 @@ func validateSelectionIndex(
|
||||
func aggSigSet(s state.ReadOnlyBeaconState, a ethpb.SignedAggregateAttAndProof) (*bls.SignatureBatch, error) {
|
||||
aggregateAndProof := a.AggregateAttestationAndProof()
|
||||
|
||||
v, err := s.ValidatorAtIndex(aggregateAndProof.GetAggregatorIndex())
|
||||
v, err := s.ValidatorAtIndexReadOnly(aggregateAndProof.GetAggregatorIndex())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
publicKey, err := bls.PublicKeyFromBytes(v.PublicKey)
|
||||
pk := v.PublicKey()
|
||||
publicKey, err := bls.PublicKeyFromBytes(pk[:])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -369,7 +369,7 @@ func (s *Service) verifyPendingBlockSignature(ctx context.Context, blk interface
|
||||
return pubsub.ValidationIgnore, err
|
||||
}
|
||||
// Ignore block in the event of non-existent proposer.
|
||||
_, err = roState.ValidatorAtIndex(blk.Block().ProposerIndex())
|
||||
_, err = roState.ValidatorAtIndexReadOnly(blk.Block().ProposerIndex())
|
||||
if err != nil {
|
||||
return pubsub.ValidationIgnore, err
|
||||
}
|
||||
|
||||
@@ -28,6 +28,6 @@ var (
|
||||
ScrapeIntervalFlag = &cli.DurationFlag{
|
||||
Name: "scrape-interval",
|
||||
Usage: "Frequency of scraping expressed as a duration, eg 2m or 1m5s.",
|
||||
Value: 60 * time.Second,
|
||||
Value: 120 * time.Second,
|
||||
}
|
||||
)
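The new two-minute default above can be exercised directly with urfave/cli. The sketch below assumes the v2 API and a stand-alone flag definition, purely to show how the duration is parsed from `--scrape-interval`; it is not how Prysm wires the flag in:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Flags: []cli.Flag{
			&cli.DurationFlag{
				Name:  "scrape-interval",
				Usage: "Frequency of scraping expressed as a duration, e.g. 2m or 1m5s.",
				Value: 120 * time.Second, // matches the new default above
			},
		},
		Action: func(c *cli.Context) error {
			fmt.Println("scraping every", c.Duration("scrape-interval"))
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```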
|
||||
|
||||
@@ -180,7 +180,6 @@ func TestModifiedE2E(t *testing.T) {
|
||||
|
||||
func TestLoadConfigFile(t *testing.T) {
|
||||
t.Run("mainnet", func(t *testing.T) {
|
||||
t.Skip("TODO: add back in after all spec test features are in.")
|
||||
mn := params.MainnetConfig().Copy()
|
||||
mainnetPresetsFiles := presetsFilePath(t, "mainnet")
|
||||
var err error
|
||||
@@ -199,7 +198,6 @@ func TestLoadConfigFile(t *testing.T) {
|
||||
})
|
||||
|
||||
t.Run("minimal", func(t *testing.T) {
|
||||
t.Skip("TODO: add back in after all spec test features are in.")
|
||||
min := params.MinimalSpecConfig().Copy()
|
||||
minimalPresetsFiles := presetsFilePath(t, "minimal")
|
||||
var err error
|
||||
|
||||
@@ -110,7 +110,7 @@ func MinimalSpecConfig() *BeaconChainConfig {
|
||||
minimalConfig.MaxWithdrawalRequestsPerPayload = 2
|
||||
minimalConfig.MaxDepositRequestsPerPayload = 4
|
||||
minimalConfig.PendingPartialWithdrawalsLimit = 64
|
||||
minimalConfig.MaxPendingPartialsPerWithdrawalsSweep = 1
|
||||
minimalConfig.MaxPendingPartialsPerWithdrawalsSweep = 2
|
||||
minimalConfig.PendingDepositLimit = 134217728
|
||||
minimalConfig.MaxPendingDepositsPerEpoch = 16
|
||||
|
||||
|
||||
@@ -37,6 +37,9 @@ func NewWrappedExecutionData(v proto.Message) (interfaces.ExecutionData, error)
|
||||
return WrappedExecutionPayloadDeneb(pbStruct)
|
||||
case *enginev1.ExecutionPayloadDenebWithValueAndBlobsBundle:
|
||||
return WrappedExecutionPayloadDeneb(pbStruct.Payload)
|
||||
case *enginev1.ExecutionBundleElectra:
|
||||
// Note: there are no payload changes in Electra, so the Deneb payload type is reused.
|
||||
return WrappedExecutionPayloadDeneb(pbStruct.Payload)
|
||||
default:
|
||||
return nil, ErrUnsupportedVersion
|
||||
}
|
||||
|
||||
@@ -14,7 +14,8 @@ type GetPayloadResponse struct {
|
||||
BlobsBundle *pb.BlobsBundle
|
||||
OverrideBuilder bool
|
||||
// todo: should we convert this to Gwei up front?
|
||||
Bid primitives.Wei
|
||||
ExecutionRequests *pb.ExecutionRequests
|
||||
}
|
||||
|
||||
// bundleGetter is an interface satisfied by get payload responses that have a blobs bundle.
|
||||
@@ -31,6 +32,10 @@ type shouldOverrideBuilderGetter interface {
|
||||
GetShouldOverrideBuilder() bool
|
||||
}
|
||||
|
||||
type executionRequestsGetter interface {
|
||||
GetDecodedExecutionRequests() (*pb.ExecutionRequests, error)
|
||||
}
|
||||
|
||||
func NewGetPayloadResponse(msg proto.Message) (*GetPayloadResponse, error) {
|
||||
r := &GetPayloadResponse{}
|
||||
bundleGetter, hasBundle := msg.(bundleGetter)
|
||||
@@ -38,6 +43,7 @@ func NewGetPayloadResponse(msg proto.Message) (*GetPayloadResponse, error) {
|
||||
r.BlobsBundle = bundleGetter.GetBlobsBundle()
|
||||
}
|
||||
bidValueGetter, hasBid := msg.(bidValueGetter)
|
||||
executionRequestsGetter, hasExecutionRequests := msg.(executionRequestsGetter)
|
||||
wei := primitives.ZeroWei()
|
||||
if hasBid {
|
||||
// The protobuf types that engine api responses unmarshal into store their values in little endian form.
|
||||
@@ -56,5 +62,12 @@ func NewGetPayloadResponse(msg proto.Message) (*GetPayloadResponse, error) {
|
||||
return nil, err
|
||||
}
|
||||
r.ExecutionData = ed
|
||||
if hasExecutionRequests {
|
||||
requests, err := executionRequestsGetter.GetDecodedExecutionRequests()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
r.ExecutionRequests = requests
|
||||
}
|
||||
return r, nil
|
||||
}
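`NewGetPayloadResponse` probes the incoming message for optional capabilities (blobs bundle, bid value, execution requests) with type assertions against small single-method interfaces. The self-contained sketch below illustrates that pattern with hypothetical types; it is the shape of the technique, not Prysm's API:

```go
package main

import "fmt"

// Optional capability expressed as a small interface.
type executionRequestsCarrier interface {
	DecodedExecutionRequests() ([]string, error)
}

type denebPayload struct{}                      // pre-Electra: no execution requests
type electraPayload struct{ requests []string } // Electra: carries execution requests

func (p electraPayload) DecodedExecutionRequests() ([]string, error) { return p.requests, nil }

// describe inspects msg the same way NewGetPayloadResponse does: assert the
// capability interface and only use it when the assertion succeeds.
func describe(msg any) {
	carrier, ok := msg.(executionRequestsCarrier)
	if !ok {
		fmt.Println("no execution requests in this payload version")
		return
	}
	reqs, err := carrier.DecodedExecutionRequests()
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("execution requests:", reqs)
}

func main() {
	describe(denebPayload{})
	describe(electraPayload{requests: []string{"deposit", "withdrawal"}})
}
```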
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
|
||||
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/runtime/version"
|
||||
)
|
||||
@@ -172,3 +173,12 @@ func (b *SignedBeaconBlock) SetBlobKzgCommitments(c [][]byte) error {
|
||||
b.block.body.blobKzgCommitments = c
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetExecutionRequests sets the execution requests in the block.
|
||||
func (b *SignedBeaconBlock) SetExecutionRequests(req *enginev1.ExecutionRequests) error {
|
||||
if b.version < version.Electra {
|
||||
return consensus_types.ErrNotSupported("SetExecutionRequests", b.version)
|
||||
}
|
||||
b.block.body.executionRequests = req
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -91,6 +91,7 @@ type SignedBeaconBlock interface {
|
||||
SetProposerIndex(idx primitives.ValidatorIndex)
|
||||
SetSlot(slot primitives.Slot)
|
||||
SetSignature(sig []byte)
|
||||
SetExecutionRequests(er *enginev1.ExecutionRequests) error
|
||||
Unblind(e ExecutionData) error
|
||||
}
|
||||
|
||||
|
||||
@@ -9,6 +9,7 @@ import (
|
||||
|
||||
type LightClientExecutionBranch = [fieldparams.ExecutionBranchDepth][fieldparams.RootLength]byte
|
||||
type LightClientSyncCommitteeBranch = [fieldparams.SyncCommitteeBranchDepth][fieldparams.RootLength]byte
|
||||
type LightClientSyncCommitteeBranchElectra = [fieldparams.SyncCommitteeBranchDepthElectra][fieldparams.RootLength]byte
|
||||
type LightClientFinalityBranch = [fieldparams.FinalityBranchDepth][fieldparams.RootLength]byte
|
||||
|
||||
type LightClientHeader interface {
|
||||
@@ -24,7 +25,8 @@ type LightClientBootstrap interface {
|
||||
Version() int
|
||||
Header() LightClientHeader
|
||||
CurrentSyncCommittee() *pb.SyncCommittee
|
||||
CurrentSyncCommitteeBranch() LightClientSyncCommitteeBranch
|
||||
CurrentSyncCommitteeBranch() (LightClientSyncCommitteeBranch, error)
|
||||
CurrentSyncCommitteeBranchElectra() (LightClientSyncCommitteeBranchElectra, error)
|
||||
}
|
||||
|
||||
type LightClientUpdate interface {
|
||||
@@ -32,7 +34,8 @@ type LightClientUpdate interface {
|
||||
Version() int
|
||||
AttestedHeader() LightClientHeader
|
||||
NextSyncCommittee() *pb.SyncCommittee
|
||||
NextSyncCommitteeBranch() LightClientSyncCommitteeBranch
|
||||
NextSyncCommitteeBranch() (LightClientSyncCommitteeBranch, error)
|
||||
NextSyncCommitteeBranchElectra() (LightClientSyncCommitteeBranchElectra, error)
|
||||
FinalizedHeader() LightClientHeader
|
||||
FinalityBranch() LightClientFinalityBranch
|
||||
SyncAggregate() *pb.SyncAggregate
|
||||
|
||||
@@ -22,6 +22,8 @@ func NewWrappedBootstrap(m proto.Message) (interfaces.LightClientBootstrap, erro
|
||||
return NewWrappedBootstrapCapella(t)
|
||||
case *pb.LightClientBootstrapDeneb:
|
||||
return NewWrappedBootstrapDeneb(t)
|
||||
case *pb.LightClientBootstrapElectra:
|
||||
return NewWrappedBootstrapElectra(t)
|
||||
default:
|
||||
return nil, fmt.Errorf("cannot construct light client bootstrap from type %T", t)
|
||||
}
|
||||
@@ -83,8 +85,12 @@ func (h *bootstrapAltair) CurrentSyncCommittee() *pb.SyncCommittee {
|
||||
return h.p.CurrentSyncCommittee
|
||||
}
|
||||
|
||||
func (h *bootstrapAltair) CurrentSyncCommitteeBranch() interfaces.LightClientSyncCommitteeBranch {
|
||||
return h.currentSyncCommitteeBranch
|
||||
func (h *bootstrapAltair) CurrentSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return h.currentSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (h *bootstrapAltair) CurrentSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return [6][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranchElectra", version.Altair)
|
||||
}
|
||||
|
||||
type bootstrapCapella struct {
|
||||
@@ -143,8 +149,12 @@ func (h *bootstrapCapella) CurrentSyncCommittee() *pb.SyncCommittee {
|
||||
return h.p.CurrentSyncCommittee
|
||||
}
|
||||
|
||||
func (h *bootstrapCapella) CurrentSyncCommitteeBranch() interfaces.LightClientSyncCommitteeBranch {
|
||||
return h.currentSyncCommitteeBranch
|
||||
func (h *bootstrapCapella) CurrentSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return h.currentSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (h *bootstrapCapella) CurrentSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return [6][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranchElectra", version.Capella)
|
||||
}
|
||||
|
||||
type bootstrapDeneb struct {
|
||||
@@ -203,6 +213,74 @@ func (h *bootstrapDeneb) CurrentSyncCommittee() *pb.SyncCommittee {
|
||||
return h.p.CurrentSyncCommittee
|
||||
}
|
||||
|
||||
func (h *bootstrapDeneb) CurrentSyncCommitteeBranch() interfaces.LightClientSyncCommitteeBranch {
|
||||
return h.currentSyncCommitteeBranch
|
||||
func (h *bootstrapDeneb) CurrentSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return h.currentSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (h *bootstrapDeneb) CurrentSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return [6][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranchElectra", version.Deneb)
|
||||
}
|
||||
|
||||
type bootstrapElectra struct {
|
||||
p *pb.LightClientBootstrapElectra
|
||||
header interfaces.LightClientHeader
|
||||
currentSyncCommitteeBranch interfaces.LightClientSyncCommitteeBranchElectra
|
||||
}
|
||||
|
||||
var _ interfaces.LightClientBootstrap = &bootstrapElectra{}
|
||||
|
||||
func NewWrappedBootstrapElectra(p *pb.LightClientBootstrapElectra) (interfaces.LightClientBootstrap, error) {
|
||||
if p == nil {
|
||||
return nil, consensustypes.ErrNilObjectWrapped
|
||||
}
|
||||
header, err := NewWrappedHeaderDeneb(p.Header)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
branch, err := createBranch[interfaces.LightClientSyncCommitteeBranchElectra](
|
||||
"sync committee",
|
||||
p.CurrentSyncCommitteeBranch,
|
||||
fieldparams.SyncCommitteeBranchDepthElectra,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &bootstrapElectra{
|
||||
p: p,
|
||||
header: header,
|
||||
currentSyncCommitteeBranch: branch,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) MarshalSSZTo(dst []byte) ([]byte, error) {
|
||||
return h.p.MarshalSSZTo(dst)
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) MarshalSSZ() ([]byte, error) {
|
||||
return h.p.MarshalSSZ()
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) SizeSSZ() int {
|
||||
return h.p.SizeSSZ()
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) Version() int {
|
||||
return version.Electra
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) Header() interfaces.LightClientHeader {
|
||||
return h.header
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) CurrentSyncCommittee() *pb.SyncCommittee {
|
||||
return h.p.CurrentSyncCommittee
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) CurrentSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return [5][32]byte{}, consensustypes.ErrNotSupported("CurrentSyncCommitteeBranch", version.Electra)
|
||||
}
|
||||
|
||||
func (h *bootstrapElectra) CurrentSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return h.currentSyncCommitteeBranch, nil
|
||||
}
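With both accessors now returning an error, a caller has to branch on the object's version to pick the proof of the right depth. The sketch below shows that selection against a simplified local interface (the real one lives in Prysm's light-client interfaces package); the array depths follow the code above, and the numeric version constant is an assumption:

```go
package lightclient

// bootstrap is a simplified stand-in for the light client bootstrap interface above.
type bootstrap interface {
	Version() int
	CurrentSyncCommitteeBranch() ([5][32]byte, error)        // pre-Electra depth
	CurrentSyncCommitteeBranchElectra() ([6][32]byte, error) // Electra depth
}

const versionElectra = 6 // assumed numeric ordering of fork versions

// branchRoots flattens whichever branch depth the bootstrap actually supports.
func branchRoots(b bootstrap) ([][32]byte, error) {
	if b.Version() >= versionElectra {
		br, err := b.CurrentSyncCommitteeBranchElectra()
		if err != nil {
			return nil, err
		}
		return br[:], nil
	}
	br, err := b.CurrentSyncCommitteeBranch()
	if err != nil {
		return nil, err
	}
	return br[:], nil
}
```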
|
||||
|
||||
@@ -100,8 +100,12 @@ func (u *updateAltair) NextSyncCommittee() *pb.SyncCommittee {
|
||||
return u.p.NextSyncCommittee
|
||||
}
|
||||
|
||||
func (u *updateAltair) NextSyncCommitteeBranch() interfaces.LightClientSyncCommitteeBranch {
|
||||
return u.nextSyncCommitteeBranch
|
||||
func (u *updateAltair) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return u.nextSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (u *updateAltair) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return [6][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranchElectra", version.Altair)
|
||||
}
|
||||
|
||||
func (u *updateAltair) FinalizedHeader() interfaces.LightClientHeader {
|
||||
@@ -192,8 +196,12 @@ func (u *updateCapella) NextSyncCommittee() *pb.SyncCommittee {
|
||||
return u.p.NextSyncCommittee
|
||||
}
|
||||
|
||||
func (u *updateCapella) NextSyncCommitteeBranch() interfaces.LightClientSyncCommitteeBranch {
|
||||
return u.nextSyncCommitteeBranch
|
||||
func (u *updateCapella) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return u.nextSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (u *updateCapella) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return [6][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranchElectra", version.Capella)
|
||||
}
|
||||
|
||||
func (u *updateCapella) FinalizedHeader() interfaces.LightClientHeader {
|
||||
@@ -284,8 +292,12 @@ func (u *updateDeneb) NextSyncCommittee() *pb.SyncCommittee {
|
||||
return u.p.NextSyncCommittee
|
||||
}
|
||||
|
||||
func (u *updateDeneb) NextSyncCommitteeBranch() interfaces.LightClientSyncCommitteeBranch {
|
||||
return u.nextSyncCommitteeBranch
|
||||
func (u *updateDeneb) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return u.nextSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (u *updateDeneb) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return [6][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranchElectra", version.Deneb)
|
||||
}
|
||||
|
||||
func (u *updateDeneb) FinalizedHeader() interfaces.LightClientHeader {
|
||||
@@ -303,3 +315,99 @@ func (u *updateDeneb) SyncAggregate() *pb.SyncAggregate {
|
||||
func (u *updateDeneb) SignatureSlot() primitives.Slot {
|
||||
return u.p.SignatureSlot
|
||||
}
|
||||
|
||||
type updateElectra struct {
|
||||
p *pb.LightClientUpdateElectra
|
||||
attestedHeader interfaces.LightClientHeader
|
||||
nextSyncCommitteeBranch interfaces.LightClientSyncCommitteeBranchElectra
|
||||
finalizedHeader interfaces.LightClientHeader
|
||||
finalityBranch interfaces.LightClientFinalityBranch
|
||||
}
|
||||
|
||||
var _ interfaces.LightClientUpdate = &updateElectra{}
|
||||
|
||||
func NewWrappedUpdateElectra(p *pb.LightClientUpdateElectra) (interfaces.LightClientUpdate, error) {
|
||||
if p == nil {
|
||||
return nil, consensustypes.ErrNilObjectWrapped
|
||||
}
|
||||
attestedHeader, err := NewWrappedHeaderDeneb(p.AttestedHeader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
finalizedHeader, err := NewWrappedHeaderDeneb(p.FinalizedHeader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
scBranch, err := createBranch[interfaces.LightClientSyncCommitteeBranchElectra](
|
||||
"sync committee",
|
||||
p.NextSyncCommitteeBranch,
|
||||
fieldparams.SyncCommitteeBranchDepthElectra,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
finalityBranch, err := createBranch[interfaces.LightClientFinalityBranch](
|
||||
"finality",
|
||||
p.FinalityBranch,
|
||||
fieldparams.FinalityBranchDepth,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &updateElectra{
|
||||
p: p,
|
||||
attestedHeader: attestedHeader,
|
||||
nextSyncCommitteeBranch: scBranch,
|
||||
finalizedHeader: finalizedHeader,
|
||||
finalityBranch: finalityBranch,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (u *updateElectra) MarshalSSZTo(dst []byte) ([]byte, error) {
|
||||
return u.p.MarshalSSZTo(dst)
|
||||
}
|
||||
|
||||
func (u *updateElectra) MarshalSSZ() ([]byte, error) {
|
||||
return u.p.MarshalSSZ()
|
||||
}
|
||||
|
||||
func (u *updateElectra) SizeSSZ() int {
|
||||
return u.p.SizeSSZ()
|
||||
}
|
||||
|
||||
func (u *updateElectra) Version() int {
|
||||
return version.Electra
|
||||
}
|
||||
|
||||
func (u *updateElectra) AttestedHeader() interfaces.LightClientHeader {
|
||||
return u.attestedHeader
|
||||
}
|
||||
|
||||
func (u *updateElectra) NextSyncCommittee() *pb.SyncCommittee {
|
||||
return u.p.NextSyncCommittee
|
||||
}
|
||||
|
||||
func (u *updateElectra) NextSyncCommitteeBranch() (interfaces.LightClientSyncCommitteeBranch, error) {
|
||||
return [5][32]byte{}, consensustypes.ErrNotSupported("NextSyncCommitteeBranch", version.Electra)
|
||||
}
|
||||
|
||||
func (u *updateElectra) NextSyncCommitteeBranchElectra() (interfaces.LightClientSyncCommitteeBranchElectra, error) {
|
||||
return u.nextSyncCommitteeBranch, nil
|
||||
}
|
||||
|
||||
func (u *updateElectra) FinalizedHeader() interfaces.LightClientHeader {
|
||||
return u.finalizedHeader
|
||||
}
|
||||
|
||||
func (u *updateElectra) FinalityBranch() interfaces.LightClientFinalityBranch {
|
||||
return u.finalityBranch
|
||||
}
|
||||
|
||||
func (u *updateElectra) SyncAggregate() *pb.SyncAggregate {
|
||||
return u.p.SyncAggregate
|
||||
}
|
||||
|
||||
func (u *updateElectra) SignatureSlot() primitives.Slot {
|
||||
return u.p.SignatureSlot
|
||||
}
|
||||
|
||||
@@ -48,7 +48,7 @@ ssz_gen_marshal(
|
||||
"WithdrawalRequest",
|
||||
"DepositRequest",
|
||||
"ConsolidationRequest",
|
||||
"ExecutionRequests",
|
||||
],
|
||||
)
|
||||
|
||||
@@ -74,7 +74,7 @@ go_proto_library(
|
||||
go_library(
|
||||
name = "go_default_library",
|
||||
srcs = [
|
||||
"electra.go",
|
||||
"execution_engine.go",
|
||||
"json_marshal_unmarshal.go",
|
||||
":ssz_generated_files", # keep
|
||||
|
||||
@@ -1,4 +1,118 @@
|
||||
package enginev1
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type ExecutionPayloadElectra = ExecutionPayloadDeneb
|
||||
type ExecutionPayloadHeaderElectra = ExecutionPayloadHeaderDeneb
|
||||
|
||||
var (
|
||||
drExample = &DepositRequest{}
|
||||
drSize = drExample.SizeSSZ()
|
||||
wrExample = &WithdrawalRequest{}
|
||||
wrSize = wrExample.SizeSSZ()
|
||||
crExample = &ConsolidationRequest{}
|
||||
crSize = crExample.SizeSSZ()
|
||||
)
|
||||
|
||||
const LenExecutionRequestsElectra = 3
|
||||
|
||||
func (ebe *ExecutionBundleElectra) GetDecodedExecutionRequests() (*ExecutionRequests, error) {
|
||||
requests := &ExecutionRequests{}
|
||||
|
||||
if len(ebe.ExecutionRequests) != LenExecutionRequestsElectra /* types of requests */ {
|
||||
return nil, errors.Errorf("invalid execution request size: %d", len(ebe.ExecutionRequests))
|
||||
}
|
||||
|
||||
// deposit requests
|
||||
drs, err := unmarshalItems(ebe.ExecutionRequests[0], drSize, func() *DepositRequest { return &DepositRequest{} })
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
requests.Deposits = drs
|
||||
|
||||
// withdrawal requests
|
||||
wrs, err := unmarshalItems(ebe.ExecutionRequests[1], wrSize, func() *WithdrawalRequest { return &WithdrawalRequest{} })
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
requests.Withdrawals = wrs
|
||||
|
||||
// consolidation requests
|
||||
crs, err := unmarshalItems(ebe.ExecutionRequests[2], crSize, func() *ConsolidationRequest { return &ConsolidationRequest{} })
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
requests.Consolidations = crs
|
||||
|
||||
return requests, nil
|
||||
}
|
||||
|
||||
func EncodeExecutionRequests(requests *ExecutionRequests) ([]hexutil.Bytes, error) {
|
||||
if requests == nil {
|
||||
return nil, errors.New("invalid execution requests")
|
||||
}
|
||||
|
||||
drBytes, err := marshalItems(requests.Deposits)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to marshal deposit requests")
|
||||
}
|
||||
|
||||
wrBytes, err := marshalItems(requests.Withdrawals)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to marshal withdrawal requests")
|
||||
}
|
||||
|
||||
crBytes, err := marshalItems(requests.Consolidations)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to marshal consolidation requests")
|
||||
}
|
||||
return []hexutil.Bytes{drBytes, wrBytes, crBytes}, nil
|
||||
}
|
||||
|
||||
type sszUnmarshaler interface {
|
||||
UnmarshalSSZ([]byte) error
|
||||
}
|
||||
|
||||
type sszMarshaler interface {
|
||||
MarshalSSZTo(buf []byte) ([]byte, error)
|
||||
SizeSSZ() int
|
||||
}
|
||||
|
||||
func marshalItems[T sszMarshaler](items []T) ([]byte, error) {
|
||||
if len(items) == 0 {
|
||||
return []byte{}, nil
|
||||
}
|
||||
size := items[0].SizeSSZ()
|
||||
buf := make([]byte, 0, size*len(items))
|
||||
var err error
|
||||
for i, item := range items {
|
||||
buf, err = item.MarshalSSZTo(buf)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to marshal item at index %d: %w", i, err)
|
||||
}
|
||||
}
|
||||
return buf, nil
|
||||
}
|
||||
|
||||
// unmarshalItems decodes a concatenation of fixed-size SSZ-encoded items into a slice of T.
|
||||
func unmarshalItems[T sszUnmarshaler](data []byte, itemSize int, newItem func() T) ([]T, error) {
|
||||
if len(data)%itemSize != 0 {
|
||||
return nil, fmt.Errorf("invalid data length: data size (%d) is not a multiple of item size (%d)", len(data), itemSize)
|
||||
}
|
||||
numItems := len(data) / itemSize
|
||||
items := make([]T, numItems)
|
||||
for i := range items {
|
||||
itemBytes := data[i*itemSize : (i+1)*itemSize]
|
||||
item := newItem()
|
||||
if err := item.UnmarshalSSZ(itemBytes); err != nil {
|
||||
return nil, fmt.Errorf("failed to unmarshal item at index %d: %w", i, err)
|
||||
}
|
||||
items[i] = item
|
||||
}
|
||||
return items, nil
|
||||
}
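Because `marshalItems` and `unmarshalItems` only require the small SSZ interfaces above, they round-trip any fixed-size item. The sketch below defines a throwaway 8-byte item to show the concatenation format; it assumes it sits in the same file as the helpers (they are unexported, and `fmt` is already imported there).

```go
// fixedItem is a hypothetical 8-byte item used only to illustrate the helpers above.
type fixedItem struct{ data [8]byte }

func (f *fixedItem) SizeSSZ() int { return 8 }

func (f *fixedItem) MarshalSSZTo(buf []byte) ([]byte, error) {
	return append(buf, f.data[:]...), nil
}

func (f *fixedItem) UnmarshalSSZ(b []byte) error {
	if len(b) != 8 {
		return fmt.Errorf("expected 8 bytes, got %d", len(b))
	}
	copy(f.data[:], b)
	return nil
}

func exampleRoundTrip() error {
	items := []*fixedItem{{data: [8]byte{1}}, {data: [8]byte{2}}}
	buf, err := marshalItems(items) // 16 bytes: two items back to back
	if err != nil {
		return err
	}
	decoded, err := unmarshalItems(buf, 8, func() *fixedItem { return &fixedItem{} })
	if err != nil {
		return err
	}
	fmt.Printf("decoded %d items\n", len(decoded))
	return nil
}
```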
|
||||
|
||||
proto/engine/v1/electra.pb.go (generated)
@@ -290,6 +290,85 @@ func (x *ExecutionRequests) GetConsolidations() []*ConsolidationRequest {
|
||||
return nil
|
||||
}
|
||||
|
||||
type ExecutionBundleElectra struct {
|
||||
state protoimpl.MessageState
|
||||
sizeCache protoimpl.SizeCache
|
||||
unknownFields protoimpl.UnknownFields
|
||||
|
||||
Payload *ExecutionPayloadDeneb `protobuf:"bytes,1,opt,name=payload,proto3" json:"payload,omitempty"`
|
||||
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
|
||||
BlobsBundle *BlobsBundle `protobuf:"bytes,3,opt,name=blobs_bundle,json=blobsBundle,proto3" json:"blobs_bundle,omitempty"`
|
||||
ShouldOverrideBuilder bool `protobuf:"varint,4,opt,name=should_override_builder,json=shouldOverrideBuilder,proto3" json:"should_override_builder,omitempty"`
|
||||
ExecutionRequests [][]byte `protobuf:"bytes,5,rep,name=execution_requests,json=executionRequests,proto3" json:"execution_requests,omitempty"`
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) Reset() {
|
||||
*x = ExecutionBundleElectra{}
|
||||
if protoimpl.UnsafeEnabled {
|
||||
mi := &file_proto_engine_v1_electra_proto_msgTypes[4]
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) String() string {
|
||||
return protoimpl.X.MessageStringOf(x)
|
||||
}
|
||||
|
||||
func (*ExecutionBundleElectra) ProtoMessage() {}
|
||||
|
||||
func (x *ExecutionBundleElectra) ProtoReflect() protoreflect.Message {
|
||||
mi := &file_proto_engine_v1_electra_proto_msgTypes[4]
|
||||
if protoimpl.UnsafeEnabled && x != nil {
|
||||
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
||||
if ms.LoadMessageInfo() == nil {
|
||||
ms.StoreMessageInfo(mi)
|
||||
}
|
||||
return ms
|
||||
}
|
||||
return mi.MessageOf(x)
|
||||
}
|
||||
|
||||
// Deprecated: Use ExecutionBundleElectra.ProtoReflect.Descriptor instead.
|
||||
func (*ExecutionBundleElectra) Descriptor() ([]byte, []int) {
|
||||
return file_proto_engine_v1_electra_proto_rawDescGZIP(), []int{4}
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) GetPayload() *ExecutionPayloadDeneb {
|
||||
if x != nil {
|
||||
return x.Payload
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) GetValue() []byte {
|
||||
if x != nil {
|
||||
return x.Value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) GetBlobsBundle() *BlobsBundle {
|
||||
if x != nil {
|
||||
return x.BlobsBundle
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) GetShouldOverrideBuilder() bool {
|
||||
if x != nil {
|
||||
return x.ShouldOverrideBuilder
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (x *ExecutionBundleElectra) GetExecutionRequests() [][]byte {
|
||||
if x != nil {
|
||||
return x.ExecutionRequests
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var File_proto_engine_v1_electra_proto protoreflect.FileDescriptor
|
||||
|
||||
var file_proto_engine_v1_electra_proto_rawDesc = []byte{
|
||||
@@ -298,64 +377,85 @@ var file_proto_engine_v1_electra_proto_rawDesc = []byte{
|
||||
0x12, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
|
||||
0x2e, 0x76, 0x31, 0x1a, 0x1b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x74, 0x68, 0x2f, 0x65,
|
||||
0x78, 0x74, 0x2f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
|
||||
0x22, 0x8d, 0x01, 0x0a, 0x11, 0x57, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x52,
|
||||
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2d, 0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
|
||||
0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06,
|
||||
0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x64,
|
||||
0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x31, 0x0a, 0x10, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74,
|
||||
0x6f, 0x72, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42,
|
||||
0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74,
|
||||
0x6f, 0x72, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75,
|
||||
0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74,
|
||||
0x22, 0xc3, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71, 0x75,
|
||||
0x65, 0x73, 0x74, 0x12, 0x1e, 0x0a, 0x06, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20,
|
||||
0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x06, 0x70, 0x75, 0x62,
|
||||
0x6b, 0x65, 0x79, 0x12, 0x3d, 0x0a, 0x16, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61,
|
||||
0x6c, 0x5f, 0x63, 0x72, 0x65, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x73, 0x18, 0x02, 0x20,
|
||||
0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x33, 0x32, 0x52, 0x15, 0x77, 0x69, 0x74,
|
||||
0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x43, 0x72, 0x65, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x61,
|
||||
0x6c, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01,
|
||||
0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x24, 0x0a, 0x09, 0x73, 0x69,
|
||||
0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a,
|
||||
0xb5, 0x18, 0x02, 0x39, 0x36, 0x52, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65,
|
||||
0x12, 0x14, 0x0a, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52,
|
||||
0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x22, 0x9f, 0x01, 0x0a, 0x14, 0x43, 0x6f, 0x6e, 0x73, 0x6f,
|
||||
0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12,
|
||||
0x2d, 0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73,
|
||||
0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52,
|
||||
0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x2b,
|
||||
0x0a, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18,
|
||||
0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0c, 0x73,
|
||||
0x6f, 0x75, 0x72, 0x63, 0x65, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x2b, 0x0a, 0x0d, 0x74,
|
||||
0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x03, 0x20, 0x01,
|
||||
0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0c, 0x74, 0x61, 0x72, 0x67,
|
||||
0x65, 0x74, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x22, 0x87, 0x02, 0x0a, 0x11, 0x45, 0x78, 0x65,
|
||||
0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x12, 0x48,
|
||||
0x0a, 0x08, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b,
|
||||
0x32, 0x22, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69,
|
||||
0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71,
|
||||
0x75, 0x65, 0x73, 0x74, 0x42, 0x08, 0x92, 0xb5, 0x18, 0x04, 0x38, 0x31, 0x39, 0x32, 0x52, 0x08,
|
||||
0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x73, 0x12, 0x4f, 0x0a, 0x0b, 0x77, 0x69, 0x74, 0x68,
|
||||
0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e,
|
||||
0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e,
|
||||
0x76, 0x31, 0x2e, 0x57, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x52, 0x65, 0x71,
|
||||
0x75, 0x65, 0x73, 0x74, 0x42, 0x06, 0x92, 0xb5, 0x18, 0x02, 0x31, 0x36, 0x52, 0x0b, 0x77, 0x69,
|
||||
0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x73, 0x12, 0x57, 0x0a, 0x0e, 0x63, 0x6f, 0x6e,
|
||||
0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28,
|
||||
0x0b, 0x32, 0x28, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
|
||||
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61,
|
||||
0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x05, 0x92, 0xb5, 0x18,
|
||||
0x01, 0x31, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f,
|
||||
0x6e, 0x73, 0x42, 0x8e, 0x01, 0x0a, 0x16, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72,
|
||||
0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0c, 0x45,
|
||||
0x6c, 0x65, 0x63, 0x74, 0x72, 0x61, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3a, 0x67,
|
||||
0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61,
|
||||
0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35,
|
||||
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76, 0x31,
|
||||
0x3b, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x76, 0x31, 0xaa, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65,
|
||||
0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02,
|
||||
0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65,
|
||||
0x5c, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
|
||||
0x1a, 0x26, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76,
|
||||
0x31, 0x2f, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x65, 0x6e, 0x67, 0x69,
|
||||
0x6e, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x8d, 0x01, 0x0a, 0x11, 0x57, 0x69, 0x74,
|
||||
0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2d,
|
||||
0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73,
|
||||
0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x0d,
|
||||
0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x31, 0x0a,
|
||||
0x10, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65,
|
||||
0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x34, 0x38, 0x52,
|
||||
0x0f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79,
|
||||
0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04,
|
||||
0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0xc3, 0x01, 0x0a, 0x0e, 0x44, 0x65, 0x70,
|
||||
0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1e, 0x0a, 0x06, 0x70,
|
||||
0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18,
|
||||
0x02, 0x34, 0x38, 0x52, 0x06, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x12, 0x3d, 0x0a, 0x16, 0x77,
|
||||
0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x5f, 0x63, 0x72, 0x65, 0x64, 0x65, 0x6e,
|
||||
0x74, 0x69, 0x61, 0x6c, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18,
|
||||
0x02, 0x33, 0x32, 0x52, 0x15, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x43,
|
||||
0x72, 0x65, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x61, 0x6c, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d,
|
||||
0x6f, 0x75, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75,
|
||||
0x6e, 0x74, 0x12, 0x24, 0x0a, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x18,
|
||||
0x04, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02, 0x39, 0x36, 0x52, 0x09, 0x73,
|
||||
0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x69, 0x6e, 0x64, 0x65,
|
||||
0x78, 0x18, 0x05, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x22, 0x9f,
|
||||
0x01, 0x0a, 0x14, 0x43, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e,
|
||||
0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x2d, 0x0a, 0x0e, 0x73, 0x6f, 0x75, 0x72, 0x63,
|
||||
0x65, 0x5f, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x42,
|
||||
0x06, 0x8a, 0xb5, 0x18, 0x02, 0x32, 0x30, 0x52, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x41,
|
||||
0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12, 0x2b, 0x0a, 0x0d, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65,
|
||||
0x5f, 0x70, 0x75, 0x62, 0x6b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a,
|
||||
0xb5, 0x18, 0x02, 0x34, 0x38, 0x52, 0x0c, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x50, 0x75, 0x62,
|
||||
0x6b, 0x65, 0x79, 0x12, 0x2b, 0x0a, 0x0d, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x5f, 0x70, 0x75,
|
||||
0x62, 0x6b, 0x65, 0x79, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0c, 0x42, 0x06, 0x8a, 0xb5, 0x18, 0x02,
|
||||
0x34, 0x38, 0x52, 0x0c, 0x74, 0x61, 0x72, 0x67, 0x65, 0x74, 0x50, 0x75, 0x62, 0x6b, 0x65, 0x79,
|
||||
0x22, 0x87, 0x02, 0x0a, 0x11, 0x45, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65,
|
||||
0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x12, 0x48, 0x0a, 0x08, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69,
|
||||
0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x22, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72,
|
||||
0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65,
|
||||
0x70, 0x6f, 0x73, 0x69, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x08, 0x92, 0xb5,
|
||||
0x18, 0x04, 0x38, 0x31, 0x39, 0x32, 0x52, 0x08, 0x64, 0x65, 0x70, 0x6f, 0x73, 0x69, 0x74, 0x73,
|
||||
0x12, 0x4f, 0x0a, 0x0b, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c, 0x73, 0x18,
|
||||
0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d,
|
||||
0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x57, 0x69, 0x74, 0x68, 0x64,
|
||||
0x72, 0x61, 0x77, 0x61, 0x6c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x42, 0x06, 0x92, 0xb5,
|
||||
0x18, 0x02, 0x31, 0x36, 0x52, 0x0b, 0x77, 0x69, 0x74, 0x68, 0x64, 0x72, 0x61, 0x77, 0x61, 0x6c,
|
||||
0x73, 0x12, 0x57, 0x0a, 0x0e, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69,
|
||||
0x6f, 0x6e, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x28, 0x2e, 0x65, 0x74, 0x68, 0x65,
|
||||
0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43,
|
||||
0x6f, 0x6e, 0x73, 0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75,
|
||||
0x65, 0x73, 0x74, 0x42, 0x05, 0x92, 0xb5, 0x18, 0x01, 0x31, 0x52, 0x0e, 0x63, 0x6f, 0x6e, 0x73,
|
||||
0x6f, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x22, 0x9e, 0x02, 0x0a, 0x16, 0x45,
|
||||
0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x45, 0x6c,
|
||||
0x65, 0x63, 0x74, 0x72, 0x61, 0x12, 0x43, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
|
||||
0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x29, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
|
||||
0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x65, 0x63,
|
||||
0x75, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x44, 0x65, 0x6e, 0x65,
|
||||
0x62, 0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61,
|
||||
0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
|
||||
0x12, 0x42, 0x0a, 0x0c, 0x62, 0x6c, 0x6f, 0x62, 0x73, 0x5f, 0x62, 0x75, 0x6e, 0x64, 0x6c, 0x65,
|
||||
0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1f, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
|
||||
0x6d, 0x2e, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x42, 0x6c, 0x6f, 0x62,
|
||||
0x73, 0x42, 0x75, 0x6e, 0x64, 0x6c, 0x65, 0x52, 0x0b, 0x62, 0x6c, 0x6f, 0x62, 0x73, 0x42, 0x75,
|
||||
0x6e, 0x64, 0x6c, 0x65, 0x12, 0x36, 0x0a, 0x17, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x5f, 0x6f,
|
||||
0x76, 0x65, 0x72, 0x72, 0x69, 0x64, 0x65, 0x5f, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x18,
|
||||
0x04, 0x20, 0x01, 0x28, 0x08, 0x52, 0x15, 0x73, 0x68, 0x6f, 0x75, 0x6c, 0x64, 0x4f, 0x76, 0x65,
|
||||
0x72, 0x72, 0x69, 0x64, 0x65, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x2d, 0x0a, 0x12,
|
||||
0x65, 0x78, 0x65, 0x63, 0x75, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x72, 0x65, 0x71, 0x75, 0x65, 0x73,
|
||||
0x74, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x0c, 0x52, 0x11, 0x65, 0x78, 0x65, 0x63, 0x75, 0x74,
|
||||
0x69, 0x6f, 0x6e, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x73, 0x42, 0x8e, 0x01, 0x0a, 0x16,
|
||||
0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x65, 0x6e, 0x67,
|
||||
0x69, 0x6e, 0x65, 0x2e, 0x76, 0x31, 0x42, 0x0c, 0x45, 0x6c, 0x65, 0x63, 0x74, 0x72, 0x61, 0x50,
|
||||
0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x3a, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63,
|
||||
0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73,
|
||||
0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f,
|
||||
0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x2f, 0x76, 0x31, 0x3b, 0x65, 0x6e, 0x67, 0x69, 0x6e, 0x65,
|
||||
0x76, 0x31, 0xaa, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x45, 0x6e,
|
||||
0x67, 0x69, 0x6e, 0x65, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x12, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65,
|
||||
0x75, 0x6d, 0x5c, 0x45, 0x6e, 0x67, 0x69, 0x6e, 0x65, 0x5c, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72,
|
||||
0x6f, 0x74, 0x6f, 0x33,
|
||||
}
|
||||
|
||||
var (
|
||||
@@ -370,22 +470,27 @@ func file_proto_engine_v1_electra_proto_rawDescGZIP() []byte {
|
||||
return file_proto_engine_v1_electra_proto_rawDescData
|
||||
}
|
||||
|
||||
var file_proto_engine_v1_electra_proto_msgTypes = make([]protoimpl.MessageInfo, 4)
|
||||
var file_proto_engine_v1_electra_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
|
||||
var file_proto_engine_v1_electra_proto_goTypes = []interface{}{
|
||||
(*WithdrawalRequest)(nil), // 0: ethereum.engine.v1.WithdrawalRequest
|
||||
(*DepositRequest)(nil), // 1: ethereum.engine.v1.DepositRequest
|
||||
(*ConsolidationRequest)(nil), // 2: ethereum.engine.v1.ConsolidationRequest
|
||||
(*ExecutionRequests)(nil), // 3: ethereum.engine.v1.ExecutionRequests
|
||||
(*WithdrawalRequest)(nil), // 0: ethereum.engine.v1.WithdrawalRequest
|
||||
(*DepositRequest)(nil), // 1: ethereum.engine.v1.DepositRequest
|
||||
(*ConsolidationRequest)(nil), // 2: ethereum.engine.v1.ConsolidationRequest
|
||||
(*ExecutionRequests)(nil), // 3: ethereum.engine.v1.ExecutionRequests
|
||||
(*ExecutionBundleElectra)(nil), // 4: ethereum.engine.v1.ExecutionBundleElectra
|
||||
(*ExecutionPayloadDeneb)(nil), // 5: ethereum.engine.v1.ExecutionPayloadDeneb
|
||||
(*BlobsBundle)(nil), // 6: ethereum.engine.v1.BlobsBundle
|
||||
}
|
||||
var file_proto_engine_v1_electra_proto_depIdxs = []int32{
|
||||
1, // 0: ethereum.engine.v1.ExecutionRequests.deposits:type_name -> ethereum.engine.v1.DepositRequest
|
||||
0, // 1: ethereum.engine.v1.ExecutionRequests.withdrawals:type_name -> ethereum.engine.v1.WithdrawalRequest
|
||||
2, // 2: ethereum.engine.v1.ExecutionRequests.consolidations:type_name -> ethereum.engine.v1.ConsolidationRequest
|
||||
3, // [3:3] is the sub-list for method output_type
|
||||
3, // [3:3] is the sub-list for method input_type
|
||||
3, // [3:3] is the sub-list for extension type_name
|
||||
3, // [3:3] is the sub-list for extension extendee
|
||||
0, // [0:3] is the sub-list for field type_name
|
||||
5, // 3: ethereum.engine.v1.ExecutionBundleElectra.payload:type_name -> ethereum.engine.v1.ExecutionPayloadDeneb
|
||||
6, // 4: ethereum.engine.v1.ExecutionBundleElectra.blobs_bundle:type_name -> ethereum.engine.v1.BlobsBundle
|
||||
5, // [5:5] is the sub-list for method output_type
|
||||
5, // [5:5] is the sub-list for method input_type
|
||||
5, // [5:5] is the sub-list for extension type_name
|
||||
5, // [5:5] is the sub-list for extension extendee
|
||||
0, // [0:5] is the sub-list for field type_name
|
||||
}
|
||||
|
||||
func init() { file_proto_engine_v1_electra_proto_init() }
|
||||
@@ -393,6 +498,7 @@ func file_proto_engine_v1_electra_proto_init() {
|
||||
if File_proto_engine_v1_electra_proto != nil {
|
||||
return
|
||||
}
|
||||
file_proto_engine_v1_execution_engine_proto_init()
|
||||
if !protoimpl.UnsafeEnabled {
|
||||
file_proto_engine_v1_electra_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*WithdrawalRequest); i {
|
||||
@@ -442,6 +548,18 @@ func file_proto_engine_v1_electra_proto_init() {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
file_proto_engine_v1_electra_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
|
||||
switch v := v.(*ExecutionBundleElectra); i {
|
||||
case 0:
|
||||
return &v.state
|
||||
case 1:
|
||||
return &v.sizeCache
|
||||
case 2:
|
||||
return &v.unknownFields
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
type x struct{}
|
||||
out := protoimpl.TypeBuilder{
|
||||
@@ -449,7 +567,7 @@ func file_proto_engine_v1_electra_proto_init() {
|
||||
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
|
||||
RawDescriptor: file_proto_engine_v1_electra_proto_rawDesc,
|
||||
NumEnums: 0,
|
||||
NumMessages: 4,
|
||||
NumMessages: 5,
|
||||
NumExtensions: 0,
|
||||
NumServices: 0,
|
||||
},
|
||||
|
||||
@@ -16,6 +16,7 @@ syntax = "proto3";
package ethereum.engine.v1;

import "proto/eth/ext/options.proto";
import "proto/engine/v1/execution_engine.proto";

option csharp_namespace = "Ethereum.Engine.V1";
option go_package = "github.com/prysmaticlabs/prysm/v5/proto/engine/v1;enginev1";
@@ -60,7 +61,16 @@ message ConsolidationRequest {

// ExecutionRequests is a container for all requests from the execution layer to be included in a block.
message ExecutionRequests {
  repeated DepositRequest deposits = 1 [(ethereum.eth.ext.ssz_max) = "max_deposit_requests_per_payload.size"];
  repeated WithdrawalRequest withdrawals = 2 [(ethereum.eth.ext.ssz_max) = "max_withdrawal_requests_per_payload.size"];
  repeated ConsolidationRequest consolidations = 3 [(ethereum.eth.ext.ssz_max) = "max_consolidation_requests_per_payload.size"];
}

// ExecutionBundleElectra is a container that builds on Payload V4 and includes the execution requests sidecar needed post-Electra.
message ExecutionBundleElectra {
  ExecutionPayloadDeneb payload = 1;
  bytes value = 2;
  BlobsBundle blobs_bundle = 3;
  bool should_override_builder = 4;
  repeated bytes execution_requests = 5;
}

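As a rough illustration of how the typed container above maps onto the opaque execution_requests field, a hypothetical helper follows. The function buildBundle is my own name and placement; EncodeExecutionRequests and RecastHexutilByteSlice are the package helpers shown elsewhere in this diff.

package example

import (
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)

func buildBundle(payload *enginev1.ExecutionPayloadDeneb, blobs *enginev1.BlobsBundle, value []byte, reqs *enginev1.ExecutionRequests) (*enginev1.ExecutionBundleElectra, error) {
	// Each entry of execution_requests is one SSZ-concatenated list:
	// [0] deposits, [1] withdrawals, [2] consolidations.
	encoded, err := enginev1.EncodeExecutionRequests(reqs)
	if err != nil {
		return nil, err
	}
	return &enginev1.ExecutionBundleElectra{
		Payload:           payload,
		Value:             value,
		BlobsBundle:       blobs,
		ExecutionRequests: enginev1.RecastHexutilByteSlice(encoded),
	}, nil
}
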
proto/engine/v1/electra_test.go (new file, 56 lines)
@@ -0,0 +1,56 @@
package enginev1_test

import (
	"testing"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
	"github.com/prysmaticlabs/prysm/v5/testing/require"
)

var depositRequestsSSZHex = "0x706b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000077630000000000000000000000000000000000000000000000000000000000007b00000000000000736967000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000c801000000000000706b00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000776300000000000000000000000000000000000000000000000000000000000090010000000000007369670000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000000000000000"

func TestUnmarshalItems_OK(t *testing.T) {
	drb, err := hexutil.Decode(depositRequestsSSZHex)
	require.NoError(t, err)
	exampleRequest := &enginev1.DepositRequest{}
	depositRequests, err := enginev1.UnmarshalItems(drb, exampleRequest.SizeSSZ(), func() *enginev1.DepositRequest { return &enginev1.DepositRequest{} })
	require.NoError(t, err)

	exampleRequest1 := &enginev1.DepositRequest{
		Pubkey:                bytesutil.PadTo([]byte("pk"), 48),
		WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
		Amount:                123,
		Signature:             bytesutil.PadTo([]byte("sig"), 96),
		Index:                 456,
	}
	exampleRequest2 := &enginev1.DepositRequest{
		Pubkey:                bytesutil.PadTo([]byte("pk"), 48),
		WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
		Amount:                400,
		Signature:             bytesutil.PadTo([]byte("sig"), 96),
		Index:                 32,
	}
	require.DeepEqual(t, depositRequests, []*enginev1.DepositRequest{exampleRequest1, exampleRequest2})
}

func TestMarshalItems_OK(t *testing.T) {
	exampleRequest1 := &enginev1.DepositRequest{
		Pubkey:                bytesutil.PadTo([]byte("pk"), 48),
		WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
		Amount:                123,
		Signature:             bytesutil.PadTo([]byte("sig"), 96),
		Index:                 456,
	}
	exampleRequest2 := &enginev1.DepositRequest{
		Pubkey:                bytesutil.PadTo([]byte("pk"), 48),
		WithdrawalCredentials: bytesutil.PadTo([]byte("wc"), 32),
		Amount:                400,
		Signature:             bytesutil.PadTo([]byte("sig"), 96),
		Index:                 32,
	}
	drbs, err := enginev1.MarshalItems([]*enginev1.DepositRequest{exampleRequest1, exampleRequest2})
	require.NoError(t, err)
	require.DeepEqual(t, depositRequestsSSZHex, hexutil.Encode(drbs))
}

@@ -1,3 +1,11 @@
package enginev1

type Copier[T any] copier[T]

func MarshalItems[T sszMarshaler](items []T) ([]byte, error) {
	return marshalItems(items)
}

func UnmarshalItems[T sszUnmarshaler](data []byte, itemSize int, newItem func() T) ([]T, error) {
	return unmarshalItems(data, itemSize, newItem)
}

@@ -297,6 +297,14 @@ type GetPayloadV3ResponseJson struct {
	ShouldOverrideBuilder bool `json:"shouldOverrideBuilder"`
}

type GetPayloadV4ResponseJson struct {
	ExecutionPayload      *ExecutionPayloadDenebJSON `json:"executionPayload"`
	BlockValue            string                     `json:"blockValue"`
	BlobsBundle           *BlobBundleJSON            `json:"blobsBundle"`
	ShouldOverrideBuilder bool                       `json:"shouldOverrideBuilder"`
	ExecutionRequests     []hexutil.Bytes            `json:"executionRequests"`
}

// ExecutionPayloadBody represents the engine API ExecutionPayloadV1 or ExecutionPayloadV2 type.
type ExecutionPayloadBody struct {
	Transactions []hexutil.Bytes `json:"transactions"`

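A small hedged sketch of what this wrapper decodes. The JSON below is illustrative only and omits the full executionPayload and blobsBundle objects that a real engine_getPayloadV4 response carries; the field names come from the struct tags above.

package example

import (
	"encoding/json"
	"fmt"

	enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)

func decodeV4Response() error {
	raw := []byte(`{
		"blockValue": "0x1bc16d674ec80000",
		"shouldOverrideBuilder": false,
		"executionRequests": ["0x00", "0x01", "0x02"]
	}`)
	var resp enginev1.GetPayloadV4ResponseJson
	if err := json.Unmarshal(raw, &resp); err != nil {
		return err
	}
	// One hex-encoded entry per request type: deposits, withdrawals, consolidations.
	fmt.Println(len(resp.ExecutionRequests)) // 3
	return nil
}
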
@@ -1112,6 +1120,137 @@ func (e *ExecutionPayloadDenebWithValueAndBlobsBundle) UnmarshalJSON(enc []byte)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *ExecutionBundleElectra) UnmarshalJSON(enc []byte) error {
|
||||
dec := GetPayloadV4ResponseJson{}
|
||||
if err := json.Unmarshal(enc, &dec); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if dec.ExecutionPayload.ParentHash == nil {
|
||||
return errors.New("missing required field 'parentHash' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.FeeRecipient == nil {
|
||||
return errors.New("missing required field 'feeRecipient' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.StateRoot == nil {
|
||||
return errors.New("missing required field 'stateRoot' for ExecutionPayload")
|
||||
}
|
||||
	if dec.ExecutionPayload.ReceiptsRoot == nil {
		return errors.New("missing required field 'receiptsRoot' for ExecutionPayload")
	}
|
||||
if dec.ExecutionPayload.LogsBloom == nil {
|
||||
return errors.New("missing required field 'logsBloom' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.PrevRandao == nil {
|
||||
return errors.New("missing required field 'prevRandao' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.ExtraData == nil {
|
||||
return errors.New("missing required field 'extraData' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.BlockHash == nil {
|
||||
return errors.New("missing required field 'blockHash' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.Transactions == nil {
|
||||
return errors.New("missing required field 'transactions' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.BlockNumber == nil {
|
||||
return errors.New("missing required field 'blockNumber' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.Timestamp == nil {
|
||||
return errors.New("missing required field 'timestamp' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.GasUsed == nil {
|
||||
return errors.New("missing required field 'gasUsed' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.GasLimit == nil {
|
||||
return errors.New("missing required field 'gasLimit' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.BlobGasUsed == nil {
|
||||
return errors.New("missing required field 'blobGasUsed' for ExecutionPayload")
|
||||
}
|
||||
if dec.ExecutionPayload.ExcessBlobGas == nil {
|
||||
return errors.New("missing required field 'excessBlobGas' for ExecutionPayload")
|
||||
}
|
||||
|
||||
*e = ExecutionBundleElectra{Payload: &ExecutionPayloadDeneb{}}
|
||||
e.Payload.ParentHash = dec.ExecutionPayload.ParentHash.Bytes()
|
||||
e.Payload.FeeRecipient = dec.ExecutionPayload.FeeRecipient.Bytes()
|
||||
e.Payload.StateRoot = dec.ExecutionPayload.StateRoot.Bytes()
|
||||
e.Payload.ReceiptsRoot = dec.ExecutionPayload.ReceiptsRoot.Bytes()
|
||||
e.Payload.LogsBloom = *dec.ExecutionPayload.LogsBloom
|
||||
e.Payload.PrevRandao = dec.ExecutionPayload.PrevRandao.Bytes()
|
||||
e.Payload.BlockNumber = uint64(*dec.ExecutionPayload.BlockNumber)
|
||||
e.Payload.GasLimit = uint64(*dec.ExecutionPayload.GasLimit)
|
||||
e.Payload.GasUsed = uint64(*dec.ExecutionPayload.GasUsed)
|
||||
e.Payload.Timestamp = uint64(*dec.ExecutionPayload.Timestamp)
|
||||
e.Payload.ExtraData = dec.ExecutionPayload.ExtraData
|
||||
baseFee, err := hexutil.DecodeBig(dec.ExecutionPayload.BaseFeePerGas)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
e.Payload.BaseFeePerGas = bytesutil.PadTo(bytesutil.ReverseByteOrder(baseFee.Bytes()), fieldparams.RootLength)
|
||||
|
||||
e.Payload.ExcessBlobGas = uint64(*dec.ExecutionPayload.ExcessBlobGas)
|
||||
e.Payload.BlobGasUsed = uint64(*dec.ExecutionPayload.BlobGasUsed)
|
||||
|
||||
e.Payload.BlockHash = dec.ExecutionPayload.BlockHash.Bytes()
|
||||
transactions := make([][]byte, len(dec.ExecutionPayload.Transactions))
|
||||
for i, tx := range dec.ExecutionPayload.Transactions {
|
||||
transactions[i] = tx
|
||||
}
|
||||
e.Payload.Transactions = transactions
|
||||
if dec.ExecutionPayload.Withdrawals == nil {
|
||||
dec.ExecutionPayload.Withdrawals = make([]*Withdrawal, 0)
|
||||
}
|
||||
e.Payload.Withdrawals = dec.ExecutionPayload.Withdrawals
|
||||
|
||||
v, err := hexutil.DecodeBig(dec.BlockValue)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
e.Value = bytesutil.PadTo(bytesutil.ReverseByteOrder(v.Bytes()), fieldparams.RootLength)
|
||||
|
||||
if dec.BlobsBundle == nil {
|
||||
return nil
|
||||
}
|
||||
e.BlobsBundle = &BlobsBundle{}
|
||||
|
||||
commitments := make([][]byte, len(dec.BlobsBundle.Commitments))
|
||||
for i, kzg := range dec.BlobsBundle.Commitments {
|
||||
k := kzg
|
||||
commitments[i] = bytesutil.PadTo(k[:], fieldparams.BLSPubkeyLength)
|
||||
}
|
||||
e.BlobsBundle.KzgCommitments = commitments
|
||||
|
||||
proofs := make([][]byte, len(dec.BlobsBundle.Proofs))
|
||||
for i, proof := range dec.BlobsBundle.Proofs {
|
||||
p := proof
|
||||
proofs[i] = bytesutil.PadTo(p[:], fieldparams.BLSPubkeyLength)
|
||||
}
|
||||
e.BlobsBundle.Proofs = proofs
|
||||
|
||||
blobs := make([][]byte, len(dec.BlobsBundle.Blobs))
|
||||
for i, blob := range dec.BlobsBundle.Blobs {
|
||||
b := make([]byte, fieldparams.BlobLength)
|
||||
copy(b, blob)
|
||||
blobs[i] = b
|
||||
}
|
||||
e.BlobsBundle.Blobs = blobs
|
||||
|
||||
e.ShouldOverrideBuilder = dec.ShouldOverrideBuilder
|
||||
|
||||
requests := make([][]byte, len(dec.ExecutionRequests))
|
||||
for i, request := range dec.ExecutionRequests {
|
||||
r := make([]byte, len(request))
|
||||
copy(r, request)
|
||||
requests[i] = r
|
||||
}
|
||||
|
||||
e.ExecutionRequests = requests
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// RecastHexutilByteSlice converts a []hexutil.Bytes to a [][]byte
|
||||
func RecastHexutilByteSlice(h []hexutil.Bytes) [][]byte {
|
||||
r := make([][]byte, len(h))
|
||||
|
||||
@@ -607,6 +607,9 @@ func TestJsonMarshalUnmarshal(t *testing.T) {
		// Expect no transaction objects in the unmarshaled data.
		require.Equal(t, 0, len(payloadPb.Transactions))
	})
	t.Run("execution bundle electra with deneb payload, blob data, and execution requests", func(t *testing.T) {
		// TODO #14351: update this test when geth updates
	})
}

func TestPayloadIDBytes_MarshalUnmarshalJSON(t *testing.T) {

@@ -159,6 +159,8 @@ ssz_electra_objs = [
    "BlindedBeaconBlockElectra",
    "Consolidation",
    "IndexedAttestationElectra",
    "LightClientBootstrapElectra",
    "LightClientUpdateElectra",
    "PendingDeposit",
    "PendingDeposits",
    "PendingConsolidation",

@@ -1,5 +1,5 @@
// Code generated by fastssz. DO NOT EDIT.
// Hash: 3a2dbf56ebf4e81fbf961840a4cd2addac9047b17c12bad04e60879df5b69277
// Hash: 5ca1c2c4e61b47b1f8185b3e9c477ae280f82e1483b88d4e11fa214452da5117
package eth

import (
@@ -4252,3 +4252,409 @@ func (p *PendingConsolidation) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
hh.Merkleize(indx)
|
||||
return
|
||||
}
|
||||
|
||||
// MarshalSSZ ssz marshals the LightClientBootstrapElectra object
|
||||
func (l *LightClientBootstrapElectra) MarshalSSZ() ([]byte, error) {
|
||||
return ssz.MarshalSSZ(l)
|
||||
}
|
||||
|
||||
// MarshalSSZTo ssz marshals the LightClientBootstrapElectra object to a target array
|
||||
func (l *LightClientBootstrapElectra) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
dst = buf
|
||||
offset := int(24820)
|
||||
|
||||
// Offset (0) 'Header'
|
||||
dst = ssz.WriteOffset(dst, offset)
|
||||
if l.Header == nil {
|
||||
l.Header = new(LightClientHeaderDeneb)
|
||||
}
|
||||
offset += l.Header.SizeSSZ()
|
||||
|
||||
// Field (1) 'CurrentSyncCommittee'
|
||||
if l.CurrentSyncCommittee == nil {
|
||||
l.CurrentSyncCommittee = new(SyncCommittee)
|
||||
}
|
||||
if dst, err = l.CurrentSyncCommittee.MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (2) 'CurrentSyncCommitteeBranch'
|
||||
if size := len(l.CurrentSyncCommitteeBranch); size != 6 {
|
||||
err = ssz.ErrVectorLengthFn("--.CurrentSyncCommitteeBranch", size, 6)
|
||||
return
|
||||
}
|
||||
for ii := 0; ii < 6; ii++ {
|
||||
if size := len(l.CurrentSyncCommitteeBranch[ii]); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.CurrentSyncCommitteeBranch[ii]", size, 32)
|
||||
return
|
||||
}
|
||||
dst = append(dst, l.CurrentSyncCommitteeBranch[ii]...)
|
||||
}
|
||||
|
||||
// Field (0) 'Header'
|
||||
if dst, err = l.Header.MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// UnmarshalSSZ ssz unmarshals the LightClientBootstrapElectra object
|
||||
func (l *LightClientBootstrapElectra) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size < 24820 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
tail := buf
|
||||
var o0 uint64
|
||||
|
||||
// Offset (0) 'Header'
|
||||
if o0 = ssz.ReadOffset(buf[0:4]); o0 > size {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
if o0 != 24820 {
|
||||
return ssz.ErrInvalidVariableOffset
|
||||
}
|
||||
|
||||
// Field (1) 'CurrentSyncCommittee'
|
||||
if l.CurrentSyncCommittee == nil {
|
||||
l.CurrentSyncCommittee = new(SyncCommittee)
|
||||
}
|
||||
if err = l.CurrentSyncCommittee.UnmarshalSSZ(buf[4:24628]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Field (2) 'CurrentSyncCommitteeBranch'
|
||||
l.CurrentSyncCommitteeBranch = make([][]byte, 6)
|
||||
for ii := 0; ii < 6; ii++ {
|
||||
if cap(l.CurrentSyncCommitteeBranch[ii]) == 0 {
|
||||
l.CurrentSyncCommitteeBranch[ii] = make([]byte, 0, len(buf[24628:24820][ii*32:(ii+1)*32]))
|
||||
}
|
||||
l.CurrentSyncCommitteeBranch[ii] = append(l.CurrentSyncCommitteeBranch[ii], buf[24628:24820][ii*32:(ii+1)*32]...)
|
||||
}
|
||||
|
||||
// Field (0) 'Header'
|
||||
{
|
||||
buf = tail[o0:]
|
||||
if l.Header == nil {
|
||||
l.Header = new(LightClientHeaderDeneb)
|
||||
}
|
||||
if err = l.Header.UnmarshalSSZ(buf); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the LightClientBootstrapElectra object
|
||||
func (l *LightClientBootstrapElectra) SizeSSZ() (size int) {
|
||||
size = 24820
|
||||
|
||||
// Field (0) 'Header'
|
||||
if l.Header == nil {
|
||||
l.Header = new(LightClientHeaderDeneb)
|
||||
}
|
||||
size += l.Header.SizeSSZ()
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// HashTreeRoot ssz hashes the LightClientBootstrapElectra object
|
||||
func (l *LightClientBootstrapElectra) HashTreeRoot() ([32]byte, error) {
|
||||
return ssz.HashWithDefaultHasher(l)
|
||||
}
|
||||
|
||||
// HashTreeRootWith ssz hashes the LightClientBootstrapElectra object with a hasher
|
||||
func (l *LightClientBootstrapElectra) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
indx := hh.Index()
|
||||
|
||||
// Field (0) 'Header'
|
||||
if err = l.Header.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (1) 'CurrentSyncCommittee'
|
||||
if err = l.CurrentSyncCommittee.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (2) 'CurrentSyncCommitteeBranch'
|
||||
{
|
||||
if size := len(l.CurrentSyncCommitteeBranch); size != 6 {
|
||||
err = ssz.ErrVectorLengthFn("--.CurrentSyncCommitteeBranch", size, 6)
|
||||
return
|
||||
}
|
||||
subIndx := hh.Index()
|
||||
for _, i := range l.CurrentSyncCommitteeBranch {
|
||||
if len(i) != 32 {
|
||||
err = ssz.ErrBytesLength
|
||||
return
|
||||
}
|
||||
hh.Append(i)
|
||||
}
|
||||
hh.Merkleize(subIndx)
|
||||
}
|
||||
|
||||
hh.Merkleize(indx)
|
||||
return
|
||||
}
|
||||
|
||||
// MarshalSSZ ssz marshals the LightClientUpdateElectra object
|
||||
func (l *LightClientUpdateElectra) MarshalSSZ() ([]byte, error) {
|
||||
return ssz.MarshalSSZ(l)
|
||||
}
|
||||
|
||||
// MarshalSSZTo ssz marshals the LightClientUpdateElectra object to a target array
|
||||
func (l *LightClientUpdateElectra) MarshalSSZTo(buf []byte) (dst []byte, err error) {
|
||||
dst = buf
|
||||
offset := int(25184)
|
||||
|
||||
// Offset (0) 'AttestedHeader'
|
||||
dst = ssz.WriteOffset(dst, offset)
|
||||
if l.AttestedHeader == nil {
|
||||
l.AttestedHeader = new(LightClientHeaderDeneb)
|
||||
}
|
||||
offset += l.AttestedHeader.SizeSSZ()
|
||||
|
||||
// Field (1) 'NextSyncCommittee'
|
||||
if l.NextSyncCommittee == nil {
|
||||
l.NextSyncCommittee = new(SyncCommittee)
|
||||
}
|
||||
if dst, err = l.NextSyncCommittee.MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (2) 'NextSyncCommitteeBranch'
|
||||
if size := len(l.NextSyncCommitteeBranch); size != 6 {
|
||||
err = ssz.ErrVectorLengthFn("--.NextSyncCommitteeBranch", size, 6)
|
||||
return
|
||||
}
|
||||
for ii := 0; ii < 6; ii++ {
|
||||
if size := len(l.NextSyncCommitteeBranch[ii]); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.NextSyncCommitteeBranch[ii]", size, 32)
|
||||
return
|
||||
}
|
||||
dst = append(dst, l.NextSyncCommitteeBranch[ii]...)
|
||||
}
|
||||
|
||||
// Offset (3) 'FinalizedHeader'
|
||||
dst = ssz.WriteOffset(dst, offset)
|
||||
if l.FinalizedHeader == nil {
|
||||
l.FinalizedHeader = new(LightClientHeaderDeneb)
|
||||
}
|
||||
offset += l.FinalizedHeader.SizeSSZ()
|
||||
|
||||
// Field (4) 'FinalityBranch'
|
||||
if size := len(l.FinalityBranch); size != 6 {
|
||||
err = ssz.ErrVectorLengthFn("--.FinalityBranch", size, 6)
|
||||
return
|
||||
}
|
||||
for ii := 0; ii < 6; ii++ {
|
||||
if size := len(l.FinalityBranch[ii]); size != 32 {
|
||||
err = ssz.ErrBytesLengthFn("--.FinalityBranch[ii]", size, 32)
|
||||
return
|
||||
}
|
||||
dst = append(dst, l.FinalityBranch[ii]...)
|
||||
}
|
||||
|
||||
// Field (5) 'SyncAggregate'
|
||||
if l.SyncAggregate == nil {
|
||||
l.SyncAggregate = new(SyncAggregate)
|
||||
}
|
||||
if dst, err = l.SyncAggregate.MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (6) 'SignatureSlot'
|
||||
dst = ssz.MarshalUint64(dst, uint64(l.SignatureSlot))
|
||||
|
||||
// Field (0) 'AttestedHeader'
|
||||
if dst, err = l.AttestedHeader.MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (3) 'FinalizedHeader'
|
||||
if dst, err = l.FinalizedHeader.MarshalSSZTo(dst); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// UnmarshalSSZ ssz unmarshals the LightClientUpdateElectra object
|
||||
func (l *LightClientUpdateElectra) UnmarshalSSZ(buf []byte) error {
|
||||
var err error
|
||||
size := uint64(len(buf))
|
||||
if size < 25184 {
|
||||
return ssz.ErrSize
|
||||
}
|
||||
|
||||
tail := buf
|
||||
var o0, o3 uint64
|
||||
|
||||
// Offset (0) 'AttestedHeader'
|
||||
if o0 = ssz.ReadOffset(buf[0:4]); o0 > size {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
if o0 != 25184 {
|
||||
return ssz.ErrInvalidVariableOffset
|
||||
}
|
||||
|
||||
// Field (1) 'NextSyncCommittee'
|
||||
if l.NextSyncCommittee == nil {
|
||||
l.NextSyncCommittee = new(SyncCommittee)
|
||||
}
|
||||
if err = l.NextSyncCommittee.UnmarshalSSZ(buf[4:24628]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Field (2) 'NextSyncCommitteeBranch'
|
||||
l.NextSyncCommitteeBranch = make([][]byte, 6)
|
||||
for ii := 0; ii < 6; ii++ {
|
||||
if cap(l.NextSyncCommitteeBranch[ii]) == 0 {
|
||||
l.NextSyncCommitteeBranch[ii] = make([]byte, 0, len(buf[24628:24820][ii*32:(ii+1)*32]))
|
||||
}
|
||||
l.NextSyncCommitteeBranch[ii] = append(l.NextSyncCommitteeBranch[ii], buf[24628:24820][ii*32:(ii+1)*32]...)
|
||||
}
|
||||
|
||||
// Offset (3) 'FinalizedHeader'
|
||||
if o3 = ssz.ReadOffset(buf[24820:24824]); o3 > size || o0 > o3 {
|
||||
return ssz.ErrOffset
|
||||
}
|
||||
|
||||
// Field (4) 'FinalityBranch'
|
||||
l.FinalityBranch = make([][]byte, 6)
|
||||
for ii := 0; ii < 6; ii++ {
|
||||
if cap(l.FinalityBranch[ii]) == 0 {
|
||||
l.FinalityBranch[ii] = make([]byte, 0, len(buf[24824:25016][ii*32:(ii+1)*32]))
|
||||
}
|
||||
l.FinalityBranch[ii] = append(l.FinalityBranch[ii], buf[24824:25016][ii*32:(ii+1)*32]...)
|
||||
}
|
||||
|
||||
// Field (5) 'SyncAggregate'
|
||||
if l.SyncAggregate == nil {
|
||||
l.SyncAggregate = new(SyncAggregate)
|
||||
}
|
||||
if err = l.SyncAggregate.UnmarshalSSZ(buf[25016:25176]); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Field (6) 'SignatureSlot'
|
||||
l.SignatureSlot = github_com_prysmaticlabs_prysm_v5_consensus_types_primitives.Slot(ssz.UnmarshallUint64(buf[25176:25184]))
|
||||
|
||||
// Field (0) 'AttestedHeader'
|
||||
{
|
||||
buf = tail[o0:o3]
|
||||
if l.AttestedHeader == nil {
|
||||
l.AttestedHeader = new(LightClientHeaderDeneb)
|
||||
}
|
||||
if err = l.AttestedHeader.UnmarshalSSZ(buf); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Field (3) 'FinalizedHeader'
|
||||
{
|
||||
buf = tail[o3:]
|
||||
if l.FinalizedHeader == nil {
|
||||
l.FinalizedHeader = new(LightClientHeaderDeneb)
|
||||
}
|
||||
if err = l.FinalizedHeader.UnmarshalSSZ(buf); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// SizeSSZ returns the ssz encoded size in bytes for the LightClientUpdateElectra object
|
||||
func (l *LightClientUpdateElectra) SizeSSZ() (size int) {
|
||||
size = 25184
|
||||
|
||||
// Field (0) 'AttestedHeader'
|
||||
if l.AttestedHeader == nil {
|
||||
l.AttestedHeader = new(LightClientHeaderDeneb)
|
||||
}
|
||||
size += l.AttestedHeader.SizeSSZ()
|
||||
|
||||
// Field (3) 'FinalizedHeader'
|
||||
if l.FinalizedHeader == nil {
|
||||
l.FinalizedHeader = new(LightClientHeaderDeneb)
|
||||
}
|
||||
size += l.FinalizedHeader.SizeSSZ()
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// HashTreeRoot ssz hashes the LightClientUpdateElectra object
|
||||
func (l *LightClientUpdateElectra) HashTreeRoot() ([32]byte, error) {
|
||||
return ssz.HashWithDefaultHasher(l)
|
||||
}
|
||||
|
||||
// HashTreeRootWith ssz hashes the LightClientUpdateElectra object with a hasher
|
||||
func (l *LightClientUpdateElectra) HashTreeRootWith(hh *ssz.Hasher) (err error) {
|
||||
indx := hh.Index()
|
||||
|
||||
// Field (0) 'AttestedHeader'
|
||||
if err = l.AttestedHeader.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (1) 'NextSyncCommittee'
|
||||
if err = l.NextSyncCommittee.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (2) 'NextSyncCommitteeBranch'
|
||||
{
|
||||
if size := len(l.NextSyncCommitteeBranch); size != 6 {
|
||||
err = ssz.ErrVectorLengthFn("--.NextSyncCommitteeBranch", size, 6)
|
||||
return
|
||||
}
|
||||
subIndx := hh.Index()
|
||||
for _, i := range l.NextSyncCommitteeBranch {
|
||||
if len(i) != 32 {
|
||||
err = ssz.ErrBytesLength
|
||||
return
|
||||
}
|
||||
hh.Append(i)
|
||||
}
|
||||
hh.Merkleize(subIndx)
|
||||
}
|
||||
|
||||
// Field (3) 'FinalizedHeader'
|
||||
if err = l.FinalizedHeader.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (4) 'FinalityBranch'
|
||||
{
|
||||
if size := len(l.FinalityBranch); size != 6 {
|
||||
err = ssz.ErrVectorLengthFn("--.FinalityBranch", size, 6)
|
||||
return
|
||||
}
|
||||
subIndx := hh.Index()
|
||||
for _, i := range l.FinalityBranch {
|
||||
if len(i) != 32 {
|
||||
err = ssz.ErrBytesLength
|
||||
return
|
||||
}
|
||||
hh.Append(i)
|
||||
}
|
||||
hh.Merkleize(subIndx)
|
||||
}
|
||||
|
||||
// Field (5) 'SyncAggregate'
|
||||
if err = l.SyncAggregate.HashTreeRootWith(hh); err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
// Field (6) 'SignatureSlot'
|
||||
hh.PutUint64(uint64(l.SignatureSlot))
|
||||
|
||||
hh.Merkleize(indx)
|
||||
return
|
||||
}
|
||||
|
||||
proto/prysm/v1alpha1/light_client.pb.go (generated, 714 lines changed; diff suppressed because it is too large)
@@ -61,6 +61,12 @@ message LightClientBootstrapDeneb {
  repeated bytes current_sync_committee_branch = 3 [(ethereum.eth.ext.ssz_size) = "5,32"];
}

message LightClientBootstrapElectra {
  LightClientHeaderDeneb header = 1;
  SyncCommittee current_sync_committee = 2;
  repeated bytes current_sync_committee_branch = 3 [(ethereum.eth.ext.ssz_size) = "6,32"];
}

message LightClientUpdateAltair {
  LightClientHeaderAltair attested_header = 1;
  SyncCommittee next_sync_committee = 2;
@@ -91,6 +97,16 @@ message LightClientUpdateDeneb {
  uint64 signature_slot = 7 [(ethereum.eth.ext.cast_type) = "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives.Slot"];
}

message LightClientUpdateElectra {
  LightClientHeaderDeneb attested_header = 1;
  SyncCommittee next_sync_committee = 2;
  repeated bytes next_sync_committee_branch = 3 [(ethereum.eth.ext.ssz_size) = "6,32"];
  LightClientHeaderDeneb finalized_header = 4;
  repeated bytes finality_branch = 5 [(ethereum.eth.ext.ssz_size) = "6,32"];
  SyncAggregate sync_aggregate = 6;
  uint64 signature_slot = 7 [(ethereum.eth.ext.cast_type) = "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives.Slot"];
}

message LightClientFinalityUpdateAltair {
  LightClientHeaderAltair attested_header = 1;
  LightClientHeaderAltair finalized_header = 2;

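As a quick cross-check (my own arithmetic, not part of the PR), here is where the fixed SSZ sizes 24820 and 25184 in the generated light_client.ssz.go above come from, given these Electra definitions with 6-deep branches, a 512-member sync committee, and 96-byte BLS signatures.

package main

import "fmt"

func main() {
	const (
		syncCommittee = 512*48 + 48 // 512 pubkeys + 1 aggregate pubkey = 24624 bytes
		branchDepth6  = 6 * 32      // Electra branches are 6 roots of 32 bytes = 192 bytes
		syncAggregate = 512/8 + 96  // participation bitvector + BLS signature = 160 bytes
		offset        = 4           // SSZ offset for a variable-length field
		slot          = 8           // uint64 signature_slot
	)

	// LightClientBootstrapElectra fixed part: header offset + current_sync_committee + branch.
	fmt.Println(offset + syncCommittee + branchDepth6) // 24820

	// LightClientUpdateElectra fixed part: attested_header offset + next_sync_committee +
	// next_sync_committee_branch + finalized_header offset + finality_branch +
	// sync_aggregate + signature_slot.
	fmt.Println(offset + syncCommittee + branchDepth6 + offset + branchDepth6 + syncAggregate + slot) // 25184
}
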
testing/mock/beacon_service_mock.go (generated, 4 lines changed)
@@ -429,7 +429,7 @@ func (m *MockBeaconChainClient) SubmitAttesterSlashing(arg0 context.Context, arg
	for _, a := range arg2 {
		varargs = append(varargs, a)
	}
	ret := m.ctrl.Call(m, "SubmitAttesterSlashing", varargs...)
	ret := m.ctrl.Call(m, "SubmitAttesterSlashings", varargs...)
	ret0, _ := ret[0].(*eth.SubmitSlashingResponse)
	ret1, _ := ret[1].(error)
	return ret0, ret1
@@ -439,7 +439,7 @@ func (m *MockBeaconChainClient) SubmitAttesterSlashing(arg0 context.Context, arg
func (mr *MockBeaconChainClientMockRecorder) SubmitAttesterSlashing(arg0, arg1 any, arg2 ...any) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	varargs := append([]any{arg0, arg1}, arg2...)
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SubmitAttesterSlashing", reflect.TypeOf((*MockBeaconChainClient)(nil).SubmitAttesterSlashing), varargs...)
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SubmitAttesterSlashings", reflect.TypeOf((*MockBeaconChainClient)(nil).SubmitAttesterSlashing), varargs...)
}

// SubmitAttesterSlashingElectra mocks base method.

@@ -7,6 +7,5 @@ import (
)

func TestMainnet_Electra_EpochProcessing_PendingConsolidations(t *testing.T) {
	t.Skip("TODO: add back in after all spec test features are in.")
	epoch_processing.RunPendingConsolidationsTests(t, "mainnet")
}

@@ -7,6 +7,5 @@ import (
)

func TestMainnet_Electra_EpochProcessing_Slashings(t *testing.T) {
	t.Skip("slashing processing missing")
	epoch_processing.RunSlashingsTests(t, "mainnet")
}

@@ -7,6 +7,5 @@ import (
)

func TestMainnet_Electra_Operations_Consolidation(t *testing.T) {
	t.Skip("TODO: add back in after all spec test features are in.")
	operations.RunConsolidationTest(t, "mainnet")
}

@@ -7,6 +7,5 @@ import (
)

func TestMainnet_Electra_Sanity_Blocks(t *testing.T) {
	t.Skip("TODO: add back in after all spec test features are in.")
	sanity.RunBlockProcessingTest(t, "mainnet", "sanity/blocks/pyspec_tests")
}

@@ -7,6 +7,5 @@ import (
)

func TestMinimal_Electra_EpochProcessing_PendingConsolidations(t *testing.T) {
	t.Skip("TODO: add back in after all spec test features are in.")
	epoch_processing.RunPendingConsolidationsTests(t, "minimal")
}

@@ -7,6 +7,5 @@ import (
)

func TestMinimal_Electra_EpochProcessing_Slashings(t *testing.T) {
	t.Skip("slashing processing missing")
	epoch_processing.RunSlashingsTests(t, "minimal")
}

@@ -7,6 +7,5 @@ import (
)

func TestMinimal_Electra_Operations_Consolidation(t *testing.T) {
	t.Skip("TODO: add back in after all spec test features are in.")
	operations.RunConsolidationTest(t, "minimal")
}

@@ -7,6 +7,5 @@ import (
)

func TestMinimal_Electra_Sanity_Blocks(t *testing.T) {
	t.Skip("TODO: add back in after all spec test features are in.")
	sanity.RunBlockProcessingTest(t, "minimal", "sanity/blocks/pyspec_tests")
}

@@ -103,7 +103,7 @@ func (m *engineMock) ForkchoiceUpdated(context.Context, *pb.ForkchoiceState, pay
	return nil, m.latestValidHash, m.payloadStatus
}

func (m *engineMock) NewPayload(context.Context, interfaces.ExecutionData, []common.Hash, *common.Hash) ([]byte, error) {
func (m *engineMock) NewPayload(context.Context, interfaces.ExecutionData, []common.Hash, *common.Hash, *pb.ExecutionRequests) ([]byte, error) {
	return m.latestValidHash, m.payloadStatus
}