Compare commits

...

37 Commits

Author SHA1 Message Date
Preston Van Loon
aa7d0df7ff Fix metric name from PR #12430 (#12445)
* Fix metric name from PR #12430

* @potuz can't spell 'unknown'

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
(cherry picked from commit 7fe935e94d)
2023-05-22 12:45:38 -05:00
Nishant Das
0a213d70e8 disable it (#12438)
(cherry picked from commit 51bde7a845)
2023-05-22 11:24:43 -05:00
Potuz
b84dd40ba9 Use forkchoice to validate sync messages faster (#12430)
* Use forkchoice to validate sync messages faster

* add metric
2023-05-19 14:47:39 +00:00
kasey
aeaa72fdc2 fix deadlock between monitor service and init-sync (#12427)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-05-18 18:35:06 +00:00
kasey
ddc1e48e05 Revert "BeaconBlocksByRange and BlobSidecarsByRange consistency (#123… (#12426)
This reverts commit 73e4bdccbb.

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-05-18 18:01:26 +00:00
Nishant Das
f91159337b Fix Migration Of State (#12423) 2023-05-18 13:18:56 +00:00
Potuz
537236e1c9 Add aggregation metrics (#12417) 2023-05-17 12:18:59 -07:00
kasey
73e4bdccbb BeaconBlocksByRange and BlobSidecarsByRange consistency (#12383)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-05-17 12:16:10 +00:00
Potuz
f54bd64bdd Default aggregation ticker times (#12412)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-16 19:38:44 +00:00
james-prysm
d907cae595 persistent validator settings in validator client (#12354)
* WIP

* improving proposer settings store

* WIP persistent validator settings

* WIP persistent validator settings

* changing visibility level

* fixing some deepsource issues

* fixing more deepsource issues

* fixing json marshalling

* fix linting

* fixing tests

* fixing more tests

* fixing more tests

* fixing more tests

* fixing linting

* WIP fixing unit tests

* fixing remaining db tests

* converting json to protobuf

* fixing e2e

* k8s yaml library is used directly

* fixing linting

* fixing broken unit test

* reverting changes on e2e

* fixing linting

* fixing deepsource issue

* resolving some internal comments

* resolving some comments and adding more tests

* adding more unit tests

* gaz

* fixing flaking test

* fixing unit test contamination

* fixing deepsource issue

* resolving review item

* gaz
2023-05-16 14:08:49 -05:00
Potuz
be23773924 Don't use max cover on unaggregated atts nor check subgroup of validated signatures (#12350)
* Don't use max cover on unaggregated atts

* Do not validate signature on the attestation package

* separate nil error checks

* fix unit tests
2023-05-16 17:06:26 +00:00
terencechain
29f6de1e96 Flip build block parallel feature flag (#12408) 2023-05-16 08:43:15 -07:00
Potuz
955a21fea4 revert revert of f6764fe62b (#12399) 2023-05-16 11:50:02 +00:00
Preston Van Loon
b4f1fea029 CI: fix docker image tagging (#12407) 2023-05-16 02:10:23 +00:00
kasey
f1b88d005d fix broken slasher service init (#12405)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-05-15 17:00:29 -05:00
Preston Van Loon
ee612d958a Update discord invite (#12403) 2023-05-15 13:54:19 +00:00
Nishant Das
09e22538f9 Support Capella Blocks for Tool (#12402)
* fix it

* fix it
2023-05-15 17:59:02 +08:00
terencechain
3b9e974a45 Add epoch and root to not a checkpt in forkchoice log (#12400)
* Add epoch number and root in not a checkpt in forkchoice log

* Update beacon-chain/blockchain/process_attestation_helpers.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix test

* Fix typo

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-05-14 06:02:36 +00:00
Potuz
ad749a40b6 Save to checkpoint cache if the nsc hits (#12398)
* Save to checkpoint cache if the nsc hits

Also move the head check before the checkpoint cache check

* add unit test
2023-05-13 09:54:33 -03:00
terencechain
9b13454457 Metrics: Invert too late and too early att received count (#12392)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-05-12 19:45:43 +00:00
Potuz
b9917807d8 Ignore some untimely attestations (#12387)
* Ignore some untimely attestations

* correct child slot check

* consider tips as viable for checkpoints

* deal with canonical blocks

* forkchoice unit tests

* blockchain readonly beacon state

* Ignore some untimely attestations

* correct child slot check

* consider tips as viable for checkpoints

* deal with canonical blocks

* forkchoice unit tests

* blockchain readonly beacon state

* Fix AttestationTargetState mock

* Fix ineffectual assignment lint

* Fix blockchain tests

* Fix build

* Add Nil check

* add comment on lock

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-05-12 16:55:33 +00:00
Raul Jordan
e5c9387cd9 Update Github Actions Go Version (#12391)
* update github actions

* use quotes or it is go 1.2 

lol

* Update gosec

* Update gosec

* Update go lint

* fix gosec violations

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-05-12 15:51:20 +00:00
Preston Van Loon
2c3b3b802a Revert "Add a new slot ticker and use it on attestation aggregation" (#12390)
This reverts commit f6764fe62b.
2023-05-12 14:49:37 +00:00
Simon
3ef3e1d13b fix-subnets-oom (#12388)
fix-subnets-oom, close iterator after using it
2023-05-12 07:52:17 -05:00
Nick Sullivan
5c00fcb84f Fix numerous spelling errors and typos in the log messages, comments, and documentation. (#12385)
* Minor typos and spelling fixes (comments, logs, & docs only, no code changes)

* Fix seplling in log message

* Additional spelling tweaks based on review from @prestonvanloon
2023-05-11 20:45:43 +00:00
Nishant Das
aef22bf54e Add In Support For Builder in E2E (#12343)
* fix it up

* add gaz

* add changes in

* finally runs

* fix it

* add progress

* add capella support

* save progress

* remove debug logs

* cleanup

* remove log

* fix flag

* remove unused lock

* gaz

* change

* fix

* lint

* james review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-05-11 11:10:29 -05:00
Potuz
f6764fe62b Add a new slot ticker and use it on attestation aggregation (#12377)
* Add slot ticker with intervals

* add flags for aggregation duration

* misspelling

* hide flags

* fix flags and default durations

* lint

* wait for initial sync

* deep source

* add log

* Preston's review

* fix error message

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-05-10 12:48:51 +00:00
Preston Van Loon
07db0dc448 CI: Add support for buildbuddy uploads (#12378)
* Add build metadata

* Add buildbuddy flags

* more metadata

* fix latest tag

* fix branch

* revert branch change

* touch a file to trigger build

* remove unknown command

* fix script

* Update latest_version_tag.sh
2023-05-10 12:30:40 +00:00
Preston Van Loon
4b4e213a24 stategen: Pre-populate bls pubkey cache as part of stategen's Resume function (#11482)
* Pre-populate bls pubkey cache as part of state gen's Resume function. This adds some helpers and a benchmark to blst

* Do it async

* fix missing import

* lint

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: rauljordan <raul@prysmaticlabs.com>
2023-05-10 10:44:15 +00:00
kasey
7d9f36985e Fix initialization race (#12374)
* block all the sync startup code on init signal

* don't need chainStarted if everything blocks

* set empty clock by default to work around panics

* remove unused clock, zero-value for init-sync

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-05-10 04:09:15 +00:00
james-prysm
98f8ca4e34 reverting version check on bid header (#12376)
* adding reverting change

* adding additional tests and checks

* removing comment

* updating based on review suggestions

* fixing deepsource issues

* attempting to resolve dependencies

* removing dependency

* accidentally flipped if statement with deepsource fix

* fixing unit test
2023-05-10 08:45:09 +08:00
james-prysm
535b38395e migrating code from PR#12343 (#12371) 2023-05-08 09:33:26 -05:00
terencechain
4bd7e8aefd WeiToGwei copy input big.int (#12370) 2023-05-07 15:15:08 +00:00
Raul Jordan
6f383f272a Do Not Panic on Broadcasting Nil Object (#12369) 2023-05-07 05:00:30 +00:00
terencechain
dbb8279fe6 Ran update-go-pbs.sh (#12359)
* Ran update-go-pbs.sh

* Fix import
2023-05-05 17:44:51 +00:00
Nishant Das
b6625554dd fix build (#12358) 2023-05-05 08:32:56 -05:00
Radosław Kapka
bd833e1c12 Use v1alpha1 server in block production (#12336)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-05-04 19:52:41 +02:00
207 changed files with 5100 additions and 1923 deletions

View File

@@ -43,4 +43,12 @@ build --flaky_test_attempts=5
# Better caching
build:nostamp --nostamp
build:nostamp --workspace_status_command=./hack/workspace_status_ci.sh
# Build metadata
build --build_metadata=ROLE=CI
build --build_metadata=REPO_URL=https://github.com/prysmaticlabs/prysm.git
build --workspace_status_command=./hack/workspace_status_ci.sh
# Buildbuddy
build --bes_results_url=https://app.buildbuddy.io/invocation/
build --bes_backend=grpcs://remote.buildbuddy.io

View File

@@ -26,14 +26,14 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.19
- name: Set up Go 1.20
uses: actions/setup-go@v3
with:
go-version: 1.19
go-version: '1.20'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
go install github.com/securego/gosec/v2/cmd/gosec@v2.12.0
go install github.com/securego/gosec/v2/cmd/gosec@v2.15.0
gosec -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
lint:
@@ -43,16 +43,16 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.19
- name: Set up Go 1.20
uses: actions/setup-go@v3
with:
go-version: 1.19
go-version: '1.20'
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: v1.50.1
version: v1.52.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
@@ -62,7 +62,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.19
go-version: '1.20'
id: go
- name: Check out code into the Go module directory

View File

@@ -4,14 +4,14 @@
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.3.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.3.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.3.0)
[![Execution_API_Version 1.0.0-beta.2](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.2-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysmaticlabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/eth2/) specification, developed by [Prysmatic Labs](https://prysmaticlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
### Getting Started
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://docs.prylabs.network). If you still have questions, feel free to stop by our [Discord](https://discord.gg/CTYGPUJ).
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://docs.prylabs.network). If you still have questions, feel free to stop by our [Discord](https://discord.gg/prysmaticlabs).
### Staking on Mainnet

View File

@@ -346,6 +346,74 @@ func (p *ExecutionPayload) ToProto() (*v1.ExecutionPayload, error) {
}, nil
}
// FromProto converts a proto execution payload type to our builder
// compatible payload type.
func FromProto(payload *v1.ExecutionPayload) (ExecutionPayload, error) {
bFee, err := sszBytesToUint256(payload.BaseFeePerGas)
if err != nil {
return ExecutionPayload{}, err
}
txs := make([]hexutil.Bytes, len(payload.Transactions))
for i := range payload.Transactions {
txs[i] = payload.Transactions[i]
}
return ExecutionPayload{
ParentHash: payload.ParentHash,
FeeRecipient: payload.FeeRecipient,
StateRoot: payload.StateRoot,
ReceiptsRoot: payload.ReceiptsRoot,
LogsBloom: payload.LogsBloom,
PrevRandao: payload.PrevRandao,
BlockNumber: Uint64String(payload.BlockNumber),
GasLimit: Uint64String(payload.GasLimit),
GasUsed: Uint64String(payload.GasUsed),
Timestamp: Uint64String(payload.Timestamp),
ExtraData: payload.ExtraData,
BaseFeePerGas: bFee,
BlockHash: payload.BlockHash,
Transactions: txs,
}, nil
}
// FromProtoCapella converts a proto execution payload type for capella to our
// builder compatible payload type.
func FromProtoCapella(payload *v1.ExecutionPayloadCapella) (ExecutionPayloadCapella, error) {
bFee, err := sszBytesToUint256(payload.BaseFeePerGas)
if err != nil {
return ExecutionPayloadCapella{}, err
}
txs := make([]hexutil.Bytes, len(payload.Transactions))
for i := range payload.Transactions {
txs[i] = payload.Transactions[i]
}
withdrawals := make([]Withdrawal, len(payload.Withdrawals))
for i, w := range payload.Withdrawals {
withdrawals[i] = Withdrawal{
Index: Uint256{Int: big.NewInt(0).SetUint64(w.Index)},
ValidatorIndex: Uint256{Int: big.NewInt(0).SetUint64(uint64(w.ValidatorIndex))},
Address: w.Address,
Amount: Uint256{Int: big.NewInt(0).SetUint64(w.Amount)},
}
}
return ExecutionPayloadCapella{
ParentHash: payload.ParentHash,
FeeRecipient: payload.FeeRecipient,
StateRoot: payload.StateRoot,
ReceiptsRoot: payload.ReceiptsRoot,
LogsBloom: payload.LogsBloom,
PrevRandao: payload.PrevRandao,
BlockNumber: Uint64String(payload.BlockNumber),
GasLimit: Uint64String(payload.GasLimit),
GasUsed: Uint64String(payload.GasUsed),
Timestamp: Uint64String(payload.Timestamp),
ExtraData: payload.ExtraData,
BaseFeePerGas: bFee,
BlockHash: payload.BlockHash,
Transactions: txs,
Withdrawals: withdrawals,
}, nil
}
type ExecHeaderResponseCapella struct {
Data struct {
Signature hexutil.Bytes `json:"signature"`
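As rough orientation for the new converters, a hedged usage sketch follows; the import path assumed for FromProto and the literal field values are guesses from this diff, not verified against the tree.

package main

import (
    "fmt"

    "github.com/prysmaticlabs/prysm/v4/api/client/builder" // assumed home of FromProto
    v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
)

func main() {
    // Hypothetical payload; real callers receive one from the engine API.
    payload := &v1.ExecutionPayload{
        ParentHash:    make([]byte, 32),
        FeeRecipient:  make([]byte, 20),
        StateRoot:     make([]byte, 32),
        ReceiptsRoot:  make([]byte, 32),
        LogsBloom:     make([]byte, 256),
        PrevRandao:    make([]byte, 32),
        BlockHash:     make([]byte, 32),
        BaseFeePerGas: make([]byte, 32), // little-endian SSZ bytes, per sszBytesToUint256
        BlockNumber:   1000,
    }
    bp, err := builder.FromProto(payload)
    if err != nil {
        fmt.Println("convert failed:", err)
        return
    }
    fmt.Println(bp.BlockNumber) // Uint64String, expected to JSON-marshal as "1000"
}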

View File

@@ -8,6 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -380,6 +381,14 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// IsViableForCheckpoint returns whether the given checkpoint is a checkpoint in any
// chain known to forkchoice
func (s *Service) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.IsViableForCheckpoint(cp)
}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {

View File

@@ -25,8 +25,11 @@ var (
errWSBlockNotFound = errors.New("weak subjectivity root not found in db")
// errWSBlockNotFoundInEpoch is returned when a block is not found in the WS cache or DB within epoch.
errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
// errNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
// ErrNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
ErrNotDescendantOfFinalized = invalidBlock{error: errors.New("not descendant of finalized checkpoint")}
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
)
// An invalid block is the block that fails state transition based on the core protocol rules.
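Because getAttPreState (later in this compare) wraps ErrNotCheckpoint with contextual detail, callers can still detect the sentinel with errors.Is. A minimal self-contained sketch of that pattern, using a stand-in sentinel and stdlib wrapping rather than the real blockchain package:

package main

import (
    "errors"
    "fmt"
)

// Stand-in for the blockchain package's ErrNotCheckpoint shown above.
var errNotCheckpoint = errors.New("not a checkpoint in forkchoice")

// getAttPreState mirrors the wrapping style used in the real function.
func getAttPreState(epoch uint64, root [32]byte) error {
    return fmt.Errorf("epoch %d root %#x: %w", epoch, root, errNotCheckpoint)
}

func main() {
    err := getAttPreState(2, [32]byte{'B'})
    if errors.Is(err, errNotCheckpoint) {
        fmt.Println("ignoring attestation:", err)
    }
}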

View File

@@ -53,7 +53,7 @@ func (s *Service) HeadSyncContributionProofDomain(ctx context.Context, slot prim
// HeadSyncCommitteeIndices returns the sync committee index position using the head state. Input `slot` is taken in consideration
// where validator's duty for `slot - 1` is used for block inclusion in `slot`. That means when a validator is at epoch boundary
// across EPOCHS_PER_SYNC_COMMITTEE_PERIOD then the valiator will be considered using next period sync committee.
// across EPOCHS_PER_SYNC_COMMITTEE_PERIOD then the validator will be considered using next period sync committee.
//
// Spec definition:
// Being assigned to a sync committee for a given slot means that the validator produces and broadcasts signatures for slot - 1 for inclusion in slot.
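A small worked sketch of the boundary behavior this comment describes, assuming mainnet constants (32 slots per epoch, EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256): at the first slot of a new period, the duty for slot - 1 still belongs to the previous period's committee.

package main

import "fmt"

const (
    slotsPerEpoch                = 32  // mainnet SLOTS_PER_EPOCH
    epochsPerSyncCommitteePeriod = 256 // mainnet EPOCHS_PER_SYNC_COMMITTEE_PERIOD
)

// syncPeriod returns the sync committee period a slot belongs to.
func syncPeriod(slot uint64) uint64 {
    return slot / slotsPerEpoch / epochsPerSyncCommitteePeriod
}

func main() {
    boundary := uint64(slotsPerEpoch * epochsPerSyncCommitteePeriod)
    // The block at the boundary slot carries signatures for boundary-1,
    // which is why duties are keyed off slot - 1 across the period change.
    fmt.Println(syncPeriod(boundary), syncPeriod(boundary-1)) // 1 0
}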

View File

@@ -8,6 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/async"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -18,7 +19,17 @@ import (
)
// getAttPreState retrieves the att pre state either from the cache or the DB.
func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (state.BeaconState, error) {
func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (state.ReadOnlyBeaconState, error) {
// If the attestation is recent and canonical we can use the head state to compute the shuffling.
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch == headEpoch {
targetSlot, err := s.cfg.ForkChoiceStore.Slot([32]byte(c.Root))
if err == nil && slots.ToEpoch(targetSlot)+1 >= headEpoch {
if s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return s.HeadStateReadOnly(ctx)
}
}
}
// Use a multilock to allow scoped holding of a mutex by a checkpoint root + epoch
// allowing us to behave smarter in terms of how this function is used concurrently.
epochKey := strconv.FormatUint(uint64(c.Epoch), 10 /* base 10 */)
@@ -32,7 +43,36 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
if cachedState != nil && !cachedState.IsNil() {
return cachedState, nil
}
// Try the next slot cache for early epoch calls; this should mostly have been covered already,
// but the check is cheap
slot, err := slots.EpochStart(c.Epoch)
if err != nil {
return nil, errors.Wrap(err, "could not compute epoch start")
}
cachedState = transition.NextSlotState(c.Root, slot)
if cachedState != nil && !cachedState.IsNil() {
if cachedState.Slot() != slot {
cachedState, err = transition.ProcessSlots(ctx, cachedState, slot)
if err != nil {
return nil, errors.Wrap(err, "could not process slots")
}
}
if err := s.checkpointStateCache.AddCheckpointState(c, cachedState); err != nil {
return nil, errors.Wrap(err, "could not save checkpoint state to cache")
}
return cachedState, nil
}
// Do not process attestations for old, non-viable checkpoints
ok, err := s.cfg.ForkChoiceStore.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: [32]byte(c.Root), Epoch: c.Epoch})
if err != nil {
return nil, errors.Wrap(err, "could not check checkpoint condition in forkchoice")
}
if !ok {
return nil, errors.Wrap(ErrNotCheckpoint, fmt.Sprintf("epoch %d root %#x", c.Epoch, c.Root))
}
// Fallback to state regeneration.
baseState, err := s.cfg.StateGen.StateByRoot(ctx, bytesutil.ToBytes32(c.Root))
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for epoch %d", c.Epoch)
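A condensed, hypothetical restatement of the lookup order this change gives getAttPreState — head state, checkpoint-state cache, next-slot cache, viability gate, then stategen — with boolean stand-ins for the real cache probes:

package main

import "fmt"

// attPreState sketches the decision ladder only; every parameter stands in
// for a real cache or forkchoice query.
func attPreState(headHit, cpCacheHit, nscHit, viable bool) (string, error) {
    if headHit { // recent + canonical target: cheap read-only head state
        return "head state", nil
    }
    if cpCacheHit { // checkpoint-state cache
        return "checkpoint cache", nil
    }
    if nscHit { // next-slot cache, advanced to the epoch start if needed
        return "next slot cache", nil
    }
    if !viable { // old non-viable checkpoints are rejected outright
        return "", fmt.Errorf("not a checkpoint in forkchoice")
    }
    return "stategen regeneration", nil // slowest path
}

func main() {
    src, _ := attPreState(false, false, true, true)
    fmt.Println(src) // next slot cache
}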

View File

@@ -27,11 +27,20 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
blkWithoutState := util.NewBeaconBlock()
blkWithoutState.Block.Slot = 0
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
BlkWithOutStateRoot, err := blkWithoutState.Block.HashTreeRoot()
cp := &ethpb.Checkpoint{}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
blkWithStateBadAtt := util.NewBeaconBlock()
blkWithStateBadAtt.Block.Slot = 1
r, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
cp = &ethpb.Checkpoint{Root: r[:]}
st, blkRoot, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
BlkWithStateBadAttRoot, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
@@ -42,7 +51,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
blkWithValidState := util.NewBeaconBlock()
blkWithValidState.Block.Slot = 2
blkWithValidState.Block.Slot = 32
util.SaveBlock(t, ctx, beaconDB, blkWithValidState)
blkWithValidStateRoot, err := blkWithValidState.Block.HashTreeRoot()
@@ -57,6 +66,10 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
service.head = &head{
state: st,
}
tests := []struct {
name string
a *ethpb.Attestation
@@ -67,11 +80,6 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
@@ -160,6 +168,9 @@ func TestStore_SaveCheckpointState(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)}))
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s1, err := service.getAttPreState(ctx, cp1)
require.NoError(t, err)
assert.Equal(t, 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot(), "Unexpected state slot")
@@ -167,8 +178,17 @@ func TestStore_SaveCheckpointState(t *testing.T) {
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'B'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'B'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'B'}, fieldparams.RootLength)}))
s2, err := service.getAttPreState(ctx, cp2)
require.ErrorContains(t, "epoch 2 root 0x4200000000000000000000000000000000000000000000000000000000000000: not a checkpoint in forkchoice", err)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s2, err = service.getAttPreState(ctx, cp2)
require.NoError(t, err)
assert.Equal(t, 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot(), "Unexpected state slot")
s1, err = service.getAttPreState(ctx, cp1)
@@ -187,6 +207,10 @@ func TestStore_SaveCheckpointState(t *testing.T) {
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}))
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s3, err := service.getAttPreState(ctx, cp3)
require.NoError(t, err)
assert.Equal(t, s.Slot(), s3.Slot(), "Unexpected state slot")
@@ -195,11 +219,18 @@ func TestStore_SaveCheckpointState(t *testing.T) {
func TestStore_UpdateCheckpointState(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
baseState, _ := util.DeterministicGenesisState(t, 1)
epoch := primitives.Epoch(1)
baseState, _ := util.DeterministicGenesisState(t, 1)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: bytesutil.PadTo([]byte("hi"), fieldparams.RootLength)}
blk := util.NewBeaconBlock()
r1, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r1[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root)))
st, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, checkpoint, checkpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, r1))
returned, err := service.getAttPreState(ctx, checkpoint)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().SlotsPerEpoch.Mul(uint64(checkpoint.Epoch)), returned.Slot(), "Incorrectly returned base state")
@@ -209,8 +240,16 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
assert.Equal(t, returned.Slot(), cached.Slot(), "State should have been cached")
epoch = 2
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch, Root: bytesutil.PadTo([]byte("bye"), fieldparams.RootLength)}
blk = util.NewBeaconBlock()
blk.Block.Slot = 64
r2, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r2[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(newCheckpoint.Root)))
st, blkRoot, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, newCheckpoint, newCheckpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, r2))
returned, err = service.getAttPreState(ctx, newCheckpoint)
require.NoError(t, err)
s, err := slots.EpochStart(newCheckpoint.Epoch)
@@ -289,3 +328,22 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
assert.NoError(t, service.verifyBeaconBlock(ctx, d), "Did not receive the wanted error")
}
func TestGetAttPreState_HeadState(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
baseState, _ := util.DeterministicGenesisState(t, 1)
epoch := primitives.Epoch(1)
blk := util.NewBeaconBlock()
r1, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r1[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root)))
require.NoError(t, transition.UpdateNextSlotCache(ctx, checkpoint.Root, baseState))
_, err = service.getAttPreState(ctx, checkpoint)
require.NoError(t, err)
st, err := service.checkpointStateCache.StateByCheckpoint(checkpoint)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().SlotsPerEpoch, st.Slot())
}

View File

@@ -1875,9 +1875,9 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
r3 := bytesutil.ToBytes32(a3.Data.BeaconBlockRoot)
require.Equal(t, false, service.cfg.ForkChoiceStore.HasNode(r3))
require.NoError(t, service.handleBlockAttestations(ctx, wsb.Block(), st)) // fine to use the same committe as st
require.NoError(t, service.handleBlockAttestations(ctx, wsb.Block(), st)) // fine to use the same committee as st
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, service.handleBlockAttestations(ctx, wsb3.Block(), st3)) // fine to use the same committe as st
require.NoError(t, service.handleBlockAttestations(ctx, wsb3.Block(), st3)) // fine to use the same committee as st
require.Equal(t, 1, len(service.cfg.AttPool.BlockAttestations()))
}

View File

@@ -26,7 +26,7 @@ const reorgLateBlockCountAttestations = 2 * time.Second
// AttestationStateFetcher allows for retrieving a beacon state corresponding to the block
// root of an attestation's target checkpoint.
type AttestationStateFetcher interface {
AttestationTargetState(ctx context.Context, target *ethpb.Checkpoint) (state.BeaconState, error)
AttestationTargetState(ctx context.Context, target *ethpb.Checkpoint) (state.ReadOnlyBeaconState, error)
}
// AttestationReceiver interface defines the methods of the chain service for receiving and processing new attestations.
@@ -37,7 +37,7 @@ type AttestationReceiver interface {
}
// AttestationTargetState returns the pre state of an attestation.
func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Checkpoint) (state.BeaconState, error) {
func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Checkpoint) (state.ReadOnlyBeaconState, error) {
ss, err := slots.EpochStart(target.Epoch)
if err != nil {
return nil, err
@@ -45,6 +45,9 @@ func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Chec
if err := slots.ValidateClock(ss, uint64(s.genesisTime.Unix())); err != nil {
return nil, err
}
// We acquire the lock here instead of in getAttPreState because that function gets called from UpdateHead, which holds a write lock
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.getAttPreState(ctx, target)
}

View File

@@ -320,7 +320,7 @@ func (_ *ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestatio
}
// AttestationTargetState mocks AttestationTargetState method in chain service.
func (s *ChainService) AttestationTargetState(_ context.Context, _ *ethpb.Checkpoint) (state.BeaconState, error) {
func (s *ChainService) AttestationTargetState(_ context.Context, _ *ethpb.Checkpoint) (state.ReadOnlyBeaconState, error) {
return s.State, nil
}

View File

@@ -58,7 +58,7 @@ func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, _ interfaces.
// GetHeader for mocking.
func (s *MockBuilderService) GetHeader(_ context.Context, slot primitives.Slot, _ [32]byte, _ [48]byte) (builder.SignedBid, error) {
if slots.ToEpoch(slot) >= params.BeaconConfig().CapellaForkEpoch {
if slots.ToEpoch(slot) >= params.BeaconConfig().CapellaForkEpoch || s.BidCapella != nil {
return builder.WrappedSignedBuilderBidCapella(s.BidCapella)
}
w, err := builder.WrappedSignedBuilderBid(s.Bid)

View File

@@ -804,7 +804,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 3))
require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 4))
// Mimick finalized deposit trie fetch.
// Mimic finalized deposit trie fetch.
fd := dc.FinalizedDeposits(context.Background())
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex, big.NewInt(14))
insertIndex := fd.MerkleTrieIndex + 1

View File

@@ -183,11 +183,11 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
currentSlot,
)
if attTime.Before(lowerBounds) {
attReceivedTooEarlyCount.Inc()
attReceivedTooLateCount.Inc()
return attError
}
if attTime.After(upperBounds) {
attReceivedTooLateCount.Inc()
attReceivedTooEarlyCount.Inc()
return attError
}
return nil
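A minimal sketch of the corrected counter semantics: an attestation whose slot time falls before the lower bound is old, hence received too late; one past the upper bound belongs to a future slot, hence too early. The disparity constant below is illustrative, not Prysm's:

package main

import (
    "fmt"
    "time"
)

const clockDisparity = 500 * time.Millisecond // illustrative only

func classify(attTime, lower, upper time.Time) string {
    switch {
    case attTime.Before(lower):
        return "too late" // attestation slot is in the past
    case attTime.After(upper):
        return "too early" // attestation slot is in the future
    default:
        return "timely"
    }
}

func main() {
    now := time.Now()
    fmt.Println(classify(now.Add(-time.Hour), now.Add(-clockDisparity), now.Add(clockDisparity)))
    // Output: too late
}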

View File

@@ -150,7 +150,7 @@ func TestSlashValidator_OK(t *testing.T) {
maxBalance := params.BeaconConfig().MaxEffectiveBalance
slashedBalance := state.Slashings()[state.Slot().Mod(uint64(params.BeaconConfig().EpochsPerSlashingsVector))]
assert.Equal(t, maxBalance, slashedBalance, "Slashed balance isnt the expected amount")
assert.Equal(t, maxBalance, slashedBalance, "Slashed balance isn't the expected amount")
whistleblowerReward := slashedBalance / params.BeaconConfig().WhistleBlowerRewardQuotient
bal, err := state.BalanceAtIndex(proposer)

View File

@@ -543,7 +543,7 @@ func TestStore_Blocks_Retrieve_SlotRangeWithStep(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 150, len(retrieved))
for _, b := range retrieved {
assert.Equal(t, primitives.Slot(0), (b.Block().Slot()-100)%step, "Unexpect block slot %d", b.Block().Slot())
assert.Equal(t, primitives.Slot(0), (b.Block().Slot()-100)%step, "Unexpected block slot %d", b.Block().Slot())
}
})
}

View File

@@ -130,9 +130,15 @@ func performValidatorStateMigration(ctx context.Context, bar *progressbar.Progre
return err
}
item := enc
if hasAltairKey(item) {
switch {
case hasAltairKey(enc):
item = item[len(altairKey):]
case hasBellatrixKey(enc):
item = item[len(bellatrixKey):]
case hasCapellaKey(enc):
item = item[len(capellaKey):]
}
detector, err := detect.FromState(item)
if err != nil {
return err
@@ -165,9 +171,14 @@ func performValidatorStateMigration(ctx context.Context, bar *progressbar.Progre
return err
}
var stateBytes []byte
if hasAltairKey(enc) {
switch {
case hasAltairKey(enc):
stateBytes = snappy.Encode(nil, append(altairKey, rawObj...))
} else {
case hasBellatrixKey(enc):
stateBytes = snappy.Encode(nil, append(bellatrixKey, rawObj...))
case hasCapellaKey(enc):
stateBytes = snappy.Encode(nil, append(capellaKey, rawObj...))
default:
stateBytes = snappy.Encode(nil, rawObj)
}
if stateErr := stateBkt.Put(keys[index], stateBytes); stateErr != nil {
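The hasAltairKey/hasBellatrixKey/hasCapellaKey helpers presumably test a fork-specific prefix on the encoded state bytes, which is why the matched prefix length is sliced off before detection. A minimal sketch under that assumption, with placeholder prefixes rather than Prysm's actual key bytes:

package main

import (
    "bytes"
    "fmt"
)

// Placeholder fork prefixes; the real values live in the kv package.
var (
    altairKey    = []byte("altair")
    bellatrixKey = []byte("bellatrix")
    capellaKey   = []byte("capella")
)

func hasAltairKey(enc []byte) bool    { return bytes.HasPrefix(enc, altairKey) }
func hasBellatrixKey(enc []byte) bool { return bytes.HasPrefix(enc, bellatrixKey) }
func hasCapellaKey(enc []byte) bool   { return bytes.HasPrefix(enc, capellaKey) }

func main() {
    enc := append(capellaKey, 0xde, 0xad) // tagged state blob
    fmt.Println(hasAltairKey(enc), hasCapellaKey(enc)) // false true
    fmt.Printf("%x\n", enc[len(capellaKey):])          // dead: raw bytes after the tag
}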

View File

@@ -313,3 +313,217 @@ func Test_migrateAltairStateValidators(t *testing.T) {
})
}
}
func Test_migrateBellatrixStateValidators(t *testing.T) {
tests := []struct {
name string
setup func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
eval func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
}{
{
name: "migrates validators and adds them to new buckets",
setup: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// create some new buckets that should be present for this migration
err := dbStore.db.Update(func(tx *bbolt.Tx) error {
_, err := tx.CreateBucketIfNotExists(stateValidatorsBucket)
assert.NoError(t, err)
_, err = tx.CreateBucketIfNotExists(blockRootValidatorHashesBucket)
assert.NoError(t, err)
return nil
})
assert.NoError(t, err)
},
eval: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// check whether the new buckets are present
err := dbStore.db.View(func(tx *bbolt.Tx) error {
valBkt := tx.Bucket(stateValidatorsBucket)
assert.NotNil(t, valBkt)
idxBkt := tx.Bucket(blockRootValidatorHashesBucket)
assert.NotNil(t, idxBkt)
return nil
})
assert.NoError(t, err)
// check if the migration worked
blockRoot := [32]byte{'A'}
rcvdState, err := dbStore.State(context.Background(), blockRoot)
assert.NoError(t, err)
require.DeepSSZEqual(t, rcvdState.ToProtoUnsafe(), state.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// find hashes of the validators that are set as part of the state
var hashes []byte
var individualHashes [][]byte
for _, val := range vals {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
hashes = append(hashes, hash[:]...)
individualHashes = append(individualHashes, hash[:])
}
// check if all the validators that were in the state, are stored properly in the validator bucket
pbState, err := state_native.ProtobufBeaconStateBellatrix(rcvdState.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
found := false
for _, h := range individualHashes {
if bytes.Equal(hash[:], h) {
found = true
}
}
require.Equal(t, true, found)
validatorsFoundCount++
}
require.Equal(t, len(vals), validatorsFoundCount)
// check if the state validator indexes are stored properly
err = dbStore.db.View(func(tx *bbolt.Tx) error {
rcvdValhashBytes := tx.Bucket(blockRootValidatorHashesBucket).Get(blockRoot[:])
rcvdValHashes, sErr := snappy.Decode(nil, rcvdValhashBytes)
assert.NoError(t, sErr)
require.DeepEqual(t, hashes, rcvdValHashes)
return nil
})
assert.NoError(t, err)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
dbStore := setupDB(t)
// add a state with the given validators
vals := validators(10)
blockRoot := [32]byte{'A'}
st, _ := util.DeterministicGenesisStateBellatrix(t, 20)
err := st.SetFork(&v1alpha1.Fork{
PreviousVersion: params.BeaconConfig().AltairForkVersion,
CurrentVersion: params.BeaconConfig().BellatrixForkVersion,
Epoch: 0,
})
require.NoError(t, err)
assert.NoError(t, st.SetSlot(100))
assert.NoError(t, st.SetValidators(vals))
assert.NoError(t, dbStore.SaveState(context.Background(), st, blockRoot))
// enable historical state representation flag to test this
resetCfg := features.InitWithReset(&features.Flags{
EnableHistoricalSpaceRepresentation: true,
})
defer resetCfg()
tt.setup(t, dbStore, st, vals)
assert.NoError(t, migrateStateValidators(context.Background(), dbStore.db), "migrateArchivedIndex(tx) error")
tt.eval(t, dbStore, st, vals)
})
}
}
func Test_migrateCapellaStateValidators(t *testing.T) {
tests := []struct {
name string
setup func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
eval func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator)
}{
{
name: "migrates validators and adds them to new buckets",
setup: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// create some new buckets that should be present for this migration
err := dbStore.db.Update(func(tx *bbolt.Tx) error {
_, err := tx.CreateBucketIfNotExists(stateValidatorsBucket)
assert.NoError(t, err)
_, err = tx.CreateBucketIfNotExists(blockRootValidatorHashesBucket)
assert.NoError(t, err)
return nil
})
assert.NoError(t, err)
},
eval: func(t *testing.T, dbStore *Store, state state.BeaconState, vals []*v1alpha1.Validator) {
// check whether the new buckets are present
err := dbStore.db.View(func(tx *bbolt.Tx) error {
valBkt := tx.Bucket(stateValidatorsBucket)
assert.NotNil(t, valBkt)
idxBkt := tx.Bucket(blockRootValidatorHashesBucket)
assert.NotNil(t, idxBkt)
return nil
})
assert.NoError(t, err)
// check if the migration worked
blockRoot := [32]byte{'A'}
rcvdState, err := dbStore.State(context.Background(), blockRoot)
assert.NoError(t, err)
require.DeepSSZEqual(t, rcvdState.ToProtoUnsafe(), state.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// find hashes of the validators that are set as part of the state
var hashes []byte
var individualHashes [][]byte
for _, val := range vals {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
hashes = append(hashes, hash[:]...)
individualHashes = append(individualHashes, hash[:])
}
// check if all the validators that were in the state, are stored properly in the validator bucket
pbState, err := state_native.ProtobufBeaconStateCapella(rcvdState.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {
hash, hashErr := val.HashTreeRoot()
assert.NoError(t, hashErr)
found := false
for _, h := range individualHashes {
if bytes.Equal(hash[:], h) {
found = true
}
}
require.Equal(t, true, found)
validatorsFoundCount++
}
require.Equal(t, len(vals), validatorsFoundCount)
// check if the state validator indexes are stored properly
err = dbStore.db.View(func(tx *bbolt.Tx) error {
rcvdValhashBytes := tx.Bucket(blockRootValidatorHashesBucket).Get(blockRoot[:])
rcvdValHashes, sErr := snappy.Decode(nil, rcvdValhashBytes)
assert.NoError(t, sErr)
require.DeepEqual(t, hashes, rcvdValHashes)
return nil
})
assert.NoError(t, err)
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
dbStore := setupDB(t)
// add a state with the given validators
vals := validators(10)
blockRoot := [32]byte{'A'}
st, _ := util.DeterministicGenesisStateCapella(t, 20)
err := st.SetFork(&v1alpha1.Fork{
PreviousVersion: params.BeaconConfig().BellatrixForkVersion,
CurrentVersion: params.BeaconConfig().CapellaForkVersion,
Epoch: 0,
})
require.NoError(t, err)
assert.NoError(t, st.SetSlot(100))
assert.NoError(t, st.SetValidators(vals))
assert.NoError(t, dbStore.SaveState(context.Background(), st, blockRoot))
// enable historical state representation flag to test this
resetCfg := features.InitWithReset(&features.Flags{
EnableHistoricalSpaceRepresentation: true,
})
defer resetCfg()
tt.setup(t, dbStore, st, vals)
assert.NoError(t, migrateStateValidators(context.Background(), dbStore.db), "migrateArchivedIndex(tx) error")
tt.eval(t, dbStore, st, vals)
})
}
}

View File

@@ -14,7 +14,6 @@ go_library(
"metrics.go",
"options.go",
"prometheus.go",
"provider.go",
"rpc_connection.go",
"service.go",
],
@@ -90,7 +89,6 @@ go_test(
"init_test.go",
"log_processing_test.go",
"prometheus_test.go",
"provider_test.go",
"service_test.go",
],
data = glob(["testdata/**"]),
@@ -122,7 +120,6 @@ go_test(
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/clientstats:go_default_library",
"//network/authorization:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v4/network"
"github.com/prysmaticlabs/prysm/v4/network/authorization"
)
@@ -15,7 +16,7 @@ type Option func(s *Service) error
// WithHttpEndpoint parses the http endpoint for the powchain service to use.
func WithHttpEndpoint(endpointString string) Option {
return func(s *Service) error {
s.cfg.currHttpEndpoint = HttpEndpoint(endpointString)
s.cfg.currHttpEndpoint = network.HttpEndpoint(endpointString)
return nil
}
}
@@ -27,7 +28,7 @@ func WithHttpEndpointAndJWTSecret(endpointString string, secret []byte) Option {
return nil
}
// Overwrite authorization type for all endpoints to be of a bearer type.
hEndpoint := HttpEndpoint(endpointString)
hEndpoint := network.HttpEndpoint(endpointString)
hEndpoint.Auth.Method = authorization.Bearer
hEndpoint.Auth.Value = string(secret)

View File

@@ -15,7 +15,7 @@ import (
func TestCleanup(t *testing.T) {
ctx := context.Background()
pc, err := NewPowchainCollector(ctx)
assert.NoError(t, err, "Uxpected error caling NewPowchainCollector")
assert.NoError(t, err, "Unexpected error calling NewPowchainCollector")
unregistered := pc.unregister()
assert.Equal(t, true, unregistered, "PowchainCollector.unregister did not return true (via prometheus.DefaultRegistry)")
// PowchainCollector is a prometheus.Collector, so we should be able to register it again
@@ -39,7 +39,7 @@ func TestCleanup(t *testing.T) {
func TestCancelation(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
pc, err := NewPowchainCollector(ctx)
assert.NoError(t, err, "Uxpected error caling NewPowchainCollector")
assert.NoError(t, err, "Unexpected error calling NewPowchainCollector")
ticker := time.NewTicker(10 * time.Second)
cancel()
select {

View File

@@ -1,49 +0,0 @@
package execution
import (
"encoding/base64"
"strings"
"github.com/prysmaticlabs/prysm/v4/network"
"github.com/prysmaticlabs/prysm/v4/network/authorization"
)
// HttpEndpoint extracts an httputils.Endpoint from the provider parameter.
func HttpEndpoint(eth1Provider string) network.Endpoint {
endpoint := network.Endpoint{
Url: "",
Auth: network.AuthorizationData{
Method: authorization.None,
Value: "",
}}
authValues := strings.Split(eth1Provider, ",")
endpoint.Url = strings.TrimSpace(authValues[0])
if len(authValues) > 2 {
log.Errorf(
"ETH1 endpoint string can contain one comma for specifying the authorization header to access the provider."+
" String contains too many commas: %d. Skipping authorization.", len(authValues)-1)
} else if len(authValues) == 2 {
switch network.Method(strings.TrimSpace(authValues[1])) {
case authorization.Basic:
basicAuthValues := strings.Split(strings.TrimSpace(authValues[1]), " ")
if len(basicAuthValues) != 2 {
log.Errorf("Basic Authentication has incorrect format. Skipping authorization.")
} else {
endpoint.Auth.Method = authorization.Basic
endpoint.Auth.Value = base64.StdEncoding.EncodeToString([]byte(basicAuthValues[1]))
}
case authorization.Bearer:
bearerAuthValues := strings.Split(strings.TrimSpace(authValues[1]), " ")
if len(bearerAuthValues) != 2 {
log.Errorf("Bearer Authentication has incorrect format. Skipping authorization.")
} else {
endpoint.Auth.Method = authorization.Bearer
endpoint.Auth.Value = bearerAuthValues[1]
}
case authorization.None:
log.Errorf("Authorization has incorrect format or authorization type is not supported.")
}
}
return endpoint
}
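The removed helper's behavior now lives in network.HttpEndpoint (see the options.go change above). A hedged usage sketch, assuming the moved function keeps the same comma-separated "<url>[,<scheme> <credentials>]" format:

package main

import (
    "fmt"

    "github.com/prysmaticlabs/prysm/v4/network"
)

func main() {
    // Bearer auth for an engine endpoint; Basic credentials would be
    // base64-encoded the same way the removed code shows.
    endpoint := network.HttpEndpoint("http://localhost:8551,Bearer mytoken")
    // Auth.Method is an authorization constant and prints numerically.
    fmt.Println(endpoint.Url, endpoint.Auth.Method, endpoint.Auth.Value)
}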

View File

@@ -1,74 +0,0 @@
package execution
import (
"testing"
"github.com/prysmaticlabs/prysm/v4/network/authorization"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestHttpEndpoint(t *testing.T) {
hook := logTest.NewGlobal()
url := "http://test"
t.Run("URL", func(t *testing.T) {
endpoint := HttpEndpoint(url)
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.None, endpoint.Auth.Method)
})
t.Run("URL with separator", func(t *testing.T) {
endpoint := HttpEndpoint(url + ",")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.None, endpoint.Auth.Method)
})
t.Run("URL with whitespace", func(t *testing.T) {
endpoint := HttpEndpoint(" " + url + " ,")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.None, endpoint.Auth.Method)
})
t.Run("Basic auth", func(t *testing.T) {
endpoint := HttpEndpoint(url + ",Basic username:password")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.Basic, endpoint.Auth.Method)
assert.Equal(t, "dXNlcm5hbWU6cGFzc3dvcmQ=", endpoint.Auth.Value)
})
t.Run("Basic auth with whitespace", func(t *testing.T) {
endpoint := HttpEndpoint(url + ", Basic username:password ")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.Basic, endpoint.Auth.Method)
assert.Equal(t, "dXNlcm5hbWU6cGFzc3dvcmQ=", endpoint.Auth.Value)
})
t.Run("Basic auth with incorrect format", func(t *testing.T) {
hook.Reset()
endpoint := HttpEndpoint(url + ",Basic username:password foo")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.None, endpoint.Auth.Method)
assert.LogsContain(t, hook, "Skipping authorization")
})
t.Run("Bearer auth", func(t *testing.T) {
endpoint := HttpEndpoint(url + ",Bearer token")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.Bearer, endpoint.Auth.Method)
assert.Equal(t, "token", endpoint.Auth.Value)
})
t.Run("Bearer auth with whitespace", func(t *testing.T) {
endpoint := HttpEndpoint(url + ", Bearer token ")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.Bearer, endpoint.Auth.Method)
assert.Equal(t, "token", endpoint.Auth.Value)
})
t.Run("Bearer auth with incorrect format", func(t *testing.T) {
hook.Reset()
endpoint := HttpEndpoint(url + ",Bearer token foo")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.None, endpoint.Auth.Method)
assert.LogsContain(t, hook, "Skipping authorization")
})
t.Run("Too many separators", func(t *testing.T) {
endpoint := HttpEndpoint(url + ",Bearer token,foo")
assert.Equal(t, url, endpoint.Url)
assert.Equal(t, authorization.None, endpoint.Auth.Method)
assert.LogsContain(t, hook, "Skipping authorization")
})
}

View File

@@ -3,7 +3,6 @@ package execution
import (
"context"
"fmt"
"net/url"
"strings"
"time"
@@ -107,26 +106,10 @@ func (s *Service) retryExecutionClientConnection(ctx context.Context, err error)
// Initializes an RPC connection with authentication headers.
func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.Endpoint) (*gethRPC.Client, error) {
// Need to handle ipc and http
var client *gethRPC.Client
u, err := url.Parse(endpoint.Url)
client, err := network.NewExecutionRPCClient(ctx, endpoint)
if err != nil {
return nil, err
}
switch u.Scheme {
case "http", "https":
client, err = gethRPC.DialOptions(ctx, endpoint.Url, gethRPC.WithHTTPClient(endpoint.HttpClient()))
if err != nil {
return nil, err
}
case "", "ipc":
client, err = gethRPC.DialIPC(ctx, endpoint.Url)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("no known transport for URL scheme %q", u.Scheme)
}
if endpoint.Auth.Method != authorization.None {
header, err := endpoint.Auth.ToHeaderValue()
if err != nil {

View File

@@ -228,6 +228,39 @@ func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot primi
return n.root, nil
}
// IsViableForCheckpoint returns whether the root passed is a checkpoint root for any
// known chain in forkchoice.
func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
node, ok := f.store.nodeByRoot[cp.Root]
if !ok || node == nil {
return false, nil
}
epochStart, err := slots.EpochStart(cp.Epoch)
if err != nil {
return false, err
}
if node.slot > epochStart {
return false, nil
}
if len(node.children) == 0 {
return true, nil
}
if node.slot == epochStart {
return true, nil
}
nodeEpoch := slots.ToEpoch(node.slot)
if nodeEpoch >= cp.Epoch {
return false, nil
}
for _, child := range node.children {
if child.slot > epochStart {
return true, nil
}
}
return false, nil
}
// updateBalances updates the balances that directly voted for each block taking into account the
// validators' latest votes.
func (f *ForkChoice) updateBalances() error {
@@ -594,3 +627,12 @@ func (f *ForkChoice) updateJustifiedBalances(ctx context.Context, root [32]byte)
f.store.committeeWeight /= uint64(params.BeaconConfig().SlotsPerEpoch)
return nil
}
// Slot returns the slot of the given root if it's known to forkchoice
func (f *ForkChoice) Slot(root [32]byte) (primitives.Slot, error) {
n, ok := f.store.nodeByRoot[root]
if !ok || n == nil {
return 0, ErrNilNode
}
return n.slot, nil
}
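A worked example of the viability rule in IsViableForCheckpoint, restated as a standalone sketch with mainnet's 32 slots per epoch; node and child slots are passed in directly instead of being read from the store:

package main

import "fmt"

// viable mirrors the checks above: the node must sit at or before the
// checkpoint epoch's start slot, and the epoch start must lie on its chain
// (it is the boundary block, a tip, or has a descendant past the boundary).
func viable(nodeSlot uint64, childSlots []uint64, epoch uint64) bool {
    epochStart := epoch * 32
    if nodeSlot > epochStart {
        return false
    }
    if len(childSlots) == 0 || nodeSlot == epochStart {
        return true
    }
    if nodeSlot/32 >= epoch { // defensive epoch check mirroring the original
        return false
    }
    for _, cs := range childSlots {
        if cs > epochStart {
            return true
        }
    }
    return false
}

func main() {
    fmt.Println(viable(30, []uint64{31}, 1))     // false: no descendant crosses slot 32
    fmt.Println(viable(30, []uint64{31, 33}, 1)) // true: slot 33 child crosses the boundary
    fmt.Println(viable(32, []uint64{33}, 1))     // true: the boundary block itself
}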

View File

@@ -754,3 +754,110 @@ func TestForkChoice_UnrealizedJustifiedPayloadBlockHash(t *testing.T) {
got := f.UnrealizedJustifiedPayloadBlockHash()
require.Equal(t, [32]byte{'A'}, got)
}
func TestForkChoiceIsViableForCheckpoint(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
// No Node
viable, err := f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root})
require.NoError(t, err)
require.Equal(t, false, viable)
// No Children
require.NoError(t, f.InsertNode(ctx, st, root))
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root, Epoch: 0})
require.NoError(t, err)
require.Equal(t, true, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root, Epoch: 2})
require.NoError(t, err)
require.Equal(t, true, viable)
st, broot, err := prepareForkchoiceState(ctx, 1, [32]byte{'b'}, root, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, broot))
// Epoch start
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root})
require.NoError(t, err)
require.Equal(t, true, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root, Epoch: 1})
require.NoError(t, err)
require.Equal(t, false, viable)
// No Children but impossible checkpoint
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot})
require.NoError(t, err)
require.Equal(t, false, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
st, croot, err := prepareForkchoiceState(ctx, 2, [32]byte{'c'}, broot, [32]byte{'C'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, croot))
// Children in same epoch
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot})
require.NoError(t, err)
require.Equal(t, false, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, false, viable)
st, droot, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'d'}, broot, [32]byte{'D'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, droot))
// Children in next epoch but boundary
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot})
require.NoError(t, err)
require.Equal(t, false, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, false, viable)
// Boundary block
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: droot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: droot, Epoch: 0})
require.NoError(t, err)
require.Equal(t, false, viable)
// Children in next epoch
st, eroot, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'e'}, broot, [32]byte{'E'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, eroot))
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
}
func TestForkChoiceSlot(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
st, root, err := prepareForkchoiceState(ctx, 3, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
// No Node
_, err = f.Slot(root)
require.ErrorIs(t, ErrNilNode, err)
require.NoError(t, f.InsertNode(ctx, st, root))
slot, err := f.Slot(root)
require.NoError(t, err)
require.Equal(t, primitives.Slot(3), slot)
}

View File

@@ -53,6 +53,7 @@ type Getter interface {
CommonAncestor(ctx context.Context, root1 [32]byte, root2 [32]byte) ([32]byte, primitives.Slot, error)
IsCanonical(root [32]byte) bool
FinalizedCheckpoint() *forkchoicetypes.Checkpoint
IsViableForCheckpoint(*forkchoicetypes.Checkpoint) (bool, error)
FinalizedPayloadBlockHash() [32]byte
JustifiedCheckpoint() *forkchoicetypes.Checkpoint
PreviousJustifiedCheckpoint() *forkchoicetypes.Checkpoint
@@ -66,6 +67,7 @@ type Getter interface {
Tips() ([][32]byte, []primitives.Slot)
IsOptimistic(root [32]byte) (bool, error)
ShouldOverrideFCU() bool
Slot([32]byte) (primitives.Slot, error)
}
// Setter allows to set forkchoice information

View File

@@ -114,19 +114,11 @@ func (s *Service) Start() {
"ValidatorIndices": tracked,
}).Info("Starting service")
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)
go s.run(stateChannel, stateSub)
go s.run()
}
// run waits until the beacon is synced and starts the monitoring system.
func (s *Service) run(stateChannel chan *feed.Event, stateSub event.Subscription) {
if stateChannel == nil {
log.Error("State state is nil")
return
}
func (s *Service) run() {
if err := s.waitForSync(s.config.InitialSyncComplete); err != nil {
log.WithError(err)
return
@@ -154,6 +146,8 @@ func (s *Service) run(stateChannel chan *feed.Event, stateSub event.Subscription
s.isLogging = true
s.Unlock()
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)
s.monitorRoutine(stateChannel, stateSub)
}
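A minimal sketch of the deadlock fix: the monitor now waits for initial sync before subscribing to the state feed, so it never holds an unread subscription while init-sync publishes events. All names below are stand-ins:

package main

import "fmt"

// run waits for initial sync first and only then subscribes, mirroring the
// reordering in the diff above.
func run(initialSyncComplete <-chan struct{}, subscribe func() <-chan int) {
    <-initialSyncComplete // wait first...
    events := subscribe() // ...then open the subscription
    fmt.Println("monitoring; first event:", <-events)
}

func main() {
    done := make(chan struct{})
    subscribe := func() <-chan int {
        ch := make(chan int, 1)
        ch <- 42 // stand-in state feed event
        return ch
    }
    close(done) // init-sync finishes
    run(done, subscribe)
}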

View File

@@ -271,11 +271,9 @@ func TestWaitForSyncCanceled(t *testing.T) {
func TestRun(t *testing.T) {
hook := logTest.NewGlobal()
s := setupService(t)
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)
go func() {
s.run(stateChannel, stateSub)
s.run()
}()
close(s.config.InitialSyncComplete)

View File

@@ -581,7 +581,8 @@ func (b *BeaconNode) fetchBuilderService() *builder.Service {
func (b *BeaconNode) registerAttestationPool() error {
s, err := attestations.NewService(b.ctx, &attestations.Config{
Pool: b.attestationPool,
InitialSyncComplete: b.initialSyncComplete,
})
if err != nil {
return errors.Wrap(err, "could not register atts pool service")
@@ -742,6 +743,7 @@ func (b *BeaconNode) registerSlasherService() error {
SlashingPoolInserter: b.slashingsPool,
SyncChecker: syncService,
HeadStateFetcher: chainService,
ClockWaiter: b.clockWaiter,
})
if err != nil {
return err

View File

@@ -18,6 +18,7 @@ go_library(
deps = [
"//beacon-chain/operations/attestations/kv:go_default_library",
"//cache/lru:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",

View File

@@ -53,27 +53,25 @@ func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unagg
// Track the unaggregated attestations that could not be aggregated.
leftOverUnaggregatedAtt := make(map[[32]byte]bool)
for _, atts := range attsByDataRoot {
aggregatedAtts := make([]*ethpb.Attestation, 0, len(atts))
processedAtts, err := attaggregation.Aggregate(atts)
aggregated, err := attaggregation.AggregateDisjointOneBitAtts(atts)
if err != nil {
return err
return errors.Wrap(err, "could not aggregate unaggregated attestations")
}
for _, att := range processedAtts {
if helpers.IsAggregated(att) {
aggregatedAtts = append(aggregatedAtts, att)
} else {
h, err := hashFn(att)
if err != nil {
return err
}
leftOverUnaggregatedAtt[h] = true
if aggregated == nil {
return errors.New("could not aggregate unaggregated attestations")
}
if helpers.IsAggregated(aggregated) {
if err := c.SaveAggregatedAttestations([]*ethpb.Attestation{aggregated}); err != nil {
return err
}
}
if err := c.SaveAggregatedAttestations(aggregatedAtts); err != nil {
return err
} else {
h, err := hashFn(aggregated)
if err != nil {
return err
}
leftOverUnaggregatedAtt[h] = true
}
}
// Remove the unaggregated attestations from the pool that were successfully aggregated.
for _, att := range unaggregatedAtts {
h, err := hashFn(att)
@@ -87,7 +85,6 @@ func (c *AttCaches) aggregateUnaggregatedAttestations(ctx context.Context, unagg
return err
}
}
return nil
}
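
AggregateDisjointOneBitAtts replaces the general Aggregate call above with the special case this pool actually has: inputs whose aggregation bitfields each set exactly one, distinct bit. A simplified sketch of that idea under assumed toy types (the real code unions go-bitfield bitlists and folds the BLS signatures into one aggregate):

package main

import "fmt"

type att struct{ bits []byte }

// aggregateDisjointOneBit unions the bitfields of attestations that are
// pairwise disjoint, so a plain OR suffices and no overlap checks or
// max-cover search are needed. Signature aggregation is omitted here.
func aggregateDisjointOneBit(atts []att) att {
	out := att{bits: make([]byte, len(atts[0].bits))}
	for _, a := range atts {
		for i, b := range a.bits {
			out.bits[i] |= b
		}
	}
	return out
}

func main() {
	a := att{bits: []byte{0b001}}
	b := att{bits: []byte{0b100}}
	fmt.Printf("%03b\n", aggregateDisjointOneBit([]att{a, b}).bits[0]) // 101
}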

View File

@@ -30,6 +30,20 @@ var (
Name: "expired_block_atts_total",
Help: "The number of expired and deleted block attestations in the pool.",
})
batchForkChoiceAttsT1 = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "aggregate_attestations_t1",
Help: "Captures times of attestation aggregation in milliseconds during the first interval per slot",
Buckets: []float64{100, 200, 500, 1000, 1500, 2000, 2500, 3500},
},
)
batchForkChoiceAttsT2 = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "aggregate_attestations_t2",
Help: "Captures times of attestation aggregation in milliseconds during the second interval per slot",
Buckets: []float64{10, 40, 100, 200, 600},
},
)
)
func (s *Service) updateMetrics() {

View File

@@ -7,6 +7,8 @@ import (
"time"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
attaggregation "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation/aggregation/attestations"
@@ -14,20 +16,34 @@ import (
"go.opencensus.io/trace"
)
// Prepare attestations for fork choice three times per slot.
var prepareForkChoiceAttsPeriod = slots.DivideSlotBy(3 /* times-per-slot */)
// This prepares fork choice attestations by running batchForkChoiceAtts
// every prepareForkChoiceAttsPeriod.
func (s *Service) prepareForkChoiceAtts() {
ticker := time.NewTicker(prepareForkChoiceAttsPeriod)
defer ticker.Stop()
intervals := features.Get().AggregateIntervals
slotDuration := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
// Adjust intervals for networks with a lower slot duration (Hive, e2e, etc.)
for {
if intervals[len(intervals)-1] >= slotDuration {
for i, offset := range intervals {
intervals[i] = offset / 2
}
} else {
break
}
}
ticker := slots.NewSlotTickerWithIntervals(time.Unix(int64(s.genesisTime), 0), intervals[:])
for {
select {
case <-ticker.C:
case <-ticker.C():
t := time.Now()
if err := s.batchForkChoiceAtts(s.ctx); err != nil {
log.WithError(err).Error("Could not prepare attestations for fork choice")
}
if slots.TimeIntoSlot(s.genesisTime) < intervals[1] {
batchForkChoiceAttsT1.Observe(float64(time.Since(t).Milliseconds()))
} else if slots.TimeIntoSlot(s.genesisTime) < intervals[2] {
batchForkChoiceAttsT2.Observe(float64(time.Since(t).Milliseconds()))
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
return
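
A worked example of the interval halving above, written in the equivalent for-condition form and with assumed offset values (the real defaults come from features.Get().AggregateIntervals): on a network with 6s slots, the offsets are halved until the last one fires inside the slot.

package main

import (
	"fmt"
	"time"
)

func main() {
	slotDuration := 6 * time.Second
	// Assumed example offsets, not the shipped defaults.
	intervals := []time.Duration{7 * time.Second, 9 * time.Second, 11 * time.Second}
	for intervals[len(intervals)-1] >= slotDuration {
		for i, offset := range intervals {
			intervals[i] = offset / 2
		}
	}
	fmt.Println(intervals) // [3.5s 4.5s 5.5s]
}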

View File

@@ -5,6 +5,7 @@ package attestations
import (
"context"
"errors"
"time"
lru "github.com/hashicorp/golang-lru"
@@ -26,8 +27,9 @@ type Service struct {
// Config options for the service.
type Config struct {
Pool Pool
pruneInterval time.Duration
InitialSyncComplete chan struct{}
}
// NewService instantiates a new attestation pool service instance that will
@@ -51,10 +53,24 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
// Start an attestation pool service's main event loop.
func (s *Service) Start() {
if err := s.waitForSync(s.cfg.InitialSyncComplete); err != nil {
log.WithError(err).Error("failed to wait for initial sync")
return
}
go s.prepareForkChoiceAtts()
go s.pruneAttsPool()
}
// waitForSync waits until the beacon node is synced to the latest head.
func (s *Service) waitForSync(syncChan chan struct{}) error {
select {
case <-syncChan:
return nil
case <-s.ctx.Done():
return errors.New("context closed, exiting goroutine")
}
}
// Stop the beacon block attestation pool service's main event loop
// and associated goroutines.
func (s *Service) Stop() error {

View File

@@ -30,8 +30,22 @@ func (s *Store) SaveSyncCommitteeMessage(msg *ethpb.SyncCommitteeMessage) error
return errors.New("not typed []ethpb.SyncCommitteeMessage")
}
messages = append(messages, copied)
savedSyncCommitteeMessageTotal.Inc()
idx := -1
for i, msg := range messages {
if msg.ValidatorIndex == copied.ValidatorIndex {
idx = i
break
}
}
if idx >= 0 {
// Override the existing message with the new one
messages[idx] = copied
} else {
// Append the new message
messages = append(messages, copied)
savedSyncCommitteeMessageTotal.Inc()
}
return s.messageCache.Push(&queue.Item{
Key: syncCommitteeKey(msg.Slot),
Value: messages,
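
The net effect of the new branch is one stored message per validator index, with re-submissions replacing the earlier entry instead of duplicating it. A minimal illustration under assumed toy types:

package main

import "fmt"

type msg struct {
	ValidatorIndex uint64
	Data           string
}

// upsert mirrors the replace-or-append rule above: override an existing
// message from the same validator, otherwise append (and only appends
// would bump the saved-message counter).
func upsert(messages []msg, m msg) []msg {
	for i := range messages {
		if messages[i].ValidatorIndex == m.ValidatorIndex {
			messages[i] = m
			return messages
		}
	}
	return append(messages, m)
}

func main() {
	msgs := upsert(nil, msg{ValidatorIndex: 1, Data: "a"})
	msgs = upsert(msgs, msg{ValidatorIndex: 1, Data: "b"}) // replaces
	fmt.Println(msgs) // [{1 b}]
}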

View File

@@ -55,6 +55,7 @@ go_library(
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",

View File

@@ -55,6 +55,9 @@ func (s *Service) Broadcast(ctx context.Context, msg proto.Message) error {
// BroadcastAttestation broadcasts an attestation to the p2p network; the message is assumed to be
// broadcast to the current fork.
func (s *Service) BroadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation) error {
if att == nil {
return errors.New("attempted to broadcast nil attestation")
}
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastAttestation")
defer span.End()
forkDigest, err := s.currentForkDigest()
@@ -73,6 +76,9 @@ func (s *Service) BroadcastAttestation(ctx context.Context, subnet uint64, att *
// BroadcastSyncCommitteeMessage broadcasts a sync committee message to the p2p network; the message is assumed to be
// broadcast to the current fork.
func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error {
if sMsg == nil {
return errors.New("attempted to broadcast nil sync committee message")
}
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastSyncCommitteeMessage")
defer span.End()
forkDigest, err := s.currentForkDigest()

View File

@@ -196,8 +196,12 @@ func TestService_BroadcastAttestation(t *testing.T) {
}
}(t)
// Attempting to broadcast a nil object should fail.
ctx := context.Background()
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastAttestation(ctx, subnet, nil))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastAttestation(context.Background(), subnet, msg))
require.NoError(t, p.BroadcastAttestation(ctx, subnet, msg))
if util.WaitTimeout(&wg, 1*time.Second) {
t.Error("Failed to receive pubsub within 1s")
}
@@ -433,8 +437,12 @@ func TestService_BroadcastSyncCommittee(t *testing.T) {
}
}(t)
// Broadcasting nil should fail.
ctx := context.Background()
require.ErrorContains(t, "attempted to broadcast nil", p.BroadcastSyncCommitteeMessage(ctx, subnet, nil))
// Broadcast to peers and wait.
require.NoError(t, p.BroadcastSyncCommitteeMessage(context.Background(), subnet, msg))
require.NoError(t, p.BroadcastSyncCommitteeMessage(ctx, subnet, msg))
if util.WaitTimeout(&wg, 1*time.Second) {
t.Error("Failed to receive pubsub within 1s")
}

View File

@@ -6,12 +6,14 @@ import (
"net"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/muxer/mplex"
"github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/config/features"
ecdsaprysm "github.com/prysmaticlabs/prysm/v4/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
)
@@ -99,6 +101,9 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
}
// Disable Ping Service.
options = append(options, libp2p.Ping(false))
if features.Get().DisableResourceManager {
options = append(options, libp2p.ResourceManager(&network.NullResourceManager{}))
}
return options
}
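
For context, a minimal, hedged sketch of what the new branch does when DisableResourceManager is set: the host is built with libp2p's no-op resource manager, so connection, stream, and memory accounting are switched off. All other options are omitted here:

package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/network"
)

func main() {
	// Same option as in buildOptions above, in isolation.
	h, err := libp2p.New(libp2p.ResourceManager(&network.NullResourceManager{}))
	if err != nil {
		panic(err)
	}
	defer h.Close()
	fmt.Println(h.ID())
}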

View File

@@ -202,7 +202,7 @@ func TestGossipTopicMapping_scanfcheck_GossipTopicFormattingSanityCheck(t *testi
if string(c) == "%" {
next := string(topic[i+1])
if next != "d" && next != "x" {
t.Errorf("Topic %s has formatting incompatiable with scanfcheck. Only %%d and %%x are supported", topic)
t.Errorf("Topic %s has formatting incompatible with scanfcheck. Only %%d and %%x are supported", topic)
}
}
}

View File

@@ -49,6 +49,7 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
topic += s.Encoding().ProtocolSuffix()
iterator := s.dv5Listener.RandomNodes()
defer iterator.Close()
switch {
case strings.Contains(topic, GossipAttestationMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForAttSubnet(index))

View File

@@ -41,7 +41,6 @@ go_library(
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -87,7 +86,6 @@ go_test(
deps = [
"//api/grpc:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/builder/testing:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -106,6 +104,7 @@ go_test(
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -116,17 +115,18 @@ go_test(
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/service:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/mock:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_golang_mock//gomock:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_wealdtech_go_bytesutil//:go_default_library",

View File

@@ -2,11 +2,15 @@ package beacon
import (
"context"
"strings"
"github.com/pkg/errors"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
"github.com/prysmaticlabs/prysm/v4/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v4/consensus-types"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v4/network/forks"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
@@ -142,6 +146,11 @@ func (bs *Server) SubmitBlindedBlock(ctx context.Context, req *ethpbv2.SignedBli
ctx, span := trace.StartSpan(ctx, "beacon.SubmitBlindedBlock")
defer span.End()
if err := rpchelpers.ValidateSync(ctx, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher); err != nil {
// We simply return the error because it's already a gRPC error.
return nil, err
}
switch blkContainer := req.Message.(type) {
case *ethpbv2.SignedBlindedBeaconBlockContainer_CapellaBlock:
if err := bs.submitBlindedCapellaBlock(ctx, blkContainer.CapellaBlock, req.Signature); err != nil {
@@ -180,6 +189,11 @@ func (bs *Server) SubmitBlindedBlockSSZ(ctx context.Context, req *ethpbv2.SSZCon
ctx, span := trace.StartSpan(ctx, "beacon.SubmitBlindedBlockSSZ")
defer span.End()
if err := rpchelpers.ValidateSync(ctx, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher); err != nil {
// We simply return the error because it's already a gRPC error.
return nil, err
}
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read"+versionHeader+" header")
@@ -202,31 +216,84 @@ func (bs *Server) SubmitBlindedBlockSSZ(ctx context.Context, req *ethpbv2.SSZCon
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not unmarshal request data into block: %v", err)
}
if block.IsBlinded() {
switch forkVer {
case bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion):
if !block.IsBlinded() {
return nil, status.Error(codes.InvalidArgument, "Submitted block is not blinded")
}
b, err := block.PbBlindedCapellaBlock()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedCapella{
BlindedCapella: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
case bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion):
if !block.IsBlinded() {
return nil, status.Error(codes.InvalidArgument, "Submitted block is not blinded")
}
b, err := block.PbBlindedBellatrixBlock()
if err != nil {
b, err := block.PbBlindedCapellaBlock()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get blinded block: %v", err)
}
bb, err := migration.V1Alpha1BeaconBlockBlindedCapellaToV2Blinded(b.Block)
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not migrate block: %v", err)
}
return &emptypb.Empty{}, bs.submitBlindedCapellaBlock(ctx, bb, b.Signature)
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
bb, err := migration.V1Alpha1BeaconBlockBlindedBellatrixToV2Blinded(b.Block)
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{
BlindedBellatrix: b,
},
})
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not migrate block: %v", err)
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, bs.submitBlindedBellatrixBlock(ctx, bb, b.Signature)
return &emptypb.Empty{}, nil
case bytesutil.ToBytes4(params.BeaconConfig().AltairForkVersion):
b, err := block.PbAltairBlock()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
case bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion):
b, err := block.PbPhase0Block()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
default:
return &emptypb.Empty{}, status.Errorf(codes.InvalidArgument, "Unsupported fork %s", string(forkVer[:]))
}
root, err := block.Block().HashTreeRoot()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not compute block's hash tree root: %v", err)
}
return &emptypb.Empty{}, bs.submitBlock(ctx, root, block)
}
func getBlindedBlockPhase0(blk interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.BlindedBlockResponse, error) {
@@ -570,7 +637,7 @@ func (bs *Server) submitBlindedBellatrixBlock(ctx context.Context, blindedBellat
Signature: sig,
})
if err != nil {
return status.Errorf(codes.Internal, "Could not get blinded block: %v", err)
return status.Errorf(codes.Internal, "Could not convert block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{
@@ -578,6 +645,9 @@ func (bs *Server) submitBlindedBellatrixBlock(ctx context.Context, blindedBellat
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return status.Error(codes.InvalidArgument, err.Error())
}
return status.Errorf(codes.Internal, "Could not propose blinded block: %v", err)
}
return nil
@@ -589,7 +659,7 @@ func (bs *Server) submitBlindedCapellaBlock(ctx context.Context, blindedCapellaB
Signature: sig,
})
if err != nil {
return status.Errorf(codes.Internal, "Could not get blinded block: %v", err)
return status.Errorf(codes.Internal, "Could not convert block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedCapella{
@@ -597,6 +667,9 @@ func (bs *Server) submitBlindedCapellaBlock(ctx context.Context, blindedCapellaB
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return status.Error(codes.InvalidArgument, err.Error())
}
return status.Errorf(codes.Internal, "Could not propose blinded block: %v", err)
}
return nil

View File

@@ -4,22 +4,17 @@ import (
"context"
"testing"
"github.com/golang/mock/gomock"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
builderTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/builder/testing"
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/synccommittee"
mockp2p "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
mockSync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
mock2 "github.com/prysmaticlabs/prysm/v4/testing/mock"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"google.golang.org/grpc/metadata"
@@ -307,383 +302,214 @@ func TestServer_GetBlindedBlockSSZ(t *testing.T) {
})
}
func TestServer_SubmitBlindedBlockSSZ_OK(t *testing.T) {
func TestServer_SubmitBlindedBlockSSZ(t *testing.T) {
ctrl := gomock.NewController(t)
ctx := context.Background()
t.Run("Phase 0", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlock()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlock()
req.Block.Slot = 5
req.Block.ParentRoot = bsRoot[:]
util.SaveBlock(t, ctx, beaconDB, req)
blockSsz, err := req.MarshalSSZ()
b := util.NewBeaconBlock()
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: blockSsz,
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "phase0")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = beaconChainServer.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
})
t.Run("Altair", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockAltair()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlockAltair()
req.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().AltairForkEpoch))
req.Block.ParentRoot = bsRoot[:]
util.SaveBlock(t, ctx, beaconDB, req)
blockSsz, err := req.MarshalSSZ()
b := util.NewBeaconBlockAltair()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().AltairForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: blockSsz,
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "altair")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = beaconChainServer.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
})
t.Run("Bellatrix", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockBellatrix()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
alphaServer := &validator.Server{
SyncCommitteePool: synccommittee.NewStore(),
P2P: &mockp2p.MockBroadcaster{},
BlockBuilder: &builderTest.MockBuilderService{},
BlockReceiver: c,
BlockNotifier: &mock.MockBlockNotifier{},
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
V1Alpha1ValidatorServer: alphaServer,
}
req := util.NewBlindedBeaconBlockBellatrix()
req.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().BellatrixForkEpoch))
req.Block.ParentRoot = bsRoot[:]
util.SaveBlock(t, ctx, beaconDB, req)
blockSsz, err := req.MarshalSSZ()
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().BellatrixForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: blockSsz,
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = beaconChainServer.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
})
t.Run("Capella", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockCapella()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
alphaServer := &validator.Server{
SyncCommitteePool: synccommittee.NewStore(),
P2P: &mockp2p.MockBroadcaster{},
BlockBuilder: &builderTest.MockBuilderService{},
BlockReceiver: c,
BlockNotifier: &mock.MockBlockNotifier{},
t.Run("Bellatrix full", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
V1Alpha1ValidatorServer: alphaServer,
}
req := util.NewBlindedBeaconBlockCapella()
req.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().CapellaForkEpoch))
req.Block.ParentRoot = bsRoot[:]
util.SaveBlock(t, ctx, beaconDB, req)
blockSsz, err := req.MarshalSSZ()
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().BellatrixForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: blockSsz,
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
})
t.Run("Capella", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
b := util.NewBlindedBeaconBlockCapella()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().CapellaForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = beaconChainServer.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
})
t.Run("Capella full", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
b := util.NewBeaconBlockCapella()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().CapellaForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlindedBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
})
t.Run("sync not ready", func(t *testing.T) {
chainService := &mock.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
}
_, err := v1Server.SubmitBlindedBlockSSZ(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}
func TestSubmitBlindedBlock(t *testing.T) {
ctrl := gomock.NewController(t)
t.Run("Phase 0", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlock()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlock()
req.Block.Slot = 5
req.Block.ParentRoot = bsRoot[:]
v1Block, err := migration.V1Alpha1ToV1SignedBlock(req)
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, req)
blockReq := &ethpbv2.SignedBlindedBeaconBlockContainer{
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_Phase0Block{Phase0Block: v1Block.Block},
Signature: v1Block.Signature,
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_Phase0Block{Phase0Block: &ethpbv1.BeaconBlock{}},
Signature: []byte("sig"),
}
_, err = beaconChainServer.SubmitBlindedBlock(context.Background(), blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err := server.SubmitBlindedBlock(context.Background(), blockReq)
assert.NoError(t, err)
})
t.Run("Altair", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockAltair()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlockAltair()
req.Block.Slot = 5
req.Block.ParentRoot = bsRoot[:]
v2Block, err := migration.V1Alpha1BeaconBlockAltairToV2(req.Block)
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, req)
blockReq := &ethpbv2.SignedBlindedBeaconBlockContainer{
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_AltairBlock{AltairBlock: v2Block},
Signature: req.Signature,
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_AltairBlock{AltairBlock: &ethpbv2.BeaconBlockAltair{}},
Signature: []byte("sig"),
}
_, err = beaconChainServer.SubmitBlindedBlock(context.Background(), blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err := server.SubmitBlindedBlock(context.Background(), blockReq)
assert.NoError(t, err)
})
t.Run("Bellatrix", func(t *testing.T) {
transactions := [][]byte{[]byte("transaction1"), []byte("transaction2")}
transactionsRoot, err := ssz.TransactionsRoot(transactions)
require.NoError(t, err)
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockBellatrix()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
alphaServer := &validator.Server{
SyncCommitteePool: synccommittee.NewStore(),
P2P: &mockp2p.MockBroadcaster{},
BlockBuilder: &builderTest.MockBuilderService{},
BlockReceiver: c,
BlockNotifier: &mock.MockBlockNotifier{},
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
V1Alpha1ValidatorServer: alphaServer,
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 5
blk.Block.ParentRoot = bsRoot[:]
blk.Block.Body.ExecutionPayload.Transactions = transactions
blindedBlk := util.NewBlindedBeaconBlockBellatrixV2()
blindedBlk.Message.Slot = 5
blindedBlk.Message.ParentRoot = bsRoot[:]
blindedBlk.Message.Body.ExecutionPayloadHeader.TransactionsRoot = transactionsRoot[:]
util.SaveBlock(t, ctx, beaconDB, blk)
blockReq := &ethpbv2.SignedBlindedBeaconBlockContainer{
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_BellatrixBlock{BellatrixBlock: blindedBlk.Message},
Signature: blindedBlk.Signature,
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_BellatrixBlock{BellatrixBlock: &ethpbv2.BlindedBeaconBlockBellatrix{}},
Signature: []byte("sig"),
}
_, err = beaconChainServer.SubmitBlindedBlock(context.Background(), blockReq)
_, err := server.SubmitBlindedBlock(context.Background(), blockReq)
assert.NoError(t, err)
})
t.Run("Capella", func(t *testing.T) {
transactions := [][]byte{[]byte("transaction1"), []byte("transaction2")}
transactionsRoot, err := ssz.TransactionsRoot(transactions)
require.NoError(t, err)
withdrawals := []*enginev1.Withdrawal{
{
Index: 1,
ValidatorIndex: 1,
Address: bytesutil.PadTo([]byte("address1"), 20),
Amount: 1,
},
{
Index: 2,
ValidatorIndex: 2,
Address: bytesutil.PadTo([]byte("address2"), 20),
Amount: 2,
},
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
withdrawalsRoot, err := ssz.WithdrawalSliceRoot(withdrawals, 16)
require.NoError(t, err)
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockCapella()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
alphaServer := &validator.Server{
SyncCommitteePool: synccommittee.NewStore(),
P2P: &mockp2p.MockBroadcaster{},
BlockBuilder: &builderTest.MockBuilderService{},
BlockReceiver: c,
BlockNotifier: &mock.MockBlockNotifier{},
}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
V1Alpha1ValidatorServer: alphaServer,
}
blk := util.NewBeaconBlockCapella()
blk.Block.Slot = 5
blk.Block.ParentRoot = bsRoot[:]
blk.Block.Body.ExecutionPayload.Transactions = transactions
blk.Block.Body.ExecutionPayload.Withdrawals = withdrawals
blindedBlk := util.NewBlindedBeaconBlockCapellaV2()
blindedBlk.Message.Slot = 5
blindedBlk.Message.ParentRoot = bsRoot[:]
blindedBlk.Message.Body.ExecutionPayloadHeader.TransactionsRoot = transactionsRoot[:]
blindedBlk.Message.Body.ExecutionPayloadHeader.WithdrawalsRoot = withdrawalsRoot[:]
util.SaveBlock(t, ctx, beaconDB, blk)
blockReq := &ethpbv2.SignedBlindedBeaconBlockContainer{
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_CapellaBlock{CapellaBlock: blindedBlk.Message},
Signature: blindedBlk.Signature,
Message: &ethpbv2.SignedBlindedBeaconBlockContainer_CapellaBlock{CapellaBlock: &ethpbv2.BlindedBeaconBlockCapella{}},
Signature: []byte("sig"),
}
_, err = beaconChainServer.SubmitBlindedBlock(context.Background(), blockReq)
_, err := server.SubmitBlindedBlock(context.Background(), blockReq)
assert.NoError(t, err)
})
t.Run("sync not ready", func(t *testing.T) {
chainService := &mock.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
}
_, err := v1Server.SubmitBlindedBlock(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}

View File

@@ -4,16 +4,15 @@ import (
"context"
"fmt"
"strconv"
"strings"
"github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
"github.com/prysmaticlabs/prysm/v4/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v4/consensus-types"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -25,6 +24,7 @@ import (
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/proto/migration"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
"google.golang.org/grpc/codes"
@@ -206,6 +206,11 @@ func (bs *Server) SubmitBlock(ctx context.Context, req *ethpbv2.SignedBeaconBloc
ctx, span := trace.StartSpan(ctx, "beacon.SubmitBlock")
defer span.End()
if err := rpchelpers.ValidateSync(ctx, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher); err != nil {
// We simply return the error because it's already a gRPC error.
return nil, err
}
switch blkContainer := req.Message.(type) {
case *ethpbv2.SignedBeaconBlockContainer_Phase0Block:
if err := bs.submitPhase0Block(ctx, blkContainer.Phase0Block, req.Signature); err != nil {
@@ -241,6 +246,11 @@ func (bs *Server) SubmitBlockSSZ(ctx context.Context, req *ethpbv2.SSZContainer)
ctx, span := trace.StartSpan(ctx, "beacon.SubmitBlockSSZ")
defer span.End()
if err := rpchelpers.ValidateSync(ctx, bs.SyncChecker, bs.HeadFetcher, bs.TimeFetcher, bs.OptimisticModeFetcher); err != nil {
// We simply return the error because it's already a gRPC error.
return nil, err
}
md, ok := metadata.FromIncomingContext(ctx)
if !ok {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not read "+versionHeader+" header")
@@ -262,11 +272,85 @@ func (bs *Server) SubmitBlockSSZ(ctx context.Context, req *ethpbv2.SSZContainer)
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not unmarshal request data into block: %v", err)
}
root, err := block.Block().HashTreeRoot()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not compute block's hash tree root: %v", err)
switch forkVer {
case bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion):
if block.IsBlinded() {
return nil, status.Error(codes.InvalidArgument, "Submitted block is blinded")
}
b, err := block.PbCapellaBlock()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Capella{
Capella: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
case bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion):
if block.IsBlinded() {
return nil, status.Error(codes.InvalidArgument, "Submitted block is blinded")
}
b, err := block.PbBellatrixBlock()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Bellatrix{
Bellatrix: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
case bytesutil.ToBytes4(params.BeaconConfig().AltairForkVersion):
b, err := block.PbAltairBlock()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
case bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion):
b, err := block.PbPhase0Block()
if err != nil {
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not get proto block: %v", err)
}
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: b,
},
})
if err != nil {
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return &emptypb.Empty{}, status.Error(codes.InvalidArgument, err.Error())
}
return &emptypb.Empty{}, status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
return &emptypb.Empty{}, nil
default:
return &emptypb.Empty{}, status.Errorf(codes.InvalidArgument, "Unsupported fork %s", string(forkVer[:]))
}
return &emptypb.Empty{}, bs.submitBlock(ctx, root, block)
}
// GetBlock retrieves block details for given block ID.
@@ -940,36 +1024,39 @@ func (bs *Server) getSSZBlockCapella(ctx context.Context, blk interfaces.ReadOnl
func (bs *Server) submitPhase0Block(ctx context.Context, phase0Blk *ethpbv1.BeaconBlock, sig []byte) error {
v1alpha1Blk, err := migration.V1ToV1Alpha1SignedBlock(&ethpbv1.SignedBeaconBlock{Block: phase0Blk, Signature: sig})
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not convert block to v1 block")
return status.Errorf(codes.InvalidArgument, "Could not convert block: %v", err)
}
wrappedPhase0Blk, err := blocks.NewSignedBeaconBlock(v1alpha1Blk)
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: v1alpha1Blk,
},
})
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not prepare block")
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return status.Error(codes.InvalidArgument, err.Error())
}
return status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
root, err := phase0Blk.HashTreeRoot()
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not tree hash block: %v", err)
}
return bs.submitBlock(ctx, root, wrappedPhase0Blk)
return nil
}
func (bs *Server) submitAltairBlock(ctx context.Context, altairBlk *ethpbv2.BeaconBlockAltair, sig []byte) error {
v1alpha1Blk, err := migration.AltairToV1Alpha1SignedBlock(&ethpbv2.SignedBeaconBlockAltair{Message: altairBlk, Signature: sig})
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not convert block to v1 block")
return status.Errorf(codes.InvalidArgument, "Could not convert block %v", err)
}
wrappedAltairBlk, err := blocks.NewSignedBeaconBlock(v1alpha1Blk)
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: v1alpha1Blk,
},
})
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not prepare block")
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return status.Error(codes.InvalidArgument, err.Error())
}
return status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
root, err := altairBlk.HashTreeRoot()
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not tree hash block: %v", err)
}
return bs.submitBlock(ctx, root, wrappedAltairBlk)
return nil
}
func (bs *Server) submitBellatrixBlock(ctx context.Context, bellatrixBlk *ethpbv2.BeaconBlockBellatrix, sig []byte) error {
@@ -977,17 +1064,18 @@ func (bs *Server) submitBellatrixBlock(ctx context.Context, bellatrixBlk *ethpbv
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not convert block to v1 block")
}
wrappedBellatrixBlk, err := blocks.NewSignedBeaconBlock(v1alpha1Blk)
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Bellatrix{
Bellatrix: v1alpha1Blk,
},
})
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not prepare block")
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return status.Error(codes.InvalidArgument, err.Error())
}
return status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
root, err := bellatrixBlk.HashTreeRoot()
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not tree hash block: %v", err)
}
return bs.submitBlock(ctx, root, wrappedBellatrixBlk)
return nil
}
func (bs *Server) submitCapellaBlock(ctx context.Context, capellaBlk *ethpbv2.BeaconBlockCapella, sig []byte) error {
@@ -995,42 +1083,16 @@ func (bs *Server) submitCapellaBlock(ctx context.Context, capellaBlk *ethpbv2.Be
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not convert block to v1 block")
}
wrappedCapellaBlk, err := blocks.NewSignedBeaconBlock(v1alpha1Blk)
_, err = bs.V1Alpha1ValidatorServer.ProposeBeaconBlock(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Capella{
Capella: v1alpha1Blk,
},
})
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not prepare block")
if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
return status.Error(codes.InvalidArgument, err.Error())
}
return status.Errorf(codes.Internal, "Could not propose block: %v", err)
}
root, err := capellaBlk.HashTreeRoot()
if err != nil {
return status.Errorf(codes.InvalidArgument, "Could not tree hash block: %v", err)
}
return bs.submitBlock(ctx, root, wrappedCapellaBlk)
}
func (bs *Server) submitBlock(ctx context.Context, blockRoot [fieldparams.RootLength]byte, block interfaces.ReadOnlySignedBeaconBlock) error {
// Do not block proposal critical path with debug logging or block feed updates.
defer func() {
log.WithField("blockRoot", fmt.Sprintf("%#x", bytesutil.Trunc(blockRoot[:]))).Debugf(
"Block proposal received via RPC")
bs.BlockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: block},
})
}()
// Broadcast the new block to the network.
blockPb, err := block.Proto()
if err != nil {
return errors.Wrap(err, "could not get protobuf block")
}
if err := bs.Broadcaster.Broadcast(ctx, blockPb); err != nil {
return status.Errorf(codes.Internal, "Could not broadcast block: %v", err)
}
if err := bs.BlockReceiver.ReceiveBlock(ctx, block, blockRoot); err != nil {
return status.Errorf(codes.Internal, "Could not process beacon block: %v", err)
}
return nil
}
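
Each propose path above repeats the same mapping: decode failures surface as InvalidArgument, everything else as Internal. A hypothetical consolidation, not part of this change, just to make the rule explicit:

package beacon

import (
	"strings"

	"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// proposeStatus is a hypothetical helper mapping a ProposeBeaconBlock
// error onto the gRPC codes used throughout this file.
func proposeStatus(err error) error {
	if err == nil {
		return nil
	}
	if strings.Contains(err.Error(), validator.CouldNotDecodeBlock) {
		return status.Error(codes.InvalidArgument, err.Error())
	}
	return status.Errorf(codes.Internal, "Could not propose block: %v", err)
}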

View File

@@ -4,12 +4,13 @@ import (
"context"
"testing"
"github.com/golang/mock/gomock"
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
mockp2p "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
mockSync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync/testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -20,6 +21,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/proto/migration"
ethpbalpha "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
mock2 "github.com/prysmaticlabs/prysm/v4/testing/mock"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"google.golang.org/grpc/metadata"
@@ -347,319 +349,215 @@ func TestServer_ListBlockHeaders(t *testing.T) {
})
}
func TestServer_SubmitBlock_OK(t *testing.T) {
func TestServer_SubmitBlock(t *testing.T) {
ctrl := gomock.NewController(t)
t.Run("Phase 0", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlock()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlock()
req.Block.Slot = 5
req.Block.ParentRoot = bsRoot[:]
v1Block, err := migration.V1Alpha1ToV1SignedBlock(req)
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, req)
blockReq := &ethpbv2.SignedBeaconBlockContainer{
Message: &ethpbv2.SignedBeaconBlockContainer_Phase0Block{Phase0Block: v1Block.Block},
Signature: v1Block.Signature,
Message: &ethpbv2.SignedBeaconBlockContainer_Phase0Block{Phase0Block: &ethpbv1.BeaconBlock{}},
Signature: []byte("sig"),
}
_, err = beaconChainServer.SubmitBlock(context.Background(), blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err := server.SubmitBlock(context.Background(), blockReq)
assert.NoError(t, err)
})
t.Run("Altair", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockAltair()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlockAltair()
req.Block.Slot = 5
req.Block.ParentRoot = bsRoot[:]
v2Block, err := migration.V1Alpha1BeaconBlockAltairToV2(req.Block)
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, req)
blockReq := &ethpbv2.SignedBeaconBlockContainer{
Message: &ethpbv2.SignedBeaconBlockContainer_AltairBlock{AltairBlock: v2Block},
Signature: req.Signature,
Message: &ethpbv2.SignedBeaconBlockContainer_AltairBlock{AltairBlock: &ethpbv2.BeaconBlockAltair{}},
Signature: []byte("sig"),
}
_, err = beaconChainServer.SubmitBlock(context.Background(), blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err := server.SubmitBlock(context.Background(), blockReq)
assert.NoError(t, err)
})
t.Run("Bellatrix", func(t *testing.T) {
- beaconDB := dbTest.SetupDB(t)
- ctx := context.Background()
- genesis := util.NewBeaconBlockBellatrix()
- util.SaveBlock(t, context.Background(), beaconDB, genesis)
- numDeposits := uint64(64)
- beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
- bsRoot, err := beaconState.HashTreeRoot(ctx)
- require.NoError(t, err)
- genesisRoot, err := genesis.Block.HashTreeRoot()
- require.NoError(t, err)
- require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
- c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
- beaconChainServer := &Server{
- BeaconDB: beaconDB,
- BlockReceiver: c,
- ChainInfoFetcher: c,
- BlockNotifier: c.BlockNotifier(),
- Broadcaster: mockp2p.NewTestP2P(t),
- HeadFetcher: c,
+ v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
+ v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
+ server := &Server{
+ V1Alpha1ValidatorServer: v1alpha1Server,
+ SyncChecker: &mockSync.Sync{IsSyncing: false},
}
- req := util.NewBeaconBlockBellatrix()
- req.Block.Slot = 5
- req.Block.ParentRoot = bsRoot[:]
- v2Block, err := migration.V1Alpha1BeaconBlockBellatrixToV2(req.Block)
- require.NoError(t, err)
- util.SaveBlock(t, ctx, beaconDB, req)
blockReq := &ethpbv2.SignedBeaconBlockContainer{
- Message: &ethpbv2.SignedBeaconBlockContainer_BellatrixBlock{BellatrixBlock: v2Block},
- Signature: req.Signature,
+ Message: &ethpbv2.SignedBeaconBlockContainer_BellatrixBlock{BellatrixBlock: &ethpbv2.BeaconBlockBellatrix{}},
+ Signature: []byte("sig"),
}
- _, err = beaconChainServer.SubmitBlock(context.Background(), blockReq)
- assert.NoError(t, err, "Could not propose block correctly")
+ _, err := server.SubmitBlock(context.Background(), blockReq)
+ assert.NoError(t, err)
})
t.Run("Capella", func(t *testing.T) {
- beaconDB := dbTest.SetupDB(t)
- ctx := context.Background()
- genesis := util.NewBeaconBlockCapella()
- util.SaveBlock(t, context.Background(), beaconDB, genesis)
- numDeposits := uint64(64)
- beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
- bsRoot, err := beaconState.HashTreeRoot(ctx)
- require.NoError(t, err)
- genesisRoot, err := genesis.Block.HashTreeRoot()
- require.NoError(t, err)
- require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
- c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
- beaconChainServer := &Server{
- BeaconDB: beaconDB,
- BlockReceiver: c,
- ChainInfoFetcher: c,
- BlockNotifier: c.BlockNotifier(),
- Broadcaster: mockp2p.NewTestP2P(t),
- HeadFetcher: c,
+ v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
+ v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
+ server := &Server{
+ V1Alpha1ValidatorServer: v1alpha1Server,
+ SyncChecker: &mockSync.Sync{IsSyncing: false},
}
- req := util.NewBeaconBlockCapella()
- req.Block.Slot = 5
- req.Block.ParentRoot = bsRoot[:]
- v2Block, err := migration.V1Alpha1BeaconBlockCapellaToV2(req.Block)
- require.NoError(t, err)
- util.SaveBlock(t, ctx, beaconDB, req)
blockReq := &ethpbv2.SignedBeaconBlockContainer{
- Message: &ethpbv2.SignedBeaconBlockContainer_CapellaBlock{CapellaBlock: v2Block},
- Signature: req.Signature,
+ Message: &ethpbv2.SignedBeaconBlockContainer_CapellaBlock{CapellaBlock: &ethpbv2.BeaconBlockCapella{}},
+ Signature: []byte("sig"),
}
- _, err = beaconChainServer.SubmitBlock(context.Background(), blockReq)
- assert.NoError(t, err, "Could not propose block correctly")
+ _, err := server.SubmitBlock(context.Background(), blockReq)
+ assert.NoError(t, err)
})
t.Run("sync not ready", func(t *testing.T) {
chainService := &mock.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
}
_, err := v1Server.SubmitBlock(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}
-func TestServer_SubmitBlockSSZ_OK(t *testing.T) {
+func TestServer_SubmitBlockSSZ(t *testing.T) {
ctrl := gomock.NewController(t)
ctx := context.Background()
t.Run("Phase 0", func(t *testing.T) {
- beaconDB := dbTest.SetupDB(t)
- ctx := context.Background()
- genesis := util.NewBeaconBlock()
- util.SaveBlock(t, context.Background(), beaconDB, genesis)
- numDeposits := uint64(64)
- beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
- bsRoot, err := beaconState.HashTreeRoot(ctx)
- require.NoError(t, err)
- genesisRoot, err := genesis.Block.HashTreeRoot()
- require.NoError(t, err)
- require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
- c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
- beaconChainServer := &Server{
- BeaconDB: beaconDB,
- BlockReceiver: c,
- ChainInfoFetcher: c,
- BlockNotifier: c.BlockNotifier(),
- Broadcaster: mockp2p.NewTestP2P(t),
- HeadFetcher: c,
+ v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
+ v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
+ server := &Server{
+ V1Alpha1ValidatorServer: v1alpha1Server,
+ SyncChecker: &mockSync.Sync{IsSyncing: false},
}
- req := util.NewBeaconBlock()
- req.Block.Slot = 5
- req.Block.ParentRoot = bsRoot[:]
- util.SaveBlock(t, ctx, beaconDB, req)
- blockSsz, err := req.MarshalSSZ()
+ b := util.NewBeaconBlock()
+ ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
- Data: blockSsz,
+ Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "phase0")
sszCtx := metadata.NewIncomingContext(ctx, md)
- _, err = beaconChainServer.SubmitBlockSSZ(sszCtx, blockReq)
- assert.NoError(t, err, "Could not propose block correctly")
+ _, err = server.SubmitBlockSSZ(sszCtx, blockReq)
+ assert.NoError(t, err)
})
t.Run("Altair", func(t *testing.T) {
- beaconDB := dbTest.SetupDB(t)
- ctx := context.Background()
- genesis := util.NewBeaconBlockAltair()
- util.SaveBlock(t, context.Background(), beaconDB, genesis)
- numDeposits := uint64(64)
- beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
- bsRoot, err := beaconState.HashTreeRoot(ctx)
- require.NoError(t, err)
- genesisRoot, err := genesis.Block.HashTreeRoot()
- require.NoError(t, err)
- require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
- c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
- beaconChainServer := &Server{
- BeaconDB: beaconDB,
- BlockReceiver: c,
- ChainInfoFetcher: c,
- BlockNotifier: c.BlockNotifier(),
- Broadcaster: mockp2p.NewTestP2P(t),
- HeadFetcher: c,
+ v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
+ v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
+ server := &Server{
+ V1Alpha1ValidatorServer: v1alpha1Server,
+ SyncChecker: &mockSync.Sync{IsSyncing: false},
}
- req := util.NewBeaconBlockAltair()
- req.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().AltairForkEpoch))
- req.Block.ParentRoot = bsRoot[:]
- util.SaveBlock(t, ctx, beaconDB, req)
- blockSsz, err := req.MarshalSSZ()
+ b := util.NewBeaconBlockAltair()
+ b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().AltairForkEpoch))
+ ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
- Data: blockSsz,
+ Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "altair")
sszCtx := metadata.NewIncomingContext(ctx, md)
- _, err = beaconChainServer.SubmitBlockSSZ(sszCtx, blockReq)
- assert.NoError(t, err, "Could not propose block correctly")
+ _, err = server.SubmitBlockSSZ(sszCtx, blockReq)
+ assert.NoError(t, err)
})
t.Run("Bellatrix", func(t *testing.T) {
- beaconDB := dbTest.SetupDB(t)
- ctx := context.Background()
- genesis := util.NewBeaconBlockBellatrix()
- util.SaveBlock(t, context.Background(), beaconDB, genesis)
- numDeposits := uint64(64)
- beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
- bsRoot, err := beaconState.HashTreeRoot(ctx)
- require.NoError(t, err)
- genesisRoot, err := genesis.Block.HashTreeRoot()
- require.NoError(t, err)
- require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
- c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
- beaconChainServer := &Server{
- BeaconDB: beaconDB,
- BlockReceiver: c,
- ChainInfoFetcher: c,
- BlockNotifier: c.BlockNotifier(),
- Broadcaster: mockp2p.NewTestP2P(t),
- HeadFetcher: c,
+ v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
+ v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
+ server := &Server{
+ V1Alpha1ValidatorServer: v1alpha1Server,
+ SyncChecker: &mockSync.Sync{IsSyncing: false},
}
- req := util.NewBeaconBlockBellatrix()
- req.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().BellatrixForkEpoch))
- req.Block.ParentRoot = bsRoot[:]
- util.SaveBlock(t, ctx, beaconDB, req)
- blockSsz, err := req.MarshalSSZ()
+ b := util.NewBeaconBlockBellatrix()
+ b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().BellatrixForkEpoch))
+ ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
- Data: blockSsz,
+ Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
- _, err = beaconChainServer.SubmitBlockSSZ(sszCtx, blockReq)
- assert.NoError(t, err, "Could not propose block correctly")
+ _, err = server.SubmitBlockSSZ(sszCtx, blockReq)
+ assert.NoError(t, err)
})
t.Run("Capella", func(t *testing.T) {
beaconDB := dbTest.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlockCapella()
util.SaveBlock(t, context.Background(), beaconDB, genesis)
numDeposits := uint64(64)
beaconState, _ := util.DeterministicGenesisState(t, numDeposits)
bsRoot, err := beaconState.HashTreeRoot(ctx)
require.NoError(t, err)
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, beaconState, genesisRoot), "Could not save genesis state")
c := &mock.ChainService{Root: bsRoot[:], State: beaconState}
beaconChainServer := &Server{
BeaconDB: beaconDB,
BlockReceiver: c,
ChainInfoFetcher: c,
BlockNotifier: c.BlockNotifier(),
Broadcaster: mockp2p.NewTestP2P(t),
HeadFetcher: c,
t.Run("Bellatrix blinded", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.NewBeaconBlockCapella()
req.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().CapellaForkEpoch))
req.Block.ParentRoot = bsRoot[:]
util.SaveBlock(t, ctx, beaconDB, req)
blockSsz, err := req.MarshalSSZ()
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().BellatrixForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: blockSsz,
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "bellatrix")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
})
t.Run("Capella", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), gomock.Any())
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
b := util.NewBeaconBlockCapella()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().CapellaForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = beaconChainServer.SubmitBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err, "Could not propose block correctly")
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NoError(t, err)
})
t.Run("Capella blinded", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
b := util.NewBlindedBeaconBlockCapella()
b.Block.Slot = params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().CapellaForkEpoch))
ssz, err := b.MarshalSSZ()
require.NoError(t, err)
blockReq := &ethpbv2.SSZContainer{
Data: ssz,
}
md := metadata.MD{}
md.Set(versionHeader, "capella")
sszCtx := metadata.NewIncomingContext(ctx, md)
_, err = server.SubmitBlockSSZ(sszCtx, blockReq)
assert.NotNil(t, err)
})
t.Run("sync not ready", func(t *testing.T) {
chainService := &mock.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
}
_, err := v1Server.SubmitBlockSSZ(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}
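
The SSZ tests above all follow the same pattern: the fork name travels in a gRPC metadata header that the server reads before choosing an SSZ decoder. A minimal sketch of that mechanic, assuming the versionHeader constant resolves to the beacon API's "eth-consensus-version" key (the exact key string is not shown in this diff):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

func main() {
	// Client side: attach the fork name, as the tests do with md.Set(versionHeader, ...).
	md := metadata.MD{}
	md.Set("eth-consensus-version", "capella") // assumed header key
	ctx := metadata.NewIncomingContext(context.Background(), md)

	// Server side: recover the fork name before decoding the SSZ bytes.
	if in, ok := metadata.FromIncomingContext(ctx); ok {
		fmt.Println(in.Get("eth-consensus-version")) // [capella]
	}
}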

View File

@@ -1183,7 +1183,7 @@ func TestServer_SubmitAttestations_InvalidAttestationGRPCHeader(t *testing.T) {
require.Equal(t, true, ok, "could not retrieve custom error metadata value")
assert.DeepEqual(
t,
[]string{"{\"failures\":[{\"index\":0,\"message\":\"Incorrect attestation signature: signature must be 96 bytes\"}]}"},
[]string{"{\"failures\":[{\"index\":0,\"message\":\"Incorrect attestation signature: could not create signature from byte slice: signature must be 96 bytes\"}]}"},
v,
)
}

View File

@@ -15,9 +15,9 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
-v1alpha1validator "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/v1alpha1/validator"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// Server defines a server implementation of the gRPC Beacon Chain service,
@@ -37,8 +37,9 @@ type Server struct {
Stater lookup.Stater
Blocker lookup.Blocker
HeadFetcher blockchain.HeadFetcher
+TimeFetcher blockchain.TimeFetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
-V1Alpha1ValidatorServer *v1alpha1validator.Server
+V1Alpha1ValidatorServer eth.BeaconNodeValidatorServer
SyncChecker sync.Checker
CanonicalHistory *stategen.CanonicalHistory
ExecutionPayloadReconstructor execution.ExecutionPayloadReconstructor

View File

@@ -809,16 +809,14 @@ func TestProduceBlockV2(t *testing.T) {
require.ErrorContains(t, "The node is currently optimistic and cannot serve validators", err)
})
t.Run("sync not ready", func(t *testing.T) {
- st, err := util.NewBeaconState()
- require.NoError(t, err)
- chainService := &mockChain.ChainService{State: st}
+ chainService := &mockChain.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
}
- _, err = v1Server.ProduceBlockV2(context.Background(), nil)
+ _, err := v1Server.ProduceBlockV2(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}
@@ -938,16 +936,14 @@ func TestProduceBlockV2SSZ(t *testing.T) {
require.ErrorContains(t, "The node is currently optimistic and cannot serve validators", err)
})
t.Run("sync not ready", func(t *testing.T) {
- st, err := util.NewBeaconState()
- require.NoError(t, err)
- chainService := &mockChain.ChainService{State: st}
+ chainService := &mockChain.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
}
- _, err = v1Server.ProduceBlockV2SSZ(context.Background(), nil)
+ _, err := v1Server.ProduceBlockV2SSZ(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}
@@ -1080,9 +1076,7 @@ func TestProduceBlindedBlock(t *testing.T) {
require.ErrorContains(t, "Block builder not configured", err)
})
t.Run("sync not ready", func(t *testing.T) {
- st, err := util.NewBeaconState()
- require.NoError(t, err)
- chainService := &mockChain.ChainService{State: st}
+ chainService := &mockChain.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
@@ -1090,7 +1084,7 @@ func TestProduceBlindedBlock(t *testing.T) {
OptimisticModeFetcher: chainService,
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: true},
}
- _, err = v1Server.ProduceBlindedBlock(context.Background(), nil)
+ _, err := v1Server.ProduceBlindedBlock(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}
@@ -1223,9 +1217,7 @@ func TestProduceBlindedBlockSSZ(t *testing.T) {
require.ErrorContains(t, "Block builder not configured", err)
})
t.Run("sync not ready", func(t *testing.T) {
- st, err := util.NewBeaconState()
- require.NoError(t, err)
- chainService := &mockChain.ChainService{State: st}
+ chainService := &mockChain.ChainService{}
v1Server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
HeadFetcher: chainService,
@@ -1233,7 +1225,7 @@ func TestProduceBlindedBlockSSZ(t *testing.T) {
OptimisticModeFetcher: chainService,
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: true},
}
- _, err = v1Server.ProduceBlindedBlockSSZ(context.Background(), nil)
+ _, err := v1Server.ProduceBlindedBlockSSZ(context.Background(), nil)
require.ErrorContains(t, "Syncing to latest head", err)
})
}

View File

@@ -22,7 +22,7 @@ func (ds *Server) GetPeer(_ context.Context, peerReq *ethpb.PeerRequest) (*ethpb
return ds.getPeer(pid)
}
-// ListPeers returns all peers known to the host node, irregardless of if they are connected/
+// ListPeers returns all peers known to the host node, regardless of if they are connected/
// disconnected.
func (ds *Server) ListPeers(_ context.Context, _ *empty.Empty) (*ethpb.DebugPeerResponses, error) {
var responses []*ethpb.DebugPeerResponse

View File

@@ -37,7 +37,11 @@ import (
// eth1DataNotification is a latch to stop flooding logs with the same warning.
var eth1DataNotification bool
-const eth1dataTimeout = 2 * time.Second
+const (
+ // CouldNotDecodeBlock means that a signed beacon block couldn't be created from the block present in the request.
+ CouldNotDecodeBlock = "Could not decode block"
+ eth1dataTimeout = 2 * time.Second
+)
// GetBeaconBlock is called by a proposer during its assigned slot to request a block to sign
// by passing in the slot and the signed randao reveal of the slot.
@@ -221,7 +225,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
defer span.End()
blk, err := blocks.NewSignedBeaconBlock(req.Block)
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "Could not decode block: %v", err)
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", CouldNotDecodeBlock, err)
}
return vs.proposeGenericBeaconBlock(ctx, blk)
}
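
Hoisting the message into the exported CouldNotDecodeBlock constant lets callers and tests match a stable prefix while the wrapped error keeps the underlying decode detail. A sketch of the resulting error shape, with a stand-in decoder in place of blocks.NewSignedBeaconBlock:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

const CouldNotDecodeBlock = "Could not decode block"

// decode stands in for blocks.NewSignedBeaconBlock failing on a bad container.
func decode([]byte) (interface{}, error) {
	return nil, errors.New("unsupported block container")
}

func main() {
	if _, err := decode(nil); err != nil {
		grpcErr := status.Errorf(codes.InvalidArgument, "%s: %v", CouldNotDecodeBlock, err)
		fmt.Println(grpcErr)
		// rpc error: code = InvalidArgument desc = Could not decode block: unsupported block container
	}
}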

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"strings"
"time"
"github.com/pkg/errors"
@@ -20,6 +21,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v4/network/forks"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -125,7 +127,6 @@ func (vs *Server) setExecutionData(ctx context.Context, blk interfaces.SignedBea
}
}
}
executionData, err := vs.getExecutionPayload(ctx, slot, idx, blk.Block().ParentRoot(), headState)
if err != nil {
return errors.Wrap(err, "failed to get execution payload")
@@ -167,9 +168,18 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot primitiv
if signedBid.IsNil() {
return nil, errors.New("builder returned nil bid")
}
-if signedBid.Version() != b.Version() {
- return nil, fmt.Errorf("builder bid response version: %d is different from head block version: %d", signedBid.Version(), b.Version())
+fork, err := forks.Fork(slots.ToEpoch(slot))
+if err != nil {
+ return nil, errors.Wrap(err, "unable to get fork information")
+}
+forkName, ok := params.BeaconConfig().ForkVersionNames[bytesutil.ToBytes4(fork.CurrentVersion)]
+if !ok {
+ return nil, errors.New("unable to find current fork in schedule")
+}
+if !strings.EqualFold(version.String(signedBid.Version()), forkName) {
+ return nil, fmt.Errorf("builder bid response version: %d is different from head block version: %d for epoch %d", signedBid.Version(), b.Version(), slots.ToEpoch(slot))
}
bid, err := signedBid.Message()
if err != nil {
return nil, errors.Wrap(err, "could not get bid")
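
The new check compares fork names rather than the head block's version, so a builder bid is accepted whenever its version matches the fork scheduled for the proposal epoch, even while the head block is still on the previous fork. A minimal sketch of just the comparison, with stand-ins for the forks/params lookups:

package main

import (
	"fmt"
	"strings"
)

// checkBidVersion mirrors the strings.EqualFold comparison above; bidVersion
// stands in for version.String(signedBid.Version()) and forkName for the
// params.BeaconConfig().ForkVersionNames lookup at the proposal epoch.
func checkBidVersion(bidVersion, forkName string) error {
	if !strings.EqualFold(bidVersion, forkName) {
		return fmt.Errorf("builder bid version %q does not match fork %q", bidVersion, forkName)
	}
	return nil
}

func main() {
	fmt.Println(checkBidVersion("CAPELLA", "capella"))   // <nil>: case-insensitive match
	fmt.Println(checkBidVersion("bellatrix", "capella")) // mismatch: bid rejected
}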

View File

@@ -231,11 +231,18 @@ func TestServer_setExecutionData(t *testing.T) {
})
}
func TestServer_getPayloadHeader(t *testing.T) {
+genesis := time.Now().Add(-time.Duration(params.BeaconConfig().SlotsPerEpoch) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
-params.SetupTestConfigCleanup(t)
-bc := params.BeaconConfig()
-bc.BellatrixForkEpoch = 1
-params.OverrideBeaconConfig(bc)
+fakeCapellaEpoch := primitives.Epoch(10)
+params.SetupTestConfigCleanup(t)
+cfg := params.BeaconConfig().Copy()
+cfg.CapellaForkVersion = []byte{'A', 'B', 'C', 'Z'}
+cfg.CapellaForkEpoch = fakeCapellaEpoch
+cfg.InitializeForkSchedule()
+params.OverrideBeaconConfig(cfg)
emptyRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
ti, err := slots.ToTime(uint64(time.Now().Unix()), 0)
@@ -268,15 +275,50 @@ func TestServer_getPayloadHeader(t *testing.T) {
Message: bid,
Signature: sk.Sign(sr[:]).Marshal(),
}
withdrawals := []*v1.Withdrawal{{
Index: 1,
ValidatorIndex: 2,
Address: make([]byte, fieldparams.FeeRecipientLength),
Amount: 3,
}}
wr, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
tiCapella, err := slots.ToTime(uint64(genesis.Unix()), primitives.Slot(fakeCapellaEpoch)*params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
bidCapella := &ethpb.BuilderBidCapella{
Header: &v1.ExecutionPayloadHeaderCapella{
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
ParentHash: params.BeaconConfig().ZeroHash[:],
Timestamp: uint64(tiCapella.Unix()),
WithdrawalsRoot: wr[:],
},
Pubkey: sk.PublicKey().Marshal(),
Value: bytesutil.PadTo([]byte{1, 2, 3}, 32),
}
srCapella, err := signing.ComputeSigningRoot(bidCapella, domain)
require.NoError(t, err)
sBidCapella := &ethpb.SignedBuilderBidCapella{
Message: bidCapella,
Signature: sk.Sign(srCapella[:]).Marshal(),
}
require.NoError(t, err)
tests := []struct {
name string
head interfaces.ReadOnlySignedBeaconBlock
mock *builderTest.MockBuilderService
fetcher *blockchainTest.ChainService
err string
returnedHeader *v1.ExecutionPayloadHeader
returnedHeaderCapella *v1.ExecutionPayloadHeaderCapella
}{
{
name: "can't request before bellatrix epoch",
@@ -365,11 +407,41 @@ func TestServer_getPayloadHeader(t *testing.T) {
},
returnedHeader: bid.Header,
},
{
name: "wrong bid version",
mock: &builderTest.MockBuilderService{
BidCapella: sBidCapella,
},
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.ReadOnlySignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
wb.SetSlot(primitives.Slot(params.BeaconConfig().BellatrixForkEpoch) * params.BeaconConfig().SlotsPerEpoch)
return wb
}(),
},
err: "is different from head block version",
},
{
name: "different bid version during hard fork",
mock: &builderTest.MockBuilderService{
BidCapella: sBidCapella,
},
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.ReadOnlySignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
wb.SetSlot(primitives.Slot(fakeCapellaEpoch) * params.BeaconConfig().SlotsPerEpoch)
return wb
}(),
},
returnedHeaderCapella: bidCapella.Header,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
vs := &Server{BlockBuilder: tc.mock, HeadFetcher: tc.fetcher, TimeFetcher: &blockchainTest.ChainService{
-Genesis: time.Now().Add(-time.Duration(params.BeaconConfig().SlotsPerEpoch) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
+Genesis: genesis,
}}
hb, err := vs.HeadFetcher.HeadBlock(context.Background())
require.NoError(t, err)
@@ -383,6 +455,11 @@ func TestServer_getPayloadHeader(t *testing.T) {
require.NoError(t, err)
require.DeepEqual(t, want, h)
}
if tc.returnedHeaderCapella != nil {
want, err := blocks.WrappedExecutionPayloadHeaderCapella(tc.returnedHeaderCapella, 0) // value is a mock
require.NoError(t, err)
require.DeepEqual(t, want, h)
}
}
})
}

View File

@@ -333,6 +333,7 @@ func (s *Service) Start() {
Blocker: blocker,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
HeadFetcher: s.cfg.HeadFetcher,
TimeFetcher: s.cfg.GenesisTimeFetcher,
VoluntaryExitsPool: s.cfg.ExitPool,
V1Alpha1ValidatorServer: validatorServer,
SyncChecker: s.cfg.SyncService,

View File

@@ -409,7 +409,7 @@ func (m *MaxSpanChunksSlice) Update(
// a min span chunk for use in chunk updates. To compute this value, we look at the difference between
// H = historyLength and the current epoch. Then, we check if the source epoch > difference. If so,
// then the start epoch is source epoch - 1. Otherwise, we return to the caller a boolean signifying
-// the input argumets are invalid for the chunk and the start epoch does not exist.
+// the input arguments are invalid for the chunk and the start epoch does not exist.
func (m *MinSpanChunksSlice) StartEpoch(
sourceEpoch, currentEpoch primitives.Epoch,
) (epoch primitives.Epoch, exists bool) {
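
The doc comment above compresses a few steps; a sketch of the rule it describes, using plain uint64 epochs (the real method works with primitives.Epoch and the slice's configured history length), is:

package main

import "fmt"

// startEpoch returns sourceEpoch-1 when the source epoch still falls inside
// the retained history window, and exists=false otherwise.
func startEpoch(sourceEpoch, currentEpoch, historyLength uint64) (uint64, bool) {
	var diff uint64
	if currentEpoch > historyLength {
		diff = currentEpoch - historyLength
	}
	if sourceEpoch <= diff {
		return 0, false // source epoch predates the history window
	}
	return sourceEpoch - 1, true
}

func main() {
	fmt.Println(startEpoch(10, 100, 4096))  // 9 true: history still covers epoch 10
	fmt.Println(startEpoch(10, 5000, 4096)) // 0 false: epoch 10 fell out of the window
}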

View File

@@ -348,13 +348,13 @@ func TestBeaconState_HashTreeRoot(t *testing.T) {
assert.NoError(t, err)
root, err := testState.HashTreeRoot(context.Background())
if err == nil && tt.error != "" {
t.Errorf("Expected error, expected %v, recevied %v", tt.error, err)
t.Errorf("Expected error, expected %v, received %v", tt.error, err)
}
pbState, err := statenative.ProtobufBeaconStatePhase0(testState.ToProtoUnsafe())
require.NoError(t, err)
genericHTR, err := pbState.HashTreeRoot()
if err == nil && tt.error != "" {
t.Errorf("Expected error, expected %v, recevied %v", tt.error, err)
t.Errorf("Expected error, expected %v, received %v", tt.error, err)
}
assert.DeepNotEqual(t, []byte{}, root[:], "Received empty hash tree root")
assert.DeepEqual(t, genericHTR[:], root[:], "Expected hash tree root to match generic")
@@ -435,13 +435,13 @@ func TestBeaconState_HashTreeRoot_FieldTrie(t *testing.T) {
assert.NoError(t, err)
root, err := testState.HashTreeRoot(context.Background())
if err == nil && tt.error != "" {
t.Errorf("Expected error, expected %v, recevied %v", tt.error, err)
t.Errorf("Expected error, expected %v, received %v", tt.error, err)
}
pbState, err := statenative.ProtobufBeaconStatePhase0(testState.ToProtoUnsafe())
require.NoError(t, err)
genericHTR, err := pbState.HashTreeRoot()
if err == nil && tt.error != "" {
t.Errorf("Expected error, expected %v, recevied %v", tt.error, err)
t.Errorf("Expected error, expected %v, received %v", tt.error, err)
}
assert.DeepNotEqual(t, []byte{}, root[:], "Received empty hash tree root")
assert.DeepEqual(t, genericHTR[:], root[:], "Expected hash tree root to match generic")

View File

@@ -36,6 +36,7 @@ go_library(
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -6,6 +6,7 @@ package stategen
import (
"context"
"sync"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
@@ -14,12 +15,15 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/backfill"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/crypto/bls"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"go.opencensus.io/trace"
)
var defaultHotStateDBInterval primitives.Slot = 128
var populatePubkeyCacheOnce sync.Once
// StateManager represents a management object that handles the internal
// logic of maintaining both hot and cold states in DB.
type StateManager interface {
@@ -138,6 +142,24 @@ func (s *State) Resume(ctx context.Context, fState state.BeaconState) (state.Bea
s.finalizedInfo = &finalizedInfo{slot: fState.Slot(), root: fRoot, state: fState.Copy()}
// Pre-populate the pubkey cache with the validator public keys from the finalized state.
// This process takes about 30 seconds on mainnet with 450,000 validators.
go populatePubkeyCacheOnce.Do(func() {
log.Debug("Populating pubkey cache")
start := time.Now()
if err := fState.ReadFromEveryValidator(func(_ int, val state.ReadOnlyValidator) error {
if ctx.Err() != nil {
return ctx.Err()
}
pub := val.PublicKey()
_, err := bls.PublicKeyFromBytes(pub[:])
return err
}); err != nil {
log.WithError(err).Error("Failed to populate pubkey cache")
}
log.WithField("duration", time.Since(start)).Debug("Done populating pubkey cache")
})
return fState, nil
}
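
The warm-up relies on two things: sync.Once, so concurrent Resume calls trigger at most one pass, and the side effect that bls.PublicKeyFromBytes caches each decoded key. A generic sketch of the same pattern, with a stand-in decoder:

package main

import (
	"fmt"
	"sync"
	"time"
)

var warmOnce sync.Once

// warmCache decodes every key once in the background; decodeKey stands in for
// bls.PublicKeyFromBytes, whose caching side effect is what the warm-up wants.
func warmCache(keys [][]byte, decodeKey func([]byte) error) {
	go warmOnce.Do(func() {
		start := time.Now()
		for _, k := range keys {
			if err := decodeKey(k); err != nil {
				fmt.Println("failed to populate pubkey cache:", err)
				return
			}
		}
		fmt.Println("pubkey cache populated in", time.Since(start))
	})
}

func main() {
	warmCache([][]byte{{0x01}, {0x02}}, func([]byte) error { return nil })
	warmCache(nil, nil) // no-op: sync.Once already consumed
	time.Sleep(50 * time.Millisecond)
}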

View File

@@ -237,7 +237,7 @@ func recomputeRootFromLayerVariable(idx int, item [32]byte, layers [][]*[32]byte
return root, layers, nil
}
-// AddInMixin describes a method from which a lenth mixin is added to the
+// AddInMixin describes a method from which a length mixin is added to the
// provided root.
func AddInMixin(root [32]byte, length uint64) ([32]byte, error) {
rootBuf := new(bytes.Buffer)
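
For reference, the SSZ length mix-in that AddInMixin performs is hash(root || uint256-little-endian(length)); a self-contained sketch, assuming SHA-256 as the hash (which is what SSZ merkleization uses):

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// mixInLength hashes the root together with the length encoded as a
// little-endian 256-bit integer, per the SSZ merkleization rules.
func mixInLength(root [32]byte, length uint64) [32]byte {
	var lenBuf [32]byte
	binary.LittleEndian.PutUint64(lenBuf[:8], length) // upper 24 bytes stay zero
	return sha256.Sum256(append(root[:], lenBuf[:]...))
}

func main() {
	var root [32]byte // e.g. the root of a packed byte list
	fmt.Printf("%x\n", mixInLength(root, 3))
}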

View File

@@ -11,7 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
p2pTypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/startup"
prysmsync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -87,7 +86,6 @@ type blocksFetcher struct {
capacityWeight float64 // how remaining capacity affects peer selection
mode syncMode // allows to use fetcher in different sync scenarios
quit chan struct{} // termination notifier
clock *startup.Clock
}
// peerLock restricts fetcher actions on per peer basis. Currently, used for rate limiting.

View File

@@ -552,7 +552,6 @@ func TestBlocksFetcher_RequestBlocksRateLimitingLocks(t *testing.T) {
gt := time.Now()
vr := [32]byte{}
fetcher.chain = &mock.ChainService{Genesis: gt, ValidatorsRoot: vr}
fetcher.clock = startup.NewClock(gt, vr)
hook := logTest.NewGlobal()
wg := new(sync.WaitGroup)
wg.Add(1)
@@ -621,7 +620,6 @@ func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
gt := time.Now()
vr := [32]byte{}
fetcher.chain = &mock.ChainService{Genesis: gt, ValidatorsRoot: vr}
fetcher.clock = startup.NewClock(gt, vr)
start := time.Now()
assert.NoError(t, fetcher.waitForBandwidth(p2.PeerID(), 10))
dur := time.Since(start)

View File

@@ -67,6 +67,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
chainStarted: abool.New(),
counter: ratecounter.NewRateCounter(counterSeconds * time.Second),
genesisChan: make(chan time.Time),
clock: startup.NewClock(time.Unix(0, 0), [32]byte{}), // default clock to prevent panic
}
return s

View File

@@ -125,6 +125,14 @@ var (
Help: "Time to verify gossiped blocks",
},
)
// Sync committee verification performance.
syncMessagesForUnknownBlocks = promauto.NewCounter(
prometheus.CounterOpts{
Name: "sync_committee_messages_unknown_root",
Help: "The number of sync committee messages that are checked against DB to see if there vote is for an unknown root",
},
)
)
func (s *Service) updateMetrics() {

View File

@@ -150,7 +150,7 @@ func NewService(ctx context.Context, opts ...Option) *Service {
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{},
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: c,
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.SignedAggregateAttestationAndProof),

View File

@@ -13,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -152,20 +151,6 @@ func (s *Service) validateAggregatedAtt(ctx context.Context, signed *ethpb.Signe
return pubsub.ValidationIgnore, err
}
attSlot := signed.Message.Aggregate.Data.Slot
// Only advance state if different epoch as the committee can only change on an epoch transition.
if slots.ToEpoch(attSlot) > slots.ToEpoch(bs.Slot()) {
startSlot, err := slots.EpochStart(slots.ToEpoch(attSlot))
if err != nil {
return pubsub.ValidationIgnore, err
}
bs, err = transition.ProcessSlots(ctx, bs, startSlot)
if err != nil {
tracing.AnnotateError(span, err)
return pubsub.ValidationIgnore, err
}
}
// Verify validator index is within the beacon committee.
if err := validateIndexInCommittee(ctx, bs, signed.Message.Aggregate, signed.Message.AggregatorIndex); err != nil {
wrappedErr := errors.Wrapf(err, "Could not validate index in committee")

View File

@@ -90,7 +90,7 @@ func (s *Service) validateSyncCommitteeMessage(
ctx,
ignoreEmptyCommittee(committeeIndices),
s.rejectIncorrectSyncCommittee(committeeIndices, *msg.Topic),
-s.ignoreHasSeenSyncMsg(m, committeeIndices),
+s.ignoreHasSeenSyncMsg(ctx, m, committeeIndices),
s.rejectInvalidSyncCommitteeSignature(m),
); result != pubsub.ValidationAccept {
return result, err
@@ -123,24 +123,45 @@ func (s *Service) markSyncCommitteeMessagesSeen(committeeIndices []primitives.Co
subCommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
for _, idx := range committeeIndices {
subnet := uint64(idx) / subCommitteeSize
-s.setSeenSyncMessageIndexSlot(m.Slot, m.ValidatorIndex, subnet)
+s.setSeenSyncMessageIndexSlot(m, subnet)
}
}
// Returns true if the node has already seen a sync committee message for this validator index and slot.
-func (s *Service) hasSeenSyncMessageIndexSlot(slot primitives.Slot, valIndex primitives.ValidatorIndex, subCommitteeIndex uint64) bool {
+func (s *Service) hasSeenSyncMessageIndexSlot(ctx context.Context, m *ethpb.SyncCommitteeMessage, subCommitteeIndex uint64) bool {
s.seenSyncMessageLock.RLock()
defer s.seenSyncMessageLock.RUnlock()
-_, seen := s.seenSyncMessageCache.Get(seenSyncCommitteeKey(slot, valIndex, subCommitteeIndex))
-return seen
+rt, seen := s.seenSyncMessageCache.Get(seenSyncCommitteeKey(m.Slot, m.ValidatorIndex, subCommitteeIndex))
if !seen {
// return early if this is the first message
return false
}
root, ok := rt.([32]byte)
if !ok {
return true // Impossible. Return true to be safe
}
if !s.cfg.chain.InForkchoice(root) && !s.cfg.beaconDB.HasBlock(ctx, root) {
syncMessagesForUnknownBlocks.Inc()
return true
}
msgRoot := [32]byte(m.BlockRoot)
if !s.cfg.chain.InForkchoice(msgRoot) && !s.cfg.beaconDB.HasBlock(ctx, msgRoot) {
syncMessagesForUnknownBlocks.Inc()
return false
}
headRoot := s.cfg.chain.CachedHeadRoot()
if root == headRoot {
return true
}
return msgRoot != headRoot
}
// Set sync committee message validator index and slot as seen.
-func (s *Service) setSeenSyncMessageIndexSlot(slot primitives.Slot, valIndex primitives.ValidatorIndex, subCommitteeIndex uint64) {
+func (s *Service) setSeenSyncMessageIndexSlot(m *ethpb.SyncCommitteeMessage, subCommitteeIndex uint64) {
s.seenSyncMessageLock.Lock()
defer s.seenSyncMessageLock.Unlock()
-key := seenSyncCommitteeKey(slot, valIndex, subCommitteeIndex)
-s.seenSyncMessageCache.Add(key, true)
+key := seenSyncCommitteeKey(m.Slot, m.ValidatorIndex, subCommitteeIndex)
+s.seenSyncMessageCache.Add(key, [32]byte(m.BlockRoot))
}
// The `subnet_id` is valid for the given validator. This implies the validator is part of the broader
@@ -184,7 +205,7 @@ func (s *Service) rejectIncorrectSyncCommittee(
// There has been no other valid sync committee signature for the declared `slot`, `validator_index`,
// and `subcommittee_index`. In the event of `validator_index` belongs to multiple subnets, as long
// as one subnet has not been seen, we should let it in.
-func (s *Service) ignoreHasSeenSyncMsg(
+func (s *Service) ignoreHasSeenSyncMsg(ctx context.Context,
m *ethpb.SyncCommitteeMessage, committeeIndices []primitives.CommitteeIndex,
) validationFn {
return func(ctx context.Context) (pubsub.ValidationResult, error) {
@@ -192,7 +213,7 @@ func (s *Service) ignoreHasSeenSyncMsg(
subCommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
for _, idx := range committeeIndices {
subnet := uint64(idx) / subCommitteeSize
-if !s.hasSeenSyncMessageIndexSlot(m.Slot, m.ValidatorIndex, subnet) {
+if !s.hasSeenSyncMessageIndexSlot(ctx, m, subnet) {
isValid = true
break
}

View File

@@ -144,8 +144,12 @@ func TestService_ValidateSyncCommitteeMessage(t *testing.T) {
s.cfg.stateGen = stategen.New(beaconDB, doublylinkedtree.New())
s.cfg.beaconDB = beaconDB
s.initCaches()
-s.setSeenSyncMessageIndexSlot(1, 1, 0)
+m := &ethpb.SyncCommitteeMessage{
+ Slot: 1,
+ ValidatorIndex: 1,
+ BlockRoot: params.BeaconConfig().ZeroHash[:],
+}
+s.setSeenSyncMessageIndexSlot(m, 0)
return s, topic, startup.NewClock(time.Now(), [32]byte{})
},
args: args{
@@ -441,10 +445,15 @@ func TestService_ignoreHasSeenSyncMsg(t *testing.T) {
name: "has seen",
setupSvc: func(s *Service, msg *ethpb.SyncCommitteeMessage, topic string) (*Service, string) {
s.initCaches()
-s.setSeenSyncMessageIndexSlot(1, 0, 0)
+m := &ethpb.SyncCommitteeMessage{
+ Slot: 1,
+ BlockRoot: params.BeaconConfig().ZeroHash[:],
+}
+s.setSeenSyncMessageIndexSlot(m, 0)
return s, ""
},
-msg: &ethpb.SyncCommitteeMessage{ValidatorIndex: 0, Slot: 1},
+msg: &ethpb.SyncCommitteeMessage{ValidatorIndex: 0, Slot: 1,
+ BlockRoot: params.BeaconConfig().ZeroHash[:]},
committee: []primitives.CommitteeIndex{1, 2, 3},
want: pubsub.ValidationIgnore,
},
@@ -452,19 +461,26 @@ func TestService_ignoreHasSeenSyncMsg(t *testing.T) {
name: "has not seen",
setupSvc: func(s *Service, msg *ethpb.SyncCommitteeMessage, topic string) (*Service, string) {
s.initCaches()
-s.setSeenSyncMessageIndexSlot(1, 0, 0)
+m := &ethpb.SyncCommitteeMessage{
+ Slot: 1,
+ BlockRoot: params.BeaconConfig().ZeroHash[:],
+}
+s.setSeenSyncMessageIndexSlot(m, 0)
return s, ""
},
-msg: &ethpb.SyncCommitteeMessage{ValidatorIndex: 1, Slot: 1},
+msg: &ethpb.SyncCommitteeMessage{ValidatorIndex: 1, Slot: 1,
+ BlockRoot: bytesutil.PadTo([]byte{'A'}, 32)},
committee: []primitives.CommitteeIndex{1, 2, 3},
want: pubsub.ValidationAccept,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
-s := &Service{}
+s := &Service{
+ cfg: &config{chain: &mockChain.ChainService{}},
+}
s, _ = tt.setupSvc(s, tt.msg, "")
-f := s.ignoreHasSeenSyncMsg(tt.msg, tt.committee)
+f := s.ignoreHasSeenSyncMsg(context.Background(), tt.msg, tt.committee)
result, err := f(context.Background())
_ = err
require.Equal(t, tt.want, result)

View File

@@ -329,7 +329,7 @@ func (s *Service) setSyncContributionBits(c *ethpb.SyncCommitteeContribution) er
}
bitsList, ok := v.([][]byte)
if !ok {
return errors.New("could not covert cached value to []bitfield.Bitvector")
return errors.New("could not convert cached value to []bitfield.Bitvector")
}
has, err := bitListOverlaps(bitsList, c.AggregationBits)
if err != nil {
@@ -354,7 +354,7 @@ func (s *Service) hasSeenSyncContributionBits(c *ethpb.SyncCommitteeContribution
}
bitsList, ok := v.([][]byte)
if !ok {
-return false, errors.New("could not covert cached value to []bitfield.Bitvector128")
+return false, errors.New("could not convert cached value to []bitfield.Bitvector128")
}
return bitListOverlaps(bitsList, c.AggregationBits.Bytes())
}

View File

@@ -60,6 +60,7 @@ type Flags struct {
SaveFullExecutionPayloads bool // Save full beacon blocks with execution payloads in the database.
EnableStartOptimistic bool // EnableStartOptimistic treats every block as optimistic at startup.
DisableResourceManager bool // Disables running the node with libp2p's resource manager.
DisableStakinContractCheck bool // Disables check for deposit contract when proposing blocks
EnableVerboseSigVerification bool // EnableVerboseSigVerification specifies whether to verify individual signature if batch verification fails
@@ -72,6 +73,9 @@ type Flags struct {
// KeystoreImportDebounceInterval specifies the time duration the validator waits to reload new keys if they have
// changed on disk. This feature is for advanced use cases only.
KeystoreImportDebounceInterval time.Duration
// AggregateIntervals specifies the time durations at which we aggregate attestations preparing for forkchoice.
AggregateIntervals [3]time.Duration
}
var featureConfig *Flags
@@ -214,10 +218,16 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(prepareAllPayloads)
cfg.PrepareAllPayloads = true
}
-if ctx.IsSet(buildBlockParallel.Name) {
- logEnabled(buildBlockParallel)
- cfg.BuildBlockParallel = true
+cfg.BuildBlockParallel = true
+if ctx.IsSet(disableBuildBlockParallel.Name) {
+ logEnabled(disableBuildBlockParallel)
+ cfg.BuildBlockParallel = false
}
if ctx.IsSet(disableResourceManager.Name) {
logEnabled(disableResourceManager)
cfg.DisableResourceManager = true
}
cfg.AggregateIntervals = [3]time.Duration{aggregateFirstInterval.Value, aggregateSecondInterval.Value, aggregateThirdInterval.Value}
Init(cfg)
return nil
}

View File

@@ -32,6 +32,11 @@ var (
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedBuildBlockParallel = &cli.BoolFlag{
Name: "build-block-parallel",
Usage: deprecatedUsage,
Hidden: true,
}
)
// Deprecated flags for both the beacon node and validator client.
@@ -41,6 +46,7 @@ var deprecatedFlags = []cli.Flag{
deprecatedDisableVecHTR,
deprecatedEnableReorgLateBlocks,
deprecatedDisableGossipBatchAggregation,
deprecatedBuildBlockParallel,
}
// deprecatedBeaconFlags contains flags that are still used by other components

View File

@@ -50,6 +50,24 @@ var (
Usage: "(Danger): Writes the wallet password to the wallet directory on completing Prysm web onboarding. " +
"We recommend against this flag unless you are an advanced user.",
}
aggregateFirstInterval = &cli.DurationFlag{
Name: "aggregate-first-interval",
Usage: "(Advanced): Specifies the first interval in which attestations are aggregated in the slot (typically unnaggregated attestations are aggregated in this interval)",
Value: 6500 * time.Millisecond,
Hidden: true,
}
aggregateSecondInterval = &cli.DurationFlag{
Name: "aggregate-second-interval",
Usage: "(Advanced): Specifies the second interval in which attestations are aggregated in the slot",
Value: 9500 * time.Millisecond,
Hidden: true,
}
aggregateThirdInterval = &cli.DurationFlag{
Name: "aggregate-third-interval",
Usage: "(Advanced): Specifies the third interval in which attestations are aggregated in the slot",
Value: 11800 * time.Millisecond,
Hidden: true,
}
dynamicKeyReloadDebounceInterval = &cli.DurationFlag{
Name: "dynamic-key-reload-debounce-interval",
Usage: "(Advanced): Specifies the time duration the validator waits to reload new keys if they have " +
@@ -118,9 +136,13 @@ var (
Name: "prepare-all-payloads",
Usage: "Informs the engine to prepare all local payloads. Useful for relayers and builders",
}
-buildBlockParallel = &cli.BoolFlag{
- Name: "build-block-parallel",
- Usage: "Builds a beacon block in parallel for consensus and execution. It results in faster block construction time",
+disableBuildBlockParallel = &cli.BoolFlag{
+ Name: "disable-build-block-parallel",
+ Usage: "Disables building a beacon block in parallel for consensus and execution",
}
disableResourceManager = &cli.BoolFlag{
Name: "disable-resource-manager",
Usage: "Disables running the libp2p resource manager",
}
)
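
The three hidden flags position aggregation passes inside the 12-second slot (defaults 6.5s, 9.5s, and 11.8s after the slot starts). A sketch of how such offsets translate into wall-clock fire times, assuming the standard mainnet slot length:

package main

import (
	"fmt"
	"time"
)

func main() {
	slotStart := time.Now() // stand-in for the current slot's start time
	intervals := [3]time.Duration{ // mirrors the flag defaults above
		6500 * time.Millisecond,
		9500 * time.Millisecond,
		11800 * time.Millisecond,
	}
	for i, offset := range intervals {
		fmt.Printf("aggregation pass %d fires %s into the slot, at %s\n",
			i+1, offset, slotStart.Add(offset).Format("15:04:05.000"))
	}
}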
@@ -168,7 +190,11 @@ var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []c
enableVerboseSigVerification,
enableOptionalEngineMethods,
prepareAllPayloads,
-buildBlockParallel,
+disableBuildBlockParallel,
+aggregateFirstInterval,
+aggregateSecondInterval,
+aggregateThirdInterval,
+disableResourceManager,
}...)...)
// E2EBeaconChainFlags contains a list of the beacon chain feature flags to be tested in E2E.

View File

@@ -1,12 +1,32 @@
load("@prysm//tools/go:def.bzl", "go_library")
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["proposer-settings.go"],
srcs = ["proposer_settings.go"],
importpath = "github.com/prysmaticlabs/prysm/v4/config/validator/service",
visibility = ["//visibility:public"],
deps = [
"//config/fieldparams:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1/validator-client:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["proposer_settings_test.go"],
embed = [":go_default_library"],
deps = [
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
],
)

View File

@@ -1,79 +0,0 @@
package validator_service_config
import (
"strconv"
"github.com/ethereum/go-ethereum/common"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
)
// ProposerSettingsPayload is the struct representation of the JSON or YAML payload set in the validator through the CLI.
// ProposerConfig is the map of validator address to fee recipient options all in hex format.
// DefaultConfig is the default fee recipient address for all validators unless otherwise specified in the propose config.required.
type ProposerSettingsPayload struct {
ProposerConfig map[string]*ProposerOptionPayload `json:"proposer_config" yaml:"proposer_config"`
DefaultConfig *ProposerOptionPayload `json:"default_config" yaml:"default_config"`
}
// ProposerOptionPayload is the struct representation of the JSON config file set in the validator through the CLI.
// FeeRecipient is set to an eth address in hex string format with 0x prefix.
type ProposerOptionPayload struct {
FeeRecipient string `json:"fee_recipient" yaml:"fee_recipient"`
BuilderConfig *BuilderConfig `json:"builder" yaml:"builder"`
}
// BuilderConfig is the struct representation of the JSON config file set in the validator through the CLI.
// GasLimit is a number set to help the network decide on the maximum gas in each block.
type BuilderConfig struct {
Enabled bool `json:"enabled" yaml:"enabled"`
GasLimit Uint64 `json:"gas_limit,omitempty" yaml:"gas_limit,omitempty"`
Relays []string `json:"relays" yaml:"relays"`
}
type Uint64 uint64
func (u *Uint64) UnmarshalJSON(bs []byte) error {
str := string(bs) // Parse plain numbers directly.
if bs[0] == '"' && bs[len(bs)-1] == '"' {
// Unwrap the quotes from string numbers.
str = string(bs[1 : len(bs)-1])
}
x, err := strconv.ParseUint(str, 10, 64)
if err != nil {
return err
}
*u = Uint64(x)
return nil
}
func (u *Uint64) UnmarshalYAML(unmarshal func(interface{}) error) error {
var str string
err := unmarshal(&str)
if err != nil {
return err
}
x, err := strconv.ParseUint(str, 10, 64)
if err != nil {
return err
}
*u = Uint64(x)
return nil
}
// ProposerSettings is a Prysm internal representation of the fee recipient config on the validator client.
// ProposerSettingsPayload maps to ProposerSettings on import through the CLI.
type ProposerSettings struct {
ProposeConfig map[[fieldparams.BLSPubkeyLength]byte]*ProposerOption
DefaultConfig *ProposerOption
}
type FeeRecipientConfig struct {
FeeRecipient common.Address
}
// ProposerOption is a Prysm internal representation of the ProposerOptionPayload on the validator client in bytes format instead of hex.
type ProposerOption struct {
FeeRecipientConfig *FeeRecipientConfig
BuilderConfig *BuilderConfig
}

View File

@@ -0,0 +1,206 @@
package validator_service_config
import (
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
validatorpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/validator-client"
)
// ToSettings converts struct to ProposerSettings
func ToSettings(ps *validatorpb.ProposerSettingsPayload) (*ProposerSettings, error) {
settings := &ProposerSettings{}
if ps.ProposerConfig != nil {
settings.ProposeConfig = make(map[[fieldparams.BLSPubkeyLength]byte]*ProposerOption)
for key, optionPayload := range ps.ProposerConfig {
if optionPayload.FeeRecipient == "" {
continue
}
b, err := hexutil.Decode(key)
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("cannot decode public key %s", key))
}
p := &ProposerOption{
FeeRecipientConfig: &FeeRecipientConfig{
FeeRecipient: common.HexToAddress(optionPayload.FeeRecipient),
},
}
if optionPayload.Builder != nil {
p.BuilderConfig = ToBuilderConfig(optionPayload.Builder)
}
settings.ProposeConfig[bytesutil.ToBytes48(b)] = p
}
}
if ps.DefaultConfig != nil {
d := &ProposerOption{}
if ps.DefaultConfig.FeeRecipient != "" {
d.FeeRecipientConfig = &FeeRecipientConfig{
FeeRecipient: common.HexToAddress(ps.DefaultConfig.FeeRecipient),
}
}
if ps.DefaultConfig.Builder != nil {
d.BuilderConfig = ToBuilderConfig(ps.DefaultConfig.Builder)
}
settings.DefaultConfig = d
}
return settings, nil
}
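
A usage sketch for ToSettings, runnable inside the Prysm module; the pubkey and addresses are the same fixtures the test file later in this diff uses:

package main

import (
	"fmt"
	"log"

	vcfg "github.com/prysmaticlabs/prysm/v4/config/validator/service"
	validatorpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/validator-client"
)

func main() {
	payload := &validatorpb.ProposerSettingsPayload{
		ProposerConfig: map[string]*validatorpb.ProposerOptionPayload{
			"0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a": {
				FeeRecipient: "0x50155530FCE8a85ec7055A5F8b2bE214B3DaeFd3",
				Builder:      &validatorpb.BuilderConfig{Enabled: true, GasLimit: 40000000},
			},
		},
		DefaultConfig: &validatorpb.ProposerOptionPayload{
			FeeRecipient: "0x6e35733c5af9B61374A128e6F85f553aF09ff89A",
		},
	}
	settings, err := vcfg.ToSettings(payload)
	if err != nil {
		log.Fatal(err)
	}
	// Keys are now 48-byte arrays and fee recipients common.Address values.
	fmt.Println(settings.DefaultConfig.FeeRecipientConfig.FeeRecipient.Hex())
}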
// BuilderConfig is the struct representation of the JSON config file set in the validator through the CLI.
// GasLimit is a number set to help the network decide on the maximum gas in each block.
type BuilderConfig struct {
Enabled bool `json:"enabled" yaml:"enabled"`
GasLimit validator.Uint64 `json:"gas_limit,omitempty" yaml:"gas_limit,omitempty"`
Relays []string `json:"relays,omitempty" yaml:"relays,omitempty"`
}
// ToBuilderConfig converts protobuf to a builder config used in inmemory storage
func ToBuilderConfig(from *validatorpb.BuilderConfig) *BuilderConfig {
if from == nil {
return nil
}
config := &BuilderConfig{
Enabled: from.Enabled,
GasLimit: from.GasLimit,
}
if from.Relays != nil {
relays := make([]string, len(from.Relays))
copy(relays, from.Relays)
config.Relays = relays
}
return config
}
// ProposerSettings is a Prysm internal representation of the fee recipient config on the validator client.
// validatorpb.ProposerSettingsPayload maps to ProposerSettings on import through the CLI.
type ProposerSettings struct {
ProposeConfig map[[fieldparams.BLSPubkeyLength]byte]*ProposerOption
DefaultConfig *ProposerOption
}
// ToPayload converts struct to ProposerSettingsPayload
func (ps *ProposerSettings) ToPayload() *validatorpb.ProposerSettingsPayload {
if ps == nil {
return nil
}
payload := &validatorpb.ProposerSettingsPayload{}
if ps.ProposeConfig != nil {
payload.ProposerConfig = make(map[string]*validatorpb.ProposerOptionPayload)
for key, option := range ps.ProposeConfig {
p := &validatorpb.ProposerOptionPayload{}
if option.FeeRecipientConfig != nil {
p.FeeRecipient = option.FeeRecipientConfig.FeeRecipient.Hex()
}
if option.BuilderConfig != nil {
p.Builder = option.BuilderConfig.ToPayload()
}
payload.ProposerConfig[hexutil.Encode(key[:])] = p
}
}
if ps.DefaultConfig != nil {
p := &validatorpb.ProposerOptionPayload{}
if ps.DefaultConfig.FeeRecipientConfig != nil {
p.FeeRecipient = ps.DefaultConfig.FeeRecipientConfig.FeeRecipient.Hex()
}
if ps.DefaultConfig.BuilderConfig != nil {
p.Builder = ps.DefaultConfig.BuilderConfig.ToPayload()
}
payload.DefaultConfig = p
}
return payload
}
// FeeRecipientConfig is a prysm internal representation to see if the fee recipient was set.
type FeeRecipientConfig struct {
FeeRecipient common.Address
}
// ProposerOption is a Prysm internal representation of the ProposerOptionPayload on the validator client in bytes format instead of hex.
type ProposerOption struct {
FeeRecipientConfig *FeeRecipientConfig
BuilderConfig *BuilderConfig
}
// Clone creates a deep copy of the proposer settings
func (ps *ProposerSettings) Clone() *ProposerSettings {
if ps == nil {
return nil
}
clone := &ProposerSettings{}
if ps.DefaultConfig != nil {
clone.DefaultConfig = ps.DefaultConfig.Clone()
}
if ps.ProposeConfig != nil {
clone.ProposeConfig = make(map[[fieldparams.BLSPubkeyLength]byte]*ProposerOption)
for k, v := range ps.ProposeConfig {
keyCopy := k
valCopy := v.Clone()
clone.ProposeConfig[keyCopy] = valCopy
}
}
return clone
}
// Clone creates a deep copy of fee recipient config
func (fo *FeeRecipientConfig) Clone() *FeeRecipientConfig {
if fo == nil {
return nil
}
return &FeeRecipientConfig{fo.FeeRecipient}
}
// Clone creates a deep copy of builder config
func (bc *BuilderConfig) Clone() *BuilderConfig {
if bc == nil {
return nil
}
config := &BuilderConfig{}
config.Enabled = bc.Enabled
config.GasLimit = bc.GasLimit
var relays []string
if bc.Relays != nil {
relays = make([]string, len(bc.Relays))
copy(relays, bc.Relays)
config.Relays = relays
}
return config
}
// ToPayload converts Builder Config to the protobuf object
func (bc *BuilderConfig) ToPayload() *validatorpb.BuilderConfig {
if bc == nil {
return nil
}
config := &validatorpb.BuilderConfig{}
config.Enabled = bc.Enabled
var relays []string
if bc.Relays != nil {
relays = make([]string, len(bc.Relays))
copy(relays, bc.Relays)
config.Relays = relays
}
config.GasLimit = bc.GasLimit
return config
}
// Clone creates a deep copy of proposer option
func (po *ProposerOption) Clone() *ProposerOption {
if po == nil {
return nil
}
p := &ProposerOption{}
if po.FeeRecipientConfig != nil {
p.FeeRecipientConfig = po.FeeRecipientConfig.Clone()
}
if po.BuilderConfig != nil {
p.BuilderConfig = po.BuilderConfig.Clone()
}
return p
}

View File

@@ -0,0 +1,100 @@
package validator_service_config
import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/validator"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func Test_Proposer_Setting_Cloning(t *testing.T) {
key1hex := "0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a"
key1, err := hexutil.Decode(key1hex)
require.NoError(t, err)
settings := &ProposerSettings{
ProposeConfig: map[[fieldparams.BLSPubkeyLength]byte]*ProposerOption{
bytesutil.ToBytes48(key1): {
FeeRecipientConfig: &FeeRecipientConfig{
FeeRecipient: common.HexToAddress("0x50155530FCE8a85ec7055A5F8b2bE214B3DaeFd3"),
},
BuilderConfig: &BuilderConfig{
Enabled: true,
GasLimit: validator.Uint64(40000000),
Relays: []string{"https://example-relay.com"},
},
},
},
DefaultConfig: &ProposerOption{
FeeRecipientConfig: &FeeRecipientConfig{
FeeRecipient: common.HexToAddress("0x6e35733c5af9B61374A128e6F85f553aF09ff89A"),
},
BuilderConfig: &BuilderConfig{
Enabled: false,
GasLimit: validator.Uint64(params.BeaconConfig().DefaultBuilderGasLimit),
Relays: []string{"https://example-relay.com"},
},
},
}
t.Run("Happy Path Cloning", func(t *testing.T) {
clone := settings.Clone()
require.DeepEqual(t, settings, clone)
option, ok := settings.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, true, ok)
newFeeRecipient := "0x44455530FCE8a85ec7055A5F8b2bE214B3DaeFd3"
option.FeeRecipientConfig.FeeRecipient = common.HexToAddress(newFeeRecipient)
coption, k := clone.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, true, k)
require.NotEqual(t, option.FeeRecipientConfig.FeeRecipient.Hex(), coption.FeeRecipientConfig.FeeRecipient.Hex())
require.Equal(t, "0x50155530FCE8a85ec7055A5F8b2bE214B3DaeFd3", coption.FeeRecipientConfig.FeeRecipient.Hex())
})
t.Run("Happy Path Cloning Builder config", func(t *testing.T) {
clone := settings.DefaultConfig.BuilderConfig.Clone()
require.DeepEqual(t, settings.DefaultConfig.BuilderConfig, clone)
settings.DefaultConfig.BuilderConfig.GasLimit = 1
require.NotEqual(t, settings.DefaultConfig.BuilderConfig.GasLimit, clone.GasLimit)
})
t.Run("Happy Path ToBuilderConfig", func(t *testing.T) {
clone := settings.DefaultConfig.BuilderConfig.Clone()
config := ToBuilderConfig(clone.ToPayload())
require.DeepEqual(t, config.Relays, clone.Relays)
require.Equal(t, config.Enabled, clone.Enabled)
require.Equal(t, config.GasLimit, clone.GasLimit)
})
t.Run("To Payload and ToSettings", func(t *testing.T) {
payload := settings.ToPayload()
option, ok := settings.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, true, ok)
fee := option.FeeRecipientConfig.FeeRecipient.Hex()
potion, pok := payload.ProposerConfig[key1hex]
require.Equal(t, true, pok)
require.Equal(t, option.FeeRecipientConfig.FeeRecipient.Hex(), potion.FeeRecipient)
require.Equal(t, settings.DefaultConfig.FeeRecipientConfig.FeeRecipient.Hex(), payload.DefaultConfig.FeeRecipient)
require.Equal(t, settings.DefaultConfig.BuilderConfig.Enabled, payload.DefaultConfig.Builder.Enabled)
potion.FeeRecipient = ""
newSettings, err := ToSettings(payload)
require.NoError(t, err)
// When converting to settings, an entry whose fee recipient is an empty string is skipped.
noption, ok := newSettings.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, false, ok)
require.Equal(t, true, noption == nil)
require.DeepEqual(t, newSettings.DefaultConfig, settings.DefaultConfig)
// If the fee recipient is set, the entry is not skipped.
potion.FeeRecipient = fee
newSettings, err = ToSettings(payload)
require.NoError(t, err)
noption, ok = newSettings.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, true, ok)
require.Equal(t, option.FeeRecipientConfig.FeeRecipient.Hex(), noption.FeeRecipientConfig.FeeRecipient.Hex())
require.Equal(t, option.BuilderConfig.GasLimit, noption.BuilderConfig.GasLimit)
require.Equal(t, option.BuilderConfig.Enabled, noption.BuilderConfig.Enabled)
})
}


@@ -0,0 +1,18 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["custom_types.go"],
importpath = "github.com/prysmaticlabs/prysm/v4/consensus-types/validator",
visibility = ["//visibility:public"],
)
go_test(
name = "go_default_test",
srcs = ["custom_types_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/require:go_default_library",
"@io_k8s_apimachinery//pkg/util/yaml:go_default_library",
],
)


@@ -0,0 +1,41 @@
package validator
import "strconv"
// Uint64 is a custom uint64 that can be unmarshaled from JSON or YAML,
// from either a quoted string or a plain number.
type Uint64 uint64
// UnmarshalJSON implements custom JSON unmarshaling for Uint64.
func (u *Uint64) UnmarshalJSON(bs []byte) error {
str := string(bs) // Parse plain numbers directly.
if str == "" {
*u = Uint64(0)
return nil
}
if len(bs) >= 3 && bs[0] == '"' && bs[len(bs)-1] == '"' {
// Unwrap the quotes from string numbers.
str = string(bs[1 : len(bs)-1])
}
x, err := strconv.ParseUint(str, 10, 64)
if err != nil {
return err
}
*u = Uint64(x)
return nil
}
// UnmarshalYAML implements custom YAML unmarshaling for Uint64.
func (u *Uint64) UnmarshalYAML(unmarshal func(interface{}) error) error {
var str string
err := unmarshal(&str)
if err != nil {
return err
}
x, err := strconv.ParseUint(str, 10, 64)
if err != nil {
return err
}
*u = Uint64(x)
return nil
}
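
The custom type exists because values such as gas limits arrive sometimes as JSON strings and sometimes as bare numbers, and both forms must decode identically. A small standard-library-only sketch (the struct and field names are illustrative):

package validator

import (
	"encoding/json"
	"fmt"
)

// ExampleUint64_UnmarshalJSON shows the quoted and unquoted encodings
// decoding to the same value.
func ExampleUint64_UnmarshalJSON() {
	var a, b struct {
		GasLimit Uint64 `json:"gas_limit"`
	}
	if err := json.Unmarshal([]byte(`{"gas_limit": "30000000"}`), &a); err != nil {
		panic(err)
	}
	if err := json.Unmarshal([]byte(`{"gas_limit": 30000000}`), &b); err != nil {
		panic(err)
	}
	fmt.Println(a.GasLimit == b.GasLimit, uint64(a.GasLimit))
	// Output: true 30000000
}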


@@ -0,0 +1,58 @@
package validator
import (
"testing"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"k8s.io/apimachinery/pkg/util/yaml"
)
func Test_customUint_UnmarshalJSON(t *testing.T) {
type Custom struct {
Test Uint64 `json:"test"`
}
tests := []struct {
name string
jsonString string
number uint64
wantUnmarshalErr string
}{
{
name: "Happy Path string",
jsonString: `{"test": "123441"}`,
number: 123441,
},
{
name: "Happy Path number",
jsonString: `{"test": 123441}`,
number: 123441,
},
{
name: "empty",
jsonString: `{"test":""}`,
wantUnmarshalErr: "error unmarshaling JSON",
},
{
name: "digits more than uint64",
jsonString: `{"test":"8888888888888888888888888888888888888888888888888888888888888"}`,
wantUnmarshalErr: "error unmarshaling JSON",
},
{
name: "not a uint64",
jsonString: `{"test":"one hundred"}`,
wantUnmarshalErr: "error unmarshaling JSON",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var to Custom
err := yaml.Unmarshal([]byte(tt.jsonString), &to)
if tt.wantUnmarshalErr != "" {
require.ErrorContains(t, tt.wantUnmarshalErr, err)
} else {
require.NoError(t, err)
}
require.Equal(t, tt.number, uint64(to.Test))
})
}
}


@@ -72,7 +72,7 @@ func (m MetadataV0) MetadataObjV0() *pb.MetaDataV0 {
return m.md
}
-// MetadataObjV1 returns the inner metatdata object in its type
+// MetadataObjV1 returns the inner metadata object in its type
// specified form. If it doesn't exist then we return nothing.
func (_ MetadataV0) MetadataObjV1() *pb.MetaDataV1 {
return nil
@@ -147,7 +147,7 @@ func (_ MetadataV1) MetadataObjV0() *pb.MetaDataV0 {
return nil
}
-// MetadataObjV1 returns the inner metatdata object in its type
+// MetadataObjV1 returns the inner metadata object in its type
// specified form. If it doesn't exist then we return nothing.
func (m MetadataV1) MetadataObjV1() *pb.MetaDataV1 {
return m.md


@@ -14,7 +14,7 @@ var _ heap.Interface = &queue{}
// some tests rely on the ordering of items from this method
func testCases() (tc []*Item) {
-// create a slice of items with priority / times offest by these seconds
+// create a slice of items with priority / times offset by these seconds
for i, m := range []time.Duration{
5,
183600, // 51 hours


@@ -78,7 +78,7 @@ func TestValidatorRegister_OK(t *testing.T) {
merkleTreeIndex[i] = binary.LittleEndian.Uint64(idx)
}
-assert.Equal(t, uint64(0), merkleTreeIndex[0], "Deposit event total desposit count miss matched")
-assert.Equal(t, uint64(1), merkleTreeIndex[1], "Deposit event total desposit count miss matched")
-assert.Equal(t, uint64(2), merkleTreeIndex[2], "Deposit event total desposit count miss matched")
+assert.Equal(t, uint64(0), merkleTreeIndex[0], "Deposit event total deposit count mismatched")
+assert.Equal(t, uint64(1), merkleTreeIndex[1], "Deposit event total deposit count mismatched")
+assert.Equal(t, uint64(2), merkleTreeIndex[2], "Deposit event total deposit count mismatched")
}


@@ -24,6 +24,13 @@ func PublicKeyFromBytes(pubKey []byte) (PublicKey, error) {
return blst.PublicKeyFromBytes(pubKey)
}
// SignatureFromBytesNoValidation creates a BLS signature from a LittleEndian byte slice.
// It does not check the validity of the signature; use it only when the byte
// slice has already been verified.
func SignatureFromBytesNoValidation(sig []byte) (Signature, error) {
return blst.SignatureFromBytesNoValidation(sig)
}
// SignatureFromBytes creates a BLS signature from a LittleEndian byte slice.
func SignatureFromBytes(sig []byte) (Signature, error) {
return blst.SignatureFromBytes(sig)


@@ -34,6 +34,7 @@ go_test(
"public_key_test.go",
"secret_key_test.go",
"signature_test.go",
"test_helper_test.go",
],
embed = [":go_default_library"],
deps = [


@@ -12,7 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/crypto/bls/common"
)
-var maxKeys = 1000000
+var maxKeys = 1_000_000
var pubkeyCache = lruwrpr.New(maxKeys)
// PublicKey used in the BLS signature scheme.


@@ -97,3 +97,27 @@ func TestPublicKeysEmpty(t *testing.T) {
_, err := blst.AggregatePublicKeys(pubs)
require.ErrorContains(t, "nil or empty public keys", err)
}
func BenchmarkPublicKeyFromBytes(b *testing.B) {
priv, err := blst.RandKey()
require.NoError(b, err)
pubkey := priv.PublicKey()
pubkeyBytes := pubkey.Marshal()
b.Run("cache on", func(b *testing.B) {
blst.EnableCaches()
for i := 0; i < b.N; i++ {
_, err := blst.PublicKeyFromBytes(pubkeyBytes)
require.NoError(b, err)
}
})
b.Run("cache off", func(b *testing.B) {
blst.DisableCaches()
for i := 0; i < b.N; i++ {
_, err := blst.PublicKeyFromBytes(pubkeyBytes)
require.NoError(b, err)
}
})
}
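
Assuming the package path matches the imports seen earlier (github.com/prysmaticlabs/prysm/v4/crypto/bls/blst), this benchmark pair can presumably be run with plain Go tooling, e.g. go test -run='^$' -bench=PublicKeyFromBytes ./crypto/bls/blst/, to compare the pubkeyCache hit path against a cold decode.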


@@ -24,8 +24,9 @@ type Signature struct {
s *blstSignature
}
-// SignatureFromBytes creates a BLS signature from a LittleEndian byte slice.
-func SignatureFromBytes(sig []byte) (common.Signature, error) {
+// signatureFromBytesNoValidation creates a BLS signature from a LittleEndian
+// byte slice. It does not validate that the signature is in the BLS group.
+func signatureFromBytesNoValidation(sig []byte) (*blstSignature, error) {
if len(sig) != fieldparams.BLSSignatureLength {
return nil, fmt.Errorf("signature must be %d bytes", fieldparams.BLSSignatureLength)
}
@@ -33,6 +34,25 @@ func SignatureFromBytes(sig []byte) (common.Signature, error) {
if signature == nil {
return nil, errors.New("could not unmarshal bytes into signature")
}
return signature, nil
}
// SignatureFromBytesNoValidation creates a BLS signature from a LittleEndian
// byte slice. It does not validate that the signature is in the BLS group.
func SignatureFromBytesNoValidation(sig []byte) (common.Signature, error) {
signature, err := signatureFromBytesNoValidation(sig)
if err != nil {
return nil, errors.Wrap(err, "could not create signature from byte slice")
}
return &Signature{s: signature}, nil
}
// SignatureFromBytes creates a BLS signature from a LittleEndian byte slice.
func SignatureFromBytes(sig []byte) (common.Signature, error) {
signature, err := signatureFromBytesNoValidation(sig)
if err != nil {
return nil, errors.Wrap(err, "could not create signature from byte slice")
}
// Group check signature. Do not check for infinity since an aggregated signature
// could be infinite.
if !signature.SigValidate(false) {


@@ -207,6 +207,11 @@ func TestSignatureFromBytes(t *testing.T) {
input: []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
err: errors.New("could not unmarshal bytes into signature"),
},
{
name: "Not in group",
input: []byte{0xac, 0xb0, 0x12, 0x4c, 0x75, 0x74, 0xf2, 0x81, 0xa2, 0x93, 0xf4, 0x18, 0x5c, 0xad, 0x3c, 0xb2, 0x26, 0x81, 0xd5, 0x20, 0x91, 0x7c, 0xe4, 0x66, 0x65, 0x24, 0x3e, 0xac, 0xb0, 0x51, 0x00, 0x0d, 0x8b, 0xac, 0xf7, 0x5e, 0x14, 0x51, 0x87, 0x0c, 0xa6, 0xb3, 0xb9, 0xe6, 0xc9, 0xd4, 0x1a, 0x7b, 0x02, 0xea, 0xd2, 0x68, 0x5a, 0x84, 0x18, 0x8a, 0x4f, 0xaf, 0xd3, 0x82, 0x5d, 0xaf, 0x6a, 0x98, 0x96, 0x25, 0xd7, 0x19, 0xcc, 0xd2, 0xd8, 0x3a, 0x40, 0x10, 0x1f, 0x4a, 0x45, 0x3f, 0xca, 0x62, 0x87, 0x8c, 0x89, 0x0e, 0xca, 0x62, 0x23, 0x63, 0xf9, 0xdd, 0xb8, 0xf3, 0x67, 0xa9, 0x1e, 0x84},
err: errors.New("signature not in group"),
},
{
name: "Good",
input: []byte{0xab, 0xb0, 0x12, 0x4c, 0x75, 0x74, 0xf2, 0x81, 0xa2, 0x93, 0xf4, 0x18, 0x5c, 0xad, 0x3c, 0xb2, 0x26, 0x81, 0xd5, 0x20, 0x91, 0x7c, 0xe4, 0x66, 0x65, 0x24, 0x3e, 0xac, 0xb0, 0x51, 0x00, 0x0d, 0x8b, 0xac, 0xf7, 0x5e, 0x14, 0x51, 0x87, 0x0c, 0xa6, 0xb3, 0xb9, 0xe6, 0xc9, 0xd4, 0x1a, 0x7b, 0x02, 0xea, 0xd2, 0x68, 0x5a, 0x84, 0x18, 0x8a, 0x4f, 0xaf, 0xd3, 0x82, 0x5d, 0xaf, 0x6a, 0x98, 0x96, 0x25, 0xd7, 0x19, 0xcc, 0xd2, 0xd8, 0x3a, 0x40, 0x10, 0x1f, 0x4a, 0x45, 0x3f, 0xca, 0x62, 0x87, 0x8c, 0x89, 0x0e, 0xca, 0x62, 0x23, 0x63, 0xf9, 0xdd, 0xb8, 0xf3, 0x67, 0xa9, 0x1e, 0x84},
@@ -227,6 +232,60 @@ func TestSignatureFromBytes(t *testing.T) {
}
}
func TestSignatureFromBytesNoValidation(t *testing.T) {
tests := []struct {
name string
input []byte
err error
}{
{
name: "Nil",
err: errors.New("signature must be 96 bytes"),
},
{
name: "Empty",
input: []byte{},
err: errors.New("signature must be 96 bytes"),
},
{
name: "Short",
input: []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
err: errors.New("signature must be 96 bytes"),
},
{
name: "Long",
input: []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
err: errors.New("signature must be 96 bytes"),
},
{
name: "Bad",
input: []byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
err: errors.New("could not unmarshal bytes into signature"),
},
{
name: "Not in group",
input: []byte{0xac, 0xb0, 0x12, 0x4c, 0x75, 0x74, 0xf2, 0x81, 0xa2, 0x93, 0xf4, 0x18, 0x5c, 0xad, 0x3c, 0xb2, 0x26, 0x81, 0xd5, 0x20, 0x91, 0x7c, 0xe4, 0x66, 0x65, 0x24, 0x3e, 0xac, 0xb0, 0x51, 0x00, 0x0d, 0x8b, 0xac, 0xf7, 0x5e, 0x14, 0x51, 0x87, 0x0c, 0xa6, 0xb3, 0xb9, 0xe6, 0xc9, 0xd4, 0x1a, 0x7b, 0x02, 0xea, 0xd2, 0x68, 0x5a, 0x84, 0x18, 0x8a, 0x4f, 0xaf, 0xd3, 0x82, 0x5d, 0xaf, 0x6a, 0x98, 0x96, 0x25, 0xd7, 0x19, 0xcc, 0xd2, 0xd8, 0x3a, 0x40, 0x10, 0x1f, 0x4a, 0x45, 0x3f, 0xca, 0x62, 0x87, 0x8c, 0x89, 0x0e, 0xca, 0x62, 0x23, 0x63, 0xf9, 0xdd, 0xb8, 0xf3, 0x67, 0xa9, 0x1e, 0x84},
},
{
name: "Good",
input: []byte{0xab, 0xb0, 0x12, 0x4c, 0x75, 0x74, 0xf2, 0x81, 0xa2, 0x93, 0xf4, 0x18, 0x5c, 0xad, 0x3c, 0xb2, 0x26, 0x81, 0xd5, 0x20, 0x91, 0x7c, 0xe4, 0x66, 0x65, 0x24, 0x3e, 0xac, 0xb0, 0x51, 0x00, 0x0d, 0x8b, 0xac, 0xf7, 0x5e, 0x14, 0x51, 0x87, 0x0c, 0xa6, 0xb3, 0xb9, 0xe6, 0xc9, 0xd4, 0x1a, 0x7b, 0x02, 0xea, 0xd2, 0x68, 0x5a, 0x84, 0x18, 0x8a, 0x4f, 0xaf, 0xd3, 0x82, 0x5d, 0xaf, 0x6a, 0x98, 0x96, 0x25, 0xd7, 0x19, 0xcc, 0xd2, 0xd8, 0x3a, 0x40, 0x10, 0x1f, 0x4a, 0x45, 0x3f, 0xca, 0x62, 0x87, 0x8c, 0x89, 0x0e, 0xca, 0x62, 0x23, 0x63, 0xf9, 0xdd, 0xb8, 0xf3, 0x67, 0xa9, 0x1e, 0x84},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
res, err := SignatureFromBytesNoValidation(test.input)
if test.err != nil {
assert.NotEqual(t, nil, err, "No error returned")
assert.ErrorContains(t, test.err.Error(), err, "Unexpected error returned")
} else {
assert.NoError(t, err)
assert.DeepEqual(t, 0, bytes.Compare(res.Marshal(), test.input))
}
})
}
}
func TestMultipleSignatureFromBytes(t *testing.T) {
tests := []struct {
name string


@@ -0,0 +1,13 @@
package blst
// Note: These functions are for tests to access private globals, such as pubkeyCache.
// DisableCaches sets the cache sizes to 0.
func DisableCaches() {
pubkeyCache.Resize(0)
}
// EnableCaches sets the cache sizes to the default values.
func EnableCaches() {
pubkeyCache.Resize(maxKeys)
}


@@ -84,7 +84,7 @@ func TestVerifyVerbosely_VerificationThrowsError(t *testing.T) {
valid, err := set.VerifyVerbosely()
assert.Equal(t, false, valid, "SignatureSet is expected to be invalid")
assert.StringContains(t, "signature 'signature of bad0' is invalid", err.Error())
assert.StringContains(t, "error: could not unmarshal bytes into signature", err.Error())
assert.StringContains(t, "could not unmarshal bytes into signature", err.Error())
assert.StringNotContains(t, "signature 'signature of good0' is invalid", err.Error())
}


@@ -131,7 +131,7 @@ func (cf *VersionedUnmarshaler) UnmarshalBeaconState(marshaled []byte) (s state.
}
var beaconBlockSlot = fieldSpec{
-// ssz variable length offset (not to be confused with the fieldSpec offest) is a uint32
+// ssz variable length offset (not to be confused with the fieldSpec offset) is a uint32
// variable length. Offsets come before fixed length data, so that's 4 bytes at the beginning
// then signature is 96 bytes, 4+96 = 100
offset: 100,

go.mod

@@ -91,6 +91,7 @@ require (
gopkg.in/d4l3k/messagediff.v1 v1.2.1
gopkg.in/yaml.v2 v2.4.0
gopkg.in/yaml.v3 v3.0.1
+k8s.io/apimachinery v0.20.0
k8s.io/client-go v0.20.0
)
@@ -233,7 +234,6 @@ require (
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
-k8s.io/apimachinery v0.20.0 // indirect
lukechampine.com/blake3 v1.1.7 // indirect
nhooyr.io/websocket v1.8.7 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.0.2 // indirect
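
The k8s.io/apimachinery move from the indirect block into the direct require block lines up with the changes above: the new validator Uint64 tests import k8s.io/apimachinery/pkg/util/yaml directly (see the BUILD.bazel dep and the test import earlier in this diff).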


@@ -1,6 +1,6 @@
#!/bin/bash
-# Continous Integration script to check that BUILD.bazel files are as expected
+# Continuous Integration script to check that BUILD.bazel files are as expected
# when generated from gazelle.
# Duplicate redirect 5 to stdout so that it can be captured, but still printed


@@ -155,7 +155,7 @@ cat << EOF
-c Move discovered coverage reports to the trash
-z FILE Upload specified file directly to Codecov and bypass all report generation.
-This is inteded to be used only with a pre-formatted Codecov report and is not
+This is intended to be used only with a pre-formatted Codecov report and is not
expected to work under any other circumstances.
-Z Exit with 1 if not successful. Default will Exit with 0

hack/latest_version_tag.sh Executable file

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
set -e
# Prints the latest git version tag, like "v2.12.8"
git tag -l 'v*' --sort=creatordate |
perl -nle 'if (/^v\d+\.\d+\.\d+$/) { print $_ }' |
tail -n1
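
Note the anchored regex: pre-release tags (a hypothetical v4.0.0-rc.1, say) fail ^v\d+\.\d+\.\d+$ and are filtered out, so the stamp script below always resolves STABLE_GIT_TAG and STABLE_VERSION_TAG to a final release.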


@@ -70,7 +70,7 @@ if git diff-index --quiet HEAD --; then
echo "nothing to push, exiting early"
exit 0
else
echo "changes detected, commiting and pushing to ethereumapis"
echo "changes detected, committing and pushing to ethereumapis"
fi
# Push to the mirror repository


@@ -1,9 +1,24 @@
#!/bin/bash
# Note: The STABLE_ prefix will force a relink when the value changes when using rules_go x_defs.
repo_url=$(git config --get remote.origin.url)
echo "REPO_URL $repo_url"
-echo STABLE_GIT_COMMIT "continuous-integration"
-echo DATE "now"
-echo DATE_UNIX "0"
-echo DOCKER_TAG "ci-foo"
-echo STABLE_GIT_TAG "c1000deadbeef"
+commit_sha=$(git rev-parse HEAD)
+echo "COMMIT_SHA $commit_sha"
+echo "GIT_BRANCH $git_branch"
+git_tree_status=$(git diff-index --quiet HEAD -- && echo 'Clean' || echo 'Modified')
+echo "GIT_TREE_STATUS $git_tree_status"
+# Note: the "STABLE_" prefix causes these to be part of the "stable" workspace
+# status, which may trigger rebuilds of certain targets if these values change
+# and you're building with the "--stamp" flag.
+latest_version_tag=$(./hack/latest_version_tag.sh)
+echo "STABLE_VERSION_TAG $latest_version_tag"
+echo "STABLE_COMMIT_SHA $commit_sha"
+echo "STABLE_GIT_TAG $latest_version_tag"
+echo DOCKER_TAG "$(git rev-parse --abbrev-ref HEAD)-$(git rev-parse --short=6 HEAD)"
+echo DATE "$(date --rfc-3339=seconds --utc)"
+echo DATE_UNIX "$(date --utc +%s)"


@@ -214,11 +214,13 @@ func AddInt(i ...int) (int, error) {
}
// WeiToGwei converts big int wei to uint64 gwei.
// The input `v` is copied before being modified.
func WeiToGwei(v *big.Int) uint64 {
if v == nil {
return 0
}
gweiPerEth := big.NewInt(1e9)
-v.Div(v, gweiPerEth)
-return v.Uint64()
+copied := big.NewInt(0).Set(v)
+copied.Div(copied, gweiPerEth)
+return copied.Uint64()
}
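
The copy matters because big.Int.Div writes its result into the receiver, so the old code silently divided the caller's value in place. A small sketch of the now-guaranteed behavior (the prysmmath import alias and path are assumed from the test below):

package main

import (
	"fmt"
	"math/big"

	prysmmath "github.com/prysmaticlabs/prysm/v4/math"
)

func main() {
	wei := big.NewInt(2e9) // 2 gwei expressed in wei
	gwei := prysmmath.WeiToGwei(wei)
	fmt.Println(gwei) // 2
	// The argument is left untouched; before the fix it would have been
	// divided down to 2 as a side effect.
	fmt.Println(wei.Int64()) // 2000000000
}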


@@ -567,3 +567,11 @@ func TestWeiToGwei(t *testing.T) {
}
}
}
func TestWeiToGwei_CopyOk(t *testing.T) {
v := big.NewInt(1e9)
got := math.WeiToGwei(v)
require.Equal(t, uint64(1), got)
require.Equal(t, big.NewInt(1e9).Uint64(), v.Uint64())
}

Some files were not shown because too many files have changed in this diff Show More