Compare commits

...

48 Commits

Author SHA1 Message Date
Potuz
76d1ca4a86 Init sync fixes
- Only save finalized checkpoint to DB if it's newer than previous checkpoint
- Do not save Justified checkpoints to DB
2022-08-30 12:05:27 -03:00
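
The first commit above restricts when checkpoints are written: justified checkpoints are no longer persisted at all, and the finalized checkpoint is only written when it is newer than the stored one (see the updateFinalized hunk further down). As a rough, self-contained sketch of that guard — the Checkpoint type and the map-backed store here are illustrative stand-ins, not Prysm code:

package main

import "fmt"

// Checkpoint is a minimal stand-in for the beacon Checkpoint type
// (epoch plus block root), used only for this illustration.
type Checkpoint struct {
	Epoch uint64
	Root  [32]byte
}

// saveFinalizedIfNewer mirrors the guard described in the commit:
// persist the finalized checkpoint only when it is strictly newer
// than the one already stored. The store is a plain map here; the
// real code writes to BoltDB.
func saveFinalizedIfNewer(store map[string]Checkpoint, cp Checkpoint) bool {
	current, ok := store["finalized"]
	if ok && cp.Epoch <= current.Epoch {
		return false // stored checkpoint is as new or newer, nothing to do
	}
	store["finalized"] = cp
	return true
}

func main() {
	store := map[string]Checkpoint{}
	fmt.Println(saveFinalizedIfNewer(store, Checkpoint{Epoch: 3})) // true
	fmt.Println(saveFinalizedIfNewer(store, Checkpoint{Epoch: 2})) // false: older than what is stored
	fmt.Println(saveFinalizedIfNewer(store, Checkpoint{Epoch: 4})) // true
}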
Nishant Das
4a00b295ed Pin Fuzzbuzz to Go 1.18 (#11350) 2022-08-30 10:18:23 +02:00
Potuz
d2b39e9697 Defensive pull tips, doubly-linked-tree (#11175)
* Defensive pull tips, doubly-linked-tree

* feature flag

* gaz

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-30 00:48:25 +00:00
Shem Leong
97dc86e742 Support passing of headers to all Engine API calls (#11330)
* Support passing of headers to all Engine API calls

* Update execution headers example

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-08-29 23:34:29 +00:00
terencechain
cff3b99918 Fix can propose blind block (#11346) 2022-08-29 13:30:28 -07:00
terencechain
be9847f23c Remove unused code (#11345) 2022-08-29 18:03:03 +00:00
Håvard Anda Estensen
4796827d22 Replace deprecated linter deadcode with unused (#11334)
* Replace deprecated linter deadcode with unused

* Ignore unused warnings

* Print filename and line number when linting fails

* Fix path

* Remove unused methods

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-08-29 12:45:25 -04:00
Preston Van Loon
57b7e0b572 db: Wrap errors in db.fetchAncestor to better identify unmarshalling issues (#11342)
* db: Wrap errors in db.fetchAncestor to better identify unmarshalling issues. See #11327

* Wrap genesis state fetch, just in case

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-29 16:08:03 +00:00
terencechain
b5039e9bd9 Better chain start log (#11332)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-29 15:48:23 +00:00
james-prysm
f5d792299f e2e: updating web3signer version (#11339)
* updating version

* reverting change to lighthouse sha
2022-08-29 15:29:40 +00:00
Potuz
9ce922304f Track timestamp in forkchoice (#11333) 2022-08-29 14:49:02 +00:00
Nishant Das
3cbb4aace4 Fix IPC Paths For Windows (#11324)
* return early for windows

* mick's review
2022-08-26 23:05:28 +00:00
terencechain
c94095b609 Accept everything when node is optimistic (#11320) 2022-08-26 21:41:59 +00:00
kasey
ae858bbd0a removing dead code to appease linter (#11326)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2022-08-26 16:06:44 +00:00
Radosław Kapka
30cd158ae5 Move forkchoice dump to eth namespace (#11325)
* protos

* server code

* rename v2 to v1 in endpoint

* middleware

* test fix

* test fix

* oops

* remove duplicated import

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-26 14:54:32 +00:00
Nishant Das
2db22adfe0 Handle Execution Client Failures Better (#11321)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-08-26 14:30:13 +00:00
Nishant Das
161a14d256 Update Lighthouse to v3 in our E2E Runner (#11323)
* update to v3

* fix sha

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-08-26 13:24:28 +00:00
Håvard Anda Estensen
9dee22f7ab Pre-allocate slices (#11317) 2022-08-26 13:49:50 +02:00
Potuz
52271cf0ba Report depth and distance on reorgs (#11315)
* Report depth and distance on reorgs

* rename to CommonAncestor

* change event feed

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-25 23:59:08 +00:00
Potuz
e1f56d403c Restore forkchoice dump endpoint. (#11312)
* Restore forkchoice dump endpoint.

Only working on doubly-linked-tree.

* unit test

* revert proto changes

* protoarray

* Deepsource

* shut up!

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-25 23:37:23 +00:00
terencechain
a2193ee014 Accept attestations when node is optimistic (#11319)
* Accept attestations when node is optimistic

* Fix tests

* Add regression tests

* Fix tests

* Fix more bad tests
2022-08-25 20:15:07 -03:00
james-prysm
762b3df491 Beacon API: api wrongly marked deprecated (#11316)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-08-25 21:02:17 +00:00
terencechain
2b3025828f ErrorContains dont allow empty string (#11314)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-25 19:07:39 +00:00
terencechain
436792fe38 Builder: filter header with 0 bid and empty tx root (#11313)
* Filter header with 0 bid and empty root

* Check nil

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-25 18:24:02 +00:00
terencechain
1d07bffe11 Beacon api: fix get blind block (#11304)
* Beacon api: fix get blind block

* Gaz

* Add back before bellatrix behavior

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-08-25 17:19:17 +00:00
Preston Van Loon
f086535c8a Update llvm to 13.0.1 (#11310)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-25 15:15:14 +00:00
Han Shen
3a4c599a96 Implement delete gaslimit (#11290)
* Implement delete gaslimit.

* Minor comment change.

* Reset gaslimit to DefaultConfig's gaslimit instead of 0.

* After gaslimit deletion, use global gaslimit default value instead of values provided in ProposalConfig.

* After deletion, use the config default; if that is not available, use the global default gaslimit value.

* Use grpc's codes.NotFound instead of http code "404".

* Updated bazel deps (new imports "google.golang.org/grpc/codes" was added for tests).

* Fix "TestServer_RecoverWallet_Derived" test failure.

Previously "params.BeaconConfig()" (thus the default global value
"BLSSecretKeyLength") was overriden by standard_api_test:TestServer_DeleteGasLimit.
Fixed the problem by retoring the origin global default after the test is done.

* Do not change BeaconConfig object, instead change BeaconConfig.DefaultBuilderGasLimit.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2022-08-25 14:43:21 +00:00
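
The commit above spells out a fallback order for the proposer gas limit after deletion: the per-validator override if present, otherwise the proposer-config default, otherwise the global default. A minimal sketch of that lookup order, with hypothetical names (gasLimitFor and the constant value are illustrative, not Prysm APIs):

package main

import "fmt"

const globalDefaultGasLimit = uint64(30_000_000) // hypothetical stand-in for the global builder default

// gasLimitFor sketches the fallback described in the commit:
// a per-validator override if present, otherwise the proposer-config
// default if one was supplied, otherwise the global default.
func gasLimitFor(pubkey string, perValidator map[string]uint64, configDefault *uint64) uint64 {
	if gl, ok := perValidator[pubkey]; ok {
		return gl
	}
	if configDefault != nil {
		return *configDefault
	}
	return globalDefaultGasLimit
}

func main() {
	limits := map[string]uint64{"0xabc": 25_000_000}
	cfgDefault := uint64(28_000_000)

	fmt.Println(gasLimitFor("0xabc", limits, &cfgDefault)) // per-validator value: 25000000

	// "Deleting" the gas limit removes the override, so lookups fall back.
	delete(limits, "0xabc")
	fmt.Println(gasLimitFor("0xabc", limits, &cfgDefault)) // config default: 28000000
	fmt.Println(gasLimitFor("0xabc", limits, nil))         // global default: 30000000
}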
Nishant Das
1c6cbc574e Update Geth Version In Prysm (#11308)
* clean up

* clean up
2022-08-25 13:55:01 +00:00
Potuz
2317375983 Add feature flag to treat all blocks as optimistic at startup (#11303)
* Add feature flag to treat all blocks as optimistic at startup

* Terence's review

* remove changed empty lines

* Apply suggestions from code review

* Go fmt sorry

* bad comments

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-08-25 12:40:29 +00:00
terencechain
6354748b12 Update badges (#11305)
* Update badges

* Update README.md
2022-08-24 22:46:56 +00:00
Nishant Das
e910471784 Add In Duty Logging (#11301)
* add it in

* use time until

* potuz's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-24 19:50:19 +00:00
Potuz
ab7e97ba63 Fix setNodeAndParentValidated (#11302)
* Fix setNodeAndParentValidated

* fix tests
2022-08-24 19:30:45 +00:00
Mike Neuder
e99de7726d Wallet recover CLI Manager migration (#11278)
* Wallet recover CLI Manager migration

* bazel run //:gazelle -- fix

* fix lint and build errors

* add TODO to remove duplicate code

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-08-24 16:57:03 +00:00
Justin Traglia
606fdd2299 Return copy of deposits instead of internal pointer (#11273)
* Return copy of deposits instead of internal pointer

* Update the comment

* Fix linter warning

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-08-24 15:46:51 +00:00
james-prysm
1eb6025aaa Beacon API: validator registration encoding bug (#11299) 2022-08-24 15:05:43 +00:00
Nishant Das
d431ceee25 Improve Logging When Parsing JWT Secret (#11300)
* remove all references

* remove warning
2022-08-24 13:16:48 +00:00
james-prysm
4597599196 Code Cleanup: remove forkchoicer from beacon node (#11294)
* removing forkchoicestore on beacon node

* fixing linting

* Update beacon-chain/node/node.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* fixing if statement

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-23 17:47:12 +00:00
james-prysm
0c32eb5c03 Beacon API: skip updating fee recipient if it's the same (#11296)
* adding in redudant check

* adding unit tests

* fixing linting

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-08-23 17:26:43 +00:00
terencechain
4b1cb6fa80 Fork aware beacon API end points (#11274)
* Make operation RPC fork aware

* Gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-08-23 17:07:11 +00:00
Nishant Das
9cfb823cc6 Simplify List Attestations RPC Method (#11292)
* simplify

* fix tests

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-08-23 12:47:16 -04:00
terencechain
cb502ceb8c Skip updating fee recipient if it's the same (#11295) 2022-08-23 10:54:38 -05:00
Roberto Bayardo
8da4d572d9 fix wrapping of nil errors (#11282)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-08-22 16:57:43 +00:00
terencechain
1c6fa65f7b Add back deprecated flags (#11284)
* Add back deprecated flags

* Add enable-validator-registration as alias

* Clean up

* Add deprecatedEnableLargerGossipHistory

* Rm duplicated gossip batch aggregation
2022-08-22 16:05:15 +00:00
Nishant Das
eaa2566e90 Add Back Fallback Provider Flag (#11281)
* add it back

* remove all references

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-08-22 11:20:21 -04:00
Nishant Das
6957f0637f Bring Down Error To A Debug Log (#11283) 2022-08-22 12:00:50 +00:00
Nishant Das
01b1f15bdf Add Back Resync Routine (#11280) 2022-08-21 13:31:40 +00:00
Nishant Das
b787fd877a Handle Deprecated Flags Correctly (#11276) 2022-08-20 04:16:14 +00:00
Nishant Das
2c89ce810d Bring back old execution flag as an alias (#11275) 2022-08-20 03:28:22 +00:00
151 changed files with 3384 additions and 1765 deletions

View File

@@ -53,6 +53,7 @@ jobs:
uses: golangci/golangci-lint-action@v3
with:
version: v1.47.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
name: Build

View File

@@ -13,7 +13,7 @@ linters:
enable:
- gofmt
- goimports
- deadcode
- unused
- errcheck
- gosimple
- gocognit

View File

@@ -2,8 +2,8 @@
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.2.0-rc.1](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0.rc.1-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0-rc.1)
[![Execution_API_Version 1.0.0-alpha.9](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.alpha.9-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-alpha.9/src/engine)
[![Consensus_Spec_Version 1.2.0-rc.3](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0.rc.3-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0-rc.3)
[![Execution_API_Version 1.0.0-beta.1](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.1-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.1/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)

View File

@@ -28,7 +28,7 @@ load("@com_grail_bazel_toolchain//toolchain:rules.bzl", "llvm_toolchain")
llvm_toolchain(
name = "llvm_toolchain",
llvm_version = "10.0.0",
llvm_version = "13.0.1",
)
load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")

View File

@@ -21,14 +21,13 @@ import (
// OriginData represents the BeaconState and SignedBeaconBlock necessary to start an empty Beacon Node
// using Checkpoint Sync.
type OriginData struct {
wsd *WeakSubjectivityData
sb []byte
bb []byte
st state.BeaconState
b interfaces.SignedBeaconBlock
vu *detect.VersionedUnmarshaler
br [32]byte
sr [32]byte
sb []byte
bb []byte
st state.BeaconState
b interfaces.SignedBeaconBlock
vu *detect.VersionedUnmarshaler
br [32]byte
sr [32]byte
}
// SaveBlock saves the downloaded block to a unique file in the given path.

View File

@@ -95,8 +95,6 @@ func WithTimeout(timeout time.Duration) ClientOpt {
// Client provides a collection of helper methods for calling the Eth Beacon Node API endpoints.
type Client struct {
hc *http.Client
host string
scheme string
baseURL *url.URL
}

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/math"
ethpbv1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
@@ -100,7 +101,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
// A chain re-org occurred, so we fire an event notifying the rest of the services.
if bytesutil.ToBytes32(headBlock.Block().ParentRoot()) != oldHeadRoot {
commonRoot, err := s.ForkChoicer().CommonAncestorRoot(ctx, oldHeadRoot, newHeadRoot)
commonRoot, forkSlot, err := s.ForkChoicer().CommonAncestor(ctx, oldHeadRoot, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not find common ancestor root")
commonRoot = params.BeaconConfig().ZeroHash
@@ -111,8 +112,9 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
"oldSlot": fmt.Sprintf("%d", headSlot),
"oldRoot": fmt.Sprintf("%#x", oldHeadRoot),
"commonAncestorRoot": fmt.Sprintf("%#x", commonRoot),
"distance": headSlot + newHeadSlot - 2*forkSlot,
"depth": math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot)),
}).Info("Chain reorg occurred")
absoluteSlotDifference := slots.AbsoluteValueSlotDifference(newHeadSlot, headSlot)
isOptimistic, err := s.IsOptimistic(ctx)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
@@ -121,7 +123,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
Type: statefeed.Reorg,
Data: &ethpbv1.EventChainReorg{
Slot: newHeadSlot,
Depth: absoluteSlotDifference,
Depth: math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot)),
OldHeadBlock: oldHeadRoot[:],
NewHeadBlock: newHeadRoot[:],
OldHeadState: oldStateRoot,
@@ -342,7 +344,7 @@ func (s *Service) notifyNewHeadEvent(
// This saves the attestations between `orphanedRoot` and the common ancestor root that is derived using `newHeadRoot`.
// It also filters out the attestations that is one epoch older as a defense so invalid attestations don't flow into the attestation pool.
func (s *Service) saveOrphanedAtts(ctx context.Context, orphanedRoot [32]byte, newHeadRoot [32]byte) error {
commonAncestorRoot, err := s.ForkChoicer().CommonAncestorRoot(ctx, newHeadRoot, orphanedRoot)
commonAncestorRoot, _, err := s.ForkChoicer().CommonAncestor(ctx, newHeadRoot, orphanedRoot)
switch {
// Exit early if there's no common ancestor and root doesn't exist, there would be nothing to save.
case errors.Is(err, forkchoice.ErrUnknownCommonAncestor):
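
The hunk above derives two reorg metrics from the common-ancestor (fork) slot: distance = headSlot + newHeadSlot - 2*forkSlot, the total number of slots on both sides of the fork, and depth = max(headSlot - forkSlot, newHeadSlot - forkSlot), the length of the longer branch. A tiny worked example of the arithmetic (fork slot assumed not ahead of either head):

package main

import "fmt"

func reorgMetrics(headSlot, newHeadSlot, forkSlot uint64) (distance, depth uint64) {
	// distance counts slots on both sides of the fork point.
	distance = headSlot + newHeadSlot - 2*forkSlot
	// depth is the longer of the two branches.
	depth = headSlot - forkSlot
	if d := newHeadSlot - forkSlot; d > depth {
		depth = d
	}
	return distance, depth
}

func main() {
	// Old head at slot 12, new head at slot 13, common ancestor at slot 11:
	// distance = 12 + 13 - 22 = 3, depth = max(1, 2) = 2.
	d, dep := reorgMetrics(12, 13, 11)
	fmt.Println(d, dep) // 3 2
}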

View File

@@ -150,6 +150,8 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
assert.DeepEqual(t, newHeadSignedBlock, pb, "Head did not change")
assert.DeepSSZEqual(t, headState.CloneInnerState(), service.headState(ctx).CloneInnerState(), "Head did not change")
require.LogsContain(t, hook, "Chain reorg occurred")
require.LogsContain(t, hook, "distance=1")
require.LogsContain(t, hook, "depth=1")
}
func TestCacheJustifiedStateBalances_CanCache(t *testing.T) {

View File

@@ -217,16 +217,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
}
}()
// Save justified check point to db.
postStateJustifiedEpoch := postState.CurrentJustifiedCheckpoint().Epoch
if justified.Epoch > currStoreJustifiedEpoch || (justified.Epoch == postStateJustifiedEpoch && justified.Epoch > preStateJustifiedEpoch) {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{
Epoch: justified.Epoch, Root: justified.Root[:],
}); err != nil {
return err
}
}
// Save finalized check point to db and more.
postStateFinalizedEpoch := postState.FinalizedCheckpoint().Epoch
finalized := s.ForkChoicer().FinalizedCheckpoint()
@@ -403,12 +393,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
tracing.AnnotateError(span, err)
return err
}
if i > 0 && jCheckpoints[i].Epoch > jCheckpoints[i-1].Epoch {
if err := s.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, jCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
if i > 0 && fCheckpoints[i].Epoch > fCheckpoints[i-1].Epoch {
if err := s.updateFinalized(ctx, fCheckpoints[i]); err != nil {
tracing.AnnotateError(span, err)

View File

@@ -135,11 +135,18 @@ func (s *Service) verifyBlkFinalizedSlot(b interfaces.BeaconBlock) error {
}
// updateFinalized saves the init sync blocks, finalized checkpoint, migrates
// to cold old states and saves the last validated checkpoint to DB
// to cold old states and saves the last validated checkpoint to DB. It returns
// early if the new checkpoint is older than the one on db.
func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) error {
ctx, span := trace.StartSpan(ctx, "blockChain.updateFinalized")
defer span.End()
// return early if new checkpoint is not newer than the one in DB
currentFinalizedEpoch := s.FinalizedCheckpt().Epoch
if cp.Epoch <= currentFinalizedEpoch {
return nil
}
// Blocks need to be saved so that we can retrieve finalized block from
// DB when migrating states.
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {

View File

@@ -1190,10 +1190,6 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
require.Equal(t, types.Epoch(2), cp.Epoch)
// The update should persist in DB.
j, err := service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
require.NoError(t, err)
cp = service.CurrentJustifiedCheckpt()
require.Equal(t, j.Epoch, cp.Epoch)
f, err := service.cfg.BeaconDB.FinalizedCheckpoint(ctx)
require.NoError(t, err)
cp = service.FinalizedCheckpt()
@@ -1238,10 +1234,6 @@ func TestOnBlock_CanFinalize(t *testing.T) {
require.Equal(t, types.Epoch(2), cp.Epoch)
// The update should persist in DB.
j, err := service.cfg.BeaconDB.JustifiedCheckpoint(ctx)
require.NoError(t, err)
cp = service.CurrentJustifiedCheckpt()
require.Equal(t, j.Epoch, cp.Epoch)
f, err := service.cfg.BeaconDB.FinalizedCheckpoint(ctx)
require.NoError(t, err)
cp = service.FinalizedCheckpt()
@@ -3006,10 +2998,9 @@ func TestStore_NoViableHead_Reboot_DoublyLinkedTree(t *testing.T) {
headRoot, err := service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, genesisRoot, bytesutil.ToBytes32(headRoot))
// The node is optimistic now.
optimistic, err := service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.Equal(t, false, optimistic)
require.Equal(t, false, service.ForkChoicer().AllTipsAreInvalid())
// Check that the node's justified checkpoint does not agree with the
@@ -3230,10 +3221,9 @@ func TestStore_NoViableHead_Reboot_Protoarray(t *testing.T) {
headRoot, err := service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, genesisRoot, bytesutil.ToBytes32(headRoot))
// The node is optimistic now
optimistic, err := service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.Equal(t, false, optimistic)
require.Equal(t, false, service.ForkChoicer().AllTipsAreInvalid())
// Check that the node's justified checkpoint does not agree with the

View File

@@ -191,13 +191,6 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
}
spawnCountdownIfPreGenesis(s.ctx, s.genesisTime, s.cfg.BeaconDB)
justified, err := s.cfg.BeaconDB.JustifiedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get justified checkpoint")
}
if justified == nil {
return errNilJustifiedCheckpoint
}
finalized, err := s.cfg.BeaconDB.FinalizedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint")
@@ -214,8 +207,8 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
forkChoicer = protoarray.New()
}
s.cfg.ForkChoiceStore = forkChoicer
if err := forkChoicer.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
if err := forkChoicer.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := forkChoicer.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
@@ -231,14 +224,15 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if err := forkChoicer.InsertNode(s.ctx, st, fRoot); err != nil {
return errors.Wrap(err, "could not insert finalized block to forkchoice")
}
lastValidatedCheckpoint, err := s.cfg.BeaconDB.LastValidatedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get last validated checkpoint")
}
if bytes.Equal(finalized.Root, lastValidatedCheckpoint.Root) {
if err := forkChoicer.SetOptimisticToValid(s.ctx, fRoot); err != nil {
return errors.Wrap(err, "could not set finalized block as validated")
if !features.Get().EnableStartOptimistic {
lastValidatedCheckpoint, err := s.cfg.BeaconDB.LastValidatedCheckpoint(s.ctx)
if err != nil {
return errors.Wrap(err, "could not get last validated checkpoint")
}
if bytes.Equal(finalized.Root, lastValidatedCheckpoint.Root) {
if err := forkChoicer.SetOptimisticToValid(s.ctx, fRoot); err != nil {
return errors.Wrap(err, "could not set finalized block as validated")
}
}
}
// not attempting to save initial sync blocks here, because there shouldn't be any until

View File

@@ -26,6 +26,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
v1 "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/v1"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
@@ -157,7 +158,6 @@ func TestChainStartStop_Initialized(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
ss := &ethpb.StateSummary{
Slot: 1,
@@ -191,7 +191,6 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
require.NoError(t, beaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: blkRoot[:]}))
chainService.cfg.FinalizedStateAtStartUp = s
// Test the start function.
@@ -528,3 +527,45 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
require.Equal(b, true, s.cfg.ForkChoiceStore.HasNode(r), "Block is not in fork choice store")
}
}
func TestChainService_EverythingOptimistic(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableStartOptimistic: true,
})
defer resetFn()
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, genesis)
finalizedSlot := params.BeaconConfig().SlotsPerEpoch*2 + 1
headBlock := util.NewBeaconBlock()
headBlock.Block.Slot = finalizedSlot
headBlock.Block.ParentRoot = bytesutil.PadTo(genesisRoot[:], 32)
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(finalizedSlot))
require.NoError(t, headState.SetGenesisValidatorsRoot(params.BeaconConfig().ZeroHash[:]))
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, beaconDB.SaveState(ctx, headState, headRoot))
require.NoError(t, beaconDB.SaveState(ctx, headState, genesisRoot))
util.SaveBlock(t, ctx, beaconDB, headBlock)
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
c, err := NewService(ctx, WithDatabase(beaconDB), WithStateGen(stateGen), WithAttestationService(attSrv), WithStateNotifier(&mock.MockStateNotifier{}), WithFinalizedStateAtStartUp(headState))
require.NoError(t, err)
require.NoError(t, stateGen.SaveState(ctx, headRoot, headState))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
require.NoError(t, c.StartFromSavedState(headState))
require.Equal(t, true, c.cfg.ForkChoiceStore.HasNode(headRoot))
op, err := c.cfg.ForkChoiceStore.IsOptimistic(headRoot)
require.NoError(t, err)
require.Equal(t, true, op)
}

View File

@@ -185,7 +185,21 @@ func (dc *DepositCache) AllDepositContainers(ctx context.Context) []*ethpb.Depos
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
return dc.deposits
// Make a shallow copy of the deposits and return that. This way, the
// caller can safely iterate over the returned list of deposits without
// the possibility of new deposits showing up. If we were to return the
// list without a copy, when a new deposit is added to the cache, it
// would also be present in the returned value. This could result in a
// race condition if the list is being iterated over.
//
// It's not necessary to make a deep copy of this list because the
// deposits in the cache should never be modified. It is still possible
// for the caller to modify one of the underlying deposits and modify
// the cache, but that's not a race condition. Also, a deep copy would
// take too long and use too much memory.
deposits := make([]*ethpb.DepositContainer, len(dc.deposits))
copy(deposits, dc.deposits)
return deposits
}
// AllDeposits returns a list of historical deposits until the given block number
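
The comment added above argues for a shallow copy: growing the cache after the call cannot change the returned slice, while the deposit pointers themselves remain shared. A minimal illustration of exactly that trade-off (container is a stand-in type):

package main

import "fmt"

type container struct{ index int }

func allContainers(cache []*container) []*container {
	// Shallow copy: a new backing array of pointers, sharing the pointees.
	out := make([]*container, len(cache))
	copy(out, cache)
	return out
}

func main() {
	cache := []*container{{index: 0}, {index: 1}}
	snapshot := allContainers(cache)

	// Appending to the cache does not affect the snapshot's length...
	cache = append(cache, &container{index: 2})
	fmt.Println(len(snapshot), len(cache)) // 2 3

	// ...but the elements themselves are still shared (no deep copy).
	cache[0].index = 42
	fmt.Println(snapshot[0].index) // 42
}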

View File

@@ -55,7 +55,7 @@ func (dc *DepositCache) PendingDeposits(ctx context.Context, untilBlk *big.Int)
depositCntrs := dc.PendingContainers(ctx, untilBlk)
var deposits []*ethpb.Deposit
deposits := make([]*ethpb.Deposit, 0, len(depositCntrs))
for _, dep := range depositCntrs {
deposits = append(deposits, dep.Deposit)
}
@@ -71,7 +71,7 @@ func (dc *DepositCache) PendingContainers(ctx context.Context, untilBlk *big.Int
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var depositCntrs []*ethpb.DepositContainer
depositCntrs := make([]*ethpb.DepositContainer, 0, len(dc.pendingDeposits))
for _, ctnr := range dc.pendingDeposits {
if untilBlk == nil || untilBlk.Uint64() >= ctnr.Eth1BlockHeight {
depositCntrs = append(depositCntrs, ctnr)
@@ -139,7 +139,7 @@ func (dc *DepositCache) PrunePendingDeposits(ctx context.Context, merkleTreeInde
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
var cleanDeposits []*ethpb.DepositContainer
cleanDeposits := make([]*ethpb.DepositContainer, 0, len(dc.pendingDeposits))
for _, dp := range dc.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
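
These hunks replace "var xs []*T" declarations with "make([]*T, 0, len(src))"; when an upper bound on the result length is known, pre-allocating the capacity lets append fill a single backing array instead of reallocating as the slice grows. A small sketch of the pattern:

package main

import "fmt"

type deposit struct{ amount uint64 }

func collect(src []*deposit) []*deposit {
	// Capacity hint: at most len(src) elements will be appended,
	// so the backing array is allocated once.
	out := make([]*deposit, 0, len(src))
	for _, d := range src {
		if d.amount > 0 {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	src := []*deposit{{amount: 32}, {amount: 0}, {amount: 1}}
	out := collect(src)
	fmt.Println(len(out), cap(out)) // 2 3
}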

View File

@@ -16,7 +16,7 @@ func UpdateGenesisEth1Data(state state.BeaconState, deposits []*ethpb.Deposit, e
return nil, errors.New("no eth1data provided for genesis state")
}
var leaves [][]byte
leaves := make([][]byte, 0, len(deposits))
for _, deposit := range deposits {
if deposit == nil || deposit.Data == nil {
return nil, fmt.Errorf("nil deposit or deposit with nil data cannot be processed: %v", deposit)

View File

@@ -40,7 +40,6 @@ type ReadOnlyDatabase interface {
HasStateSummary(ctx context.Context, blockRoot [32]byte) bool
HighestSlotStatesBelow(ctx context.Context, slot types.Slot) ([]state.ReadOnlyBeaconState, error)
// Checkpoint operations.
JustifiedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error)
FinalizedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error)
ArchivedPointRoot(ctx context.Context, slot types.Slot) [32]byte
HasArchivedPoint(ctx context.Context, slot types.Slot) bool
@@ -76,7 +75,6 @@ type NoHeadAccessDatabase interface {
SaveStateSummary(ctx context.Context, summary *ethpb.StateSummary) error
SaveStateSummaries(ctx context.Context, summaries []*ethpb.StateSummary) error
// Checkpoint operations.
SaveJustifiedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error
SaveFinalizedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error
SaveLastValidatedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error
// Deposit contract related handlers.

View File

@@ -259,16 +259,12 @@ func TestStore_DeleteJustifiedBlock(t *testing.T) {
b.Block.Slot = 1
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Root: root[:],
}
st, err := util.NewBeaconState()
require.NoError(t, err)
blk, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, blk))
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteJustifiedAndFinalized)
}

View File

@@ -14,24 +14,6 @@ import (
var errMissingStateForCheckpoint = errors.New("missing state summary for checkpoint root")
// JustifiedCheckpoint returns the latest justified checkpoint in beacon chain.
func (s *Store) JustifiedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.JustifiedCheckpoint")
defer span.End()
var checkpoint *ethpb.Checkpoint
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(checkpointBucket)
enc := bkt.Get(justifiedCheckpointKey)
if enc == nil {
checkpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
return nil
}
checkpoint = &ethpb.Checkpoint{}
return decode(ctx, enc, checkpoint)
})
return checkpoint, err
}
// FinalizedCheckpoint returns the latest finalized checkpoint in beacon chain.
func (s *Store) FinalizedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.FinalizedCheckpoint")
@@ -50,29 +32,6 @@ func (s *Store) FinalizedCheckpoint(ctx context.Context) (*ethpb.Checkpoint, err
return checkpoint, err
}
// SaveJustifiedCheckpoint saves justified checkpoint in beacon chain.
func (s *Store) SaveJustifiedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveJustifiedCheckpoint")
defer span.End()
enc, err := encode(ctx, checkpoint)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(checkpointBucket)
hasStateSummary := s.hasStateSummaryBytes(tx, bytesutil.ToBytes32(checkpoint.Root))
hasStateInDB := tx.Bucket(stateBucket).Get(checkpoint.Root) != nil
if !(hasStateInDB || hasStateSummary) {
log.Warnf("Recovering state summary for justified root: %#x", bytesutil.Trunc(checkpoint.Root))
if err := recoverStateSummary(ctx, tx, checkpoint.Root); err != nil {
return errors.Wrapf(errMissingStateForCheckpoint, "could not save justified checkpoint, finalized root: %#x", bytesutil.Trunc(checkpoint.Root))
}
}
return bucket.Put(justifiedCheckpointKey, enc)
})
}
// SaveFinalizedCheckpoint saves finalized checkpoint in beacon chain.
func (s *Store) SaveFinalizedCheckpoint(ctx context.Context, checkpoint *ethpb.Checkpoint) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveFinalizedCheckpoint")

View File

@@ -14,44 +14,6 @@ import (
"google.golang.org/protobuf/proto"
)
func TestStore_JustifiedCheckpoint_CanSaveRetrieve(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
root := bytesutil.ToBytes32([]byte{'A'})
cp := &ethpb.Checkpoint{
Epoch: 10,
Root: root[:],
}
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
retrieved, err := db.JustifiedCheckpoint(ctx)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(cp, retrieved), "Wanted %v, received %v", cp, retrieved)
}
func TestStore_JustifiedCheckpoint_Recover(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
blk := util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{})
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Epoch: 2,
Root: r[:],
}
wb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wb))
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
retrieved, err := db.JustifiedCheckpoint(ctx)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(cp, retrieved), "Wanted %v, received %v", cp, retrieved)
}
func TestStore_FinalizedCheckpoint_CanSaveRetrieve(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
@@ -108,16 +70,6 @@ func TestStore_FinalizedCheckpoint_Recover(t *testing.T) {
assert.Equal(t, true, proto.Equal(cp, retrieved), "Wanted %v, received %v", cp, retrieved)
}
func TestStore_JustifiedCheckpoint_DefaultIsZeroHash(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
cp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
retrieved, err := db.JustifiedCheckpoint(ctx)
require.NoError(t, err)
assert.Equal(t, true, proto.Equal(cp, retrieved), "Wanted %v, received %v", cp, retrieved)
}
func TestStore_FinalizedCheckpoint_DefaultIsZeroHash(t *testing.T) {
db := setupDB(t)
ctx := context.Background()

View File

@@ -93,9 +93,6 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
Epoch: types.Epoch(slotEpoch),
Root: blockRoot[:],
}
if err = s.SaveJustifiedCheckpoint(ctx, chkpt); err != nil {
return errors.Wrap(err, "could not mark checkpoint sync block as justified")
}
if err = s.SaveFinalizedCheckpoint(ctx, chkpt); err != nil {
return errors.Wrap(err, "could not mark checkpoint sync block as finalized")
}

View File

@@ -403,6 +403,10 @@ func (s *Service) processBlockInBatch(ctx context.Context, currentBlockNum uint6
}
}
s.latestEth1DataLock.RLock()
lastReqBlock := s.latestEth1Data.LastRequestedBlock
s.latestEth1DataLock.RUnlock()
for _, filterLog := range logs {
if filterLog.BlockNumber > currentBlockNum {
if err := s.checkHeaderRange(ctx, currentBlockNum, filterLog.BlockNumber-1, headersMap, requestHeaders); err != nil {
@@ -415,6 +419,13 @@ func (s *Service) processBlockInBatch(ctx context.Context, currentBlockNum uint6
currentBlockNum = filterLog.BlockNumber
}
if err := s.ProcessLog(ctx, filterLog); err != nil {
// In the event the execution client gives us a garbled/bad log
// we reset the last requested block to the previous valid block range. This
// prevents the beacon from advancing processing of logs to another range
// in the event of an execution client failure.
s.latestEth1DataLock.Lock()
s.latestEth1Data.LastRequestedBlock = lastReqBlock
s.latestEth1DataLock.Unlock()
return 0, 0, err
}
}
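
The added lines remember the last requested eth1 block before iterating the logs and restore it when ProcessLog fails, so a garbled response from the execution client does not advance the cursor past an unprocessed range. A sketch of that save-then-restore pattern, with a plain mutex standing in for latestEth1DataLock (tracker and processBatch are illustrative names):

package main

import (
	"errors"
	"fmt"
	"sync"
)

type tracker struct {
	mu                 sync.RWMutex
	lastRequestedBlock uint64
}

// processBatch advances the cursor while processing logs, but restores
// the previous cursor if any log fails, so the range is retried later.
func (t *tracker) processBatch(logs []uint64, process func(uint64) error) error {
	t.mu.RLock()
	checkpoint := t.lastRequestedBlock
	t.mu.RUnlock()

	for _, blk := range logs {
		t.mu.Lock()
		t.lastRequestedBlock = blk
		t.mu.Unlock()

		if err := process(blk); err != nil {
			t.mu.Lock()
			t.lastRequestedBlock = checkpoint // roll back to the last good position
			t.mu.Unlock()
			return err
		}
	}
	return nil
}

func main() {
	t := &tracker{lastRequestedBlock: 100}
	err := t.processBatch([]uint64{101, 102}, func(blk uint64) error {
		if blk == 102 {
			return errors.New("garbled log")
		}
		return nil
	})
	fmt.Println(err, t.lastRequestedBlock) // garbled log 100
}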

View File

@@ -36,6 +36,14 @@ func WithHttpEndpointAndJWTSecret(endpointString string, secret []byte) Option {
}
}
// WithHeaders adds headers to the execution node JSON-RPC requests.
func WithHeaders(headers []string) Option {
return func(s *Service) error {
s.cfg.headers = headers
return nil
}
}
// WithDepositContractAddress for the deposit contract.
func WithDepositContractAddress(addr common.Address) Option {
return func(s *Service) error {
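
WithHeaders follows the functional-options pattern this package already uses: each option is a closure applied to the Service at construction time. A condensed, self-contained sketch of the pattern under that assumption (the NewService constructor and trimmed types here are illustrative):

package main

import "fmt"

type config struct {
	headers []string
}

type Service struct {
	cfg config
}

// Option is a function that configures a Service.
type Option func(*Service) error

// WithHeaders records extra HTTP headers to attach to JSON-RPC requests.
func WithHeaders(headers []string) Option {
	return func(s *Service) error {
		s.cfg.headers = headers
		return nil
	}
}

// NewService applies each option in order.
func NewService(opts ...Option) (*Service, error) {
	s := &Service{}
	for _, opt := range opts {
		if err := opt(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	s, err := NewService(WithHeaders([]string{"X-Auth=secret"}))
	fmt.Println(s.cfg.headers, err) // [X-Auth=secret] <nil>
}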

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"net/url"
"strings"
"time"
"github.com/ethereum/go-ethereum/ethclient"
@@ -65,7 +66,6 @@ func (s *Service) pollConnectionStatus(ctx context.Context) {
currClient := s.rpcClient
if err := s.setupExecutionClientConnections(ctx, s.cfg.currHttpEndpoint); err != nil {
errorLogger(err, "Could not connect to execution client endpoint")
s.retryExecutionClientConnection(ctx, err)
continue
}
// Close previous client, if connection was successful.
@@ -114,7 +114,7 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
if err != nil {
return nil, err
}
case "":
case "", "ipc":
client, err = gethRPC.DialIPC(ctx, endpoint.Url)
if err != nil {
return nil, err
@@ -129,6 +129,16 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
}
client.SetHeader("Authorization", header)
}
for _, h := range s.cfg.headers {
if h != "" {
keyValue := strings.Split(h, "=")
if len(keyValue) < 2 {
log.Warnf("Incorrect HTTP header flag format. Skipping %v", keyValue[0])
continue
}
client.SetHeader(keyValue[0], strings.Join(keyValue[1:], "="))
}
}
return client, nil
}
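
The loop above accepts headers as "Key=Value" strings, splits on "=", and rejoins everything after the first separator, so values that themselves contain "=" (for example base64 padding) survive; malformed entries are skipped with a warning. A standalone sketch of that parsing:

package main

import (
	"fmt"
	"strings"
)

// parseHeaders mirrors the flag handling above: each entry must be
// "Key=Value"; anything after the first '=' is kept as the value.
func parseHeaders(raw []string) map[string]string {
	out := make(map[string]string)
	for _, h := range raw {
		if h == "" {
			continue
		}
		kv := strings.Split(h, "=")
		if len(kv) < 2 {
			fmt.Printf("skipping malformed header %q\n", h)
			continue
		}
		out[kv[0]] = strings.Join(kv[1:], "=")
	}
	return out
}

func main() {
	hdrs := parseHeaders([]string{"X-Auth=abc==", "bad-entry", ""})
	fmt.Println(hdrs["X-Auth"]) // abc==
}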

View File

@@ -128,6 +128,7 @@ type config struct {
eth1HeaderReqLimit uint64
beaconNodeStatsUpdater BeaconNodeStatsUpdater
currHttpEndpoint network.Endpoint
headers []string
finalizedStateAtStartup state.BeaconState
}
@@ -316,11 +317,6 @@ func (s *Service) updateBeaconNodeStats() {
s.cfg.beaconNodeStatsUpdater.Update(bs)
}
func (s *Service) updateCurrHttpEndpoint(endpoint network.Endpoint) {
s.cfg.currHttpEndpoint = endpoint
s.updateBeaconNodeStats()
}
func (s *Service) updateConnectedETH1(state bool) {
s.connectedETH1 = state
s.updateBeaconNodeStats()

View File

@@ -18,6 +18,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/eth/v1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -31,6 +31,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
@@ -62,12 +63,14 @@ go_test(
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/v3:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -3,6 +3,7 @@ package doublylinkedtree
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/blocks"
@@ -14,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
@@ -82,7 +84,8 @@ func (f *ForkChoice) Head(
jc := f.JustifiedCheckpoint()
fc := f.FinalizedCheckpoint()
if err := f.store.treeRootNode.updateBestDescendant(ctx, jc.Epoch, fc.Epoch); err != nil {
currentEpoch := slots.EpochsSinceGenesis(time.Unix(int64(f.store.genesisTime), 0))
if err := f.store.treeRootNode.updateBestDescendant(ctx, jc.Epoch, fc.Epoch, currentEpoch); err != nil {
return [32]byte{}, errors.Wrap(err, "could not update best descendant")
}
return f.store.head(ctx)
@@ -490,30 +493,31 @@ func (f *ForkChoice) UpdateFinalizedCheckpoint(fc *forkchoicetypes.Checkpoint) e
}
// CommonAncestorRoot returns the common ancestor root between the two block roots r1 and r2.
func (f *ForkChoice) CommonAncestorRoot(ctx context.Context, r1 [32]byte, r2 [32]byte) ([32]byte, error) {
func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byte) ([32]byte, types.Slot, error) {
ctx, span := trace.StartSpan(ctx, "doublelinkedtree.CommonAncestorRoot")
defer span.End()
// Do nothing if the input roots are the same.
if r1 == r2 {
return r1, nil
}
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
n1, ok := f.store.nodeByRoot[r1]
if !ok || n1 == nil {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
// Do nothing if the input roots are the same.
if r1 == r2 {
return r1, n1.slot, nil
}
n2, ok := f.store.nodeByRoot[r2]
if !ok || n2 == nil {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
for {
if ctx.Err() != nil {
return [32]byte{}, ctx.Err()
return [32]byte{}, 0, ctx.Err()
}
if n1.slot > n2.slot {
n1 = n1.parent
@@ -521,17 +525,17 @@ func (f *ForkChoice) CommonAncestorRoot(ctx context.Context, r1 [32]byte, r2 [32
// This should not happen at runtime as the finalized
// node has to be a common ancestor
if n1 == nil {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
} else {
n2 = n2.parent
// Reaches the end of the tree and unable to find common ancestor.
if n2 == nil {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
}
if n1 == n2 {
return n1.root, nil
return n1.root, n1.slot, nil
}
}
}
@@ -612,3 +616,52 @@ func (f *ForkChoice) JustifiedPayloadBlockHash() [32]byte {
}
return node.payloadHash
}
// ForkChoiceDump returns a full dump of forkchoice.
func (f *ForkChoice) ForkChoiceDump(ctx context.Context) (*v1.ForkChoiceResponse, error) {
jc := &v1.Checkpoint{
Epoch: f.store.justifiedCheckpoint.Epoch,
Root: f.store.justifiedCheckpoint.Root[:],
}
bjc := &v1.Checkpoint{
Epoch: f.store.bestJustifiedCheckpoint.Epoch,
Root: f.store.bestJustifiedCheckpoint.Root[:],
}
ujc := &v1.Checkpoint{
Epoch: f.store.unrealizedJustifiedCheckpoint.Epoch,
Root: f.store.unrealizedJustifiedCheckpoint.Root[:],
}
fc := &v1.Checkpoint{
Epoch: f.store.finalizedCheckpoint.Epoch,
Root: f.store.finalizedCheckpoint.Root[:],
}
ufc := &v1.Checkpoint{
Epoch: f.store.unrealizedFinalizedCheckpoint.Epoch,
Root: f.store.unrealizedFinalizedCheckpoint.Root[:],
}
nodes := make([]*v1.ForkChoiceNode, 0, f.NodeCount())
var err error
if f.store.treeRootNode != nil {
nodes, err = f.store.treeRootNode.nodeTreeDump(ctx, nodes)
if err != nil {
return nil, err
}
}
var headRoot [32]byte
if f.store.headNode != nil {
headRoot = f.store.headNode.root
}
resp := &v1.ForkChoiceResponse{
JustifiedCheckpoint: jc,
BestJustifiedCheckpoint: bjc,
UnrealizedJustifiedCheckpoint: ujc,
FinalizedCheckpoint: fc,
UnrealizedFinalizedCheckpoint: ufc,
ProposerBoostRoot: f.store.proposerBoostRoot[:],
PreviousProposerBoostRoot: f.store.previousProposerBoostRoot[:],
HeadRoot: headRoot[:],
ForkchoiceNodes: nodes,
}
return resp, nil
}
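
CommonAncestor (renamed from CommonAncestorRoot) walks both nodes toward the root, always stepping the one at the higher slot, and now returns the ancestor's slot alongside its root. A simplified version of the walk on a bare parent-pointer tree, without the locking or error wrapping of the real store (node and commonAncestor here are illustrative):

package main

import (
	"errors"
	"fmt"
)

type node struct {
	slot   uint64
	root   byte
	parent *node
}

var errUnknownAncestor = errors.New("unknown common ancestor")

// commonAncestor climbs from n1 and n2 toward the root, always moving
// the node with the higher slot, until the two pointers meet.
func commonAncestor(n1, n2 *node) (byte, uint64, error) {
	if n1 == n2 {
		return n1.root, n1.slot, nil
	}
	for {
		if n1.slot > n2.slot {
			n1 = n1.parent
			if n1 == nil {
				return 0, 0, errUnknownAncestor
			}
		} else {
			n2 = n2.parent
			if n2 == nil {
				return 0, 0, errUnknownAncestor
			}
		}
		if n1 == n2 {
			return n1.root, n1.slot, nil
		}
	}
}

func main() {
	// a(0) <- b(1) <- c(2);  a(0) <- d(1)
	a := &node{slot: 0, root: 'a'}
	b := &node{slot: 1, root: 'b', parent: a}
	c := &node{slot: 2, root: 'c', parent: b}
	d := &node{slot: 1, root: 'd', parent: a}

	root, slot, err := commonAncestor(c, d)
	fmt.Printf("%c %d %v\n", root, slot, err) // a 0 <nil>
}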

View File

@@ -208,7 +208,7 @@ func TestForkChoice_IsCanonicalReorg(t *testing.T) {
require.Equal(t, uint64(10), f.store.nodeByRoot[[32]byte{'1'}].weight)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'2'}].weight)
require.NoError(t, f.store.treeRootNode.updateBestDescendant(ctx, 1, 1))
require.NoError(t, f.store.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
require.DeepEqual(t, [32]byte{'3'}, f.store.treeRootNode.bestDescendant.root)
f.store.nodesLock.Unlock()
@@ -408,73 +408,85 @@ func TestStore_CommonAncestor(t *testing.T) {
r1 [32]byte
r2 [32]byte
wantRoot [32]byte
wantSlot types.Slot
}{
{
name: "Common ancestor between c and b is a",
r1: [32]byte{'c'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between c and d is a",
r1: [32]byte{'c'},
r2: [32]byte{'d'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between c and e is a",
r1: [32]byte{'c'},
r2: [32]byte{'e'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between g and f is c",
r1: [32]byte{'g'},
r2: [32]byte{'f'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between f and h is c",
r1: [32]byte{'f'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between g and h is c",
r1: [32]byte{'g'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between b and h is a",
r1: [32]byte{'b'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between e and h is a",
r1: [32]byte{'e'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between i and f is c",
r1: [32]byte{'i'},
r2: [32]byte{'f'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between e and h is a",
r1: [32]byte{'j'},
r2: [32]byte{'g'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
gotRoot, err := f.CommonAncestorRoot(ctx, tc.r1, tc.r2)
gotRoot, gotSlot, err := f.CommonAncestor(ctx, tc.r1, tc.r2)
require.NoError(t, err)
require.Equal(t, tc.wantRoot, gotRoot)
require.Equal(t, tc.wantSlot, gotSlot)
})
}
@@ -497,46 +509,53 @@ func TestStore_CommonAncestor(t *testing.T) {
r1 [32]byte
r2 [32]byte
wantRoot [32]byte
wantSlot types.Slot
}{
{
name: "Common ancestor between a and b is a",
r1: [32]byte{'a'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between b and d is b",
r1: [32]byte{'d'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'b'},
wantSlot: 1,
},
{
name: "Common ancestor between d and a is a",
r1: [32]byte{'d'},
r2: [32]byte{'a'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
gotRoot, err := f.CommonAncestorRoot(ctx, tc.r1, tc.r2)
gotRoot, gotSlot, err := f.CommonAncestor(ctx, tc.r1, tc.r2)
require.NoError(t, err)
require.Equal(t, tc.wantRoot, gotRoot)
require.Equal(t, tc.wantSlot, gotSlot)
})
}
// Equal inputs should return the same root.
r, err := f.CommonAncestorRoot(ctx, [32]byte{'b'}, [32]byte{'b'})
r, s, err := f.CommonAncestor(ctx, [32]byte{'b'}, [32]byte{'b'})
require.NoError(t, err)
require.Equal(t, [32]byte{'b'}, r)
require.Equal(t, types.Slot(1), s)
// Requesting finalized root (last node) should return the same root.
r, err = f.CommonAncestorRoot(ctx, [32]byte{'a'}, [32]byte{'a'})
r, s, err = f.CommonAncestor(ctx, [32]byte{'a'}, [32]byte{'a'})
require.NoError(t, err)
require.Equal(t, [32]byte{'a'}, r)
require.Equal(t, types.Slot(0), s)
// Requesting unknown root
_, err = f.CommonAncestorRoot(ctx, [32]byte{'a'}, [32]byte{'z'})
_, _, err = f.CommonAncestor(ctx, [32]byte{'a'}, [32]byte{'z'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
_, err = f.CommonAncestorRoot(ctx, [32]byte{'z'}, [32]byte{'a'})
_, _, err = f.CommonAncestor(ctx, [32]byte{'z'}, [32]byte{'a'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
n := &Node{
slot: 100,
@@ -550,7 +569,7 @@ func TestStore_CommonAncestor(t *testing.T) {
f.store.nodeByRoot[[32]byte{'y'}] = n
// broken link
_, err = f.CommonAncestorRoot(ctx, [32]byte{'y'}, [32]byte{'a'})
_, _, err = f.CommonAncestor(ctx, [32]byte{'y'}, [32]byte{'a'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
}

View File

@@ -5,8 +5,10 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
)
// depth returns the length of the path to the root of Fork Choice
@@ -42,7 +44,7 @@ func (n *Node) applyWeightChanges(ctx context.Context) error {
// updateBestDescendant updates the best descendant of this node and its
// children. This function assumes the caller has a lock on Store.nodesLock
func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finalizedEpoch types.Epoch) error {
func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finalizedEpoch, currentEpoch types.Epoch) error {
if ctx.Err() != nil {
return ctx.Err()
}
@@ -58,10 +60,10 @@ func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finaliz
if child == nil {
return errors.Wrap(ErrNilNode, "could not update best descendant")
}
if err := child.updateBestDescendant(ctx, justifiedEpoch, finalizedEpoch); err != nil {
if err := child.updateBestDescendant(ctx, justifiedEpoch, finalizedEpoch, currentEpoch); err != nil {
return err
}
childLeadsToViableHead := child.leadsToViableHead(justifiedEpoch, finalizedEpoch)
childLeadsToViableHead := child.leadsToViableHead(justifiedEpoch, finalizedEpoch, currentEpoch)
if childLeadsToViableHead && !hasViableDescendant {
// The child leads to a viable head, but the current
// parent's best child doesn't.
@@ -96,18 +98,24 @@ func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finaliz
// viableForHead returns true if the node is viable to head.
// Any node with different finalized or justified epoch than
// the ones in fork choice store should not be viable to head.
func (n *Node) viableForHead(justifiedEpoch, finalizedEpoch types.Epoch) bool {
func (n *Node) viableForHead(justifiedEpoch, finalizedEpoch, currentEpoch types.Epoch) bool {
justified := justifiedEpoch == n.justifiedEpoch || justifiedEpoch == 0
finalized := finalizedEpoch == n.finalizedEpoch || finalizedEpoch == 0
if features.Get().EnableDefensivePull && !justified && justifiedEpoch+1 == currentEpoch {
if n.unrealizedJustifiedEpoch+1 >= currentEpoch {
justified = true
finalized = true
}
}
return justified && finalized
}
func (n *Node) leadsToViableHead(justifiedEpoch, finalizedEpoch types.Epoch) bool {
func (n *Node) leadsToViableHead(justifiedEpoch, finalizedEpoch, currentEpoch types.Epoch) bool {
if n.bestDescendant == nil {
return n.viableForHead(justifiedEpoch, finalizedEpoch)
return n.viableForHead(justifiedEpoch, finalizedEpoch, currentEpoch)
}
return n.bestDescendant.viableForHead(justifiedEpoch, finalizedEpoch)
return n.bestDescendant.viableForHead(justifiedEpoch, finalizedEpoch, currentEpoch)
}
// setNodeAndParentValidated sets the current node and all the ancestors as validated (i.e. non-optimistic).
@@ -116,10 +124,48 @@ func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
return ctx.Err()
}
if !n.optimistic || n.parent == nil {
if !n.optimistic {
return nil
}
n.optimistic = false
if n.parent == nil {
return nil
}
return n.parent.setNodeAndParentValidated(ctx)
}
// nodeTreeDump appends to the given list all the nodes descending from this one
func (n *Node) nodeTreeDump(ctx context.Context, nodes []*v1.ForkChoiceNode) ([]*v1.ForkChoiceNode, error) {
if ctx.Err() != nil {
return nil, ctx.Err()
}
var parentRoot [32]byte
if n.parent != nil {
parentRoot = n.parent.root
}
thisNode := &v1.ForkChoiceNode{
Slot: n.slot,
Root: n.root[:],
ParentRoot: parentRoot[:],
JustifiedEpoch: n.justifiedEpoch,
FinalizedEpoch: n.finalizedEpoch,
UnrealizedJustifiedEpoch: n.unrealizedJustifiedEpoch,
UnrealizedFinalizedEpoch: n.unrealizedFinalizedEpoch,
Balance: n.balance,
Weight: n.weight,
ExecutionOptimistic: n.optimistic,
ExecutionPayload: n.payloadHash[:],
Timestamp: n.timestamp,
}
nodes = append(nodes, thisNode)
var err error
for _, child := range n.children {
nodes, err = child.nodeTreeDump(ctx, nodes)
if err != nil {
return nil, err
}
}
return nodes, nil
}
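
The updated viability check threads the current epoch through so that, behind the EnableDefensivePull flag, a node whose realized justification lags the store by exactly one epoch still counts as viable when its unrealized justified epoch has caught up. A condensed sketch of that condition, with the feature flag passed as a plain parameter and field names simplified:

package main

import "fmt"

type node struct {
	justifiedEpoch           uint64
	finalizedEpoch           uint64
	unrealizedJustifiedEpoch uint64
}

// viableForHead mirrors the logic above: the node must match the store's
// justified and finalized epochs (or those must be zero), with a one-epoch
// grace window when the defensive-pull feature is enabled.
func viableForHead(n node, justifiedEpoch, finalizedEpoch, currentEpoch uint64, defensivePull bool) bool {
	justified := justifiedEpoch == n.justifiedEpoch || justifiedEpoch == 0
	finalized := finalizedEpoch == n.finalizedEpoch || finalizedEpoch == 0
	if defensivePull && !justified && justifiedEpoch+1 == currentEpoch {
		if n.unrealizedJustifiedEpoch+1 >= currentEpoch {
			justified = true
			finalized = true
		}
	}
	return justified && finalized
}

func main() {
	n := node{justifiedEpoch: 3, finalizedEpoch: 3, unrealizedJustifiedEpoch: 4}
	// Store is justified at epoch 4, current epoch is 5: the node's realized
	// justification lags by one, but its unrealized justification qualifies.
	fmt.Println(viableForHead(n, 4, 4, 5, false)) // false
	fmt.Println(viableForHead(n, 4, 4, 5, true))  // true
}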

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
@@ -113,7 +114,7 @@ func TestNode_UpdateBestDescendant_HigherWeightChild(t *testing.T) {
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 100
s.nodeByRoot[indexToHash(2)].weight = 200
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1))
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
assert.Equal(t, 2, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[1], s.treeRootNode.bestDescendant)
@@ -133,7 +134,7 @@ func TestNode_UpdateBestDescendant_LowerWeightChild(t *testing.T) {
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 200
s.nodeByRoot[indexToHash(2)].weight = 100
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1))
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
assert.Equal(t, 2, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[0], s.treeRootNode.bestDescendant)
@@ -173,7 +174,7 @@ func TestNode_ViableForHead(t *testing.T) {
{&Node{finalizedEpoch: 3, justifiedEpoch: 4}, 4, 3, true},
}
for _, tc := range tests {
got := tc.n.viableForHead(tc.justifiedEpoch, tc.finalizedEpoch)
got := tc.n.viableForHead(tc.justifiedEpoch, tc.finalizedEpoch, 5)
assert.Equal(t, tc.want, got)
}
}
@@ -197,15 +198,17 @@ func TestNode_LeadsToViableHead(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.Equal(t, true, f.store.treeRootNode.leadsToViableHead(4, 3))
require.Equal(t, true, f.store.nodeByRoot[indexToHash(5)].leadsToViableHead(4, 3))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(2)].leadsToViableHead(4, 3))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(4)].leadsToViableHead(4, 3))
require.Equal(t, true, f.store.treeRootNode.leadsToViableHead(4, 3, 5))
require.Equal(t, true, f.store.nodeByRoot[indexToHash(5)].leadsToViableHead(4, 3, 5))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(2)].leadsToViableHead(4, 3, 5))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(4)].leadsToViableHead(4, 3, 5))
}
func TestNode_SetFullyValidated(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
storeNodes := make([]*Node, 6)
storeNodes[0] = f.store.treeRootNode
// insert blocks in the fork pattern (optimistic status in parenthesis)
//
// 0 (false) -- 1 (false) -- 2 (false) -- 3 (true) -- 4 (true)
@@ -215,20 +218,25 @@ func TestNode_SetFullyValidated(t *testing.T) {
state, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
storeNodes[1] = f.store.nodeByRoot[blkRoot]
require.NoError(t, f.SetOptimisticToValid(ctx, params.BeaconConfig().ZeroHash))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, indexToHash(2), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
storeNodes[2] = f.store.nodeByRoot[blkRoot]
require.NoError(t, f.SetOptimisticToValid(ctx, indexToHash(1)))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, indexToHash(3), indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
storeNodes[3] = f.store.nodeByRoot[blkRoot]
state, blkRoot, err = prepareForkchoiceState(ctx, 4, indexToHash(4), indexToHash(3), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
storeNodes[4] = f.store.nodeByRoot[blkRoot]
state, blkRoot, err = prepareForkchoiceState(ctx, 5, indexToHash(5), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
storeNodes[5] = f.store.nodeByRoot[blkRoot]
opt, err := f.IsOptimistic(indexToHash(5))
require.NoError(t, err)
@@ -253,4 +261,22 @@ func TestNode_SetFullyValidated(t *testing.T) {
opt, err = f.IsOptimistic(indexToHash(3))
require.NoError(t, err)
require.Equal(t, false, opt)
respNodes := make([]*v1.ForkChoiceNode, 0)
respNodes, err = f.store.treeRootNode.nodeTreeDump(ctx, respNodes)
require.NoError(t, err)
require.Equal(t, len(respNodes), f.NodeCount())
for i, respNode := range respNodes {
require.Equal(t, storeNodes[i].slot, respNode.Slot)
require.DeepEqual(t, storeNodes[i].root[:], respNode.Root)
require.Equal(t, storeNodes[i].balance, respNode.Balance)
require.Equal(t, storeNodes[i].weight, respNode.Weight)
require.Equal(t, storeNodes[i].optimistic, respNode.ExecutionOptimistic)
require.Equal(t, storeNodes[i].justifiedEpoch, respNode.JustifiedEpoch)
require.Equal(t, storeNodes[i].unrealizedJustifiedEpoch, respNode.UnrealizedJustifiedEpoch)
require.Equal(t, storeNodes[i].finalizedEpoch, respNode.FinalizedEpoch)
require.Equal(t, storeNodes[i].unrealizedFinalizedEpoch, respNode.UnrealizedFinalizedEpoch)
require.Equal(t, storeNodes[i].timestamp, respNode.Timestamp)
}
}
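// The loop above compares respNodes entry by entry with the store nodes in insertion order,
// which is consistent with a pre-order walk that emits parents before their children. The
// sketch below is hypothetical (it is not the actual Prysm nodeTreeDump) and uses only the
// v1.ForkChoiceNode fields asserted in the test above; the package's existing context and
// proto imports are assumed.
func (n *Node) dumpSketch(ctx context.Context, out []*v1.ForkChoiceNode) ([]*v1.ForkChoiceNode, error) {
    if ctx.Err() != nil {
        return nil, ctx.Err()
    }
    // Emit this node first so ancestors always precede descendants.
    out = append(out, &v1.ForkChoiceNode{
        Slot:                     n.slot,
        Root:                     n.root[:],
        Balance:                  n.balance,
        Weight:                   n.weight,
        ExecutionOptimistic:      n.optimistic,
        JustifiedEpoch:           n.justifiedEpoch,
        UnrealizedJustifiedEpoch: n.unrealizedJustifiedEpoch,
        FinalizedEpoch:           n.finalizedEpoch,
        UnrealizedFinalizedEpoch: n.unrealizedFinalizedEpoch,
        Timestamp:                n.timestamp,
    })
    // Recurse into children in insertion order.
    for _, child := range n.children {
        var err error
        out, err = child.dumpSketch(ctx, out)
        if err != nil {
            return nil, err
        }
    }
    return out, nil
}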

View File

@@ -389,3 +389,14 @@ func TestSetOptimisticToInvalid_ForkAtMerge_bis(t *testing.T) {
})
require.DeepEqual(t, roots, [][32]byte{{'b'}, {'c'}, {'d'}, {'e'}})
}
func TestSetOptimisticToValid(t *testing.T) {
f := setup(1, 1)
op, err := f.IsOptimistic([32]byte{})
require.NoError(t, err)
require.Equal(t, true, op)
require.NoError(t, f.SetOptimisticToValid(context.Background(), [32]byte{}))
op, err = f.IsOptimistic([32]byte{})
require.NoError(t, err)
require.Equal(t, false, op)
}

View File

@@ -95,8 +95,8 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
if bestDescendant == nil {
bestDescendant = justifiedNode
}
if !bestDescendant.viableForHead(s.justifiedCheckpoint.Epoch, s.finalizedCheckpoint.Epoch) {
currentEpoch := slots.EpochsSinceGenesis(time.Unix(int64(s.genesisTime), 0))
if !bestDescendant.viableForHead(s.justifiedCheckpoint.Epoch, s.finalizedCheckpoint.Epoch, currentEpoch) {
s.allTipsAreInvalid = true
return [32]byte{}, fmt.Errorf("head at slot %d with weight %d is not eligible, finalizedEpoch, justifiedEpoch %d, %d != %d, %d",
bestDescendant.slot, bestDescendant.weight/10e9, bestDescendant.finalizedEpoch, bestDescendant.justifiedEpoch, s.finalizedCheckpoint.Epoch, s.justifiedCheckpoint.Epoch)
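// head() above now derives the wall-clock epoch from the store's genesis time and passes it
// as a third argument to viableForHead. The sketch below is a hypothetical version of such an
// epoch-aware check, written against the Node fields used elsewhere in this diff; the exact
// relaxation rule and any feature-flag gating in Prysm may differ.
func viableForHeadSketch(n *Node, justifiedEpoch, finalizedEpoch, currentEpoch types.Epoch) bool {
    justified := justifiedEpoch == n.justifiedEpoch || justifiedEpoch == 0
    finalized := finalizedEpoch == n.finalizedEpoch || finalizedEpoch == 0
    // Assumed defensive rule: in the epoch right after justification advanced, a tip may not
    // have processed the new checkpoints yet, so fall back to its unrealized epochs.
    if !justified && justifiedEpoch+1 == currentEpoch {
        if n.unrealizedJustifiedEpoch+1 >= currentEpoch && n.unrealizedFinalizedEpoch >= finalizedEpoch {
            justified = true
            finalized = true
        }
    }
    return justified && finalized
}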
@@ -142,6 +142,7 @@ func (s *Store) insert(ctx context.Context,
unrealizedFinalizedEpoch: finalizedEpoch,
optimistic: true,
payloadHash: payloadHash,
timestamp: uint64(time.Now().Unix()),
}
s.nodeByPayload[payloadHash] = n
@@ -174,7 +175,7 @@ func (s *Store) insert(ctx context.Context,
jEpoch := s.justifiedCheckpoint.Epoch
fEpoch := s.finalizedCheckpoint.Epoch
s.checkpointsLock.RUnlock()
if err := s.treeRootNode.updateBestDescendant(ctx, jEpoch, fEpoch); err != nil {
if err := s.treeRootNode.updateBestDescendant(ctx, jEpoch, fEpoch, slots.ToEpoch(currentSlot)); err != nil {
return n, err
}
}

View File

@@ -59,6 +59,7 @@ type Node struct {
weight uint64 // weight of this node: the total balance including children
bestDescendant *Node // bestDescendant node of this node.
optimistic bool // whether the block has been fully validated or not
timestamp uint64 // The timestamp when the node was inserted.
}
// Vote defines an individual validator's vote.

View File

@@ -5,6 +5,7 @@ import (
"testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/testing/require"
@@ -198,31 +199,36 @@ func TestStore_NoDeadLock(t *testing.T) {
// D justifies and comes late.
//
func TestStore_ForkNextEpoch(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnableDefensivePull: true,
})
defer resetCfg()
f := setup(0, 0)
ctx := context.Background()
// Epoch 1 blocks (D does not arrive)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, 92, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 93, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 94, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Epoch 2 blocks
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'c'}, [32]byte{'E'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 96, [32]byte{'e'}, [32]byte{'c'}, [32]byte{'E'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 97, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'g'}, [32]byte{'f'}, [32]byte{'G'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 98, [32]byte{'g'}, [32]byte{'f'}, [32]byte{'G'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 107, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 99, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -234,16 +240,25 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.Equal(t, types.Epoch(0), f.JustifiedCheckpoint().Epoch)
// D arrives late, D is head
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 0, 0)
state, blkRoot, err = prepareForkchoiceState(ctx, 95, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'d'}, 1))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'d'}, 2))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 2}
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
require.Equal(t, types.Epoch(1), f.JustifiedCheckpoint().Epoch)
require.Equal(t, types.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'h'}].weight)
// Set the current epoch to 3 and H's unrealized checkpoint, then check that H becomes head
driftGenesisTime(f, 99, 0)
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'h'}, 2))
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
require.Equal(t, types.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'h'}].weight)
}
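// driftGenesisTime(f, 99, 0) above shifts the store's genesis time so that the wall clock sits
// at slot 99 (epoch 3 with mainnet slot timing), matching the "current epoch to 3" comment. A
// rough sketch of what such a helper does follows; the exact body is assumed rather than copied
// from the test utilities.
func driftGenesisTimeSketch(f *ForkChoice, slot, delay int64) {
    secondsPerSlot := int64(params.BeaconConfig().SecondsPerSlot)
    // Move genesis far enough into the past that the wall clock is `delay` seconds into `slot`.
    f.store.genesisTime = uint64(time.Now().Unix() - slot*secondsPerSlot - delay)
}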

View File

@@ -7,6 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
)
// ForkChoicer represents the full fork choice interface composed of all the sub-interfaces.
@@ -51,7 +52,7 @@ type Getter interface {
ProposerBoost() [fieldparams.RootLength]byte
HasParent(root [32]byte) bool
AncestorRoot(ctx context.Context, root [32]byte, slot types.Slot) ([32]byte, error)
CommonAncestorRoot(ctx context.Context, root1 [32]byte, root2 [32]byte) ([32]byte, error)
CommonAncestor(ctx context.Context, root1 [32]byte, root2 [32]byte) ([32]byte, types.Slot, error)
IsCanonical(root [32]byte) bool
FinalizedCheckpoint() *forkchoicetypes.Checkpoint
FinalizedPayloadBlockHash() [32]byte
@@ -62,6 +63,7 @@ type Getter interface {
NodeCount() int
HighestReceivedBlockSlot() types.Slot
ReceivedBlocksLastEpoch() (uint64, error)
ForkChoiceDump(context.Context) (*v1.ForkChoiceResponse, error)
}
// Setter allows to set forkchoice information

View File

@@ -32,6 +32,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
pmath "github.com/prysmaticlabs/prysm/v3/math"
v1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
@@ -281,49 +282,51 @@ func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot types
}
// CommonAncestor returns the root and slot of the common ancestor of the two block roots r1 and r2.
func (f *ForkChoice) CommonAncestorRoot(ctx context.Context, r1 [32]byte, r2 [32]byte) ([32]byte, error) {
func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byte) ([32]byte, types.Slot, error) {
ctx, span := trace.StartSpan(ctx, "protoArray.CommonAncestorRoot")
defer span.End()
// Do nothing if the two input roots are the same.
if r1 == r2 {
return r1, nil
}
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
i1, ok := f.store.nodesIndices[r1]
if !ok || i1 >= uint64(len(f.store.nodes)) {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
// Do nothing if the two input roots are the same.
if r1 == r2 {
n1 := f.store.nodes[i1]
return r1, n1.slot, nil
}
i2, ok := f.store.nodesIndices[r2]
if !ok || i2 >= uint64(len(f.store.nodes)) {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
for {
if ctx.Err() != nil {
return [32]byte{}, ctx.Err()
return [32]byte{}, 0, ctx.Err()
}
if i1 > i2 {
n1 := f.store.nodes[i1]
i1 = n1.parent
// Reaches the end of the tree and unable to find common ancestor.
if i1 >= uint64(len(f.store.nodes)) {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
} else {
n2 := f.store.nodes[i2]
i2 = n2.parent
// Reaches the end of the tree and unable to find common ancestor.
if i2 >= uint64(len(f.store.nodes)) {
return [32]byte{}, forkchoice.ErrUnknownCommonAncestor
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
}
if i1 == i2 {
n1 := f.store.nodes[i1]
return n1.root, nil
return n1.root, n1.slot, nil
}
}
}
@@ -1079,3 +1082,7 @@ func (f *ForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
}
return count, nil
}
func (*ForkChoice) ForkChoiceDump(_ context.Context) (*v1.ForkChoiceResponse, error) {
return nil, errors.New("ForkChoiceDump is not supported by protoarray")
}
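// With the signature change above, callers receive the ancestor's slot alongside its root, which
// makes reorg depth easy to compute. A hypothetical consumer of the new return values (not part
// of this diff):
func reorgDepthSketch(ctx context.Context, f forkchoice.ForkChoicer, oldHead, newHead [32]byte, oldHeadSlot types.Slot) (types.Slot, error) {
    _, ancestorSlot, err := f.CommonAncestor(ctx, oldHead, newHead)
    if err != nil {
        // Covers forkchoice.ErrUnknownCommonAncestor when either root is missing from the store.
        return 0, err
    }
    // Depth of the abandoned branch: slots between the old head and the fork point.
    return oldHeadSlot - ancestorSlot, nil
}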

View File

@@ -677,73 +677,85 @@ func TestStore_CommonAncestor(t *testing.T) {
r1 [32]byte
r2 [32]byte
wantRoot [32]byte
wantSlot types.Slot
}{
{
name: "Common ancestor between c and b is a",
r1: [32]byte{'c'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between c and d is a",
r1: [32]byte{'c'},
r2: [32]byte{'d'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between c and e is a",
r1: [32]byte{'c'},
r2: [32]byte{'e'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between g and f is c",
r1: [32]byte{'g'},
r2: [32]byte{'f'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between f and h is c",
r1: [32]byte{'f'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between g and h is c",
r1: [32]byte{'g'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between b and h is a",
r1: [32]byte{'b'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between e and h is a",
r1: [32]byte{'e'},
r2: [32]byte{'h'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between i and f is c",
r1: [32]byte{'i'},
r2: [32]byte{'f'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
{
name: "Common ancestor between e and h is a",
r1: [32]byte{'j'},
r2: [32]byte{'g'},
wantRoot: [32]byte{'c'},
wantSlot: 2,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
gotRoot, err := f.CommonAncestorRoot(ctx, tc.r1, tc.r2)
gotRoot, gotSlot, err := f.CommonAncestor(ctx, tc.r1, tc.r2)
require.NoError(t, err)
require.Equal(t, tc.wantRoot, gotRoot)
require.Equal(t, tc.wantSlot, gotSlot)
})
}
@@ -766,52 +778,59 @@ func TestStore_CommonAncestor(t *testing.T) {
r1 [32]byte
r2 [32]byte
wantRoot [32]byte
wantSlot types.Slot
}{
{
name: "Common ancestor between a and b is a",
r1: [32]byte{'a'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between b and d is b",
r1: [32]byte{'d'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'b'},
wantSlot: 1,
},
{
name: "Common ancestor between d and a is a",
r1: [32]byte{'d'},
r2: [32]byte{'a'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
gotRoot, err := f.CommonAncestorRoot(ctx, tc.r1, tc.r2)
gotRoot, gotSlot, err := f.CommonAncestor(ctx, tc.r1, tc.r2)
require.NoError(t, err)
require.Equal(t, tc.wantRoot, gotRoot)
require.Equal(t, tc.wantSlot, gotSlot)
})
}
// Equal inputs should return the same root.
r, err := f.CommonAncestorRoot(ctx, [32]byte{'b'}, [32]byte{'b'})
r, s, err := f.CommonAncestor(ctx, [32]byte{'b'}, [32]byte{'b'})
require.NoError(t, err)
require.Equal(t, [32]byte{'b'}, r)
require.Equal(t, types.Slot(1), s)
// Requesting finalized root (last node) should return the same root.
r, err = f.CommonAncestorRoot(ctx, [32]byte{'a'}, [32]byte{'a'})
r, s, err = f.CommonAncestor(ctx, [32]byte{'a'}, [32]byte{'a'})
require.NoError(t, err)
require.Equal(t, [32]byte{'a'}, r)
require.Equal(t, types.Slot(0), s)
// Requesting unknown root
_, err = f.CommonAncestorRoot(ctx, [32]byte{'a'}, [32]byte{'z'})
_, _, err = f.CommonAncestor(ctx, [32]byte{'a'}, [32]byte{'z'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
_, err = f.CommonAncestorRoot(ctx, [32]byte{'z'}, [32]byte{'a'})
_, _, err = f.CommonAncestor(ctx, [32]byte{'z'}, [32]byte{'a'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
state, blkRoot, err = prepareForkchoiceState(ctx, 100, [32]byte{'y'}, [32]byte{'z'}, [32]byte{}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// broken link
_, err = f.CommonAncestorRoot(ctx, [32]byte{'y'}, [32]byte{'a'})
_, _, err = f.CommonAncestor(ctx, [32]byte{'y'}, [32]byte{'a'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
}

View File

@@ -26,7 +26,6 @@ go_library(
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/deterministic-genesis:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/gateway:go_default_library",

View File

@@ -28,7 +28,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db/slasherkv"
interopcoldstart "github.com/prysmaticlabs/prysm/v3/beacon-chain/deterministic-genesis"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/gateway"
@@ -100,14 +99,12 @@ type BeaconNode struct {
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
forkChoiceStore forkchoice.ForkChoicer
stateGen *stategen.State
collector *bcnodeCollector
slasherBlockHeadersFeed *event.Feed
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
blockchainFlagOpts []blockchain.Option
GenesisInitializer genesis.Initializer
CheckpointInitializer checkpoint.Initializer
}
@@ -229,9 +226,6 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
return nil, err
}
log.Debugln("Starting Fork Choice")
beacon.startForkChoice()
log.Debugln("Registering Blockchain Service")
if err := beacon.registerBlockchainService(); err != nil {
return nil, err
@@ -355,14 +349,6 @@ func (b *BeaconNode) Close() {
close(b.stop)
}
func (b *BeaconNode) startForkChoice() {
if !features.Get().DisableForkchoiceDoublyLinkedTree {
b.forkChoiceStore = doublylinkedtree.New()
} else {
b.forkChoiceStore = protoarray.New()
}
}
func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
baseDir := cliCtx.String(cmd.DataDirFlag.Name)
dbPath := filepath.Join(baseDir, kv.BeaconNodeDbDirName)
@@ -609,13 +595,19 @@ func (b *BeaconNode) registerBlockchainService() error {
blockchain.WithSlashingPool(b.slashingsPool),
blockchain.WithP2PBroadcaster(b.fetchP2P()),
blockchain.WithStateNotifier(b),
blockchain.WithForkChoiceStore(b.forkChoiceStore),
blockchain.WithAttestationService(attService),
blockchain.WithStateGen(b.stateGen),
blockchain.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
blockchain.WithProposerIdsCache(b.proposerIdsCache),
)
if features.Get().DisableForkchoiceDoublyLinkedTree {
opts = append(opts, blockchain.WithForkChoiceStore(protoarray.New()))
} else {
opts = append(opts, blockchain.WithForkChoiceStore(doublylinkedtree.New()))
}
blockchainService, err := blockchain.NewService(b.ctx, opts...)
if err != nil {
return errors.Wrap(err, "could not register blockchain service")
@@ -843,7 +835,7 @@ func (b *BeaconNode) registerRPCService() error {
return b.services.RegisterService(rpcService)
}
func (b *BeaconNode) registerPrometheusService(cliCtx *cli.Context) error {
func (b *BeaconNode) registerPrometheusService(_ *cli.Context) error {
var additionalHandlers []prometheus.Handler
var p *p2p.Service
if err := b.services.FetchService(&p); err != nil {

View File

@@ -6,12 +6,6 @@ datadir: /var/lib/prysm/beacon
# http-web3provider: ETH1 API endpoint, eg. http://localhost:8545 for a local geth service on the default port
http-web3provider: http://localhost:8545
# fallback-web3provider: List of backup ETH1 API endpoints, used if above is not working
# For example:
# fallback-web3provider:
# - https://mainnet.infura.io/v3/YOUR-PROJECT-ID
# - https://eth-mainnet.alchemyapi.io/v2/YOUR-PROJECT-ID
# Optional tuning parameters
# For full list, see https://docs.prylabs.network/docs/prysm-usage/parameters

View File

@@ -50,6 +50,7 @@ func (_ *BeaconEndpointFactory) Paths() []string {
"/eth/v2/debug/beacon/states/{state_id}",
"/eth/v1/debug/beacon/heads",
"/eth/v2/debug/beacon/heads",
"/eth/v1/debug/beacon/forkchoice",
"/eth/v1/config/fork_schedule",
"/eth/v1/config/deposit_contract",
"/eth/v1/config/spec",
@@ -185,6 +186,8 @@ func (_ *BeaconEndpointFactory) Create(path string) (*apimiddleware.Endpoint, er
endpoint.GetResponse = &forkChoiceHeadsResponseJson{}
case "/eth/v2/debug/beacon/heads":
endpoint.GetResponse = &v2ForkChoiceHeadsResponseJson{}
case "/eth/v1/debug/beacon/forkchoice":
endpoint.GetResponse = &forkchoiceResponse{}
case "/eth/v1/config/fork_schedule":
endpoint.GetResponse = &forkScheduleResponseJson{}
case "/eth/v1/config/deposit_contract":

View File

@@ -277,6 +277,18 @@ type submitContributionAndProofsRequestJson struct {
Data []*signedContributionAndProofJson `json:"data"`
}
type forkchoiceResponse struct {
JustifiedCheckpoint *checkpointJson `json:"justified_checkpoint"`
FinalizedCheckpoint *checkpointJson `json:"finalized_checkpoint"`
BestJustifiedCheckpoint *checkpointJson `json:"best_justified_checkpoint"`
UnrealizedJustifiedCheckpoint *checkpointJson `json:"unrealized_justified_checkpoint"`
UnrealizedFinalizedCheckpoint *checkpointJson `json:"unrealized_finalized_checkpoint"`
ProposerBoostRoot string `json:"proposer_boost_root" hex:"true"`
PreviousProposerBoostRoot string `json:"previous_proposer_boost_root" hex:"true"`
HeadRoot string `json:"head_root" hex:"true"`
ForkChoiceNodes []*forkChoiceNodeJson `json:"forkchoice_nodes"`
}
//----------------
// Reusable types.
//----------------
@@ -781,14 +793,28 @@ type validatorRegistrationJson struct {
}
type signedValidatorRegistrationJson struct {
Message validatorRegistrationJson `json:"message"`
Signature string `json:"signature" hex:"true"`
Message *validatorRegistrationJson `json:"message"`
Signature string `json:"signature" hex:"true"`
}
type signedValidatorRegistrationsRequestJson struct {
Registrations []*signedValidatorRegistrationJson `json:"registrations"`
}
type forkChoiceNodeJson struct {
Slot string `json:"slot"`
Root string `json:"root" hex:"true"`
ParentRoot string `json:"parent_root" hex:"true"`
JustifiedEpoch string `json:"justified_epoch"`
FinalizedEpoch string `json:"finalized_epoch"`
UnrealizedJustifiedEpoch string `json:"unrealized_justified_epoch"`
UnrealizedFinalizedEpoch string `json:"unrealized_finalized_epoch"`
Balance string `json:"balance"`
Weight string `json:"weight"`
ExecutionOptimistic bool `json:"execution_optimistic"`
ExecutionPayload string `json:"execution_payload" hex:"true"`
}
//----------------
// SSZ
// ---------------

View File

@@ -23,6 +23,7 @@ go_library(
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/execution:go_default_library",
@@ -82,6 +83,7 @@ go_test(
"//api/grpc:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution/testing:go_default_library",

View File

@@ -722,66 +722,24 @@ func (bs *Server) ListBlockAttestations(ctx context.Context, req *ethpbv1.BlockR
return nil, err
}
_, err = blk.PbPhase0Block()
if err != nil && !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
v1Alpha1Attestations := blk.Block().Body().Attestations()
v1Attestations := make([]*ethpbv1.Attestation, 0, len(v1Alpha1Attestations))
for _, att := range v1Alpha1Attestations {
migratedAtt := migration.V1Alpha1AttestationToV1(att)
v1Attestations = append(v1Attestations, migratedAtt)
}
if err == nil {
v1Blk, err := migration.SignedBeaconBlock(blk)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
return &ethpbv1.BlockAttestationsResponse{
Data: v1Blk.Block.Body.Attestations,
ExecutionOptimistic: false,
}, nil
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get block root: %v", err)
}
altairBlk, err := blk.PbAltairBlock()
if err != nil && !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
isOptimistic, err := bs.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not check if block is optimistic: %v", err)
}
if err == nil {
if altairBlk == nil {
return nil, status.Errorf(codes.Internal, "Nil block")
}
v2Blk, err := migration.V1Alpha1BeaconBlockAltairToV2(altairBlk.Block)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
return &ethpbv1.BlockAttestationsResponse{
Data: v2Blk.Body.Attestations,
ExecutionOptimistic: false,
}, nil
}
bellatrixBlock, err := blk.PbBellatrixBlock()
if err != nil && !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
if err == nil {
if bellatrixBlock == nil {
return nil, status.Errorf(codes.Internal, "Nil block")
}
v2Blk, err := migration.V1Alpha1BeaconBlockBellatrixToV2(bellatrixBlock.Block)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get block root: %v", err)
}
isOptimistic, err := bs.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not check if block is optimistic: %v", err)
}
return &ethpbv1.BlockAttestationsResponse{
Data: v2Blk.Body.Attestations,
ExecutionOptimistic: isOptimistic,
}, nil
}
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
return &ethpbv1.BlockAttestationsResponse{
Data: v1Attestations,
ExecutionOptimistic: isOptimistic,
}, nil
}
func (bs *Server) blockFromBlockID(ctx context.Context, blockId []byte) (interfaces.SignedBeaconBlock, error) {

View File

@@ -1857,8 +1857,11 @@ func TestServer_ListBlockAttestations(t *testing.T) {
v1Block, err := migration.V1Alpha1ToV1SignedBlock(tt.want)
require.NoError(t, err)
if !reflect.DeepEqual(blk.Data, v1Block.Block.Body.Attestations) {
blkAtts := blk.Data
if len(blkAtts) == 0 {
blkAtts = nil
}
if !reflect.DeepEqual(blkAtts, v1Block.Block.Body.Attestations) {
t.Error("Expected attestations to equal")
}
})
@@ -1961,7 +1964,11 @@ func TestServer_ListBlockAttestations(t *testing.T) {
v1Block, err := migration.V1Alpha1BeaconBlockAltairToV2(tt.want.Block)
require.NoError(t, err)
if !reflect.DeepEqual(blk.Data, v1Block.Body.Attestations) {
blkAtts := blk.Data
if len(blkAtts) == 0 {
blkAtts = nil
}
if !reflect.DeepEqual(blkAtts, v1Block.Body.Attestations) {
t.Error("Expected attestations to equal")
}
})
@@ -2064,7 +2071,11 @@ func TestServer_ListBlockAttestations(t *testing.T) {
v1Block, err := migration.V1Alpha1BeaconBlockBellatrixToV2(tt.want.Block)
require.NoError(t, err)
if !reflect.DeepEqual(blk.Data, v1Block.Body.Attestations) {
blkAtts := blk.Data
if len(blkAtts) == 0 {
blkAtts = nil
}
if !reflect.DeepEqual(blkAtts, v1Block.Body.Attestations) {
t.Error("Expected attestations to equal")
}
})

View File

@@ -8,6 +8,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/operation"
corehelpers "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
@@ -164,6 +165,10 @@ func (bs *Server) SubmitAttesterSlashing(ctx context.Context, req *ethpbv1.Attes
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, req.Attestation_1.Data.Slot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots: %v", err)
}
alphaSlashing := migration.V1AttSlashingToV1Alpha1(req)
err = blocks.VerifyAttesterSlashing(ctx, headState, alphaSlashing)
@@ -216,6 +221,10 @@ func (bs *Server) SubmitProposerSlashing(ctx context.Context, req *ethpbv1.Propo
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, req.SignedHeader_1.Message.Slot)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots: %v", err)
}
alphaSlashing := migration.V1ProposerSlashingToV1Alpha1(req)
err = blocks.VerifyProposerSlashing(headState, alphaSlashing)
@@ -269,6 +278,14 @@ func (bs *Server) SubmitVoluntaryExit(ctx context.Context, req *ethpbv1.SignedVo
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head state: %v", err)
}
s, err := slots.EpochStart(req.Message.Epoch)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get epoch from message: %v", err)
}
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, s)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not process slots: %v", err)
}
validator, err := headState.ValidatorAtIndexReadOnly(req.Message.ValidatorIndex)
if err != nil {

View File

@@ -11,6 +11,7 @@ import (
grpcutil "github.com/prysmaticlabs/prysm/v3/api/grpc"
blockchainmock "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
slashingsmock "github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/slashings/mock"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits/mock"
@@ -444,6 +445,80 @@ func TestSubmitAttesterSlashing_Ok(t *testing.T) {
assert.Equal(t, true, broadcaster.BroadcastCalled)
}
func TestSubmitAttesterSlashing_AcrossFork(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
params.OverrideBeaconConfig(config)
bs, keys := util.DeterministicGenesisState(t, 1)
slashing := &ethpbv1.AttesterSlashing{
Attestation_1: &ethpbv1.IndexedAttestation{
AttestingIndices: []uint64{0},
Data: &ethpbv1.AttestationData{
Slot: params.BeaconConfig().SlotsPerEpoch,
Index: 1,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot1"), 32),
Source: &ethpbv1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot1"), 32),
},
Target: &ethpbv1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot1"), 32),
},
},
Signature: make([]byte, 96),
},
Attestation_2: &ethpbv1.IndexedAttestation{
AttestingIndices: []uint64{0},
Data: &ethpbv1.AttestationData{
Slot: params.BeaconConfig().SlotsPerEpoch,
Index: 1,
BeaconBlockRoot: bytesutil.PadTo([]byte("blockroot2"), 32),
Source: &ethpbv1.Checkpoint{
Epoch: 1,
Root: bytesutil.PadTo([]byte("sourceroot2"), 32),
},
Target: &ethpbv1.Checkpoint{
Epoch: 10,
Root: bytesutil.PadTo([]byte("targetroot2"), 32),
},
},
Signature: make([]byte, 96),
},
}
newBs := bs.Copy()
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
for _, att := range []*ethpbv1.IndexedAttestation{slashing.Attestation_1, slashing.Attestation_2} {
sb, err := signing.ComputeDomainAndSign(newBs, att.Data.Target.Epoch, att.Data, params.BeaconConfig().DomainBeaconAttester, keys[0])
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
att.Signature = sig.Marshal()
}
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{},
Broadcaster: broadcaster,
}
_, err = s.SubmitAttesterSlashing(ctx, slashing)
require.NoError(t, err)
pendingSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, bs, true)
require.Equal(t, 1, len(pendingSlashings))
assert.DeepEqual(t, migration.V1AttSlashingToV1Alpha1(slashing), pendingSlashings[0])
assert.Equal(t, true, broadcaster.BroadcastCalled)
}
func TestSubmitAttesterSlashing_InvalidSlashing(t *testing.T) {
ctx := context.Background()
bs, err := util.NewBeaconState()
@@ -551,6 +626,68 @@ func TestSubmitProposerSlashing_Ok(t *testing.T) {
assert.Equal(t, true, broadcaster.BroadcastCalled)
}
func TestSubmitProposerSlashing_AcrossFork(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = 1
params.OverrideBeaconConfig(config)
bs, keys := util.DeterministicGenesisState(t, 1)
slashing := &ethpbv1.ProposerSlashing{
SignedHeader_1: &ethpbv1.SignedBeaconBlockHeader{
Message: &ethpbv1.BeaconBlockHeader{
Slot: params.BeaconConfig().SlotsPerEpoch,
ProposerIndex: 0,
ParentRoot: bytesutil.PadTo([]byte("parentroot1"), 32),
StateRoot: bytesutil.PadTo([]byte("stateroot1"), 32),
BodyRoot: bytesutil.PadTo([]byte("bodyroot1"), 32),
},
Signature: make([]byte, 96),
},
SignedHeader_2: &ethpbv1.SignedBeaconBlockHeader{
Message: &ethpbv1.BeaconBlockHeader{
Slot: params.BeaconConfig().SlotsPerEpoch,
ProposerIndex: 0,
ParentRoot: bytesutil.PadTo([]byte("parentroot2"), 32),
StateRoot: bytesutil.PadTo([]byte("stateroot2"), 32),
BodyRoot: bytesutil.PadTo([]byte("bodyroot2"), 32),
},
Signature: make([]byte, 96),
},
}
newBs := bs.Copy()
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
for _, h := range []*ethpbv1.SignedBeaconBlockHeader{slashing.SignedHeader_1, slashing.SignedHeader_2} {
sb, err := signing.ComputeDomainAndSign(
newBs,
slots.ToEpoch(h.Message.Slot),
h.Message,
params.BeaconConfig().DomainBeaconProposer,
keys[0],
)
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
h.Signature = sig.Marshal()
}
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{},
Broadcaster: broadcaster,
}
_, err = s.SubmitProposerSlashing(ctx, slashing)
require.NoError(t, err)
}
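// The across-fork tests above sign with a copy of the state advanced past the fork boundary,
// because the signing domain depends on the fork that is active at the message's epoch. The
// handlers shown earlier call transition.ProcessSlotsIfPossible before verifying; it is assumed
// to behave roughly like this sketch (the real helper lives in core/transition, and the state
// and types imports are assumed):
func processSlotsIfPossibleSketch(ctx context.Context, st state.BeaconState, slot types.Slot) (state.BeaconState, error) {
    // Only fast-forward when the target slot is ahead of the state; never rewind.
    if st.Slot() >= slot {
        return st, nil
    }
    return transition.ProcessSlots(ctx, st, slot)
}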
func TestSubmitProposerSlashing_InvalidSlashing(t *testing.T) {
ctx := context.Background()
bs, err := util.NewBeaconState()
@@ -630,6 +767,47 @@ func TestSubmitVoluntaryExit_Ok(t *testing.T) {
assert.Equal(t, true, broadcaster.BroadcastCalled)
}
func TestSubmitVoluntaryExit_AcrossFork(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.AltairForkEpoch = params.BeaconConfig().ShardCommitteePeriod + 1
params.OverrideBeaconConfig(config)
bs, keys := util.DeterministicGenesisState(t, 1)
// Satisfy activity time required before exiting.
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().ShardCommitteePeriod))))
exit := &ethpbv1.SignedVoluntaryExit{
Message: &ethpbv1.VoluntaryExit{
Epoch: params.BeaconConfig().ShardCommitteePeriod + 1,
ValidatorIndex: 0,
},
Signature: make([]byte, 96),
}
newBs := bs.Copy()
newBs, err := transition.ProcessSlots(ctx, newBs, params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().ShardCommitteePeriod)+1))
require.NoError(t, err)
sb, err := signing.ComputeDomainAndSign(newBs, exit.Message.Epoch, exit.Message, params.BeaconConfig().DomainVoluntaryExit, keys[0])
require.NoError(t, err)
sig, err := bls.SignatureFromBytes(sb)
require.NoError(t, err)
exit.Signature = sig.Marshal()
broadcaster := &p2pMock.MockBroadcaster{}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
VoluntaryExitsPool: &mock.PoolMock{},
Broadcaster: broadcaster,
}
_, err = s.SubmitVoluntaryExit(ctx, exit)
require.NoError(t, err)
}
func TestSubmitVoluntaryExit_InvalidValidatorIndex(t *testing.T) {
ctx := context.Background()

View File

@@ -31,6 +31,8 @@ go_test(
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -39,6 +41,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
],
)

View File

@@ -140,3 +140,8 @@ func (ds *Server) ListForkChoiceHeadsV2(ctx context.Context, _ *emptypb.Empty) (
return resp, nil
}
// GetForkChoice returns a dump of the fork choice store.
func (ds *Server) GetForkChoice(ctx context.Context, _ *emptypb.Empty) (*ethpbv1.ForkChoiceResponse, error) {
return ds.ForkFetcher.ForkChoicer().ForkChoiceDump(ctx)
}

View File

@@ -4,8 +4,11 @@ import (
"context"
"testing"
"github.com/golang/protobuf/ptypes/empty"
blockchainmock "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
dbTest "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/rpc/testutil"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
@@ -237,3 +240,18 @@ func TestListForkChoiceHeadsV2(t *testing.T) {
}
})
}
func TestServer_GetForkChoice(t *testing.T) {
store := doublylinkedtree.New()
fRoot := [32]byte{'a'}
jRoot := [32]byte{'b'}
fc := &forkchoicetypes.Checkpoint{Epoch: 2, Root: fRoot}
jc := &forkchoicetypes.Checkpoint{Epoch: 3, Root: jRoot}
require.NoError(t, store.UpdateFinalizedCheckpoint(fc))
require.NoError(t, store.UpdateJustifiedCheckpoint(jc))
bs := &Server{ForkFetcher: &blockchainmock.ChainService{ForkChoiceStore: store}}
res, err := bs.GetForkChoice(context.Background(), &empty.Empty{})
require.NoError(t, err)
require.Equal(t, types.Epoch(3), res.JustifiedCheckpoint.Epoch, "Did not get wanted justified epoch")
require.Equal(t, types.Epoch(2), res.FinalizedCheckpoint.Epoch, "Did not get wanted finalized epoch")
}

View File

@@ -16,4 +16,5 @@ type Server struct {
HeadFetcher blockchain.HeadFetcher
StateFetcher statefetcher.Fetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
ForkFetcher blockchain.ForkFetcher
}

View File

@@ -14,6 +14,7 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/p2p:go_default_library",
@@ -53,11 +54,13 @@ go_test(
"//beacon-chain/builder/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/attestations/mock:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
@@ -72,6 +75,7 @@ go_test(
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -86,6 +90,7 @@ go_test(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db/kv"
rpchelpers "github.com/prysmaticlabs/prysm/v3/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
statev1 "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/v1"
@@ -194,11 +195,11 @@ func (vs *Server) GetProposerDuties(ctx context.Context, req *ethpbv1.ProposerDu
// where `epoch` is described as `epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD <= current_epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD + 1`.
//
// Algorithm:
// - Get the last valid epoch. This is the last epoch of the next sync committee period.
// - Get the state for the requested epoch. If it's a future epoch from the current sync committee period
// or an epoch from the next sync committee period, then get the current state.
// - Get the state's current sync committee. If it's an epoch from the next sync committee period, then get the next sync committee.
// - Get duties.
// - Get the last valid epoch. This is the last epoch of the next sync committee period.
// - Get the state for the requested epoch. If it's a future epoch from the current sync committee period
// or an epoch from the next sync committee period, then get the current state.
// - Get the state's current sync committee. If it's an epoch from the next sync committee period, then get the next sync committee.
// - Get duties.
func (vs *Server) GetSyncCommitteeDuties(ctx context.Context, req *ethpbv2.SyncCommitteeDutiesRequest) (*ethpbv2.SyncCommitteeDutiesResponse, error) {
ctx, span := trace.StartSpan(ctx, "validator.GetSyncCommitteeDuties")
defer span.End()
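// The bound quoted in the comment above, epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD <=
// current_epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD + 1, accepts requests up to the last epoch
// of the next sync committee period. A hypothetical restatement of that bound (the real helper,
// syncCommitteeDutiesLastValidEpoch, appears further down in this file):
func lastValidSyncDutyEpochSketch(currentEpoch types.Epoch) types.Epoch {
    period := params.BeaconConfig().EpochsPerSyncCommitteePeriod
    currentPeriod := currentEpoch / period
    // The last epoch of the next period is the first epoch of the period after next, minus one.
    return (currentPeriod+2)*period - 1
}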
@@ -403,7 +404,11 @@ func (vs *Server) ProduceBlockV2SSZ(ctx context.Context, req *ethpbv1.ProduceBlo
// ProduceBlindedBlock requests the beacon node to produce a valid unsigned blinded beacon block,
// which can then be signed by a proposer and submitted.
//
// Pre-Bellatrix, this endpoint will return a regular block.
// Under the following conditions, this endpoint will return an error.
// - The node is syncing or in optimistic mode (after Bellatrix).
// - The builder is not configured (after Bellatrix).
// - The relayer circuit breaker is activated (after Bellatrix).
// - The relayer responded with an error (after Bellatrix).
func (vs *Server) ProduceBlindedBlock(ctx context.Context, req *ethpbv1.ProduceBlockRequest) (*ethpbv2.ProduceBlindedBlockResponse, error) {
ctx, span := trace.StartSpan(ctx, "validator.ProduceBlindedBlock")
defer span.End()
@@ -412,57 +417,76 @@ func (vs *Server) ProduceBlindedBlock(ctx context.Context, req *ethpbv1.ProduceB
// We simply return the error because it's already a gRPC error.
return nil, err
}
v1alpha1req := &ethpbalpha.BlockRequest{
Slot: req.Slot,
RandaoReveal: req.RandaoReveal,
Graffiti: req.Graffiti,
}
v1alpha1resp, err := vs.V1Alpha1Server.GetBeaconBlock(ctx, v1alpha1req)
// Before Bellatrix, return normal block.
if req.Slot < types.Slot(params.BeaconConfig().BellatrixForkEpoch)*params.BeaconConfig().SlotsPerEpoch {
v1alpha1resp, err := vs.V1Alpha1Server.GetBeaconBlock(ctx, v1alpha1req)
if err != nil {
// We simply return err because it's already of a gRPC error type.
return nil, err
}
phase0Block, ok := v1alpha1resp.Block.(*ethpbalpha.GenericBeaconBlock_Phase0)
if ok {
block, err := migration.V1Alpha1ToV1Block(phase0Block.Phase0)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
return &ethpbv2.ProduceBlindedBlockResponse{
Version: ethpbv2.Version_PHASE0,
Data: &ethpbv2.BlindedBeaconBlockContainer{
Block: &ethpbv2.BlindedBeaconBlockContainer_Phase0Block{Phase0Block: block},
},
}, nil
}
altairBlock, ok := v1alpha1resp.Block.(*ethpbalpha.GenericBeaconBlock_Altair)
if ok {
block, err := migration.V1Alpha1BeaconBlockAltairToV2(altairBlock.Altair)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
return &ethpbv2.ProduceBlindedBlockResponse{
Version: ethpbv2.Version_ALTAIR,
Data: &ethpbv2.BlindedBeaconBlockContainer{
Block: &ethpbv2.BlindedBeaconBlockContainer_AltairBlock{AltairBlock: block},
},
}, nil
}
}
// After Bellatrix, return blinded block.
optimistic, err := vs.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
// We simply return err because it's already of a gRPC error type.
return nil, err
return nil, status.Errorf(codes.Internal, "Could not determine if the node is a optimistic node: %v", err)
}
phase0Block, ok := v1alpha1resp.Block.(*ethpbalpha.GenericBeaconBlock_Phase0)
if ok {
block, err := migration.V1Alpha1ToV1Block(phase0Block.Phase0)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
return &ethpbv2.ProduceBlindedBlockResponse{
Version: ethpbv2.Version_PHASE0,
Data: &ethpbv2.BlindedBeaconBlockContainer{
Block: &ethpbv2.BlindedBeaconBlockContainer_Phase0Block{Phase0Block: block},
},
}, nil
if optimistic {
return nil, status.Errorf(codes.Unavailable, "The node is currently optimistic and cannot serve validators")
}
altairBlock, ok := v1alpha1resp.Block.(*ethpbalpha.GenericBeaconBlock_Altair)
if ok {
block, err := migration.V1Alpha1BeaconBlockAltairToV2(altairBlock.Altair)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
return &ethpbv2.ProduceBlindedBlockResponse{
Version: ethpbv2.Version_ALTAIR,
Data: &ethpbv2.BlindedBeaconBlockContainer{
Block: &ethpbv2.BlindedBeaconBlockContainer_AltairBlock{AltairBlock: block},
},
}, nil
altairBlk, err := vs.V1Alpha1Server.BuildAltairBeaconBlock(ctx, v1alpha1req)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
bellatrixBlock, ok := v1alpha1resp.Block.(*ethpbalpha.GenericBeaconBlock_Bellatrix)
if ok {
block, err := migration.V1Alpha1BeaconBlockBellatrixToV2Blinded(bellatrixBlock.Bellatrix)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
return &ethpbv2.ProduceBlindedBlockResponse{
Version: ethpbv2.Version_BELLATRIX,
Data: &ethpbv2.BlindedBeaconBlockContainer{
Block: &ethpbv2.BlindedBeaconBlockContainer_BellatrixBlock{BellatrixBlock: block},
},
}, nil
ok, b, err := vs.V1Alpha1Server.GetAndBuildBlindBlock(ctx, altairBlk)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare blind beacon block: %v", err)
}
return nil, status.Error(codes.InvalidArgument, "Unsupported block type")
if !ok {
return nil, status.Error(codes.Unavailable, "Builder is not available due to miss-config or circuit breaker")
}
blk, err := migration.V1Alpha1BeaconBlockBlindedBellatrixToV2Blinded(b.GetBlindedBellatrix())
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not prepare beacon block: %v", err)
}
return &ethpbv2.ProduceBlindedBlockResponse{
Version: ethpbv2.Version_BELLATRIX,
Data: &ethpbv2.BlindedBeaconBlockContainer{
Block: &ethpbv2.BlindedBeaconBlockContainer_BellatrixBlock{BellatrixBlock: blk},
},
}, nil
}
// ProduceBlindedBlockSSZ requests the beacon node to produce a valid unsigned blinded beacon block,
@@ -546,7 +570,24 @@ func (vs *Server) PrepareBeaconProposer(
defer span.End()
var feeRecipients []common.Address
var validatorIndices []types.ValidatorIndex
for _, recipientContainer := range request.Recipients {
newRecipients := make([]*ethpbv1.PrepareBeaconProposerRequest_FeeRecipientContainer, 0, len(request.Recipients))
for _, r := range request.Recipients {
f, err := vs.V1Alpha1Server.BeaconDB.FeeRecipientByValidatorID(ctx, r.ValidatorIndex)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
newRecipients = append(newRecipients, r)
case err != nil:
return nil, status.Errorf(codes.Internal, "Could not get fee recipient by validator index: %v", err)
default:
}
if common.BytesToAddress(r.FeeRecipient) != f {
newRecipients = append(newRecipients, r)
}
}
if len(newRecipients) == 0 {
return &emptypb.Empty{}, nil
}
for _, recipientContainer := range newRecipients {
recipient := hexutil.Encode(recipientContainer.FeeRecipient)
if !common.IsHexAddress(recipient) {
return nil, status.Errorf(codes.InvalidArgument, fmt.Sprintf("Invalid fee recipient address: %v", recipient))
@@ -994,19 +1035,6 @@ func v1ValidatorStatusToV1Alpha1(valStatus ethpbv1.ValidatorStatus) ethpbalpha.V
}
}
func (vs *Server) v1BeaconBlock(ctx context.Context, req *ethpbv1.ProduceBlockRequest) (*ethpbv1.BeaconBlock, error) {
v1alpha1req := &ethpbalpha.BlockRequest{
Slot: req.Slot,
RandaoReveal: req.RandaoReveal,
Graffiti: req.Graffiti,
}
v1alpha1resp, err := vs.V1Alpha1Server.GetBeaconBlock(ctx, v1alpha1req)
if err != nil {
return nil, err
}
return migration.V1Alpha1ToV1Block(v1alpha1resp.GetPhase0())
}
func syncCommitteeDutiesLastValidEpoch(currentEpoch types.Epoch) types.Epoch {
currentSyncPeriodIndex := currentEpoch / params.BeaconConfig().EpochsPerSyncCommitteePeriod
// Return the last epoch of the next sync committee.

View File

@@ -13,11 +13,13 @@ import (
builderTest "github.com/prysmaticlabs/prysm/v3/beacon-chain/builder/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
coreTime "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
dbutil "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations/mock"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/slashings"
@@ -32,6 +34,7 @@ import (
mockSync "github.com/prysmaticlabs/prysm/v3/beacon-chain/sync/initial-sync/testing"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
@@ -44,6 +47,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/testing/util"
"github.com/prysmaticlabs/prysm/v3/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/protobuf/proto"
)
@@ -1841,7 +1845,7 @@ func TestProduceBlindedBlock(t *testing.T) {
assert.DeepEqual(t, aggregatedSig, blk.Body.SyncAggregate.SyncCommitteeSignature)
})
t.Run("Bellatrix", func(t *testing.T) {
t.Run("Can get blind block from builder service", func(t *testing.T) {
db := dbutil.SetupDB(t)
ctx := context.Background()
@@ -1849,6 +1853,8 @@ func TestProduceBlindedBlock(t *testing.T) {
bc := params.BeaconConfig().Copy()
bc.AltairForkEpoch = types.Epoch(0)
bc.BellatrixForkEpoch = types.Epoch(1)
bc.MaxBuilderConsecutiveMissedSlots = params.BeaconConfig().SlotsPerEpoch + 1
bc.MaxBuilderEpochMissedSlots = params.BeaconConfig().SlotsPerEpoch
params.OverrideBeaconConfig(bc)
beaconState, privKeys := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().SyncCommitteeSize)
@@ -1869,14 +1875,56 @@ func TestProduceBlindedBlock(t *testing.T) {
require.NoError(t, db.SaveState(ctx, beaconState, parentRoot), "Could not save genesis state")
require.NoError(t, db.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
v1Alpha1Server := &v1alpha1validator.Server{
ExecutionEngineCaller: &mockExecution.EngineClient{
ExecutionBlock: &enginev1.ExecutionBlock{
TotalDifficulty: "0x1",
},
fb := util.HydrateSignedBeaconBlockBellatrix(&ethpbalpha.SignedBeaconBlockBellatrix{})
fb.Block.Body.ExecutionPayload.GasLimit = 123
wfb, err := blocks.NewSignedBeaconBlock(fb)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wfb), "Could not save block")
r, err := wfb.Block().HashTreeRoot()
require.NoError(t, err)
sk, err := bls.RandKey()
require.NoError(t, err)
ti := time.Unix(0, 0)
ts, err := slots.ToTime(uint64(ti.Unix()), 33)
require.NoError(t, err)
require.NoError(t, beaconState.SetGenesisTime(uint64(ti.Unix())))
random, err := helpers.RandaoMix(beaconState, coreTime.CurrentEpoch(beaconState))
require.NoError(t, err)
bid := &ethpbalpha.BuilderBid{
Header: &enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: random,
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
BlockNumber: 1,
Timestamp: uint64(ts.Unix()),
},
TimeFetcher: &mockChain.ChainService{},
HeadFetcher: &mockChain.ChainService{State: beaconState, Root: parentRoot[:]},
Pubkey: sk.PublicKey().Marshal(),
Value: bytesutil.PadTo([]byte{1, 2, 3}, 32),
}
d := params.BeaconConfig().DomainApplicationBuilder
domain, err := signing.ComputeDomain(d, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(bid, domain)
require.NoError(t, err)
sBid := &ethpbalpha.SignedBuilderBid{
Message: bid,
Signature: sk.Sign(sr[:]).Marshal(),
}
v1Alpha1Server := &v1alpha1validator.Server{
BeaconDB: db,
ForkFetcher: &mockChain.ChainService{ForkChoiceStore: protoarray.New()},
TimeFetcher: &mockChain.ChainService{
Genesis: ti,
},
HeadFetcher: &mockChain.ChainService{State: beaconState, Root: parentRoot[:], Block: wfb},
OptimisticModeFetcher: &mockChain.ChainService{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
BlockReceiver: &mockChain.ChainService{},
@@ -1891,6 +1939,15 @@ func TestProduceBlindedBlock(t *testing.T) {
StateGen: stategen.New(db),
SyncCommitteePool: synccommittee.NewStore(),
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
BlockBuilder: &builderTest.MockBuilderService{
HasConfigured: true,
Bid: sBid,
},
FinalizationFetcher: &mockChain.ChainService{
FinalizedCheckPoint: &ethpbalpha.Checkpoint{
Root: r[:],
},
},
}
proposerSlashings := make([]*ethpbalpha.ProposerSlashing, params.BeaconConfig().MaxProposerSlashings)
@@ -1948,8 +2005,10 @@ func TestProduceBlindedBlock(t *testing.T) {
require.NoError(t, v1Alpha1Server.SyncCommitteePool.SaveSyncCommitteeContribution(contribution))
v1Server := &Server{
V1Alpha1Server: v1Alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
V1Alpha1Server: v1Alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: &mockChain.ChainService{},
OptimisticModeFetcher: &mockChain.ChainService{},
}
randaoReveal, err := util.RandaoReveal(beaconState, 1, privKeys)
require.NoError(t, err)
@@ -3621,6 +3680,89 @@ func TestPrepareBeaconProposer(t *testing.T) {
})
}
}
func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
hook := logTest.NewGlobal()
db := dbutil.SetupDB(t)
ctx := context.Background()
v1Server := &v1alpha1validator.Server{
BeaconDB: db,
}
proposerServer := &Server{V1Alpha1Server: v1Server}
// New validator
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
req := &ethpbv1.PrepareBeaconProposerRequest{
Recipients: []*ethpbv1.PrepareBeaconProposerRequest_FeeRecipientContainer{
{FeeRecipient: f, ValidatorIndex: 1},
},
}
_, err := proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validator
hook.Reset()
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validator with different fee recipient
hook.Reset()
f = bytesutil.PadTo([]byte{0x01, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
req = &ethpbv1.PrepareBeaconProposerRequest{
Recipients: []*ethpbv1.PrepareBeaconProposerRequest_FeeRecipientContainer{
{FeeRecipient: f, ValidatorIndex: 1},
},
}
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// More than one validator
hook.Reset()
f = bytesutil.PadTo([]byte{0x01, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
req = &ethpbv1.PrepareBeaconProposerRequest{
Recipients: []*ethpbv1.PrepareBeaconProposerRequest_FeeRecipientContainer{
{FeeRecipient: f, ValidatorIndex: 1},
{FeeRecipient: f, ValidatorIndex: 2},
},
}
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validators
hook.Reset()
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses for validator indices")
}
func BenchmarkServer_PrepareBeaconProposer(b *testing.B) {
db := dbutil.SetupDB(b)
ctx := context.Background()
v1Server := &v1alpha1validator.Server{
BeaconDB: db,
}
proposerServer := &Server{V1Alpha1Server: v1Server}
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
recipients := make([]*ethpbv1.PrepareBeaconProposerRequest_FeeRecipientContainer, 0)
for i := 0; i < 10000; i++ {
recipients = append(recipients, &ethpbv1.PrepareBeaconProposerRequest_FeeRecipientContainer{FeeRecipient: f, ValidatorIndex: types.ValidatorIndex(i)})
}
req := &ethpbv1.PrepareBeaconProposerRequest{
Recipients: recipients,
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := proposerServer.PrepareBeaconProposer(ctx, req)
if err != nil {
b.Fatal(err)
}
}
}
func TestServer_SubmitValidatorRegistrations(t *testing.T) {
type args struct {

View File

@@ -64,6 +64,7 @@ go_library(
"//crypto/hash:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//monitoring/tracing:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",

View File

@@ -14,6 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
@@ -81,7 +82,26 @@ func (vs *Server) PrepareBeaconProposer(
defer span.End()
var feeRecipients []common.Address
var validatorIndices []types.ValidatorIndex
for _, recipientContainer := range request.Recipients {
newRecipients := make([]*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer, 0, len(request.Recipients))
for _, r := range request.Recipients {
f, err := vs.BeaconDB.FeeRecipientByValidatorID(ctx, r.ValidatorIndex)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
newRecipients = append(newRecipients, r)
case err != nil:
return nil, status.Errorf(codes.Internal, "Could not get fee recipient by validator index: %v", err)
default:
}
if common.BytesToAddress(r.FeeRecipient) != f {
newRecipients = append(newRecipients, r)
}
}
if len(newRecipients) == 0 {
return &emptypb.Empty{}, nil
}
for _, recipientContainer := range newRecipients {
recipient := hexutil.Encode(recipientContainer.FeeRecipient)
if !common.IsHexAddress(recipient) {
return nil, status.Errorf(codes.InvalidArgument, fmt.Sprintf("Invalid fee recipient address: %v", recipient))

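The hunk above filters out validators whose stored fee recipient already matches the request, so repeated PrepareBeaconProposer calls do not re-process unchanged entries. A minimal, self-contained sketch of that filtering idea, using simplified hypothetical types rather than Prysm's real ones:

package main

import (
	"bytes"
	"fmt"
)

type recipientContainer struct {
	ValidatorIndex uint64
	FeeRecipient   []byte
}

// filterNewRecipients keeps only the entries that are absent from, or differ
// from, the already-stored fee recipients (stored stands in for the DB).
func filterNewRecipients(stored map[uint64][]byte, req []recipientContainer) []recipientContainer {
	out := make([]recipientContainer, 0, len(req))
	for _, r := range req {
		prev, ok := stored[r.ValidatorIndex]
		if !ok || !bytes.Equal(prev, r.FeeRecipient) {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	stored := map[uint64][]byte{1: {0xaa}}
	req := []recipientContainer{
		{ValidatorIndex: 1, FeeRecipient: []byte{0xaa}}, // unchanged, dropped
		{ValidatorIndex: 2, FeeRecipient: []byte{0xbb}}, // new, kept
	}
	fmt.Println(filterNewRecipients(stored, req)) // only validator 2 remains
}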
View File

@@ -15,8 +15,8 @@ import (
"go.opencensus.io/trace"
)
func (vs *Server) buildAltairBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (*ethpb.BeaconBlockAltair, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.buildAltairBeaconBlock")
func (vs *Server) BuildAltairBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (*ethpb.BeaconBlockAltair, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.BuildAltairBeaconBlock")
defer span.End()
blkData, err := vs.buildPhase0BlockData(ctx, req)
if err != nil {
@@ -55,7 +55,7 @@ func (vs *Server) buildAltairBeaconBlock(ctx context.Context, req *ethpb.BlockRe
func (vs *Server) getAltairBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (*ethpb.BeaconBlockAltair, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.getAltairBeaconBlock")
defer span.End()
blk, err := vs.buildAltairBeaconBlock(ctx, req)
blk, err := vs.BuildAltairBeaconBlock(ctx, req)
if err != nil {
return nil, fmt.Errorf("could not build block data: %v", err)
}

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"math/big"
"time"
"github.com/pkg/errors"
@@ -19,6 +20,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
@@ -37,14 +39,14 @@ var builderGetPayloadMissCount = promauto.NewCounter(prometheus.CounterOpts{
const blockBuilderTimeout = 1 * time.Second
func (vs *Server) getBellatrixBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (*ethpb.GenericBeaconBlock, error) {
altairBlk, err := vs.buildAltairBeaconBlock(ctx, req)
altairBlk, err := vs.BuildAltairBeaconBlock(ctx, req)
if err != nil {
return nil, err
}
registered, err := vs.validatorRegistered(ctx, altairBlk.ProposerIndex)
if registered && err == nil {
builderReady, b, err := vs.getAndBuildBlindBlock(ctx, altairBlk)
builderReady, b, err := vs.GetAndBuildBlindBlock(ctx, altairBlk)
if err != nil {
// In the event of an error, the node should fall back to default execution engine for building block.
log.WithError(err).Error("Failed to build a block from external builder, falling " +
@@ -108,6 +110,7 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot types.Sl
if blocks.IsPreBellatrixVersion(b.Version()) {
return nil, nil
}
h, err := b.Block().Body().Execution()
if err != nil {
return nil, err
@@ -120,6 +123,25 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot types.Sl
if err != nil {
return nil, err
}
if bid == nil || bid.Message == nil {
return nil, errors.New("builder returned nil bid")
}
v := bid.Message.Value
if new(big.Int).SetBytes(bytesutil.ReverseByteOrder(v)).String() == "0" {
return nil, errors.New("builder returned header with 0 bid amount")
}
emptyRoot, err := ssz.TransactionsRoot([][]byte{})
if err != nil {
return nil, err
}
if bytesutil.ToBytes32(bid.Message.Header.TransactionsRoot) == emptyRoot {
return nil, errors.New("builder returned header with an empty tx root")
}
if !bytes.Equal(bid.Message.Header.ParentHash, h.BlockHash()) {
return nil, fmt.Errorf("incorrect parent hash %#x != %#x", bid.Message.Header.ParentHash, h.BlockHash())
}
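The added checks above reject degenerate builder bids before the header is used. A hedged, standalone sketch of the zero-value check, assuming the bid value is stored as a little-endian byte slice (which the ReverseByteOrder call suggests):

package main

import (
	"errors"
	"fmt"
	"math/big"
)

// reverseBytes returns a reversed copy of b (little-endian to big-endian).
func reverseBytes(b []byte) []byte {
	out := make([]byte, len(b))
	for i := range b {
		out[len(b)-1-i] = b[i]
	}
	return out
}

// checkBidValue mirrors the "0 bid amount" rejection above.
func checkBidValue(valueLE []byte) error {
	if new(big.Int).SetBytes(reverseBytes(valueLE)).Sign() == 0 {
		return errors.New("builder returned header with 0 bid amount")
	}
	return nil
}

func main() {
	fmt.Println(checkBidValue(make([]byte, 32))) // rejected: all zero bytes
	fmt.Println(checkBidValue([]byte{1}))        // accepted: value is 1
}

The production code compares the decoded value's String() to "0"; Sign() == 0 is the equivalent big.Int idiom used in this sketch.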
@@ -357,10 +379,10 @@ func (vs *Server) circuitBreakBuilder(s types.Slot) (bool, error) {
return false, nil
}
// Get and build blind block from builder network. Returns a boolean status, built block and error.
// GetAndBuildBlindBlock builds a blind block from the builder network. It returns a boolean status, the built block, and an error.
// If the status is false, the builder header block is disallowed.
// This routine is time limited by `blockBuilderTimeout`.
func (vs *Server) getAndBuildBlindBlock(ctx context.Context, b *ethpb.BeaconBlockAltair) (bool, *ethpb.GenericBeaconBlock, error) {
func (vs *Server) GetAndBuildBlindBlock(ctx context.Context, b *ethpb.BeaconBlockAltair) (bool, *ethpb.GenericBeaconBlock, error) {
// No-op: the builder is not defined because the user did not specify a builder URL, so the local EE should be used.
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return false, nil, nil

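GetAndBuildBlindBlock is documented as being time limited by blockBuilderTimeout. A minimal sketch of that pattern using a plain context deadline; the function and duration names here are illustrative, not Prysm's:

package main

import (
	"context"
	"fmt"
	"time"
)

// slowBuilderCall stands in for a builder round trip that may hang.
func slowBuilderCall(ctx context.Context) error {
	select {
	case <-time.After(2 * time.Second): // pretend the builder is slow
		return nil
	case <-ctx.Done():
		return ctx.Err() // deadline exceeded, caller falls back to the local EE
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(slowBuilderCall(ctx)) // context deadline exceeded
}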
View File

@@ -97,6 +97,40 @@ func TestServer_buildHeaderBlock(t *testing.T) {
}
func TestServer_getPayloadHeader(t *testing.T) {
emptyRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
ti, err := slots.ToTime(uint64(time.Now().Unix()), 0)
require.NoError(t, err)
sk, err := bls.RandKey()
require.NoError(t, err)
bid := &ethpb.BuilderBid{
Header: &v1.ExecutionPayloadHeader{
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
ParentHash: params.BeaconConfig().ZeroHash[:],
Timestamp: uint64(ti.Unix()),
},
Pubkey: sk.PublicKey().Marshal(),
Value: bytesutil.PadTo([]byte{1, 2, 3}, 32),
}
d := params.BeaconConfig().DomainApplicationBuilder
domain, err := signing.ComputeDomain(d, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(bid, domain)
require.NoError(t, err)
sBid := &ethpb.SignedBuilderBid{
Message: bid,
Signature: sk.Sign(sr[:]).Marshal(),
}
require.NoError(t, err)
tests := []struct {
name string
head interfaces.SignedBeaconBlock
@@ -131,7 +165,7 @@ func TestServer_getPayloadHeader(t *testing.T) {
err: "can't get header",
},
{
name: "get header correct",
name: "0 bid",
mock: &builderTest.MockBuilderService{
Bid: &ethpb.SignedBuilderBid{
Message: &ethpb.BuilderBid{
@@ -140,7 +174,6 @@ func TestServer_getPayloadHeader(t *testing.T) {
},
},
},
ErrGetHeader: errors.New("can't get header"),
},
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.SignedBeaconBlock {
@@ -149,18 +182,55 @@ func TestServer_getPayloadHeader(t *testing.T) {
return wb
}(),
},
returnedHeader: &v1.ExecutionPayloadHeader{
BlockNumber: 123,
err: "builder returned header with 0 bid amount",
},
{
name: "invalid tx root",
mock: &builderTest.MockBuilderService{
Bid: &ethpb.SignedBuilderBid{
Message: &ethpb.BuilderBid{
Value: []byte{1},
Header: &v1.ExecutionPayloadHeader{
BlockNumber: 123,
TransactionsRoot: emptyRoot[:],
},
},
},
},
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
return wb
}(),
},
err: "builder returned header with an empty tx root",
},
{
name: "can get header",
mock: &builderTest.MockBuilderService{
Bid: sBid,
},
fetcher: &blockchainTest.ChainService{
Block: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockBellatrix())
require.NoError(t, err)
return wb
}(),
},
returnedHeader: bid.Header,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
vs := &Server{BlockBuilder: tc.mock, HeadFetcher: tc.fetcher}
vs := &Server{BlockBuilder: tc.mock, HeadFetcher: tc.fetcher, TimeFetcher: &blockchainTest.ChainService{
Genesis: time.Now(),
}}
h, err := vs.getPayloadHeaderFromBuilder(context.Background(), 0, 0)
if err != nil {
if tc.err != "" {
require.ErrorContains(t, tc.err, err)
} else {
require.NoError(t, err)
require.DeepEqual(t, tc.returnedHeader, h)
}
})
@@ -350,20 +420,20 @@ func TestServer_getAndBuildHeaderBlock(t *testing.T) {
vs := &Server{}
// Nil builder
ready, _, err := vs.getAndBuildBlindBlock(ctx, nil)
ready, _, err := vs.GetAndBuildBlindBlock(ctx, nil)
require.NoError(t, err)
require.Equal(t, false, ready)
// Not configured
vs.BlockBuilder = &builderTest.MockBuilderService{}
ready, _, err = vs.getAndBuildBlindBlock(ctx, nil)
ready, _, err = vs.GetAndBuildBlindBlock(ctx, nil)
require.NoError(t, err)
require.Equal(t, false, ready)
// Block is not ready
vs.BlockBuilder = &builderTest.MockBuilderService{HasConfigured: true}
vs.FinalizationFetcher = &blockchainTest.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{}}
ready, _, err = vs.getAndBuildBlindBlock(ctx, nil)
ready, _, err = vs.GetAndBuildBlindBlock(ctx, nil)
require.NoError(t, err)
require.Equal(t, false, ready)
@@ -380,7 +450,7 @@ func TestServer_getAndBuildHeaderBlock(t *testing.T) {
vs.HeadFetcher = &blockchainTest.ChainService{Block: wb1}
vs.BlockBuilder = &builderTest.MockBuilderService{HasConfigured: true, ErrGetHeader: errors.New("could not get payload")}
vs.ForkFetcher = &blockchainTest.ChainService{ForkChoiceStore: protoarray.New()}
ready, _, err = vs.getAndBuildBlindBlock(ctx, &ethpb.BeaconBlockAltair{})
ready, _, err = vs.GetAndBuildBlindBlock(ctx, &ethpb.BeaconBlockAltair{})
require.ErrorContains(t, "could not get payload", err)
require.Equal(t, false, ready)
@@ -456,7 +526,7 @@ func TestServer_getAndBuildHeaderBlock(t *testing.T) {
vs.BlockBuilder = &builderTest.MockBuilderService{HasConfigured: true, Bid: sBid}
vs.TimeFetcher = &blockchainTest.ChainService{Genesis: time.Now()}
vs.ForkFetcher = &blockchainTest.ChainService{ForkChoiceStore: protoarray.New()}
ready, builtBlk, err := vs.getAndBuildBlindBlock(ctx, altairBlk.Block)
ready, builtBlk, err := vs.GetAndBuildBlindBlock(ctx, altairBlk.Block)
require.NoError(t, err)
require.Equal(t, true, ready)
require.DeepEqual(t, h, builtBlk.GetBlindedBellatrix().Body.ExecutionPayloadHeader)

View File

@@ -2357,6 +2357,84 @@ func TestProposer_PrepareBeaconProposer(t *testing.T) {
}
}
func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
hook := logTest.NewGlobal()
db := dbutil.SetupDB(t)
ctx := context.Background()
proposerServer := &Server{BeaconDB: db}
// New validator
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
req := &ethpb.PrepareBeaconProposerRequest{
Recipients: []*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer{
{FeeRecipient: f, ValidatorIndex: 1},
},
}
_, err := proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validator
hook.Reset()
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validator with different fee recipient
hook.Reset()
f = bytesutil.PadTo([]byte{0x01, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
req = &ethpb.PrepareBeaconProposerRequest{
Recipients: []*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer{
{FeeRecipient: f, ValidatorIndex: 1},
},
}
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// More than one validator
hook.Reset()
f = bytesutil.PadTo([]byte{0x01, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
req = &ethpb.PrepareBeaconProposerRequest{
Recipients: []*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer{
{FeeRecipient: f, ValidatorIndex: 1},
{FeeRecipient: f, ValidatorIndex: 2},
},
}
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validators
hook.Reset()
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses for validator indices")
}
func BenchmarkServer_PrepareBeaconProposer(b *testing.B) {
db := dbutil.SetupDB(b)
ctx := context.Background()
proposerServer := &Server{BeaconDB: db}
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
recipients := make([]*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer, 0)
for i := 0; i < 10000; i++ {
recipients = append(recipients, &ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer{FeeRecipient: f, ValidatorIndex: types.ValidatorIndex(i)})
}
req := &ethpb.PrepareBeaconProposerRequest{
Recipients: recipients,
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := proposerServer.PrepareBeaconProposer(ctx, req)
if err != nil {
b.Fatal(err)
}
}
}
func TestProposer_SubmitValidatorRegistrations(t *testing.T) {
ctx := context.Background()
proposerServer := &Server{}

View File

@@ -30,13 +30,14 @@ var errParticipation = status.Errorf(codes.Internal, "Failed to obtain epoch par
// ValidatorStatus returns the validator status of the current epoch.
// The status response can be one of the following:
// DEPOSITED - validator's deposit has been recognized by Ethereum 1, not yet recognized by Ethereum.
// PENDING - validator is in Ethereum's activation queue.
// ACTIVE - validator is active.
// EXITING - validator has initiated an exit request, or has dropped below the ejection balance and is being kicked out.
// EXITED - validator is no longer validating.
// SLASHING - validator has been kicked out due to meeting a slashing condition.
// UNKNOWN_STATUS - validator does not have a known status in the network.
//
// DEPOSITED - validator's deposit has been recognized by Ethereum 1, not yet recognized by Ethereum.
// PENDING - validator is in Ethereum's activation queue.
// ACTIVE - validator is active.
// EXITING - validator has initiated an exit request, or has dropped below the ejection balance and is being kicked out.
// EXITED - validator is no longer validating.
// SLASHING - validator has been kicked out due to meeting a slashing condition.
// UNKNOWN_STATUS - validator does not have a known status in the network.
func (vs *Server) ValidatorStatus(
ctx context.Context,
req *ethpb.ValidatorStatusRequest,
@@ -363,15 +364,6 @@ func (vs *Server) validatorStatus(
}
}
func (vs *Server) retrieveAfterEpochTransition(ctx context.Context, epoch types.Epoch) (state.BeaconState, error) {
endSlot, err := slots.EpochEnd(epoch)
if err != nil {
return nil, err
}
// replay to first slot of following epoch
return vs.ReplayerBuilder.ReplayerForSlot(endSlot).ReplayToSlot(ctx, endSlot+1)
}
func checkValidatorsAreRecent(headEpoch types.Epoch, req *ethpb.DoppelGangerRequest) (bool, *ethpb.DoppelGangerResponse) {
validatorsAreRecent := true
resp := &ethpb.DoppelGangerResponse{

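The ValidatorStatus doc comment above enumerates the statuses the RPC can report. A standalone illustration of branching on such a status set, using a local mirror type rather than Prysm's generated protobuf enum:

package main

import "fmt"

type validatorStatus int

const (
	deposited validatorStatus = iota
	pending
	active
	exiting
	exited
	slashing
	unknownStatus
)

// describe returns a short human-readable summary for each status.
func describe(s validatorStatus) string {
	switch s {
	case deposited:
		return "deposit seen on the execution layer, not yet processed"
	case pending:
		return "waiting in the activation queue"
	case active:
		return "attesting and eligible to propose"
	case exiting:
		return "exit initiated or being ejected"
	case exited:
		return "no longer validating"
	case slashing:
		return "removed due to a slashing condition"
	default:
		return "status unknown to the network"
	}
}

func main() {
	fmt.Println(describe(active))
}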
View File

@@ -351,6 +351,7 @@ func (s *Service) Start() {
ReplayerBuilder: ch,
},
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
ForkFetcher: s.cfg.ForkFetcher,
}
ethpbv1alpha1.RegisterDebugServer(s.grpcServer, debugServer)
ethpbservice.RegisterBeaconDebugServer(s.grpcServer, debugServerV1)

View File

@@ -253,10 +253,7 @@ func (p *StateProvider) finalizedStateRoot(ctx context.Context) ([]byte, error)
}
func (p *StateProvider) justifiedStateRoot(ctx context.Context) ([]byte, error) {
cp, err := p.BeaconDB.JustifiedCheckpoint(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get justified checkpoint")
}
cp := p.ChainInfoFetcher.CurrentJustifiedCheckpt()
b, err := p.BeaconDB.Block(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
return nil, errors.Wrap(err, "could not get justified block")

View File

@@ -284,10 +284,6 @@ func TestGetStateRoot(t *testing.T) {
blk.Block.Slot = 40
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Epoch: 5,
Root: root[:],
}
// a valid chain is required to save finalized checkpoint.
util.SaveBlock(t, ctx, db, blk)
st, err := util.NewBeaconState()
@@ -295,7 +291,6 @@ func TestGetStateRoot(t *testing.T) {
require.NoError(t, st.SetSlot(1))
// a state is required to save checkpoint
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
p := StateProvider{
BeaconDB: db,

View File

@@ -239,7 +239,8 @@ func (s *State) latestAncestor(ctx context.Context, blockRoot [32]byte) (state.B
// Is the state the genesis state.
parentRoot := bytesutil.ToBytes32(b.Block().ParentRoot())
if parentRoot == params.BeaconConfig().ZeroHash {
return s.beaconDB.GenesisState(ctx)
s, err := s.beaconDB.GenesisState(ctx)
return s, errors.Wrap(err, "could not get genesis state")
}
// Return an error if slot hasn't been covered by checkpoint sync.
@@ -268,12 +269,13 @@ func (s *State) latestAncestor(ctx context.Context, blockRoot [32]byte) (state.B
// Does the state exists in DB.
if s.beaconDB.HasState(ctx, parentRoot) {
return s.beaconDB.State(ctx, parentRoot)
s, err := s.beaconDB.State(ctx, parentRoot)
return s, errors.Wrap(err, "failed to retrieve state from db")
}
b, err = s.beaconDB.Block(ctx, parentRoot)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "failed to retrieve block from db")
}
if b == nil || b.IsNil() {
return nil, errUnknownBlock

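The wrapping added above returns s together with errors.Wrap(err, ...) even on the success path; this is safe because github.com/pkg/errors.Wrap returns nil when the wrapped error is nil. A stdlib-only sketch reproducing that nil passthrough so the snippet runs without extra modules:

package main

import (
	"errors"
	"fmt"
)

// wrap mirrors the nil-passthrough behavior of github.com/pkg/errors.Wrap.
func wrap(err error, msg string) error {
	if err == nil {
		return nil
	}
	return fmt.Errorf("%s: %w", msg, err)
}

// load mimics `return s, errors.Wrap(err, "failed to retrieve state from db")`.
func load(fail bool) (string, error) {
	var err error
	if fail {
		err = errors.New("db closed")
	}
	return "state", wrap(err, "failed to retrieve state from db")
}

func main() {
	_, err := load(false)
	fmt.Println(err) // <nil>
	_, err = load(true)
	fmt.Println(err) // failed to retrieve state from db: db closed
}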
View File

@@ -63,7 +63,6 @@ type chainer interface {
}
type stateReplayer struct {
s state.BeaconState
target types.Slot
method retrievalMethod
chainer chainer

View File

@@ -190,7 +190,7 @@ func (s *Service) writeBlockRangeToStream(ctx context.Context, startSlot, endSlo
continue
}
if chunkErr := s.chunkBlockWriter(stream, b); chunkErr != nil {
log.WithError(chunkErr).Error("Could not send a chunked response")
log.WithError(chunkErr).Debug("Could not send a chunked response")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
tracing.AnnotateError(span, chunkErr)
return chunkErr

View File

@@ -178,6 +178,7 @@ func (s *Service) Start() {
s.processPendingBlocksQueue()
s.processPendingAttsQueue()
s.maintainPeerStatuses()
s.resyncIfBehind()
// Update sync metrics.
async.RunEvery(s.ctx, syncMetricsInterval, s.updateMetrics)

View File

@@ -40,16 +40,6 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
raw, err := s.decodePubsubMessage(msg)
if err != nil {
tracing.AnnotateError(span, err)

View File

@@ -362,6 +362,7 @@ func TestValidateAggregateAndProof_CanValidate(t *testing.T) {
beaconDB: db,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: &mock.ChainService{Genesis: time.Now().Add(-oneEpoch()),
Optimistic: true,
DB: db,
State: beaconState,
ValidAttestation: true,
@@ -697,35 +698,3 @@ func TestValidateAggregateAndProof_RejectWhenAttEpochDoesntEqualTargetEpoch(t *t
assert.NotNil(t, err)
assert.Equal(t, pubsub.ValidationReject, res)
}
func TestValidateAggregateAndProof_Optimistic(t *testing.T) {
p := p2ptest.NewTestP2P(t)
ctx := context.Background()
exit, s := setupValidExit(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mock.ChainService{
State: s,
Optimistic: true,
},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, exit)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(exit)]
m := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateAggregateAndProof(ctx, "", m)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Validation should have ignored the message")
}

View File

@@ -26,16 +26,6 @@ func (s *Service) validateAttesterSlashing(ctx context.Context, pid peer.ID, msg
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
ctx, span := trace.StartSpan(ctx, "sync.validateAttesterSlashing")
defer span.End()

View File

@@ -293,34 +293,3 @@ func TestSeenAttesterSlashingIndices(t *testing.T) {
assert.Equal(t, tc.seen, r.hasSeenAttesterSlashingIndices(tc.checkIndices1, tc.checkIndices2))
}
}
func TestValidateAttesterSlashing_Optimistic(t *testing.T) {
p := p2ptest.NewTestP2P(t)
ctx := context.Background()
slashing, s := setupValidAttesterSlashing(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mock.ChainService{State: s, Optimistic: true},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, slashing)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(slashing)]
msg := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateAttesterSlashing(ctx, "foobar", msg)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Should have ignore this message")
}

View File

@@ -41,16 +41,6 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(ctx context.Context, p
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
ctx, span := trace.StartSpan(ctx, "sync.validateCommitteeIndexBeaconAttestation")
defer span.End()

View File

@@ -4,7 +4,6 @@ import (
"bytes"
"context"
"fmt"
"reflect"
"testing"
"time"
@@ -15,7 +14,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
dbtest "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
p2ptest "github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/testing"
mockSync "github.com/prysmaticlabs/prysm/v3/beacon-chain/sync/initial-sync/testing"
lruwrpr "github.com/prysmaticlabs/prysm/v3/cache/lru"
@@ -23,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/testing/util"
)
@@ -38,6 +35,7 @@ func TestService_validateCommitteeIndexBeaconAttestation(t *testing.T) {
ValidatorsRoot: [32]byte{'A'},
ValidAttestation: true,
DB: db,
Optimistic: true,
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -306,37 +304,6 @@ func TestService_validateCommitteeIndexBeaconAttestation(t *testing.T) {
}
}
func TestServiceValidateCommitteeIndexBeaconAttestation_Optimistic(t *testing.T) {
p := p2ptest.NewTestP2P(t)
ctx := context.Background()
slashing, s := setupValidAttesterSlashing(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mockChain.ChainService{State: s, Optimistic: true},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, slashing)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(slashing)]
msg := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateCommitteeIndexBeaconAttestation(ctx, "foobar", msg)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Should have ignore this message")
}
func TestService_setSeenCommitteeIndicesSlot(t *testing.T) {
chainService := &mockChain.ChainService{
Genesis: time.Now(),

View File

@@ -26,16 +26,6 @@ func (s *Service) validateProposerSlashing(ctx context.Context, pid peer.ID, msg
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
ctx, span := trace.StartSpan(ctx, "sync.validateProposerSlashing")
defer span.End()

View File

@@ -209,33 +209,3 @@ func TestValidateProposerSlashing_Syncing(t *testing.T) {
valid := res == pubsub.ValidationAccept
assert.Equal(t, false, valid, "Did not fail validation")
}
func TestValidateProposerSlashing_Optimistic(t *testing.T) {
p := p2ptest.NewTestP2P(t)
ctx := context.Background()
slashing, s := setupValidProposerSlashing(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mock.ChainService{State: s, Optimistic: true},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, slashing)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(slashing)]
m := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateProposerSlashing(ctx, "", m)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Did not ignore the message")
}

View File

@@ -57,16 +57,6 @@ func (s *Service) validateSyncCommitteeMessage(
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
if msg.Topic == nil {
return pubsub.ValidationReject, errInvalidTopic
}

View File

@@ -1,7 +1,6 @@
package sync
import (
"bytes"
"context"
"fmt"
"reflect"
@@ -562,34 +561,3 @@ func Test_ignoreEmptyCommittee(t *testing.T) {
})
}
}
func TestValidateSyncCommitteeMessage_Optimistic(t *testing.T) {
p := mockp2p.NewTestP2P(t)
ctx := context.Background()
slashing, s := setupValidAttesterSlashing(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mockChain.ChainService{State: s, Optimistic: true},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, slashing)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(slashing)]
msg := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateCommitteeIndexBeaconAttestation(ctx, "foobar", msg)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Should have ignore this message")
}

View File

@@ -52,16 +52,6 @@ func (s *Service) validateSyncContributionAndProof(ctx context.Context, pid peer
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
m, err := s.readSyncContributionMessage(msg)
if err != nil {
tracing.AnnotateError(span, err)

View File

@@ -1,10 +1,8 @@
package sync
import (
"bytes"
"context"
"fmt"
"reflect"
"testing"
"time"
@@ -1029,37 +1027,6 @@ func TestValidateSyncContributionAndProof(t *testing.T) {
}
}
func TestValidateSyncContributionAndProof_Optimistic(t *testing.T) {
p := mockp2p.NewTestP2P(t)
ctx := context.Background()
slashing, s := setupValidAttesterSlashing(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mockChain.ChainService{State: s, Optimistic: true},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, slashing)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(slashing)]
msg := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateCommitteeIndexBeaconAttestation(ctx, "foobar", msg)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Should have ignore this message")
}
func fillUpBlocksAndState(ctx context.Context, t *testing.T, beaconDB db.Database) ([32]byte, []bls.SecretKey) {
gs, keys := util.DeterministicGenesisStateAltair(t, 64)
sCom, err := altair.NextSyncCommittee(ctx, gs)

View File

@@ -29,16 +29,6 @@ func (s *Service) validateVoluntaryExit(ctx context.Context, pid peer.ID, msg *p
return pubsub.ValidationIgnore, nil
}
// We should not attempt to process this message if the node is running in optimistic mode.
// We just ignore in p2p so that the peer is not penalized.
optimistic, err := s.cfg.chain.IsOptimistic(ctx)
if err != nil {
return pubsub.ValidationReject, err
}
if optimistic {
return pubsub.ValidationIgnore, nil
}
ctx, span := trace.StartSpan(ctx, "sync.validateVoluntaryExit")
defer span.End()

View File

@@ -195,35 +195,3 @@ func TestValidateVoluntaryExit_ValidExit_Syncing(t *testing.T) {
valid := res == pubsub.ValidationAccept
assert.Equal(t, false, valid, "Validation should have failed")
}
func TestValidateVoluntaryExit_Optimistic(t *testing.T) {
p := p2ptest.NewTestP2P(t)
ctx := context.Background()
exit, s := setupValidExit(t)
r := &Service{
cfg: &config{
p2p: p,
chain: &mock.ChainService{
State: s,
Optimistic: true,
},
initialSync: &mockSync.Sync{IsSyncing: false},
},
}
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, exit)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(exit)]
m := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateVoluntaryExit(ctx, "", m)
assert.NoError(t, err)
valid := res == pubsub.ValidationIgnore
assert.Equal(t, true, valid, "Validation should have ignored the message")
}

View File

@@ -13,6 +13,7 @@ go_library(
"//cmd/beacon-chain/flags:go_default_library",
"//io/file:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v3/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v3/io/file"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
)
@@ -22,9 +23,11 @@ func FlagOptions(c *cli.Context) ([]execution.Option, error) {
if err != nil {
return nil, errors.Wrap(err, "could not read JWT secret file for authenticating execution API")
}
headers := strings.Split(c.String(flags.ExecutionEngineHeaders.Name), ",")
opts := []execution.Option{
execution.WithHttpEndpoint(endpoint),
execution.WithEth1HeaderRequestLimit(c.Uint64(flags.Eth1HeaderReqLimit.Name)),
execution.WithHeaders(headers),
}
if len(jwtSecret) > 0 {
opts = append(opts, execution.WithHttpEndpointAndJWTSecret(endpoint, jwtSecret))
@@ -60,11 +63,13 @@ func parseJWTSecretFromFile(c *cli.Context) ([]byte, error) {
if len(secret) < 32 {
return nil, errors.New("provided JWT secret should be a hex string of at least 32 bytes")
}
log.Infof("Finished reading JWT secret from %s", jwtSecretFile)
return secret, nil
}
func parseExecutionChainEndpoint(c *cli.Context) (string, error) {
if c.String(flags.ExecutionEngineEndpoint.Name) == "" {
aliasUsed := c.IsSet(flags.HTTPWeb3ProviderFlag.Name)
if c.String(flags.ExecutionEngineEndpoint.Name) == "" && !aliasUsed {
return "", fmt.Errorf(
"you need to specify %s to provide a connection endpoint to an Ethereum execution client "+
"for your Prysm beacon node. This is a requirement for running a node. You can read more about "+
@@ -73,5 +78,12 @@ func parseExecutionChainEndpoint(c *cli.Context) (string, error) {
flags.ExecutionEngineEndpoint.Name,
)
}
// If users only declare the deprecated flag without setting the execution engine
// flag, we fallback to using the deprecated flag value.
if aliasUsed && !c.IsSet(flags.ExecutionEngineEndpoint.Name) {
log.Warnf("The %s flag has been deprecated and will be removed in a future release,"+
"please use the execution endpoint flag instead %s", flags.HTTPWeb3ProviderFlag.Name, flags.ExecutionEngineEndpoint.Name)
return c.String(flags.HTTPWeb3ProviderFlag.Name), nil
}
return c.String(flags.ExecutionEngineEndpoint.Name), nil
}

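FlagOptions above splits the --execution-headers value on commas and hands the pairs to execution.WithHeaders. A hedged sketch of turning that comma-separated key=value format into HTTP headers; Prysm's own parsing may differ, this is illustrative only:

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// parseHeaders converts "key1=value1,key2=value2" into an http.Header,
// skipping malformed entries.
func parseHeaders(raw string) http.Header {
	h := http.Header{}
	for _, pair := range strings.Split(raw, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			continue
		}
		h.Add(strings.TrimSpace(kv[0]), strings.TrimSpace(kv[1]))
	}
	return h
}

func main() {
	fmt.Println(parseHeaders("key1=value1,key2=value2"))
	// map[Key1:[value1] Key2:[value2]]
}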
View File

@@ -32,6 +32,19 @@ var (
Usage: "An execution client http endpoint. Can contain auth header as well in the format",
Value: "http://localhost:8551",
}
// ExecutionEngineHeaders defines a list of HTTP headers to send with all execution client requests.
ExecutionEngineHeaders = &cli.StringFlag{
Name: "execution-headers",
Usage: "A comma separated list of key value pairs to pass as HTTP headers for all execution " +
"client calls. Example: --execution-headers=key1=value1,key2=value2",
}
// Deprecated: HTTPWeb3ProviderFlag is a deprecated flag and is an alias for the ExecutionEngineEndpoint flag.
HTTPWeb3ProviderFlag = &cli.StringFlag{
Name: "http-web3provider",
Usage: "DEPRECATED: A mainchain web3 provider string http endpoint. Can contain auth header as well in the format --http-web3provider=\"https://goerli.infura.io/v3/xxxx,Basic xxx\" for project secret (base64 encoded) and --http-web3provider=\"https://goerli.infura.io/v3/xxxx,Bearer xxx\" for jwt use",
Value: "http://localhost:8551",
Hidden: true,
}
// ExecutionJWTSecretFlag provides a path to a file containing a hex-encoded string representing a 32 byte secret
// used to authenticate with an execution node via HTTP. This is required if using an HTTP connection, otherwise all requests
// to execution nodes for consensus-related calls will fail. This is not required if using an IPC connection.

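The deprecated http-web3provider flag above is kept as a hidden alias, and parseExecutionChainEndpoint falls back to it only when the new flag is unset. A minimal urfave/cli/v2 sketch of that alias-fallback pattern (standalone program, not Prysm's actual wiring; flag names are illustrative):

package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli/v2"
)

var (
	newFlag = &cli.StringFlag{Name: "execution-endpoint", Value: "http://localhost:8551"}
	oldFlag = &cli.StringFlag{Name: "http-web3provider", Hidden: true}
)

// endpoint prefers the new flag; it falls back to the deprecated alias only
// when the alias alone was set on the command line.
func endpoint(c *cli.Context) string {
	if c.IsSet(oldFlag.Name) && !c.IsSet(newFlag.Name) {
		return c.String(oldFlag.Name)
	}
	return c.String(newFlag.Name)
}

func main() {
	app := &cli.App{
		Flags: []cli.Flag{newFlag, oldFlag},
		Action: func(c *cli.Context) error {
			fmt.Println(endpoint(c))
			return nil
		},
	}
	_ = app.Run(os.Args)
}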
View File

@@ -38,6 +38,8 @@ import (
var appFlags = []cli.Flag{
flags.DepositContractFlag,
flags.ExecutionEngineEndpoint,
flags.ExecutionEngineHeaders,
flags.HTTPWeb3ProviderFlag,
flags.ExecutionJWTSecretFlag,
flags.RPCHost,
flags.RPCPort,
@@ -187,6 +189,9 @@ func main() {
if err := cmd.ExpandSingleEndpointIfFile(ctx, flags.ExecutionEngineEndpoint); err != nil {
return err
}
if err := cmd.ExpandSingleEndpointIfFile(ctx, flags.HTTPWeb3ProviderFlag); err != nil {
return err
}
if ctx.IsSet(flags.SetGCPercent.Name) {
runtimeDebug.SetGCPercent(ctx.Int(flags.SetGCPercent.Name))
}

View File

@@ -107,6 +107,8 @@ var appHelpFlagGroups = []flagGroup{
flags.GRPCGatewayPort,
flags.GPRCGatewayCorsDomain,
flags.ExecutionEngineEndpoint,
flags.ExecutionEngineHeaders,
flags.HTTPWeb3ProviderFlag,
flags.ExecutionJWTSecretFlag,
flags.SetGCPercent,
flags.SlotsPerArchivedPoint,

View File

@@ -4,6 +4,7 @@ import (
"bufio"
"fmt"
"os"
"runtime"
"strings"
"github.com/pkg/errors"
@@ -73,12 +74,17 @@ func EnterPassword(confirmPassword bool, pr PasswordReader) (string, error) {
return passphrase, nil
}
// ExpandSingleEndpointIfFile expands the path for --http-web3provider if specified as a file.
// ExpandSingleEndpointIfFile expands the path for --execution-provider if specified as a file.
func ExpandSingleEndpointIfFile(ctx *cli.Context, flag *cli.StringFlag) error {
// Return early if no flag value is set.
if !ctx.IsSet(flag.Name) {
return nil
}
// Return early for non-unix operating systems, as there is
// no shell path expansion for ipc endpoints on windows.
if runtime.GOOS == "windows" {
return nil
}
web3endpoint := ctx.String(flag.Name)
switch {
case strings.HasPrefix(web3endpoint, "http://"):
@@ -96,31 +102,3 @@ func ExpandSingleEndpointIfFile(ctx *cli.Context, flag *cli.StringFlag) error {
}
return nil
}
// ExpandWeb3EndpointsIfFile expands the path for --fallback-web3provider if specified as a file.
func ExpandWeb3EndpointsIfFile(ctx *cli.Context, flags *cli.StringSliceFlag) error {
// Return early if no flag value is set.
if !ctx.IsSet(flags.Name) {
return nil
}
rawFlags := ctx.StringSlice(flags.Name)
for i, rawValue := range rawFlags {
switch {
case strings.HasPrefix(rawValue, "http://"):
case strings.HasPrefix(rawValue, "https://"):
case strings.HasPrefix(rawValue, "ws://"):
case strings.HasPrefix(rawValue, "wss://"):
default:
web3endpoint, err := file.ExpandPath(rawValue)
if err != nil {
return errors.Wrapf(err, "could not expand path for %s", rawValue)
}
// Given that rawFlags is a pointer this will replace the unexpanded path
// with the expanded one. Also there is no easy way to replace the string
// slice flag value compared to other flag types. This is why we resort to
// replacing it like this.
rawFlags[i] = web3endpoint
}
}
return nil
}

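ExpandSingleEndpointIfFile above skips expansion on Windows and for URL schemes. A simplified, standalone sketch of the same idea using only the standard library (Prysm's file.ExpandPath handles more cases than this):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"strings"
)

// expandIfFile leaves URLs and Windows paths alone and expands "~/" or
// relative IPC paths elsewhere. Simplified and illustrative only.
func expandIfFile(endpoint string) (string, error) {
	if runtime.GOOS == "windows" ||
		strings.HasPrefix(endpoint, "http://") ||
		strings.HasPrefix(endpoint, "https://") {
		return endpoint, nil
	}
	if strings.HasPrefix(endpoint, "~/") {
		home, err := os.UserHomeDir()
		if err != nil {
			return "", err
		}
		return filepath.Join(home, endpoint[2:]), nil
	}
	return filepath.Abs(endpoint)
}

func main() {
	p, _ := expandIfFile("./geth.ipc") // hypothetical local IPC path
	fmt.Println(p)
}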
View File

@@ -111,43 +111,3 @@ func TestExpandSingleEndpointIfFile(t *testing.T) {
require.NoError(t, ExpandSingleEndpointIfFile(context, HTTPWeb3ProviderFlag))
require.Equal(t, curentdir+"/path.ipc", context.String(HTTPWeb3ProviderFlag.Name))
}
func TestExpandWeb3EndpointsIfFile(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
HTTPWeb3ProviderFlag := &cli.StringSliceFlag{Name: "fallback-web3provider", Value: cli.NewStringSlice()}
set.Var(cli.NewStringSlice(), HTTPWeb3ProviderFlag.Name, "")
context := cli.NewContext(&app, set, nil)
// with nothing set
require.NoError(t, ExpandWeb3EndpointsIfFile(context, HTTPWeb3ProviderFlag))
require.DeepEqual(t, []string{}, context.StringSlice(HTTPWeb3ProviderFlag.Name))
// with url scheme
require.NoError(t, context.Set(HTTPWeb3ProviderFlag.Name, "http://localhost:8545"))
require.NoError(t, ExpandWeb3EndpointsIfFile(context, HTTPWeb3ProviderFlag))
require.DeepEqual(t, []string{"http://localhost:8545"}, context.StringSlice(HTTPWeb3ProviderFlag.Name))
// reset context
set = flag.NewFlagSet("test", 0)
set.Var(cli.NewStringSlice(), HTTPWeb3ProviderFlag.Name, "")
context = cli.NewContext(&app, set, nil)
// relative user home path
usr, err := user.Current()
require.NoError(t, err)
require.NoError(t, context.Set(HTTPWeb3ProviderFlag.Name, "~/relative/path.ipc"))
require.NoError(t, ExpandWeb3EndpointsIfFile(context, HTTPWeb3ProviderFlag))
require.DeepEqual(t, []string{usr.HomeDir + "/relative/path.ipc"}, context.StringSlice(HTTPWeb3ProviderFlag.Name))
// reset context
set = flag.NewFlagSet("test", 0)
set.Var(cli.NewStringSlice(), HTTPWeb3ProviderFlag.Name, "")
context = cli.NewContext(&app, set, nil)
// current dir path
curentdir, err := os.Getwd()
require.NoError(t, err)
require.NoError(t, context.Set(HTTPWeb3ProviderFlag.Name, "./path.ipc"))
require.NoError(t, ExpandWeb3EndpointsIfFile(context, HTTPWeb3ProviderFlag))
require.DeepEqual(t, []string{curentdir + "/path.ipc"}, context.StringSlice(HTTPWeb3ProviderFlag.Name))
}

View File

@@ -341,9 +341,10 @@ var (
// EnableBuilderFlag enables the periodic validator registration API calls that will update the custom builder with validator settings.
EnableBuilderFlag = &cli.BoolFlag{
Name: "enable-builder",
Usage: "Enables Builder validator registration APIs for the validator client to update settings such as fee recipient and gas limit. Note* this flag is not required if using proposer settings config file",
Value: false,
Name: "enable-builder",
Usage: "Enables Builder validator registration APIs for the validator client to update settings such as fee recipient and gas limit. Note* this flag is not required if using proposer settings config file",
Value: false,
Aliases: []string{"enable-validator-registration"},
}
// BuilderGasLimitFlag defines the gas limit for the builder to use for constructing a payload.

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"edit.go",
"recover.go",
"wallet.go",
],
importpath = "github.com/prysmaticlabs/prysm/v3/cmd/validator/wallet",
@@ -12,6 +13,7 @@ go_library(
"//cmd:go_default_library",
"//cmd/validator/flags:go_default_library",
"//config/features:go_default_library",
"//io/prompt:go_default_library",
"//runtime/tos:go_default_library",
"//validator/accounts:go_default_library",
"//validator/accounts/userprompt:go_default_library",
@@ -20,13 +22,18 @@ go_library(
"//validator/keymanager/remote:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_tyler_smith_go_bip39//:go_default_library",
"@com_github_tyler_smith_go_bip39//wordlists:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["edit_test.go"],
srcs = [
"edit_test.go",
"recover_test.go",
],
embed = [":go_default_library"],
deps = [
"//cmd/validator/flags:go_default_library",
@@ -34,8 +41,10 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//validator/accounts:go_default_library",
"//validator/accounts/iface:go_default_library",
"//validator/accounts/wallet:go_default_library",
"//validator/keymanager:go_default_library",
"//validator/keymanager/derived:go_default_library",
"//validator/keymanager/remote:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],

View File

@@ -0,0 +1,175 @@
package wallet
import (
"fmt"
"os"
"sort"
"strconv"
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/cmd/validator/flags"
"github.com/prysmaticlabs/prysm/v3/io/prompt"
"github.com/prysmaticlabs/prysm/v3/validator/accounts"
"github.com/prysmaticlabs/prysm/v3/validator/accounts/userprompt"
"github.com/prysmaticlabs/prysm/v3/validator/accounts/wallet"
"github.com/tyler-smith/go-bip39"
"github.com/tyler-smith/go-bip39/wordlists"
"github.com/urfave/cli/v2"
)
const (
// #nosec G101 -- Not sensitive data
mnemonicPassphraseYesNoText = "(Advanced) Do you have an optional '25th word' passphrase for your mnemonic? [y/n]"
// #nosec G101 -- Not sensitive data
mnemonicPassphrasePromptText = "(Advanced) Enter the '25th word' passphrase for your mnemonic"
)
func walletRecover(c *cli.Context) error {
mnemonic, err := inputMnemonic(c)
if err != nil {
return errors.Wrap(err, "could not get mnemonic phrase")
}
opts := []accounts.Option{
accounts.WithMnemonic(mnemonic),
}
skipMnemonic25thWord := c.IsSet(flags.SkipMnemonic25thWordCheckFlag.Name)
has25thWordFile := c.IsSet(flags.Mnemonic25thWordFileFlag.Name)
if !skipMnemonic25thWord && !has25thWordFile {
resp, err := prompt.ValidatePrompt(
os.Stdin, mnemonicPassphraseYesNoText, prompt.ValidateYesOrNo,
)
if err != nil {
return errors.Wrap(err, "could not validate choice")
}
if strings.EqualFold(resp, "y") {
mnemonicPassphrase, err := prompt.InputPassword(
c,
flags.Mnemonic25thWordFileFlag,
mnemonicPassphrasePromptText,
"Confirm mnemonic passphrase",
false, /* Should confirm password */
func(input string) error {
if strings.TrimSpace(input) == "" {
return errors.New("input cannot be empty")
}
return nil
},
)
if err != nil {
return err
}
opts = append(opts, accounts.WithMnemonic25thWord(mnemonicPassphrase))
}
}
walletDir, err := userprompt.InputDirectory(c, userprompt.WalletDirPromptText, flags.WalletDirFlag)
if err != nil {
return err
}
walletPassword, err := prompt.InputPassword(
c,
flags.WalletPasswordFileFlag,
wallet.NewWalletPasswordPromptText,
wallet.ConfirmPasswordPromptText,
true, /* Should confirm password */
prompt.ValidatePasswordInput,
)
if err != nil {
return err
}
numAccounts, err := inputNumAccounts(c)
if err != nil {
return errors.Wrap(err, "could not get number of accounts to recover")
}
opts = append(opts, accounts.WithWalletDir(walletDir))
opts = append(opts, accounts.WithWalletPassword(walletPassword))
opts = append(opts, accounts.WithNumAccounts(int(numAccounts)))
acc, err := accounts.NewCLIManager(opts...)
if err != nil {
return err
}
if _, err = acc.WalletRecover(c.Context); err != nil {
return err
}
log.Infof(
"Successfully recovered HD wallet with accounts and saved configuration to disk",
)
return nil
}
func inputMnemonic(cliCtx *cli.Context) (mnemonicPhrase string, err error) {
if cliCtx.IsSet(flags.MnemonicFileFlag.Name) {
mnemonicFilePath := cliCtx.String(flags.MnemonicFileFlag.Name)
data, err := os.ReadFile(mnemonicFilePath) // #nosec G304 -- ReadFile is safe
if err != nil {
return "", err
}
enteredMnemonic := string(data)
if err := accounts.ValidateMnemonic(enteredMnemonic); err != nil {
return "", errors.Wrap(err, "mnemonic phrase did not pass validation")
}
return enteredMnemonic, nil
}
allowedLanguages := map[string][]string{
"chinese_simplified": wordlists.ChineseSimplified,
"chinese_traditional": wordlists.ChineseTraditional,
"czech": wordlists.Czech,
"english": wordlists.English,
"french": wordlists.French,
"japanese": wordlists.Japanese,
"korean": wordlists.Korean,
"italian": wordlists.Italian,
"spanish": wordlists.Spanish,
}
languages := make([]string, 0)
for k := range allowedLanguages {
languages = append(languages, k)
}
sort.Strings(languages)
selectedLanguage, err := prompt.ValidatePrompt(
os.Stdin,
fmt.Sprintf("Enter the language of your seed phrase: %s", strings.Join(languages, ", ")),
func(input string) error {
if _, ok := allowedLanguages[input]; !ok {
return errors.New("input not in the list of allowed languages")
}
return nil
},
)
if err != nil {
return "", fmt.Errorf("could not get mnemonic language: %w", err)
}
bip39.SetWordList(allowedLanguages[selectedLanguage])
mnemonicPhrase, err = prompt.ValidatePrompt(
os.Stdin,
"Enter the seed phrase for the wallet you would like to recover",
accounts.ValidateMnemonic)
if err != nil {
return "", fmt.Errorf("could not get mnemonic phrase: %w", err)
}
return mnemonicPhrase, nil
}
func inputNumAccounts(cliCtx *cli.Context) (int64, error) {
if cliCtx.IsSet(flags.NumAccountsFlag.Name) {
numAccounts := cliCtx.Int64(flags.NumAccountsFlag.Name)
if numAccounts <= 0 {
return 0, errors.New("must recover at least 1 account")
}
return numAccounts, nil
}
numAccounts, err := prompt.ValidatePrompt(os.Stdin, "Enter how many accounts you would like to generate from the mnemonic", prompt.ValidateNumber)
if err != nil {
return 0, err
}
numAccountsInt, err := strconv.Atoi(numAccounts)
if err != nil {
return 0, err
}
if numAccountsInt <= 0 {
return 0, errors.New("must recover at least 1 account")
}
return int64(numAccountsInt), nil
}

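inputMnemonic above selects a BIP-39 wordlist before validating the phrase. A short standalone sketch of the same go-bip39 calls; generation is shown only to produce a phrase to validate, recovery itself only validates:

package main

import (
	"fmt"

	"github.com/tyler-smith/go-bip39"
	"github.com/tyler-smith/go-bip39/wordlists"
)

func main() {
	// Pick the wordlist first, as walletRecover does for the chosen language.
	bip39.SetWordList(wordlists.English)

	entropy, err := bip39.NewEntropy(256) // 256 bits -> 24-word phrase
	if err != nil {
		panic(err)
	}
	mnemonic, err := bip39.NewMnemonic(entropy)
	if err != nil {
		panic(err)
	}
	fmt.Println(mnemonic)
	fmt.Println(bip39.IsMnemonicValid(mnemonic)) // true
}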
View File

@@ -0,0 +1,113 @@
package wallet
import (
"context"
"flag"
"os"
"path/filepath"
"strconv"
"testing"
"github.com/prysmaticlabs/prysm/v3/cmd/validator/flags"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/validator/accounts/iface"
"github.com/prysmaticlabs/prysm/v3/validator/accounts/wallet"
"github.com/prysmaticlabs/prysm/v3/validator/keymanager"
"github.com/prysmaticlabs/prysm/v3/validator/keymanager/derived"
"github.com/urfave/cli/v2"
)
const (
walletDirName = "wallet"
mnemonicFileName = "mnemonic.txt"
mnemonic = "garage car helmet trade salmon embrace market giant movie wet same champion dawn chair shield drill amazing panther accident puzzle garden mosquito kind arena"
)
type recoverCfgStruct struct {
walletDir string
passwordFilePath string
mnemonicFilePath string
numAccounts int64
}
func setupRecoverCfg(t *testing.T) *recoverCfgStruct {
testDir := t.TempDir()
walletDir := filepath.Join(testDir, walletDirName)
passwordFilePath := filepath.Join(testDir, passwordFileName)
require.NoError(t, os.WriteFile(passwordFilePath, []byte(password), os.ModePerm))
mnemonicFilePath := filepath.Join(testDir, mnemonicFileName)
require.NoError(t, os.WriteFile(mnemonicFilePath, []byte(mnemonic), os.ModePerm))
return &recoverCfgStruct{
walletDir: walletDir,
passwordFilePath: passwordFilePath,
mnemonicFilePath: mnemonicFilePath,
}
}
func createRecoverCliCtx(t *testing.T, cfg *recoverCfgStruct) *cli.Context {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.String(flags.WalletDirFlag.Name, cfg.walletDir, "")
set.String(flags.WalletPasswordFileFlag.Name, cfg.passwordFilePath, "")
set.String(flags.KeymanagerKindFlag.Name, keymanager.Derived.String(), "")
set.String(flags.MnemonicFileFlag.Name, cfg.mnemonicFilePath, "")
set.Bool(flags.SkipMnemonic25thWordCheckFlag.Name, true, "")
set.Int64(flags.NumAccountsFlag.Name, cfg.numAccounts, "")
assert.NoError(t, set.Set(flags.SkipMnemonic25thWordCheckFlag.Name, "true"))
assert.NoError(t, set.Set(flags.WalletDirFlag.Name, cfg.walletDir))
assert.NoError(t, set.Set(flags.WalletPasswordFileFlag.Name, cfg.passwordFilePath))
assert.NoError(t, set.Set(flags.KeymanagerKindFlag.Name, keymanager.Derived.String()))
assert.NoError(t, set.Set(flags.MnemonicFileFlag.Name, cfg.mnemonicFilePath))
assert.NoError(t, set.Set(flags.NumAccountsFlag.Name, strconv.Itoa(int(cfg.numAccounts))))
return cli.NewContext(&app, set, nil)
}
func TestRecoverDerivedWallet(t *testing.T) {
cfg := setupRecoverCfg(t)
cfg.numAccounts = 4
cliCtx := createRecoverCliCtx(t, cfg)
require.NoError(t, walletRecover(cliCtx))
ctx := context.Background()
w, err := wallet.OpenWallet(cliCtx.Context, &wallet.Config{
WalletDir: cfg.walletDir,
WalletPassword: password,
})
assert.NoError(t, err)
km, err := w.InitializeKeymanager(cliCtx.Context, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
derivedKM, ok := km.(*derived.Keymanager)
if !ok {
t.Fatal("not a derived keymanager")
}
names, err := derivedKM.ValidatingAccountNames(ctx)
assert.NoError(t, err)
require.Equal(t, len(names), int(cfg.numAccounts))
}
// TestRecoverDerivedWallet_OneAccount is a regression test for the case where only a single account is recovered.
func TestRecoverDerivedWallet_OneAccount(t *testing.T) {
cfg := setupRecoverCfg(t)
cfg.numAccounts = 1
cliCtx := createRecoverCliCtx(t, cfg)
require.NoError(t, walletRecover(cliCtx))
_, err := wallet.OpenWallet(cliCtx.Context, &wallet.Config{
WalletDir: cfg.walletDir,
WalletPassword: password,
})
assert.NoError(t, err)
}
func TestRecoverDerivedWallet_AlreadyExists(t *testing.T) {
cfg := setupRecoverCfg(t)
cfg.numAccounts = 4
cliCtx := createRecoverCliCtx(t, cfg)
require.NoError(t, walletRecover(cliCtx))
// Trying to recover an HD wallet into a directory that already exists should give an error
require.ErrorContains(t, "a wallet already exists at this location", walletRecover(cliCtx))
}

View File

@@ -108,13 +108,13 @@ var Commands = &cli.Command{
if err := cmd.LoadFlagsFromConfig(cliCtx, cliCtx.Command.Flags); err != nil {
return err
}
return tos.VerifyTosAcceptedOrPrompt(cliCtx)
},
Action: func(cliCtx *cli.Context) error {
if err := features.ConfigureValidator(cliCtx); err != nil {
if err := tos.VerifyTosAcceptedOrPrompt(cliCtx); err != nil {
return err
}
if err := accounts.RecoverWalletCli(cliCtx); err != nil {
return features.ConfigureBeaconChain(cliCtx)
},
Action: func(cliCtx *cli.Context) error {
if err := walletRecover(cliCtx); err != nil {
log.WithError(err).Fatal("Could not recover wallet")
}
return nil

View File

@@ -61,11 +61,13 @@ type Flags struct {
EnableSlashingProtectionPruning bool
EnableNativeState bool // EnableNativeState defines whether the beacon state will be represented as a pure Go struct or a Go struct that wraps a proto struct.
DisablePullTips bool // DisablePullTips enables experimental disabling of boundary checks.
DisablePullTips bool // DisablePullTips disables the experimental skipping of boundary checks.
EnableDefensivePull bool // EnableDefensivePull enables experimental back-boundary checks.
EnableVectorizedHTR bool // EnableVectorizedHTR specifies whether the beacon state will use the optimized sha256 routines.
DisableForkchoiceDoublyLinkedTree bool // DisableForkChoiceDoublyLinkedTree specifies whether fork choice store will use a doubly linked tree.
EnableBatchGossipAggregation bool // EnableBatchGossipAggregation specifies whether to further aggregate our gossip batches before verifying them.
EnableOnlyBlindedBeaconBlocks bool // EnableOnlyBlindedBeaconBlocks enables only storing blinded beacon blocks in the DB post-Bellatrix fork.
EnableStartOptimistic bool // EnableStartOptimistic treats every block as optimistic at startup.
// KeystoreImportDebounceInterval specifies the time duration the validator waits to reload new keys if they have
// changed on disk. This feature is for advanced use cases only.
@@ -208,6 +210,11 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(disablePullTips)
cfg.DisablePullTips = true
}
if ctx.Bool(enableDefensivePull.Name) {
logEnabled(enableDefensivePull)
cfg.EnableDefensivePull = true
}
if ctx.Bool(disableVecHTR.Name) {
logEnabled(disableVecHTR)
} else {
@@ -241,6 +248,10 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(EnableOnlyBlindedBeaconBlocks)
cfg.EnableOnlyBlindedBeaconBlocks = true
}
if ctx.Bool(enableStartupOptimistic.Name) {
logEnabled(enableStartupOptimistic)
cfg.EnableStartOptimistic = true
}
Init(cfg)
return nil
}

View File

@@ -12,8 +12,84 @@ var (
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedBackupWebHookFlag = &cli.BoolFlag{
Name: "enable-db-backup-webhook",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedBoltMmapFlag = &cli.StringFlag{
Name: "bolt-mmap-initial-size",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedDisableDiscV5Flag = &cli.BoolFlag{
Name: "disable-discv5",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedDisableAttHistoryCacheFlag = &cli.BoolFlag{
Name: "disable-attesting-history-db-cache",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedEnableVectorizedHtr = &cli.BoolFlag{
Name: "enable-vectorized-htr",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedEnablePeerScorer = &cli.BoolFlag{
Name: "enable-peer-scorer",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedEnableForkchoiceDoublyLinkedTree = &cli.BoolFlag{
Name: "enable-forkchoice-doubly-linked-tree",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedDutyCountdown = &cli.BoolFlag{
Name: "enable-duty-count-down",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedHeadSync = &cli.BoolFlag{
Name: "head-sync",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedGossipBatchAggregation = &cli.BoolFlag{
Name: "enable-gossip-batch-aggregation",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedEnableLargerGossipHistory = &cli.BoolFlag{
Name: "enable-larger-gossip-history",
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedFallbackProvider = &cli.StringFlag{
Name: "fallback-web3provider",
Usage: deprecatedUsage,
Hidden: true,
}
)
// Deprecated flags for both the beacon node and validator client.
var deprecatedFlags = []cli.Flag{
exampleDeprecatedFeatureFlag,
deprecatedBoltMmapFlag,
deprecatedDisableDiscV5Flag,
deprecatedDisableAttHistoryCacheFlag,
deprecatedEnableVectorizedHtr,
deprecatedEnablePeerScorer,
deprecatedEnableForkchoiceDoublyLinkedTree,
deprecatedDutyCountdown,
deprecatedHeadSync,
deprecatedGossipBatchAggregation,
deprecatedEnableLargerGossipHistory,
deprecatedFallbackProvider,
}
var deprecatedBeaconFlags = []cli.Flag{
deprecatedBackupWebHookFlag,
}

View File

@@ -97,6 +97,11 @@ var (
Name: "experimental-enable-boundary-checks",
Usage: "Experimental enable of boundary checks, useful for debugging, may cause bad votes.",
}
enableDefensivePull = &cli.BoolFlag{
Name: "enable-back-pull",
Usage: "Experimental enable of past boundary checks, useful for debugging, may cause bad votes.",
Hidden: true,
}
disableVecHTR = &cli.BoolFlag{
Name: "disable-vectorized-htr",
Usage: "Disables the new go sha256 library which utilizes optimized routines for merkle trees",
@@ -113,6 +118,12 @@ var (
Name: "enable-only-blinded-beacon-blocks",
Usage: "Enables storing only blinded beacon blocks in the database without full execution layer transactions",
}
enableStartupOptimistic = &cli.BoolFlag{
Name: "startup-optimistic",
Usage: "Treats every block as optimistically synced at launch. Use with caution",
Value: false,
Hidden: true,
}
)
// devModeFlags holds list of flags that are set when development mode is on.
@@ -138,7 +149,7 @@ var E2EValidatorFlags = []string{
}
// BeaconChainFlags contains a list of all the feature flags that apply to the beacon-chain client.
var BeaconChainFlags = append(deprecatedFlags, []cli.Flag{
var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []cli.Flag{
devModeFlag,
writeSSZStateTransitionsFlag,
disableGRPCConnectionLogging,
@@ -156,7 +167,9 @@ var BeaconChainFlags = append(deprecatedFlags, []cli.Flag{
disableForkChoiceDoublyLinkedTree,
disableGossipBatchAggregation,
EnableOnlyBlindedBeaconBlocks,
}...)
enableStartupOptimistic,
enableDefensivePull,
}...)...)
// E2EBeaconChainFlags contains a list of the beacon chain feature flags to be tested in E2E.
var E2EBeaconChainFlags = []string{

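BeaconChainFlags above is now built from three slices via nested appends. A tiny standalone sketch of the same pattern with strings, showing that append(a, append(b, c...)...) concatenates a, b, and c in order:

package main

import "fmt"

func main() {
	beaconOnlyDeprecated := []string{"enable-db-backup-webhook"}
	sharedDeprecated := []string{"head-sync", "fallback-web3provider"}
	active := []string{"dev", "enable-only-blinded-beacon-blocks"}

	// append(a, append(b, c...)...) yields a ++ b ++ c.
	all := append(beaconOnlyDeprecated, append(sharedDeprecated, active...)...)
	fmt.Println(len(all), all)
}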
View File

@@ -2,7 +2,7 @@ package params
const (
altairE2EForkEpoch = 6
bellatrixE2EForkEpoch = 8 //nolint:deadcode
bellatrixE2EForkEpoch = 8
)
// E2ETestConfig retrieves the configurations made specifically for E2E testing.

Some files were not shown because too many files have changed in this diff.