Compare commits

...

48 Commits

Author SHA1 Message Date
james-prysm
648ab9f2c2 revert moving validator-exit command (#11575) 2022-10-25 03:52:30 +00:00
terencechain
43a0b4bb16 Retrieve proposer existence in cache with correct parameter (#11567) 2022-10-24 13:20:41 -05:00
int88
57d7207554 fix typo and log error (#11565)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-10-23 22:01:44 +00:00
Potuz
b7b5b28c5a Fix locks in Capella setters (#11569)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-22 23:10:45 +00:00
terencechain
968dc5d1e8 Beacon api: get duties prune payload id cache (#11568)
* Beacon api: get duties prune payload id cache

* Fix complaints and bad test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-22 22:43:57 +00:00
terencechain
c24016f006 Add validator index with Withdrawal pb (#11563)
* Add validator index with Withdrawal pb

* Update BUILD.bazel

* Fix test

* Better tests
2022-10-22 20:54:50 +00:00
Nishant Das
661cbc45ae Vendor Leaky Bucket Implementation (#11560)
* add changes

* fix tests

* change to minute

* remove dep

* remove

* fix tests

* add test for period

* improve

* linter

* build files

* ci

* make it stricter

* fix tests

* fix

* Update beacon-chain/sync/rate_limiter.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-10-20 16:40:13 -05:00
Nishant Das
4bd4d6392d Reduce Block Burst Factor (#11546)
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-10-20 14:20:47 +00:00
Radosław Kapka
5f4c26875b Remove ValidatorCapella protobuf (#11562) 2022-10-19 17:56:50 +00:00
Radosław Kapka
6aab2b2b8d /eth/v1/beacon/blinded_blocks/{block_id} API endpoint (#11538)
* proto + stub

* Add execution_optimistic to SSZ response

* implementation

* proto fix

* api middleware

* tests

* more tests

* switch from Errorf to Error

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-19 15:08:30 +00:00
Radosław Kapka
b7a878d011 Resolve remaining native state tasks (#11561)
* remove ToProto and ToProtoUnsafe wrappers

* TestAppendBeyondIndicesLimit

* change type of genesisValidatorsRoot

* fuzz tests

* check type assertion
2022-10-19 10:37:45 -04:00
Raul Jordan
2ae9f1be9e Prysm Web UI Release v2.0.2 (#11559)
Co-authored-by: james-prysm <james-prysm@users.noreply.github.com>
2022-10-19 03:11:22 +00:00
Radosław Kapka
98b9c9e6c9 Handle unaggregated attestation event (#11558) 2022-10-18 10:34:25 -04:00
james-prysm
df694aad71 prysmctl: validator exit (#11515)
* moving voluntary exit command to prysmctl

* fixing linting

* fixing imports

* removing unused import:

* refactoring and adding a new unit test

* rolling back some changes

* fixing parameters

* fixing tests

* adding to main prysm ctl

* refactoring how wallet function works and ux to validator exit

* adding interop support

* fixing bazel build

* fixing if statement

* fixing beaconcha.in printout

* fixing web3signer bug, missing signing slot info

* fixing deep source issues

* fixing bazel package visibility for prysmctl

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-17 16:04:19 -05:00
terencechain
e8400a0773 Fix complaints and bad test (#11555)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-17 16:20:26 +00:00
Radosław Kapka
dcba27ffbc add server name to address (#11556) 2022-10-17 10:59:17 -04:00
Radosław Kapka
cafe0bd1f8 Capella beacon state (#11141)
* fork

* types

* cloners

* getters

* remove CapellaBlind from fork

* hasher

* setters

* spec params, config tests

* generate ssz

* executionPayloadHeaderCapella

* proto state

* BeaconStateCapella SSZ

* saving state

* configfork

* BUILD files

* fix RealPosition

* fix hasher

* SetLatestExecutionPayloadHeaderCapella

* fix error message

* reduce complexity of saveStatesEfficientInternal

* add latestExecutionPayloadHeaderCapella to minimal state

* halfway done interface

* remove withdrawal methods

* merge setters

* change signatures for v1 and v2

* fixing errors pt. 1

* payload_test fixes

* fix everything

* remove unused func

* fix tests

* state_trie_test improvements

* in progress...

* hasher test

* fix configs

* simplify hashing

* Revert "fix configs"

This reverts commit bcae2825fc.

* remove capella from config test

* unify locking

* review

* hashing

* fixes

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-12 11:39:19 -05:00
Potuz
ce7f042974 Add Withdrawal helpers (#11552)
* Add Withdrawal helpers

* Review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-12 08:44:41 -05:00
int88
974cf51a8c fix some comment and log error (#11550)
* fix some comment and log error

* Update testing/endtoend/component_handler_test.go

* Apply suggestions from code review

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-11 14:07:47 -04:00
Radosław Kapka
eb49404a75 Expose structs from API Middleware (#11547) 2022-10-07 15:45:26 +00:00
Preston Van Loon
6bea17cb54 Update libp2p to support go 1.19 (#11309)
* Update libp2p to support go 1.19

* gaz

* go mod tidy

* Only update the minimum deps

* go mod tidy

* revert .bazelrc

* Update go-libp2p to v0.22.0 and update import paths (#11440)

* Fix import paths

* Fix go-libp2p-peerstore import

* Bazel updates

* fix

* revert some comments changes

* revert some comment stuff

* fix dependency issues

* vendor problematic library

* use your brain

* remove

* tests

Co-authored-by: Marco Munizaga <marco@marcopolo.io>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-10-07 15:24:51 +08:00
Nishant Das
de8e50d8b6 Migrate Historical States In Separate Routine (#11501)
* add changes

* space

* add test

* space

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-10-05 19:11:03 +00:00
Sammy Rosso
8049060119 Add flag to enable logging on rejected gossip message (#11524)
* add option to log rejected gossip message

* fix rejectGossipMessage return

* revert test + fix import

* revert all beaconchain/sync/validate_* files

* log object and message if flag is set

* fix failing build

* remove object field

Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-10-05 15:18:08 +00:00
Krasimir Georgiev
c1446a35c5 fix panic when the rpc client is not initialized (#11528)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Your Name <you@example.com>
2022-10-05 14:02:01 +02:00
james-prysm
f5efde5ccc Keymanager API: fix fee recipient API and add persistence (#11540)
* fixing bug with fee recipient api

* fixing unit tests

* clarifying logs
2022-10-04 17:05:46 +00:00
Nishant Das
cdcb289693 Handle New Agent Version For Lodestar (#11536)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-10-04 14:39:12 +02:00
Nishant Das
5dca874530 Fix Validator Monitor Registration (#11537)
* fix it

* gaz
2022-10-04 12:44:57 +02:00
Nishant Das
885dd2e327 Revert "More efficient way of computing skip slot cache key (#11441)" (#11535)
This reverts commit 0f0d480dbc.
2022-10-04 14:13:11 +08:00
terencechain
0f0d480dbc More efficient way of computing skip slot cache key (#11441)
* More efficient way of computing skip slot cache key

* Gazelle

* Add defensive check

* Fix test setup

* Disable skip slot cache

* Fix rpc tests for dependent root
2022-10-03 13:53:17 -04:00
terencechain
1696208318 Add CLI flag to define execution engine timeout (#11489)
* Add CLI flag to define execution engine timeout

* Rm unused

* Fix import

* Fix lint

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-30 18:05:05 +00:00
terencechain
7f686a7878 Cache proposer slot index for GetProposerDuties (#11521) 2022-09-30 19:19:40 +02:00
Potuz
2817f8e8d6 update head should go even without attestations (#11503)
* update head should go even without attestations

* add unit tests

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-30 13:29:18 +00:00
Radosław Kapka
db76be2f15 Remove state code owners (#11519) 2022-09-30 13:21:06 +00:00
Potuz
ad65c841c4 Add a justified balance getter to forkchoice (#11513)
* init

* unit test

* DeepSource complains

* gaz

* shutting deepsource down

* change var name and use handler type

* Kasey's name suggestion

* fix stategen

* interface signature

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2022-09-30 06:39:07 -03:00
terencechain
7b255f5c9d Don't mark builder status unhealthy if mev-boost status is non 200 (#11506)
* Log error for mev boost / relayer status, not return error

* Add ticker to poll relayer status

* Add ticker to poll relayer status

* Update beacon-chain/builder/service.go

* Rm test

* Check nil before calling status

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-29 20:54:24 +00:00
Sammy Rosso
157b1e373c fix validator logging timeTillDuty as a negative number (#11512)
* fix timeTillDuty log showing a negative number

* Update validator/client/validator.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-09-29 19:48:43 +00:00
terencechain
e3df25f443 Fix attestations update head error logging (#11514) 2022-09-29 12:16:42 -07:00
kasey
805473cb38 Give forkchoice to stategen (#11439)
* add forkchoice to stategen.New, update everywhere

* conflict_1

* Fix proposer_bellatrix test

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2022-09-28 20:10:27 +00:00
terencechain
a54bb19c82 Set builder get payload timeout to 3s (#11413) 2022-09-28 07:55:44 -07:00
terencechain
14908f639e Remove INTERVALS_PER_SLOT from place holder for test (#11502)
* Rm INTERVALS_PER_SLOT from place holder

* Comment
2022-09-26 15:19:04 +00:00
Potuz
77fc45304b Remove protoarray forkchoice (#11455)
* Remove protoarray forkchoice

* exported errors

* fix spectests

* fix tests

* conflict 1

* Preston's review

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-09-26 14:45:21 +00:00
terencechain
722c7fb034 Update consensus layer badge to v1.2.0 (#11492) 2022-09-26 00:27:54 +00:00
Potuz
97ef8dde4d Fix sync tests and update to 1.2.0 (#11498)
* Fix sync tests and update to 1.2.0

* fix repo sha
2022-09-25 23:48:14 +00:00
terencechain
211c5c2c5c Beacon api: produce block should skip mev-boost (#11488)
* Beacon api: produce block should skip mev-boost

* Update comments

* Additional test and comments

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_bellatrix.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Fix suggestion

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2022-09-23 18:45:26 +00:00
Yier
2ea66a8467 Remove unwanted wrapper of GRPC status error (#11486)
Co-authored-by: prestonvanloon <preston@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-09-23 14:37:30 +00:00
terencechain
7720d98764 Beacon api: properly submit blinded block (#11483)
* Beacon api: properly submit blinded block

* Gazelle

* Fix SubmitBlockSSZ

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/blocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-09-22 20:13:43 +00:00
james-prysm
20e99fd1f9 Improvement to Fee Recipient UX (#11307)
* updating mock

* adding new internal api

* adding generated code

* converting validator index to pubkey

* adding new API endpoint

* fixing mock related new API

* fixing unit tests

* preventing failure on startup, replacing with warnings

* improving log message

* changing warn log to error log

* fixing error formatting

* improve log on beacon node side on startup

* fixing deepsource issue

* improving logs

* fixing unit tests

* fixing more tests

* adding error check

* adding in new case for fee recipient to avoid conflicts on changing beacon node suggested fee recipient

* adding default fee recipient check for gas limit as well

* adding improved unit tests

* accidently checked in tmp folder

* adding more unit tests

* fixing gas limit unit test

* Update validator/rpc/standard_api_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/node/config.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/node/config.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update proto/prysm/v1alpha1/validator.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing comments

* updating proto generated files

* fixing linting and addressign review comments

* fixing unit test

* improve error handling

* accidently added tmp folder

* improving function error returns

* realizing i am wrapping error incorrectly

* fixing unit test

* addressing review comment

* fixing linting

* fixing unit test

* improving ux around enable builder

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-09-22 11:35:35 -05:00
Nishant Das
61f6b27548 Change Default Timeout To 30 Seconds (#11487) 2022-09-22 10:43:53 +00:00
417 changed files with 14114 additions and 12777 deletions

.github/CODEOWNERS vendored
View File

@@ -5,12 +5,4 @@
*.bzl @prestonvanloon
# Anyone on prylabs team can approve dependency updates.
deps.bzl @prysmaticlabs/core-team
# Radek and Nishant are responsible for changes that can affect the native state feature.
# See https://www.notion.so/prysmaticlabs/Native-Beacon-State-Redesign-6cc9744b4ec1439bb34fa829b36a35c1
/beacon-chain/state/fieldtrie/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v1/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v2/ @rkapka @nisdas @rauljordan
/beacon-chain/state/v3/ @rkapka @nisdas @rauljordan
/beacon-chain/state/state-native/ @rkapka @nisdas @rauljordan
deps.bzl @prysmaticlabs/core-team

View File

@@ -2,7 +2,7 @@
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.2.0-rc.3](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0.rc.3-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0-rc.3)
[![Consensus_Spec_Version 1.2.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.2.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.2.0)
[![Execution_API_Version 1.0.0-beta.1](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.1-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.1/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/CTYGPUJ)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)

View File

@@ -88,10 +88,10 @@ http_archive(
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "16e9fca53ed6bd4ff4ad76facc9b7b651a89db1689a2877d6fd7b82aa824e366",
sha256 = "099a9fb96a376ccbbb7d291ed4ecbdfd42f6bc822ab77ae6f1b5cb9e914e94fa",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.34.0/rules_go-v0.34.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.34.0/rules_go-v0.34.0.zip",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip",
],
)
@@ -215,7 +215,7 @@ filegroup(
url = "https://github.com/eth-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
)
consensus_spec_version = "v1.2.0-rc.3"
consensus_spec_version = "v1.2.0"
bls_test_version = "v0.1.1"
@@ -231,7 +231,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "18ca21497f41042cdbe60e2333b100d218b2994fb514964b9deb23daf615a12f",
sha256 = "eded065f923a99b78372d6f748c9b3f1de8229f8f574c1fec9c5fe76c8affb65",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -247,7 +247,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "47b8f6fabe39b4a69f13054ba74e26ab51581ddbd359c18cf0f03317474e299c",
sha256 = "2ed83783129e93360f4bf9d5d5f606ee28adbe8b458acdfac61b8d99218d16a9",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -263,7 +263,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "a061efc05429b169393c32dc2633a948269461b0fe681f11d41e170a880dcc71",
sha256 = "f5eff2adac78c99a4180491f373328465263caa2cba0206308a7c598abf76cda",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "753d51c6a6cc6df101c897e4bea77f73b271f50aeda74440f412514d4bd88a86",
sha256 = "f1a33b7459391716defa4c2b6f0c1bd7ccc38471ce9126d752d3bad767bebf2b",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -342,9 +342,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "e0c0b5dc609b3a221e74c720f483c595441f2ad5e38bb8aa3522636039945a6f",
sha256 = "b2226874526805d64c29e5053fa28e511b57c0860585d6d59777ee81ff4859ca",
urls = [
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v2.0.1/prysm-web-ui.tar.gz",
"https://github.com/prysmaticlabs/prysm-web-ui/releases/download/v2.0.2/prysm-web-ui.tar.gz",
],
)

View File

@@ -10,6 +10,7 @@ import (
"net/http"
"net/url"
"text/template"
"time"
"github.com/pkg/errors"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -31,6 +32,7 @@ const (
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
var submitBlindedBlockTimeout = 3 * time.Second
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
@@ -252,7 +254,11 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb *ethpb.SignedBlinded
if err != nil {
return nil, errors.Wrap(err, "error encoding the SignedBlindedBeaconBlockBellatrix value body in SubmitBlindedBlock")
}
ctx, cancel := context.WithTimeout(ctx, submitBlindedBlockTimeout)
defer cancel()
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body))
if err != nil {
return nil, errors.Wrap(err, "error posting the SignedBlindedBeaconBlockBellatrix to the builder api")
}
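
For context, a minimal sketch of the timeout pattern this hunk adds to the builder client, assuming only the standard library; callWithTimeout and doRequest are hypothetical names standing in for the method and for c.do:

import (
    "context"
    "time"
)

// callWithTimeout bounds an outbound call with a context deadline (3s here,
// mirroring submitBlindedBlockTimeout above) so a stalled builder endpoint
// cannot hold up block proposal indefinitely.
func callWithTimeout(ctx context.Context, doRequest func(context.Context) error) error {
    ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel() // release the timer even if doRequest returns early
    return doRequest(ctx)
}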

View File

@@ -51,7 +51,6 @@ go_library(
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",

View File

@@ -5,10 +5,10 @@ import (
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -314,7 +314,7 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
if err == nil {
return optimistic, nil
}
if err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
if !errors.Is(err, doublylinkedtree.ErrNilNode) {
return true, err
}
// If forkchoice does not have the headroot, then the node is considered
@@ -338,7 +338,7 @@ func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool,
if err == nil {
return optimistic, nil
}
if err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
if !errors.Is(err, doublylinkedtree.ErrNilNode) {
return false, err
}
// if the requested root is the headroot and the root is not found in
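
A note on the rewritten error check above: with protoarray removed, only the doubly-linked-tree sentinel is left, and the switch from a plain != comparison to errors.Is also matters because errors.Is walks the wrap chain (errors.Wrap, fmt.Errorf with %w), while != only matches the exact sentinel value. A minimal sketch under that assumption, where lookupOptimistic is a hypothetical call that may return doublylinkedtree.ErrNilNode, possibly wrapped:

optimistic, err := lookupOptimistic(root) // hypothetical forkchoice lookup
if err == nil {
    return optimistic, nil
}
if !errors.Is(err, doublylinkedtree.ErrNilNode) {
    return false, err // a genuine failure, not just "node unknown to forkchoice"
}
// ErrNilNode (even when wrapped) falls through to the slower database path below.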

View File

@@ -5,6 +5,7 @@ import (
"testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
@@ -32,7 +33,7 @@ func TestHeadSlot_DataRace(t *testing.T) {
func TestHeadRoot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{root: [32]byte{'A'}},
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
@@ -54,7 +55,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
wsb, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}})
require.NoError(t, err)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{block: wsb},
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
@@ -74,7 +75,7 @@ func TestHeadBlock_DataRace(t *testing.T) {
func TestHeadState_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
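
The recurring edit in these tests is the stategen constructor: it now receives the forkchoice store (see the "Give forkchoice to stategen" commit above), so stategen and the blockchain service share a single doubly-linked-tree instance. A minimal wiring sketch using only the names visible in this diff:

// One forkchoice store, handed to both stategen and the service options.
fcs := doublylinkedtree.New()
sg := stategen.New(beaconDB, fcs)
opts := []Option{
    WithDatabase(beaconDB),
    WithForkChoiceStore(fcs),
    WithStateGen(sg),
}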

View File

@@ -7,7 +7,6 @@ import (
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
@@ -85,7 +84,7 @@ func TestFinalizedCheckpt_GenesisRootOk(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -110,7 +109,7 @@ func TestCurrentJustifiedCheckpt_CanRetrieve(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -137,7 +136,7 @@ func TestHeadRoot_CanRetrieve(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -156,7 +155,7 @@ func TestHeadRoot_UseDB(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(fcs),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -199,7 +198,7 @@ func TestHeadState_CanRetrieve(t *testing.T) {
c.head = &head{state: s}
headState, err := c.HeadState(context.Background())
require.NoError(t, err)
assert.DeepEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Incorrect head state received")
assert.DeepEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Incorrect head state received")
}
func TestGenesisTime_CanRetrieve(t *testing.T) {
@@ -307,7 +306,7 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
}
func TestService_ChainHeads_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -337,7 +336,7 @@ func TestService_ChainHeads_ProtoArray(t *testing.T) {
// \ ---------- E
// ---------- D
func TestService_ChainHeads_DoublyLinkedTree(t *testing.T) {
func TestService_ChainHeads(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -429,29 +428,7 @@ func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
}
func TestService_IsOptimistic_ProtoArray(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimistic_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimistic(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.BellatrixForkEpoch = 0
@@ -481,24 +458,7 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
require.Equal(t, false, opt)
}
func TestService_IsOptimisticForRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
opt, err := c.IsOptimisticForRoot(ctx, [32]byte{'a'})
require.NoError(t, err)
require.Equal(t, true, opt)
}
func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -515,66 +475,7 @@ func TestService_IsOptimisticForRoot_DoublyLinkedTree(t *testing.T) {
require.Equal(t, true, opt)
}
func TestService_IsOptimisticForRoot_DB_ProtoArray(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: protoarray.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
_, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.ErrorContains(t, "nil summary returned from the DB", err)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: validatedRoot[:],
}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, validatedRoot))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, cp))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, false, validated)
// Before the first finalized epoch, finalized root could be zeros.
validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
}
func TestService_IsOptimisticForRoot_DB_DoublyLinkedTree(t *testing.T) {
func TestService_IsOptimisticForRoot_DB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{slot: 101, root: [32]byte{'b'}}}

View File

@@ -183,7 +183,7 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, postStateVersion int,
postStateHeader *enginev1.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) (bool, error) {
postStateHeader interfaces.ExecutionData, blk interfaces.SignedBeaconBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()

View File

@@ -13,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
bstate "github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
@@ -43,7 +42,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
@@ -198,151 +197,6 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}
}
//
//
// A <- B <- C <- D
// \
// ---------- E <- F
// \
// ------ G
// D is the current head, attestations for F and G come late, both are invalid.
// We switch recursively to F then G and finally to D.
//
// We test:
// 1. forkchoice removes blocks F and G from the forkchoice implementation
// 2. forkchoice removes the weights of these blocks
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive_Protoarray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
// Prepare blocks
ba := util.NewBeaconBlockBellatrix()
ba.Block.Body.ExecutionPayload.BlockNumber = 1
wba := util.SaveBlock(t, ctx, beaconDB, ba)
bra, err := wba.Block().HashTreeRoot()
require.NoError(t, err)
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.ExecutionPayload.BlockNumber = 2
wbb := util.SaveBlock(t, ctx, beaconDB, bb)
brb, err := wbb.Block().HashTreeRoot()
require.NoError(t, err)
bc := util.NewBeaconBlockBellatrix()
bc.Block.Body.ExecutionPayload.BlockNumber = 3
wbc := util.SaveBlock(t, ctx, beaconDB, bc)
brc, err := wbc.Block().HashTreeRoot()
require.NoError(t, err)
bd := util.NewBeaconBlockBellatrix()
pd := [32]byte{'D'}
bd.Block.Body.ExecutionPayload.BlockHash = pd[:]
bd.Block.Body.ExecutionPayload.BlockNumber = 4
wbd := util.SaveBlock(t, ctx, beaconDB, bd)
brd, err := wbd.Block().HashTreeRoot()
require.NoError(t, err)
be := util.NewBeaconBlockBellatrix()
pe := [32]byte{'E'}
be.Block.Body.ExecutionPayload.BlockHash = pe[:]
be.Block.Body.ExecutionPayload.BlockNumber = 5
wbe := util.SaveBlock(t, ctx, beaconDB, be)
bre, err := wbe.Block().HashTreeRoot()
require.NoError(t, err)
bf := util.NewBeaconBlockBellatrix()
pf := [32]byte{'F'}
bf.Block.Body.ExecutionPayload.BlockHash = pf[:]
bf.Block.Body.ExecutionPayload.BlockNumber = 6
bf.Block.ParentRoot = bre[:]
wbf := util.SaveBlock(t, ctx, beaconDB, bf)
brf, err := wbf.Block().HashTreeRoot()
require.NoError(t, err)
bg := util.NewBeaconBlockBellatrix()
bg.Block.Body.ExecutionPayload.BlockNumber = 7
pg := [32]byte{'G'}
bg.Block.Body.ExecutionPayload.BlockHash = pg[:]
bg.Block.ParentRoot = bre[:]
wbg := util.SaveBlock(t, ctx, beaconDB, bg)
brg, err := wbg.Block().HashTreeRoot()
require.NoError(t, err)
// Insert blocks into forkchoice
service := setupBeaconChain(t, beaconDB)
fcs := protoarray.New()
service.cfg.ForkChoiceStore = fcs
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
service.justifiedBalances.balances = []uint64{50, 100, 200}
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
// Insert Attestations to D, F and G so that they have higher weight than D
// Ensure G is head
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(jc))
headRoot, err := fcs.Head(ctx, []uint64{50, 100, 200})
require.NoError(t, err)
require.Equal(t, brg, headRoot)
// Prepare Engine Mock to return invalid unless head is D, LVH = E
service.cfg.ExecutionEngineCaller = &mockExecution.EngineClient{ErrForkchoiceUpdated: execution.ErrInvalidPayloadStatus, ForkChoiceUpdatedResp: pe[:], OverrideValidHash: [32]byte{'D'}}
st, _ := util.DeterministicGenesisState(t, 1)
service.head = &head{
state: st,
block: wba,
}
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
headState: st,
headBlock: wbg.Block(),
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.Equal(t, true, IsInvalidBlock(err))
require.Equal(t, brf, InvalidBlockRoot(err))
// Ensure Head is D
headRoot, err = fcs.Head(ctx, service.justifiedBalances.balances)
require.NoError(t, err)
require.Equal(t, brd, headRoot)
// Ensure F and G where removed but their parent E wasn't
require.Equal(t, false, fcs.HasNode(brf))
require.Equal(t, false, fcs.HasNode(brg))
require.Equal(t, true, fcs.HasNode(bre))
}
func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -577,7 +431,7 @@ func Test_NotifyNewPayload(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
phase0State, _ := util.DeterministicGenesisState(t, 1)
@@ -823,7 +677,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
@@ -866,7 +720,7 @@ func Test_GetPayloadAttribute(t *testing.T) {
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
@@ -908,8 +762,8 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
stateGen := stategen.New(beaconDB)
fcs := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fcs)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stateGen),
@@ -1022,10 +876,11 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
func TestService_removeInvalidBlockAndState(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB, fc)),
WithForkChoiceStore(fc),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -1074,10 +929,11 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
func TestService_getPayloadHash(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB, fc)),
WithForkChoiceStore(fc),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)

View File

@@ -7,6 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
dbtest "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -19,7 +20,7 @@ func TestService_headSyncCommitteeFetcher_Errors(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
c := &Service{
cfg: &config{
StateGen: stategen.New(beaconDB),
StateGen: stategen.New(beaconDB, doublylinkedtree.New()),
},
}
c.head = &head{}
@@ -37,7 +38,7 @@ func TestService_HeadDomainFetcher_Errors(t *testing.T) {
beaconDB := dbtest.SetupDB(t)
c := &Service{
cfg: &config{
StateGen: stategen.New(beaconDB),
StateGen: stategen.New(beaconDB, doublylinkedtree.New()),
},
}
c.head = &head{}

View File

@@ -10,7 +10,6 @@ import (
mock "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -90,7 +89,7 @@ func TestSaveHead_Different(t *testing.T) {
pb, err := headBlock.Proto()
require.NoError(t, err)
assert.DeepEqual(t, newHeadSignedBlock, pb, "Head did not change")
assert.DeepSSZEqual(t, headState.CloneInnerState(), service.headState(ctx).CloneInnerState(), "Head did not change")
assert.DeepSSZEqual(t, headState.ToProto(), service.headState(ctx).ToProto(), "Head did not change")
}
func TestSaveHead_Different_Reorg(t *testing.T) {
@@ -148,7 +147,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
pb, err := headBlock.Proto()
require.NoError(t, err)
assert.DeepEqual(t, newHeadSignedBlock, pb, "Head did not change")
assert.DeepSSZEqual(t, headState.CloneInnerState(), service.headState(ctx).CloneInnerState(), "Head did not change")
assert.DeepSSZEqual(t, headState.ToProto(), service.headState(ctx).ToProto(), "Head did not change")
require.LogsContain(t, hook, "Chain reorg occurred")
require.LogsContain(t, hook, "distance=1")
require.LogsContain(t, hook, "depth=1")
@@ -234,64 +233,6 @@ func Test_notifyNewHeadEvent(t *testing.T) {
})
}
func TestSaveOrphanedAtts_NoCommonAncestor_Protoarray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
// this test does not make sense in doubly linked tree since it enforces
// that the finalized node is a common ancestor
service.cfg.ForkChoiceStore = protoarray.New()
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
// Chain setup
// 0 -- 1 -- 2 -- 3
// -4
st, keys := util.DeterministicGenesisState(t, 64)
blkG, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 0)
assert.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, blkG)
rG, err := blkG.Block.HashTreeRoot()
require.NoError(t, err)
blk1, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
assert.NoError(t, err)
blk1.Block.ParentRoot = rG[:]
r1, err := blk1.Block.HashTreeRoot()
require.NoError(t, err)
blk2, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 2)
assert.NoError(t, err)
blk2.Block.ParentRoot = r1[:]
r2, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
blk3, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 3)
assert.NoError(t, err)
blk3.Block.ParentRoot = r2[:]
r3, err := blk3.Block.HashTreeRoot()
require.NoError(t, err)
blk4 := util.NewBeaconBlock()
blk4.Block.Slot = 4
r4, err := blk4.Block.HashTreeRoot()
require.NoError(t, err)
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
}
require.NoError(t, service.saveOrphanedAtts(ctx, r3, r4))
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
func TestSaveOrphanedAtts(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -538,7 +479,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}

View File

@@ -338,7 +338,7 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
return err
}
default:
return errors.Errorf("invalid state type provided: %T", headState.InnerStateUnsafe())
return errors.Errorf("invalid state type provided: %T", headState.ToProtoUnsafe())
}
prevEpochActiveBalances.Set(float64(b.ActivePrevEpoch))
prevEpochSourceBalances.Set(float64(b.PrevEpochAttested))
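
The InnerStateUnsafe/CloneInnerState to ToProtoUnsafe/ToProto renames here and in the test hunks above come from the native-state cleanup. A short sketch of how the two accessors are typically used, assuming the usual Unsafe naming convention carries the same copy semantics as the old methods:

pb := headState.ToProtoUnsafe() // protobuf view backed by the state's own memory: cheap, treat as read-only
cp := headState.ToProto()       // independent copy of the protobuf, safe to mutate or hand off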

View File

@@ -6,17 +6,17 @@ import (
"testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
)
func testServiceOptsWithDB(t *testing.T) []Option {
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
return []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
}

View File

@@ -10,7 +10,7 @@ import (
"github.com/holiman/uint256"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
mocks "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
@@ -109,10 +109,10 @@ func Test_validateMergeBlock(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
@@ -159,10 +159,10 @@ func Test_validateMergeBlock(t *testing.T) {
func Test_getBlkParentHashAndTD(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)

View File

@@ -8,7 +8,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
@@ -22,14 +21,15 @@ import (
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fc),
WithStateGen(stategen.New(beaconDB, fc)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -128,142 +128,6 @@ func TestStore_OnAttestation_ErrorConditions_ProtoArray(t *testing.T) {
}
}
func TestStore_OnAttestation_ErrorConditions_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(doublylinkedtree.New()),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
_, err = blockTree1(t, beaconDB, []byte{'g'})
require.NoError(t, err)
blkWithoutState := util.NewBeaconBlock()
blkWithoutState.Block.Slot = 0
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
BlkWithOutStateRoot, err := blkWithoutState.Block.HashTreeRoot()
require.NoError(t, err)
blkWithStateBadAtt := util.NewBeaconBlock()
blkWithStateBadAtt.Block.Slot = 1
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
BlkWithStateBadAttRoot, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetSlot(100*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, BlkWithStateBadAttRoot))
blkWithValidState := util.NewBeaconBlock()
blkWithValidState.Block.Slot = 2
util.SaveBlock(t, ctx, beaconDB, blkWithValidState)
blkWithValidStateRoot, err := blkWithValidState.Block.HashTreeRoot()
require.NoError(t, err)
s, err = util.NewBeaconState()
require.NoError(t, err)
err = s.SetFork(&ethpb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
})
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, blkWithValidStateRoot))
tests := []struct {
name string
a *ethpb.Attestation
wantedErr string
}{
{
name: "attestation's data slot not aligned with target vote",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}),
wantedErr: "slot 32 does not match target epoch 0",
},
{
name: "no pre state for attestations's target block",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: BlkWithOutStateRoot[:]}}}),
wantedErr: "could not get pre state for epoch 0",
},
{
name: "process attestation doesn't match current epoch",
a: util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100 * params.BeaconConfig().SlotsPerEpoch, Target: &ethpb.Checkpoint{Epoch: 100,
Root: BlkWithStateBadAttRoot[:]}}}),
wantedErr: "target epoch 100 does not match current epoch",
},
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation can't be nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation's data can't be nil",
},
{
name: "process nil field (a.Target) in attestation",
a: &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
Target: nil,
Source: &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)},
},
AggregationBits: make([]byte, 1),
Signature: make([]byte, 96),
},
wantedErr: "attestation's target can't be nil",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := service.OnAttestation(ctx, tt.a)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestStore_OnAttestation_Ok_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
genesisState, pks := util.DeterministicGenesisState(t, 64)
service.SetGenesisTime(time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
att, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)
tRoot := bytesutil.ToBytes32(att[0].Data.Target.Root)
copied := genesisState.Copy()
copied, err = transition.ProcessSlots(ctx, copied, 1)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 1, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 1, Root: tRoot[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
}
func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -271,7 +135,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
@@ -300,7 +164,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -362,7 +226,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, doublylinkedtree.New())),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -392,7 +256,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
cached, err = service.checkpointStateCache.StateByCheckpoint(newCheckpoint)
require.NoError(t, err)
require.DeepSSZEqual(t, returned.InnerStateUnsafe(), cached.InnerStateUnsafe())
require.DeepSSZEqual(t, returned.ToProtoUnsafe(), cached.ToProtoUnsafe())
}
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
@@ -461,37 +325,6 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
assert.NoError(t, service.verifyBeaconBlock(ctx, d), "Did not receive the wanted error")
}
func TestVerifyFinalizedConsistency_InconsistentRoot_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := protoarray.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1}))
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(context.Background(), r33[:])
require.ErrorContains(t, "Root and finalized store are not consistent", err)
}
func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -499,7 +332,7 @@ func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)

View File

@@ -23,7 +23,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/monitoring/tracing"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1/attestation"
@@ -278,11 +277,11 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.SignedBeaconBlo
return nil
}
func getStateVersionAndPayload(st state.BeaconState) (int, *enginev1.ExecutionPayloadHeader, error) {
func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionData, error) {
if st == nil {
return 0, nil, errors.New("nil state")
}
var preStateHeader *enginev1.ExecutionPayloadHeader
var preStateHeader interfaces.ExecutionData
var err error
preStateVersion := st.Version()
switch preStateVersion {
@@ -340,7 +339,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.SignedBeac
}
type versionAndHeader struct {
version int
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
}
preVersionAndHeaders := make([]*versionAndHeader, len(blks))
postVersionAndHeaders := make([]*versionAndHeader, len(blks))
@@ -607,7 +606,7 @@ func (s *Service) pruneCanonicalAttsFromPool(ctx context.Context, r [32]byte, b
}
// validateMergeTransitionBlock validates the merge transition block.
func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion int, stateHeader *enginev1.ExecutionPayloadHeader, blk interfaces.SignedBeaconBlock) error {
func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion int, stateHeader interfaces.ExecutionData, blk interfaces.SignedBeaconBlock) error {
// Skip validation if block is older than Bellatrix.
if blocks.IsPreBellatrixVersion(blk.Block().Version()) {
return nil
@@ -634,11 +633,7 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// Skip validation if the block is not a merge transition block.
// To reach here, the payload must be non-empty. If the state header is empty, then this is the transition block.
wh, err := consensusblocks.WrappedExecutionPayloadHeader(stateHeader)
if err != nil {
return err
}
empty, err := consensusblocks.IsEmptyExecutionData(wh)
empty, err := consensusblocks.IsEmptyExecutionData(stateHeader)
if err != nil {
return err
}
@@ -672,7 +667,8 @@ func (s *Service) fillMissingPayloadIDRoutine(ctx context.Context, stateFeed *ev
if !atHalfSlot(ti) {
continue
}
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, s.headRoot())
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// A proposer exists for the next slot, but we haven't called fcu with the payload attribute yet.
if has && id == [8]byte{} {
headBlock, err := s.headBlock()

View File

@@ -8,7 +8,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -163,7 +162,7 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
fRoot := bytesutil.ToBytes32(cp.Root)
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(fRoot)
if err != nil && err != protoarray.ErrUnknownNodeRoot && err != doublylinkedtree.ErrNilNode {
if err != nil && !errors.Is(err, doublylinkedtree.ErrNilNode) {
return err
}
if !optimistic {
@@ -172,9 +171,14 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
return err
}
}
if err := s.cfg.StateGen.MigrateToCold(ctx, fRoot); err != nil {
return errors.Wrap(err, "could not migrate to cold")
}
go func() {
// We do not pass in the parent context from the method as this method call
// is meant to be asynchronous and run in the background rather than being
// tied to the execution of a block.
if err := s.cfg.StateGen.MigrateToCold(s.ctx, fRoot); err != nil {
log.WithError(err).Error("could not migrate to cold")
}
}()
return nil
}
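The hunk above takes the cold-state migration off the block-processing path: MigrateToCold now runs in a goroutine bound to the service's own context, so a caller's context expiring cannot cancel it, and a migration failure is only logged. A minimal sketch of the same pattern; the method name onFinalized is hypothetical, while s.cfg.StateGen, s.ctx and log come from this package:

func (s *Service) onFinalized(ctx context.Context, fRoot [32]byte) error {
    // Work the caller must wait for stays on the caller's context.
    if err := ctx.Err(); err != nil {
        return err
    }
    go func() {
        // Background migration uses the long-lived service context (s.ctx)
        // rather than ctx, so it keeps running after this method returns.
        if err := s.cfg.StateGen.MigrateToCold(s.ctx, fRoot); err != nil {
            log.WithError(err).Error("could not migrate to cold")
        }
    }()
    return nil
}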

File diff suppressed because it is too large

View File

@@ -135,12 +135,6 @@ func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
// UpdateHead updates the canonical head of the chain based on information from fork-choice attestations and votes.
// It requires no external inputs.
func (s *Service) UpdateHead(ctx context.Context) error {
// Continue when there's no fork choice attestation, there's nothing to process and update head.
// This covers the condition when the node is still initial syncing to the head of the chain.
if s.cfg.AttPool.ForkchoiceAttestationCount() == 0 {
return nil
}
// Only one process can process attestations and update head at a time.
s.processAttestationsLock.Lock()
defer s.processAttestationsLock.Unlock()
@@ -157,7 +151,7 @@ func (s *Service) UpdateHead(ctx context.Context) error {
start = time.Now()
newHeadRoot, err := s.cfg.ForkChoiceStore.Head(ctx, balances)
if err != nil {
log.WithError(err).Warn("Resolving fork due to new attestation")
log.WithError(err).Error("Could not compute head from new attestations")
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))

View File

@@ -253,3 +253,52 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, tRoot, service.head.root) // Validate head is the new one
}
func TestService_UpdateHead_NoAtts(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
fcs := doublylinkedtree.New()
opts = append(opts,
WithAttestationPool(attestations.NewPool()),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fcs),
)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.genesisTime = prysmTime.Now().Add(-2 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, service.saveGenesisData(ctx, genesisState))
copied := genesisState.Copy()
// Generate a new block
blk, err := util.GenerateFullBlock(copied, pks, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
tRoot, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, tRoot))
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)
// Insert a new block to forkchoice
ojc := &ethpb.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash[:]}
b, err := util.GenerateFullBlock(genesisState, pks, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
b.Block.ParentRoot = service.originBlockRoot[:]
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, service.UpdateHead(ctx))
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, r, service.head.root) // Validate head is the new one
}

View File

@@ -8,7 +8,7 @@ import (
blockchainTesting "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
@@ -127,13 +127,14 @@ func TestService_ReceiveBlock(t *testing.T) {
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New()),
WithForkChoiceStore(fc),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fc)),
WithFinalizedStateAtStartUp(genesis),
}
s, err := NewService(ctx, opts...)
@@ -166,13 +167,14 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
beaconDB := testDB.SetupDB(t)
genesisBlockRoot := bytesutil.ToBytes32(nil)
require.NoError(t, beaconDB.SaveState(ctx, genesis, genesisBlockRoot))
fc := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New()),
WithForkChoiceStore(fc),
WithAttestationPool(attestations.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fc)),
}
s, err := NewService(ctx, opts...)
@@ -242,12 +244,13 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fc := doublylinkedtree.New()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithForkChoiceStore(protoarray.New()),
WithForkChoiceStore(fc),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithStateGen(stategen.New(beaconDB)),
WithStateGen(stategen.New(beaconDB, fc)),
}
s, err := NewService(ctx, opts...)
require.NoError(t, err)

View File

@@ -21,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
@@ -84,7 +83,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
srv.Stop()
})
bState, _ := util.DeterministicGenesisState(t, 10)
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.InnerStateUnsafe())
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProtoUnsafe())
require.NoError(t, err)
mockTrie, err := trie.NewTrie(0)
require.NoError(t, err)
@@ -118,7 +117,8 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
// Save a state in stategen for the purpose of testing a service stop / shutdown.
require.NoError(t, stateGen.SaveState(ctx, bytesutil.ToBytes32(bState.FinalizedCheckpoint().Root), bState))
@@ -129,7 +129,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithAttestationService(attService),
WithStateGen(stateGen),
}
@@ -306,9 +306,10 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithDatabase(beaconDB),
WithStateGen(stateGen),
WithAttestationService(attSrv),
@@ -324,7 +325,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
assert.DeepEqual(t, headBlock, pb, "Head block incorrect")
s, err := c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, c.HeadSlot(), headBlock.Block.Slot, "Head slot incorrect")
r, err := c.HeadRoot(context.Background())
require.NoError(t, err)
@@ -365,9 +366,10 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
}
require.NoError(t, beaconDB.SaveStateSummary(ctx, ss))
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: headRoot[:], Epoch: slots.ToEpoch(finalizedSlot)}))
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithDatabase(beaconDB),
WithStateGen(stateGen),
WithAttestationService(attSrv),
@@ -378,7 +380,7 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
require.NoError(t, c.StartFromSavedState(headState))
s, err := c.HeadState(ctx)
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.InnerStateUnsafe(), s.InnerStateUnsafe(), "Head state incorrect")
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, genesisRoot, c.originBlockRoot, "Genesis block root incorrect")
pb, err := c.head.block.Proto()
require.NoError(t, err)
@@ -388,8 +390,9 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
fc := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB), ForkChoiceStore: doublylinkedtree.New()},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
}
blk := util.NewBeaconBlock()
blk.Block.Slot = 1
@@ -409,25 +412,6 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
}
}
func TestHasBlock_ForkChoiceAndDB_ProtoArray(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
}
b := util.NewBeaconBlock()
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
beaconState, err := util.NewBeaconState()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
}
func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
@@ -451,7 +435,7 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB)},
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
ctx: ctx,
cancel: cancel,
initSyncBlocks: make(map[[32]byte]interfaces.SignedBeaconBlock),
@@ -499,27 +483,6 @@ func BenchmarkHasBlockDB(b *testing.B) {
}
}
func BenchmarkHasBlockForkChoiceStore_ProtoArray(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: protoarray.New(), BeaconDB: beaconDB},
}
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := state_native.InitializeFromProtoPhase0(bs)
require.NoError(b, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(b, err)
require.NoError(b, s.insertBlockToForkchoiceStore(ctx, wsb.Block(), r, beaconState))
b.ResetTimer()
for i := 0; i < b.N; i++ {
require.Equal(b, true, s.cfg.ForkChoiceStore.HasNode(r), "Block is not in fork choice store")
}
}
func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
@@ -572,9 +535,10 @@ func TestChainService_EverythingOptimistic(t *testing.T) {
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(finalizedSlot), Root: headRoot[:]}))
attSrv, err := attestations.NewService(ctx, &attestations.Config{})
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
c, err := NewService(ctx,
WithForkChoiceStore(doublylinkedtree.New()),
WithForkChoiceStore(fc),
WithDatabase(beaconDB),
WithStateGen(stateGen),
WithAttestationService(attSrv),

View File

@@ -6,7 +6,7 @@ import (
"github.com/pkg/errors"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -72,7 +72,7 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, err)
fcs := protoarray.New()
fcs := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt, ForkChoiceStore: fcs},
wsVerifier: wv,

View File

@@ -20,13 +20,6 @@ var (
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
getStatusLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_status_latency_milliseconds",
Help: "Captures RPC latency for get status in milliseconds",
Buckets: []float64{1, 2, 5, 10, 20, 50, 100, 200, 500, 1000},
},
)
registerValidatorLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "register_validator_latency_milliseconds",

View File

@@ -72,7 +72,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
}
// Start initializes the service.
func (*Service) Start() {}
func (s *Service) Start() {
go s.pollRelayerStatus(s.ctx)
}
// Stop halts the service.
func (*Service) Stop() error {
@@ -105,19 +107,12 @@ func (s *Service) GetHeader(ctx context.Context, slot types.Slot, parentHash [32
// Status retrieves the status of the builder relay network.
func (s *Service) Status() error {
ctx, span := trace.StartSpan(context.Background(), "builder.Status")
defer span.End()
start := time.Now()
defer func() {
getStatusLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
// Return early if builder isn't initialized in service.
if s.c == nil {
return nil
}
return s.c.Status(ctx)
return nil
}
// RegisterValidator registers a validator with the builder relay network.
@@ -157,3 +152,20 @@ func (s *Service) RegisterValidator(ctx context.Context, reg []*ethpb.SignedVali
func (s *Service) Configured() bool {
return s.c != nil && !reflect.ValueOf(s.c).IsNil()
}
func (s *Service) pollRelayerStatus(ctx context.Context) {
ticker := time.NewTicker(time.Minute)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if s.c != nil {
if err := s.c.Status(ctx); err != nil {
log.WithError(err).Error("Failed to call relayer status endpoint, perhaps mev-boost or relayers are down")
}
}
case <-ctx.Done():
return
}
}
}
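After this change Status() always returns nil and the real relay health check lives in pollRelayerStatus, which Start() launches with the service context: once a minute it calls s.c.Status and logs a failure, and it exits as soon as that context is cancelled. A rough, hypothetical test sketch of the stop path (the field names ctx and c are the ones used above; this test is not part of the change):

func TestPollRelayerStatus_StopsOnCancel(t *testing.T) {
    ctx, cancel := context.WithCancel(context.Background())
    s := &Service{ctx: ctx} // no client configured, so the loop only ticks
    done := make(chan struct{})
    go func() {
        s.pollRelayerStatus(ctx)
        close(done)
    }()
    cancel() // ctx.Done() is selected on the next loop iteration
    <-done   // the goroutine returns without calling any relay
}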

View File

@@ -33,9 +33,9 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
s, err = cache.StateByCheckpoint(cp1)
require.NoError(t, err)
pbState1, err := state_native.ProtobufBeaconStatePhase0(s.InnerStateUnsafe())
pbState1, err := state_native.ProtobufBeaconStatePhase0(s.ToProtoUnsafe())
require.NoError(t, err)
pbstate, err := state_native.ProtobufBeaconStatePhase0(st.InnerStateUnsafe())
pbstate, err := state_native.ProtobufBeaconStatePhase0(st.ToProtoUnsafe())
require.NoError(t, err)
if !proto.Equal(pbState1, pbstate) {
t.Error("incorrectly cached state")
@@ -50,11 +50,11 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
s, err = cache.StateByCheckpoint(cp2)
require.NoError(t, err)
assert.DeepEqual(t, st2.CloneInnerState(), s.CloneInnerState(), "incorrectly cached state")
assert.DeepEqual(t, st2.ToProto(), s.ToProto(), "incorrectly cached state")
s, err = cache.StateByCheckpoint(cp1)
require.NoError(t, err)
assert.DeepEqual(t, st.CloneInnerState(), s.CloneInnerState(), "incorrectly cached state")
assert.DeepEqual(t, st.ToProto(), s.ToProto(), "incorrectly cached state")
}
func TestCheckpointStateCache_MaxSize(t *testing.T) {

View File

@@ -33,5 +33,5 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
res, err := c.Get(ctx, r)
require.NoError(t, err)
assert.DeepEqual(t, res.CloneInnerState(), s.CloneInnerState(), "Expected equal protos to return from cache")
assert.DeepEqual(t, res.ToProto(), s.ToProto(), "Expected equal protos to return from cache")
}

View File

@@ -45,7 +45,6 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/slashings:go_default_library",
@@ -93,6 +92,7 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",

View File

@@ -10,7 +10,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
@@ -38,11 +37,7 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
if err != nil {
return false, err
}
wrappedHeader, err := blocks.WrappedExecutionPayloadHeader(h)
if err != nil {
return false, err
}
isEmpty, err := blocks.IsEmptyExecutionData(wrappedHeader)
isEmpty, err := blocks.IsEmptyExecutionData(h)
if err != nil {
return false, err
}
@@ -95,12 +90,8 @@ func IsExecutionEnabled(st state.BeaconState, body interfaces.BeaconBlockBody) (
// IsExecutionEnabledUsingHeader returns true if the execution is enabled using post processed payload header and block body.
// This is an optimized version of IsExecutionEnabled where beacon state is not required as an argument.
func IsExecutionEnabledUsingHeader(header *enginev1.ExecutionPayloadHeader, body interfaces.BeaconBlockBody) (bool, error) {
wrappedHeader, err := blocks.WrappedExecutionPayloadHeader(header)
if err != nil {
return false, err
}
isEmpty, err := blocks.IsEmptyExecutionData(wrappedHeader)
func IsExecutionEnabledUsingHeader(header interfaces.ExecutionData, body interfaces.BeaconBlockBody) (bool, error) {
isEmpty, err := blocks.IsEmptyExecutionData(header)
if err != nil {
return false, err
}
@@ -135,7 +126,7 @@ func ValidatePayloadWhenMergeCompletes(st state.BeaconState, payload interfaces.
if err != nil {
return err
}
if !bytes.Equal(payload.ParentHash(), header.BlockHash) {
if !bytes.Equal(payload.ParentHash(), header.BlockHash()) {
return ErrInvalidPayloadBlockHash
}
return nil
@@ -236,7 +227,7 @@ func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header interf
if err != nil {
return err
}
if !bytes.Equal(header.ParentHash(), h.BlockHash) {
if !bytes.Equal(header.ParentHash(), h.BlockHash()) {
return ErrInvalidPayloadBlockHash
}
return nil

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
consensusblocks "github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
@@ -20,127 +21,170 @@ import (
func Test_IsMergeComplete(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayloadHeader
payload interfaces.ExecutionData
want bool
}{
{
name: "empty payload header",
payload: emptyPayloadHeader(),
want: false,
name: "empty payload header",
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
want: false,
},
{
name: "has parent hash",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has fee recipient",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.FeeRecipient = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has state root",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.StateRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has receipt root",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ReceiptsRoot = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has logs bloom",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.LogsBloom = bytesutil.PadTo([]byte{'a'}, fieldparams.LogsBloomLength)
return h
}(),
want: true,
},
{
name: "has random",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has base fee",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BaseFeePerGas = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has block hash",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has extra data",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ExtraData = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ExtraData = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "has block number",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockNumber = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockNumber = 1
return h
}(),
want: true,
},
{
name: "has gas limit",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.GasLimit = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.GasLimit = 1
return h
}(),
want: true,
},
{
name: "has gas used",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.GasUsed = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.GasUsed = 1
return h
}(),
want: true,
},
{
name: "has time stamp",
payload: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.Timestamp = 1
payload: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.Timestamp = 1
return h
}(),
want: true,
@@ -149,9 +193,7 @@ func Test_IsMergeComplete(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.payload)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(t, st.SetLatestExecutionPayloadHeader(tt.payload))
got, err := blocks.IsMergeTransitionComplete(st)
require.NoError(t, err)
if got != tt.want {
@@ -199,36 +241,51 @@ func Test_IsExecutionEnabled(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
useAltairSt bool
want bool
}{
{
name: "use older than bellatrix state",
payload: emptyPayload(),
header: emptyPayloadHeader(),
name: "use older than bellatrix state",
payload: emptyPayload(),
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
useAltairSt: true,
want: false,
},
{
name: "empty header, empty payload",
payload: emptyPayload(),
header: emptyPayloadHeader(),
want: false,
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
want: false,
},
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "empty header, non-empty payload",
header: emptyPayloadHeader(),
name: "empty header, non-empty payload",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Timestamp = 1
@@ -238,9 +295,12 @@ func Test_IsExecutionEnabled(t *testing.T) {
},
{
name: "non-empty header, non-empty payload",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
@@ -254,9 +314,7 @@ func Test_IsExecutionEnabled(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(t, st.SetLatestExecutionPayloadHeader(tt.header))
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload = tt.payload
body, err := consensusblocks.NewBeaconBlockBody(blk.Block.Body)
@@ -277,28 +335,39 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
want bool
}{
{
name: "empty header, empty payload",
payload: emptyPayload(),
header: emptyPayloadHeader(),
want: false,
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
want: false,
},
{
name: "non-empty header, empty payload",
payload: emptyPayload(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
want: true,
},
{
name: "empty header, non-empty payload",
header: emptyPayloadHeader(),
name: "empty header, non-empty payload",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
p := emptyPayload()
p.Timestamp = 1
@@ -308,9 +377,12 @@ func Test_IsExecutionEnabledUsingHeader(t *testing.T) {
},
{
name: "non-empty header, non-empty payload",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
payload: func() *enginev1.ExecutionPayload {
@@ -340,14 +412,18 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
tests := []struct {
name string
payload *enginev1.ExecutionPayload
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "merge incomplete",
payload: emptyPayload(),
header: emptyPayloadHeader(),
err: nil,
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
err: nil,
},
{
name: "validate passes",
@@ -356,9 +432,12 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return h
}(),
err: nil,
@@ -370,9 +449,12 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
p.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
return p
}(),
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.BlockHash = bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength)
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.BlockHash = bytesutil.PadTo([]byte{'b'}, fieldparams.RootLength)
return h
}(),
err: blocks.ErrInvalidPayloadBlockHash,
@@ -381,9 +463,7 @@ func Test_ValidatePayloadWhenMergeCompletes(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(t, st.SetLatestExecutionPayloadHeader(tt.header))
wrappedPayload, err := consensusblocks.WrappedExecutionPayload(tt.payload)
require.NoError(t, err)
err = blocks.ValidatePayloadWhenMergeCompletes(st, wrappedPayload)
@@ -493,8 +573,10 @@ func Test_ProcessPayload(t *testing.T) {
require.Equal(t, tt.err, err)
want, err := consensusblocks.PayloadToHeader(wrappedPayload)
require.Equal(t, tt.err, err)
got, err := st.LatestExecutionPayloadHeader()
h, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
got, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
require.DeepSSZEqual(t, want, got)
}
})
@@ -509,29 +591,39 @@ func Test_ProcessPayloadHeader(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "process passes",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = uint64(ts.Unix())
return h
}(), err: nil,
},
{
name: "incorrect prev randao",
header: emptyPayloadHeader(),
err: blocks.ErrInvalidPayloadPrevRandao,
name: "incorrect prev randao",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
err: blocks.ErrInvalidPayloadPrevRandao,
},
{
name: "incorrect timestamp",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = 1
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = 1
return h
}(),
err: blocks.ErrInvalidPayloadTimeStamp,
@@ -539,16 +631,18 @@ func Test_ProcessPayloadHeader(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
st, err := blocks.ProcessPayloadHeader(st, wrappedHeader)
st, err := blocks.ProcessPayloadHeader(st, tt.header)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)
got, err := st.LatestExecutionPayloadHeader()
want, ok := tt.header.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
h, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
require.DeepSSZEqual(t, tt.header, got)
got, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
require.DeepSSZEqual(t, want, got)
}
})
}
@@ -562,29 +656,39 @@ func Test_ValidatePayloadHeader(t *testing.T) {
require.NoError(t, err)
tests := []struct {
name string
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "process passes",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = uint64(ts.Unix())
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = uint64(ts.Unix())
return h
}(), err: nil,
},
{
name: "incorrect prev randao",
header: emptyPayloadHeader(),
err: blocks.ErrInvalidPayloadPrevRandao,
name: "incorrect prev randao",
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
err: blocks.ErrInvalidPayloadPrevRandao,
},
{
name: "incorrect timestamp",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.PrevRandao = random
h.Timestamp = 1
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.PrevRandao = random
p.Timestamp = 1
return h
}(),
err: blocks.ErrInvalidPayloadTimeStamp,
@@ -592,9 +696,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
err = blocks.ValidatePayloadHeader(st, wrappedHeader)
err = blocks.ValidatePayloadHeader(st, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -609,13 +711,14 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
tests := []struct {
name string
state state.BeaconState
header *enginev1.ExecutionPayloadHeader
header interfaces.ExecutionData
err error
}{
{
name: "no merge",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
return h
}(),
state: emptySt,
@@ -623,9 +726,12 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
},
{
name: "process passes",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = []byte{'a'}
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = []byte{'a'}
return h
}(),
state: st,
@@ -633,9 +739,12 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
},
{
name: "invalid block hash",
header: func() *enginev1.ExecutionPayloadHeader {
h := emptyPayloadHeader()
h.ParentHash = []byte{'b'}
header: func() interfaces.ExecutionData {
h, err := emptyPayloadHeader()
require.NoError(t, err)
p, ok := h.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
p.ParentHash = []byte{'b'}
return h
}(),
state: st,
@@ -644,9 +753,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(tt.header)
require.NoError(t, err)
err = blocks.ValidatePayloadHeaderWhenMergeCompletes(tt.state, wrappedHeader)
err = blocks.ValidatePayloadHeaderWhenMergeCompletes(tt.state, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -695,9 +802,9 @@ func Test_PayloadToHeader(t *testing.T) {
func BenchmarkBellatrixComplete(b *testing.B) {
st, _ := util.DeterministicGenesisStateBellatrix(b, 1)
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeader(emptyPayloadHeader())
h, err := emptyPayloadHeader()
require.NoError(b, err)
require.NoError(b, st.SetLatestExecutionPayloadHeader(wrappedHeader))
require.NoError(b, st.SetLatestExecutionPayloadHeader(h))
b.ResetTimer()
for i := 0; i < b.N; i++ {
@@ -706,8 +813,8 @@ func BenchmarkBellatrixComplete(b *testing.B) {
}
}
func emptyPayloadHeader() *enginev1.ExecutionPayloadHeader {
return &enginev1.ExecutionPayloadHeader{
func emptyPayloadHeader() (interfaces.ExecutionData, error) {
return consensusblocks.WrappedExecutionPayloadHeader(&enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
@@ -718,7 +825,7 @@ func emptyPayloadHeader() *enginev1.ExecutionPayloadHeader {
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
}
})
}
func emptyPayload() *enginev1.ExecutionPayload {

View File

@@ -61,6 +61,9 @@ func TestUpgradeToBellatrix(t *testing.T) {
header, err := mSt.LatestExecutionPayloadHeader()
require.NoError(t, err)
protoHeader, ok := header.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
wanted := &enginev1.ExecutionPayloadHeader{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
@@ -76,5 +79,5 @@ func TestUpgradeToBellatrix(t *testing.T) {
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
}
require.DeepEqual(t, wanted, header)
require.DeepEqual(t, wanted, protoHeader)
}

View File

@@ -124,7 +124,7 @@ func BenchmarkHashTreeRootState_FullState(b *testing.B) {
func BenchmarkMarshalState_FullState(b *testing.B) {
beaconState, err := benchmark.PreGenstateFullEpochs()
require.NoError(b, err)
natState, err := state_native.ProtobufBeaconStatePhase0(beaconState.InnerStateUnsafe())
natState, err := state_native.ProtobufBeaconStatePhase0(beaconState.ToProtoUnsafe())
require.NoError(b, err)
b.Run("Proto_Marshal", func(b *testing.B) {
b.ResetTimer()
@@ -148,7 +148,7 @@ func BenchmarkMarshalState_FullState(b *testing.B) {
func BenchmarkUnmarshalState_FullState(b *testing.B) {
beaconState, err := benchmark.PreGenstateFullEpochs()
require.NoError(b, err)
natState, err := state_native.ProtobufBeaconStatePhase0(beaconState.InnerStateUnsafe())
natState, err := state_native.ProtobufBeaconStatePhase0(beaconState.ToProtoUnsafe())
require.NoError(b, err)
protoObject, err := proto.Marshal(natState)
require.NoError(b, err)

View File

@@ -20,7 +20,7 @@ func TestSkipSlotCache_OK(t *testing.T) {
transition.SkipSlotCache.Enable()
defer transition.SkipSlotCache.Disable()
bState, privs := util.DeterministicGenesisState(t, params.MinimalSpecConfig().MinGenesisActiveValidatorCount)
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.CloneInnerState())
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProto())
require.NoError(t, err)
originalState, err := state_native.InitializeFromProtoPhase0(pbState)
require.NoError(t, err)
@@ -42,12 +42,12 @@ func TestSkipSlotCache_OK(t *testing.T) {
bState, err = transition.ExecuteStateTransition(context.Background(), bState, wsb)
require.NoError(t, err, "Could not process state transition")
assert.DeepEqual(t, originalState.CloneInnerState(), bState.CloneInnerState(), "Skipped slots cache leads to different states")
assert.DeepEqual(t, originalState.ToProto(), bState.ToProto(), "Skipped slots cache leads to different states")
}
func TestSkipSlotCache_ConcurrentMixup(t *testing.T) {
bState, privs := util.DeterministicGenesisState(t, params.MinimalSpecConfig().MinGenesisActiveValidatorCount)
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.CloneInnerState())
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProto())
require.NoError(t, err)
originalState, err := state_native.InitializeFromProtoPhase0(pbState)
require.NoError(t, err)

View File

@@ -98,9 +98,9 @@ func TestGenesisState_HashEquality(t *testing.T) {
state, err := transition.GenesisBeaconState(context.Background(), deposits, 0, &ethpb.Eth1Data{BlockHash: make([]byte, 32)})
require.NoError(t, err)
pbState1, err := state_native.ProtobufBeaconStatePhase0(state1.CloneInnerState())
pbState1, err := state_native.ProtobufBeaconStatePhase0(state1.ToProto())
require.NoError(t, err)
pbstate, err := state_native.ProtobufBeaconStatePhase0(state.CloneInnerState())
pbstate, err := state_native.ProtobufBeaconStatePhase0(state.ToProto())
require.NoError(t, err)
root1, err1 := hash.HashProto(pbState1)

View File

@@ -51,7 +51,7 @@ type ReadOnlyDatabase interface {
DepositContractAddress(ctx context.Context) ([]byte, error)
// ExecutionChainData operations.
ExecutionChainData(ctx context.Context) (*ethpb.ETH1ChainData, error)
// Fee reicipients operations.
// Fee recipients operations.
FeeRecipientByValidatorID(ctx context.Context, id types.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id types.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// origin checkpoint sync support
@@ -85,7 +85,7 @@ type NoHeadAccessDatabase interface {
SaveExecutionChainData(ctx context.Context, data *ethpb.ETH1ChainData) error
// Run any required database migrations.
RunMigrations(ctx context.Context) error
// Fee reicipients operations.
// Fee recipients operations.
SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []types.ValidatorIndex, addrs []common.Address) error
SaveRegistrationsByValidatorIDs(ctx context.Context, ids []types.ValidatorIndex, regs []*ethpb.ValidatorRegistrationV1) error

View File

@@ -23,3 +23,10 @@ func hasBellatrixBlindKey(enc []byte) bool {
}
return bytes.Equal(enc[:len(bellatrixBlindKey)], bellatrixBlindKey)
}
func hasCapellaKey(enc []byte) bool {
if len(capellaKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(capellaKey)], capellaKey)
}
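hasCapellaKey mirrors the existing per-fork helpers: every state is persisted as snappy-compressed bytes with a short fork tag prepended, and on read the tag selects which SSZ type to decode. The decodeCapella helper below is hypothetical and only summarizes the read path that unmarshalState implements later in this change:

func decodeCapella(stored []byte) (state.BeaconState, error) {
    enc, err := snappy.Decode(nil, stored)
    if err != nil {
        return nil, err
    }
    if !hasCapellaKey(enc) {
        return nil, errors.New("not a capella-tagged state")
    }
    // Strip the tag, then unmarshal the remaining SSZ bytes.
    protoState := &ethpb.BeaconStateCapella{}
    if err := protoState.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
        return nil, errors.Wrap(err, "failed to unmarshal encoding for capella")
    }
    return statenative.InitializeFromProtoUnsafeCapella(protoState)
}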

View File

@@ -92,7 +92,7 @@ func Test_migrateStateValidators(t *testing.T) {
assert.NoError(t, hashErr)
individualHashes = append(individualHashes, hash[:])
}
pbState, err := state_native.ProtobufBeaconStatePhase0(st.InnerStateUnsafe())
pbState, err := state_native.ProtobufBeaconStatePhase0(st.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {
@@ -138,7 +138,7 @@ func Test_migrateStateValidators(t *testing.T) {
blockRoot := [32]byte{'A'}
rcvdState, err := dbStore.State(context.Background(), blockRoot)
assert.NoError(t, err)
require.DeepSSZEqual(t, rcvdState.InnerStateUnsafe(), state.InnerStateUnsafe(), "saved state with validators and retrieved state are not matching")
require.DeepSSZEqual(t, rcvdState.ToProtoUnsafe(), state.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// find hashes of the validators that are set as part of the state
var hashes []byte
@@ -151,7 +151,7 @@ func Test_migrateStateValidators(t *testing.T) {
}
// check if all the validators that were in the state, are stored properly in the validator bucket
pbState, err := state_native.ProtobufBeaconStatePhase0(rcvdState.InnerStateUnsafe())
pbState, err := state_native.ProtobufBeaconStatePhase0(rcvdState.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {
@@ -241,7 +241,7 @@ func Test_migrateAltairStateValidators(t *testing.T) {
blockRoot := [32]byte{'A'}
rcvdState, err := dbStore.State(context.Background(), blockRoot)
assert.NoError(t, err)
require.DeepSSZEqual(t, rcvdState.InnerStateUnsafe(), state.InnerStateUnsafe(), "saved state with validators and retrieved state are not matching")
require.DeepSSZEqual(t, rcvdState.ToProtoUnsafe(), state.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// find hashes of the validators that are set as part of the state
var hashes []byte
@@ -254,7 +254,7 @@ func Test_migrateAltairStateValidators(t *testing.T) {
}
// check if all the validators that were in the state, are stored properly in the validator bucket
pbState, err := state_native.ProtobufBeaconStateAltair(rcvdState.InnerStateUnsafe())
pbState, err := state_native.ProtobufBeaconStateAltair(rcvdState.ToProtoUnsafe())
assert.NoError(t, err)
validatorsFoundCount := 0
for _, val := range pbState.Validators {

View File

@@ -52,6 +52,7 @@ var (
altairKey = []byte("altair")
bellatrixKey = []byte("merge")
bellatrixBlindKey = []byte("blind-bellatrix")
capellaKey = []byte("capella")
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")
// block root tracking the progress of backfill, or pointing at genesis if backfill has not been initiated

View File

@@ -189,7 +189,7 @@ func getValidators(states []state.ReadOnlyBeaconState) ([][]byte, map[string]*et
validatorsEntries := make(map[string]*ethpb.Validator) // It's a map to make sure that you store only new validator entries.
validatorKeys := make([][]byte, len(states)) // For every state, this stores a compressed list of validator keys.
for i, st := range states {
pb, ok := st.InnerStateUnsafe().(withValidators)
pb, ok := st.ToProtoUnsafe().(withValidators)
if !ok {
return nil, nil, errors.New("could not cast state to interface with GetValidators()")
}
@@ -228,7 +228,7 @@ func (s *Store) saveStatesEfficientInternal(ctx context.Context, tx *bolt.Tx, bl
// validator entries. To bring the gap closer, we empty the validators
// just before Put() and repopulate that state with original validators.
// look at issue https://github.com/prysmaticlabs/prysm/issues/9262.
switch rawType := states[i].InnerStateUnsafe().(type) {
switch rawType := states[i].ToProtoUnsafe().(type) {
case *ethpb.BeaconState:
pbState, err := statenative.ProtobufBeaconStatePhase0(rawType)
if err != nil {
@@ -294,6 +294,28 @@ func (s *Store) saveStatesEfficientInternal(ctx context.Context, tx *bolt.Tx, bl
if err := valIdxBkt.Put(rt[:], validatorKeys[i]); err != nil {
return err
}
case *ethpb.BeaconStateCapella:
pbState, err := statenative.ProtobufBeaconStateCapella(rawType)
if err != nil {
return err
}
if pbState == nil {
return errors.New("nil state")
}
valEntries := pbState.Validators
pbState.Validators = make([]*ethpb.Validator, 0)
rawObj, err := pbState.MarshalSSZ()
if err != nil {
return err
}
encodedState := snappy.Encode(nil, append(capellaKey, rawObj...))
if err := bucket.Put(rt[:], encodedState); err != nil {
return err
}
pbState.Validators = valEntries
if err := valIdxBkt.Put(rt[:], validatorKeys[i]); err != nil {
return err
}
default:
return errors.New("invalid state type")
}
@@ -451,11 +473,25 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
}
switch {
case hasCapellaKey(enc):
// Unmarshal state bytes into a capella beacon state.
protoState := &ethpb.BeaconStateCapella{}
if err := protoState.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for capella")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return nil, err
}
if ok {
protoState.Validators = validatorEntries
}
return statenative.InitializeFromProtoUnsafeCapella(protoState)
case hasBellatrixKey(enc):
// Marshal state bytes to altair beacon state.
// Marshal state bytes to bellatrix beacon state.
protoState := &ethpb.BeaconStateBellatrix{}
if err := protoState.UnmarshalSSZ(enc[len(bellatrixKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for altair")
return nil, errors.Wrap(err, "failed to unmarshal encoding for bellatrix")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
@@ -498,15 +534,15 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
// marshal versioned state from struct type down to bytes.
func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, error) {
switch st.InnerStateUnsafe().(type) {
switch st.ToProtoUnsafe().(type) {
case *ethpb.BeaconState:
rState, ok := st.InnerStateUnsafe().(*ethpb.BeaconState)
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconState)
if !ok {
return nil, errors.New("non valid inner state")
}
return encode(ctx, rState)
case *ethpb.BeaconStateAltair:
rState, ok := st.InnerStateUnsafe().(*ethpb.BeaconStateAltair)
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateAltair)
if !ok {
return nil, errors.New("non valid inner state")
}
@@ -519,7 +555,7 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
}
return snappy.Encode(nil, append(altairKey, rawObj...)), nil
case *ethpb.BeaconStateBellatrix:
rState, ok := st.InnerStateUnsafe().(*ethpb.BeaconStateBellatrix)
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateBellatrix)
if !ok {
return nil, errors.New("non valid inner state")
}
@@ -531,6 +567,19 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(bellatrixKey, rawObj...)), nil
case *ethpb.BeaconStateCapella:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateCapella)
if !ok {
return nil, errors.New("non valid inner state")
}
if rState == nil {
return nil, errors.New("nil state")
}
rawObj, err := rState.MarshalSSZ()
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(capellaKey, rawObj...)), nil
default:
return nil, errors.New("invalid inner state")
}
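
The Capella cases added above reuse the storage pattern already in place for Altair and Bellatrix: SSZ-marshal the protobuf state, prepend a fork-specific key so unmarshalState can pick the right type, and snappy-compress the blob. Below is a minimal, self-contained sketch of that round trip; the bucket writes, validator stripping, and statenative helpers from the diff are omitted, and encodeCapella/decodeCapella are illustrative names, not functions from the codebase.

package main

import (
	"bytes"
	"errors"
	"fmt"

	"github.com/golang/snappy"
)

var capellaKey = []byte("capella")

// encodeCapella prefixes the SSZ-marshaled state with the fork key and
// compresses it, mirroring the Put path in the diff above.
func encodeCapella(ssz []byte) []byte {
	return snappy.Encode(nil, append(capellaKey, ssz...))
}

// decodeCapella decompresses, checks the fork key, and returns the raw SSZ
// bytes ready to be unmarshaled into a Capella state.
func decodeCapella(enc []byte) ([]byte, error) {
	raw, err := snappy.Decode(nil, enc)
	if err != nil {
		return nil, err
	}
	if !bytes.HasPrefix(raw, capellaKey) {
		return nil, errors.New("not a capella-encoded state")
	}
	return raw[len(capellaKey):], nil
}

func main() {
	ssz := []byte{0x01, 0x02, 0x03} // stand-in for state.MarshalSSZ() output
	out, err := decodeCapella(encodeCapella(ssz))
	fmt.Println(out, err) // [1 2 3] <nil>
}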

View File

@@ -44,7 +44,7 @@ func TestState_CanSaveRetrieve(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe(), "saved state and retrieved state are not matching")
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe(), "saved state and retrieved state are not matching")
savedS, err = db.State(context.Background(), [32]byte{'B'})
require.NoError(t, err)
@@ -77,7 +77,7 @@ func TestState_CanSaveRetrieveValidatorEntries(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe(), "saved state with validators and retrieved state are not matching")
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// check if the index of the second state is still present.
err = db.db.Update(func(tx *bolt.Tx) error {
@@ -129,7 +129,7 @@ func TestStateAltair_CanSaveRetrieveValidatorEntries(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe(), "saved state with validators and retrieved state are not matching")
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// check if the index of the second state is still present.
err = db.db.Update(func(tx *bolt.Tx) error {
@@ -239,7 +239,7 @@ func TestState_CanSaveRetrieveValidatorEntriesWithoutCache(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe(), "saved state with validators and retrieved state are not matching")
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// check if the index of the second state is still present.
err = db.db.Update(func(tx *bolt.Tx) error {
@@ -360,7 +360,7 @@ func TestGenesisState_CanSaveRetrieve(t *testing.T) {
savedGenesisS, err := db.GenesisState(context.Background())
require.NoError(t, err)
assert.DeepSSZEqual(t, st.InnerStateUnsafe(), savedGenesisS.InnerStateUnsafe(), "Did not retrieve saved state")
assert.DeepSSZEqual(t, st.ToProtoUnsafe(), savedGenesisS.ToProtoUnsafe(), "Did not retrieve saved state")
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), [32]byte{'C'}))
}
@@ -481,7 +481,7 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
s0 := st.InnerStateUnsafe()
s0 := st.ToProtoUnsafe()
require.NoError(t, db.SaveState(context.Background(), st, r))
b.Block.Slot = 100
@@ -493,7 +493,7 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
st, err = util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(100))
s1 := st.InnerStateUnsafe()
s1 := st.ToProtoUnsafe()
require.NoError(t, db.SaveState(context.Background(), st, r1))
b.Block.Slot = 1000
@@ -505,21 +505,21 @@ func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
st, err = util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(1000))
s2 := st.InnerStateUnsafe()
s2 := st.ToProtoUnsafe()
require.NoError(t, db.SaveState(context.Background(), st, r2))
highest, err := db.HighestSlotStatesBelow(context.Background(), 2)
require.NoError(t, err)
assert.DeepSSZEqual(t, highest[0].InnerStateUnsafe(), s0)
assert.DeepSSZEqual(t, highest[0].ToProtoUnsafe(), s0)
highest, err = db.HighestSlotStatesBelow(context.Background(), 101)
require.NoError(t, err)
assert.DeepSSZEqual(t, highest[0].InnerStateUnsafe(), s1)
assert.DeepSSZEqual(t, highest[0].ToProtoUnsafe(), s1)
highest, err = db.HighestSlotStatesBelow(context.Background(), 1001)
require.NoError(t, err)
assert.DeepSSZEqual(t, highest[0].InnerStateUnsafe(), s2)
assert.DeepSSZEqual(t, highest[0].ToProtoUnsafe(), s2)
}
func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
@@ -546,14 +546,14 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
highest, err := db.HighestSlotStatesBelow(context.Background(), 2)
require.NoError(t, err)
assert.DeepSSZEqual(t, highest[0].InnerStateUnsafe(), st.InnerStateUnsafe())
assert.DeepSSZEqual(t, highest[0].ToProtoUnsafe(), st.ToProtoUnsafe())
highest, err = db.HighestSlotStatesBelow(context.Background(), 1)
require.NoError(t, err)
assert.DeepSSZEqual(t, highest[0].InnerStateUnsafe(), genesisState.InnerStateUnsafe())
assert.DeepSSZEqual(t, highest[0].ToProtoUnsafe(), genesisState.ToProtoUnsafe())
highest, err = db.HighestSlotStatesBelow(context.Background(), 0)
require.NoError(t, err)
assert.DeepSSZEqual(t, highest[0].InnerStateUnsafe(), genesisState.InnerStateUnsafe())
assert.DeepSSZEqual(t, highest[0].ToProtoUnsafe(), genesisState.ToProtoUnsafe())
}
func TestStore_CleanUpDirtyStates_AboveThreshold(t *testing.T) {
@@ -680,7 +680,7 @@ func TestAltairState_CanSaveRetrieve(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe())
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe())
savedS, err = db.State(context.Background(), [32]byte{'B'})
require.NoError(t, err)
@@ -831,7 +831,7 @@ func TestStateBellatrix_CanSaveRetrieveValidatorEntries(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe(), "saved state with validators and retrieved state are not matching")
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe(), "saved state with validators and retrieved state are not matching")
// check if the index of the second state is still present.
err = db.db.Update(func(tx *bolt.Tx) error {
@@ -874,7 +874,7 @@ func TestBellatrixState_CanSaveRetrieve(t *testing.T) {
savedS, err := db.State(context.Background(), r)
require.NoError(t, err)
require.DeepSSZEqual(t, st.InnerStateUnsafe(), savedS.InnerStateUnsafe())
require.DeepSSZEqual(t, st.ToProtoUnsafe(), savedS.ToProtoUnsafe())
savedS, err = db.State(context.Background(), [32]byte{'B'})
require.NoError(t, err)

View File

@@ -102,6 +102,7 @@ go_test(
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/execution/types:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -10,7 +10,6 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/rpc"
gethRPC "github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
"github.com/pkg/errors"
@@ -37,8 +36,6 @@ const (
ExecutionBlockByHashMethod = "eth_getBlockByHash"
// ExecutionBlockByNumberMethod request string for JSON-RPC.
ExecutionBlockByNumberMethod = "eth_getBlockByNumber"
// Defines the seconds to wait before timing out engine endpoints with block execution semantics (newPayload, forkchoiceUpdated).
payloadAndForkchoiceUpdatedTimeout = 8 * time.Second
// Defines the seconds before timing out engine endpoints with non-block execution semantics.
defaultEngineTimeout = time.Second
)
@@ -85,7 +82,8 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
defer func() {
newPayloadLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
d := time.Now().Add(payloadAndForkchoiceUpdatedTimeout)
d := time.Now().Add(time.Duration(params.BeaconConfig().ExecutionEngineTimeoutValue) * time.Second)
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
result := &pb.PayloadStatus{}
@@ -123,7 +121,7 @@ func (s *Service) ForkchoiceUpdated(
forkchoiceUpdatedLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
d := time.Now().Add(payloadAndForkchoiceUpdatedTimeout)
d := time.Now().Add(time.Duration(params.BeaconConfig().ExecutionEngineTimeoutValue) * time.Second)
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
result := &ForkchoiceUpdatedResponse{}
@@ -322,7 +320,7 @@ func (s *Service) ExecutionBlockByHash(ctx context.Context, hash common.Hash, wi
// ExecutionBlocksByHashes fetches a batch of execution engine blocks by hash by calling
// eth_getBlockByHash via JSON-RPC.
func (s *Service) ExecutionBlocksByHashes(ctx context.Context, hashes []common.Hash, withTxs bool) ([]*pb.ExecutionBlock, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExecutionBlocksByHashes")
_, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExecutionBlocksByHashes")
defer span.End()
numOfHashes := len(hashes)
elems := make([]gethRPC.BatchElem, 0, numOfHashes)
@@ -524,7 +522,7 @@ func handleRPCError(err error) error {
if isTimeout(err) {
return ErrHTTPTimeout
}
e, ok := err.(rpc.Error)
e, ok := err.(gethRPC.Error)
if !ok {
if strings.Contains(err.Error(), "401 Unauthorized") {
log.Error("HTTP authentication to your execution client is not working. Please ensure " +
@@ -563,7 +561,7 @@ func handleRPCError(err error) error {
case -32000:
errServerErrorCount.Inc()
// Only -32000 status codes are data errors in the RPC specification.
errWithData, ok := err.(rpc.DataError)
errWithData, ok := err.(gethRPC.DataError)
if !ok {
return errors.Wrapf(err, "got an unexpected error in JSON-RPC response")
}
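
Two of the hunks above replace the hard-coded payloadAndForkchoiceUpdatedTimeout constant with a value read from the beacon config. A minimal sketch of the pattern, assuming a plain timeoutSeconds parameter in place of params.BeaconConfig().ExecutionEngineTimeoutValue:

package main

import (
	"context"
	"fmt"
	"time"
)

// callWithDeadline derives a per-call deadline from a configurable timeout,
// as NewPayload and ForkchoiceUpdated now do in the diff above.
func callWithDeadline(parent context.Context, timeoutSeconds uint64) error {
	d := time.Now().Add(time.Duration(timeoutSeconds) * time.Second)
	ctx, cancel := context.WithDeadline(parent, d)
	defer cancel()

	select {
	case <-time.After(50 * time.Millisecond): // stand-in for the engine RPC call
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	fmt.Println(callWithDeadline(context.Background(), 8)) // <nil>
}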

View File

@@ -550,7 +550,7 @@ func (s *Service) processChainStartIfReady(ctx context.Context, blockHash [32]by
// savePowchainData saves all powchain related metadata to disk.
func (s *Service) savePowchainData(ctx context.Context) error {
pbState, err := statenative.ProtobufBeaconStatePhase0(s.preGenesisState.InnerStateUnsafe())
pbState, err := statenative.ProtobufBeaconStatePhase0(s.preGenesisState.ToProtoUnsafe())
if err != nil {
return err
}
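
The InnerStateUnsafe to ToProtoUnsafe renames in this and the surrounding files all follow the same shape: the native state exposes its backing protobuf message as an interface{}, and the caller type-asserts it to the concrete fork type before handing it to a helper such as ProtobufBeaconStatePhase0. A reduced sketch with hypothetical types:

package main

import (
	"errors"
	"fmt"
)

// protoExposer stands in for the native BeaconState's ToProtoUnsafe method.
type protoExposer interface {
	ToProtoUnsafe() interface{}
}

// phase0Proto is a hypothetical stand-in for ethpb.BeaconState.
type phase0Proto struct{ slot uint64 }

type nativeState struct{ inner *phase0Proto }

func (s *nativeState) ToProtoUnsafe() interface{} { return s.inner }

// asPhase0 type-asserts the exposed proto, mirroring the callers in the diff.
func asPhase0(st protoExposer) (*phase0Proto, error) {
	pb, ok := st.ToProtoUnsafe().(*phase0Proto)
	if !ok {
		return nil, errors.New("state is not a phase0 proto")
	}
	return pb, nil
}

func main() {
	pb, err := asPhase0(&nativeState{inner: &phase0Proto{slot: 7}})
	fmt.Println(pb.slot, err) // 7 <nil>
}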

View File

@@ -112,6 +112,18 @@ type RPCClient interface {
CallContext(ctx context.Context, result interface{}, method string, args ...interface{}) error
}
type RPCClientEmpty struct {
}
func (RPCClientEmpty) Close() {}
func (RPCClientEmpty) BatchCall([]gethRPC.BatchElem) error {
return errors.New("rpc client is not initialized")
}
func (RPCClientEmpty) CallContext(context.Context, interface{}, string, ...interface{}) error {
return errors.New("rpc client is not initialized")
}
// config defines a config struct for dependencies into the service.
type config struct {
depositContractAddr common.Address
@@ -169,8 +181,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
}
s := &Service{
ctx: ctx,
cancel: cancel,
ctx: ctx,
cancel: cancel,
rpcClient: RPCClientEmpty{},
cfg: &config{
beaconNodeStatsUpdater: &NopBeaconNodeStatsUpdater{},
eth1HeaderReqLimit: defaultEth1HeaderReqLimit,
@@ -778,7 +791,7 @@ func (s *Service) ensureValidPowchainData(ctx context.Context) error {
return errors.Wrap(err, "unable to retrieve eth1 data")
}
if eth1Data == nil || !eth1Data.ChainstartData.Chainstarted || !validateDepositContainers(eth1Data.DepositContainers) {
pbState, err := native.ProtobufBeaconStatePhase0(s.preGenesisState.InnerStateUnsafe())
pbState, err := native.ProtobufBeaconStatePhase0(s.preGenesisState.ToProtoUnsafe())
if err != nil {
return err
}
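
RPCClientEmpty above is a null-object default for the RPCClient interface: the service starts with a non-nil client whose calls fail with a descriptive error, so code that runs before a real connection exists gets an error instead of a nil-pointer panic. A small sketch of the same pattern with a made-up Pinger interface:

package main

import (
	"errors"
	"fmt"
)

// Pinger is an illustrative interface standing in for RPCClient.
type Pinger interface {
	Ping() error
}

// emptyPinger is the null-object default: every call reports that the client
// has not been initialized yet, rather than panicking on a nil field.
type emptyPinger struct{}

func (emptyPinger) Ping() error { return errors.New("rpc client is not initialized") }

type service struct {
	client Pinger
}

func newService() *service {
	// Start with the empty implementation; a real client is swapped in later.
	return &service{client: emptyPinger{}}
}

func main() {
	s := newService()
	fmt.Println(s.client.Ping()) // rpc client is not initialized
}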

View File

@@ -20,6 +20,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache/depositcache"
dbutil "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/config/params"
contracts "github.com/prysmaticlabs/prysm/v3/contracts/deposit"
@@ -469,7 +470,7 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
headBlock := util.NewBeaconBlock()
headRoot, err := headBlock.Block.HashTreeRoot()
require.NoError(t, err)
stateGen := stategen.New(beaconDB)
stateGen := stategen.New(beaconDB, doublylinkedtree.New())
emptyState, err := util.NewBeaconState()
require.NoError(t, err)

View File

@@ -136,7 +136,7 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, ro
return err
}
if ph != nil {
copy(payloadHash[:], ph.BlockHash)
copy(payloadHash[:], ph.BlockHash())
}
}
jc := state.CurrentJustifiedCheckpoint()
@@ -664,3 +664,8 @@ func (f *ForkChoice) ForkChoiceDump(ctx context.Context) (*v1.ForkChoiceResponse
return resp, nil
}
// SetBalancesByRooter sets the balancesByRoot handler in forkchoice
func (f *ForkChoice) SetBalancesByRooter(handler forkchoice.BalancesByRooter) {
f.balancesByRoot = handler
}

View File

@@ -261,6 +261,9 @@ func (s *Store) tips() ([][32]byte, []types.Slot) {
func (f *ForkChoice) HighestReceivedBlockSlot() types.Slot {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
if f.store.highestReceivedNode == nil {
return 0
}
return f.store.highestReceivedNode.slot
}
@@ -268,6 +271,9 @@ func (f *ForkChoice) HighestReceivedBlockSlot() types.Slot {
func (f *ForkChoice) HighestReceivedBlockRoot() [32]byte {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
if f.store.highestReceivedNode == nil {
return [32]byte{}
}
return f.store.highestReceivedNode.root
}
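
The two guards above make HighestReceivedBlockSlot and HighestReceivedBlockRoot safe to call before any block has been inserted, returning zero values instead of dereferencing a nil highestReceivedNode. A reduced sketch of the same defensive getter, with a hypothetical tracker type:

package main

import "fmt"

type node struct {
	slot uint64
	root [32]byte
}

type tracker struct {
	highest *node // nil until the first block is recorded
}

// HighestSlot returns 0 while nothing has been recorded, mirroring the nil
// checks added in the diff above.
func (t *tracker) HighestSlot() uint64 {
	if t.highest == nil {
		return 0
	}
	return t.highest.slot
}

func main() {
	var t tracker
	fmt.Println(t.HighestSlot()) // 0, no panic
	t.highest = &node{slot: 42}
	fmt.Println(t.HighestSlot()) // 42
}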

View File

@@ -3,6 +3,7 @@ package doublylinkedtree
import (
"sync"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -10,10 +11,11 @@ import (
// ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
type ForkChoice struct {
store *Store
votes []Vote // tracks individual validator's last vote.
votesLock sync.RWMutex
balances []uint64 // tracks individual validator's last justified balances.
store *Store
votes []Vote // tracks individual validator's last vote.
votesLock sync.RWMutex
balances []uint64 // tracks individual validator's last justified balances.
balancesByRoot forkchoice.BalancesByRooter // handler to obtain balances for the state with a given root
}
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.

View File

@@ -10,6 +10,10 @@ import (
v1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
)
// BalancesByRooter is a handler to obtain the effective balances of the state
// with the given block root
type BalancesByRooter func(context.Context, [32]byte) ([]uint64, error)
// ForkChoicer represents the full fork choice interface composed of all the sub-interfaces.
type ForkChoicer interface {
HeadRetriever // to compute head.
@@ -77,4 +81,5 @@ type Setter interface {
SetGenesisTime(uint64)
SetOriginRoot([32]byte)
NewSlot(context.Context, types.Slot) error
SetBalancesByRooter(BalancesByRooter)
}
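
BalancesByRooter above is a function type: fork choice asks the caller for the justified balances of the state at a given block root instead of holding state itself, and SetBalancesByRooter injects that handler. A small sketch of the wiring, using hypothetical names rather than the real ForkChoice type:

package main

import (
	"context"
	"fmt"
)

// BalancesGetter mirrors the shape of the BalancesByRooter handler above.
type BalancesGetter func(context.Context, [32]byte) ([]uint64, error)

type forkChoice struct {
	balancesByRoot BalancesGetter
}

// SetBalancesGetter injects the handler, as SetBalancesByRooter does in the diff.
func (f *forkChoice) SetBalancesGetter(h BalancesGetter) { f.balancesByRoot = h }

func main() {
	f := &forkChoice{}
	f.SetBalancesGetter(func(_ context.Context, _ [32]byte) ([]uint64, error) {
		// A real handler would load the state for the root and return effective balances.
		return []uint64{32, 32, 32}, nil
	})
	b, _ := f.balancesByRoot(context.Background(), [32]byte{})
	fmt.Println(b) // [32 32 32]
}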

View File

@@ -1,79 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"errors.go",
"helpers.go",
"metrics.go",
"node.go",
"on_tick.go",
"optimistic_sync.go",
"proposer_boost.go",
"store.go",
"types.go",
"unrealized_justification.go",
],
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/spectest:__subpackages__",
],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"ffg_update_test.go",
"helpers_test.go",
"no_vote_test.go",
"node_test.go",
"on_tick_test.go",
"optimistic_sync_test.go",
"proposer_boost_test.go",
"store_test.go",
"unrealized_justification_test.go",
"vote_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
],
)

View File

@@ -1,7 +0,0 @@
/*
Package protoarray implements proto array fork choice as outlined:
https://github.com/protolambda/lmd-ghost#array-based-stateful-dag-proto_array
This was motivated by the original implementation by Sigma Prime here:
https://github.com/sigp/lighthouse/pull/804
*/
package protoarray

View File

@@ -1,19 +0,0 @@
package protoarray
import "errors"
var errUnknownFinalizedRoot = errors.New("unknown finalized root")
var errUnknownJustifiedRoot = errors.New("unknown justified root")
var errInvalidNodeIndex = errors.New("node index is invalid")
var errInvalidFinalizedNode = errors.New("invalid finalized block on chain")
var ErrUnknownNodeRoot = errors.New("unknown block root")
var errInvalidJustifiedIndex = errors.New("justified index is invalid")
var errInvalidBestDescendantIndex = errors.New("best descendant index is invalid")
var errInvalidParentDelta = errors.New("parent delta is invalid")
var errInvalidNodeDelta = errors.New("node delta is invalid")
var errInvalidDeltaLength = errors.New("delta length is invalid")
var errInvalidOptimisticStatus = errors.New("invalid optimistic status")
var errInvalidNilCheckpoint = errors.New("invalid nil checkpoint")
var errInvalidUnrealizedJustifiedEpoch = errors.New("invalid unrealized justified epoch")
var errInvalidUnrealizedFinalizedEpoch = errors.New("invalid unrealized finalized epoch")
var errNilBlockHeader = errors.New("invalid nil block header")

View File

@@ -1,280 +0,0 @@
package protoarray
import (
"context"
"testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
// prepareForkchoiceState prepares a beacon State with the given data to mock
// insert into forkchoice
func prepareForkchoiceState(
_ context.Context,
slot types.Slot,
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
justifiedEpoch types.Epoch,
finalizedEpoch types.Epoch,
) (state.BeaconState, [32]byte, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
executionHeader := &enginev1.ExecutionPayloadHeader{
BlockHash: payloadHash[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: justifiedEpoch,
}
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: finalizedEpoch,
}
base := &ethpb.BeaconStateBellatrix{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
BlockRoots: make([][]byte, 1),
CurrentJustifiedCheckpoint: justifiedCheckpoint,
FinalizedCheckpoint: finalizedCheckpoint,
LatestExecutionPayloadHeader: executionHeader,
LatestBlockHeader: blockHeader,
}
base.BlockRoots[0] = append(base.BlockRoots[0], blockRoot[:]...)
st, err := state_native.InitializeFromProtoBellatrix(base)
return st, blockRoot, err
}
func TestFFGUpdates_OneBranch(t *testing.T) {
balances := []uint64{1, 1}
f := setup(0, 0)
ctx := context.Background()
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().ZeroHash, r, "Incorrect head with genesis")
// Define the following tree:
// 0 <- justified: 0, finalized: 0
// |
// 1 <- justified: 0, finalized: 0
// |
// 2 <- justified: 1, finalized: 0
// |
// 3 <- justified: 2, finalized: 1
st, blkRoot, err := prepareForkchoiceState(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 2, indexToHash(2), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 3, indexToHash(3), indexToHash(2), params.BeaconConfig().ZeroHash, 2, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
// With starting justified epoch at 0, the head should be 3:
// 0 <- start
// |
// 1
// |
// 2
// |
// 3 <- head
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head for with justified epoch at 0")
// With starting justified epoch at 1, the head should be 2:
// 0
// |
// 1 <- start
// |
// 2 <- head
// |
// 3
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Root: indexToHash(1), Epoch: 1}
f.store.finalizedCheckpoint = &forkchoicetypes.Checkpoint{Root: indexToHash(0), Epoch: 0}
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// With starting justified epoch at 2, the head should be 3:
// 0
// |
// 1
// |
// 2 <- start
// |
// 3 <- head
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Root: indexToHash(3), Epoch: 2}
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head with justified epoch at 2")
}
func TestFFGUpdates_TwoBranches(t *testing.T) {
balances := []uint64{1, 1}
f := setup(0, 0)
ctx := context.Background()
r, err := f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().ZeroHash, r, "Incorrect head with genesis")
// Define the following tree:
// 0
// / \
// justified: 0, finalized: 0 -> 1 2 <- justified: 0, finalized: 0
// | |
// justified: 1, finalized: 0 -> 3 4 <- justified: 0, finalized: 0
// | |
// justified: 1, finalized: 0 -> 5 6 <- justified: 0, finalized: 0
// | |
// justified: 1, finalized: 0 -> 7 8 <- justified: 1, finalized: 0
// | |
// justified: 2, finalized: 0 -> 9 10 <- justified: 2, finalized: 0
// Left branch.
st, blkRoot, err := prepareForkchoiceState(context.Background(), 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 2, indexToHash(3), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 3, indexToHash(5), indexToHash(3), params.BeaconConfig().ZeroHash, 1, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 4, indexToHash(7), indexToHash(5), params.BeaconConfig().ZeroHash, 1, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 4, indexToHash(9), indexToHash(7), params.BeaconConfig().ZeroHash, 2, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
// Right branch.
st, blkRoot, err = prepareForkchoiceState(context.Background(), 1, indexToHash(2), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 2, indexToHash(4), indexToHash(2), params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 3, indexToHash(6), indexToHash(4), params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 4, indexToHash(8), indexToHash(6), params.BeaconConfig().ZeroHash, 1, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
st, blkRoot, err = prepareForkchoiceState(context.Background(), 4, indexToHash(10), indexToHash(8), params.BeaconConfig().ZeroHash, 2, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
// With start at 0, the head should be 10:
// 0 <-- start
// / \
// 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10 <-- head
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 0")
// Add a vote to 1:
// 0
// / \
// +1 vote -> 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10
f.ProcessAttestation(context.Background(), []uint64{0}, indexToHash(1), 0)
// With the additional vote to the left branch, the head should be 9:
// 0 <-- start
// / \
// 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// head -> 9 10
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head with justified epoch at 0")
// Add a vote to 2:
// 0
// / \
// 1 2 <- +1 vote
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10
f.ProcessAttestation(context.Background(), []uint64{1}, indexToHash(2), 0)
// With the additional vote to the right branch, the head should be 10:
// 0 <-- start
// / \
// 1 2
// | |
// 3 4
// | |
// 5 6
// | |
// 7 8
// | |
// 9 10 <-- head
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 0")
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1, Root: indexToHash(1)}
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(7), r, "Incorrect head with justified epoch at 0")
}
func setup(justifiedEpoch, finalizedEpoch types.Epoch) *ForkChoice {
f := New()
f.store.nodesIndices[params.BeaconConfig().ZeroHash] = 0
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: justifiedEpoch, Root: params.BeaconConfig().ZeroHash}
f.store.bestJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: justifiedEpoch, Root: params.BeaconConfig().ZeroHash}
f.store.finalizedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: finalizedEpoch, Root: params.BeaconConfig().ZeroHash}
f.store.nodes = append(f.store.nodes, &Node{
slot: 0,
root: params.BeaconConfig().ZeroHash,
parent: NonExistentNode,
justifiedEpoch: justifiedEpoch,
finalizedEpoch: finalizedEpoch,
bestChild: NonExistentNode,
bestDescendant: NonExistentNode,
weight: 0,
})
return f
}

View File

@@ -1,109 +0,0 @@
package protoarray
import (
"context"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
pmath "github.com/prysmaticlabs/prysm/v3/math"
"go.opencensus.io/trace"
)
// This computes validator balance delta from validator votes.
// It returns a list of deltas that represents the difference between old
// balances and new balances. This function assumes the caller holds a lock in
// Store.nodesLock and Store.votesLock
func computeDeltas(
ctx context.Context,
count int,
blockIndices map[[32]byte]uint64,
votes []Vote,
oldBalances, newBalances []uint64,
slashedIndices map[types.ValidatorIndex]bool,
) ([]int, []Vote, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.computeDeltas")
defer span.End()
deltas := make([]int, count)
for validatorIndex, vote := range votes {
// Skip if validator has been slashed
if slashedIndices[types.ValidatorIndex(validatorIndex)] {
continue
}
oldBalance := uint64(0)
newBalance := uint64(0)
// Skip if the validator has never voted for the current root and next root (i.e. the
// votes are the zero hash, aka genesis block); there's nothing to compute.
if vote.currentRoot == params.BeaconConfig().ZeroHash && vote.nextRoot == params.BeaconConfig().ZeroHash {
continue
}
// If the validator index did not exist in `oldBalance` or `newBalance` list above, the balance is just 0.
if validatorIndex < len(oldBalances) {
oldBalance = oldBalances[validatorIndex]
}
if validatorIndex < len(newBalances) {
newBalance = newBalances[validatorIndex]
}
// Perform delta only if the validator's balance or vote has changed.
if vote.currentRoot != vote.nextRoot || oldBalance != newBalance {
// Ignore the vote if it's not known in `blockIndices`,
// that means we have not seen the block before.
nextDeltaIndex, ok := blockIndices[vote.nextRoot]
if ok {
// Protection against out of bound, the `nextDeltaIndex` which defines
// the block location in the dag can not exceed the total `delta` length.
if nextDeltaIndex >= uint64(len(deltas)) {
return nil, nil, errInvalidNodeDelta
}
delta, err := pmath.Int(newBalance)
if err != nil {
return nil, nil, err
}
deltas[nextDeltaIndex] += delta
}
currentDeltaIndex, ok := blockIndices[vote.currentRoot]
if ok {
// Protection against out of bound (same as above)
if currentDeltaIndex >= uint64(len(deltas)) {
return nil, nil, errInvalidNodeDelta
}
delta, err := pmath.Int(oldBalance)
if err != nil {
return nil, nil, err
}
deltas[currentDeltaIndex] -= delta
}
}
// Rotate the validator vote.
vote.currentRoot = vote.nextRoot
votes[validatorIndex] = vote
}
return deltas, votes, nil
}
// This return a copy of the proto array node object.
func copyNode(node *Node) *Node {
if node == nil {
return &Node{}
}
return &Node{
slot: node.slot,
root: node.root,
parent: node.parent,
payloadHash: node.payloadHash,
justifiedEpoch: node.justifiedEpoch,
finalizedEpoch: node.finalizedEpoch,
weight: node.weight,
bestChild: node.bestChild,
bestDescendant: node.bestDescendant,
status: node.status,
}
}
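
Although this change set deletes the protoarray package, the rule its computeDeltas implemented is simple to illustrate: a validator that changes its vote subtracts its old balance from the node it previously supported and adds its new balance to the node it now supports. A toy example under those assumptions:

package main

import "fmt"

// A stripped-down illustration of the vote-delta idea from the deleted
// helpers.go: subtract the old balance from the previously voted node and
// add the new balance to the newly voted node.
func main() {
	deltas := make([]int, 2) // two fork choice nodes, index 0 and 1
	oldBalance, newBalance := 32, 32
	prevVote, nextVote := 0, 1 // one validator moves its vote from node 0 to node 1

	deltas[nextVote] += newBalance
	deltas[prevVote] -= oldBalance

	fmt.Println(deltas) // [-32 32]
}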

View File

@@ -1,258 +0,0 @@
package protoarray
import (
"context"
"encoding/binary"
"testing"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/crypto/hash"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestComputeDelta_ZeroHash(t *testing.T) {
validatorCount := uint64(16)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := make([]uint64, 0)
newBalances := make([]uint64, 0)
for i := uint64(0); i < validatorCount; i++ {
indices[indexToHash(i)] = i
votes = append(votes, Vote{params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0})
oldBalances = append(oldBalances, 0)
newBalances = append(newBalances, 0)
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
for _, d := range delta {
assert.Equal(t, 0, d)
}
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_AllVoteTheSame(t *testing.T) {
validatorCount := uint64(16)
balance := uint64(32)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := make([]uint64, 0)
newBalances := make([]uint64, 0)
for i := uint64(0); i < validatorCount; i++ {
indices[indexToHash(i)] = i
votes = append(votes, Vote{params.BeaconConfig().ZeroHash, indexToHash(0), 0})
oldBalances = append(oldBalances, balance)
newBalances = append(newBalances, balance)
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
for i, d := range delta {
if i == 0 {
assert.Equal(t, balance*validatorCount, uint64(d))
} else {
assert.Equal(t, 0, d)
}
}
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_DifferentVotes(t *testing.T) {
validatorCount := uint64(16)
balance := uint64(32)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := make([]uint64, 0)
newBalances := make([]uint64, 0)
for i := uint64(0); i < validatorCount; i++ {
indices[indexToHash(i)] = i
votes = append(votes, Vote{params.BeaconConfig().ZeroHash, indexToHash(i), 0})
oldBalances = append(oldBalances, balance)
newBalances = append(newBalances, balance)
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
for _, d := range delta {
assert.Equal(t, balance, uint64(d))
}
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_MovingVotes(t *testing.T) {
validatorCount := uint64(16)
balance := uint64(32)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := make([]uint64, 0)
newBalances := make([]uint64, 0)
lastIndex := uint64(len(indices) - 1)
for i := uint64(0); i < validatorCount; i++ {
indices[indexToHash(i)] = i
votes = append(votes, Vote{indexToHash(0), indexToHash(lastIndex), 0})
oldBalances = append(oldBalances, balance)
newBalances = append(newBalances, balance)
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, int(validatorCount), len(delta))
for i, d := range delta {
if i == 0 {
assert.Equal(t, -int(balance*validatorCount), d, "First root should have negative delta")
} else if i == int(lastIndex) {
assert.Equal(t, int(balance*validatorCount), d, "Last root should have positive delta")
} else {
assert.Equal(t, 0, d)
}
}
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_MoveOutOfTree(t *testing.T) {
balance := uint64(32)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := []uint64{balance, balance}
newBalances := []uint64{balance, balance}
indices[indexToHash(1)] = 0
votes = append(votes,
Vote{indexToHash(1), params.BeaconConfig().ZeroHash, 0},
Vote{indexToHash(1), [32]byte{'A'}, 0})
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 1, len(delta))
assert.Equal(t, 0-2*int(balance), delta[0])
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_ChangingBalances(t *testing.T) {
oldBalance := uint64(32)
newBalance := oldBalance * 2
validatorCount := uint64(16)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := make([]uint64, 0)
newBalances := make([]uint64, 0)
indices[indexToHash(1)] = 0
for i := uint64(0); i < validatorCount; i++ {
indices[indexToHash(i)] = i
votes = append(votes, Vote{indexToHash(0), indexToHash(1), 0})
oldBalances = append(oldBalances, oldBalance)
newBalances = append(newBalances, newBalance)
}
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 16, len(delta))
for i, d := range delta {
if i == 0 {
assert.Equal(t, -int(oldBalance*validatorCount), d, "First root should have negative delta")
} else if i == 1 {
assert.Equal(t, int(newBalance*validatorCount), d, "Last root should have positive delta")
} else {
assert.Equal(t, 0, d)
}
}
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_ValidatorAppear(t *testing.T) {
balance := uint64(32)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := []uint64{balance}
newBalances := []uint64{balance, balance}
indices[indexToHash(1)] = 0
indices[indexToHash(2)] = 1
votes = append(votes,
Vote{indexToHash(1), indexToHash(2), 0},
Vote{indexToHash(1), indexToHash(2), 0})
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 2, len(delta))
assert.Equal(t, 0-int(balance), delta[0])
assert.Equal(t, 2*int(balance), delta[1])
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func TestComputeDelta_ValidatorDisappears(t *testing.T) {
balance := uint64(32)
indices := make(map[[32]byte]uint64)
votes := make([]Vote, 0)
oldBalances := []uint64{balance, balance}
newBalances := []uint64{balance}
indices[indexToHash(1)] = 0
indices[indexToHash(2)] = 1
votes = append(votes,
Vote{indexToHash(1), indexToHash(2), 0},
Vote{indexToHash(1), indexToHash(2), 0})
slashedIndices := make(map[types.ValidatorIndex]bool)
delta, _, err := computeDeltas(context.Background(), len(indices), indices, votes, oldBalances, newBalances, slashedIndices)
require.NoError(t, err)
assert.Equal(t, 2, len(delta))
assert.Equal(t, 0-2*int(balance), delta[0])
assert.Equal(t, int(balance), delta[1])
for _, vote := range votes {
assert.Equal(t, vote.currentRoot, vote.nextRoot, "The vote should not have changed")
}
}
func indexToHash(i uint64) [32]byte {
var b [8]byte
binary.LittleEndian.PutUint64(b[:], i)
return hash.Hash(b[:])
}

View File

@@ -1,54 +0,0 @@
package protoarray
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/sirupsen/logrus"
)
var (
log = logrus.WithField("prefix", "forkchoice-protoarray")
headSlotNumber = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "proto_array_head_slot",
Help: "The slot number of the current head.",
},
)
nodeCount = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "proto_array_node_count",
Help: "The number of nodes in the DAG array based store structure.",
},
)
headChangesCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_head_changed_count",
Help: "The number of times head changes.",
},
)
calledHeadCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_head_requested_count",
Help: "The number of times someone called head.",
},
)
processedBlockCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_block_processed_count",
Help: "The number of times a block is processed for fork choice.",
},
)
processedAttestationCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_attestation_processed_count",
Help: "The number of times an attestation is processed for fork choice.",
},
)
prunedCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_pruned_count",
Help: "The number of times pruning happened.",
},
)
)

View File

@@ -1,103 +0,0 @@
package protoarray
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestNoVote_CanFindHead(t *testing.T) {
balances := make([]uint64, 16)
f := setup(1, 1)
ctx := context.Background()
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), balances)
require.NoError(t, err)
if r != params.BeaconConfig().ZeroHash {
t.Errorf("Incorrect head with genesis")
}
// Insert block 2 into the tree and verify head is at 2:
// 0
// /
// 2 <- head
state, blkRoot, err := prepareForkchoiceState(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Insert block 1 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Insert block 3 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
// |
// 3
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(3), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head for with justified epoch at 1")
// Insert block 4 into the tree and verify head is at 4:
// 0
// / \
// 2 1
// | |
// head -> 4 3
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(4), indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head for with justified epoch at 1")
// Insert block 5 with justified epoch of 2, verify head is 5
// 0
// / \
// 2 1
// | |
// head -> 4 3
// |
// 5 <- justified epoch = 2
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(5), indexToHash(4), params.BeaconConfig().ZeroHash, 2, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(5), r, "Incorrect head for with justified epoch at 2")
// Insert block 6 with justified epoch of 2, verify head is at 6.
// 0
// / \
// 2 1
// | |
// 4 3
// |
// 5
// |
// 6 <- head
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(6), indexToHash(5), params.BeaconConfig().ZeroHash, 2, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(6), r, "Incorrect head for with justified epoch at 2")
}

View File

@@ -1,50 +0,0 @@
package protoarray
import (
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
)
// Slot of the fork choice node.
func (n *Node) Slot() types.Slot {
return n.slot
}
// Root of the fork choice node.
func (n *Node) Root() [32]byte {
return n.root
}
// Parent of the fork choice node.
func (n *Node) Parent() uint64 {
return n.parent
}
// JustifiedEpoch of the fork choice node.
func (n *Node) JustifiedEpoch() types.Epoch {
return n.justifiedEpoch
}
// FinalizedEpoch of the fork choice node.
func (n *Node) FinalizedEpoch() types.Epoch {
return n.finalizedEpoch
}
// Weight of the fork choice node.
func (n *Node) Weight() uint64 {
return n.weight
}
// BestChild of the fork choice node.
func (n *Node) BestChild() uint64 {
return n.bestChild
}
// BestDescendant of the fork choice node.
func (n *Node) BestDescendant() uint64 {
return n.bestDescendant
}
// VotedFraction is not implemented for protoarray
func (*ForkChoice) VotedFraction(_ [32]byte) (uint64, error) {
return 0, nil
}

View File

@@ -1,38 +0,0 @@
package protoarray
import (
"testing"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestNode_Getters(t *testing.T) {
slot := types.Slot(100)
root := [32]byte{'a'}
parent := uint64(10)
jEpoch := types.Epoch(20)
fEpoch := types.Epoch(30)
weight := uint64(10000)
bestChild := uint64(5)
bestDescendant := uint64(4)
n := &Node{
slot: slot,
root: root,
parent: parent,
justifiedEpoch: jEpoch,
finalizedEpoch: fEpoch,
weight: weight,
bestChild: bestChild,
bestDescendant: bestDescendant,
}
require.Equal(t, slot, n.Slot())
require.Equal(t, root, n.Root())
require.Equal(t, parent, n.Parent())
require.Equal(t, jEpoch, n.JustifiedEpoch())
require.Equal(t, fEpoch, n.FinalizedEpoch())
require.Equal(t, weight, n.Weight())
require.Equal(t, bestChild, n.BestChild())
require.Equal(t, bestDescendant, n.BestDescendant())
}

View File

@@ -1,74 +0,0 @@
package protoarray
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/config/features"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
// NewSlot mimics the implementation of `on_tick` in fork choice consensus spec.
// It resets the proposer boost root in fork choice, and it updates the store's justified checkpoint
// if a better checkpoint exists on the store's finalized checkpoint chain.
// This should only be called at the start of every slot interval.
//
// Spec pseudocode definition:
// # Reset store.proposer_boost_root if this is a new slot
// if current_slot > previous_slot:
// store.proposer_boost_root = Root()
//
// # Not a new epoch, return
// if not (current_slot > previous_slot and compute_slots_since_epoch_start(current_slot) == 0):
// return
//
// # Update store.justified_checkpoint if a better checkpoint on the store.finalized_checkpoint chain
// if store.best_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
// finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
// ancestor_at_finalized_slot = get_ancestor(store, store.best_justified_checkpoint.root, finalized_slot)
// if ancestor_at_finalized_slot == store.finalized_checkpoint.root:
// store.justified_checkpoint = store.best_justified_checkpoint
func (f *ForkChoice) NewSlot(ctx context.Context, slot types.Slot) error {
// Reset proposer boost root
if err := f.ResetBoostedProposerRoot(ctx); err != nil {
return errors.Wrap(err, "could not reset boosted proposer root in fork choice")
}
// Return if it's not a new epoch.
if !slots.IsEpochStart(slot) {
return nil
}
// Update store.justified_checkpoint if a better checkpoint on the store.finalized_checkpoint chain
f.store.checkpointsLock.RLock()
bjcp := f.store.bestJustifiedCheckpoint
jcp := f.store.justifiedCheckpoint
fcp := f.store.finalizedCheckpoint
f.store.checkpointsLock.RUnlock()
if bjcp.Epoch > jcp.Epoch {
finalizedSlot, err := slots.EpochStart(fcp.Epoch)
if err != nil {
return err
}
// We check that the best justified checkpoint is a descendant of the finalized checkpoint.
// This should always happen as forkchoice enforces that every node is a descendant of the
// finalized checkpoint. This check is here for additional security, consider removing the extra
// loop call here.
r, err := f.AncestorRoot(ctx, bjcp.Root, finalizedSlot)
if err != nil {
return err
}
if r == fcp.Root {
f.store.checkpointsLock.Lock()
f.store.prevJustifiedCheckpoint = jcp
f.store.justifiedCheckpoint = bjcp
f.store.checkpointsLock.Unlock()
}
}
if !features.Get().DisablePullTips {
f.updateUnrealizedCheckpoints()
}
return nil
}
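
The deleted NewSlot above follows the on_tick rule from the fork choice spec quoted in its comment: at an epoch boundary, best_justified replaces justified only when it is newer and its ancestor at the finalized slot is the finalized root. A condensed sketch of just that decision, with the ancestor lookup passed in as a precomputed value:

package main

import "fmt"

type checkpoint struct {
	epoch uint64
	root  [32]byte
}

// maybePromote applies the on_tick promotion rule: best justified replaces
// justified only when it is newer and its ancestor at the finalized slot is
// the finalized root.
func maybePromote(justified, bestJustified, finalized checkpoint, ancestorAtFinalizedSlot [32]byte) checkpoint {
	if bestJustified.epoch <= justified.epoch {
		return justified
	}
	if ancestorAtFinalizedSlot != finalized.root {
		return justified
	}
	return bestJustified
}

func main() {
	fin := checkpoint{epoch: 1, root: [32]byte{'a'}}
	jus := checkpoint{epoch: 2, root: [32]byte{'b'}}
	best := checkpoint{epoch: 3, root: [32]byte{'z'}}
	// The ancestor of best at the finalized slot equals the finalized root, so promote.
	fmt.Println(maybePromote(jus, best, fin, [32]byte{'a'}) == best) // true
}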

View File

@@ -1,105 +0,0 @@
package protoarray
import (
"context"
"testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestStore_NewSlot(t *testing.T) {
ctx := context.Background()
bj := [32]byte{'z'}
type args struct {
slot types.Slot
finalized *forkchoicetypes.Checkpoint
justified *forkchoicetypes.Checkpoint
bestJustified *forkchoicetypes.Checkpoint
shouldEqual bool
}
tests := []struct {
name string
args args
}{
{
name: "Not epoch boundary. No change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch + 1,
finalized: &forkchoicetypes.Checkpoint{Epoch: 1, Root: [32]byte{'a'}},
justified: &forkchoicetypes.Checkpoint{Epoch: 2, Root: [32]byte{'b'}},
bestJustified: &forkchoicetypes.Checkpoint{Epoch: 3, Root: bj},
shouldEqual: false,
},
},
{
name: "Justified higher than best justified. No change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &forkchoicetypes.Checkpoint{Epoch: 1, Root: [32]byte{'a'}},
justified: &forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{'b'}},
bestJustified: &forkchoicetypes.Checkpoint{Epoch: 2, Root: bj},
shouldEqual: false,
},
},
{
name: "Best justified not on the same chain as finalized. No change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &forkchoicetypes.Checkpoint{Epoch: 1, Root: [32]byte{'a'}},
justified: &forkchoicetypes.Checkpoint{Epoch: 2, Root: [32]byte{'b'}},
bestJustified: &forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{'d'}},
shouldEqual: false,
},
},
{
name: "Best justified on the same chain as finalized. Yes change",
args: args{
slot: params.BeaconConfig().SlotsPerEpoch,
finalized: &forkchoicetypes.Checkpoint{Epoch: 1, Root: [32]byte{'a'}},
justified: &forkchoicetypes.Checkpoint{Epoch: 2, Root: [32]byte{'b'}},
bestJustified: &forkchoicetypes.Checkpoint{Epoch: 3, Root: bj},
shouldEqual: true,
},
},
}
for _, test := range tests {
f := setup(test.args.justified.Epoch, test.args.finalized.Epoch)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot)) // genesis
state, blkRoot, err = prepareForkchoiceState(ctx, 32, [32]byte{'a'}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot)) // finalized
state, blkRoot, err = prepareForkchoiceState(ctx, 64, [32]byte{'b'}, [32]byte{'a'}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot)) // justified
state, blkRoot, err = prepareForkchoiceState(ctx, 96, bj, [32]byte{'a'}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot)) // best justified
state, blkRoot, err = prepareForkchoiceState(ctx, 97, [32]byte{'d'}, [32]byte{}, [32]byte{}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot)) // bad
require.NoError(t, f.UpdateFinalizedCheckpoint(test.args.finalized))
require.NoError(t, f.UpdateJustifiedCheckpoint(test.args.justified))
f.store.bestJustifiedCheckpoint = test.args.bestJustified
require.NoError(t, f.NewSlot(ctx, test.args.slot))
if test.args.shouldEqual {
bcp := f.BestJustifiedCheckpoint()
cp := f.JustifiedCheckpoint()
require.Equal(t, bcp.Epoch, cp.Epoch)
require.Equal(t, bcp.Root, cp.Root)
} else {
bcp := f.BestJustifiedCheckpoint()
cp := f.JustifiedCheckpoint()
epochsEqual := bcp.Epoch == cp.Epoch
rootsEqual := bcp.Root == cp.Root
require.Equal(t, false, epochsEqual && rootsEqual)
}
}
}

View File

@@ -1,180 +0,0 @@
package protoarray
import (
"context"
"github.com/prysmaticlabs/prysm/v3/config/params"
)
// IsOptimistic returns true if this node is optimistically synced
// An optimistically synced block is synced as usual, but its
// execution payload is not validated, while the EL is still syncing.
// This function returns an error if the block is not found in the fork choice
// store
func (f *ForkChoice) IsOptimistic(root [32]byte) (bool, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
index, ok := f.store.nodesIndices[root]
if !ok {
return true, ErrUnknownNodeRoot
}
node := f.store.nodes[index]
return node.status == syncing, nil
}
// SetOptimisticToValid is called with the root of a block that was returned as
// VALID by the EL.
//
// WARNING: This method returns an error if the root is not found in forkchoice
func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [32]byte) error {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
// We can only update if given root is in Fork Choice
index, ok := f.store.nodesIndices[root]
if !ok {
return ErrUnknownNodeRoot
}
for node := f.store.nodes[index]; node.status == syncing; node = f.store.nodes[index] {
if ctx.Err() != nil {
return ctx.Err()
}
node.status = valid
index = node.parent
if index == NonExistentNode {
break
}
}
return nil
}
// SetOptimisticToInvalid updates the synced_tips map when the block with the given root becomes INVALID.
// It takes three parameters: the root of the INVALID block, its parent root and the payload Hash
// of the last valid block
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, parentRoot, payloadHash [32]byte) ([][32]byte, error) {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
invalidRoots := make([][32]byte, 0)
lastValidIndex, ok := f.store.payloadIndices[payloadHash]
if !ok {
lastValidIndex = uint64(len(f.store.nodes))
}
invalidIndex, ok := f.store.nodesIndices[root]
if !ok {
invalidIndex, ok = f.store.nodesIndices[parentRoot]
if !ok {
return invalidRoots, ErrUnknownNodeRoot
}
// return early if parent is LVH
if invalidIndex == lastValidIndex {
return invalidRoots, nil
}
}
node := f.store.nodes[invalidIndex]
// Check if last valid hash is an ancestor of the passed node
firstInvalidIndex := node.parent
for ; firstInvalidIndex != NonExistentNode && firstInvalidIndex != lastValidIndex; firstInvalidIndex = node.parent {
node = f.store.nodes[firstInvalidIndex]
}
// Deal with the case that the last valid payload is in a different fork
// This means we are dealing with an EE that does not follow the spec
if node.parent != lastValidIndex {
node = f.store.nodes[invalidIndex]
// return early if invalid node was not imported
if node.root == parentRoot {
return invalidRoots, nil
}
firstInvalidIndex = invalidIndex
lastValidIndex = node.parent
if lastValidIndex == NonExistentNode {
return invalidRoots, errInvalidFinalizedNode
}
} else {
firstInvalidIndex = f.store.nodesIndices[node.root]
}
// Update the weights of the nodes subtracting the first INVALID node's weight
weight := node.weight
var validNode *Node
for index := lastValidIndex; index != NonExistentNode; index = validNode.parent {
validNode = f.store.nodes[index]
validNode.weight -= weight
}
// Find the current proposer boost (it should be set to zero if an
// INVALID block was boosted)
f.store.proposerBoostLock.RLock()
boostRoot := f.store.proposerBoostRoot
previousBoostRoot := f.store.previousProposerBoostRoot
f.store.proposerBoostLock.RUnlock()
// Remove the invalid roots from our store maps and adjust their weight
// to zero
boosted := node.root == boostRoot
previouslyBoosted := node.root == previousBoostRoot
invalidIndices := map[uint64]bool{firstInvalidIndex: true}
node.status = invalid
node.weight = 0
delete(f.store.nodesIndices, node.root)
delete(f.store.canonicalNodes, node.root)
delete(f.store.payloadIndices, node.payloadHash)
for index := firstInvalidIndex + 1; index < uint64(len(f.store.nodes)); index++ {
invalidNode := f.store.nodes[index]
if _, ok := invalidIndices[invalidNode.parent]; !ok {
continue
}
if invalidNode.status == valid {
return invalidRoots, errInvalidOptimisticStatus
}
if !boosted && invalidNode.root == boostRoot {
boosted = true
}
if !previouslyBoosted && invalidNode.root == previousBoostRoot {
previouslyBoosted = true
}
invalidNode.status = invalid
invalidIndices[index] = true
invalidNode.weight = 0
delete(f.store.nodesIndices, invalidNode.root)
delete(f.store.canonicalNodes, invalidNode.root)
delete(f.store.payloadIndices, invalidNode.payloadHash)
}
if boosted {
if err := f.ResetBoostedProposerRoot(ctx); err != nil {
return invalidRoots, err
}
}
if previouslyBoosted {
f.store.proposerBoostLock.Lock()
f.store.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
f.store.previousProposerBoostScore = 0
f.store.proposerBoostLock.Unlock()
}
for index := range invalidIndices {
invalidRoots = append(invalidRoots, f.store.nodes[index].root)
}
// Update the best child and descendant
for i := len(f.store.nodes) - 1; i >= 0; i-- {
n := f.store.nodes[i]
if n.parent != NonExistentNode {
if err := f.store.updateBestChildAndDescendant(n.parent, uint64(i)); err != nil {
return invalidRoots, err
}
}
}
return invalidRoots, nil
}
// AllTipsAreInvalid returns true if no forkchoice tip is viable for head
func (f *ForkChoice) AllTipsAreInvalid() bool {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
return f.store.allTipsAreInvalid
}
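The three methods above make up the optimistic-sync surface of this removed protoarray fork choice. A minimal caller sketch under stated assumptions follows; the verdict strings and the handlePayloadVerdict name are illustrative, and only the f.* methods come from the file above.
// handlePayloadVerdict is a hypothetical caller (not part of the removed file) that routes
// an execution-layer verdict for a block into the protoarray fork choice.
func handlePayloadVerdict(ctx context.Context, f *ForkChoice, verdict string, root, parentRoot, lastValidHash [32]byte) ([][32]byte, error) {
	switch verdict {
	case "VALID":
		// Mark the block and all of its still-optimistic ancestors as valid.
		return nil, f.SetOptimisticToValid(ctx, root)
	case "INVALID":
		// Prune the invalid subtree and return the pruned roots so callers can
		// evict them from their own caches.
		return f.SetOptimisticToInvalid(ctx, root, parentRoot, lastValidHash)
	default:
		// SYNCING / ACCEPTED: the block simply stays optimistic.
		return nil, nil
	}
}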

View File

@@ -1,554 +0,0 @@
package protoarray
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func slicesEqual(a, b [][32]byte) bool {
if len(a) != len(b) {
return false
}
mapA := make(map[[32]byte]bool, len(a))
for _, root := range a {
mapA[root] = true
}
for _, root := range b {
_, ok := mapA[root]
if !ok {
return false
}
}
return true
}
func TestOptimistic_Outside_ForkChoice(t *testing.T) {
root0 := bytesutil.ToBytes32([]byte("hello0"))
nodeA := &Node{
slot: types.Slot(100),
root: bytesutil.ToBytes32([]byte("helloA")),
bestChild: 1,
status: valid,
}
nodes := []*Node{
nodeA,
}
ni := map[[32]byte]uint64{
nodeA.root: 0,
}
s := &Store{
nodes: nodes,
nodesIndices: ni,
}
f := &ForkChoice{
store: s,
}
_, err := f.IsOptimistic(root0)
require.ErrorIs(t, ErrUnknownNodeRoot, err)
}
// This tests the algorithm to update the optimistic status
// We start with the following diagram
//
// E -- F
// /
// C -- D
// / \
// A -- B G -- H -- I
// \ \
// J -- K -- L
//
// The Chain A -- B -- C -- D -- E is VALID.
//
func TestSetOptimisticToValid(t *testing.T) {
ctx := context.Background()
tests := []struct {
root [32]byte // the root of the new VALID block
testRoot [32]byte // root of the node we will test optimistic status
wantedOptimistic bool // wanted optimistic status for tested node
wantedErr error // wanted error message
}{
{
[32]byte{'i'},
[32]byte{'i'},
false,
nil,
},
{
[32]byte{'i'},
[32]byte{'f'},
true,
nil,
},
{
[32]byte{'i'},
[32]byte{'b'},
false,
nil,
},
{
[32]byte{'i'},
[32]byte{'h'},
false,
nil,
},
{
[32]byte{'b'},
[32]byte{'b'},
false,
nil,
},
{
[32]byte{'b'},
[32]byte{'h'},
true,
nil,
},
{
[32]byte{'b'},
[32]byte{'a'},
false,
nil,
},
{
[32]byte{'k'},
[32]byte{'k'},
false,
nil,
},
{
[32]byte{'k'},
[32]byte{'l'},
true,
nil,
},
{
[32]byte{'p'},
[32]byte{},
false,
ErrUnknownNodeRoot,
},
}
for _, tc := range tests {
f := setup(1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.SetOptimisticToValid(context.Background(), [32]byte{'e'}))
optimistic, err := f.IsOptimistic([32]byte{'b'})
require.NoError(t, err)
require.Equal(t, false, optimistic)
err = f.SetOptimisticToValid(context.Background(), tc.root)
if tc.wantedErr != nil {
require.ErrorIs(t, err, tc.wantedErr)
} else {
require.NoError(t, err)
optimistic, err := f.IsOptimistic(tc.testRoot)
require.NoError(t, err)
require.Equal(t, tc.wantedOptimistic, optimistic)
}
}
}
// We test the algorithm to update a node from SYNCING to INVALID
// We start with the same diagram as above:
//
// E(2) -- F(1)
// /
// C(7) -- D(6)
// / \
// A(10) -- B(9) G(3) -- H(1) -- I(0)
// \ \
// J(1) -- K(1) -- L(0)
//
// And the chain A -- B -- C -- D -- E has been fully validated. The numbers in parentheses are
// the weights of the nodes.
//
func TestSetOptimisticToInvalid(t *testing.T) {
tests := []struct {
name string // test description
root [32]byte // the root of the new INVALID block
parentRoot [32]byte // the root of the parent block
payload [32]byte // the payload of the last valid hash
newBestChild uint64
newBestDescendant uint64
newParentWeight uint64
returnedRoots [][32]byte
}{
{
"Remove tip, parent was valid",
[32]byte{'j'},
[32]byte{'b'},
[32]byte{'B'},
3,
12,
8,
[][32]byte{{'j'}},
},
{
"Remove tip, parent was optimistic",
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'H'},
NonExistentNode,
NonExistentNode,
1,
[][32]byte{{'i'}},
},
{
"Remove tip, lvh is inner and valid",
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'D'},
6,
8,
3,
[][32]byte{{'g'}, {'h'}, {'k'}, {'i'}, {'l'}},
},
{
"Remove inner, lvh is inner and optimistic",
[32]byte{'h'},
[32]byte{'g'},
[32]byte{'G'},
10,
12,
2,
[][32]byte{{'h'}, {'i'}},
},
{
"Remove tip, lvh is inner and optimistic",
[32]byte{'l'},
[32]byte{'k'},
[32]byte{'G'},
9,
11,
2,
[][32]byte{{'k'}, {'l'}},
},
{
"Remove tip, lvh is not an ancestor",
[32]byte{'j'},
[32]byte{'b'},
[32]byte{'C'},
5,
12,
7,
[][32]byte{{'j'}},
},
{
"Remove inner, lvh is not an ancestor",
[32]byte{'g'},
[32]byte{'d'},
[32]byte{'J'},
NonExistentNode,
NonExistentNode,
1,
[][32]byte{{'g'}, {'h'}, {'k'}, {'i'}, {'l'}},
},
{
"Remove not inserted, parent was invalid",
[32]byte{'z'},
[32]byte{'j'},
[32]byte{'B'},
3,
12,
8,
[][32]byte{{'j'}},
},
{
"Remove not inserted, parent was valid",
[32]byte{'z'},
[32]byte{'j'},
[32]byte{'J'},
NonExistentNode,
NonExistentNode,
1,
[][32]byte{},
},
}
for _, tc := range tests {
ctx := context.Background()
f := setup(1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
weights := []uint64{10, 10, 9, 7, 1, 6, 2, 3, 1, 1, 1, 0, 0}
f.store.nodesLock.Lock()
for i, node := range f.store.nodes {
node.weight = weights[i]
}
f.store.nodesLock.Unlock()
require.NoError(t, f.SetOptimisticToValid(ctx, [32]byte{'e'}))
roots, err := f.SetOptimisticToInvalid(ctx, tc.root, tc.parentRoot, tc.payload)
require.NoError(t, err)
f.store.nodesLock.RLock()
_, ok := f.store.nodesIndices[tc.root]
require.Equal(t, false, ok)
lvh := f.store.nodes[f.store.payloadIndices[tc.payload]]
require.Equal(t, true, slicesEqual(tc.returnedRoots, roots))
require.Equal(t, tc.newBestChild, lvh.bestChild)
require.Equal(t, tc.newBestDescendant, lvh.bestDescendant)
require.Equal(t, tc.newParentWeight, lvh.weight)
require.Equal(t, syncing, f.store.nodes[8].status /* F */)
require.Equal(t, valid, f.store.nodes[5].status /* E */)
f.store.nodesLock.RUnlock()
}
}
func TestSetOptimisticToInvalid_InvalidRoots(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'p'}, [32]byte{'p'}, [32]byte{'B'})
require.ErrorIs(t, ErrUnknownNodeRoot, err)
}
// This is a regression test (10445)
func TestSetOptimisticToInvalid_ProposerBoost(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{'c'}
f.store.previousProposerBoostScore = 10
f.store.previousProposerBoostRoot = [32]byte{'b'}
f.store.proposerBoostLock.Unlock()
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'A'})
require.NoError(t, err)
f.store.proposerBoostLock.RLock()
require.Equal(t, uint64(0), f.store.previousProposerBoostScore)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
require.DeepEqual(t, params.BeaconConfig().ZeroHash, f.store.previousProposerBoostRoot)
f.store.proposerBoostLock.RUnlock()
}
// This is a regression test (10996)
func TestSetOptimisticToInvalid_BogusLVH(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
state, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
state, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
invalidRoots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'R'})
require.NoError(t, err)
require.Equal(t, 1, len(invalidRoots))
require.Equal(t, [32]byte{'b'}, invalidRoots[0])
}
// This is a regression test (10996)
func TestSetOptimisticToInvalid_BogusLVH_RotNotImported(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
state, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
state, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, root))
invalidRoots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'R'})
require.NoError(t, err)
require.Equal(t, 0, len(invalidRoots))
}
// Pow | Pos
//
// CA -- A -- B -- C-----D
// \ \--------------E
// \
// ----------------------F -- G
// B is INVALID
//
func TestSetOptimisticToInvalid_ForkAtMerge(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte{'r'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 101, [32]byte{'a'}, [32]byte{'r'}, [32]byte{}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 102, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 103, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 104, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 105, [32]byte{'e'}, [32]byte{'b'}, [32]byte{'E'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 106, [32]byte{'f'}, [32]byte{'r'}, [32]byte{'F'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 107, [32]byte{'g'}, [32]byte{'f'}, [32]byte{'G'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'x'}, [32]byte{'d'}, [32]byte{})
require.NoError(t, err)
require.Equal(t, 4, len(roots))
require.Equal(t, true, slicesEqual(roots, [][32]byte{{'b'}, {'c'}, {'d'}, {'e'}}))
}
// Pow | Pos
//
// CA -------- B -- C-----D
// \ \--------------E
// \
// --A -------------------------F -- G
// B is INVALID
//
func TestSetOptimisticToInvalid_ForkAtMerge_bis(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte{'r'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 101, [32]byte{'a'}, [32]byte{'r'}, [32]byte{}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 102, [32]byte{'b'}, [32]byte{}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 103, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 104, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 105, [32]byte{'e'}, [32]byte{'b'}, [32]byte{'E'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 106, [32]byte{'f'}, [32]byte{'a'}, [32]byte{'F'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 107, [32]byte{'g'}, [32]byte{'f'}, [32]byte{'G'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'d'}, [32]byte{'c'}, [32]byte{})
require.NoError(t, err)
require.Equal(t, 1, len(roots))
require.Equal(t, [32]byte{'d'}, roots[0])
}

View File

@@ -1,43 +0,0 @@
package protoarray
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/config/params"
)
// ResetBoostedProposerRoot sets the value of the proposer boosted root to zeros.
func (f *ForkChoice) ResetBoostedProposerRoot(_ context.Context) error {
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{}
f.store.proposerBoostLock.Unlock()
return nil
}
// Given a list of validator balances, we compute the proposer boost score
// that should be given to a proposer based on their committee weight, derived from
// the total active balances, the size of a committee, and a boost score constant.
// IMPORTANT: The caller MUST pass in a list of validator balances where balances > 0 refer to active
// validators while balances == 0 are for inactive validators.
func computeProposerBoostScore(validatorBalances []uint64) (score uint64, err error) {
totalActiveBalance := uint64(0)
numActive := uint64(0)
for _, balance := range validatorBalances {
// We only consider balances > 0. The input slice should be constructed
// as balance > 0 for all active validators and 0 for inactive ones.
if balance == 0 {
continue
}
totalActiveBalance += balance
numActive += 1
}
if numActive == 0 {
// Should never happen.
err = errors.New("no active validators")
return
}
committeeWeight := totalActiveBalance / uint64(params.BeaconConfig().SlotsPerEpoch)
score = (committeeWeight * params.BeaconConfig().ProposerScoreBoost) / 100
return
}
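As a worked check of the formula above (assuming the mainnet values SlotsPerEpoch = 32 and ProposerScoreBoost = 40): 64 active validators with balance 10 each give totalActiveBalance = 640, committeeWeight = 640 / 32 = 20, and score = 20 * 40 / 100 = 8, which matches TestForkChoice_computeProposerBoostScore in the test file below.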

View File

@@ -1,476 +0,0 @@
package protoarray
import (
"context"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
// Helper function to simulate the block being on time or delayed for proposer
// boost. It alters the genesisTime tracked by the store.
func driftGenesisTime(f *ForkChoice, slot types.Slot, delay uint64) {
f.SetGenesisTime(uint64(time.Now().Unix()) - uint64(slot)*params.BeaconConfig().SecondsPerSlot - delay)
}
// Simple, ex-ante attack mitigation using proposer boost.
// In a nutshell, an adversarial block proposer in slot n+1 keeps its proposal hidden.
// The honest block proposer in slot n+2 will then propose an honest block. The
// adversary can now use its committee members' votes from both slots n+1 and n+2,
// and release its withheld block of slot n+1 in an attempt to win fork choice.
// If the honest proposal is boosted at slot n+2, it will win against this attacker.
func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
ctx := context.Background()
jEpoch, fEpoch := types.Epoch(0), types.Epoch(0)
zeroHash := params.BeaconConfig().ZeroHash
balances := make([]uint64, 64) // 64 active validators.
for i := 0; i < len(balances); i++ {
balances[i] = 10
}
t.Run("back-propagates boost score to ancestors after proposer boosting", func(t *testing.T) {
f := setup(jEpoch, fEpoch)
// The head should always start at the finalized block.
headRoot, err := f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, zeroHash, headRoot, "Incorrect head with genesis")
// Insert block at slot 1 into the tree and verify head is at that block:
// 0
// |
// 1 <- HEAD
slot := types.Slot(1)
driftGenesisTime(f, slot, 0)
newRoot := indexToHash(1)
state, blkRoot, err := prepareForkchoiceState(
ctx,
slot,
newRoot,
headRoot,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{0}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 1")
// Insert block at slot 2 into the tree and verify head is at that block:
// 0
// |
// 1
// |
// 2 <- HEAD
slot = types.Slot(2)
driftGenesisTime(f, slot, 0)
newRoot = indexToHash(2)
state, blkRoot, err = prepareForkchoiceState(
ctx,
slot,
newRoot,
headRoot,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{1}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 2")
// Insert block at slot 3 into the tree and verify head is at that block:
// 0
// |
// 1
// |
// 2
// |
// 3 <- HEAD
slot = types.Slot(3)
driftGenesisTime(f, slot, 0)
newRoot = indexToHash(3)
state, blkRoot, err = prepareForkchoiceState(
ctx,
slot,
newRoot,
headRoot,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{2}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 3")
// Insert a second block at slot 4 into the tree and boost its score.
// 0
// |
// 1
// |
// 2
// / \
// 3 |
// 4 <- HEAD
slot = types.Slot(4)
driftGenesisTime(f, slot, 0)
newRoot = indexToHash(4)
state, blkRoot, err = prepareForkchoiceState(
ctx,
slot,
newRoot,
indexToHash(2),
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{3}, newRoot, fEpoch)
headRoot, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 3")
// Check the ancestor scores from the store.
require.Equal(t, 5, len(f.store.nodes))
// Expect nodes to have a boosted, back-propagated score.
// Ancestors have the added weights of their children. Genesis is a special exception at 0 weight.
require.Equal(t, f.store.nodes[0].weight, uint64(0))
// The proposer boost score with this test's parameters is 8.
// Each of the nodes received one attestation accounting for 10.
// Node D is the only one with a proposer boost still applied:
//
// (A: 48) -> (B: 38) -> (C: 10)
// \--------------->(D: 18)
//
require.Equal(t, f.store.nodes[4].weight, uint64(18))
require.Equal(t, f.store.nodes[1].weight, uint64(48))
require.Equal(t, f.store.nodes[2].weight, uint64(38))
require.Equal(t, f.store.nodes[3].weight, uint64(10))
// Regression: process attestations for C, check that it
// becomes head, we need two attestations to have C.weight = 30 > 24 = D.weight
f.ProcessAttestation(ctx, []uint64{4, 5}, indexToHash(3), fEpoch)
headRoot, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), headRoot, "Incorrect head for justified epoch at slot 4")
})
t.Run("vanilla ex ante attack", func(t *testing.T) {
f := setup(jEpoch, fEpoch)
// The head should always start at the finalized block.
r, err := f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, zeroHash, r, "Incorrect head with genesis")
// Proposer from slot 1 does not reveal their block, B, at slot 1.
// Proposer at slot 2 does reveal their block, C, and it becomes the head.
// C builds on A, as proposer at slot 1 did not reveal B.
// A
// / \
// (B?) \
// \
// C <- Slot 2 HEAD
honestBlockSlot := types.Slot(2)
driftGenesisTime(f, honestBlockSlot, 0)
honestBlock := indexToHash(2)
state, blkRoot, err := prepareForkchoiceState(
ctx,
honestBlockSlot,
honestBlock,
zeroHash,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
maliciouslyWithheldBlockSlot := types.Slot(1)
maliciouslyWithheldBlock := indexToHash(1)
state, blkRoot, err = prepareForkchoiceState(
ctx,
maliciouslyWithheldBlockSlot,
maliciouslyWithheldBlock,
zeroHash,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Ensure the head is C, the honest block.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
// The maliciously withheld block has one vote.
votes := []uint64{1}
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, fEpoch)
// The honest block has one vote.
votes = []uint64{2}
f.ProcessAttestation(ctx, votes, honestBlock, fEpoch)
// Ensure the head is STILL C, the honest block, as the honest block had proposer boost.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
})
t.Run("adversarial attestations > proposer boosting", func(t *testing.T) {
f := setup(jEpoch, fEpoch)
// The head should always start at the finalized block.
r, err := f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, zeroHash, r, "Incorrect head with genesis")
// Proposer from slot 1 does not reveal their block, B, at slot 1.
// Proposer at slot 2 does reveal their block, C, and it becomes the head.
// C builds on A, as proposer at slot 1 did not reveal B.
// A
// / \
// (B?) \
// \
// C <- Slot 2 HEAD
honestBlockSlot := types.Slot(2)
driftGenesisTime(f, honestBlockSlot, 0)
honestBlock := indexToHash(2)
state, blkRoot, err := prepareForkchoiceState(
ctx,
honestBlockSlot,
honestBlock,
zeroHash,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Ensure C is the head.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
maliciouslyWithheldBlockSlot := types.Slot(1)
maliciouslyWithheldBlock := indexToHash(1)
state, blkRoot, err = prepareForkchoiceState(
ctx,
maliciouslyWithheldBlockSlot,
maliciouslyWithheldBlock,
zeroHash,
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Ensure C is still the head after the malicious proposer reveals their block.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, honestBlock, r, "Incorrect head for justified epoch at slot 2")
// An attestation is received for B that has more voting power than C with the proposer boost,
// allowing B to then become the head if their attestation has enough adversarial votes.
votes := []uint64{1, 2}
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, fEpoch)
// Expect the head to have switched to B.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, maliciouslyWithheldBlock, r, "Expected B to become the head")
})
t.Run("boosting necessary to sandwich attack", func(t *testing.T) {
// Boosting necessary to sandwich attack.
// Objects:
// Block A - slot N
// Block B (parent A) - slot N+1
// Block C (parent A) - slot N+2
// Block D (parent B) - slot N+3
// Attestation_1 (Block C); size 1 - slot N+2 (honest)
// Steps:
// Block A received at N — A is head
// Block C received at N+2 — C is head
// Block B received at N+2 — C is head
// Attestation_1 received at N+3 — C is head
// Block D received at N+3 — D is head
f := setup(jEpoch, fEpoch)
a := zeroHash
// The head should always start at the finalized block.
r, err := f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, zeroHash, r, "Incorrect head with genesis")
cSlot := types.Slot(2)
driftGenesisTime(f, cSlot, 0)
c := indexToHash(2)
state, blkRoot, err := prepareForkchoiceState(
ctx,
cSlot,
c,
a, // parent
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Ensure C is the head.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, c, r, "Incorrect head for justified epoch at slot 2")
bSlot := types.Slot(1)
b := indexToHash(1)
state, blkRoot, err = prepareForkchoiceState(
ctx,
bSlot,
b,
a, // parent
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Ensure C is still the head.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, c, r, "Incorrect head for justified epoch at slot 2")
// An attestation for C is received at slot N+3.
votes := []uint64{1}
f.ProcessAttestation(ctx, votes, c, fEpoch)
// A block D, building on B, is received at slot N+3. It should not be able to win without boosting.
dSlot := types.Slot(3)
d := indexToHash(3)
state, blkRoot, err = prepareForkchoiceState(
ctx,
dSlot,
d,
b, // parent
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// D cannot win without a boost.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, c, r, "Expected C to remain the head")
// If the same block arrives with boosting then it becomes head:
driftGenesisTime(f, dSlot, 0)
d2 := indexToHash(30)
state, blkRoot, err = prepareForkchoiceState(
ctx,
dSlot,
d2,
b, // parent
zeroHash,
jEpoch,
fEpoch,
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
votes = []uint64{2}
f.ProcessAttestation(ctx, votes, d2, fEpoch)
// Ensure D becomes the head thanks to boosting.
r, err = f.Head(ctx, balances)
require.NoError(t, err)
assert.Equal(t, d2, r, "Expected D to become the head")
})
}
func TestForkChoice_BoostProposerRoot(t *testing.T) {
ctx := context.Background()
root := [32]byte{'A'}
zeroHash := [32]byte{}
t.Run("does not boost block from different slot", func(t *testing.T) {
f := setup(0, 0)
slot := types.Slot(0)
currentSlot := types.Slot(1)
driftGenesisTime(f, currentSlot, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, slot, root, zeroHash, zeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.Equal(t, [32]byte{}, f.store.proposerBoostRoot)
})
t.Run("does not boost untimely block from same slot", func(t *testing.T) {
f := setup(0, 0)
slot := types.Slot(1)
currentSlot := types.Slot(1)
driftGenesisTime(f, currentSlot, params.BeaconConfig().SecondsPerSlot-1)
state, blkRoot, err := prepareForkchoiceState(ctx, slot, root, zeroHash, zeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.Equal(t, [32]byte{}, f.store.proposerBoostRoot)
})
t.Run("boosts perfectly timely block from same slot", func(t *testing.T) {
f := setup(0, 0)
slot := types.Slot(1)
currentSlot := types.Slot(1)
driftGenesisTime(f, currentSlot, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, slot, root, zeroHash, zeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.Equal(t, root, f.store.proposerBoostRoot)
})
t.Run("boosts timely block from same slot", func(t *testing.T) {
f := setup(0, 0)
slot := types.Slot(1)
currentSlot := types.Slot(1)
driftGenesisTime(f, currentSlot, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, slot, root, zeroHash, zeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.Equal(t, root, f.store.proposerBoostRoot)
})
}
func TestForkChoice_computeProposerBoostScore(t *testing.T) {
t.Run("nil justified balances throws error", func(t *testing.T) {
_, err := computeProposerBoostScore(nil)
require.ErrorContains(t, "no active validators", err)
})
t.Run("normal active balances computes score", func(t *testing.T) {
validatorBalances := make([]uint64, 64) // Num validators
for i := 0; i < len(validatorBalances); i++ {
validatorBalances[i] = 10
}
// Avg balance is 10, and the number of validators is 64.
// With a committee size of num validators (64) / slots per epoch (32) == 2,
// we then have a committee weight of avg balance * committee size = 10 * 2 = 20.
// The score then becomes committee weight * PROPOSER_SCORE_BOOST / 100, which is
// 20 * 40 / 100 = 8.
score, err := computeProposerBoostScore(validatorBalances)
require.NoError(t, err)
require.Equal(t, uint64(8), score)
})
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,81 +0,0 @@
package protoarray
import (
"sync"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
)
// ForkChoice defines the overall fork choice store which includes all block nodes, validators' latest votes and balances.
type ForkChoice struct {
store *Store
votes []Vote // tracks individual validator's last vote.
votesLock sync.RWMutex
balances []uint64 // tracks individual validator's last justified balances.
}
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.
type Store struct {
pruneThreshold uint64 // do not prune tree unless threshold is reached.
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified checkpoint in store.
bestJustifiedCheckpoint *forkchoicetypes.Checkpoint // best justified checkpoint in store.
unrealizedJustifiedCheckpoint *forkchoicetypes.Checkpoint // unrealized justified checkpoint in store.
unrealizedFinalizedCheckpoint *forkchoicetypes.Checkpoint // unrealized finalized checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized checkpoint in store.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
nodes []*Node // list of block nodes, each node is a representation of one block.
nodesIndices map[[fieldparams.RootLength]byte]uint64 // the root of block node and the nodes index in the list.
canonicalNodes map[[fieldparams.RootLength]byte]bool // the canonical block nodes.
payloadIndices map[[fieldparams.RootLength]byte]uint64 // the payload hash of block node and the index in the list
slashedIndices map[types.ValidatorIndex]bool // The list of equivocating validators
originRoot [fieldparams.RootLength]byte // The genesis block root
lastHeadRoot [fieldparams.RootLength]byte // The last cached head block root
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
checkpointsLock sync.RWMutex
genesisTime uint64
highestReceivedIndex uint64 // The highest slot node's index.
receivedBlocksLastEpoch [fieldparams.SlotsPerEpoch]types.Slot // Using `highestReceivedSlot`. The slot of blocks received in the last epoch.
allTipsAreInvalid bool // tracks if all tips are not viable for head
}
// Node defines an individual block which includes its block parent, ancestors and the weight attributed to it.
// This is used as an array-based stateful DAG for efficient fork choice lookup.
type Node struct {
slot types.Slot // slot of the block converted to the node.
root [fieldparams.RootLength]byte // root of the block converted to the node.
payloadHash [fieldparams.RootLength]byte // payloadHash of the block converted to the node.
parent uint64 // parent index of this node.
justifiedEpoch types.Epoch // justifiedEpoch of this node.
unrealizedJustifiedEpoch types.Epoch // the epoch that would be justified if the block's state were advanced to the next epoch.
finalizedEpoch types.Epoch // finalizedEpoch of this node.
unrealizedFinalizedEpoch types.Epoch // the epoch that would be finalized if the block's state were advanced to the next epoch.
weight uint64 // weight of this node.
bestChild uint64 // bestChild index of this node.
bestDescendant uint64 // bestDescendant of this node.
status status // optimistic status of this node
}
// enum used as optimistic status of a node
type status uint8
const (
syncing status = iota // the node is optimistic
valid // fully validated node
invalid // invalid execution payload
)
// Vote defines an individual validator's vote.
type Vote struct {
currentRoot [fieldparams.RootLength]byte // current voting root.
nextRoot [fieldparams.RootLength]byte // next voting root.
nextEpoch types.Epoch // epoch of next voting period.
}
// NonExistentNode defines an unknown node which is used for the array based stateful DAG.
const NonExistentNode = ^uint64(0)
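To illustrate how the array-based DAG above is navigated, here is a small hypothetical helper (not part of the removed package): it resolves a block root through nodesIndices and then follows parent indices until hitting NonExistentNode.
// ancestorRoots is a hypothetical helper sketch: it walks the array-based DAG
// from the node with the given root up to the tree root via parent indices.
func ancestorRoots(s *Store, root [fieldparams.RootLength]byte) [][fieldparams.RootLength]byte {
	s.nodesLock.RLock()
	defer s.nodesLock.RUnlock()
	roots := make([][fieldparams.RootLength]byte, 0)
	idx, ok := s.nodesIndices[root]
	if !ok {
		return roots
	}
	for idx != NonExistentNode {
		node := s.nodes[idx]
		roots = append(roots, node.root)
		idx = node.parent
	}
	return roots
}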

View File

@@ -1,135 +0,0 @@
package protoarray
import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/epoch/precompute"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
func (s *Store) setUnrealizedJustifiedEpoch(root [32]byte, epoch types.Epoch) error {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
index, ok := s.nodesIndices[root]
if !ok {
return ErrUnknownNodeRoot
}
if index >= uint64(len(s.nodes)) {
return errInvalidNodeIndex
}
node := s.nodes[index]
if node == nil {
return errInvalidNodeIndex
}
if epoch < node.unrealizedJustifiedEpoch {
return errInvalidUnrealizedJustifiedEpoch
}
node.unrealizedJustifiedEpoch = epoch
return nil
}
func (s *Store) setUnrealizedFinalizedEpoch(root [32]byte, epoch types.Epoch) error {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
index, ok := s.nodesIndices[root]
if !ok {
return ErrUnknownNodeRoot
}
if index >= uint64(len(s.nodes)) {
return errInvalidNodeIndex
}
node := s.nodes[index]
if node == nil {
return errInvalidNodeIndex
}
if epoch < node.unrealizedFinalizedEpoch {
return errInvalidUnrealizedFinalizedEpoch
}
node.unrealizedFinalizedEpoch = epoch
return nil
}
// updateUnrealizedCheckpoints "realizes" the unrealized justified and finalized
// epochs stored within the nodes. It should be called at the beginning of each
// epoch.
func (f *ForkChoice) updateUnrealizedCheckpoints() {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
f.store.checkpointsLock.Lock()
defer f.store.checkpointsLock.Unlock()
for _, node := range f.store.nodes {
node.justifiedEpoch = node.unrealizedJustifiedEpoch
node.finalizedEpoch = node.unrealizedFinalizedEpoch
if node.justifiedEpoch > f.store.justifiedCheckpoint.Epoch {
if node.justifiedEpoch > f.store.bestJustifiedCheckpoint.Epoch {
f.store.bestJustifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
}
f.store.prevJustifiedCheckpoint = f.store.justifiedCheckpoint
f.store.justifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
}
if node.finalizedEpoch > f.store.finalizedCheckpoint.Epoch {
f.store.justifiedCheckpoint = f.store.unrealizedJustifiedCheckpoint
f.store.finalizedCheckpoint = f.store.unrealizedFinalizedCheckpoint
}
}
}
func (s *Store) pullTips(state state.BeaconState, node *Node, jc, fc *ethpb.Checkpoint) (*ethpb.Checkpoint, *ethpb.Checkpoint) {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
if node.parent == NonExistentNode { // Nothing to do if the parent does not exist.
return jc, fc
}
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.genesisTime))
stateSlot := state.Slot()
stateEpoch := slots.ToEpoch(stateSlot)
parent := s.nodes[node.parent]
currJustified := parent.unrealizedJustifiedEpoch == currentEpoch
prevJustified := parent.unrealizedJustifiedEpoch+1 == currentEpoch
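// tooEarlyForCurr is true when fewer than two thirds of the current epoch's slots have
// elapsed at the state's slot, i.e. SinceEpochStarts(stateSlot) < 2/3 * SlotsPerEpoch.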
tooEarlyForCurr := slots.SinceEpochStarts(stateSlot)*3 < params.BeaconConfig().SlotsPerEpoch*2
if currJustified || (stateEpoch == currentEpoch && prevJustified && tooEarlyForCurr) {
node.unrealizedJustifiedEpoch = parent.unrealizedJustifiedEpoch
node.unrealizedFinalizedEpoch = parent.unrealizedFinalizedEpoch
return jc, fc
}
_, uj, uf, err := precompute.UnrealizedCheckpoints(state)
if err != nil {
log.WithError(err).Debug("could not compute unrealized checkpoints")
uj, uf = jc, fc
}
// Update store's unrealized checkpoints.
s.checkpointsLock.Lock()
if uj.Epoch > s.unrealizedJustifiedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
}
if uf.Epoch > s.unrealizedFinalizedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
s.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uf.Epoch, Root: bytesutil.ToBytes32(uf.Root),
}
}
s.checkpointsLock.Unlock()
// Update node's checkpoints.
node.unrealizedJustifiedEpoch, node.unrealizedFinalizedEpoch = uj.Epoch, uf.Epoch
if stateEpoch < currentEpoch {
jc, fc = uj, uf
node.justifiedEpoch = uj.Epoch
node.finalizedEpoch = uf.Epoch
}
return jc, fc
}

View File

@@ -1,333 +0,0 @@
package protoarray
import (
"context"
"testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestStore_SetUnrealizedEpochs(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.store.nodesLock.RLock()
require.Equal(t, types.Epoch(1), f.store.nodes[2].unrealizedJustifiedEpoch)
require.Equal(t, types.Epoch(1), f.store.nodes[2].unrealizedFinalizedEpoch)
f.store.nodesLock.RUnlock()
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'b'}, 2))
f.store.nodesLock.RLock()
require.Equal(t, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
require.Equal(t, types.Epoch(2), f.store.nodes[2].unrealizedFinalizedEpoch)
f.store.nodesLock.RUnlock()
require.ErrorIs(t, errInvalidUnrealizedJustifiedEpoch, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 0))
require.ErrorIs(t, errInvalidUnrealizedFinalizedEpoch, f.store.setUnrealizedFinalizedEpoch([32]byte{'b'}, 0))
}
func TestStore_UpdateUnrealizedCheckpoints(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
}
//
// Epoch 2 | Epoch 3
// |
// C |
// / |
// A <-- B |
// \ |
// ---- D
//
// B is the first block that justifies A.
//
func TestStore_LongFork(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 2))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'c'}, 2))
// Add an attestation to c, it is head
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'c'}, 1)
headRoot, err := f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
// D is head even though its weight is lower.
ha := [32]byte{'a'}
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'b'}, [32]byte{'D'}, 2, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 2, Root: ha}))
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
require.Equal(t, uint64(0), f.store.nodes[4].weight)
require.Equal(t, uint64(100), f.store.nodes[3].weight)
// Update unrealized justification, c becomes head
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
}
//
//
// Epoch 1 Epoch 2 Epoch 3
// | |
// | |
// A <-- B <-- C <-- D <-- E <-- F <-- G <-- H |
// | \ |
// | --------------- I
// | |
//
// E justifies A. G justifies E.
//
func TestStore_NoDeadLock(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
// Epoch 1 blocks
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Epoch 2 Blocks
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'e'}, 1))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'f'}, 1))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'g'}, [32]byte{'f'}, [32]byte{'G'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'g'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'g'}, 1))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 2}
f.store.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
state, blkRoot, err = prepareForkchoiceState(ctx, 107, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'h'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'h'}, 1))
// Add an attestation for h
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'h'}, 1)
// Epoch 3
// Current Head is H
headRoot, err := f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
require.Equal(t, types.Epoch(0), f.JustifiedCheckpoint().Epoch)
// Insert Block I, it becomes Head
hr := [32]byte{'i'}
state, blkRoot, err = prepareForkchoiceState(ctx, 108, hr, [32]byte{'f'}, [32]byte{'I'}, 1, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
ha := [32]byte{'a'}
require.NoError(t, f.UpdateJustifiedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1, Root: ha}))
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, hr, headRoot)
require.Equal(t, types.Epoch(1), f.JustifiedCheckpoint().Epoch)
require.Equal(t, types.Epoch(0), f.FinalizedCheckpoint().Epoch)
// Realized Justified checkpoints, H becomes head
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
require.Equal(t, types.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, types.Epoch(1), f.FinalizedCheckpoint().Epoch)
}
// Epoch 1 | Epoch 2
// |
// -- D (late) --
// / |
// A <- B <- C |
// \ |
// -- -- -- E <- F <- G <- H
// |
//
// D justifies and comes late.
//
func TestStore_ForkNextEpoch(t *testing.T) {
f := setup(0, 0)
ctx := context.Background()
// Epoch 1 blocks (D does not arrive)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Epoch 2 blocks
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'c'}, [32]byte{'E'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'g'}, [32]byte{'f'}, [32]byte{'G'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 107, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Insert an attestation to H, H is head
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'h'}, 1)
headRoot, err := f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
require.Equal(t, types.Epoch(0), f.JustifiedCheckpoint().Epoch)
// D arrives late, D is head
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'d'}, 1))
f.store.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1}
f.updateUnrealizedCheckpoints()
headRoot, err = f.Head(ctx, []uint64{100})
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
require.Equal(t, types.Epoch(1), f.JustifiedCheckpoint().Epoch)
// nodes[8] = D since it's late!
require.Equal(t, uint64(0), f.store.nodes[8].weight)
require.Equal(t, uint64(100), f.store.nodes[7].weight)
}
func TestStore_PullTips_Heuristics(t *testing.T) {
ctx := context.Background()
t.Run("Current epoch is justified", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 65, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 66, 0)
st, root, err = prepareForkchoiceState(ctx, 66, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 96, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and not too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 127, 0)
st, root, err = prepareForkchoiceState(ctx, 127, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedJustifiedEpoch)
})
t.Run("Block from previous Epoch", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 94, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodes[1].unrealizedJustifiedEpoch = types.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 95, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This verifies that the pullTips heuristics did not apply, so the
// code fell through to computing a bogus unrealized justification.
require.Equal(tt, types.Epoch(1), f.store.nodes[2].unrealizedJustifiedEpoch)
})
t.Run("Previous Epoch is not justified", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 128, [32]byte{'p'}, [32]byte{}, [32]byte{}, 2, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
driftGenesisTime(f, 129, 0)
st, root, err = prepareForkchoiceState(ctx, 129, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 2, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This verifies that the pullTips heuristics did not apply, so the
// code fell through to computing a bogus unrealized justification.
require.Equal(tt, types.Epoch(2), f.store.nodes[2].unrealizedJustifiedEpoch)
})
}
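The subtests above exercise a shortcut in pullTips: when the parent already sits in a justified epoch, or the child is inserted too early in the current epoch for justification to change, the child simply inherits the parent's unrealized checkpoints rather than recomputing them. A minimal sketch of that decision under those assumptions, with hypothetical types standing in for the forkchoice node:

package main

import "fmt"

type epoch uint64

type node struct {
    unrealizedJustifiedEpoch epoch
    unrealizedFinalizedEpoch epoch
}

// inheritUnrealized applies the heuristic: if the current epoch is already
// justified, or it is still too early in the epoch for the new block to move
// justification, copy the parent's unrealized checkpoints to the child.
func inheritUnrealized(parent, child *node, currentEpochJustified, tooEarly bool) bool {
    if !currentEpochJustified && !tooEarly {
        return false // fall through to a full unrealized-checkpoint computation
    }
    child.unrealizedJustifiedEpoch = parent.unrealizedJustifiedEpoch
    child.unrealizedFinalizedEpoch = parent.unrealizedFinalizedEpoch
    return true
}

func main() {
    parent := &node{unrealizedJustifiedEpoch: 2, unrealizedFinalizedEpoch: 1}
    child := &node{}
    fmt.Println(inheritUnrealized(parent, child, true, false)) // true
    fmt.Println(child.unrealizedJustifiedEpoch)                // 2
}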


@@ -1,321 +0,0 @@
package protoarray
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestVotes_CanFindHead(t *testing.T) {
balances := []uint64{1, 1}
f := setup(1, 1)
ctx := context.Background()
// The head should always start at the finalized block.
r, err := f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().ZeroHash, r, "Incorrect head with genesis")
// Insert block 2 into the tree and verify head is at 2:
// 0
// /
// 2 <- head
state, blkRoot, err := prepareForkchoiceState(context.Background(), 0, indexToHash(2), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Insert block 1 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Add a vote to block 1 of the tree and verify head is switched to 1:
// 0
// / \
// 2 1 <- +vote, new head
f.ProcessAttestation(context.Background(), []uint64{0}, indexToHash(1), 2)
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(1), r, "Incorrect head with justified epoch at 1")
// Add a vote to block 2 of the tree and verify head is switched to 2:
// 0
// / \
// vote, new head -> 2 1
f.ProcessAttestation(context.Background(), []uint64{1}, indexToHash(2), 2)
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Insert block 3 into the tree and verify head is still at 2:
// 0
// / \
// head -> 2 1
// |
// 3
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(3), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Move validator 0's vote from 1 to 3 and verify head is still at 2:
// 0
// / \
// head -> 2 1 <- old vote
// |
// 3 <- new vote
f.ProcessAttestation(context.Background(), []uint64{0}, indexToHash(3), 3)
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
// Move validator 1's vote from 2 to 1 and verify head is switched to 3:
// 0
// / \
// old vote -> 2 1 <- new vote
// |
// 3 <- head
f.ProcessAttestation(context.Background(), []uint64{1}, indexToHash(1), 3)
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head with justified epoch at 1")
// Insert block 4 into the tree and verify head is at 4:
// 0
// / \
// 2 1
// |
// 3
// |
// 4 <- head
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(4), indexToHash(3), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(4), r, "Incorrect head with justified epoch at 1")
// Insert block 5 with justified epoch 2, it becomes head
// 0
// / \
// 2 1
// |
// 3
// |
// 4 <- head
// /
// 5 <- justified epoch = 2
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(5), indexToHash(4), params.BeaconConfig().ZeroHash, 2, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(5), r, "Incorrect head with justified epoch at 1")
// Insert block 6 with justified epoch 3: verify it's head
// 0
// / \
// 2 1
// |
// 3
// |
// 4 <- head
// / \
// 5 6 <- justified epoch = 3
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(6), indexToHash(4), params.BeaconConfig().ZeroHash, 3, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(6), r, "Incorrect head with justified epoch at 1")
// Move 2 votes to block 5:
f.ProcessAttestation(context.Background(), []uint64{0, 1}, indexToHash(5), 4)
// Insert blocks 7 and 8.
// 6 should still be the head, even though 5 has all the votes.
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 5 6 <- head
// |
// 7
// |
// 8
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(7), indexToHash(5), params.BeaconConfig().ZeroHash, 2, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(8), indexToHash(7), params.BeaconConfig().ZeroHash, 2, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(6), r, "Incorrect head with justified epoch at 1")
// Insert block 10 with justified epoch 3, it becomes head.
// Verify 10 is the head:
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 5 6
// |
// 7
// |
// 8
// |
// 10 <- head
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(10), indexToHash(8), params.BeaconConfig().ZeroHash, 3, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 3")
// Insert block 9 as a sibling of 10 and verify it's head (lexicographic order).
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// / \
// 5 6
// |
// 7
// |
// 8
// / \
// 9 10
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(9), indexToHash(8), params.BeaconConfig().ZeroHash, 3, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head with justified epoch at 3")
// Move two votes to 10 and verify it's head.
f.ProcessAttestation(context.Background(), []uint64{0, 1}, indexToHash(10), 5)
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 3")
// Add 3 more validators to the system.
balances = []uint64{1, 1, 1, 1, 1}
// The new validators voted for 9
f.ProcessAttestation(context.Background(), []uint64{2, 3, 4}, indexToHash(9), 5)
// The new head should be 9.
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head with justified epoch at 3")
// Set the balances of the last 2 validators to 0.
balances = []uint64{1, 1, 1, 0, 0}
// The head should be back to 10.
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 3")
// Set the balances back to normal.
balances = []uint64{1, 1, 1, 1, 1}
// The head should be back to 9.
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(9), r, "Incorrect head with justified epoch at 3")
// Remove the last 2 validators.
balances = []uint64{1, 1, 1}
// The head should be back to 10.
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 3")
// Verify pruning below the prune threshold does not affect head.
f.store.pruneThreshold = 1000
prevRoot := f.store.finalizedCheckpoint.Root
f.store.finalizedCheckpoint.Root = indexToHash(5)
require.NoError(t, f.store.prune(context.Background()))
assert.Equal(t, 11, len(f.store.nodes), "Incorrect nodes length after prune")
f.store.finalizedCheckpoint.Root = prevRoot
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 3")
// Verify pruning above the prune threshold does prune:
// 0
// / \
// 2 1
// |
// 3
// |
// 4
// -------pruned here ------
// 5 6
// |
// 7
// |
// 8
// / \
// 9 10
f.store.pruneThreshold = 1
f.store.finalizedCheckpoint.Root = indexToHash(5)
require.NoError(t, f.store.prune(context.Background()))
assert.Equal(t, 5, len(f.store.nodes), "Incorrect nodes length after prune")
// The prune artificially removed the justified root, so point it at block 5.
f.store.justifiedCheckpoint.Root = indexToHash(5)
f.store.finalizedCheckpoint.Root = prevRoot
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 2")
// Insert new block 11 and verify head is at 11.
// 5 6
// |
// 7
// |
// 8
// / \
// 10 9
// |
// head-> 11
state, blkRoot, err = prepareForkchoiceState(context.Background(), 0, indexToHash(11), indexToHash(10), params.BeaconConfig().ZeroHash, 3, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
r, err = f.Head(context.Background(), balances)
require.NoError(t, err)
assert.Equal(t, indexToHash(11), r, "Incorrect head with justified epoch at 3")
}


@@ -54,6 +54,7 @@ go_test(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",


@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/state"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
@@ -88,7 +89,7 @@ func setupService(t *testing.T) *Service {
}
return &Service{
config: &ValidatorMonitorConfig{
StateGen: stategen.New(beaconDB),
StateGen: stategen.New(beaconDB, doublylinkedtree.New()),
StateNotifier: chainService.StateNotifier(),
HeadFetcher: chainService,
AttestationNotifier: chainService.OperationNotifier(),


@@ -28,7 +28,6 @@ go_library(
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/protoarray:go_default_library",
"//beacon-chain/gateway:go_default_library",
"//beacon-chain/monitor:go_default_library",
"//beacon-chain/node/registration:go_default_library",
@@ -83,6 +82,7 @@ go_test(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/monitor:go_default_library",
"//cmd:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",


@@ -4,7 +4,6 @@ import (
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
fastssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v3/cmd"
"github.com/prysmaticlabs/prysm/v3/cmd/beacon-chain/flags"
@@ -97,6 +96,12 @@ func configureEth1Config(cliCtx *cli.Context) error {
return err
}
}
if cliCtx.IsSet(flags.EngineEndpointTimeoutSeconds.Name) {
c.ExecutionEngineTimeoutValue = cliCtx.Uint64(flags.EngineEndpointTimeoutSeconds.Name)
if err := params.SetActive(c); err != nil {
return err
}
}
if cliCtx.IsSet(flags.DepositContractFlag.Name) {
c.DepositContractAddress = cliCtx.String(flags.DepositContractFlag.Name)
if err := params.SetActive(c); err != nil {
@@ -158,17 +163,22 @@ func configureExecutionSetting(cliCtx *cli.Context) error {
}
if !cliCtx.IsSet(flags.SuggestedFeeRecipient.Name) {
log.Warnf("In order to receive transaction fees from proposing blocks, " +
"you must provide flag --" + flags.SuggestedFeeRecipient.Name + " with a valid ethereum address when starting your beacon node. " +
"Please see our documentation for more information on this requirement (https://docs.prylabs.network/docs/execution-node/fee-recipient).")
return nil
}
c := params.BeaconConfig().Copy()
ha := cliCtx.String(flags.SuggestedFeeRecipient.Name)
if !common.IsHexAddress(ha) {
return fmt.Errorf("%s is not a valid fee recipient address", ha)
log.Warnf("%s is not a valid fee recipient address, setting suggested-fee-recipient failed", ha)
return nil
}
mixedcaseAddress, err := common.NewMixedcaseAddressFromString(ha)
if err != nil {
return errors.Wrapf(err, "could not decode fee recipient %s", ha)
log.WithError(err).Error(fmt.Sprintf("Could not decode fee recipient %s, setting suggested-fee-recipient failed", ha))
return nil
}
checksumAddress := common.HexToAddress(ha)
if !mixedcaseAddress.ValidChecksum() {
@@ -178,6 +188,8 @@ func configureExecutionSetting(cliCtx *cli.Context) error {
"to prevent spelling mistakes in your fee recipient Ethereum address", ha, checksumAddress.Hex())
}
c.DefaultFeeRecipient = checksumAddress
log.Infof("Default fee recipient is set to %s, recipient may be overwritten from validator client and persist in db."+
" Default fee recipient will be used as a fall back", checksumAddress.Hex())
return params.SetActive(c)
}

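The hunk above changes configureExecutionSetting so that an invalid --suggested-fee-recipient is logged as a warning instead of aborting startup, while the mixed-case checksum warning is kept. A minimal standalone sketch of the same validation flow using go-ethereum's common package; the function name and log wording are illustrative, not the exact Prysm code:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/common"
)

// suggestedFeeRecipient validates a fee recipient address: non-hex input is
// only warned about (ok=false), and a checksum mismatch produces a warning
// but still yields the parsed address.
func suggestedFeeRecipient(addr string) (common.Address, bool) {
    if !common.IsHexAddress(addr) {
        fmt.Printf("warning: %s is not a valid fee recipient address\n", addr)
        return common.Address{}, false
    }
    mixed, err := common.NewMixedcaseAddressFromString(addr)
    if err != nil {
        fmt.Printf("warning: could not decode fee recipient %s: %v\n", addr, err)
        return common.Address{}, false
    }
    checksummed := common.HexToAddress(addr)
    if !mixed.ValidChecksum() {
        fmt.Printf("warning: %s has an invalid checksum, did you mean %s?\n", addr, checksummed.Hex())
    }
    return checksummed, true
}

func main() {
    a, ok := suggestedFeeRecipient("0x6e35733c5af9B61374A128e6F85f553aF09ff89A")
    fmt.Println(a.Hex(), ok)
}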

@@ -90,7 +90,8 @@ func TestConfigureExecutionSetting(t *testing.T) {
require.NoError(t, set.Set(flags.SuggestedFeeRecipient.Name, "0xB"))
cliCtx := cli.NewContext(&app, set, nil)
err := configureExecutionSetting(cliCtx)
require.ErrorContains(t, "0xB is not a valid fee recipient address", err)
assert.LogsContain(t, hook, "0xB is not a valid fee recipient address")
require.NoError(t, err)
require.NoError(t, set.Set(flags.SuggestedFeeRecipient.Name, "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"))
cliCtx = cli.NewContext(&app, set, nil)


@@ -30,7 +30,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/protoarray"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/gateway"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/monitor"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/node/registration"
@@ -181,12 +180,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
}
}
if features.Get().DisableForkchoiceDoublyLinkedTree {
beacon.forkChoicer = protoarray.New()
} else {
beacon.forkChoicer = doublylinkedtree.New()
}
beacon.forkChoicer = doublylinkedtree.New()
depositAddress, err := execution.DepositContractAddress()
if err != nil {
return nil, err
@@ -207,7 +201,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
}
log.Debugln("Starting State Gen")
if err := beacon.startStateGen(ctx, bfs); err != nil {
if err := beacon.startStateGen(ctx, bfs, beacon.forkChoicer); err != nil {
return nil, err
}
@@ -486,9 +480,9 @@ func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context) error {
return nil
}
func (b *BeaconNode) startStateGen(ctx context.Context, bfs *backfill.Status) error {
func (b *BeaconNode) startStateGen(ctx context.Context, bfs *backfill.Status, fc forkchoice.ForkChoicer) error {
opts := []stategen.StateGenOption{stategen.WithBackfillStatus(bfs)}
sg := stategen.New(b.db, opts...)
sg := stategen.New(b.db, fc, opts...)
cp, err := b.db.FinalizedCheckpoint(ctx)
if err != nil {
@@ -935,10 +929,7 @@ func (b *BeaconNode) registerDeterminsticGenesisService() error {
}
func (b *BeaconNode) registerValidatorMonitorService() error {
if cmd.ValidatorMonitorIndicesFlag.Value == nil {
return nil
}
cliSlice := cmd.ValidatorMonitorIndicesFlag.Value.Value()
cliSlice := b.cliCtx.IntSlice(cmd.ValidatorMonitorIndicesFlag.Name)
if cliSlice == nil {
return nil
}

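Two related changes appear above: the protoarray fallback behind DisableForkchoiceDoublyLinkedTree is removed, so the node always constructs the doubly linked tree, and stategen.New now receives the fork choicer, which startStateGen threads through. A minimal sketch of that wiring with placeholder types, since the real constructors take far more configuration than shown here:

package main

import "fmt"

// ForkChoicer stands in for forkchoice.ForkChoicer.
type ForkChoicer interface{ Name() string }

type doublyLinkedTree struct{}

func (doublyLinkedTree) Name() string { return "doubly-linked-tree" }

// stateGen stands in for stategen.State; per the diff above, its constructor
// now also takes the fork choicer.
type stateGen struct{ fc ForkChoicer }

func newStateGen(fc ForkChoicer) *stateGen { return &stateGen{fc: fc} }

func main() {
    // No feature-flag branch anymore: the doubly linked tree is the only
    // fork choice implementation wired into the node.
    var fc ForkChoicer = doublyLinkedTree{}
    sg := newStateGen(fc)
    fmt.Println(sg.fc.Name())
}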

@@ -15,6 +15,7 @@ import (
statefeed "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/monitor"
"github.com/prysmaticlabs/prysm/v3/cmd"
"github.com/prysmaticlabs/prysm/v3/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
@@ -42,6 +43,8 @@ func TestNodeClose_OK(t *testing.T) {
set.String("p2p-encoding", "ssz", "p2p encoding scheme")
set.Bool("demo-config", true, "demo configuration")
set.String("deposit-contract", "0x0000000000000000000000000000000000000000", "deposit contract address")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
cmd.ValidatorMonitorIndicesFlag.Value = &cli.IntSlice{}
cmd.ValidatorMonitorIndicesFlag.Value.SetInt(1)
ctx := cli.NewContext(&app, set, nil)
@@ -60,6 +63,9 @@ func TestNodeStart_Ok(t *testing.T) {
tmp := fmt.Sprintf("%s/datadirtest2", t.TempDir())
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx := cli.NewContext(&app, set, nil)
node, err := New(ctx, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
@@ -83,6 +89,8 @@ func TestNodeStart_Ok_registerDeterministicGenesisService(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.Uint64(flags.InteropNumValidatorsFlag.Name, numValidators, "")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
genesisState, _, err := interop.GenerateGenesisState(context.Background(), 0, numValidators)
require.NoError(t, err, "Could not generate genesis beacon state")
for i := uint64(1); i < 2; i++ {
@@ -136,7 +144,8 @@ func TestClearDB(t *testing.T) {
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.Bool(cmd.ForceClearDB.Name, true, "force clear db")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
context := cli.NewContext(&app, set, nil)
_, err = New(context, WithExecutionChainOptions([]execution.Option{
execution.WithHttpEndpoint(endpoint),
@@ -145,3 +154,20 @@ func TestClearDB(t *testing.T) {
require.LogsContain(t, hook, "Removing database")
}
func TestMonitor_RegisteredCorrectly(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
require.NoError(t, cmd.ValidatorMonitorIndicesFlag.Apply(set))
cliCtx := cli.NewContext(&app, set, nil)
require.NoError(t, cliCtx.Set(cmd.ValidatorMonitorIndicesFlag.Name, "1,2"))
n := &BeaconNode{ctx: context.Background(), cliCtx: cliCtx, services: runtime.NewServiceRegistry()}
require.NoError(t, n.services.RegisterService(&blockchain.Service{}))
require.NoError(t, n.registerValidatorMonitorService())
var mService *monitor.Service
require.NoError(t, n.services.FetchService(&mService))
require.Equal(t, true, mService.TrackedValidators[1])
require.Equal(t, true, mService.TrackedValidators[2])
require.Equal(t, false, mService.TrackedValidators[100])
}

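The new TestMonitor_RegisteredCorrectly above feeds --monitor-indices values 1 and 2 through the CLI context and expects them to appear in the monitor's TrackedValidators set. A minimal sketch of turning such an index slice into a membership map; the helper name is illustrative:

package main

import "fmt"

// trackedValidators builds a membership set from CLI-provided validator
// indices, mirroring how the monitor decides which validators to report on.
func trackedValidators(indices []int) map[int]bool {
    m := make(map[int]bool, len(indices))
    for _, idx := range indices {
        m[idx] = true
    }
    return m
}

func main() {
    tv := trackedValidators([]int{1, 2})
    fmt.Println(tv[1], tv[2], tv[100]) // true true false
}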

@@ -227,7 +227,9 @@ func TestAggregateAndSaveForkChoiceAtts_Multiple(t *testing.T) {
sort.Slice(received, func(i, j int) bool {
return received[i].Data.Slot < received[j].Data.Slot
})
assert.DeepEqual(t, wanted, received)
for i, a := range wanted {
assert.Equal(t, true, proto.Equal(a, received[i]))
}
}
func TestSeenAttestations_PresentInCache(t *testing.T) {

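The change above swaps a DeepEqual over the attestation slices for an element-wise proto.Equal, because reflection-based comparison of protobuf messages can disagree on unexported internal fields even when the contents match. A minimal sketch of the pattern using a well-known wrapper type; the messages in the real test are Prysm attestations:

package main

import (
    "fmt"

    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
    a := wrapperspb.UInt64(5)
    b := wrapperspb.UInt64(5)
    // proto.Equal compares the messages field by field, ignoring
    // protobuf-internal bookkeeping that DeepEqual would also inspect.
    fmt.Println(proto.Equal(a, b)) // true
}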

@@ -57,6 +57,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",
"//crypto/ecdsa:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -74,21 +75,20 @@ go_library(
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_kevinms_leakybucket_go//:go_default_library",
"@com_github_kr_pretty//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//config:go_default_library",
"@com_github_libp2p_go_libp2p//core/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//core/control:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/host:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/muxer/mplex:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/protocol/identify:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_core//connmgr:go_default_library",
"@com_github_libp2p_go_libp2p_core//control:go_default_library",
"@com_github_libp2p_go_libp2p_core//crypto:go_default_library",
"@com_github_libp2p_go_libp2p_core//host:go_default_library",
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p_core//protocol:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
@@ -150,6 +150,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",
"//crypto/ecdsa:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -168,15 +169,14 @@ go_test(
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_kevinms_leakybucket_go//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//core/crypto:go_default_library",
"@com_github_libp2p_go_libp2p//core/host:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/host/blank:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/net/swarm/testing:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p_core//crypto:go_default_library",
"@com_github_libp2p_go_libp2p_core//host:go_default_library",
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",


@@ -10,8 +10,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/libp2p/go-libp2p-core/host"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers"


@@ -4,9 +4,9 @@ import (
"net"
"runtime"
"github.com/libp2p/go-libp2p-core/control"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/control"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
"github.com/sirupsen/logrus"
@@ -153,8 +153,8 @@ func configureFilter(cfg *Config) (*multiaddr.Filters, error) {
return addrFilter, nil
}
//helper function to either accept or deny all private addresses
//if a new rule for a private address is in conflict with a previous one, log a warning
// helper function to either accept or deny all private addresses
// if a new rule for a private address is in conflict with a previous one, log a warning
func privateCIDRFilter(addrFilter *multiaddr.Filters, action multiaddr.Action) (*multiaddr.Filters, error) {
for _, privCidr := range privateCIDRList {
_, ipnet, err := net.ParseCIDR(privCidr)

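The comment cleanup above belongs to privateCIDRFilter, which adds every private CIDR range to a multiaddr filter with a single accept or deny action. A small sketch of the same idea, assuming go-multiaddr's Filters API (NewFilters, AddFilter, AddrBlocked, StringCast) and using only a couple of illustrative ranges rather than the full private CIDR list:

package main

import (
    "fmt"
    "net"

    multiaddr "github.com/multiformats/go-multiaddr"
)

func main() {
    filters := multiaddr.NewFilters()
    // Deny two private ranges; the real list covers all private address space.
    for _, cidr := range []string{"10.0.0.0/8", "192.168.0.0/16"} {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            panic(err)
        }
        filters.AddFilter(*ipnet, multiaddr.ActionDeny)
    }
    addr := multiaddr.StringCast("/ip4/192.168.1.5/tcp/13000")
    fmt.Println(filters.AddrBlocked(addr)) // true: falls in a denied private range
}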

@@ -4,15 +4,16 @@ import (
"context"
"fmt"
"testing"
"time"
"github.com/kevinms/leakybucket-go"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers/scorers"
mockp2p "github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/testing"
leakybucket "github.com/prysmaticlabs/prysm/v3/container/leaky-bucket"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
@@ -26,7 +27,7 @@ func TestPeer_AtMaxLimit(t *testing.T) {
listen, err := ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, 2000))
require.NoError(t, err, "Failed to p2p listen")
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
}
s.peers = peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 0,
@@ -70,7 +71,7 @@ func TestPeer_AtMaxLimit(t *testing.T) {
func TestService_InterceptBannedIP(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 20,
ScorerParams: &scorers.Config{},
@@ -98,7 +99,7 @@ func TestService_InterceptBannedIP(t *testing.T) {
func TestService_RejectInboundPeersBeyondLimit(t *testing.T) {
limit := 20
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: limit,
ScorerParams: &scorers.Config{},
@@ -140,7 +141,7 @@ func TestPeer_BelowMaxLimit(t *testing.T) {
listen, err := ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, 2000))
require.NoError(t, err, "Failed to p2p listen")
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
}
s.peers = peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 1,
@@ -191,7 +192,7 @@ func TestPeerAllowList(t *testing.T) {
listen, err := ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, 2000))
require.NoError(t, err, "Failed to p2p listen")
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -237,7 +238,7 @@ func TestPeerDenyList(t *testing.T) {
listen, err := ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, 2000))
require.NoError(t, err, "Failed to p2p listen")
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -272,7 +273,7 @@ func TestPeerDenyList(t *testing.T) {
func TestService_InterceptAddrDial_Allow(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -292,7 +293,7 @@ func TestService_InterceptAddrDial_Allow(t *testing.T) {
func TestService_InterceptAddrDial_Public(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -340,7 +341,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
func TestService_InterceptAddrDial_Private(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -369,7 +370,7 @@ func TestService_InterceptAddrDial_Private(t *testing.T) {
func TestService_InterceptAddrDial_AllowPrivate(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -398,7 +399,7 @@ func TestService_InterceptAddrDial_AllowPrivate(t *testing.T) {
func TestService_InterceptAddrDial_DenyPublic(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),
@@ -427,7 +428,7 @@ func TestService_InterceptAddrDial_DenyPublic(t *testing.T) {
func TestService_InterceptAddrDial_AllowConflict(t *testing.T) {
s := &Service{
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
}),

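All of the call sites above now pass an explicit leak period (1*time.Second) to the vendored collector's NewCollector. A toy leaky bucket illustrating what that period controls; this is a self-contained sketch, not Prysm's container/leaky-bucket implementation:

package main

import (
    "fmt"
    "time"
)

// bucket leaks `rate` tokens every `period` and holds at most `capacity`.
type bucket struct {
    rate, capacity, level int64
    period                time.Duration
    lastLeak              time.Time
}

func newBucket(rate, capacity int64, period time.Duration, start time.Time) *bucket {
    return &bucket{rate: rate, capacity: capacity, period: period, lastLeak: start}
}

// add leaks whatever is due, then admits up to n tokens and reports how many fit.
func (b *bucket) add(n int64, now time.Time) int64 {
    periods := int64(now.Sub(b.lastLeak) / b.period)
    if periods > 0 {
        b.level -= periods * b.rate
        if b.level < 0 {
            b.level = 0
        }
        b.lastLeak = b.lastLeak.Add(time.Duration(periods) * b.period)
    }
    free := b.capacity - b.level
    if n > free {
        n = free
    }
    b.level += n
    return n
}

func main() {
    start := time.Now()
    b := newBucket(2, 4, time.Second, start) // leak 2 tokens per second, burst of 4
    fmt.Println(b.add(4, start))                     // 4: fills the bucket
    fmt.Println(b.add(2, start))                     // 0: already full
    fmt.Println(b.add(2, start.Add(2*time.Second))) // 2: tokens leaked in the meantime
}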

@@ -3,8 +3,8 @@ package p2p
import (
"context"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
"go.opencensus.io/trace"
)


@@ -9,8 +9,8 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
@@ -256,14 +256,14 @@ func (s *Service) startDiscoveryV5(
// filterPeer validates each node that we retrieve from our dht. We
// try to ascertain that the peer can be a valid protocol peer.
// Validity Conditions:
// 1) The local node is still actively looking for peers to
// connect to.
// 2) Peer has a valid IP and TCP port set in their enr.
// 3) Peer hasn't been marked as 'bad'
// 4) Peer is not currently active or connected.
// 5) Peer is ready to receive incoming connections.
// 6) Peer's fork digest in their ENR matches that of
// our localnodes.
// 1. The local node is still actively looking for peers to
// connect to.
// 2. Peer has a valid IP and TCP port set in their enr.
// 3. Peer hasn't been marked as 'bad'
// 4. Peer is not currently active or connected.
// 5. Peer is ready to receive incoming connections.
// 6. Peer's fork digest in their ENR matches that of
// our localnodes.
func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nil node entries passed in.
if node == nil {

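The reformatted comment above enumerates the conditions a discovered node must meet before it is dialed. A minimal sketch of such a predicate with placeholder booleans, since the real filterPeer derives these answers from the peer store, the node's ENR, and the local fork digest:

package main

import "fmt"

// peerInfo is a stand-in for the data filterPeer derives from an enode.Node.
type peerInfo struct {
    hasIPAndTCP     bool
    bad             bool
    activeOrDialing bool
    ready           bool
    forkDigestMatch bool
}

// filterPeer mirrors the documented conditions: still looking for peers,
// valid IP/TCP in the ENR, not marked bad, not already active or connected,
// ready for incoming connections, and a matching fork digest.
func filterPeer(lookingForPeers bool, p peerInfo) bool {
    switch {
    case !lookingForPeers,
        !p.hasIPAndTCP,
        p.bad,
        p.activeOrDialing,
        !p.ready,
        !p.forkDigestMatch:
        return false
    }
    return true
}

func main() {
    fmt.Println(filterPeer(true, peerInfo{hasIPAndTCP: true, ready: true, forkDigestMatch: true})) // true
    fmt.Println(filterPeer(true, peerInfo{bad: true}))                                             // false
}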

@@ -16,10 +16,9 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/kevinms/leakybucket-go"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/go-bitfield"
mock "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
@@ -31,6 +30,7 @@ import (
testp2p "github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/wrapper"
leakybucket "github.com/prysmaticlabs/prysm/v3/container/leaky-bucket"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
prysmNetwork "github.com/prysmaticlabs/prysm/v3/network"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
@@ -246,7 +246,7 @@ func TestInboundPeerLimit(t *testing.T) {
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{MaxPeers: 30},
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, false),
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},


@@ -115,7 +115,7 @@ func TestSszNetworkEncoder_DecodeWithMultipleFrames(t *testing.T) {
maxChunkSize := uint64(1 << 22)
encoder.MaxChunkSize = maxChunkSize
params.OverrideBeaconNetworkConfig(c)
_, err := e.EncodeWithMaxLength(buf, st.InnerStateUnsafe().(*ethpb.BeaconState))
_, err := e.EncodeWithMaxLength(buf, st.ToProtoUnsafe().(*ethpb.BeaconState))
require.NoError(t, err)
// Max snappy block size
if buf.Len() <= 76490 {


@@ -7,8 +7,8 @@ import (
"strings"
"time"
"github.com/libp2p/go-libp2p-core/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"


@@ -8,8 +8,8 @@ import (
"sync"
"time"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers/peerdata"
prysmTime "github.com/prysmaticlabs/prysm/v3/time"


@@ -6,8 +6,8 @@ import (
"net/http"
"strings"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
)


@@ -4,11 +4,11 @@ import (
"context"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p-core/connmgr"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/connmgr"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/multiformats/go-multiaddr"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers"


@@ -4,7 +4,7 @@ import (
"strconv"
"strings"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/sirupsen/logrus"
)


@@ -3,7 +3,7 @@ package p2p
import (
"strings"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
@@ -14,6 +14,7 @@ var (
"nimbus",
"prysm",
"teku",
"lodestar",
"js-libp2p",
"rust-libp2p",
}


@@ -6,7 +6,7 @@ import (
"net"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/muxer/mplex"
"github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"


@@ -11,7 +11,7 @@ import (
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p-core/crypto"
"github.com/libp2p/go-libp2p/core/crypto"
mock "github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v3/config/params"
ecdsaprysm "github.com/prysmaticlabs/prysm/v3/crypto/ecdsa"


@@ -24,8 +24,8 @@ go_library(
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
@@ -54,8 +54,8 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",


@@ -9,8 +9,8 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p_core//network:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
],
)
@@ -22,6 +22,6 @@ go_test(
":go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_libp2p_go_libp2p_core//peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
],
)


@@ -7,8 +7,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1/metadata"


@@ -4,7 +4,7 @@ import (
"context"
"testing"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"

Some files were not shown because too many files have changed in this diff Show More