Compare commits

...

35 Commits

Author SHA1 Message Date
nisdas
6a8ac948a2 Preston's Review 2024-05-28 21:40:19 +08:00
nisdas
7a9a026cef Add Flag To Configure Dials 2024-05-28 20:49:32 +08:00
nisdas
f52c29489d Slow Down Lookups 2024-05-27 17:12:25 +08:00
nisdas
3c6ea738f4 Handle backoff in Iterator 2024-05-23 19:07:19 +08:00
nisdas
4ce1261197 Fix Excessive Subnet Dials 2024-05-21 21:18:14 +08:00
Sammy Rosso
c8d6f47749 Remove EnableEIP4881 flag (#13826)
* Remove EnableEIP4881 flag

* Gaz

* Fix missing error handler

* Remove old tree and fix tests

* Gaz

* Fix build import

* Replace depositcache

* Add pendingDeposit tests

* Nishant's fix

* Fix unsafe uint64 to int

* Fix other unsafe uint64 to int

* Remove: RemovePendingDeposit

* Deprecate and remove DisableEIP4881 flag

* Check: index not greater than deposit count

* Move index check
2024-04-24 13:24:51 +00:00
Manu NALEPA
a6f134e48e Revert "zig: Update zig to recent main branch commit (#13142)" (#13908)
This reverts commit b24b60dbd8.
2024-04-24 03:56:52 +00:00
Preston Van Loon
fdbb5136d9 spectests: fail hard on missing test folders (#13913) 2024-04-24 02:49:33 +00:00
Preston Van Loon
2c66918594 Refactor beacon-chain/core/helpers tests to be black box (#13906) 2024-04-23 15:24:47 +00:00
Manu NALEPA
a0dac292ff Do not remove blobs DB in slasher. (#13881) 2024-04-23 08:09:26 +00:00
terence
75857e7177 Simplify prune invalid by reusing existing fork choice store call (#13878) 2024-04-23 03:09:16 +00:00
Preston Van Loon
0369f70b0b Electra beacon config (#13907)
* Update spectests to v1.5.0

* Add electra config

* Fix tests in beacon-chain/rpc/eth/config

* gofmt
2024-04-22 22:14:57 +00:00
Alec Thomas
276b97530f Change example.org DNS record (#13904)
DNS changed for this record, causing tests to fail

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-04-22 16:36:30 +00:00
Radosław Kapka
d5daf49a9a Use correct port for health check in Beacon API e2e evaluator (#13892) 2024-04-22 16:18:07 +00:00
james-prysm
feb16ae4aa consistent auth token for validator apis (#13747)
* wip

* fixing tests

* adding more tests especially to handle legacy

* fixing linting

* fixing deepsource issues and flags

* fixing some deepsource issues,pathing issues, and logs

* some review items

* adding additional review feedback

* updating to follow updates from https://github.com/ethereum/keymanager-APIs/pull/74

* adjusting functions to match changes in keymanagers PR

* Update validator/rpc/auth_token.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/auth_token.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/auth_token.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* review feedback

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-04-18 16:26:49 +00:00
kasey
219301339c Don't return error that can be internally handled (#13887)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-17 18:28:01 +00:00
Radosław Kapka
aec349f75a Upgrade the Beacon API e2e evaluator (#13868)
* GET

* POST

* Revert "Auxiliary commit to revert individual files from 615feb104004d6a945ededf5862ae38325fc7ec2"

This reverts commit 55cf071c684019f3d6124179154c10b2277fda49.

* comment fix

* deepsource
2024-04-15 05:56:47 +00:00
terence
5f909caedf Remove unused IsViableForCheckpoint (#13879) 2024-04-14 16:38:25 +00:00
Radosław Kapka
ba6dff3adb Return syncing status when node is optimistic (#13875) 2024-04-12 10:24:30 +00:00
Radosław Kapka
8cd05f098b Use read only validators in Beacon API (#13873) 2024-04-12 07:19:40 +00:00
Radosław Kapka
425f5387fa Handle overflow in retention period calculation (#13874) 2024-04-12 07:17:12 +00:00
Nishant Das
f2ce115ade Revert Peer Log Changes (#13872) 2024-04-12 06:49:01 +00:00
kasey
090a3e1ded Fix bug from PR 13827 (#13871)
* fix AS cache bug, tighten ro constructors

* additional coverage on AS cache filter

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-11 23:07:44 +00:00
kasey
c0acb7d352 Backfill throttling (#13855)
* add a sleep between retries as a simple throttle

* unit test

* deepsource

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-11 15:22:29 +00:00
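
The throttle referenced above is simply a pause between failed batch attempts. A minimal illustrative Go sketch of that pattern (not Prysm's actual backfill code; the delay value and names are made up):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// retryDelay is an illustrative pause between attempts, so a failing peer is
// not re-requested in a tight loop.
const retryDelay = 2 * time.Second

// withRetrySleep retries fn, sleeping between attempts, until it succeeds,
// the attempt budget runs out, or the context is cancelled.
func withRetrySleep(ctx context.Context, attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(retryDelay):
		}
	}
	return err
}

func main() {
	calls := 0
	err := withRetrySleep(context.Background(), 3, func() error {
		calls++
		if calls < 3 {
			return errors.New("batch fetch failed")
		}
		return nil
	})
	fmt.Println(calls, err) // 3 <nil>
}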
Radosław Kapka
0d6070e6fc Use retention period when fetching blobs (#13869) 2024-04-11 14:06:00 +00:00
Manu NALEPA
bd00f851f0 e2e: Expected log Running node with peerId= -> Running node with. (#13861)
Rationale:
The `FindFollowingTextInFile` helper seems to have trouble with `logrus` fields.
2024-04-09 07:51:56 +00:00
Nishant Das
1a0c07deec Extend Broadcast Window For Attestations (#13858)
* fix it

* make check better
2024-04-08 04:49:20 +00:00
kasey
04f231a400 Initsync skip local blobs (#13827)
* wip - init-sync skip available blob req

* satisfy deep source

* gaz

* don't need to sort blobs; simplify blobRequest stack

* wip debug log to watch blob skip behavior

* unit tests for new blob req generator

* refactor to reduce blob req func count

* log when WaitForSummarizer fails

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-05 19:09:43 +00:00
Manu NALEPA
be1bfcce63 P2P: Add QUIC support (#13786)
* (Unrelated) DoppelGanger: Improve message.

* `beacon-blocks-by-range`: Add `--network` option.

* `ensurePeerConnections`: Remove capital letter in error message.

* `MultiAddressBuilder{WithID}`: Refactor.

* `buildOptions`: Improve log.

* `NewService`: Bubbles up errors.

* `tcp` ==> `libp2ptcp`

* `multiAddressBuilderWithID`: Add the ability to build QUIC multiaddr

* `p2p Start`: Fix error message.

* `p2p`: Add QUIC support.

* Status: Implement `{Inbound,Outbound}Connected{TCP,QUIC}`.

* Logging: Display the number of TCP/QUIC connected peers.

* P2P: Implement `{Inbound,Outbound}ConnectedWithProtocol`.

* Hide QUIC protocol behind the `--enable-quic` feature flag.

* `e2e`: Add `--enable-quic` flag.

* Add `--enable-quic` in `devModeFlag`.

* `convertToMultiAddrs` ==> `retrieveMultiAddrsFromNode`.

* `convertToAddrInfo`: Ensure `len(infos) == 1`.
2024-04-04 12:21:35 +00:00
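
QUIC stays disabled unless the node is started with the feature flag added in this commit; assuming the standard beacon-chain entry point, enabling it would look like:

beacon-chain --enable-quic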
Manu NALEPA
8cf5d79852 Remove the Goerli/Prater support. (#13846) 2024-04-03 19:19:17 +00:00
redistay
f7912e7c20 chore: fix some comments (#13843)
Signed-off-by: redistay <wujunjing@outlook.com>
2024-04-02 22:19:15 +00:00
terence
caa8be5dd1 Beacon-api: broadcast blobs in the event of seen block (#13830)
* Beacon-api: broadcast blobs in the event of seen block

* Fix parameters

* Fix test

* Check forkchoice

* Ran go format

* Revert "Ran go format"

This reverts commit 091e77e81d6e2b9861fecc27c0bad1898033f9a3.

* James feedback

* Radek's feedback

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix bad tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-04-02 18:12:58 +00:00
cui
0c15a30a34 using slices.Index (#13836) 2024-04-02 16:30:05 +00:00
cui
7bce1c0714 using slices.IndexFunc (#13839) 2024-04-02 16:06:27 +00:00
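
Both commits above replace hand-rolled index loops with the Go 1.21+ standard-library slices helpers. For reference, with illustrative values:

package main

import (
	"fmt"
	"slices"
)

func main() {
	topics := []string{"att", "blob", "block"}
	// slices.Index returns the position of the first exact match, or -1.
	fmt.Println(slices.Index(topics, "blob")) // 1
	// slices.IndexFunc returns the first element satisfying a predicate, or -1.
	i := slices.IndexFunc(topics, func(s string) bool { return len(s) == 5 })
	fmt.Println(i) // 2
}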
Radosław Kapka
d1084cbe48 Send correct state root with finalization event (#13842) 2024-04-02 15:32:32 +00:00
171 changed files with 3393 additions and 3572 deletions


@@ -29,23 +29,7 @@ http_archive(
load("@hermetic_cc_toolchain//toolchain:defs.bzl", zig_toolchains = "toolchains")
-# Temporarily use a nightly build until 0.12.0 is released.
-# See: https://github.com/prysmaticlabs/prysm/issues/13130
-zig_toolchains(
-host_platform_sha256 = {
-"linux-aarch64": "45afb8e32adde825165f4f293fcea9ecea503f7f9ec0e9bf4435afe70e67fb70",
-"linux-x86_64": "f136c6a8a0f6adcb057d73615fbcd6f88281b3593f7008d5f7ed514ff925c02e",
-"macos-aarch64": "05d995853c05243151deff47b60bdc2674f1e794a939eaeca0f42312da031cee",
-"macos-x86_64": "721754ba5a50f31e8a1f0e1a74cace26f8246576878ac4a8591b0ee7b6db1fc1",
-"windows-x86_64": "93f5248b2ea8c5ee8175e15b1384e133edc1cd49870b3ea259062a2e04164343",
-},
-url_formats = [
-"https://ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
-"https://mirror.bazel.build/ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
-"https://prysmaticlabs.com/mirror/ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
-],
-version = "0.12.0-dev.1349+fa022d1ec",
-)
+zig_toolchains()
# Register zig sdk toolchains with support for Ubuntu 20.04 (Focal Fossa) which has an EOL date of April, 2025.
# For ubuntu glibc support, see https://launchpad.net/ubuntu/+source/glibc
@@ -243,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.4.0"
consensus_spec_version = "v1.5.0-alpha.0"
bls_test_version = "v0.1.1"
@@ -259,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "c282c0f86f23f3d2e0f71f5975769a4077e62a7e3c7382a16bd26a7e589811a0",
sha256 = "33c5547772b6d8d6f041dff7e7d26b0358c2392daed34394a3aa81147812a81c",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -275,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "4649c35aa3b8eb0cfdc81bee7c05649f90ef36bede5b0513e1f2e8baf37d6033",
sha256 = "06f286199cf2fedd4700487fb8feb0904e0ae18daaa4b3f70ea430ca9c388167",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -291,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "c5a03f724f757456ffaabd2a899992a71d2baf45ee4db65ca3518f2b7ee928c8",
sha256 = "5f2a4452b323075eba6bf950003f7d91fd04ebcbde5bd087beafb5d6f6325ad4",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -306,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "cd1c9d97baccbdde1d2454a7dceb8c6c61192a3b581eee12ffc94969f2db8453",
sha256 = "fd7e83e8cbeb3e297f2aeb93776305f7d606272c97834d8d9be673984501ed36",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -342,22 +326,6 @@ filegroup(
url = "https://github.com/eth-clients/eth2-networks/archive/934c948e69205dcf2deb87e4ae6cc140c335f94d.tar.gz",
)
-http_archive(
-name = "goerli_testnet",
-build_file_content = """
-filegroup(
-name = "configs",
-srcs = [
-"prater/config.yaml",
-],
-visibility = ["//visibility:public"],
-)
-""",
-sha256 = "43fc0f55ddff7b511713e2de07aa22846a67432df997296fb4fc09cd8ed1dcdb",
-strip_prefix = "goerli-6522ac6684693740cd4ddcc2a0662e03702aa4a1",
-url = "https://github.com/eth-clients/goerli/archive/6522ac6684693740cd4ddcc2a0662e03702aa4a1.tar.gz",
-)
http_archive(
name = "holesky_testnet",
build_file_content = """


@@ -1,11 +1,24 @@
load("@prysm//tools/go:def.bzl", "go_library")
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"constants.go",
"headers.go",
"jwt.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api",
visibility = ["//visibility:public"],
deps = [
"//crypto/rand:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["jwt_test.go"],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
)


@@ -4,4 +4,6 @@ const (
WebUrlPrefix = "/v2/validator/"
WebApiUrlPrefix = "/api/v2/validator/"
KeymanagerApiPrefix = "/eth/v1"
+AuthTokenFileName = "auth-token"
)

api/jwt.go (new file)

@@ -0,0 +1,32 @@
package api
import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
)
// GenerateRandomHexString generates a random hex string that follows the standards for jwt token
// used for beacon node -> execution client
// used for web client -> validator client
func GenerateRandomHexString() (string, error) {
secret := make([]byte, 32)
randGen := rand.NewGenerator()
n, err := randGen.Read(secret)
if err != nil {
return "", err
} else if n != 32 {
return "", errors.New("rand: unexpected length")
}
return hexutil.Encode(secret), nil
}
// ValidateAuthToken validating auth token for web
func ValidateAuthToken(token string) error {
b, err := hexutil.Decode(token)
// token should be hex-encoded and at least 256 bits
if err != nil || len(b) < 32 {
return errors.New("invalid auth token: token should be hex-encoded and at least 256 bits")
}
return nil
}

api/jwt_test.go (new file)

@@ -0,0 +1,13 @@
package api
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestGenerateRandomHexString(t *testing.T) {
token, err := GenerateRandomHexString()
require.NoError(t, err)
require.NoError(t, ValidateAuthToken(token))
}
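
Taken together, the new api helpers above generate and validate the shared auth secret. A small usage sketch, with behavior inferred from the code shown (a 32-byte secret hex-encodes to "0x" plus 64 digits, i.e. 66 characters):

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/api"
)

func main() {
	token, err := api.GenerateRandomHexString()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(token))                   // 66
	fmt.Println(api.ValidateAuthToken(token)) // <nil>
	// Tokens shorter than 256 bits, or not hex-encoded, are rejected.
	fmt.Println(api.ValidateAuthToken("0x1234")) // invalid auth token: ...
}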


@@ -137,7 +137,7 @@ go_test(
"//async/event:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",


@@ -11,7 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
-forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -399,14 +398,6 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
-// IsViableForCheckpoint returns whether the given checkpoint is a checkpoint in any
-// chain known to forkchoice
-func (s *Service) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
-s.cfg.ForkChoiceStore.RLock()
-defer s.cfg.ForkChoiceStore.RUnlock()
-return s.cfg.ForkChoiceStore.IsViableForCheckpoint(cp)
-}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {
@@ -518,13 +509,6 @@ func (s *Service) Ancestor(ctx context.Context, root []byte, slot primitives.Slo
return ar[:], nil
}
-// SetOptimisticToInvalid wraps the corresponding method in forkchoice
-func (s *Service) SetOptimisticToInvalid(ctx context.Context, root, parent, lvh [32]byte) ([][32]byte, error) {
-s.cfg.ForkChoiceStore.Lock()
-defer s.cfg.ForkChoiceStore.Unlock()
-return s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, parent, lvh)
-}
// SetGenesisTime sets the genesis time of beacon chain.
func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t


@@ -256,7 +256,7 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
// reportInvalidBlock deals with the event that an invalid block was detected by the execution layer
func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, lvh [32]byte) error {
newPayloadInvalidNodeCount.Inc()
-invalidRoots, err := s.SetOptimisticToInvalid(ctx, root, parentRoot, lvh)
+invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, parentRoot, lvh)
if err != nil {
return err
}


@@ -61,7 +61,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
indices, err := c.headNextSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
require.NoError(t, err)
-// NextSyncCommittee should be be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
+// NextSyncCommittee should be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
require.NotEqual(t, 0, len(indices))
}


@@ -170,7 +170,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// Send finalized events and finalized deposits in the background
if newFinalized {
finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
-go s.sendNewFinalizedEvent(blockCopy, postState)
+go s.sendNewFinalizedEvent(ctx, postState)
depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
go func() {
s.insertFinalizedDeposits(depCtx, finalized.Root)
@@ -443,7 +443,7 @@ func (s *Service) updateFinalizationOnBlock(ctx context.Context, preState, postS
// sendNewFinalizedEvent sends a new finalization checkpoint event over the
// event feed. It needs to be called on the background
-func (s *Service) sendNewFinalizedEvent(signed interfaces.ReadOnlySignedBeaconBlock, postState state.BeaconState) {
+func (s *Service) sendNewFinalizedEvent(ctx context.Context, postState state.BeaconState) {
isValidPayload := false
s.headLock.RLock()
if s.head != nil {
@@ -451,8 +451,17 @@ func (s *Service) sendNewFinalizedEvent(signed interfaces.ReadOnlySignedBeaconBl
}
s.headLock.RUnlock()
+blk, err := s.cfg.BeaconDB.Block(ctx, bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root))
+if err != nil {
+log.WithError(err).Error("Could not retrieve block for finalized checkpoint root. Finalized event will not be emitted")
+return
+}
+if blk == nil || blk.IsNil() || blk.Block() == nil || blk.Block().IsNil() {
+log.WithError(err).Error("Block retrieved for finalized checkpoint root is nil. Finalized event will not be emitted")
+return
+}
+stateRoot := blk.Block().StateRoot()
// Send an event regarding the new finalized checkpoint over a common event feed.
-stateRoot := signed.Block().StateRoot()
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpbv1.EventFinalizedCheckpoint{
@@ -488,7 +497,10 @@ func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySigne
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, signed)
if err != nil {
-return false, s.handleInvalidExecutionError(ctx, err, blockRoot, signed.Block().ParentRoot())
+s.cfg.ForkChoiceStore.Lock()
+err = s.handleInvalidExecutionError(ctx, err, blockRoot, signed.Block().ParentRoot())
+s.cfg.ForkChoiceStore.Unlock()
+return false, err
}
if signed.Version() < version.Capella && isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, ver, header, signed); err != nil {


@@ -8,12 +8,14 @@ import (
blockchainTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -378,3 +380,38 @@ func TestHandleBlockBLSToExecutionChanges(t *testing.T) {
require.Equal(t, false, pool.ValidatorExists(idx))
})
}
+func Test_sendNewFinalizedEvent(t *testing.T) {
+s, _ := minimalTestService(t)
+notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
+s.cfg.StateNotifier = notifier
+finalizedSt, err := util.NewBeaconState()
+require.NoError(t, err)
+finalizedStRoot, err := finalizedSt.HashTreeRoot(s.ctx)
+require.NoError(t, err)
+b := util.NewBeaconBlock()
+b.Block.StateRoot = finalizedStRoot[:]
+sbb, err := blocks.NewSignedBeaconBlock(b)
+require.NoError(t, err)
+sbbRoot, err := sbb.Block().HashTreeRoot()
+require.NoError(t, err)
+require.NoError(t, s.cfg.BeaconDB.SaveBlock(s.ctx, sbb))
+st, err := util.NewBeaconState()
+require.NoError(t, err)
+require.NoError(t, st.SetFinalizedCheckpoint(&ethpb.Checkpoint{
+Epoch: 123,
+Root: sbbRoot[:],
+}))
+s.sendNewFinalizedEvent(s.ctx, st)
+require.Equal(t, 1, len(notifier.ReceivedEvents()))
+e := notifier.ReceivedEvents()[0]
+assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
+fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
+require.Equal(t, true, ok, "event has wrong data type")
+assert.Equal(t, primitives.Epoch(123), fc.Epoch)
+assert.DeepEqual(t, sbbRoot[:], fc.Block)
+assert.DeepEqual(t, finalizedStRoot[:], fc.State)
+assert.Equal(t, false, fc.ExecutionOptimistic)
+}


@@ -9,7 +9,7 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -80,7 +80,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
attService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
require.NoError(t, err)
-depositCache, err := depositcache.New()
+depositCache, err := depositsnapshot.New()
require.NoError(t, err)
fc := doublylinkedtree.New()


@@ -8,7 +8,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/async/event"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -79,7 +79,7 @@ type testServiceRequirements struct {
attPool attestations.Pool
attSrv *attestations.Service
blsPool *blstoexec.Pool
-dc *depositcache.DepositCache
+dc *depositsnapshot.Cache
}
func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceRequirements) {
@@ -94,7 +94,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
attSrv, err := attestations.NewService(ctx, &attestations.Config{Pool: attPool})
require.NoError(t, err)
blsPool := blstoexec.NewPool()
-dc, err := depositcache.New()
+dc, err := depositsnapshot.New()
require.NoError(t, err)
req := &testServiceRequirements{
ctx: ctx,


@@ -1,50 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"deposits_cache.go",
"log.go",
"pending_deposits.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache",
visibility = [
"//beacon-chain:__subpackages__",
"//testing/spectest:__subpackages__",
],
deps = [
"//beacon-chain/cache:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//container/trie:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"deposits_cache_test.go",
"pending_deposits_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/cache:go_default_library",
"//config/params:go_default_library",
"//container/trie:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)


@@ -1,327 +0,0 @@
// Package depositcache is the source of validator deposits maintained
// in-memory by the beacon node deposits processed from the
// eth1 powchain are then stored in this cache to be accessed by
// any other service during a beacon node's runtime.
package depositcache
import (
"context"
"encoding/hex"
"math/big"
"sort"
"sync"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/container/trie"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var (
historicalDepositsCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacondb_all_deposits",
Help: "The number of total deposits in the beaconDB in-memory database",
})
)
// FinalizedDeposits stores the trie of deposits that have been included
// in the beacon state up to the latest finalized checkpoint.
type FinalizedDeposits struct {
deposits *trie.SparseMerkleTrie
merkleTrieIndex int64
}
// DepositCache stores all in-memory deposit objects. This
// stores all the deposit related data that is required by the beacon-node.
type DepositCache struct {
// Beacon chain deposits in memory.
pendingDeposits []*ethpb.DepositContainer
deposits []*ethpb.DepositContainer
finalizedDeposits FinalizedDeposits
depositsByKey map[[fieldparams.BLSPubkeyLength]byte][]*ethpb.DepositContainer
depositsLock sync.RWMutex
}
// New instantiates a new deposit cache
func New() (*DepositCache, error) {
finalizedDepositsTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
if err != nil {
return nil, err
}
// finalizedDeposits.merkleTrieIndex is initialized to -1 because it represents the index of the last trie item.
// Inserting the first item into the trie will set the value of the index to 0.
return &DepositCache{
pendingDeposits: []*ethpb.DepositContainer{},
deposits: []*ethpb.DepositContainer{},
depositsByKey: map[[fieldparams.BLSPubkeyLength]byte][]*ethpb.DepositContainer{},
finalizedDeposits: FinalizedDeposits{deposits: finalizedDepositsTrie, merkleTrieIndex: -1},
}, nil
}
// InsertDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "DepositsCache.InsertDeposit")
defer span.End()
if d == nil {
log.WithFields(logrus.Fields{
"block": blockNum,
"deposit": d,
"index": index,
"depositRoot": hex.EncodeToString(depositRoot[:]),
}).Warn("Ignoring nil deposit insertion")
return errors.New("nil deposit inserted into the cache")
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
if int(index) != len(dc.deposits) {
return errors.Errorf("wanted deposit with index %d to be inserted but received %d", len(dc.deposits), index)
}
// Keep the slice sorted on insertion in order to avoid costly sorting on retrieval.
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Index >= index })
depCtr := &ethpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, DepositRoot: depositRoot[:], Index: index}
newDeposits := append(
[]*ethpb.DepositContainer{depCtr},
dc.deposits[heightIdx:]...)
dc.deposits = append(dc.deposits[:heightIdx], newDeposits...)
// Append the deposit to our map, in the event no deposits
// exist for the pubkey , it is simply added to the map.
pubkey := bytesutil.ToBytes48(d.Data.PublicKey)
dc.depositsByKey[pubkey] = append(dc.depositsByKey[pubkey], depCtr)
historicalDepositsCount.Inc()
return nil
}
// InsertDepositContainers inserts a set of deposit containers into our deposit cache.
func (dc *DepositCache) InsertDepositContainers(ctx context.Context, ctrs []*ethpb.DepositContainer) {
_, span := trace.StartSpan(ctx, "DepositsCache.InsertDepositContainers")
defer span.End()
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
sort.SliceStable(ctrs, func(i int, j int) bool { return ctrs[i].Index < ctrs[j].Index })
dc.deposits = ctrs
for _, c := range ctrs {
// Use a new value, as the reference
// of c changes in the next iteration.
newPtr := c
pKey := bytesutil.ToBytes48(newPtr.Deposit.Data.PublicKey)
dc.depositsByKey[pKey] = append(dc.depositsByKey[pKey], newPtr)
}
historicalDepositsCount.Add(float64(len(ctrs)))
}
// InsertFinalizedDeposits inserts deposits up to eth1DepositIndex (inclusive) into the finalized deposits cache.
func (dc *DepositCache) InsertFinalizedDeposits(ctx context.Context,
eth1DepositIndex int64, _ common.Hash, _ uint64) error {
_, span := trace.StartSpan(ctx, "DepositsCache.InsertFinalizedDeposits")
defer span.End()
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
depositTrie := dc.finalizedDeposits.Deposits()
insertIndex := int(dc.finalizedDeposits.merkleTrieIndex + 1)
// Don't insert into finalized trie if there is no deposit to
// insert.
if len(dc.deposits) == 0 {
return nil
}
// In the event we have less deposits than we need to
// finalize we finalize till the index on which we do have it.
if len(dc.deposits) <= int(eth1DepositIndex) {
eth1DepositIndex = int64(len(dc.deposits)) - 1
}
// If we finalize to some lower deposit index, we
// ignore it.
if int(eth1DepositIndex) < insertIndex {
return nil
}
for _, d := range dc.deposits {
if d.Index <= dc.finalizedDeposits.merkleTrieIndex {
continue
}
if d.Index > eth1DepositIndex {
break
}
depHash, err := d.Deposit.Data.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash deposit data")
}
if err = depositTrie.Insert(depHash[:], insertIndex); err != nil {
return errors.Wrap(err, "could not insert deposit hash")
}
insertIndex++
}
tree, ok := depositTrie.(*trie.SparseMerkleTrie)
if !ok {
return errors.New("not a sparse merkle tree")
}
dc.finalizedDeposits = FinalizedDeposits{
deposits: tree,
merkleTrieIndex: eth1DepositIndex,
}
return nil
}
// AllDepositContainers returns all historical deposit containers.
func (dc *DepositCache) AllDepositContainers(ctx context.Context) []*ethpb.DepositContainer {
_, span := trace.StartSpan(ctx, "DepositsCache.AllDepositContainers")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
// Make a shallow copy of the deposits and return that. This way, the
// caller can safely iterate over the returned list of deposits without
// the possibility of new deposits showing up. If we were to return the
// list without a copy, when a new deposit is added to the cache, it
// would also be present in the returned value. This could result in a
// race condition if the list is being iterated over.
//
// It's not necessary to make a deep copy of this list because the
// deposits in the cache should never be modified. It is still possible
// for the caller to modify one of the underlying deposits and modify
// the cache, but that's not a race condition. Also, a deep copy would
// take too long and use too much memory.
deposits := make([]*ethpb.DepositContainer, len(dc.deposits))
copy(deposits, dc.deposits)
return deposits
}
// AllDeposits returns a list of historical deposits until the given block number
// (inclusive). If no block is specified then this method returns all historical deposits.
func (dc *DepositCache) AllDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit {
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
return dc.allDeposits(untilBlk)
}
func (dc *DepositCache) allDeposits(untilBlk *big.Int) []*ethpb.Deposit {
var deposits []*ethpb.Deposit
for _, ctnr := range dc.deposits {
if untilBlk == nil || untilBlk.Uint64() >= ctnr.Eth1BlockHeight {
deposits = append(deposits, ctnr.Deposit)
}
}
return deposits
}
// DepositsNumberAndRootAtHeight returns number of deposits made up to blockheight and the
// root that corresponds to the latest deposit at that blockheight.
func (dc *DepositCache) DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight *big.Int) (uint64, [32]byte) {
_, span := trace.StartSpan(ctx, "DepositsCache.DepositsNumberAndRootAtHeight")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
heightIdx := sort.Search(len(dc.deposits), func(i int) bool { return dc.deposits[i].Eth1BlockHeight > blockHeight.Uint64() })
// send the deposit root of the empty trie, if eth1follow distance is greater than the time of the earliest
// deposit.
if heightIdx == 0 {
return 0, [32]byte{}
}
return uint64(heightIdx), bytesutil.ToBytes32(dc.deposits[heightIdx-1].DepositRoot)
}
// DepositByPubkey looks through historical deposits and finds one which contains
// a certain public key within its deposit data.
func (dc *DepositCache) DepositByPubkey(ctx context.Context, pubKey []byte) (*ethpb.Deposit, *big.Int) {
_, span := trace.StartSpan(ctx, "DepositsCache.DepositByPubkey")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
var deposit *ethpb.Deposit
var blockNum *big.Int
deps, ok := dc.depositsByKey[bytesutil.ToBytes48(pubKey)]
if !ok || len(deps) == 0 {
return deposit, blockNum
}
// We always return the first deposit if a particular
// validator key has multiple deposits assigned to
// it.
deposit = deps[0].Deposit
blockNum = big.NewInt(int64(deps[0].Eth1BlockHeight))
return deposit, blockNum
}
// FinalizedDeposits returns the finalized deposits trie.
func (dc *DepositCache) FinalizedDeposits(ctx context.Context) (cache.FinalizedDeposits, error) {
_, span := trace.StartSpan(ctx, "DepositsCache.FinalizedDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
return &FinalizedDeposits{
deposits: dc.finalizedDeposits.deposits.Copy(),
merkleTrieIndex: dc.finalizedDeposits.merkleTrieIndex,
}, nil
}
// NonFinalizedDeposits returns the list of non-finalized deposits until the given block number (inclusive).
// If no block is specified then this method returns all non-finalized deposits.
func (dc *DepositCache) NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int64, untilBlk *big.Int) []*ethpb.Deposit {
_, span := trace.StartSpan(ctx, "DepositsCache.NonFinalizedDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
if dc.finalizedDeposits.Deposits() == nil {
return dc.allDeposits(untilBlk)
}
var deposits []*ethpb.Deposit
for _, d := range dc.deposits {
if (d.Index > lastFinalizedIndex) && (untilBlk == nil || untilBlk.Uint64() >= d.Eth1BlockHeight) {
deposits = append(deposits, d.Deposit)
}
}
return deposits
}
// PruneProofs removes proofs from all deposits whose index is equal or less than untilDepositIndex.
func (dc *DepositCache) PruneProofs(ctx context.Context, untilDepositIndex int64) error {
_, span := trace.StartSpan(ctx, "DepositsCache.PruneProofs")
defer span.End()
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
if untilDepositIndex >= int64(len(dc.deposits)) {
untilDepositIndex = int64(len(dc.deposits) - 1)
}
for i := untilDepositIndex; i >= 0; i-- {
// Finding a nil proof means that all proofs up to this deposit have been already pruned.
if dc.deposits[i].Deposit.Proof == nil {
break
}
dc.deposits[i].Deposit.Proof = nil
}
return nil
}
// Deposits returns the cached internal deposit tree.
func (fd *FinalizedDeposits) Deposits() cache.MerkleTree {
if fd.deposits != nil {
return fd.deposits
}
return nil
}
// MerkleTrieIndex represents the last finalized index in
// the finalized deposit container.
func (fd *FinalizedDeposits) MerkleTrieIndex() int64 {
return fd.merkleTrieIndex
}

File diff suppressed because it is too large.


@@ -1,5 +0,0 @@
package depositcache
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "depositcache")


@@ -1,151 +0,0 @@
package depositcache
import (
"context"
"math/big"
"sort"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var (
pendingDepositsCount = promauto.NewGauge(prometheus.GaugeOpts{
Name: "beacondb_pending_deposits",
Help: "The number of pending deposits in the beaconDB in-memory database",
})
)
// PendingDepositsFetcher specifically outlines a struct that can retrieve deposits
// which have not yet been included in the chain.
type PendingDepositsFetcher interface {
PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer
}
// InsertPendingDeposit into the database. If deposit or block number are nil
// then this method does nothing.
func (dc *DepositCache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {
_, span := trace.StartSpan(ctx, "DepositsCache.InsertPendingDeposit")
defer span.End()
if d == nil {
log.WithFields(logrus.Fields{
"block": blockNum,
"deposit": d,
}).Debug("Ignoring nil deposit insertion")
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
dc.pendingDeposits = append(dc.pendingDeposits,
&ethpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, Index: index, DepositRoot: depositRoot[:]})
pendingDepositsCount.Inc()
span.AddAttributes(trace.Int64Attribute("count", int64(len(dc.pendingDeposits))))
}
// PendingDeposits returns a list of deposits until the given block number
// (inclusive). If no block is specified then this method returns all pending
// deposits.
func (dc *DepositCache) PendingDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit {
ctx, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
defer span.End()
depositCntrs := dc.PendingContainers(ctx, untilBlk)
deposits := make([]*ethpb.Deposit, 0, len(depositCntrs))
for _, dep := range depositCntrs {
deposits = append(deposits, dep.Deposit)
}
return deposits
}
// PendingContainers returns a list of deposit containers until the given block number
// (inclusive).
func (dc *DepositCache) PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer {
_, span := trace.StartSpan(ctx, "DepositsCache.PendingDeposits")
defer span.End()
dc.depositsLock.RLock()
defer dc.depositsLock.RUnlock()
depositCntrs := make([]*ethpb.DepositContainer, 0, len(dc.pendingDeposits))
for _, ctnr := range dc.pendingDeposits {
if untilBlk == nil || untilBlk.Uint64() >= ctnr.Eth1BlockHeight {
depositCntrs = append(depositCntrs, ctnr)
}
}
// Sort the deposits by Merkle index.
sort.SliceStable(depositCntrs, func(i, j int) bool {
return depositCntrs[i].Index < depositCntrs[j].Index
})
span.AddAttributes(trace.Int64Attribute("count", int64(len(depositCntrs))))
return depositCntrs
}
// RemovePendingDeposit from the database. The deposit is indexed by the
// Index. This method does nothing if deposit ptr is nil.
func (dc *DepositCache) RemovePendingDeposit(ctx context.Context, d *ethpb.Deposit) {
_, span := trace.StartSpan(ctx, "DepositsCache.RemovePendingDeposit")
defer span.End()
if d == nil {
log.Debug("Ignoring nil deposit removal")
return
}
depRoot, err := hash.Proto(d)
if err != nil {
log.WithError(err).Error("Could not remove deposit")
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
idx := -1
for i, ctnr := range dc.pendingDeposits {
h, err := hash.Proto(ctnr.Deposit)
if err != nil {
log.WithError(err).Error("Could not hash deposit")
continue
}
if h == depRoot {
idx = i
break
}
}
if idx >= 0 {
dc.pendingDeposits = append(dc.pendingDeposits[:idx], dc.pendingDeposits[idx+1:]...)
pendingDepositsCount.Dec()
}
}
// PrunePendingDeposits removes any deposit which is older than the given deposit merkle tree index.
func (dc *DepositCache) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
_, span := trace.StartSpan(ctx, "DepositsCache.PrunePendingDeposits")
defer span.End()
if merkleTreeIndex == 0 {
log.Debug("Ignoring 0 deposit removal")
return
}
dc.depositsLock.Lock()
defer dc.depositsLock.Unlock()
cleanDeposits := make([]*ethpb.DepositContainer, 0, len(dc.pendingDeposits))
for _, dp := range dc.pendingDeposits {
if dp.Index >= merkleTreeIndex {
cleanDeposits = append(cleanDeposits, dp)
}
}
dc.pendingDeposits = cleanDeposits
pendingDepositsCount.Set(float64(len(dc.pendingDeposits)))
}


@@ -34,6 +34,7 @@ go_test(
name = "go_default_test",
srcs = [
"deposit_cache_test.go",
"deposit_fetcher_test.go",
"deposit_tree_snapshot_test.go",
"merkle_tree_test.go",
"spec_test.go",


@@ -262,6 +262,12 @@ func toFinalizedDepositsContainer(deposits *DepositTree, index int64) finalizedD
}
}
+// PendingDepositsFetcher specifically outlines a struct that can retrieve deposits
+// which have not yet been included in the chain.
+type PendingDepositsFetcher interface {
+PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer
+}
// PendingDeposits returns a list of deposits until the given block number
// (inclusive). If no block is specified then this method returns all pending
// deposits.


@@ -1,82 +1,32 @@
-package depositcache
+package depositsnapshot
import (
"context"
"math/big"
"testing"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"google.golang.org/protobuf/proto"
)
-var _ PendingDepositsFetcher = (*DepositCache)(nil)
+var _ PendingDepositsFetcher = (*Cache)(nil)
func TestInsertPendingDeposit_OK(t *testing.T) {
-dc := DepositCache{}
+dc := Cache{}
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, 111, 100, [32]byte{})
assert.Equal(t, 1, len(dc.pendingDeposits), "deposit not inserted")
}
func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
-dc := DepositCache{}
+dc := Cache{}
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
assert.Equal(t, 0, len(dc.pendingDeposits))
}
-func TestRemovePendingDeposit_OK(t *testing.T) {
-db := DepositCache{}
-proof1 := makeDepositProof()
-proof1[0] = bytesutil.PadTo([]byte{'A'}, 32)
-proof2 := makeDepositProof()
-proof2[0] = bytesutil.PadTo([]byte{'A'}, 32)
-data := &ethpb.Deposit_Data{
-PublicKey: make([]byte, 48),
-WithdrawalCredentials: make([]byte, 32),
-Amount: 0,
-Signature: make([]byte, 96),
-}
-depToRemove := &ethpb.Deposit{Proof: proof1, Data: data}
-otherDep := &ethpb.Deposit{Proof: proof2, Data: data}
-db.pendingDeposits = []*ethpb.DepositContainer{
-{Deposit: depToRemove, Index: 1},
-{Deposit: otherDep, Index: 5},
-}
-db.RemovePendingDeposit(context.Background(), depToRemove)
-if len(db.pendingDeposits) != 1 || !proto.Equal(db.pendingDeposits[0].Deposit, otherDep) {
-t.Error("Failed to remove deposit")
-}
-}
-func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
-dc := DepositCache{}
-dc.pendingDeposits = []*ethpb.DepositContainer{{Deposit: &ethpb.Deposit{}}}
-dc.RemovePendingDeposit(context.Background(), nil /*deposit*/)
-assert.Equal(t, 1, len(dc.pendingDeposits), "deposit unexpectedly removed")
-}
-func TestPendingDeposit_RoundTrip(t *testing.T) {
-dc := DepositCache{}
-proof := makeDepositProof()
-proof[0] = bytesutil.PadTo([]byte{'A'}, 32)
-data := &ethpb.Deposit_Data{
-PublicKey: make([]byte, 48),
-WithdrawalCredentials: make([]byte, 32),
-Amount: 0,
-Signature: make([]byte, 96),
-}
-dep := &ethpb.Deposit{Proof: proof, Data: data}
-dc.InsertPendingDeposit(context.Background(), dep, 111, 100, [32]byte{})
-dc.RemovePendingDeposit(context.Background(), dep)
-assert.Equal(t, 0, len(dc.pendingDeposits), "Failed to insert & delete a pending deposit")
-}
func TestPendingDeposits_OK(t *testing.T) {
-dc := DepositCache{}
+dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}},
@@ -96,7 +46,7 @@ func TestPendingDeposits_OK(t *testing.T) {
}
func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
-dc := DepositCache{}
+dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
@@ -120,7 +70,7 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
}
func TestPrunePendingDeposits_OK(t *testing.T) {
-dc := DepositCache{}
+dc := Cache{}
dc.pendingDeposits = []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},


@@ -99,11 +99,23 @@ func (d *DepositTree) getProof(index uint64) ([32]byte, [][32]byte, error) {
if d.depositCount <= 0 {
return [32]byte{}, nil, ErrInvalidDepositCount
}
-finalizedDeposits, _ := d.tree.GetFinalized([][32]byte{})
-if finalizedDeposits != 0 {
-finalizedDeposits = finalizedDeposits - 1
+if index >= d.depositCount {
+return [32]byte{}, nil, ErrInvalidIndex
+}
-if index <= finalizedDeposits {
+finalizedDeposits, _ := d.tree.GetFinalized([][32]byte{})
+finalizedIdx := -1
+if finalizedDeposits != 0 {
+fd, err := math.Int(finalizedDeposits)
+if err != nil {
+return [32]byte{}, nil, err
+}
+finalizedIdx = fd - 1
+}
+i, err := math.Int(index)
+if err != nil {
+return [32]byte{}, nil, err
+}
+if finalizedDeposits > 0 && i <= finalizedIdx {
return [32]byte{}, nil, ErrInvalidIndex
}
leaf, proof := generateProof(d.tree, index, DepositContractDepth)
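
The rewritten guard above routes finalizedDeposits and index through checked math.Int conversions before comparing them as ints, per the "Fix unsafe uint64 to int" items in the commit message. A standalone sketch of that conversion pattern, using only the standard library (Prysm's math.Int is assumed to behave like this):

package main

import (
	"errors"
	"fmt"
	"math"
)

// safeInt converts a uint64 to int, returning an error instead of silently
// wrapping when the value does not fit (possible on 32-bit platforms).
func safeInt(u uint64) (int, error) {
	if u > uint64(math.MaxInt) {
		return 0, errors.New("uint64 value overflows int")
	}
	return int(u), nil
}

func main() {
	v, err := safeInt(42)
	fmt.Println(v, err) // 42 <nil>
	_, err = safeInt(math.MaxUint64)
	fmt.Println(err) // uint64 value overflows int
}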


@@ -50,6 +50,8 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",


@@ -140,7 +140,7 @@ func BeaconCommittee(
}
count := committeesPerSlot * uint64(params.BeaconConfig().SlotsPerEpoch)
-return computeCommittee(validatorIndices, seed, indexOffset, count)
+return ComputeCommittee(validatorIndices, seed, indexOffset, count)
}
// CommitteeAssignmentContainer represents a committee list, committee index, and to be attested slot for a given epoch.
@@ -359,7 +359,7 @@ func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaco
if err != nil {
return err
}
-proposerIndices, err := precomputeProposerIndices(state, indices, epoch)
+proposerIndices, err := PrecomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
@@ -409,7 +409,7 @@ func ClearCache() {
balanceCache.Clear()
}
-// computeCommittee returns the requested shuffled committee out of the total committees using
+// ComputeCommittee returns the requested shuffled committee out of the total committees using
// validator indices and seed.
//
// Spec pseudocode definition:
@@ -424,7 +424,7 @@ func ClearCache() {
// start = (len(indices) * index) // count
// end = (len(indices) * uint64(index + 1)) // count
// return [indices[compute_shuffled_index(uint64(i), uint64(len(indices)), seed)] for i in range(start, end)]
-func computeCommittee(
+func ComputeCommittee(
indices []primitives.ValidatorIndex,
seed [32]byte,
index, count uint64,
@@ -451,9 +451,9 @@ func computeCommittee(
return shuffledList[start:end], nil
}
-// This computes proposer indices of the current epoch and returns a list of proposer indices,
+// PrecomputeProposerIndices computes proposer indices of the current epoch and returns a list of proposer indices,
// the index of the list represents the slot number.
-func precomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex, e primitives.Epoch) ([]primitives.ValidatorIndex, error) {
+func PrecomputeProposerIndices(state state.ReadOnlyBeaconState, activeIndices []primitives.ValidatorIndex, e primitives.Epoch) ([]primitives.ValidatorIndex, error) {
hashFunc := hash.CustomSHA256Hasher()
proposerIndices := make([]primitives.ValidatorIndex, params.BeaconConfig().SlotsPerEpoch)


@@ -1,4 +1,4 @@
-package helpers
+package helpers_test
import (
"context"
@@ -7,6 +7,7 @@ import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -21,7 +22,7 @@ import (
)
func TestComputeCommittee_WithoutCache(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
// Create 10 committees
committeeCount := uint64(10)
@@ -48,16 +49,16 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
require.NoError(t, err)
epoch := time.CurrentEpoch(state)
-indices, err := ActiveValidatorIndices(context.Background(), state, epoch)
+indices, err := helpers.ActiveValidatorIndices(context.Background(), state, epoch)
require.NoError(t, err)
-seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
+seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
-committees, err := computeCommittee(indices, seed, 0, 1 /* Total committee*/)
+committees, err := helpers.ComputeCommittee(indices, seed, 0, 1 /* Total committee*/)
assert.NoError(t, err, "Could not compute committee")
// Test shuffled indices are correct for index 5 committee
index := uint64(5)
-committee5, err := computeCommittee(indices, seed, index, committeeCount)
+committee5, err := helpers.ComputeCommittee(indices, seed, index, committeeCount)
assert.NoError(t, err, "Could not compute committee")
start := slice.SplitOffset(validatorCount, committeeCount, index)
end := slice.SplitOffset(validatorCount, committeeCount, index+1)
@@ -65,7 +66,7 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
// Test shuffled indices are correct for index 9 committee
index = uint64(9)
-committee9, err := computeCommittee(indices, seed, index, committeeCount)
+committee9, err := helpers.ComputeCommittee(indices, seed, index, committeeCount)
assert.NoError(t, err, "Could not compute committee")
start = slice.SplitOffset(validatorCount, committeeCount, index)
end = slice.SplitOffset(validatorCount, committeeCount, index+1)
@@ -73,42 +74,42 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
}
func TestComputeCommittee_RegressionTest(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
indices := []primitives.ValidatorIndex{1, 3, 8, 16, 18, 19, 20, 23, 30, 35, 43, 46, 47, 54, 56, 58, 69, 70, 71, 83, 84, 85, 91, 96, 100, 103, 105, 106, 112, 121, 127, 128, 129, 140, 142, 144, 146, 147, 149, 152, 153, 154, 157, 160, 173, 175, 180, 182, 188, 189, 191, 194, 201, 204, 217, 221, 226, 228, 230, 231, 239, 241, 249, 250, 255}
seed := [32]byte{68, 110, 161, 250, 98, 230, 161, 172, 227, 226, 99, 11, 138, 124, 201, 134, 38, 197, 0, 120, 6, 165, 122, 34, 19, 216, 43, 226, 210, 114, 165, 183}
index := uint64(215)
count := uint64(32)
-_, err := computeCommittee(indices, seed, index, count)
+_, err := helpers.ComputeCommittee(indices, seed, index, count)
require.ErrorContains(t, "index out of range", err)
}
func TestVerifyBitfieldLength_OK(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
bf := bitfield.Bitlist{0xFF, 0x01}
committeeSize := uint64(8)
-assert.NoError(t, VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
+assert.NoError(t, helpers.VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
bf = bitfield.Bitlist{0xFF, 0x07}
committeeSize = 10
-assert.NoError(t, VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
+assert.NoError(t, helpers.VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
}
func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
epoch := primitives.Epoch(1)
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 0, // Epoch 0.
})
require.NoError(t, err)
-_, _, err = CommitteeAssignments(context.Background(), state, epoch+1)
+_, _, err = helpers.CommitteeAssignments(context.Background(), state, epoch+1)
assert.ErrorContains(t, "can't be greater than next epoch", err)
}
func TestCommitteeAssignments_NoProposerForSlot0(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -127,7 +128,7 @@ func TestCommitteeAssignments_NoProposerForSlot0(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
-_, proposerIndexToSlots, err := CommitteeAssignments(context.Background(), state, 0)
+_, proposerIndexToSlots, err := helpers.CommitteeAssignments(context.Background(), state, 0)
require.NoError(t, err, "Failed to determine CommitteeAssignments")
for _, ss := range proposerIndexToSlots {
for _, s := range ss {
@@ -198,9 +199,9 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
-validatorIndexToCommittee, proposerIndexToSlots, err := CommitteeAssignments(context.Background(), state, slots.ToEpoch(tt.slot))
+validatorIndexToCommittee, proposerIndexToSlots, err := helpers.CommitteeAssignments(context.Background(), state, slots.ToEpoch(tt.slot))
require.NoError(t, err, "Failed to determine CommitteeAssignments")
cac := validatorIndexToCommittee[tt.index]
assert.Equal(t, tt.committeeIndex, cac.CommitteeIndex, "Unexpected committeeIndex for validator index %d", tt.index)
@@ -215,7 +216,7 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
@@ -237,17 +238,17 @@ func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
-_, proposerIndxs, err := CommitteeAssignments(context.Background(), state, time.CurrentEpoch(state))
+_, proposerIndxs, err := helpers.CommitteeAssignments(context.Background(), state, time.CurrentEpoch(state))
require.NoError(t, err)
require.NotEqual(t, 0, len(proposerIndxs), "wanted non-zero proposer index set")
-_, proposerIndxs, err = CommitteeAssignments(context.Background(), state, time.CurrentEpoch(state)+1)
+_, proposerIndxs, err = helpers.CommitteeAssignments(context.Background(), state, time.CurrentEpoch(state)+1)
require.NoError(t, err)
require.NotEqual(t, 0, len(proposerIndxs), "wanted non-zero proposer index set")
}
func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
@@ -263,12 +264,12 @@ func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *t
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
-_, _, err = CommitteeAssignments(context.Background(), state, 0)
+_, _, err = helpers.CommitteeAssignments(context.Background(), state, 0)
require.ErrorContains(t, "start slot 0 is smaller than the minimum valid start slot 1", err)
}
func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
@@ -285,7 +286,7 @@ func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
})
require.NoError(t, err)
epoch := primitives.Epoch(1)
-_, proposerIndexToSlots, err := CommitteeAssignments(context.Background(), state, epoch)
+_, proposerIndexToSlots, err := helpers.CommitteeAssignments(context.Background(), state, epoch)
require.NoError(t, err, "Failed to determine CommitteeAssignments")
slotsWithProposers := make(map[primitives.Slot]bool)
@@ -391,10 +392,10 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
}
for i, tt := range tests {
-ClearCache()
+helpers.ClearCache()
require.NoError(t, state.SetSlot(tt.stateSlot))
-err := VerifyAttestationBitfieldLengths(context.Background(), state, tt.attestation)
+err := helpers.VerifyAttestationBitfieldLengths(context.Background(), state, tt.attestation)
if tt.verificationFailure {
assert.NotNil(t, err, "Verification succeeded when it was supposed to fail")
} else {
@@ -404,7 +405,7 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
}
func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
-ClearCache()
+helpers.ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
@@ -421,20 +422,20 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
require.NoError(t, UpdateCommitteeCache(context.Background(), state, time.CurrentEpoch(state)))
require.NoError(t, helpers.UpdateCommitteeCache(context.Background(), state, time.CurrentEpoch(state)))
epoch := primitives.Epoch(0)
idx := primitives.CommitteeIndex(1)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
indices, err = committeeCache.Committee(context.Background(), params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch)), seed, idx)
indices, err = helpers.CommitteeCache().Committee(context.Background(), params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epoch)), seed, idx)
require.NoError(t, err)
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(indices)), "Did not save correct indices lengths")
}
func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
ClearCache()
helpers.ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
@@ -452,19 +453,19 @@ func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
})
require.NoError(t, err)
e := time.CurrentEpoch(state)
require.NoError(t, UpdateCommitteeCache(context.Background(), state, e))
require.NoError(t, helpers.UpdateCommitteeCache(context.Background(), state, e))
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.Equal(t, true, committeeCache.HasEntry(string(seed[:])))
require.Equal(t, true, helpers.CommitteeCache().HasEntry(string(seed[:])))
nextSeed, err := Seed(state, e+1, params.BeaconConfig().DomainBeaconAttester)
nextSeed, err := helpers.Seed(state, e+1, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.Equal(t, false, committeeCache.HasEntry(string(nextSeed[:])))
require.Equal(t, false, helpers.CommitteeCache().HasEntry(string(nextSeed[:])))
require.NoError(t, UpdateCommitteeCache(context.Background(), state, e+1))
require.NoError(t, helpers.UpdateCommitteeCache(context.Background(), state, e+1))
require.Equal(t, true, committeeCache.HasEntry(string(nextSeed[:])))
require.Equal(t, true, helpers.CommitteeCache().HasEntry(string(nextSeed[:])))
}
func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
@@ -481,20 +482,20 @@ func BenchmarkComputeCommittee300000_WithPreCache(b *testing.B) {
require.NoError(b, err)
epoch := time.CurrentEpoch(state)
indices, err := ActiveValidatorIndices(context.Background(), state, epoch)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, epoch)
require.NoError(b, err)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(b, err)
index := uint64(3)
_, err = computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err = helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
b.ResetTimer()
for n := 0; n < b.N; n++ {
_, err := computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
@@ -515,20 +516,20 @@ func BenchmarkComputeCommittee3000000_WithPreCache(b *testing.B) {
require.NoError(b, err)
epoch := time.CurrentEpoch(state)
indices, err := ActiveValidatorIndices(context.Background(), state, epoch)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, epoch)
require.NoError(b, err)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(b, err)
index := uint64(3)
_, err = computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err = helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
b.ResetTimer()
for n := 0; n < b.N; n++ {
_, err := computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
@@ -549,9 +550,9 @@ func BenchmarkComputeCommittee128000_WithOutPreCache(b *testing.B) {
require.NoError(b, err)
epoch := time.CurrentEpoch(state)
indices, err := ActiveValidatorIndices(context.Background(), state, epoch)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, epoch)
require.NoError(b, err)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(b, err)
i := uint64(0)
@@ -559,7 +560,7 @@ func BenchmarkComputeCommittee128000_WithOutPreCache(b *testing.B) {
b.ResetTimer()
for n := 0; n < b.N; n++ {
i++
_, err := computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
@@ -584,9 +585,9 @@ func BenchmarkComputeCommittee1000000_WithOutCache(b *testing.B) {
require.NoError(b, err)
epoch := time.CurrentEpoch(state)
indices, err := ActiveValidatorIndices(context.Background(), state, epoch)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, epoch)
require.NoError(b, err)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(b, err)
i := uint64(0)
@@ -594,7 +595,7 @@ func BenchmarkComputeCommittee1000000_WithOutCache(b *testing.B) {
b.ResetTimer()
for n := 0; n < b.N; n++ {
i++
_, err := computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
@@ -619,9 +620,9 @@ func BenchmarkComputeCommittee4000000_WithOutCache(b *testing.B) {
require.NoError(b, err)
epoch := time.CurrentEpoch(state)
indices, err := ActiveValidatorIndices(context.Background(), state, epoch)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, epoch)
require.NoError(b, err)
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
require.NoError(b, err)
i := uint64(0)
@@ -629,7 +630,7 @@ func BenchmarkComputeCommittee4000000_WithOutCache(b *testing.B) {
b.ResetTimer()
for n := 0; n < b.N; n++ {
i++
_, err := computeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
_, err := helpers.ComputeCommittee(indices, seed, index, params.BeaconConfig().MaxCommitteesPerSlot)
if err != nil {
panic(err)
}
@@ -655,13 +656,13 @@ func TestBeaconCommitteeFromState_UpdateCacheForPreviousEpoch(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
_, err = BeaconCommitteeFromState(context.Background(), state, 1 /* previous epoch */, 0)
_, err = helpers.BeaconCommitteeFromState(context.Background(), state, 1 /* previous epoch */, 0)
require.NoError(t, err)
// Verify previous epoch is cached
seed, err := Seed(state, 0, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(state, 0, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
activeIndices, err := committeeCache.ActiveIndices(context.Background(), seed)
activeIndices, err := helpers.CommitteeCache().ActiveIndices(context.Background(), seed)
require.NoError(t, err)
assert.NotNil(t, activeIndices, "Did not cache active indices")
}
@@ -680,19 +681,19 @@ func TestPrecomputeProposerIndices_Ok(t *testing.T) {
})
require.NoError(t, err)
indices, err := ActiveValidatorIndices(context.Background(), state, 0)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, 0)
require.NoError(t, err)
proposerIndices, err := precomputeProposerIndices(state, indices, time.CurrentEpoch(state))
proposerIndices, err := helpers.PrecomputeProposerIndices(state, indices, time.CurrentEpoch(state))
require.NoError(t, err)
var wantedProposerIndices []primitives.ValidatorIndex
seed, err := Seed(state, 0, params.BeaconConfig().DomainBeaconProposer)
seed, err := helpers.Seed(state, 0, params.BeaconConfig().DomainBeaconProposer)
require.NoError(t, err)
for i := uint64(0); i < uint64(params.BeaconConfig().SlotsPerEpoch); i++ {
seedWithSlot := append(seed[:], bytesutil.Bytes8(i)...)
seedWithSlotHash := hash.Hash(seedWithSlot)
index, err := ComputeProposerIndex(state, indices, seedWithSlotHash)
index, err := helpers.ComputeProposerIndex(state, indices, seedWithSlotHash)
require.NoError(t, err)
wantedProposerIndices = append(wantedProposerIndices, index)
}


@@ -0,0 +1,17 @@
//go:build fuzz
package helpers
import "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
func CommitteeCache() *cache.FakeCommitteeCache {
return committeeCache
}
func SyncCommitteeCache() *cache.FakeSyncCommitteeCache {
return syncCommitteeCache
}
func ProposerIndicesCache() *cache.FakeProposerIndicesCache {
return proposerIndicesCache
}


@@ -0,0 +1,17 @@
//go:build !fuzz
package helpers
import "github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
func CommitteeCache() *cache.CommitteeCache {
return committeeCache
}
func SyncCommitteeCache() *cache.SyncCommitteeCache {
return syncCommitteeCache
}
func ProposerIndicesCache() *cache.ProposerIndicesCache {
return proposerIndicesCache
}
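The two files above are a matched pair gated by the `fuzz` build tag: each exports the same accessor names, so external test packages reach the package's unexported caches whichever way the build is tagged. A minimal sketch (not part of the diff) of a black-box test using them:

package helpers_test

import (
	"testing"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
)

// Sketch only: exercises the accessors introduced above under either build tag.
func TestCacheAccessorsExample(t *testing.T) {
	helpers.ClearCache()
	if helpers.CommitteeCache() == nil {
		t.Fatal("expected a non-nil committee cache")
	}
}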


@@ -1,9 +1,10 @@
package helpers
package helpers_test
import (
"encoding/binary"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -40,10 +41,10 @@ func TestRandaoMix_OK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
mix, err := helpers.RandaoMix(state, test.epoch)
require.NoError(t, err)
assert.DeepEqual(t, test.randaoMix, mix, "Incorrect randao mix")
}
@@ -76,10 +77,10 @@ func TestRandaoMix_CopyOK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
mix, err := helpers.RandaoMix(state, test.epoch)
require.NoError(t, err)
uniqueNumber := uint64(params.BeaconConfig().EpochsPerHistoricalVector.Add(1000))
binary.LittleEndian.PutUint64(mix, uniqueNumber)
@@ -92,7 +93,7 @@ func TestRandaoMix_CopyOK(t *testing.T) {
}
func TestGenerateSeed_OK(t *testing.T) {
ClearCache()
helpers.ClearCache()
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
@@ -107,7 +108,7 @@ func TestGenerateSeed_OK(t *testing.T) {
})
require.NoError(t, err)
got, err := Seed(state, 10, params.BeaconConfig().DomainBeaconAttester)
got, err := helpers.Seed(state, 10, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
wanted := [32]byte{102, 82, 23, 40, 226, 79, 171, 11, 203, 23, 175, 7, 88, 202, 80,


@@ -1,9 +1,10 @@
package helpers
package helpers_test
import (
"math"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -14,7 +15,7 @@ import (
)
func TestTotalBalance_OK(t *testing.T) {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{
{EffectiveBalance: 27 * 1e9}, {EffectiveBalance: 28 * 1e9},
@@ -22,19 +23,19 @@ func TestTotalBalance_OK(t *testing.T) {
}})
require.NoError(t, err)
balance := TotalBalance(state, []primitives.ValidatorIndex{0, 1, 2, 3})
balance := helpers.TotalBalance(state, []primitives.ValidatorIndex{0, 1, 2, 3})
wanted := state.Validators()[0].EffectiveBalance + state.Validators()[1].EffectiveBalance +
state.Validators()[2].EffectiveBalance + state.Validators()[3].EffectiveBalance
assert.Equal(t, wanted, balance, "Incorrect TotalBalance")
}
func TestTotalBalance_ReturnsEffectiveBalanceIncrement(t *testing.T) {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{}})
require.NoError(t, err)
balance := TotalBalance(state, []primitives.ValidatorIndex{})
balance := helpers.TotalBalance(state, []primitives.ValidatorIndex{})
wanted := params.BeaconConfig().EffectiveBalanceIncrement
assert.Equal(t, wanted, balance, "Incorrect TotalBalance")
}
@@ -51,7 +52,7 @@ func TestGetBalance_OK(t *testing.T) {
{i: 2, b: []uint64{0, 0, 0}},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Balances: test.b})
require.NoError(t, err)
@@ -68,7 +69,7 @@ func TestTotalActiveBalance(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
@@ -76,7 +77,7 @@ func TestTotalActiveBalance(t *testing.T) {
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
bal, err := TotalActiveBalance(state)
bal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
require.Equal(t, uint64(test.vCount)*params.BeaconConfig().MaxEffectiveBalance, bal)
}
@@ -91,7 +92,7 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
@@ -99,7 +100,7 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
bal, err := TotalActiveBalance(state)
bal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().EffectiveBalanceIncrement, bal)
}
@@ -115,7 +116,7 @@ func TestTotalActiveBalance_WithCache(t *testing.T) {
{vCount: 10000, wantCount: 10000},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
@@ -123,7 +124,7 @@ func TestTotalActiveBalance_WithCache(t *testing.T) {
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
bal, err := TotalActiveBalance(state)
bal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
require.Equal(t, uint64(test.wantCount)*params.BeaconConfig().MaxEffectiveBalance, bal)
}
@@ -141,7 +142,7 @@ func TestIncreaseBalance_OK(t *testing.T) {
{i: 2, b: []uint64{27 * 1e9, 28 * 1e9, 32 * 1e9}, nb: 33 * 1e9, eb: 65 * 1e9},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
@@ -149,7 +150,7 @@ func TestIncreaseBalance_OK(t *testing.T) {
Balances: test.b,
})
require.NoError(t, err)
require.NoError(t, IncreaseBalance(state, test.i, test.nb))
require.NoError(t, helpers.IncreaseBalance(state, test.i, test.nb))
assert.Equal(t, test.eb, state.Balances()[test.i], "Incorrect Validator balance")
}
}
@@ -167,7 +168,7 @@ func TestDecreaseBalance_OK(t *testing.T) {
{i: 3, b: []uint64{27 * 1e9, 28 * 1e9, 1, 28 * 1e9}, nb: 28 * 1e9, eb: 0},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
@@ -175,13 +176,13 @@ func TestDecreaseBalance_OK(t *testing.T) {
Balances: test.b,
})
require.NoError(t, err)
require.NoError(t, DecreaseBalance(state, test.i, test.nb))
require.NoError(t, helpers.DecreaseBalance(state, test.i, test.nb))
assert.Equal(t, test.eb, state.Balances()[test.i], "Incorrect Validator balance")
}
}
func TestFinalityDelay(t *testing.T) {
ClearCache()
helpers.ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
@@ -195,25 +196,25 @@ func TestFinalityDelay(t *testing.T) {
finalizedEpoch = beaconState.FinalizedCheckpointEpoch()
}
setVal()
d := FinalityDelay(prevEpoch, finalizedEpoch)
d := helpers.FinalityDelay(prevEpoch, finalizedEpoch)
w := time.PrevEpoch(beaconState) - beaconState.FinalizedCheckpointEpoch()
assert.Equal(t, w, d, "Did not get wanted finality delay")
require.NoError(t, beaconState.SetFinalizedCheckpoint(&ethpb.Checkpoint{Epoch: 4}))
setVal()
d = FinalityDelay(prevEpoch, finalizedEpoch)
d = helpers.FinalityDelay(prevEpoch, finalizedEpoch)
w = time.PrevEpoch(beaconState) - beaconState.FinalizedCheckpointEpoch()
assert.Equal(t, w, d, "Did not get wanted finality delay")
require.NoError(t, beaconState.SetFinalizedCheckpoint(&ethpb.Checkpoint{Epoch: 5}))
setVal()
d = FinalityDelay(prevEpoch, finalizedEpoch)
d = helpers.FinalityDelay(prevEpoch, finalizedEpoch)
w = time.PrevEpoch(beaconState) - beaconState.FinalizedCheckpointEpoch()
assert.Equal(t, w, d, "Did not get wanted finality delay")
}
func TestIsInInactivityLeak(t *testing.T) {
ClearCache()
helpers.ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
@@ -227,13 +228,13 @@ func TestIsInInactivityLeak(t *testing.T) {
finalizedEpoch = beaconState.FinalizedCheckpointEpoch()
}
setVal()
assert.Equal(t, true, IsInInactivityLeak(prevEpoch, finalizedEpoch), "Wanted inactivity leak true")
assert.Equal(t, true, helpers.IsInInactivityLeak(prevEpoch, finalizedEpoch), "Wanted inactivity leak true")
require.NoError(t, beaconState.SetFinalizedCheckpoint(&ethpb.Checkpoint{Epoch: 4}))
setVal()
assert.Equal(t, true, IsInInactivityLeak(prevEpoch, finalizedEpoch), "Wanted inactivity leak true")
assert.Equal(t, true, helpers.IsInInactivityLeak(prevEpoch, finalizedEpoch), "Wanted inactivity leak true")
require.NoError(t, beaconState.SetFinalizedCheckpoint(&ethpb.Checkpoint{Epoch: 5}))
setVal()
assert.Equal(t, false, IsInInactivityLeak(prevEpoch, finalizedEpoch), "Wanted inactivity leak false")
assert.Equal(t, false, helpers.IsInInactivityLeak(prevEpoch, finalizedEpoch), "Wanted inactivity leak false")
}
func buildState(slot primitives.Slot, validatorCount uint64) *ethpb.BeaconState {
@@ -285,7 +286,7 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
{i: 2, b: []uint64{math.MaxUint64, math.MaxUint64, math.MaxUint64}, nb: 33 * 1e9},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
@@ -293,6 +294,6 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
Balances: test.b,
})
require.NoError(t, err)
require.ErrorContains(t, "addition overflows", IncreaseBalance(state, test.i, test.nb))
require.ErrorContains(t, "addition overflows", helpers.IncreaseBalance(state, test.i, test.nb))
}
}


@@ -26,7 +26,7 @@ var (
// 1. Checks if the public key exists in the sync committee cache
// 2. If 1 fails, checks if the public key exists in the input current sync committee object
func IsCurrentPeriodSyncCommittee(st state.BeaconState, valIdx primitives.ValidatorIndex) (bool, error) {
root, err := syncPeriodBoundaryRoot(st)
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return false, err
}
@@ -63,7 +63,7 @@ func IsCurrentPeriodSyncCommittee(st state.BeaconState, valIdx primitives.Valida
func IsNextPeriodSyncCommittee(
st state.BeaconState, valIdx primitives.ValidatorIndex,
) (bool, error) {
root, err := syncPeriodBoundaryRoot(st)
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return false, err
}
@@ -90,7 +90,7 @@ func IsNextPeriodSyncCommittee(
func CurrentPeriodSyncSubcommitteeIndices(
st state.BeaconState, valIdx primitives.ValidatorIndex,
) ([]primitives.CommitteeIndex, error) {
root, err := syncPeriodBoundaryRoot(st)
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return nil, err
}
@@ -124,7 +124,7 @@ func CurrentPeriodSyncSubcommitteeIndices(
func NextPeriodSyncSubcommitteeIndices(
st state.BeaconState, valIdx primitives.ValidatorIndex,
) ([]primitives.CommitteeIndex, error) {
root, err := syncPeriodBoundaryRoot(st)
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return nil, err
}
@@ -182,10 +182,10 @@ func findSubCommitteeIndices(pubKey []byte, pubKeys [][]byte) []primitives.Commi
return indices
}
// Retrieve the current sync period boundary root by calculating sync period start epoch
// SyncPeriodBoundaryRoot computes the current sync period boundary root by calculating sync period start epoch
// and calling `BlockRoot`.
// It uses the boundary slot - 1 for block root. (Ex: SlotsPerEpoch * EpochsPerSyncCommitteePeriod - 1)
func syncPeriodBoundaryRoot(st state.ReadOnlyBeaconState) ([32]byte, error) {
func SyncPeriodBoundaryRoot(st state.ReadOnlyBeaconState) ([32]byte, error) {
// Can't call `BlockRoot` until the first slot.
if st.Slot() == params.BeaconConfig().GenesisSlot {
return params.BeaconConfig().ZeroHash, nil
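For intuition about the boundary arithmetic: on mainnet, SlotsPerEpoch = 32 and EpochsPerSyncCommitteePeriod = 256, so a sync period spans 8192 slots, and a state at slot 20000 (period 2) reads the block root at slot 16383. A hypothetical re-derivation of that slot, for illustration only and not the helper's actual code:

package helpers

import (
	"github.com/prysmaticlabs/prysm/v5/config/params"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

// boundaryRootSlot returns the last slot before the current sync committee
// period began, which is where SyncPeriodBoundaryRoot reads its block root.
func boundaryRootSlot(slot primitives.Slot) primitives.Slot {
	cfg := params.BeaconConfig()
	perPeriod := uint64(cfg.SlotsPerEpoch) * uint64(cfg.EpochsPerSyncCommitteePeriod)
	periodStart := (uint64(slot) / perPeriod) * perPeriod
	if periodStart == 0 {
		return 0 // genesis period; the helper short-circuits to the zero hash
	}
	return primitives.Slot(periodStart - 1)
}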


@@ -1,4 +1,4 @@
package helpers
package helpers_test
import (
"math/rand"
@@ -7,6 +7,7 @@ import (
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -17,7 +18,7 @@ import (
)
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -40,15 +41,15 @@ func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
r := [32]byte{'a'}
require.NoError(t, err, syncCommitteeCache.UpdatePositionsInCommittee(r, state))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee(r, state))
ok, err := IsCurrentPeriodSyncCommittee(state, 0)
ok, err := helpers.IsCurrentPeriodSyncCommittee(state, 0)
require.NoError(t, err)
require.Equal(t, true, ok)
}
func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -70,13 +71,13 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
ok, err := IsCurrentPeriodSyncCommittee(state, 0)
ok, err := helpers.IsCurrentPeriodSyncCommittee(state, 0)
require.NoError(t, err)
require.Equal(t, true, ok)
}
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -98,13 +99,13 @@ func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
ok, err := IsCurrentPeriodSyncCommittee(state, 12390192)
ok, err := helpers.IsCurrentPeriodSyncCommittee(state, 12390192)
require.ErrorContains(t, "validator index 12390192 does not exist", err)
require.Equal(t, false, ok)
}
func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -127,15 +128,15 @@ func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
r := [32]byte{'a'}
require.NoError(t, err, syncCommitteeCache.UpdatePositionsInCommittee(r, state))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee(r, state))
ok, err := IsNextPeriodSyncCommittee(state, 0)
ok, err := helpers.IsNextPeriodSyncCommittee(state, 0)
require.NoError(t, err)
require.Equal(t, true, ok)
}
func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -157,13 +158,13 @@ func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
ok, err := IsNextPeriodSyncCommittee(state, 0)
ok, err := helpers.IsNextPeriodSyncCommittee(state, 0)
require.NoError(t, err)
require.Equal(t, true, ok)
}
func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -185,13 +186,13 @@ func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
ok, err := IsNextPeriodSyncCommittee(state, 120391029)
ok, err := helpers.IsNextPeriodSyncCommittee(state, 120391029)
require.ErrorContains(t, "validator index 120391029 does not exist", err)
require.Equal(t, false, ok)
}
func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -214,15 +215,15 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
r := [32]byte{'a'}
require.NoError(t, err, syncCommitteeCache.UpdatePositionsInCommittee(r, state))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee(r, state))
index, err := CurrentPeriodSyncSubcommitteeIndices(state, 0)
index, err := helpers.CurrentPeriodSyncSubcommitteeIndices(state, 0)
require.NoError(t, err)
require.DeepEqual(t, []primitives.CommitteeIndex{0}, index)
}
func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -243,27 +244,27 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
require.NoError(t, err)
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
root, err := syncPeriodBoundaryRoot(state)
root, err := helpers.SyncPeriodBoundaryRoot(state)
require.NoError(t, err)
// Test that cache was empty.
_, err = syncCommitteeCache.CurrentPeriodIndexPosition(root, 0)
_, err = helpers.SyncCommitteeCache().CurrentPeriodIndexPosition(root, 0)
require.Equal(t, cache.ErrNonExistingSyncCommitteeKey, err)
// Test that helper can retrieve the index given empty cache.
index, err := CurrentPeriodSyncSubcommitteeIndices(state, 0)
index, err := helpers.CurrentPeriodSyncSubcommitteeIndices(state, 0)
require.NoError(t, err)
require.DeepEqual(t, []primitives.CommitteeIndex{0}, index)
// Test that cache was able to fill on miss.
time.Sleep(100 * time.Millisecond)
index, err = syncCommitteeCache.CurrentPeriodIndexPosition(root, 0)
index, err = helpers.SyncCommitteeCache().CurrentPeriodIndexPosition(root, 0)
require.NoError(t, err)
require.DeepEqual(t, []primitives.CommitteeIndex{0}, index)
}
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -285,13 +286,13 @@ func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
index, err := CurrentPeriodSyncSubcommitteeIndices(state, 129301923)
index, err := helpers.CurrentPeriodSyncSubcommitteeIndices(state, 129301923)
require.ErrorContains(t, "validator index 129301923 does not exist", err)
require.DeepEqual(t, []primitives.CommitteeIndex(nil), index)
}
func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -314,15 +315,15 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
r := [32]byte{'a'}
require.NoError(t, err, syncCommitteeCache.UpdatePositionsInCommittee(r, state))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee(r, state))
index, err := NextPeriodSyncSubcommitteeIndices(state, 0)
index, err := helpers.NextPeriodSyncSubcommitteeIndices(state, 0)
require.NoError(t, err)
require.DeepEqual(t, []primitives.CommitteeIndex{0}, index)
}
func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -344,13 +345,13 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
index, err := NextPeriodSyncSubcommitteeIndices(state, 0)
index, err := helpers.NextPeriodSyncSubcommitteeIndices(state, 0)
require.NoError(t, err)
require.DeepEqual(t, []primitives.CommitteeIndex{0}, index)
}
func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -372,43 +373,43 @@ func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
index, err := NextPeriodSyncSubcommitteeIndices(state, 21093019)
index, err := helpers.NextPeriodSyncSubcommitteeIndices(state, 21093019)
require.ErrorContains(t, "validator index 21093019 does not exist", err)
require.DeepEqual(t, []primitives.CommitteeIndex(nil), index)
}
func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 1,
})
require.NoError(t, err)
err = UpdateSyncCommitteeCache(state)
err = helpers.UpdateSyncCommitteeCache(state)
require.ErrorContains(t, "not at the end of the epoch to update cache", err)
state, err = state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: params.BeaconConfig().SlotsPerEpoch - 1,
})
require.NoError(t, err)
err = UpdateSyncCommitteeCache(state)
err = helpers.UpdateSyncCommitteeCache(state)
require.ErrorContains(t, "not at sync committee period boundary to update cache", err)
}
func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
ClearCache()
helpers.ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch - 1,
LatestBlockHeader: &ethpb.BeaconBlockHeader{StateRoot: params.BeaconConfig().ZeroHash[:]},
})
require.NoError(t, err)
err = UpdateSyncCommitteeCache(state)
err = helpers.UpdateSyncCommitteeCache(state)
require.ErrorContains(t, "zero hash state root can't be used to update cache", err)
}
func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
@@ -435,7 +436,7 @@ func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
comIdxs, err := CurrentPeriodSyncSubcommitteeIndices(state, 200)
comIdxs, err := helpers.CurrentPeriodSyncSubcommitteeIndices(state, 200)
require.NoError(t, err)
wantedSlot := params.BeaconConfig().EpochsPerSyncCommitteePeriod.Mul(uint64(params.BeaconConfig().SlotsPerEpoch))
@@ -446,7 +447,7 @@ func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
syncCommittee.Pubkeys[i], syncCommittee.Pubkeys[j] = syncCommittee.Pubkeys[j], syncCommittee.Pubkeys[i]
})
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
newIdxs, err := CurrentPeriodSyncSubcommitteeIndices(state, 200)
newIdxs, err := helpers.CurrentPeriodSyncSubcommitteeIndices(state, 200)
require.NoError(t, err)
require.DeepNotEqual(t, comIdxs, newIdxs)
}


@@ -1,4 +1,4 @@
package helpers
package helpers_test
import (
"context"
@@ -6,6 +6,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
@@ -32,7 +33,7 @@ func TestIsActiveValidator_OK(t *testing.T) {
}
for _, test := range tests {
validator := &ethpb.Validator{ActivationEpoch: 10, ExitEpoch: 100}
assert.Equal(t, test.b, IsActiveValidator(validator, test.a), "IsActiveValidator(%d)", test.a)
assert.Equal(t, test.b, helpers.IsActiveValidator(validator, test.a), "IsActiveValidator(%d)", test.a)
}
}
@@ -53,7 +54,7 @@ func TestIsActiveValidatorUsingTrie_OK(t *testing.T) {
for _, test := range tests {
readOnlyVal, err := beaconState.ValidatorAtIndexReadOnly(0)
require.NoError(t, err)
assert.Equal(t, test.b, IsActiveValidatorUsingTrie(readOnlyVal, test.a), "IsActiveValidatorUsingTrie(%d)", test.a)
assert.Equal(t, test.b, helpers.IsActiveValidatorUsingTrie(readOnlyVal, test.a), "IsActiveValidatorUsingTrie(%d)", test.a)
}
}
@@ -81,7 +82,7 @@ func TestIsActiveNonSlashedValidatorUsingTrie_OK(t *testing.T) {
require.NoError(t, err)
readOnlyVal, err := beaconState.ValidatorAtIndexReadOnly(0)
require.NoError(t, err)
assert.Equal(t, test.b, IsActiveNonSlashedValidatorUsingTrie(readOnlyVal, test.a), "IsActiveNonSlashedValidatorUsingTrie(%d)", test.a)
assert.Equal(t, test.b, helpers.IsActiveNonSlashedValidatorUsingTrie(readOnlyVal, test.a), "IsActiveNonSlashedValidatorUsingTrie(%d)", test.a)
}
}
@@ -161,7 +162,7 @@ func TestIsSlashableValidator_OK(t *testing.T) {
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
t.Run("without trie", func(t *testing.T) {
slashableValidator := IsSlashableValidator(test.validator.ActivationEpoch,
slashableValidator := helpers.IsSlashableValidator(test.validator.ActivationEpoch,
test.validator.WithdrawableEpoch, test.validator.Slashed, test.epoch)
assert.Equal(t, test.slashable, slashableValidator, "Expected active validator slashable to be %t", test.slashable)
})
@@ -170,7 +171,7 @@ func TestIsSlashableValidator_OK(t *testing.T) {
require.NoError(t, err)
readOnlyVal, err := beaconState.ValidatorAtIndexReadOnly(0)
require.NoError(t, err)
slashableValidator := IsSlashableValidatorUsingTrie(readOnlyVal, test.epoch)
slashableValidator := helpers.IsSlashableValidatorUsingTrie(readOnlyVal, test.epoch)
assert.Equal(t, test.slashable, slashableValidator, "Expected active validator slashable to be %t", test.slashable)
})
})
@@ -223,17 +224,17 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
}
for _, tt := range tests {
ClearCache()
helpers.ClearCache()
require.NoError(t, state.SetSlot(tt.slot))
result, err := BeaconProposerIndex(context.Background(), state)
result, err := helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err, "Failed to get shard and committees at slot")
assert.Equal(t, tt.index, result, "Result index was an unexpected value")
}
}
func TestBeaconProposerIndex_BadState(t *testing.T) {
ClearCache()
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
@@ -261,12 +262,12 @@ func TestBeaconProposerIndex_BadState(t *testing.T) {
// Set a very high slot, so that retrieved block root will be
// non existent for the proposer cache.
require.NoError(t, state.SetSlot(100))
_, err = BeaconProposerIndex(context.Background(), state)
_, err = helpers.BeaconProposerIndex(context.Background(), state)
require.NoError(t, err)
}
func TestComputeProposerIndex_Compatibility(t *testing.T) {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
@@ -281,22 +282,22 @@ func TestComputeProposerIndex_Compatibility(t *testing.T) {
})
require.NoError(t, err)
indices, err := ActiveValidatorIndices(context.Background(), state, 0)
indices, err := helpers.ActiveValidatorIndices(context.Background(), state, 0)
require.NoError(t, err)
var proposerIndices []primitives.ValidatorIndex
seed, err := Seed(state, 0, params.BeaconConfig().DomainBeaconProposer)
seed, err := helpers.Seed(state, 0, params.BeaconConfig().DomainBeaconProposer)
require.NoError(t, err)
for i := uint64(0); i < uint64(params.BeaconConfig().SlotsPerEpoch); i++ {
seedWithSlot := append(seed[:], bytesutil.Bytes8(i)...)
seedWithSlotHash := hash.Hash(seedWithSlot)
index, err := ComputeProposerIndex(state, indices, seedWithSlotHash)
index, err := helpers.ComputeProposerIndex(state, indices, seedWithSlotHash)
require.NoError(t, err)
proposerIndices = append(proposerIndices, index)
}
var wantedProposerIndices []primitives.ValidatorIndex
seed, err = Seed(state, 0, params.BeaconConfig().DomainBeaconProposer)
seed, err = helpers.Seed(state, 0, params.BeaconConfig().DomainBeaconProposer)
require.NoError(t, err)
for i := uint64(0); i < uint64(params.BeaconConfig().SlotsPerEpoch); i++ {
seedWithSlot := append(seed[:], bytesutil.Bytes8(i)...)
@@ -309,15 +310,15 @@ func TestComputeProposerIndex_Compatibility(t *testing.T) {
}
func TestDelayedActivationExitEpoch_OK(t *testing.T) {
ClearCache()
helpers.ClearCache()
epoch := primitives.Epoch(9999)
wanted := epoch + 1 + params.BeaconConfig().MaxSeedLookahead
assert.Equal(t, wanted, ActivationExitEpoch(epoch))
assert.Equal(t, wanted, helpers.ActivationExitEpoch(epoch))
}
func TestActiveValidatorCount_Genesis(t *testing.T) {
ClearCache()
helpers.ClearCache()
c := 1000
validators := make([]*ethpb.Validator, c)
@@ -334,10 +335,10 @@ func TestActiveValidatorCount_Genesis(t *testing.T) {
require.NoError(t, err)
// Preset cache to a bad count.
seed, err := Seed(beaconState, 0, params.BeaconConfig().DomainBeaconAttester)
seed, err := helpers.Seed(beaconState, 0, params.BeaconConfig().DomainBeaconAttester)
require.NoError(t, err)
require.NoError(t, committeeCache.AddCommitteeShuffledList(context.Background(), &cache.Committees{Seed: seed, ShuffledIndices: []primitives.ValidatorIndex{1, 2, 3}}))
validatorCount, err := ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
require.NoError(t, helpers.CommitteeCache().AddCommitteeShuffledList(context.Background(), &cache.Committees{Seed: seed, ShuffledIndices: []primitives.ValidatorIndex{1, 2, 3}}))
validatorCount, err := helpers.ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
require.NoError(t, err)
assert.Equal(t, uint64(c), validatorCount, "Did not get the correct validator count")
}
@@ -353,7 +354,7 @@ func TestChurnLimit_OK(t *testing.T) {
{validatorCount: 2000000, wantedChurn: 30 /* validatorCount/churnLimitQuotient */},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
validators := make([]*ethpb.Validator, test.validatorCount)
for i := 0; i < len(validators); i++ {
@@ -368,9 +369,9 @@ func TestChurnLimit_OK(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
validatorCount, err := ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
validatorCount, err := helpers.ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
require.NoError(t, err)
resultChurn := ValidatorActivationChurnLimit(validatorCount)
resultChurn := helpers.ValidatorActivationChurnLimit(validatorCount)
assert.Equal(t, test.wantedChurn, resultChurn, "ValidatorActivationChurnLimit(%d)", test.validatorCount)
}
}
@@ -386,7 +387,7 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
{2000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
}
for _, test := range tests {
ClearCache()
helpers.ClearCache()
// Create validators
validators := make([]*ethpb.Validator, test.validatorCount)
@@ -405,11 +406,11 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
require.NoError(t, err)
// Get active validator count
validatorCount, err := ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
validatorCount, err := helpers.ActiveValidatorCount(context.Background(), beaconState, time.CurrentEpoch(beaconState))
require.NoError(t, err)
// Test churn limit calculation
resultChurn := ValidatorActivationChurnLimitDeneb(validatorCount)
resultChurn := helpers.ValidatorActivationChurnLimitDeneb(validatorCount)
assert.Equal(t, test.wantedChurn, resultChurn)
}
}
@@ -574,11 +575,11 @@ func TestActiveValidatorIndices(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
helpers.ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.args.state)
require.NoError(t, err)
got, err := ActiveValidatorIndices(context.Background(), s, tt.args.epoch)
got, err := helpers.ActiveValidatorIndices(context.Background(), s, tt.args.epoch)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
return
@@ -684,12 +685,12 @@ func TestComputeProposerIndex(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
helpers.ClearCache()
bState := &ethpb.BeaconState{Validators: tt.args.validators}
stTrie, err := state_native.InitializeFromProtoUnsafePhase0(bState)
require.NoError(t, err)
got, err := ComputeProposerIndex(stTrie, tt.args.indices, tt.args.seed)
got, err := helpers.ComputeProposerIndex(stTrie, tt.args.indices, tt.args.seed)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
return
@@ -718,9 +719,9 @@ func TestIsEligibleForActivationQueue(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
helpers.ClearCache()
assert.Equal(t, tt.want, IsEligibleForActivationQueue(tt.validator), "IsEligibleForActivationQueue()")
assert.Equal(t, tt.want, helpers.IsEligibleForActivationQueue(tt.validator), "IsEligibleForActivationQueue()")
})
}
}
@@ -747,11 +748,11 @@ func TestIsIsEligibleForActivation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
helpers.ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
assert.Equal(t, tt.want, IsEligibleForActivation(s, tt.validator), "IsEligibleForActivation()")
assert.Equal(t, tt.want, helpers.IsEligibleForActivation(s, tt.validator), "IsEligibleForActivation()")
})
}
}
@@ -765,7 +766,7 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
hashFunc := hash.CustomSHA256Hasher()
for i := uint64(0); ; i++ {
candidateIndex, err := ComputeShuffledIndex(primitives.ValidatorIndex(i%length), length, seed, true /* shuffle */)
candidateIndex, err := helpers.ComputeShuffledIndex(primitives.ValidatorIndex(i%length), length, seed, true /* shuffle */)
if err != nil {
return 0, err
}
@@ -787,7 +788,7 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
}
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
ClearCache()
helpers.ClearCache()
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
@@ -806,13 +807,13 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
require.NoError(t, beaconState.SetValidators(validators))
require.NoError(t, beaconState.SetBalances(balances))
index, err := LastActivatedValidatorIndex(context.Background(), beaconState)
index, err := helpers.LastActivatedValidatorIndex(context.Background(), beaconState)
require.NoError(t, err)
require.Equal(t, index, primitives.ValidatorIndex(3))
}
func TestProposerIndexFromCheckpoint(t *testing.T) {
ClearCache()
helpers.ClearCache()
e := primitives.Epoch(2)
r := [32]byte{'a'}
@@ -820,10 +821,10 @@ func TestProposerIndexFromCheckpoint(t *testing.T) {
ids := [32]primitives.ValidatorIndex{}
slot := primitives.Slot(69) // slot 5 in the Epoch
ids[5] = primitives.ValidatorIndex(19)
proposerIndicesCache.Set(e, r, ids)
helpers.ProposerIndicesCache().Set(e, r, ids)
c := &forkchoicetypes.Checkpoint{Root: root, Epoch: e - 1}
proposerIndicesCache.SetCheckpoint(*c, r)
id, err := ProposerIndexAtSlotFromCheckpoint(c, slot)
helpers.ProposerIndicesCache().SetCheckpoint(*c, r)
id, err := helpers.ProposerIndexAtSlotFromCheckpoint(c, slot)
require.NoError(t, err)
require.Equal(t, ids[5], id)
}


@@ -94,6 +94,15 @@ func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current pri
entry := s.cache.ensure(key)
defer s.cache.delete(key)
root := b.Root()
sumz, err := s.store.WaitForSummarizer(ctx)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", b.Root())).
WithError(err).
Debug("Failed to receive BlobStorageSummarizer within IsDataAvailable")
} else {
entry.setDiskSummary(sumz.Summary(root))
}
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
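A note on the fallback path above: when the summarizer is unavailable, the availability check degrades gracefully rather than failing, because the entry's zero-value disk summary reports nothing on disk and filter then verifies every commitment from the in-memory cache. A sketch of that assumption, with the zero-value behavior inferred from the tests later in this diff:

package das

import (
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
)

// zeroSummaryNeedsFullCheck illustrates that a zero-value BlobStorageSummary
// claims no index is on disk, so every commitment still gets a DA check.
func zeroSummaryNeedsFullCheck() int {
	var sum filesystem.BlobStorageSummary
	needsCheck := 0
	for i := uint64(0); i < fieldparams.MaxBlobsPerBlock; i++ {
		if !sum.HasIndex(i) {
			needsCheck++
		}
	}
	return needsCheck // equals fieldparams.MaxBlobsPerBlock
}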


@@ -4,6 +4,7 @@ import (
"bytes"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -59,7 +60,12 @@ func (c *cache) delete(key cacheKey) {
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
diskSummary filesystem.BlobStorageSummary
}
func (e *cacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
e.diskSummary = sum
}
// stash adds an item to the in-memory cache of BlobSidecars.
@@ -81,9 +87,17 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
// the cache do not match those found in the block. If err is nil, then all expected
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROBlob, error) {
scs := make([]blocks.ROBlob, kc.count())
if e.diskSummary.AllAvailable(kc.count()) {
return nil, nil
}
scs := make([]blocks.ROBlob, 0, kc.count())
for i := uint64(0); i < fieldparams.MaxBlobsPerBlock; i++ {
// We already have this blob, we don't need to write it or validate it.
if e.diskSummary.HasIndex(i) {
continue
}
if kc[i] == nil {
if e.scs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.scs[i].KzgCommitment)
@@ -97,7 +111,7 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
if !bytes.Equal(kc[i], e.scs[i].KzgCommitment) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitment, kc[i])
}
scs[i] = *e.scs[i]
scs = append(scs, *e.scs[i])
}
return scs, nil


@@ -3,9 +3,14 @@ package das
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestCacheEnsureDelete(t *testing.T) {
@@ -23,3 +28,145 @@ func TestCacheEnsureDelete(t *testing.T) {
var nilEntry *cacheEntry
require.Equal(t, nilEntry, c.entries[k])
}
type filterTestCaseSetupFunc func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob)
func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
return func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
commits, err := commitmentsToCheck(blk, blk.Block().Slot())
require.NoError(t, err)
entry := &cacheEntry{}
if len(onDisk) > 0 {
od := map[[32]byte][]int{blk.Root(): onDisk}
sumz := filesystem.NewMockBlobStorageSummarizer(t, od)
sum := sumz.Summary(blk.Root())
entry.setDiskSummary(sum)
}
expected := make([]blocks.ROBlob, 0, nBlobs)
for i := 0; i < commits.count(); i++ {
if entry.diskSummary.HasIndex(uint64(i)) {
continue
}
// If we aren't telling the cache a blob is on disk, add it to the expected list and stash.
expected = append(expected, blobs[i])
require.NoError(t, entry.stash(&blobs[i]))
}
require.Equal(t, numExpected, len(expected))
return entry, commits, expected
}
}
func TestFilterDiskSummary(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
name string
setup filterTestCaseSetupFunc
}{
{
name: "full blobs, all on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{0, 1, 2, 3, 4, 5}, 0),
},
{
name: "full blobs, first on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{0}, 5),
},
{
name: "full blobs, middle on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{2}, 5),
},
{
name: "full blobs, last on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{5}, 5),
},
{
name: "full blobs, none on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{}, 6),
},
{
name: "one commitment, on disk",
setup: filterTestCaseSetup(denebSlot, 1, []int{0}, 0),
},
{
name: "one commitment, not on disk",
setup: filterTestCaseSetup(denebSlot, 1, []int{}, 1),
},
{
name: "two commitments, first on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{0}, 1),
},
{
name: "two commitments, last on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{1}, 1),
},
{
name: "two commitments, none on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{}, 2),
},
{
name: "two commitments, all on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{0, 1}, 0),
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
require.NoError(t, err)
require.Equal(t, len(expected), len(got))
})
}
}
func TestFilter(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
name string
setup func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob)
err error
}{
{
name: "commitments mismatch - extra sidecar",
setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
commits[5] = nil
return entry, commits, expected
},
err: errCommitmentMismatch,
},
{
name: "sidecar missing",
setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
entry.scs[5] = nil
return entry, commits, expected
},
err: errMissingSidecar,
},
{
name: "commitments mismatch - different bytes",
setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
entry.scs[5].KzgCommitment = []byte("nope")
return entry, commits, expected
},
err: errCommitmentMismatch,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, len(expected), len(got))
})
}
}


@@ -5,9 +5,9 @@ go_library(
srcs = [
"blob.go",
"cache.go",
"ephemeral.go",
"log.go",
"metrics.go",
"mock.go",
"pruner.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",


@@ -3,6 +3,7 @@ package filesystem
import (
"context"
"fmt"
"math"
"os"
"path"
"strconv"
@@ -121,7 +122,7 @@ var ErrBlobStorageSummarizerUnavailable = errors.New("BlobStorage not initialize
// BlobStorageSummarizer is not ready immediately on node startup because it needs to sample the blob filesystem to
// determine which blobs are available.
func (bs *BlobStorage) WaitForSummarizer(ctx context.Context) (BlobStorageSummarizer, error) {
if bs.pruner == nil {
if bs == nil || bs.pruner == nil {
return nil, ErrBlobStorageSummarizerUnavailable
}
return bs.pruner.waitForCache(ctx)
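The widened guard relies on a Go property worth making explicit: calling a method on a nil pointer receiver is legal, and safe so long as the method checks the receiver before dereferencing it. A small sketch of what the added check buys:

package filesystem

import "context"

// nilReceiverExample shows (sketch only) that WaitForSummarizer on a nil
// *BlobStorage now returns a clean error instead of panicking.
func nilReceiverExample() error {
	var bs *BlobStorage // nil receiver
	_, err := bs.WaitForSummarizer(context.Background())
	return err // ErrBlobStorageSummarizerUnavailable, not a nil-pointer panic
}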
@@ -299,6 +300,15 @@ func (bs *BlobStorage) Clear() error {
return nil
}
// WithinRetentionPeriod checks if the requested epoch is within the blob retention period.
func (bs *BlobStorage) WithinRetentionPeriod(requested, current primitives.Epoch) bool {
if requested > math.MaxUint64-bs.retentionEpochs {
// If there is an overflow, then the retention period was set to an extremely large number.
return true
}
return requested+bs.retentionEpochs >= current
}
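// Editor's sketch (not part of the diff): worked numbers for the guard above,
// mirroring TestConfig_WithinRetentionPeriod later in this section.
//   retentionEpochs = 16:             WithinRetentionPeriod(0, 16) == true  (0+16 >= 16)
//                                     WithinRetentionPeriod(0, 17) == false (0+16 < 17)
//   retentionEpochs = math.MaxUint64: WithinRetentionPeriod(1, 1)  == true, because
//                                     1 > MaxUint64-retentionEpochs (== 0) fires the
//                                     guard before the addition can wrap around.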
type blobNamer struct {
root [32]byte
index uint64


@@ -2,6 +2,7 @@ package filesystem
import (
"bytes"
"math"
"os"
"path"
"sync"
@@ -24,8 +25,7 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
t.Run("no error for duplicate", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
existingSidecar := testSidecars[0]
blobPath := namerForSidecar(existingSidecar).path()
@@ -129,8 +129,7 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
// pollUntil polls a condition function until it returns true or a timeout is reached.
func TestBlobIndicesBounds(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
root := [32]byte{}
okIdx := uint64(fieldparams.MaxBlobsPerBlock - 1)
@@ -161,8 +160,7 @@ func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, idx uint64) {
func TestBlobStoragePrune(t *testing.T) {
currentSlot := primitives.Slot(200000)
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
t.Run("PruneOne", func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 300, fieldparams.MaxBlobsPerBlock)
@@ -218,8 +216,7 @@ func TestBlobStoragePrune(t *testing.T) {
func BenchmarkPruning(b *testing.B) {
var t *testing.T
_, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
_, bs := NewEphemeralBlobStorageWithFs(t)
blockQty := 10000
currentSlot := primitives.Slot(150000)
@@ -248,3 +245,50 @@ func TestNewBlobStorage(t *testing.T) {
_, err = NewBlobStorage(WithBasePath(path.Join(t.TempDir(), "good")))
require.NoError(t, err)
}
func TestConfig_WithinRetentionPeriod(t *testing.T) {
retention := primitives.Epoch(16)
storage := &BlobStorage{retentionEpochs: retention}
cases := []struct {
name string
requested primitives.Epoch
current primitives.Epoch
within bool
}{
{
name: "before",
requested: 0,
current: retention + 1,
within: false,
},
{
name: "same",
requested: 0,
current: 0,
within: true,
},
{
name: "boundary",
requested: 0,
current: retention,
within: true,
},
{
name: "one less",
requested: retention - 1,
current: retention,
within: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
require.Equal(t, c.within, storage.WithinRetentionPeriod(c.requested, c.current))
})
}
t.Run("overflow", func(t *testing.T) {
storage := &BlobStorage{retentionEpochs: math.MaxUint64}
require.Equal(t, true, storage.WithinRetentionPeriod(1, 1))
})
}

@@ -12,7 +12,7 @@ import (
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
fs := afero.NewMemMapFs()
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest, withWarmedCache())
if err != nil {
t.Fatal("test setup issue", err)
}
@@ -21,13 +21,13 @@ func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
// NewEphemeralBlobStorageWithFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage api.
func NewEphemeralBlobStorageWithFs(t testing.TB) (afero.Fs, *BlobStorage, error) {
func NewEphemeralBlobStorageWithFs(t testing.TB) (afero.Fs, *BlobStorage) {
fs := afero.NewMemMapFs()
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest, withWarmedCache())
if err != nil {
t.Fatal("test setup issue", err)
}
return fs, &BlobStorage{fs: fs, pruner: pruner}, nil
return fs, &BlobStorage{fs: fs, pruner: pruner}
}
type BlobMocker struct {
@@ -61,3 +61,15 @@ func NewEphemeralBlobStorageWithMocker(_ testing.TB) (*BlobMocker, *BlobStorage)
bs := &BlobStorage{fs: fs}
return &BlobMocker{fs: fs, bs: bs}, bs
}
func NewMockBlobStorageSummarizer(t *testing.T, set map[[32]byte][]int) BlobStorageSummarizer {
c := newBlobStorageCache()
for k, v := range set {
for i := range v {
if err := c.ensure(rootString(k), 0, uint64(v[i])); err != nil {
t.Fatal(err)
}
}
}
return c
}

@@ -33,17 +33,32 @@ type blobPruner struct {
prunedBefore atomic.Uint64
windowSize primitives.Slot
cache *blobStorageCache
cacheWarmed chan struct{}
cacheReady chan struct{}
warmed bool
fs afero.Fs
}
func newBlobPruner(fs afero.Fs, retain primitives.Epoch) (*blobPruner, error) {
type prunerOpt func(*blobPruner) error
func withWarmedCache() prunerOpt {
return func(p *blobPruner) error {
return p.warmCache()
}
}
func newBlobPruner(fs afero.Fs, retain primitives.Epoch, opts ...prunerOpt) (*blobPruner, error) {
r, err := slots.EpochStart(retain + retentionBuffer)
if err != nil {
return nil, errors.Wrap(err, "could not set retentionSlots")
}
cw := make(chan struct{})
return &blobPruner{fs: fs, windowSize: r, cache: newBlobStorageCache(), cacheWarmed: cw}, nil
p := &blobPruner{fs: fs, windowSize: r, cache: newBlobStorageCache(), cacheReady: cw}
for _, o := range opts {
if err := o(p); err != nil {
return nil, err
}
}
return p, nil
}
// notify updates the pruner's view of root->blob mappings. This allows the pruner to build a cache
@@ -57,6 +72,8 @@ func (p *blobPruner) notify(root [32]byte, latest primitives.Slot, idx uint64) e
return nil
}
go func() {
p.Lock()
defer p.Unlock()
if err := p.prune(primitives.Slot(pruned)); err != nil {
log.WithError(err).Errorf("Failed to prune blobs from slot %d", latest)
}
@@ -74,16 +91,21 @@ func windowMin(latest, offset primitives.Slot) primitives.Slot {
}
func (p *blobPruner) warmCache() error {
p.Lock()
defer p.Unlock()
if err := p.prune(0); err != nil {
return err
}
close(p.cacheWarmed)
if !p.warmed {
p.warmed = true
close(p.cacheReady)
}
return nil
}
func (p *blobPruner) waitForCache(ctx context.Context) (*blobStorageCache, error) {
select {
case <-p.cacheWarmed:
case <-p.cacheReady:
return p.cache, nil
case <-ctx.Done():
return nil, ctx.Err()
@@ -94,8 +116,6 @@ func (p *blobPruner) waitForCache(ctx context.Context) (*blobStorageCache, error
// It deletes blobs older than currentEpoch - (retentionEpochs+bufferEpochs).
// This is so that we keep a slight buffer and blobs are deleted after n+2 epochs.
func (p *blobPruner) prune(pruneBefore primitives.Slot) error {
p.Lock()
defer p.Unlock()
start := time.Now()
totalPruned, totalErr := 0, 0
// Customize logging/metrics behavior for the initial cache warmup when slot=0.

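The prunerOpt pattern above runs options at construction time, which is how the ephemeral test storage forces a synchronous warmup via withWarmedCache(). A minimal standalone sketch of that functional-option shape (hypothetical names, not the Prysm types):

package main

import "fmt"

type pruner struct{ warmed bool }

type prunerOpt func(*pruner) error

// withWarm plays the role of withWarmedCache: it runs during construction.
func withWarm() prunerOpt {
	return func(p *pruner) error {
		p.warmed = true // the real option walks the filesystem and closes cacheReady
		return nil
	}
}

func newPruner(opts ...prunerOpt) (*pruner, error) {
	p := &pruner{}
	for _, o := range opts {
		if err := o(p); err != nil {
			return nil, err
		}
	}
	return p, nil
}

func main() {
	p, err := newPruner(withWarm())
	fmt.Println(p.warmed, err) // true <nil>
}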
@@ -51,8 +51,7 @@ func TestTryPruneDir_CachedExpired(t *testing.T) {
require.Equal(t, 0, pruned)
})
t.Run("blobs to delete", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
@@ -83,8 +82,7 @@ func TestTryPruneDir_CachedExpired(t *testing.T) {
func TestTryPruneDir_SlotFromFile(t *testing.T) {
t.Run("expired blobs deleted", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
@@ -116,8 +114,7 @@ func TestTryPruneDir_SlotFromFile(t *testing.T) {
require.Equal(t, 0, len(files))
})
t.Run("not expired, intact", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
// Set slot equal to the window size, so it should be retained.
slot := bs.pruner.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
@@ -184,8 +181,7 @@ func TestSlotFromFile(t *testing.T) {
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageWithFs(t)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)

@@ -36,7 +36,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -94,7 +93,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",

@@ -20,9 +20,7 @@ import (
coreState "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/types"
statenative "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/container/trie"
contracts "github.com/prysmaticlabs/prysm/v5/contracts/deposit"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -226,16 +224,14 @@ func (s *Service) ProcessDepositLog(ctx context.Context, depositLog *gethtypes.L
"merkleTreeIndex": index,
}).Info("Invalid deposit registered in deposit contract")
}
if features.Get().EnableEIP4881 {
// We finalize the trie here so that old deposits are not kept around, as they make
// deposit tree htr computation expensive.
dTrie, ok := s.depositTrie.(*depositsnapshot.DepositTree)
if !ok {
return errors.Errorf("wrong trie type initialized: %T", dTrie)
}
if err := dTrie.Finalize(index, depositLog.BlockHash, depositLog.BlockNumber); err != nil {
log.WithError(err).Error("Could not finalize trie")
}
// We finalize the trie here so that old deposits are not kept around, as they make
// deposit tree htr computation expensive.
dTrie, ok := s.depositTrie.(*depositsnapshot.DepositTree)
if !ok {
return errors.Errorf("wrong trie type initialized: %T", dTrie)
}
if err := dTrie.Finalize(index, depositLog.BlockHash, depositLog.BlockNumber); err != nil {
log.WithError(err).Error("Could not finalize trie")
}
return nil
@@ -579,25 +575,17 @@ func (s *Service) savePowchainData(ctx context.Context) error {
BeaconState: pbState, // I promise not to mutate it!
DepositContainers: s.cfg.depositCache.AllDepositContainers(ctx),
}
if features.Get().EnableEIP4881 {
fd, err := s.cfg.depositCache.FinalizedDeposits(ctx)
if err != nil {
return errors.Errorf("could not get finalized deposit tree: %v", err)
}
tree, ok := fd.Deposits().(*depositsnapshot.DepositTree)
if !ok {
return errors.New("deposit tree was not EIP4881 DepositTree")
}
eth1Data.DepositSnapshot, err = tree.ToProto()
if err != nil {
return err
}
} else {
tree, ok := s.depositTrie.(*trie.SparseMerkleTrie)
if !ok {
return errors.New("deposit tree was not SparseMerkleTrie")
}
eth1Data.Trie = tree.ToProto()
fd, err := s.cfg.depositCache.FinalizedDeposits(ctx)
if err != nil {
return errors.Errorf("could not get finalized deposit tree: %v", err)
}
tree, ok := fd.Deposits().(*depositsnapshot.DepositTree)
if !ok {
return errors.New("deposit tree was not EIP4881 DepositTree")
}
eth1Data.DepositSnapshot, err = tree.ToProto()
if err != nil {
return err
}
return s.cfg.beaconDB.SaveExecutionChainData(ctx, eth1Data)
}

@@ -9,7 +9,7 @@ import (
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
@@ -31,7 +31,7 @@ func TestProcessDepositLog_OK(t *testing.T) {
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := testDB.SetupDB(t)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
@@ -100,7 +100,7 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := testDB.SetupDB(t)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -216,7 +216,7 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := testDB.SetupDB(t)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -291,7 +291,7 @@ func TestProcessETH2GenesisLog(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
beaconDB := testDB.SetupDB(t)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
@@ -384,7 +384,7 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
kvStore := testDB.SetupDB(t)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -481,7 +481,7 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
kvStore := testDB.SetupDB(t)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -593,7 +593,7 @@ func TestCheckForChainstart_NoValidator(t *testing.T) {
}
func newPowchainService(t *testing.T, eth1Backend *mock.TestAccount, beaconDB db.Database) *Service {
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)

@@ -29,7 +29,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/container/trie"
contracts "github.com/prysmaticlabs/prysm/v5/contracts/deposit"
@@ -164,14 +163,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
_ = cancel // govet fix for lost cancel. Cancel is handled in service.Stop()
var depositTrie cache.MerkleTree
var err error
if features.Get().EnableEIP4881 {
depositTrie = depositsnapshot.NewDepositTree()
} else {
depositTrie, err = trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
if err != nil {
return nil, errors.Wrap(err, "could not set up deposit trie")
}
}
depositTrie = depositsnapshot.NewDepositTree()
genState, err := transition.EmptyGenesisState()
if err != nil {
return nil, errors.Wrap(err, "could not set up genesis state")
@@ -740,20 +732,12 @@ func (s *Service) initializeEth1Data(ctx context.Context, eth1DataInDB *ethpb.ET
return nil
}
var err error
if features.Get().EnableEIP4881 {
if eth1DataInDB.DepositSnapshot != nil {
s.depositTrie, err = depositsnapshot.DepositTreeFromSnapshotProto(eth1DataInDB.DepositSnapshot)
} else {
if err := s.migrateOldDepositTree(eth1DataInDB); err != nil {
return err
}
}
if eth1DataInDB.DepositSnapshot != nil {
s.depositTrie, err = depositsnapshot.DepositTreeFromSnapshotProto(eth1DataInDB.DepositSnapshot)
} else {
if eth1DataInDB.Trie == nil && eth1DataInDB.DepositSnapshot != nil {
return errors.Errorf("trying to use old deposit trie after migration to the new trie. "+
"Remove the --%s flag to resume normal operations.", features.DisableEIP4881.Name)
if err = s.migrateOldDepositTree(eth1DataInDB); err != nil {
return err
}
s.depositTrie, err = trie.CreateTrieFromProto(eth1DataInDB.Trie)
}
if err != nil {
return err
@@ -766,21 +750,19 @@ func (s *Service) initializeEth1Data(ctx context.Context, eth1DataInDB *ethpb.ET
}
}
s.latestEth1Data = eth1DataInDB.CurrentEth1Data
if features.Get().EnableEIP4881 {
ctrs := eth1DataInDB.DepositContainers
// Look at previously finalized index, as we are building off a finalized
// snapshot rather than the full trie.
lastFinalizedIndex := int64(s.depositTrie.NumOfItems() - 1)
// Correctly initialize missing deposits into active trie.
for _, c := range ctrs {
if c.Index > lastFinalizedIndex {
depRoot, err := c.Deposit.Data.HashTreeRoot()
if err != nil {
return err
}
if err := s.depositTrie.Insert(depRoot[:], int(c.Index)); err != nil {
return err
}
ctrs := eth1DataInDB.DepositContainers
// Look at previously finalized index, as we are building off a finalized
// snapshot rather than the full trie.
lastFinalizedIndex := int64(s.depositTrie.NumOfItems() - 1)
// Correctly initialize missing deposits into active trie.
for _, c := range ctrs {
if c.Index > lastFinalizedIndex {
depRoot, err := c.Deposit.Data.HashTreeRoot()
if err != nil {
return err
}
if err := s.depositTrie.Insert(depRoot[:], int(c.Index)); err != nil {
return err
}
}
}
@@ -847,21 +829,13 @@ func (s *Service) validPowchainData(ctx context.Context) (*ethpb.ETH1ChainData,
BeaconState: pbState,
DepositContainers: s.cfg.depositCache.AllDepositContainers(ctx),
}
if features.Get().EnableEIP4881 {
trie, ok := s.depositTrie.(*depositsnapshot.DepositTree)
if !ok {
return nil, errors.New("deposit trie was not EIP4881 DepositTree")
}
eth1Data.DepositSnapshot, err = trie.ToProto()
if err != nil {
return nil, err
}
} else {
trie, ok := s.depositTrie.(*trie.SparseMerkleTrie)
if !ok {
return nil, errors.New("deposit trie was not SparseMerkleTrie")
}
eth1Data.Trie = trie.ToProto()
trie, ok := s.depositTrie.(*depositsnapshot.DepositTree)
if !ok {
return nil, errors.New("deposit trie was not EIP4881 DepositTree")
}
eth1Data.DepositSnapshot, err = trie.ToProto()
if err != nil {
return nil, err
}
if err := s.cfg.beaconDB.SaveExecutionChainData(ctx, eth1Data); err != nil {
return nil, err

@@ -15,7 +15,7 @@ import (
"github.com/ethereum/go-ethereum/rpc"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
dbutil "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/types"
@@ -348,7 +348,7 @@ func TestInitDepositCache_OK(t *testing.T) {
cfg: &config{beaconDB: beaconDB},
}
var err error
s.cfg.depositCache, err = depositcache.New()
s.cfg.depositCache, err = depositsnapshot.New()
require.NoError(t, err)
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
@@ -409,7 +409,7 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
cfg: &config{beaconDB: beaconDB},
}
var err error
s.cfg.depositCache, err = depositcache.New()
s.cfg.depositCache, err = depositsnapshot.New()
require.NoError(t, err)
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
@@ -553,7 +553,7 @@ func Test_batchRequestHeaders_UnderflowChecks(t *testing.T) {
func TestService_EnsureConsistentPowchainData(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
cache, err := depositcache.New()
cache, err := depositsnapshot.New()
require.NoError(t, err)
srv, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -583,7 +583,7 @@ func TestService_EnsureConsistentPowchainData(t *testing.T) {
func TestService_InitializeCorrectly(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
cache, err := depositcache.New()
cache, err := depositsnapshot.New()
require.NoError(t, err)
srv, endpoint, err := mockExecution.SetupRPCServer()
@@ -614,7 +614,7 @@ func TestService_InitializeCorrectly(t *testing.T) {
func TestService_EnsureValidPowchainData(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
cache, err := depositcache.New()
cache, err := depositsnapshot.New()
require.NoError(t, err)
srv, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -809,7 +809,7 @@ func (s *slowRPCClient) CallContext(_ context.Context, _ interface{}, _ string,
func TestService_migrateOldDepositTree(t *testing.T) {
beaconDB := dbutil.SetupDB(t)
cache, err := depositcache.New()
cache, err := depositsnapshot.New()
require.NoError(t, err)
srv, endpoint, err := mockExecution.SetupRPCServer()

@@ -21,7 +21,6 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",

@@ -25,7 +25,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -567,11 +566,7 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
b.db = d
if features.Get().EnableEIP4881 {
depositCache, err = depositsnapshot.New()
} else {
depositCache, err = depositcache.New()
}
depositCache, err = depositsnapshot.New()
if err != nil {
return errors.Wrap(err, "could not create deposit cache")
}
@@ -643,9 +638,7 @@ func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context) error {
if err := d.ClearDB(); err != nil {
return errors.Wrap(err, "could not clear database")
}
if err := b.BlobStorage.Clear(); err != nil {
return errors.Wrap(err, "could not clear blob storage")
}
d, err = slasherkv.NewKVStore(b.ctx, dbPath)
if err != nil {
return errors.Wrap(err, "could not create new database")
@@ -707,6 +700,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
MetaDataDir: cliCtx.String(cmd.P2PMetadata.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
MaxPeers: cliCtx.Uint(cmd.P2PMaxPeers.Name),

@@ -217,9 +217,9 @@ func Test_hasNetworkFlag(t *testing.T) {
want bool
}{
{
name: "Prater testnet",
networkName: features.PraterTestnet.Name,
networkValue: "prater",
name: "Holesky testnet",
networkName: features.HoleskyTestnet.Name,
networkValue: "holesky",
want: true,
},
{

@@ -90,6 +90,7 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peerstore:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/quic:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_mplex//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",

@@ -10,6 +10,7 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
@@ -137,11 +138,11 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
// In the event our attestation is outdated and beyond the
// acceptable threshold, we exit early and do not broadcast it.
currSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
if att.Data.Slot+params.BeaconConfig().SlotsPerEpoch < currSlot {
if err := helpers.ValidateAttestationTime(att.Data.Slot, s.genesisTime, params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
log.WithFields(logrus.Fields{
"attestationSlot": att.Data.Slot,
"currentSlot": currSlot,
}).Warning("Attestation is too old to broadcast, discarding it")
}).WithError(err).Warning("Attestation is too old to broadcast, discarding it")
return
}

@@ -24,6 +24,7 @@ type Config struct {
PrivateKey string
DataDir string
MetaDataDir string
QUICPort uint
TCPPort uint
UDPPort uint
MaxPeers uint

@@ -4,6 +4,7 @@ import (
"bytes"
"crypto/ecdsa"
"net"
"sync"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
@@ -15,8 +16,11 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -39,6 +43,11 @@ const (
udp6
)
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
type quicProtocol uint16
func (quicProtocol) ENRKey() string { return "quic" }
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
@@ -100,14 +109,15 @@ func (s *Service) RefreshENR() {
// listenForNewNodes watches for new nodes in the network and adds them to the peerstore.
func (s *Service) listenForNewNodes() {
iterator := s.dv5Listener.RandomNodes()
iterator = enode.Filter(iterator, s.filterPeer)
iterator := filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
defer iterator.Close()
for {
// Exit if service's context is canceled
// Exit if service's context is canceled.
if s.ctx.Err() != nil {
break
}
if s.isPeerAtLimit(false /* inbound */) {
// Pause the main loop for a period to stop looking
// for new peers.
@@ -115,23 +125,47 @@ func (s *Service) listenForNewNodes() {
time.Sleep(pollingPeriod)
continue
}
exists := iterator.Next()
if !exists {
break
}
node := iterator.Node()
peerInfo, _, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Error("Could not convert to peer info")
wantedCount := s.wantedPeerDials()
if wantedCount == 0 {
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
// Make sure that the peer is not dialed too often; each connection attempt has its own backoff period.
s.Peers().RandomizeBackOff(peerInfo.ID)
go func(info *peer.AddrInfo) {
if err := s.connectWithPeer(s.ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
var err error
wantedICount := math.Min(uint64(wantedCount), uint64(flags.Get().MaxConcurrentDials))
wantedCount, err = math.Int(wantedICount)
if err != nil {
log.WithError(err).Error("Could not get wanted count")
continue
}
}(peerInfo)
}
wantedNodes := enode.ReadNodes(iterator, wantedCount)
wg := new(sync.WaitGroup)
for i := 0; i < len(wantedNodes); i++ {
node := wantedNodes[i]
peerInfo, _, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Error("Could not convert to peer info")
continue
}
if peerInfo == nil {
continue
}
// Make sure that the peer is not dialed too often; each connection attempt has its own backoff period.
s.Peers().RandomizeBackOff(peerInfo.ID)
wg.Add(1)
go func(info *peer.AddrInfo) {
if err := s.connectWithPeer(s.ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
wg.Done()
}(peerInfo)
}
wg.Wait()
}
}
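The loop above dials at most wantedCount peers per iteration, and the MaxDialIsActive branch caps that count at MaxConcurrentDials. A standalone sketch of the clamp (assumed flag semantics, illustrative only):

package main

import "fmt"

// clampDials mirrors the dial-limiting above: the dial budget per iteration is
// the peer deficit, optionally capped by a MaxConcurrentDials-style limit.
func clampDials(maxPeers, activePeers, maxConcurrentDials int) int {
	wanted := 0
	if maxPeers > activePeers {
		wanted = maxPeers - activePeers
	}
	if maxConcurrentDials > 0 && wanted > maxConcurrentDials {
		wanted = maxConcurrentDials
	}
	return wanted
}

func main() {
	fmt.Println(clampDials(70, 50, 10)) // 10: deficit of 20 capped at 10
	fmt.Println(clampDials(70, 65, 10)) // 5: deficit below the cap
	fmt.Println(clampDials(70, 70, 10)) // 0: at the limit, no dials wanted
}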
@@ -167,8 +201,7 @@ func (s *Service) createListener(
// Listen to all network interfaces
// for both ip protocols.
networkVersion := "udp"
conn, err := net.ListenUDP(networkVersion, udpAddr)
conn, err := net.ListenUDP("udp", udpAddr)
if err != nil {
return nil, errors.Wrap(err, "could not listen to UDP")
}
@@ -178,6 +211,7 @@ func (s *Service) createListener(
ipAddr,
int(s.cfg.UDPPort),
int(s.cfg.TCPPort),
int(s.cfg.QUICPort),
)
if err != nil {
return nil, errors.Wrap(err, "could not create local node")
@@ -209,7 +243,7 @@ func (s *Service) createListener(
func (s *Service) createLocalNode(
privKey *ecdsa.PrivateKey,
ipAddr net.IP,
udpPort, tcpPort int,
udpPort, tcpPort, quicPort int,
) (*enode.LocalNode, error) {
db, err := enode.OpenDB("")
if err != nil {
@@ -218,11 +252,19 @@ func (s *Service) createLocalNode(
localNode := enode.NewLocalNode(db, privKey)
ipEntry := enr.IP(ipAddr)
udpEntry := enr.UDP(udpPort)
tcpEntry := enr.TCP(tcpPort)
localNode.Set(ipEntry)
udpEntry := enr.UDP(udpPort)
localNode.Set(udpEntry)
tcpEntry := enr.TCP(tcpPort)
localNode.Set(tcpEntry)
if features.Get().EnableQUIC {
quicEntry := quicProtocol(quicPort)
localNode.Set(quicEntry)
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
@@ -277,7 +319,7 @@ func (s *Service) startDiscoveryV5(
// filterPeer validates each node that we retrieve from our dht. We
// try to ascertain that the peer can be a valid protocol peer.
// Validity Conditions:
// 1. Peer has a valid IP and TCP port set in their enr.
// 1. Peer has a valid IP and a (QUIC and/or TCP) port set in their enr.
// 2. Peer hasn't been marked as 'bad'.
// 3. Peer is not currently active or connected.
// 4. Peer is ready to receive incoming connections.
@@ -294,17 +336,13 @@ func (s *Service) filterPeer(node *enode.Node) bool {
return false
}
// Ignore nodes with their TCP ports not set.
if err := node.Record().Load(enr.WithEntry("tcp", new(enr.TCP))); err != nil {
if !enr.IsNotFound(err) {
log.WithError(err).Debug("Could not retrieve tcp port")
}
peerData, multiAddrs, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
return false
}
peerData, multiAddr, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
if peerData == nil || len(multiAddrs) == 0 {
return false
}
@@ -337,6 +375,9 @@ func (s *Service) filterPeer(node *enode.Node) bool {
}
}
// If the peer has two multiaddrs, favor the QUIC address, which comes first.
multiAddr := multiAddrs[0]
// Add peer to peer handler.
s.peers.Add(nodeENR, peerData.ID, multiAddr, network.DirUnknown)
@@ -364,6 +405,17 @@ func (s *Service) isPeerAtLimit(inbound bool) bool {
return activePeers >= maxPeers || numOfConns >= maxPeers
}
func (s *Service) wantedPeerDials() int {
maxPeers := int(s.cfg.MaxPeers)
activePeers := len(s.Peers().Active())
wantedCount := 0
if maxPeers > activePeers {
wantedCount = maxPeers - activePeers
}
return wantedCount
}
// PeersFromStringAddrs converts peer raw ENRs into multiaddrs for p2p.
func PeersFromStringAddrs(addrs []string) ([]ma.Multiaddr, error) {
var allAddrs []ma.Multiaddr
@@ -380,11 +432,11 @@ func PeersFromStringAddrs(addrs []string) ([]ma.Multiaddr, error) {
if err != nil {
return nil, errors.Wrapf(err, "Could not get enode from string")
}
addr, err := convertToSingleMultiAddr(enodeAddr)
nodeAddrs, err := retrieveMultiAddrsFromNode(enodeAddr)
if err != nil {
return nil, errors.Wrapf(err, "Could not get multiaddr")
}
allAddrs = append(allAddrs, addr)
allAddrs = append(allAddrs, nodeAddrs...)
}
return allAddrs, nil
}
@@ -419,45 +471,139 @@ func parseGenericAddrs(addrs []string) (enodeString, multiAddrString []string) {
}
func convertToMultiAddr(nodes []*enode.Node) []ma.Multiaddr {
var multiAddrs []ma.Multiaddr
// Expect each node to have a TCP and a QUIC address.
multiAddrs := make([]ma.Multiaddr, 0, 2*len(nodes))
for _, node := range nodes {
// ignore nodes with no ip address stored
// Skip nodes with no ip address stored.
if node.IP() == nil {
continue
}
multiAddr, err := convertToSingleMultiAddr(node)
// Get up to two multiaddrs (TCP and QUIC) for each node.
nodeMultiAddrs, err := retrieveMultiAddrsFromNode(node)
if err != nil {
log.WithError(err).Error("Could not convert to multiAddr")
log.WithError(err).Errorf("Could not convert node %s to multiaddr", node)
continue
}
multiAddrs = append(multiAddrs, multiAddr)
multiAddrs = append(multiAddrs, nodeMultiAddrs...)
}
return multiAddrs
}
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, ma.Multiaddr, error) {
multiAddr, err := convertToSingleMultiAddr(node)
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, []ma.Multiaddr, error) {
multiAddrs, err := retrieveMultiAddrsFromNode(node)
if err != nil {
return nil, nil, err
}
info, err := peer.AddrInfoFromP2pAddr(multiAddr)
if err != nil {
return nil, nil, err
if len(multiAddrs) == 0 {
return nil, nil, nil
}
return info, multiAddr, nil
infos, err := peer.AddrInfosFromP2pAddrs(multiAddrs...)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert to peer info: %v", multiAddrs)
}
if len(infos) != 1 {
return nil, nil, errors.Errorf("infos contains %v elements, expected exactly 1", len(infos))
}
return &infos[0], multiAddrs, nil
}
func convertToSingleMultiAddr(node *enode.Node) (ma.Multiaddr, error) {
// retrieveMultiAddrsFromNode converts an enode.Node to a list of multiaddrs.
// If the node has both a QUIC and a TCP port set in its ENR, then
// the multiaddr corresponding to the QUIC port is added first, followed
// by the multiaddr corresponding to the TCP port.
func retrieveMultiAddrsFromNode(node *enode.Node) ([]ma.Multiaddr, error) {
multiaddrs := make([]ma.Multiaddr, 0, 2)
// Retrieve the node public key.
pubkey := node.Pubkey()
assertedKey, err := ecdsaprysm.ConvertToInterfacePubkey(pubkey)
if err != nil {
return nil, errors.Wrap(err, "could not get pubkey")
}
// Compute the node ID from the public key.
id, err := peer.IDFromPublicKey(assertedKey)
if err != nil {
return nil, errors.Wrap(err, "could not get peer id")
}
return multiAddressBuilderWithID(node.IP().String(), "tcp", uint(node.TCP()), id)
if features.Get().EnableQUIC {
// If the QUIC entry is present in the ENR, build the corresponding multiaddress.
port, ok, err := getPort(node, quic)
if err != nil {
return nil, errors.Wrap(err, "could not get QUIC port")
}
if ok {
addr, err := multiAddressBuilderWithID(node.IP(), quic, port, id)
if err != nil {
return nil, errors.Wrap(err, "could not build QUIC address")
}
multiaddrs = append(multiaddrs, addr)
}
}
// If the TCP entry is present in the ENR, build the corresponding multiaddress.
port, ok, err := getPort(node, tcp)
if err != nil {
return nil, errors.Wrap(err, "could not get TCP port")
}
if ok {
addr, err := multiAddressBuilderWithID(node.IP(), tcp, port, id)
if err != nil {
return nil, errors.Wrap(err, "could not build TCP address")
}
multiaddrs = append(multiaddrs, addr)
}
return multiaddrs, nil
}
// getPort retrieves the port for a given node and protocol, along with a boolean
// indicating whether the port was found, and an error.
func getPort(node *enode.Node, protocol internetProtocol) (uint, bool, error) {
var (
port uint
err error
)
switch protocol {
case tcp:
var entry enr.TCP
err = node.Load(&entry)
port = uint(entry)
case udp:
var entry enr.UDP
err = node.Load(&entry)
port = uint(entry)
case quic:
var entry quicProtocol
err = node.Load(&entry)
port = uint(entry)
default:
return 0, false, errors.Errorf("invalid protocol: %v", protocol)
}
if enr.IsNotFound(err) {
return port, false, nil
}
if err != nil {
return 0, false, errors.Wrap(err, "could not get port")
}
return port, true, nil
}
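getPort's three-valued return lets callers tell an absent ENR entry (found=false, nil error) apart from a genuine decode failure. A standalone sketch of the same contract, with a map standing in for the node's ENR record (illustrative only):

package main

import "fmt"

// lookupPort sketches getPort's three-valued contract: (value, found, error)
// distinguishes a missing entry from a decode failure.
func lookupPort(record map[string]uint, key string) (uint, bool, error) {
	port, ok := record[key]
	if !ok {
		return 0, false, nil // entry simply absent; not an error
	}
	return port, true, nil
}

func main() {
	record := map[string]uint{"quic": 13000}
	if port, ok, err := lookupPort(record, "quic"); err == nil && ok {
		fmt.Println("QUIC port:", port) // QUIC port: 13000
	}
	if _, ok, _ := lookupPort(record, "tcp"); !ok {
		fmt.Println("no TCP port advertised") // caller falls back accordingly
	}
}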
func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
@@ -475,14 +621,14 @@ func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
var ip4 enr.IPv4
var ip6 enr.IPv6
if node.Load(&ip4) == nil {
address, ipErr := multiAddressBuilderWithID(net.IP(ip4).String(), "udp", uint(node.UDP()), id)
address, ipErr := multiAddressBuilderWithID(net.IP(ip4), udp, uint(node.UDP()), id)
if ipErr != nil {
return nil, errors.Wrap(ipErr, "could not build IPv4 address")
}
addresses = append(addresses, address)
}
if node.Load(&ip6) == nil {
address, ipErr := multiAddressBuilderWithID(net.IP(ip6).String(), "udp", uint(node.UDP()), id)
address, ipErr := multiAddressBuilderWithID(net.IP(ip6), udp, uint(node.UDP()), id)
if ipErr != nil {
return nil, errors.Wrap(ipErr, "could not build IPv6 address")
}

@@ -166,8 +166,9 @@ func TestCreateLocalNode(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
// Define ports.
const (
udpPort = 2000
tcpPort = 3000
udpPort = 2000
tcpPort = 3000
quicPort = 3000
)
// Create a private key.
@@ -180,7 +181,7 @@ func TestCreateLocalNode(t *testing.T) {
cfg: tt.cfg,
}
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort)
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort, quicPort)
if tt.expectedError {
require.NotNil(t, err)
return
@@ -237,7 +238,7 @@ func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
}
node, err := s.createLocalNode(pkey, addr, 0, 0)
node, err := s.createLocalNode(pkey, addr, 0, 0, 0)
require.NoError(t, err)
multiAddr := convertToMultiAddr([]*enode.Node{node.Node()})
assert.Equal(t, 0, len(multiAddr), "Invalid ip address converted successfully")
@@ -248,8 +249,9 @@ func TestMultiAddrConversion_OK(t *testing.T) {
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
cfg: &Config{
TCPPort: 0,
UDPPort: 0,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
@@ -315,7 +317,7 @@ func TestHostIsResolved(t *testing.T) {
// As defined in RFC 2606, example.org is a
// reserved example domain name.
exampleHost := "example.org"
exampleIP := "93.184.216.34"
exampleIP := "93.184.215.14"
s := &Service{
cfg: &Config{

@@ -28,7 +28,8 @@ import (
)
func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
port := 2000
const port = 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
@@ -53,7 +54,7 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -98,13 +99,14 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
var addrs []ma.Multiaddr
for _, n := range nodes {
if s.filterPeer(n) {
addr, err := convertToSingleMultiAddr(n)
addrs := make([]ma.Multiaddr, 0)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, addr)
addrs = append(addrs, nodeAddrs...)
}
}
@@ -114,10 +116,11 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
}
func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
const port = 2000
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.TraceLevel)
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
@@ -138,7 +141,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -188,13 +191,13 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
var addrs []ma.Multiaddr
addrs := make([]ma.Multiaddr, 0, len(nodes))
for _, n := range nodes {
if s.filterPeer(n) {
addr, err := convertToSingleMultiAddr(n)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, addr)
addrs = append(addrs, nodeAddrs...)
}
}
if len(addrs) == 0 {

@@ -2,10 +2,14 @@ package p2p
import (
"context"
"runtime"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
)
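// backOffCounter bounds how many consecutive non-viable candidates Next
// inspects before backing off (descriptive comment; intent inferred from Next below).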
const backOffCounter = 50
// filterNodes wraps an iterator such that Next only returns nodes for which
// the 'check' function returns true. This custom implementation also
// checks for context deadlines so that in the event the parent context has
@@ -24,13 +28,21 @@ type filterIter struct {
// Next looks up for the next valid node according to our
// filter criteria.
func (f *filterIter) Next() bool {
lookupCounter := 0
for f.Iterator.Next() {
// Do not excessively perform lookups if we constantly receive non-viable peers.
if lookupCounter > backOffCounter {
lookupCounter = 0
runtime.Gosched()
time.Sleep(30 * time.Second)
}
if f.Context.Err() != nil {
return false
}
if f.check(f.Node()) {
return true
}
lookupCounter++
}
return false
}
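The backoff in Next above throttles discovery when candidates keep failing the filter. A standalone sketch of the same loop shape over a plain slice (shortened sleep; the real iterator waits 30 seconds):

package main

import (
	"fmt"
	"time"
)

const backOff = 50

// nextViable scans candidates, pausing after more than backOff consecutive
// rejections, and returns the first candidate the check accepts.
func nextViable(candidates []int, viable func(int) bool) (int, bool) {
	rejected := 0
	for _, c := range candidates {
		if rejected > backOff {
			rejected = 0
			time.Sleep(10 * time.Millisecond)
		}
		if viable(c) {
			return c, true
		}
		rejected++
	}
	return 0, false
}

func main() {
	candidates := make([]int, 100)
	for i := range candidates {
		candidates[i] = i
	}
	n, ok := nextViable(candidates, func(v int) bool { return v > 60 })
	fmt.Println(n, ok) // 61 true, after a single backoff pause
}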

@@ -1,6 +1,7 @@
package p2p
import (
"net"
"strconv"
"strings"
@@ -12,32 +13,32 @@ import (
var log = logrus.WithField("prefix", "p2p")
func logIPAddr(id peer.ID, addrs ...ma.Multiaddr) {
var correctAddr ma.Multiaddr
for _, addr := range addrs {
if strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/") {
correctAddr = addr
break
if !(strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/")) {
continue
}
}
if correctAddr != nil {
log.WithField(
"multiAddr",
correctAddr.String()+"/p2p/"+id.String(),
addr.String()+"/p2p/"+id.String(),
).Info("Node started p2p server")
}
}
func logExternalIPAddr(id peer.ID, addr string, port uint) {
func logExternalIPAddr(id peer.ID, addr string, tcpPort, quicPort uint) {
if addr != "" {
multiAddr, err := MultiAddressBuilder(addr, port)
multiAddrs, err := MultiAddressBuilder(net.ParseIP(addr), tcpPort, quicPort)
if err != nil {
log.WithError(err).Error("Could not create multiaddress")
return
}
log.WithField(
"multiAddr",
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
for _, multiAddr := range multiAddrs {
log.WithField(
"multiAddr",
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
}
}
}

@@ -11,40 +11,68 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
libp2ptcp "github.com/libp2p/go-libp2p/p2p/transport/tcp"
gomplex "github.com/libp2p/go-mplex"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/features"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
type internetProtocol string
const (
udp = "udp"
tcp = "tcp"
quic = "quic"
)
// MultiAddressBuilder takes in an ip address string and port to produce a go multiaddr format.
func MultiAddressBuilder(ipAddr string, port uint) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
func MultiAddressBuilder(ip net.IP, tcpPort, quicPort uint) ([]ma.Multiaddr, error) {
ipType, err := extractIpType(ip)
if err != nil {
return nil, errors.Wrap(err, "unable to determine IP type")
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port))
// Example: /ip4/1.2.3.4/tcp/5678
multiaddrStr := fmt.Sprintf("/%s/%s/tcp/%d", ipType, ip, tcpPort)
multiAddrTCP, err := ma.NewMultiaddr(multiaddrStr)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce TCP multiaddr format from %s:%d", ip, tcpPort)
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/tcp/%d", ipAddr, port))
multiaddrs := []ma.Multiaddr{multiAddrTCP}
if features.Get().EnableQUIC {
// Example: /ip4/1.2.3.4/udp/5678/quic-v1
multiAddrQUIC, err := ma.NewMultiaddr(fmt.Sprintf("/%s/%s/udp/%d/quic-v1", ipType, ip, quicPort))
if err != nil {
return nil, errors.Wrapf(err, "cannot produce QUIC multiaddr format from %s:%d", ip, quicPort)
}
multiaddrs = append(multiaddrs, multiAddrQUIC)
}
return multiaddrs, nil
}
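When QUIC is enabled, MultiAddressBuilder emits two addresses for the same IP: a /tcp/<port> multiaddr and a /udp/<port>/quic-v1 multiaddr (QUIC rides on UDP). A tiny sketch of the string shapes (formatting only, assumed example values):

package main

import "fmt"

func main() {
	ip, tcpPort, quicPort := "1.2.3.4", 13000, 13000
	fmt.Printf("/ip4/%s/tcp/%d\n", ip, tcpPort)          // TCP transport address
	fmt.Printf("/ip4/%s/udp/%d/quic-v1\n", ip, quicPort) // QUIC transport address
}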
// buildOptions for the libp2p host.
func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Option, error) {
cfg := s.cfg
listen, err := MultiAddressBuilder(ip.String(), cfg.TCPPort)
multiaddrs, err := MultiAddressBuilder(ip, cfg.TCPPort, cfg.QUICPort)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", ip.String(), cfg.TCPPort)
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", ip, cfg.TCPPort)
}
if cfg.LocalIP != "" {
if net.ParseIP(cfg.LocalIP) == nil {
localIP := net.ParseIP(cfg.LocalIP)
if localIP == nil {
return nil, errors.Errorf("invalid local ip provided: %s:%d", cfg.LocalIP, cfg.TCPPort)
}
listen, err = MultiAddressBuilder(cfg.LocalIP, cfg.TCPPort)
multiaddrs, err = MultiAddressBuilder(localIP, cfg.TCPPort, cfg.QUICPort)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", cfg.LocalIP, cfg.TCPPort)
}
@@ -62,36 +90,43 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
options := []libp2p.Option{
privKeyOption(priKey),
libp2p.ListenAddrs(listen),
libp2p.ListenAddrs(multiaddrs...),
libp2p.UserAgent(version.BuildData()),
libp2p.ConnectionGater(s),
libp2p.Transport(tcp.NewTCPTransport),
libp2p.Transport(libp2ptcp.NewTCPTransport),
libp2p.DefaultMuxers,
libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport),
libp2p.Security(noise.ID, noise.New),
libp2p.Ping(false), // Disable Ping Service.
}
if features.Get().EnableQUIC {
options = append(options, libp2p.Transport(libp2pquic.NewTransport))
}
if cfg.EnableUPnP {
options = append(options, libp2p.NATPortMap()) // Allow to use UPnP
}
if cfg.RelayNodeAddr != "" {
options = append(options, libp2p.AddrsFactory(withRelayAddrs(cfg.RelayNodeAddr)))
} else {
// Disable relay if it has not been set.
options = append(options, libp2p.DisableRelay())
}
if cfg.HostAddress != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
external, err := MultiAddressBuilder(cfg.HostAddress, cfg.TCPPort)
externalMultiaddrs, err := MultiAddressBuilder(net.ParseIP(cfg.HostAddress), cfg.TCPPort, cfg.QUICPort)
if err != nil {
log.WithError(err).Error("Unable to create external multiaddress")
} else {
addrs = append(addrs, external)
addrs = append(addrs, externalMultiaddrs...)
}
return addrs
}))
}
if cfg.HostDNS != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
external, err := ma.NewMultiaddr(fmt.Sprintf("/dns4/%s/tcp/%d", cfg.HostDNS, cfg.TCPPort))
@@ -107,21 +142,47 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
if features.Get().DisableResourceManager {
options = append(options, libp2p.ResourceManager(&network.NullResourceManager{}))
}
return options, nil
}
func multiAddressBuilderWithID(ipAddr, protocol string, port uint, id peer.ID) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
func extractIpType(ip net.IP) (string, error) {
if ip.To4() != nil {
return "ip4", nil
}
if id.String() == "" {
return nil, errors.New("empty peer id given")
if ip.To16() != nil {
return "ip6", nil
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/%s/%d/p2p/%s", ipAddr, protocol, port, id.String()))
return "", errors.Errorf("provided IP address is neither IPv4 nor IPv6: %s", ip)
}
func multiAddressBuilderWithID(ip net.IP, protocol internetProtocol, port uint, id peer.ID) (ma.Multiaddr, error) {
var multiaddrStr string
if id == "" {
return nil, errors.New("empty peer id given")
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/%s/%d/p2p/%s", ipAddr, protocol, port, id.String()))
ipType, err := extractIpType(ip)
if err != nil {
return nil, errors.Wrap(err, "unable to determine IP type")
}
switch protocol {
case udp, tcp:
// Example with UDP: /ip4/1.2.3.4/udp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
// Example with TCP: /ip6/1.2.3.4/tcp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
multiaddrStr = fmt.Sprintf("/%s/%s/%s/%d/p2p/%s", ipType, ip, protocol, port, id)
case quic:
// Example: /ip4/1.2.3.4/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
multiaddrStr = fmt.Sprintf("/%s/%s/udp/%d/quic-v1/p2p/%s", ipType, ip, port, id)
default:
return nil, errors.Errorf("unsupported protocol: %s", protocol)
}
return ma.NewMultiaddr(multiaddrStr)
}
// Adds a private key to the libp2p option if the option was provided.

@@ -13,6 +13,7 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -88,30 +89,34 @@ func TestIPV6Support(t *testing.T) {
lNode := enode.NewLocalNode(db, key)
mockIPV6 := net.IP{0xff, 0x02, 0xAA, 0, 0x1F, 0, 0x2E, 0, 0, 0x36, 0x45, 0, 0, 0, 0, 0x02}
lNode.Set(enr.IP(mockIPV6))
ma, err := convertToSingleMultiAddr(lNode.Node())
mas, err := retrieveMultiAddrsFromNode(lNode.Node())
if err != nil {
t.Fatal(err)
}
ipv6Exists := false
for _, p := range ma.Protocols() {
if p.Name == "ip4" {
t.Error("Got ip4 address instead of ip6")
for _, ma := range mas {
ipv6Exists := false
for _, p := range ma.Protocols() {
if p.Name == "ip4" {
t.Error("Got ip4 address instead of ip6")
}
if p.Name == "ip6" {
ipv6Exists = true
}
}
if p.Name == "ip6" {
ipv6Exists = true
if !ipv6Exists {
t.Error("Multiaddress did not have ipv6 protocol")
}
}
if !ipv6Exists {
t.Error("Multiaddress did not have ipv6 protocol")
}
}
func TestDefaultMultiplexers(t *testing.T) {
var cfg libp2p.Config
_ = cfg
p2pCfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
StateNotifier: &mock.MockStateNotifier{},
}
svc := &Service{cfg: p2pCfg}
@@ -127,5 +132,57 @@ func TestDefaultMultiplexers(t *testing.T) {
assert.Equal(t, protocol.ID("/yamux/1.0.0"), cfg.Muxers[0].ID)
assert.Equal(t, protocol.ID("/mplex/6.7.0"), cfg.Muxers[1].ID)
}
func TestMultiAddressBuilderWithID(t *testing.T) {
testCases := []struct {
name string
ip net.IP
protocol internetProtocol
port uint
id string
expectedMultiaddrStr string
}{
{
name: "UDP",
ip: net.IPv4(192, 168, 0, 1),
protocol: udp,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/udp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
{
name: "TCP",
ip: net.IPv4(192, 168, 0, 1),
protocol: tcp,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/tcp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
{
name: "QUIC",
ip: net.IPv4(192, 168, 0, 1),
protocol: quic,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
id, err := hex.DecodeString(tt.id)
require.NoError(t, err)
actualMultiaddr, err := multiAddressBuilderWithID(tt.ip, tt.protocol, tt.port, peer.ID(id))
require.NoError(t, err)
actualMultiaddrStr := actualMultiaddr.String()
require.Equal(t, tt.expectedMultiaddrStr, actualMultiaddrStr)
})
}
}

@@ -26,6 +26,7 @@ import (
"context"
"math"
"sort"
"strings"
"time"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -76,6 +77,13 @@ const (
MaxBackOffDuration = 5000
)
type InternetProtocol string
const (
TCP = "tcp"
QUIC = "quic"
)
// Status is the structure holding the peer status information.
type Status struct {
ctx context.Context
@@ -449,6 +457,19 @@ func (p *Status) InboundConnected() []peer.ID {
return peers
}
// InboundConnectedWithProtocol returns the current batch of inbound peers that are connected with a given protocol.
func (p *Status) InboundConnectedWithProtocol(protocol InternetProtocol) []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
return peers
}
// Outbound returns the current batch of outbound peers.
func (p *Status) Outbound() []peer.ID {
p.store.RLock()
@@ -475,7 +496,20 @@ func (p *Status) OutboundConnected() []peer.ID {
return peers
}
// Active returns the peers that are connecting or connected.
// OutboundConnectedWithProtocol returns the current batch of outbound peers that are connected with a given protocol.
func (p *Status) OutboundConnectedWithProtocol(protocol InternetProtocol) []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
return peers
}
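Both accessors filter on a substring of the stored multiaddr, so a TCP peer's address contains "tcp" and a QUIC peer's contains "quic". A standalone sketch of that filter (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// countWithProtocol sketches the substring filter used by
// InboundConnectedWithProtocol and OutboundConnectedWithProtocol.
func countWithProtocol(addrs []string, protocol string) int {
	n := 0
	for _, a := range addrs {
		if strings.Contains(a, protocol) {
			n++
		}
	}
	return n
}

func main() {
	addrs := []string{
		"/ip4/127.0.0.1/tcp/33333",
		"/ip4/192.168.1.3/udp/13000/quic-v1",
	}
	fmt.Println(countWithProtocol(addrs, "tcp"))  // 1
	fmt.Println(countWithProtocol(addrs, "quic")) // 1
}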
// Active returns the peers that are active (connecting or connected).
func (p *Status) Active() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
@@ -514,7 +548,7 @@ func (p *Status) Disconnected() []peer.ID {
return peers
}
// Inactive returns the peers that are disconnecting or disconnected.
// Inactive returns the peers that are inactive (disconnecting or disconnected).
func (p *Status) Inactive() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()

@@ -1111,6 +1111,87 @@ func TestInbound(t *testing.T) {
assert.Equal(t, inbound.String(), result[0].String())
}
func TestInboundConnected(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnecting)
result := p.InboundConnected()
require.Equal(t, 1, len(result))
assert.Equal(t, inbound.String(), result[0].String())
}
func TestInboundConnectedWithProtocol(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrsTCP := []string{
"/ip4/127.0.0.1/tcp/33333",
"/ip4/127.0.0.2/tcp/44444",
}
addrsQUIC := []string{
"/ip4/192.168.1.3/udp/13000/quic-v1",
"/ip4/192.168.1.4/udp/14000/quic-v1",
"/ip4/192.168.1.5/udp/14000/quic-v1",
}
expectedTCP := make(map[string]bool, len(addrsTCP))
for _, addr := range addrsTCP {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
expectedTCP[peer.String()] = true
}
expectedQUIC := make(map[string]bool, len(addrsQUIC))
for _, addr := range addrsQUIC {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
expectedQUIC[peer.String()] = true
}
// TCP
// ---
actualTCP := p.InboundConnectedWithProtocol(peers.TCP)
require.Equal(t, len(expectedTCP), len(actualTCP))
for _, actualPeer := range actualTCP {
_, ok := expectedTCP[actualPeer.String()]
require.Equal(t, true, ok)
}
// QUIC
// ----
actualQUIC := p.InboundConnectedWithProtocol(peers.QUIC)
require.Equal(t, len(expectedQUIC), len(actualQUIC))
for _, actualPeer := range actualQUIC {
_, ok := expectedQUIC[actualPeer.String()]
require.Equal(t, true, ok)
}
}
func TestOutbound(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
@@ -1130,6 +1211,87 @@ func TestOutbound(t *testing.T) {
assert.Equal(t, outbound.String(), result[0].String())
}
func TestOutboundConnected(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnecting)
result := p.OutboundConnected()
require.Equal(t, 1, len(result))
assert.Equal(t, inbound.String(), result[0].String())
}
func TestOutboundConnectedWithProtocol(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrsTCP := []string{
"/ip4/127.0.0.1/tcp/33333",
"/ip4/127.0.0.2/tcp/44444",
}
addrsQUIC := []string{
"/ip4/192.168.1.3/udp/13000/quic-v1",
"/ip4/192.168.1.4/udp/14000/quic-v1",
"/ip4/192.168.1.5/udp/14000/quic-v1",
}
expectedTCP := make(map[string]bool, len(addrsTCP))
for _, addr := range addrsTCP {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
expectedTCP[peer.String()] = true
}
expectedQUIC := make(map[string]bool, len(addrsQUIC))
for _, addr := range addrsQUIC {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
expectedQUIC[peer.String()] = true
}
// TCP
// ---
actualTCP := p.OutboundConnectedWithProtocol(peers.TCP)
require.Equal(t, len(expectedTCP), len(actualTCP))
for _, actualPeer := range actualTCP {
_, ok := expectedTCP[actualPeer.String()]
require.Equal(t, true, ok)
}
// QUIC
// ----
actualQUIC := p.OutboundConnectedWithProtocol(peers.QUIC)
require.Equal(t, len(expectedQUIC), len(actualQUIC))
for _, actualPeer := range actualQUIC {
_, ok := expectedQUIC[actualPeer.String()]
require.Equal(t, true, ok)
}
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState) peer.ID {
// Set up some peers with different states

View File

@@ -24,6 +24,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
prysmnetwork "github.com/prysmaticlabs/prysm/v5/network"
@@ -124,31 +125,34 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
if err != nil {
return nil, errors.Wrapf(err, "failed to build p2p options")
}
// Sets mplex timeouts
configureMplex()
h, err := libp2p.New(opts...)
if err != nil {
log.WithError(err).Error("Failed to create p2p host")
return nil, err
return nil, errors.Wrapf(err, "failed to create p2p host")
}
s.host = h
// Gossipsub registration is done before we add in any new peers
// due to libp2p's gossipsub implementation not taking into
// account previously added peers when creating the gossipsub
// object.
psOpts := s.pubsubOptions()
// Set the pubsub global parameters that we require.
setPubSubParameters()
// Reinitialize them in the event we are running a custom config.
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
gs, err := pubsub.NewGossipSub(s.ctx, s.host, psOpts...)
if err != nil {
log.WithError(err).Error("Failed to start pubsub")
return nil, err
return nil, errors.Wrapf(err, "failed to create p2p pubsub")
}
s.pubsub = gs
s.peers = peers.NewStatus(ctx, &peers.StatusConfig{
@@ -213,7 +217,7 @@ func (s *Service) Start() {
if len(s.cfg.StaticPeers) > 0 {
addrs, err := PeersFromStringAddrs(s.cfg.StaticPeers)
if err != nil {
log.WithError(err).Error("Could not connect to static peer")
log.WithError(err).Error("could not convert ENR to multiaddr")
}
// Set trusted peers for those that are provided as static addresses.
pids := peerIdsFromMultiAddrs(addrs)
@@ -232,11 +236,24 @@ func (s *Service) Start() {
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshENR)
async.RunEvery(s.ctx, 1*time.Minute, func() {
log.WithFields(logrus.Fields{
"inbound": len(s.peers.InboundConnected()),
"outbound": len(s.peers.OutboundConnected()),
"activePeers": len(s.peers.Active()),
}).Info("Peer summary")
inboundQUICCount := len(s.peers.InboundConnectedWithProtocol(peers.QUIC))
inboundTCPCount := len(s.peers.InboundConnectedWithProtocol(peers.TCP))
outboundQUICCount := len(s.peers.OutboundConnectedWithProtocol(peers.QUIC))
outboundTCPCount := len(s.peers.OutboundConnectedWithProtocol(peers.TCP))
total := inboundQUICCount + inboundTCPCount + outboundQUICCount + outboundTCPCount
fields := logrus.Fields{
"inboundTCP": inboundTCPCount,
"outboundTCP": outboundTCPCount,
"total": total,
}
if features.Get().EnableQUIC {
fields["inboundQUIC"] = inboundQUICCount
fields["outboundQUIC"] = outboundQUICCount
}
log.WithFields(fields).Info("Connected peers")
})
multiAddrs := s.host.Network().ListenAddresses()
@@ -244,9 +261,10 @@ func (s *Service) Start() {
p2pHostAddress := s.cfg.HostAddress
p2pTCPPort := s.cfg.TCPPort
p2pQUICPort := s.cfg.QUICPort
if p2pHostAddress != "" {
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort)
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort, p2pQUICPort)
verifyConnectivity(p2pHostAddress, p2pTCPPort, "tcp")
}

View File

@@ -102,8 +102,9 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
ClockWaiter: cs,
}
s, err := NewService(context.Background(), cfg)
@@ -147,8 +148,9 @@ func TestService_Start_NoDiscoverFlag(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
StateNotifier: &mock.MockStateNotifier{},
NoDiscovery: true, // <-- no s.dv5Listener is created
ClockWaiter: cs,

View File

@@ -87,12 +87,28 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
return false, errors.Errorf("unable to find requisite number of peers for topic %s - "+
"only %d out of %d peers were able to be found", topic, currNum, threshold)
}
nodes := enode.ReadNodes(iterator, int(params.BeaconNetworkConfig().MinimumPeersInSubnetSearch))
nodeCount := int(params.BeaconNetworkConfig().MinimumPeersInSubnetSearch)
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
var err error
wantedICount := mathutil.Min(uint64(nodeCount), uint64(flags.Get().MaxConcurrentDials))
nodeCount, err = mathutil.Int(wantedICount)
if err != nil {
log.WithError(err).Error("Could not get wanted count")
continue
}
}
nodes := enode.ReadNodes(iterator, nodeCount)
for _, node := range nodes {
info, _, err := convertToAddrInfo(node)
if err != nil {
continue
}
if info == nil {
continue
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {
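The hunk above caps how many nodes are read from the discovery iterator when a dial limit is active, with a checked uint64-to-int conversion. A standalone sketch of the same pattern using only the standard library (Prysm's mathutil helpers are stood in for by plain code; cappedNodeCount is illustrative):

package main

import (
    "fmt"
    "math"
)

// cappedNodeCount returns the smaller of the default node count and the
// configured dial limit, failing if the result would overflow int.
func cappedNodeCount(nodeCount int, maxDials uint64, limitActive bool) (int, error) {
    if !limitActive {
        return nodeCount, nil
    }
    wanted := uint64(nodeCount)
    if maxDials < wanted {
        wanted = maxDials
    }
    if wanted > uint64(math.MaxInt) {
        return 0, fmt.Errorf("node count %d overflows int", wanted)
    }
    return int(wanted), nil
}

func main() {
    n, err := cappedNodeCount(20, 5, true)
    fmt.Println(n, err) // 5 <nil>
}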

View File

@@ -66,7 +66,7 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
genesisTime := time.Now()
bootNodeService := &Service{
cfg: &Config{TCPPort: 2000, UDPPort: 3000},
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -89,8 +89,9 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
service, err := NewService(ctx, &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
TCPPort: uint(2000 + i),
UDPPort: uint(3000 + i),
UDPPort: uint(2000 + i),
TCPPort: uint(3000 + i),
QUICPort: uint(3000 + i),
})
require.NoError(t, err)
@@ -133,8 +134,9 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
TCPPort: 2010,
UDPPort: 3010,
UDPPort: 2010,
TCPPort: 3010,
QUICPort: 3010,
}
service, err := NewService(ctx, cfg)

View File

@@ -50,7 +50,7 @@ func ensurePeerConnections(ctx context.Context, h host.Host, peers *peers.Status
c := h.Network().ConnsToPeer(p.ID)
if len(c) == 0 {
if err := connectWithTimeout(ctx, h, p); err != nil {
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("Failed to reconnect to peer")
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("failed to reconnect to peer")
continue
}
}

View File

@@ -13,7 +13,7 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",

View File

@@ -37,6 +37,7 @@ go_library(
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
@@ -121,6 +122,7 @@ go_test(
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_stretchr_testify//mock:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],

View File

@@ -21,6 +21,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/prysm/v1alpha1/validator"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
@@ -32,6 +33,7 @@ import (
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -42,7 +44,8 @@ const (
)
var (
errNilBlock = errors.New("nil block")
errNilBlock = errors.New("nil block")
errEquivocatedBlock = errors.New("block is equivocated")
)
type handled bool
@@ -1254,6 +1257,16 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
},
}
if err = s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.broadcastSeenBlockSidecars(ctx, b, genericBlock.GetDeneb().Blobs, genericBlock.GetDeneb().KzgProofs); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecars")
}
}
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
@@ -1383,6 +1396,16 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
consensusBlock, err = denebBlockContents.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(consensusBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.broadcastSeenBlockSidecars(ctx, b, consensusBlock.GetDeneb().Blobs, consensusBlock.GetDeneb().KzgProofs); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecars")
}
}
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
@@ -1547,7 +1570,7 @@ func (s *Server) validateConsensus(ctx context.Context, blk interfaces.ReadOnlyS
func (s *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error {
if s.ForkchoiceFetcher.HighestReceivedBlockSlot() == blk.Slot() {
return fmt.Errorf("block for slot %d already exists in fork choice", blk.Slot())
return errors.Wrapf(errEquivocatedBlock, "block for slot %d already exists in fork choice", blk.Slot())
}
return nil
}
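The publish handlers above branch on the equivocation sentinel with errors.Is, which works because pkg/errors (v0.9+) wraps with Unwrap support. A minimal sketch of the pattern, with illustrative names:

package main

import (
    stderrors "errors"
    "fmt"

    "github.com/pkg/errors"
)

var errEquivocated = stderrors.New("block is equivocated")

// validate wraps the sentinel so callers can both read a contextual
// message and branch on the underlying cause.
func validate(slot, highestSeen uint64) error {
    if slot == highestSeen {
        return errors.Wrapf(errEquivocated, "block for slot %d already exists in fork choice", slot)
    }
    return nil
}

func main() {
    err := validate(7, 7)
    fmt.Println(stderrors.Is(err, errEquivocated)) // true
}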
@@ -2072,3 +2095,37 @@ func (s *Server) GetDepositSnapshot(w http.ResponseWriter, r *http.Request) {
},
)
}
// Broadcast blob sidecars even if the block of the same slot has been imported.
// To ensure safety, we will only broadcast blob sidecars if the header references the same block that was previously seen.
// Otherwise, a proposer could get slashed through a different blob sidecar header reference.
func (s *Server) broadcastSeenBlockSidecars(
ctx context.Context,
b interfaces.SignedBeaconBlock,
blobs [][]byte,
kzgProofs [][]byte) error {
scs, err := validator.BuildBlobSidecars(b, blobs, kzgProofs)
if err != nil {
return err
}
for _, sc := range scs {
r, err := sc.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
log.WithError(err).Error("Failed to hash block header for blob sidecar")
continue
}
if !s.FinalizationFetcher.InForkchoice(r) {
log.WithField("root", fmt.Sprintf("%#x", r)).Debug("Block header not in forkchoice, skipping blob sidecar broadcast")
continue
}
if err := s.Broadcaster.BroadcastBlob(ctx, sc.Index, sc); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecar for index ", sc.Index)
continue // skip the success log below for a sidecar that failed to broadcast
}
log.WithFields(logrus.Fields{
"index": sc.Index,
"slot": sc.SignedBlockHeader.Header.Slot,
"kzgCommitment": fmt.Sprintf("%#x", sc.KzgCommitment),
}).Info("Broadcasted blob sidecar for already seen block")
}
return nil
}

View File

@@ -13,6 +13,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
mockp2p "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
logTest "github.com/sirupsen/logrus/hooks/test"
"go.uber.org/mock/gomock"
"github.com/gorilla/mux"
@@ -2472,7 +2474,9 @@ func TestValidateEquivocation(t *testing.T) {
require.NoError(t, err)
blk.SetSlot(st.Slot())
assert.ErrorContains(t, "already exists", server.validateEquivocation(blk.Block()))
err = server.validateEquivocation(blk.Block())
assert.ErrorContains(t, "already exists", err)
require.ErrorIs(t, err, errEquivocatedBlock)
})
}
@@ -3630,3 +3634,27 @@ func TestGetDepositSnapshot(t *testing.T) {
assert.Equal(t, finalized, len(resp.Finalized))
})
}
func TestServer_broadcastBlobSidecars(t *testing.T) {
hook := logTest.NewGlobal()
blockToPropose := util.NewBeaconBlockContentsDeneb()
blockToPropose.Blobs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.KzgProofs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.Block.Block.Body.BlobKzgCommitments = [][]byte{bytesutil.PadTo([]byte("kc"), 48), bytesutil.PadTo([]byte("kc1"), 48), bytesutil.PadTo([]byte("kc2"), 48)}
d := &eth.GenericSignedBeaconBlock_Deneb{Deneb: blockToPropose}
b := &eth.GenericSignedBeaconBlock{Block: d}
server := &Server{
Broadcaster: &mockp2p.MockBroadcaster{},
FinalizationFetcher: &chainMock.ChainService{NotFinalized: true},
}
blk, err := blocks.NewSignedBeaconBlock(b.Block)
require.NoError(t, err)
require.NoError(t, server.broadcastSeenBlockSidecars(context.Background(), blk, b.GetDeneb().Blobs, b.GetDeneb().KzgProofs))
require.LogsDoNotContain(t, hook, "Broadcasted blob sidecar for already seen block")
server.FinalizationFetcher = &chainMock.ChainService{NotFinalized: false}
require.NoError(t, server.broadcastSeenBlockSidecars(context.Background(), blk, b.GetDeneb().Blobs, b.GetDeneb().KzgProofs))
require.LogsContain(t, hook, "Broadcasted blob sidecar for already seen block")
}

View File

@@ -369,16 +369,7 @@ func decodeIds(w http.ResponseWriter, st state.BeaconState, rawIds []string, ign
func valsFromIds(w http.ResponseWriter, st state.BeaconState, ids []primitives.ValidatorIndex) ([]state.ReadOnlyValidator, bool) {
var vals []state.ReadOnlyValidator
if len(ids) == 0 {
allVals := st.Validators()
vals = make([]state.ReadOnlyValidator, len(allVals))
for i, val := range allVals {
readOnlyVal, err := statenative.NewValidator(val)
if err != nil {
httputil.HandleError(w, "Could not convert validator: "+err.Error(), http.StatusInternalServerError)
return nil, false
}
vals[i] = readOnlyVal
}
vals = st.ValidatorsReadOnly()
} else {
vals = make([]state.ReadOnlyValidator, 0, len(ids))
for _, id := range ids {

View File

@@ -78,6 +78,8 @@ func TestGetSpec(t *testing.T) {
config.CapellaForkEpoch = 103
config.DenebForkVersion = []byte("DenebForkVersion")
config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24
@@ -170,293 +172,309 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 129, len(data))
assert.Equal(t, 136, len(data))
for k, v := range data {
switch k {
case "CONFIG_NAME":
assert.Equal(t, "ConfigName", v)
case "PRESET_BASE":
assert.Equal(t, "PresetBase", v)
case "MAX_COMMITTEES_PER_SLOT":
assert.Equal(t, "1", v)
case "TARGET_COMMITTEE_SIZE":
assert.Equal(t, "2", v)
case "MAX_VALIDATORS_PER_COMMITTEE":
assert.Equal(t, "3", v)
case "MIN_PER_EPOCH_CHURN_LIMIT":
assert.Equal(t, "4", v)
case "CHURN_LIMIT_QUOTIENT":
assert.Equal(t, "5", v)
case "SHUFFLE_ROUND_COUNT":
assert.Equal(t, "6", v)
case "MIN_GENESIS_ACTIVE_VALIDATOR_COUNT":
assert.Equal(t, "7", v)
case "MIN_GENESIS_TIME":
assert.Equal(t, "8", v)
case "HYSTERESIS_QUOTIENT":
assert.Equal(t, "9", v)
case "HYSTERESIS_DOWNWARD_MULTIPLIER":
assert.Equal(t, "10", v)
case "HYSTERESIS_UPWARD_MULTIPLIER":
assert.Equal(t, "11", v)
case "SAFE_SLOTS_TO_UPDATE_JUSTIFIED":
assert.Equal(t, "0", v)
case "ETH1_FOLLOW_DISTANCE":
assert.Equal(t, "13", v)
case "TARGET_AGGREGATORS_PER_COMMITTEE":
assert.Equal(t, "14", v)
case "RANDOM_SUBNETS_PER_VALIDATOR":
assert.Equal(t, "15", v)
case "EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION":
assert.Equal(t, "16", v)
case "SECONDS_PER_ETH1_BLOCK":
assert.Equal(t, "17", v)
case "DEPOSIT_CHAIN_ID":
assert.Equal(t, "18", v)
case "DEPOSIT_NETWORK_ID":
assert.Equal(t, "19", v)
case "DEPOSIT_CONTRACT_ADDRESS":
assert.Equal(t, "DepositContractAddress", v)
case "MIN_DEPOSIT_AMOUNT":
assert.Equal(t, "20", v)
case "MAX_EFFECTIVE_BALANCE":
assert.Equal(t, "21", v)
case "EJECTION_BALANCE":
assert.Equal(t, "22", v)
case "EFFECTIVE_BALANCE_INCREMENT":
assert.Equal(t, "23", v)
case "GENESIS_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("GenesisForkVersion")), v)
case "ALTAIR_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("AltairForkVersion")), v)
case "ALTAIR_FORK_EPOCH":
assert.Equal(t, "100", v)
case "BELLATRIX_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("BellatrixForkVersion")), v)
case "BELLATRIX_FORK_EPOCH":
assert.Equal(t, "101", v)
case "CAPELLA_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("CapellaForkVersion")), v)
case "CAPELLA_FORK_EPOCH":
assert.Equal(t, "103", v)
case "DENEB_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("DenebForkVersion")), v)
case "DENEB_FORK_EPOCH":
assert.Equal(t, "105", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":
assert.Equal(t, "0x62", v)
case "ETH1_ADDRESS_WITHDRAWAL_PREFIX":
assert.Equal(t, "0x63", v)
case "GENESIS_DELAY":
assert.Equal(t, "24", v)
case "SECONDS_PER_SLOT":
assert.Equal(t, "25", v)
case "MIN_ATTESTATION_INCLUSION_DELAY":
assert.Equal(t, "26", v)
case "SLOTS_PER_EPOCH":
assert.Equal(t, "27", v)
case "MIN_SEED_LOOKAHEAD":
assert.Equal(t, "28", v)
case "MAX_SEED_LOOKAHEAD":
assert.Equal(t, "29", v)
case "EPOCHS_PER_ETH1_VOTING_PERIOD":
assert.Equal(t, "30", v)
case "SLOTS_PER_HISTORICAL_ROOT":
assert.Equal(t, "31", v)
case "MIN_VALIDATOR_WITHDRAWABILITY_DELAY":
assert.Equal(t, "32", v)
case "SHARD_COMMITTEE_PERIOD":
assert.Equal(t, "33", v)
case "MIN_EPOCHS_TO_INACTIVITY_PENALTY":
assert.Equal(t, "34", v)
case "EPOCHS_PER_HISTORICAL_VECTOR":
assert.Equal(t, "35", v)
case "EPOCHS_PER_SLASHINGS_VECTOR":
assert.Equal(t, "36", v)
case "HISTORICAL_ROOTS_LIMIT":
assert.Equal(t, "37", v)
case "VALIDATOR_REGISTRY_LIMIT":
assert.Equal(t, "38", v)
case "BASE_REWARD_FACTOR":
assert.Equal(t, "39", v)
case "WHISTLEBLOWER_REWARD_QUOTIENT":
assert.Equal(t, "40", v)
case "PROPOSER_REWARD_QUOTIENT":
assert.Equal(t, "41", v)
case "INACTIVITY_PENALTY_QUOTIENT":
assert.Equal(t, "42", v)
case "HF1_INACTIVITY_PENALTY_QUOTIENT":
assert.Equal(t, "43", v)
case "MIN_SLASHING_PENALTY_QUOTIENT":
assert.Equal(t, "44", v)
case "HF1_MIN_SLASHING_PENALTY_QUOTIENT":
assert.Equal(t, "45", v)
case "PROPORTIONAL_SLASHING_MULTIPLIER":
assert.Equal(t, "46", v)
case "HF1_PROPORTIONAL_SLASHING_MULTIPLIER":
assert.Equal(t, "47", v)
case "MAX_PROPOSER_SLASHINGS":
assert.Equal(t, "48", v)
case "MAX_ATTESTER_SLASHINGS":
assert.Equal(t, "49", v)
case "MAX_ATTESTATIONS":
assert.Equal(t, "50", v)
case "MAX_DEPOSITS":
assert.Equal(t, "51", v)
case "MAX_VOLUNTARY_EXITS":
assert.Equal(t, "52", v)
case "MAX_BLOBS_PER_BLOCK":
assert.Equal(t, "4", v)
case "TIMELY_HEAD_FLAG_INDEX":
assert.Equal(t, "0x35", v)
case "TIMELY_SOURCE_FLAG_INDEX":
assert.Equal(t, "0x36", v)
case "TIMELY_TARGET_FLAG_INDEX":
assert.Equal(t, "0x37", v)
case "TIMELY_HEAD_WEIGHT":
assert.Equal(t, "56", v)
case "TIMELY_SOURCE_WEIGHT":
assert.Equal(t, "57", v)
case "TIMELY_TARGET_WEIGHT":
assert.Equal(t, "58", v)
case "SYNC_REWARD_WEIGHT":
assert.Equal(t, "59", v)
case "WEIGHT_DENOMINATOR":
assert.Equal(t, "60", v)
case "TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE":
assert.Equal(t, "61", v)
case "SYNC_COMMITTEE_SUBNET_COUNT":
assert.Equal(t, "62", v)
case "SYNC_COMMITTEE_SIZE":
assert.Equal(t, "63", v)
case "SYNC_PUBKEYS_PER_AGGREGATE":
assert.Equal(t, "64", v)
case "INACTIVITY_SCORE_BIAS":
assert.Equal(t, "65", v)
case "EPOCHS_PER_SYNC_COMMITTEE_PERIOD":
assert.Equal(t, "66", v)
case "INACTIVITY_PENALTY_QUOTIENT_ALTAIR":
assert.Equal(t, "67", v)
case "MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR":
assert.Equal(t, "68", v)
case "PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR":
assert.Equal(t, "69", v)
case "INACTIVITY_SCORE_RECOVERY_RATE":
assert.Equal(t, "70", v)
case "MIN_SYNC_COMMITTEE_PARTICIPANTS":
assert.Equal(t, "71", v)
case "PROPOSER_WEIGHT":
assert.Equal(t, "8", v)
case "DOMAIN_BEACON_PROPOSER":
assert.Equal(t, "0x30303031", v)
case "DOMAIN_BEACON_ATTESTER":
assert.Equal(t, "0x30303032", v)
case "DOMAIN_RANDAO":
assert.Equal(t, "0x30303033", v)
case "DOMAIN_DEPOSIT":
assert.Equal(t, "0x30303034", v)
case "DOMAIN_VOLUNTARY_EXIT":
assert.Equal(t, "0x30303035", v)
case "DOMAIN_SELECTION_PROOF":
assert.Equal(t, "0x30303036", v)
case "DOMAIN_AGGREGATE_AND_PROOF":
assert.Equal(t, "0x30303037", v)
case "DOMAIN_APPLICATION_MASK":
assert.Equal(t, "0x31303030", v)
case "DOMAIN_SYNC_COMMITTEE":
assert.Equal(t, "0x07000000", v)
case "DOMAIN_SYNC_COMMITTEE_SELECTION_PROOF":
assert.Equal(t, "0x08000000", v)
case "DOMAIN_CONTRIBUTION_AND_PROOF":
assert.Equal(t, "0x09000000", v)
case "DOMAIN_BLS_TO_EXECUTION_CHANGE":
assert.Equal(t, "0x0a000000", v)
case "DOMAIN_APPLICATION_BUILDER":
assert.Equal(t, "0x00000001", v)
case "DOMAIN_BLOB_SIDECAR":
assert.Equal(t, "0x00000000", v)
case "TRANSITION_TOTAL_DIFFICULTY":
assert.Equal(t, "0", v)
case "TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":
assert.Equal(t, "72", v)
case "TERMINAL_BLOCK_HASH":
s, ok := v.(string)
require.Equal(t, true, ok)
assert.Equal(t, common.HexToHash("TerminalBlockHash"), common.HexToHash(s))
case "TERMINAL_TOTAL_DIFFICULTY":
assert.Equal(t, "73", v)
case "DefaultFeeRecipient":
assert.Equal(t, common.HexToAddress("DefaultFeeRecipient"), v)
case "PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX":
assert.Equal(t, "3", v)
case "MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX":
assert.Equal(t, "32", v)
case "INACTIVITY_PENALTY_QUOTIENT_BELLATRIX":
assert.Equal(t, "16777216", v)
case "PROPOSER_SCORE_BOOST":
assert.Equal(t, "40", v)
case "INTERVALS_PER_SLOT":
assert.Equal(t, "3", v)
case "MAX_WITHDRAWALS_PER_PAYLOAD":
assert.Equal(t, "74", v)
case "MAX_BLS_TO_EXECUTION_CHANGES":
assert.Equal(t, "75", v)
case "MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP":
assert.Equal(t, "76", v)
case "REORG_MAX_EPOCHS_SINCE_FINALIZATION":
assert.Equal(t, "2", v)
case "REORG_WEIGHT_THRESHOLD":
assert.Equal(t, "20", v)
case "REORG_PARENT_WEIGHT_THRESHOLD":
assert.Equal(t, "160", v)
case "MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT":
assert.Equal(t, "8", v)
case "MAX_REQUEST_LIGHT_CLIENT_UPDATES":
assert.Equal(t, "128", v)
case "SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY":
case "NODE_ID_BITS":
assert.Equal(t, "256", v)
case "ATTESTATION_SUBNET_EXTRA_BITS":
assert.Equal(t, "0", v)
case "ATTESTATION_SUBNET_PREFIX_BITS":
assert.Equal(t, "6", v)
case "SUBNETS_PER_NODE":
assert.Equal(t, "2", v)
case "EPOCHS_PER_SUBNET_SUBSCRIPTION":
assert.Equal(t, "256", v)
case "MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
case "MAX_REQUEST_BLOB_SIDECARS":
assert.Equal(t, "768", v)
case "MESSAGE_DOMAIN_INVALID_SNAPPY":
assert.Equal(t, "0x00000000", v)
case "MESSAGE_DOMAIN_VALID_SNAPPY":
assert.Equal(t, "0x01000000", v)
case "ATTESTATION_PROPAGATION_SLOT_RANGE":
assert.Equal(t, "32", v)
case "RESP_TIMEOUT":
assert.Equal(t, "10", v)
case "TTFB_TIMEOUT":
assert.Equal(t, "5", v)
case "MIN_EPOCHS_FOR_BLOCK_REQUESTS":
assert.Equal(t, "33024", v)
case "GOSSIP_MAX_SIZE":
assert.Equal(t, "10485760", v)
case "MAX_CHUNK_SIZE":
assert.Equal(t, "10485760", v)
case "ATTESTATION_SUBNET_COUNT":
assert.Equal(t, "64", v)
case "MAXIMUM_GOSSIP_CLOCK_DISPARITY":
assert.Equal(t, "500", v)
case "MAX_REQUEST_BLOCKS":
assert.Equal(t, "1024", v)
case "MAX_REQUEST_BLOCKS_DENEB":
assert.Equal(t, "128", v)
default:
t.Errorf("Incorrect key: %s", k)
}
t.Run(k, func(t *testing.T) {
switch k {
case "CONFIG_NAME":
assert.Equal(t, "ConfigName", v)
case "PRESET_BASE":
assert.Equal(t, "PresetBase", v)
case "MAX_COMMITTEES_PER_SLOT":
assert.Equal(t, "1", v)
case "TARGET_COMMITTEE_SIZE":
assert.Equal(t, "2", v)
case "MAX_VALIDATORS_PER_COMMITTEE":
assert.Equal(t, "3", v)
case "MIN_PER_EPOCH_CHURN_LIMIT":
assert.Equal(t, "4", v)
case "CHURN_LIMIT_QUOTIENT":
assert.Equal(t, "5", v)
case "SHUFFLE_ROUND_COUNT":
assert.Equal(t, "6", v)
case "MIN_GENESIS_ACTIVE_VALIDATOR_COUNT":
assert.Equal(t, "7", v)
case "MIN_GENESIS_TIME":
assert.Equal(t, "8", v)
case "HYSTERESIS_QUOTIENT":
assert.Equal(t, "9", v)
case "HYSTERESIS_DOWNWARD_MULTIPLIER":
assert.Equal(t, "10", v)
case "HYSTERESIS_UPWARD_MULTIPLIER":
assert.Equal(t, "11", v)
case "SAFE_SLOTS_TO_UPDATE_JUSTIFIED":
assert.Equal(t, "0", v)
case "ETH1_FOLLOW_DISTANCE":
assert.Equal(t, "13", v)
case "TARGET_AGGREGATORS_PER_COMMITTEE":
assert.Equal(t, "14", v)
case "RANDOM_SUBNETS_PER_VALIDATOR":
assert.Equal(t, "15", v)
case "EPOCHS_PER_RANDOM_SUBNET_SUBSCRIPTION":
assert.Equal(t, "16", v)
case "SECONDS_PER_ETH1_BLOCK":
assert.Equal(t, "17", v)
case "DEPOSIT_CHAIN_ID":
assert.Equal(t, "18", v)
case "DEPOSIT_NETWORK_ID":
assert.Equal(t, "19", v)
case "DEPOSIT_CONTRACT_ADDRESS":
assert.Equal(t, "DepositContractAddress", v)
case "MIN_DEPOSIT_AMOUNT":
assert.Equal(t, "20", v)
case "MAX_EFFECTIVE_BALANCE":
assert.Equal(t, "21", v)
case "EJECTION_BALANCE":
assert.Equal(t, "22", v)
case "EFFECTIVE_BALANCE_INCREMENT":
assert.Equal(t, "23", v)
case "GENESIS_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("GenesisForkVersion")), v)
case "ALTAIR_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("AltairForkVersion")), v)
case "ALTAIR_FORK_EPOCH":
assert.Equal(t, "100", v)
case "BELLATRIX_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("BellatrixForkVersion")), v)
case "BELLATRIX_FORK_EPOCH":
assert.Equal(t, "101", v)
case "CAPELLA_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("CapellaForkVersion")), v)
case "CAPELLA_FORK_EPOCH":
assert.Equal(t, "103", v)
case "DENEB_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("DenebForkVersion")), v)
case "DENEB_FORK_EPOCH":
assert.Equal(t, "105", v)
case "ELECTRA_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":
assert.Equal(t, "0x62", v)
case "ETH1_ADDRESS_WITHDRAWAL_PREFIX":
assert.Equal(t, "0x63", v)
case "GENESIS_DELAY":
assert.Equal(t, "24", v)
case "SECONDS_PER_SLOT":
assert.Equal(t, "25", v)
case "MIN_ATTESTATION_INCLUSION_DELAY":
assert.Equal(t, "26", v)
case "SLOTS_PER_EPOCH":
assert.Equal(t, "27", v)
case "MIN_SEED_LOOKAHEAD":
assert.Equal(t, "28", v)
case "MAX_SEED_LOOKAHEAD":
assert.Equal(t, "29", v)
case "EPOCHS_PER_ETH1_VOTING_PERIOD":
assert.Equal(t, "30", v)
case "SLOTS_PER_HISTORICAL_ROOT":
assert.Equal(t, "31", v)
case "MIN_VALIDATOR_WITHDRAWABILITY_DELAY":
assert.Equal(t, "32", v)
case "SHARD_COMMITTEE_PERIOD":
assert.Equal(t, "33", v)
case "MIN_EPOCHS_TO_INACTIVITY_PENALTY":
assert.Equal(t, "34", v)
case "EPOCHS_PER_HISTORICAL_VECTOR":
assert.Equal(t, "35", v)
case "EPOCHS_PER_SLASHINGS_VECTOR":
assert.Equal(t, "36", v)
case "HISTORICAL_ROOTS_LIMIT":
assert.Equal(t, "37", v)
case "VALIDATOR_REGISTRY_LIMIT":
assert.Equal(t, "38", v)
case "BASE_REWARD_FACTOR":
assert.Equal(t, "39", v)
case "WHISTLEBLOWER_REWARD_QUOTIENT":
assert.Equal(t, "40", v)
case "PROPOSER_REWARD_QUOTIENT":
assert.Equal(t, "41", v)
case "INACTIVITY_PENALTY_QUOTIENT":
assert.Equal(t, "42", v)
case "HF1_INACTIVITY_PENALTY_QUOTIENT":
assert.Equal(t, "43", v)
case "MIN_SLASHING_PENALTY_QUOTIENT":
assert.Equal(t, "44", v)
case "HF1_MIN_SLASHING_PENALTY_QUOTIENT":
assert.Equal(t, "45", v)
case "PROPORTIONAL_SLASHING_MULTIPLIER":
assert.Equal(t, "46", v)
case "HF1_PROPORTIONAL_SLASHING_MULTIPLIER":
assert.Equal(t, "47", v)
case "MAX_PROPOSER_SLASHINGS":
assert.Equal(t, "48", v)
case "MAX_ATTESTER_SLASHINGS":
assert.Equal(t, "49", v)
case "MAX_ATTESTATIONS":
assert.Equal(t, "50", v)
case "MAX_DEPOSITS":
assert.Equal(t, "51", v)
case "MAX_VOLUNTARY_EXITS":
assert.Equal(t, "52", v)
case "MAX_BLOBS_PER_BLOCK":
assert.Equal(t, "4", v)
case "TIMELY_HEAD_FLAG_INDEX":
assert.Equal(t, "0x35", v)
case "TIMELY_SOURCE_FLAG_INDEX":
assert.Equal(t, "0x36", v)
case "TIMELY_TARGET_FLAG_INDEX":
assert.Equal(t, "0x37", v)
case "TIMELY_HEAD_WEIGHT":
assert.Equal(t, "56", v)
case "TIMELY_SOURCE_WEIGHT":
assert.Equal(t, "57", v)
case "TIMELY_TARGET_WEIGHT":
assert.Equal(t, "58", v)
case "SYNC_REWARD_WEIGHT":
assert.Equal(t, "59", v)
case "WEIGHT_DENOMINATOR":
assert.Equal(t, "60", v)
case "TARGET_AGGREGATORS_PER_SYNC_SUBCOMMITTEE":
assert.Equal(t, "61", v)
case "SYNC_COMMITTEE_SUBNET_COUNT":
assert.Equal(t, "62", v)
case "SYNC_COMMITTEE_SIZE":
assert.Equal(t, "63", v)
case "SYNC_PUBKEYS_PER_AGGREGATE":
assert.Equal(t, "64", v)
case "INACTIVITY_SCORE_BIAS":
assert.Equal(t, "65", v)
case "EPOCHS_PER_SYNC_COMMITTEE_PERIOD":
assert.Equal(t, "66", v)
case "INACTIVITY_PENALTY_QUOTIENT_ALTAIR":
assert.Equal(t, "67", v)
case "MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR":
assert.Equal(t, "68", v)
case "PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR":
assert.Equal(t, "69", v)
case "INACTIVITY_SCORE_RECOVERY_RATE":
assert.Equal(t, "70", v)
case "MIN_SYNC_COMMITTEE_PARTICIPANTS":
assert.Equal(t, "71", v)
case "PROPOSER_WEIGHT":
assert.Equal(t, "8", v)
case "DOMAIN_BEACON_PROPOSER":
assert.Equal(t, "0x30303031", v)
case "DOMAIN_BEACON_ATTESTER":
assert.Equal(t, "0x30303032", v)
case "DOMAIN_RANDAO":
assert.Equal(t, "0x30303033", v)
case "DOMAIN_DEPOSIT":
assert.Equal(t, "0x30303034", v)
case "DOMAIN_VOLUNTARY_EXIT":
assert.Equal(t, "0x30303035", v)
case "DOMAIN_SELECTION_PROOF":
assert.Equal(t, "0x30303036", v)
case "DOMAIN_AGGREGATE_AND_PROOF":
assert.Equal(t, "0x30303037", v)
case "DOMAIN_APPLICATION_MASK":
assert.Equal(t, "0x31303030", v)
case "DOMAIN_SYNC_COMMITTEE":
assert.Equal(t, "0x07000000", v)
case "DOMAIN_SYNC_COMMITTEE_SELECTION_PROOF":
assert.Equal(t, "0x08000000", v)
case "DOMAIN_CONTRIBUTION_AND_PROOF":
assert.Equal(t, "0x09000000", v)
case "DOMAIN_BLS_TO_EXECUTION_CHANGE":
assert.Equal(t, "0x0a000000", v)
case "DOMAIN_APPLICATION_BUILDER":
assert.Equal(t, "0x00000001", v)
case "DOMAIN_BLOB_SIDECAR":
assert.Equal(t, "0x00000000", v)
case "TRANSITION_TOTAL_DIFFICULTY":
assert.Equal(t, "0", v)
case "TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH":
assert.Equal(t, "72", v)
case "TERMINAL_BLOCK_HASH":
s, ok := v.(string)
require.Equal(t, true, ok)
assert.Equal(t, common.HexToHash("TerminalBlockHash"), common.HexToHash(s))
case "TERMINAL_TOTAL_DIFFICULTY":
assert.Equal(t, "73", v)
case "DefaultFeeRecipient":
assert.Equal(t, common.HexToAddress("DefaultFeeRecipient"), v)
case "PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX":
assert.Equal(t, "3", v)
case "MIN_SLASHING_PENALTY_QUOTIENT_BELLATRIX":
assert.Equal(t, "32", v)
case "INACTIVITY_PENALTY_QUOTIENT_BELLATRIX":
assert.Equal(t, "16777216", v)
case "PROPOSER_SCORE_BOOST":
assert.Equal(t, "40", v)
case "INTERVALS_PER_SLOT":
assert.Equal(t, "3", v)
case "MAX_WITHDRAWALS_PER_PAYLOAD":
assert.Equal(t, "74", v)
case "MAX_BLS_TO_EXECUTION_CHANGES":
assert.Equal(t, "75", v)
case "MAX_VALIDATORS_PER_WITHDRAWALS_SWEEP":
assert.Equal(t, "76", v)
case "REORG_MAX_EPOCHS_SINCE_FINALIZATION":
assert.Equal(t, "2", v)
case "REORG_WEIGHT_THRESHOLD":
assert.Equal(t, "20", v)
case "REORG_PARENT_WEIGHT_THRESHOLD":
assert.Equal(t, "160", v)
case "MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT":
assert.Equal(t, "8", v)
case "MAX_REQUEST_LIGHT_CLIENT_UPDATES":
assert.Equal(t, "128", v)
case "SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY":
case "NODE_ID_BITS":
assert.Equal(t, "256", v)
case "ATTESTATION_SUBNET_EXTRA_BITS":
assert.Equal(t, "0", v)
case "ATTESTATION_SUBNET_PREFIX_BITS":
assert.Equal(t, "6", v)
case "SUBNETS_PER_NODE":
assert.Equal(t, "2", v)
case "EPOCHS_PER_SUBNET_SUBSCRIPTION":
assert.Equal(t, "256", v)
case "MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
case "MAX_REQUEST_BLOB_SIDECARS":
assert.Equal(t, "768", v)
case "MESSAGE_DOMAIN_INVALID_SNAPPY":
assert.Equal(t, "0x00000000", v)
case "MESSAGE_DOMAIN_VALID_SNAPPY":
assert.Equal(t, "0x01000000", v)
case "ATTESTATION_PROPAGATION_SLOT_RANGE":
assert.Equal(t, "32", v)
case "RESP_TIMEOUT":
assert.Equal(t, "10", v)
case "TTFB_TIMEOUT":
assert.Equal(t, "5", v)
case "MIN_EPOCHS_FOR_BLOCK_REQUESTS":
assert.Equal(t, "33024", v)
case "GOSSIP_MAX_SIZE":
assert.Equal(t, "10485760", v)
case "MAX_CHUNK_SIZE":
assert.Equal(t, "10485760", v)
case "ATTESTATION_SUBNET_COUNT":
assert.Equal(t, "64", v)
case "MAXIMUM_GOSSIP_CLOCK_DISPARITY":
assert.Equal(t, "500", v)
case "MAX_REQUEST_BLOCKS":
assert.Equal(t, "1024", v)
case "MAX_REQUEST_BLOCKS_DENEB":
assert.Equal(t, "128", v)
case "NUMBER_OF_COLUMNS":
assert.Equal(t, "128", v)
case "MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA":
assert.Equal(t, "128000000000", v)
case "MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT":
assert.Equal(t, "256000000000", v)
case "DATA_COLUMN_SIDECAR_SUBNET_COUNT":
assert.Equal(t, "32", v)
case "MAX_REQUEST_DATA_COLUMN_SIDECARS":
assert.Equal(t, "16384", v)
default:
t.Errorf("Incorrect key: %s", k)
}
})
}
}

View File

@@ -108,7 +108,7 @@ func (*Server) GetVersion(w http.ResponseWriter, r *http.Request) {
// GetHealth returns node health status in http status codes. Useful for load balancers.
func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "node.GetHealth")
ctx, span := trace.StartSpan(r.Context(), "node.GetHealth")
defer span.End()
rawSyncingStatus, syncingStatus, ok := shared.UintFromQuery(w, r, "syncing_status", false)
@@ -119,10 +119,14 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
return
}
if s.SyncChecker.Synced() {
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
if s.SyncChecker.Synced() && !optimistic {
return
}
if s.SyncChecker.Syncing() || s.SyncChecker.Initialized() {
if s.SyncChecker.Syncing() || optimistic {
if rawSyncingStatus != "" {
w.WriteHeader(intSyncingStatus)
} else {
@@ -130,5 +134,6 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
}
return
}
w.WriteHeader(http.StatusServiceUnavailable)
}

View File

@@ -91,8 +91,10 @@ func TestGetVersion(t *testing.T) {
func TestGetHealth(t *testing.T) {
checker := &syncmock.Sync{}
optimisticFetcher := &mock.ChainService{Optimistic: false}
s := &Server{
SyncChecker: checker,
SyncChecker: checker,
OptimisticModeFetcher: optimisticFetcher,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
@@ -101,25 +103,30 @@ func TestGetHealth(t *testing.T) {
s.GetHealth(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
checker.IsInitialized = true
request = httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusPartialContent, writer.Code)
checker.IsSyncing = true
checker.IsSynced = false
request = httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://example.com/eth/v1/node/health?syncing_status=%d", http.StatusPaymentRequired), nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusPaymentRequired, writer.Code)
checker.IsSyncing = false
checker.IsSynced = true
request = httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
checker.IsSyncing = false
checker.IsSynced = true
optimisticFetcher.Optimistic = true
request = httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusPartialContent, writer.Code)
}
func TestGetIdentity(t *testing.T) {

View File

@@ -223,7 +223,7 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
return nil, &core.RpcError{Err: errors.Wrap(err, "failed to retrieve block from db"), Reason: core.Internal}
}
// if block is not in the retention window return 200 w/ empty list
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
if !p.BlobStorage.WithinRetentionPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
return make([]*blocks.VerifiedROBlob, 0), nil
}
commitments, err := b.Block().Body().BlobKzgCommitments()
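The change above delegates the availability check to BlobStorage.WithinRetentionPeriod rather than a global params helper. A back-of-the-envelope sketch of an epoch-based retention check of that shape, with all names illustrative:

package main

import "fmt"

// withinRetention reports whether a block's epoch is still inside the
// trailing retention window ending at the current epoch.
func withinRetention(blockEpoch, currentEpoch, retentionEpochs uint64) bool {
    if currentEpoch < retentionEpochs {
        return true // chain is younger than the window; everything is retained
    }
    return blockEpoch >= currentEpoch-retentionEpochs
}

func main() {
    fmt.Println(withinRetention(100, 5000, 4096))  // false: expired
    fmt.Println(withinRetention(4000, 5000, 4096)) // true: still servable
}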

View File

@@ -164,8 +164,7 @@ func TestGetBlob(t *testing.T) {
db := testDB.SetupDB(t)
denebBlock, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 123, 4)
require.NoError(t, db.SaveBlock(context.Background(), denebBlock))
_, bs, err := filesystem.NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
_, bs := filesystem.NewEphemeralBlobStorageWithFs(t)
testSidecars, err := verification.BlobSidecarSliceNoop(blobs)
require.NoError(t, err)
for i := range testSidecars {

View File

@@ -37,7 +37,7 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
@@ -112,7 +112,7 @@ common_deps = [
"//beacon-chain/builder:go_default_library",
"//beacon-chain/builder/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/execution:go_default_library",

View File

@@ -8,7 +8,7 @@ import (
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
@@ -334,7 +334,7 @@ func TestGetAltairDuties_UnknownPubkey(t *testing.T) {
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
vs := &Server{

View File

@@ -341,7 +341,7 @@ func (vs *Server) handleUnblindedBlock(block interfaces.SignedBeaconBlock, req *
if dbBlockContents == nil {
return nil, nil
}
return buildBlobSidecars(block, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
return BuildBlobSidecars(block, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
}
// broadcastReceiveBlock broadcasts a block and handles its reception.

View File

@@ -56,8 +56,8 @@ func (c *blobsBundleCache) prune(minSlot primitives.Slot) {
}
}
// buildBlobSidecars given a block, builds the blob sidecars for the block.
func buildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgProofs [][]byte) ([]*ethpb.BlobSidecar, error) {
// BuildBlobSidecars given a block, builds the blob sidecars for the block.
func BuildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgProofs [][]byte) ([]*ethpb.BlobSidecar, error) {
if blk.Version() < version.Deneb {
return nil, nil // No blobs before deneb.
}

View File

@@ -51,7 +51,7 @@ func TestServer_buildBlobSidecars(t *testing.T) {
require.NoError(t, blk.SetBlobKzgCommitments(kzgCommitments))
proof, err := hexutil.Decode("0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a")
require.NoError(t, err)
scs, err := buildBlobSidecars(blk, [][]byte{
scs, err := BuildBlobSidecars(blk, [][]byte{
make([]byte, fieldparams.BlobLength), make([]byte, fieldparams.BlobLength),
}, [][]byte{
proof, proof,

View File

@@ -14,7 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
builderTest "github.com/prysmaticlabs/prysm/v5/beacon-chain/builder/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
b "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
@@ -1000,7 +1000,7 @@ func TestProposer_PendingDeposits_OutsideEth1FollowWindow(t *testing.T) {
},
}
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
@@ -1139,7 +1139,7 @@ func TestProposer_PendingDeposits_FollowsCorrectEth1Block(t *testing.T) {
},
}
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
@@ -1244,7 +1244,7 @@ func TestProposer_PendingDeposits_CantReturnBelowStateEth1DepositIndex(t *testin
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
for _, dp := range append(readyDeposits, recentDeposits...) {
@@ -1344,7 +1344,7 @@ func TestProposer_PendingDeposits_CantReturnMoreThanMax(t *testing.T) {
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
for _, dp := range append(readyDeposits, recentDeposits...) {
@@ -1442,7 +1442,7 @@ func TestProposer_PendingDeposits_CantReturnMoreThanDepositCount(t *testing.T) {
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
for _, dp := range append(readyDeposits, recentDeposits...) {
@@ -1553,7 +1553,7 @@ func TestProposer_DepositTrie_UtilizesCachedFinalizedDeposits(t *testing.T) {
},
}
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
@@ -1669,7 +1669,7 @@ func TestProposer_DepositTrie_RebuildTrie(t *testing.T) {
},
}
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
@@ -1810,7 +1810,7 @@ func TestProposer_Eth1Data_MajorityVote_SpansGenesis(t *testing.T) {
InsertBlock(100, latestValidTime, []byte("latest"))
headBlockHash := []byte("headb")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
ps := &Server{
ChainStartFetcher: p,
@@ -1851,7 +1851,7 @@ func TestProposer_Eth1Data_MajorityVote(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
require.NoError(t, err)
@@ -2351,7 +2351,7 @@ func TestProposer_Eth1Data_MajorityVote(t *testing.T) {
BlockHash: []byte("eth1data"),
}
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
@@ -2547,7 +2547,7 @@ func TestProposer_Deposits_ReturnsEmptyList_IfLatestEth1DataEqGenesisEth1Block(t
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
for _, dp := range append(readyDeposits, recentDeposits...) {

View File

@@ -10,7 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block"
opfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -69,7 +69,7 @@ type Server struct {
BlobReceiver blockchain.BlobReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositcache.PendingDepositsFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher
OperationNotifier opfeed.Notifier
StateGen stategen.StateManager
ReplayerBuilder stategen.ReplayerBuilder

View File

@@ -5,7 +5,7 @@ import (
"testing"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
@@ -65,7 +65,7 @@ func TestWaitForActivation_ValidatorOriginallyExists(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()

View File

@@ -8,7 +8,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/async/event"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
@@ -70,7 +70,7 @@ func TestWaitForActivation_ContextClosed(t *testing.T) {
require.NoError(t, err, "Could not get signing root")
ctx, cancel := context.WithCancel(context.Background())
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
vs := &Server{

View File

@@ -7,7 +7,7 @@ import (
"time"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
@@ -40,7 +40,7 @@ func TestValidatorStatus_Active(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()

View File

@@ -8,7 +8,7 @@ import (
"github.com/d4l3k/messagediff"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
@@ -38,7 +38,7 @@ func TestValidatorStatus_DepositedEth1(t *testing.T) {
pubKey1 := deposit.Data.PublicKey
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -80,7 +80,7 @@ func TestValidatorStatus_Deposited(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -125,7 +125,7 @@ func TestValidatorStatus_PartiallyDeposited(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -178,7 +178,7 @@ func TestValidatorStatus_Pending_MultipleDeposits(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -255,7 +255,7 @@ func TestValidatorStatus_Pending(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -318,7 +318,7 @@ func TestValidatorStatus_Exiting(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -377,7 +377,7 @@ func TestValidatorStatus_Slashing(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -435,7 +435,7 @@ func TestValidatorStatus_Exited(t *testing.T) {
}
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -464,7 +464,7 @@ func TestValidatorStatus_Exited(t *testing.T) {
func TestValidatorStatus_UnknownStatus(t *testing.T) {
pubKey := pubKey(1)
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
stateObj, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{
@@ -520,7 +520,7 @@ func TestActivationStatus_OK(t *testing.T) {
dep := deposits[0]
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()
@@ -659,7 +659,7 @@ func TestValidatorStatus_CorrectActivationQueue(t *testing.T) {
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
for i := 0; i < 6; i++ {
@@ -751,7 +751,7 @@ func TestMultipleValidatorStatus_Pubkeys(t *testing.T) {
require.NoError(t, err, "Could not get signing root")
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
dep := deposits[0]
@@ -916,7 +916,7 @@ func TestValidatorStatus_Invalid(t *testing.T) {
deposit.Data.Signature = deposit.Data.Signature[1:]
depositTrie, err := trie.NewTrie(params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not setup deposit trie")
depositCache, err := depositcache.New()
depositCache, err := depositsnapshot.New()
require.NoError(t, err)
root, err := depositTrie.HashTreeRoot()

View File

@@ -20,7 +20,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache/depositsnapshot"
blockfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/block"
opfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -127,7 +127,7 @@ type Config struct {
PeerManager p2p.PeerManager
MetadataProvider p2p.MetadataProvider
DepositFetcher cache.DepositFetcher
PendingDepositFetcher depositcache.PendingDepositsFetcher
PendingDepositFetcher depositsnapshot.PendingDepositsFetcher
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
OperationNotifier opfeed.Notifier

View File

@@ -135,7 +135,7 @@ With 1_048_576 validators, we need 4096 * 2MB = 8GB
Storing both MIN and MAX spans for 1_048_576 validators takes 16GB.
Each chunk is stored snappy-compressed in the database.
If all validators attest ideally, a MIN SPAN chunk will contain only `2`s, and and MAX SPAN chunk will contain only `0`s.
If all validators attest ideally, a MIN SPAN chunk will contain only `2`s, and MAX SPAN chunk will contain only `0`s.
This will compress very well, and will let us store a lot of data in a small amount of space.
*/

View File

@@ -118,6 +118,7 @@ type ReadOnlyValidator interface {
// ReadOnlyValidators defines a struct which only has read access to validators methods.
type ReadOnlyValidators interface {
Validators() []*ethpb.Validator
ValidatorsReadOnly() []ReadOnlyValidator
ValidatorAtIndex(idx primitives.ValidatorIndex) (*ethpb.Validator, error)
ValidatorAtIndexReadOnly(idx primitives.ValidatorIndex) (ReadOnlyValidator, error)
ValidatorIndexByPubkey(key [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool)

View File

@@ -21,6 +21,14 @@ func (b *BeaconState) Validators() []*ethpb.Validator {
return b.validatorsVal()
}
// ValidatorsReadOnly returns read-only references to the validators participating in consensus on the beacon chain.
func (b *BeaconState) ValidatorsReadOnly() []state.ReadOnlyValidator {
b.lock.RLock()
defer b.lock.RUnlock()
return b.validatorsReadOnlyVal()
}
func (b *BeaconState) validatorsVal() []*ethpb.Validator {
var v []*ethpb.Validator
if features.Get().EnableExperimentalState {
@@ -46,6 +54,35 @@ func (b *BeaconState) validatorsVal() []*ethpb.Validator {
return res
}
func (b *BeaconState) validatorsReadOnlyVal() []state.ReadOnlyValidator {
var v []*ethpb.Validator
if features.Get().EnableExperimentalState {
if b.validatorsMultiValue == nil {
return nil
}
v = b.validatorsMultiValue.Value(b)
} else {
if b.validators == nil {
return nil
}
v = b.validators
}
res := make([]state.ReadOnlyValidator, len(v))
var err error
for i := 0; i < len(res); i++ {
val := v[i]
if val == nil {
continue
}
res[i], err = NewValidator(val)
if err != nil {
continue
}
}
return res
}
// references of validators participating in consensus on the beacon chain.
// This assumes that a lock is already held on BeaconState. This does not
// copy fully and instead just copies the reference.

View File

@@ -1,6 +1,7 @@
package backfill
import (
"context"
"fmt"
"sort"
"time"
@@ -55,6 +56,8 @@ const (
batchEndSequence
)
var retryDelay = time.Second
type batchId string
type batch struct {
@@ -62,6 +65,7 @@ type batch struct {
scheduled time.Time
seq int // sequence identifier, ie how many times has the sequence() method served this batch
retries int
retryAfter time.Time
begin primitives.Slot
end primitives.Slot // half-open interval, [begin, end), ie >= start, < end.
results verifiedROBlocks
@@ -74,7 +78,7 @@ type batch struct {
}
func (b batch) logFields() logrus.Fields {
return map[string]interface{}{
f := map[string]interface{}{
"batchId": b.id(),
"state": b.state.String(),
"scheduled": b.scheduled.String(),
@@ -86,6 +90,10 @@ func (b batch) logFields() logrus.Fields {
"blockPid": b.blockPid,
"blobPid": b.blobPid,
}
if b.retries > 0 {
f["retryAfter"] = b.retryAfter.String()
}
return f
}
func (b batch) replaces(r batch) bool {
@@ -153,7 +161,8 @@ func (b batch) withState(s batchState) batch {
switch b.state {
case batchErrRetryable:
b.retries += 1
log.WithFields(b.logFields()).Info("Sequencing batch for retry")
b.retryAfter = time.Now().Add(retryDelay)
log.WithFields(b.logFields()).Info("Sequencing batch for retry after delay")
case batchInit, batchNil:
b.firstScheduled = b.scheduled
}
@@ -190,8 +199,32 @@ func (b batch) availabilityStore() das.AvailabilityStore {
return b.bs.store
}
var batchBlockUntil = func(ctx context.Context, untilRetry time.Duration, b batch) error {
log.WithFields(b.logFields()).WithField("untilRetry", untilRetry.String()).
Debug("Sleeping for retry backoff delay")
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(untilRetry):
return nil
}
}
func (b batch) waitUntilReady(ctx context.Context) error {
// Wait to retry a failed batch to avoid hammering peers
// if we've hit a state where batches will consistently fail.
// Avoids spamming requests and logs.
if b.retries > 0 {
untilRetry := time.Until(b.retryAfter)
if untilRetry > time.Millisecond {
return batchBlockUntil(ctx, untilRetry, b)
}
}
return nil
}
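The `batchBlockUntil` helper above is the standard cancellable-sleep shape in Go: block on whichever of context cancellation or the timer fires first. A self-contained sketch of just that pattern (names here are hypothetical):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// sleepCtx blocks for d, or returns early with the context's error
// if the caller cancels first — the same shape as batchBlockUntil.
func sleepCtx(ctx context.Context, d time.Duration) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(d):
		return nil
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	// The retry delay (1s here) is longer than the context timeout,
	// so this prints context.DeadlineExceeded rather than sleeping fully.
	fmt.Println(sleepCtx(ctx, time.Second))
}
```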
func sortBatchDesc(bb []batch) {
sort.Slice(bb, func(i, j int) bool {
return bb[j].end < bb[i].end
return bb[i].end > bb[j].end
})
}

View File

@@ -1,8 +1,11 @@
package backfill
import (
"context"
"testing"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
@@ -19,3 +22,22 @@ func TestSortBatchDesc(t *testing.T) {
require.Equal(t, orderOut[i], batches[i].end)
}
}
func TestWaitUntilReady(t *testing.T) {
b := batch{}.withState(batchErrRetryable)
require.Equal(t, time.Time{}, b.retryAfter)
var got time.Duration
wur := batchBlockUntil
var errDerp = errors.New("derp")
batchBlockUntil = func(_ context.Context, ur time.Duration, _ batch) error {
got = ur
return errDerp
}
// The retries counter and timestamp are set when we mark the batch for sequencing, if the batch is in the retry state.
b = b.withState(batchSequenced)
require.ErrorIs(t, b.waitUntilReady(context.Background()), errDerp)
require.Equal(t, true, retryDelay-time.Until(b.retryAfter) < time.Millisecond)
require.Equal(t, true, got < retryDelay && got > retryDelay-time.Millisecond)
require.Equal(t, 1, b.retries)
batchBlockUntil = wur
}

View File

@@ -143,6 +143,11 @@ func (p *p2pBatchWorkerPool) batchRouter(pa PeerAssigner) {
return
}
for _, pid := range assigned {
if err := todo[0].waitUntilReady(p.ctx); err != nil {
log.WithError(p.ctx.Err()).Info("p2pBatchWorkerPool context canceled, shutting down")
p.shutdown(p.ctx.Err())
return
}
busy[pid] = true
todo[0].busy = pid
p.toWorkers <- todo[0].withPeer(pid)

View File

@@ -33,7 +33,6 @@ go_library(
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -83,10 +82,10 @@ go_test(
"//beacon-chain/p2p/types:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/sync:go_default_library",
"//beacon-chain/sync/verify:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -11,6 +11,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
@@ -18,7 +19,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
blocks2 "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -27,6 +27,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
"github.com/prysmaticlabs/prysm/v5/math"
p2ppb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -76,6 +77,7 @@ type blocksFetcherConfig struct {
db db.ReadOnlyDatabase
peerFilterCapacityWeight float64
mode syncMode
bs filesystem.BlobStorageSummarizer
}
// blocksFetcher is a service to fetch chain data from peers.
@@ -91,6 +93,7 @@ type blocksFetcher struct {
ctxMap prysmsync.ContextByteVersions
p2p p2p.P2P
db db.ReadOnlyDatabase
bs filesystem.BlobStorageSummarizer
blocksPerPeriod uint64
rateLimiter *leakybucket.Collector
peerLocks map[peer.ID]*peerLock
@@ -149,6 +152,7 @@ func newBlocksFetcher(ctx context.Context, cfg *blocksFetcherConfig) *blocksFetc
ctxMap: cfg.ctxMap,
p2p: cfg.p2p,
db: cfg.db,
bs: cfg.bs,
blocksPerPeriod: uint64(blocksPerPeriod),
rateLimiter: rateLimiter,
peerLocks: make(map[peer.ID]*peerLock),
@@ -236,7 +240,7 @@ func (f *blocksFetcher) loop() {
// Main loop.
for {
// Make sure there is are available peers before processing requests.
// Make sure there are available peers before processing requests.
if _, err := f.waitForMinimumPeers(f.ctx); err != nil {
log.Error(err)
}
@@ -372,22 +376,17 @@ func sortedBlockWithVerifiedBlobSlice(blocks []interfaces.ReadOnlySignedBeaconBl
return rb, nil
}
func blobRequest(bwb []blocks2.BlockWithROBlobs, blobWindowStart primitives.Slot) *p2ppb.BlobSidecarsByRangeRequest {
if len(bwb) == 0 {
return nil
}
lowest := lowestSlotNeedsBlob(blobWindowStart, bwb)
if lowest == nil {
return nil
}
highest := bwb[len(bwb)-1].Block.Block().Slot()
return &p2ppb.BlobSidecarsByRangeRequest{
StartSlot: *lowest,
Count: uint64(highest.SubSlot(*lowest)) + 1,
}
type commitmentCount struct {
slot primitives.Slot
root [32]byte
count int
}
func lowestSlotNeedsBlob(retentionStart primitives.Slot, bwb []blocks2.BlockWithROBlobs) *primitives.Slot {
type commitmentCountList []commitmentCount
// countCommitments makes a list of all blocks that have commitments that need to be satisfied.
// This gives us a lightweight representation for building the request that is also easy to test.
func countCommitments(bwb []blocks2.BlockWithROBlobs, retentionStart primitives.Slot) commitmentCountList {
if len(bwb) == 0 {
return nil
}
@@ -397,8 +396,13 @@ func lowestSlotNeedsBlob(retentionStart primitives.Slot, bwb []blocks2.BlockWith
if bwb[len(bwb)-1].Block.Block().Slot() < retentionStart {
return nil
}
for _, b := range bwb {
fc := make([]commitmentCount, 0, len(bwb))
for i := range bwb {
b := bwb[i]
slot := b.Block.Block().Slot()
if b.Block.Version() < version.Deneb {
continue
}
if slot < retentionStart {
continue
}
@@ -406,67 +410,116 @@ func lowestSlotNeedsBlob(retentionStart primitives.Slot, bwb []blocks2.BlockWith
if err != nil || len(commits) == 0 {
continue
}
return &slot
fc = append(fc, commitmentCount{slot: slot, root: b.Block.Root(), count: len(commits)})
}
return fc
}
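To illustrate the filtering that `countCommitments` performs, here is a hedged standalone sketch over simplified stand-in types (not Prysm's `BlockWithROBlobs`): pre-Deneb blocks, blocks before the retention start, and blocks without commitments are all skipped.

```go
package main

import "fmt"

// blockInfo is a simplified stand-in for BlockWithROBlobs in the diff above.
type blockInfo struct {
	slot    uint64
	version int // e.g. 0 = pre-Deneb, 1 = Deneb+
	commits int
}

// countCommits mirrors the filtering in countCommitments: keep only
// Deneb+ blocks at or after retentionStart that carry commitments.
func countCommits(blks []blockInfo, retentionStart uint64) []blockInfo {
	out := make([]blockInfo, 0, len(blks))
	for _, b := range blks {
		if b.version < 1 || b.slot < retentionStart || b.commits == 0 {
			continue
		}
		out = append(out, b)
	}
	return out
}

func main() {
	blks := []blockInfo{
		{slot: 3, version: 1, commits: 2}, // before retention start: dropped
		{slot: 5, version: 0, commits: 2}, // pre-Deneb: dropped
		{slot: 6, version: 1, commits: 0}, // no commitments: dropped
		{slot: 7, version: 1, commits: 4}, // kept
	}
	fmt.Println(countCommits(blks, 5)) // [{7 1 4}]
}
```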
func (cc commitmentCountList) blobRange(bs filesystem.BlobStorageSummarizer) *blobRange {
if len(cc) == 0 {
return nil
}
// If we don't have a blob summarizer, we can't check local blobs, so request blobs over the complete range.
if bs == nil {
return &blobRange{low: cc[0].slot, high: cc[len(cc)-1].slot}
}
for i := range cc {
hci := cc[i]
// This list is always ordered by increasing slot, per req/resp validation rules.
// Skip through slots until we find one with missing blobs.
if bs.Summary(hci.root).AllAvailable(hci.count) {
continue
}
// The slot of the first missing blob is the lower bound.
// If we don't find an upper bound, we'll have a 1 slot request (same low/high).
needed := &blobRange{low: hci.slot, high: hci.slot}
// Iterate backward through the list to find the highest missing slot above the lower bound.
// Return the complete range as soon as we find it; if lower bound is already the last element,
// or if we never find an upper bound, we'll fall through to the bounds being equal after this loop.
for z := len(cc) - 1; z > i; z-- {
hcz := cc[z]
if bs.Summary(hcz.root).AllAvailable(hcz.count) {
continue
}
needed.high = hcz.slot
return needed
}
return needed
}
return nil
}
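The two-pointer scan in `blobRange` can be easier to follow on a toy input. A minimal sketch with a boolean availability flag standing in for the `AllAvailable` summary check (hypothetical types; compare the "AllAvailable at high and low, range in middle" test case further down, which expects low=5, high=10):

```go
package main

import "fmt"

type entry struct {
	slot      uint64
	available bool // true if all blobs for this block are already on disk
}

// missingRange mirrors the scan in blobRange above: the first entry with
// missing blobs sets the low bound, and a backward scan from the end
// finds the last such entry to set the high bound.
func missingRange(cc []entry) (low, high uint64, ok bool) {
	for i := range cc {
		if cc[i].available {
			continue
		}
		low, high = cc[i].slot, cc[i].slot
		for z := len(cc) - 1; z > i; z-- {
			if !cc[z].available {
				high = cc[z].slot
				break
			}
		}
		return low, high, true
	}
	return 0, 0, false
}

func main() {
	// Slots 0 and 15 are fully available locally, so only 5..10 is needed.
	cc := []entry{{0, true}, {5, false}, {10, false}, {15, true}}
	fmt.Println(missingRange(cc)) // 5 10 true
}
```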
func sortBlobs(blobs []blocks.ROBlob) []blocks.ROBlob {
sort.Slice(blobs, func(i, j int) bool {
if blobs[i].Slot() == blobs[j].Slot() {
return blobs[i].Index < blobs[j].Index
}
return blobs[i].Slot() < blobs[j].Slot()
})
type blobRange struct {
low primitives.Slot
high primitives.Slot
}
return blobs
func (r *blobRange) Request() *p2ppb.BlobSidecarsByRangeRequest {
if r == nil {
return nil
}
return &p2ppb.BlobSidecarsByRangeRequest{
StartSlot: r.low,
Count: uint64(r.high.SubSlot(r.low)) + 1,
}
}
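Since `BlobSidecarsByRangeRequest.Count` covers the closed interval [low, high], the conversion needs the +1 seen above. A quick worked check under that assumption:

```go
package main

import "fmt"

func main() {
	// low=5, high=10 should yield StartSlot=5, Count=6, matching the
	// table-driven test near the end of this diff.
	low, high := uint64(5), uint64(10)
	fmt.Println("StartSlot:", low, "Count:", high-low+1) // StartSlot: 5 Count: 6
}
```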
var errBlobVerification = errors.New("peer unable to serve aligned BlobSidecarsByRange and BeaconBlockSidecarsByRange responses")
var errMissingBlobsForBlockCommitments = errors.Wrap(errBlobVerification, "blobs unavailable for processing block with kzg commitments")
func verifyAndPopulateBlobs(bwb []blocks2.BlockWithROBlobs, blobs []blocks.ROBlob, blobWindowStart primitives.Slot) ([]blocks2.BlockWithROBlobs, error) {
// Assumes bwb has already been sorted by sortedBlockWithVerifiedBlobSlice.
blobs = sortBlobs(blobs)
blobi := 0
// Loop over all blocks, and each time a commitment is observed, advance the index into the blob slice.
// The assumption is that the blob slice contains a value for every commitment in the blocks it is based on,
// correctly ordered by slot and blob index.
for i, bb := range bwb {
block := bb.Block.Block()
if block.Slot() < blobWindowStart {
func verifyAndPopulateBlobs(bwb []blocks2.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) ([]blocks2.BlockWithROBlobs, error) {
blobsByRoot := make(map[[32]byte][]blocks.ROBlob)
for i := range blobs {
if blobs[i].Slot() < req.StartSlot {
continue
}
commits, err := block.Body().BlobKzgCommitments()
br := blobs[i].BlockRoot()
blobsByRoot[br] = append(blobsByRoot[br], blobs[i])
}
for i := range bwb {
bwi, err := populateBlock(bwb[i], blobsByRoot[bwb[i].Block.Root()], req, bss)
if err != nil {
if errors.Is(err, consensus_types.ErrUnsupportedField) {
log.
WithField("blockSlot", block.Slot()).
WithField("retentionStart", blobWindowStart).
Warn("block with slot within blob retention period has version which does not support commitments")
if errors.Is(err, errDidntPopulate) {
continue
}
return nil, err
return bwb, err
}
bb.Blobs = make([]blocks.ROBlob, len(commits))
for ci := range commits {
// There are more expected commitments in this block, but we've run out of blobs from the response
// (out-of-bound error guard).
if blobi == len(blobs) {
return nil, missingCommitError(bb.Block.Root(), bb.Block.Block().Slot(), commits[ci:])
}
bl := blobs[blobi]
if err := verify.BlobAlignsWithBlock(bl, bb.Block); err != nil {
return nil, err
}
bb.Blobs[ci] = bl
blobi += 1
}
bwb[i] = bb
bwb[i] = bwi
}
return bwb, nil
}
var errDidntPopulate = errors.New("skipping population of block")
func populateBlock(bw blocks2.BlockWithROBlobs, blobs []blocks.ROBlob, req *p2ppb.BlobSidecarsByRangeRequest, bss filesystem.BlobStorageSummarizer) (blocks2.BlockWithROBlobs, error) {
blk := bw.Block
if blk.Version() < version.Deneb || blk.Block().Slot() < req.StartSlot {
return bw, errDidntPopulate
}
commits, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return bw, errDidntPopulate
}
if len(commits) == 0 {
return bw, errDidntPopulate
}
// Drop blobs on the floor if we already have them.
if bss != nil && bss.Summary(blk.Root()).AllAvailable(len(commits)) {
return bw, errDidntPopulate
}
if len(commits) != len(blobs) {
return bw, missingCommitError(blk.Root(), blk.Block().Slot(), commits)
}
for ci := range commits {
if err := verify.BlobAlignsWithBlock(blobs[ci], blk); err != nil {
return bw, err
}
}
bw.Blobs = blobs
return bw, nil
}
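`populateBlock` uses `errDidntPopulate` as a sentinel: "skip" outcomes flow back as a distinguishable error so the caller's loop can `continue`, while real failures abort the batch. A standalone sketch of that control-flow shape (simplified signature, hypothetical names):

```go
package main

import (
	"errors"
	"fmt"
)

// errSkip stands in for errDidntPopulate above.
var errSkip = errors.New("skip population")

// populate mirrors the guard structure of populateBlock: a sentinel
// "skip" error lets the caller continue the loop, while any other
// error aborts the whole batch.
func populate(commits, blobs int, allLocal bool) error {
	if commits == 0 || allLocal {
		return errSkip // nothing to attach; not a failure
	}
	if commits != blobs {
		return fmt.Errorf("missing blobs: want %d, got %d", commits, blobs)
	}
	return nil
}

func main() {
	cases := []struct {
		commits, blobs int
		allLocal       bool
	}{
		{3, 3, false}, // populated
		{3, 3, true},  // skipped: blobs already on disk
		{3, 2, false}, // fatal: response is short a blob
	}
	for _, c := range cases {
		err := populate(c.commits, c.blobs, c.allLocal)
		switch {
		case errors.Is(err, errSkip):
			fmt.Println("skipped")
		case err != nil:
			fmt.Println("fatal:", err)
		default:
			fmt.Println("populated")
		}
	}
}
```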
func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) error {
missStr := make([]string, 0, len(missing))
for k := range missing {
@@ -488,7 +541,7 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.Bl
return nil, err
}
// Construct request message based on observed interval of blocks in need of blobs.
req := blobRequest(bwb, blobWindowStart)
req := countCommitments(bwb, blobWindowStart).blobRange(f.bs).Request()
if req == nil {
return bwb, nil
}
@@ -508,7 +561,7 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.Bl
continue
}
f.p2p.Peers().Scorers().BlockProviderScorer().Touch(p)
robs, err := verifyAndPopulateBlobs(bwb, blobs, blobWindowStart)
robs, err := verifyAndPopulateBlobs(bwb, blobs, req, f.bs)
if err != nil {
log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response")
continue

View File

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"math"
"math/rand"
"sort"
"sync"
"testing"
@@ -14,13 +13,14 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
dbtest "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
p2pm "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pt "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
beaconsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/verify"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -960,28 +960,7 @@ func TestTimeToWait(t *testing.T) {
}
}
func TestSortBlobs(t *testing.T) {
_, blobs := util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, 10)
shuffled := make([]blocks.ROBlob, len(blobs))
for i := range blobs {
shuffled[i] = blobs[i]
}
rand.Shuffle(len(shuffled), func(i, j int) {
shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
})
sorted := sortBlobs(shuffled)
require.Equal(t, len(sorted), len(shuffled))
for i := range blobs {
expect := blobs[i]
actual := sorted[i]
require.Equal(t, expect.Slot(), actual.Slot())
require.Equal(t, expect.Index, actual.Index)
require.Equal(t, bytesutil.ToBytes48(expect.KzgCommitment), bytesutil.ToBytes48(actual.KzgCommitment))
require.Equal(t, expect.BlockRoot(), actual.BlockRoot())
}
}
func TestLowestSlotNeedsBlob(t *testing.T) {
func TestBlobRangeForBlocks(t *testing.T) {
blks, _ := util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, 10)
sbbs := make([]interfaces.ReadOnlySignedBeaconBlock, len(blks))
for i := range blks {
@@ -990,12 +969,12 @@ func TestLowestSlotNeedsBlob(t *testing.T) {
retentionStart := primitives.Slot(5)
bwb, err := sortedBlockWithVerifiedBlobSlice(sbbs)
require.NoError(t, err)
lowest := lowestSlotNeedsBlob(retentionStart, bwb)
require.Equal(t, retentionStart, *lowest)
bounds := countCommitments(bwb, retentionStart).blobRange(nil)
require.Equal(t, retentionStart, bounds.low)
higher := primitives.Slot(len(blks) + 1)
lowest = lowestSlotNeedsBlob(higher, bwb)
var nilSlot *primitives.Slot
require.Equal(t, nilSlot, lowest)
bounds = countCommitments(bwb, higher).blobRange(nil)
var nilBounds *blobRange
require.Equal(t, nilBounds, bounds)
blks, _ = util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, 10)
sbbs = make([]interfaces.ReadOnlySignedBeaconBlock, len(blks))
@@ -1008,14 +987,14 @@ func TestLowestSlotNeedsBlob(t *testing.T) {
next := bwb[6].Block.Block().Slot()
skip := bwb[5].Block.Block()
bwb[5].Block, _ = util.GenerateTestDenebBlockWithSidecar(t, skip.ParentRoot(), skip.Slot(), 0)
lowest = lowestSlotNeedsBlob(retentionStart, bwb)
require.Equal(t, next, *lowest)
bounds = countCommitments(bwb, retentionStart).blobRange(nil)
require.Equal(t, next, bounds.low)
}
func TestBlobRequest(t *testing.T) {
var nilReq *ethpb.BlobSidecarsByRangeRequest
// no blocks
req := blobRequest([]blocks.BlockWithROBlobs{}, 0)
req := countCommitments([]blocks.BlockWithROBlobs{}, 0).blobRange(nil).Request()
require.Equal(t, nilReq, req)
blks, _ := util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, 10)
sbbs := make([]interfaces.ReadOnlySignedBeaconBlock, len(blks))
@@ -1027,26 +1006,180 @@ func TestBlobRequest(t *testing.T) {
maxBlkSlot := primitives.Slot(len(blks) - 1)
tooHigh := primitives.Slot(len(blks) + 1)
req = blobRequest(bwb, tooHigh)
req = countCommitments(bwb, tooHigh).blobRange(nil).Request()
require.Equal(t, nilReq, req)
req = blobRequest(bwb, maxBlkSlot)
req = countCommitments(bwb, maxBlkSlot).blobRange(nil).Request()
require.Equal(t, uint64(1), req.Count)
require.Equal(t, maxBlkSlot, req.StartSlot)
halfway := primitives.Slot(5)
req = blobRequest(bwb, halfway)
req = countCommitments(bwb, halfway).blobRange(nil).Request()
require.Equal(t, halfway, req.StartSlot)
// adding 1 to include the halfway slot itself
require.Equal(t, uint64(1+maxBlkSlot-halfway), req.Count)
before := bwb[0].Block.Block().Slot()
allAfter := bwb[1:]
req = blobRequest(allAfter, before)
req = countCommitments(allAfter, before).blobRange(nil).Request()
require.Equal(t, allAfter[0].Block.Block().Slot(), req.StartSlot)
require.Equal(t, len(allAfter), int(req.Count))
}
func TestCountCommitments(t *testing.T) {
// no blocks
// blocks before retention start filtered
// blocks without commitments filtered
// pre-deneb filtered
// variety of commitment counts are accurate, from 1 to max
type testcase struct {
name string
bwb func(t *testing.T, c testcase) []blocks.BlockWithROBlobs
numBlocks int
retStart primitives.Slot
resCount int
}
cases := []testcase{
{
name: "nil blocks is safe",
bwb: func(t *testing.T, c testcase) []blocks.BlockWithROBlobs {
return nil
},
retStart: 0,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
bwb := c.bwb(t, c)
cc := countCommitments(bwb, c.retStart)
require.Equal(t, c.resCount, len(cc))
})
}
}
func TestCommitmentCountList(t *testing.T) {
cases := []struct {
name string
cc commitmentCountList
bss func(*testing.T) filesystem.BlobStorageSummarizer
expected *blobRange
request *ethpb.BlobSidecarsByRangeRequest
}{
{
name: "nil commitmentCount is safe",
cc: nil,
expected: nil,
request: nil,
},
{
name: "nil bss, single slot",
cc: []commitmentCount{
{slot: 11235, count: 1},
},
expected: &blobRange{low: 11235, high: 11235},
request: &ethpb.BlobSidecarsByRangeRequest{StartSlot: 11235, Count: 1},
},
{
name: "nil bss, sparse slots",
cc: []commitmentCount{
{slot: 11235, count: 1},
{slot: 11240, count: fieldparams.MaxBlobsPerBlock},
{slot: 11250, count: 3},
},
expected: &blobRange{low: 11235, high: 11250},
request: &ethpb.BlobSidecarsByRangeRequest{StartSlot: 11235, Count: 16},
},
{
name: "AllAvailable in middle, some avail low, none high",
bss: func(t *testing.T) filesystem.BlobStorageSummarizer {
onDisk := map[[32]byte][]int{
bytesutil.ToBytes32([]byte("0")): {0, 1},
bytesutil.ToBytes32([]byte("1")): {0, 1, 2, 3, 4, 5},
}
return filesystem.NewMockBlobStorageSummarizer(t, onDisk)
},
cc: []commitmentCount{
{slot: 0, count: 3, root: bytesutil.ToBytes32([]byte("0"))},
{slot: 5, count: fieldparams.MaxBlobsPerBlock, root: bytesutil.ToBytes32([]byte("1"))},
{slot: 15, count: 3},
},
expected: &blobRange{low: 0, high: 15},
request: &ethpb.BlobSidecarsByRangeRequest{StartSlot: 0, Count: 16},
},
{
name: "AllAvailable at high and low",
bss: func(t *testing.T) filesystem.BlobStorageSummarizer {
onDisk := map[[32]byte][]int{
bytesutil.ToBytes32([]byte("0")): {0, 1},
bytesutil.ToBytes32([]byte("2")): {0, 1, 2, 3, 4, 5},
}
return filesystem.NewMockBlobStorageSummarizer(t, onDisk)
},
cc: []commitmentCount{
{slot: 0, count: 2, root: bytesutil.ToBytes32([]byte("0"))},
{slot: 5, count: 3},
{slot: 15, count: fieldparams.MaxBlobsPerBlock, root: bytesutil.ToBytes32([]byte("2"))},
},
expected: &blobRange{low: 5, high: 5},
request: &ethpb.BlobSidecarsByRangeRequest{StartSlot: 5, Count: 1},
},
{
name: "AllAvailable at high and low, adjacent range in middle",
bss: func(t *testing.T) filesystem.BlobStorageSummarizer {
onDisk := map[[32]byte][]int{
bytesutil.ToBytes32([]byte("0")): {0, 1},
bytesutil.ToBytes32([]byte("2")): {0, 1, 2, 3, 4, 5},
}
return filesystem.NewMockBlobStorageSummarizer(t, onDisk)
},
cc: []commitmentCount{
{slot: 0, count: 2, root: bytesutil.ToBytes32([]byte("0"))},
{slot: 5, count: 3},
{slot: 6, count: 3},
{slot: 15, count: fieldparams.MaxBlobsPerBlock, root: bytesutil.ToBytes32([]byte("2"))},
},
expected: &blobRange{low: 5, high: 6},
request: &ethpb.BlobSidecarsByRangeRequest{StartSlot: 5, Count: 2},
},
{
name: "AllAvailable at high and low, range in middle",
bss: func(t *testing.T) filesystem.BlobStorageSummarizer {
onDisk := map[[32]byte][]int{
bytesutil.ToBytes32([]byte("0")): {0, 1},
bytesutil.ToBytes32([]byte("1")): {0, 1},
bytesutil.ToBytes32([]byte("2")): {0, 1, 2, 3, 4, 5},
}
return filesystem.NewMockBlobStorageSummarizer(t, onDisk)
},
cc: []commitmentCount{
{slot: 0, count: 2, root: bytesutil.ToBytes32([]byte("0"))},
{slot: 5, count: 3, root: bytesutil.ToBytes32([]byte("1"))},
{slot: 10, count: 3},
{slot: 15, count: fieldparams.MaxBlobsPerBlock, root: bytesutil.ToBytes32([]byte("2"))},
},
expected: &blobRange{low: 5, high: 10},
request: &ethpb.BlobSidecarsByRangeRequest{StartSlot: 5, Count: 6},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
var bss filesystem.BlobStorageSummarizer
if c.bss != nil {
bss = c.bss(t)
}
br := c.cc.blobRange(bss)
require.DeepEqual(t, c.expected, br)
if c.request == nil {
require.IsNil(t, br.Request())
} else {
req := br.Request()
require.DeepEqual(t, req.StartSlot, c.request.StartSlot)
require.DeepEqual(t, req.Count, c.request.Count)
}
})
}
}
func testSequenceBlockWithBlob(t *testing.T, nblocks int) ([]blocks.BlockWithROBlobs, []blocks.ROBlob) {
blks, blobs := util.ExtendBlocksPlusBlobs(t, []blocks.ROBlock{}, nblocks)
sbbs := make([]interfaces.ReadOnlySignedBeaconBlock, len(blks))
@@ -1058,91 +1191,75 @@ func testSequenceBlockWithBlob(t *testing.T, nblocks int) ([]blocks.BlockWithROB
return bwb, blobs
}
func testReqFromResp(bwb []blocks.BlockWithROBlobs) *ethpb.BlobSidecarsByRangeRequest {
return &ethpb.BlobSidecarsByRangeRequest{
StartSlot: bwb[0].Block.Block().Slot(),
Count: uint64(bwb[len(bwb)-1].Block.Block().Slot()-bwb[0].Block.Block().Slot()) + 1,
}
}
func TestVerifyAndPopulateBlobs(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10)
lastBlobIdx := len(blobs) - 1
// Blocks are all before the retention window, blobs argument is ignored.
windowAfter := bwb[len(bwb)-1].Block.Block().Slot() + 1
_, err := verifyAndPopulateBlobs(bwb, nil, windowAfter)
require.NoError(t, err)
t.Run("happy path", func(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10)
firstBlockSlot := bwb[0].Block.Block().Slot()
// slice off blobs for the last block so we hit the out of bounds / blob exhaustion check.
_, err = verifyAndPopulateBlobs(bwb, blobs[0:len(blobs)-6], firstBlockSlot)
require.ErrorIs(t, err, errMissingBlobsForBlockCommitments)
bwb, blobs = testSequenceBlockWithBlob(t, 10)
// Misalign the slots of the blobs for the first block to simulate them being missing from the response.
offByOne := blobs[0].Slot()
for i := range blobs {
if blobs[i].Slot() == offByOne {
blobs[i].SignedBlockHeader.Header.Slot = offByOne + 1
expectedCommits := make(map[[48]byte]bool)
for _, bl := range blobs {
expectedCommits[bytesutil.ToBytes48(bl.KzgCommitment)] = true
}
}
_, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.ErrorIs(t, err, verify.ErrBlobBlockMisaligned)
require.Equal(t, len(blobs), len(expectedCommits))
bwb, blobs = testSequenceBlockWithBlob(t, 10)
blobs[lastBlobIdx], err = blocks.NewROBlobWithRoot(blobs[lastBlobIdx].BlobSidecar, blobs[0].BlockRoot())
require.NoError(t, err)
_, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.ErrorIs(t, err, verify.ErrBlobBlockMisaligned)
bwb, blobs = testSequenceBlockWithBlob(t, 10)
blobs[lastBlobIdx].Index = 100
_, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.ErrorIs(t, err, verify.ErrIncorrectBlobIndex)
bwb, blobs = testSequenceBlockWithBlob(t, 10)
blobs[lastBlobIdx].SignedBlockHeader.Header.ProposerIndex = 100
blobs[lastBlobIdx], err = blocks.NewROBlob(blobs[lastBlobIdx].BlobSidecar)
require.NoError(t, err)
_, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.ErrorIs(t, err, verify.ErrBlobBlockMisaligned)
bwb, blobs = testSequenceBlockWithBlob(t, 10)
blobs[lastBlobIdx].SignedBlockHeader.Header.ParentRoot = blobs[0].SignedBlockHeader.Header.ParentRoot
blobs[lastBlobIdx], err = blocks.NewROBlob(blobs[lastBlobIdx].BlobSidecar)
require.NoError(t, err)
_, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.ErrorIs(t, err, verify.ErrBlobBlockMisaligned)
var emptyKzg [48]byte
bwb, blobs = testSequenceBlockWithBlob(t, 10)
blobs[lastBlobIdx].KzgCommitment = emptyKzg[:]
blobs[lastBlobIdx], err = blocks.NewROBlob(blobs[lastBlobIdx].BlobSidecar)
require.NoError(t, err)
_, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.ErrorIs(t, err, verify.ErrMismatchedBlobCommitments)
// happy path
bwb, blobs = testSequenceBlockWithBlob(t, 10)
expectedCommits := make(map[[48]byte]bool)
for _, bl := range blobs {
expectedCommits[bytesutil.ToBytes48(bl.KzgCommitment)] = true
}
// The assertions using this map expect all commitments to be unique, so make sure that stays true.
require.Equal(t, len(blobs), len(expectedCommits))
bwb, err = verifyAndPopulateBlobs(bwb, blobs, firstBlockSlot)
require.NoError(t, err)
for _, bw := range bwb {
commits, err := bw.Block.Block().Body().BlobKzgCommitments()
bwb, err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
require.NoError(t, err)
require.Equal(t, len(commits), len(bw.Blobs))
for i := range commits {
bc := bytesutil.ToBytes48(commits[i])
require.Equal(t, bc, bytesutil.ToBytes48(bw.Blobs[i].KzgCommitment))
// Since we delete entries we've seen, duplicates will cause an error here.
_, ok := expectedCommits[bc]
// Make sure this was an expected delete, then delete it from the map so we can make sure we saw all of them.
require.Equal(t, true, ok)
delete(expectedCommits, bc)
for _, bw := range bwb {
commits, err := bw.Block.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, len(commits), len(bw.Blobs))
for i := range commits {
bc := bytesutil.ToBytes48(commits[i])
require.Equal(t, bc, bytesutil.ToBytes48(bw.Blobs[i].KzgCommitment))
// Since we delete entries we've seen, duplicates will cause an error here.
_, ok := expectedCommits[bc]
// Make sure this was an expected delete, then delete it from the map so we can make sure we saw all of them.
require.Equal(t, true, ok)
delete(expectedCommits, bc)
}
}
}
// We delete each entry we've seen, so if we see all expected commits, the map should be empty at the end.
require.Equal(t, 0, len(expectedCommits))
// We delete each entry we've seen, so if we see all expected commits, the map should be empty at the end.
require.Equal(t, 0, len(expectedCommits))
})
t.Run("missing blobs", func(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10)
_, err := verifyAndPopulateBlobs(bwb, blobs[1:], testReqFromResp(bwb), nil)
require.ErrorIs(t, err, errMissingBlobsForBlockCommitments)
})
t.Run("no blobs for last block", func(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10)
lastIdx := len(bwb) - 1
lastBlk := bwb[lastIdx].Block
cmts, err := lastBlk.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
blobs = blobs[0 : len(blobs)-len(cmts)]
lastBlk, _ = util.GenerateTestDenebBlockWithSidecar(t, lastBlk.Block().ParentRoot(), lastBlk.Block().Slot(), 0)
bwb[lastIdx].Block = lastBlk
_, err = verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), nil)
require.NoError(t, err)
})
t.Run("blobs not copied if all locally available", func(t *testing.T) {
bwb, blobs := testSequenceBlockWithBlob(t, 10)
// r1 only has some blobs locally available, so we'll still copy them all.
// r7 has all blobs locally available, so we shouldn't copy them.
i1, i7 := 1, 7
r1, r7 := bwb[i1].Block.Root(), bwb[i7].Block.Root()
onDisk := map[[32]byte][]int{
r1: {0, 1},
r7: {0, 1, 2, 3, 4, 5},
}
bss := filesystem.NewMockBlobStorageSummarizer(t, onDisk)
bwb, err := verifyAndPopulateBlobs(bwb, blobs, testReqFromResp(bwb), bss)
require.NoError(t, err)
require.Equal(t, 6, len(bwb[i1].Blobs))
require.Equal(t, 0, len(bwb[i7].Blobs))
})
}
func TestBatchLimit(t *testing.T) {

View File

@@ -7,6 +7,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
beaconsync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
@@ -69,6 +70,7 @@ type blocksQueueConfig struct {
p2p p2p.P2P
db db.ReadOnlyDatabase
mode syncMode
bs filesystem.BlobStorageSummarizer
}
// blocksQueue is a priority queue that serves as an intermediary between block fetchers (producers)
@@ -101,12 +103,16 @@ func newBlocksQueue(ctx context.Context, cfg *blocksQueueConfig) *blocksQueue {
blocksFetcher := cfg.blocksFetcher
if blocksFetcher == nil {
if cfg.bs == nil {
log.Warn("rpc fetcher starting without blob availability cache, duplicate blobs may be requested.")
}
blocksFetcher = newBlocksFetcher(ctx, &blocksFetcherConfig{
ctxMap: cfg.ctxMap,
chain: cfg.chain,
p2p: cfg.p2p,
db: cfg.db,
clock: cfg.clock,
bs: cfg.bs,
})
}
highestExpectedSlot := cfg.highestExpectedSlot
@@ -139,7 +145,7 @@ func newBlocksQueue(ctx context.Context, cfg *blocksQueueConfig) *blocksQueue {
queue.smm.addEventHandler(eventDataReceived, stateScheduled, queue.onDataReceivedEvent(ctx))
queue.smm.addEventHandler(eventTick, stateDataParsed, queue.onReadyToSendEvent(ctx))
queue.smm.addEventHandler(eventTick, stateSkipped, queue.onProcessSkippedEvent(ctx))
queue.smm.addEventHandler(eventTick, stateSent, queue.onCheckStaleEvent(ctx))
queue.smm.addEventHandler(eventTick, stateSent, onCheckStaleEvent(ctx))
return queue
}
@@ -451,7 +457,7 @@ func (q *blocksQueue) onProcessSkippedEvent(ctx context.Context) eventHandlerFn
// onCheckStaleEvent is an event that allows marking stale epochs,
// so that they can be re-processed.
func (_ *blocksQueue) onCheckStaleEvent(ctx context.Context) eventHandlerFn {
func onCheckStaleEvent(ctx context.Context) eventHandlerFn {
return func(m *stateMachine, in interface{}) (stateID, error) {
if ctx.Err() != nil {
return m.state, ctx.Err()

View File

@@ -971,24 +971,12 @@ func TestBlocksQueue_onProcessSkippedEvent(t *testing.T) {
}
func TestBlocksQueue_onCheckStaleEvent(t *testing.T) {
blockBatchLimit := flags.Get().BlockBatchLimit
mc, p2p, _ := initializeTestServices(t, []primitives.Slot{}, []*peerData{})
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
fetcher := newBlocksFetcher(ctx, &blocksFetcherConfig{
chain: mc,
p2p: p2p,
})
t.Run("expired context", func(t *testing.T) {
ctx, cancel := context.WithCancel(ctx)
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: fetcher,
chain: mc,
highestExpectedSlot: primitives.Slot(blockBatchLimit),
})
handlerFn := queue.onCheckStaleEvent(ctx)
handlerFn := onCheckStaleEvent(ctx)
cancel()
updatedState, err := handlerFn(&stateMachine{
state: stateSkipped,
@@ -998,16 +986,10 @@ func TestBlocksQueue_onCheckStaleEvent(t *testing.T) {
})
t.Run("invalid input state", func(t *testing.T) {
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: fetcher,
chain: mc,
highestExpectedSlot: primitives.Slot(blockBatchLimit),
})
invalidStates := []stateID{stateNew, stateScheduled, stateDataParsed, stateSkipped}
for _, state := range invalidStates {
t.Run(state.String(), func(t *testing.T) {
handlerFn := queue.onCheckStaleEvent(ctx)
handlerFn := onCheckStaleEvent(ctx)
updatedState, err := handlerFn(&stateMachine{
state: state,
}, nil)
@@ -1018,12 +1000,7 @@ func TestBlocksQueue_onCheckStaleEvent(t *testing.T) {
})
t.Run("process non stale machine", func(t *testing.T) {
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: fetcher,
chain: mc,
highestExpectedSlot: primitives.Slot(blockBatchLimit),
})
handlerFn := queue.onCheckStaleEvent(ctx)
handlerFn := onCheckStaleEvent(ctx)
updatedState, err := handlerFn(&stateMachine{
state: stateSent,
updated: prysmTime.Now().Add(-staleEpochTimeout / 2),
@@ -1034,12 +1011,7 @@ func TestBlocksQueue_onCheckStaleEvent(t *testing.T) {
})
t.Run("process stale machine", func(t *testing.T) {
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: fetcher,
chain: mc,
highestExpectedSlot: primitives.Slot(blockBatchLimit),
})
handlerFn := queue.onCheckStaleEvent(ctx)
handlerFn := onCheckStaleEvent(ctx)
updatedState, err := handlerFn(&stateMachine{
state: stateSent,
updated: prysmTime.Now().Add(-staleEpochTimeout),

View File

@@ -11,6 +11,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -62,7 +63,39 @@ func (s *Service) roundRobinSync(genesis time.Time) error {
return s.syncToNonFinalizedEpoch(ctx, genesis)
}
// syncToFinalizedEpoch sync from head to best known finalized epoch.
func (s *Service) startBlocksQueue(ctx context.Context, highestSlot primitives.Slot, mode syncMode) (*blocksQueue, error) {
vr := s.clock.GenesisValidatorsRoot()
ctxMap, err := sync.ContextByteVersionsForValRoot(vr)
if err != nil {
return nil, errors.Wrapf(err, "unable to initialize context version map using genesis validator root = %#x", vr)
}
summarizer, err := s.cfg.BlobStorage.WaitForSummarizer(ctx)
if err != nil {
// The summarizer is an optional optimization; we can continue without it, and only stop if there is a different error.
if !errors.Is(err, filesystem.ErrBlobStorageSummarizerUnavailable) {
return nil, err
}
summarizer = nil // This should already be nil, but we'll set it just to be safe.
}
cfg := &blocksQueueConfig{
p2p: s.cfg.P2P,
db: s.cfg.DB,
chain: s.cfg.Chain,
clock: s.clock,
ctxMap: ctxMap,
highestExpectedSlot: highestSlot,
mode: mode,
bs: summarizer,
}
queue := newBlocksQueue(ctx, cfg)
if err := queue.start(); err != nil {
return nil, err
}
return queue, nil
}
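The summarizer handling above treats one specific error as "optional feature unavailable, keep going" and everything else as fatal. A minimal sketch of that guard, with a stand-in error instead of `filesystem.ErrBlobStorageSummarizerUnavailable`:

```go
package main

import (
	"errors"
	"fmt"
)

// errUnavailable stands in for filesystem.ErrBlobStorageSummarizerUnavailable.
var errUnavailable = errors.New("summarizer unavailable")

func waitForSummarizer(ready bool) (string, error) {
	if !ready {
		return "", errUnavailable
	}
	return "summarizer", nil
}

// optionalSummarizer mirrors the guard in startBlocksQueue: the summarizer
// is only an optimization, so its specific "unavailable" error is
// swallowed, while any other error propagates as fatal.
func optionalSummarizer(ready bool) (string, error) {
	s, err := waitForSummarizer(ready)
	if err != nil {
		if !errors.Is(err, errUnavailable) {
			return "", err
		}
		return "", nil // proceed without the optimization
	}
	return s, nil
}

func main() {
	fmt.Println(optionalSummarizer(false)) // "" <nil> — degraded but running
	fmt.Println(optionalSummarizer(true))  // summarizer <nil>
}
```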
// syncToFinalizedEpoch sync from head to the best known finalized epoch.
func (s *Service) syncToFinalizedEpoch(ctx context.Context, genesis time.Time) error {
highestFinalizedSlot, err := slots.EpochStart(s.highestFinalizedEpoch())
if err != nil {
@@ -74,28 +107,12 @@ func (s *Service) syncToFinalizedEpoch(ctx context.Context, genesis time.Time) e
return nil
}
vr := s.clock.GenesisValidatorsRoot()
ctxMap, err := sync.ContextByteVersionsForValRoot(vr)
queue, err := s.startBlocksQueue(ctx, highestFinalizedSlot, modeStopOnFinalizedEpoch)
if err != nil {
return errors.Wrapf(err, "unable to initialize context version map using genesis validator root = %#x", vr)
}
queue := newBlocksQueue(ctx, &blocksQueueConfig{
p2p: s.cfg.P2P,
db: s.cfg.DB,
chain: s.cfg.Chain,
clock: s.clock,
ctxMap: ctxMap,
highestExpectedSlot: highestFinalizedSlot,
mode: modeStopOnFinalizedEpoch,
})
if err := queue.start(); err != nil {
return err
}
for data := range queue.fetchedData {
// If blobs are available, verify that blobs and blocks are consistent.
// We can't import a block if there's no associated blob within DA bound.
// The blob has to pass aggregated proof check.
s.processFetchedData(ctx, genesis, s.cfg.Chain.HeadSlot(), data)
}
@@ -113,21 +130,8 @@ func (s *Service) syncToFinalizedEpoch(ctx context.Context, genesis time.Time) e
// syncToNonFinalizedEpoch sync from head to best known non-finalized epoch supported by majority
// of peers (no less than MinimumSyncPeers*2 peers).
func (s *Service) syncToNonFinalizedEpoch(ctx context.Context, genesis time.Time) error {
vr := s.clock.GenesisValidatorsRoot()
ctxMap, err := sync.ContextByteVersionsForValRoot(vr)
queue, err := s.startBlocksQueue(ctx, slots.Since(genesis), modeNonConstrained)
if err != nil {
return errors.Wrapf(err, "unable to initialize context version map using genesis validator root = %#x", vr)
}
queue := newBlocksQueue(ctx, &blocksQueueConfig{
p2p: s.cfg.P2P,
db: s.cfg.DB,
chain: s.cfg.Chain,
clock: s.clock,
ctxMap: ctxMap,
highestExpectedSlot: slots.Since(genesis),
mode: modeNonConstrained,
})
if err := queue.start(); err != nil {
return err
}
for data := range queue.fetchedData {

Some files were not shown because too many files have changed in this diff.