Compare commits

...

38 Commits

Author SHA1 Message Date
james-prysm
48e561ae9b adding max blobs validation 2025-08-21 15:42:17 -05:00
Justin Traglia
ee03c7cce2 Add spec references, a mapping of spec to implementation (#15592)
* Add spec references, a mapping of spec to implementation

* Add changelog fragment

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-21 14:54:04 +00:00
kasey
c5135f6995 enforce schedule alignment when next_fork_epoch matches (#15604)
* enforce schedule alignment when next_fork_epoch matches

* lint & typo

* James feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 23:57:30 +00:00
Pop Chunhapanya
29aedac113 Fix subnet peer discovery (#15603)
* Fix subnet peer discovery

Currently, computeAllNeededSubnets is called only once, when the subnets
are subscribed. It should instead be called regularly.

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 16:52:11 +00:00
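
A minimal sketch of the periodic recomputation this commit describes; the function name computeAllNeededSubnets is taken from the commit message, while the interval, signature, and everything else here are illustrative stand-ins rather than Prysm's actual internals:

package subnets

import (
	"context"
	"time"
)

// computeAllNeededSubnets is a hypothetical stand-in for the real routine.
func computeAllNeededSubnets() map[uint64]bool {
	return map[uint64]bool{0: true, 1: true}
}

// watchSubnets re-evaluates the needed subnets on every tick, instead of
// computing them once at subscription time (the bug the commit fixes).
func watchSubnets(ctx context.Context, interval time.Duration, subscribe func(map[uint64]bool)) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			subscribe(computeAllNeededSubnets())
		}
	}
}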
kasey
08fb3812b7 provide data column storage to rpc handlers (#15606)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-20 15:51:11 +00:00
kasey
07738dd9a4 improve pubsub topic subscription failure logging (#15600)
* improve pubsub topic subscription failure logging

* Errorf doesn't support %w, so use %v

* log capitalization

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-19 12:24:07 +00:00
james-prysm
53df29b07f Fix find peers regression (#15578)
* adding what I think could be a fix for find peer

* removing unneeded comment

* unit tests

* linting

* gofmt

* changelog

* Update beacon-chain/p2p/discovery_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update changelog/james-prysm_fix-find-peers.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fixing test import

* applying suggestions

* fixing typo

* manu feedback

* accidentally checked in files

* addressing manu's edge case, an old bug

* moving tests from service-test.go to subnets_test.go and adding coverage for receiving bad existing node with higher seq

* cleanup

* updating for clarity

* missingPeerCount should increment if we are removing the peer from the map

* manu's recommendations on defective subnet rollback edge case

* rollback introduced too much complication, as well as a new bug, so we are removing it

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-18 19:41:32 +00:00
Manu NALEPA
00cf1f2507 Implement PeerDAS sync (#15564)
* PeerDAS: Implement sync

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Satyajit's comment.

* Partially fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Add tests for `sendDataColumnSidecarsRequest`.

* Fix Satyajit's comment.

* Implement `TestSendDataColumnSidecarsRequest`.

* Implement `TestFetchDataColumnSidecarsFromPeers`.

* Implement `TestUpdateResults`.

* Implement `TestSelectPeers`.

* Implement `TestCategorizeIndices`.

* Fix James' comment.

* Fix James's comment.

* Fix James' commit.

* Fix James' comment.

* Fix James' comment.

* Fix flakiness in `TestSelectPeers`.

* Update cmd/beacon-chain/flags/config.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix Preston's comment.

* Fix James's comment.

* Implement `TestFetchDataColumnSidecars`.

* Revert "Fix Potuz's comment."

This reverts commit c45230b455.

* Fix Potuz's comment.

* Revert "Fix James' comment."

This reverts commit a3f919205a.

* Fix James' comment.

* Fix Preston's comment.

* Fix James' comment.

* `selectPeers`: Avoid map with key but empty value.

* Fix typo.

* Fix Potuz's comment.

* Fix Potuz's comment.

* Fix James' comment.

* Add DataColumnStorage and SubscribeAllDataSubnets flag.

* Add extra flags

* Fix Potuz's and Preston's comment.

* Add rate limiter check.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-18 14:36:07 +00:00
terence
6528fb9cea Update consensus spec to v1.6.0-alpha.4 and implement data column support (#15590)
* Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests

* Apply suggestion from @prestonvanloon

Co-Authored-By: Preston Van Loon <pvanloon@offchainlabs.com>

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-08-16 15:49:12 +00:00
terence
5021131811 Fix NewSignedBeaconBlock calls to use Block field for equivocation handling (#15595) 2025-08-16 14:19:11 +00:00
kasey
26cec9d9c7 omit NetworkScheduleEntry fields that are not part of BlobScheduleEntry (#15557)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-08-14 22:45:47 +00:00
Justin Traglia
4ed90a02ef Rename various variables/functions to be more clear (#15529)
* Rename various variables/functions to be more clear

* Add changelog fragment

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-08-14 11:06:22 +00:00
trinadh61
7d528c75bb adding user agent validator beacon client (#15574)
* adding user agent validator beacon client

* Update runtime/version/version.go

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>

* test cases

* contribution readme

* setting user agent to build version data

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2025-08-13 21:36:17 +00:00
Preston Van Loon
e7b2953d5a Address out of bounds concern in beacon-chain/core/peerdas/das_core.go (#15586) 2025-08-13 15:07:26 +00:00
Muzry
acf35e849e Update endpoint to return 404 after isOptimistic check (#15559)
* Update endpoint to return 404 after isOptimistic check

* Fix error handling by using predefined errors

* fix: helpers build.bazel

* remove the StateIdDecodeError

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-13 14:40:20 +00:00
Preston Van Loon
c826d334a1 Add missing Fulu block type in stream handler (grpc StreamBlocksAltair) (#15583) 2025-08-12 22:26:26 +00:00
Preston Van Loon
eace128ee9 Fix panic in blob cache when scs array is empty or shorter than commitments (#15581)
* Fix panic in beacon-chain/das/blob_cache.go

* Regression test for empty/short scs array panic

* Changelog fragment
2025-08-12 03:26:22 +00:00
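
The shape of the guard this commit adds, sketched with illustrative names (only the scs/commitments mismatch scenario comes from the commit; the function below is an assumption, not the actual blob_cache.go code): check the slice length before indexing per commitment instead of panicking on a short or empty input.

package das

import "fmt"

// sidecarForCommitment returns the sidecar matching commitment index i, or
// an error instead of panicking when scs is empty or shorter than the
// commitments list.
func sidecarForCommitment(scs [][]byte, i int) ([]byte, error) {
	if i >= len(scs) {
		return nil, fmt.Errorf("commitment %d has no matching sidecar (have %d)", i, len(scs))
	}
	return scs[i], nil
}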
terence
9161b80a32 Implement fulu mev-boost change for blinded block submission (#15486)
* Builder: handle fulu submit blind block return

* Builder: update post-Fulu client to handle 202 Accepted response

* Builder: consolidate HTTP request methods and improve status code validation
2025-08-12 00:49:52 +00:00
terence
009a1507a1 Move aggregated attestation cache key generation outside locks (#15579) 2025-08-11 21:55:44 +00:00
kasey
fa71a6e117 Fusaka ENR changes (#15501)
* fusaka fork digest enr changes

* Add tests for fulu NFD key

* add testcase

* Update log field to spell out 'nfd' as 'NextForkDigest'

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-08-11 21:40:39 +00:00
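
A hedged sketch of advertising a next-fork-digest style entry in the node's ENR using go-ethereum's enr package, which Prysm builds on; the "nfd" key comes from the commit above, but the value encoding and this helper are placeholders rather than the exact wire format:

package p2p

import "github.com/ethereum/go-ethereum/p2p/enr"

// setNextForkDigest writes a next-fork-digest entry into the record under
// the short key "nfd"; peers can read it back with record.Load.
func setNextForkDigest(record *enr.Record, digest [4]byte) {
	record.Set(enr.WithEntry("nfd", digest[:]))
}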
kasey
3da40ecd9c Refactor fork schedules (#15490)
* overhaul fork schedule management for bpos

* Unify log

* Radek's comments

* Use arg config to determine previous epoch, with regression test

* Remove unnecessary NewClock. @potuz feedback

* Continuation of previous commit: Remove unnecessary NewClock. @potuz feedback

* Remove VerifyBlockHeaderSignatureUsingCurrentFork

* cosmetic changes

* Remove unnecessary copy. entryWithForkDigest is passed by value, not by pointer, so it should be fine

* Reuse ErrInvalidTopic from p2p package

* Unskip TestServer_GetBeaconConfig

* Resolve TODO about forkwatcher in local mode

* remove Copy()

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-11 16:08:53 +00:00
terence
f7f992c256 gofmt: fix formatting issues in test files (#15577) 2025-08-11 15:56:53 +00:00
terence
09565e0c3a Refactor attestation cache key generation outside critical section (#15572)
* Refactor attestation cache key generation outside critical section

* Improve attestation cache key error handling and logging
2025-08-10 17:41:30 +00:00
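
An illustrative pattern only — not the actual attestation cache code — showing what "key generation outside the critical section" means: the key derivation runs before the lock is taken, so the critical section shrinks to the map write.

package cache

import "sync"

type attCache struct {
	mu    sync.Mutex
	items map[[32]byte][]byte
}

// put hashes outside the critical section and only locks for the map write.
func (c *attCache) put(data []byte, hashFn func([]byte) [32]byte) {
	key := hashFn(data) // potentially expensive; done before locking
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = data
}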
Jun Song
05a3736310 refactor: removing redundant code in htrutils.go (#15453)
* refactor: use auto-generated HashTreeRoot functions in htrutil.go

* refactor: use type alias for Transaction & use SliceRoot for TransactionsRoot

* changelog

* fix: TransactionsRoot receives raw 2d bytes as an argument

* fix: handle nil argument

* test: add nil test for fork and checkpoint

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-10 10:44:01 +00:00
kasey
84c8653a52 initialize genesis data asap at node start (#15470)
* initialize genesis data asap at node start

* add genesis validation tests with embedded state verification

* Add test for hardcoded mainnet genesis validator root and time from init() function

* Add test for UnmarshalState in encoding/ssz/detect/configfork.go

* Add tests for genesis.Initialize

* Move genesis/embedded to genesis/internal/embedded

* Gazelle / BUILD fix

* James feedback

* Fix lint

* Revert lock

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-10 02:09:40 +00:00
Tomás Andróil
921ff23c6b refactor: use central flag name for validator HTTP server port (#15236)
* Update validator.go

* Create tomasandroil_replace_grpc_gateway_flag_name.md
2025-08-09 10:35:04 +00:00
Radosław Kapka
87235fb384 Don't submit duplicate aggregated SignedContributionAndProof messages (#15571) 2025-08-08 22:57:36 +00:00
james-prysm
2ec5914b4a fixing builder version check (#15568)
* adding fix

* fixing test
2025-08-08 01:42:55 +00:00
Potuz
fe000e5629 Fix validateConsensus (#15548)
* Fix validateConsensus

Reported by NuConstruct

The stater package looks up a state root using the head state from the
blockchain package. However, this state is very unlikely to have the
post-state root, since that is only added after slot processing. I assume
that essentially any REST endpoint that uses this mechanism to get the head
is broken if it needs to gather a state by state root.

This PR is a placeholder to verify that this is the issue; here I just
check whether the NSC already has the post-state, since that will already
have the processing state cached.

* Add changelog

* add fallback

* Fix tests
2025-08-07 13:08:40 +00:00
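
A sketch of the lookup order the commit describes — consult the cache that holds post-states (such as the next-slot cache) first, then fall back to the slower path. The function and its parameter types are illustrative assumptions, not Prysm's stater API:

package stater

import "errors"

// stateByRoot checks a cache of post-states before falling back to the
// database, mirroring the check-then-fallback fix described above.
func stateByRoot(
	root [32]byte,
	fromCache func([32]byte) ([]byte, bool),
	fromDB func([32]byte) ([]byte, error),
) ([]byte, error) {
	if st, ok := fromCache(root); ok {
		return st, nil
	}
	st, err := fromDB(root)
	if err != nil {
		return nil, errors.New("state not found in cache or database")
	}
	return st, nil
}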
Preston Van Loon
447a3d8add Update go to v1.24.6 (#15566) 2025-08-06 20:59:26 +00:00
Jun Song
b00aaef202 Persist metadata sequence number using Beacon DB (#15554)
* Add entry for sequence number in chain-metadata bucket & Basic getter/setter

* Mark p2p-metadata flag as deprecated

* Fix metaDataFromConfig: use DB instead to get seqnum

* Save sequence number after updating the metadata

* Fix beacon-chain/p2p unit tests: add DB in config

* Add changelog

* Add ReadOnlyDatabaseWithSeqNum

* Code suggestion from Manu

* Remove seqnum getter at interface

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-06 20:18:33 +00:00
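
A minimal sketch of what persisting the sequence number in a bbolt bucket could look like; the "chain-metadata" bucket comes from the commit above, but the key name and this helper are illustrative, and the real schema lives in Prysm's db layer:

package kv

import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

var (
	chainMetadataBucket = []byte("chain-metadata")
	seqNumberKey        = []byte("sequence-number") // illustrative key name
)

// saveSequenceNumber persists the p2p metadata sequence number so it
// survives restarts instead of living behind the deprecated file flag.
func saveSequenceNumber(db *bolt.DB, seq uint64) error {
	return db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists(chainMetadataBucket)
		if err != nil {
			return err
		}
		buf := make([]byte, 8)
		binary.LittleEndian.PutUint64(buf, seq)
		return bkt.Put(seqNumberKey, buf)
	})
}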
Potuz
0f6070a866 Fix race on ReceiveBlock (#15565)
* Fix race on ReceiveBlock

If two `ReceiveBlock` routines are triggered with the same block (which
can happen if one is triggered over gossip and the other in init-sync),
the second routine may believe it is syncing the block for the first time.
This is because the `blocksBeingSynced` cache is not checked before being
set, and the first routine may not yet have put the block in forkchoice.

In the normal case this causes no trouble, as the second forkchoice
insertion is a no-op by design. However, if the second routine times out
or hits any processing error (for example, the engine returns an error if
we try to send an FCU for an older head), it will attempt to remove the
inserted block from forkchoice. This bricks the node: forkchoice refuses
to remove a valid node, the root is removed unconditionally from the db,
and the node ends up with a root that is not in the db but remains in
forkchoice.

This PR just prevents the race.

As a follow-up, perhaps we can gate the db rollback function to nodes that
are effectively not in forkchoice, or alternatively force removal from
forkchoice when rolling back from the db (although that version is
complicated by possible accounting issues in forkchoice).

* Fix lint
2025-08-06 17:18:13 +00:00
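
The core of the fix, as it also appears in the diff further down this page: set now reports when a root is already being synced, so the second routine can bail out early instead of racing the first.

package blockchain

import (
	"errors"
	"sync"
)

var errBlockBeingSynced = errors.New("block is being synced")

type currentlySyncingBlock struct {
	sync.Mutex
	roots map[[32]byte]struct{}
}

// set fails when the root is already present, which is exactly the signal
// ReceiveBlock uses to ignore the duplicate routine instead of racing it.
func (b *currentlySyncingBlock) set(root [32]byte) error {
	b.Lock()
	defer b.Unlock()
	if _, ok := b.roots[root]; ok {
		return errBlockBeingSynced
	}
	b.roots[root] = struct{}{}
	return nil
}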
Justin Traglia
2a09c9f681 Fix BlobsBundleV2 proofs limit (#15530)
* Assign max_cell_proofs_length the correct value

* Add changelog fragment

* Run update-go-pbs.sh

* Run update-go-ssz.sh

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-05 21:33:45 +00:00
Jun Song
9c6ccd67c1 Add Fulu case for saveStatesEfficientInternal (#15553)
* Add Fulu case for saveStatesEfficientInternal

* Add changelog

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-05 19:25:17 +00:00
Radosław Kapka
36e5d4926b Redesign the pending attestation queue (#15024)
* Redesign pending attestation queue

* fix bad signature test

* equality tests

* changelog <3

* rename functions

* change logs

* fix fuzzing

* fixes after rebasing

* build fix

* review

* James' review

* fix imports
2025-08-05 18:35:19 +00:00
terence
17b7d3ff12 Add fork choice check to pending attestations processing (#15547) 2025-08-05 16:08:47 +00:00
terence
fb2bceece8 beacon api: optimize val assignment lookup (#15558) 2025-08-05 15:23:37 +00:00
terence
d012ab653c Beacon api: fix proposer duty computation for fulu (#15534) 2025-08-05 15:17:02 +00:00
347 changed files with 22945 additions and 5435 deletions

.github/workflows/check-specrefs.yml (new file, 43 lines)

@@ -0,0 +1,43 @@
name: Check Spec References

on: [push, pull_request]

jobs:
  check-specrefs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Check version consistency
        run: |
          WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/')
          ETHSPECIFY_VERSION=$(grep '^version:' specrefs/.ethspecify.yml | sed 's/version: //')
          if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then
            echo "Version mismatch between WORKSPACE and ethspecify"
            echo "  WORKSPACE: $WORKSPACE_VERSION"
            echo "  specrefs/.ethspecify.yml: $ETHSPECIFY_VERSION"
            exit 1
          else
            echo "Versions match: $WORKSPACE_VERSION"
          fi

      - name: Install ethspecify
        run: python3 -mpip install ethspecify

      - name: Update spec references
        run: ethspecify process --path=specrefs

      - name: Check for differences
        run: |
          if ! git diff --exit-code specrefs >/dev/null; then
            echo "Spec references are out-of-date!"
            echo ""
            git --no-pager diff specrefs
            exit 1
          else
            echo "Spec references are up-to-date!"
          fi

      - name: Check spec references
        run: ethspecify check --path=specrefs


@@ -208,7 +208,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
 go_rules_dependencies()
 
 go_register_toolchains(
-    go_version = "1.24.5",
+    go_version = "1.24.6",
     nogo = "@//:nogo",
 )
@@ -253,16 +253,16 @@ filegroup(
     url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
 )
 
-consensus_spec_version = "v1.6.0-alpha.1"
+consensus_spec_version = "v1.6.0-alpha.4"
 
 load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
 
 consensus_spec_tests(
     name = "consensus_spec_tests",
     flavors = {
-        "general": "sha256-o4t9p3R+fQHF4KOykGmwlG3zDw5wUdVWprkzId8aIsk=",
-        "minimal": "sha256-sU7ToI8t3MR8x0vVjC8ERmAHZDWpEmnAC9FWIpHi5x4=",
-        "mainnet": "sha256-YKS4wngg0LgI9Upp4MYJ77aG+8+e/G4YeqEIlp06LZw=",
+        "general": "sha256-MaN4zu3o0vWZypUHS5r4D8WzJF4wANoadM8qm6iyDs4=",
+        "minimal": "sha256-aZGNPp/bBvJgq3Wf6vyR0H6G3DOkbSuggEmOL4jEmtg=",
+        "mainnet": "sha256-C7jjosvpzUgw3GPajlsWBV02ZbkZ5Uv4ikmOqfDGajI=",
     },
     version = consensus_spec_version,
 )
@@ -278,7 +278,7 @@ filegroup(
     visibility = ["//visibility:public"],
 )
 """,
-    integrity = "sha256-Nv4TEuEJPQIM4E6T9J0FOITsmappmXZjGtlhe1HEXnU=",
+    integrity = "sha256-qreawRS77l8CebiNww8z727qUItw7KlHY1Xqj7IrPdk=",
     strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
     url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
 )


@@ -16,7 +16,6 @@ go_library(
"//api/server/structs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",


@@ -9,7 +9,6 @@ import (
"net/url"
"path"
"regexp"
"sort"
"strconv"
"github.com/OffchainLabs/prysm/v6/api/client"
@@ -17,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
@@ -137,24 +135,6 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
 	return fr.ToConsensus()
 }
 
-// GetForkSchedule retrieve all forks, past present and future, of which this node is aware.
-func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
-	body, err := c.Get(ctx, getForkSchedulePath)
-	if err != nil {
-		return nil, errors.Wrap(err, "error requesting fork schedule")
-	}
-	fsr := &forkScheduleResponse{}
-	err = json.Unmarshal(body, fsr)
-	if err != nil {
-		return nil, err
-	}
-	ofs, err := fsr.OrderedForkSchedule()
-	if err != nil {
-		return nil, errors.Wrap(err, fmt.Sprintf("problem unmarshaling %s response", getForkSchedulePath))
-	}
-	return ofs, nil
-}
-
 // GetConfigSpec retrieve the current configs of the network used by the beacon node.
 func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
 	body, err := c.Get(ctx, getConfigSpecPath)
@@ -334,31 +314,3 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToEx
 	}
 	return poolResponse, nil
 }
-
-type forkScheduleResponse struct {
-	Data []structs.Fork
-}
-
-func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {
-	ofs := make(forks.OrderedSchedule, 0)
-	for _, d := range fsr.Data {
-		epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
-		if err != nil {
-			return nil, errors.Wrapf(err, "error parsing epoch %s", d.Epoch)
-		}
-		vSlice, err := hexutil.Decode(d.CurrentVersion)
-		if err != nil {
-			return nil, err
-		}
-		if len(vSlice) != 4 {
-			return nil, fmt.Errorf("got %d byte version, expected 4 bytes. version hex=%s", len(vSlice), d.CurrentVersion)
-		}
-		version := bytesutil.ToBytes4(vSlice)
-		ofs = append(ofs, forks.ForkScheduleEntry{
-			Version: version,
-			Epoch:   primitives.Epoch(epoch),
-		})
-	}
-	sort.Sort(ofs)
-	return ofs, nil
-}


@@ -102,6 +102,7 @@ type BuilderClient interface {
 	GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error)
 	RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error
 	SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
+	SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) error
 	Status(ctx context.Context) error
 }
@@ -152,7 +153,8 @@ func (c *Client) NodeURL() string {
 type reqOption func(*http.Request)
 
 // do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package api/client/builder.
-func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) (res []byte, header http.Header, err error) {
+// It validates that the HTTP response status matches the expectedStatus parameter.
+func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, expectedStatus int, opts ...reqOption) (res []byte, header http.Header, err error) {
 	ctx, span := trace.StartSpan(ctx, "builder.client.do")
 	defer func() {
 		tracing.AnnotateError(span, err)
@@ -187,8 +189,8 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
 			log.WithError(closeErr).Error("Failed to close response body")
 		}
 	}()
-	if r.StatusCode != http.StatusOK {
-		err = non200Err(r)
+	if r.StatusCode != expectedStatus {
+		err = unexpectedStatusErr(r, expectedStatus)
 		return
 	}
 	res, err = io.ReadAll(io.LimitReader(r.Body, client.MaxBodySize))
@@ -236,7 +238,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
 			r.Header.Set("Accept", api.JsonMediaType)
 		}
 	}
-	data, header, err := c.do(ctx, http.MethodGet, path, nil, getOpts)
+	data, header, err := c.do(ctx, http.MethodGet, path, nil, http.StatusOK, getOpts)
 	if err != nil {
 		return nil, errors.Wrap(err, "error getting header from builder server")
 	}
@@ -409,7 +411,7 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
 		}
 	}
 
-	if _, _, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body), postOpts); err != nil {
+	if _, _, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body), http.StatusOK, postOpts); err != nil {
 		return errors.Wrap(err, "do")
 	}
 	log.WithField("registrationCount", len(svr)).Debug("Successfully registered validator(s) on builder")
@@ -471,7 +473,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
 	// post the blinded block - the execution payload response should contain the unblinded payload, along with the
 	// blobs bundle if it is post deneb.
-	data, header, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), postOpts)
+	data, header, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), http.StatusOK, postOpts)
 	if err != nil {
 		return nil, nil, errors.Wrap(err, "error posting the blinded block to the builder api")
 	}
@@ -501,6 +503,24 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
 	return ed, blobs, nil
 }
 
+// SubmitBlindedBlockPostFulu calls the builder API endpoint post-Fulu where relays only return status codes.
+// This method is used after the Fulu fork when MEV-boost relays no longer return execution payloads.
+func (c *Client) SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) error {
+	body, postOpts, err := c.buildBlindedBlockRequest(sb)
+	if err != nil {
+		return err
+	}
+
+	// Post the blinded block - the response should only contain a status code (no payload)
+	_, _, err = c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), http.StatusAccepted, postOpts)
+	if err != nil {
+		return errors.Wrap(err, "error posting the blinded block to the builder api post-Fulu")
+	}
+
+	// Success is indicated by no error (status 202)
+	return nil
+}
+
 func (c *Client) checkBlockVersion(respBytes []byte, header http.Header) (int, error) {
 	var versionHeader string
 	if c.sszEnabled {
@@ -657,11 +677,11 @@ func (c *Client) Status(ctx context.Context) error {
 	getOpts := func(r *http.Request) {
 		r.Header.Set("Accept", api.JsonMediaType)
 	}
-	_, _, err := c.do(ctx, http.MethodGet, getStatus, nil, getOpts)
+	_, _, err := c.do(ctx, http.MethodGet, getStatus, nil, http.StatusOK, getOpts)
 	return err
 }
 
-func non200Err(response *http.Response) error {
+func unexpectedStatusErr(response *http.Response, expected int) error {
 	bodyBytes, err := io.ReadAll(io.LimitReader(response.Body, client.MaxErrBodySize))
 	var errMessage ErrorMessage
 	var body string
@@ -670,7 +690,7 @@ func non200Err(response *http.Response) error {
 	} else {
 		body = "response body:\n" + string(bodyBytes)
 	}
-	msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
+	msg := fmt.Sprintf("expected=%d, got=%d, url=%s, body=%s", expected, response.StatusCode, response.Request.URL, body)
 	switch response.StatusCode {
 	case http.StatusUnsupportedMediaType:
 		log.WithError(ErrUnsupportedMediaType).Debug(msg)


@@ -1555,6 +1555,89 @@ func testSignedBlindedBeaconBlockElectra(t *testing.T) *eth.SignedBlindedBeaconB
}
}
func TestSubmitBlindedBlockPostFulu(t *testing.T) {
ctx := t.Context()
t.Run("success", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
// Post-Fulu: only return status code, no payload
return &http.Response{
StatusCode: http.StatusAccepted,
Body: io.NopCloser(bytes.NewBufferString("")),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
err = c.SubmitBlindedBlockPostFulu(ctx, sbbb)
require.NoError(t, err)
})
t.Run("success_ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
// Post-Fulu: only return status code, no payload
return &http.Response{
StatusCode: http.StatusAccepted,
Body: io.NopCloser(bytes.NewBufferString("")),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
sszEnabled: true,
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
err = c.SubmitBlindedBlockPostFulu(ctx, sbbb)
require.NoError(t, err)
})
t.Run("error_response", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
message := ErrorMessage{
Code: 400,
Message: "Bad Request",
}
resp, err := json.Marshal(message)
require.NoError(t, err)
return &http.Response{
StatusCode: http.StatusBadRequest,
Body: io.NopCloser(bytes.NewBuffer(resp)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
err = c.SubmitBlindedBlockPostFulu(ctx, sbbb)
require.ErrorIs(t, err, ErrNotOK)
})
}
func TestRequestLogger(t *testing.T) {
wo := WithObserver(&requestLogger{})
c, err := NewClient("localhost:3500", wo)
@@ -1727,7 +1810,7 @@ func TestSubmitBlindedBlock_BlobsBundlerInterface(t *testing.T) {
t.Run("Interface signature verification", func(t *testing.T) {
// This test verifies that the SubmitBlindedBlock method signature
// has been updated to return BlobsBundler interface
client := &Client{}
// Verify the method exists with the correct signature


@@ -45,6 +45,11 @@ func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySig
 	return nil, nil, nil
 }
 
+// SubmitBlindedBlockPostFulu --
+func (MockClient) SubmitBlindedBlockPostFulu(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) error {
+	return nil
+}
+
 // Status --
 func (MockClient) Status(_ context.Context) error {
 	return nil


@@ -1699,7 +1699,7 @@ func TestExecutionPayloadHeaderCapellaRoundtrip(t *testing.T) {
 	require.DeepEqual(t, string(expected[0:len(expected)-1]), string(m))
 }
 
-func TestErrorMessage_non200Err(t *testing.T) {
+func TestErrorMessage_unexpectedStatusErr(t *testing.T) {
 	mockRequest := &http.Request{
 		URL: &url.URL{Path: "example.com"},
 	}
@@ -1779,7 +1779,7 @@ func TestErrorMessage_non200Err(t *testing.T) {
 	}
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			err := non200Err(tt.args)
+			err := unexpectedStatusErr(tt.args, http.StatusOK)
 			if err != nil && tt.wantMessage != "" {
 				require.ErrorContains(t, tt.wantMessage, err)
 			}


@@ -182,6 +182,7 @@ go_test(
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",


@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -624,6 +625,7 @@ func Test_hashForGenesisRoot(t *testing.T) {
 	ctx := t.Context()
 	c := setupBeaconChain(t, beaconDB)
 	st, _ := util.DeterministicGenesisStateElectra(t, 10)
+	genesis.StoreDuringTest(t, genesis.GenesisData{State: st})
 	require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))
 	root, err := beaconDB.GenesisBlockRoot(ctx)
 	require.NoError(t, err)


@@ -7,10 +7,15 @@ type currentlySyncingBlock struct {
 	roots map[[32]byte]struct{}
 }
 
-func (b *currentlySyncingBlock) set(root [32]byte) {
+func (b *currentlySyncingBlock) set(root [32]byte) error {
 	b.Lock()
 	defer b.Unlock()
 
+	_, ok := b.roots[root]
+	if ok {
+		return errBlockBeingSynced
+	}
+
 	b.roots[root] = struct{}{}
+	return nil
 }
 
 func (b *currentlySyncingBlock) unset(root [32]byte) {


@@ -44,6 +44,8 @@ var (
 	errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
 	// errMaxDataColumnsExceeded is returned when the number of data columns exceeds the maximum allowed.
 	errMaxDataColumnsExceeded = verification.AsVerificationFailure(errors.New("expected data columns for node exceeds NUMBER_OF_COLUMNS"))
+	// errBlockBeingSynced is returned when a block is being synced.
+	errBlockBeingSynced = errors.New("block is being synced")
 )
 
 // An invalid block is the block that fails state transition based on the core protocol rules.


@@ -19,6 +19,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -309,6 +310,7 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
 		block: wba,
 	}
 
+	genesis.StoreStateDuringTest(t, st)
 	require.NoError(t, beaconDB.SaveState(ctx, st, bra))
 	require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
 	a := &fcuConfig{
@@ -403,6 +405,7 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
 	require.NoError(t, err)
 
 	bState, _ := util.DeterministicGenesisState(t, 10)
+	genesis.StoreStateDuringTest(t, bState)
 	require.NoError(t, beaconDB.SaveState(ctx, bState, bra))
 	require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
 	state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)


@@ -6,20 +6,20 @@ import (
 )
 
 // Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
-func Verify(sidecars ...blocks.ROBlob) error {
-	if len(sidecars) == 0 {
+func Verify(blobSidecars ...blocks.ROBlob) error {
+	if len(blobSidecars) == 0 {
 		return nil
 	}
 
-	if len(sidecars) == 1 {
+	if len(blobSidecars) == 1 {
 		return kzgContext.VerifyBlobKZGProof(
-			bytesToBlob(sidecars[0].Blob),
-			bytesToCommitment(sidecars[0].KzgCommitment),
-			bytesToKZGProof(sidecars[0].KzgProof))
+			bytesToBlob(blobSidecars[0].Blob),
+			bytesToCommitment(blobSidecars[0].KzgCommitment),
+			bytesToKZGProof(blobSidecars[0].KzgProof))
 	}
 
-	blobs := make([]GoKZG.Blob, len(sidecars))
-	cmts := make([]GoKZG.KZGCommitment, len(sidecars))
-	proofs := make([]GoKZG.KZGProof, len(sidecars))
-	for i, sidecar := range sidecars {
+	blobs := make([]GoKZG.Blob, len(blobSidecars))
+	cmts := make([]GoKZG.KZGCommitment, len(blobSidecars))
+	proofs := make([]GoKZG.KZGProof, len(blobSidecars))
+	for i, sidecar := range blobSidecars {
 		blobs[i] = *bytesToBlob(sidecar.Blob)
 		cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
 		proofs[i] = bytesToKZGProof(sidecar.KzgProof)


@@ -22,8 +22,8 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
 }
 
 func TestVerify(t *testing.T) {
-	sidecars := make([]blocks.ROBlob, 0)
-	require.NoError(t, Verify(sidecars...))
+	blobSidecars := make([]blocks.ROBlob, 0)
+	require.NoError(t, Verify(blobSidecars...))
 }
 
 func TestBytesToAny(t *testing.T) {


@@ -240,9 +240,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
 		}
 	}
 
-	if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
-		return errors.Wrapf(err, "could not validate sidecar availability at slot %d", b.Block().Slot())
+	if err := s.areSidecarsAvailable(ctx, avs, b); err != nil {
+		return errors.Wrapf(err, "could not validate sidecar availability for block %#x at slot %d", b.Root(), b.Block().Slot())
 	}
+
 	args := &forkchoicetypes.BlockAndCheckpoints{Block: b,
 		JustifiedCheckpoint: jCheckpoints[i],
 		FinalizedCheckpoint: fCheckpoints[i]}
@@ -308,6 +309,30 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
 	return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
 }
 
+func (s *Service) areSidecarsAvailable(ctx context.Context, avs das.AvailabilityStore, roBlock consensusblocks.ROBlock) error {
+	blockVersion := roBlock.Version()
+	block := roBlock.Block()
+	slot := block.Slot()
+
+	if blockVersion >= version.Fulu {
+		if err := s.areDataColumnsAvailable(ctx, roBlock.Root(), block); err != nil {
+			return errors.Wrapf(err, "are data columns available for block %#x with slot %d", roBlock.Root(), slot)
+		}
+
+		return nil
+	}
+
+	if blockVersion >= version.Deneb {
+		if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), roBlock); err != nil {
+			return errors.Wrapf(err, "could not validate sidecar availability at slot %d", slot)
+		}
+
+		return nil
+	}
+
+	return nil
+}
+
 func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
 	e := coreTime.CurrentEpoch(st)
 	if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -584,7 +609,7 @@ func (s *Service) runLateBlockTasks() {
 // It returns a map where each key represents a missing BlobSidecar index.
 // An empty map means we have all indices; a non-empty map can be used to compare incoming
 // BlobSidecars against the set of known missing sidecars.
-func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
+func missingBlobIndices(store *filesystem.BlobStorage, root [fieldparams.RootLength]byte, expected [][]byte, slot primitives.Slot) (map[uint64]bool, error) {
 	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
 	if len(expected) == 0 {
 		return nil, nil
@@ -592,7 +617,7 @@ func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength
 	if len(expected) > maxBlobsPerBlock {
 		return nil, errMaxBlobsExceeded
 	}
 
-	indices := bs.Summary(root)
+	indices := store.Summary(root)
 	missing := make(map[uint64]bool, len(expected))
 	for i := range expected {
 		if len(expected[i]) > 0 && !indices.HasIndex(uint64(i)) {
@@ -607,7 +632,7 @@ func missingBlobIndices(bs *filesystem.BlobStorage, root [fieldparams.RootLength
 // It returns a map where each key represents a missing DataColumnSidecar index.
 // An empty map means we have all indices; a non-empty map can be used to compare incoming
 // DataColumns against the set of known missing sidecars.
-func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
+func missingDataColumnIndices(store *filesystem.DataColumnStorage, root [fieldparams.RootLength]byte, expected map[uint64]bool) (map[uint64]bool, error) {
 	if len(expected) == 0 {
 		return nil, nil
 	}
@@ -619,7 +644,7 @@ func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparam
 	}
 
 	// Get a summary of the data columns stored in the database.
-	summary := bs.Summary(root)
+	summary := store.Summary(root)
 
 	// Check all expected data columns against the summary.
 	missing := make(map[uint64]bool)
@@ -717,7 +742,7 @@ func (s *Service) areDataColumnsAvailable(
 	summary := s.dataColumnStorage.Summary(root)
 	storedDataColumnsCount := summary.Count()
 
-	minimumColumnCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
+	minimumColumnCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
 
 	// As soon as we have enough data column sidecars, we can reconstruct the missing ones.
 	// We don't need to wait for the rest of the data columns to declare the block as available.
@@ -820,7 +845,7 @@ func (s *Service) areDataColumnsAvailable(
 				missingIndices = uint64MapToSortedSlice(missingMap)
 			}
 
-			return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndices)
+			return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing: %v", block.Slot(), root, missingIndices)
 		}
 	}
 }


@@ -35,6 +35,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -1980,14 +1981,15 @@ func TestNoViableHead_Reboot(t *testing.T) {
 	genesisState, keys := util.DeterministicGenesisState(t, 64)
 	stateRoot, err := genesisState.HashTreeRoot(ctx)
 	require.NoError(t, err, "Could not hash genesis state")
-	genesis := blocks.NewGenesisBlock(stateRoot[:])
-	wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
+	gb := blocks.NewGenesisBlock(stateRoot[:])
+	wsb, err := consensusblocks.NewSignedBeaconBlock(gb)
 	require.NoError(t, err)
-	genesisRoot, err := genesis.Block.HashTreeRoot()
+	genesisRoot, err := gb.Block.HashTreeRoot()
 	require.NoError(t, err, "Could not get signing root")
 	require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
-	require.NoError(t, service.saveGenesisData(ctx, genesisState))
+	genesis.StoreStateDuringTest(t, genesisState)
 	require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, genesisState, genesisRoot), "Could not save genesis state")
 	require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, genesisRoot), "Could not save genesis state")
 	require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot), "Could not save genesis state")
@@ -2887,7 +2889,7 @@ func TestIsDataAvailable(t *testing.T) {
 	})
 
 	t.Run("Fulu - more than half of the columns in custody", func(t *testing.T) {
-		minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
+		minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
 		indices := make([]uint64, 0, minimumColumnsCountToReconstruct)
 		for i := range minimumColumnsCountToReconstruct {
 			indices = append(indices, i)
@@ -2972,7 +2974,7 @@ func TestIsDataAvailable(t *testing.T) {
 	startWaiting := make(chan bool)
 
-	minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
+	minimumColumnsCountToReconstruct := peerdas.MinimumColumnCountToReconstruct()
 	indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)
 	for i := range minimumColumnsCountToReconstruct - missingColumns {


@@ -84,7 +84,11 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
 	}
 	receivedTime := time.Now()
 
-	s.blockBeingSynced.set(blockRoot)
+	err := s.blockBeingSynced.set(blockRoot)
+	if errors.Is(err, errBlockBeingSynced) {
+		log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring block currently being synced")
+		return nil
+	}
 	defer s.blockBeingSynced.unset(blockRoot)
 
 	blockCopy, err := block.Copy()


@@ -311,7 +311,10 @@ func TestService_HasBlock(t *testing.T) {
 	r, err = b.Block.HashTreeRoot()
 	require.NoError(t, err)
 	require.Equal(t, true, s.HasBlock(t.Context(), r))
-	s.blockBeingSynced.set(r)
+	err = s.blockBeingSynced.set(r)
+	require.NoError(t, err)
+	err = s.blockBeingSynced.set(r)
+	require.ErrorIs(t, err, errBlockBeingSynced)
 	require.Equal(t, false, s.HasBlock(t.Context(), r))
 }


@@ -17,7 +17,7 @@ func (s *Service) ReceiveDataColumns(dataColumnSidecars []blocks.VerifiedRODataC
 // ReceiveDataColumn receives a single data column.
 func (s *Service) ReceiveDataColumn(dataColumnSidecar blocks.VerifiedRODataColumn) error {
 	if err := s.dataColumnStorage.Save([]blocks.VerifiedRODataColumn{dataColumnSidecar}); err != nil {
-		return errors.Wrap(err, "save data column sidecars")
+		return errors.Wrap(err, "save data column sidecar")
 	}
 
 	return nil


@@ -12,7 +12,6 @@ import (
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
@@ -207,17 +206,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
 // Start a blockchain service's main event loop.
 func (s *Service) Start() {
-	saved := s.cfg.FinalizedStateAtStartUp
-	defer s.removeStartupState()
-
-	if saved != nil && !saved.IsNil() {
-		if err := s.StartFromSavedState(saved); err != nil {
-			log.Fatal(err)
-		}
-	} else {
-		if err := s.startFromExecutionChain(); err != nil {
-			log.Fatal(err)
-		}
+	if err := s.StartFromSavedState(s.cfg.FinalizedStateAtStartUp); err != nil {
+		log.Fatal(err)
 	}
 
 	s.spawnProcessAttestationsRoutine()
 	go s.runLateBlockTasks()
@@ -266,6 +257,9 @@ func (s *Service) Status() error {
 // StartFromSavedState initializes the blockchain using a previously saved finalized checkpoint.
 func (s *Service) StartFromSavedState(saved state.BeaconState) error {
+	if state.IsNil(saved) {
+		return errors.New("Last finalized state at startup is nil")
+	}
 	log.Info("Blockchain data already exists in DB, initializing...")
 	s.genesisTime = saved.GenesisTime()
 	s.cfg.AttService.SetGenesisTime(saved.GenesisTime())
@@ -371,62 +365,6 @@ func (s *Service) initializeHead(ctx context.Context, st state.BeaconState) erro
 	return nil
 }
 
-func (s *Service) startFromExecutionChain() error {
-	log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
-	if s.cfg.ChainStartFetcher == nil {
-		return errors.New("not configured execution chain")
-	}
-	go func() {
-		stateChannel := make(chan *feed.Event, 1)
-		stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
-		defer stateSub.Unsubscribe()
-		for {
-			select {
-			case e := <-stateChannel:
-				if e.Type == statefeed.ChainStarted {
-					data, ok := e.Data.(*statefeed.ChainStartedData)
-					if !ok {
-						log.Error("Event data is not type *statefeed.ChainStartedData")
-						return
-					}
-					log.WithField("startTime", data.StartTime).Debug("Received chain start event")
-					s.onExecutionChainStart(s.ctx, data.StartTime)
-					return
-				}
-			case <-s.ctx.Done():
-				log.Debug("Context closed, exiting goroutine")
-				return
-			case err := <-stateSub.Err():
-				log.WithError(err).Error("Subscription to state forRoot failed")
-				return
-			}
-		}
-	}()
-	return nil
-}
-
-// onExecutionChainStart initializes a series of deposits from the ChainStart deposits in the eth1
-// deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
-func (s *Service) onExecutionChainStart(ctx context.Context, genesisTime time.Time) {
-	preGenesisState := s.cfg.ChainStartFetcher.PreGenesisState()
-	initializedState, err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.cfg.ChainStartFetcher.ChainStartEth1Data())
-	if err != nil {
-		log.WithError(err).Fatal("Could not initialize beacon chain")
-	}
-	// We start a counter to genesis, if needed.
-	gRoot, err := initializedState.HashTreeRoot(s.ctx)
-	if err != nil {
-		log.WithError(err).Fatal("Could not hash tree root genesis state")
-	}
-	go slots.CountdownToGenesis(ctx, genesisTime, uint64(initializedState.NumValidators()), gRoot)
-
-	vr := bytesutil.ToBytes32(initializedState.GenesisValidatorsRoot())
-	if err := s.clockSetter.SetClock(startup.NewClock(genesisTime, vr)); err != nil {
-		log.WithError(err).Fatal("Failed to initialize blockchain service from execution start event")
-	}
-}
-
 // initializes the state and genesis block of the beacon chain to persistent storage
 // based on a genesis timestamp value obtained from the ChainStart event emitted
 // by the ETH1.0 Deposit Contract and the POWChain service of the node.


@@ -31,6 +31,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/trie"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -51,6 +52,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
 		srv.Stop()
 	})
 	bState, _ := util.DeterministicGenesisState(t, 10)
+	genesis.StoreStateDuringTest(t, bState)
 	pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProtoUnsafe())
 	require.NoError(t, err)
 	mockTrie, err := trie.NewTrie(0)
@@ -71,20 +73,22 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
 		DepositContainers: []*ethpb.DepositContainer{},
 	})
 	require.NoError(t, err)
+	depositCache, err := depositsnapshot.New()
+	require.NoError(t, err)
 	web3Service, err = execution.NewService(
 		ctx,
 		execution.WithDatabase(beaconDB),
 		execution.WithHttpEndpoint(endpoint),
 		execution.WithDepositContractAddress(common.Address{}),
+		execution.WithDepositCache(depositCache),
 	)
 	require.NoError(t, err, "Unable to set up web3 service")
 	attService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
 	require.NoError(t, err)
-	depositCache, err := depositsnapshot.New()
-	require.NoError(t, err)
 	fc := doublylinkedtree.New()
 	stateGen := stategen.New(beaconDB, fc)
 	// Safe a state in stategen to purposes of testing a service stop / shutdown.
@@ -396,24 +400,6 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
 	require.Equal(t, true, s.cfg.BeaconDB.HasBlock(s.ctx, r))
 }
 
-func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
-	ctx := t.Context()
-	beaconDB := testDB.SetupDB(t)
-	service := setupBeaconChain(t, beaconDB)
-	mgs := &MockClockSetter{}
-	service.clockSetter = mgs
-	gt := time.Now()
-	service.onExecutionChainStart(t.Context(), gt)
-
-	gs, err := beaconDB.GenesisState(ctx)
-	require.NoError(t, err)
-	require.NotEqual(t, nil, gs)
-	require.Equal(t, 32, len(gs.GenesisValidatorsRoot()))
-	var zero [32]byte
-	require.DeepNotEqual(t, gs.GenesisValidatorsRoot(), zero[:])
-	require.Equal(t, gt, mgs.G.GenesisTime())
-	require.Equal(t, bytesutil.ToBytes32(gs.GenesisValidatorsRoot()), mgs.G.GenesisValidatorsRoot())
-}
-
 func BenchmarkHasBlockDB(b *testing.B) {
 	ctx := b.Context()
 	s := testServiceWithDB(b)


@@ -89,7 +89,7 @@ func (mb *mockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
 	return nil
 }
 
-func (mb *mockBroadcaster) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
+func (mb *mockBroadcaster) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
 	mb.broadcastCalled = true
 	return nil
 }


@@ -25,6 +25,7 @@ var ErrNoBuilder = errors.New("builder endpoint not configured")
 // BlockBuilder defines the interface for interacting with the block builder
 type BlockBuilder interface {
 	SubmitBlindedBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
+	SubmitBlindedBlockPostFulu(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) error
 	GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubKey [48]byte) (builder.SignedBid, error)
 	RegisterValidator(ctx context.Context, reg []*ethpb.SignedValidatorRegistrationV1) error
 	RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
@@ -101,6 +102,22 @@ func (s *Service) SubmitBlindedBlock(ctx context.Context, b interfaces.ReadOnlyS
 	return s.c.SubmitBlindedBlock(ctx, b)
 }
 
+// SubmitBlindedBlockPostFulu submits a blinded block to the builder relay network post-Fulu.
+// After Fulu, relays only return status codes (no payload).
+func (s *Service) SubmitBlindedBlockPostFulu(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) error {
+	ctx, span := trace.StartSpan(ctx, "builder.SubmitBlindedBlockPostFulu")
+	defer span.End()
+	start := time.Now()
+	defer func() {
+		submitBlindedBlockLatency.Observe(float64(time.Since(start).Milliseconds()))
+	}()
+	if s.c == nil {
+		return ErrNoBuilder
+	}
+	return s.c.SubmitBlindedBlockPostFulu(ctx, b)
+}
+
 // GetHeader retrieves the header for a given slot and parent hash from the builder relay network.
 func (s *Service) GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubKey [48]byte) (builder.SignedBid, error) {
 	ctx, span := trace.StartSpan(ctx, "builder.GetHeader")


@@ -24,21 +24,22 @@ type Config struct {
 // MockBuilderService to mock builder.
 type MockBuilderService struct {
-	HasConfigured         bool
-	Payload               *v1.ExecutionPayload
-	PayloadCapella        *v1.ExecutionPayloadCapella
-	PayloadDeneb          *v1.ExecutionPayloadDeneb
-	BlobBundle            *v1.BlobsBundle
-	BlobBundleV2          *v1.BlobsBundleV2
-	ErrSubmitBlindedBlock error
-	Bid                   *ethpb.SignedBuilderBid
-	BidCapella            *ethpb.SignedBuilderBidCapella
-	BidDeneb              *ethpb.SignedBuilderBidDeneb
-	BidElectra            *ethpb.SignedBuilderBidElectra
-	RegistrationCache     *cache.RegistrationCache
-	ErrGetHeader          error
-	ErrRegisterValidator  error
-	Cfg                   *Config
+	HasConfigured                 bool
+	Payload                       *v1.ExecutionPayload
+	PayloadCapella                *v1.ExecutionPayloadCapella
+	PayloadDeneb                  *v1.ExecutionPayloadDeneb
+	BlobBundle                    *v1.BlobsBundle
+	BlobBundleV2                  *v1.BlobsBundleV2
+	ErrSubmitBlindedBlock         error
+	ErrSubmitBlindedBlockPostFulu error
+	Bid                           *ethpb.SignedBuilderBid
+	BidCapella                    *ethpb.SignedBuilderBidCapella
+	BidDeneb                      *ethpb.SignedBuilderBidDeneb
+	BidElectra                    *ethpb.SignedBuilderBidElectra
+	RegistrationCache             *cache.RegistrationCache
+	ErrGetHeader                  error
+	ErrRegisterValidator          error
+	Cfg                           *Config
 }
 
 // Configured for mocking.
@@ -115,3 +116,8 @@ func (s *MockBuilderService) RegistrationByValidatorID(ctx context.Context, id p
 func (s *MockBuilderService) RegisterValidator(context.Context, []*ethpb.SignedValidatorRegistrationV1) error {
 	return s.ErrRegisterValidator
 }
+
+// SubmitBlindedBlockPostFulu for mocking.
+func (s *MockBuilderService) SubmitBlindedBlockPostFulu(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) error {
+	return s.ErrSubmitBlindedBlockPostFulu
+}


@@ -41,7 +41,6 @@ go_library(
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",


@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -101,7 +100,7 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
 // via the respective epoch.
 func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState, blk interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
 	currentEpoch := slots.ToEpoch(blk.Block().Slot())
-	fork, err := forks.Fork(currentEpoch)
+	fork, err := params.Fork(currentEpoch)
 	if err != nil {
 		return err
 	}


@@ -78,6 +78,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
 func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
 	helpers.ClearCache()
+	params.SetupTestConfigCleanup(t)
 
 	validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
 	syncCommittee := &ethpb.SyncCommittee{
@@ -264,6 +265,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
 }
 
 func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
+	params.SetupTestConfigCleanup(t)
 	helpers.ClearCache()
 
 	validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)


@@ -223,6 +223,14 @@ func dataColumnsSidecars(
 			cellsForRow := cellsAndProofs[rowIndex].Cells
 			proofsForRow := cellsAndProofs[rowIndex].Proofs
 
+			// Validate that we have enough cells and proofs for this column index
+			if columnIndex >= uint64(len(cellsForRow)) {
+				return nil, errors.Errorf("column index %d exceeds cells length %d for blob %d", columnIndex, len(cellsForRow), rowIndex)
+			}
+			if columnIndex >= uint64(len(proofsForRow)) {
+				return nil, errors.Errorf("column index %d exceeds proofs length %d for blob %d", columnIndex, len(proofsForRow), rowIndex)
+			}
+
 			cell := cellsForRow[columnIndex]
 			column = append(column, cell)


@@ -67,6 +67,55 @@ func TestDataColumnSidecars(t *testing.T) {
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorIs(t, err, peerdas.ErrSizeMismatch)
})
t.Run("cells array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with insufficient cells for the number of columns.
// This simulates a scenario where cellsAndProofs has fewer cells than expected columns.
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, 10), // Only 10 cells
Proofs: make([]kzg.Proof, 10), // Only 10 proofs
},
}
// This should fail because the function will try to access columns up to NumberOfColumns
// but we only have 10 cells/proofs.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds cells length", err)
})
t.Run("proofs array too short for column index", func(t *testing.T) {
// Create a Fulu block with a blob commitment.
signedBeaconBlockPb := util.NewBeaconBlockFulu()
signedBeaconBlockPb.Block.Body.BlobKzgCommitments = [][]byte{make([]byte, 48)}
// Create a signed beacon block from the protobuf.
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// Create cells and proofs with sufficient cells but insufficient proofs.
numberOfColumns := params.BeaconConfig().NumberOfColumns
cellsAndProofs := []kzg.CellsAndProofs{
{
Cells: make([]kzg.Cell, numberOfColumns),
Proofs: make([]kzg.Proof, 5), // Only 5 proofs, less than columns
},
}
// This should fail when trying to access proof beyond index 4.
_, err = peerdas.DataColumnSidecars(signedBeaconBlock, cellsAndProofs)
require.ErrorContains(t, "column index", err)
require.ErrorContains(t, "exceeds proofs length", err)
})
}
func TestComputeCustodyGroupForColumn(t *testing.T) {


@@ -18,8 +18,8 @@ var (
 	ErrBlobsCellsProofsMismatch = errors.New("blobs and cells proofs mismatch")
 )
 
-// MinimumColumnsCountToReconstruct return the minimum number of columns needed to proceed to a reconstruction.
-func MinimumColumnsCountToReconstruct() uint64 {
+// MinimumColumnCountToReconstruct return the minimum number of columns needed to proceed to a reconstruction.
+func MinimumColumnCountToReconstruct() uint64 {
 	// If the number of columns is odd, then we need total / 2 + 1 columns to reconstruct.
 	// If the number of columns is even, then we need total / 2 columns to reconstruct.
 	return (params.BeaconConfig().NumberOfColumns + 1) / 2
@@ -58,7 +58,7 @@ func ReconstructDataColumnSidecars(inVerifiedRoSidecars []blocks.VerifiedRODataC
// Check if there is enough sidecars to reconstruct the missing columns.
sidecarCount := len(sidecarByIndex)
if uint64(sidecarCount) < MinimumColumnsCountToReconstruct() {
if uint64(sidecarCount) < MinimumColumnCountToReconstruct() {
return nil, ErrNotEnoughDataColumnSidecars
}

View File
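The renamed helper's `(n + 1) / 2` integer division covers both parities in one expression, matching the comments above it. A quick check with illustrative values (128 is the spec's PeerDAS column count; 7 is just an odd example):

package main

import "fmt"

func minimumColumnCountToReconstruct(numberOfColumns uint64) uint64 {
	return (numberOfColumns + 1) / 2
}

func main() {
	fmt.Println(minimumColumnCountToReconstruct(128)) // even: 128/2 = 64
	fmt.Println(minimumColumnCountToReconstruct(7))   // odd: 7/2 + 1 = 4
}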

@@ -48,7 +48,7 @@ func TestMinimumColumnsCountToReconstruct(t *testing.T) {
params.OverrideBeaconConfig(cfg)
// Compute the minimum number of columns needed to reconstruct.
actual := peerdas.MinimumColumnsCountToReconstruct()
actual := peerdas.MinimumColumnCountToReconstruct()
require.Equal(t, tc.expected, actual)
})
}
@@ -100,7 +100,7 @@ func TestReconstructDataColumnSidecars(t *testing.T) {
t.Run("not enough columns to enable reconstruction", func(t *testing.T) {
_, _, verifiedRoSidecars := util.GenerateTestFuluBlockWithSidecars(t, 3)
minimum := peerdas.MinimumColumnsCountToReconstruct()
minimum := peerdas.MinimumColumnCountToReconstruct()
_, err := peerdas.ReconstructDataColumnSidecars(verifiedRoSidecars[:minimum-1])
require.ErrorIs(t, err, peerdas.ErrNotEnoughDataColumnSidecars)
})

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"domain.go",
"signature.go",
"signing_root.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing",
@@ -25,7 +24,6 @@ go_test(
name = "go_default_test",
srcs = [
"domain_test.go",
"signature_test.go",
"signing_root_test.go",
],
embed = [":go_default_library"],

View File

@@ -1,34 +0,0 @@
package signing
import (
"github.com/OffchainLabs/prysm/v6/config/params"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
var ErrNilRegistration = errors.New("nil signed registration")
// VerifyRegistrationSignature verifies the signature of a validator's registration.
func VerifyRegistrationSignature(
sr *ethpb.SignedValidatorRegistrationV1,
) error {
if sr == nil || sr.Message == nil {
return ErrNilRegistration
}
d := params.BeaconConfig().DomainApplicationBuilder
// Per the spec, the fork version and genesis validators root should be nil,
// which default to the genesis value and the zero root respectively.
sd, err := ComputeDomain(
d,
nil, /* fork version */
nil /* genesis val root */)
if err != nil {
return err
}
if err := VerifySigningRoot(sr.Message, sr.Message.Pubkey, sr.Signature, sd); err != nil {
return ErrSigFailedToVerify
}
return nil
}

View File

@@ -1,42 +0,0 @@
package signing_test
import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestVerifyRegistrationSignature(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
reg := &ethpb.ValidatorRegistrationV1{
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
GasLimit: 123456,
Timestamp: uint64(time.Now().Unix()),
Pubkey: sk.PublicKey().Marshal(),
}
d := params.BeaconConfig().DomainApplicationBuilder
domain, err := signing.ComputeDomain(d, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(reg, domain)
require.NoError(t, err)
sk.Sign(sr[:]).Marshal()
sReg := &ethpb.SignedValidatorRegistrationV1{
Message: reg,
Signature: sk.Sign(sr[:]).Marshal(),
}
require.NoError(t, signing.VerifyRegistrationSignature(sReg))
sReg.Signature = []byte("bad")
require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrSigFailedToVerify)
sReg.Message = nil
require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrNilRegistration)
}

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"availability_blobs.go",
"availability_columns.go",
"blob_cache.go",
"data_column_cache.go",
"iface.go",
@@ -13,7 +12,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
@@ -23,7 +21,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -33,7 +30,6 @@ go_test(
name = "go_default_test",
srcs = [
"availability_blobs_test.go",
"availability_columns_test.go",
"blob_cache_test.go",
"data_column_cache_test.go",
],
@@ -49,7 +45,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -53,30 +53,25 @@ func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchV
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
func (s *LazilyPersistentStoreBlob) Persist(current primitives.Slot, sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
if len(blobSidecars) > 1 {
firstRoot := blobSidecars[0].BlockRoot()
for _, sidecar := range blobSidecars[1:] {
if len(sidecars) > 1 {
firstRoot := sidecars[0].BlockRoot()
for _, sidecar := range sidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(blobSidecars[0].Slot()), slots.ToEpoch(current)) {
if !params.WithinDAPeriod(slots.ToEpoch(sidecars[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(blobSidecars[0])
key := keyFromSidecar(sidecars[0])
entry := s.cache.ensure(key)
for _, blobSidecar := range blobSidecars {
for _, blobSidecar := range sidecars {
if err := entry.stash(&blobSidecar); err != nil {
return err
}

View File
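With the signature change, Persist now checks the mixed-roots invariant directly on the ROBlob arguments instead of on a converted slice. A minimal sketch of that invariant in isolation (roBlob and its root field are stand-ins; only the single-root rule mirrors the hunk above):

package main

import (
	"errors"
	"fmt"
)

var errMixedRoots = errors.New("sidecars do not all share the same block root")

type roBlob struct{ root [32]byte }

func (b roBlob) BlockRoot() [32]byte { return b.root }

// checkSameRoot mirrors the invariant Persist enforces before caching:
// every sidecar in one call must reference the same block.
func checkSameRoot(sidecars []roBlob) error {
	if len(sidecars) < 2 {
		return nil
	}
	first := sidecars[0].BlockRoot()
	for _, sc := range sidecars[1:] {
		if sc.BlockRoot() != first {
			return errMixedRoots
		}
	}
	return nil
}

func main() {
	a, b := roBlob{root: [32]byte{1}}, roBlob{root: [32]byte{2}}
	fmt.Println(checkSameRoot([]roBlob{a, a})) // <nil>
	fmt.Println(checkSameRoot([]roBlob{a, b})) // mixed roots error
}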

@@ -118,23 +118,21 @@ func TestLazilyPersistent_Missing(t *testing.T) {
blk, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
mbv := &mockBlobBatchVerifier{t: t, scs: blobSidecars}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[2]))
require.NoError(t, as.Persist(1, blobSidecars[2]))
err := as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, scs[0]))
require.NoError(t, as.Persist(1, blobSidecars[0]))
err = as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(1, blobSidecars...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
}
@@ -149,10 +147,8 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
blobSidecars[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
require.NoError(t, as.Persist(1, blobSidecars[0]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
@@ -161,29 +157,25 @@ func TestLazilyPersistent_Mismatch(t *testing.T) {
func TestLazyPersistOnceCommitted(t *testing.T) {
_, blobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
scs := blocks.NewSidecarsFromBlobSidecars(blobSidecars)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.Persist(1, blobSidecars...))
// ignores duplicates
require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)
require.ErrorIs(t, as.Persist(1, blobSidecars...), ErrDuplicateSidecar)
// ignores index out of bound
blobSidecars[0].Index = 6
require.ErrorIs(t, as.Persist(1, blocks.NewSidecarFromBlobSidecar(blobSidecars[0])), errIndexOutOfBounds)
require.ErrorIs(t, as.Persist(1, blobSidecars[0]), errIndexOutOfBounds)
_, moreBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
more := blocks.NewSidecarsFromBlobSidecars(moreBlobSidecars)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
require.NoError(t, as.Persist(32+slotOOB, more[0]))
require.NoError(t, as.Persist(32+slotOOB, moreBlobSidecars[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(1, more...))
require.NoError(t, as.Persist(1, moreBlobSidecars...))
}
type mockBlobBatchVerifier struct {

View File

@@ -1,213 +0,0 @@
package das
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to the disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
nodeID enode.ID
cache *dataColumnCache
newDataColumnsVerifier verification.NewDataColumnsVerifier
custodyGroupCount uint64
}
var _ AvailabilityStore = &LazilyPersistentStoreColumn{}
// DataColumnsVerifier enables LazilyPersistentStoreColumn to manage the verification process
// going from RODataColumn->VerifiedRODataColumn, while avoiding the decision of which individual verifications
// to run and in what order. Since LazilyPersistentStoreColumn always tries to verify and save data columns only when
// they are all available, the interface takes a slice of data column sidecars.
type DataColumnsVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, scs []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
nodeID enode.ID,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
custodyGroupCount uint64,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
nodeID: nodeID,
cache: newDataColumnCache(),
newDataColumnsVerifier: newDataColumnsVerifier,
custodyGroupCount: custodyGroupCount,
}
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sidecars ...blocks.ROSidecar) error {
if len(sidecars) == 0 {
return nil
}
dataColumnSidecars, err := blocks.DataColumnSidecarsFromSidecars(sidecars)
if err != nil {
return errors.Wrap(err, "data column sidecars from sidecars")
}
// It is safe to retrieve the first sidecar.
firstSidecar := dataColumnSidecars[0]
if len(sidecars) > 1 {
firstRoot := firstSidecar.BlockRoot()
for _, sidecar := range dataColumnSidecars[1:] {
if sidecar.BlockRoot() != firstRoot {
return errMixedRoots
}
}
}
firstSidecarEpoch, currentEpoch := slots.ToEpoch(firstSidecar.Slot()), slots.ToEpoch(current)
if !params.WithinDAPeriod(firstSidecarEpoch, currentEpoch) {
return nil
}
key := cacheKey{slot: firstSidecar.Slot(), root: firstSidecar.BlockRoot()}
entry := s.cache.ensure(key)
for _, sidecar := range dataColumnSidecars {
if err := entry.stash(&sidecar); err != nil {
return errors.Wrap(err, "stash DataColumnSidecar")
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// DataColumnSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, currentSlot primitives.Slot, block blocks.ROBlock) error {
blockCommitments, err := s.fullCommitmentsToCheck(s.nodeID, block, currentSlot)
if err != nil {
return errors.Wrapf(err, "full commitments to check with block root `%#x` and current slot `%d`", block.Root(), currentSlot)
}
// Return early for blocks that do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
// Get the root of the block.
blockRoot := block.Root()
// Build the cache key for the block.
key := cacheKey{slot: block.Block().Slot(), root: blockRoot}
// Retrieve the cache entry for the block, or create an empty one if it doesn't exist.
entry := s.cache.ensure(key)
// Delete the cache entry for the block at the end.
defer s.cache.delete(key)
// Set the disk summary for the block in the cache entry.
entry.setDiskSummary(s.store.Summary(blockRoot))
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
roDataColumns, err := entry.filter(blockRoot, blockCommitments)
if err != nil {
return errors.Wrap(err, "entry filter")
}
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#datacolumnsidecarsbyrange-v1
verifier := s.newDataColumnsVerifier(roDataColumns, verification.ByRangeRequestDataColumnSidecarRequirements)
if err := verifier.ValidFields(); err != nil {
return errors.Wrap(err, "valid fields")
}
if err := verifier.SidecarInclusionProven(); err != nil {
return errors.Wrap(err, "sidecar inclusion proven")
}
if err := verifier.SidecarKzgProofVerified(); err != nil {
return errors.Wrap(err, "sidecar KZG proof verified")
}
verifiedRoDataColumns, err := verifier.VerifiedRODataColumns()
if err != nil {
return errors.Wrap(err, "verified RO data columns - should never happen")
}
if err := s.store.Save(verifiedRoDataColumns); err != nil {
return errors.Wrap(err, "save data column sidecars")
}
return nil
}
// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
// Compute the block epoch.
blockSlot := block.Block().Slot()
blockEpoch := slots.ToEpoch(blockSlot)
// Compute the current epoch.
currentEpoch := slots.ToEpoch(currentSlot)
// Return early if the request is out of the MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS window.
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return &safeCommitmentsArray{}, nil
}
// Retrieve the KZG commitments for the block.
kzgCommitments, err := block.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Return early if there are no commitments in the block.
if len(kzgCommitments) == 0 {
return &safeCommitmentsArray{}, nil
}
// Retrieve peer info.
samplingSize := max(s.custodyGroupCount, samplesPerSlot)
peerInfo, _, err := peerdas.Info(nodeID, samplingSize)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
// Create a safe commitments array for the custody columns.
commitmentsArray := &safeCommitmentsArray{}
commitmentsArraySize := uint64(len(commitmentsArray))
for column := range peerInfo.CustodyColumns {
if column >= commitmentsArraySize {
return nil, errors.Errorf("custody column index %d too high (max allowed %d) - should never happen", column, commitmentsArraySize)
}
commitmentsArray[column] = kzgCommitments
}
return commitmentsArray, nil
}

View File

@@ -1,313 +0,0 @@
package das
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
)
var commitments = [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("mixed roots", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
{Slot: 2, Index: 2},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.ErrorIs(t, err, errMixedRoots)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("outside DA period", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: 1, Index: 1},
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
})
t.Run("nominal", func(t *testing.T) {
const slot = 42
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
dataColumnParamsByBlockRoot := []util.DataColumnParam{
{Slot: slot, Index: 1},
{Slot: slot, Index: 5},
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(slot, roSidecars...)
require.NoError(t, err)
require.Equal(t, 1, len(lazilyPersistentStoreColumns.cache.entries))
key := cacheKey{slot: slot, root: roDataColumns[0].BlockRoot()}
entry, ok := lazilyPersistentStoreColumns.cache.entries[key]
require.Equal(t, true, ok)
// A call to Persist does NOT save the sidecars to disk.
require.Equal(t, uint64(0), entry.diskSummary.Count())
require.DeepSSZEqual(t, roDataColumns[0], *entry.scs[1])
require.DeepSSZEqual(t, roDataColumns[1], *entry.scs[5])
for i, roDataColumn := range entry.scs {
if map[int]bool{1: true, 5: true}[i] {
continue
}
require.IsNil(t, roDataColumn)
}
})
}
func TestIsDataAvailable(t *testing.T) {
newDataColumnsVerifier := func(dataColumnSidecars []blocks.RODataColumn, _ []verification.Requirement) verification.DataColumnsVerifier {
return &mockDataColumnsVerifier{t: t, dataColumnSidecars: dataColumnSidecars}
}
ctx := t.Context()
t.Run("without commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
})
t.Run("with commitments", func(t *testing.T) {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
block := signedRoBlock.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
indices := [...]uint64{1, 17, 19, 42, 75, 87, 102, 117}
dataColumnsParams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
Index: index,
KzgCommitments: commitments,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParams)
}
_, verifiedRoDataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
key := cacheKey{root: root}
entry := lazilyPersistentStoreColumns.cache.ensure(key)
defer lazilyPersistentStoreColumns.cache.delete(key)
for _, verifiedRoDataColumn := range verifiedRoDataColumns {
err := entry.stash(&verifiedRoDataColumn.RODataColumn)
require.NoError(t, err)
}
err = lazilyPersistentStoreColumns.IsDataAvailable(ctx, slot, signedRoBlock)
require.NoError(t, err)
actual, err := dataColumnStorage.Get(root, indices[:])
require.NoError(t, err)
summary := dataColumnStorage.Summary(root)
require.Equal(t, uint64(len(indices)), summary.Count())
require.DeepSSZEqual(t, verifiedRoDataColumns, actual)
})
}
func TestFullCommitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
require.NoError(t, err)
testCases := []struct {
name string
commitments [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
}{
{
name: "Pre-Fulu block",
block: func(t *testing.T) blocks.ROBlock {
return newSignedRoBlock(t, util.NewBeaconBlockElectra())
},
},
{
name: "Commitments outside data availability window",
block: func(t *testing.T) blocks.ROBlock {
beaconBlockElectra := util.NewBeaconBlockElectra()
// Block is from slot 0, "current slot" is window size +1 (so outside the window)
beaconBlockElectra.Block.Body.BlobKzgCommitments = commitments
return newSignedRoBlock(t, beaconBlockElectra)
},
slot: windowSlots + 1,
},
{
name: "Commitments within data availability window",
block: func(t *testing.T) blocks.ROBlock {
signedBeaconBlockFulu := util.NewBeaconBlockFulu()
signedBeaconBlockFulu.Block.Body.BlobKzgCommitments = commitments
signedBeaconBlockFulu.Block.Slot = 100
return newSignedRoBlock(t, signedBeaconBlockFulu)
},
commitments: commitments,
slot: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, numberOfColumns)
commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
require.NoError(t, err)
for _, commitments := range commitmentsArray {
require.DeepEqual(t, tc.commitments, commitments)
}
})
}
}
func roSidecarsFromDataColumnParamsByBlockRoot(t *testing.T, parameters []util.DataColumnParam) ([]blocks.ROSidecar, []blocks.RODataColumn) {
roDataColumns, _ := util.CreateTestVerifiedRoDataColumnSidecars(t, parameters)
roSidecars := make([]blocks.ROSidecar, 0, len(roDataColumns))
for _, roDataColumn := range roDataColumns {
roSidecars = append(roSidecars, blocks.NewSidecarFromDataColumnSidecar(roDataColumn))
}
return roSidecars, roDataColumns
}
func newSignedRoBlock(t *testing.T, signedBeaconBlock interface{}) blocks.ROBlock {
sb, err := blocks.NewSignedBeaconBlock(signedBeaconBlock)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
}
type mockDataColumnsVerifier struct {
t *testing.T
dataColumnSidecars []blocks.RODataColumn
validCalled, SidecarInclusionProvenCalled, SidecarKzgProofVerifiedCalled bool
}
var _ verification.DataColumnsVerifier = &mockDataColumnsVerifier{}
func (m *mockDataColumnsVerifier) VerifiedRODataColumns() ([]blocks.VerifiedRODataColumn, error) {
require.Equal(m.t, true, m.validCalled && m.SidecarInclusionProvenCalled && m.SidecarKzgProofVerifiedCalled)
verifiedDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(m.dataColumnSidecars))
for _, dataColumnSidecar := range m.dataColumnSidecars {
verifiedDataColumnSidecar := blocks.NewVerifiedRODataColumn(dataColumnSidecar)
verifiedDataColumnSidecars = append(verifiedDataColumnSidecars, verifiedDataColumnSidecar)
}
return verifiedDataColumnSidecars, nil
}
func (m *mockDataColumnsVerifier) SatisfyRequirement(verification.Requirement) {}
func (m *mockDataColumnsVerifier) ValidFields() error {
m.validCalled = true
return nil
}
func (m *mockDataColumnsVerifier) CorrectSubnet(dataColumnSidecarSubTopic string, expectedTopics []string) error {
return nil
}
func (m *mockDataColumnsVerifier) NotFromFutureSlot() error { return nil }
func (m *mockDataColumnsVerifier) SlotAboveFinalized() error { return nil }
func (m *mockDataColumnsVerifier) ValidProposerSignature(ctx context.Context) error { return nil }
func (m *mockDataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentValid(badParent func([fieldparams.RootLength]byte) bool) error {
return nil
}
func (m *mockDataColumnsVerifier) SidecarParentSlotLower() error { return nil }
func (m *mockDataColumnsVerifier) SidecarDescendsFromFinalized() error { return nil }
func (m *mockDataColumnsVerifier) SidecarInclusionProven() error {
m.SidecarInclusionProvenCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarKzgProofVerified() error {
m.SidecarKzgProofVerifiedCalled = true
return nil
}
func (m *mockDataColumnsVerifier) SidecarProposerExpected(ctx context.Context) error { return nil }

View File

@@ -99,20 +99,26 @@ func (e *blobCacheEntry) filter(root [32]byte, kc [][]byte, slot primitives.Slot
if e.diskSummary.HasIndex(i) {
continue
}
// Check if e.scs has this index before accessing
var sidecar *blocks.ROBlob
if i < uint64(len(e.scs)) {
sidecar = e.scs[i]
}
if kc[i] == nil {
if e.scs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.scs[i].KzgCommitment)
if sidecar != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, sidecar.KzgCommitment)
}
continue
}
if e.scs[i] == nil {
if sidecar == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !bytes.Equal(kc[i], e.scs[i].KzgCommitment) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitment, kc[i])
if !bytes.Equal(kc[i], sidecar.KzgCommitment) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, sidecar.KzgCommitment, kc[i])
}
scs = append(scs, *e.scs[i])
scs = append(scs, *sidecar)
}
return scs, nil

View File
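The filter fix above replaces direct `e.scs[i]` indexing with a length-guarded lookup, so a cache entry shorter than the commitment list reads as a missing sidecar instead of panicking. The guard in isolation, as a hedged sketch:

package main

import "fmt"

// sidecarAt maps an out-of-range index to nil: before the fix, scs[i] panicked
// when the cached slice was shorter than the commitment list; after, that case
// is reported as a missing sidecar.
func sidecarAt(scs []*string, i uint64) *string {
	if i < uint64(len(scs)) {
		return scs[i]
	}
	return nil
}

func main() {
	scs := make([]*string, 2)
	fmt.Println(sidecarAt(scs, 3)) // <nil>, where scs[3] would have panicked
}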

@@ -155,6 +155,41 @@ func TestFilter(t *testing.T) {
},
err: errCommitmentMismatch,
},
{
name: "empty scs array with commitments",
setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
// This reproduces the panic condition where entry.scs is empty or nil
// but we have commitments to check
entry := &blobCacheEntry{
scs: nil, // Empty/nil array that caused the panic
}
// Create a commitment that would trigger the check at index 0
commits := [][]byte{
bytesutil.PadTo([]byte("commitment1"), 48),
}
return entry, commits, nil
},
err: errMissingSidecar,
},
{
name: "scs array shorter than commitments",
setup: func(t *testing.T) (*blobCacheEntry, [][]byte, []blocks.ROBlob) {
// This reproduces the condition where entry.scs exists but is shorter
// than the number of commitments we're checking
entry := &blobCacheEntry{
scs: make([]*blocks.ROBlob, 2), // Only 2 slots
}
// Create 4 commitments, accessing index 2 and 3 would have panicked
commits := [][]byte{
nil,
nil,
bytesutil.PadTo([]byte("commitment3"), 48),
bytesutil.PadTo([]byte("commitment4"), 48),
}
return entry, commits, nil
},
err: errMissingSidecar,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {

View File

@@ -15,5 +15,5 @@ import (
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROSidecar) error
Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}

View File

@@ -5,13 +5,12 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
errors "github.com/pkg/errors"
)
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, sc ...blocks.ROBlob) error
PersistBlobsCallback func(current primitives.Slot, blobSidecar ...blocks.ROBlob) error
}
var _ AvailabilityStore = &MockAvailabilityStore{}
@@ -25,13 +24,9 @@ func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current pri
}
// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROSidecar) error {
blobSidecars, err := blocks.BlobSidecarsFromSidecars(sc)
if err != nil {
return errors.Wrap(err, "blob sidecars from sidecars")
}
func (m *MockAvailabilityStore) Persist(current primitives.Slot, blobSidecar ...blocks.ROBlob) error {
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(current, blobSidecars...)
return m.PersistBlobsCallback(current, blobSidecar...)
}
return nil
}

View File

@@ -13,6 +13,7 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/beacon-chain:__subpackages__",
"//genesis:__subpackages__",
"//testing/slasher/simulator:__pkg__",
"//tools:__subpackages__",
],

View File

@@ -10,6 +10,11 @@ type ReadOnlyDatabase = iface.ReadOnlyDatabase
// about head info. For head info, use github.com/prysmaticlabs/prysm/blockchain.HeadFetcher.
type NoHeadAccessDatabase = iface.NoHeadAccessDatabase
// ReadOnlyDatabaseWithSeqNum exposes Prysm's Ethereum data backend for read-only access, with no information
// about head info, plus read/write access to the p2p metadata sequence number.
// It is used by the p2p service.
type ReadOnlyDatabaseWithSeqNum = iface.ReadOnlyDatabaseWithSeqNum
// HeadAccessDatabase exposes Prysm's Ethereum backend for read/write access with information about
// chain head information. This interface should be used sparingly as the HeadFetcher is the source
// of truth around chain head information while this interface serves as persistent storage for the

View File

@@ -100,6 +100,14 @@ type (
}
)
// DataColumnStorageReader is an interface to read data column sidecars from the filesystem.
type DataColumnStorageReader interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
Get(root [fieldparams.RootLength]byte, indices []uint64) ([]blocks.VerifiedRODataColumn, error)
}
var _ DataColumnStorageReader = &DataColumnStorage{}
// WithDataColumnBasePath is a required option that sets the base path of data column storage.
func WithDataColumnBasePath(base string) DataColumnStorageOption {
return func(b *DataColumnStorage) error {

View File
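DataColumnStorageReader gives RPC handlers a read-only view: Summary answers which indices are on disk and Get fetches them. A self-contained sketch of a consumer under stand-in types (the handler logic is illustrative, not Prysm's actual RPC code):

package main

import "fmt"

// Stand-ins shaped like the hunk above: Summary answers which indices are
// stored, Get fetches them.
type summary map[uint64]bool

func (s summary) HasIndex(i uint64) bool { return s[i] }

type columnReader interface {
	Summary(root [32]byte) summary
	Get(root [32]byte, indices []uint64) ([][]byte, error)
}

type memReader struct{ data map[uint64][]byte }

func (m memReader) Summary([32]byte) summary {
	s := summary{}
	for i := range m.data {
		s[i] = true
	}
	return s
}

func (m memReader) Get(_ [32]byte, indices []uint64) ([][]byte, error) {
	out := make([][]byte, 0, len(indices))
	for _, i := range indices {
		out = append(out, m.data[i])
	}
	return out, nil
}

// serveColumns consults the summary first so the handler never asks the
// store for indices it does not hold.
func serveColumns(r columnReader, root [32]byte, want []uint64) ([][]byte, error) {
	have := make([]uint64, 0, len(want))
	s := r.Summary(root)
	for _, idx := range want {
		if s.HasIndex(idx) {
			have = append(have, idx)
		}
	}
	return r.Get(root, have)
}

func main() {
	r := memReader{data: map[uint64][]byte{1: {0xaa}, 5: {0xbb}}}
	got, _ := serveColumns(r, [32]byte{}, []uint64{0, 1, 5})
	fmt.Println(len(got)) // 2
}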

@@ -84,12 +84,6 @@ func (s DataColumnStorageSummary) Stored() map[uint64]bool {
return stored
}
// DataColumnStorageSummarizer can be used to receive a summary of metadata about data columns on disk for a given root.
// The DataColumnStorageSummary can be used to check which indices (if any) are available for a given block by root.
type DataColumnStorageSummarizer interface {
Summary(root [fieldparams.RootLength]byte) DataColumnStorageSummary
}
type dataColumnStorageSummaryCache struct {
mu sync.RWMutex
dataColumnCount float64
@@ -98,8 +92,6 @@ type dataColumnStorageSummaryCache struct {
cache map[[fieldparams.RootLength]byte]DataColumnStorageSummary
}
var _ DataColumnStorageSummarizer = &dataColumnStorageSummaryCache{}
func newDataColumnStorageSummaryCache() *dataColumnStorageSummaryCache {
return &dataColumnStorageSummaryCache{
cache: make(map[[fieldparams.RootLength]byte]DataColumnStorageSummary),

View File

@@ -144,14 +144,3 @@ func NewEphemeralDataColumnStorageWithMocker(t testing.TB) (*DataColumnMocker, *
fs, dcs := NewEphemeralDataColumnStorageAndFs(t)
return &DataColumnMocker{fs: fs, dcs: dcs}, dcs
}
func NewMockDataColumnStorageSummarizer(t *testing.T, set map[[fieldparams.RootLength]byte][]uint64) DataColumnStorageSummarizer {
c := newDataColumnStorageSummaryCache()
for root, indices := range set {
if err := c.set(DataColumnsIdent{Root: root, Epoch: 0, Indices: indices}); err != nil {
t.Fatal(err)
}
}
return c
}

View File

@@ -64,6 +64,18 @@ type ReadOnlyDatabase interface {
// Origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillStatus(context.Context) (*dbval.BackfillStatus, error)
// P2P Metadata operations.
MetadataSeqNum(ctx context.Context) (uint64, error)
}
// ReadOnlyDatabaseWithSeqNum defines an interface with read access to database methods
// and read/write access to the p2p metadata sequence number.
// Only used for the p2p service.
type ReadOnlyDatabaseWithSeqNum interface {
ReadOnlyDatabase
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
}
// NoHeadAccessDatabase defines a struct without access to chain head data.
@@ -103,9 +115,23 @@ type NoHeadAccessDatabase interface {
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
DeleteHistoricalDataBeforeSlot(ctx context.Context, slot primitives.Slot, batchSize int) (int, error)
// Genesis operations.
LoadGenesis(ctx context.Context, stateBytes []byte) error
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// Support for checkpoint sync and backfill.
SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32]byte) error
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
}
// HeadAccessDatabase defines a struct with access to reading chain head data.
@@ -116,16 +142,6 @@ type HeadAccessDatabase interface {
HeadBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
HeadBlockRoot() ([32]byte, error)
SaveHeadBlockRoot(ctx context.Context, blockRoot [32]byte) error
// Genesis operations.
LoadGenesis(ctx context.Context, stateBytes []byte) error
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// Support for checkpoint sync and backfill.
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
}
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.

View File
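ReadOnlyDatabaseWithSeqNum is built by embedding: the read-only interface plus exactly one write method, so the p2p service can be handed sequence-number writes without the rest of the writable database. A minimal sketch of that embedding pattern (the store type and its fields are stand-ins):

package main

import (
	"context"
	"fmt"
)

// Shapes follow the hunk above: a read-only interface is extended with a
// single write method by embedding. Bodies here are stand-ins.
type readOnlyDatabase interface {
	MetadataSeqNum(ctx context.Context) (uint64, error)
}

type readOnlyDatabaseWithSeqNum interface {
	readOnlyDatabase
	SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
}

type store struct{ n uint64 }

func (s *store) MetadataSeqNum(context.Context) (uint64, error) { return s.n, nil }

func (s *store) SaveMetadataSeqNum(_ context.Context, n uint64) error {
	s.n = n
	return nil
}

func main() {
	// The full store satisfies both views; a consumer that only needs reads
	// can be handed the narrower interface.
	var rw readOnlyDatabaseWithSeqNum = &store{n: 7}
	var ro readOnlyDatabase = rw
	n, _ := ro.MetadataSeqNum(context.Background())
	fmt.Println(n) // 7
}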

@@ -24,6 +24,7 @@ go_library(
"migration_block_slot_index.go",
"migration_finalized_parent.go",
"migration_state_validators.go",
"p2p.go",
"schema.go",
"state.go",
"state_summary.go",
@@ -39,7 +40,6 @@ go_library(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -51,6 +51,7 @@ go_library(
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//genesis:go_default_library",
"//io/file:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
@@ -96,6 +97,7 @@ go_test(
"migration_archived_index_test.go",
"migration_block_slot_index_test.go",
"migration_state_validators_test.go",
"p2p_test.go",
"state_summary_test.go",
"state_test.go",
"utils_test.go",
@@ -108,7 +110,6 @@ go_test(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -118,6 +119,7 @@ go_test(
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -19,6 +19,9 @@ var ErrNotFoundGenesisBlockRoot = errors.Wrap(ErrNotFound, "OriginGenesisRoot")
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")
// ErrNotFoundMetadataSeqNum is a not found error specifically for the metadata sequence number getter
var ErrNotFoundMetadataSeqNum = errors.Wrap(ErrNotFound, "metadata sequence number")
var errEmptyBlockSlice = errors.New("[]blocks.ROBlock is empty")
var errIncorrectBlockParent = errors.New("unexpected missing or forked blocks in a []ROBlock")
var errFinalizedChildNotFound = errors.New("unable to find finalized root descending from backfill batch")

View File

@@ -8,6 +8,7 @@ import (
dbIface "github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/encoding/ssz/detect"
"github.com/OffchainLabs/prysm/v6/genesis"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
@@ -97,8 +98,22 @@ func (s *Store) EnsureEmbeddedGenesis(ctx context.Context) error {
if err != nil {
return err
}
if gs != nil && !gs.IsNil() {
if !state.IsNil(gs) {
return s.SaveGenesisData(ctx, gs)
}
return nil
}
type LegacyGenesisProvider struct {
store *Store
}
func NewLegacyGenesisProvider(store *Store) *LegacyGenesisProvider {
return &LegacyGenesisProvider{store: store}
}
var _ genesis.Provider = &LegacyGenesisProvider{}
func (p *LegacyGenesisProvider) Genesis(ctx context.Context) (state.BeaconState, error) {
return p.store.LegacyGenesisState(ctx)
}

View File
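LegacyGenesisProvider adapts the kv store to the genesis.Provider interface, and node.New (later in this diff) appends it after any externally supplied providers. A hedged sketch of a first-non-empty provider chain; the chaining policy is an assumption about how genesis.Initialize consumes its providers, and the types are stand-ins:

package main

import (
	"context"
	"errors"
	"fmt"
)

// The real Provider returns state.BeaconState; string stands in for a state.
type genesisProvider interface {
	Genesis(ctx context.Context) (string, error)
}

type fileProvider struct{ st string }

func (p fileProvider) Genesis(context.Context) (string, error) { return p.st, nil }

// firstGenesis tries each provider in order and keeps the first non-empty
// result, which is the fallback behavior the append in node.New suggests.
func firstGenesis(ctx context.Context, providers ...genesisProvider) (string, error) {
	for _, p := range providers {
		st, err := p.Genesis(ctx)
		if err != nil {
			return "", err
		}
		if st != "" {
			return st, nil
		}
	}
	return "", errors.New("no provider produced a genesis state")
}

func main() {
	st, _ := firstGenesis(context.Background(),
		fileProvider{},              // a provider with nothing to offer
		fileProvider{st: "mainnet"}, // the kv-backed legacy fallback
	)
	fmt.Println(st) // mainnet
}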

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -152,6 +153,7 @@ func TestEnsureEmbeddedGenesis(t *testing.T) {
require.NoError(t, undo())
}()
genesis.StoreEmbeddedDuringTest(t, params.BeaconConfig().ConfigName)
ctx := t.Context()
db := setupDB(t)

beacon-chain/db/kv/p2p.go (new file, 42 lines)
View File

@@ -0,0 +1,42 @@
package kv
import (
"context"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
bolt "go.etcd.io/bbolt"
)
// MetadataSeqNum retrieves the p2p metadata sequence number from the database.
// It returns 0 and ErrNotFoundMetadataSeqNum if the key does not exist.
func (s *Store) MetadataSeqNum(ctx context.Context) (uint64, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.MetadataSeqNum")
defer span.End()
var seqNum uint64
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(chainMetadataBucket)
val := bkt.Get(metadataSequenceNumberKey)
if val == nil {
return ErrNotFoundMetadataSeqNum
}
seqNum = bytesutil.BytesToUint64BigEndian(val)
return nil
})
return seqNum, err
}
// SaveMetadataSeqNum saves the p2p metadata sequence number to the database.
func (s *Store) SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveMetadataSeqNum")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(chainMetadataBucket)
val := bytesutil.Uint64ToBytesBigEndian(seqNum)
return bkt.Put(metadataSequenceNumberKey, val)
})
}

View File
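Per the doc comment on MetadataSeqNum, a fresh database yields 0 together with ErrNotFoundMetadataSeqNum, which callers should treat as "start from zero" rather than a failure. A self-contained sketch of that first-run handling (the in-memory store stands in for the bolt-backed one):

package main

import (
	"context"
	"errors"
	"fmt"
)

var errNotFoundMetadataSeqNum = errors.New("metadata sequence number: not found")

type memStore struct {
	set bool
	n   uint64
}

func (m *memStore) MetadataSeqNum(context.Context) (uint64, error) {
	if !m.set {
		return 0, errNotFoundMetadataSeqNum
	}
	return m.n, nil
}

func (m *memStore) SaveMetadataSeqNum(_ context.Context, n uint64) error {
	m.set, m.n = true, n
	return nil
}

// loadSeqNum restores the persisted value, tolerating the first-run case.
func loadSeqNum(ctx context.Context, db *memStore) (uint64, error) {
	n, err := db.MetadataSeqNum(ctx)
	if errors.Is(err, errNotFoundMetadataSeqNum) {
		return 0, nil
	}
	return n, err
}

func main() {
	ctx := context.Background()
	db := &memStore{}
	n, _ := loadSeqNum(ctx, db) // fresh database: 0, not an error
	_ = db.SaveMetadataSeqNum(ctx, n+1)
	n, _ = loadSeqNum(ctx, db)
	fmt.Println(n) // 1
}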

@@ -0,0 +1,33 @@
package kv
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestStore_MetadataSeqNum(t *testing.T) {
ctx := t.Context()
db := setupDB(t)
seqNum, err := db.MetadataSeqNum(ctx)
require.ErrorIs(t, err, ErrNotFoundMetadataSeqNum)
assert.Equal(t, uint64(0), seqNum)
initialSeqNum := uint64(42)
err = db.SaveMetadataSeqNum(ctx, initialSeqNum)
require.NoError(t, err)
retrievedSeqNum, err := db.MetadataSeqNum(ctx)
require.NoError(t, err)
assert.Equal(t, initialSeqNum, retrievedSeqNum)
updatedSeqNum := uint64(43)
err = db.SaveMetadataSeqNum(ctx, updatedSeqNum)
require.NoError(t, err)
retrievedSeqNum, err = db.MetadataSeqNum(ctx)
require.NoError(t, err)
assert.Equal(t, updatedSeqNum, retrievedSeqNum)
}

View File

@@ -42,6 +42,7 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastValidatedCheckpointKey = []byte("last-validated-checkpoint")
metadataSequenceNumberKey = []byte("metadata-seq-number")
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.

View File

@@ -6,14 +6,12 @@ import (
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/genesis"
statenative "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -65,21 +63,21 @@ func (s *Store) StateOrError(ctx context.Context, blockRoot [32]byte) (state.Bea
return st, nil
}
// GenesisState returns the genesis state of the beacon chain.
func (s *Store) GenesisState(ctx context.Context) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.GenesisState")
st, err := genesis.State()
if errors.Is(err, genesis.ErrGenesisStateNotInitialized) {
log.WithError(err).Error("genesis state not initialized, returning nil state. this should only happen in tests")
return nil, nil
}
return st, err
}
// LegacyGenesisState returns the genesis state persisted in the beacon chain database.
func (s *Store) LegacyGenesisState(ctx context.Context) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.LegacyGenesisState")
defer span.End()
cached, err := genesis.State(params.BeaconConfig().ConfigName)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
span.SetAttributes(trace.BoolAttribute("cache_hit", cached != nil))
if cached != nil {
return cached, nil
}
var err error
var st state.BeaconState
err = s.db.View(func(tx *bolt.Tx) error {
// Retrieve genesis block's signing root from blocks bucket,
@@ -253,6 +251,10 @@ func (s *Store) saveStatesEfficientInternal(ctx context.Context, tx *bolt.Tx, bl
if err := s.processElectra(ctx, rawType, rt[:], bucket, valIdxBkt, validatorKeys[i]); err != nil {
return err
}
case *ethpb.BeaconStateFulu:
if err := s.processFulu(ctx, rawType, rt[:], bucket, valIdxBkt, validatorKeys[i]); err != nil {
return err
}
default:
return errors.New("invalid state type")
}
@@ -368,6 +370,24 @@ func (s *Store) processElectra(ctx context.Context, pbState *ethpb.BeaconStateEl
return nil
}
func (s *Store) processFulu(ctx context.Context, pbState *ethpb.BeaconStateFulu, rootHash []byte, bucket, valIdxBkt *bolt.Bucket, validatorKey []byte) error {
valEntries := pbState.Validators
pbState.Validators = make([]*ethpb.Validator, 0)
rawObj, err := pbState.MarshalSSZ()
if err != nil {
return err
}
encodedState := snappy.Encode(nil, append(fuluKey, rawObj...))
if err := bucket.Put(rootHash, encodedState); err != nil {
return err
}
pbState.Validators = valEntries
if err := valIdxBkt.Put(rootHash, validatorKey); err != nil {
return err
}
return nil
}
func (s *Store) storeValidatorEntriesSeparately(ctx context.Context, tx *bolt.Tx, validatorsEntries map[string]*ethpb.Validator) error {
valBkt := tx.Bucket(stateValidatorsBucket)
for hashStr, validatorEntry := range validatorsEntries {

View File
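processFulu repeats the per-fork storage pattern used by processElectra and friends: detach the validator set, SSZ-marshal the remainder, snappy-compress with a fork key prefix, and index validators separately. A sketch of just the encode step (the fork key bytes and the marshaled payload are stand-ins; only the prefix-then-snappy shape mirrors the hunk above):

package main

import (
	"fmt"

	"github.com/golang/snappy"
)

// encodeStateWithoutValidators mirrors the shape of processFulu above: the
// validator set is detached before marshaling so states share one copy of
// validator entries, then the encoding is prefixed with a fork key so readers
// know how to decode it. The second argument stands in for the SSZ bytes.
func encodeStateWithoutValidators(forkKey, stateBytesWithoutValidators []byte) []byte {
	return snappy.Encode(nil, append(forkKey, stateBytesWithoutValidators...))
}

func main() {
	fuluKey := []byte("fulu") // illustrative; the real key lives in the kv schema
	enc := encodeStateWithoutValidators(fuluKey, []byte{0x01, 0x02})
	dec, _ := snappy.Decode(nil, enc)
	fmt.Printf("%s %v\n", dec[:4], dec[4:]) // fulu [1 2]
}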

@@ -15,6 +15,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -488,7 +489,7 @@ func TestGenesisState_CanSaveRetrieve(t *testing.T) {
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), headRoot))
require.NoError(t, db.SaveState(t.Context(), st, headRoot))
genesis.StoreStateDuringTest(t, st)
savedGenesisS, err := db.GenesisState(t.Context())
require.NoError(t, err)
@@ -661,7 +662,7 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
require.NoError(t, err)
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), genesisRoot))
require.NoError(t, db.SaveState(t.Context(), genesisState, genesisRoot))
genesis.StoreStateDuringTest(t, genesisState)
b := util.NewBeaconBlock()
b.Block.Slot = 1

View File

@@ -3,9 +3,9 @@ package kv
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/genesis"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
@@ -18,7 +18,11 @@ func TestSaveOrigin(t *testing.T) {
ctx := t.Context()
db := setupDB(t)
st, err := genesis.State(params.MainnetName)
// Initialize genesis with mainnet config - this will load the embedded mainnet state
require.NoError(t, genesis.Initialize(ctx, t.TempDir()))
// Get the initialized genesis state
st, err := genesis.State()
require.NoError(t, err)
sb, err := st.MarshalSSZ()

View File

@@ -125,6 +125,7 @@ go_test(
"//contracts/deposit/mock:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/clientstats:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -22,6 +22,7 @@ import (
contracts "github.com/OffchainLabs/prysm/v6/contracts/deposit"
"github.com/OffchainLabs/prysm/v6/contracts/deposit/mock"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/clientstats"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -381,6 +382,7 @@ func TestInitDepositCache_OK(t *testing.T) {
require.NoError(t, err)
require.NoError(t, s.cfg.beaconDB.SaveGenesisBlockRoot(t.Context(), blockRootA))
require.NoError(t, s.cfg.beaconDB.SaveState(t.Context(), emptyState, blockRootA))
genesis.StoreStateDuringTest(t, emptyState)
s.chainStartData.Chainstarted = true
require.NoError(t, s.initDepositCaches(t.Context(), ctrs))
require.Equal(t, 3, len(s.cfg.depositCache.PendingContainers(t.Context(), nil)))
@@ -446,6 +448,7 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
require.NoError(t, s.cfg.beaconDB.SaveGenesisBlockRoot(t.Context(), headRoot))
require.NoError(t, s.cfg.beaconDB.SaveState(t.Context(), emptyState, headRoot))
require.NoError(t, stateGen.SaveState(t.Context(), headRoot, emptyState))
genesis.StoreStateDuringTest(t, emptyState)
s.cfg.stateGen = stateGen
require.NoError(t, emptyState.SetEth1DepositIndex(3))
@@ -594,6 +597,7 @@ func TestService_EnsureConsistentPowchainData(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
genesis.StoreStateDuringTest(t, genState)
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(t.Context(), genState))
_, err = s1.validPowchainData(t.Context())
require.NoError(t, err)
@@ -655,6 +659,7 @@ func TestService_EnsureValidPowchainData(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
genesis.StoreStateDuringTest(t, genState)
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(t.Context(), genState))
err = s1.cfg.beaconDB.SaveExecutionChainData(t.Context(), &ethpb.ETH1ChainData{

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"clear_db.go",
"config.go",
"log.go",
"node.go",
@@ -49,7 +50,6 @@ go_library(
"//beacon-chain/sync/backfill:go_default_library",
"//beacon-chain/sync/backfill/coverage:go_default_library",
"//beacon-chain/sync/checkpoint:go_default_library",
"//beacon-chain/sync/genesis:go_default_library",
"//beacon-chain/sync/initial-sync:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd:go_default_library",
@@ -59,6 +59,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/prometheus:go_default_library",
"//monitoring/tracing:go_default_library",
"//runtime:go_default_library",

View File

@@ -0,0 +1,101 @@
package node
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
type dbClearer struct {
shouldClear bool
force bool
confirmed bool
}
const (
clearConfirmation = "This will delete your beacon chain database stored in your data directory. " +
"Your database backups will not be removed - do you want to proceed? (Y/N)"
clearDeclined = "Database will not be deleted. No changes have been made."
)
func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error) {
if !c.shouldProceed() {
return db, nil
}
log.Warning("Removing database")
if err := db.ClearDB(); err != nil {
return nil, errors.Wrap(err, "could not clear database")
}
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil
}
log.Warning("Removing blob storage")
if err := bs.Clear(); err != nil {
return errors.Wrap(err, "could not clear blob storage")
}
return nil
}
func (c *dbClearer) clearColumns(cs *filesystem.DataColumnStorage) error {
if !c.shouldProceed() {
return nil
}
log.Warning("Removing data columns storage")
if err := cs.Clear(); err != nil {
return errors.Wrap(err, "could not clear data columns storage")
}
return nil
}
func (c *dbClearer) clearSlasher(ctx context.Context, db *slasherkv.Store) (*slasherkv.Store, error) {
if !c.shouldProceed() {
return db, nil
}
log.Warning("Removing slasher database")
if err := db.ClearDB(); err != nil {
return nil, errors.Wrap(err, "could not clear slasher database")
}
return slasherkv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) shouldProceed() bool {
if !c.shouldClear {
return false
}
if c.force {
return true
}
if !c.confirmed {
confirmed, err := cmd.ConfirmAction(clearConfirmation, clearDeclined)
if err != nil {
log.WithError(err).Error("Not clearing db due to confirmation error")
return false
}
c.confirmed = confirmed
}
return c.confirmed
}
func newDbClearer(cliCtx *cli.Context) *dbClearer {
force := cliCtx.Bool(cmd.ForceClearDB.Name)
return &dbClearer{
shouldClear: cliCtx.Bool(cmd.ClearDB.Name) || force,
force: force,
}
}

View File
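dbClearer centralizes the old inline confirmation logic: force bypasses the prompt, a "yes" answer is cached so one confirmation covers the kv store, blob storage, data column storage, and slasher DB, and a "no" leaves confirmed unset so the next storage asks again. A sketch of that caching behavior with the interactive prompt replaced by a canned answer:

package main

import "fmt"

// confirmOnce mimics dbClearer.shouldProceed: force skips the prompt, a yes
// is cached for later calls, and a no is asked again on the next storage.
type confirmOnce struct {
	shouldClear, force, confirmed bool
	asked                         int
	ask                           func() bool // stand-in for cmd.ConfirmAction
}

func (c *confirmOnce) shouldProceed() bool {
	if !c.shouldClear {
		return false
	}
	if c.force {
		return true
	}
	if !c.confirmed {
		c.asked++
		c.confirmed = c.ask()
	}
	return c.confirmed
}

func main() {
	c := &confirmOnce{shouldClear: true, ask: func() bool { return true }}
	for i := 0; i < 4; i++ { // kv store, blobs, data columns, slasher
		_ = c.shouldProceed()
	}
	fmt.Println(c.asked) // 1: the user is prompted a single time
}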

@@ -52,7 +52,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill/coverage"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/checkpoint"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/genesis"
initialsync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd"
@@ -62,6 +61,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/prometheus"
"github.com/OffchainLabs/prysm/v6/runtime"
"github.com/OffchainLabs/prysm/v6/runtime/prereqs"
@@ -113,7 +113,7 @@ type BeaconNode struct {
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
GenesisInitializer genesis.Initializer
GenesisProviders []genesis.Provider
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
@@ -127,6 +127,7 @@ type BeaconNode struct {
syncChecker *initialsync.SyncChecker
slasherEnabled bool
lcStore *lightclient.Store
ConfigOptions []params.Option
}
// New creates a new node instance, sets up configuration options, and registers
@@ -135,18 +136,13 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
if err := configureBeacon(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not set beacon configuration options")
}
// Initializes any forks here.
params.BeaconConfig().InitializeForkSchedule()
registry := runtime.NewServiceRegistry()
ctx := cliCtx.Context
beacon := &BeaconNode{
cliCtx: cliCtx,
ctx: ctx,
cancel: cancel,
services: registry,
services: runtime.NewServiceRegistry(),
stop: make(chan struct{}),
stateFeed: new(event.Feed),
blockFeed: new(event.Feed),
@@ -173,6 +169,25 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
}
dbClearer := newDbClearer(cliCtx)
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
boltFname := filepath.Join(dataDir, kv.BeaconNodeDbDirName)
kvdb, err := openDB(ctx, boltFname, dbClearer)
if err != nil {
return nil, errors.Wrap(err, "could not open database")
}
beacon.db = kvdb
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
return nil, errors.Wrap(err, "could not initialize genesis state")
}
beacon.ConfigOptions = append([]params.Option{params.WithGenesisValidatorsRoot(genesis.ValidatorsRoot())}, beacon.ConfigOptions...)
params.BeaconConfig().ApplyOptions(beacon.ConfigOptions...)
params.BeaconConfig().InitializeForkSchedule()
params.LogDigests(params.BeaconConfig())
synchronizer := startup.NewClockSynchronizer()
beacon.clockWaiter = synchronizer
beacon.forkChoicer = doublylinkedtree.New()
@@ -191,6 +206,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.BlobStorage = blobs
}
if err := dbClearer.clearBlobs(beacon.BlobStorage); err != nil {
return nil, errors.Wrap(err, "could not clear blob storage")
}
if beacon.DataColumnStorage == nil {
dataColumnStorage, err := filesystem.NewDataColumnStorage(cliCtx.Context, beacon.DataColumnStorageOptions...)
@@ -200,8 +218,11 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.DataColumnStorage = dataColumnStorage
}
if err := dbClearer.clearColumns(beacon.DataColumnStorage); err != nil {
return nil, errors.Wrap(err, "could not clear data column storage")
}
bfs, err := startBaseServices(cliCtx, beacon, depositAddress)
bfs, err := startBaseServices(cliCtx, beacon, depositAddress, dbClearer)
if err != nil {
return nil, errors.Wrap(err, "could not start modules")
}
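Note the ordering here: the KV store is opened first, kv.NewLegacyGenesisProvider is appended after any flag-supplied providers, and only then are the params options (including the genesis validators root) applied before InitializeForkSchedule runs. A hedged sketch of a first-provider-wins chain of this shape, assuming genesis.Initialize tries providers in order; the Provider interface below is a hypothetical stand-in, not the real genesis package API:

package main

import (
    "errors"
    "fmt"
)

// Provider is a hypothetical stand-in for a genesis source: each one is
// tried in order and the first that yields a state wins.
type Provider interface {
    GenesisState() ([]byte, error)
}

var errNoState = errors.New("no genesis state from this provider")

type fileProvider struct{ data []byte }

func (f fileProvider) GenesisState() ([]byte, error) {
    if f.data == nil {
        return nil, errNoState
    }
    return f.data, nil
}

func initialize(providers ...Provider) ([]byte, error) {
    for _, p := range providers {
        state, err := p.GenesisState()
        if errors.Is(err, errNoState) {
            continue // fall through, e.g. to the legacy DB provider appended last.
        }
        return state, err
    }
    return nil, errors.New("no provider could supply a genesis state")
}

func main() {
    // Flag-supplied providers come first; the legacy DB provider is appended last.
    state, err := initialize(fileProvider{nil}, fileProvider{[]byte("db state")})
    fmt.Println(string(state), err) // db state <nil>
}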
@@ -289,7 +310,7 @@ func configureBeacon(cliCtx *cli.Context) error {
return nil
}
func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress string) (*backfill.Store, error) {
func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress string, clearer *dbClearer) (*backfill.Store, error) {
ctx := cliCtx.Context
log.Debugln("Starting DB")
if err := beacon.startDB(cliCtx, depositAddress); err != nil {
@@ -299,7 +320,7 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
beacon.BlobStorage.WarmCache()
log.Debugln("Starting Slashing DB")
if err := beacon.startSlasherDB(cliCtx); err != nil {
if err := beacon.startSlasherDB(cliCtx, clearer); err != nil {
return nil, errors.Wrap(err, "could not start slashing DB")
}
@@ -479,43 +500,6 @@ func (b *BeaconNode) Close() {
close(b.stop)
}
func (b *BeaconNode) clearDB(clearDB, forceClearDB bool, d *kv.Store, dbPath string) (*kv.Store, error) {
var err error
clearDBConfirmed := false
if clearDB && !forceClearDB {
const (
actionText = "This will delete your beacon chain database stored in your data directory. " +
"Your database backups will not be removed - do you want to proceed? (Y/N)"
deniedText = "Database will not be deleted. No changes have been made."
)
clearDBConfirmed, err = cmd.ConfirmAction(actionText, deniedText)
if err != nil {
return nil, errors.Wrapf(err, "could not confirm action")
}
}
if clearDBConfirmed || forceClearDB {
log.Warning("Removing database")
if err := d.ClearDB(); err != nil {
return nil, errors.Wrap(err, "could not clear database")
}
if err := b.BlobStorage.Clear(); err != nil {
return nil, errors.Wrap(err, "could not clear blob storage")
}
d, err = kv.NewKVStore(b.ctx, dbPath)
if err != nil {
return nil, errors.Wrap(err, "could not create new database")
}
}
return d, nil
}
func (b *BeaconNode) checkAndSaveDepositContract(depositAddress string) error {
knownContract, err := b.db.DepositContractAddress(b.ctx)
if err != nil {
@@ -539,60 +523,36 @@ func (b *BeaconNode) checkAndSaveDepositContract(depositAddress string) error {
return nil
}
func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
var depositCache cache.DepositCache
baseDir := cliCtx.String(cmd.DataDirFlag.Name)
dbPath := filepath.Join(baseDir, kv.BeaconNodeDbDirName)
clearDBRequired := cliCtx.Bool(cmd.ClearDB.Name)
forceClearDBRequired := cliCtx.Bool(cmd.ForceClearDB.Name)
func openDB(ctx context.Context, dbPath string, clearer *dbClearer) (*kv.Store, error) {
log.WithField("databasePath", dbPath).Info("Checking DB")
d, err := kv.NewKVStore(b.ctx, dbPath)
d, err := kv.NewKVStore(ctx, dbPath)
if err != nil {
return errors.Wrapf(err, "could not create database at %s", dbPath)
return nil, errors.Wrapf(err, "could not create database at %s", dbPath)
}
if clearDBRequired || forceClearDBRequired {
d, err = b.clearDB(clearDBRequired, forceClearDBRequired, d, dbPath)
if err != nil {
return errors.Wrap(err, "could not clear database")
}
d, err = clearer.clearKV(ctx, d)
if err != nil {
return nil, errors.Wrap(err, "could not clear database")
}
if err := d.RunMigrations(b.ctx); err != nil {
return err
}
return d, d.RunMigrations(ctx)
}
b.db = d
depositCache, err = depositsnapshot.New()
func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
depositCache, err := depositsnapshot.New()
if err != nil {
return errors.Wrap(err, "could not create deposit cache")
}
b.depositCache = depositCache
if b.GenesisInitializer != nil {
if err := b.GenesisInitializer.Initialize(b.ctx, d); err != nil {
if errors.Is(err, db.ErrExistingGenesisState) {
return errors.Errorf("Genesis state flag specified but a genesis state "+
"exists already. Run again with --%s and/or ensure you are using the "+
"appropriate testnet flag to load the given genesis state.", cmd.ClearDB.Name)
}
return errors.Wrap(err, "could not load genesis from file")
}
}
if err := b.db.EnsureEmbeddedGenesis(b.ctx); err != nil {
return errors.Wrap(err, "could not ensure embedded genesis")
}
if b.CheckpointInitializer != nil {
log.Info("Checkpoint sync - Downloading origin state and block")
if err := b.CheckpointInitializer.Initialize(b.ctx, d); err != nil {
if err := b.CheckpointInitializer.Initialize(b.ctx, b.db); err != nil {
return err
}
}
@@ -604,49 +564,25 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
log.WithField("address", depositAddress).Info("Deposit contract")
return nil
}
func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context) error {
func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context, clearer *dbClearer) error {
if !b.slasherEnabled {
return nil
}
baseDir := cliCtx.String(cmd.DataDirFlag.Name)
if cliCtx.IsSet(flags.SlasherDirFlag.Name) {
baseDir = cliCtx.String(flags.SlasherDirFlag.Name)
}
dbPath := filepath.Join(baseDir, kv.BeaconNodeDbDirName)
clearDB := cliCtx.Bool(cmd.ClearDB.Name)
forceClearDB := cliCtx.Bool(cmd.ForceClearDB.Name)
log.WithField("databasePath", dbPath).Info("Checking DB")
d, err := slasherkv.NewKVStore(b.ctx, dbPath)
if err != nil {
return err
}
clearDBConfirmed := false
if clearDB && !forceClearDB {
actionText := "This will delete your beacon chain database stored in your data directory. " +
"Your database backups will not be removed - do you want to proceed? (Y/N)"
deniedText := "Database will not be deleted. No changes have been made."
clearDBConfirmed, err = cmd.ConfirmAction(actionText, deniedText)
if err != nil {
return err
}
d, err = clearer.clearSlasher(b.ctx, d)
if err != nil {
return errors.Wrap(err, "could not clear slasher database")
}
if clearDBConfirmed || forceClearDB {
log.Warning("Removing database")
if err := d.ClearDB(); err != nil {
return errors.Wrap(err, "could not clear database")
}
d, err = slasherkv.NewKVStore(b.ctx, dbPath)
if err != nil {
return errors.Wrap(err, "could not create new database")
}
}
b.slasherDB = d
return nil
}
@@ -702,7 +638,6 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
HostDNS: cliCtx.String(cmd.P2PHostDNS.Name),
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
MetaDataDir: cliCtx.String(cmd.P2PMetadata.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
@@ -910,6 +845,7 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
ClockWaiter: b.clockWaiter,
InitialSyncComplete: complete,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
}, opts...)
return b.services.RegisterService(is)
}
@@ -1031,6 +967,7 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
Router: router,
ClockWaiter: b.clockWaiter,
BlobStorage: b.BlobStorage,
DataColumnStorage: b.DataColumnStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
PayloadIDCache: b.payloadIDCache,
LCStore: b.lcStore,

View File

@@ -5,6 +5,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/config/params"
)
// Option for beacon node configuration.
@@ -51,6 +52,13 @@ func WithBlobStorageOptions(opt ...filesystem.BlobStorageOption) Option {
}
}
// WithConfigOptions appends params options to be applied to the beacon chain configuration at startup.
func WithConfigOptions(opt ...params.Option) Option {
return func(bn *BeaconNode) error {
bn.ConfigOptions = append(bn.ConfigOptions, opt...)
return nil
}
}
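WithConfigOptions follows the same functional-option shape as the other constructors in this file: each option just mutates the node under construction, and appending preserves caller order. A self-contained sketch of the pattern, with simplified stand-in types rather than the real params.Option:

package main

import "fmt"

// Option mutates the node under construction; an error aborts New.
type Option func(*BeaconNode) error

type BeaconNode struct{ ConfigOptions []string }

// WithConfigOptions mirrors the accessor above: it only appends, so callers
// may pass it any number of times and ordering is preserved.
func WithConfigOptions(opts ...string) Option {
    return func(bn *BeaconNode) error {
        bn.ConfigOptions = append(bn.ConfigOptions, opts...)
        return nil
    }
}

func New(opts ...Option) (*BeaconNode, error) {
    bn := &BeaconNode{}
    for _, o := range opts {
        if err := o(bn); err != nil {
            return nil, err
        }
    }
    return bn, nil
}

func main() {
    bn, _ := New(WithConfigOptions("genesis-root"), WithConfigOptions("bpo-schedule"))
    fmt.Println(bn.ConfigOptions) // [genesis-root bpo-schedule]
}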
// WithDataColumnStorage sets the DataColumnStorage backend for the BeaconNode
func WithDataColumnStorage(bs *filesystem.DataColumnStorage) Option {
return func(bn *BeaconNode) error {

View File

@@ -49,6 +49,7 @@ go_library(
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
@@ -71,7 +72,6 @@ go_library(
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//runtime:go_default_library",
@@ -168,7 +168,6 @@ go_test(
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network:go_default_library",
"//network/forks:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
@@ -178,6 +177,7 @@ go_test(
"//testing/util:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
@@ -195,7 +195,6 @@ go_test(
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],

View File

@@ -15,7 +15,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -274,14 +273,8 @@ func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, upda
return errors.New("attempted to broadcast nil light client optimistic update")
}
forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(forkDigest)); err != nil {
digest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(digest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client optimistic update")
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
@@ -300,13 +293,7 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
return errors.New("attempted to broadcast nil light client finality update")
}
forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
forkDigest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client finality update")
err := errors.Wrap(err, "could not publish message")
@@ -318,15 +305,15 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
return nil
}
// BroadcastDataColumn broadcasts a data column to the p2p network; the message is assumed to be
// BroadcastDataColumnSidecar broadcasts a data column to the p2p network; the message is assumed to be
// broadcast to the current fork and to the input column subnet.
func (s *Service) BroadcastDataColumn(
func (s *Service) BroadcastDataColumnSidecar(
root [fieldparams.RootLength]byte,
dataColumnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(s.ctx, "p2p.BroadcastDataColumn")
ctx, span := trace.StartSpan(s.ctx, "p2p.BroadcastDataColumnSidecar")
defer span.End()
// Ensure the data column sidecar is not nil.
@@ -343,12 +330,12 @@ func (s *Service) BroadcastDataColumn(
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumn(ctx, root, dataColumnSubnet, dataColumnSidecar, forkDigest)
go s.internalBroadcastDataColumnSidecar(ctx, root, dataColumnSubnet, dataColumnSidecar, forkDigest)
return nil
}
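The renamed broadcast keeps its fire-and-forget split: the nil check and digest computation run synchronously so callers get immediate errors, while the publish (and any column subnet peer discovery) runs on a goroutine. A minimal sketch of that split under hypothetical names:

package main

import (
    "context"
    "errors"
    "fmt"
    "sync"
    "time"
)

// broadcast validates synchronously so callers see errors immediately, then
// publishes asynchronously so slow peer discovery never blocks the caller.
// In the real path, ctx would bound the discovery attempt.
func broadcast(ctx context.Context, wg *sync.WaitGroup, sidecar *string) error {
    if sidecar == nil {
        return errors.New("attempted to broadcast nil data column sidecar")
    }
    wg.Add(1)
    go func() {
        defer wg.Done()
        // Stand-in for discovering subnet peers and publishing to gossipsub.
        time.Sleep(10 * time.Millisecond)
        fmt.Println("published:", *sidecar)
    }()
    return nil
}

func main() {
    var wg sync.WaitGroup
    msg := "column-sidecar"
    if err := broadcast(context.Background(), &wg, &msg); err != nil {
        fmt.Println(err)
    }
    wg.Wait()
}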
func (s *Service) internalBroadcastDataColumn(
func (s *Service) internalBroadcastDataColumnSidecar(
ctx context.Context,
root [fieldparams.RootLength]byte,
columnSubnet uint64,
@@ -356,7 +343,7 @@ func (s *Service) internalBroadcastDataColumn(
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumn")
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumnSidecar")
defer span.End()
// Increase the number of broadcast attempts.

View File

@@ -15,12 +15,13 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
testpb "github.com/OffchainLabs/prysm/v6/proto/testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -59,6 +60,7 @@ func TestService_Broadcast(t *testing.T) {
topic := "/eth2/%x/testing"
// Set a test gossip mapping for testpb.TestSimpleMessage.
GossipTypeMapping[reflect.TypeOf(msg)] = topic
p.clock = startup.NewClock(p.genesisTime, bytesutil.ToBytes32(p.genesisValidatorsRoot))
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic = fmt.Sprintf(topic, digest)
@@ -265,7 +267,8 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(subnet, true)
s.updateSubnetRecordWithMetadata(bitV)
err := s.updateSubnetRecordWithMetadata(bitV)
require.NoError(t, err)
}
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
@@ -550,9 +553,7 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
require.NoError(t, err)
topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, digest)
topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
@@ -616,9 +617,7 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
require.NoError(t, err)
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, digest)
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
@@ -717,7 +716,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
// Attempt to broadcast nil object should fail.
var emptyRoot [fieldparams.RootLength]byte
err = service.BroadcastDataColumn(emptyRoot, subnet, nil)
err = service.BroadcastDataColumnSidecar(emptyRoot, subnet, nil)
require.ErrorContains(t, "attempted to broadcast nil", err)
// Subscribe to the topic.
@@ -728,7 +727,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
time.Sleep(50 * time.Millisecond)
// Broadcast to peers and wait.
err = service.BroadcastDataColumn(emptyRoot, subnet, sidecar)
err = service.BroadcastDataColumnSidecar(emptyRoot, subnet, sidecar)
require.NoError(t, err)
// Receive the message.

View File

@@ -27,7 +27,6 @@ type Config struct {
PrivateKey string
DataDir string
DiscoveryDir string
MetaDataDir string
QUICPort uint
TCPPort uint
UDPPort uint
@@ -37,7 +36,7 @@ type Config struct {
AllowListCIDR string
DenyListCIDR []string
StateNotifier statefeed.Notifier
DB db.ReadOnlyDatabase
DB db.ReadOnlyDatabaseWithSeqNum
ClockWaiter startup.ClockWaiter
}

View File

@@ -211,7 +211,10 @@ func (s *Service) RefreshPersistentSubnets() {
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadata(bitV)
// Not returning early here because the error comes from saving the metadata sequence number.
if err := s.updateSubnetRecordWithMetadata(bitV); err != nil {
log.WithError(err).Error("Failed to update subnet record with metadata")
}
// Ping all peers.
s.pingPeersAndLogEnr()
@@ -269,7 +272,10 @@ func (s *Service) RefreshPersistentSubnets() {
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS, custodyGroupCount)
// Not returning early here because the error comes from saving the metadata sequence number.
if err := s.updateSubnetRecordWithMetadataV2(bitV, bitS, custodyGroupCount); err != nil {
log.WithError(err).Error("Failed to update subnet record with metadata")
}
// Ping all peers to inform them of new metadata
s.pingPeersAndLogEnr()
@@ -289,7 +295,10 @@ func (s *Service) RefreshPersistentSubnets() {
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodyGroupCount)
// Not returning early here because the error comes from saving the metadata sequence number.
if err := s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodyGroupCount); err != nil {
log.WithError(err).Error("Failed to update subnet record with metadata")
}
// Ping all peers.
s.pingPeersAndLogEnr()
@@ -434,20 +443,27 @@ func (s *Service) findPeers(ctx context.Context, missingPeerCount uint) ([]*enod
return peersToDial, ctx.Err()
}
// Skip peer not matching the filter.
node := iterator.Node()
if !s.filterPeer(node) {
continue
}
// Remove duplicates, keeping the node with higher seq.
existing, ok := nodeByNodeID[node.ID()]
if ok && existing.Seq() > node.Seq() {
if ok && existing.Seq() >= node.Seq() {
continue // keep existing and skip.
}
// Treat nodes that exist in nodeByNodeID with higher seq numbers as new peers
// Skip peer not matching the filter.
if !s.filterPeer(node) {
if ok {
// This means the existing peer with the lower sequence number is no longer valid.
delete(nodeByNodeID, existing.ID())
missingPeerCount++
}
continue
}
nodeByNodeID[node.ID()] = node
// We found a new peer. Decrease the missing peer count.
nodeByNodeID[node.ID()] = node
missingPeerCount--
}
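The loop now keeps at most one record per node ID, preferring the highest ENR sequence number, and re-increments missingPeerCount when a previously accepted record later fails the filter. The core dedup rule in isolation, as plain map logic with no discv5 dependency:

package main

import "fmt"

type record struct {
    ID  string
    Seq uint64
}

// dedupe keeps the record with the highest seq per ID, mirroring the
// findPeers rule: an equal-or-lower seq for a known ID is skipped.
func dedupe(records []record) map[string]record {
    byID := make(map[string]record)
    for _, r := range records {
        if existing, ok := byID[r.ID]; ok && existing.Seq >= r.Seq {
            continue // keep existing and skip.
        }
        byID[r.ID] = r
    }
    return byID
}

func main() {
    out := dedupe([]record{{"a", 1}, {"a", 3}, {"a", 2}, {"b", 1}})
    fmt.Println(out["a"].Seq, out["b"].Seq) // 3 1
}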
@@ -576,8 +592,11 @@ func (s *Service) createLocalNode(
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
localNode, err = addForkEntry(localNode, s.genesisTime, s.genesisValidatorsRoot)
if err != nil {
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
current := params.GetNetworkScheduleEntry(currentEpoch)
next := params.NextNetworkScheduleEntry(currentEpoch)
if err := updateENR(localNode, current, next); err != nil {
return nil, errors.Wrap(err, "could not add eth2 fork version entry to enr")
}
@@ -698,7 +717,7 @@ func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nodes that don't match our fork digest.
nodeENR := node.Record()
if s.genesisValidatorsRoot != nil {
if err := s.compareForkENR(nodeENR); err != nil {
if err := compareForkENR(s.dv5Listener.LocalNode().Node().Record(), nodeENR); err != nil {
log.WithError(err).Trace("Fork ENR mismatches between peer and local node")
return false
}

View File

@@ -1,9 +1,11 @@
package p2p
import (
"bytes"
"context"
"crypto/ecdsa"
"crypto/rand"
"crypto/sha256"
"fmt"
mathRand "math/rand"
"net"
@@ -16,6 +18,7 @@ import (
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/peerdata"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
@@ -57,6 +60,81 @@ func createAddrAndPrivKey(t *testing.T) (net.IP, *ecdsa.PrivateKey) {
return ipAddr, pkey
}
// createTestNodeWithID creates a LocalNode for testing with deterministic private key
// This is needed for deduplication tests where we need the same node ID across different sequence numbers
func createTestNodeWithID(t *testing.T, id string) *enode.LocalNode {
// Create a deterministic reader based on the ID for consistent key generation
h := sha256.New()
h.Write([]byte(id))
seedBytes := h.Sum(nil)
// Create a deterministic reader using the seed
deterministicReader := bytes.NewReader(seedBytes)
// Generate the private key using the same approach as the production code
privKey, _, err := crypto.GenerateSecp256k1Key(deterministicReader)
require.NoError(t, err)
// Convert to ECDSA private key for enode usage
ecdsaPrivKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(privKey)
require.NoError(t, err)
db, err := enode.OpenDB("")
require.NoError(t, err)
t.Cleanup(func() { db.Close() })
localNode := enode.NewLocalNode(db, ecdsaPrivKey)
// Set basic properties
localNode.SetStaticIP(net.ParseIP("127.0.0.1"))
localNode.Set(enr.TCP(3000))
localNode.Set(enr.UDP(3000))
localNode.Set(enr.WithEntry(eth2EnrKey, make([]byte, 16)))
return localNode
}
// createTestNodeRandom creates a LocalNode for testing using the existing createAddrAndPrivKey function
func createTestNodeRandom(t *testing.T) *enode.LocalNode {
_, privKey := createAddrAndPrivKey(t)
db, err := enode.OpenDB("")
require.NoError(t, err)
t.Cleanup(func() { db.Close() })
localNode := enode.NewLocalNode(db, privKey)
// Set basic properties
localNode.SetStaticIP(net.ParseIP("127.0.0.1"))
localNode.Set(enr.TCP(3000))
localNode.Set(enr.UDP(3000))
localNode.Set(enr.WithEntry(eth2EnrKey, make([]byte, 16)))
return localNode
}
// setNodeSeq updates a LocalNode to have the specified sequence number
func setNodeSeq(localNode *enode.LocalNode, seq uint64) {
// Force the sequence number forward: each record update increments seq by one,
// so we keep setting dummy entries until the target seq is reached.
currentSeq := localNode.Node().Seq()
for currentSeq < seq {
localNode.Set(enr.WithEntry("dummy", currentSeq))
currentSeq++
}
}
// setNodeSubnets sets the attestation subnets for a LocalNode
func setNodeSubnets(localNode *enode.LocalNode, attSubnets []uint64) {
if len(attSubnets) > 0 {
bitV := bitfield.NewBitvector64()
for _, subnet := range attSubnets {
bitV.SetBitAt(subnet, true)
}
localNode.Set(enr.WithEntry(attSubnetEnrKey, &bitV))
}
}
func TestCreateListener(t *testing.T) {
port := 1024
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -240,7 +318,7 @@ func TestCreateLocalNode(t *testing.T) {
// Check fork is set.
fork := new([]byte)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(eth2ENRKey, fork)))
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(eth2EnrKey, fork)))
require.NotEmpty(t, *fork)
// Check att subnets.
@@ -361,6 +439,8 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
cfg.StaticPeers = staticPeers
cfg.StateNotifier = &mock.MockStateNotifier{}
cfg.NoDiscovery = true
cfg.DB = testDB.SetupDB(t)
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
@@ -489,7 +569,7 @@ func TestMultipleDiscoveryAddresses(t *testing.T) {
node := enode.NewLocalNode(db, key)
node.Set(enr.IPv4{127, 0, 0, 1})
node.Set(enr.IPv6{0x20, 0x01, 0x48, 0x60, 0, 0, 0x20, 0x01, 0, 0, 0, 0, 0, 0, 0x00, 0x68})
s := &Service{dv5Listener: mockListener{localNode: node}}
s := &Service{dv5Listener: testp2p.NewMockListener(node, nil)}
multiAddresses, err := s.DiscoveryAddresses()
require.NoError(t, err)
@@ -514,7 +594,7 @@ func TestDiscoveryV5_SeqNumber(t *testing.T) {
node := enode.NewLocalNode(db, key)
node.Set(enr.IPv4{127, 0, 0, 1})
currentSeq := node.Seq()
s := &Service{dv5Listener: mockListener{localNode: node}}
s := &Service{dv5Listener: testp2p.NewMockListener(node, nil)}
_, err = s.DiscoveryAddresses()
require.NoError(t, err)
newSeq := node.Seq()
@@ -526,7 +606,7 @@ func TestDiscoveryV5_SeqNumber(t *testing.T) {
nodeTwo.Set(enr.IPv6{0x20, 0x01, 0x48, 0x60, 0, 0, 0x20, 0x01, 0, 0, 0, 0, 0, 0, 0x00, 0x68})
seqTwo := nodeTwo.Seq()
assert.NotEqual(t, seqTwo, newSeq)
sTwo := &Service{dv5Listener: mockListener{localNode: nodeTwo}}
sTwo := &Service{dv5Listener: testp2p.NewMockListener(nodeTwo, nil)}
_, err = sTwo.DiscoveryAddresses()
require.NoError(t, err)
assert.Equal(t, seqTwo+1, nodeTwo.Seq())
@@ -828,7 +908,7 @@ func TestRefreshPersistentSubnets(t *testing.T) {
actualPingCount++
return nil
},
cfg: &Config{UDPPort: 2000},
cfg: &Config{UDPPort: 2000, DB: testDB.SetupDB(t)},
peers: p2p.Peers(),
genesisTime: time.Now().Add(-time.Duration(tc.epochSinceGenesis*secondsPerEpoch) * time.Second),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
@@ -883,3 +963,291 @@ func TestRefreshPersistentSubnets(t *testing.T) {
// Reset the config.
params.OverrideBeaconConfig(defaultCfg)
}
func TestFindPeers_NodeDeduplication(t *testing.T) {
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
// Create LocalNodes and manipulate sequence numbers
localNode1 := createTestNodeWithID(t, "node1")
localNode2 := createTestNodeWithID(t, "node2")
localNode3 := createTestNodeWithID(t, "node3")
// Create different sequence versions of node1
setNodeSeq(localNode1, 1)
node1_seq1 := localNode1.Node()
setNodeSeq(localNode1, 2)
node1_seq2 := localNode1.Node() // Same ID, higher seq
setNodeSeq(localNode1, 3)
node1_seq3 := localNode1.Node() // Same ID, even higher seq
// Other nodes with seq 1
node2_seq1 := localNode2.Node()
node3_seq1 := localNode3.Node()
tests := []struct {
name string
nodes []*enode.Node
missingPeers uint
expectedCount int
description string
eval func(t *testing.T, result []*enode.Node)
}{
{
name: "No duplicates - all unique nodes",
nodes: []*enode.Node{
node2_seq1,
node3_seq1,
},
missingPeers: 2,
expectedCount: 2,
description: "Should return all unique nodes without deduplication",
eval: nil, // No special validation needed
},
{
name: "Duplicate with lower seq comes first - should replace",
nodes: []*enode.Node{
node1_seq1,
node1_seq2, // Higher seq, should replace
node2_seq1, // Different node added after duplicates are processed
},
missingPeers: 2, // Need 2 peers so we process all nodes
expectedCount: 2, // Should get node1 (with higher seq) and node2
description: "Should keep node with higher sequence number when duplicate found",
eval: func(t *testing.T, result []*enode.Node) {
// Should have node2 and node1 with higher seq (node1_seq2)
foundNode1WithHigherSeq := false
for _, node := range result {
if node.ID() == node1_seq2.ID() {
require.Equal(t, node1_seq2.Seq(), node.Seq(), "Node1 should have higher seq")
foundNode1WithHigherSeq = true
}
}
require.Equal(t, true, foundNode1WithHigherSeq, "Should have node1 with higher seq")
},
},
{
name: "Duplicate with higher seq comes first - should keep existing",
nodes: []*enode.Node{
node1_seq3, // Higher seq
node1_seq2, // Lower seq, should be skipped (continue branch)
node1_seq1, // Even lower seq, should also be skipped (continue branch)
node2_seq1, // Different node added after duplicates are processed
},
missingPeers: 2,
expectedCount: 2,
description: "Should keep existing node when it has higher sequence number and skip all lower seq duplicates",
eval: func(t *testing.T, result []*enode.Node) {
// Should have kept the node with highest seq (node1_seq3)
foundNode1WithHigherSeq := false
for _, node := range result {
if node.ID() == node1_seq3.ID() {
require.Equal(t, node1_seq3.Seq(), node.Seq(), "Node1 should have highest seq")
foundNode1WithHigherSeq = true
}
}
require.Equal(t, true, foundNode1WithHigherSeq, "Should have node1 with highest seq")
},
},
{
name: "Multiple duplicates with increasing seq",
nodes: []*enode.Node{
node1_seq1,
node1_seq2, // Should replace seq1
node1_seq3, // Should replace seq2
node2_seq1, // Different node added after duplicates are processed
},
missingPeers: 2,
expectedCount: 2,
description: "Should keep updating to highest sequence number",
eval: func(t *testing.T, result []*enode.Node) {
// Should have the node with highest seq (node1_seq3)
foundNode1WithHigherSeq := false
for _, node := range result {
if node.ID() == node1_seq3.ID() {
require.Equal(t, node1_seq3.Seq(), node.Seq(), "Node1 should have highest seq")
foundNode1WithHigherSeq = true
}
}
require.Equal(t, true, foundNode1WithHigherSeq, "Should have node1 with highest seq")
},
},
{
name: "Duplicate with equal seq comes after - should skip",
nodes: []*enode.Node{
node1_seq2, // First occurrence
node1_seq2, // Same exact node instance, should be skipped (continue branch for >= case)
node2_seq1, // Different node
},
missingPeers: 2,
expectedCount: 2,
description: "Should skip duplicate with equal sequence number",
eval: func(t *testing.T, result []*enode.Node) {
// Should have exactly one instance of node1_seq2 and one instance of node2_seq1
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq2.ID() {
require.Equal(t, node1_seq2.Seq(), node.Seq(), "Node1 should have the expected seq")
require.Equal(t, false, foundNode1, "Should have only one instance of node1") // Ensure no duplicates
foundNode1 = true
}
if node.ID() == node2_seq1.ID() {
foundNode2 = true
}
}
require.Equal(t, true, foundNode1, "Should have node1")
require.Equal(t, true, foundNode2, "Should have node2")
},
},
{
name: "Mix of unique and duplicate nodes",
nodes: []*enode.Node{
node1_seq1,
node2_seq1,
node1_seq2, // Should replace node1_seq1
node3_seq1,
node1_seq3, // Should replace node1_seq2
},
missingPeers: 3,
expectedCount: 3,
description: "Should handle mix of unique nodes and duplicates correctly",
eval: nil, // Basic count validation is sufficient
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{
MaxPeers: 30,
},
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := s.findPeers(ctxWithTimeout, tt.missingPeers)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
// callbackIterator allows us to execute callbacks at specific points during iteration
type callbackIterator struct {
nodes []*enode.Node
index int
callbacks map[int]func() // map from index to callback function
}
func (c *callbackIterator) Next() bool {
// Execute the callback registered for this index, if one exists, before checking whether iteration can continue.
if callback, exists := c.callbacks[c.index]; exists {
callback()
}
return c.index < len(c.nodes)
}
func (c *callbackIterator) Node() *enode.Node {
if c.index >= len(c.nodes) {
return nil
}
node := c.nodes[c.index]
c.index++
return node
}
func (c *callbackIterator) Close() {
// Nothing to clean up for this simple implementation
}
func TestFindPeers_received_bad_existing_node(t *testing.T) {
// This test successfully triggers delete(nodeByNodeID, node.ID()) in subnets.go by:
// 1. Processing node1_seq1 first (passes filterPeer, gets added to the map).
// 2. Marking the peer as bad via the callback before node1_seq2 is processed.
// 3. Processing node1_seq2 (fails filterPeer, triggering the delete since ok=true).
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
// Create LocalNode with same ID but different sequences
localNode1 := createTestNodeWithID(t, "testnode")
node1_seq1 := localNode1.Node() // Get current node
currentSeq := node1_seq1.Seq()
setNodeSeq(localNode1, currentSeq+1) // Increment sequence by 1
node1_seq2 := localNode1.Node() // This should have higher seq
// Additional node to ensure we have enough peers to process
localNode2 := createTestNodeWithID(t, "othernode")
node2 := localNode2.Node()
fakePeer := testp2p.NewTestP2P(t)
service := &Service{
cfg: &Config{
MaxPeers: 30,
},
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Create iterator with callback that marks peer as bad before processing node1_seq2
iter := &callbackIterator{
nodes: []*enode.Node{node1_seq1, node1_seq2, node2},
index: 0,
callbacks: map[int]func(){
1: func() { // Before processing node1_seq2 (index 1)
// Mark peer as bad before processing node1_seq2
peerData, _, _ := convertToAddrInfo(node1_seq2)
if peerData != nil {
service.peers.Add(node1_seq2.Record(), peerData.ID, nil, network.DirUnknown)
// Mark as bad peer - need enough increments to exceed threshold (6)
for i := 0; i < 10; i++ {
service.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
}
}
},
},
}
localNode := createTestNodeRandom(t)
service.dv5Listener = testp2p.NewMockListener(localNode, iter)
// Run findPeers - node1_seq1 gets processed first, then callback marks peer bad, then node1_seq2 fails
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := service.findPeers(ctxWithTimeout, 3)
require.NoError(t, err)
require.Equal(t, 1, len(result))
}

View File

@@ -3,12 +3,9 @@ package p2p
import (
"bytes"
"fmt"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/network/forks"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -16,8 +13,17 @@ import (
"github.com/sirupsen/logrus"
)
// ENR key used for Ethereum consensus-related fork data.
var eth2ENRKey = params.BeaconNetworkConfig().ETH2Key
var (
errForkScheduleMismatch = errors.New("peer fork schedule incompatible")
errCurrentDigestMismatch = errors.Wrap(errForkScheduleMismatch, "current_fork_digest mismatch")
errNextVersionMismatch = errors.Wrap(errForkScheduleMismatch, "next_fork_version mismatch")
errNextDigestMismatch = errors.Wrap(errForkScheduleMismatch, "nfd (next fork digest) mismatch")
)
const (
eth2EnrKey = "eth2" // The `eth2` ENR entry advertizes the node's view of the fork schedule with an ssz-encoded ENRForkID value.
nfdEnrKey = "nfd" // The `nfd` ENR entry separately advertizes the "next fork digest" aspect of the fork schedule.
)
// currentForkDigest returns the current fork digest of
// the node according to the local clock.
@@ -25,97 +31,136 @@ func (s *Service) currentForkDigest() ([4]byte, error) {
if !s.isInitialized() {
return [4]byte{}, errors.New("state is not initialized")
}
return forks.CreateForkDigest(s.genesisTime, s.genesisValidatorsRoot)
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
return params.ForkDigest(currentEpoch), nil
}
// Compares fork ENRs between an incoming peer's record and our node's
// local record values for current and next fork version/epoch.
func (s *Service) compareForkENR(record *enr.Record) error {
currentRecord := s.dv5Listener.LocalNode().Node().Record()
peerForkENR, err := forkEntry(record)
func compareForkENR(self, peer *enr.Record) error {
peerEntry, err := forkEntry(peer)
if err != nil {
return err
}
currentForkENR, err := forkEntry(currentRecord)
selfEntry, err := forkEntry(self)
if err != nil {
return err
}
enrString, err := SerializeENR(record)
peerString, err := SerializeENR(peer)
if err != nil {
return err
}
// Clients SHOULD connect to peers with current_fork_digest, next_fork_version,
// and next_fork_epoch that match local values.
if !bytes.Equal(peerForkENR.CurrentForkDigest, currentForkENR.CurrentForkDigest) {
return fmt.Errorf(
if !bytes.Equal(peerEntry.CurrentForkDigest, selfEntry.CurrentForkDigest) {
return errors.Wrapf(errCurrentDigestMismatch,
"fork digest of peer with ENR %s: %v, does not match local value: %v",
enrString,
peerForkENR.CurrentForkDigest,
currentForkENR.CurrentForkDigest,
peerString,
peerEntry.CurrentForkDigest,
selfEntry.CurrentForkDigest,
)
}
// Clients MAY connect to peers with the same current_fork_version but a
// different next_fork_version/next_fork_epoch. Unless ENRForkID is manually
// updated to matching prior to the earlier next_fork_epoch of the two clients,
// these type of connecting clients will be unable to successfully interact
// starting at the earlier next_fork_epoch.
if peerForkENR.NextForkEpoch != currentForkENR.NextForkEpoch {
if peerEntry.NextForkEpoch != selfEntry.NextForkEpoch {
log.WithFields(logrus.Fields{
"peerNextForkEpoch": peerForkENR.NextForkEpoch,
"peerENR": enrString,
"peerNextForkEpoch": peerEntry.NextForkEpoch,
"peerNextForkVersion": peerEntry.NextForkVersion,
"peerENR": peerString,
}).Trace("Peer matches fork digest but has different next fork epoch")
// We allow the connection because the peer has a different view of the next fork epoch. This
// could be due to peers that have not yet upgraded ahead of a fork or BPO schedule change, so
// we let the connection continue until the fork boundary.
return nil
}
if !bytes.Equal(peerForkENR.NextForkVersion, currentForkENR.NextForkVersion) {
log.WithFields(logrus.Fields{
"peerNextForkVersion": peerForkENR.NextForkVersion,
"peerENR": enrString,
}).Trace("Peer matches fork digest but has different next fork version")
// Since we agree on the next fork epoch, we require next fork version to also be in agreement.
if !bytes.Equal(peerEntry.NextForkVersion, selfEntry.NextForkVersion) {
return errors.Wrapf(errNextVersionMismatch,
"next fork version of peer with ENR %s: %#x, does not match local value: %#x",
peerString, peerEntry.NextForkVersion, selfEntry.NextForkVersion)
}
// Fulu adds the following to the spec:
// ---
// A new entry is added to the ENR under the key nfd, short for next fork digest. This entry
// communicates the digest of the next scheduled fork, regardless of whether it is a regular
// or a Blob-Parameters-Only fork. This new entry MUST be added once FULU_FORK_EPOCH is assigned
// any value other than FAR_FUTURE_EPOCH. Adding this entry prior to the Fulu fork will not
// impact peering as nodes will ignore unknown ENR entries and nfd mismatches do not cause
// disconnects.
// When discovering and interfacing with peers, nodes MUST evaluate nfd alongside their existing
// consideration of the ENRForkID::next_* fields under the eth2 key, to form a more accurate
// view of the peer's intended next fork for the purposes of sustained peering. If there is a
// mismatch, the node MUST NOT disconnect before the fork boundary, but it MAY disconnect
// at/after the fork boundary.
// Nodes unprepared to follow the Fulu fork will be unaware of nfd entries. However, their
// existing comparison of eth2 entries (concretely next_fork_epoch) is sufficient to detect
// upcoming divergence.
// ---
// Because this is a new inbound connection, we lean on the pre-fulu point that clients
// MAY connect to peers with the same current_fork_version but a different
// next_fork_version/next_fork_epoch, which implies we can choose not to connect to them when
// these don't match.
//
// Given that the next_fork_epoch matches, we will require the next_fork_digest to match.
if !params.FuluEnabled() {
return nil
}
peerNFD, selfNFD := nfd(peer), nfd(self)
if peerNFD != selfNFD {
return errors.Wrapf(errNextDigestMismatch,
"next fork digest of peer with ENR %s: %v, does not match local value: %v",
peerString, peerNFD, selfNFD)
}
return nil
}
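Summarizing the new rules: a current digest mismatch always rejects; a differing next fork epoch is tolerated (logged only) until the fork boundary; a matching next epoch requires a matching next fork version; and once Fulu is enabled, a matching next epoch also requires matching nfd values. A table-style sketch of that decision logic under those assumptions:

package main

import (
    "errors"
    "fmt"
)

type forkView struct {
    currentDigest [4]byte
    nextVersion   [4]byte
    nextEpoch     uint64
    nfd           [4]byte // zero value when the peer predates fulu
}

// compatible mirrors the compareForkENR rules: hard-fail on the current
// digest, tolerate a differing next epoch, and otherwise require version
// (and, when fulu is enabled, nfd) agreement.
func compatible(self, peer forkView, fuluEnabled bool) error {
    if peer.currentDigest != self.currentDigest {
        return errors.New("current fork digest mismatch")
    }
    if peer.nextEpoch != self.nextEpoch {
        return nil // allowed until the fork boundary; only logged in practice.
    }
    if peer.nextVersion != self.nextVersion {
        return errors.New("next fork version mismatch")
    }
    if fuluEnabled && peer.nfd != self.nfd {
        return errors.New("next fork digest (nfd) mismatch")
    }
    return nil
}

func main() {
    self := forkView{currentDigest: [4]byte{1}, nextVersion: [4]byte{2}, nextEpoch: 100}
    peer := self
    peer.nextEpoch = 200 // a different view of the next fork is still accepted.
    fmt.Println(compatible(self, peer, true)) // <nil>
}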
// updateENR sets a fork entry as an ENR record under the Ethereum consensus EnrKey for
// the local node. The fork entry is an ssz-encoded ENRForkID built from the
// current and next network schedule entries; when fulu is scheduled, the
// `nfd` (next fork digest) entry is set as well.
func addForkEntry(
node *enode.LocalNode,
genesisTime time.Time,
genesisValidatorsRoot []byte,
) (*enode.LocalNode, error) {
digest, err := forks.CreateForkDigest(genesisTime, genesisValidatorsRoot)
if err != nil {
return nil, err
}
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
if prysmTime.Now().Before(genesisTime) {
currentEpoch = 0
}
nextForkVersion, nextForkEpoch, err := forks.NextForkData(currentEpoch)
if err != nil {
return nil, err
}
func updateENR(node *enode.LocalNode, entry, next params.NetworkScheduleEntry) error {
enrForkID := &pb.ENRForkID{
CurrentForkDigest: digest[:],
NextForkVersion: nextForkVersion[:],
NextForkEpoch: nextForkEpoch,
CurrentForkDigest: entry.ForkDigest[:],
NextForkVersion: next.ForkVersion[:],
NextForkEpoch: next.Epoch,
}
if entry.Epoch == next.Epoch {
enrForkID.NextForkEpoch = params.BeaconConfig().FarFutureEpoch
}
logFields := logrus.Fields{
"CurrentForkDigest": fmt.Sprintf("%#x", enrForkID.CurrentForkDigest),
"NextForkVersion": fmt.Sprintf("%#x", enrForkID.NextForkVersion),
"NextForkEpoch": fmt.Sprintf("%d", enrForkID.NextForkEpoch),
}
if params.BeaconConfig().FuluForkEpoch != params.BeaconConfig().FarFutureEpoch {
if entry.ForkDigest == next.ForkDigest {
node.Set(enr.WithEntry(nfdEnrKey, make([]byte, len(next.ForkDigest))))
} else {
node.Set(enr.WithEntry(nfdEnrKey, next.ForkDigest[:]))
}
logFields["NextForkDigest"] = fmt.Sprintf("%#x", next.ForkDigest)
}
log.WithFields(logFields).Info("Updating ENR Fork ID")
enc, err := enrForkID.MarshalSSZ()
if err != nil {
return nil, err
return err
}
forkEntry := enr.WithEntry(eth2ENRKey, enc)
forkEntry := enr.WithEntry(eth2EnrKey, enc)
node.Set(forkEntry)
return node, nil
return nil
}
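Two edge cases in updateENR are easy to miss: when no further fork is scheduled, the current and next schedule entries coincide, so next_fork_epoch is pinned to FAR_FUTURE_EPOCH, and the nfd entry is zero-filled whenever the two digests match. A condensed sketch of just that branching, with field names assumed from the diff:

package main

import "fmt"

const farFutureEpoch = ^uint64(0)

type scheduleEntry struct {
    Epoch      uint64
    ForkDigest [4]byte
}

// enrForkFields mirrors updateENR's branching: identical current/next entries
// mean "no next fork", so advertise FAR_FUTURE_EPOCH and a zero nfd.
func enrForkFields(current, next scheduleEntry) (nextEpoch uint64, nfd [4]byte) {
    nextEpoch = next.Epoch
    if current.Epoch == next.Epoch {
        nextEpoch = farFutureEpoch
    }
    if current.ForkDigest != next.ForkDigest {
        nfd = next.ForkDigest
    }
    return nextEpoch, nfd
}

func main() {
    cur := scheduleEntry{Epoch: 100, ForkDigest: [4]byte{1}}
    epoch, nfd := enrForkFields(cur, cur) // no scheduled next fork
    fmt.Println(epoch == farFutureEpoch, nfd) // true [0 0 0 0]
}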
// Retrieves an enrForkID from an ENR record by key lookup
// under the Ethereum consensus EnrKey
func forkEntry(record *enr.Record) (*pb.ENRForkID, error) {
sszEncodedForkEntry := make([]byte, 16)
entry := enr.WithEntry(eth2ENRKey, &sszEncodedForkEntry)
entry := enr.WithEntry(eth2EnrKey, &sszEncodedForkEntry)
err := record.Load(entry)
if err != nil {
return nil, err
@@ -126,3 +171,15 @@ func forkEntry(record *enr.Record) (*pb.ENRForkID, error) {
}
return forkEntry, nil
}
// nfd retrieves the value of the `nfd` ("next fork digest") key from an ENR record.
func nfd(record *enr.Record) [4]byte {
digest := [4]byte{}
entry := enr.WithEntry(nfdEnrKey, &digest)
if err := record.Load(entry); err != nil {
// Treat a missing nfd entry as an empty digest.
// We do this to avoid errors when checking peers that have not upgraded for fulu.
return [4]byte{}
}
return digest
}
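Because record.Load errors on an absent key, the zero digest doubles as the "no nfd entry" signal, so a pre-fulu peer compares equal to a node advertising an all-zero nfd. The same load-with-default shape in miniature, with a plain map standing in for the ENR record:

package main

import "fmt"

// loadDigest mirrors nfd's fallback: a missing entry yields the zero digest
// rather than an error, so pre-fulu peers read as "no next fork digest".
func loadDigest(entries map[string][4]byte, key string) [4]byte {
    if d, ok := entries[key]; ok {
        return d
    }
    return [4]byte{} // treat a missing entry as an empty digest.
}

func main() {
    rec := map[string][4]byte{"eth2": {0xAA, 0xBB, 0xCC, 0xDD}}
    fmt.Println(loadDigest(rec, "nfd")) // [0 0 0 0]
}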

View File

@@ -8,246 +8,152 @@ import (
"testing"
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
ma "github.com/multiformats/go-multiaddr"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
const port = 2000
func TestCompareForkENR(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096
params.BeaconConfig().InitializeForkSchedule()
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
s := &Service{
cfg: &Config{
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
db, err := enode.OpenDB("")
assert.NoError(t, err)
_, k := createAddrAndPrivKey(t)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
self := enode.NewLocalNode(db, k)
require.NoError(t, updateENR(self, current, next))
cases := []struct {
name string
expectErr error
expectLog string
node func(t *testing.T) *enode.Node
}{
{
name: "match",
node: func(t *testing.T) *enode.Node {
// Create a peer with the same current fork digest and next fork version/epoch.
peer := enode.NewLocalNode(db, k)
require.NoError(t, updateENR(peer, current, next))
return peer.Node()
},
},
{
name: "current digest mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer whose current fork digest differs from the local value.
peer := enode.NewLocalNode(db, k)
testDigest := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
require.NotEqual(t, current.ForkDigest, testDigest, "ensure test fork digest is unique")
currentCopy := current
currentCopy.ForkDigest = testDigest
require.NoError(t, updateENR(peer, currentCopy, next))
return peer.Node()
},
expectErr: errCurrentDigestMismatch,
},
{
name: "next_fork_epoch match, next_fork_version mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer with a matching next fork epoch but a mismatched next fork version.
peer := enode.NewLocalNode(db, k)
testVersion := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
require.NotEqual(t, next.ForkVersion, testVersion, "ensure test fork version is unique")
nextCopy := next
nextCopy.ForkVersion = testVersion
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectErr: errNextVersionMismatch,
},
{
name: "next fork epoch mismatch, next fork digest mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer with a mismatched next fork epoch and next fork digest.
peer := enode.NewLocalNode(db, k)
nextCopy := next
// next epoch does not match, and neither does the next fork digest.
nextCopy.Epoch = nextCopy.Epoch + 1
nfd := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
require.NotEqual(t, next.ForkDigest, nfd)
nextCopy.ForkDigest = nfd
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
// no error because we allow a different next fork version / digest if the next fork epoch does not match
},
{
name: "next fork epoch -match-, next fork digest mismatch",
node: func(t *testing.T) *enode.Node {
peer := enode.NewLocalNode(db, k)
nextCopy := next
nfd := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
// next epoch *does match*, but the next fork digest doesn't - so we should get an error.
require.NotEqual(t, next.ForkDigest, nfd)
nextCopy.ForkDigest = nfd
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectErr: errNextDigestMismatch,
},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
// We give every peer a different genesis validators root, which
// will cause each peer to have a different ForkDigest, preventing
// them from connecting according to our discovery rules for Ethereum consensus.
root := make([]byte, 32)
copy(root, strconv.Itoa(port))
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: root,
custodyInfo: &custodyInfo{},
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
peer := c.node(t)
err := compareForkENR(self.Node().Record(), peer.Record())
if c.expectErr != nil {
require.ErrorIs(t, err, c.expectErr, "Expected error to match")
} else {
require.NoError(t, err, "Expected no error comparing fork ENRs")
}
})
}
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
// Now, we start a new p2p service. It should have no peers aside from the
// bootnode given all nodes provided by discv5 will have different fork digests.
cfg.UDPPort = 14000
cfg.TCPPort = 14001
cfg.MaxPeers = 30
s, err = NewService(t.Context(), cfg)
require.NoError(t, err)
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
addrs := make([]ma.Multiaddr, 0)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, nodeAddrs...)
}
}
// We should not have valid peers if the fork digest mismatched.
assert.Equal(t, 0, len(addrs), "Expected 0 valid peers")
require.NoError(t, s.Stop())
}
func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
const port = 2000
func TestNfdSetAndLoad(t *testing.T) {
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.TraceLevel)
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
c := params.BeaconConfig().Copy()
nextForkEpoch := primitives.Epoch(i)
c.ForkVersionSchedule[[4]byte{'A', 'B', 'C', 'D'}] = nextForkEpoch
params.OverrideBeaconConfig(c)
// We give every peer a different genesis validators root, which
// will cause each peer to have a different ForkDigest, preventing
// them from connecting according to our discovery rules for Ethereum consensus.
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
}
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
// Now, we start a new p2p service. It should have no peers aside from the
// bootnode given all nodes provided by discv5 will have different fork digests.
cfg.UDPPort = 14000
cfg.TCPPort = 14001
cfg.MaxPeers = 30
cfg.StateNotifier = &mock.MockStateNotifier{}
s, err = NewService(t.Context(), cfg)
require.NoError(t, err)
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
addrs := make([]ma.Multiaddr, 0, len(nodes))
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, nodeAddrs...)
}
}
if len(addrs) == 0 {
t.Error("Expected to have valid peers, got 0")
}
require.LogsContain(t, hook, "Peer matches fork digest but has different next fork epoch")
require.NoError(t, s.Stop())
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 4096
params.BeaconConfig().InitializeForkSchedule()
db, err := enode.OpenDB("")
assert.NoError(t, err)
_, k := createAddrAndPrivKey(t)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
next.ForkDigest = [4]byte{0xFF, 0xFF, 0xFF, 0xFF} // Ensure a unique digest for testing.
self := enode.NewLocalNode(db, k)
require.NoError(t, updateENR(self, current, next))
n := nfd(self.Node().Record())
assert.Equal(t, next.ForkDigest, n, "Expected nfd to match next fork digest")
}
func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
c.ForkVersionSchedule = map[[4]byte]primitives.Epoch{
bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion): 0,
{0, 0, 0, 1}: 1,
}
nextForkEpoch := primitives.Epoch(1)
nextForkVersion := []byte{0, 0, 0, 1}
params.OverrideBeaconConfig(c)
params.BeaconConfig().InitializeForkSchedule()
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
enrForkID := &pb.ENRForkID{
CurrentForkDigest: current.ForkDigest[:],
NextForkVersion: next.ForkVersion[:],
NextForkEpoch: next.Epoch,
}
enc, err := enrForkID.MarshalSSZ()
require.NoError(t, err)
entry := enr.WithEntry(eth2EnrKey, enc)
temp := t.TempDir()
randNum := rand.Int()
tempPath := path.Join(temp, strconv.Itoa(randNum))
@@ -259,18 +165,16 @@ func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
localNode := enode.NewLocalNode(db, pkey)
localNode.Set(entry)
resp, err := forkEntry(localNode.Node().Record())
require.NoError(t, err)
assert.Equal(t, hexutil.Encode(current.ForkDigest[:]), hexutil.Encode(resp.CurrentForkDigest))
assert.Equal(t, hexutil.Encode(next.ForkVersion[:]), hexutil.Encode(resp.NextForkVersion))
assert.Equal(t, next.Epoch, resp.NextForkEpoch, "Unexpected next fork epoch")
}
func TestAddForkEntry_NextForkVersion(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
temp := t.TempDir()
randNum := rand.Int()
tempPath := path.Join(temp, strconv.Itoa(randNum))
@@ -280,17 +184,135 @@ func TestAddForkEntry_Genesis(t *testing.T) {
db, err := enode.OpenDB("")
require.NoError(t, err)
bCfg := params.MainnetConfig()
params.OverrideBeaconConfig(bCfg)
localNode := enode.NewLocalNode(db, pkey)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
// Add the fork entry to the local node's ENR.
require.NoError(t, updateENR(localNode, current, next))
fe, err := forkEntry(localNode.Node().Record())
require.NoError(t, err)
assert.Equal(t,
hexutil.Encode(params.BeaconConfig().AltairForkVersion), hexutil.Encode(fe.NextForkVersion),
"Wanted Next Fork Version to be equal to genesis fork version")
last := params.LastForkEpoch()
current = params.GetNetworkScheduleEntry(last)
next = params.NextNetworkScheduleEntry(last)
require.NoError(t, updateENR(localNode, current, next))
entry := params.NextNetworkScheduleEntry(last)
fe, err = forkEntry(localNode.Node().Record())
require.NoError(t, err)
assert.Equal(t,
hexutil.Encode(entry.ForkVersion[:]), hexutil.Encode(fe.NextForkVersion),
"Wanted Next Fork Version to be equal to last entry in schedule")
}
func TestUpdateENR_FuluForkDigest(t *testing.T) {
setupTest := func(t *testing.T, fuluEnabled bool) (*enode.LocalNode, func()) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
if fuluEnabled {
cfg.FuluForkEpoch = 100
} else {
cfg.FuluForkEpoch = cfg.FarFutureEpoch
}
cfg.FuluForkVersion = []byte{5, 0, 0, 0}
params.OverrideBeaconConfig(cfg)
cfg.InitializeForkSchedule()
pkey, err := privKey(&Config{DataDir: t.TempDir()})
require.NoError(t, err, "Could not get private key")
db, err := enode.OpenDB("")
require.NoError(t, err)
localNode := enode.NewLocalNode(db, pkey)
cleanup := func() {
db.Close()
}
return localNode, cleanup
}
tests := []struct {
name string
fuluEnabled bool
currentEntry params.NetworkScheduleEntry
nextEntry params.NetworkScheduleEntry
validateNFD func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry)
}{
{
name: "different digests sets nfd to next digest",
fuluEnabled: true,
currentEntry: params.NetworkScheduleEntry{
Epoch: 50,
ForkDigest: [4]byte{1, 2, 3, 4},
ForkVersion: [4]byte{1, 0, 0, 0},
},
nextEntry: params.NetworkScheduleEntry{
Epoch: 100,
ForkDigest: [4]byte{5, 6, 7, 8}, // Different from current
ForkVersion: [4]byte{2, 0, 0, 0},
},
validateNFD: func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry) {
var nfdValue []byte
err := localNode.Node().Record().Load(enr.WithEntry(nfdEnrKey, &nfdValue))
require.NoError(t, err)
assert.DeepEqual(t, nextEntry.ForkDigest[:], nfdValue, "nfd entry should equal next fork digest")
},
},
{
name: "same digests sets nfd to empty",
fuluEnabled: true,
currentEntry: params.NetworkScheduleEntry{
Epoch: 50,
ForkDigest: [4]byte{1, 2, 3, 4},
ForkVersion: [4]byte{1, 0, 0, 0},
},
nextEntry: params.NetworkScheduleEntry{
Epoch: 100,
ForkDigest: [4]byte{1, 2, 3, 4}, // Same as current
ForkVersion: [4]byte{2, 0, 0, 0},
},
validateNFD: func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry) {
var nfdValue []byte
err := localNode.Node().Record().Load(enr.WithEntry(nfdEnrKey, &nfdValue))
require.NoError(t, err)
assert.DeepEqual(t, make([]byte, len(nextEntry.ForkDigest)), nfdValue, "nfd entry should be empty bytes when digests are same")
},
},
{
name: "fulu disabled does not add nfd field",
fuluEnabled: false,
currentEntry: params.NetworkScheduleEntry{
Epoch: 50,
ForkDigest: [4]byte{1, 2, 3, 4},
ForkVersion: [4]byte{1, 0, 0, 0},
},
nextEntry: params.NetworkScheduleEntry{
Epoch: 100,
ForkDigest: [4]byte{5, 6, 7, 8}, // Different from current
ForkVersion: [4]byte{2, 0, 0, 0},
},
validateNFD: func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry) {
var nfdValue []byte
err := localNode.Node().Record().Load(enr.WithEntry(nfdEnrKey, &nfdValue))
require.ErrorContains(t, "missing ENR key", err, "nfd field should not be present when Fulu fork is disabled")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
localNode, cleanup := setupTest(t, tt.fuluEnabled)
defer cleanup()
currentEntry := tt.currentEntry
nextEntry := tt.nextEntry
require.NoError(t, updateENR(localNode, currentEntry, nextEntry))
tt.validateNFD(t, localNode, nextEntry)
})
}
}
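Taken together, the three cases pin down the semantics of the nfd ("next fork digest") ENR entry: it carries the upcoming digest when a digest change is scheduled, it is zeroed out when the next digest equals the current one, and it is omitted entirely on networks where the Fulu fork is disabled.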

View File

@@ -9,27 +9,26 @@ import (
// updates the node's discovery service to reflect any new fork version
// changes.
func (s *Service) forkWatcher() {
// Exit early if discovery is disabled - there's no ENR to update
if s.dv5Listener == nil {
log.Debug("Discovery disabled, exiting fork watcher")
return
}
slotTicker := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
var scheduleEntry params.NetworkScheduleEntry
for {
select {
case currSlot := <-slotTicker.C():
currentEpoch := slots.ToEpoch(currSlot)
newEntry := params.GetNetworkScheduleEntry(currentEpoch)
if newEntry.ForkDigest != scheduleEntry.ForkDigest {
nextEntry := params.NextNetworkScheduleEntry(currentEpoch)
if err := updateENR(s.dv5Listener.LocalNode(), newEntry, nextEntry); err != nil {
log.WithFields(newEntry.LogFields()).WithError(err).Error("Could not add fork entry")
continue // don't replace scheduleEntry until this succeeds
}
scheduleEntry = newEntry
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")

View File

@@ -51,7 +51,7 @@ type (
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastLightClientOptimisticUpdate(ctx context.Context, update interfaces.LightClientOptimisticUpdate) error
BroadcastLightClientFinalityUpdate(ctx context.Context, update interfaces.LightClientFinalityUpdate) error
BroadcastDataColumnSidecar(root [fieldparams.RootLength]byte, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/network/forks"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
)
@@ -39,7 +38,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
}
_, fEpoch, err := params.ForkDataFromDigest(digest)
if err != nil {
// Impossible condition that should
// never be hit.

View File

@@ -7,10 +7,10 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/golang/snappy"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
@@ -18,28 +18,27 @@ import (
func TestMsgID_HashesCorrectly(t *testing.T) {
params.SetupTestConfigCleanup(t)
clock := startup.NewClock(time.Now(), bytesutil.ToBytes32([]byte{'A'}))
valRoot := clock.GenesisValidatorsRoot()
d := params.ForkDigest(clock.CurrentEpoch())
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
invalidSnappy := [32]byte{'J', 'U', 'N', 'K'}
pMsg := &pubsubpb.Message{Data: invalidSnappy[:], Topic: &tpc}
hashedData := hash.Hash(append(params.BeaconConfig().MessageDomainInvalidSnappy[:], pMsg.Data...))
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(valRoot[:], pMsg), "Got incorrect msg id")
validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
nMsg := &pubsubpb.Message{Data: enc, Topic: &tpc}
hashedData = hash.Hash(append(params.BeaconConfig().MessageDomainValidSnappy[:], validObj[:]...))
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(valRoot[:], nMsg), "Got incorrect msg id")
}
func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {
params.SetupTestConfigCleanup(t)
d, err := signing.ComputeForkDigest(params.BeaconConfig().AltairForkVersion, params.BeaconConfig().GenesisValidatorsRoot[:])
assert.NoError(t, err)
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
topicLen := uint64(len(tpc))
@@ -52,7 +51,7 @@ func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {
combinedObj = append(combinedObj, pMsg.Data...)
hashedData := hash.Hash(combinedObj)
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], pMsg), "Got incorrect msg id")
validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
@@ -63,13 +62,12 @@ func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {
combinedObj = append(combinedObj, validObj[:]...)
hashedData = hash.Hash(combinedObj)
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], nMsg), "Got incorrect msg id")
}
func TestMessageIDFunction_HashesCorrectlyBellatrix(t *testing.T) {
params.SetupTestConfigCleanup(t)
d, err := signing.ComputeForkDigest(params.BeaconConfig().BellatrixForkVersion, params.BeaconConfig().GenesisValidatorsRoot[:])
assert.NoError(t, err)
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
topicLen := uint64(len(tpc))
@@ -82,7 +80,7 @@ func TestMessageIDFunction_HashesCorrectlyBellatrix(t *testing.T) {
combinedObj = append(combinedObj, pMsg.Data...)
hashedData := hash.Hash(combinedObj)
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], pMsg), "Got incorrect msg id")
validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
@@ -93,7 +91,7 @@ func TestMessageIDFunction_HashesCorrectlyBellatrix(t *testing.T) {
combinedObj = append(combinedObj, validObj[:]...)
hashedData = hash.Hash(combinedObj)
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], nMsg), "Got incorrect msg id")
}
func TestMsgID_WithNilTopic(t *testing.T) {

View File

@@ -42,7 +42,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 10.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, nil, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
assert.NoError(t, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
_, _, topicMap, err := scorer.GossipData("peer1")
assert.NoError(t, err)
assert.Equal(t, uint64(100), topicMap["a"].TimeInMesh, "incorrect time in mesh")

View File

@@ -40,7 +40,7 @@ const (
rSubD = 8 // random gossip target
)
var errInvalidTopic = errors.New("invalid topic format")
var ErrInvalidTopic = errors.New("invalid topic format")
// Specifies the fixed size context length.
const digestLength = 4
@@ -219,12 +219,12 @@ func convertTopicScores(topicMap map[string]*pubsub.TopicScoreSnapshot) map[stri
func ExtractGossipDigest(topic string) ([4]byte, error) {
// Ensure the topic prefix is correct.
if len(topic) < len(gossipTopicPrefix)+1 || topic[:len(gossipTopicPrefix)] != gossipTopicPrefix {
return [4]byte{}, ErrInvalidTopic
}
start := len(gossipTopicPrefix)
end := strings.Index(topic[start:], "/")
if end == -1 { // Ensure a topic suffix exists.
return [4]byte{}, ErrInvalidTopic
}
end += start
strDigest := topic[start:end]

View File

@@ -1,15 +1,16 @@
package p2p
import (
"encoding/hex"
"fmt"
"strings"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/network/forks"
pubsub "github.com/libp2p/go-libp2p-pubsub"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -32,81 +33,76 @@ var _ pubsub.SubscriptionFilter = (*Service)(nil)
// (Note: BlobSidecar is not included in this list since it is superseded by DataColumnSidecar)
const pubsubSubscriptionRequestLimit = 500
func (s *Service) setAllForkDigests() {
entries := params.SortedNetworkScheduleEntries()
s.allForkDigests = make(map[[4]byte]struct{}, len(entries))
for _, entry := range entries {
s.allForkDigests[entry.ForkDigest] = struct{}{}
}
}
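For illustration, a minimal, self-contained sketch of the set-building pattern above; the local NetworkScheduleEntry type and the entries literal are stand-ins for params.SortedNetworkScheduleEntries(), which this sketch does not depend on.

package main

import "fmt"

// NetworkScheduleEntry is a stand-in for the params type; only ForkDigest matters here.
type NetworkScheduleEntry struct {
    ForkDigest [4]byte
}

func main() {
    entries := []NetworkScheduleEntry{
        {ForkDigest: [4]byte{1, 2, 3, 4}},
        {ForkDigest: [4]byte{5, 6, 7, 8}},
    }
    // Build the digest set once, exactly as setAllForkDigests does.
    allForkDigests := make(map[[4]byte]struct{}, len(entries))
    for _, entry := range entries {
        allForkDigests[entry.ForkDigest] = struct{}{}
    }
    // Topic validation can now do an O(1) membership check per digest.
    _, ok := allForkDigests[[4]byte{1, 2, 3, 4}]
    fmt.Println(ok) // true
}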
var (
errNotReadyToSubscribe = fmt.Errorf("not ready to subscribe, service is not initialized")
errMissingLeadingSlash = fmt.Errorf("topic is missing leading slash")
errTopicMissingProtocolVersion = fmt.Errorf("topic is missing protocol version (eth2)")
errTopicPathWrongPartCount = fmt.Errorf("topic path has wrong part count")
errDigestInvalid = fmt.Errorf("digest is invalid")
errDigestUnexpected = fmt.Errorf("digest is unexpected")
errSnappySuffixMissing = fmt.Errorf("snappy suffix is missing")
errTopicNotFound = fmt.Errorf("topic not found in gossip topic mappings")
)
// CanSubscribe returns true if the topic is of interest and we could subscribe to it.
func (s *Service) CanSubscribe(topic string) bool {
if err := s.checkSubscribable(topic); err != nil {
if !errors.Is(err, errNotReadyToSubscribe) {
logrus.WithError(err).WithField("topic", topic).Debug("CanSubscribe failed")
}
return false
}
return true
}
func (s *Service) checkSubscribable(topic string) error {
if !s.isInitialized() {
return errNotReadyToSubscribe
}
parts := strings.Split(topic, "/")
if len(parts) != 5 {
return errTopicPathWrongPartCount
}
// The topic must start with a slash, which means the first part will be empty.
if parts[0] != "" {
return errMissingLeadingSlash
}
protocol, rawDigest, suffix := parts[1], parts[2], parts[4]
if protocol != "eth2" {
return errTopicMissingProtocolVersion
}
if suffix != encoder.ProtocolSuffixSSZSnappy {
return errSnappySuffixMissing
}
var digest [4]byte
dl, err := hex.Decode(digest[:], []byte(rawDigest))
if err != nil {
return errors.Wrapf(errDigestInvalid, "%v", err)
}
if dl != 4 {
return errors.Wrapf(errDigestInvalid, "wrong byte length")
}
if _, ok := s.allForkDigests[digest]; !ok {
return errDigestUnexpected
}
// Check the incoming topic matches any topic mapping. This includes a check for part[3].
for gt := range gossipTopicMappings {
if _, err := scanfcheck(strings.Join(parts[0:4], "/"), gt); err == nil {
return nil
}
}
return errTopicNotFound
}
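As a quick reference, a self-contained sketch (standard library only; the digest value "6a95a1a9" is a made-up placeholder, not a real fork digest) of how a well-formed gossip topic decomposes under the checks above:

package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

func main() {
    topic := "/eth2/6a95a1a9/beacon_block/ssz_snappy"
    parts := strings.Split(topic, "/")
    fmt.Println(len(parts) == 5)    // true: "", "eth2", digest, message name, encoding suffix
    fmt.Println(parts[0] == "")     // true: the leading slash yields an empty first part
    fmt.Println(parts[1] == "eth2") // true: protocol version
    var digest [4]byte
    n, err := hex.Decode(digest[:], []byte(parts[2]))
    fmt.Println(err == nil && n == 4)     // true: digest is exactly 4 hex-decoded bytes
    fmt.Println(parts[4] == "ssz_snappy") // true: required encoding suffix
}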
// FilterIncomingSubscriptions is invoked for all RPCs containing subscription notifications.
@@ -124,7 +120,22 @@ func (s *Service) FilterIncomingSubscriptions(peerID peer.ID, subs []*pubsubpb.R
return nil, pubsub.ErrTooManySubscriptions
}
return pubsub.FilterSubscriptions(subs, s.logCheckSubscribableError(peerID)), nil
}
func (s *Service) logCheckSubscribableError(pid peer.ID) func(string) bool {
return func(topic string) bool {
if err := s.checkSubscribable(topic); err != nil {
if !errors.Is(err, errNotReadyToSubscribe) {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"topic": topic,
}).Debug("Peer subscription rejected")
}
return false
}
return true
}
}
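Note that errNotReadyToSubscribe is deliberately excluded from logging both in CanSubscribe and in this subscription filter, presumably to avoid emitting a debug line for every subscription that arrives before the service has been initialized.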
// scanfcheck uses fmt.Sscanf to check that a given string matches expected format. This method

View File

@@ -7,12 +7,11 @@ import (
"testing"
"time"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
@@ -21,12 +20,11 @@ import (
func TestService_CanSubscribe(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
validProtocolSuffix := "/" + encoder.ProtocolSuffixSSZSnappy
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
currentFork := params.GetNetworkScheduleEntry(clock.CurrentEpoch()).ForkDigest
digest := params.ForkDigest(clock.CurrentEpoch())
type test struct {
name string
topic string
@@ -108,12 +106,14 @@ func TestService_CanSubscribe(t *testing.T) {
}
tests = append(tests, tt)
}
valRoot := clock.GenesisValidatorsRoot()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Service{
genesisValidatorsRoot: valRoot[:],
genesisTime: clock.GenesisTime(),
}
s.setAllForkDigests()
if got := s.CanSubscribe(tt.topic); got != tt.want {
t.Errorf("CanSubscribe(%s) = %v, want %v", tt.topic, got, tt.want)
}
@@ -219,11 +219,10 @@ func TestGossipTopicMapping_scanfcheck_GossipTopicFormattingSanityCheck(t *testi
func TestService_FilterIncomingSubscriptions(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
digest := params.ForkDigest(clock.CurrentEpoch())
validProtocolSuffix := "/" + encoder.ProtocolSuffixSSZSnappy
type args struct {
id peer.ID
subs []*pubsubpb.RPC_SubOpts
@@ -320,12 +319,14 @@ func TestService_FilterIncomingSubscriptions(t *testing.T) {
},
},
}
valRoot := clock.GenesisValidatorsRoot()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Service{
genesisValidatorsRoot: valRoot[:],
genesisTime: clock.GenesisTime(),
}
s.setAllForkDigests()
got, err := s.FilterIncomingSubscriptions(tt.args.id, tt.args.subs)
if (err != nil) != tt.wantErr {
t.Errorf("FilterIncomingSubscriptions() error = %v, wantErr %v", err, tt.wantErr)
@@ -343,7 +344,7 @@ func TestService_MonitorsStateForkUpdates(t *testing.T) {
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
defer cancel()
cs := startup.NewClockSynchronizer()
s, err := NewService(ctx, &Config{ClockWaiter: cs})
s, err := NewService(ctx, &Config{ClockWaiter: cs, DB: testDB.SetupDB(t)})
require.NoError(t, err)
require.Equal(t, false, s.isInitialized())

View File

@@ -8,6 +8,7 @@ import (
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
@@ -21,6 +22,7 @@ func TestService_PublishToTopicConcurrentMapWrite(t *testing.T) {
s, err := NewService(t.Context(), &Config{
StateNotifier: &mock.MockStateNotifier{},
ClockWaiter: cs,
DB: testDB.SetupDB(t),
})
require.NoError(t, err)
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)

View File

@@ -169,7 +169,7 @@ var (
RPCDataColumnSidecarsByRangeTopicV1: new(pb.DataColumnSidecarsByRangeRequest),
// DataColumnSidecarsByRoot v1 Message
RPCDataColumnSidecarsByRootTopicV1: p2ptypes.DataColumnsByRootIdentifiers{},
}
// Maps all registered protocol prefixes.

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -63,42 +64,42 @@ var (
)
// Service for managing peer to peer (p2p) networking.
type Service struct {
started bool
isPreGenesis bool
pingMethod func(ctx context.Context, id peer.ID) error
pingMethodLock sync.RWMutex
cancel context.CancelFunc
cfg *Config
peers *peers.Status
addrFilter *multiaddr.Filters
ipLimiter *leakybucket.Collector
privKey *ecdsa.PrivateKey
metaData metadata.Metadata
pubsub *pubsub.PubSub
joinedTopics map[string]*pubsub.Topic
joinedTopicsLock sync.RWMutex
subnetsLock map[uint64]*sync.RWMutex
subnetsLockLock sync.Mutex // Lock access to subnetsLock
initializationLock sync.Mutex
dv5Listener ListenerRebooter
startupErr error
ctx context.Context
host host.Host
genesisTime time.Time
genesisValidatorsRoot []byte
activeValidatorCount uint64
peerDisconnectionTime *cache.Cache
custodyInfo *custodyInfo
custodyInfoLock sync.RWMutex // Lock access to custodyInfo
clock *startup.Clock
allForkDigests map[[4]byte]struct{}
}
type custodyInfo struct {
earliestAvailableSlot primitives.Slot
groupCount uint64
}
// NewService initializes a new p2p service compatible with shared.Service interface. No
// connections are made until the Start function is called during the service registry startup.
@@ -112,7 +113,7 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
return nil, errors.Wrapf(err, "failed to generate p2p private key")
}
metaData, err := metaDataFromDB(ctx, cfg.DB)
if err != nil {
log.WithError(err).Error("Failed to create peer metadata")
return nil, err
@@ -202,6 +203,7 @@ func (s *Service) Start() {
// Waits until the state is initialized via an event feed.
// Used for fork-related data when connecting peers.
s.awaitStateInitialized()
s.setAllForkDigests()
s.isPreGenesis = false
var relayNodes []string
@@ -455,7 +457,7 @@ func (s *Service) awaitStateInitialized() {
s.genesisTime = clock.GenesisTime()
gvr := clock.GenesisValidatorsRoot()
s.genesisValidatorsRoot = gvr[:]
_, err = s.currentForkDigest()
if err != nil {
log.WithError(err).Error("Could not initialize fork digest")
}

View File

@@ -9,18 +9,17 @@ import (
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/peer"
@@ -31,48 +30,6 @@ import (
const testPingInterval = 100 * time.Millisecond
func createHost(t *testing.T, port uint) (host.Host, *ecdsa.PrivateKey, net.IP) {
_, pkey := createAddrAndPrivKey(t)
ipAddr := net.ParseIP("127.0.0.1")
@@ -85,17 +42,17 @@ func createHost(t *testing.T, port uint) (host.Host, *ecdsa.PrivateKey, net.IP)
func TestService_Stop_SetsStartedToFalse(t *testing.T) {
params.SetupTestConfigCleanup(t)
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}})
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}, DB: testDB.SetupDB(t)})
require.NoError(t, err)
s.started = true
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
assert.NoError(t, s.Stop())
assert.Equal(t, false, s.started)
}
func TestService_Stop_DontPanicIfDv5ListenerIsNotInited(t *testing.T) {
params.SetupTestConfigCleanup(t)
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}})
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}, DB: testDB.SetupDB(t)})
require.NoError(t, err)
assert.NoError(t, s.Stop())
}
@@ -110,10 +67,11 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
TCPPort: 3000,
QUICPort: 3000,
ClockWaiter: cs,
DB: testDB.SetupDB(t),
}
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
s.custodyInfo = &custodyInfo{}
exitRoutine := make(chan bool)
go func() {
@@ -133,14 +91,14 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
func TestService_Status_NotRunning(t *testing.T) {
params.SetupTestConfigCleanup(t)
s := &Service{started: false}
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
assert.ErrorContains(t, "not running", s.Status(), "Status returned wrong error")
}
func TestService_Status_NoGenesisTimeSet(t *testing.T) {
params.SetupTestConfigCleanup(t)
s := &Service{started: true}
s.dv5Listener = &mockListener{}
s.dv5Listener = testp2p.NewMockListener(nil, nil)
assert.ErrorContains(t, "no genesis time set", s.Status(), "Status returned wrong error")
s.genesisTime = time.Now()
@@ -159,6 +117,7 @@ func TestService_Start_NoDiscoverFlag(t *testing.T) {
StateNotifier: &mock.MockStateNotifier{},
NoDiscovery: true, // <-- no s.dv5Listener is created
ClockWaiter: cs,
DB: testDB.SetupDB(t),
}
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
@@ -194,6 +153,7 @@ func TestListenForNewNodes(t *testing.T) {
)
params.SetupTestConfigCleanup(t)
db := testDB.SetupDB(t)
// Setup bootnode.
cfg := &Config{
@@ -201,6 +161,7 @@ func TestListenForNewNodes(t *testing.T) {
PingInterval: testPingInterval,
DisableLivenessCheck: true,
UDPPort: port,
DB: db,
}
_, pkey := createAddrAndPrivKey(t)
@@ -246,6 +207,7 @@ func TestListenForNewNodes(t *testing.T) {
ClockWaiter: cs,
UDPPort: port + i,
TCPPort: port + i,
DB: db,
}
h, pkey, ipAddr := createHost(t, port+i)
@@ -340,14 +302,16 @@ func TestPeer_Disconnect(t *testing.T) {
func TestService_JoinLeaveTopic(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
defer cancel()
gs := startup.NewClockSynchronizer()
s, err := NewService(ctx, &Config{StateNotifier: &mock.MockStateNotifier{}, ClockWaiter: gs})
s, err := NewService(ctx, &Config{StateNotifier: &mock.MockStateNotifier{}, ClockWaiter: gs, DB: testDB.SetupDB(t)})
require.NoError(t, err)
fd := initializeStateWithForkDigest(ctx, t, gs)
s.setAllForkDigests()
s.awaitStateInitialized()
assert.Equal(t, 0, len(s.joinedTopics))
@@ -376,15 +340,13 @@ func TestService_JoinLeaveTopic(t *testing.T) {
// digest associated with that genesis event.
func initializeStateWithForkDigest(_ context.Context, t *testing.T, gs startup.ClockSetter) [4]byte {
gt := prysmTime.Now()
gvr := bytesutil.ToBytes32(bytesutil.PadTo([]byte("genesis validators root"), 32))
require.NoError(t, gs.SetClock(startup.NewClock(gt, gvr)))
fd, err := forks.CreateForkDigest(gt, gvr[:])
require.NoError(t, err)
gvr := params.BeaconConfig().GenesisValidatorsRoot
clock := startup.NewClock(gt, gvr)
require.NoError(t, gs.SetClock(clock))
time.Sleep(50 * time.Millisecond) // wait for pubsub filter to initialize.
return params.ForkDigest(clock.CurrentEpoch())
}
func TestService_connectWithPeer(t *testing.T) {

View File

@@ -57,6 +57,8 @@ const blobSubnetLockerVal = 110
// chosen more than sync, attestation and blob subnet (6) combined.
const dataColumnSubnetVal = 150
const errSavingSequenceNumber = "saving sequence number after updating subnets: %w"
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, indices map[uint64]int) (func(node *enode.Node) (map[uint64]bool, error), error) {
switch {
@@ -132,6 +134,24 @@ func (s *Service) FindAndDialPeersWithSubnets(
return nil
}
// updateDefectiveSubnets updates the defective subnets map when a node with matching subnets is found.
// It decrements the defective count for each subnet the node satisfies and removes subnets
// that are fully satisfied (count reaches 0).
func updateDefectiveSubnets(
nodeSubnets map[uint64]bool,
defectiveSubnets map[uint64]int,
) {
for subnet := range defectiveSubnets {
if !nodeSubnets[subnet] {
continue
}
defectiveSubnets[subnet]--
if defectiveSubnets[subnet] == 0 {
delete(defectiveSubnets, subnet)
}
}
}
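A standalone sketch of this bookkeeping (the function body is copied verbatim so the example runs in isolation; map values are the number of peers still needed per subnet):

package main

import "fmt"

func updateDefectiveSubnets(nodeSubnets map[uint64]bool, defectiveSubnets map[uint64]int) {
    for subnet := range defectiveSubnets {
        if !nodeSubnets[subnet] {
            continue
        }
        defectiveSubnets[subnet]--
        if defectiveSubnets[subnet] == 0 {
            delete(defectiveSubnets, subnet)
        }
    }
}

func main() {
    defective := map[uint64]int{1: 2, 2: 1} // still need 2 peers on subnet 1 and 1 on subnet 2
    node := map[uint64]bool{1: true, 2: true}
    updateDefectiveSubnets(node, defective)
    fmt.Println(defective) // map[1:1]: subnet 2 is satisfied and deleted
}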
// findPeersWithSubnets finds peers subscribed to defective subnets in batches
// until enough peers are found or the context is canceled.
// It returns new peers found during the search.
@@ -167,6 +187,7 @@ func (s *Service) findPeersWithSubnets(
// Crawl the network for peers subscribed to the defective subnets.
nodeByNodeID := make(map[enode.ID]*enode.Node)
for len(defectiveSubnets) > 0 && iterator.Next() {
if err := ctx.Err(); err != nil {
// Convert the map to a slice.
@@ -178,14 +199,28 @@ func (s *Service) findPeersWithSubnets(
return peersToDial, err
}
node := iterator.Node()
// Remove duplicates, keeping the node with higher seq.
existing, ok := nodeByNodeID[node.ID()]
if ok && existing.Seq() >= node.Seq() {
continue // keep existing and skip.
}
// A node already in nodeByNodeID that reappears with a higher seq number is treated as a new peer from here on.
// Skip peer not matching the filter.
if !s.filterPeer(node) {
if ok {
// The existing entry with the lower sequence number is no longer valid.
delete(nodeByNodeID, existing.ID())
// Note: we choose not to roll back changes to the defective subnets map,
// in favor of calling s.defectiveSubnets again after dialing peers. This
// case should rarely happen and is handled by the second iteration in
// FindAndDialPeersWithSubnets.
}
continue
}
// Get all needed subnets that the node is subscribed to.
// Skip nodes that are not subscribed to any of the defective subnets.
nodeSubnets, err := filter(node)
if err != nil {
return nil, errors.Wrap(err, "filter node")
@@ -194,30 +229,14 @@ func (s *Service) findPeersWithSubnets(
continue
}
// We found a new peer. Modify the defective subnets map
// and the filter accordingly.
nodeByNodeID[node.ID()] = node
updateDefectiveSubnets(nodeSubnets, defectiveSubnets)
filter, err = s.nodeFilter(topicFormat, defectiveSubnets)
if err != nil {
return nil, errors.Wrap(err, "node filter")
}
}
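Note the >= in the sequence-number comparison above: a record whose seq merely equals one already recorded is skipped as well, a case the "duplicate with equal seq" deduplication test later in this change exercises explicitly.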
@@ -377,13 +396,18 @@ func (s *Service) hasPeerWithSubnet(subnetTopic string) bool {
// with a new value for a bitfield of subnets tracked. It also updates
// the node's metadata by increasing the sequence number and the
// subnets tracked by the node.
func (s *Service) updateSubnetRecordWithMetadata(bitV bitfield.Bitvector64) error {
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
s.dv5Listener.LocalNode().Set(entry)
s.metaData = wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: s.metaData.SequenceNumber() + 1,
Attnets: bitV,
})
if err := s.saveSequenceNumberIfNeeded(); err != nil {
return fmt.Errorf(errSavingSequenceNumber, err)
}
return nil
}
// Updates the service's discv5 listener record's attestation subnet
@@ -394,7 +418,7 @@ func (s *Service) updateSubnetRecordWithMetadataV2(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodyGroupCount uint64,
) error {
entry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
subEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
@@ -412,6 +436,11 @@ func (s *Service) updateSubnetRecordWithMetadataV2(
Attnets: bitVAtt,
Syncnets: bitVSync,
})
if err := s.saveSequenceNumberIfNeeded(); err != nil {
return fmt.Errorf(errSavingSequenceNumber, err)
}
return nil
}
// updateSubnetRecordWithMetadataV3 updates:
@@ -423,7 +452,7 @@ func (s *Service) updateSubnetRecordWithMetadataV3(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodyGroupCount uint64,
) error {
attSubnetsEntry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
syncSubnetsEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
custodyGroupCountEntry := enr.WithEntry(custodyGroupCountEnrKey, custodyGroupCount)
@@ -439,6 +468,23 @@ func (s *Service) updateSubnetRecordWithMetadataV3(
Syncnets: bitVSync,
CustodyGroupCount: custodyGroupCount,
})
if err := s.saveSequenceNumberIfNeeded(); err != nil {
return fmt.Errorf(errSavingSequenceNumber, err)
}
return nil
}
// saveSequenceNumberIfNeeded saves the sequence number in DB if either of the following conditions is met:
// - the static peer ID flag is set
// - the fulu epoch is set
func (s *Service) saveSequenceNumberIfNeeded() error {
// Short-circuit if we don't need to save the sequence number.
if !(s.cfg.StaticPeerID || params.FuluEnabled()) {
return nil
}
return s.cfg.DB.SaveMetadataSeqNum(s.ctx, s.metaData.SequenceNumber())
}
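Persisting the sequence number in these two cases should let a restarted node resume from its last metadata seqnum rather than reset to zero; peers use that counter to judge whether a node's metadata is stale, so a reset could leave them holding outdated attnets or custody information.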
func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {

View File

@@ -3,21 +3,26 @@ package p2p
import (
"context"
"crypto/rand"
"encoding/hex"
"fmt"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
ecdsaprysm "github.com/OffchainLabs/prysm/v6/crypto/ecdsa"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/network"
"github.com/prysmaticlabs/go-bitfield"
)
@@ -35,17 +40,8 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// find and connect to a node already subscribed to a specific subnet.
// In our case: The node i is subscribed to subnet i, with i = 1, 2, 3
const subnetCount = 3
const minimumPeersPerSubnet = 1
ctx := t.Context()
// Use shorter period for testing.
@@ -57,6 +53,7 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// Create flags.
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
@@ -73,7 +70,7 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
bootNodeService := &Service{
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000, DisableLivenessCheck: true, PingInterval: testPingInterval},
genesisTime: genesisTime,
genesisValidatorsRoot: params.BeaconConfig().GenesisValidatorsRoot[:],
custodyInfo: &custodyInfo{},
}
@@ -93,6 +90,7 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// Create 3 nodes, each subscribed to a different subnet.
// Each node is connected to the bootstrap node.
services := make([]*Service, 0, subnetCount)
db := testDB.SetupDB(t)
for i := uint64(1); i <= subnetCount; i++ {
service, err := NewService(ctx, &Config{
@@ -103,12 +101,13 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
QUICPort: uint(3000 + i),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
DB: db,
})
require.NoError(t, err)
service.genesisTime = genesisTime
service.genesisValidatorsRoot = params.BeaconConfig().GenesisValidatorsRoot[:]
service.custodyInfo = &custodyInfo{}
nodeForkDigest, err := service.currentForkDigest()
@@ -152,13 +151,14 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
UDPPort: 2010,
TCPPort: 3010,
QUICPort: 3010,
DB: db,
}
service, err := NewService(t.Context(), cfg)
require.NoError(t, err)
service.genesisTime = genesisTime
service.genesisValidatorsRoot = params.BeaconConfig().GenesisValidatorsRoot[:]
service.custodyInfo = &custodyInfo{}
service.Start()
@@ -546,3 +546,552 @@ func TestInitializePersistentSubnets(t *testing.T) {
assert.Equal(t, 2, len(subs))
assert.Equal(t, true, expTime.After(time.Now()))
}
func TestFindPeersWithSubnets_NodeDeduplication(t *testing.T) {
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
localNode1 := createTestNodeWithID(t, "node1")
localNode2 := createTestNodeWithID(t, "node2")
localNode3 := createTestNodeWithID(t, "node3")
// Create different sequence versions of node1 with subnet 1
setNodeSubnets(localNode1, []uint64{1})
setNodeSeq(localNode1, 1)
node1_seq1_subnet1 := localNode1.Node()
setNodeSeq(localNode1, 2)
node1_seq2_subnet1 := localNode1.Node() // Same ID, higher seq
setNodeSeq(localNode1, 3)
node1_seq3_subnet1 := localNode1.Node() // Same ID, even higher seq
// Node2 with different sequences and subnets
setNodeSubnets(localNode2, []uint64{1})
node2_seq1_subnet1 := localNode2.Node()
setNodeSubnets(localNode2, []uint64{2}) // Different subnet
setNodeSeq(localNode2, 2)
node2_seq2_subnet2 := localNode2.Node()
// Node3 with multiple subnets
setNodeSubnets(localNode3, []uint64{1, 2})
node3_seq1_subnet1_2 := localNode3.Node()
tests := []struct {
name string
nodes []*enode.Node
defectiveSubnets map[uint64]int
expectedCount int
description string
eval func(t *testing.T, result []*enode.Node) // Custom validation function
}{
{
name: "No duplicates - unique nodes with same subnet",
nodes: []*enode.Node{
node2_seq1_subnet1,
node3_seq1_subnet1_2,
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should return all unique nodes subscribed to subnet",
eval: nil, // No special validation needed
},
{
name: "Duplicate with lower seq first - should replace",
nodes: []*enode.Node{
node1_seq1_subnet1,
node1_seq2_subnet1, // Higher seq, should replace
node2_seq1_subnet1, // Different node to ensure we process enough nodes
},
defectiveSubnets: map[uint64]int{1: 2}, // Need 2 peers for subnet 1
expectedCount: 2,
description: "Should replace with higher seq node for same subnet",
eval: func(t *testing.T, result []*enode.Node) {
found := false
for _, node := range result {
if node.ID() == node1_seq2_subnet1.ID() && node.Seq() == node1_seq2_subnet1.Seq() {
found = true
break
}
}
require.Equal(t, true, found, "Should have node with higher seq")
},
},
{
name: "Duplicate with higher seq first - should keep existing",
nodes: []*enode.Node{
node1_seq3_subnet1, // Higher seq
node1_seq2_subnet1, // Lower seq, should be skipped (continue branch)
node1_seq1_subnet1, // Even lower seq, should also be skipped (continue branch)
node2_seq1_subnet1, // Different node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should keep existing node with higher seq and skip lower seq duplicates",
eval: func(t *testing.T, result []*enode.Node) {
found := false
for _, node := range result {
if node.ID() == node1_seq3_subnet1.ID() && node.Seq() == node1_seq3_subnet1.Seq() {
found = true
break
}
}
require.Equal(t, true, found, "Should have node with highest seq")
},
},
{
name: "Multiple updates for same node",
nodes: []*enode.Node{
node1_seq1_subnet1,
node1_seq2_subnet1, // Should replace seq1
node1_seq3_subnet1, // Should replace seq2
node2_seq1_subnet1, // Different node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should keep updating to highest seq",
eval: func(t *testing.T, result []*enode.Node) {
found := false
for _, node := range result {
if node.ID() == node1_seq3_subnet1.ID() && node.Seq() == node1_seq3_subnet1.Seq() {
found = true
break
}
}
require.Equal(t, true, found, "Should have node with highest seq")
},
},
{
name: "Duplicate with equal seq in subnets - should skip",
nodes: []*enode.Node{
node1_seq2_subnet1, // First occurrence
node1_seq2_subnet1, // Same exact node instance, should be skipped (continue branch)
node2_seq1_subnet1, // Different node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 2,
description: "Should skip duplicate with equal sequence number in subnet search",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
node1Count := 0
for _, node := range result {
if node.ID() == node1_seq2_subnet1.ID() {
require.Equal(t, node1_seq2_subnet1.Seq(), node.Seq(), "Node1 should have expected seq")
foundNode1 = true
node1Count++
}
if node.ID() == node2_seq1_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, true, foundNode1, "Should have node1")
require.Equal(t, true, foundNode2, "Should have node2")
require.Equal(t, 1, node1Count, "Should have exactly one instance of node1")
},
},
{
name: "Mix with different subnets",
nodes: []*enode.Node{
node2_seq1_subnet1,
node2_seq2_subnet2, // Higher seq but different subnet
node3_seq1_subnet1_2,
},
defectiveSubnets: map[uint64]int{1: 2, 2: 1},
expectedCount: 2, // node2 (latest) and node3
description: "Should handle nodes with different subnet subscriptions",
eval: nil, // Basic count validation is sufficient
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
digest, err := s.currentForkDigest()
require.NoError(t, err)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
defer cancel()
result, err := s.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
tt.defectiveSubnets,
)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
func TestFindPeersWithSubnets_FilterPeerRemoval(t *testing.T) {
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
localNode1 := createTestNodeWithID(t, "node1")
localNode2 := createTestNodeWithID(t, "node2")
localNode3 := createTestNodeWithID(t, "node3")
// Create versions of node1 with subnet 1
setNodeSubnets(localNode1, []uint64{1})
setNodeSeq(localNode1, 1)
node1_seq1_valid_subnet1 := localNode1.Node()
// Create bad version (higher seq)
setNodeSeq(localNode1, 2)
node1_seq2_bad_subnet1 := localNode1.Node()
// Create another valid version
setNodeSeq(localNode1, 3)
node1_seq3_valid_subnet1 := localNode1.Node()
// Node2 with subnet 1
setNodeSubnets(localNode2, []uint64{1})
node2_seq1_valid_subnet1 := localNode2.Node()
// Node3 with subnet 1 and 2
setNodeSubnets(localNode3, []uint64{1, 2})
node3_seq1_valid_subnet1_2 := localNode3.Node()
tests := []struct {
name string
nodes []*enode.Node
defectiveSubnets map[uint64]int
expectedCount int
description string
eval func(t *testing.T, result []*enode.Node)
}{
{
name: "Valid node in subnet followed by bad version - should remove",
nodes: []*enode.Node{
node1_seq1_valid_subnet1, // First add valid node with subnet 1
node1_seq2_bad_subnet1, // Invalid version with higher seq - should delete
node2_seq1_valid_subnet1, // Different valid node with subnet 1
},
defectiveSubnets: map[uint64]int{1: 2}, // Need 2 peers for subnet 1
expectedCount: 1, // Only node2 should remain
description: "Should remove node from map when bad version arrives, even if it has required subnet",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq1_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should have been removed despite having subnet")
require.Equal(t, true, foundNode2, "Node2 should be present")
},
},
{
name: "Bad node with subnet stays bad even with higher seq",
nodes: []*enode.Node{
node1_seq2_bad_subnet1, // First bad node - not added
node1_seq3_valid_subnet1, // Higher seq but same bad peer ID
node2_seq1_valid_subnet1, // Different valid node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 1, // Only node2 (node1 remains bad)
description: "Bad peer with subnet remains bad even with higher seq",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq3_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should remain bad despite having subnet")
require.Equal(t, true, foundNode2, "Node2 should be present")
},
},
{
name: "Mixed valid and bad nodes with subnets",
nodes: []*enode.Node{
node1_seq1_valid_subnet1, // Add valid node1 with subnet
node2_seq1_valid_subnet1, // Add valid node2 with subnet
node1_seq2_bad_subnet1, // Invalid update for node1 - should remove
node3_seq1_valid_subnet1_2, // Add valid node3 with multiple subnets
},
defectiveSubnets: map[uint64]int{1: 3}, // Need 3 peers for subnet 1
expectedCount: 2, // Only node2 and node3 should remain
description: "Should handle removal of nodes with subnets when they become bad",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
foundNode3 := false
for _, node := range result {
if node.ID() == node1_seq1_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
if node.ID() == node3_seq1_valid_subnet1_2.ID() {
foundNode3 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should have been removed")
require.Equal(t, true, foundNode2, "Node2 should be present")
require.Equal(t, true, foundNode3, "Node3 should be present")
},
},
{
name: "Node with subnet marked bad stays bad for all sequences",
nodes: []*enode.Node{
node1_seq1_valid_subnet1, // Add valid node1 with subnet
node1_seq2_bad_subnet1, // Bad update - should remove and mark bad
node1_seq3_valid_subnet1, // Higher seq but still same bad peer ID
node2_seq1_valid_subnet1, // Different valid node
},
defectiveSubnets: map[uint64]int{1: 2},
expectedCount: 1, // Only node2 (node1 stays bad)
description: "Once marked bad, subnet peer stays bad for all sequences",
eval: func(t *testing.T, result []*enode.Node) {
foundNode1 := false
foundNode2 := false
for _, node := range result {
if node.ID() == node1_seq3_valid_subnet1.ID() {
foundNode1 = true
}
if node.ID() == node2_seq1_valid_subnet1.ID() {
foundNode2 = true
}
}
require.Equal(t, false, foundNode1, "Node1 should stay bad")
require.Equal(t, true, foundNode2, "Node2 should be present")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Initialize flags for subnet operations
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
// Create test P2P instance
fakePeer := testp2p.NewTestP2P(t)
// Create mock service
s := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Mark specific node versions as "bad" to simulate filterPeer failures
for _, node := range tt.nodes {
if node == node1_seq2_bad_subnet1 {
// Get peer ID from the node to mark it as bad
peerData, _, _ := convertToAddrInfo(node)
if peerData != nil {
s.peers.Add(node.Record(), peerData.ID, nil, network.DirUnknown)
// Mark as bad peer - use enough increments to exceed the scorer's
// default bad-responses threshold (6) so filterPeer returns false
for i := 0; i < 10; i++ {
s.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
}
}
}
}
localNode := createTestNodeRandom(t)
mockIter := testp2p.NewMockIterator(tt.nodes)
s.dv5Listener = testp2p.NewMockListener(localNode, mockIter)
digest, err := s.currentForkDigest()
require.NoError(t, err)
ctxWithTimeout, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
defer cancel()
result, err := s.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
tt.defectiveSubnets,
)
require.NoError(t, err, tt.description)
require.Equal(t, tt.expectedCount, len(result), tt.description)
if tt.eval != nil {
tt.eval(t, result)
}
})
}
}
// callbackIteratorForSubnets allows us to execute callbacks at specific points during iteration
type callbackIteratorForSubnets struct {
nodes []*enode.Node
index int
callbacks map[int]func() // map from index to callback function
}
func (c *callbackIteratorForSubnets) Next() bool {
// Execute the callback registered for this index, if any, before reporting whether iteration can continue
if callback, exists := c.callbacks[c.index]; exists {
callback()
}
return c.index < len(c.nodes)
}
func (c *callbackIteratorForSubnets) Node() *enode.Node {
if c.index >= len(c.nodes) {
return nil
}
node := c.nodes[c.index]
c.index++
return node
}
func (c *callbackIteratorForSubnets) Close() {
// Nothing to clean up for this simple implementation
}
func TestFindPeersWithSubnets_received_bad_existing_node(t *testing.T) {
// This test triggers delete(nodeByNodeID, node.ID()) in subnets.go by:
// 1. Processing node1_seq1 first (passes filterPeer, gets added to the map).
// 2. Marking the peer as bad via the callback before processing node1_seq2.
// 3. Processing node1_seq2 (fails filterPeer, triggering the delete since ok=true).
params.SetupTestConfigCleanup(t)
cache.SubnetIDs.EmptyAllCaches()
defer cache.SubnetIDs.EmptyAllCaches()
ctx := context.Background()
db := testDB.SetupDB(t)
// Create LocalNode with same ID but different sequences
localNode1 := createTestNodeWithID(t, "testnode")
setNodeSubnets(localNode1, []uint64{1})
node1_seq1 := localNode1.Node() // Get current node
currentSeq := node1_seq1.Seq()
setNodeSeq(localNode1, currentSeq+1) // Increment sequence by 1
node1_seq2 := localNode1.Node() // This should have higher seq
// Additional node to ensure we have enough peers to process
localNode2 := createTestNodeWithID(t, "othernode")
setNodeSubnets(localNode2, []uint64{1})
node2 := localNode2.Node()
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
fakePeer := testp2p.NewTestP2P(t)
service := &Service{
cfg: &Config{
MaxPeers: 30,
DB: db,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
peers: peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
// Create iterator with callback that marks peer as bad before processing node1_seq2
iter := &callbackIteratorForSubnets{
nodes: []*enode.Node{node1_seq1, node1_seq2, node2},
index: 0,
callbacks: map[int]func(){
1: func() { // Before processing node1_seq2 (index 1)
// Mark peer as bad before processing node1_seq2
peerData, _, _ := convertToAddrInfo(node1_seq2)
if peerData != nil {
service.peers.Add(node1_seq2.Record(), peerData.ID, nil, network.DirUnknown)
// Mark as bad peer - need enough increments to exceed threshold (6)
for i := 0; i < 10; i++ {
service.peers.Scorers().BadResponsesScorer().Increment(peerData.ID)
}
}
},
},
}
localNode := createTestNodeRandom(t)
service.dv5Listener = testp2p.NewMockListener(localNode, iter)
digest, err := service.currentForkDigest()
require.NoError(t, err)
// Run findPeersWithSubnets - node1_seq1 gets processed first, then callback marks peer bad, then node1_seq2 fails
ctxWithTimeout, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
result, err := service.findPeersWithSubnets(
ctxWithTimeout,
AttestationSubnetTopicFormat,
digest,
1,
map[uint64]int{1: 2}, // Need 2 peers for subnet 1
)
require.NoError(t, err)
require.Equal(t, 1, len(result))
require.Equal(t, localNode2.Node().ID(), result[0].ID()) // only node2 should remain
}

View File

@@ -7,6 +7,7 @@ go_library(
"fuzz_p2p.go",
"mock_broadcaster.go",
"mock_host.go",
"mock_listener.go",
"mock_metadataprovider.go",
"mock_peermanager.go",
"mock_peersprovider.go",

View File

@@ -167,8 +167,8 @@ func (*FakeP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interfac
return nil
}
// BroadcastDataColumn -- fake.
func (*FakeP2P) BroadcastDataColumn(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
// BroadcastDataColumnSidecar -- fake.
func (*FakeP2P) BroadcastDataColumnSidecar(_ [fieldparams.RootLength]byte, _ uint64, _ *ethpb.DataColumnSidecar) error {
return nil
}

View File

@@ -62,8 +62,8 @@ func (m *MockBroadcaster) BroadcastLightClientFinalityUpdate(_ context.Context,
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumn([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
// BroadcastDataColumnSidecar broadcasts a data column for mock.
func (m *MockBroadcaster) BroadcastDataColumnSidecar([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}

View File

@@ -0,0 +1,128 @@
package testing
import (
"github.com/ethereum/go-ethereum/p2p/enode"
)
// MockListener is a mock implementation of the Listener and ListenerRebooter interfaces
// that can be used in tests. It provides configurable behavior for all methods.
type MockListener struct {
LocalNodeFunc func() *enode.LocalNode
SelfFunc func() *enode.Node
RandomNodesFunc func() enode.Iterator
LookupFunc func(enode.ID) []*enode.Node
ResolveFunc func(*enode.Node) *enode.Node
PingFunc func(*enode.Node) error
RequestENRFunc func(*enode.Node) (*enode.Node, error)
RebootFunc func() error
CloseFunc func()
// Default implementations
localNode *enode.LocalNode
iterator enode.Iterator
}
// NewMockListener creates a new MockListener with default implementations
func NewMockListener(localNode *enode.LocalNode, iterator enode.Iterator) *MockListener {
return &MockListener{
localNode: localNode,
iterator: iterator,
}
}
func (m *MockListener) LocalNode() *enode.LocalNode {
if m.LocalNodeFunc != nil {
return m.LocalNodeFunc()
}
return m.localNode
}
func (m *MockListener) Self() *enode.Node {
if m.SelfFunc != nil {
return m.SelfFunc()
}
if m.localNode != nil {
return m.localNode.Node()
}
return nil
}
func (m *MockListener) RandomNodes() enode.Iterator {
if m.RandomNodesFunc != nil {
return m.RandomNodesFunc()
}
return m.iterator
}
func (m *MockListener) Lookup(id enode.ID) []*enode.Node {
if m.LookupFunc != nil {
return m.LookupFunc(id)
}
return nil
}
func (m *MockListener) Resolve(node *enode.Node) *enode.Node {
if m.ResolveFunc != nil {
return m.ResolveFunc(node)
}
return nil
}
func (m *MockListener) Ping(node *enode.Node) error {
if m.PingFunc != nil {
return m.PingFunc(node)
}
return nil
}
func (m *MockListener) RequestENR(node *enode.Node) (*enode.Node, error) {
if m.RequestENRFunc != nil {
return m.RequestENRFunc(node)
}
return nil, nil
}
func (m *MockListener) RebootListener() error {
if m.RebootFunc != nil {
return m.RebootFunc()
}
return nil
}
func (m *MockListener) Close() {
if m.CloseFunc != nil {
m.CloseFunc()
}
}
// MockIterator is a mock implementation of enode.Iterator for testing
type MockIterator struct {
Nodes []*enode.Node
Position int
Closed bool
}
func NewMockIterator(nodes []*enode.Node) *MockIterator {
return &MockIterator{
Nodes: nodes,
}
}
func (m *MockIterator) Next() bool {
if m.Closed || m.Position >= len(m.Nodes) {
return false
}
m.Position++
return true
}
func (m *MockIterator) Node() *enode.Node {
if m.Position == 0 || m.Position > len(m.Nodes) {
return nil
}
return m.Nodes[m.Position-1]
}
func (m *MockIterator) Close() {
m.Closed = true
}
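The Func fields let a test override a single method while keeping the defaults for everything else. A minimal sketch of that pattern (the helper name and the simulated failure are illustrative, not part of the mock's API):
// Sketch (illustrative): wire a MockListener whose Ping always fails, to
// exercise error paths without a real discv5 listener. All other methods
// fall back to the defaults captured by NewMockListener.
func newFailingPingListener(localNode *enode.LocalNode, nodes []*enode.Node) *MockListener {
l := NewMockListener(localNode, NewMockIterator(nodes))
l.PingFunc = func(*enode.Node) error {
return errors.New("simulated ping failure") // needs the standard "errors" import
}
return l
}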

View File

@@ -228,8 +228,8 @@ func (p *TestP2P) BroadcastLightClientFinalityUpdate(_ context.Context, _ interf
return nil
}
// BroadcastDataColumn broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumn([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
// BroadcastDataColumnSidecar broadcasts a data column for mock.
func (p *TestP2P) BroadcastDataColumnSidecar([fieldparams.RootLength]byte, uint64, *ethpb.DataColumnSidecar) error {
p.BroadcastCalled.Store(true)
return nil
}

View File

@@ -2,6 +2,7 @@ package p2p
import (
"bytes"
"context"
"crypto/ecdsa"
"crypto/rand"
"encoding/base64"
@@ -12,6 +13,8 @@ import (
"path"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
ecdsaprysm "github.com/OffchainLabs/prysm/v6/crypto/ecdsa"
@@ -27,11 +30,9 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/sirupsen/logrus"
"google.golang.org/protobuf/proto"
)
const keyPath = "network-keys"
const metaDataPath = "metaData"
const dialTimeout = 1 * time.Second
@@ -121,45 +122,24 @@ func privKeyFromFile(path string) (*ecdsa.PrivateKey, error) {
return ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledKey)
}
// Retrieves node p2p metadata from a set of configuration values
// from the p2p service.
// TODO: Figure out how to do a v1/v2 check.
func metaDataFromConfig(cfg *Config) (metadata.Metadata, error) {
defaultKeyPath := path.Join(cfg.DataDir, metaDataPath)
metaDataPath := cfg.MetaDataDir
// Retrieves metadata sequence number from DB and returns a Metadata(V0) object
func metaDataFromDB(ctx context.Context, db db.ReadOnlyDatabaseWithSeqNum) (metadata.Metadata, error) {
seqNum, err := db.MetadataSeqNum(ctx)
// We can proceed if error is `kv.ErrNotFoundMetadataSeqNum` by using default value of 0 for sequence number.
if err != nil && !errors.Is(err, kv.ErrNotFoundMetadataSeqNum) {
return nil, err
}
_, err := os.Stat(defaultKeyPath)
defaultMetadataExist := !os.IsNotExist(err)
if err != nil && defaultMetadataExist {
return nil, err
}
if metaDataPath == "" && !defaultMetadataExist {
metaData := &pb.MetaDataV0{
SeqNumber: 0,
Attnets: bitfield.NewBitvector64(),
}
dst, err := proto.Marshal(metaData)
if err != nil {
return nil, err
}
if err := file.WriteFile(defaultKeyPath, dst); err != nil {
return nil, err
}
return wrapper.WrappedMetadataV0(metaData), nil
}
if defaultMetadataExist && metaDataPath == "" {
metaDataPath = defaultKeyPath
}
src, err := os.ReadFile(metaDataPath) // #nosec G304
if err != nil {
log.WithError(err).Error("Error reading metadata from file")
return nil, err
}
metaData := &pb.MetaDataV0{}
if err := proto.Unmarshal(src, metaData); err != nil {
return nil, err
}
return wrapper.WrappedMetadataV0(metaData), nil
// NOTE: Load V0 metadata because:
// - The p2p service accesses metadata through an interface that all versions implement,
// so calling fields of higher versions is not an error; they simply return default values.
// - This avoids unnecessary code changes when the metadata version is bumped.
// - `RefreshPersistentSubnets` runs twice every slot and manages updating and saving metadata.
// Use `md` rather than `metadata` to avoid shadowing the metadata package.
md := wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNum,
Attnets: bitfield.NewBitvector64(),
})
return md, nil
}
// Attempt to dial an address to verify its connectivity
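The NOTE's interface argument is easy to see in isolation: a V0 wrapper satisfies the shared accessors, so callers never depend on the concrete version. A minimal sketch using only identifiers from the hunk above (the standalone function itself is illustrative, not repository code):
// Sketch (illustrative): a V0-wrapped object behaves like any other version
// behind the metadata.Metadata interface. Shared accessors work as expected,
// and, per the NOTE, version-specific accessors on newer versions would just
// return their default values.
func exampleMetadataUse(seqNum uint64) uint64 {
md := wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNum,
Attnets: bitfield.NewBitvector64(),
})
return md.SequenceNumber() // == seqNum
}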

View File

@@ -1,8 +1,10 @@
package p2p
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -80,3 +82,27 @@ func TestConvertPeerIDToNodeID(t *testing.T) {
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}
func TestMetadataFromDB(t *testing.T) {
params.SetupTestConfigCleanup(t)
t.Run("Metadata from DB", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
err := beaconDB.SaveMetadataSeqNum(t.Context(), 42)
require.NoError(t, err)
metaData, err := metaDataFromDB(context.Background(), beaconDB)
require.NoError(t, err)
assert.Equal(t, uint64(42), metaData.SequenceNumber())
})
t.Run("Use default sequence number (=0) as Metadata not found on DB", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
metaData, err := metaDataFromDB(context.Background(), beaconDB)
require.NoError(t, err)
assert.Equal(t, uint64(0), metaData.SequenceNumber())
})
}

View File

@@ -72,6 +72,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"handlers_equivocation_test.go",
"handlers_pool_test.go",
"handlers_state_test.go",
"handlers_test.go",

View File

@@ -701,7 +701,7 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
// Validate and optionally broadcast sidecars on equivocation.
if err := s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
b, err := blocks.NewSignedBeaconBlock(genericBlock.Block)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
@@ -855,7 +855,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
// Validate and optionally broadcast sidecars on equivocation.
if err := s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
b, err := blocks.NewSignedBeaconBlock(genericBlock.Block)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
@@ -1008,9 +1008,29 @@ func (s *Server) validateConsensus(ctx context.Context, b *eth.GenericSignedBeac
}
parentStateRoot := parentBlock.Block().StateRoot()
parentState, err := s.Stater.State(ctx, parentStateRoot[:])
if err != nil {
return errors.Wrap(err, "could not get parent state")
// Check if the state is already cached
parentState := transition.NextSlotState(parentBlockRoot[:], blk.Block().Slot())
if parentState == nil {
// The state is not advanced in the NSC, check first if the parent post-state is head
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get head root")
}
if bytes.Equal(headRoot, parentBlockRoot[:]) {
parentState, err = s.HeadFetcher.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
parentState, err = transition.ProcessSlots(ctx, parentState, blk.Block().Slot())
if err != nil {
return errors.Wrap(err, "could not process slots to get parent state")
}
} else {
parentState, err = s.Stater.State(ctx, parentStateRoot[:])
if err != nil {
return errors.Wrap(err, "could not get parent state")
}
}
}
_, err = transition.ExecuteStateTransition(ctx, parentState, blk)
if err != nil {
@@ -1058,6 +1078,10 @@ func (s *Server) validateBlobSidecars(blk interfaces.SignedBeaconBlock, blobs []
if len(blobs) != len(proofs) || len(blobs) != len(kzgs) {
return errors.New("number of blobs, proofs, and commitments do not match")
}
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(blk.Block().Slot())
if len(blobs) > maxBlobsPerBlock {
return fmt.Errorf("number of blobs over max, %d > %d", len(blobs), maxBlobsPerBlock)
}
for i, blob := range blobs {
b := kzg4844.Blob(blob)
if err := kzg4844.VerifyBlobProof(&b, kzg4844.Commitment(kzgs[i]), kzg4844.Proof(proofs[i])); err != nil {
@@ -1200,7 +1224,7 @@ func (s *Server) GetStateFork(w http.ResponseWriter, r *http.Request) {
fork := st.Fork()
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status"+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1311,7 +1335,7 @@ func (s *Server) GetCommittees(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
@@ -1492,7 +1516,7 @@ func (s *Server) GetFinalityCheckpoints(w http.ResponseWriter, r *http.Request)
}
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1666,7 +1690,7 @@ func (s *Server) GetPendingConsolidations(w http.ResponseWriter, r *http.Request
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1722,7 +1746,7 @@ func (s *Server) GetPendingDeposits(w http.ResponseWriter, r *http.Request) {
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1778,7 +1802,7 @@ func (s *Server) GetPendingPartialWithdrawals(w http.ResponseWriter, r *http.Req
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -1831,7 +1855,7 @@ func (s *Server) GetProposerLookahead(w http.ResponseWriter, r *http.Request) {
} else {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
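The validateConsensus change earlier in this file resolves the parent state in three tiers: the next-slot cache, then the head state advanced with ProcessSlots, then the stater. A condensed restatement of that order (a sketch with the surrounding handler code trimmed; the helper name is not in the repository):
// Sketch (illustrative): the parent-state lookup order from the
// validateConsensus hunk above, extracted into a standalone helper.
func parentStateFor(ctx context.Context, s *Server, parentRoot [32]byte, parentStateRoot []byte, slot primitives.Slot) (state.BeaconState, error) {
// 1. Next-slot cache: the state may already be advanced to the block's slot.
if st := transition.NextSlotState(parentRoot[:], slot); st != nil {
return st, nil
}
// 2. If the parent is the head block, advance the head state.
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head root")
}
if bytes.Equal(headRoot, parentRoot[:]) {
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}
return transition.ProcessSlots(ctx, st, slot)
}
// 3. Fallback: regenerate the parent state via the stater.
return s.Stater.State(ctx, parentStateRoot)
}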

View File

@@ -0,0 +1,35 @@
package beacon
import (
"encoding/json"
"testing"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
rpctesting "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
// TestBlocks_NewSignedBeaconBlock_EquivocationFix tests that blocks.NewSignedBeaconBlock
// correctly handles the fixed case where genericBlock.Block is passed instead of genericBlock
func TestBlocks_NewSignedBeaconBlock_EquivocationFix(t *testing.T) {
// Parse the Phase0 JSON block
var block structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &block)
require.NoError(t, err)
// Convert to generic format
genericBlock, err := block.ToGeneric()
require.NoError(t, err)
// Test the FIX: pass genericBlock.Block instead of genericBlock
// This is what our fix changed in handlers.go lines 704 and 858
_, err = blocks.NewSignedBeaconBlock(genericBlock.Block)
require.NoError(t, err, "NewSignedBeaconBlock should work with genericBlock.Block")
// Test the BROKEN version: pass genericBlock directly (this should fail)
_, err = blocks.NewSignedBeaconBlock(genericBlock)
require.NotNil(t, err, "NewSignedBeaconBlock should fail when passed the whole genericBlock")
}

View File

@@ -56,7 +56,7 @@ func (s *Server) GetStateRoot(w http.ResponseWriter, r *http.Request) {
}
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
blockRoot, err := st.LatestBlockHeader().HashTreeRoot()
@@ -125,7 +125,7 @@ func (s *Server) GetRandao(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}
@@ -227,7 +227,7 @@ func (s *Server) GetSyncCommittees(w http.ResponseWriter, r *http.Request) {
isOptimistic, err := helpers.IsOptimistic(ctx, []byte(stateId), s.OptimisticModeFetcher, s.Stater, s.ChainInfoFetcher, s.BeaconDB)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
helpers.HandleIsOptimisticError(w, err)
return
}

View File

@@ -3504,9 +3504,14 @@ func TestValidateConsensus(t *testing.T) {
require.NoError(t, err)
parentRoot, err := parentSbb.Block().HashTreeRoot()
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
State: parentState,
Root: parentRoot[:],
}
server := &Server{
Blocker: &testutil.MockBlocker{RootBlockMap: map[[32]byte]interfaces.ReadOnlySignedBeaconBlock{parentRoot: parentSbb}},
Stater: &testutil.MockStater{StatesByRoot: map[[32]byte]state.BeaconState{bytesutil.ToBytes32(parentBlock.Block.StateRoot): parentState}},
Blocker: &testutil.MockBlocker{RootBlockMap: map[[32]byte]interfaces.ReadOnlySignedBeaconBlock{parentRoot: parentSbb}},
Stater: &testutil.MockStater{StatesByRoot: map[[32]byte]state.BeaconState{bytesutil.ToBytes32(parentBlock.Block.StateRoot): parentState}},
HeadFetcher: mockChainService,
}
require.NoError(t, server.validateConsensus(ctx, &eth.GenericSignedBeaconBlock{
@@ -4795,6 +4800,222 @@ func Test_validateBlobSidecars(t *testing.T) {
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.ErrorContains(t, "could not verify blob proof: can't verify opening proof", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
blobs := [][]byte{}
commitments := [][]byte{}
proofs := [][]byte{}
for i := 0; i < 10; i++ {
blobs = append(blobs, blob[:])
commitments = append(commitments, commitment[:])
proofs = append(proofs, proof[:])
}
t.Run("pre-Deneb block should return early", func(t *testing.T) {
// Create a pre-Deneb block (e.g., Capella)
blk := util.NewBeaconBlockCapella()
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should return nil for pre-Deneb blocks regardless of blobs
require.NoError(t, s.validateBlobSidecars(b, [][]byte{}, [][]byte{}))
require.NoError(t, s.validateBlobSidecars(b, blobs[:1], proofs[:1]))
})
t.Run("Deneb block with valid single blob", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Body.BlobKzgCommitments = [][]byte{commitment[:]}
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
require.NoError(t, s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
})
t.Run("Deneb block with max blobs (6)", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 100
testCfg.DeprecatedMaxBlobsPerBlock = 6
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 10 // Deneb slot
blk.Block.Body.BlobKzgCommitments = commitments[:6]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should pass with exactly 6 blobs
require.NoError(t, s.validateBlobSidecars(b, blobs[:6], proofs[:6]))
})
t.Run("Deneb block exceeding max blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 100
testCfg.DeprecatedMaxBlobsPerBlock = 6
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 10 // Deneb slot
blk.Block.Body.BlobKzgCommitments = commitments[:7]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should fail with 7 blobs when max is 6
err = s.validateBlobSidecars(b, blobs[:7], proofs[:7])
require.ErrorContains(t, "number of blobs over max, 7 > 6", err)
})
t.Run("Electra block with valid blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
// Set up Electra config with max 9 blobs
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockElectra()
blk.Block.Slot = 160 // Electra slot (epoch 5+)
blk.Block.Body.BlobKzgCommitments = commitments[:9]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should pass with 9 blobs in Electra
require.NoError(t, s.validateBlobSidecars(b, blobs[:9], proofs[:9]))
})
t.Run("Electra block exceeding max blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
// Set up Electra config with max 9 blobs
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockElectra()
blk.Block.Slot = 160 // Electra slot
blk.Block.Body.BlobKzgCommitments = commitments[:10]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should fail with 10 blobs when max is 9
err = s.validateBlobSidecars(b, blobs[:10], proofs[:10])
require.ErrorContains(t, "number of blobs over max, 10 > 9", err)
})
t.Run("Fulu block with valid blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.FuluForkEpoch = 10
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockFulu()
blk.Block.Slot = 320 // Fulu slot (epoch 10+)
blk.Block.Body.BlobKzgCommitments = commitments[:9]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should pass with 9 blobs in Fulu
require.NoError(t, s.validateBlobSidecars(b, blobs[:9], proofs[:9]))
})
t.Run("Fulu block exceeding max blobs", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 5
testCfg.FuluForkEpoch = 10
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
params.OverrideBeaconConfig(testCfg)
blk := util.NewBeaconBlockFulu()
blk.Block.Slot = 320 // Fulu slot (epoch 10+)
blk.Block.Body.BlobKzgCommitments = commitments[:10]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
// Should fail with 10 blobs when max is 9
err = s.validateBlobSidecars(b, blobs[:10], proofs[:10])
require.ErrorContains(t, "number of blobs over max, 10 > 9", err)
})
t.Run("BlobSchedule with progressive increases (BPO)", func(t *testing.T) {
cfg := params.BeaconConfig().Copy()
defer params.OverrideBeaconConfig(cfg)
// Set up config with BlobSchedule (BPO: blob-parameter-only fork schedule)
testCfg := params.BeaconConfig().Copy()
testCfg.DenebForkEpoch = 0
testCfg.ElectraForkEpoch = 100
testCfg.FuluForkEpoch = 200
testCfg.DeprecatedMaxBlobsPerBlock = 6
testCfg.DeprecatedMaxBlobsPerBlockElectra = 9
// Define blob schedule with progressive increases
testCfg.BlobSchedule = []params.BlobScheduleEntry{
{Epoch: 0, MaxBlobsPerBlock: 3}, // Start with 3 blobs
{Epoch: 10, MaxBlobsPerBlock: 5}, // Increase to 5 at epoch 10
{Epoch: 20, MaxBlobsPerBlock: 7}, // Increase to 7 at epoch 20
{Epoch: 30, MaxBlobsPerBlock: 9}, // Increase to 9 at epoch 30
}
params.OverrideBeaconConfig(testCfg)
s := &Server{}
// Test epoch 0-9: max 3 blobs
t.Run("epoch 0-9: max 3 blobs", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 5 // Epoch 0
blk.Block.Body.BlobKzgCommitments = commitments[:3]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, s.validateBlobSidecars(b, blobs[:3], proofs[:3]))
// Should fail with 4 blobs
blk.Block.Body.BlobKzgCommitments = commitments[:4]
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = s.validateBlobSidecars(b, blobs[:4], proofs[:4])
require.ErrorContains(t, "number of blobs over max, 4 > 3", err)
})
// Test epoch 30+: max 9 blobs
t.Run("epoch 30+: max 9 blobs", func(t *testing.T) {
blk := util.NewBeaconBlockDeneb()
blk.Block.Slot = 960 // Epoch 30
blk.Block.Body.BlobKzgCommitments = commitments[:9]
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, s.validateBlobSidecars(b, blobs[:9], proofs[:9]))
// Should fail with 10 blobs
blk.Block.Body.BlobKzgCommitments = commitments[:10]
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
err = s.validateBlobSidecars(b, blobs[:10], proofs[:10])
require.ErrorContains(t, "number of blobs over max, 10 > 9", err)
})
})
}
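The schedule behavior these subtests pin down reduces to: take the last entry whose activation epoch does not exceed the block's epoch, falling back to the pre-schedule limit. A self-contained sketch with illustrative local types (not Prysm's MaxBlobsPerBlock implementation):
// Sketch (illustrative types, not Prysm's): resolve a blob limit from a
// BPO-style schedule. Entries are assumed sorted by ascending Epoch.
type blobScheduleEntry struct {
Epoch uint64
MaxBlobsPerBlock int
}

func maxBlobsAt(epoch uint64, schedule []blobScheduleEntry, preScheduleDefault int) int {
limit := preScheduleDefault
for _, e := range schedule {
if e.Epoch > epoch {
break
}
limit = e.MaxBlobsPerBlock
}
return limit
}

// For the schedule in the test above ({0:3, 10:5, 20:7, 30:9}):
// epoch 5 -> 3, epoch 12 -> 5, epoch 31 -> 9.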
func TestGetPendingConsolidations(t *testing.T) {

Some files were not shown because too many files have changed in this diff