Compare commits

...

25 Commits

Author SHA1 Message Date
Preston Van Loon
57d44bd33d Add tests for fulu NFD key 2025-08-11 11:48:30 -05:00
Kasey Kirkham
ff2994c24c fusaka fork digest enr changes 2025-08-11 11:18:11 -05:00
kasey
3da40ecd9c Refactor fork schedules (#15490)
* overhaul fork schedule management for BPOs

* Unify log

* Radek's comments

* Use arg config to determine previous epoch, with regression test

* Remove unnecessary NewClock. @potuz feedback

* Continuation of previous commit: Remove unnecessary NewClock. @potuz feedback

* Remove VerifyBlockHeaderSignatureUsingCurrentFork

* cosmetic changes

* Remove unnecessary copy. entryWithForkDigest passes by value, not by pointer, so it should be fine

* Reuse ErrInvalidTopic from p2p package

* Unskip TestServer_GetBeaconConfig

* Resolve TODO about forkwatcher in local mode

* remove Copy()

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-11 16:08:53 +00:00
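The refactor above moves fork-schedule management into the config layer; in the signature.go diff below, VerifyBlockSignatureUsingCurrentFork switches from forks.Fork to params.Fork. A minimal sketch of the new call shape, inferred only from the changed lines shown further down (the full params.Fork signature is not part of this diff):

// Sketch: resolve the fork for a block's epoch from the params package,
// mirroring the VerifyBlockSignatureUsingCurrentFork change below.
currentEpoch := slots.ToEpoch(blk.Block().Slot())
fork, err := params.Fork(currentEpoch) // was: forks.Fork(currentEpoch)
if err != nil {
	return err
}
_ = fork // the fork's version and epoch feed the signing-domain computation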
terence
f7f992c256 gofmt: fix formatting issues in test files (#15577) 2025-08-11 15:56:53 +00:00
terence
09565e0c3a Refactor attestation cache key generation outside critical section (#15572)
* Refactor attestation cache key generation outside critical section

* Improve attestation cache key error handling and logging
2025-08-10 17:41:30 +00:00
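The commit above applies a standard contention fix: derive the cache key before acquiring the lock, so the critical section covers only the map access. A self-contained sketch of the pattern, with illustrative names (attestation, attCache) rather than Prysm's actual types:

package main

import (
	"crypto/sha256"
	"sync"
)

type attestation struct{ data []byte } // stand-in for the real attestation type

type attCache struct {
	mu    sync.Mutex
	items map[[32]byte]*attestation
}

func newAttCache() *attCache {
	return &attCache{items: make(map[[32]byte]*attestation)}
}

// insert computes the key outside the critical section, so hashing (and
// any error handling it needs) never runs while the lock is held.
func (c *attCache) insert(att *attestation) {
	key := sha256.Sum256(att.data) // key generation outside the lock
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = att
}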
Jun Song
05a3736310 refactor: removing redundant code in htrutils.go (#15453)
* refactor: use auto-generated HashTreeRoot functions in htrutils.go

* refactor: use type alias for Transaction & use SliceRoot for TransactionsRoot

* changelog

* fix: TransactionsRoot receives raw 2d bytes as an argument

* fix: handle nil argument

* test: add nil test for fork and checkpoint

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-10 10:44:01 +00:00
kasey
84c8653a52 initialize genesis data asap at node start (#15470)
* initialize genesis data asap at node start

* add genesis validation tests with embedded state verification

* Add test for hardcoded mainnet genesis validator root and time from init() function

* Add test for UnmarshalState in encoding/ssz/detect/configfork.go

* Add tests for genesis.Initialize

* Move genesis/embedded to genesis/internal/embedded

* Gazelle / BUILD fix

* James feedback

* Fix lint

* Revert lock

---------

Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-08-10 02:09:40 +00:00
Tomás Andróil
921ff23c6b refactor: use central flag name for validator HTTP server port (#15236)
* Update validator.go

* Create tomasandroil_replace_grpc_gateway_flag_name.md
2025-08-09 10:35:04 +00:00
Radosław Kapka
87235fb384 Don't submit duplicate aggregated SignedContributionAndProof messages (#15571) 2025-08-08 22:57:36 +00:00
james-prysm
2ec5914b4a fixing builder version check (#15568)
* adding fix

* fixing test
2025-08-08 01:42:55 +00:00
Potuz
fe000e5629 Fix validateConsensus (#15548)
* Fix validateConsensus

Reported by NuConstruct

The stater package looks up a state by state root using the head state from the
blockchain package. However, this state is very unlikely to contain the
post-state root, since that root is only added after slot processing. I assume
that essentially any REST endpoint that uses this mechanism to get the head is
broken if it needs to fetch a state by state root.

This PR is a placeholder to verify that this is the issue; here I just check
whether the NSC already has the post-state, since it will already have the
processed state cached.

* Add changelog

* add fallback

* Fix tests
2025-08-07 13:08:40 +00:00
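A rough sketch of the lookup order the commit message describes: consult the cache that already holds the post-state before falling back to the slower state-by-root path. All names below (beaconState, postStateCache, regenerateStateByRoot) are hypothetical stand-ins; the actual stater/NSC types are not part of this diff:

type beaconState struct{} // stand-in for the real state type

var postStateCache = map[[32]byte]*beaconState{} // hypothetical NSC-style cache

// regenerateStateByRoot is a hypothetical stand-in for the slow stategen path.
func regenerateStateByRoot(root [32]byte) (*beaconState, error) {
	return &beaconState{}, nil
}

// fetchStateByRoot prefers a cached post-state, because the head state
// rarely contains a post-state root before slot processing completes.
func fetchStateByRoot(root [32]byte) (*beaconState, error) {
	if st, ok := postStateCache[root]; ok {
		return st, nil
	}
	return regenerateStateByRoot(root) // fallback added by the PR
}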
Preston Van Loon
447a3d8add Update go to v1.24.6 (#15566) 2025-08-06 20:59:26 +00:00
Jun Song
b00aaef202 Persist metadata sequence number using Beacon DB (#15554)
* Add entry for sequence number in chain-metadata bucket & Basic getter/setter

* Mark p2p-metadata flag as deprecated

* Fix metaDataFromConfig: use DB instead to get seqnum

* Save sequence number after updating the metadata

* Fix beacon-chain/p2p unit tests: add DB in config

* Add changelog

* Add ReadOnlyDatabaseWithSeqNum

* Code suggestion from Manu

* Remove seqnum getter at interface

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-06 20:18:33 +00:00
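A minimal sketch of persisting a sequence number in a chain-metadata bucket with a bbolt-style store (Prysm's beacon DB is bolt-backed); the bucket and key names here are illustrative, not the actual schema:

package main

import (
	"encoding/binary"

	bolt "go.etcd.io/bbolt"
)

var (
	chainMetadataBucket = []byte("chain-metadata")  // illustrative name
	seqNumberKey        = []byte("sequence-number") // illustrative name
)

// saveSequenceNumber stores the metadata sequence number after each update.
func saveSequenceNumber(db *bolt.DB, seq uint64) error {
	return db.Update(func(tx *bolt.Tx) error {
		bkt, err := tx.CreateBucketIfNotExists(chainMetadataBucket)
		if err != nil {
			return err
		}
		buf := make([]byte, 8)
		binary.BigEndian.PutUint64(buf, seq)
		return bkt.Put(seqNumberKey, buf)
	})
}

// sequenceNumber reads the stored value, defaulting to zero when unset.
func sequenceNumber(db *bolt.DB) (uint64, error) {
	var seq uint64
	err := db.View(func(tx *bolt.Tx) error {
		if bkt := tx.Bucket(chainMetadataBucket); bkt != nil {
			if v := bkt.Get(seqNumberKey); len(v) == 8 {
				seq = binary.BigEndian.Uint64(v)
			}
		}
		return nil
	})
	return seq, err
}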
Potuz
0f6070a866 Fix race on ReceiveBlock (#15565)
* Fix race on ReceiveBlock

If two `ReceiveBlock` routines are triggered with the same block (which can
happen when one routine is triggered over gossip and the other during
init-sync), the second routine may believe it is syncing the block for the
first time. This is because the `blockBeingSynced` cache does not check
whether the root is already set, and the first routine may not yet have put
the block into forkchoice.

In the normal case this would not cause any trouble, as the second forkchoice
insertion is a no-op by design. However, if the second routine times out or
hits any processing error (for example, the engine returns an error if we try
to send an FCU for an older head), it will attempt to remove the inserted
block from forkchoice. This bricks the node: forkchoice refuses to remove a
valid node, the root is removed unconditionally from the db, and the node ends
up with a root that is not in the db but remains in forkchoice.

This PR just prevents the race.

As a follow-up, perhaps we can gate the db rollback function to nodes that are
effectively not in forkchoice, or alternatively force removal from forkchoice
when rolling back from the db (although that version is complicated by
possible accounting issues in forkchoice).

* Fix lint
2025-08-06 17:18:13 +00:00
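Condensed from the currentlySyncingBlock diff further down, the guard now claims the root under the lock and reports a conflict, instead of letting a second routine proceed as if it were first:

package main

import (
	"errors"
	"sync"
)

var errBlockBeingSynced = errors.New("block is being synced")

type currentlySyncingBlock struct {
	sync.Mutex
	roots map[[32]byte]struct{}
}

// set claims a block root for syncing; a second caller for the same root
// gets errBlockBeingSynced instead of racing the first routine.
func (b *currentlySyncingBlock) set(root [32]byte) error {
	b.Lock()
	defer b.Unlock()
	if _, ok := b.roots[root]; ok {
		return errBlockBeingSynced
	}
	b.roots[root] = struct{}{}
	return nil
}

func (b *currentlySyncingBlock) unset(root [32]byte) {
	b.Lock()
	defer b.Unlock()
	delete(b.roots, root)
}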
Justin Traglia
2a09c9f681 Fix BlobsBundleV2 proofs limit (#15530)
* Assign max_cell_proofs_length the correct value

* Add changelog fragment

* Run update-go-pbs.sh

* Run update-go-ssz.sh

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-05 21:33:45 +00:00
Jun Song
9c6ccd67c1 Add Fulu case for saveStatesEfficientInternal (#15553)
* Add Fulu case for saveStatesEfficientInternal

* Add changelog

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-05 19:25:17 +00:00
Radosław Kapka
36e5d4926b Redesign the pending attestation queue (#15024)
* Redesign pending attestation queue

* fix bad signature test

* equality tests

* changelog <3

* rename functions

* change logs

* fix fuzzing

* fixes after rebasing

* build fix

* review

* James' review

* fix imports
2025-08-05 18:35:19 +00:00
terence
17b7d3ff12 Add fork choice check to pending attestations processing (#15547) 2025-08-05 16:08:47 +00:00
terence
fb2bceece8 beacon api: optimize val assignment lookup (#15558) 2025-08-05 15:23:37 +00:00
terence
d012ab653c Beacon api: fix proposer duty computation for fulu (#15534) 2025-08-05 15:17:02 +00:00
Preston Van Loon
46e7555c42 Update cc-debian11 to latest (#15562) 2025-08-04 16:17:13 +00:00
Preston Van Loon
181df3995e Update go to v1.24.5 (#15561) 2025-08-04 15:17:59 +00:00
Manu NALEPA
149e220b61 Validator custody: Update to the latest specification. (#15532)
* Validator custody: Update to the latest specification.

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-08-02 06:21:08 +00:00
Bastin
ae4b982a6c Fix finality update bugs & Move broadcast logic to LC Store (#15540)
* fix IsBetterFinalityUpdate and add tests

fix finality update bugs

* Update lightclient.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/core/light-client/lightclient.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-08-01 12:35:21 +00:00
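In the lightclient.go diff below, the old comparison is split in two: IsFinalityUpdateValidForBroadcast gates gossip, while a new IsBetterFinalityUpdate decides whether to save, preferring the higher finalized slot, then the higher attested slot, then the higher signature slot. Condensed over plain slot numbers, the saving rule is this lexicographic comparison (ties on all three lose):

// better restates the added IsBetterFinalityUpdate logic: compare
// (finalized, attested, signature) slots in order.
func better(newFin, newAtt, newSig, oldFin, oldAtt, oldSig uint64) bool {
	if newFin != oldFin {
		return newFin > oldFin
	}
	if newAtt != oldAtt {
		return newAtt > oldAtt
	}
	return newSig > oldSig
}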
Radosław Kapka
f330021785 Do not compare liveness response with LH (#15556)
* Do not compare liveness response with LH

* changelog <3
2025-07-31 14:37:36 +00:00
302 changed files with 6800 additions and 4779 deletions

View File

@@ -190,7 +190,7 @@ load("@rules_oci//oci:pull.bzl", "oci_pull")
# A multi-arch base image
oci_pull(
name = "linux_debian11_multiarch_base", # Debian bullseye
digest = "sha256:b82f113425c5b5c714151aaacd8039bc141821cdcd3c65202d42bdf9c43ae60b", # 2023-12-12
digest = "sha256:55a5e011b2c4246b4c51e01fcc2b452d151e03df052e357465f0392fcd59fddf",
image = "gcr.io/prysmaticlabs/distroless/cc-debian11",
platforms = [
"linux/amd64",
@@ -208,7 +208,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.24.0",
go_version = "1.24.6",
nogo = "@//:nogo",
)

View File

@@ -16,7 +16,6 @@ go_library(
"//api/server/structs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -9,7 +9,6 @@ import (
"net/url"
"path"
"regexp"
"sort"
"strconv"
"github.com/OffchainLabs/prysm/v6/api/client"
@@ -17,7 +16,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
@@ -137,24 +135,6 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
return fr.ToConsensus()
}
- // GetForkSchedule retrieve all forks, past present and future, of which this node is aware.
- func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
- body, err := c.Get(ctx, getForkSchedulePath)
- if err != nil {
- return nil, errors.Wrap(err, "error requesting fork schedule")
- }
- fsr := &forkScheduleResponse{}
- err = json.Unmarshal(body, fsr)
- if err != nil {
- return nil, err
- }
- ofs, err := fsr.OrderedForkSchedule()
- if err != nil {
- return nil, errors.Wrap(err, fmt.Sprintf("problem unmarshaling %s response", getForkSchedulePath))
- }
- return ofs, nil
- }
// GetConfigSpec retrieve the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
@@ -334,31 +314,3 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToEx
}
return poolResponse, nil
}
- type forkScheduleResponse struct {
- Data []structs.Fork
- }
- func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {
- ofs := make(forks.OrderedSchedule, 0)
- for _, d := range fsr.Data {
- epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
- if err != nil {
- return nil, errors.Wrapf(err, "error parsing epoch %s", d.Epoch)
- }
- vSlice, err := hexutil.Decode(d.CurrentVersion)
- if err != nil {
- return nil, err
- }
- if len(vSlice) != 4 {
- return nil, fmt.Errorf("got %d byte version, expected 4 bytes. version hex=%s", len(vSlice), d.CurrentVersion)
- }
- version := bytesutil.ToBytes4(vSlice)
- ofs = append(ofs, forks.ForkScheduleEntry{
- Version: version,
- Epoch: primitives.Epoch(epoch),
- })
- }
- sort.Sort(ofs)
- return ofs, nil
- }

View File

@@ -1727,7 +1727,7 @@ func TestSubmitBlindedBlock_BlobsBundlerInterface(t *testing.T) {
t.Run("Interface signature verification", func(t *testing.T) {
// This test verifies that the SubmitBlindedBlock method signature
// has been updated to return BlobsBundler interface
client := &Client{}
// Verify the method exists with the correct signature

View File

@@ -73,6 +73,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -181,6 +182,7 @@ go_test(
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -194,6 +196,7 @@ go_test(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -624,6 +625,7 @@ func Test_hashForGenesisRoot(t *testing.T) {
ctx := t.Context()
c := setupBeaconChain(t, beaconDB)
st, _ := util.DeterministicGenesisStateElectra(t, 10)
+ genesis.StoreDuringTest(t, genesis.GenesisData{State: st})
require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))
root, err := beaconDB.GenesisBlockRoot(ctx)
require.NoError(t, err)

View File

@@ -7,10 +7,15 @@ type currentlySyncingBlock struct {
roots map[[32]byte]struct{}
}
- func (b *currentlySyncingBlock) set(root [32]byte) {
+ func (b *currentlySyncingBlock) set(root [32]byte) error {
b.Lock()
defer b.Unlock()
+ _, ok := b.roots[root]
+ if ok {
+ return errBlockBeingSynced
+ }
b.roots[root] = struct{}{}
+ return nil
}
func (b *currentlySyncingBlock) unset(root [32]byte) {

View File

@@ -44,6 +44,8 @@ var (
errMaxBlobsExceeded = verification.AsVerificationFailure(errors.New("expected commitments in block exceeds MAX_BLOBS_PER_BLOCK"))
// errMaxDataColumnsExceeded is returned when the number of data columns exceeds the maximum allowed.
errMaxDataColumnsExceeded = verification.AsVerificationFailure(errors.New("expected data columns for node exceeds NUMBER_OF_COLUMNS"))
+ // errBlockBeingSynced is returned when a block is being synced.
+ errBlockBeingSynced = errors.New("block is being synced")
)
// An invalid block is the block that fails state transition based on the core protocol rules.

View File

@@ -19,6 +19,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -309,6 +310,7 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
block: wba,
}
+ genesis.StoreStateDuringTest(t, st)
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &fcuConfig{
@@ -403,6 +405,7 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
bState, _ := util.DeterministicGenesisState(t, 10)
+ genesis.StoreStateDuringTest(t, bState)
require.NoError(t, beaconDB.SaveState(ctx, bState, bra))
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -36,7 +35,7 @@ func WithMaxGoroutines(x int) Option {
// WithLCStore for light client store access.
func WithLCStore() Option {
return func(s *Service) error {
- s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB)
+ s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
return nil
}
}
@@ -235,14 +234,6 @@ func WithSyncChecker(checker Checker) Option {
}
}
- // WithCustodyInfo sets the custody info for the blockchain service.
- func WithCustodyInfo(custodyInfo *peerdas.CustodyInfo) Option {
- return func(s *Service) error {
- s.cfg.CustodyInfo = custodyInfo
- return nil
- }
- }
// WithSlasherEnabled sets whether the slasher is enabled or not.
func WithSlasherEnabled(enabled bool) Option {
return func(s *Service) error {

View File

@@ -666,7 +666,9 @@ func (s *Service) areDataColumnsAvailable(
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
- // We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS.
+ samplesPerSlot := params.BeaconConfig().SamplesPerSlot
+ // We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
@@ -689,16 +691,20 @@ func (s *Service) areDataColumnsAvailable(
}
// All columns to sample need to be available for the block to be considered available.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
nodeID := s.cfg.P2P.NodeID()
- // Prevent custody group count to change during the rest of the function.
- s.cfg.CustodyInfo.Mut.RLock()
- defer s.cfg.CustodyInfo.Mut.RUnlock()
- // Get the custody group sampling size for the node.
- custodyGroupSamplingSize := s.cfg.CustodyInfo.CustodyGroupSamplingSize(peerdas.Actual)
- peerInfo, _, err := peerdas.Info(nodeID, custodyGroupSamplingSize)
+ custodyGroupCount, err := s.cfg.P2P.CustodyGroupCount()
+ if err != nil {
+ return errors.Wrap(err, "custody group count")
+ }
+ // Compute the sampling size.
+ // https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#custody-sampling
+ samplingSize := max(samplesPerSlot, custodyGroupCount)
+ // Get the peer info for the node.
+ peerInfo, _, err := peerdas.Info(nodeID, samplingSize)
if err != nil {
return errors.Wrap(err, "peer info")
}

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"strings"
@@ -198,8 +197,7 @@ func (s *Service) processLightClientFinalityUpdate(
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
// Check if the finalized checkpoint has changed
- if finalizedCheckpoint == nil || bytes.Equal(finalizedCheckpoint.GetRoot(), postState.FinalizedCheckpoint().Root) {
+ if finalizedCheckpoint == nil {
return nil
}
@@ -224,17 +222,7 @@ func (s *Service) processLightClientFinalityUpdate(
return nil
}
log.Debug("Saving new light client finality update")
s.lcStore.SetLastFinalityUpdate(newUpdate)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientFinalityUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client finality update")
}
s.lcStore.SetLastFinalityUpdate(newUpdate, true)
return nil
}
@@ -266,17 +254,7 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
return nil
}
log.Debug("Saving new light client optimistic update")
s.lcStore.SetLastOptimisticUpdate(newUpdate)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: newUpdate,
})
if err = s.cfg.P2P.BroadcastLightClientOptimisticUpdate(ctx, newUpdate); err != nil {
return errors.Wrap(err, "could not broadcast light client optimistic update")
}
s.lcStore.SetLastOptimisticUpdate(newUpdate, true)
return nil
}

View File

@@ -35,6 +35,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -1980,14 +1981,15 @@ func TestNoViableHead_Reboot(t *testing.T) {
genesisState, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := genesisState.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
- genesis := blocks.NewGenesisBlock(stateRoot[:])
- wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
+ gb := blocks.NewGenesisBlock(stateRoot[:])
+ wsb, err := consensusblocks.NewSignedBeaconBlock(gb)
require.NoError(t, err)
- genesisRoot, err := genesis.Block.HashTreeRoot()
+ genesisRoot, err := gb.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
- require.NoError(t, service.saveGenesisData(ctx, genesisState))
+ genesis.StoreStateDuringTest(t, genesisState)
+ require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, genesisState, genesisRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, genesisRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot), "Could not save genesis state")
@@ -2894,7 +2896,6 @@ func TestIsDataAvailable(t *testing.T) {
}
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
@@ -2907,7 +2908,6 @@ func TestIsDataAvailable(t *testing.T) {
t.Run("Fulu - no missing data columns", func(t *testing.T) {
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
columnsToSave: []uint64{1, 17, 19, 42, 75, 87, 102, 117, 119}, // 119 is not needed
blobKzgCommitmentsCount: 3,
}
@@ -2922,7 +2922,7 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
options: []Option{WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
blobKzgCommitmentsCount: 3,
@@ -2959,6 +2959,9 @@ func TestIsDataAvailable(t *testing.T) {
require.NoError(t, err)
}()
+ ctx, cancel := context.WithTimeout(ctx, time.Second*2)
+ defer cancel()
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -2971,10 +2974,6 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
- var custodyInfo peerdas.CustodyInfo
- custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
- custodyInfo.ToAdvertiseGroupCount.Set(cgc)
minimumColumnsCountToReconstruct := peerdas.MinimumColumnsCountToReconstruct()
indices := make([]uint64, 0, minimumColumnsCountToReconstruct-missingColumns)
@@ -2983,12 +2982,14 @@ func TestIsDataAvailable(t *testing.T) {
}
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&custodyInfo), WithStartWaitingDataColumnSidecars(startWaiting)},
options: []Option{WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
_, _, err := service.cfg.P2P.UpdateCustodyInfo(0, cgc)
require.NoError(t, err)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
@@ -3020,6 +3021,9 @@ func TestIsDataAvailable(t *testing.T) {
require.NoError(t, err)
}()
+ ctx, cancel := context.WithTimeout(ctx, time.Second*2)
+ defer cancel()
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3028,7 +3032,7 @@ func TestIsDataAvailable(t *testing.T) {
startWaiting := make(chan bool)
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
options: []Option{WithStartWaitingDataColumnSidecars(startWaiting)},
blobKzgCommitmentsCount: 3,
}
@@ -3170,7 +3174,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = &lightClient.Store{}
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var oldActualUpdate interfaces.LightClientOptimisticUpdate
var err error
@@ -3246,39 +3250,39 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectReplace: true,
},
{
name: "Old update is better - age - no supermajority",
name: "Old update is better - finalized slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "Old update is better - age - both supermajority",
oldOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithSupermajority()},
expectReplace: false,
},
{
name: "Old update is better - supermajority",
oldOptions: []util.LightClientOption{util.WithSupermajority()},
name: "Old update is better - attested slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "New update is better - age - both supermajority",
oldOptions: []util.LightClientOption{util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1), util.WithSupermajority()},
name: "Old update is better - signature slot is higher",
oldOptions: []util.LightClientOption{util.WithIncreasedSignatureSlot(1)},
newOptions: []util.LightClientOption{},
expectReplace: false,
},
{
name: "New update is better - finalized slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
expectReplace: true,
},
{
name: "New update is better - age - no supermajority",
name: "New update is better - attested slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithIncreasedFinalizedSlot(1)},
newOptions: []util.LightClientOption{util.WithIncreasedAttestedSlot(1)},
expectReplace: true,
},
{
name: "New update is better - supermajority",
name: "New update is better - signature slot is higher",
oldOptions: []util.LightClientOption{},
newOptions: []util.LightClientOption{util.WithSupermajority()},
newOptions: []util.LightClientOption{util.WithIncreasedSignatureSlot(1)},
expectReplace: true,
},
}
@@ -3310,7 +3314,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
t.Run(version.String(testVersion)+"_"+tc.name, func(t *testing.T) {
s.genesisTime = time.Unix(time.Now().Unix()-(int64(forkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.lcStore = &lightClient.Store{}
s.lcStore = lightClient.NewLightClientStore(s.cfg.BeaconDB, s.cfg.P2P, s.cfg.StateNotifier.StateFeed())
var actualOldUpdate, actualNewUpdate interfaces.LightClientFinalityUpdate
var err error

View File

@@ -84,7 +84,11 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
receivedTime := time.Now()
- s.blockBeingSynced.set(blockRoot)
+ err := s.blockBeingSynced.set(blockRoot)
+ if errors.Is(err, errBlockBeingSynced) {
+ log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring block currently being synced")
+ return nil
+ }
defer s.blockBeingSynced.unset(blockRoot)
blockCopy, err := block.Copy()

View File

@@ -311,7 +311,10 @@ func TestService_HasBlock(t *testing.T) {
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, s.HasBlock(t.Context(), r))
- s.blockBeingSynced.set(r)
+ err = s.blockBeingSynced.set(r)
+ require.NoError(t, err)
+ err = s.blockBeingSynced.set(r)
+ require.ErrorIs(t, err, errBlockBeingSynced)
require.Equal(t, false, s.HasBlock(t.Context(), r))
}

View File

@@ -12,11 +12,9 @@ import (
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -31,6 +29,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -97,7 +96,6 @@ type config struct {
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
- CustodyInfo *peerdas.CustodyInfo
}
// Checker is an interface used to determine if a node is in initial sync
@@ -208,17 +206,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
// Start a blockchain service's main event loop.
func (s *Service) Start() {
- saved := s.cfg.FinalizedStateAtStartUp
defer s.removeStartupState()
- if saved != nil && !saved.IsNil() {
- if err := s.StartFromSavedState(saved); err != nil {
- log.Fatal(err)
- }
- } else {
- if err := s.startFromExecutionChain(); err != nil {
- log.Fatal(err)
- }
+ if err := s.StartFromSavedState(s.cfg.FinalizedStateAtStartUp); err != nil {
+ log.Fatal(err)
}
s.spawnProcessAttestationsRoutine()
go s.runLateBlockTasks()
@@ -267,6 +257,9 @@ func (s *Service) Status() error {
// StartFromSavedState initializes the blockchain using a previously saved finalized checkpoint.
func (s *Service) StartFromSavedState(saved state.BeaconState) error {
+ if state.IsNil(saved) {
+ return errors.New("Last finalized state at startup is nil")
+ }
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = saved.GenesisTime()
s.cfg.AttService.SetGenesisTime(saved.GenesisTime())
@@ -296,6 +289,20 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if err := s.clockSetter.SetClock(startup.NewClock(s.genesisTime, vr)); err != nil {
return errors.Wrap(err, "failed to initialize blockchain service")
}
+ if !params.FuluEnabled() {
+ return nil
+ }
+ earliestAvailableSlot, custodySubnetCount, err := s.updateCustodyInfoInDB(saved.Slot())
+ if err != nil {
+ return errors.Wrap(err, "could not get and save custody group count")
+ }
+ if _, _, err := s.cfg.P2P.UpdateCustodyInfo(earliestAvailableSlot, custodySubnetCount); err != nil {
+ return errors.Wrap(err, "update custody info")
+ }
return nil
}
@@ -358,62 +365,6 @@ func (s *Service) initializeHead(ctx context.Context, st state.BeaconState) erro
return nil
}
- func (s *Service) startFromExecutionChain() error {
- log.Info("Waiting to reach the validator deposit threshold to start the beacon chain...")
- if s.cfg.ChainStartFetcher == nil {
- return errors.New("not configured execution chain")
- }
- go func() {
- stateChannel := make(chan *feed.Event, 1)
- stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
- defer stateSub.Unsubscribe()
- for {
- select {
- case e := <-stateChannel:
- if e.Type == statefeed.ChainStarted {
- data, ok := e.Data.(*statefeed.ChainStartedData)
- if !ok {
- log.Error("Event data is not type *statefeed.ChainStartedData")
- return
- }
- log.WithField("startTime", data.StartTime).Debug("Received chain start event")
- s.onExecutionChainStart(s.ctx, data.StartTime)
- return
- }
- case <-s.ctx.Done():
- log.Debug("Context closed, exiting goroutine")
- return
- case err := <-stateSub.Err():
- log.WithError(err).Error("Subscription to state forRoot failed")
- return
- }
- }
- }()
- return nil
- }
- // onExecutionChainStart initializes a series of deposits from the ChainStart deposits in the eth1
- // deposit contract, initializes the beacon chain's state, and kicks off the beacon chain.
- func (s *Service) onExecutionChainStart(ctx context.Context, genesisTime time.Time) {
- preGenesisState := s.cfg.ChainStartFetcher.PreGenesisState()
- initializedState, err := s.initializeBeaconChain(ctx, genesisTime, preGenesisState, s.cfg.ChainStartFetcher.ChainStartEth1Data())
- if err != nil {
- log.WithError(err).Fatal("Could not initialize beacon chain")
- }
- // We start a counter to genesis, if needed.
- gRoot, err := initializedState.HashTreeRoot(s.ctx)
- if err != nil {
- log.WithError(err).Fatal("Could not hash tree root genesis state")
- }
- go slots.CountdownToGenesis(ctx, genesisTime, uint64(initializedState.NumValidators()), gRoot)
- vr := bytesutil.ToBytes32(initializedState.GenesisValidatorsRoot())
- if err := s.clockSetter.SetClock(startup.NewClock(genesisTime, vr)); err != nil {
- log.WithError(err).Fatal("Failed to initialize blockchain service from execution start event")
- }
- }
// initializes the state and genesis block of the beacon chain to persistent storage
// based on a genesis timestamp value obtained from the ChainStart event emitted
// by the ETH1.0 Deposit Contract and the POWChain service of the node.
@@ -516,6 +467,57 @@ func (s *Service) removeStartupState() {
s.cfg.FinalizedStateAtStartUp = nil
}
+ // UpdateCustodyInfoInDB updates the custody information in the database.
+ // It returns the (potentially updated) custody group count and the earliest available slot.
+ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
+ isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
+ beaconConfig := params.BeaconConfig()
+ custodyRequirement := beaconConfig.CustodyRequirement
+ // Check if the node was previously subscribed to all data subnets, and if so,
+ // store the new status accordingly.
+ wasSubscribedToAllDataSubnets, err := s.cfg.BeaconDB.UpdateSubscribedToAllDataSubnets(s.ctx, isSubscribedToAllDataSubnets)
+ if err != nil {
+ log.WithError(err).Error("Could not update subscription status to all data subnets")
+ }
+ // Warn the user if the node was previously subscribed to all data subnets and is not any more.
+ if wasSubscribedToAllDataSubnets && !isSubscribedToAllDataSubnets {
+ log.Warnf(
+ "Because the flag `--%s` was previously used, the node will still subscribe to all data subnets.",
+ flags.SubscribeAllDataSubnets.Name,
+ )
+ }
+ // Compute the custody group count.
+ custodyGroupCount := custodyRequirement
+ if isSubscribedToAllDataSubnets {
+ custodyGroupCount = beaconConfig.NumberOfColumns
+ }
+ // Safely compute the fulu fork slot.
+ fuluForkSlot, err := fuluForkSlot()
+ if err != nil {
+ return 0, 0, errors.Wrap(err, "fulu fork slot")
+ }
+ // If slot is before the fulu fork slot, then use the earliest stored slot as the reference slot.
+ if slot < fuluForkSlot {
+ slot, err = s.cfg.BeaconDB.EarliestSlot(s.ctx)
+ if err != nil {
+ return 0, 0, errors.Wrap(err, "earliest slot")
+ }
+ }
+ earliestAvailableSlot, custodyGroupCount, err := s.cfg.BeaconDB.UpdateCustodyInfo(s.ctx, slot, custodyGroupCount)
+ if err != nil {
+ return 0, 0, errors.Wrap(err, "update custody info")
+ }
+ return earliestAvailableSlot, custodyGroupCount, nil
+ }
func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db db.HeadAccessDatabase) {
currentTime := prysmTime.Now()
if currentTime.After(genesisTime) {
@@ -532,3 +534,19 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
go slots.CountdownToGenesis(ctx, genesisTime, uint64(gState.NumValidators()), gRoot)
}
+ func fuluForkSlot() (primitives.Slot, error) {
+ beaconConfig := params.BeaconConfig()
+ fuluForkEpoch := beaconConfig.FuluForkEpoch
+ if fuluForkEpoch == beaconConfig.FarFutureEpoch {
+ return beaconConfig.FarFutureSlot, nil
+ }
+ forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
+ if err != nil {
+ return 0, errors.Wrap(err, "epoch start")
+ }
+ return forkFuluSlot, nil
+ }

View File

@@ -31,6 +31,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/trie"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -51,6 +52,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
srv.Stop()
})
bState, _ := util.DeterministicGenesisState(t, 10)
+ genesis.StoreStateDuringTest(t, bState)
pbState, err := state_native.ProtobufBeaconStatePhase0(bState.ToProtoUnsafe())
require.NoError(t, err)
mockTrie, err := trie.NewTrie(0)
@@ -71,20 +73,22 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
DepositContainers: []*ethpb.DepositContainer{},
})
require.NoError(t, err)
+ depositCache, err := depositsnapshot.New()
+ require.NoError(t, err)
web3Service, err = execution.NewService(
ctx,
execution.WithDatabase(beaconDB),
execution.WithHttpEndpoint(endpoint),
execution.WithDepositContractAddress(common.Address{}),
+ execution.WithDepositCache(depositCache),
)
require.NoError(t, err, "Unable to set up web3 service")
attService, err := attestations.NewService(ctx, &attestations.Config{Pool: attestations.NewPool()})
require.NoError(t, err)
- depositCache, err := depositsnapshot.New()
- require.NoError(t, err)
fc := doublylinkedtree.New()
stateGen := stategen.New(beaconDB, fc)
// Safe a state in stategen to purposes of testing a service stop / shutdown.
@@ -396,24 +400,6 @@ func TestServiceStop_SaveCachedBlocks(t *testing.T) {
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(s.ctx, r))
}
- func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
- ctx := t.Context()
- beaconDB := testDB.SetupDB(t)
- service := setupBeaconChain(t, beaconDB)
- mgs := &MockClockSetter{}
- service.clockSetter = mgs
- gt := time.Now()
- service.onExecutionChainStart(t.Context(), gt)
- gs, err := beaconDB.GenesisState(ctx)
- require.NoError(t, err)
- require.NotEqual(t, nil, gs)
- require.Equal(t, 32, len(gs.GenesisValidatorsRoot()))
- var zero [32]byte
- require.DeepNotEqual(t, gs.GenesisValidatorsRoot(), zero[:])
- require.Equal(t, gt, mgs.G.GenesisTime())
- require.Equal(t, bytesutil.ToBytes32(gs.GenesisValidatorsRoot()), mgs.G.GenesisValidatorsRoot())
- }
func BenchmarkHasBlockDB(b *testing.B) {
ctx := b.Context()
s := testServiceWithDB(b)

View File

@@ -30,6 +30,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/libp2p/go-libp2p/core/peer"
"google.golang.org/protobuf/proto"
)
@@ -54,6 +55,7 @@ type mockBroadcaster struct {
type mockAccessor struct {
mockBroadcaster
+ mockCustodyManager
p2pTesting.MockPeerManager
}
@@ -97,6 +99,43 @@ func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.Sig
var _ p2p.Broadcaster = (*mockBroadcaster)(nil)
+ // mockCustodyManager is a mock implementation of p2p.CustodyManager
+ type mockCustodyManager struct {
+ mut sync.RWMutex
+ earliestAvailableSlot primitives.Slot
+ custodyGroupCount uint64
+ }
+ func (dch *mockCustodyManager) EarliestAvailableSlot() (primitives.Slot, error) {
+ dch.mut.RLock()
+ defer dch.mut.RUnlock()
+ return dch.earliestAvailableSlot, nil
+ }
+ func (dch *mockCustodyManager) CustodyGroupCount() (uint64, error) {
+ dch.mut.RLock()
+ defer dch.mut.RUnlock()
+ return dch.custodyGroupCount, nil
+ }
+ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
+ dch.mut.Lock()
+ defer dch.mut.Unlock()
+ dch.earliestAvailableSlot = earliestAvailableSlot
+ dch.custodyGroupCount = custodyGroupCount
+ return earliestAvailableSlot, custodyGroupCount, nil
+ }
+ func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
+ return 0
+ }
+ var _ p2p.CustodyManager = (*mockCustodyManager)(nil)
type testServiceRequirements struct {
ctx context.Context
db db.Database

View File

@@ -41,7 +41,6 @@ go_library(
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",

View File

@@ -11,7 +11,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -101,7 +100,7 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
// via the respective epoch.
func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState, blk interfaces.ReadOnlySignedBeaconBlock, blkRoot [32]byte) error {
currentEpoch := slots.ToEpoch(blk.Block().Slot())
- fork, err := forks.Fork(currentEpoch)
+ fork, err := params.Fork(currentEpoch)
if err != nil {
return err
}

View File

@@ -102,13 +102,13 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
return nil, err
} else if n == params.BeaconConfig().PendingPartialWithdrawalsLimit && !isFullExitRequest {
// if the PendingPartialWithdrawalsLimit is met, the user would have paid for a partial withdrawal that's not included
log.Debugln("Skipping execution layer withdrawal request, PendingPartialWithdrawalsLimit reached")
log.Debug("Skipping execution layer withdrawal request, PendingPartialWithdrawalsLimit reached")
continue
}
vIdx, exists := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(wr.ValidatorPubkey))
if !exists {
log.Debugf("Skipping execution layer withdrawal request, validator index for %s not found\n", hexutil.Encode(wr.ValidatorPubkey))
log.WithField("validator", hexutil.Encode(wr.ValidatorPubkey)).Debug("Skipping execution layer withdrawal request, validator index not found")
continue
}
validator, err := st.ValidatorAtIndexReadOnly(vIdx)
@@ -120,23 +120,23 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
wc := validator.GetWithdrawalCredentials()
isCorrectSourceAddress := bytes.Equal(wc[12:], wr.SourceAddress)
if !hasCorrectCredential || !isCorrectSourceAddress {
log.Debugln("Skipping execution layer withdrawal request, wrong withdrawal credentials")
log.Debug("Skipping execution layer withdrawal request, wrong withdrawal credentials")
continue
}
// Verify the validator is active.
if !helpers.IsActiveValidatorUsingTrie(validator, currentEpoch) {
log.Debugln("Skipping execution layer withdrawal request, validator not active")
log.Debug("Skipping execution layer withdrawal request, validator not active")
continue
}
// Verify the validator has not yet submitted an exit.
if validator.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
log.Debugln("Skipping execution layer withdrawal request, validator has submitted an exit already")
log.Debug("Skipping execution layer withdrawal request, validator has submitted an exit already")
continue
}
// Verify the validator has been active long enough.
if currentEpoch < validator.ActivationEpoch().AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
log.Debugln("Skipping execution layer withdrawal request, validator has not been active long enough")
log.Debug("Skipping execution layer withdrawal request, validator has not been active long enough")
continue
}

View File

@@ -15,7 +15,7 @@ import (
)
// UpgradeToFulu updates inputs a generic state to return the version Fulu state.
- // https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/fork.md#upgrading-the-state
+ // https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {

View File

@@ -10,8 +10,12 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -39,6 +43,9 @@ go_test(
],
deps = [
":go_default_library",
"//async/event:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -750,7 +750,9 @@ func UpdateHasSupermajority(syncAggregate *pb.SyncAggregate) bool {
return numActiveParticipants*3 >= maxActiveParticipants*2
}
- func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
+ // IsFinalityUpdateValidForBroadcast checks if a finality update needs to be broadcasted.
+ // It is also used to check if an incoming gossiped finality update is valid for forwarding and saving.
+ func IsFinalityUpdateValidForBroadcast(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
if oldUpdate == nil {
return true
}
@@ -772,6 +774,35 @@ func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityU
return true
}
+ // IsBetterFinalityUpdate checks if the new finality update is better than the old one for saving.
+ // This does not concern broadcasting, but rather the decision of whether to save the new update.
+ // For broadcasting checks, use IsFinalityUpdateValidForBroadcast.
+ func IsBetterFinalityUpdate(newUpdate, oldUpdate interfaces.LightClientFinalityUpdate) bool {
+ if oldUpdate == nil {
+ return true
+ }
+ // Full nodes SHOULD provide the LightClientFinalityUpdate with the highest attested_header.beacon.slot (if multiple, highest signature_slot)
+ newFinalizedSlot := newUpdate.FinalizedHeader().Beacon().Slot
+ newAttestedSlot := newUpdate.AttestedHeader().Beacon().Slot
+ oldFinalizedSlot := oldUpdate.FinalizedHeader().Beacon().Slot
+ oldAttestedSlot := oldUpdate.AttestedHeader().Beacon().Slot
+ if newFinalizedSlot < oldFinalizedSlot {
+ return false
+ }
+ if newFinalizedSlot == oldFinalizedSlot {
+ if newAttestedSlot < oldAttestedSlot {
+ return false
+ }
+ if newAttestedSlot == oldAttestedSlot && newUpdate.SignatureSlot() <= oldUpdate.SignatureSlot() {
+ return false
+ }
+ }
+ return true
+ }
func IsBetterOptimisticUpdate(newUpdate, oldUpdate interfaces.LightClientOptimisticUpdate) bool {
if oldUpdate == nil {
return true

View File

@@ -4,7 +4,11 @@ import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/async/event"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
@@ -16,13 +20,17 @@ type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
- lastFinalityUpdate interfaces.LightClientFinalityUpdate
- lastOptimisticUpdate interfaces.LightClientOptimisticUpdate
+ lastFinalityUpdate interfaces.LightClientFinalityUpdate // tracks the best finality update seen so far
+ lastOptimisticUpdate interfaces.LightClientOptimisticUpdate // tracks the best optimistic update seen so far
+ p2p p2p.Accessor
+ stateFeed event.SubscriberSender
}
- func NewLightClientStore(db iface.HeadAccessDatabase) *Store {
+ func NewLightClientStore(db iface.HeadAccessDatabase, p p2p.Accessor, e event.SubscriberSender) *Store {
return &Store{
- beaconDB: db,
+ beaconDB: db,
+ p2p: p,
+ stateFeed: e,
}
}
@@ -143,10 +151,23 @@ func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update
return nil
}
- func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate) {
+ func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
if broadcast && IsFinalityUpdateValidForBroadcast(update, s.lastFinalityUpdate) {
if err := s.p2p.BroadcastLightClientFinalityUpdate(context.Background(), update); err != nil {
log.WithError(err).Error("Could not broadcast light client finality update")
}
}
s.lastFinalityUpdate = update
log.Debug("Saved new light client finality update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
}
func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
@@ -155,10 +176,23 @@ func (s *Store) LastFinalityUpdate() interfaces.LightClientFinalityUpdate {
return s.lastFinalityUpdate
}
- func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate) {
+ func (s *Store) SetLastOptimisticUpdate(update interfaces.LightClientOptimisticUpdate, broadcast bool) {
s.mu.Lock()
defer s.mu.Unlock()
+ if broadcast {
+ if err := s.p2p.BroadcastLightClientOptimisticUpdate(context.Background(), update); err != nil {
+ log.WithError(err).Error("Could not broadcast light client optimistic update")
+ }
+ }
s.lastOptimisticUpdate = update
log.Debug("Saved new light client optimistic update")
s.stateFeed.Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
}
func (s *Store) LastOptimisticUpdate() interfaces.LightClientOptimisticUpdate {

View File

@@ -3,7 +3,10 @@ package light_client_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/async/event"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
p2pTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -21,7 +24,7 @@ func TestLightClientStore(t *testing.T) {
params.OverrideBeaconConfig(cfg)
// Initialize the light client store
lcStore := &lightClient.Store{}
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), &p2pTesting.FakeP2P{}, new(event.Feed))
// Create test light client updates for Capella and Deneb
lCapella := util.NewTestLightClient(t, version.Capella)
@@ -45,24 +48,118 @@ func TestLightClientStore(t *testing.T) {
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get finality with Capella update. Optimistic update should be nil
- lcStore.SetLastFinalityUpdate(finUpdateCapella)
+ lcStore.SetLastFinalityUpdate(finUpdateCapella, false)
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
// Set and get optimistic with Capella update. Finality update should be Capella
- lcStore.SetLastOptimisticUpdate(opUpdateCapella)
+ lcStore.SetLastOptimisticUpdate(opUpdateCapella, false)
require.Equal(t, opUpdateCapella, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
require.Equal(t, finUpdateCapella, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
// Set and get finality and optimistic with Deneb update
- lcStore.SetLastFinalityUpdate(finUpdateDeneb)
- lcStore.SetLastOptimisticUpdate(opUpdateDeneb)
+ lcStore.SetLastFinalityUpdate(finUpdateDeneb, false)
+ lcStore.SetLastOptimisticUpdate(opUpdateDeneb, false)
require.Equal(t, finUpdateDeneb, lcStore.LastFinalityUpdate(), "lastFinalityUpdate is wrong")
require.Equal(t, opUpdateDeneb, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate is wrong")
// Set and get finality and optimistic with nil update
lcStore.SetLastFinalityUpdate(nil)
lcStore.SetLastOptimisticUpdate(nil)
require.IsNil(t, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should be nil")
require.IsNil(t, lcStore.LastOptimisticUpdate(), "lastOptimisticUpdate should be nil")
}
func TestLightClientStore_SetLastFinalityUpdate(t *testing.T) {
p2p := p2pTesting.NewTestP2P(t)
lcStore := lightClient.NewLightClientStore(testDB.SetupDB(t), p2p, new(event.Feed))
// update 0 with basic data and no supermajority following an empty lastFinalityUpdate - should save and broadcast
l0 := util.NewTestLightClient(t, version.Altair)
update0, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l0.Ctx, l0.State, l0.Block, l0.AttestedState, l0.AttestedBlock, l0.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update0, lcStore.LastFinalityUpdate()), "update0 should be better than nil")
// update0 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update0, lcStore.LastFinalityUpdate()), "update0 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update0, true)
require.Equal(t, update0, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update when previous is nil")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 1 with same finality slot, increased attested slot, and no supermajority - should save but not broadcast
l1 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(1))
update1, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l1.Ctx, l1.State, l1.Block, l1.AttestedState, l1.AttestedBlock, l1.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update1, update0), "update1 should be better than update0")
// update1 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update1, lcStore.LastFinalityUpdate()), "update1 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update1, true)
require.Equal(t, update1, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called after setting a new last finality update without supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 2 with same finality slot, increased attested slot, and supermajority - should save and broadcast
l2 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update2, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l2.Ctx, l2.State, l2.Block, l2.AttestedState, l2.AttestedBlock, l2.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update2, update1), "update2 should be better than update1")
// update2 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update2, lcStore.LastFinalityUpdate()), "update2 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update2, true)
require.Equal(t, update2, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after setting a new last finality update with supermajority")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 3 with same finality slot, increased attested slot, and supermajority - should save but not broadcast
l3 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedAttestedSlot(3), util.WithSupermajority())
update3, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l3.Ctx, l3.State, l3.Block, l3.AttestedState, l3.AttestedBlock, l3.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update3, update2), "update3 should be better than update2")
// update3 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update3, lcStore.LastFinalityUpdate()), "update3 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update3, true)
require.Equal(t, update3, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast")
// update 4 with increased finality slot, increased attested slot, and supermajority - should save and broadcast
l4 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(1), util.WithSupermajority())
update4, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l4.Ctx, l4.State, l4.Block, l4.AttestedState, l4.AttestedBlock, l4.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update4, update3), "update4 should be better than update3")
// update4 should be valid for broadcast - meaning it should be broadcasted
require.Equal(t, true, lightClient.IsFinalityUpdateValidForBroadcast(update4, lcStore.LastFinalityUpdate()), "update4 should be valid for broadcast")
lcStore.SetLastFinalityUpdate(update4, true)
require.Equal(t, update4, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, true, p2p.BroadcastCalled.Load(), "Broadcast should have been called after a new finality update with increased finality slot")
p2p.BroadcastCalled.Store(false) // Reset for next test
// update 5 with the same new finality slot, increased attested slot, and supermajority - should save but not broadcast
l5 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(2), util.WithSupermajority())
update5, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l5.Ctx, l5.State, l5.Block, l5.AttestedState, l5.AttestedBlock, l5.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update5, update4), "update5 should be better than update4")
// update5 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update5, lcStore.LastFinalityUpdate()), "update5 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update5, true)
require.Equal(t, update5, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
// update 6 with the same new finality slot, increased attested slot, and no supermajority - should save but not broadcast
l6 := util.NewTestLightClient(t, version.Altair, util.WithIncreasedFinalizedSlot(1), util.WithIncreasedAttestedSlot(3))
update6, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l6.Ctx, l6.State, l6.Block, l6.AttestedState, l6.AttestedBlock, l6.FinalizedBlock)
require.NoError(t, err, "Failed to create light client finality update")
require.Equal(t, true, lightClient.IsBetterFinalityUpdate(update6, update5), "update6 should be better than update5")
// update6 should not be valid for broadcast - meaning it should not be broadcasted
require.Equal(t, false, lightClient.IsFinalityUpdateValidForBroadcast(update6, lcStore.LastFinalityUpdate()), "update6 should not be valid for broadcast")
lcStore.SetLastFinalityUpdate(update6, true)
require.Equal(t, update6, lcStore.LastFinalityUpdate(), "lastFinalityUpdate should match the set value")
require.Equal(t, false, p2p.BroadcastCalled.Load(), "Broadcast should not have been called when previous was already broadcast with supermajority")
}
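The sequence above pins down the gating rule: every better update is saved, but an update is rebroadcast only when nothing was broadcast before, when the finality slot advances, or on the first supermajority at a given finality slot. A minimal sketch of that rule, using a hypothetical finalityUpdate type (the real check is lightClient.IsFinalityUpdateValidForBroadcast):

// Sketch only; the field names are stand-ins for the real update type.
type finalityUpdate struct {
	finalizedSlot    uint64
	hasSupermajority bool
}

func validForBroadcast(newUpd, prev *finalityUpdate) bool {
	if prev == nil {
		return true // nothing has been broadcast yet
	}
	if newUpd.finalizedSlot > prev.finalizedSlot {
		return true // finality advanced
	}
	// Same finality slot: broadcast only on the first supermajority.
	return newUpd.hasSupermajority && !prev.hasSupermajority
}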

View File

@@ -16,7 +16,6 @@ go_library(
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -53,7 +52,6 @@ go_test(
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -31,15 +31,8 @@ var (
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
type CustodyType int
const (
Target CustodyType = iota
Actual
)
// CustodyGroups computes the custody groups the node should participate in.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#get_custody_groups
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
@@ -102,7 +95,7 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
}
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#compute_columns_for_custody_group
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
@@ -127,7 +120,7 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// DataColumnSidecars computes the data column sidecars from the signed block, cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters `cellsAndProofs` afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/fulu/validator.md#get_data_column_sidecars_from_block
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars_from_block
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, cellsAndProofs []kzg.CellsAndProofs) ([]*ethpb.DataColumnSidecar, error) {
if signedBlock == nil || signedBlock.IsNil() || len(cellsAndProofs) == 0 {
return nil, nil
@@ -176,19 +169,6 @@ func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
return columnIndex % numberOfCustodyGroups, nil
}
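A worked example of the modulo mapping, under assumed spec values:

// Assumed config: NumberOfColumns = 128, NumberOfCustodyGroups = 128.
// columnsPerGroup = 128 / 128 = 1, so custody group g holds exactly column g,
// and ComputeCustodyGroupForColumn(42) returns (42, nil).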
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/das-core.md#custody-sampling
func (custodyInfo *CustodyInfo) CustodyGroupSamplingSize(ct CustodyType) uint64 {
custodyGroupCount := custodyInfo.TargetGroupCount.Get()
if ct == Actual {
custodyGroupCount = custodyInfo.ActualGroupCount()
}
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
return max(samplesPerSlot, custodyGroupCount)
}
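The clamp removed here is the spec's custody-sampling floor: a node samples at least SamplesPerSlot custody groups regardless of how few it custodies (the removed test below pins the floor at 8). The same max() now lives inline at the call sites, e.g. the samplingSize computation in fullCommitmentsToCheck further down. A standalone sketch:

// Sampling-size floor, with samplesPerSlot assumed to come from
// params.BeaconConfig().SamplesPerSlot.
func samplingSize(custodyGroupCount, samplesPerSlot uint64) uint64 {
	if custodyGroupCount > samplesPerSlot {
		return custodyGroupCount
	}
	return samplesPerSlot
}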
// CustodyColumns computes the custody columns from the custody groups.
func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
@@ -219,7 +199,7 @@ func CustodyColumns(custodyGroups []uint64) (map[uint64]bool, error) {
// the KZG commitment inclusion proofs and cells and cell proofs.
// The returned value contains pointers to function parameters.
// (If the caller alters input parameters afterwards, the returned value will be modified as well.)
// https://github.com/ethereum/consensus-specs/blob/v1.6.0-alpha.3/specs/fulu/validator.md#get_data_column_sidecars
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#get_data_column_sidecars
func dataColumnsSidecars(
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
blobKzgCommitments [][]byte,

View File

@@ -104,62 +104,6 @@ func TestComputeCustodyGroupForColumn(t *testing.T) {
})
}
func TestCustodyGroupSamplingSize(t *testing.T) {
testCases := []struct {
name string
custodyType peerdas.CustodyType
validatorsCustodyRequirement uint64
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "target, lower than samples per slot",
custodyType: peerdas.Target,
validatorsCustodyRequirement: 2,
expected: 8,
},
{
name: "target, higher than samples per slot",
custodyType: peerdas.Target,
validatorsCustodyRequirement: 100,
expected: 100,
},
{
name: "actual, lower than samples per slot",
custodyType: peerdas.Actual,
validatorsCustodyRequirement: 3,
toAdvertiseCustodyGroupCount: 4,
expected: 8,
},
{
name: "actual, higher than samples per slot",
custodyType: peerdas.Actual,
validatorsCustodyRequirement: 100,
toAdvertiseCustodyGroupCount: 101,
expected: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Create a custody info.
custodyInfo := peerdas.CustodyInfo{}
// Set the validators custody requirement for target custody group count.
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(tc.validatorsCustodyRequirement)
// Set the to advertise custody group count.
custodyInfo.ToAdvertiseGroupCount.Set(tc.toAdvertiseCustodyGroupCount)
// Compute the custody group sampling size.
actual := custodyInfo.CustodyGroupSamplingSize(tc.custodyType)
// Check the result.
require.Equal(t, tc.expected, actual)
})
}
}
func TestCustodyColumns(t *testing.T) {
t.Run("group too large", func(t *testing.T) {
_, err := peerdas.CustodyColumns([]uint64{1_000_000})

View File

@@ -4,45 +4,17 @@ import (
"encoding/binary"
"sync"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/ethereum/go-ethereum/p2p/enode"
lru "github.com/hashicorp/golang-lru"
"github.com/pkg/errors"
)
// info contains all useful peerDAS-related information regarding a peer.
type (
info struct {
CustodyGroups map[uint64]bool
CustodyColumns map[uint64]bool
DataColumnsSubnets map[uint64]bool
}
targetCustodyGroupCount struct {
mut sync.RWMutex
validatorsCustodyRequirement uint64
}
toAdverstiseCustodyGroupCount struct {
mut sync.RWMutex
value uint64
}
CustodyInfo struct {
// Mut is a mutex to be used by the caller to ensure that neither
// TargetGroupCount nor ToAdvertiseGroupCount is being modified.
// (Using this mutex is not required for data protection.)
Mut sync.RWMutex
// TargetGroupCount represents the target number of custody groups we should custody,
// based on the validators we are tracking.
TargetGroupCount targetCustodyGroupCount
// ToAdvertiseGroupCount represents the number of custody groups to advertise to the network.
ToAdvertiseGroupCount toAdverstiseCustodyGroupCount
}
)
type info struct {
CustodyGroups map[uint64]bool
CustodyColumns map[uint64]bool
DataColumnsSubnets map[uint64]bool
}
const (
nodeInfoCacheSize = 200
@@ -109,61 +81,6 @@ func Info(nodeID enode.ID, custodyGroupCount uint64) (*info, bool, error) {
return result, false, nil
}
// ActualGroupCount returns the actual custody group count.
func (custodyInfo *CustodyInfo) ActualGroupCount() uint64 {
return min(custodyInfo.TargetGroupCount.Get(), custodyInfo.ToAdvertiseGroupCount.Get())
}
// Get returns the number of custody groups we should participate in.
func (tcgc *targetCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeAllDataSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
tcgc.mut.RLock()
defer tcgc.mut.RUnlock()
// If no validators are tracked, return the default custody requirement.
if tcgc.validatorsCustodyRequirement == 0 {
return params.BeaconConfig().CustodyRequirement
}
// Return the validators custody requirement.
return tcgc.validatorsCustodyRequirement
}
// SetValidatorsCustodyRequirement sets the validators custody requirement.
func (tcgc *targetCustodyGroupCount) SetValidatorsCustodyRequirement(value uint64) {
tcgc.mut.Lock()
defer tcgc.mut.Unlock()
tcgc.validatorsCustodyRequirement = value
}
// Get returns the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Get() uint64 {
// If subscribed to all subnets, return the number of custody groups.
if flags.Get().SubscribeAllDataSubnets {
return params.BeaconConfig().NumberOfCustodyGroups
}
custodyRequirement := params.BeaconConfig().CustodyRequirement
tacgc.mut.RLock()
defer tacgc.mut.RUnlock()
return max(tacgc.value, custodyRequirement)
}
// Set sets the to advertise custody group count.
func (tacgc *toAdverstiseCustodyGroupCount) Set(value uint64) {
tacgc.mut.Lock()
defer tacgc.mut.Unlock()
tacgc.value = value
}
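Taken together, the removed getters resolved the effective custody group count as min(target, advertised), with each leg clamped by the config and flags. A worked example under assumed mainnet values (CustodyRequirement = 4, NumberOfCustodyGroups = 128, matching the removed tests below):

// validatorsCustodyRequirement = 10, advertised value = 6, flag unset:
//   TargetGroupCount.Get()      = 10             (validators requirement set)
//   ToAdvertiseGroupCount.Get() = max(6, 4) = 6  (clamped by CustodyRequirement)
//   ActualGroupCount()          = min(10, 6) = 6
// With SubscribeAllDataSubnets set, both getters short-circuit to 128.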
// createInfoCacheIfNeeded creates a new cache if it doesn't exist.
func createInfoCacheIfNeeded() error {
nodeInfoCacheMut.Lock()

View File

@@ -4,7 +4,6 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/p2p/enode"
)
@@ -26,108 +25,3 @@ func TestInfo(t *testing.T) {
require.DeepEqual(t, expectedDataColumnsSubnets, actual.DataColumnsSubnets)
}
}
func TestTargetCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllColumns bool
validatorsCustodyRequirement uint64
expected uint64
}{
{
name: "subscribed to all data subnets",
subscribeToAllColumns: true,
validatorsCustodyRequirement: 100,
expected: 128,
},
{
name: "no validators attached",
subscribeToAllColumns: false,
validatorsCustodyRequirement: 0,
expected: 4,
},
{
name: "some validators attached",
subscribeToAllColumns: false,
validatorsCustodyRequirement: 100,
expected: 100,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllColumns {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
var custodyInfo peerdas.CustodyInfo
// Set the validators custody requirement.
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(tc.validatorsCustodyRequirement)
// Get the target custody group count.
actual := custodyInfo.TargetGroupCount.Get()
// Compare the expected and actual values.
require.Equal(t, tc.expected, actual)
})
}
}
func TestToAdvertiseCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllColumns bool
toAdvertiseCustodyGroupCount uint64
expected uint64
}{
{
name: "subscribed to all subnets",
subscribeToAllColumns: true,
toAdvertiseCustodyGroupCount: 100,
expected: 128,
},
{
name: "higher than custody requirement",
subscribeToAllColumns: false,
toAdvertiseCustodyGroupCount: 100,
expected: 100,
},
{
name: "lower than custody requirement",
subscribeToAllColumns: false,
toAdvertiseCustodyGroupCount: 1,
expected: 4,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
// Subscribe to all subnets if needed.
if tc.subscribeToAllColumns {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
}
// Create a custody info.
var custodyInfo peerdas.CustodyInfo
// Set the to advertise custody group count.
custodyInfo.ToAdvertiseGroupCount.Set(tc.toAdvertiseCustodyGroupCount)
// Get the to advertise custody group count.
actual := custodyInfo.ToAdvertiseGroupCount.Get()
// Compare the expected and actual values.
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -10,10 +10,7 @@ import (
"github.com/pkg/errors"
)
const (
CustodyGroupCountEnrKey = "cgc"
kzgPosition = 11 // The index of the KZG commitment list in the Body
)
const kzgPosition = 11 // The index of the KZG commitment list in the Body
var (
ErrIndexTooLarge = errors.New("column index is larger than the specified columns count")
@@ -27,13 +24,13 @@ var (
ErrCannotLoadCustodyGroupCount = errors.New("cannot load the custody group count from peer")
)
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#custody-group-count
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#custody-group-count
type Cgc uint64
func (Cgc) ENRKey() string { return CustodyGroupCountEnrKey }
func (Cgc) ENRKey() string { return params.BeaconNetworkConfig().CustodyGroupCountKey }
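With the key now read from the network config, the Cgc entry still plugs into go-ethereum's ENR API unchanged. A hypothetical usage sketch (helper name assumed, enr imported from github.com/ethereum/go-ethereum/p2p/enr):

// Advertise and read back the custody group count on an ENR record.
func exampleAdvertiseCgc() (Cgc, error) {
	rec := new(enr.Record)
	rec.Set(Cgc(8)) // stored under CustodyGroupCountKey
	var cgc Cgc
	err := rec.Load(&cgc) // looked up via ENRKey()
	return cgc, err
}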
// VerifyDataColumnSidecar verifies if the data column sidecar is valid.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar
func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// The sidecar index must be within the valid range.
numberOfColumns := params.BeaconConfig().NumberOfColumns
@@ -60,7 +57,7 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
// while we are verifying all the KZG proofs from multiple sidecars in a batch.
// This is done to improve performance since the internal KZG library is way more
// efficient when verifying in batch.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_kzg_proofs
func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
// Compute the total count.
count := 0
@@ -96,7 +93,7 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
}
// VerifyDataColumnSidecarInclusionProof verifies that the given KZG commitments are included in the given beacon block.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#verify_data_column_sidecar_inclusion_proof
func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
if sidecar.SignedBlockHeader == nil || sidecar.SignedBlockHeader.Header == nil {
return ErrNilBlockHeader
@@ -128,7 +125,7 @@ func VerifyDataColumnSidecarInclusionProof(sidecar blocks.RODataColumn) error {
}
// ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
return columnIndex % dataColumnSidecarSubnetCount

View File

@@ -8,7 +8,7 @@ import (
)
// ValidatorsCustodyRequirement returns the number of custody groups required for the validator indices attached to the beacon node.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-beta.5/specs/fulu/validator.md#validator-custody
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/validator.md#validator-custody
func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validatorsIndex map[primitives.ValidatorIndex]bool) (uint64, error) {
totalNodeBalance := uint64(0)
for index := range validatorsIndex {

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"domain.go",
"signature.go",
"signing_root.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing",
@@ -25,7 +24,6 @@ go_test(
name = "go_default_test",
srcs = [
"domain_test.go",
"signature_test.go",
"signing_root_test.go",
],
embed = [":go_default_library"],

View File

@@ -1,34 +0,0 @@
package signing
import (
"github.com/OffchainLabs/prysm/v6/config/params"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
var ErrNilRegistration = errors.New("nil signed registration")
// VerifyRegistrationSignature verifies the signature of a validator's registration.
func VerifyRegistrationSignature(
sr *ethpb.SignedValidatorRegistrationV1,
) error {
if sr == nil || sr.Message == nil {
return ErrNilRegistration
}
d := params.BeaconConfig().DomainApplicationBuilder
// Per spec, the fork version and genesis validators root should be nil,
// which default to the genesis fork version and a zero root.
sd, err := ComputeDomain(
d,
nil, /* fork version */
nil /* genesis val root */)
if err != nil {
return err
}
if err := VerifySigningRoot(sr.Message, sr.Message.Pubkey, sr.Signature, sd); err != nil {
return ErrSigFailedToVerify
}
return nil
}

View File

@@ -1,42 +0,0 @@
package signing_test
import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestVerifyRegistrationSignature(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
reg := &ethpb.ValidatorRegistrationV1{
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
GasLimit: 123456,
Timestamp: uint64(time.Now().Unix()),
Pubkey: sk.PublicKey().Marshal(),
}
d := params.BeaconConfig().DomainApplicationBuilder
domain, err := signing.ComputeDomain(d, nil, nil)
require.NoError(t, err)
sr, err := signing.ComputeSigningRoot(reg, domain)
require.NoError(t, err)
sk.Sign(sr[:]).Marshal()
sReg := &ethpb.SignedValidatorRegistrationV1{
Message: reg,
Signature: sk.Sign(sr[:]).Marshal(),
}
require.NoError(t, signing.VerifyRegistrationSignature(sReg))
sReg.Signature = []byte("bad")
require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrSigFailedToVerify)
sReg.Message = nil
require.ErrorIs(t, signing.VerifyRegistrationSignature(sReg), signing.ErrNilRegistration)
}

View File

@@ -39,10 +39,8 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -22,8 +22,8 @@ type LazilyPersistentStoreColumn struct {
store *filesystem.DataColumnStorage
nodeID enode.ID
cache *dataColumnCache
custodyInfo *peerdas.CustodyInfo
newDataColumnsVerifier verification.NewDataColumnsVerifier
custodyGroupCount uint64
}
var _ AvailabilityStore = &LazilyPersistentStoreColumn{}
@@ -38,13 +38,18 @@ type DataColumnsVerifier interface {
// NewLazilyPersistentStoreColumn creates a new LazilyPersistentStoreColumn.
// WARNING: The resulting LazilyPersistentStoreColumn is NOT thread-safe.
func NewLazilyPersistentStoreColumn(store *filesystem.DataColumnStorage, nodeID enode.ID, newDataColumnsVerifier verification.NewDataColumnsVerifier, custodyInfo *peerdas.CustodyInfo) *LazilyPersistentStoreColumn {
func NewLazilyPersistentStoreColumn(
store *filesystem.DataColumnStorage,
nodeID enode.ID,
newDataColumnsVerifier verification.NewDataColumnsVerifier,
custodyGroupCount uint64,
) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
nodeID: nodeID,
cache: newDataColumnCache(),
custodyInfo: custodyInfo,
newDataColumnsVerifier: newDataColumnsVerifier,
custodyGroupCount: custodyGroupCount,
}
}
@@ -155,6 +160,8 @@ func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, curre
// fullCommitmentsToCheck returns the commitments to check for a given block.
func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
@@ -183,11 +190,9 @@ func (s *LazilyPersistentStoreColumn) fullCommitmentsToCheck(nodeID enode.ID, bl
return &safeCommitmentsArray{}, nil
}
// Retrieve the groups count.
custodyGroupCount := s.custodyInfo.ActualGroupCount()
// Retrieve peer info.
peerInfo, _, err := peerdas.Info(nodeID, custodyGroupCount)
samplingSize := max(s.custodyGroupCount, samplesPerSlot)
peerInfo, _, err := peerdas.Info(nodeID, samplingSize)
if err != nil {
return nil, errors.Wrap(err, "peer info")
}
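With the constructor taking a plain count, each call site resolves the custody group count up front and the store applies the sampling floor itself. A hedged wiring sketch (helper name and the count's source are assumptions):

// Assumed wiring after the refactor: resolve the count once, pass a value.
func newColumnStore(storage *filesystem.DataColumnStorage, id enode.ID, nv verification.NewDataColumnsVerifier) *LazilyPersistentStoreColumn {
	custodyGroupCount := uint64(8) // hypothetical: derived from config / validator custody
	return NewLazilyPersistentStoreColumn(storage, id, nv, custodyGroupCount)
}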

View File

@@ -4,10 +4,8 @@ import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -29,7 +27,7 @@ var commitments = [][]byte{
func TestPersist(t *testing.T) {
t.Run("no sidecars", func(t *testing.T) {
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0)
require.NoError(t, err)
require.Equal(t, 0, len(lazilyPersistentStoreColumns.cache.entries))
@@ -44,7 +42,7 @@ func TestPersist(t *testing.T) {
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(0, roSidecars...)
require.ErrorIs(t, err, errMixedRoots)
@@ -59,7 +57,7 @@ func TestPersist(t *testing.T) {
}
roSidecars, _ := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(1_000_000, roSidecars...)
require.NoError(t, err)
@@ -76,7 +74,7 @@ func TestPersist(t *testing.T) {
}
roSidecars, roDataColumns := roSidecarsFromDataColumnParamsByBlockRoot(t, dataColumnParamsByBlockRoot)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, &peerdas.CustodyInfo{})
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, nil, 0)
err := lazilyPersistentStoreColumns.Persist(slot, roSidecars...)
require.NoError(t, err)
@@ -114,7 +112,7 @@ func TestIsDataAvailable(t *testing.T) {
signedRoBlock := newSignedRoBlock(t, signedBeaconBlockFulu)
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
err := lazilyPersistentStoreColumns.IsDataAvailable(ctx, 0 /*current slot*/, signedRoBlock)
require.NoError(t, err)
@@ -135,9 +133,9 @@ func TestIsDataAvailable(t *testing.T) {
root := signedRoBlock.Root()
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, &peerdas.CustodyInfo{})
lazilyPersistentStoreColumns := NewLazilyPersistentStoreColumn(dataColumnStorage, enode.ID{}, newDataColumnsVerifier, 0)
indices := [...]uint64{1, 17, 87, 102}
indices := [...]uint64{1, 17, 19, 42, 75, 87, 102, 117}
dataColumnsParams := make([]util.DataColumnParam, 0, len(indices))
for _, index := range indices {
dataColumnParams := util.DataColumnParam{
@@ -221,14 +219,10 @@ func TestFullCommitmentsToCheck(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
numberOfColumns := params.BeaconConfig().NumberOfColumns
b := tc.block(t)
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, &peerdas.CustodyInfo{})
s := NewLazilyPersistentStoreColumn(nil, enode.ID{}, nil, numberOfColumns)
commitmentsArray, err := s.fullCommitmentsToCheck(enode.ID{}, b, tc.slot)
require.NoError(t, err)

View File

@@ -13,6 +13,7 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/beacon-chain:__subpackages__",
"//genesis:__subpackages__",
"//testing/slasher/simulator:__pkg__",
"//tools:__subpackages__",
],

View File

@@ -10,6 +10,11 @@ type ReadOnlyDatabase = iface.ReadOnlyDatabase
// about head info. For head info, use github.com/prysmaticlabs/prysm/blockchain.HeadFetcher.
type NoHeadAccessDatabase = iface.NoHeadAccessDatabase
// ReadOnlyDatabaseWithSeqNum exposes Prysm's Ethereum data backend for read access only, with no
// head info, but with read/write access to the p2p metadata sequence number.
// This is used for the p2p service.
type ReadOnlyDatabaseWithSeqNum = iface.ReadOnlyDatabaseWithSeqNum
// HeadAccessDatabase exposes Prysm's Ethereum backend for read/write access with information about
// chain head information. This interface should be used sparingly as the HeadFetcher is the source
// of truth around chain head information while this interface serves as persistent storage for the

View File

@@ -33,6 +33,7 @@ type ReadOnlyDatabase interface {
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
FinalizedChildBlock(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error)
HighestRootsBelowSlot(ctx context.Context, slot primitives.Slot) (primitives.Slot, [][32]byte, error)
EarliestSlot(ctx context.Context) (primitives.Slot, error)
// State related methods.
State(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error)
StateOrError(ctx context.Context, blockRoot [32]byte) (state.BeaconState, error)
@@ -56,14 +57,25 @@ type ReadOnlyDatabase interface {
// Fee recipients operations.
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// light client operations
// Light client operations
LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) (map[uint64]interfaces.LightClientUpdate, error)
LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error)
LightClientBootstrap(ctx context.Context, blockRoot []byte) (interfaces.LightClientBootstrap, error)
// origin checkpoint sync support
// Origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillStatus(context.Context) (*dbval.BackfillStatus, error)
// P2P Metadata operations.
MetadataSeqNum(ctx context.Context) (uint64, error)
}
// ReadOnlyDatabaseWithSeqNum defines a struct which has read access to database methods
// and also has read/write access to the p2p metadata sequence number.
// Only used for the p2p service.
type ReadOnlyDatabaseWithSeqNum interface {
ReadOnlyDatabase
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
}
// NoHeadAccessDatabase defines a struct without access to chain head data.
@@ -102,6 +114,13 @@ type NoHeadAccessDatabase interface {
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
DeleteHistoricalDataBeforeSlot(ctx context.Context, slot primitives.Slot, batchSize int) (int, error)
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error
}
// HeadAccessDatabase defines a struct with access to reading chain head data.

View File

@@ -8,6 +8,7 @@ go_library(
"backup.go",
"blocks.go",
"checkpoint.go",
"custody.go",
"deposit_contract.go",
"encoding.go",
"error.go",
@@ -23,6 +24,7 @@ go_library(
"migration_block_slot_index.go",
"migration_finalized_parent.go",
"migration_state_validators.go",
"p2p.go",
"schema.go",
"state.go",
"state_summary.go",
@@ -38,7 +40,6 @@ go_library(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -50,6 +51,7 @@ go_library(
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//genesis:go_default_library",
"//io/file:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
@@ -83,6 +85,7 @@ go_test(
"backup_test.go",
"blocks_test.go",
"checkpoint_test.go",
"custody_test.go",
"deposit_contract_test.go",
"encoding_test.go",
"execution_chain_test.go",
@@ -94,6 +97,7 @@ go_test(
"migration_archived_index_test.go",
"migration_block_slot_index_test.go",
"migration_state_validators_test.go",
"p2p_test.go",
"state_summary_test.go",
"state_test.go",
"utils_test.go",
@@ -106,7 +110,6 @@ go_test(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
@@ -116,6 +119,7 @@ go_test(
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -860,6 +860,47 @@ func (s *Store) SaveRegistrationsByValidatorIDs(ctx context.Context, ids []primi
})
}
// EarliestSlot returns the earliest slot in the database.
func (s *Store) EarliestSlot(ctx context.Context) (primitives.Slot, error) {
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
_, span := trace.StartSpan(ctx, "BeaconDB.EarliestSlot")
defer span.End()
earliestAvailableSlot := primitives.Slot(0)
err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the root corresponding to the earliest available block.
c := tx.Bucket(blockSlotIndicesBucket).Cursor()
k, v := c.First()
if k == nil || v == nil {
return ErrNotFound
}
slot := bytesutil.BytesToSlotBigEndian(k)
// The genesis block may be indexed in this bucket, even if we started from a checkpoint.
// Because of this, we check the next block. If the next block is still in the genesis epoch,
// then we assume we have the whole chain.
if slot != 0 {
earliestAvailableSlot = slot
}
k, v = c.Next()
if k == nil || v == nil {
// Only the genesis block is available.
return nil
}
slot = bytesutil.BytesToSlotBigEndian(k)
if slot < slotsPerEpoch {
// We are still in the genesis epoch, so we assume we have the whole chain.
return nil
}
earliestAvailableSlot = slot
return nil
})
return earliestAvailableSlot, err
}
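The two-step cursor walk exists because checkpoint sync can leave the genesis block indexed next to a much later origin block. A worked trace under an assumed SlotsPerEpoch of 32, matching the tests below:

// blocks {0}        -> first = 0, no next block            -> earliest = 0
// blocks {0, 31}    -> first = 0, next = 31 < 32           -> earliest = 0
// blocks {0, 32}    -> first = 0, next = 32 >= 32          -> earliest = 32
// blocks {320, 352} -> first = 320 (non-zero), then 352    -> earliest = 352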
type slotRoot struct {
slot primitives.Slot
root [32]byte
@@ -883,7 +924,7 @@ func (s *Store) slotRootsInRange(ctx context.Context, start, end primitives.Slot
c := bkt.Cursor()
for k, v := c.Seek(key); ; /* rely on internal checks to exit */ k, v = c.Prev() {
if len(k) == 0 && len(v) == 0 {
// The `edge`` variable and this `if` deal with 2 edge cases:
// The `edge` variable and this `if` deal with 2 edge cases:
// - Seeking past the end of the bucket (the `end` param is higher than the highest slot).
// - Seeking before the beginning of the bucket (the `start` param is lower than the lowest slot).
// In both of these cases k,v will be nil and we can handle the same way using `edge` to

View File

@@ -1,6 +1,7 @@
package kv
import (
"context"
"fmt"
"testing"
"time"
@@ -1327,3 +1328,86 @@ func TestStore_RegistrationsByValidatorID(t *testing.T) {
want := errors.Wrap(ErrNotFoundFeeRecipient, "validator id 3")
require.Equal(t, want.Error(), err.Error())
}
// createAndSaveBlock creates a phase0 beacon block at the specified slot and saves it to the database.
func createAndSaveBlock(t *testing.T, ctx context.Context, db *Store, slot primitives.Slot) {
block := util.NewBeaconBlock()
block.Block.Slot = slot
wrappedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wrappedBlock))
}
func TestStore_EarliestSlot(t *testing.T) {
ctx := t.Context()
t.Run("empty database returns ErrNotFound", func(t *testing.T) {
db := setupDB(t)
slot, err := db.EarliestSlot(ctx)
require.ErrorIs(t, err, ErrNotFound)
assert.Equal(t, primitives.Slot(0), slot)
})
t.Run("database with only genesis block", func(t *testing.T) {
db := setupDB(t)
// Create and save genesis block (slot 0)
createAndSaveBlock(t, ctx, db, 0)
slot, err := db.EarliestSlot(ctx)
require.NoError(t, err)
assert.Equal(t, primitives.Slot(0), slot)
})
t.Run("database with genesis and blocks in genesis epoch", func(t *testing.T) {
db := setupDB(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Create and save genesis block (slot 0)
createAndSaveBlock(t, ctx, db, 0)
// Create and save a block in the genesis epoch
createAndSaveBlock(t, ctx, db, primitives.Slot(slotsPerEpoch-1))
slot, err := db.EarliestSlot(ctx)
require.NoError(t, err)
assert.Equal(t, primitives.Slot(0), slot)
})
t.Run("database with genesis and blocks beyond genesis epoch", func(t *testing.T) {
db := setupDB(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Create and save genesis block (slot 0)
createAndSaveBlock(t, ctx, db, 0)
// Create and save a block beyond the genesis epoch
nextEpochSlot := primitives.Slot(slotsPerEpoch)
createAndSaveBlock(t, ctx, db, nextEpochSlot)
slot, err := db.EarliestSlot(ctx)
require.NoError(t, err)
assert.Equal(t, nextEpochSlot, slot)
})
t.Run("database starting from checkpoint (non-zero earliest slot)", func(t *testing.T) {
db := setupDB(t)
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Simulate starting from a checkpoint by creating blocks starting from a later slot
checkpointSlot := primitives.Slot(slotsPerEpoch * 10) // 10 epochs later
nextEpochSlot := checkpointSlot + slotsPerEpoch
// Create and save first block at checkpoint slot
createAndSaveBlock(t, ctx, db, checkpointSlot)
// Create and save another block in the next epoch
createAndSaveBlock(t, ctx, db, nextEpochSlot)
slot, err := db.EarliestSlot(ctx)
require.NoError(t, err)
assert.Equal(t, nextEpochSlot, slot)
})
}

View File

@@ -0,0 +1,129 @@
package kv
import (
"context"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) custody group count and earliest available slot.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateCustodyInfo")
defer span.End()
storedGroupCount, storedEarliestAvailableSlot := uint64(0), primitives.Slot(0)
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
// Retrieve the stored custody group count.
storedGroupCountBytes := bucket.Get(groupCountKey)
if len(storedGroupCountBytes) != 0 {
storedGroupCount = bytesutil.BytesToUint64BigEndian(storedGroupCountBytes)
}
// Retrieve the stored earliest available slot.
storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
if len(storedEarliestAvailableSlotBytes) != 0 {
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
}
// Exit early if the new custody group count is lower than or equal to the stored one.
if custodyGroupCount <= storedGroupCount {
return nil
}
storedGroupCount, storedEarliestAvailableSlot = custodyGroupCount, earliestAvailableSlot
// Store the earliest available slot.
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
// Store the custody group count.
bytes = bytesutil.Uint64ToBytesBigEndian(custodyGroupCount)
if err := bucket.Put(groupCountKey, bytes); err != nil {
return errors.Wrap(err, "put custody group count")
}
return nil
}); err != nil {
return 0, 0, err
}
log.WithFields(logrus.Fields{
"earliestAvailableSlot": storedEarliestAvailableSlot,
"groupCount": storedGroupCount,
}).Debug("Custody info")
return storedEarliestAvailableSlot, storedGroupCount, nil
}
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.
func (s *Store) UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateSubscribedToAllDataSubnets")
defer span.End()
result := false
if !subscribed {
if err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
// Retrieve the subscribe all data subnets flag.
bytes := bucket.Get(subscribeAllDataSubnetsKey)
if len(bytes) == 0 {
return nil
}
if bytes[0] == 1 {
result = true
}
return nil
}); err != nil {
return false, err
}
return result, nil
}
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
bytes := bucket.Get(subscribeAllDataSubnetsKey)
if len(bytes) != 0 && bytes[0] == 1 {
result = true
}
if err := bucket.Put(subscribeAllDataSubnetsKey, []byte{1}); err != nil {
return errors.Wrap(err, "put subscribe all data subnets")
}
return nil
}); err != nil {
return false, err
}
return result, nil
}

View File

@@ -0,0 +1,176 @@
package kv
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
bolt "go.etcd.io/bbolt"
)
// getCustodyInfoFromDB reads the custody info directly from the database for testing purposes.
func getCustodyInfoFromDB(t *testing.T, db *Store) (primitives.Slot, uint64) {
t.Helper()
var earliestSlot primitives.Slot
var groupCount uint64
err := db.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
// Read group count
groupCountBytes := bucket.Get(groupCountKey)
if len(groupCountBytes) != 0 {
groupCount = bytesutil.BytesToUint64BigEndian(groupCountBytes)
}
// Read earliest available slot
earliestSlotBytes := bucket.Get(earliestAvailableSlotKey)
if len(earliestSlotBytes) != 0 {
earliestSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(earliestSlotBytes))
}
return nil
})
require.NoError(t, err)
return earliestSlot, groupCount
}
// getSubscriptionStatusFromDB reads the subscription status directly from the database for testing purposes.
func getSubscriptionStatusFromDB(t *testing.T, db *Store) bool {
t.Helper()
var subscribed bool
err := db.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(custodyBucket)
if bucket == nil {
return nil
}
bytes := bucket.Get(subscribeAllDataSubnetsKey)
if len(bytes) != 0 && bytes[0] == 1 {
subscribed = true
}
return nil
})
require.NoError(t, err)
return subscribed
}
func TestUpdateCustodyInfo(t *testing.T) {
ctx := t.Context()
t.Run("initial update with empty database", func(t *testing.T) {
const (
earliestSlot = primitives.Slot(100)
groupCount = uint64(5)
)
db := setupDB(t)
slot, count, err := db.UpdateCustodyInfo(ctx, earliestSlot, groupCount)
require.NoError(t, err)
require.Equal(t, earliestSlot, slot)
require.Equal(t, groupCount, count)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, earliestSlot, storedSlot)
require.Equal(t, groupCount, storedCount)
})
t.Run("update with higher group count", func(t *testing.T) {
const (
initialSlot = primitives.Slot(100)
initialCount = uint64(5)
earliestSlot = primitives.Slot(200)
groupCount = uint64(10)
)
db := setupDB(t)
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
slot, count, err := db.UpdateCustodyInfo(ctx, earliestSlot, groupCount)
require.NoError(t, err)
require.Equal(t, earliestSlot, slot)
require.Equal(t, groupCount, count)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, earliestSlot, storedSlot)
require.Equal(t, groupCount, storedCount)
})
t.Run("update with lower group count should not update", func(t *testing.T) {
const (
initialSlot = primitives.Slot(200)
initialCount = uint64(10)
earliestSlot = primitives.Slot(300)
groupCount = uint64(8)
)
db := setupDB(t)
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
slot, count, err := db.UpdateCustodyInfo(ctx, earliestSlot, groupCount)
require.NoError(t, err)
require.Equal(t, initialSlot, slot)
require.Equal(t, initialCount, count)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
}
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
ctx := context.Background()
t.Run("initial update with empty database - set to false", func(t *testing.T) {
db := setupDB(t)
prev, err := db.UpdateSubscribedToAllDataSubnets(ctx, false)
require.NoError(t, err)
require.Equal(t, false, prev)
stored := getSubscriptionStatusFromDB(t, db)
require.Equal(t, false, stored)
})
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
require.NoError(t, err)
prev, err := db.UpdateSubscribedToAllDataSubnets(ctx, false)
require.NoError(t, err)
require.Equal(t, true, prev)
stored := getSubscriptionStatusFromDB(t, db)
require.Equal(t, true, stored)
})
t.Run("attempt to update from true to false (should not change)", func(t *testing.T) {
db := setupDB(t)
_, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
require.NoError(t, err)
prev, err := db.UpdateSubscribedToAllDataSubnets(ctx, true)
require.NoError(t, err)
require.Equal(t, true, prev)
stored := getSubscriptionStatusFromDB(t, db)
require.Equal(t, true, stored)
})
}

View File

@@ -19,6 +19,9 @@ var ErrNotFoundGenesisBlockRoot = errors.Wrap(ErrNotFound, "OriginGenesisRoot")
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")
// ErrNotFoundMetadataSeqNum is a not found error specifically for the metadata sequence number getter
var ErrNotFoundMetadataSeqNum = errors.Wrap(ErrNotFound, "metadata sequence number")
var errEmptyBlockSlice = errors.New("[]blocks.ROBlock is empty")
var errIncorrectBlockParent = errors.New("unexpected missing or forked blocks in a []ROBlock")
var errFinalizedChildNotFound = errors.New("unable to find finalized root descending from backfill batch")

View File

@@ -8,6 +8,7 @@ import (
dbIface "github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/encoding/ssz/detect"
"github.com/OffchainLabs/prysm/v6/genesis"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
@@ -97,8 +98,22 @@ func (s *Store) EnsureEmbeddedGenesis(ctx context.Context) error {
if err != nil {
return err
}
if gs != nil && !gs.IsNil() {
if !state.IsNil(gs) {
return s.SaveGenesisData(ctx, gs)
}
return nil
}
type LegacyGenesisProvider struct {
store *Store
}
func NewLegacyGenesisProvider(store *Store) *LegacyGenesisProvider {
return &LegacyGenesisProvider{store: store}
}
var _ genesis.Provider = &LegacyGenesisProvider{}
func (p *LegacyGenesisProvider) Genesis(ctx context.Context) (state.BeaconState, error) {
return p.store.LegacyGenesisState(ctx)
}
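LegacyGenesisProvider adapts the kv store to the genesis.Provider interface, so callers can fetch the legacy on-disk genesis state without depending on the db layer directly. A minimal usage sketch (the caller is hypothetical):

// Resolve genesis through the provider abstraction rather than the store.
func genesisFromLegacyStore(ctx context.Context, store *Store) (state.BeaconState, error) {
	var provider genesis.Provider = NewLegacyGenesisProvider(store)
	return provider.Genesis(ctx)
}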

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -152,6 +153,7 @@ func TestEnsureEmbeddedGenesis(t *testing.T) {
require.NoError(t, undo())
}()
genesis.StoreEmbeddedDuringTest(t, params.BeaconConfig().ConfigName)
ctx := t.Context()
db := setupDB(t)

View File

@@ -123,6 +123,7 @@ var Buckets = [][]byte{
feeRecipientBucket,
registrationBucket,
custodyBucket,
}
// KVStoreOption is a functional option that modifies a kv.Store.

beacon-chain/db/kv/p2p.go (new file)
View File

@@ -0,0 +1,42 @@
package kv
import (
"context"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
bolt "go.etcd.io/bbolt"
)
// MetadataSeqNum retrieves the p2p metadata sequence number from the database.
// It returns 0 and ErrNotFoundMetadataSeqNum if the key does not exist.
func (s *Store) MetadataSeqNum(ctx context.Context) (uint64, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.MetadataSeqNum")
defer span.End()
var seqNum uint64
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(chainMetadataBucket)
val := bkt.Get(metadataSequenceNumberKey)
if val == nil {
return ErrNotFoundMetadataSeqNum
}
seqNum = bytesutil.BytesToUint64BigEndian(val)
return nil
})
return seqNum, err
}
// SaveMetadataSeqNum saves the p2p metadata sequence number to the database.
func (s *Store) SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveMetadataSeqNum")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(chainMetadataBucket)
val := bytesutil.Uint64ToBytesBigEndian(seqNum)
return bkt.Put(metadataSequenceNumberKey, val)
})
}
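Persisting the sequence number keeps the p2p metadata seqnum monotonically increasing across restarts, so peers never observe it reset. A hedged sketch of the startup flow (helper name and wiring assumed):

// Restore the last persisted seqnum, bump it, and persist the new value.
func nextMetadataSeqNum(ctx context.Context, s *Store) (uint64, error) {
	seq, err := s.MetadataSeqNum(ctx)
	if err != nil && !errors.Is(err, ErrNotFoundMetadataSeqNum) {
		return 0, err
	}
	seq++ // a fresh database starts the sequence at 1
	if err := s.SaveMetadataSeqNum(ctx, seq); err != nil {
		return 0, err
	}
	return seq, nil
}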

View File

@@ -0,0 +1,33 @@
package kv
import (
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestStore_MetadataSeqNum(t *testing.T) {
ctx := t.Context()
db := setupDB(t)
seqNum, err := db.MetadataSeqNum(ctx)
require.ErrorIs(t, err, ErrNotFoundMetadataSeqNum)
assert.Equal(t, uint64(0), seqNum)
initialSeqNum := uint64(42)
err = db.SaveMetadataSeqNum(ctx, initialSeqNum)
require.NoError(t, err)
retrievedSeqNum, err := db.MetadataSeqNum(ctx)
require.NoError(t, err)
assert.Equal(t, initialSeqNum, retrievedSeqNum)
updatedSeqNum := uint64(43)
err = db.SaveMetadataSeqNum(ctx, updatedSeqNum)
require.NoError(t, err)
retrievedSeqNum, err = db.MetadataSeqNum(ctx)
require.NoError(t, err)
assert.Equal(t, updatedSeqNum, retrievedSeqNum)
}

View File

@@ -42,6 +42,7 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastValidatedCheckpointKey = []byte("last-validated-checkpoint")
metadataSequenceNumberKey = []byte("metadata-seq-number")
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.
@@ -70,4 +71,10 @@ var (
// Migrations
migrationsBucket = []byte("migrations")
// Custody
custodyBucket = []byte("custody")
groupCountKey = []byte("group-count")
earliestAvailableSlotKey = []byte("earliest-available-slot")
subscribeAllDataSubnetsKey = []byte("subscribe-all-data-subnets")
)

View File

@@ -6,14 +6,12 @@ import (
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/genesis"
statenative "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
@@ -65,21 +63,21 @@ func (s *Store) StateOrError(ctx context.Context, blockRoot [32]byte) (state.Bea
return st, nil
}
// GenesisState returns the genesis state of the beacon chain.
func (s *Store) GenesisState(ctx context.Context) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.GenesisState")
st, err := genesis.State()
if errors.Is(err, genesis.ErrGenesisStateNotInitialized) {
log.WithError(err).Error("genesis state not initialized, returning nil state. this should only happen in tests")
return nil, nil
}
return st, err
}
// LegacyGenesisState returns the genesis state stored in the database.
func (s *Store) LegacyGenesisState(ctx context.Context) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.LegacyGenesisState")
defer span.End()
cached, err := genesis.State(params.BeaconConfig().ConfigName)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
span.SetAttributes(trace.BoolAttribute("cache_hit", cached != nil))
if cached != nil {
return cached, nil
}
var err error
var st state.BeaconState
err = s.db.View(func(tx *bolt.Tx) error {
// Retrieve genesis block's signing root from blocks bucket,
@@ -253,6 +251,10 @@ func (s *Store) saveStatesEfficientInternal(ctx context.Context, tx *bolt.Tx, bl
if err := s.processElectra(ctx, rawType, rt[:], bucket, valIdxBkt, validatorKeys[i]); err != nil {
return err
}
case *ethpb.BeaconStateFulu:
if err := s.processFulu(ctx, rawType, rt[:], bucket, valIdxBkt, validatorKeys[i]); err != nil {
return err
}
default:
return errors.New("invalid state type")
}
@@ -368,6 +370,24 @@ func (s *Store) processElectra(ctx context.Context, pbState *ethpb.BeaconStateEl
return nil
}
func (s *Store) processFulu(ctx context.Context, pbState *ethpb.BeaconStateFulu, rootHash []byte, bucket, valIdxBkt *bolt.Bucket, validatorKey []byte) error {
valEntries := pbState.Validators
pbState.Validators = make([]*ethpb.Validator, 0)
rawObj, err := pbState.MarshalSSZ()
if err != nil {
return err
}
encodedState := snappy.Encode(nil, append(fuluKey, rawObj...))
if err := bucket.Put(rootHash, encodedState); err != nil {
return err
}
pbState.Validators = valEntries
if err := valIdxBkt.Put(rootHash, validatorKey); err != nil {
return err
}
return nil
}
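processFulu mirrors the other fork handlers: validators are nulled out (they are stored separately under validatorKey), and the SSZ-marshaled state is snappy-compressed behind a fork-identifying key prefix so the reader can pick the right type on decode. A decode-side sketch with hypothetical naming:

// Hypothetical reader for a fulu-prefixed state entry.
func decodeFuluState(enc []byte) (*ethpb.BeaconStateFulu, error) {
	raw, err := snappy.Decode(nil, enc)
	if err != nil {
		return nil, err
	}
	if !bytes.HasPrefix(raw, fuluKey) {
		return nil, errors.New("not a fulu-encoded state")
	}
	st := &ethpb.BeaconStateFulu{}
	if err := st.UnmarshalSSZ(raw[len(fuluKey):]); err != nil {
		return nil, err
	}
	return st, nil
}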
func (s *Store) storeValidatorEntriesSeparately(ctx context.Context, tx *bolt.Tx, validatorsEntries map[string]*ethpb.Validator) error {
valBkt := tx.Bucket(stateValidatorsBucket)
for hashStr, validatorEntry := range validatorsEntries {

View File

@@ -15,6 +15,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -488,7 +489,7 @@ func TestGenesisState_CanSaveRetrieve(t *testing.T) {
require.NoError(t, err)
require.NoError(t, st.SetSlot(1))
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), headRoot))
require.NoError(t, db.SaveState(t.Context(), st, headRoot))
genesis.StoreStateDuringTest(t, st)
savedGenesisS, err := db.GenesisState(t.Context())
require.NoError(t, err)
@@ -661,7 +662,7 @@ func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
require.NoError(t, err)
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), genesisRoot))
require.NoError(t, db.SaveState(t.Context(), genesisState, genesisRoot))
genesis.StoreStateDuringTest(t, genesisState)
b := util.NewBeaconBlock()
b.Block.Slot = 1

View File

@@ -43,6 +43,7 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
return errors.Wrap(err, "failed to initialize origin block w/ bytes + config+fork")
}
blk := wblk.Block()
slot := blk.Slot()
blockRoot, err := blk.HashTreeRoot()
if err != nil {
@@ -51,43 +52,43 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
pr := blk.ParentRoot()
bf := &dbval.BackfillStatus{
LowSlot: uint64(wblk.Block().Slot()),
LowSlot: uint64(slot),
LowRoot: blockRoot[:],
LowParentRoot: pr[:],
OriginRoot: blockRoot[:],
OriginSlot: uint64(wblk.Block().Slot()),
OriginSlot: uint64(slot),
}
if err = s.SaveBackfillStatus(ctx, bf); err != nil {
return errors.Wrap(err, "unable to save backfill status data to db for checkpoint sync")
}
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Info("Saving checkpoint block to db")
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Info("Saving checkpoint data into database")
if err := s.SaveBlock(ctx, wblk); err != nil {
return errors.Wrap(err, "could not save checkpoint block")
return errors.Wrap(err, "save block")
}
// save state
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Info("Calling SaveState")
if err = s.SaveState(ctx, state, blockRoot); err != nil {
return errors.Wrap(err, "could not save state")
return errors.Wrap(err, "save state")
}
if err = s.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: state.Slot(),
Root: blockRoot[:],
}); err != nil {
return errors.Wrap(err, "could not save state summary")
return errors.Wrap(err, "save state summary")
}
// mark block as head of chain, so that processing will pick up from this point
if err = s.SaveHeadBlockRoot(ctx, blockRoot); err != nil {
return errors.Wrap(err, "could not save head block root")
return errors.Wrap(err, "save head block root")
}
// save origin block root in a special key, to be used when the canonical
// origin (start of chain, i.e. an alternative to genesis) block or state is needed
if err = s.SaveOriginCheckpointBlockRoot(ctx, blockRoot); err != nil {
return errors.Wrap(err, "could not save origin block root")
return errors.Wrap(err, "save origin checkpoint block root")
}
// rebuild the checkpoint from the block
@@ -96,15 +97,18 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
if err != nil {
return err
}
chkpt := &ethpb.Checkpoint{
Epoch: primitives.Epoch(slotEpoch),
Root: blockRoot[:],
}
if err = s.SaveJustifiedCheckpoint(ctx, chkpt); err != nil {
return errors.Wrap(err, "could not mark checkpoint sync block as justified")
return errors.Wrap(err, "save justified checkpoint")
}
if err = s.SaveFinalizedCheckpoint(ctx, chkpt); err != nil {
return errors.Wrap(err, "could not mark checkpoint sync block as finalized")
return errors.Wrap(err, "save finalized checkpoint")
}
return nil
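For context, a minimal sketch of the call site during checkpoint sync; stateSSZ and blockSSZ are hypothetical names for SSZ bytes fetched from a trusted provider:

// SaveOrigin persists the block, its state, a state summary, the head and
// origin roots, and matching justified/finalized checkpoints in one pass.
if err := store.SaveOrigin(ctx, stateSSZ, blockSSZ); err != nil {
	return errors.Wrap(err, "checkpoint sync origin save")
}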

View File

@@ -3,9 +3,9 @@ package kv
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/genesis"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
@@ -18,7 +18,11 @@ func TestSaveOrigin(t *testing.T) {
ctx := t.Context()
db := setupDB(t)
st, err := genesis.State(params.MainnetName)
// Initialize genesis with mainnet config - this will load the embedded mainnet state
require.NoError(t, genesis.Initialize(ctx, t.TempDir()))
// Get the initialized genesis state
st, err := genesis.State()
require.NoError(t, err)
sb, err := st.MarshalSSZ()

View File

@@ -125,6 +125,7 @@ go_test(
"//contracts/deposit/mock:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/clientstats:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -22,6 +22,7 @@ import (
contracts "github.com/OffchainLabs/prysm/v6/contracts/deposit"
"github.com/OffchainLabs/prysm/v6/contracts/deposit/mock"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/clientstats"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -381,6 +382,7 @@ func TestInitDepositCache_OK(t *testing.T) {
require.NoError(t, err)
require.NoError(t, s.cfg.beaconDB.SaveGenesisBlockRoot(t.Context(), blockRootA))
require.NoError(t, s.cfg.beaconDB.SaveState(t.Context(), emptyState, blockRootA))
genesis.StoreStateDuringTest(t, emptyState)
s.chainStartData.Chainstarted = true
require.NoError(t, s.initDepositCaches(t.Context(), ctrs))
require.Equal(t, 3, len(s.cfg.depositCache.PendingContainers(t.Context(), nil)))
@@ -446,6 +448,7 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
require.NoError(t, s.cfg.beaconDB.SaveGenesisBlockRoot(t.Context(), headRoot))
require.NoError(t, s.cfg.beaconDB.SaveState(t.Context(), emptyState, headRoot))
require.NoError(t, stateGen.SaveState(t.Context(), headRoot, emptyState))
genesis.StoreStateDuringTest(t, emptyState)
s.cfg.stateGen = stateGen
require.NoError(t, emptyState.SetEth1DepositIndex(3))
@@ -594,6 +597,7 @@ func TestService_EnsureConsistentPowchainData(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
genesis.StoreStateDuringTest(t, genState)
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(t.Context(), genState))
_, err = s1.validPowchainData(t.Context())
require.NoError(t, err)
@@ -655,6 +659,7 @@ func TestService_EnsureValidPowchainData(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, genState.SetSlot(1000))
genesis.StoreStateDuringTest(t, genState)
require.NoError(t, s1.cfg.beaconDB.SaveGenesisData(t.Context(), genState))
err = s1.cfg.beaconDB.SaveExecutionChainData(t.Context(), &ethpb.ETH1ChainData{

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"clear_db.go",
"config.go",
"log.go",
"node.go",
@@ -23,7 +24,6 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",
@@ -50,7 +50,6 @@ go_library(
"//beacon-chain/sync/backfill:go_default_library",
"//beacon-chain/sync/backfill/coverage:go_default_library",
"//beacon-chain/sync/checkpoint:go_default_library",
"//beacon-chain/sync/genesis:go_default_library",
"//beacon-chain/sync/initial-sync:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd:go_default_library",
@@ -60,6 +59,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/prometheus:go_default_library",
"//monitoring/tracing:go_default_library",
"//runtime:go_default_library",

View File

@@ -0,0 +1,101 @@
package node
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
type dbClearer struct {
shouldClear bool
force bool
confirmed bool
}
const (
clearConfirmation = "This will delete your beacon chain database stored in your data directory. " +
"Your database backups will not be removed - do you want to proceed? (Y/N)"
clearDeclined = "Database will not be deleted. No changes have been made."
)
func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error) {
if !c.shouldProceed() {
return db, nil
}
log.Warning("Removing database")
if err := db.ClearDB(); err != nil {
return nil, errors.Wrap(err, "could not clear database")
}
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil
}
log.Warning("Removing blob storage")
if err := bs.Clear(); err != nil {
return errors.Wrap(err, "could not clear blob storage")
}
return nil
}
func (c *dbClearer) clearColumns(cs *filesystem.DataColumnStorage) error {
if !c.shouldProceed() {
return nil
}
log.Warning("Removing data columns storage")
if err := cs.Clear(); err != nil {
return errors.Wrap(err, "could not clear data columns storage")
}
return nil
}
func (c *dbClearer) clearSlasher(ctx context.Context, db *slasherkv.Store) (*slasherkv.Store, error) {
if !c.shouldProceed() {
return db, nil
}
log.Warning("Removing slasher database")
if err := db.ClearDB(); err != nil {
return nil, errors.Wrap(err, "could not clear slasher database")
}
return slasherkv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) shouldProceed() bool {
if !c.shouldClear {
return false
}
if c.force {
// The force flag skips the interactive prompt entirely.
return true
}
if !c.confirmed {
confirmed, err := cmd.ConfirmAction(clearConfirmation, clearDeclined)
if err != nil {
log.WithError(err).Error("Not clearing db due to confirmation error")
return false
}
// Remember the answer so that once the user agrees, the later stores
// (blobs, columns, slasher) don't prompt again.
c.confirmed = confirmed
}
return c.confirmed
}
func newDbClearer(cliCtx *cli.Context) *dbClearer {
force := cliCtx.Bool(cmd.ForceClearDB.Name)
return &dbClearer{
shouldClear: cliCtx.Bool(cmd.ClearDB.Name) || force,
force: force,
}
}
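A hedged sketch of how the clearer is threaded through startup, mirroring the wiring in node.New below; clearAllStores is a hypothetical helper, not part of this diff:

func clearAllStores(ctx context.Context, cliCtx *cli.Context, kvdb *kv.Store, blobs *filesystem.BlobStorage, cols *filesystem.DataColumnStorage) (*kv.Store, error) {
	clearer := newDbClearer(cliCtx)
	// clearKV deletes and reopens the store; callers must use the returned handle.
	kvdb, err := clearer.clearKV(ctx, kvdb)
	if err != nil {
		return nil, err
	}
	if err := clearer.clearBlobs(blobs); err != nil {
		return nil, err
	}
	if err := clearer.clearColumns(cols); err != nil {
		return nil, err
	}
	// The slasher store, when enabled, is cleared the same way inside
	// startSlasherDB via clearer.clearSlasher.
	return kvdb, nil
}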

View File

@@ -26,7 +26,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache/depositsnapshot"
lightclient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
@@ -53,7 +52,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/backfill/coverage"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/checkpoint"
"github.com/OffchainLabs/prysm/v6/beacon-chain/sync/genesis"
initialsync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/cmd"
@@ -63,6 +61,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/prometheus"
"github.com/OffchainLabs/prysm/v6/runtime"
"github.com/OffchainLabs/prysm/v6/runtime/prereqs"
@@ -114,7 +113,7 @@ type BeaconNode struct {
slasherAttestationsFeed *event.Feed
finalizedStateAtStartUp state.BeaconState
serviceFlagOpts *serviceFlagOpts
GenesisInitializer genesis.Initializer
GenesisProviders []genesis.Provider
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
@@ -124,11 +123,11 @@ type BeaconNode struct {
BlobStorageOptions []filesystem.BlobStorageOption
DataColumnStorage *filesystem.DataColumnStorage
DataColumnStorageOptions []filesystem.DataColumnStorageOption
custodyInfo *peerdas.CustodyInfo
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
slasherEnabled bool
lcStore *lightclient.Store
ConfigOptions []params.Option
}
// New creates a new node instance, sets up configuration options, and registers
@@ -137,18 +136,13 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
if err := configureBeacon(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not set beacon configuration options")
}
// Initializes any forks here.
params.BeaconConfig().InitializeForkSchedule()
registry := runtime.NewServiceRegistry()
ctx := cliCtx.Context
beacon := &BeaconNode{
cliCtx: cliCtx,
ctx: ctx,
cancel: cancel,
services: registry,
services: runtime.NewServiceRegistry(),
stop: make(chan struct{}),
stateFeed: new(event.Feed),
blockFeed: new(event.Feed),
@@ -166,7 +160,6 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
serviceFlagOpts: &serviceFlagOpts{},
initialSyncComplete: make(chan struct{}),
syncChecker: &initialsync.SyncChecker{},
custodyInfo: &peerdas.CustodyInfo{},
slasherEnabled: cliCtx.Bool(flags.SlasherFlag.Name),
}
@@ -176,6 +169,25 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
}
dbClearer := newDbClearer(cliCtx)
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
boltFname := filepath.Join(dataDir, kv.BeaconNodeDbDirName)
kvdb, err := openDB(ctx, boltFname, dbClearer)
if err != nil {
return nil, errors.Wrap(err, "could not open database")
}
beacon.db = kvdb
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
return nil, errors.Wrap(err, "could not initialize genesis state")
}
beacon.ConfigOptions = append([]params.Option{params.WithGenesisValidatorsRoot(genesis.ValidatorsRoot())}, beacon.ConfigOptions...)
params.BeaconConfig().ApplyOptions(beacon.ConfigOptions...)
params.BeaconConfig().InitializeForkSchedule()
params.LogDigests(params.BeaconConfig())
synchronizer := startup.NewClockSynchronizer()
beacon.clockWaiter = synchronizer
beacon.forkChoicer = doublylinkedtree.New()
@@ -194,6 +206,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.BlobStorage = blobs
}
if err := dbClearer.clearBlobs(beacon.BlobStorage); err != nil {
return nil, errors.Wrap(err, "could not clear blob storage")
}
if beacon.DataColumnStorage == nil {
dataColumnStorage, err := filesystem.NewDataColumnStorage(cliCtx.Context, beacon.DataColumnStorageOptions...)
@@ -203,8 +218,11 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.DataColumnStorage = dataColumnStorage
}
if err := dbClearer.clearColumns(beacon.DataColumnStorage); err != nil {
return nil, errors.Wrap(err, "could not clear data column storage")
}
bfs, err := startBaseServices(cliCtx, beacon, depositAddress)
bfs, err := startBaseServices(cliCtx, beacon, depositAddress, dbClearer)
if err != nil {
return nil, errors.Wrap(err, "could not start modules")
}
@@ -236,7 +254,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.finalizedStateAtStartUp = nil
if features.Get().EnableLightClient {
beacon.lcStore = lightclient.NewLightClientStore(beacon.db)
beacon.lcStore = lightclient.NewLightClientStore(beacon.db, beacon.fetchP2P(), beacon.StateFeed())
}
return beacon, nil
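The reordering above means genesis is initialized from providers before the fork schedule is derived, so the digests can incorporate the validators root. A condensed sketch of that sequence, using only calls that appear in this diff (initGenesisAndForks itself is a hypothetical wrapper):

func initGenesisAndForks(ctx context.Context, dataDir string, kvdb *kv.Store, extra ...genesis.Provider) error {
	// Fall back to the legacy DB-stored genesis if no other provider supplies one.
	providers := append(extra, kv.NewLegacyGenesisProvider(kvdb))
	if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
		return errors.Wrap(err, "could not initialize genesis state")
	}
	// The validators root is known only now, so the fork schedule (and the
	// digests derived from it) must be computed after genesis initialization.
	params.BeaconConfig().ApplyOptions(params.WithGenesisValidatorsRoot(genesis.ValidatorsRoot()))
	params.BeaconConfig().InitializeForkSchedule()
	params.LogDigests(params.BeaconConfig())
	return nil
}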
@@ -292,7 +310,7 @@ func configureBeacon(cliCtx *cli.Context) error {
return nil
}
func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress string) (*backfill.Store, error) {
func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress string, clearer *dbClearer) (*backfill.Store, error) {
ctx := cliCtx.Context
log.Debugln("Starting DB")
if err := beacon.startDB(cliCtx, depositAddress); err != nil {
@@ -302,7 +320,7 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
beacon.BlobStorage.WarmCache()
log.Debugln("Starting Slashing DB")
if err := beacon.startSlasherDB(cliCtx); err != nil {
if err := beacon.startSlasherDB(cliCtx, clearer); err != nil {
return nil, errors.Wrap(err, "could not start slashing DB")
}
@@ -482,43 +500,6 @@ func (b *BeaconNode) Close() {
close(b.stop)
}
func (b *BeaconNode) clearDB(clearDB, forceClearDB bool, d *kv.Store, dbPath string) (*kv.Store, error) {
var err error
clearDBConfirmed := false
if clearDB && !forceClearDB {
const (
actionText = "This will delete your beacon chain database stored in your data directory. " +
"Your database backups will not be removed - do you want to proceed? (Y/N)"
deniedText = "Database will not be deleted. No changes have been made."
)
clearDBConfirmed, err = cmd.ConfirmAction(actionText, deniedText)
if err != nil {
return nil, errors.Wrapf(err, "could not confirm action")
}
}
if clearDBConfirmed || forceClearDB {
log.Warning("Removing database")
if err := d.ClearDB(); err != nil {
return nil, errors.Wrap(err, "could not clear database")
}
if err := b.BlobStorage.Clear(); err != nil {
return nil, errors.Wrap(err, "could not clear blob storage")
}
d, err = kv.NewKVStore(b.ctx, dbPath)
if err != nil {
return nil, errors.Wrap(err, "could not create new database")
}
}
return d, nil
}
func (b *BeaconNode) checkAndSaveDepositContract(depositAddress string) error {
knownContract, err := b.db.DepositContractAddress(b.ctx)
if err != nil {
@@ -542,60 +523,36 @@ func (b *BeaconNode) checkAndSaveDepositContract(depositAddress string) error {
return nil
}
func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
var depositCache cache.DepositCache
baseDir := cliCtx.String(cmd.DataDirFlag.Name)
dbPath := filepath.Join(baseDir, kv.BeaconNodeDbDirName)
clearDBRequired := cliCtx.Bool(cmd.ClearDB.Name)
forceClearDBRequired := cliCtx.Bool(cmd.ForceClearDB.Name)
func openDB(ctx context.Context, dbPath string, clearer *dbClearer) (*kv.Store, error) {
log.WithField("databasePath", dbPath).Info("Checking DB")
d, err := kv.NewKVStore(b.ctx, dbPath)
d, err := kv.NewKVStore(ctx, dbPath)
if err != nil {
return errors.Wrapf(err, "could not create database at %s", dbPath)
return nil, errors.Wrapf(err, "could not create database at %s", dbPath)
}
if clearDBRequired || forceClearDBRequired {
d, err = b.clearDB(clearDBRequired, forceClearDBRequired, d, dbPath)
if err != nil {
return errors.Wrap(err, "could not clear database")
}
d, err = clearer.clearKV(ctx, d)
if err != nil {
return nil, errors.Wrap(err, "could not clear database")
}
if err := d.RunMigrations(b.ctx); err != nil {
return err
}
return d, d.RunMigrations(ctx)
}
b.db = d
depositCache, err = depositsnapshot.New()
func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
depositCache, err := depositsnapshot.New()
if err != nil {
return errors.Wrap(err, "could not create deposit cache")
}
b.depositCache = depositCache
if b.GenesisInitializer != nil {
if err := b.GenesisInitializer.Initialize(b.ctx, d); err != nil {
if errors.Is(err, db.ErrExistingGenesisState) {
return errors.Errorf("Genesis state flag specified but a genesis state "+
"exists already. Run again with --%s and/or ensure you are using the "+
"appropriate testnet flag to load the given genesis state.", cmd.ClearDB.Name)
}
return errors.Wrap(err, "could not load genesis from file")
}
}
if err := b.db.EnsureEmbeddedGenesis(b.ctx); err != nil {
return errors.Wrap(err, "could not ensure embedded genesis")
}
if b.CheckpointInitializer != nil {
log.Info("Checkpoint sync - Downloading origin state and block")
if err := b.CheckpointInitializer.Initialize(b.ctx, d); err != nil {
if err := b.CheckpointInitializer.Initialize(b.ctx, b.db); err != nil {
return err
}
}
@@ -607,49 +564,25 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
log.WithField("address", depositAddress).Info("Deposit contract")
return nil
}
func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context) error {
func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context, clearer *dbClearer) error {
if !b.slasherEnabled {
return nil
}
baseDir := cliCtx.String(cmd.DataDirFlag.Name)
if cliCtx.IsSet(flags.SlasherDirFlag.Name) {
baseDir = cliCtx.String(flags.SlasherDirFlag.Name)
}
dbPath := filepath.Join(baseDir, kv.BeaconNodeDbDirName)
clearDB := cliCtx.Bool(cmd.ClearDB.Name)
forceClearDB := cliCtx.Bool(cmd.ForceClearDB.Name)
log.WithField("databasePath", dbPath).Info("Checking DB")
d, err := slasherkv.NewKVStore(b.ctx, dbPath)
if err != nil {
return err
}
clearDBConfirmed := false
if clearDB && !forceClearDB {
actionText := "This will delete your beacon chain database stored in your data directory. " +
"Your database backups will not be removed - do you want to proceed? (Y/N)"
deniedText := "Database will not be deleted. No changes have been made."
clearDBConfirmed, err = cmd.ConfirmAction(actionText, deniedText)
if err != nil {
return err
}
d, err = clearer.clearSlasher(b.ctx, d)
if err != nil {
return errors.Wrap(err, "could not clear slasher database")
}
if clearDBConfirmed || forceClearDB {
log.Warning("Removing database")
if err := d.ClearDB(); err != nil {
return errors.Wrap(err, "could not clear database")
}
d, err = slasherkv.NewKVStore(b.ctx, dbPath)
if err != nil {
return errors.Wrap(err, "could not create new database")
}
}
b.slasherDB = d
return nil
}
@@ -705,7 +638,6 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
HostDNS: cliCtx.String(cmd.P2PHostDNS.Name),
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
MetaDataDir: cliCtx.String(cmd.P2PMetadata.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
@@ -717,7 +649,6 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
StateNotifier: b,
DB: b.db,
ClockWaiter: b.clockWaiter,
CustodyInfo: b.custodyInfo,
})
if err != nil {
return err
@@ -800,7 +731,6 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
blockchain.WithPayloadIDCache(b.payloadIDCache),
blockchain.WithSyncChecker(b.syncChecker),
blockchain.WithCustodyInfo(b.custodyInfo),
blockchain.WithSlasherEnabled(b.slasherEnabled),
blockchain.WithLightClientStore(b.lcStore),
)
@@ -888,7 +818,7 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithDataColumnStorage(b.DataColumnStorage),
regularsync.WithVerifierWaiter(b.verifyInitWaiter),
regularsync.WithAvailableBlocker(bFillStore),
regularsync.WithCustodyInfo(b.custodyInfo),
regularsync.WithTrackedValidatorsCache(b.trackedValidatorsCache),
regularsync.WithSlasherEnabled(b.slasherEnabled),
regularsync.WithLightClientStore(b.lcStore),
regularsync.WithBatchVerifierLimit(b.cliCtx.Int(flags.BatchVerifierLimit.Name)),

View File

@@ -5,6 +5,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/config/params"
)
// Option for beacon node configuration.
@@ -51,6 +52,13 @@ func WithBlobStorageOptions(opt ...filesystem.BlobStorageOption) Option {
}
}
func WithConfigOptions(opt ...params.Option) Option {
return func(bn *BeaconNode) error {
bn.ConfigOptions = append(bn.ConfigOptions, opt...)
return nil
}
}
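Hypothetical usage: inject params options at construction time so they are applied before the fork schedule is initialized (root stands in for a known [32]byte genesis validators root):

bn, err := node.New(cliCtx, cancel,
	node.WithConfigOptions(params.WithGenesisValidatorsRoot(root)),
)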
// WithDataColumnStorage sets the DataColumnStorage backend for the BeaconNode
func WithDataColumnStorage(bs *filesystem.DataColumnStorage) Option {
return func(bn *BeaconNode) error {

View File

@@ -49,6 +49,7 @@ go_library(
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
@@ -71,7 +72,6 @@ go_library(
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//runtime:go_default_library",
@@ -147,7 +147,6 @@ go_test(
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db/testing:go_default_library",
@@ -169,17 +168,16 @@ go_test(
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network:go_default_library",
"//network/forks:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -15,7 +15,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -274,14 +273,8 @@ func (s *Service) BroadcastLightClientOptimisticUpdate(ctx context.Context, upda
return errors.New("attempted to broadcast nil light client optimistic update")
}
forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(forkDigest)); err != nil {
digest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
if err := s.broadcastObject(ctx, update, lcOptimisticToTopic(digest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client optimistic update")
err := errors.Wrap(err, "could not publish message")
tracing.AnnotateError(span, err)
@@ -300,13 +293,7 @@ func (s *Service) BroadcastLightClientFinalityUpdate(ctx context.Context, update
return errors.New("attempted to broadcast nil light client finality update")
}
forkDigest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(update.AttestedHeader().Beacon().Slot), s.genesisValidatorsRoot)
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
forkDigest := params.ForkDigest(slots.ToEpoch(update.AttestedHeader().Beacon().Slot))
if err := s.broadcastObject(ctx, update, lcFinalityToTopic(forkDigest)); err != nil {
log.WithError(err).Debug("Failed to broadcast light client finality update")
err := errors.Wrap(err, "could not publish message")

View File

@@ -11,20 +11,19 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
testpb "github.com/OffchainLabs/prysm/v6/proto/testing"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -61,6 +60,7 @@ func TestService_Broadcast(t *testing.T) {
topic := "/eth2/%x/testing"
// Set a test gossip mapping for testpb.TestSimpleMessage.
GossipTypeMapping[reflect.TypeOf(msg)] = topic
p.clock = startup.NewClock(p.genesisTime, bytesutil.ToBytes32(p.genesisValidatorsRoot))
digest, err := p.currentForkDigest()
require.NoError(t, err)
topic = fmt.Sprintf(topic, digest)
@@ -229,6 +229,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
@@ -257,6 +258,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
// Set for 2nd peer
@@ -265,7 +267,8 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(subnet, true)
s.updateSubnetRecordWithMetadata(bitV)
err := s.updateSubnetRecordWithMetadata(bitV)
require.NoError(t, err)
}
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
@@ -546,14 +549,11 @@ func TestService_BroadcastLightClientOptimisticUpdate(t *testing.T) {
}),
}
l := util.NewTestLightClient(t, version.Altair)
msg, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
msg, err := util.MockOptimisticUpdate()
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientOptimisticUpdateTopicFormat
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
require.NoError(t, err)
topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, digest)
topic := fmt.Sprintf(LightClientOptimisticUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
@@ -613,14 +613,11 @@ func TestService_BroadcastLightClientFinalityUpdate(t *testing.T) {
}),
}
l := util.NewTestLightClient(t, version.Altair)
msg, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
msg, err := util.MockFinalityUpdate()
require.NoError(t, err)
GossipTypeMapping[reflect.TypeOf(msg)] = LightClientFinalityUpdateTopicFormat
digest, err := forks.ForkDigestFromEpoch(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot), p.genesisValidatorsRoot)
require.NoError(t, err)
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, digest)
topic := fmt.Sprintf(LightClientFinalityUpdateTopicFormat, params.ForkDigest(slots.ToEpoch(msg.AttestedHeader().Beacon().Slot)))
// External peer subscribes to the topic.
topic += p.Encoding().ProtocolSuffix()
@@ -699,6 +696,7 @@ func TestService_BroadcastDataColumn(t *testing.T) {
subnetsLock: make(map[uint64]*sync.RWMutex),
subnetsLockLock: sync.Mutex{},
peers: peers.NewStatus(t.Context(), &peers.StatusConfig{ScorerParams: &scorers.Config{}}),
custodyInfo: &custodyInfo{},
}
// Create a listener.

View File

@@ -4,7 +4,6 @@ import (
"time"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
)
@@ -28,7 +27,6 @@ type Config struct {
PrivateKey string
DataDir string
DiscoveryDir string
MetaDataDir string
QUICPort uint
TCPPort uint
UDPPort uint
@@ -38,9 +36,8 @@ type Config struct {
AllowListCIDR string
DenyListCIDR []string
StateNotifier statefeed.Notifier
DB db.ReadOnlyDatabase
DB db.ReadOnlyDatabaseWithSeqNum
ClockWaiter startup.ClockWaiter
CustodyInfo *peerdas.CustodyInfo
}
// validateConfig validates whether the values provided are accurate and will set

View File

@@ -3,11 +3,114 @@ package p2p
import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var _ DataColumnsHandler = (*Service)(nil)
var _ CustodyManager = (*Service)(nil)
// EarliestAvailableSlot returns the earliest available slot.
func (s *Service) EarliestAvailableSlot() (primitives.Slot, error) {
s.custodyInfoLock.RLock()
defer s.custodyInfoLock.RUnlock()
if s.custodyInfo == nil {
return 0, errors.New("no custody info available")
}
return s.custodyInfo.earliestAvailableSlot, nil
}
// CustodyGroupCount returns the custody group count.
func (s *Service) CustodyGroupCount() (uint64, error) {
s.custodyInfoLock.Lock()
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
return 0, errors.New("no custody info available")
}
return s.custodyInfo.groupCount, nil
}
// UpdateCustodyInfo updates the stored custody group count to the incoming one
// if the incoming one is greater than the stored one. In this case, the
// incoming earliest available slot should be greater than or equal to the
// stored one or an error is returned.
//
// - If there is no stored custody info, or
// - If the incoming earliest available slot is greater than or equal to the
// Fulu fork slot and the incoming custody group count is greater than the
// number of samples per slot,
//
// then the stored earliest available slot is updated to the incoming one.
//
// This function returns the (possibly updated) earliest available slot and
// custody group count; a worked example follows the function body.
//
// Rationale:
// - The custody group count can only be increased (specification)
// - If the custody group count is increased before Fulu, we can still serve
// all the data, since there is no sharding before Fulu. As a consequence
// we do not need to update the earliest available slot in this case.
// - If the custody group count is increased after Fulu, but to a value less
// than or equal to the number of samples per slot, we can still serve all
// the data, since we store all sampled data column sidecars in all cases.
// As a consequence, we do not need to update the earliest available slot
// - If the custody group count is increased after Fulu to a value higher than
// the number of samples per slot, then, until the backfill is complete, we
// are unable to serve the data column sidecars corresponding to the new
// custody groups. As a consequence, we need to update the earliest
// available slot to inform the peers that we are not able to serve data
// column sidecars before this point.
func (s *Service) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
s.custodyInfoLock.Lock()
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
s.custodyInfo = &custodyInfo{
earliestAvailableSlot: earliestAvailableSlot,
groupCount: custodyGroupCount,
}
return earliestAvailableSlot, custodyGroupCount, nil
}
inMemory := s.custodyInfo
if custodyGroupCount <= inMemory.groupCount {
return inMemory.earliestAvailableSlot, inMemory.groupCount, nil
}
if earliestAvailableSlot < inMemory.earliestAvailableSlot {
return 0, 0, errors.Errorf(
"earliest available slot %d is less than the current one %d. (custody group count: %d, current one: %d)",
earliestAvailableSlot, inMemory.earliestAvailableSlot, custodyGroupCount, inMemory.groupCount,
)
}
if custodyGroupCount <= samplesPerSlot {
inMemory.groupCount = custodyGroupCount
return inMemory.earliestAvailableSlot, custodyGroupCount, nil
}
fuluForkSlot, err := fuluForkSlot()
if err != nil {
return 0, 0, errors.Wrap(err, "fulu fork slot")
}
if earliestAvailableSlot < fuluForkSlot {
inMemory.groupCount = custodyGroupCount
return inMemory.earliestAvailableSlot, custodyGroupCount, nil
}
inMemory.earliestAvailableSlot = earliestAvailableSlot
inMemory.groupCount = custodyGroupCount
return earliestAvailableSlot, custodyGroupCount, nil
}
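A worked example of the last rule, assuming the configuration used in the tests below (SamplesPerSlot = 8, FuluForkEpoch = 10, hence a fork slot of 320 at 32 slots per epoch):

service := &Service{custodyInfo: &custodyInfo{earliestAvailableSlot: 50, groupCount: 5}}
// 16 > SamplesPerSlot (8) and slot 500 >= the Fulu fork slot (320), so both
// fields move: columns for the new groups cannot be served before slot 500
// until backfill completes.
eas, cgc, err := service.UpdateCustodyInfo(500, 16)
// eas == 500, cgc == 16, err == nil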
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
@@ -72,3 +175,19 @@ func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
return custodyGroupCount
}
func fuluForkSlot() (primitives.Slot, error) {
beaconConfig := params.BeaconConfig()
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
if err != nil {
return 0, errors.Wrap(err, "epoch start")
}
return forkFuluSlot, nil
}

View File

@@ -1,12 +1,15 @@
package p2p
import (
"context"
"strings"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
@@ -15,6 +18,174 @@ import (
"github.com/libp2p/go-libp2p/core/network"
)
func TestEarliestAvailableSlot(t *testing.T) {
t.Run("No custody info available", func(t *testing.T) {
service := &Service{
custodyInfo: nil,
}
_, err := service.EarliestAvailableSlot()
require.NotNil(t, err)
})
t.Run("Valid custody info", func(t *testing.T) {
const expected primitives.Slot = 100
service := &Service{
custodyInfo: &custodyInfo{
earliestAvailableSlot: expected,
},
}
slot, err := service.EarliestAvailableSlot()
require.NoError(t, err)
require.Equal(t, expected, slot)
})
}
func TestCustodyGroupCount(t *testing.T) {
t.Run("No custody info available", func(t *testing.T) {
service := &Service{
custodyInfo: nil,
}
_, err := service.CustodyGroupCount()
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "no custody info available"))
})
t.Run("Valid custody info", func(t *testing.T) {
const expected uint64 = 5
service := &Service{
custodyInfo: &custodyInfo{
groupCount: expected,
},
}
count, err := service.CustodyGroupCount()
require.NoError(t, err)
require.Equal(t, expected, count)
})
}
func TestUpdateCustodyInfo(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.SamplesPerSlot = 8
config.FuluForkEpoch = 10
params.OverrideBeaconConfig(config)
testCases := []struct {
name string
initialCustodyInfo *custodyInfo
inputSlot primitives.Slot
inputGroupCount uint64
expectedUpdated bool
expectedSlot primitives.Slot
expectedGroupCount uint64
expectedErr string
}{
{
name: "First time setting custody info",
initialCustodyInfo: nil,
inputSlot: 100,
inputGroupCount: 5,
expectedUpdated: true,
expectedSlot: 100,
expectedGroupCount: 5,
},
{
name: "Group count decrease - no update",
initialCustodyInfo: &custodyInfo{
earliestAvailableSlot: 50,
groupCount: 10,
},
inputSlot: 60,
inputGroupCount: 8,
expectedUpdated: false,
expectedSlot: 50,
expectedGroupCount: 10,
},
{
name: "Earliest slot decrease - error",
initialCustodyInfo: &custodyInfo{
earliestAvailableSlot: 100,
groupCount: 5,
},
inputSlot: 50,
inputGroupCount: 10,
expectedErr: "earliest available slot 50 is less than the current one 100",
},
{
name: "Group count increase but <= samples per slot",
initialCustodyInfo: &custodyInfo{
earliestAvailableSlot: 50,
groupCount: 5,
},
inputSlot: 60,
inputGroupCount: 8,
expectedUpdated: true,
expectedSlot: 50,
expectedGroupCount: 8,
},
{
name: "Group count increase > samples per slot, before Fulu fork",
initialCustodyInfo: &custodyInfo{
earliestAvailableSlot: 50,
groupCount: 5,
},
inputSlot: 60,
inputGroupCount: 15,
expectedUpdated: true,
expectedSlot: 50,
expectedGroupCount: 15,
},
{
name: "Group count increase > samples per slot, after Fulu fork",
initialCustodyInfo: &custodyInfo{
earliestAvailableSlot: 50,
groupCount: 5,
},
inputSlot: 500,
inputGroupCount: 15,
expectedUpdated: true,
expectedSlot: 500,
expectedGroupCount: 15,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
service := &Service{
custodyInfo: tc.initialCustodyInfo,
}
slot, groupCount, err := service.UpdateCustodyInfo(tc.inputSlot, tc.inputGroupCount)
if tc.expectedErr != "" {
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), tc.expectedErr))
return
}
require.NoError(t, err)
require.Equal(t, tc.expectedSlot, slot)
require.Equal(t, tc.expectedGroupCount, groupCount)
if tc.expectedUpdated {
require.NotNil(t, service.custodyInfo)
require.Equal(t, tc.expectedSlot, service.custodyInfo.earliestAvailableSlot)
require.Equal(t, tc.expectedGroupCount, service.custodyInfo.groupCount)
}
})
}
}
func TestCustodyGroupCountFromPeer(t *testing.T) {
const (
expectedENR uint64 = 7
@@ -109,3 +280,59 @@ func TestCustodyGroupCountFromPeer(t *testing.T) {
}
}
func TestCustodyGroupCountFromPeerENR(t *testing.T) {
const (
expectedENR uint64 = 7
pid = "test-id"
)
cgc := peerdas.Cgc(expectedENR)
custodyRequirement := params.BeaconConfig().CustodyRequirement
testCases := []struct {
name string
record *enr.Record
expected uint64
wantErr bool
}{
{
name: "No ENR record",
record: nil,
expected: custodyRequirement,
},
{
name: "Empty ENR record",
record: &enr.Record{},
expected: custodyRequirement,
},
{
name: "Valid ENR with custody group count",
record: func() *enr.Record {
record := &enr.Record{}
record.Set(cgc)
return record
}(),
expected: expectedENR,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
peers := peers.NewStatus(context.Background(), &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
if tc.record != nil {
peers.Add(tc.record, pid, nil, network.DirOutbound)
}
service := &Service{
peers: peers,
}
actual := service.custodyGroupCountFromPeerENR(pid)
require.Equal(t, tc.expected, actual)
})
}
}

View File

@@ -211,7 +211,10 @@ func (s *Service) RefreshPersistentSubnets() {
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadata(bitV)
// Not returning early here because the error comes from saving the metadata sequence number.
if err := s.updateSubnetRecordWithMetadata(bitV); err != nil {
log.WithError(err).Error("Failed to update subnet record with metadata")
}
// Ping all peers.
s.pingPeersAndLogEnr()
@@ -236,22 +239,43 @@ func (s *Service) RefreshPersistentSubnets() {
// Get the sync subnet bitfield in our metadata.
currentBitSInMetadata := s.Metadata().SyncnetsBitfield()
// Is our sync bitvector record up to date?
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
// Compare current epoch with the Fulu fork epoch.
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
custodyGroupCount, inRecordCustodyGroupCount := uint64(0), uint64(0)
if params.FuluEnabled() {
// Get the custody group count we store in our record.
inRecordCustodyGroupCount, err = peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
log.WithError(err).Error("Could not retrieve custody group count")
return
}
custodyGroupCount, err = s.CustodyGroupCount()
if err != nil {
log.WithError(err).Error("Could not retrieve custody group count")
return
}
}
// We add `1` to the current epoch because we want to prepare one epoch before the Fulu fork: with FuluForkEpoch = 100, this pre-Fulu branch is last taken at epoch 98, so the node switches behavior at epoch 99.
if currentEpoch+1 < fuluForkEpoch {
// Is our custody group count record up to date?
isCustodyGroupCountUpToDate := custodyGroupCount == inRecordCustodyGroupCount
// Altair behaviour.
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate && (!params.FuluEnabled() || isCustodyGroupCountUpToDate) {
// Nothing to do, return early.
return
}
// Some data have changed, update our record and metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Not returning early here because the error comes from saving the metadata sequence number.
if err := s.updateSubnetRecordWithMetadataV2(bitV, bitS, custodyGroupCount); err != nil {
log.WithError(err).Error("Failed to update subnet record with metadata")
}
// Ping all peers to inform them of new metadata
s.pingPeersAndLogEnr()
@@ -259,16 +283,6 @@ func (s *Service) RefreshPersistentSubnets() {
return
}
// Get the current custody group count.
custodyGroupCount := s.cfg.CustodyInfo.ActualGroupCount()
// Get the custody group count we store in our record.
inRecordCustodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
log.WithError(err).Error("Could not retrieve custody subnet count")
return
}
// Get the custody group count in our metadata.
inMetadataCustodyGroupCount := s.Metadata().CustodyGroupCount()
@@ -281,7 +295,10 @@ func (s *Service) RefreshPersistentSubnets() {
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodyGroupCount)
// Not returning early here because the error comes from saving the metadata sequence number.
if err := s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodyGroupCount); err != nil {
log.WithError(err).Error("Failed to update subnet record with metadata")
}
// Ping all peers.
s.pingPeersAndLogEnr()
@@ -565,22 +582,30 @@ func (s *Service) createLocalNode(
localNode.Set(quicEntry)
}
if params.FuluEnabled() {
custodyGroupCount := s.cfg.CustodyInfo.ActualGroupCount()
localNode.Set(peerdas.Cgc(custodyGroupCount))
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
localNode, err = addForkEntry(localNode, s.genesisTime, s.genesisValidatorsRoot)
if err != nil {
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
current := params.GetNetworkScheduleEntry(currentEpoch)
next := params.NextNetworkScheduleEntry(currentEpoch)
if err := updateENR(localNode, current, next); err != nil {
return nil, errors.Wrap(err, "could not add eth2 fork version entry to enr")
}
localNode = initializeAttSubnets(localNode)
localNode = initializeSyncCommSubnets(localNode)
if params.FuluEnabled() {
custodyGroupCount, err := s.CustodyGroupCount()
if err != nil {
return nil, errors.Wrap(err, "could not retrieve custody group count")
}
custodyGroupCountEntry := peerdas.Cgc(custodyGroupCount)
localNode.Set(custodyGroupCountEntry)
}
if s.cfg != nil && s.cfg.HostAddress != "" {
hostIP := net.ParseIP(s.cfg.HostAddress)
if hostIP.To4() == nil && hostIP.To16() == nil {
@@ -685,7 +710,7 @@ func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nodes that don't match our fork digest.
nodeENR := node.Record()
if s.genesisValidatorsRoot != nil {
if err := s.compareForkENR(nodeENR); err != nil {
if err := compareForkENR(s.dv5Listener.LocalNode().Node().Record(), nodeENR); err != nil {
log.WithError(err).Trace("Fork ENR mismatches between peer and local node")
return false
}

View File

@@ -16,7 +16,7 @@ import (
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/peerdata"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
@@ -65,6 +65,7 @@ func TestCreateListener(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
custodyInfo: &custodyInfo{},
}
listener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
@@ -91,6 +92,7 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
@@ -116,6 +118,7 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
@@ -157,27 +160,27 @@ func TestCreateLocalNode(t *testing.T) {
}{
{
name: "valid config",
cfg: &Config{CustodyInfo: &peerdas.CustodyInfo{}},
cfg: &Config{},
expectedError: false,
},
{
name: "invalid host address",
cfg: &Config{HostAddress: "invalid", CustodyInfo: &peerdas.CustodyInfo{}},
cfg: &Config{HostAddress: "invalid"},
expectedError: true,
},
{
name: "valid host address",
cfg: &Config{HostAddress: "192.168.0.1", CustodyInfo: &peerdas.CustodyInfo{}},
cfg: &Config{HostAddress: "192.168.0.1"},
expectedError: false,
},
{
name: "invalid host DNS",
cfg: &Config{HostDNS: "invalid", CustodyInfo: &peerdas.CustodyInfo{}},
cfg: &Config{HostDNS: "invalid"},
expectedError: true,
},
{
name: "valid host DNS",
cfg: &Config{HostDNS: "www.google.com", CustodyInfo: &peerdas.CustodyInfo{}},
cfg: &Config{HostDNS: "www.google.com"},
expectedError: false,
},
}
@@ -191,6 +194,8 @@ func TestCreateLocalNode(t *testing.T) {
quicPort = 3000
)
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Create a private key.
address, privKey := createAddrAndPrivKey(t)
@@ -199,6 +204,7 @@ func TestCreateLocalNode(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: tt.cfg,
custodyInfo: &custodyInfo{groupCount: custodyRequirement},
}
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort, quicPort)
@@ -210,7 +216,7 @@ func TestCreateLocalNode(t *testing.T) {
require.NoError(t, err)
expectedAddress := address
if tt.cfg.HostAddress != "" {
if tt.cfg != nil && tt.cfg.HostAddress != "" {
expectedAddress = net.ParseIP(tt.cfg.HostAddress)
}
@@ -250,8 +256,8 @@ func TestCreateLocalNode(t *testing.T) {
// Check cgc config.
custodyGroupCount := new(uint64)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(peerdas.CustodyGroupCountEnrKey, custodyGroupCount)))
require.Equal(t, params.BeaconConfig().CustodyRequirement, *custodyGroupCount)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(params.BeaconNetworkConfig().CustodyGroupCountKey, custodyGroupCount)))
require.Equal(t, custodyRequirement, *custodyGroupCount)
})
}
}
@@ -263,6 +269,7 @@ func TestRebootDiscoveryListener(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
custodyInfo: &custodyInfo{},
}
createListener := func() (*discover.UDPv5, error) {
@@ -295,6 +302,7 @@ func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{},
custodyInfo: &custodyInfo{},
}
node, err := s.createLocalNode(pkey, addr, 0, 0, 0)
require.NoError(t, err)
@@ -313,6 +321,7 @@ func TestMultiAddrConversion_OK(t *testing.T) {
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
custodyInfo: &custodyInfo{},
}
listener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
@@ -353,6 +362,8 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
cfg.StaticPeers = staticPeers
cfg.StateNotifier = &mock.MockStateNotifier{}
cfg.NoDiscovery = true
cfg.DB = testDB.SetupDB(t)
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
@@ -386,6 +397,7 @@ func TestHostIsResolved(t *testing.T) {
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
custodyInfo: &custodyInfo{},
}
ip, key := createAddrAndPrivKey(t)
list, err := s.createListener(ip, key)
@@ -455,6 +467,7 @@ func TestUDPMultiAddress(t *testing.T) {
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
custodyInfo: &custodyInfo{},
}
createListener := func() (*discover.UDPv5, error) {
@@ -655,7 +668,7 @@ func checkPingCountCacheMetadataRecord(
if expected.custodyGroupCount != nil {
// Check custody subnet count in ENR.
var actualCustodyGroupCount uint64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(peerdas.CustodyGroupCountEnrKey, &actualCustodyGroupCount))
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(params.BeaconNetworkConfig().CustodyGroupCountKey, &actualCustodyGroupCount))
require.NoError(t, err)
require.Equal(t, *expected.custodyGroupCount, actualCustodyGroupCount)
@@ -818,10 +831,11 @@ func TestRefreshPersistentSubnets(t *testing.T) {
actualPingCount++
return nil
},
cfg: &Config{UDPPort: 2000, CustodyInfo: &peerdas.CustodyInfo{}},
cfg: &Config{UDPPort: 2000, DB: testDB.SetupDB(t)},
peers: p2p.Peers(),
genesisTime: time.Now().Add(-time.Duration(tc.epochSinceGenesis*secondsPerEpoch) * time.Second),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
custodyInfo: &custodyInfo{groupCount: custodyGroupCount},
}
// Set the listener and the metadata.

View File

@@ -3,12 +3,9 @@ package p2p
import (
"bytes"
"fmt"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/network/forks"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -16,6 +13,8 @@ import (
"github.com/sirupsen/logrus"
)
var errEth2ENRDigestMismatch = errors.New("fork digest of peer does not match local value")
// ENR key used for Ethereum consensus-related fork data.
var eth2ENRKey = params.BeaconNetworkConfig().ETH2Key
@@ -25,29 +24,31 @@ func (s *Service) currentForkDigest() ([4]byte, error) {
if !s.isInitialized() {
return [4]byte{}, errors.New("state is not initialized")
}
return forks.CreateForkDigest(s.genesisTime, s.genesisValidatorsRoot)
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
return params.ForkDigest(currentEpoch), nil
}
// Compares fork ENRs between an incoming peer's record and our node's
// local record values for current and next fork version/epoch.
func (s *Service) compareForkENR(record *enr.Record) error {
currentRecord := s.dv5Listener.LocalNode().Node().Record()
peerForkENR, err := forkEntry(record)
func compareForkENR(self, peer *enr.Record) error {
peerForkENR, err := forkEntry(peer)
if err != nil {
return err
}
currentForkENR, err := forkEntry(currentRecord)
currentForkENR, err := forkEntry(self)
if err != nil {
return err
}
enrString, err := SerializeENR(record)
enrString, err := SerializeENR(peer)
if err != nil {
return err
}
// Clients SHOULD connect to peers with current_fork_digest, next_fork_version,
// and next_fork_epoch that match local values.
if !bytes.Equal(peerForkENR.CurrentForkDigest, currentForkENR.CurrentForkDigest) {
return fmt.Errorf(
return errors.Wrapf(errEth2ENRDigestMismatch,
"fork digest of peer with ENR %s: %v, does not match local value: %v",
enrString,
peerForkENR.CurrentForkDigest,
@@ -74,41 +75,36 @@ func (s *Service) compareForkENR(record *enr.Record) error {
return nil
}
// Adds a fork entry as an ENR record under the Ethereum consensus EnrKey for
// the local node. The fork entry is an ssz-encoded enrForkID type
// which takes into account the current fork version from the current
// epoch to create a fork digest, the next fork version,
// and the next fork epoch.
func addForkEntry(
node *enode.LocalNode,
genesisTime time.Time,
genesisValidatorsRoot []byte,
) (*enode.LocalNode, error) {
digest, err := forks.CreateForkDigest(genesisTime, genesisValidatorsRoot)
if err != nil {
return nil, err
}
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
if prysmTime.Now().Before(genesisTime) {
currentEpoch = 0
}
nextForkVersion, nextForkEpoch, err := forks.NextForkData(currentEpoch)
if err != nil {
return nil, err
}
func updateENR(node *enode.LocalNode, entry, next params.NetworkScheduleEntry) error {
enrForkID := &pb.ENRForkID{
CurrentForkDigest: digest[:],
NextForkVersion: nextForkVersion[:],
NextForkEpoch: nextForkEpoch,
CurrentForkDigest: entry.ForkDigest[:],
NextForkVersion: next.ForkVersion[:],
NextForkEpoch: next.Epoch,
}
if entry.Epoch == next.Epoch {
enrForkID.NextForkEpoch = params.BeaconConfig().FarFutureEpoch
}
logFields := logrus.Fields{
"CurrentForkDigest": fmt.Sprintf("%#x", enrForkID.CurrentForkDigest),
"NextForkVersion": fmt.Sprintf("%#x", enrForkID.NextForkVersion),
"NextForkEpoch": fmt.Sprintf("%d", enrForkID.NextForkEpoch),
}
if params.BeaconConfig().FuluForkEpoch != params.BeaconConfig().FarFutureEpoch {
if entry.ForkDigest == next.ForkDigest {
node.Set(enr.WithEntry(nfdEnrKey, make([]byte, len(next.ForkDigest))))
} else {
node.Set(enr.WithEntry(nfdEnrKey, next.ForkDigest[:]))
}
logFields[nfdEnrKey] = fmt.Sprintf("%#x", next.ForkDigest)
}
log.WithFields(logFields).Info("Updating ENR Fork ID")
enc, err := enrForkID.MarshalSSZ()
if err != nil {
return nil, err
return err
}
forkEntry := enr.WithEntry(eth2ENRKey, enc)
node.Set(forkEntry)
return node, nil
return nil
}
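
The `nfd` handling above encodes a simple rule. A hedged distillation, assuming the semantics implied by the diff (a zeroed value advertises that no differing next digest is scheduled):

```go
package p2psketch

// nfdValue distills the "nfd" (next fork digest) rule from updateENR above:
// advertise the next digest only when it differs from the current one, and
// publish a zeroed placeholder otherwise. Illustrative sketch only.
func nfdValue(current, next [4]byte) []byte {
	if current == next {
		return make([]byte, len(next)) // zeroed: no further fork digest scheduled
	}
	return next[:]
}
```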
// Retrieves an enrForkID from an ENR record by key lookup

View File

@@ -8,242 +8,121 @@ import (
"testing"
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
ma "github.com/multiformats/go-multiaddr"
"github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
const port = 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
s := &Service{
cfg: &Config{
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
// We give every peer a different genesis validators root, which
// will cause each peer to have a different ForkDigest, preventing
// them from connecting according to our discovery rules for Ethereum consensus.
root := make([]byte, 32)
copy(root, strconv.Itoa(port))
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: root,
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
}
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
// Now, we start a new p2p service. It should have no peers aside from the
// bootnode given all nodes provided by discv5 will have different fork digests.
cfg.UDPPort = 14000
cfg.TCPPort = 14001
cfg.MaxPeers = 30
s, err = NewService(t.Context(), cfg)
require.NoError(t, err)
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
addrs := make([]ma.Multiaddr, 0)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, nodeAddrs...)
}
}
// We should not have valid peers if the fork digest mismatched.
assert.Equal(t, 0, len(addrs), "Expected 0 valid peers")
require.NoError(t, s.Stop())
}
func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
const port = 2000
func TestCompareForkENR(t *testing.T) {
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()
params.BeaconConfig().InitializeForkSchedule()
logrus.SetLevel(logrus.TraceLevel)
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
db, err := enode.OpenDB("")
assert.NoError(t, err)
_, k := createAddrAndPrivKey(t)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
self := enode.NewLocalNode(db, k)
require.NoError(t, updateENR(self, current, next))
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
cases := []struct {
name string
expectErr error
expectLog string
node func(t *testing.T) *enode.Node
}{
{
name: "match",
node: func(t *testing.T) *enode.Node {
// Create a peer with the same current fork digest and next fork version/epoch.
peer := enode.NewLocalNode(db, k)
require.NoError(t, updateENR(peer, current, next))
return peer.Node()
},
},
{
name: "current digest mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer whose current fork digest differs from ours.
peer := enode.NewLocalNode(db, k)
testDigest := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
require.NotEqual(t, current.ForkDigest, testDigest, "ensure test fork digest is unique")
currentCopy := current
currentCopy.ForkDigest = testDigest
require.NoError(t, updateENR(peer, currentCopy, next))
return peer.Node()
},
expectErr: errEth2ENRDigestMismatch,
},
{
name: "next fork version mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer whose next fork version differs from ours.
peer := enode.NewLocalNode(db, k)
testVersion := [4]byte{0xFF, 0xFF, 0xFF, 0xFF}
require.NotEqual(t, next.ForkVersion, testVersion, "ensure test fork version is unique")
nextCopy := next
nextCopy.ForkVersion = testVersion
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectLog: "Peer matches fork digest but has different next fork version",
},
{
name: "next fork epoch mismatch",
node: func(t *testing.T) *enode.Node {
// Create a peer whose next fork epoch differs from ours.
peer := enode.NewLocalNode(db, k)
nextCopy := next
nextCopy.Epoch = nextCopy.Epoch + 1
require.NoError(t, updateENR(peer, current, nextCopy))
return peer.Node()
},
expectLog: "Peer matches fork digest but has different next fork epoch",
},
}
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
c := params.BeaconConfig().Copy()
nextForkEpoch := primitives.Epoch(i)
c.ForkVersionSchedule[[4]byte{'A', 'B', 'C', 'D'}] = nextForkEpoch
params.OverrideBeaconConfig(c)
// We give every peer a different genesis validators root, which
// will cause each peer to have a different ForkDigest, preventing
// them from connecting according to our discovery rules for Ethereum consensus.
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
listeners = append(listeners, listener)
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
hook := logTest.NewGlobal()
peer := c.node(t)
err := compareForkENR(self.Node().Record(), peer.Record())
if c.expectErr != nil {
require.ErrorIs(t, err, c.expectErr, "Expected error to match")
} else {
require.NoError(t, err, "Expected no error comparing fork ENRs")
}
if c.expectLog != "" {
require.LogsContain(t, hook, c.expectLog, "Expected log message not found")
}
})
}
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
}
}()
// Wait for the nodes to have their local routing tables to be populated with the other nodes
time.Sleep(discoveryWaitTime)
lastListener := listeners[len(listeners)-1]
nodes := lastListener.Lookup(bootNode.ID())
if len(nodes) < 4 {
t.Errorf("The node's local table doesn't have the expected number of nodes. "+
"Expected more than or equal to %d but got %d", 4, len(nodes))
}
// Now, we start a new p2p service. It should have no peers aside from the
// bootnode given all nodes provided by discv5 will have different fork digests.
cfg.UDPPort = 14000
cfg.TCPPort = 14001
cfg.MaxPeers = 30
cfg.StateNotifier = &mock.MockStateNotifier{}
s, err = NewService(t.Context(), cfg)
require.NoError(t, err)
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
addrs := make([]ma.Multiaddr, 0, len(nodes))
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, nodeAddrs...)
}
}
if len(addrs) == 0 {
t.Error("Expected to have valid peers, got 0")
}
require.LogsContain(t, hook, "Peer matches fork digest but has different next fork epoch")
require.NoError(t, s.Stop())
}
func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig().Copy()
c.ForkVersionSchedule = map[[4]byte]primitives.Epoch{
bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion): 0,
{0, 0, 0, 1}: 1,
}
nextForkEpoch := primitives.Epoch(1)
nextForkVersion := []byte{0, 0, 0, 1}
params.OverrideBeaconConfig(c)
params.BeaconConfig().InitializeForkSchedule()
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
digest, err := forks.CreateForkDigest(genesisTime, make([]byte, 32))
require.NoError(t, err)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
enrForkID := &pb.ENRForkID{
CurrentForkDigest: digest[:],
NextForkVersion: nextForkVersion,
NextForkEpoch: nextForkEpoch,
CurrentForkDigest: current.ForkDigest[:],
NextForkVersion: next.ForkVersion[:],
NextForkEpoch: next.Epoch,
}
enc, err := enrForkID.MarshalSSZ()
require.NoError(t, err)
entry := enr.WithEntry(eth2ENRKey, enc)
// In epoch 1 of current time, the fork version should be
// {0, 0, 0, 1} according to the configuration override above.
temp := t.TempDir()
randNum := rand.Int()
tempPath := path.Join(temp, strconv.Itoa(randNum))
@@ -255,18 +134,16 @@ func TestDiscv5_AddRetrieveForkEntryENR(t *testing.T) {
localNode := enode.NewLocalNode(db, pkey)
localNode.Set(entry)
want, err := signing.ComputeForkDigest([]byte{0, 0, 0, 0}, genesisValidatorsRoot)
require.NoError(t, err)
resp, err := forkEntry(localNode.Node().Record())
require.NoError(t, err)
assert.DeepEqual(t, want[:], resp.CurrentForkDigest)
assert.DeepEqual(t, nextForkVersion, resp.NextForkVersion)
assert.Equal(t, nextForkEpoch, resp.NextForkEpoch, "Unexpected next fork epoch")
assert.Equal(t, hexutil.Encode(current.ForkDigest[:]), hexutil.Encode(resp.CurrentForkDigest))
assert.Equal(t, hexutil.Encode(next.ForkVersion[:]), hexutil.Encode(resp.NextForkVersion))
assert.Equal(t, next.Epoch, resp.NextForkEpoch, "Unexpected next fork epoch")
}
func TestAddForkEntry_Genesis(t *testing.T) {
func TestAddForkEntry_NextForkVersion(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
temp := t.TempDir()
randNum := rand.Int()
tempPath := path.Join(temp, strconv.Itoa(randNum))
@@ -276,17 +153,109 @@ func TestAddForkEntry_Genesis(t *testing.T) {
db, err := enode.OpenDB("")
require.NoError(t, err)
bCfg := params.MainnetConfig()
bCfg.ForkVersionSchedule = map[[4]byte]primitives.Epoch{}
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(params.BeaconConfig().GenesisForkVersion)] = bCfg.GenesisEpoch
params.OverrideBeaconConfig(bCfg)
localNode := enode.NewLocalNode(db, pkey)
localNode, err = addForkEntry(localNode, time.Now().Add(10*time.Second), bytesutil.PadTo([]byte{'A', 'B', 'C', 'D'}, 32))
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
current := params.GetNetworkScheduleEntry(clock.CurrentEpoch())
next := params.NextNetworkScheduleEntry(clock.CurrentEpoch())
// Add the fork entry to the local node's ENR.
require.NoError(t, updateENR(localNode, current, next))
fe, err := forkEntry(localNode.Node().Record())
require.NoError(t, err)
forkEntry, err := forkEntry(localNode.Node().Record())
require.NoError(t, err)
assert.DeepEqual(t,
params.BeaconConfig().GenesisForkVersion, forkEntry.NextForkVersion,
assert.Equal(t,
hexutil.Encode(params.BeaconConfig().AltairForkVersion), hexutil.Encode(fe.NextForkVersion),
"Wanted Next Fork Version to be equal to genesis fork version")
last := params.LastForkEpoch()
current = params.GetNetworkScheduleEntry(last)
next = params.NextNetworkScheduleEntry(last)
require.NoError(t, updateENR(localNode, current, next))
entry := params.NextNetworkScheduleEntry(last)
fe, err = forkEntry(localNode.Node().Record())
require.NoError(t, err)
assert.Equal(t,
hexutil.Encode(entry.ForkVersion[:]), hexutil.Encode(fe.NextForkVersion),
"Wanted Next Fork Version to be equal to last entry in schedule")
}
func TestUpdateENR_FuluForkDigest(t *testing.T) {
setupTest := func(t *testing.T) (*enode.LocalNode, func()) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 100
cfg.FuluForkVersion = []byte{5, 0, 0, 0}
params.OverrideBeaconConfig(cfg)
cfg.InitializeForkSchedule()
pkey, err := privKey(&Config{DataDir: t.TempDir()})
require.NoError(t, err, "Could not get private key")
db, err := enode.OpenDB("")
require.NoError(t, err)
localNode := enode.NewLocalNode(db, pkey)
cleanup := func() {
db.Close()
}
return localNode, cleanup
}
tests := []struct {
name string
currentEntry params.NetworkScheduleEntry
nextEntry params.NetworkScheduleEntry
validateNFD func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry)
}{
{
name: "different digests sets nfd to next digest",
currentEntry: params.NetworkScheduleEntry{
Epoch: 50,
ForkDigest: [4]byte{1, 2, 3, 4},
ForkVersion: [4]byte{1, 0, 0, 0},
},
nextEntry: params.NetworkScheduleEntry{
Epoch: 100,
ForkDigest: [4]byte{5, 6, 7, 8}, // Different from current
ForkVersion: [4]byte{2, 0, 0, 0},
},
validateNFD: func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry) {
var nfdValue []byte
err := localNode.Node().Record().Load(enr.WithEntry(nfdEnrKey, &nfdValue))
require.NoError(t, err)
assert.DeepEqual(t, nextEntry.ForkDigest[:], nfdValue, "nfd entry should equal next fork digest")
},
},
{
name: "same digests sets nfd to empty",
currentEntry: params.NetworkScheduleEntry{
Epoch: 50,
ForkDigest: [4]byte{1, 2, 3, 4},
ForkVersion: [4]byte{1, 0, 0, 0},
},
nextEntry: params.NetworkScheduleEntry{
Epoch: 100,
ForkDigest: [4]byte{1, 2, 3, 4}, // Same as current
ForkVersion: [4]byte{2, 0, 0, 0},
},
validateNFD: func(t *testing.T, localNode *enode.LocalNode, nextEntry params.NetworkScheduleEntry) {
var nfdValue []byte
err := localNode.Node().Record().Load(enr.WithEntry(nfdEnrKey, &nfdValue))
require.NoError(t, err)
assert.DeepEqual(t, make([]byte, len(nextEntry.ForkDigest)), nfdValue, "nfd entry should be empty bytes when digests are same")
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
localNode, cleanup := setupTest(t)
defer cleanup()
currentEntry := tt.currentEntry
nextEntry := tt.nextEntry
require.NoError(t, updateENR(localNode, currentEntry, nextEntry))
tt.validateNFD(t, localNode, nextEntry)
})
}
}

View File

@@ -9,27 +9,26 @@ import (
// updates the node's discovery service to reflect any new fork version
// changes.
func (s *Service) forkWatcher() {
// Exit early if discovery is disabled - there's no ENR to update
if s.dv5Listener == nil {
log.Debug("Discovery disabled, exiting fork watcher")
return
}
slotTicker := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
var scheduleEntry params.NetworkScheduleEntry
for {
select {
case currSlot := <-slotTicker.C():
currEpoch := slots.ToEpoch(currSlot)
if currEpoch == params.BeaconConfig().AltairForkEpoch ||
currEpoch == params.BeaconConfig().BellatrixForkEpoch ||
currEpoch == params.BeaconConfig().CapellaForkEpoch ||
currEpoch == params.BeaconConfig().DenebForkEpoch ||
currEpoch == params.BeaconConfig().ElectraForkEpoch ||
currEpoch == params.BeaconConfig().FuluForkEpoch {
// If we are in the fork epoch, we update our enr with
// the updated fork digest. This repeats throughout
// the epoch, which might be slightly wasteful
// but is fine nonetheless.
if s.dv5Listener != nil { // make sure it's not a local network
_, err := addForkEntry(s.dv5Listener.LocalNode(), s.genesisTime, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not add fork entry")
}
currentEpoch := slots.ToEpoch(currSlot)
newEntry := params.GetNetworkScheduleEntry(currentEpoch)
if newEntry.ForkDigest != scheduleEntry.ForkDigest {
nextEntry := params.NextNetworkScheduleEntry(currentEpoch)
if err := updateENR(s.dv5Listener.LocalNode(), newEntry, nextEntry); err != nil {
log.WithFields(newEntry.LogFields()).WithError(err).Error("Could not add fork entry")
continue // don't replace scheduleEntry until this succeeds
}
scheduleEntry = newEntry
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")

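
The refactored loop above amounts to a compare-and-retry pattern: recompute the schedule entry each slot, update the ENR only when the digest changes, and leave the cached entry untouched on failure so the update retries next tick. A minimal sketch with illustrative types, not the Prysm API:

```go
package p2psketch

import "context"

// watchForks calls update() whenever the digest for the current epoch changes;
// on error the cached digest is not advanced, so the next tick retries the
// update. tick, entryFor, and update are illustrative stand-ins.
func watchForks(ctx context.Context, tick <-chan uint64, entryFor func(epoch uint64) [4]byte, update func([4]byte) error) {
	var cached [4]byte
	for {
		select {
		case epoch := <-tick:
			next := entryFor(epoch)
			if next == cached {
				continue // nothing changed this slot
			}
			if err := update(next); err != nil {
				continue // keep cached value; retry on the next tick
			}
			cached = next
		case <-ctx.Done():
			return
		}
	}
}
```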
View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/ethereum/go-ethereum/p2p/enode"
@@ -32,13 +33,14 @@ type (
ConnectionHandler
PeersProvider
MetadataProvider
DataColumnsHandler
CustodyManager
}
// Accessor provides access to the Broadcaster and PeerManager interfaces.
// Accessor provides access to the Broadcaster, PeerManager and CustodyManager interfaces.
Accessor interface {
Broadcaster
PeerManager
CustodyManager
}
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
@@ -118,8 +120,11 @@ type (
MetadataSeq() uint64
}
// DataColumnsHandler abstracts some data columns related methods.
DataColumnsHandler interface {
// CustodyManager abstracts some data columns related methods.
CustodyManager interface {
EarliestAvailableSlot() (primitives.Slot, error)
CustodyGroupCount() (uint64, error)
UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
CustodyGroupCountFromPeer(peer.ID) uint64
}
)
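
For tests against the new `CustodyManager` interface, a trivial in-memory double is enough. The sketch below is a hypothetical example, assuming the same imports as the interface file; it is not Prysm's actual test fake (those presumably live under beacon-chain/p2p/testing):

```go
package p2ptestsketch

import (
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/libp2p/go-libp2p/core/peer"
)

// fakeCustodyManager is a hypothetical in-memory stand-in satisfying the
// CustodyManager interface above; suitable only as a test-double sketch.
type fakeCustodyManager struct {
	slot  primitives.Slot
	count uint64
}

func (f *fakeCustodyManager) EarliestAvailableSlot() (primitives.Slot, error) { return f.slot, nil }

func (f *fakeCustodyManager) CustodyGroupCount() (uint64, error) { return f.count, nil }

func (f *fakeCustodyManager) UpdateCustodyInfo(s primitives.Slot, c uint64) (primitives.Slot, uint64, error) {
	f.slot, f.count = s, c
	return f.slot, f.count, nil
}

func (f *fakeCustodyManager) CustodyGroupCountFromPeer(_ peer.ID) uint64 { return f.count }
```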

View File

@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/math"
"github.com/OffchainLabs/prysm/v6/network/forks"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
)
@@ -39,7 +38,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
}
_, fEpoch, err := forks.RetrieveForkDataFromDigest(digest, genesisValidatorsRoot)
_, fEpoch, err := params.ForkDataFromDigest(digest)
if err != nil {
// Impossible condition that should
// never be hit.

View File

@@ -7,10 +7,10 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/golang/snappy"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
@@ -18,28 +18,27 @@ import (
func TestMsgID_HashesCorrectly(t *testing.T) {
params.SetupTestConfigCleanup(t)
genesisValidatorsRoot := bytesutil.PadTo([]byte{'A'}, 32)
d, err := forks.CreateForkDigest(time.Now(), genesisValidatorsRoot)
assert.NoError(t, err)
clock := startup.NewClock(time.Now(), bytesutil.ToBytes32([]byte{'A'}))
valRoot := clock.GenesisValidatorsRoot()
d := params.ForkDigest(clock.CurrentEpoch())
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
invalidSnappy := [32]byte{'J', 'U', 'N', 'K'}
pMsg := &pubsubpb.Message{Data: invalidSnappy[:], Topic: &tpc}
hashedData := hash.Hash(append(params.BeaconConfig().MessageDomainInvalidSnappy[:], pMsg.Data...))
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, pMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(valRoot[:], pMsg), "Got incorrect msg id")
validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
nMsg := &pubsubpb.Message{Data: enc, Topic: &tpc}
hashedData = hash.Hash(append(params.BeaconConfig().MessageDomainValidSnappy[:], validObj[:]...))
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, nMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(valRoot[:], nMsg), "Got incorrect msg id")
}
func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {
params.SetupTestConfigCleanup(t)
genesisValidatorsRoot := bytesutil.PadTo([]byte{'A'}, 32)
d, err := signing.ComputeForkDigest(params.BeaconConfig().AltairForkVersion, genesisValidatorsRoot)
d, err := signing.ComputeForkDigest(params.BeaconConfig().AltairForkVersion, params.BeaconConfig().GenesisValidatorsRoot[:])
assert.NoError(t, err)
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
topicLen := uint64(len(tpc))
@@ -52,7 +51,7 @@ func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {
combinedObj = append(combinedObj, pMsg.Data...)
hashedData := hash.Hash(combinedObj)
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, pMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], pMsg), "Got incorrect msg id")
validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
@@ -63,13 +62,12 @@ func TestMessageIDFunction_HashesCorrectlyAltair(t *testing.T) {
combinedObj = append(combinedObj, validObj[:]...)
hashedData = hash.Hash(combinedObj)
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, nMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], nMsg), "Got incorrect msg id")
}
func TestMessageIDFunction_HashesCorrectlyBellatrix(t *testing.T) {
params.SetupTestConfigCleanup(t)
genesisValidatorsRoot := bytesutil.PadTo([]byte{'A'}, 32)
d, err := signing.ComputeForkDigest(params.BeaconConfig().BellatrixForkVersion, genesisValidatorsRoot)
d, err := signing.ComputeForkDigest(params.BeaconConfig().BellatrixForkVersion, params.BeaconConfig().GenesisValidatorsRoot[:])
assert.NoError(t, err)
tpc := fmt.Sprintf(p2p.BlockSubnetTopicFormat, d)
topicLen := uint64(len(tpc))
@@ -82,7 +80,7 @@ func TestMessageIDFunction_HashesCorrectlyBellatrix(t *testing.T) {
combinedObj = append(combinedObj, pMsg.Data...)
hashedData := hash.Hash(combinedObj)
msgID := string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, pMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], pMsg), "Got incorrect msg id")
validObj := [32]byte{'v', 'a', 'l', 'i', 'd'}
enc := snappy.Encode(nil, validObj[:])
@@ -93,7 +91,7 @@ func TestMessageIDFunction_HashesCorrectlyBellatrix(t *testing.T) {
combinedObj = append(combinedObj, validObj[:]...)
hashedData = hash.Hash(combinedObj)
msgID = string(hashedData[:20])
assert.Equal(t, msgID, p2p.MsgID(genesisValidatorsRoot, nMsg), "Got incorrect msg id")
assert.Equal(t, msgID, p2p.MsgID(params.BeaconConfig().GenesisValidatorsRoot[:], nMsg), "Got incorrect msg id")
}
func TestMsgID_WithNilTopic(t *testing.T) {

View File

@@ -54,7 +54,7 @@ type PeerData struct {
NextValidTime time.Time
// Chain related data.
MetaData metadata.Metadata
ChainState *ethpb.Status
ChainState *ethpb.StatusV2
ChainStateLastUpdated time.Time
ChainStateValidationError error
// Scorers internal data.

View File

@@ -112,7 +112,7 @@ func (s *PeerStatusScorer) BadPeers() []peer.ID {
}
// SetPeerStatus sets chain state data for a given peer.
func (s *PeerStatusScorer) SetPeerStatus(pid peer.ID, chainState *pb.Status, validationError error) {
func (s *PeerStatusScorer) SetPeerStatus(pid peer.ID, chainState *pb.StatusV2, validationError error) {
s.store.Lock()
defer s.store.Unlock()
@@ -130,14 +130,14 @@ func (s *PeerStatusScorer) SetPeerStatus(pid peer.ID, chainState *pb.Status, val
// PeerStatus gets the chain state of the given remote peer.
// This can return nil if there is no known chain state for the peer.
// This will error if the peer does not exist.
func (s *PeerStatusScorer) PeerStatus(pid peer.ID) (*pb.Status, error) {
func (s *PeerStatusScorer) PeerStatus(pid peer.ID) (*pb.StatusV2, error) {
s.store.RLock()
defer s.store.RUnlock()
return s.peerStatusNoLock(pid)
}
// peerStatusNoLock lock-free version of PeerStatus.
func (s *PeerStatusScorer) peerStatusNoLock(pid peer.ID) (*pb.Status, error) {
func (s *PeerStatusScorer) peerStatusNoLock(pid peer.ID) (*pb.StatusV2, error) {
if peerData, ok := s.store.PeerData(pid); ok {
if peerData.ChainState == nil {
return nil, peerdata.ErrNoPeerStatus

View File

@@ -35,7 +35,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
name: "existent bad peer",
update: func(scorer *scorers.PeerStatusScorer) {
scorer.SetHeadSlot(0)
scorer.SetPeerStatus("peer1", &pb.Status{
scorer.SetPeerStatus("peer1", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: 64,
}, p2ptypes.ErrWrongForkDigestVersion)
@@ -48,7 +48,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
name: "existent peer no head slot for the host node is known",
update: func(scorer *scorers.PeerStatusScorer) {
scorer.SetHeadSlot(0)
scorer.SetPeerStatus("peer1", &pb.Status{
scorer.SetPeerStatus("peer1", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: 64,
}, nil)
@@ -61,7 +61,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
name: "existent peer head is before ours",
update: func(scorer *scorers.PeerStatusScorer) {
scorer.SetHeadSlot(128)
scorer.SetPeerStatus("peer1", &pb.Status{
scorer.SetPeerStatus("peer1", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: 64,
}, nil)
@@ -75,12 +75,12 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
update: func(scorer *scorers.PeerStatusScorer) {
headSlot := primitives.Slot(128)
scorer.SetHeadSlot(headSlot)
scorer.SetPeerStatus("peer1", &pb.Status{
scorer.SetPeerStatus("peer1", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: headSlot + 64,
}, nil)
// Set another peer to a higher score.
scorer.SetPeerStatus("peer2", &pb.Status{
scorer.SetPeerStatus("peer2", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: headSlot + 128,
}, nil)
@@ -95,7 +95,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
update: func(scorer *scorers.PeerStatusScorer) {
headSlot := primitives.Slot(128)
scorer.SetHeadSlot(headSlot)
scorer.SetPeerStatus("peer1", &pb.Status{
scorer.SetPeerStatus("peer1", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: headSlot + 64,
}, nil)
@@ -108,7 +108,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
name: "existent peer no max known slot",
update: func(scorer *scorers.PeerStatusScorer) {
scorer.SetHeadSlot(0)
scorer.SetPeerStatus("peer1", &pb.Status{
scorer.SetPeerStatus("peer1", &pb.StatusV2{
HeadRoot: make([]byte, 32),
HeadSlot: 0,
}, nil)
@@ -141,7 +141,7 @@ func TestScorers_PeerStatus_IsBadPeer(t *testing.T) {
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid, &pb.StatusV2{}, p2ptypes.ErrWrongForkDigestVersion)
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
}
@@ -160,9 +160,9 @@ func TestScorers_PeerStatus_BadPeers(t *testing.T) {
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid3))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid1, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid2, &pb.Status{}, nil)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid3, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid1, &pb.StatusV2{}, p2ptypes.ErrWrongForkDigestVersion)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid2, &pb.StatusV2{}, nil)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid3, &pb.StatusV2{}, p2ptypes.ErrWrongForkDigestVersion)
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid1))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid2))
@@ -179,12 +179,12 @@ func TestScorers_PeerStatus_PeerStatus(t *testing.T) {
})
status, err := peerStatuses.Scorers().PeerStatusScorer().PeerStatus("peer1")
require.ErrorContains(t, peerdata.ErrPeerUnknown.Error(), err)
assert.Equal(t, (*pb.Status)(nil), status)
assert.Equal(t, (*pb.StatusV2)(nil), status)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus("peer1", &pb.Status{
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus("peer1", &pb.StatusV2{
HeadSlot: 128,
}, nil)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus("peer2", &pb.Status{
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus("peer2", &pb.StatusV2{
HeadSlot: 128,
}, p2ptypes.ErrInvalidEpoch)
status, err = peerStatuses.Scorers().PeerStatusScorer().PeerStatus("peer1")

View File

@@ -205,14 +205,14 @@ func (p *Status) ENR(pid peer.ID) (*enr.Record, error) {
}
// SetChainState sets the chain state of the given remote peer.
func (p *Status) SetChainState(pid peer.ID, chainState *pb.Status) {
func (p *Status) SetChainState(pid peer.ID, chainState *pb.StatusV2) {
p.scorers.PeerStatusScorer().SetPeerStatus(pid, chainState, nil)
}
// ChainState gets the chain state of the given remote peer.
// This will error if the peer does not exist.
// This will error if there is no known chain state for the peer.
func (p *Status) ChainState(pid peer.ID) (*pb.Status, error) {
func (p *Status) ChainState(pid peer.ID) (*pb.StatusV2, error) {
return p.scorers.PeerStatusScorer().PeerStatus(pid)
}

View File

@@ -289,7 +289,7 @@ func TestPeerChainState(t *testing.T) {
require.NoError(t, err)
finalizedEpoch := primitives.Epoch(123)
p.SetChainState(id, &pb.Status{FinalizedEpoch: finalizedEpoch})
p.SetChainState(id, &pb.StatusV2{FinalizedEpoch: finalizedEpoch})
resChainState, err := p.ChainState(id)
require.NoError(t, err)
@@ -324,7 +324,7 @@ func TestPeerWithNilChainState(t *testing.T) {
resChainState, err := p.ChainState(id)
require.Equal(t, peerdata.ErrNoPeerStatus, err)
var nothing *pb.Status
var nothing *pb.StatusV2
require.Equal(t, resChainState, nothing)
}
@@ -616,7 +616,7 @@ func TestTrimmedOrderedPeers(t *testing.T) {
// Peer 1
pid1 := addPeer(t, p, peers.Connected)
p.SetChainState(pid1, &pb.Status{
p.SetChainState(pid1, &pb.StatusV2{
HeadSlot: 3 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 3,
FinalizedRoot: mockroot3[:],
@@ -624,7 +624,7 @@ func TestTrimmedOrderedPeers(t *testing.T) {
// Peer 2
pid2 := addPeer(t, p, peers.Connected)
p.SetChainState(pid2, &pb.Status{
p.SetChainState(pid2, &pb.StatusV2{
HeadSlot: 4 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 4,
FinalizedRoot: mockroot4[:],
@@ -632,7 +632,7 @@ func TestTrimmedOrderedPeers(t *testing.T) {
// Peer 3
pid3 := addPeer(t, p, peers.Connected)
p.SetChainState(pid3, &pb.Status{
p.SetChainState(pid3, &pb.StatusV2{
HeadSlot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 5,
FinalizedRoot: mockroot5[:],
@@ -640,7 +640,7 @@ func TestTrimmedOrderedPeers(t *testing.T) {
// Peer 4
pid4 := addPeer(t, p, peers.Connected)
p.SetChainState(pid4, &pb.Status{
p.SetChainState(pid4, &pb.StatusV2{
HeadSlot: 2 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 2,
FinalizedRoot: mockroot2[:],
@@ -648,7 +648,7 @@ func TestTrimmedOrderedPeers(t *testing.T) {
// Peer 5
pid5 := addPeer(t, p, peers.Connected)
p.SetChainState(pid5, &pb.Status{
p.SetChainState(pid5, &pb.StatusV2{
HeadSlot: 2 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 2,
FinalizedRoot: mockroot2[:],
@@ -1012,7 +1012,7 @@ func TestStatus_BestPeer(t *testing.T) {
},
})
for _, peerConfig := range tt.peers {
p.SetChainState(addPeer(t, p, peers.Connected), &pb.Status{
p.SetChainState(addPeer(t, p, peers.Connected), &pb.StatusV2{
FinalizedEpoch: peerConfig.finalizedEpoch,
HeadSlot: peerConfig.headSlot,
})
@@ -1039,7 +1039,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
for i := 0; i <= maxPeers+100; i++ {
p.Add(new(enr.Record), peer.ID(rune(i)), nil, network.DirOutbound)
p.SetConnectionState(peer.ID(rune(i)), peers.Connected)
p.SetChainState(peer.ID(rune(i)), &pb.Status{
p.SetChainState(peer.ID(rune(i)), &pb.StatusV2{
FinalizedEpoch: 10,
})
}
@@ -1062,7 +1062,7 @@ func TestStatus_BestNonFinalized(t *testing.T) {
for i, headSlot := range peerSlots {
p.Add(new(enr.Record), peer.ID(rune(i)), nil, network.DirOutbound)
p.SetConnectionState(peer.ID(rune(i)), peers.Connected)
p.SetChainState(peer.ID(rune(i)), &pb.Status{
p.SetChainState(peer.ID(rune(i)), &pb.StatusV2{
HeadSlot: headSlot,
})
}
@@ -1085,17 +1085,17 @@ func TestStatus_CurrentEpoch(t *testing.T) {
})
// Peer 1
pid1 := addPeer(t, p, peers.Connected)
p.SetChainState(pid1, &pb.Status{
p.SetChainState(pid1, &pb.StatusV2{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 4,
})
// Peer 2
pid2 := addPeer(t, p, peers.Connected)
p.SetChainState(pid2, &pb.Status{
p.SetChainState(pid2, &pb.StatusV2{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 5,
})
// Peer 3
pid3 := addPeer(t, p, peers.Connected)
p.SetChainState(pid3, &pb.Status{
p.SetChainState(pid3, &pb.StatusV2{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 4,
})

View File

@@ -40,7 +40,7 @@ const (
rSubD = 8 // random gossip target
)
var errInvalidTopic = errors.New("invalid topic format")
var ErrInvalidTopic = errors.New("invalid topic format")
// Specifies the fixed size context length.
const digestLength = 4
@@ -219,12 +219,12 @@ func convertTopicScores(topicMap map[string]*pubsub.TopicScoreSnapshot) map[stri
func ExtractGossipDigest(topic string) ([4]byte, error) {
// Ensure the topic prefix is correct.
if len(topic) < len(gossipTopicPrefix)+1 || topic[:len(gossipTopicPrefix)] != gossipTopicPrefix {
return [4]byte{}, errInvalidTopic
return [4]byte{}, ErrInvalidTopic
}
start := len(gossipTopicPrefix)
end := strings.Index(topic[start:], "/")
if end == -1 { // Ensure a topic suffix exists.
return [4]byte{}, errInvalidTopic
return [4]byte{}, ErrInvalidTopic
}
end += start
strDigest := topic[start:end]

View File

@@ -1,12 +1,12 @@
package p2p
import (
"encoding/hex"
"fmt"
"strings"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/network/forks"
pubsub "github.com/libp2p/go-libp2p-pubsub"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
"github.com/libp2p/go-libp2p/core/peer"
@@ -32,6 +32,14 @@ var _ pubsub.SubscriptionFilter = (*Service)(nil)
// (Note: BlobSidecar is not included in this list since it is superseded by DataColumnSidecar)
const pubsubSubscriptionRequestLimit = 500
func (s *Service) setAllForkDigests() {
entries := params.SortedNetworkScheduleEntries()
s.allForkDigests = make(map[[4]byte]struct{}, len(entries))
for _, entry := range entries {
s.allForkDigests[entry.ForkDigest] = struct{}{}
}
}
// CanSubscribe returns true if the topic is of interest and we could subscribe to it.
func (s *Service) CanSubscribe(topic string) bool {
if !s.isInitialized() {
@@ -48,50 +56,18 @@ func (s *Service) CanSubscribe(topic string) bool {
if parts[1] != "eth2" {
return false
}
phase0ForkDigest, err := s.currentForkDigest()
var digest [4]byte
dl, err := hex.Decode(digest[:], []byte(parts[2]))
if err == nil && dl != 4 {
err = fmt.Errorf("expected 4 bytes, got %d", dl)
}
if err != nil {
log.WithError(err).Error("Could not determine fork digest")
log.WithError(err).WithField("topic", topic).WithField("digest", parts[2]).Error("CanSubscribe failed to parse message")
return false
}
altairForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().AltairForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine altair fork digest")
return false
}
bellatrixForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().BellatrixForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Bellatrix fork digest")
return false
}
capellaForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().CapellaForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Capella fork digest")
return false
}
denebForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Deneb fork digest")
return false
}
electraForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().ElectraForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Electra fork digest")
return false
}
fuluForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().FuluForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Fulu fork digest")
return false
}
switch parts[2] {
case fmt.Sprintf("%x", phase0ForkDigest):
case fmt.Sprintf("%x", altairForkDigest):
case fmt.Sprintf("%x", bellatrixForkDigest):
case fmt.Sprintf("%x", capellaForkDigest):
case fmt.Sprintf("%x", denebForkDigest):
case fmt.Sprintf("%x", electraForkDigest):
case fmt.Sprintf("%x", fuluForkDigest):
default:
if _, ok := s.allForkDigests[digest]; !ok {
log.WithField("topic", topic).WithField("digest", fmt.Sprintf("%#x", digest)).Error("CanSubscribe failed to find digest in allForkDigests")
return false
}

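
The rewritten check boils down to: hex-decode the third topic segment into 4 bytes and test membership in the precomputed `allForkDigests` set. A self-contained sketch of that shape, illustrative rather than the Prysm code:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// digestAllowed parses the 4-byte digest out of an
// "/eth2/<digest>/<topic>/<encoding>" gossip topic and tests membership in a
// precomputed set of known fork digests.
func digestAllowed(topic string, allowed map[[4]byte]struct{}) bool {
	parts := strings.Split(topic, "/") // "", "eth2", digest, name, encoding
	if len(parts) < 5 || parts[1] != "eth2" {
		return false
	}
	var digest [4]byte
	n, err := hex.Decode(digest[:], []byte(parts[2]))
	if err != nil || n != 4 {
		return false
	}
	_, ok := allowed[digest]
	return ok
}

func main() {
	allowed := map[[4]byte]struct{}{{0x01, 0x02, 0x03, 0x04}: {}}
	fmt.Println(digestAllowed("/eth2/01020304/beacon_block/ssz_snappy", allowed))
}
```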
View File

@@ -7,12 +7,11 @@ import (
"testing"
"time"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
pubsubpb "github.com/libp2p/go-libp2p-pubsub/pb"
@@ -21,12 +20,11 @@ import (
func TestService_CanSubscribe(t *testing.T) {
params.SetupTestConfigCleanup(t)
currentFork := [4]byte{0x01, 0x02, 0x03, 0x04}
params.BeaconConfig().InitializeForkSchedule()
validProtocolSuffix := "/" + encoder.ProtocolSuffixSSZSnappy
genesisTime := time.Now()
var valRoot [32]byte
digest, err := forks.CreateForkDigest(genesisTime, valRoot[:])
assert.NoError(t, err)
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
currentFork := params.GetNetworkScheduleEntry(clock.CurrentEpoch()).ForkDigest
digest := params.ForkDigest(clock.CurrentEpoch())
type test struct {
name string
topic string
@@ -108,12 +106,14 @@ func TestService_CanSubscribe(t *testing.T) {
}
tests = append(tests, tt)
}
valRoot := clock.GenesisValidatorsRoot()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Service{
genesisValidatorsRoot: valRoot[:],
genesisTime: genesisTime,
genesisTime: clock.GenesisTime(),
}
s.setAllForkDigests()
if got := s.CanSubscribe(tt.topic); got != tt.want {
t.Errorf("CanSubscribe(%s) = %v, want %v", tt.topic, got, tt.want)
}
@@ -219,11 +219,10 @@ func TestGossipTopicMapping_scanfcheck_GossipTopicFormattingSanityCheck(t *testi
func TestService_FilterIncomingSubscriptions(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
clock := startup.NewClock(time.Now(), params.BeaconConfig().GenesisValidatorsRoot)
digest := params.ForkDigest(clock.CurrentEpoch())
validProtocolSuffix := "/" + encoder.ProtocolSuffixSSZSnappy
genesisTime := time.Now()
var valRoot [32]byte
digest, err := forks.CreateForkDigest(genesisTime, valRoot[:])
assert.NoError(t, err)
type args struct {
id peer.ID
subs []*pubsubpb.RPC_SubOpts
@@ -320,12 +319,14 @@ func TestService_FilterIncomingSubscriptions(t *testing.T) {
},
},
}
valRoot := clock.GenesisValidatorsRoot()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Service{
genesisValidatorsRoot: valRoot[:],
genesisTime: genesisTime,
genesisTime: clock.GenesisTime(),
}
s.setAllForkDigests()
got, err := s.FilterIncomingSubscriptions(tt.args.id, tt.args.subs)
if (err != nil) != tt.wantErr {
t.Errorf("FilterIncomingSubscriptions() error = %v, wantErr %v", err, tt.wantErr)
@@ -343,7 +344,7 @@ func TestService_MonitorsStateForkUpdates(t *testing.T) {
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
defer cancel()
cs := startup.NewClockSynchronizer()
s, err := NewService(ctx, &Config{ClockWaiter: cs})
s, err := NewService(ctx, &Config{ClockWaiter: cs, DB: testDB.SetupDB(t)})
require.NoError(t, err)
require.Equal(t, false, s.isInitialized())

View File

@@ -8,6 +8,7 @@ import (
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
testp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
@@ -21,6 +22,7 @@ func TestService_PublishToTopicConcurrentMapWrite(t *testing.T) {
s, err := NewService(t.Context(), &Config{
StateNotifier: &mock.MockStateNotifier{},
ClockWaiter: cs,
DB: testDB.SetupDB(t),
})
require.NoError(t, err)
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)

View File

@@ -108,6 +108,8 @@ const (
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCStatusTopicV2 defines the v2 topic for the status rpc method.
RPCStatusTopicV2 = protocolPrefix + StatusMessageName + SchemaVersionV2
// RPCBlocksByRangeTopicV2 defines the v2 topic for the blocks by range rpc method.
RPCBlocksByRangeTopicV2 = protocolPrefix + BeaconBlocksByRangeMessageName + SchemaVersionV2
// RPCBlocksByRootTopicV2 defines the v2 topic for the blocks by root rpc method.
@@ -130,6 +132,7 @@ var (
RPCTopicMappings = map[string]interface{}{
// RPC Status Message
RPCStatusTopicV1: new(pb.Status),
RPCStatusTopicV2: new(pb.StatusV2),
// RPC Goodbye Message
RPCGoodByeTopicV1: new(primitives.SSZUint64),
@@ -201,6 +204,7 @@ var (
// Maps all the RPC messages which are to be updated in fulu.
fuluMapping = map[string]string{
StatusMessageName: SchemaVersionV2,
MetadataMessageName: SchemaVersionV3,
}
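
Version selection per message name presumably composes these override maps by fork epoch. A hypothetical helper showing the shape of that lookup (plain uint64 epochs; not Prysm's `TopicFromMessage`):

```go
package p2psketch

// schemaVersion starts from the base v1 schema and applies the latest fork
// override whose epoch has been reached. The altair/fulu maps mirror the ones
// above; the helper itself is an illustrative assumption.
func schemaVersion(message string, epoch, altairEpoch, fuluEpoch uint64, altairMapping, fuluMapping map[string]string) string {
	version := "/1"
	if epoch >= altairEpoch {
		if v, ok := altairMapping[message]; ok {
			version = v
		}
	}
	if epoch >= fuluEpoch {
		if v, ok := fuluMapping[message]; ok {
			version = v
		}
	}
	return version
}
```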

View File

@@ -141,6 +141,11 @@ func TestTopicFromMessage_CorrectType(t *testing.T) {
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/beacon_blocks_by_range/2", topic)
// Modified in fulu fork.
topic, err = TopicFromMessage(StatusMessageName, fuluForkEpoch)
require.NoError(t, err)
require.Equal(t, "/eth2/beacon_chain/req/status/2", topic)
// Modified both in altair and fulu fork.
topic, err = TopicFromMessage(MetadataMessageName, fuluForkEpoch)
require.NoError(t, err)

View File

@@ -14,8 +14,10 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v6/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
prysmnetwork "github.com/OffchainLabs/prysm/v6/network"
@@ -88,6 +90,15 @@ type Service struct {
genesisValidatorsRoot []byte
activeValidatorCount uint64
peerDisconnectionTime *cache.Cache
custodyInfo *custodyInfo
custodyInfoLock sync.RWMutex // Lock access to custodyInfo
clock *startup.Clock
allForkDigests map[[4]byte]struct{}
}
type custodyInfo struct {
earliestAvailableSlot primitives.Slot
groupCount uint64
}
// NewService initializes a new p2p service compatible with shared.Service interface. No
@@ -102,7 +113,7 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
return nil, errors.Wrapf(err, "failed to generate p2p private key")
}
metaData, err := metaDataFromConfig(cfg)
metaData, err := metaDataFromDB(ctx, cfg.DB)
if err != nil {
log.WithError(err).Error("Failed to create peer metadata")
return nil, err
@@ -192,6 +203,7 @@ func (s *Service) Start() {
// Waits until the state is initialized via an event feed.
// Used for fork-related data when connecting peers.
s.awaitStateInitialized()
s.setAllForkDigests()
s.isPreGenesis = false
var relayNodes []string
@@ -445,7 +457,7 @@ func (s *Service) awaitStateInitialized() {
s.genesisTime = clock.GenesisTime()
gvr := clock.GenesisValidatorsRoot()
s.genesisValidatorsRoot = gvr[:]
_, err = s.currentForkDigest() // initialize fork digest cache
_, err = s.currentForkDigest()
if err != nil {
log.WithError(err).Error("Could not initialize fork digest")
}

View File

@@ -9,14 +9,13 @@ import (
"time"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/network/forks"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
@@ -85,7 +84,7 @@ func createHost(t *testing.T, port uint) (host.Host, *ecdsa.PrivateKey, net.IP)
func TestService_Stop_SetsStartedToFalse(t *testing.T) {
params.SetupTestConfigCleanup(t)
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}})
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}, DB: testDB.SetupDB(t)})
require.NoError(t, err)
s.started = true
s.dv5Listener = &mockListener{}
@@ -95,7 +94,7 @@ func TestService_Stop_SetsStartedToFalse(t *testing.T) {
func TestService_Stop_DontPanicIfDv5ListenerIsNotInited(t *testing.T) {
params.SetupTestConfigCleanup(t)
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}})
s, err := NewService(t.Context(), &Config{StateNotifier: &mock.MockStateNotifier{}, DB: testDB.SetupDB(t)})
require.NoError(t, err)
assert.NoError(t, s.Stop())
}
@@ -110,10 +109,12 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
TCPPort: 3000,
QUICPort: 3000,
ClockWaiter: cs,
DB: testDB.SetupDB(t),
}
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
s.dv5Listener = &mockListener{}
s.custodyInfo = &custodyInfo{}
exitRoutine := make(chan bool)
go func() {
s.Start()
@@ -158,6 +159,7 @@ func TestService_Start_NoDiscoverFlag(t *testing.T) {
StateNotifier: &mock.MockStateNotifier{},
NoDiscovery: true, // <-- no s.dv5Listener is created
ClockWaiter: cs,
DB: testDB.SetupDB(t),
}
s, err := NewService(t.Context(), cfg)
require.NoError(t, err)
@@ -193,6 +195,7 @@ func TestListenForNewNodes(t *testing.T) {
)
params.SetupTestConfigCleanup(t)
db := testDB.SetupDB(t)
// Setup bootnode.
cfg := &Config{
@@ -200,6 +203,7 @@ func TestListenForNewNodes(t *testing.T) {
PingInterval: testPingInterval,
DisableLivenessCheck: true,
UDPPort: port,
DB: db,
}
_, pkey := createAddrAndPrivKey(t)
@@ -211,6 +215,7 @@ func TestListenForNewNodes(t *testing.T) {
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: gvr[:],
custodyInfo: &custodyInfo{},
}
bootListener, err := s.createListener(ipAddr, pkey)
@@ -244,6 +249,7 @@ func TestListenForNewNodes(t *testing.T) {
ClockWaiter: cs,
UDPPort: port + i,
TCPPort: port + i,
DB: db,
}
h, pkey, ipAddr := createHost(t, port+i)
@@ -252,6 +258,7 @@ func TestListenForNewNodes(t *testing.T) {
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: gvr[:],
custodyInfo: &custodyInfo{},
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
@@ -281,6 +288,7 @@ func TestListenForNewNodes(t *testing.T) {
s, err = NewService(t.Context(), cfg)
require.NoError(t, err)
s.custodyInfo = &custodyInfo{}
go s.Start()
@@ -336,14 +344,16 @@ func TestPeer_Disconnect(t *testing.T) {
func TestService_JoinLeaveTopic(t *testing.T) {
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
defer cancel()
gs := startup.NewClockSynchronizer()
s, err := NewService(ctx, &Config{StateNotifier: &mock.MockStateNotifier{}, ClockWaiter: gs})
s, err := NewService(ctx, &Config{StateNotifier: &mock.MockStateNotifier{}, ClockWaiter: gs, DB: testDB.SetupDB(t)})
require.NoError(t, err)
go s.awaitStateInitialized()
fd := initializeStateWithForkDigest(ctx, t, gs)
s.setAllForkDigests()
s.awaitStateInitialized()
assert.Equal(t, 0, len(s.joinedTopics))
@@ -372,15 +382,13 @@ func TestService_JoinLeaveTopic(t *testing.T) {
// digest associated with that genesis event.
func initializeStateWithForkDigest(_ context.Context, t *testing.T, gs startup.ClockSetter) [4]byte {
gt := prysmTime.Now()
gvr := bytesutil.ToBytes32(bytesutil.PadTo([]byte("genesis validators root"), 32))
require.NoError(t, gs.SetClock(startup.NewClock(gt, gvr)))
fd, err := forks.CreateForkDigest(gt, gvr[:])
require.NoError(t, err)
gvr := params.BeaconConfig().GenesisValidatorsRoot
clock := startup.NewClock(gt, gvr)
require.NoError(t, gs.SetClock(clock))
time.Sleep(50 * time.Millisecond) // wait for pubsub filter to initialize.
return fd
return params.ForkDigest(clock.CurrentEpoch())
}
func TestService_connectWithPeer(t *testing.T) {

View File

@@ -27,6 +27,8 @@ import (
"github.com/prysmaticlabs/go-bitfield"
)
const nfdEnrKey = "nfd" // The ENR record key for "nfd" (Next Fork Digest).
var (
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
@@ -57,6 +59,8 @@ const blobSubnetLockerVal = 110
// chosen more than sync, attestation and blob subnet (6) combined.
const dataColumnSubnetVal = 150
const errSavingSequenceNumber = "saving sequence number after updating subnets: %w"
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, indices map[uint64]int) (func(node *enode.Node) (map[uint64]bool, error), error) {
switch {
@@ -377,29 +381,51 @@ func (s *Service) hasPeerWithSubnet(subnetTopic string) bool {
// with a new value for a bitfield of subnets tracked. It also updates
// the node's metadata by increasing the sequence number and the
// subnets tracked by the node.
func (s *Service) updateSubnetRecordWithMetadata(bitV bitfield.Bitvector64) {
func (s *Service) updateSubnetRecordWithMetadata(bitV bitfield.Bitvector64) error {
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
s.dv5Listener.LocalNode().Set(entry)
s.metaData = wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: s.metaData.SequenceNumber() + 1,
Attnets: bitV,
})
if err := s.saveSequenceNumberIfNeeded(); err != nil {
return fmt.Errorf(errSavingSequenceNumber, err)
}
return nil
}
// Updates the service's discv5 listener record's attestation subnet
// with a new value for a bitfield of subnets tracked. It also records
// the sync committee subnet in the ENR, and updates the node's
// metadata by increasing the sequence number and the subnets tracked by the node.
func (s *Service) updateSubnetRecordWithMetadataV2(bitVAtt bitfield.Bitvector64, bitVSync bitfield.Bitvector4) {
func (s *Service) updateSubnetRecordWithMetadataV2(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodyGroupCount uint64,
) error {
entry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
subEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
s.dv5Listener.LocalNode().Set(entry)
s.dv5Listener.LocalNode().Set(subEntry)
localNode := s.dv5Listener.LocalNode()
localNode.Set(entry)
localNode.Set(subEntry)
if params.FuluEnabled() {
custodyGroupCountEntry := enr.WithEntry(custodyGroupCountEnrKey, custodyGroupCount)
localNode.Set(custodyGroupCountEntry)
}
s.metaData = wrapper.WrappedMetadataV1(&pb.MetaDataV1{
SeqNumber: s.metaData.SequenceNumber() + 1,
Attnets: bitVAtt,
Syncnets: bitVSync,
})
if err := s.saveSequenceNumberIfNeeded(); err != nil {
return fmt.Errorf(errSavingSequenceNumber, err)
}
return nil
}
// updateSubnetRecordWithMetadataV3 updates:
@@ -411,7 +437,7 @@ func (s *Service) updateSubnetRecordWithMetadataV3(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodyGroupCount uint64,
) {
) error {
attSubnetsEntry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
syncSubnetsEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
custodyGroupCountEntry := enr.WithEntry(custodyGroupCountEnrKey, custodyGroupCount)
@@ -421,14 +447,29 @@ func (s *Service) updateSubnetRecordWithMetadataV3(
localNode.Set(syncSubnetsEntry)
localNode.Set(custodyGroupCountEntry)
newSeqNumber := s.metaData.SequenceNumber() + 1
s.metaData = wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: newSeqNumber,
SeqNumber: s.metaData.SequenceNumber() + 1,
Attnets: bitVAtt,
Syncnets: bitVSync,
CustodyGroupCount: custodyGroupCount,
})
if err := s.saveSequenceNumberIfNeeded(); err != nil {
return fmt.Errorf(errSavingSequenceNumber, err)
}
return nil
}
// saveSequenceNumberIfNeeded saves the sequence number in DB if either of the following conditions is met:
// - the static peer ID flag is set
// - the fulu epoch is set
func (s *Service) saveSequenceNumberIfNeeded() error {
// Short-circuit if we don't need to save the sequence number.
if !(s.cfg.StaticPeerID || params.FuluEnabled()) {
return nil
}
return s.cfg.DB.SaveMetadataSeqNum(s.ctx, s.metaData.SequenceNumber())
}
func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {

View File

@@ -3,13 +3,13 @@ package p2p
import (
"context"
"crypto/rand"
"encoding/hex"
"fmt"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
ecdsaprysm "github.com/OffchainLabs/prysm/v6/crypto/ecdsa"
@@ -35,17 +35,8 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// find and connect to a node already subscribed to a specific subnet.
// In our case: The node i is subscribed to subnet i, with i = 1, 2, 3
const subnetCount = 3
const minimumPeersPerSubnet = 1
ctx := t.Context()
// Use shorter period for testing.
@@ -57,6 +48,7 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// Create flags.
params.SetupTestConfigCleanup(t)
params.BeaconConfig().InitializeForkSchedule()
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
@@ -73,7 +65,8 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
bootNodeService := &Service{
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000, DisableLivenessCheck: true, PingInterval: testPingInterval},
genesisTime: genesisTime,
genesisValidatorsRoot: params.BeaconConfig().GenesisValidatorsRoot[:],
custodyInfo: &custodyInfo{},
}
bootNodeForkDigest, err := bootNodeService.currentForkDigest()
@@ -92,6 +85,7 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
// Create 3 nodes, each subscribed to a different subnet.
// Each node is connected to the bootstrap node.
services := make([]*Service, 0, subnetCount)
db := testDB.SetupDB(t)
for i := uint64(1); i <= subnetCount; i++ {
service, err := NewService(ctx, &Config{
@@ -102,12 +96,14 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
QUICPort: uint(3000 + i),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
DB: db,
})
require.NoError(t, err)
service.genesisTime = genesisTime
service.genesisValidatorsRoot = params.BeaconConfig().GenesisValidatorsRoot[:]
service.custodyInfo = &custodyInfo{}
nodeForkDigest, err := service.currentForkDigest()
require.NoError(t, err)
@@ -150,13 +146,15 @@ func TestStartDiscV5_FindAndDialPeersWithSubnet(t *testing.T) {
UDPPort: 2010,
TCPPort: 3010,
QUICPort: 3010,
DB: db,
}
service, err := NewService(t.Context(), cfg)
require.NoError(t, err)
service.genesisTime = genesisTime
service.genesisValidatorsRoot = params.BeaconConfig().GenesisValidatorsRoot[:]
service.custodyInfo = &custodyInfo{}
service.Start()
defer func() {


@@ -15,6 +15,7 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing",
visibility = [
"//beacon-chain:__subpackages__",
"//testing:__subpackages__",
],
deps = [
"//beacon-chain/core/peerdas:go_default_library",
@@ -24,6 +25,7 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"//testing/require:go_default_library",


@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/ethereum/go-ethereum/p2p/enode"
@@ -196,6 +197,22 @@ func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.Disc
return true, 0
}
// EarliestAvailableSlot -- fake.
func (*FakeP2P) EarliestAvailableSlot() (primitives.Slot, error) {
return 0, nil
}
// CustodyGroupCount -- fake.
func (*FakeP2P) CustodyGroupCount() (uint64, error) {
return 0, nil
}
// UpdateCustodyInfo -- fake.
func (s *FakeP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
return earliestAvailableSlot, custodyGroupCount, nil
}
// CustodyGroupCountFromPeer -- fake.
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}


@@ -65,7 +65,7 @@ func (m *MockPeersProvider) Peers() *peers.Status {
}
m.peers.Add(createENR(), id0, ma0, network.DirInbound)
m.peers.SetConnectionState(id0, peers.Connected)
m.peers.SetChainState(id0, &pb.Status{FinalizedEpoch: 10})
m.peers.SetChainState(id0, &pb.StatusV2{FinalizedEpoch: 10})
id1, err := peer.Decode(MockRawPeerId1)
if err != nil {
log.WithError(err).Debug("Cannot decode")
@@ -76,7 +76,7 @@ func (m *MockPeersProvider) Peers() *peers.Status {
}
m.peers.Add(createENR(), id1, ma1, network.DirOutbound)
m.peers.SetConnectionState(id1, peers.Connected)
m.peers.SetChainState(id1, &pb.Status{FinalizedEpoch: 11})
m.peers.SetChainState(id1, &pb.StatusV2{FinalizedEpoch: 11})
}
return m.peers
}


@@ -6,6 +6,7 @@ import (
"bytes"
"context"
"fmt"
"sync"
"sync/atomic"
"testing"
"time"
@@ -17,6 +18,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/metadata"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -48,16 +50,19 @@ const (
// TestP2P represents a p2p implementation that can be used for testing.
type TestP2P struct {
t *testing.T
BHost host.Host
EnodeID enode.ID
pubsub *pubsub.PubSub
joinedTopics map[string]*pubsub.Topic
BroadcastCalled atomic.Bool
DelaySend bool
Digest [4]byte
peers *peers.Status
LocalMetadata metadata.Metadata
custodyInfoMut sync.RWMutex // protects custodyGroupCount and earliestAvailableSlot
earliestAvailableSlot primitives.Slot
custodyGroupCount uint64
}
// NewTestP2P initializes a new p2p test service.
@@ -461,6 +466,34 @@ func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.Disc
return true, 0
}
// EarliestAvailableSlot .
func (s *TestP2P) EarliestAvailableSlot() (primitives.Slot, error) {
s.custodyInfoMut.RLock()
defer s.custodyInfoMut.RUnlock()
return s.earliestAvailableSlot, nil
}
// CustodyGroupCount .
func (s *TestP2P) CustodyGroupCount() (uint64, error) {
s.custodyInfoMut.RLock()
defer s.custodyInfoMut.RUnlock()
return s.custodyGroupCount, nil
}
// UpdateCustodyInfo .
func (s *TestP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
s.custodyInfoMut.Lock()
defer s.custodyInfoMut.Unlock()
s.earliestAvailableSlot = earliestAvailableSlot
s.custodyGroupCount = custodyGroupCount
return s.earliestAvailableSlot, s.custodyGroupCount, nil
}
// CustodyGroupCountFromPeer .
func (s *TestP2P) CustodyGroupCountFromPeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of groups.
custodyRequirement := params.BeaconConfig().CustodyRequirement


@@ -2,6 +2,7 @@ package p2p
import (
"bytes"
"context"
"crypto/ecdsa"
"crypto/rand"
"encoding/base64"
@@ -12,6 +13,8 @@ import (
"path"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/wrapper"
ecdsaprysm "github.com/OffchainLabs/prysm/v6/crypto/ecdsa"
@@ -27,11 +30,9 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/sirupsen/logrus"
"google.golang.org/protobuf/proto"
)
const keyPath = "network-keys"
const metaDataPath = "metaData"
const dialTimeout = 1 * time.Second
@@ -121,45 +122,24 @@ func privKeyFromFile(path string) (*ecdsa.PrivateKey, error) {
return ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledKey)
}
// Retrieves metadata sequence number from DB and returns a Metadata(V0) object
func metaDataFromDB(ctx context.Context, db db.ReadOnlyDatabaseWithSeqNum) (metadata.Metadata, error) {
seqNum, err := db.MetadataSeqNum(ctx)
// We can proceed if error is `kv.ErrNotFoundMetadataSeqNum` by using default value of 0 for sequence number.
if err != nil && !errors.Is(err, kv.ErrNotFoundMetadataSeqNum) {
return nil, err
}
// NOTE: Load V0 metadata because:
// - As the p2p service accesses metadata as an interface, and all versions implement the interface,
// there is no error in calling the fields of higher versions. It just returns the default value.
// - This approach allows us to avoid unnecessary code changes when the metadata version bumps.
// - `RefreshPersistentSubnets` runs twice per slot and handles updating and saving the metadata.
metadata := wrapper.WrappedMetadataV0(&pb.MetaDataV0{
SeqNumber: seqNum,
Attnets: bitfield.NewBitvector64(),
})
return metadata, nil
}
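A small illustration of the NOTE above (treat as a sketch; the exact accessor names on the metadata interface are assumed): version-specific getters on a V0 wrapper return zero values rather than erroring.
md := wrapper.WrappedMetadataV0(&pb.MetaDataV0{SeqNumber: 42})
_ = md.SequenceNumber()  // 42
_ = md.AttnetsBitfield() // the V0 attnets bitvector
_ = md.Syncnets()        // V1-only field: zero-value bitvector, no error (assumed accessor name)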
// Attempt to dial an address to verify its connectivity


@@ -1,8 +1,10 @@
package p2p
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -80,3 +82,27 @@ func TestConvertPeerIDToNodeID(t *testing.T) {
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}
func TestMetadataFromDB(t *testing.T) {
params.SetupTestConfigCleanup(t)
t.Run("Metadata from DB", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
err := beaconDB.SaveMetadataSeqNum(t.Context(), 42)
require.NoError(t, err)
metaData, err := metaDataFromDB(context.Background(), beaconDB)
require.NoError(t, err)
assert.Equal(t, uint64(42), metaData.SequenceNumber())
})
t.Run("Use default sequence number (=0) as Metadata not found on DB", func(t *testing.T) {
beaconDB := testDB.SetupDB(t)
metaData, err := metaDataFromDB(context.Background(), beaconDB)
require.NoError(t, err)
assert.Equal(t, uint64(0), metaData.SequenceNumber())
})
}


@@ -1008,9 +1008,29 @@ func (s *Server) validateConsensus(ctx context.Context, b *eth.GenericSignedBeac
}
parentStateRoot := parentBlock.Block().StateRoot()
// Check if the state is already cached
parentState := transition.NextSlotState(parentBlockRoot[:], blk.Block().Slot())
if parentState == nil {
// The state is not advanced in the NSC, check first if the parent post-state is head
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not get head root")
}
if bytes.Equal(headRoot, parentBlockRoot[:]) {
parentState, err = s.HeadFetcher.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
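// Advance the parent (head) state to the incoming block's slot before
// running the transition; ProcessSlots is a no-op if it is already there.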
parentState, err = transition.ProcessSlots(ctx, parentState, blk.Block().Slot())
if err != nil {
return errors.Wrap(err, "could not process slots to get parent state")
}
} else {
parentState, err = s.Stater.State(ctx, parentStateRoot[:])
if err != nil {
return errors.Wrap(err, "could not get parent state")
}
}
}
_, err = transition.ExecuteStateTransition(ctx, parentState, blk)
if err != nil {

Some files were not shown because too many files have changed in this diff.