Compare commits


33 Commits

Author SHA1 Message Date
Preston Van Loon
ea6275d47a kasey: Option 1 2024-03-22 20:39:51 -05:00
Bharath Vedartham
3d2230223f create the log file along with its parent directory if not present (#12675)
* Remove Feature Flag From Prater (#12082)

* Use Epoch boundary cache to retrieve balances (#12083)

* Use Epoch boundary cache to retrieve balances

* save boundary states before inserting to forkchoice

* move up last block save

* remove boundary checks on balances

* fix ordering

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* create the log file along with its parent directory if not present

* only give ReadWritePermissions to the log file

* use io/file package to create the parent directories

* fix ci related issues

* add regression tests

* run gazelle

* fix tests

* remove print statements

* gazelle

* Remove failing test for MkdirAll, this failure is not expected

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-22 15:32:08 +00:00
Preston Van Loon
b008a6422d Add tarball support for docker images (#13790) 2024-03-22 15:31:29 +00:00
Fredrik Svantes
d19365507f Set default LocalBlockValueBoost to 10 (#13772)
* Set default LocalBlockValueBoost to 10

* Update base.go

* Update mainnet_config.go
2024-03-22 13:18:20 +00:00
kasey
c05e39a668 fix handling of goodbye messages for limited peers (#13785)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-22 13:06:16 +00:00
Radosław Kapka
63c2b3563a Optimize GetDuties VC action (#13789)
* wait groups

* errgroup

* tests

* bzl

* review
2024-03-22 09:50:19 +00:00
Justin Traglia
a6e86c6731 Rename payloadattribute Timestamps to Timestamp (#13523)
Co-authored-by: terence <terence@prysmaticlabs.com>
2024-03-21 21:11:01 +00:00
Radosław Kapka
32fb183392 Modify the algorithm of updateFinalizedBlockRoots (#13486)
* rename error var

* new algo

* replay_test

* add comment

* review

* fill out parent root

* handle edge cases

* review
2024-03-21 21:09:56 +00:00
carrychair
cade09ba0b chore: fix some typos (#13726)
Signed-off-by: carrychair <linghuchong404@gmail.com>
2024-03-21 21:00:21 +00:00
Potuz
f85ddfe265 Log the slot and blockroot when we deadline waiting for blobs (#13774) 2024-03-21 20:29:23 +00:00
terence
3b97094ea4 Log da block root in hex (#13787) 2024-03-21 20:26:17 +00:00
Nishant Das
acdbf7c491 expand it (#13770) 2024-03-21 19:57:22 +00:00
Potuz
1cc1effd75 Revert "pass justified=finalized in Prater (#13695)" (#13709)
This reverts commit 102518e106.
2024-03-21 17:42:40 +00:00
james-prysm
f7f1d249f2 Fix get validator endpoint for empty query parameters (#13780)
* fix handlers for get validators

* removing log
2024-03-21 14:00:07 +00:00
kasey
02abb3e3c0 add log message if in da check at slot end (#13776)
* add log message if in da check at slot end

* don't bother logging late da check start

* break up defer with a var, too dense all together

* pass slot instead of block ref

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-20 19:31:09 +00:00
james-prysm
2255c8b287 setting missing beacon API (#13778) 2024-03-20 17:59:15 +00:00
terence
27ecf448a7 Add da waited time to sync block log (#13775) 2024-03-20 14:53:02 +00:00
james-prysm
e243f04e44 validator client on rest mode has an inappropriate context deadline for events (#13771)
* addressing errors on events endpoint

* reverting timeout on get health

* fixing linting

* fixing more linting

* Update validator/client/beacon-api/beacon_api_validator_client.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update beacon-chain/rpc/eth/events/events.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* reverting change and removing line on context done which creates a superfluous response.WriteHeader error

* gofmt

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-20 13:19:05 +00:00
Manu NALEPA
fca1adbad7 Re-design TestStartDiscV5_DiscoverPeersWithSubnets test (#13766)
* `Test_AttSubnets`: Factorize.

* `filterPeerForAttSubnet`: `O(n)` ==> `O(1)`

* `FindPeersWithSubnet`: Optimize.

* `TestStartDiscV5_DiscoverPeersWithSubnets`: Complete re-design.

* `broadcastAttestation`: Use `log.WithFields`.

* `filterPeer`: Refactor comments.

* Make deepsource happy.

* `TestStartDiscV5_FindPeersWithSubnet`: Add context cancellation.

Add some notes on `FindPeersWithSubnet` about
this limitation as well.
2024-03-20 03:36:00 +00:00
Radosław Kapka
b692722ddf Optimize SubmitAggregateSelectionProof VC action (#13711)
* Optimize `SubscribeCommitteeSubnets` VC action

* test fixes

* remove newline

* Optimize `SubmitAggregateSelectionProof`

* mock

* bzl gzl

* test fixes
2024-03-19 14:09:07 +00:00
Nishant Das
c4f6020677 add mplex timeout (#13745) 2024-03-19 13:37:23 +00:00
Chanh Le
d779e65d4e chore(kzg): Additional tests for KZG commitments (#13758)
* add a test explaining kzgRootIndex

* minor

* minor
2024-03-19 09:08:02 +00:00
terence
357211b7d9 Update spec test to official 1.4.0 (#13761) 2024-03-18 23:39:03 +00:00
Potuz
2dd48343a2 Set default fee recipient if tracked val fails (#13768) 2024-03-18 19:35:34 +00:00
james-prysm
7f931bf65b Keymanager APIs - get,post,delete graffiti (#13474)
* wip

* adding set and delete graffiti

* fixing mock

* fixing mock linting and putting in scaffolds for unit tests

* adding some tests

* gaz

* adding tests

* updating missing unit test

* fixing unit test

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/client/propose.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/propose.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's feedback

* fixing tests

* using wrapper for graffiti

* fixing linting

* wip

* fixing setting proposer settings

* more partial fixes to tests

* gaz

* fixing tests and setting logic

* changing keymanager

* fixing tests and making graffiti optional in the proposer file

* remove unneeded lines

* reverting unintended changes

* Update validator/client/propose.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* addressing feedback

* removing unneeded line

* fixing bad merge resolution

* gofmt

* gaz

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-03-18 15:03:08 +00:00
Nishant Das
fda4589251 Rewrite Pruning Implementation To Handle EIP 7045 (#13762)
* make it very big

* use new pruning implementation

* handle pre deneb

* revert cache change

* less verbose

* gaz

* Update beacon-chain/operations/attestations/prune_expired.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* gofmt

* be safer

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2024-03-18 12:57:21 +00:00
kasey
34593d34d4 allow blob by root within da period (#13757)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-18 03:15:17 +00:00
Potuz
4d18e590ed Rename mispelled variable (#13759) 2024-03-17 20:47:19 +00:00
Potuz
ec8b67cb12 Use headstate for recent checkpoints (#13746)
* Use headstate for recent checkpoints

* add the computed state to the checkpoint cache

* acquire a multilock
2024-03-17 18:50:49 +00:00
terence
a817aa0a8d New gossip cache size (#13756)
* New gossip cache size

Increase seen aggregate cache size to 4096

* Update cache size to 8192

* 16384
2024-03-17 03:02:00 +00:00
kasey
d76f55e97a adds a metric to track blob sig cache lookups (#13755)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-16 20:41:21 +00:00
james-prysm
2de21eb22f adding headers to post endpoint (#13753) 2024-03-15 18:19:42 +00:00
Nishant Das
58b8c31c93 mark in progress (#13750) 2024-03-15 16:46:26 +00:00
96 changed files with 2212 additions and 1393 deletions

View File

@@ -2,7 +2,7 @@
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.3.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.3.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.3.0)
[![Consensus_Spec_Version 1.4.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.4.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.4.0)
[![Execution_API_Version 1.0.0-beta.2](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.2-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysmaticlabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)

View File

@@ -130,9 +130,9 @@ aspect_bazel_lib_register_toolchains()
http_archive(
name = "rules_oci",
sha256 = "c71c25ed333a4909d2dd77e0b16c39e9912525a98c7fa85144282be8d04ef54c",
strip_prefix = "rules_oci-1.3.4",
url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.3.4/rules_oci-v1.3.4.tar.gz",
sha256 = "4a276e9566c03491649eef63f27c2816cc222f41ccdebd97d2c5159e84917c3b",
strip_prefix = "rules_oci-1.7.4",
url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.7.4/rules_oci-v1.7.4.tar.gz",
)
load("@rules_oci//oci:dependencies.bzl", "rules_oci_dependencies")
@@ -243,9 +243,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.4.0-beta.7"
consensus_spec_test_version = "v1.4.0-beta.7-hotfix"
consensus_spec_version = "v1.4.0"
bls_test_version = "v0.1.1"
@@ -262,7 +260,7 @@ filegroup(
)
""",
sha256 = "c282c0f86f23f3d2e0f71f5975769a4077e62a7e3c7382a16bd26a7e589811a0",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_test_version,
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -278,7 +276,7 @@ filegroup(
)
""",
sha256 = "4649c35aa3b8eb0cfdc81bee7c05649f90ef36bede5b0513e1f2e8baf37d6033",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_test_version,
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -294,7 +292,7 @@ filegroup(
)
""",
sha256 = "c5a03f724f757456ffaabd2a899992a71d2baf45ee4db65ca3518f2b7ee928c8",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_test_version,
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -308,7 +306,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "049c29267310e6b88280f4f834a75866c2f5b9036fa97acb9d9c6db8f64d9118",
sha256 = "cd1c9d97baccbdde1d2454a7dceb8c6c61192a3b581eee12ffc94969f2db8453",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -304,6 +304,8 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
}
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Bellatrix))
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Accept", "application/json")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), versionOpt)
@@ -341,6 +343,8 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
}
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Capella))
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Accept", "application/json")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), versionOpt)
@@ -379,6 +383,8 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
versionOpt := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(version.Deneb))
r.Header.Set("Content-Type", "application/json")
r.Header.Set("Accept", "application/json")
}
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), versionOpt)
if err != nil {
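The hunk above adds explicit `Content-Type` and `Accept` headers inside the per-fork `versionOpt` closures. A minimal, self-contained sketch of that functional request-option pattern (the `reqOption` and `withJSONHeaders` names are illustrative assumptions, not Prysm's API):

package main

import (
	"fmt"
	"net/http"
)

// reqOption mirrors the "func(r *http.Request)" closures in the diff.
type reqOption func(*http.Request)

// withJSONHeaders sets the same three headers the change adds per fork.
func withJSONHeaders(consensusVersion string) reqOption {
	return func(r *http.Request) {
		r.Header.Add("Eth-Consensus-Version", consensusVersion)
		r.Header.Set("Content-Type", "application/json")
		r.Header.Set("Accept", "application/json")
	}
}

func main() {
	req, err := http.NewRequest(http.MethodPost, "https://example.invalid/eth/v1/builder/blinded_blocks", nil)
	if err != nil {
		panic(err)
	}
	withJSONHeaders("deneb")(req)
	fmt.Println(req.Header.Get("Content-Type"), req.Header.Get("Accept"))
}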

View File

@@ -321,6 +321,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
@@ -347,6 +349,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "capella", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadCapella)),
@@ -376,6 +380,8 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)

View File

@@ -82,19 +82,20 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"deposits": len(block.Body().Deposits()),
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
log.WithFields(lf).Debug("Synced new block")
} else {
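The reformatted field list above adds a `dataAvailabilityWaitedTime` entry and subtracts the DA wait from `chainServiceProcessedTime`. A toy computation of the two durations (only the field names come from the diff; the values are made up):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Pretend the block arrived 900ms ago and DA accounted for 300ms of that.
	receivedTime := time.Now().Add(-900 * time.Millisecond)
	daWaitedTime := 300 * time.Millisecond
	total := time.Since(receivedTime)
	fmt.Println("dataAvailabilityWaitedTime:", daWaitedTime)
	fmt.Println("chainServiceProcessedTime:", total-daWaitedTime)
}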

View File

@@ -18,17 +18,63 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) state.ReadOnlyBeaconState {
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch < headEpoch {
return nil
}
if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return nil
}
if c.Epoch == headEpoch {
targetSlot, err := s.cfg.ForkChoiceStore.Slot([32]byte(c.Root))
if err != nil {
return nil
}
if slots.ToEpoch(targetSlot)+1 < headEpoch {
return nil
}
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
return nil
}
return st
}
slot, err := slots.EpochStart(c.Epoch)
if err != nil {
return nil
}
// Check whether we have already cached a state for this checkpoint
epochKey := strconv.FormatUint(uint64(c.Epoch), 10 /* base 10 */)
lock := async.NewMultilock(string(c.Root) + epochKey)
lock.Lock()
defer lock.Unlock()
cachedState, err := s.checkpointStateCache.StateByCheckpoint(c)
if err != nil {
return nil
}
if cachedState != nil && !cachedState.IsNil() {
return cachedState
}
st, err := s.HeadState(ctx)
if err != nil {
return nil
}
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, c.Root, slot)
if err != nil {
return nil
}
if err := s.checkpointStateCache.AddCheckpointState(c, st); err != nil {
return nil
}
return st
}
// getAttPreState retrieves the att pre state by either from the cache or the DB.
func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (state.ReadOnlyBeaconState, error) {
// If the attestation is recent and canonical we can use the head state to compute the shuffling.
headEpoch := slots.ToEpoch(s.HeadSlot())
if c.Epoch == headEpoch {
targetSlot, err := s.cfg.ForkChoiceStore.Slot([32]byte(c.Root))
if err == nil && slots.ToEpoch(targetSlot)+1 >= headEpoch {
if s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
return s.HeadStateReadOnly(ctx)
}
}
if st := s.getRecentPreState(ctx, c); st != nil {
return st, nil
}
// Use a multilock to allow scoped holding of a mutex by a checkpoint root + epoch
// allowing us to behave smarter in terms of how this function is used concurrently.
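The new `getRecentPreState` path above serializes concurrent callers per checkpoint by locking on root + epoch. A hedged sketch of per-key locking in that spirit (Prysm's `async.NewMultilock` is more elaborate; `keyedLocks` here is only an illustrative stand-in):

package main

import (
	"fmt"
	"sync"
)

// keyedLocks hands out one mutex per key, created on first use.
type keyedLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func (k *keyedLocks) acquire(key string) *sync.Mutex {
	k.mu.Lock()
	l, ok := k.locks[key]
	if !ok {
		l = &sync.Mutex{}
		k.locks[key] = l
	}
	k.mu.Unlock()
	l.Lock() // serialize callers that share this key
	return l
}

func main() {
	kl := &keyedLocks{locks: make(map[string]*sync.Mutex)}
	// Key by root + epoch, as the diff does with string(c.Root) + epochKey.
	l := kl.acquire("rootA:5")
	// ... check the cache, compute and store the checkpoint state ...
	l.Unlock()
	fmt.Println("done")
}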

View File

@@ -146,6 +146,28 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
}
func TestService_GetRecentPreState(t *testing.T) {
service, _ := minimalTestService(t)
ctx := context.Background()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
cp0 := &ethpb.Checkpoint{Epoch: 0, Root: ckRoot}
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
root: [32]byte(ckRoot),
state: s,
slot: 31,
}
require.NotNil(t, service.getRecentPreState(ctx, &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := context.Background()

View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
@@ -558,6 +559,20 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// The gossip handler for blobs writes the index of each verified blob referencing the given
// root to the channel returned by blobNotifiers.forRoot.
nc := s.blobNotifiers.forRoot(root)
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
})
defer nst.Stop()
}
for {
select {
case idx := <-nc:
@@ -571,11 +586,20 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
s.blobNotifiers.delete(root)
return nil
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "context deadline waiting for blob sidecars")
return errors.Wrapf(ctx.Err(), "context deadline waiting for blob sidecars slot: %d, BlockRoot: %#x", block.Slot(), root)
}
}
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
}
}
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot
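The DA-check change above arms a timer for the start of the next slot and cancels it once the sidecars arrive. A minimal, self-contained sketch of that `time.AfterFunc`/`Stop` pattern (durations and messages are assumptions):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Stand-in for time.Until(nextSlot).
	deadline := 200 * time.Millisecond
	warn := time.AfterFunc(deadline, func() {
		fmt.Println("Still waiting for DA check at slot end.")
	})
	defer warn.Stop() // harmless if the timer already fired

	// Simulate the sidecars arriving before the slot boundary.
	time.Sleep(50 * time.Millisecond)
	if warn.Stop() {
		fmt.Println("DA check finished in time; warning suppressed")
	}
}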

View File

@@ -290,18 +290,10 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if params.BeaconConfig().ConfigName != params.PraterName {
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
} else {
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")

View File

@@ -109,10 +109,6 @@ func (c *SkipSlotCache) Get(ctx context.Context, r [32]byte) (state.BeaconState,
// MarkInProgress a request so that any other similar requests will block on
// Get until MarkNotInProgress is called.
func (c *SkipSlotCache) MarkInProgress(r [32]byte) error {
if c.disabled {
return nil
}
c.lock.Lock()
defer c.lock.Unlock()
@@ -126,10 +122,6 @@ func (c *SkipSlotCache) MarkInProgress(r [32]byte) error {
// MarkNotInProgress will release the lock on a given request. This should be
// called after put.
func (c *SkipSlotCache) MarkNotInProgress(r [32]byte) {
if c.disabled {
return
}
c.lock.Lock()
defer c.lock.Unlock()

View File

@@ -2,6 +2,7 @@ package cache_test
import (
"context"
"sync"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
@@ -35,3 +36,28 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, res.ToProto(), s.ToProto(), "Expected equal protos to return from cache")
}
func TestSkipSlotCache_DisabledAndEnabled(t *testing.T) {
ctx := context.Background()
c := cache.NewSkipSlotCache()
r := [32]byte{'a'}
c.Disable()
require.NoError(t, c.MarkInProgress(r))
c.Enable()
wg := new(sync.WaitGroup)
wg.Add(1)
go func() {
// Get call will only terminate when
// it is no longer in progress.
obj, err := c.Get(ctx, r)
require.NoError(t, err)
require.IsNil(t, obj)
wg.Done()
}()
c.MarkNotInProgress(r)
wg.Wait()
}
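The cache change above removes the `disabled` early returns so in-progress markers keep working while the cache is toggled, which is what the new test exercises: `Get` blocks until the root is no longer marked in progress. A simplified sketch of that coordination using a condition variable (not Prysm's implementation):

package main

import (
	"fmt"
	"sync"
)

// inProgress lets one goroutine wait until a key is unmarked.
type inProgress struct {
	mu   sync.Mutex
	cond *sync.Cond
	set  map[string]bool
}

func newInProgress() *inProgress {
	ip := &inProgress{set: make(map[string]bool)}
	ip.cond = sync.NewCond(&ip.mu)
	return ip
}

func (ip *inProgress) mark(k string) {
	ip.mu.Lock()
	ip.set[k] = true
	ip.mu.Unlock()
}

func (ip *inProgress) unmark(k string) {
	ip.mu.Lock()
	delete(ip.set, k)
	ip.cond.Broadcast()
	ip.mu.Unlock()
}

// wait blocks while k is marked, like Get blocking on MarkInProgress.
func (ip *inProgress) wait(k string) {
	ip.mu.Lock()
	for ip.set[k] {
		ip.cond.Wait()
	}
	ip.mu.Unlock()
}

func main() {
	ip := newInProgress()
	ip.mark("root")
	done := make(chan struct{})
	go func() {
		ip.wait("root")
		close(done)
	}()
	ip.unmark("root")
	<-done
	fmt.Println("Get unblocked after MarkNotInProgress")
}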

View File

@@ -224,7 +224,7 @@ func (s *Store) DeleteBlock(ctx context.Context, root [32]byte) error {
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
if b := bkt.Get(root[:]); b != nil {
return ErrDeleteJustifiedAndFinalized
return ErrDeleteFinalized
}
if err := tx.Bucket(blocksBucket).Delete(root[:]); err != nil {

View File

@@ -289,7 +289,7 @@ func TestStore_DeleteBlock(t *testing.T) {
require.Equal(t, b, nil)
require.Equal(t, false, db.HasStateSummary(ctx, root2))
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteJustifiedAndFinalized)
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteFinalized)
}
func TestStore_DeleteJustifiedBlock(t *testing.T) {
@@ -309,7 +309,7 @@ func TestStore_DeleteJustifiedBlock(t *testing.T) {
require.NoError(t, db.SaveBlock(ctx, blk))
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteJustifiedAndFinalized)
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteFinalized)
}
func TestStore_DeleteFinalizedBlock(t *testing.T) {
@@ -329,7 +329,7 @@ func TestStore_DeleteFinalizedBlock(t *testing.T) {
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveGenesisBlockRoot(ctx, root))
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteJustifiedAndFinalized)
require.ErrorIs(t, db.DeleteBlock(ctx, root), ErrDeleteFinalized)
}
func TestStore_GenesisBlock(t *testing.T) {
db := setupDB(t)

View File

@@ -2,8 +2,8 @@ package kv
import "github.com/pkg/errors"
// ErrDeleteJustifiedAndFinalized is raised when we attempt to delete a finalized block/state
var ErrDeleteJustifiedAndFinalized = errors.New("cannot delete finalized block or state")
// ErrDeleteFinalized is raised when we attempt to delete a finalized block/state
var ErrDeleteFinalized = errors.New("cannot delete finalized block or state")
// ErrNotFound can be used directly, or as a wrapped DBError, whenever a db method needs to
// indicate that a value couldn't be found.
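The rename above keeps the error message and only changes the identifier. A small sketch of matching such a sentinel through wrapping with `errors.Is` (the `deleteBlock` helper is hypothetical):

package main

import (
	"errors"
	"fmt"
)

var ErrDeleteFinalized = errors.New("cannot delete finalized block or state")

// deleteBlock is a hypothetical caller that wraps the sentinel.
func deleteBlock() error {
	return fmt.Errorf("db update: %w", ErrDeleteFinalized)
}

func main() {
	fmt.Println(errors.Is(deleteBlock(), ErrDeleteFinalized)) // true
}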

View File

@@ -5,7 +5,6 @@ import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -29,72 +28,76 @@ var containerFinalizedButNotCanonical = []byte("recent block needs reindexing to
// beacon block chain using the finalized root alone as this would exclude all other blocks in the
// finalized epoch from being indexed as "final and canonical".
//
// The algorithm for building the index works as follows:
// - De-index all finalized beacon block roots from previous_finalized_epoch to
// new_finalized_epoch. (I.e. delete these roots from the index, to be re-indexed.)
// - Build the canonical finalized chain by walking up the ancestry chain from the finalized block
// root until a parent is found in the index, or the parent is genesis or the origin checkpoint.
// - Add all block roots in the database where epoch(block.slot) == checkpoint.epoch.
//
// This method ensures that all blocks from the current finalized epoch are considered "final" while
// maintaining only canonical and finalized blocks older than the current finalized epoch.
// The main part of the algorithm traverses parent->child block relationships in the
// `blockParentRootIndicesBucket` bucket to find the path between the last finalized checkpoint
// and the current finalized checkpoint. It relies on the invariant that there is a unique path
// between two finalized checkpoints.
func (s *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, checkpoint *ethpb.Checkpoint) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.updateFinalizedBlockRoots")
defer span.End()
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
root := checkpoint.Root
var previousRoot []byte
genesisRoot := tx.Bucket(blocksBucket).Get(genesisBlockRootKey)
initCheckpointRoot := tx.Bucket(blocksBucket).Get(originCheckpointBlockRootKey)
// De-index recent finalized block roots, to be re-indexed.
finalizedBkt := tx.Bucket(finalizedBlockRootsIndexBucket)
previousFinalizedCheckpoint := &ethpb.Checkpoint{}
if b := bkt.Get(previousFinalizedCheckpointKey); b != nil {
if b := finalizedBkt.Get(previousFinalizedCheckpointKey); b != nil {
if err := decode(ctx, b, previousFinalizedCheckpoint); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
blockRoots, err := s.BlockRoots(ctx, filters.NewFilter().
SetStartEpoch(previousFinalizedCheckpoint.Epoch).
SetEndEpoch(checkpoint.Epoch+1),
)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
for _, root := range blockRoots {
if err := bkt.Delete(root[:]); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
// Walk up the ancestry chain until we reach a block root present in the finalized block roots
// index bucket or genesis block root.
for {
if bytes.Equal(root, genesisRoot) {
break
}
signedBlock, err := s.Block(ctx, bytesutil.ToBytes32(root))
// Handle the case of checkpoint sync.
if previousFinalizedCheckpoint.Root == nil && bytes.Equal(checkpoint.Root, tx.Bucket(blocksBucket).Get(originCheckpointBlockRootKey)) {
container := &ethpb.FinalizedBlockRootContainer{}
enc, err := encode(ctx, container)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
if err := blocks.BeaconBlockIsNil(signedBlock); err != nil {
if err = finalizedBkt.Put(checkpoint.Root, enc); err != nil {
tracing.AnnotateError(span, err)
return err
}
block := signedBlock.Block()
return updatePrevFinalizedCheckpoint(ctx, span, finalizedBkt, checkpoint)
}
parentRoot := block.ParentRoot()
container := &ethpb.FinalizedBlockRootContainer{
ParentRoot: parentRoot[:],
ChildRoot: previousRoot,
var finalized [][]byte
if previousFinalizedCheckpoint.Root == nil {
genesisRoot := tx.Bucket(blocksBucket).Get(genesisBlockRootKey)
_, finalized = pathToFinalizedCheckpoint(ctx, [][]byte{genesisRoot}, checkpoint.Root, tx)
} else {
if err := updateChildOfPrevFinalizedCheckpoint(
ctx,
span,
finalizedBkt,
tx.Bucket(blockParentRootIndicesBucket), previousFinalizedCheckpoint.Root,
); err != nil {
return err
}
_, finalized = pathToFinalizedCheckpoint(ctx, [][]byte{previousFinalizedCheckpoint.Root}, checkpoint.Root, tx)
}
for i, r := range finalized {
var container *ethpb.FinalizedBlockRootContainer
switch i {
case 0:
container = &ethpb.FinalizedBlockRootContainer{
ParentRoot: previousFinalizedCheckpoint.Root,
}
if len(finalized) > 1 {
container.ChildRoot = finalized[i+1]
}
case len(finalized) - 1:
// We don't know the finalized child of the new finalized checkpoint.
// It will be filled out in the next function call.
container = &ethpb.FinalizedBlockRootContainer{}
if len(finalized) > 1 {
container.ParentRoot = finalized[i-1]
}
default:
container = &ethpb.FinalizedBlockRootContainer{
ParentRoot: finalized[i-1],
ChildRoot: finalized[i+1],
}
}
enc, err := encode(ctx, container)
@@ -102,66 +105,13 @@ func (s *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, chec
tracing.AnnotateError(span, err)
return err
}
if err := bkt.Put(root, enc); err != nil {
tracing.AnnotateError(span, err)
return err
}
// breaking here allows the initial checkpoint root to be correctly inserted,
// but stops the loop from trying to search for its parent.
if bytes.Equal(root, initCheckpointRoot) {
break
}
// Found parent, loop exit condition.
pr := block.ParentRoot()
if parentBytes := bkt.Get(pr[:]); parentBytes != nil {
parent := &ethpb.FinalizedBlockRootContainer{}
if err := decode(ctx, parentBytes, parent); err != nil {
tracing.AnnotateError(span, err)
return err
}
parent.ChildRoot = root
enc, err := encode(ctx, parent)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
if err := bkt.Put(pr[:], enc); err != nil {
tracing.AnnotateError(span, err)
return err
}
break
}
previousRoot = root
root = pr[:]
}
// Upsert blocks from the current finalized epoch.
roots, err := s.BlockRoots(ctx, filters.NewFilter().SetStartEpoch(checkpoint.Epoch).SetEndEpoch(checkpoint.Epoch+1))
if err != nil {
tracing.AnnotateError(span, err)
return err
}
for _, root := range roots {
root := root[:]
if bytes.Equal(root, checkpoint.Root) || bkt.Get(root) != nil {
continue
}
if err := bkt.Put(root, containerFinalizedButNotCanonical); err != nil {
if err = finalizedBkt.Put(r, enc); err != nil {
tracing.AnnotateError(span, err)
return err
}
}
// Update previous checkpoint
enc, err := encode(ctx, checkpoint)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
return bkt.Put(previousFinalizedCheckpointKey, enc)
return updatePrevFinalizedCheckpoint(ctx, span, finalizedBkt, checkpoint)
}
// BackfillFinalizedIndex updates the finalized index for a contiguous chain of blocks that are the ancestors of the
@@ -242,8 +192,6 @@ func (s *Store) BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBl
// IsFinalizedBlock returns true if the block root is present in the finalized block root index.
// A beacon block root contained exists in this index if it is considered finalized and canonical.
// Note: beacon blocks from the latest finalized epoch return true, whether or not they are
// considered canonical in the "head view" of the beacon node.
func (s *Store) IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool {
_, span := trace.StartSpan(ctx, "BeaconDB.IsFinalizedBlock")
defer span.End()
@@ -296,3 +244,53 @@ func (s *Store) FinalizedChildBlock(ctx context.Context, blockRoot [32]byte) (in
tracing.AnnotateError(span, err)
return blk, err
}
func pathToFinalizedCheckpoint(ctx context.Context, roots [][]byte, checkpointRoot []byte, tx *bolt.Tx) (bool, [][]byte) {
if len(roots) == 0 || (len(roots) == 1 && roots[0] == nil) {
return false, nil
}
for _, r := range roots {
if bytes.Equal(r, checkpointRoot) {
return true, [][]byte{r}
}
children := lookupValuesForIndices(ctx, map[string][]byte{string(blockParentRootIndicesBucket): r}, tx)
if len(children) == 0 {
children = [][][]byte{nil}
}
isPath, path := pathToFinalizedCheckpoint(ctx, children[0], checkpointRoot, tx)
if isPath {
return true, append([][]byte{r}, path...)
}
}
return false, nil
}
func updatePrevFinalizedCheckpoint(ctx context.Context, span *trace.Span, finalizedBkt *bolt.Bucket, checkpoint *ethpb.Checkpoint) error {
enc, err := encode(ctx, checkpoint)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
return finalizedBkt.Put(previousFinalizedCheckpointKey, enc)
}
func updateChildOfPrevFinalizedCheckpoint(ctx context.Context, span *trace.Span, finalizedBkt, parentBkt *bolt.Bucket, checkpointRoot []byte) error {
container := &ethpb.FinalizedBlockRootContainer{}
if err := decode(ctx, finalizedBkt.Get(checkpointRoot), container); err != nil {
tracing.AnnotateError(span, err)
return err
}
container.ChildRoot = parentBkt.Get(checkpointRoot)
enc, err := encode(ctx, container)
if err != nil {
tracing.AnnotateError(span, err)
return err
}
if err = finalizedBkt.Put(checkpointRoot, enc); err != nil {
tracing.AnnotateError(span, err)
return err
}
return nil
}
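The rewritten index update relies on a unique parent->child path between consecutive finalized checkpoints, discovered by the recursive walk in `pathToFinalizedCheckpoint`. A minimal in-memory sketch of that depth-first search (a plain map stands in for the bolt bucket lookups):

package main

import "fmt"

// path walks parent->children links depth-first and returns the chain
// of roots from `from` to `to`, if one exists.
func path(children map[string][]string, from, to string) ([]string, bool) {
	if from == to {
		return []string{from}, true
	}
	for _, child := range children[from] {
		if p, ok := path(children, child, to); ok {
			return append([]string{from}, p...), true
		}
	}
	return nil, false
}

func main() {
	// a -> b -> d is canonical; a -> c is an orphaned fork.
	children := map[string][]string{"a": {"b", "c"}, "b": {"d"}}
	p, ok := path(children, "a", "d")
	fmt.Println(ok, p) // true [a b d]; "c" is never indexed as finalized
}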

View File

@@ -26,38 +26,30 @@ func TestStore_IsFinalizedBlock(t *testing.T) {
ctx := context.Background()
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
blks := makeBlocks(t, 0, slotsPerEpoch*3, genesisBlockRoot)
blks := makeBlocks(t, 0, slotsPerEpoch*2, genesisBlockRoot)
require.NoError(t, db.SaveBlocks(ctx, blks))
root, err := blks[slotsPerEpoch].Block().HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
}
st, err := util.NewBeaconState()
require.NoError(t, err)
// a state is required to save checkpoint
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
// All blocks up to slotsPerEpoch*2 should be in the finalized index.
for i := uint64(0); i < slotsPerEpoch*2; i++ {
root, err := blks[i].Block().HashTreeRoot()
for i := uint64(0); i <= slotsPerEpoch; i++ {
root, err = blks[i].Block().HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root), "Block at index %d was not considered finalized in the index", i)
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root), "Block at index %d was not considered finalized", i)
}
for i := slotsPerEpoch * 3; i < uint64(len(blks)); i++ {
root, err := blks[i].Block().HashTreeRoot()
for i := slotsPerEpoch + 1; i < uint64(len(blks)); i++ {
root, err = blks[i].Block().HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, false, db.IsFinalizedBlock(ctx, root), "Block at index %d was considered finalized in the index, but should not have", i)
assert.Equal(t, false, db.IsFinalizedBlock(ctx, root), "Block at index %d was considered finalized, but should not have", i)
}
}
func TestStore_IsFinalizedBlockGenesis(t *testing.T) {
func TestStore_IsFinalizedGenesisBlock(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
@@ -69,136 +61,114 @@ func TestStore_IsFinalizedBlockGenesis(t *testing.T) {
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wsb))
require.NoError(t, db.SaveGenesisBlockRoot(ctx, root))
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root), "Finalized genesis block doesn't exist in db")
}
// This test scenario is to test a specific edge case where the finalized block root is not part of
// the finalized and canonical chain.
//
// Example:
// 0 1 2 3 4 5 6 slot
// a <- b <-- d <- e <- f <- g roots
//
// ^- c
//
// Imagine that epochs are 2 slots and that epoch 1, 2, and 3 are finalized. Checkpoint roots would
// be c, e, and g. In this scenario, c was a finalized checkpoint root but no block built upon it so
// it should not be considered "final and canonical" in the view at slot 6.
func TestStore_IsFinalized_ForkEdgeCase(t *testing.T) {
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
blocks0 := makeBlocks(t, slotsPerEpoch*0, slotsPerEpoch, genesisBlockRoot)
blocks1 := append(
makeBlocks(t, slotsPerEpoch*1, 1, bytesutil.ToBytes32(sszRootOrDie(t, blocks0[len(blocks0)-1]))), // No block builds off of the first block in epoch.
makeBlocks(t, slotsPerEpoch*1+1, slotsPerEpoch-1, bytesutil.ToBytes32(sszRootOrDie(t, blocks0[len(blocks0)-1])))...,
)
blocks2 := makeBlocks(t, slotsPerEpoch*2, slotsPerEpoch, bytesutil.ToBytes32(sszRootOrDie(t, blocks1[len(blocks1)-1])))
db := setupDB(t)
ctx := context.Background()
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
require.NoError(t, db.SaveBlocks(ctx, blocks0))
require.NoError(t, db.SaveBlocks(ctx, blocks1))
require.NoError(t, db.SaveBlocks(ctx, blocks2))
// First checkpoint
checkpoint1 := &ethpb.Checkpoint{
Root: sszRootOrDie(t, blocks1[0]),
Epoch: 1,
}
st, err := util.NewBeaconState()
require.NoError(t, err)
// A state is required to save checkpoint
require.NoError(t, db.SaveState(ctx, st, bytesutil.ToBytes32(checkpoint1.Root)))
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, checkpoint1))
// All blocks in blocks0 and blocks1 should be finalized and canonical.
for i, block := range append(blocks0, blocks1...) {
root := sszRootOrDie(t, block)
assert.Equal(t, true, db.IsFinalizedBlock(ctx, bytesutil.ToBytes32(root)), "%d - Expected block %#x to be finalized", i, root)
}
// Second checkpoint
checkpoint2 := &ethpb.Checkpoint{
Root: sszRootOrDie(t, blocks2[0]),
Epoch: 2,
}
// A state is required to save checkpoint
require.NoError(t, db.SaveState(ctx, st, bytesutil.ToBytes32(checkpoint2.Root)))
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, checkpoint2))
// All blocks in blocks0 and blocks2 should be finalized and canonical.
for i, block := range append(blocks0, blocks2...) {
root := sszRootOrDie(t, block)
assert.Equal(t, true, db.IsFinalizedBlock(ctx, bytesutil.ToBytes32(root)), "%d - Expected block %#x to be finalized", i, root)
}
// All blocks in blocks1 should be finalized and canonical, except blocks1[0].
for i, block := range blocks1 {
root := sszRootOrDie(t, block)
if db.IsFinalizedBlock(ctx, bytesutil.ToBytes32(root)) == (i == 0) {
t.Errorf("Expected db.IsFinalizedBlock(ctx, blocks1[%d]) to be %v", i, i != 0)
}
}
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root))
}
func TestStore_IsFinalizedChildBlock(t *testing.T) {
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
ctx := context.Background()
db := setupDB(t)
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
eval := func(t testing.TB, ctx context.Context, db *Store, blks []interfaces.ReadOnlySignedBeaconBlock) {
require.NoError(t, db.SaveBlocks(ctx, blks))
root, err := blks[slotsPerEpoch].Block().HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
}
st, err := util.NewBeaconState()
require.NoError(t, err)
// a state is required to save checkpoint
require.NoError(t, db.SaveState(ctx, st, root))
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
// All blocks up to slotsPerEpoch should have a finalized child block.
for i := uint64(0); i < slotsPerEpoch; i++ {
root, err := blks[i].Block().HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root), "Block at index %d was not considered finalized in the index", i)
blk, err := db.FinalizedChildBlock(ctx, root)
assert.NoError(t, err)
if blk == nil {
t.Error("Child block doesn't exist for valid finalized block.")
}
}
blks := makeBlocks(t, 0, slotsPerEpoch*2, genesisBlockRoot)
require.NoError(t, db.SaveBlocks(ctx, blks))
root, err := blks[slotsPerEpoch].Block().HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
}
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
setup := func(t testing.TB) *Store {
db := setupDB(t)
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
return db
for i := uint64(0); i < slotsPerEpoch; i++ {
root, err = blks[i].Block().HashTreeRoot()
require.NoError(t, err)
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root), "Block at index %d was not considered finalized", i)
blk, err := db.FinalizedChildBlock(ctx, root)
assert.NoError(t, err)
assert.Equal(t, false, blk == nil, "Child block at index %d was not considered finalized", i)
}
t.Run("phase0", func(t *testing.T) {
db := setup(t)
blks := makeBlocks(t, 0, slotsPerEpoch*3, genesisBlockRoot)
eval(t, ctx, db, blks)
})
t.Run("altair", func(t *testing.T) {
db := setup(t)
blks := makeBlocksAltair(t, 0, slotsPerEpoch*3, genesisBlockRoot)
eval(t, ctx, db, blks)
})
}
func sszRootOrDie(t *testing.T, block interfaces.ReadOnlySignedBeaconBlock) []byte {
root, err := block.Block().HashTreeRoot()
func TestStore_ChildRootOfPrevFinalizedCheckpointIsUpdated(t *testing.T) {
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
ctx := context.Background()
db := setupDB(t)
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
blks := makeBlocks(t, 0, slotsPerEpoch*3, genesisBlockRoot)
require.NoError(t, db.SaveBlocks(ctx, blks))
root, err := blks[slotsPerEpoch].Block().HashTreeRoot()
require.NoError(t, err)
return root[:]
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
}
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
root2, err := blks[slotsPerEpoch*2].Block().HashTreeRoot()
require.NoError(t, err)
cp = &ethpb.Checkpoint{
Epoch: 2,
Root: root2[:],
}
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
require.NoError(t, db.db.View(func(tx *bolt.Tx) error {
container := &ethpb.FinalizedBlockRootContainer{}
f := tx.Bucket(finalizedBlockRootsIndexBucket).Get(root[:])
require.NoError(t, decode(ctx, f, container))
r, err := blks[slotsPerEpoch+1].Block().HashTreeRoot()
require.NoError(t, err)
assert.DeepEqual(t, r[:], container.ChildRoot)
return nil
}))
}
func TestStore_OrphanedBlockIsNotFinalized(t *testing.T) {
slotsPerEpoch := uint64(params.BeaconConfig().SlotsPerEpoch)
db := setupDB(t)
ctx := context.Background()
require.NoError(t, db.SaveGenesisBlockRoot(ctx, genesisBlockRoot))
blk0 := util.NewBeaconBlock()
blk0.Block.ParentRoot = genesisBlockRoot[:]
blk0Root, err := blk0.Block.HashTreeRoot()
require.NoError(t, err)
blk1 := util.NewBeaconBlock()
blk1.Block.Slot = 1
blk1.Block.ParentRoot = blk0Root[:]
blk2 := util.NewBeaconBlock()
blk2.Block.Slot = 2
// orphan block at index 1
blk2.Block.ParentRoot = blk0Root[:]
blk2Root, err := blk2.Block.HashTreeRoot()
require.NoError(t, err)
sBlk0, err := consensusblocks.NewSignedBeaconBlock(blk0)
require.NoError(t, err)
sBlk1, err := consensusblocks.NewSignedBeaconBlock(blk1)
require.NoError(t, err)
sBlk2, err := consensusblocks.NewSignedBeaconBlock(blk2)
require.NoError(t, err)
blks := append([]interfaces.ReadOnlySignedBeaconBlock{sBlk0, sBlk1, sBlk2}, makeBlocks(t, 3, slotsPerEpoch*2-3, blk2Root)...)
require.NoError(t, db.SaveBlocks(ctx, blks))
root, err := blks[slotsPerEpoch].Block().HashTreeRoot()
require.NoError(t, err)
cp := &ethpb.Checkpoint{
Epoch: 1,
Root: root[:],
}
require.NoError(t, db.SaveFinalizedCheckpoint(ctx, cp))
for i := uint64(0); i <= slotsPerEpoch; i++ {
root, err = blks[i].Block().HashTreeRoot()
require.NoError(t, err)
if i == 1 {
assert.Equal(t, false, db.IsFinalizedBlock(ctx, root), "Block at index 1 was considered finalized, but should not have")
} else {
assert.Equal(t, true, db.IsFinalizedBlock(ctx, root), "Block at index %d was not considered finalized", i)
}
}
}
func makeBlocks(t *testing.T, i, n uint64, previousRoot [32]byte) []interfaces.ReadOnlySignedBeaconBlock {
@@ -219,24 +189,6 @@ func makeBlocks(t *testing.T, i, n uint64, previousRoot [32]byte) []interfaces.R
return ifaceBlocks
}
func makeBlocksAltair(t *testing.T, startIdx, num uint64, previousRoot [32]byte) []interfaces.ReadOnlySignedBeaconBlock {
blocks := make([]*ethpb.SignedBeaconBlockAltair, num)
ifaceBlocks := make([]interfaces.ReadOnlySignedBeaconBlock, num)
for j := startIdx; j < num+startIdx; j++ {
parentRoot := make([]byte, fieldparams.RootLength)
copy(parentRoot, previousRoot[:])
blocks[j-startIdx] = util.NewBeaconBlockAltair()
blocks[j-startIdx].Block.Slot = primitives.Slot(j + 1)
blocks[j-startIdx].Block.ParentRoot = parentRoot
var err error
previousRoot, err = blocks[j-startIdx].Block.HashTreeRoot()
require.NoError(t, err)
ifaceBlocks[j-startIdx], err = consensusblocks.NewSignedBeaconBlock(blocks[j-startIdx])
require.NoError(t, err)
}
return ifaceBlocks
}
func TestStore_BackfillFinalizedIndexSingle(t *testing.T) {
db := setupDB(t)
ctx := context.Background()

View File

@@ -458,7 +458,7 @@ func (s *Store) DeleteState(ctx context.Context, blockRoot [32]byte) error {
bkt = tx.Bucket(stateBucket)
// Safeguard against deleting genesis, finalized, head state.
if bytes.Equal(blockRoot[:], finalized.Root) || bytes.Equal(blockRoot[:], genesisBlockRoot) || bytes.Equal(blockRoot[:], justified.Root) {
return ErrDeleteJustifiedAndFinalized
return ErrDeleteFinalized
}
// Nothing to delete if state doesn't exist.

View File

@@ -49,6 +49,7 @@ go_test(
"//beacon-chain/operations/attestations/kv:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// pruneAttsPool prunes attestations pool on every slot interval.
@@ -66,7 +67,18 @@ func (s *Service) pruneExpiredAtts() {
// Return true if the input slot has expired.
// Expired is defined as one epoch behind the current time.
func (s *Service) expired(slot primitives.Slot) bool {
func (s *Service) expired(providedSlot primitives.Slot) bool {
providedEpoch := slots.ToEpoch(providedSlot)
currSlot := slots.CurrentSlot(s.genesisTime)
currEpoch := slots.ToEpoch(currSlot)
if currEpoch < params.BeaconConfig().DenebForkEpoch {
return s.expiredPreDeneb(providedSlot)
}
return providedEpoch+1 < currEpoch
}
// Handles expiration of attestations before deneb.
func (s *Service) expiredPreDeneb(slot primitives.Slot) bool {
expirationSlot := slot + params.BeaconConfig().SlotsPerEpoch
expirationTime := s.genesisTime + uint64(expirationSlot.Mul(params.BeaconConfig().SecondsPerSlot))
currentTime := uint64(prysmTime.Now().Unix())
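Under EIP-7045 (Deneb), attestations remain includable for the current and previous epoch, so the new `expired` keeps a slot alive until its epoch is more than one epoch old. A tiny, self-contained check of that rule (assumes mainnet's 32 slots per epoch):

package main

import "fmt"

// expiredPostDeneb reports whether an attestation slot is more than one
// epoch behind the current slot, mirroring providedEpoch+1 < currEpoch.
func expiredPostDeneb(attSlot, currSlot uint64) bool {
	const slotsPerEpoch = 32
	return attSlot/slotsPerEpoch+1 < currSlot/slotsPerEpoch
}

func main() {
	fmt.Println(expiredPostDeneb(64, 160))  // epoch 2 vs 5: expired
	fmt.Println(expiredPostDeneb(128, 160)) // epoch 4 vs 5: still includable
}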

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/async"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -127,3 +128,22 @@ func TestPruneExpired_Expired(t *testing.T) {
assert.Equal(t, true, s.expired(0), "Should be expired")
assert.Equal(t, false, s.expired(1), "Should not be expired")
}
func TestPruneExpired_ExpiredDeneb(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.DenebForkEpoch = 3
params.OverrideBeaconConfig(cfg)
s, err := NewService(context.Background(), &Config{Pool: NewPool()})
require.NoError(t, err)
// Rewind back 4 epochs + 10 slots worth of time.
s.genesisTime = uint64(prysmTime.Now().Unix()) - (4*uint64(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) + 10)
secondEpochStart := primitives.Slot(2 * uint64(params.BeaconConfig().SlotsPerEpoch))
thirdEpochStart := primitives.Slot(3 * uint64(params.BeaconConfig().SlotsPerEpoch))
assert.Equal(t, true, s.expired(secondEpochStart), "Should be expired")
assert.Equal(t, false, s.expired(thirdEpochStart), "Should not be expired")
}

View File

@@ -94,6 +94,7 @@ go_library(
"@com_github_libp2p_go_libp2p_mplex//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_libp2p_go_mplex//:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
)
@@ -68,7 +69,7 @@ func (s *Service) BroadcastAttestation(ctx context.Context, subnet uint64, att *
}
// Non-blocking broadcast, with attempts to discover a subnet peer if none available.
go s.broadcastAttestation(ctx, subnet, att, forkDigest)
go s.internalBroadcastAttestation(ctx, subnet, att, forkDigest)
return nil
}
@@ -94,8 +95,8 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
return nil
}
func (s *Service) broadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastAttestation")
func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -137,7 +138,10 @@ func (s *Service) broadcastAttestation(ctx context.Context, subnet uint64, att *
// acceptable threshold, we exit early and do not broadcast it.
currSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
if att.Data.Slot+params.BeaconConfig().SlotsPerEpoch < currSlot {
log.Warnf("Attestation is too old to broadcast, discarding it. Current Slot: %d , Attestation Slot: %d", currSlot, att.Data.Slot)
log.WithFields(logrus.Fields{
"attestationSlot": att.Data.Slot,
"currentSlot": currSlot,
}).Warning("Attestation is too old to broadcast, discarding it")
return
}
@@ -218,13 +222,13 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
}
// Non-blocking broadcast, with attempts to discover a subnet peer if none available.
go s.broadcastBlob(ctx, subnet, blob, forkDigest)
go s.internalBroadcastBlob(ctx, subnet, blob, forkDigest)
return nil
}
func (s *Service) broadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastBlob")
func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
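Besides the `internalBroadcastAttestation` rename, the hunk above swaps a `Warnf` format string for structured fields. A minimal sketch of the same logging shape with logrus (the slot values are made up):

package main

import "github.com/sirupsen/logrus"

func main() {
	attSlot, currSlot := uint64(50), uint64(100)
	logrus.WithFields(logrus.Fields{
		"attestationSlot": attSlot,
		"currentSlot":     currSlot,
	}).Warning("Attestation is too old to broadcast, discarding it")
}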

View File

@@ -277,58 +277,69 @@ func (s *Service) startDiscoveryV5(
// filterPeer validates each node that we retrieve from our dht. We
// try to ascertain that the peer can be a valid protocol peer.
// Validity Conditions:
// 1. The local node is still actively looking for peers to
// connect to.
// 2. Peer has a valid IP and TCP port set in their enr.
// 3. Peer hasn't been marked as 'bad'
// 4. Peer is not currently active or connected.
// 5. Peer is ready to receive incoming connections.
// 6. Peer's fork digest in their ENR matches that of
// 1. Peer has a valid IP and TCP port set in their enr.
// 2. Peer hasn't been marked as 'bad'.
// 3. Peer is not currently active or connected.
// 4. Peer is ready to receive incoming connections.
// 5. Peer's fork digest in their ENR matches that of
// our localnodes.
func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nil node entries passed in.
if node == nil {
return false
}
// ignore nodes with no ip address stored.
// Ignore nodes with no IP address stored.
if node.IP() == nil {
return false
}
// do not dial nodes with their tcp ports not set
// Ignore nodes with their TCP ports not set.
if err := node.Record().Load(enr.WithEntry("tcp", new(enr.TCP))); err != nil {
if !enr.IsNotFound(err) {
log.WithError(err).Debug("Could not retrieve tcp port")
}
return false
}
peerData, multiAddr, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
return false
}
// Ignore bad nodes.
if s.peers.IsBad(peerData.ID) {
return false
}
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
return false
}
// Ignore nodes that are already connected.
if s.host.Network().Connectedness(peerData.ID) == network.Connected {
return false
}
// Ignore nodes that are not ready to receive incoming connections.
if !s.peers.IsReadyToDial(peerData.ID) {
return false
}
// Ignore nodes that don't match our fork digest.
nodeENR := node.Record()
// Decide whether or not to connect to peer that does not
// match the proper fork ENR data with our local node.
if s.genesisValidatorsRoot != nil {
if err := s.compareForkENR(nodeENR); err != nil {
log.WithError(err).Trace("Fork ENR mismatches between peer and local node")
return false
}
}
// Add peer to peer handler.
s.peers.Add(nodeENR, peerData.ID, multiAddr, network.DirUnknown)
return true
}

View File

@@ -4,6 +4,7 @@ import (
"crypto/ecdsa"
"fmt"
"net"
"time"
"github.com/libp2p/go-libp2p"
mplex "github.com/libp2p/go-libp2p-mplex"
@@ -11,6 +12,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
gomplex "github.com/libp2p/go-mplex"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/features"
@@ -135,3 +137,8 @@ func privKeyOption(privkey *ecdsa.PrivateKey) libp2p.Option {
return cfg.Apply(libp2p.Identity(ifaceKey))
}
}
// Configures stream timeouts on mplex.
func configureMplex() {
gomplex.ResetStreamTimeout = 5 * time.Second
}

View File

@@ -22,7 +22,7 @@ func TestGossipParameters(t *testing.T) {
pms := pubsubGossipParam()
assert.Equal(t, gossipSubMcacheLen, pms.HistoryLength, "gossipSubMcacheLen")
assert.Equal(t, gossipSubMcacheGossip, pms.HistoryGossip, "gossipSubMcacheGossip")
assert.Equal(t, gossipSubSeenTTL, int(pubsub.TimeCacheDuration.Milliseconds()/pms.HeartbeatInterval.Milliseconds()), "gossipSubSeenTtl")
assert.Equal(t, gossipSubSeenTTL, int(pubsub.TimeCacheDuration.Seconds()), "gossipSubSeenTtl")
}
func TestFanoutParameters(t *testing.T) {

View File

@@ -25,7 +25,7 @@ const (
// gossip parameters
gossipSubMcacheLen = 6 // number of windows to retain full messages in cache for `IWANT` responses
gossipSubMcacheGossip = 3 // number of windows to gossip about
gossipSubSeenTTL = 550 // number of heartbeat intervals to retain message IDs
gossipSubSeenTTL = 768 // number of seconds to retain message IDs (2 epochs)
// fanout ttl
gossipSubFanoutTTL = 60000000000 // TTL for fanout maps for topics we are not subscribed to but have published to, in nano seconds
@@ -165,7 +165,8 @@ func pubsubGossipParam() pubsub.GossipSubParams {
// to configure our message id time-cache rather than instantiating
// it with a router instance.
func setPubSubParameters() {
pubsub.TimeCacheDuration = 550 * gossipSubHeartbeatInterval
seenTtl := 2 * time.Second * time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
pubsub.TimeCacheDuration = seenTtl
}
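As a sanity check on the new constant, the expression in setPubSubParameters resolves to the same 768 seconds as gossipSubSeenTTL; a sketch assuming mainnet parameters (32 slots per epoch, 12 seconds per slot):
package main
import (
	"fmt"
	"time"
)
func main() {
	const (
		slotsPerEpoch  = 32 // mainnet value, assumed
		secondsPerSlot = 12 // mainnet value, assumed
	)
	// Mirrors setPubSubParameters: two epochs' worth of seconds.
	seenTTL := 2 * time.Second * time.Duration(slotsPerEpoch*secondsPerSlot)
	fmt.Println(seenTTL.Seconds()) // 768, matching gossipSubSeenTTL
}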
// convert from libp2p's internal schema to a compatible prysm protobuf format.

View File

@@ -124,7 +124,8 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
if err != nil {
return nil, errors.Wrapf(err, "failed to build p2p options")
}
// Sets mplex timeouts
configureMplex()
h, err := libp2p.New(opts...)
if err != nil {
log.WithError(err).Error("Failed to create p2p host")

View File

@@ -46,9 +46,13 @@ const syncLockerVal = 100
const blobSubnetLockerVal = 110
// FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet. Then we try to connect
// with those peers. This method will block until the required amount of
// peers are found, the method only exits in the event of context timeouts.
// subscribed to a particular subnet. Then it tries to connect
// with those peers. This method will block until either:
// - the required number of peers is found, or
// - the context is terminated.
// In some edge cases, this method may hang indefinitely even while peers
// are actually being found. In such a case, the caller should cancel the
// context and re-run the method.
func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
index uint64, threshold int) (bool, error) {
ctx, span := trace.StartSpan(ctx, "p2p.FindPeersWithSubnet")
@@ -73,9 +77,9 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
return false, errors.New("no subnet exists for provided topic")
}
currNum := len(s.pubsub.ListPeers(topic))
wg := new(sync.WaitGroup)
for {
currNum := len(s.pubsub.ListPeers(topic))
if currNum >= threshold {
break
}
@@ -99,7 +103,6 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
}
// Wait for all dials to be completed.
wg.Wait()
currNum = len(s.pubsub.ListPeers(topic))
}
return true, nil
}
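Given the caveat in the updated doc comment, a caller-side sketch of the recommended cancel-and-retry pattern (the same shape the rewritten test below uses); the attempt count and per-attempt timeout are illustrative assumptions, and this is written as if in the p2p package so *Service is in scope:
// findWithRetry bounds each FindPeersWithSubnet attempt with its own
// deadline and retries, since the search can occasionally hang even
// after peers have been found.
func findWithRetry(ctx context.Context, s *Service, topic string, subnet uint64, threshold int) (bool, error) {
	for attempt := 0; attempt < 3; attempt++ {
		attemptCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
		found, err := s.FindPeersWithSubnet(attemptCtx, topic, subnet, threshold)
		cancel()
		if err != nil {
			return false, err
		}
		if found {
			return true, nil
		}
		// Deadline hit without reaching the threshold; retry with a fresh context.
	}
	return false, nil
}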
@@ -110,18 +113,13 @@ func (s *Service) filterPeerForAttSubnet(index uint64) func(node *enode.Node) bo
if !s.filterPeer(node) {
return false
}
subnets, err := attSubnets(node.Record())
if err != nil {
return false
}
indExists := false
for _, comIdx := range subnets {
if comIdx == index {
indExists = true
break
}
}
return indExists
return subnets[index]
}
}
@@ -205,8 +203,10 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
//
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
subs := []uint64{}
for i := uint64(0); i < params.BeaconConfig().SubnetsPerNode; i++ {
subnetsPerNode := params.BeaconConfig().SubnetsPerNode
subs := make([]uint64, 0, subnetsPerNode)
for i := uint64(0); i < subnetsPerNode; i++ {
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
if err != nil {
return nil, err
@@ -281,19 +281,20 @@ func initializeSyncCommSubnets(node *enode.LocalNode) *enode.LocalNode {
// Reads the attestation subnets entry from a node's ENR and determines
// the committee indices of the attestation subnets the node is subscribed to.
func attSubnets(record *enr.Record) ([]uint64, error) {
func attSubnets(record *enr.Record) (map[uint64]bool, error) {
bitV, err := attBitvector(record)
if err != nil {
return nil, err
}
committeeIdxs := make(map[uint64]bool)
// lint:ignore uintcast -- subnet count can be safely cast to int.
if len(bitV) != byteCount(int(attestationSubnetCount)) {
return []uint64{}, errors.Errorf("invalid bitvector provided, it has a size of %d", len(bitV))
return committeeIdxs, errors.Errorf("invalid bitvector provided, it has a size of %d", len(bitV))
}
var committeeIdxs []uint64
for i := uint64(0); i < attestationSubnetCount; i++ {
if bitV.BitAt(i) {
committeeIdxs = append(committeeIdxs, i)
committeeIdxs[i] = true
}
}
return committeeIdxs, nil
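The switch from a slice to a map[uint64]bool is what lets filterPeerForAttSubnet collapse its linear scan into a single lookup; a small self-contained sketch contrasting the two shapes:
package main
import "fmt"
// oldHasSubnet mirrors the removed slice scan in filterPeerForAttSubnet.
func oldHasSubnet(subnets []uint64, index uint64) bool {
	for _, comIdx := range subnets {
		if comIdx == index {
			return true
		}
	}
	return false
}
// newHasSubnet mirrors the map-based replacement: absent keys read as false,
// so no separate existence check is needed.
func newHasSubnet(subnets map[uint64]bool, index uint64) bool {
	return subnets[index]
}
func main() {
	fmt.Println(oldHasSubnet([]uint64{1, 3}, 3))                    // true
	fmt.Println(newHasSubnet(map[uint64]bool{1: true, 3: true}, 2)) // false
}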

View File

@@ -3,49 +3,46 @@ package p2p
import (
"context"
"crypto/rand"
"encoding/hex"
"fmt"
"reflect"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestStartDiscV5_DiscoverPeersWithSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
// This test needs to be entirely rewritten and should be done in a follow up PR from #7885.
t.Skip("This test is now failing after PR 7885 due to false positive")
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 4
flags.Init(gFlags)
// Reset config.
defer flags.Init(new(flags.GlobalFlags))
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
// Topology of this test:
//
//
// Node 1 (subscribed to subnet 1) --\
// |
// Node 2 (subscribed to subnet 2) --+--> BootNode (not subscribed to any subnet) <------- Node 0 (not subscribed to any subnet)
// |
// Node 3 (subscribed to subnet 3) --/
//
// The purpose of this test is to ensure that "Node 0" (connected only to the boot node) is able to
// find and connect to a node already subscribed to a specific subnet.
// In our case, node i is subscribed to subnet i, for i = 1, 2, 3.
// Define the genesis validators root, to ensure everybody is on the same network.
const genesisValidatorRootStr = "0xdeadbeefcafecafedeadbeefcafecafedeadbeefcafecafedeadbeefcafecafe"
genesisValidatorsRoot, err := hex.DecodeString(genesisValidatorRootStr[2:])
require.NoError(t, err)
// Create a context.
ctx := context.Background()
bootNode := bootListener.Self()
// Use a shorter polling period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
@@ -53,111 +50,150 @@ func TestStartDiscV5_DiscoverPeersWithSubnets(t *testing.T) {
pollingPeriod = currentPeriod
}()
var listeners []*discover.UDPv5
// Create flags.
params.SetupTestConfigCleanup(t)
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
params.BeaconNetworkConfig().MinimumPeersInSubnetSearch = 1
// Reset config.
defer flags.Init(new(flags.GlobalFlags))
// First, generate a bootstrap node.
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
bootNodeService := &Service{
cfg: &Config{TCPPort: 2000, UDPPort: 3000},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootNodeForkDigest, err := bootNodeService.currentForkDigest()
require.NoError(t, err)
bootListener, err := bootNodeService.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
bootNodeENR := bootListener.Self().String()
// Create 3 nodes, each subscribed to a different subnet.
// Each node is connected to the bootstrap node.
services := make([]*Service, 0, 3)
for i := 1; i <= 3; i++ {
port = 3000 + i
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
subnet := uint64(i)
service, err := NewService(ctx, &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
UDPPort: uint(port),
}
ipAddr, pkey := createAddrAndPrivKey(t)
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
TCPPort: uint(2000 + i),
UDPPort: uint(3000 + i),
})
require.NoError(t, err)
service.genesisTime = genesisTime
service.genesisValidatorsRoot = genesisValidatorsRoot
nodeForkDigest, err := service.currentForkDigest()
require.NoError(t, err)
require.Equal(t, true, nodeForkDigest == bootNodeForkDigest, "fork digest of the node doesn't match the boot node")
// Start the service.
service.Start()
// Set the ENR `attnets`, used by Prysm to filter peers by subnet.
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(uint64(i), true)
bitV.SetBitAt(subnet, true)
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
listener.LocalNode().Set(entry)
listeners = append(listeners, listener)
service.dv5Listener.LocalNode().Set(entry)
// Join and subscribe to the subnet, needed by libp2p.
topic, err := service.pubsub.Join(fmt.Sprintf(AttestationSubnetTopicFormat, bootNodeForkDigest, subnet) + "/ssz_snappy")
require.NoError(t, err)
_, err = topic.Subscribe()
require.NoError(t, err)
// Keep the service so it can be stopped at the end of the test.
services = append(services, service)
}
// Stop the services.
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
for _, service := range services {
err := service.Stop()
require.NoError(t, err)
}
}()
// Make one service on port 4001.
port = 4001
gs := startup.NewClockSynchronizer()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
UDPPort: uint(port),
ClockWaiter: gs,
TCPPort: 2010,
UDPPort: 3010,
}
s, err = NewService(context.Background(), cfg)
service, err := NewService(ctx, cfg)
require.NoError(t, err)
exitRoutine := make(chan bool)
go func() {
s.Start()
<-exitRoutine
service.genesisTime = genesisTime
service.genesisValidatorsRoot = genesisValidatorsRoot
service.Start()
defer func() {
err := service.Stop()
require.NoError(t, err)
}()
time.Sleep(50 * time.Millisecond)
// Send in a loop to ensure it is delivered (busy wait for the service to subscribe to the state feed).
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
// Wait for the nodes' local routing tables to be populated with the other nodes.
time.Sleep(6 * discoveryWaitTime)
// Look up 3 different subnets.
exists := make([]bool, 0, 3)
for i := 1; i <= 3; i++ {
subnet := uint64(i)
topic := fmt.Sprintf(AttestationSubnetTopicFormat, bootNodeForkDigest, subnet)
exist := false
// This for loop is used to ensure we don't get stuck in `FindPeersWithSubnet`.
// Read the documentation of `FindPeersWithSubnet` for more details.
for j := 0; j < 3; j++ {
ctxWithTimeOut, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
exist, err = service.FindPeersWithSubnet(ctxWithTimeOut, topic, subnet, 1)
require.NoError(t, err)
if exist {
break
}
}
require.NoError(t, err)
exists = append(exists, exist)
// look up 3 different subnets
ctx := context.Background()
exists, err := s.FindPeersWithSubnet(ctx, "", 1, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
exists2, err := s.FindPeersWithSubnet(ctx, "", 2, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
exists3, err := s.FindPeersWithSubnet(ctx, "", 3, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
if !exists || !exists2 || !exists3 {
t.Fatal("Peer with subnet doesn't exist")
}
// Update ENR of a peer.
testService := &Service{
dv5Listener: listeners[0],
metaData: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
Attnets: bitfield.NewBitvector64(),
}),
// Check if all peers are found.
for _, exist := range exists {
require.Equal(t, true, exist, "Peer with subnet doesn't exist")
}
cache.SubnetIDs.AddAttesterSubnetID(0, 10)
testService.RefreshENR()
time.Sleep(2 * time.Second)
exists, err = s.FindPeersWithSubnet(ctx, "", 2, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
assert.Equal(t, true, exists, "Peer with subnet doesn't exist")
assert.NoError(t, s.Stop())
exitRoutine <- true
}
func Test_AttSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
record func(t *testing.T) *enr.Record
record func(localNode *enode.LocalNode) *enr.Record
want []uint64
wantErr bool
errContains string
}{
{
name: "valid record",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
localNode = initializeAttSubnets(localNode)
return localNode.Node().Record()
},
@@ -166,14 +202,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "too small subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, []byte{})
localNode.Set(entry)
return localNode.Node().Record()
@@ -184,14 +213,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "half sized subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, 4))
localNode.Set(entry)
return localNode.Node().Record()
@@ -202,14 +224,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "too large subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, byteCount(int(attestationSubnetCount))+1))
localNode.Set(entry)
return localNode.Node().Record()
@@ -220,14 +235,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "very large subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, byteCount(int(attestationSubnetCount))+100))
localNode.Set(entry)
return localNode.Node().Record()
@@ -238,14 +246,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "single subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(0, true)
entry := enr.WithEntry(attSubnetEnrKey, bitV.Bytes())
@@ -257,17 +258,10 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "multiple subnets",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
bitV := bitfield.NewBitvector64()
for i := uint64(0); i < bitV.Len(); i++ {
// skip 2 subnets
// Keep only even subnets.
if (i+1)%2 == 0 {
continue
}
@@ -285,14 +279,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "all subnets",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
bitV := bitfield.NewBitvector64()
for i := uint64(0); i < bitV.Len(); i++ {
bitV.SetBitAt(i, true)
@@ -309,16 +296,35 @@ func Test_AttSubnets(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := attSubnets(tt.record(t))
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record := tt.record(localNode)
got, err := attSubnets(record)
if (err != nil) != tt.wantErr {
t.Errorf("syncSubnets() error = %v, wantErr %v", err, tt.wantErr)
return
}
if tt.wantErr {
assert.ErrorContains(t, tt.errContains, err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("syncSubnets() got = %v, want %v", got, tt.want)
want := make(map[uint64]bool, len(tt.want))
for _, subnet := range tt.want {
want[subnet] = true
}
if !reflect.DeepEqual(got, want) {
t.Errorf("syncSubnets() got = %v, want %v", got, want)
}
})
}

View File

@@ -25,7 +25,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -8,6 +8,7 @@ import (
time2 "time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
@@ -23,7 +24,6 @@ import (
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -124,86 +124,93 @@ func (s *Server) StreamEvents(w http.ResponseWriter, r *http.Request) {
// stalling while waiting for the first response chunk.
// After that we send a keepalive dummy message every SECONDS_PER_SLOT
// to prevent anyone (e.g. proxy servers) from closing connections.
sendKeepalive(w, flusher)
if err := sendKeepalive(w, flusher); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
keepaliveTicker := time2.NewTicker(time2.Duration(params.BeaconConfig().SecondsPerSlot) * time2.Second)
for {
select {
case event := <-opsChan:
handleBlockOperationEvents(w, flusher, topicsMap, event)
if err := handleBlockOperationEvents(w, flusher, topicsMap, event); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case event := <-stateChan:
s.handleStateEvents(ctx, w, flusher, topicsMap, event)
if err := s.handleStateEvents(ctx, w, flusher, topicsMap, event); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case <-keepaliveTicker.C:
sendKeepalive(w, flusher)
if err := sendKeepalive(w, flusher); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case <-ctx.Done():
return
}
}
}
func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) {
func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) error {
switch event.Type {
case operation.AggregatedAttReceived:
if _, ok := requestedTopics[AttestationTopic]; !ok {
return
return nil
}
attData, ok := event.Data.(*operation.AggregatedAttReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
att := structs.AttFromConsensus(attData.Attestation.Aggregate)
send(w, flusher, AttestationTopic, att)
return send(w, flusher, AttestationTopic, att)
case operation.UnaggregatedAttReceived:
if _, ok := requestedTopics[AttestationTopic]; !ok {
return
return nil
}
attData, ok := event.Data.(*operation.UnAggregatedAttReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
att := structs.AttFromConsensus(attData.Attestation)
send(w, flusher, AttestationTopic, att)
return send(w, flusher, AttestationTopic, att)
case operation.ExitReceived:
if _, ok := requestedTopics[VoluntaryExitTopic]; !ok {
return
return nil
}
exitData, ok := event.Data.(*operation.ExitReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, VoluntaryExitTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, VoluntaryExitTopic)
}
exit := structs.SignedExitFromConsensus(exitData.Exit)
send(w, flusher, VoluntaryExitTopic, exit)
return send(w, flusher, VoluntaryExitTopic, exit)
case operation.SyncCommitteeContributionReceived:
if _, ok := requestedTopics[SyncCommitteeContributionTopic]; !ok {
return
return nil
}
contributionData, ok := event.Data.(*operation.SyncCommitteeContributionReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, SyncCommitteeContributionTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, SyncCommitteeContributionTopic)
}
contribution := structs.SignedContributionAndProofFromConsensus(contributionData.Contribution)
send(w, flusher, SyncCommitteeContributionTopic, contribution)
return send(w, flusher, SyncCommitteeContributionTopic, contribution)
case operation.BLSToExecutionChangeReceived:
if _, ok := requestedTopics[BLSToExecutionChangeTopic]; !ok {
return
return nil
}
changeData, ok := event.Data.(*operation.BLSToExecutionChangeReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, BLSToExecutionChangeTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, BLSToExecutionChangeTopic)
}
send(w, flusher, BLSToExecutionChangeTopic, structs.SignedBLSChangeFromConsensus(changeData.Change))
return send(w, flusher, BLSToExecutionChangeTopic, structs.SignedBLSChangeFromConsensus(changeData.Change))
case operation.BlobSidecarReceived:
if _, ok := requestedTopics[BlobSidecarTopic]; !ok {
return
return nil
}
blobData, ok := event.Data.(*operation.BlobSidecarReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, BlobSidecarTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, BlobSidecarTopic)
}
versionedHash := blockchain.ConvertKzgCommitmentToVersionedHash(blobData.Blob.KzgCommitment)
blobEvent := &structs.BlobSidecarEvent{
@@ -213,38 +220,36 @@ func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, req
VersionedHash: versionedHash.String(),
KzgCommitment: hexutil.Encode(blobData.Blob.KzgCommitment),
}
send(w, flusher, BlobSidecarTopic, blobEvent)
return send(w, flusher, BlobSidecarTopic, blobEvent)
case operation.AttesterSlashingReceived:
if _, ok := requestedTopics[AttesterSlashingTopic]; !ok {
return
return nil
}
attesterSlashingData, ok := event.Data.(*operation.AttesterSlashingReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, AttesterSlashingTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, AttesterSlashingTopic)
}
send(w, flusher, AttesterSlashingTopic, structs.AttesterSlashingFromConsensus(attesterSlashingData.AttesterSlashing))
return send(w, flusher, AttesterSlashingTopic, structs.AttesterSlashingFromConsensus(attesterSlashingData.AttesterSlashing))
case operation.ProposerSlashingReceived:
if _, ok := requestedTopics[ProposerSlashingTopic]; !ok {
return
return nil
}
proposerSlashingData, ok := event.Data.(*operation.ProposerSlashingReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, ProposerSlashingTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, ProposerSlashingTopic)
}
send(w, flusher, ProposerSlashingTopic, structs.ProposerSlashingFromConsensus(proposerSlashingData.ProposerSlashing))
return send(w, flusher, ProposerSlashingTopic, structs.ProposerSlashingFromConsensus(proposerSlashingData.ProposerSlashing))
}
return nil
}
func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) {
func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) error {
switch event.Type {
case statefeed.NewHead:
if _, ok := requestedTopics[HeadTopic]; ok {
headData, ok := event.Data.(*ethpb.EventHead)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, HeadTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, HeadTopic)
}
head := &structs.HeadEvent{
Slot: fmt.Sprintf("%d", headData.Slot),
@@ -255,23 +260,22 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
PreviousDutyDependentRoot: hexutil.Encode(headData.PreviousDutyDependentRoot),
CurrentDutyDependentRoot: hexutil.Encode(headData.CurrentDutyDependentRoot),
}
send(w, flusher, HeadTopic, head)
return send(w, flusher, HeadTopic, head)
}
if _, ok := requestedTopics[PayloadAttributesTopic]; ok {
s.sendPayloadAttributes(ctx, w, flusher)
return s.sendPayloadAttributes(ctx, w, flusher)
}
case statefeed.MissedSlot:
if _, ok := requestedTopics[PayloadAttributesTopic]; ok {
s.sendPayloadAttributes(ctx, w, flusher)
return s.sendPayloadAttributes(ctx, w, flusher)
}
case statefeed.FinalizedCheckpoint:
if _, ok := requestedTopics[FinalizedCheckpointTopic]; !ok {
return
return nil
}
checkpointData, ok := event.Data.(*ethpb.EventFinalizedCheckpoint)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, FinalizedCheckpointTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, FinalizedCheckpointTopic)
}
checkpoint := &structs.FinalizedCheckpointEvent{
Block: hexutil.Encode(checkpointData.Block),
@@ -279,15 +283,14 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
Epoch: fmt.Sprintf("%d", checkpointData.Epoch),
ExecutionOptimistic: checkpointData.ExecutionOptimistic,
}
send(w, flusher, FinalizedCheckpointTopic, checkpoint)
return send(w, flusher, FinalizedCheckpointTopic, checkpoint)
case statefeed.LightClientFinalityUpdate:
if _, ok := requestedTopics[LightClientFinalityUpdateTopic]; !ok {
return
return nil
}
updateData, ok := event.Data.(*ethpbv2.LightClientFinalityUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
}
var finalityBranch []string
@@ -318,15 +321,14 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientFinalityUpdateTopic, update)
return send(w, flusher, LightClientFinalityUpdateTopic, update)
case statefeed.LightClientOptimisticUpdate:
if _, ok := requestedTopics[LightClientOptimisticUpdateTopic]; !ok {
return
return nil
}
updateData, ok := event.Data.(*ethpbv2.LightClientOptimisticUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
}
update := &structs.LightClientOptimisticUpdateEvent{
Version: version.String(int(updateData.Version)),
@@ -345,15 +347,14 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientOptimisticUpdateTopic, update)
return send(w, flusher, LightClientOptimisticUpdateTopic, update)
case statefeed.Reorg:
if _, ok := requestedTopics[ChainReorgTopic]; !ok {
return
return nil
}
reorgData, ok := event.Data.(*ethpb.EventChainReorg)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, ChainReorgTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, ChainReorgTopic)
}
reorg := &structs.ChainReorgEvent{
Slot: fmt.Sprintf("%d", reorgData.Slot),
@@ -365,78 +366,69 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
Epoch: fmt.Sprintf("%d", reorgData.Epoch),
ExecutionOptimistic: reorgData.ExecutionOptimistic,
}
send(w, flusher, ChainReorgTopic, reorg)
return send(w, flusher, ChainReorgTopic, reorg)
case statefeed.BlockProcessed:
if _, ok := requestedTopics[BlockTopic]; !ok {
return
return nil
}
blkData, ok := event.Data.(*statefeed.BlockProcessedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, BlockTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, BlockTopic)
}
blockRoot, err := blkData.SignedBlock.Block().HashTreeRoot()
if err != nil {
write(w, flusher, "Could not get block root: "+err.Error())
return
return write(w, flusher, "Could not get block root: "+err.Error())
}
blk := &structs.BlockEvent{
Slot: fmt.Sprintf("%d", blkData.Slot),
Block: hexutil.Encode(blockRoot[:]),
ExecutionOptimistic: blkData.Optimistic,
}
send(w, flusher, BlockTopic, blk)
return send(w, flusher, BlockTopic, blk)
}
return nil
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on the state at N_{current_slot}, while the remaining fields are based on the state at N_{current_slot + 1}.
func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWriter, flusher http.Flusher) {
func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWriter, flusher http.Flusher) error {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
write(w, flusher, "Could not get head root: "+err.Error())
return
return write(w, flusher, "Could not get head root: "+err.Error())
}
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
write(w, flusher, "Could not get head state: "+err.Error())
return
return write(w, flusher, "Could not get head state: "+err.Error())
}
// advance the head state
headState, err := transition.ProcessSlotsIfPossible(ctx, st, s.ChainInfoFetcher.CurrentSlot()+1)
if err != nil {
write(w, flusher, "Could not advance head state: "+err.Error())
return
return write(w, flusher, "Could not advance head state: "+err.Error())
}
headBlock, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
write(w, flusher, "Could not get head block: "+err.Error())
return
return write(w, flusher, "Could not get head block: "+err.Error())
}
headPayload, err := headBlock.Block().Body().Execution()
if err != nil {
write(w, flusher, "Could not get execution payload: "+err.Error())
return
return write(w, flusher, "Could not get execution payload: "+err.Error())
}
t, err := slots.ToTime(headState.GenesisTime(), headState.Slot())
if err != nil {
write(w, flusher, "Could not get head state slot time: "+err.Error())
return
return write(w, flusher, "Could not get head state slot time: "+err.Error())
}
prevRando, err := helpers.RandaoMix(headState, time.CurrentEpoch(headState))
if err != nil {
write(w, flusher, "Could not get head state randao mix: "+err.Error())
return
return write(w, flusher, "Could not get head state randao mix: "+err.Error())
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, headState)
if err != nil {
write(w, flusher, "Could not get head state proposer index: "+err.Error())
return
return write(w, flusher, "Could not get head state proposer index: "+err.Error())
}
var attributes interface{}
@@ -450,8 +442,7 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
case version.Capella:
withdrawals, err := headState.ExpectedWithdrawals()
if err != nil {
write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
return
return write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
}
attributes = &structs.PayloadAttributesV2{
Timestamp: fmt.Sprintf("%d", t.Unix()),
@@ -462,13 +453,11 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
case version.Deneb:
withdrawals, err := headState.ExpectedWithdrawals()
if err != nil {
write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
return
return write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
}
parentRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
write(w, flusher, "Could not get head block root: "+err.Error())
return
return write(w, flusher, "Could not get head block root: "+err.Error())
}
attributes = &structs.PayloadAttributesV3{
Timestamp: fmt.Sprintf("%d", t.Unix()),
@@ -478,14 +467,12 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
ParentBeaconBlockRoot: hexutil.Encode(parentRoot[:]),
}
default:
write(w, flusher, "Payload version %s is not supported", version.String(headState.Version()))
return
return write(w, flusher, "Payload version %s is not supported", version.String(headState.Version()))
}
attributesBytes, err := json.Marshal(attributes)
if err != nil {
write(w, flusher, err.Error())
return
return write(w, flusher, err.Error())
}
eventData := structs.PayloadAttributesEventData{
ProposerIndex: fmt.Sprintf("%d", proposerIndex),
@@ -497,32 +484,31 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
}
eventDataBytes, err := json.Marshal(eventData)
if err != nil {
write(w, flusher, err.Error())
return
return write(w, flusher, err.Error())
}
send(w, flusher, PayloadAttributesTopic, &structs.PayloadAttributesEvent{
return send(w, flusher, PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: version.String(headState.Version()),
Data: eventDataBytes,
})
}
func send(w http.ResponseWriter, flusher http.Flusher, name string, data interface{}) {
func send(w http.ResponseWriter, flusher http.Flusher, name string, data interface{}) error {
j, err := json.Marshal(data)
if err != nil {
write(w, flusher, "Could not marshal event to JSON: "+err.Error())
return
return write(w, flusher, "Could not marshal event to JSON: "+err.Error())
}
write(w, flusher, "event: %s\ndata: %s\n\n", name, string(j))
return write(w, flusher, "event: %s\ndata: %s\n\n", name, string(j))
}
func sendKeepalive(w http.ResponseWriter, flusher http.Flusher) {
write(w, flusher, ":\n\n")
func sendKeepalive(w http.ResponseWriter, flusher http.Flusher) error {
return write(w, flusher, ":\n\n")
}
func write(w http.ResponseWriter, flusher http.Flusher, format string, a ...any) {
func write(w http.ResponseWriter, flusher http.Flusher, format string, a ...any) error {
_, err := fmt.Fprintf(w, format, a...)
if err != nil {
log.WithError(err).Error("Could not write to response writer")
return errors.Wrap(err, "could not write to response writer")
}
flusher.Flush()
return nil
}
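For reference, the wire format produced by send and sendKeepalive above is plain server-sent events: frames of "event: <name>\ndata: <json>\n\n" with a bare ":" line as keepalive. A line-oriented client sketch is enough to exercise it; the endpoint URL and topic below are placeholder assumptions:
package main
import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)
func main() {
	resp, err := http.Get("http://localhost:3500/eth/v1/events?topics=head")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		switch {
		case line == ":":
			// Keepalive; nothing to do.
		case strings.HasPrefix(line, "event: "):
			fmt.Println("topic:", strings.TrimPrefix(line, "event: "))
		case strings.HasPrefix(line, "data: "):
			fmt.Println("payload:", strings.TrimPrefix(line, "data: "))
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}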

View File

@@ -8,6 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
@@ -39,6 +40,12 @@ var (
})
)
func setFeeRecipientIfBurnAddress(val *cache.TrackedValidator) {
if val.FeeRecipient == primitives.ExecutionAddress([20]byte{}) && val.Index == 0 {
val.FeeRecipient = primitives.ExecutionAddress(params.BeaconConfig().DefaultFeeRecipient)
}
}
// This returns the local execution payload of a given slot. The function has full awareness of pre- and post-merge semantics.
func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock, st state.BeaconState) (interfaces.ExecutionData, bool, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.getLocalPayload")
@@ -62,6 +69,7 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
if !tracked {
logrus.WithFields(logFields).Warn("could not find tracked proposer index")
}
setFeeRecipientIfBurnAddress(&val)
var err error
if ok && payloadId != [8]byte{} {

View File

@@ -383,3 +383,16 @@ func TestServer_getTerminalBlockHashIfExists(t *testing.T) {
})
}
}
func TestSetFeeRecipientIfBurnAddress(t *testing.T) {
val := &cache.TrackedValidator{Index: 1}
cfg := params.BeaconConfig().Copy()
cfg.DefaultFeeRecipient = common.Address([20]byte{'a'})
params.OverrideBeaconConfig(cfg)
require.NotEqual(t, common.Address(val.FeeRecipient), params.BeaconConfig().DefaultFeeRecipient)
setFeeRecipientIfBurnAddress(val)
require.NotEqual(t, common.Address(val.FeeRecipient), params.BeaconConfig().DefaultFeeRecipient)
val.Index = 0
setFeeRecipientIfBurnAddress(val)
require.Equal(t, common.Address(val.FeeRecipient), params.BeaconConfig().DefaultFeeRecipient)
}

View File

@@ -189,7 +189,7 @@ func TestLoadBlocks_FirstBranch(t *testing.T) {
roots, savedBlocks, err := tree1(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
filteredBlocks, err := s.loadBlocks(ctx, 0, 8, roots[len(roots)-1])
filteredBlocks, err := s.loadBlocks(ctx, 0, 9, roots[len(roots)-1])
require.NoError(t, err)
wanted := []*ethpb.SignedBeaconBlock{
@@ -220,7 +220,7 @@ func TestLoadBlocks_SecondBranch(t *testing.T) {
roots, savedBlocks, err := tree1(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
filteredBlocks, err := s.loadBlocks(ctx, 0, 5, roots[5])
filteredBlocks, err := s.loadBlocks(ctx, 0, 6, roots[5])
require.NoError(t, err)
wanted := []*ethpb.SignedBeaconBlock{
@@ -249,7 +249,7 @@ func TestLoadBlocks_ThirdBranch(t *testing.T) {
roots, savedBlocks, err := tree1(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
filteredBlocks, err := s.loadBlocks(ctx, 0, 7, roots[7])
filteredBlocks, err := s.loadBlocks(ctx, 0, 8, roots[7])
require.NoError(t, err)
wanted := []*ethpb.SignedBeaconBlock{
@@ -280,7 +280,7 @@ func TestLoadBlocks_SameSlots(t *testing.T) {
roots, savedBlocks, err := tree2(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
filteredBlocks, err := s.loadBlocks(ctx, 0, 3, roots[6])
filteredBlocks, err := s.loadBlocks(ctx, 0, 4, roots[6])
require.NoError(t, err)
wanted := []*ethpb.SignedBeaconBlock{
@@ -309,7 +309,7 @@ func TestLoadBlocks_SameEndSlots(t *testing.T) {
roots, savedBlocks, err := tree3(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
filteredBlocks, err := s.loadBlocks(ctx, 0, 2, roots[2])
filteredBlocks, err := s.loadBlocks(ctx, 0, 3, roots[2])
require.NoError(t, err)
wanted := []*ethpb.SignedBeaconBlock{
@@ -337,7 +337,7 @@ func TestLoadBlocks_SameEndSlotsWith2blocks(t *testing.T) {
roots, savedBlocks, err := tree4(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
filteredBlocks, err := s.loadBlocks(ctx, 0, 2, roots[1])
filteredBlocks, err := s.loadBlocks(ctx, 0, 3, roots[1])
require.NoError(t, err)
wanted := []*ethpb.SignedBeaconBlock{
@@ -363,7 +363,7 @@ func TestLoadBlocks_BadStart(t *testing.T) {
roots, _, err := tree1(t, beaconDB, bytesutil.PadTo([]byte{'A'}, 32))
require.NoError(t, err)
_, err = s.loadBlocks(ctx, 0, 5, roots[8])
_, err = s.loadBlocks(ctx, 0, 6, roots[8])
assert.ErrorContains(t, "end block roots don't match", err)
}
@@ -374,63 +374,63 @@ func TestLoadBlocks_BadStart(t *testing.T) {
// \- B7
func tree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte, []*ethpb.SignedBeaconBlock, error) {
b0 := util.NewBeaconBlock()
b0.Block.Slot = 0
b0.Block.Slot = 1
b0.Block.ParentRoot = genesisRoot
r0, err := b0.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.Slot = 2
b1.Block.ParentRoot = r0[:]
r1, err := b1.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b2 := util.NewBeaconBlock()
b2.Block.Slot = 2
b2.Block.Slot = 3
b2.Block.ParentRoot = r1[:]
r2, err := b2.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b3 := util.NewBeaconBlock()
b3.Block.Slot = 3
b3.Block.Slot = 4
b3.Block.ParentRoot = r1[:]
r3, err := b3.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b4 := util.NewBeaconBlock()
b4.Block.Slot = 4
b4.Block.Slot = 5
b4.Block.ParentRoot = r2[:]
r4, err := b4.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b5 := util.NewBeaconBlock()
b5.Block.Slot = 5
b5.Block.Slot = 6
b5.Block.ParentRoot = r3[:]
r5, err := b5.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b6 := util.NewBeaconBlock()
b6.Block.Slot = 6
b6.Block.Slot = 7
b6.Block.ParentRoot = r4[:]
r6, err := b6.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b7 := util.NewBeaconBlock()
b7.Block.Slot = 7
b7.Block.Slot = 8
b7.Block.ParentRoot = r6[:]
r7, err := b7.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b8 := util.NewBeaconBlock()
b8.Block.Slot = 8
b8.Block.Slot = 9
b8.Block.ParentRoot = r6[:]
r8, err := b8.Block.HashTreeRoot()
if err != nil {
@@ -466,21 +466,21 @@ func tree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
// \- B2 -- B3
func tree2(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte, []*ethpb.SignedBeaconBlock, error) {
b0 := util.NewBeaconBlock()
b0.Block.Slot = 0
b0.Block.Slot = 1
b0.Block.ParentRoot = genesisRoot
r0, err := b0.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.Slot = 2
b1.Block.ParentRoot = r0[:]
r1, err := b1.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b21 := util.NewBeaconBlock()
b21.Block.Slot = 2
b21.Block.Slot = 3
b21.Block.ParentRoot = r1[:]
b21.Block.StateRoot = bytesutil.PadTo([]byte{'A'}, 32)
r21, err := b21.Block.HashTreeRoot()
@@ -488,7 +488,7 @@ func tree2(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b22 := util.NewBeaconBlock()
b22.Block.Slot = 2
b22.Block.Slot = 3
b22.Block.ParentRoot = r1[:]
b22.Block.StateRoot = bytesutil.PadTo([]byte{'B'}, 32)
r22, err := b22.Block.HashTreeRoot()
@@ -496,7 +496,7 @@ func tree2(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b23 := util.NewBeaconBlock()
b23.Block.Slot = 2
b23.Block.Slot = 3
b23.Block.ParentRoot = r1[:]
b23.Block.StateRoot = bytesutil.PadTo([]byte{'C'}, 32)
r23, err := b23.Block.HashTreeRoot()
@@ -504,7 +504,7 @@ func tree2(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b24 := util.NewBeaconBlock()
b24.Block.Slot = 2
b24.Block.Slot = 3
b24.Block.ParentRoot = r1[:]
b24.Block.StateRoot = bytesutil.PadTo([]byte{'D'}, 32)
r24, err := b24.Block.HashTreeRoot()
@@ -512,7 +512,7 @@ func tree2(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b3 := util.NewBeaconBlock()
b3.Block.Slot = 3
b3.Block.Slot = 4
b3.Block.ParentRoot = r24[:]
r3, err := b3.Block.HashTreeRoot()
if err != nil {
@@ -549,21 +549,21 @@ func tree2(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
// \- B2
func tree3(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte, []*ethpb.SignedBeaconBlock, error) {
b0 := util.NewBeaconBlock()
b0.Block.Slot = 0
b0.Block.Slot = 1
b0.Block.ParentRoot = genesisRoot
r0, err := b0.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b1 := util.NewBeaconBlock()
b1.Block.Slot = 1
b1.Block.Slot = 2
b1.Block.ParentRoot = r0[:]
r1, err := b1.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b21 := util.NewBeaconBlock()
b21.Block.Slot = 2
b21.Block.Slot = 3
b21.Block.ParentRoot = r1[:]
b21.Block.StateRoot = bytesutil.PadTo([]byte{'A'}, 32)
r21, err := b21.Block.HashTreeRoot()
@@ -571,7 +571,7 @@ func tree3(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b22 := util.NewBeaconBlock()
b22.Block.Slot = 2
b22.Block.Slot = 3
b22.Block.ParentRoot = r1[:]
b22.Block.StateRoot = bytesutil.PadTo([]byte{'B'}, 32)
r22, err := b22.Block.HashTreeRoot()
@@ -579,7 +579,7 @@ func tree3(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b23 := util.NewBeaconBlock()
b23.Block.Slot = 2
b23.Block.Slot = 3
b23.Block.ParentRoot = r1[:]
b23.Block.StateRoot = bytesutil.PadTo([]byte{'C'}, 32)
r23, err := b23.Block.HashTreeRoot()
@@ -587,7 +587,7 @@ func tree3(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b24 := util.NewBeaconBlock()
b24.Block.Slot = 2
b24.Block.Slot = 3
b24.Block.ParentRoot = r1[:]
b24.Block.StateRoot = bytesutil.PadTo([]byte{'D'}, 32)
r24, err := b24.Block.HashTreeRoot()
@@ -626,14 +626,14 @@ func tree3(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
// \- B2
func tree4(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte, []*ethpb.SignedBeaconBlock, error) {
b0 := util.NewBeaconBlock()
b0.Block.Slot = 0
b0.Block.Slot = 1
b0.Block.ParentRoot = genesisRoot
r0, err := b0.Block.HashTreeRoot()
if err != nil {
return nil, nil, err
}
b21 := util.NewBeaconBlock()
b21.Block.Slot = 2
b21.Block.Slot = 3
b21.Block.ParentRoot = r0[:]
b21.Block.StateRoot = bytesutil.PadTo([]byte{'A'}, 32)
r21, err := b21.Block.HashTreeRoot()
@@ -641,7 +641,7 @@ func tree4(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b22 := util.NewBeaconBlock()
b22.Block.Slot = 2
b22.Block.Slot = 3
b22.Block.ParentRoot = r0[:]
b22.Block.StateRoot = bytesutil.PadTo([]byte{'B'}, 32)
r22, err := b22.Block.HashTreeRoot()
@@ -649,7 +649,7 @@ func tree4(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b23 := util.NewBeaconBlock()
b23.Block.Slot = 2
b23.Block.Slot = 3
b23.Block.ParentRoot = r0[:]
b23.Block.StateRoot = bytesutil.PadTo([]byte{'C'}, 32)
r23, err := b23.Block.HashTreeRoot()
@@ -657,7 +657,7 @@ func tree4(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][32]byte,
return nil, nil, err
}
b24 := util.NewBeaconBlock()
b24.Block.Slot = 2
b24.Block.Slot = 3
b24.Block.ParentRoot = r0[:]
b24.Block.StateRoot = bytesutil.PadTo([]byte{'D'}, 32)
r24, err := b24.Block.HashTreeRoot()
@@ -697,17 +697,17 @@ func TestLoadFinalizedBlocks(t *testing.T) {
gRoot, err := gBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, gBlock)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, [32]byte{}))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, gRoot))
roots, _, err := tree1(t, beaconDB, gRoot[:])
require.NoError(t, err)
filteredBlocks, err := s.loadFinalizedBlocks(ctx, 0, 8)
filteredBlocks, err := s.loadFinalizedBlocks(ctx, 0, 9)
require.NoError(t, err)
require.Equal(t, 0, len(filteredBlocks))
require.Equal(t, 1, len(filteredBlocks))
require.NoError(t, beaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: roots[8][:]}))
require.NoError(t, s.beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: roots[8][:]}))
filteredBlocks, err = s.loadFinalizedBlocks(ctx, 0, 8)
filteredBlocks, err = s.loadFinalizedBlocks(ctx, 0, 9)
require.NoError(t, err)
require.Equal(t, 10, len(filteredBlocks))
require.Equal(t, 7, len(filteredBlocks))
}

View File

@@ -45,7 +45,7 @@ func (w *p2pWorker) run(ctx context.Context) {
func (w *p2pWorker) handleBlocks(ctx context.Context, b batch) batch {
cs := w.c.CurrentSlot()
blobRetentionStart, err := sync.BlobsByRangeMinStartSlot(cs)
blobRetentionStart, err := sync.BlobRPCMinValidSlot(cs)
if err != nil {
return b.withRetryableError(errors.Wrap(err, "configuration issue, could not compute minimum blob retention slot"))
}

View File

@@ -327,40 +327,3 @@ func TestTestcaseSetup_BlocksAndBlobs(t *testing.T) {
require.Equal(t, true, found != nil)
}
}
func TestRoundTripDenebSave(t *testing.T) {
ctx := context.Background()
cfg := params.BeaconConfig()
repositionFutureEpochs(cfg)
undo, err := params.SetActiveWithUndo(cfg)
require.NoError(t, err)
defer func() {
require.NoError(t, undo())
}()
parentRoot := [32]byte{}
c := blobsTestCase{}
chain, clock := defaultMockChain(t)
c.chain = chain
c.clock = clock
oldest, err := slots.EpochStart(blobMinReqEpoch(c.chain.FinalizedCheckPoint.Epoch, slots.ToEpoch(c.clock.CurrentSlot())))
require.NoError(t, err)
maxBlobs := fieldparams.MaxBlobsPerBlock
block, bsc := generateTestBlockWithSidecars(t, parentRoot, oldest, maxBlobs)
require.Equal(t, len(block.Block.Body.BlobKzgCommitments), len(bsc))
require.Equal(t, maxBlobs, len(bsc))
for i := range bsc {
require.DeepEqual(t, block.Block.Body.BlobKzgCommitments[i], bsc[i].KzgCommitment)
}
d := db.SetupDB(t)
util.SaveBlock(t, ctx, d, block)
root, err := block.Block.HashTreeRoot()
require.NoError(t, err)
dbBlock, err := d.Block(ctx, root)
require.NoError(t, err)
comms, err := dbBlock.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, maxBlobs, len(comms))
for i := range bsc {
require.DeepEqual(t, comms[i], bsc[i].KzgCommitment)
}
}

View File

@@ -12,14 +12,12 @@ go_library(
"log.go",
"round_robin.go",
"service.go",
"verification.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/initial-sync",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//async/abool:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -41,7 +39,6 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//container/leaky-bucket:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime:go_default_library",

View File

@@ -478,7 +478,7 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.Bl
if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch {
return bwb, nil
}
blobWindowStart, err := prysmsync.BlobsByRangeMinStartSlot(f.clock.CurrentSlot())
blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot())
if err != nil {
return nil, err
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -167,7 +168,7 @@ func (s *Service) processFetchedDataRegSync(
if len(bwb) == 0 {
return
}
bv := newBlobBatchVerifier(s.newBlobVerifier)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
batchFields := logrus.Fields{
"firstSlot": data.bwb[0].Block.Block().Slot(),
@@ -326,7 +327,7 @@ func (s *Service) processBatchedBlocks(ctx context.Context, genesis time.Time,
errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot())
}
bv := newBlobBatchVerifier(s.newBlobVerifier)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
s.logBatchSyncStatus(genesis, first, len(bwb))
for _, bb := range bwb {

View File

@@ -340,7 +340,7 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
if len(sidecars) != len(req) {
continue
}
bv := newBlobBatchVerifier(s.newBlobVerifier)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
current := s.clock.CurrentSlot()
if err := avs.Persist(current, sidecars...); err != nil {
@@ -362,3 +362,9 @@ func shufflePeers(pids []peer.ID) {
pids[i], pids[j] = pids[j], pids[i]
})
}
func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return ini.NewBlobVerifier(b, reqs)
}
}
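The adapter above exists so an initialized verification.Initializer can be handed to the batch verifier; its consumption is the same wiring already visible in fetchOriginBlobs, sketched in isolation here (parameter types are assumptions inferred from this diff, with the blob store as in the service config):
// wireVerifiedStore sketches how newBlobVerifierFromInitializer is consumed:
// the closure feeds NewBlobBatchVerifier, which in turn backs the lazily
// persistent availability store.
func wireVerifiedStore(ini *verification.Initializer, store *filesystem.BlobStorage) *das.LazilyPersistentStore {
	newVerifier := newBlobVerifierFromInitializer(ini)
	bv := verification.NewBlobBatchVerifier(newVerifier, verification.InitsyncSidecarRequirements)
	return das.NewLazilyPersistentStore(store, bv)
}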

View File

@@ -151,14 +151,14 @@ func (s *Service) sendAndSaveBlobSidecars(ctx context.Context, request types.Blo
if len(sidecars) != len(request) {
return fmt.Errorf("received %d blob sidecars, expected %d for RPC", len(sidecars), len(request))
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.PendingQueueSidecarRequirements)
for _, sidecar := range sidecars {
if err := verify.BlobAlignsWithBlock(sidecar, RoBlock); err != nil {
return err
}
log.WithFields(blobFields(sidecar)).Debug("Received blob sidecar RPC")
}
vscs, err := verification.BlobSidecarSliceNoop(sidecars)
vscs, err := bv.VerifiedROBlobs(ctx, RoBlock, sidecars)
if err != nil {
return err
}

View File

@@ -123,10 +123,10 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
return nil
}
// BlobsByRangeMinStartSlot returns the lowest slot that we should expect peers to respect as the
// BlobRPCMinValidSlot returns the lowest slot that we should expect peers to respect as the
// start slot in a BlobSidecarsByRange request. This can be used to validate incoming requests and
// to avoid pestering peers with requests for blobs that are outside the retention window.
func BlobsByRangeMinStartSlot(current primitives.Slot) (primitives.Slot, error) {
func BlobRPCMinValidSlot(current primitives.Slot) (primitives.Slot, error) {
// Avoid overflow if we're running on a config where deneb is set to far future epoch.
if params.BeaconConfig().DenebForkEpoch == math.MaxUint64 {
return primitives.Slot(math.MaxUint64), nil
@@ -176,9 +176,9 @@ func validateBlobsByRange(r *pb.BlobSidecarsByRangeRequest, current primitives.S
// [max(current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH), current_epoch]
// where current_epoch is defined by the current wall-clock time,
// and clients MUST support serving requests of blobs on this range.
minStartSlot, err := BlobsByRangeMinStartSlot(current)
minStartSlot, err := BlobRPCMinValidSlot(current)
if err != nil {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "BlobsByRangeMinStartSlot error")
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "BlobRPCMinValidSlot error")
}
if rp.start > maxStart {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "start > maxStart")
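A plain-integer sketch of the retention-window arithmetic behind BlobRPCMinValidSlot (the spec's max(current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH) lower bound, converted to a slot); the real function works on primitives.Slot/Epoch and additionally guards the far-future Deneb case, and the constants below are mainnet-style assumptions:
package main
import "fmt"
// blobMinValidSlot mirrors the spec lower bound with plain uint64 math.
func blobMinValidSlot(currentSlot, slotsPerEpoch, denebEpoch, minEpochsForBlobs uint64) uint64 {
	currentEpoch := currentSlot / slotsPerEpoch
	windowStart := uint64(0)
	if currentEpoch > minEpochsForBlobs {
		windowStart = currentEpoch - minEpochsForBlobs
	}
	if windowStart < denebEpoch {
		windowStart = denebEpoch
	}
	return windowStart * slotsPerEpoch
}
func main() {
	const (
		slotsPerEpoch = 32
		denebEpoch    = 269568 // mainnet Deneb fork epoch
		minEpochs     = 4096   // MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
	)
	// One epoch past deneb+minEpochs: the window starts sliding, matching
	// the expiry case in TestBlobRPCMinValidSlot later in this diff.
	current := uint64(denebEpoch+minEpochs+1) * slotsPerEpoch
	fmt.Println(blobMinValidSlot(current, slotsPerEpoch, denebEpoch, minEpochs)) // (denebEpoch+1)*slotsPerEpoch
}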

View File

@@ -178,7 +178,7 @@ func TestBlobsByRangeValidation(t *testing.T) {
and clients MUST support serving requests of blobs on this range.
*/
defaultCurrent := denebSlot + 100 + minReqSlots
defaultMinStart, err := BlobsByRangeMinStartSlot(defaultCurrent)
defaultMinStart, err := BlobRPCMinValidSlot(defaultCurrent)
require.NoError(t, err)
cases := []struct {
name string
@@ -285,3 +285,67 @@ func TestBlobsByRangeValidation(t *testing.T) {
})
}
}
func TestBlobRPCMinValidSlot(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
name string
current func(t *testing.T) types.Slot
expected types.Slot
err error
}{
{
name: "before deneb",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch - 1)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot,
},
{
name: "equal to deneb",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot,
},
{
name: "after deneb, before expiry starts",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch + params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot,
},
{
name: "expiry starts one epoch after deneb + MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch + params.BeaconConfig().MinEpochsForBlobsSidecarsRequest + 1)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot + params.BeaconConfig().SlotsPerEpoch,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
current := c.current(t)
got, err := BlobRPCMinValidSlot(current)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.expected, got)
})
}
}

View File

@@ -13,30 +13,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
func blobMinReqEpoch(finalized, current primitives.Epoch) primitives.Epoch {
// max(finalized_epoch, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH)
denebFork := params.BeaconConfig().DenebForkEpoch
var reqWindow primitives.Epoch
if current > params.BeaconConfig().MinEpochsForBlobsSidecarsRequest {
reqWindow = current - params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
}
if finalized >= reqWindow && finalized > denebFork {
return finalized
}
if reqWindow >= finalized && reqWindow > denebFork {
return reqWindow
}
return denebFork
}
// blobSidecarByRootRPCHandler handles the /eth2/beacon_chain/req/blob_sidecars_by_root/1/ RPC request.
// spec: https://github.com/ethereum/consensus-specs/blob/a7e45db9ac2b60a33e144444969ad3ac0aae3d4c/specs/deneb/p2p-interface.md#blobsidecarsbyroot-v1
func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
@@ -65,7 +47,13 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
if len(blobIdents) > batchSize {
ticker = time.NewTicker(time.Second)
}
minReqEpoch := blobMinReqEpoch(s.cfg.chain.FinalizedCheckpt().Epoch, slots.ToEpoch(s.cfg.clock.CurrentSlot()))
// Compute the oldest slot we'll allow a peer to request, based on the current slot.
cs := s.cfg.clock.CurrentSlot()
minReqSlot, err := BlobRPCMinValidSlot(cs)
if err != nil {
return errors.Wrapf(err, "unexpected error computing min valid blob request slot, current_slot=%d", cs)
}
for i := range blobIdents {
if err := ctx.Err(); err != nil {
@@ -95,12 +83,15 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
// If any root in the request content references a block earlier than minimum_request_epoch,
// peers MAY respond with error code 3: ResourceUnavailable or not include the blob in the response.
if slots.ToEpoch(sc.Slot()) < minReqEpoch {
// note: we are deviating from the spec to allow requests for blobs that are before minimum_request_epoch,
// up to the beginning of the retention period.
if sc.Slot() < minReqSlot {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrBlobLTMinRequest.Error(), stream)
log.WithError(types.ErrBlobLTMinRequest).
Debugf("requested blob for block %#x before minimum_request_epoch", blobIdents[i].BlockRoot)
return types.ErrBlobLTMinRequest
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if chunkErr := WriteBlobSidecarChunk(stream, s.cfg.chain, s.cfg.p2p.Encoding(), sc); chunkErr != nil {
log.WithError(chunkErr).Debug("Could not send a chunked response")

View File

@@ -19,7 +19,7 @@ import (
)
func (c *blobsTestCase) defaultOldestSlotByRoot(t *testing.T) types.Slot {
oldest, err := slots.EpochStart(blobMinReqEpoch(c.chain.FinalizedCheckPoint.Epoch, slots.ToEpoch(c.clock.CurrentSlot())))
oldest, err := BlobRPCMinValidSlot(c.clock.CurrentSlot())
require.NoError(t, err)
return oldest
}
@@ -259,71 +259,3 @@ func TestBlobsByRootOK(t *testing.T) {
})
}
}
func TestBlobsByRootMinReqEpoch(t *testing.T) {
winMin := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
cases := []struct {
name string
finalized types.Epoch
current types.Epoch
deneb types.Epoch
expected types.Epoch
}{
{
name: "testnet genesis",
deneb: 100,
current: 0,
finalized: 0,
expected: 100,
},
{
name: "underflow averted",
deneb: 100,
current: winMin - 1,
finalized: 0,
expected: 100,
},
{
name: "underflow averted - finalized is higher",
deneb: 100,
current: winMin - 1,
finalized: winMin - 2,
expected: winMin - 2,
},
{
name: "underflow averted - genesis at deneb",
deneb: 0,
current: winMin - 1,
finalized: 0,
expected: 0,
},
{
name: "max is finalized",
deneb: 100,
current: 99 + winMin,
finalized: 101,
expected: 101,
},
{
name: "reqWindow > finalized, reqWindow < deneb",
deneb: 100,
current: 99 + winMin,
finalized: 98,
expected: 100,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cfg := params.BeaconConfig()
repositionFutureEpochs(cfg)
cfg.DenebForkEpoch = c.deneb
undo, err := params.SetActiveWithUndo(cfg)
require.NoError(t, err)
defer func() {
require.NoError(t, undo())
}()
ep := blobMinReqEpoch(c.finalized, c.current)
require.Equal(t, c.expected, ep)
})
}
}

View File
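The deleted TestBlobsByRootMinReqEpoch cases above pin down how the policies differ: the removed blobMinReqEpoch clamped to max(finalized, current - window, deneb), while BlobRPCMinValidSlot keeps only max(deneb, current - window). A hedged side-by-side in plain uint64 epochs (constants are placeholders chosen to match the test values):

package main

import "fmt"

const (
	deneb  uint64 = 100  // placeholder fork epoch
	window uint64 = 4096 // placeholder retention window in epochs
)

// oldMinEpoch mirrors the removed blobMinReqEpoch: max(finalized, current-window, deneb).
func oldMinEpoch(finalized, current uint64) uint64 {
	var req uint64
	if current > window {
		req = current - window
	}
	m := deneb
	if finalized > m {
		m = finalized
	}
	if req > m {
		m = req
	}
	return m
}

// newMinEpoch mirrors BlobRPCMinValidSlot's policy: max(deneb, current-window).
func newMinEpoch(current uint64) uint64 {
	if current > window && current-window > deneb {
		return current - window
	}
	return deneb
}

func main() {
	// The "max is finalized" case above: the old rule returned 101, while the
	// new rule serves one epoch further back, to the retention start at 100.
	fmt.Println(oldMinEpoch(101, 99+window), newMinEpoch(99+window)) // 101 100
}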

@@ -42,9 +42,10 @@ func (s *Service) goodbyeRPCHandler(_ context.Context, msg interface{}, stream l
return fmt.Errorf("wrong message type for goodbye, got %T, wanted *uint64", msg)
}
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
return err
log.WithError(err).Warn("Goodbye message from rate-limited peer.")
} else {
s.rateLimiter.add(stream, 1)
}
s.rateLimiter.add(stream, 1)
log := log.WithField("Reason", goodbyeMessage(*m))
log.WithField("peer", stream.Conn().RemotePeer()).Trace("Peer has sent a goodbye message")
s.cfg.p2p.Peers().SetNextValidTime(stream.Conn().RemotePeer(), goodByeBackoff(*m))

View File
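A reduced sketch of the changed goodbye flow above: a rate-limit failure is now logged rather than aborting the handler, so the peer's disconnect intent and the backoff bookkeeping still run. The function parameters are hypothetical stand-ins for the rateLimiter and peer-status methods:

package main

import "log"

// handleGoodbye sketches the new flow: a validateRequest failure is logged,
// not returned, and the backoff bookkeeping runs either way.
func handleGoodbye(validateRequest func() error, addToLimiter, recordBackoff func()) {
	if err := validateRequest(); err != nil {
		log.Printf("Goodbye message from rate-limited peer: %v", err)
	} else {
		addToLimiter()
	}
	recordBackoff() // previously unreachable for rate-limited peers
}

func main() {
	handleGoodbye(func() error { return nil }, func() {}, func() {})
}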

@@ -53,7 +53,7 @@ const rangeLimit uint64 = 1024
const seenBlockSize = 1000
const seenBlobSize = seenBlockSize * 4 // Each block can have max 4 blobs. Worst case 164kB for cache.
const seenUnaggregatedAttSize = 20000
const seenAggregatedAttSize = 1024
const seenAggregatedAttSize = 16384
const seenSyncMsgSize = 1000 // Maximum of 512 sync committee members, 1000 is a safe amount.
const seenSyncContributionSize = 512 // Maximum of SYNC_COMMITTEE_SIZE as specified by the spec.
const seenExitSize = 100

View File
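A back-of-envelope check of the "164kB" estimate in the seenBlobSize comment above, assuming roughly 41 bytes per seen-blob entry (a 32-byte block root plus index and map bookkeeping); the per-entry cost is an assumption, not a measured figure:

package main

import "fmt"

func main() {
	const seenBlockSize = 1000
	const maxBlobsPerBlock = 4 // as assumed by the comment above
	const approxEntryBytes = 41
	fmt.Printf("~%d kB\n", seenBlockSize*maxBlobsPerBlock*approxEntryBytes/1000) // ~164 kB
}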

@@ -3,12 +3,14 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"batch.go",
"blob.go",
"cache.go",
"error.go",
"fake.go",
"initializer.go",
"interface.go",
"metrics.go",
"mock.go",
"result.go",
],
@@ -35,6 +37,8 @@ go_library(
"//time/slots:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,12 +1,10 @@
package initialsync
package verification
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
)
@@ -20,21 +18,17 @@ var (
ErrBatchBlockRootMismatch = errors.New("Sidecar block header root does not match signed block")
)
func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return ini.NewBlobVerifier(b, reqs)
}
}
func newBlobBatchVerifier(newVerifier verification.NewBlobVerifier) *BlobBatchVerifier {
// NewBlobBatchVerifier initializes a blob batch verifier. It requires the caller to correctly specify
// verification Requirements and to also pass in a NewBlobVerifier, which is a callback function that
// returns a new BlobVerifier for handling a single blob in the batch.
func NewBlobBatchVerifier(newVerifier NewBlobVerifier, reqs []Requirement) *BlobBatchVerifier {
return &BlobBatchVerifier{
verifyKzg: kzg.Verify,
newVerifier: newVerifier,
reqs: reqs,
}
}
type kzgVerifier func(b ...blocks.ROBlob) error
// BlobBatchVerifier solves problems that come from verifying batches of blobs from RPC.
// First: we only update forkchoice after the entire batch has completed, so the n+1 elements in the batch
// won't be in forkchoice yet.
@@ -42,18 +36,17 @@ type kzgVerifier func(b ...blocks.ROBlob) error
// method to BlobVerifier to verify the kzg commitments of all blob sidecars for a block together, then using the cached
// result of the batch verification when verifying the individual blobs.
type BlobBatchVerifier struct {
verifyKzg kzgVerifier
newVerifier verification.NewBlobVerifier
verifyKzg roblobCommitmentVerifier
newVerifier NewBlobVerifier
reqs []Requirement
}
var _ das.BlobBatchVerifier = &BlobBatchVerifier{}
// VerifiedROBlobs satisfies the das.BlobBatchVerifier interface, used by das.AvailabilityStore.
func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
if len(scs) == 0 {
return nil, nil
}
// We assume the proposer was validated wrt the block in batch block processing before performing the DA check.
// We assume the proposer is validated wrt the block in batch block processing before performing the DA check.
// So at this stage we just need to make sure the value being signed and signature bytes match the block.
for i := range scs {
if blk.Signature() != bytesutil.ToBytes96(scs[i].SignedBlockHeader.Signature) {
@@ -71,7 +64,7 @@ func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.
}
vs := make([]blocks.VerifiedROBlob, len(scs))
for i := range scs {
vb, err := batch.verifyOneBlob(ctx, scs[i])
vb, err := batch.verifyOneBlob(scs[i])
if err != nil {
return nil, err
}
@@ -80,13 +73,13 @@ func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.
return vs, nil
}
func (batch *BlobBatchVerifier) verifyOneBlob(ctx context.Context, sc blocks.ROBlob) (blocks.VerifiedROBlob, error) {
func (batch *BlobBatchVerifier) verifyOneBlob(sc blocks.ROBlob) (blocks.VerifiedROBlob, error) {
vb := blocks.VerifiedROBlob{}
bv := batch.newVerifier(sc, verification.InitsyncSidecarRequirements)
bv := batch.newVerifier(sc, batch.reqs)
// We can satisfy the following 2 requirements immediately because VerifiedROBlobs always verifies commitments
// and block signature for all blobs in the batch before calling verifyOneBlob.
bv.SatisfyRequirement(verification.RequireSidecarKzgProofVerified)
bv.SatisfyRequirement(verification.RequireValidProposerSignature)
bv.SatisfyRequirement(RequireSidecarKzgProofVerified)
bv.SatisfyRequirement(RequireValidProposerSignature)
if err := bv.BlobIndexInBounds(); err != nil {
return vb, err

View File
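The relocation above turns BlobBatchVerifier into a reusable component: callers now inject both the NewBlobVerifier factory and the Requirement list instead of the verifier hard-coding InitsyncSidecarRequirements. A stripped-down analogue of that factory-plus-requirements pattern (all types here are simplified stand-ins, not the real verification types):

package main

import "fmt"

type requirement string

type verifier struct{ reqs []requirement }

// newVerifier mirrors the NewBlobVerifier callback: it builds a single-blob
// verifier for a given requirement set.
type newVerifier func(reqs []requirement) *verifier

type batchVerifier struct {
	newV newVerifier
	reqs []requirement
}

// newBatchVerifier mirrors NewBlobBatchVerifier: the requirement set is now a
// constructor argument, so init-sync, backfill, and the pending queue can each
// pass their own list.
func newBatchVerifier(f newVerifier, reqs []requirement) *batchVerifier {
	return &batchVerifier{newV: f, reqs: reqs}
}

func (b *batchVerifier) verifyOne() *verifier { return b.newV(b.reqs) }

func main() {
	bv := newBatchVerifier(func(r []requirement) *verifier { return &verifier{reqs: r} }, []requirement{"kzg", "sig"})
	fmt.Println(bv.verifyOne().reqs) // [kzg sig]
}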

@@ -70,6 +70,9 @@ var InitsyncSidecarRequirements = requirementList(GossipSidecarRequirements).exc
// BackfillSidecarRequirements is the same as InitsyncSidecarRequirements.
var BackfillSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
// PendingQueueSidecarRequirements is the same as InitsyncSidecarRequirements, used by the pending blocks queue.
var PendingQueueSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
var (
ErrBlobInvalid = errors.New("blob failed verification")
// ErrBlobIndexInvalid means RequireBlobIndexInBounds failed.
@@ -190,12 +193,15 @@ func (bv *ROBlobVerifier) ValidProposerSignature(ctx context.Context) (err error
// First check if there is a cached verification that can be reused.
seen, err := bv.sc.SignatureVerified(sd)
if seen {
blobVerificationProposerSignatureCache.WithLabelValues("hit-valid").Inc()
if err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("reusing failed proposer signature validation from cache")
blobVerificationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
return ErrInvalidProposerSignature
}
return nil
}
blobVerificationProposerSignatureCache.WithLabelValues("miss").Inc()
// Retrieve the parent state to fallback to full verification.
parent, err := bv.parentState(ctx)

View File

@@ -0,0 +1,16 @@
package verification
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
blobVerificationProposerSignatureCache = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "blob_verification_proposer_signature_cache",
Help: "BlobSidecar proposer signature cache result.",
},
[]string{"result"},
)
)

View File
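The new metrics file above registers a CounterVec keyed by a single "result" label. A small self-contained sketch of how such a counter behaves, using a private registry and the documented client_golang API (the hit-valid/miss label values mirror the ones used in ValidProposerSignature):

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

func main() {
	reg := prometheus.NewRegistry()
	c := promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
		Name: "blob_verification_proposer_signature_cache",
		Help: "BlobSidecar proposer signature cache result.",
	}, []string{"result"})

	// Each signature check bumps exactly one result bucket.
	c.WithLabelValues("hit-valid").Inc()
	c.WithLabelValues("miss").Inc()
	c.WithLabelValues("miss").Inc()

	fmt.Println(testutil.ToFloat64(c.WithLabelValues("miss"))) // 2
}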

@@ -29,6 +29,7 @@ var (
Name: "local-block-value-boost",
Usage: "A percentage boost for local block construction as a Uint64. This is used to prioritize local block construction over relay/builder block construction" +
"Boost is an additional percentage to multiple local block value. Use builder block if: builder_bid_value * 100 > local_block_value * (local-block-value-boost + 100)",
Value: 10,
}
// ExecutionEngineEndpoint provides an HTTP access endpoint to connect to an execution client on the execution layer
ExecutionEngineEndpoint = &cli.StringFlag{

View File

@@ -49,6 +49,82 @@ func TestProposerSettingsLoader(t *testing.T) {
validatorRegistrationEnabled bool
skipDBSavedCheck bool
}{
{
name: "graffiti in db without fee recipient",
args: args{
proposerSettingsFlagValues: &proposerSettingsFlag{
dir: "",
url: "",
defaultfee: "",
},
},
want: func() *proposer.Settings {
key1, err := hexutil.Decode("0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a")
require.NoError(t, err)
return &proposer.Settings{
ProposeConfig: map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option{
bytesutil.ToBytes48(key1): {
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
},
},
}
},
withdb: func(db iface.ValidatorDB) error {
key1, err := hexutil.Decode("0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a")
require.NoError(t, err)
settings := &proposer.Settings{
ProposeConfig: map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option{
bytesutil.ToBytes48(key1): {
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
},
},
}
return db.SaveProposerSettings(context.Background(), settings)
},
},
{
name: "graffiti from file",
args: args{
proposerSettingsFlagValues: &proposerSettingsFlag{
dir: "./testdata/good-graffiti-settings.json",
url: "",
defaultfee: "",
},
},
want: func() *proposer.Settings {
key1, err := hexutil.Decode("0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a")
require.NoError(t, err)
return &proposer.Settings{
ProposeConfig: map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option{
bytesutil.ToBytes48(key1): {
FeeRecipientConfig: &proposer.FeeRecipientConfig{
FeeRecipient: common.HexToAddress("0x50155530FCE8a85ec7055A5F8b2bE214B3DaeFd3"),
},
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "some graffiti",
},
BuilderConfig: &proposer.BuilderConfig{
Enabled: true,
GasLimit: validator.Uint64(30000000),
},
},
},
DefaultConfig: &proposer.Option{
FeeRecipientConfig: &proposer.FeeRecipientConfig{
FeeRecipient: common.HexToAddress("0x6e35733c5af9B61374A128e6F85f553aF09ff89A"),
},
BuilderConfig: &proposer.BuilderConfig{
Enabled: true,
GasLimit: validator.Uint64(40000000),
},
},
}
},
},
{
name: "db settings override file settings if file default config is missing",
args: args{
@@ -875,6 +951,8 @@ func TestProposerSettingsLoader(t *testing.T) {
if tt.wantErr != "" {
require.ErrorContains(t, tt.wantErr, err)
return
} else {
require.NoError(t, err)
}
if tt.wantLog != "" {
assert.LogsContain(t, hook,

View File

@@ -0,0 +1,19 @@
{
"proposer_config": {
"0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a": {
"fee_recipient": "0x50155530FCE8a85ec7055A5F8b2bE214B3DaeFd3",
"graffiti": "some graffiti",
"builder": {
"enabled": true,
"gas_limit": "30000000"
}
}
},
"default_config": {
"fee_recipient": "0x6e35733c5af9B61374A128e6F85f553aF09ff89A",
"builder": {
"enabled": true,
"gas_limit": 40000000
}
}
}

View File
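Note that the testdata above encodes gas_limit once as a quoted string ("30000000") and once as a bare number (40000000); this appears to rely on the validator.Uint64 type accepting both encodings (an assumption about that type's behavior). A sketch of a tolerant Uint64 in that spirit:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

type flexUint64 uint64

// UnmarshalJSON accepts both "30000000" and 30000000.
func (u *flexUint64) UnmarshalJSON(b []byte) error {
	v, err := strconv.ParseUint(string(bytes.Trim(b, `"`)), 10, 64)
	if err != nil {
		return err
	}
	*u = flexUint64(v)
	return nil
}

func main() {
	var quoted, bare flexUint64
	_ = json.Unmarshal([]byte(`"30000000"`), &quoted)
	_ = json.Unmarshal([]byte(`40000000`), &bare)
	fmt.Println(quoted, bare) // 30000000 40000000
}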

@@ -19,9 +19,6 @@ func SettingFromConsensus(ps *validatorpb.ProposerSettingsPayload) (*Settings, e
if ps.ProposerConfig != nil && len(ps.ProposerConfig) != 0 {
settings.ProposeConfig = make(map[[fieldparams.BLSPubkeyLength]byte]*Option)
for key, optionPayload := range ps.ProposerConfig {
if optionPayload.FeeRecipient == "" {
continue
}
decodedKey, err := hexutil.Decode(key)
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("cannot decode public key %s", key))
@@ -29,13 +26,15 @@ func SettingFromConsensus(ps *validatorpb.ProposerSettingsPayload) (*Settings, e
if len(decodedKey) != fieldparams.BLSPubkeyLength {
return nil, fmt.Errorf("%v is not a bls public key", key)
}
if err := verifyOption(key, optionPayload); err != nil {
return nil, err
p := &Option{}
if optionPayload.Graffiti != nil {
p.GraffitiConfig = &GraffitiConfig{*optionPayload.Graffiti}
}
p := &Option{
FeeRecipientConfig: &FeeRecipientConfig{
FeeRecipient: common.HexToAddress(optionPayload.FeeRecipient),
},
if optionPayload.FeeRecipient != "" {
if err := verifyOption(key, optionPayload); err != nil {
return nil, err
}
p.FeeRecipientConfig = &FeeRecipientConfig{FeeRecipient: common.HexToAddress(optionPayload.FeeRecipient)}
}
if optionPayload.Builder != nil {
p.BuilderConfig = BuilderConfigFromConsensus(optionPayload.Builder)
@@ -141,10 +140,16 @@ type FeeRecipientConfig struct {
FeeRecipient common.Address
}
// GraffitiConfig is a prysm internal representation to see if the graffiti was set.
type GraffitiConfig struct {
Graffiti string
}
// Option is a Prysm internal representation of the ProposerOptionPayload on the validator client in bytes format instead of hex.
type Option struct {
FeeRecipientConfig *FeeRecipientConfig
BuilderConfig *BuilderConfig
GraffitiConfig *GraffitiConfig
}
// Clone creates a deep copy of proposer option
@@ -159,6 +164,9 @@ func (po *Option) Clone() *Option {
if po.BuilderConfig != nil {
p.BuilderConfig = po.BuilderConfig.Clone()
}
if po.GraffitiConfig != nil {
p.GraffitiConfig = po.GraffitiConfig.Clone()
}
return p
}
@@ -173,6 +181,9 @@ func (po *Option) ToConsensus() *validatorpb.ProposerOptionPayload {
if po.BuilderConfig != nil {
p.Builder = po.BuilderConfig.ToConsensus()
}
if po.GraffitiConfig != nil {
p.Graffiti = &po.GraffitiConfig.Graffiti
}
return p
}
@@ -222,6 +233,14 @@ func (bc *BuilderConfig) Clone() *BuilderConfig {
return c
}
// Clone creates a deep copy of graffiti config
func (gc *GraffitiConfig) Clone() *GraffitiConfig {
if gc == nil {
return nil
}
return &GraffitiConfig{gc.Graffiti}
}
// ToConsensus converts Builder Config to the protobuf object
func (bc *BuilderConfig) ToConsensus() *validatorpb.BuilderConfig {
if bc == nil {

View File
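The SettingFromConsensus change above stops skipping entries with an empty fee_recipient, so a graffiti-only option now survives conversion, and address validation runs only when the field is present. A reduced, self-contained analogue of that control flow (types simplified, no hex decoding):

package main

import "fmt"

type optionPayload struct {
	FeeRecipient string
	Graffiti     *string
}

type option struct {
	FeeRecipient string // empty means no fee recipient configured
	Graffiti     string
}

// fromPayload no longer drops the whole entry when FeeRecipient is empty;
// validation of the address would run only inside the non-empty branch.
func fromPayload(p optionPayload) option {
	var o option
	if p.Graffiti != nil {
		o.Graffiti = *p.Graffiti
	}
	if p.FeeRecipient != "" {
		o.FeeRecipient = p.FeeRecipient
	}
	return o
}

func main() {
	g := "specific graffiti"
	fmt.Printf("%+v\n", fromPayload(optionPayload{Graffiti: &g})) // {FeeRecipient: Graffiti:specific graffiti}
}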

@@ -76,26 +76,14 @@ func Test_Proposer_Setting_Cloning(t *testing.T) {
require.Equal(t, option.FeeRecipientConfig.FeeRecipient.Hex(), potion.FeeRecipient)
require.Equal(t, settings.DefaultConfig.FeeRecipientConfig.FeeRecipient.Hex(), payload.DefaultConfig.FeeRecipient)
require.Equal(t, settings.DefaultConfig.BuilderConfig.Enabled, payload.DefaultConfig.Builder.Enabled)
potion.FeeRecipient = ""
potion.FeeRecipient = fee
newSettings, err := SettingFromConsensus(payload)
require.NoError(t, err)
// when converting to settings if a fee recipient is empty string then it will be skipped
noption, ok := newSettings.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, false, ok)
require.Equal(t, true, noption == nil)
require.DeepEqual(t, newSettings.DefaultConfig, settings.DefaultConfig)
// if fee recipient is set it will not skip
potion.FeeRecipient = fee
newSettings, err = SettingFromConsensus(payload)
require.NoError(t, err)
noption, ok = newSettings.ProposeConfig[bytesutil.ToBytes48(key1)]
require.Equal(t, true, ok)
require.Equal(t, option.FeeRecipientConfig.FeeRecipient.Hex(), noption.FeeRecipientConfig.FeeRecipient.Hex())
require.Equal(t, option.BuilderConfig.GasLimit, noption.BuilderConfig.GasLimit)
require.Equal(t, option.BuilderConfig.Enabled, noption.BuilderConfig.Enabled)
})
}

View File

@@ -15,7 +15,8 @@ const (
bodyLength = 12 // The number of elements in the BeaconBlockBody Container
logBodyLength = 4 // The log 2 of bodyLength
kzgPosition = 11 // The index of the KZG commitment list in the Body
KZGOffset = 54 * field_params.MaxBlobCommitmentsPerBlock
kzgRootIndex = 54 // The Merkle index of the KZG commitment list's root in the Body's Merkle tree
KZGOffset = kzgRootIndex * field_params.MaxBlobCommitmentsPerBlock
)
var (
@@ -37,9 +38,7 @@ func VerifyKZGInclusionProof(blob ROBlob) error {
if len(root) != field_params.RootLength {
return errInvalidBodyRoot
}
chunks := make([][32]byte, 2)
copy(chunks[0][:], blob.KzgCommitment)
copy(chunks[1][:], blob.KzgCommitment[field_params.RootLength:])
chunks := makeChunk(blob.KzgCommitment)
gohashtree.HashChunks(chunks, chunks)
verified := trie.VerifyMerkleProof(root, chunks[0][:], blob.Index+KZGOffset, blob.CommitmentInclusionProof)
if !verified {
@@ -85,15 +84,21 @@ func MerkleProofKZGCommitment(body interfaces.ReadOnlyBeaconBlockBody, index int
func leavesFromCommitments(commitments [][]byte) [][]byte {
leaves := make([][]byte, len(commitments))
for i, kzg := range commitments {
chunk := make([][32]byte, 2)
copy(chunk[0][:], kzg)
copy(chunk[1][:], kzg[field_params.RootLength:])
chunk := makeChunk(kzg)
gohashtree.HashChunks(chunk, chunk)
leaves[i] = chunk[0][:]
}
return leaves
}
// makeChunk constructs a chunk from a KZG commitment.
func makeChunk(commitment []byte) [][32]byte {
chunk := make([][32]byte, 2)
copy(chunk[0][:], commitment)
copy(chunk[1][:], commitment[field_params.RootLength:])
return chunk
}
// bodyProof returns the Merkle proof of the subtree up to the root of the KZG
// commitment list.
func bodyProof(commitments [][]byte, index int) ([][]byte, error) {

View File

@@ -1,7 +1,8 @@
package blocks
import (
"math/rand"
"crypto/rand"
"errors"
"testing"
"github.com/prysmaticlabs/gohashtree"
@@ -74,14 +75,79 @@ func Test_MerkleProofKZGCommitment(t *testing.T) {
proof, err := MerkleProofKZGCommitment(body, index)
require.NoError(t, err)
chunk := make([][32]byte, 2)
copy(chunk[0][:], kzgs[index])
copy(chunk[1][:], kzgs[index][32:])
gohashtree.HashChunks(chunk, chunk)
// Test the logic of topProof in MerkleProofKZGCommitment.
commitmentsRoot, err := getBlobKzgCommitmentsRoot(kzgs)
require.NoError(t, err)
bodyMembersRoots, err := topLevelRoots(body)
require.NoError(t, err, "Failed to get top level roots")
bodySparse, err := trie.GenerateTrieFromItems(
bodyMembersRoots,
logBodyLength,
)
require.NoError(t, err, "Failed to generate trie from member roots")
require.Equal(t, bodyLength, bodySparse.NumOfItems())
topProof, err := bodySparse.MerkleProof(kzgPosition)
require.NoError(t, err, "Failed to generate Merkle proof")
require.DeepEqual(t,
topProof[:len(topProof)-1],
proof[fieldparams.LogMaxBlobCommitments+1:],
)
root, err := body.HashTreeRoot()
require.NoError(t, err)
kzgOffset := 54 * fieldparams.MaxBlobCommitmentsPerBlock
require.Equal(t, true, trie.VerifyMerkleProof(root[:], chunk[0][:], uint64(index+kzgOffset), proof))
// Partially verify if the commitments root is in the body root.
// Proof of the commitment length is not needed.
require.Equal(t, true, trie.VerifyMerkleProof(root[:], commitmentsRoot[:], kzgPosition, topProof[:len(topProof)-1]))
chunk := makeChunk(kzgs[index])
gohashtree.HashChunks(chunk, chunk)
require.Equal(t, true, trie.VerifyMerkleProof(root[:], chunk[0][:], uint64(index+KZGOffset), proof))
}
// This test explains the calculation of the KZG commitment root's Merkle index
// in the Body's Merkle tree based on the index of the KZG commitment list in the Body.
func Test_KZGRootIndex(t *testing.T) {
// Level of the KZG commitment root's parent.
kzgParentRootLevel, err := ceilLog2(kzgPosition)
require.NoError(t, err)
// Merkle index of the KZG commitment root's parent.
// The parent's left child is the KZG commitment root,
// and its right child is the KZG commitment size.
kzgParentRootIndex := kzgPosition + (1 << kzgParentRootLevel)
// The KZG commitment root is the left child of its parent.
// Its Merkle index is the double of its parent's Merkle index.
require.Equal(t, 2*kzgParentRootIndex, kzgRootIndex)
}
// ceilLog2 returns the smallest integer greater than or equal to
// the base-2 logarithm of x.
func ceilLog2(x uint32) (uint32, error) {
if x == 0 {
return 0, errors.New("log2(0) is undefined")
}
var y uint32
if (x & (x - 1)) == 0 {
y = 0
} else {
y = 1
}
for x > 1 {
x >>= 1
y += 1
}
return y, nil
}
func getBlobKzgCommitmentsRoot(commitments [][]byte) ([32]byte, error) {
commitmentsLeaves := leavesFromCommitments(commitments)
commitmentsSparse, err := trie.GenerateTrieFromItems(
commitmentsLeaves,
fieldparams.LogMaxBlobCommitments,
)
if err != nil {
return [32]byte{}, err
}
return commitmentsSparse.HashTreeRoot()
}
func Benchmark_MerkleProofKZGCommitment(b *testing.B) {

View File
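The arithmetic pinned down by Test_KZGRootIndex above can be worked by hand: ceil(log2(11)) = 4, so the commitment list's parent sits at generalized index 11 + 2^4 = 27, and its left child (the list root) lands at 2 * 27 = 54. A tiny sketch of the same computation:

package main

import "fmt"

func main() {
	kzgPosition := uint32(11) // index of the commitment list in the Body
	level := uint32(0)
	for (uint32(1) << level) < kzgPosition { // ceil(log2(kzgPosition))
		level++
	}
	parent := kzgPosition + (uint32(1) << level) // 11 + 16 = 27
	fmt.Println(2 * parent)                      // 54, the kzgRootIndex constant
}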

@@ -16,8 +16,8 @@ func (a *data) PrevRandao() []byte {
return a.prevRandao
}
// Timestamps returns the timestamp of the payload attribute.
func (a *data) Timestamps() uint64 {
// Timestamp returns the timestamp of the payload attribute.
func (a *data) Timestamp() uint64 {
return a.timeStamp
}
@@ -100,7 +100,7 @@ func (a *data) IsEmpty() bool {
if len(a.PrevRandao()) != 0 {
return false
}
if a.Timestamps() != 0 {
if a.Timestamp() != 0 {
return false
}
if len(a.SuggestedFeeRecipient()) != 0 {

View File

@@ -44,7 +44,7 @@ func TestPayloadAttributeGetters(t *testing.T) {
r := uint64(123)
a, err := New(&enginev1.PayloadAttributes{Timestamp: r})
require.NoError(t, err)
require.Equal(t, r, a.Timestamps())
require.Equal(t, r, a.Timestamp())
},
},
{

View File

@@ -7,7 +7,7 @@ import (
type Attributer interface {
Version() int
PrevRandao() []byte
Timestamps() uint64
Timestamp() uint64
SuggestedFeeRecipient() []byte
Withdrawals() ([]*enginev1.Withdrawal, error)
PbV1() (*enginev1.PayloadAttributes, error)

View File

@@ -13,6 +13,7 @@ go_library(
"//cache/lru:go_default_library",
"//config/params:go_default_library",
"//crypto/rand:go_default_library",
"//io/file:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -6,9 +6,11 @@ import (
"io"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/io/file"
"github.com/sirupsen/logrus"
)
@@ -20,6 +22,9 @@ func addLogWriter(w io.Writer) {
// ConfigurePersistentLogging adds a log-to-file writer. File content is identical to stdout.
func ConfigurePersistentLogging(logFileName string) error {
logrus.WithField("logFileName", logFileName).Info("Logs will be made persistent")
if err := file.MkdirAll(filepath.Dir(logFileName)); err != nil {
return err
}
f, err := os.OpenFile(logFileName, os.O_CREATE|os.O_WRONLY|os.O_APPEND, params.BeaconIoConfig().ReadWritePermissions) // #nosec G304
if err != nil {
return err

View File
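A minimal standalone version of the fix above: create the log file's parent directory before opening the file for append. The 0o700/0o600 modes stand in for Prysm's configured directory permissions and ReadWritePermissions values:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// openLogFile creates missing parent directories, then opens the file for
// append, mirroring the ConfigurePersistentLogging fix above.
func openLogFile(name string) (*os.File, error) {
	if err := os.MkdirAll(filepath.Dir(name), 0o700); err != nil {
		return nil, err
	}
	return os.OpenFile(name, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o600)
}

func main() {
	f, err := openLogFile(filepath.Join(os.TempDir(), "prysm", "logs", "beacon.log"))
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	log.Println("log file ready:", f.Name())
}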

@@ -1,6 +1,8 @@
package logs
import (
"fmt"
"os"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -24,3 +26,38 @@ func TestMaskCredentialsLogging(t *testing.T) {
require.Equal(t, MaskCredentialsLogging(test.url), test.maskedUrl)
}
}
func TestConfigurePersistantLogging(t *testing.T) {
testParentDir := t.TempDir()
// 1. Test creation of file in an existing parent directory
logFileName := "test.log"
existingDirectory := "test-1-existing-testing-dir"
err := ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, existingDirectory, logFileName))
require.NoError(t, err)
// 2. Test creation of file along with parent directory
nonExistingDirectory := "test-2-non-existing-testing-dir"
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s", testParentDir, nonExistingDirectory, logFileName))
require.NoError(t, err)
// 3. Test creation of file in an existing parent directory with a non-existing sub-directory
existingDirectory = "test-3-existing-testing-dir"
nonExistingSubDirectory := "test-3-non-existing-sub-dir"
err = os.Mkdir(fmt.Sprintf("%s/%s", testParentDir, existingDirectory), 0700)
require.NoError(t, err)
err = ConfigurePersistentLogging(fmt.Sprintf("%s/%s/%s/%s", testParentDir, existingDirectory, nonExistingSubDirectory, logFileName))
require.NoError(t, err)
// 4. Create log file in a directory without 700 permissions
existingDirectory = "test-4-existing-testing-dir"
err = os.Mkdir(fmt.Sprintf("%s/%s", testParentDir, existingDirectory), 0750)
require.NoError(t, err)
}

View File

@@ -29,6 +29,7 @@ proto_library(
"@com_google_protobuf//:any_proto",
"@com_google_protobuf//:descriptor_proto",
"@com_google_protobuf//:empty_proto",
"@com_google_protobuf//:wrappers_proto",
"@com_google_protobuf//:timestamp_proto",
"@googleapis//google/api:annotations_proto",
],
@@ -53,6 +54,7 @@ go_proto_library(
"@googleapis//google/api:annotations_go_proto",
"@io_bazel_rules_go//proto/wkt:descriptor_go_proto",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@org_golang_google_protobuf//types/known/wrapperspb:go_default_library",
"@io_bazel_rules_go//proto/wkt:timestamp_go_proto",
"@org_golang_google_protobuf//reflect/protoreflect:go_default_library",
"@org_golang_google_protobuf//runtime/protoimpl:go_default_library",
@@ -78,6 +80,7 @@ go_proto_library(
"@googleapis//google/api:annotations_go_proto",
"@io_bazel_rules_go//proto/wkt:descriptor_go_proto",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@io_bazel_rules_go//proto/wkt:wrappers_go_proto",
"@io_bazel_rules_go//proto/wkt:timestamp_go_proto",
],
)

View File

@@ -16,6 +16,7 @@ import (
v1alpha1 "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
_ "google.golang.org/protobuf/types/known/wrapperspb"
)
const (
@@ -462,6 +463,7 @@ type ProposerOptionPayload struct {
FeeRecipient string `protobuf:"bytes,1,opt,name=fee_recipient,json=feeRecipient,proto3" json:"fee_recipient,omitempty"`
Builder *BuilderConfig `protobuf:"bytes,2,opt,name=builder,proto3" json:"builder,omitempty"`
Graffiti *string `protobuf:"bytes,3,opt,name=graffiti,proto3,oneof" json:"graffiti,omitempty"`
}
func (x *ProposerOptionPayload) Reset() {
@@ -510,6 +512,13 @@ func (x *ProposerOptionPayload) GetBuilder() *BuilderConfig {
return nil
}
func (x *ProposerOptionPayload) GetGraffiti() string {
if x != nil && x.Graffiti != nil {
return *x.Graffiti
}
return ""
}
type BuilderConfig struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -636,7 +645,9 @@ var file_proto_prysm_v1alpha1_validator_client_keymanager_proto_rawDesc = []byte
0x2d, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2f, 0x6b, 0x65, 0x79, 0x6d, 0x61, 0x6e, 0x61, 0x67,
0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x1e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65,
0x75, 0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x61, 0x63, 0x63,
0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x1a, 0x1b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f,
0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x77, 0x72, 0x61, 0x70, 0x70, 0x65,
0x72, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1b, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f,
0x65, 0x74, 0x68, 0x2f, 0x65, 0x78, 0x74, 0x2f, 0x6f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x26, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79,
0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2f, 0x61, 0x74, 0x74, 0x65,
@@ -771,7 +782,7 @@ var file_proto_prysm_v1alpha1_validator_client_keymanager_proto_rawDesc = []byte
0x73, 0x12, 0x0b, 0x0a, 0x07, 0x55, 0x4e, 0x4b, 0x4e, 0x4f, 0x57, 0x4e, 0x10, 0x00, 0x12, 0x0d,
0x0a, 0x09, 0x53, 0x55, 0x43, 0x43, 0x45, 0x45, 0x44, 0x45, 0x44, 0x10, 0x01, 0x12, 0x0a, 0x0a,
0x06, 0x44, 0x45, 0x4e, 0x49, 0x45, 0x44, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x46, 0x41, 0x49,
0x4c, 0x45, 0x44, 0x10, 0x03, 0x22, 0x85, 0x01, 0x0a, 0x15, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73,
0x4c, 0x45, 0x44, 0x10, 0x03, 0x22, 0xb3, 0x01, 0x0a, 0x15, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73,
0x65, 0x72, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x12,
0x23, 0x0a, 0x0d, 0x66, 0x65, 0x65, 0x5f, 0x72, 0x65, 0x63, 0x69, 0x70, 0x69, 0x65, 0x6e, 0x74,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x66, 0x65, 0x65, 0x52, 0x65, 0x63, 0x69, 0x70,
@@ -779,54 +790,57 @@ var file_proto_prysm_v1alpha1_validator_client_keymanager_proto_rawDesc = []byte
0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2d, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d,
0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x61, 0x63, 0x63, 0x6f, 0x75,
0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x2e, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x43, 0x6f,
0x6e, 0x66, 0x69, 0x67, 0x52, 0x07, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x22, 0xa6, 0x01,
0x0a, 0x0d, 0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12,
0x18, 0x0a, 0x07, 0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08,
0x52, 0x07, 0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x12, 0x63, 0x0a, 0x09, 0x67, 0x61, 0x73,
0x5f, 0x6c, 0x69, 0x6d, 0x69, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x42, 0x46, 0x82, 0xb5,
0x18, 0x42, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79,
0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d,
0x2f, 0x76, 0x35, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79,
0x70, 0x65, 0x73, 0x2f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x55, 0x69,
0x6e, 0x74, 0x36, 0x34, 0x52, 0x08, 0x67, 0x61, 0x73, 0x4c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x16,
0x0a, 0x06, 0x72, 0x65, 0x6c, 0x61, 0x79, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x09, 0x52, 0x06,
0x72, 0x65, 0x6c, 0x61, 0x79, 0x73, 0x22, 0xe7, 0x02, 0x0a, 0x17, 0x50, 0x72, 0x6f, 0x70, 0x6f,
0x6e, 0x66, 0x69, 0x67, 0x52, 0x07, 0x62, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x12, 0x1f, 0x0a,
0x08, 0x67, 0x72, 0x61, 0x66, 0x66, 0x69, 0x74, 0x69, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x48,
0x00, 0x52, 0x08, 0x67, 0x72, 0x61, 0x66, 0x66, 0x69, 0x74, 0x69, 0x88, 0x01, 0x01, 0x42, 0x0b,
0x0a, 0x09, 0x5f, 0x67, 0x72, 0x61, 0x66, 0x66, 0x69, 0x74, 0x69, 0x22, 0xa6, 0x01, 0x0a, 0x0d,
0x42, 0x75, 0x69, 0x6c, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x18, 0x0a,
0x07, 0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x08, 0x52, 0x07,
0x65, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x64, 0x12, 0x63, 0x0a, 0x09, 0x67, 0x61, 0x73, 0x5f, 0x6c,
0x69, 0x6d, 0x69, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x42, 0x46, 0x82, 0xb5, 0x18, 0x42,
0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d,
0x61, 0x74, 0x69, 0x63, 0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76,
0x35, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x65, 0x6e, 0x73, 0x75, 0x73, 0x2d, 0x74, 0x79, 0x70, 0x65,
0x73, 0x2f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x55, 0x69, 0x6e, 0x74,
0x36, 0x34, 0x52, 0x08, 0x67, 0x61, 0x73, 0x4c, 0x69, 0x6d, 0x69, 0x74, 0x12, 0x16, 0x0a, 0x06,
0x72, 0x65, 0x6c, 0x61, 0x79, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x09, 0x52, 0x06, 0x72, 0x65,
0x6c, 0x61, 0x79, 0x73, 0x22, 0xe7, 0x02, 0x0a, 0x17, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65,
0x72, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64,
0x12, 0x74, 0x0a, 0x0f, 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x5f, 0x63, 0x6f, 0x6e,
0x66, 0x69, 0x67, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x4b, 0x2e, 0x65, 0x74, 0x68, 0x65,
0x72, 0x65, 0x75, 0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x61,
0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x2e, 0x50, 0x72, 0x6f, 0x70, 0x6f,
0x73, 0x65, 0x72, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x50, 0x61, 0x79, 0x6c, 0x6f,
0x61, 0x64, 0x12, 0x74, 0x0a, 0x0f, 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x5f, 0x63,
0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x4b, 0x2e, 0x65, 0x74,
0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72,
0x2e, 0x61, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x2e, 0x50, 0x72, 0x6f,
0x70, 0x6f, 0x73, 0x65, 0x72, 0x53, 0x65, 0x74, 0x74, 0x69, 0x6e, 0x67, 0x73, 0x50, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x2e, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x43, 0x6f, 0x6e,
0x66, 0x69, 0x67, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73,
0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x5c, 0x0a, 0x0e, 0x64, 0x65, 0x66, 0x61,
0x75, 0x6c, 0x74, 0x5f, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x35, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69,
0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x61, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76,
0x32, 0x2e, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e,
0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x52, 0x0d, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74,
0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x1a, 0x78, 0x0a, 0x13, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73,
0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a,
0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12,
0x4b, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x35,
0x61, 0x64, 0x2e, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x66, 0x69,
0x67, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x0e, 0x70, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72,
0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x12, 0x5c, 0x0a, 0x0e, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c,
0x74, 0x5f, 0x63, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x35,
0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61,
0x74, 0x6f, 0x72, 0x2e, 0x61, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x2e,
0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61,
0x79, 0x6c, 0x6f, 0x61, 0x64, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01,
0x42, 0xce, 0x01, 0x0a, 0x22, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75,
0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x61, 0x63, 0x63, 0x6f,
0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x42, 0x0f, 0x4b, 0x65, 0x79, 0x6d, 0x61, 0x6e, 0x61,
0x67, 0x65, 0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x53, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63,
0x6c, 0x61, 0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68,
0x61, 0x31, 0x2f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2d, 0x63, 0x6c, 0x69,
0x65, 0x6e, 0x74, 0x3b, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x70, 0x62, 0xaa,
0x02, 0x1e, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64,
0x61, 0x74, 0x6f, 0x72, 0x2e, 0x41, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x56, 0x32,
0xca, 0x02, 0x1e, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x56, 0x61, 0x6c, 0x69,
0x64, 0x61, 0x74, 0x6f, 0x72, 0x5c, 0x41, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x5c, 0x56,
0x32, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
0x79, 0x6c, 0x6f, 0x61, 0x64, 0x52, 0x0d, 0x64, 0x65, 0x66, 0x61, 0x75, 0x6c, 0x74, 0x43, 0x6f,
0x6e, 0x66, 0x69, 0x67, 0x1a, 0x78, 0x0a, 0x13, 0x50, 0x72, 0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72,
0x43, 0x6f, 0x6e, 0x66, 0x69, 0x67, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b,
0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x4b, 0x0a,
0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x35, 0x2e, 0x65,
0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f,
0x72, 0x2e, 0x61, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x76, 0x32, 0x2e, 0x50, 0x72,
0x6f, 0x70, 0x6f, 0x73, 0x65, 0x72, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x50, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x42, 0xce,
0x01, 0x0a, 0x22, 0x6f, 0x72, 0x67, 0x2e, 0x65, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e,
0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2e, 0x61, 0x63, 0x63, 0x6f, 0x75, 0x6e,
0x74, 0x73, 0x2e, 0x76, 0x32, 0x42, 0x0f, 0x4b, 0x65, 0x79, 0x6d, 0x61, 0x6e, 0x61, 0x67, 0x65,
0x72, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x53, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x6c, 0x61,
0x62, 0x73, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x35, 0x2f, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x2f, 0x70, 0x72, 0x79, 0x73, 0x6d, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31,
0x2f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x2d, 0x63, 0x6c, 0x69, 0x65, 0x6e,
0x74, 0x3b, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x6f, 0x72, 0x70, 0x62, 0xaa, 0x02, 0x1e,
0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x2e, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74,
0x6f, 0x72, 0x2e, 0x41, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x2e, 0x56, 0x32, 0xca, 0x02,
0x1e, 0x45, 0x74, 0x68, 0x65, 0x72, 0x65, 0x75, 0x6d, 0x5c, 0x56, 0x61, 0x6c, 0x69, 0x64, 0x61,
0x74, 0x6f, 0x72, 0x5c, 0x41, 0x63, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x73, 0x5c, 0x56, 0x32, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -979,6 +993,7 @@ func file_proto_prysm_v1alpha1_validator_client_keymanager_proto_init() {
(*SignRequest_BlockDeneb)(nil),
(*SignRequest_BlindedBlockDeneb)(nil),
}
file_proto_prysm_v1alpha1_validator_client_keymanager_proto_msgTypes[2].OneofWrappers = []interface{}{}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{

View File

@@ -1,6 +1,7 @@
syntax = "proto3";
package ethereum.validator.accounts.v2;
import "google/protobuf/wrappers.proto";
import "proto/eth/ext/options.proto";
import "proto/prysm/v1alpha1/attestation.proto";
import "proto/prysm/v1alpha1/beacon_block.proto";
@@ -87,6 +88,7 @@ message SignResponse {
message ProposerOptionPayload {
string fee_recipient = 1;
BuilderConfig builder = 2;
optional string graffiti = 3;
}
// BuilderConfig is a property of ProposerOptionPayload

View File
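The proto3 "optional" keyword added above gives the generated Go struct a pointer field plus presence tracking (hence the OneofWrappers change in the generated code), so callers can distinguish an unset graffiti from one explicitly set to the empty string. A plain-Go analogue of that presence semantics:

package main

import "fmt"

type proposerOption struct {
	Graffiti *string // optional: nil means "not set"
}

// GetGraffiti matches the shape of the generated accessor: it flattens the
// pointer, so callers that need presence must inspect the field directly.
func (p *proposerOption) GetGraffiti() string {
	if p != nil && p.Graffiti != nil {
		return *p.Graffiti
	}
	return ""
}

func main() {
	empty := ""
	unset := &proposerOption{}
	setEmpty := &proposerOption{Graffiti: &empty}
	fmt.Println(unset.GetGraffiti() == setEmpty.GetGraffiti())   // true: the accessors agree...
	fmt.Println(unset.Graffiti == nil, setEmpty.Graffiti == nil) // true false: ...but presence differs
}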

@@ -15,6 +15,7 @@ go_library(
deps = [
"//api/client/beacon:go_default_library",
"//api/client/event:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//validator/client/iface:go_default_library",

View File

@@ -13,8 +13,8 @@ import (
context "context"
reflect "reflect"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon"
"github.com/prysmaticlabs/prysm/v5/api/client/event"
event "github.com/prysmaticlabs/prysm/v5/api/client/event"
primitives "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
iface "github.com/prysmaticlabs/prysm/v5/validator/client/iface"
gomock "go.uber.org/mock/gomock"
@@ -113,7 +113,7 @@ func (m *MockValidatorClient) GetAggregatedSyncSelections(arg0 context.Context,
}
// GetAggregatedSyncSelections indicates an expected call of GetAggregatedSyncSelections.
func (mr *MockValidatorClientMockRecorder) GetAggregatedSyncSelections(arg0, arg1 interface{}) *gomock.Call {
func (mr *MockValidatorClientMockRecorder) GetAggregatedSyncSelections(arg0, arg1 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAggregatedSyncSelections", reflect.TypeOf((*MockValidatorClient)(nil).GetAggregatedSyncSelections), arg0, arg1)
}
@@ -299,72 +299,30 @@ func (mr *MockValidatorClientMockRecorder) ProposeExit(arg0, arg1 any) *gomock.C
}
// StartEventStream mocks base method.
func (m *MockValidatorClient) StartEventStream(arg0 context.Context, arg1 []string, arg2 chan<- *event.Event){
func (m *MockValidatorClient) StartEventStream(arg0 context.Context, arg1 []string, arg2 chan<- *event.Event) {
m.ctrl.T.Helper()
_ = m.ctrl.Call(m, "StartEventStream", arg0,arg1,arg2)
m.ctrl.Call(m, "StartEventStream", arg0, arg1, arg2)
}
// StartEventStream indicates an expected call of StartEventStream.
func (mr *MockValidatorClientMockRecorder) StartEventStream(arg0,arg1,arg2 interface{}) *gomock.Call {
func (mr *MockValidatorClientMockRecorder) StartEventStream(arg0, arg1, arg2 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StartEventStream", reflect.TypeOf((*MockValidatorClient)(nil).StartEventStream), arg0, arg1, arg2)
}
// ProcessEvent mocks base method.
func (m *MockValidatorClient) ProcessEvent(arg0 *event.Event) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ProcessEvent", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// ProcessEvent indicates an expected call of ProcessEvent.
func (mr *MockValidatorClientMockRecorder) ProcessEvent(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ProcessEvent", reflect.TypeOf((*MockValidatorClient)(nil).ProcessEvent), arg0)
}
// NodeIsHealthy mocks base method.
func (m *MockValidatorClient) NodeIsHealthy(arg0 context.Context) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NodeIsHealthy",arg0)
ret0, _ := ret[0].(bool)
return ret0
}
// NodeIsHealthy indicates an expected call of NodeIsHealthy.
func (mr *MockValidatorClientMockRecorder) NodeIsHealthy(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NodeIsHealthy", reflect.TypeOf((*MockValidatorClient)(nil).NodeIsHealthy), arg0)
}
// NodeHealthTracker mocks base method.
func (m *MockValidatorClient) NodeHealthTracker() *beacon.NodeHealthTracker {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NodeHealthTracker")
ret0, _ := ret[0].(*beacon.NodeHealthTracker)
return ret0
}
// NodeHealthTracker indicates an expected call of NodeHealthTracker.
func (mr *MockValidatorClientMockRecorder) NodeHealthTracker() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NodeHealthTracker", reflect.TypeOf((*MockValidatorClient)(nil).NodeHealthTracker))
}
// SubmitAggregateSelectionProof mocks base method.
func (m *MockValidatorClient) SubmitAggregateSelectionProof(arg0 context.Context, arg1 *eth.AggregateSelectionRequest) (*eth.AggregateSelectionResponse, error) {
func (m *MockValidatorClient) SubmitAggregateSelectionProof(arg0 context.Context, arg1 *eth.AggregateSelectionRequest, arg2 primitives.ValidatorIndex, arg3 uint64) (*eth.AggregateSelectionResponse, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "SubmitAggregateSelectionProof", arg0, arg1)
ret := m.ctrl.Call(m, "SubmitAggregateSelectionProof", arg0, arg1, arg2, arg3)
ret0, _ := ret[0].(*eth.AggregateSelectionResponse)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// SubmitAggregateSelectionProof indicates an expected call of SubmitAggregateSelectionProof.
func (mr *MockValidatorClientMockRecorder) SubmitAggregateSelectionProof(arg0, arg1 any) *gomock.Call {
func (mr *MockValidatorClientMockRecorder) SubmitAggregateSelectionProof(arg0, arg1, arg2, arg3 any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SubmitAggregateSelectionProof", reflect.TypeOf((*MockValidatorClient)(nil).SubmitAggregateSelectionProof), arg0, arg1)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SubmitAggregateSelectionProof", reflect.TypeOf((*MockValidatorClient)(nil).SubmitAggregateSelectionProof), arg0, arg1, arg2, arg3)
}
// SubmitSignedAggregateSelectionProof mocks base method.

View File

@@ -34,19 +34,19 @@ def cc_autoconf_toolchains_impl(repository_ctx):
else:
repository_ctx.file("BUILD", "# C++ toolchain autoconfiguration was disabled by BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN env variable.")
def cc_autoconf_impl(repository_ctx, overriden_tools = dict()):
def cc_autoconf_impl(repository_ctx, overridden_tools = dict()):
"""Generate BUILD file with 'cc_toolchain' targets for the local host C++ toolchain.
Args:
repository_ctx: repository context
overriden_tools: dict of tool paths to use instead of autoconfigured tools
overridden_tools: dict of tool paths to use instead of autoconfigured tools
"""
cpu_value = get_cpu_value(repository_ctx)
if cpu_value.startswith("darwin"):
print("Configuring local C++ toolchain for Darwin. This is non-hermetic and builds may " +
"not be reproducible. Consider building on linux for a hermetic build.")
configure_unix_toolchain(repository_ctx, cpu_value, overriden_tools)
configure_unix_toolchain(repository_ctx, cpu_value, overridden_tools)
else:
paths = resolve_labels(repository_ctx, [
"@bazel_tools//tools/cpp:BUILD.empty.tpl",

View File

@@ -1,4 +1,4 @@
load("@rules_oci//oci:defs.bzl", "oci_image", "oci_image_index", "oci_push")
load("@rules_oci//oci:defs.bzl", "oci_image", "oci_image_index", "oci_push", "oci_tarball")
load("@rules_pkg//:pkg.bzl", "pkg_tar")
load("//tools:multi_arch.bzl", "multi_arch")
@@ -74,3 +74,9 @@ def prysm_image_upload(
repository = repository,
tags = tags,
)
oci_tarball(
name = "oci_image_tarball",
image = ":oci_image",
repo_tags = [repository+":latest"],
)

View File

@@ -12,6 +12,7 @@ go_library(
deps = [
"//api/client/beacon:go_default_library",
"//api/client/event:go_default_library",
"//config/fieldparams:go_default_library",
"//config/proposer:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/api/client/beacon"
"github.com/prysmaticlabs/prysm/v5/api/client/event"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/proposer"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -91,6 +92,7 @@ func (_ *Wallet) InitializeKeymanager(_ context.Context, _ iface.InitKeymanagerC
type Validator struct {
Km keymanager.IKeymanager
graffiti string
proposerSettings *proposer.Settings
}
@@ -215,6 +217,23 @@ func (m *Validator) SetProposerSettings(_ context.Context, settings *proposer.Se
return nil
}
// GetGraffiti for mocking
func (m *Validator) GetGraffiti(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte) ([]byte, error) {
return []byte(m.graffiti), nil
}
// SetGraffiti for mocking
func (m *Validator) SetGraffiti(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte, graffiti []byte) error {
m.graffiti = string(graffiti)
return nil
}
// DeleteGraffiti for mocking
func (m *Validator) DeleteGraffiti(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte) error {
m.graffiti = ""
return nil
}
func (*Validator) StartEventStream(_ context.Context, _ []string, _ chan<- *event.Event) {
panic("implement me")
}

View File

@@ -84,7 +84,7 @@ func (v *validator) SubmitAggregateAndProof(ctx context.Context, slot primitives
CommitteeIndex: duty.CommitteeIndex,
PublicKey: pubKey[:],
SlotSignature: slotSig,
})
}, duty.ValidatorIndex, uint64(len(duty.Committee)))
if err != nil {
// handle grpc not found
s, ok := status.FromError(err)

View File

@@ -63,6 +63,8 @@ func TestSubmitAggregateAndProof_SignFails(t *testing.T) {
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
gomock.Any(), // ctx
gomock.AssignableToTypeOf(&ethpb.AggregateSelectionRequest{}),
gomock.Any(),
gomock.Any(),
).Return(&ethpb.AggregateSelectionResponse{
AggregateAndProof: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 0,
@@ -106,6 +108,8 @@ func TestSubmitAggregateAndProof_Ok(t *testing.T) {
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
gomock.Any(), // ctx
gomock.AssignableToTypeOf(&ethpb.AggregateSelectionRequest{}),
gomock.Any(),
gomock.Any(),
).Return(&ethpb.AggregateSelectionResponse{
AggregateAndProof: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 0,
@@ -166,6 +170,8 @@ func TestSubmitAggregateAndProof_Distributed(t *testing.T) {
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
gomock.Any(), // ctx
gomock.AssignableToTypeOf(&ethpb.AggregateSelectionRequest{}),
gomock.Any(),
gomock.Any(),
).Return(&ethpb.AggregateSelectionResponse{
AggregateAndProof: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 0,

View File

@@ -66,6 +66,7 @@ go_library(
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_protobuf//types/known/timestamppb:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
@@ -129,7 +130,6 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/validator-mock:go_default_library",
"//time/slots:go_default_library",
"//validator/client/beacon-api/mock:go_default_library",
"//validator/client/beacon-api/test-helpers:go_default_library",

View File

@@ -2,12 +2,14 @@ package beacon_api
import (
"context"
"net/http"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client/event"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/validator/client/iface"
@@ -135,9 +137,9 @@ func (c *beaconApiValidatorClient) StreamBlocksAltair(ctx context.Context, in *e
return c.streamBlocks(ctx, in, time.Second), nil
}
func (c *beaconApiValidatorClient) SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest) (*ethpb.AggregateSelectionResponse, error) {
func (c *beaconApiValidatorClient) SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest, index primitives.ValidatorIndex, committeeLength uint64) (*ethpb.AggregateSelectionResponse, error) {
return wrapInMetrics[*ethpb.AggregateSelectionResponse]("SubmitAggregateSelectionProof", func() (*ethpb.AggregateSelectionResponse, error) {
return c.submitAggregateSelectionProof(ctx, in)
return c.submitAggregateSelectionProof(ctx, in, index, committeeLength)
})
}
@@ -191,7 +193,8 @@ func (c *beaconApiValidatorClient) WaitForChainStart(ctx context.Context, _ *emp
}
func (c *beaconApiValidatorClient) StartEventStream(ctx context.Context, topics []string, eventsChannel chan<- *event.Event) {
eventStream, err := event.NewEventStream(ctx, c.jsonRestHandler.HttpClient(), c.jsonRestHandler.Host(), topics)
client := &http.Client{} // event stream should not be subject to the same settings as other api calls, so we won't use c.jsonRestHandler.HttpClient()
eventStream, err := event.NewEventStream(ctx, client, c.jsonRestHandler.Host(), topics)
if err != nil {
eventsChannel <- &event.Event{
EventType: event.EventError,

View File
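The event-stream change above swaps in a fresh http.Client because SSE connections are long-lived: a client with a request Timeout (appropriate for unary REST calls) would sever the stream mid-subscription. A sketch of the distinction (the timeout value is illustrative):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	restClient := &http.Client{Timeout: 10 * time.Second} // fine for unary REST calls
	streamClient := &http.Client{}                        // zero Timeout: never cut a long-lived SSE stream
	fmt.Println(restClient.Timeout, streamClient.Timeout) // 10s 0s
}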

@@ -8,11 +8,14 @@ import (
"net/url"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/consensus-types/validator"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"golang.org/x/sync/errgroup"
)
type dutiesProvider interface {
@@ -31,37 +34,42 @@ type committeeIndexSlotPair struct {
slot primitives.Slot
}
type validatorForDuty struct {
pubkey []byte
index primitives.ValidatorIndex
status ethpb.ValidatorStatus
}
func (c beaconApiValidatorClient) getDuties(ctx context.Context, in *ethpb.DutiesRequest) (*ethpb.DutiesResponse, error) {
all, err := c.multipleValidatorStatus(ctx, &ethpb.MultipleValidatorStatusRequest{PublicKeys: in.PublicKeys})
vals, err := c.getValidatorsForDuties(ctx, in.PublicKeys)
if err != nil {
return nil, errors.Wrap(err, "failed to get validator status")
}
known := &ethpb.MultipleValidatorStatusResponse{
PublicKeys: make([][]byte, 0, len(all.PublicKeys)),
Statuses: make([]*ethpb.ValidatorStatusResponse, 0, len(all.Statuses)),
Indices: make([]primitives.ValidatorIndex, 0, len(all.Indices)),
}
for i, status := range all.Statuses {
if status.Status != ethpb.ValidatorStatus_UNKNOWN_STATUS {
known.PublicKeys = append(known.PublicKeys, all.PublicKeys[i])
known.Statuses = append(known.Statuses, all.Statuses[i])
known.Indices = append(known.Indices, all.Indices[i])
}
return nil, errors.Wrap(err, "failed to get validators for duties")
}
// Sync committees are an Altair feature
fetchSyncDuties := in.Epoch >= params.BeaconConfig().AltairForkEpoch
currentEpochDuties, err := c.getDutiesForEpoch(ctx, in.Epoch, known, fetchSyncDuties)
if err != nil {
return nil, errors.Wrapf(err, "failed to get duties for current epoch `%d`", in.Epoch)
}
errCh := make(chan error, 1)
nextEpochDuties, err := c.getDutiesForEpoch(ctx, in.Epoch+1, known, fetchSyncDuties)
var currentEpochDuties []*ethpb.DutiesResponse_Duty
go func() {
currentEpochDuties, err = c.getDutiesForEpoch(ctx, in.Epoch, vals, fetchSyncDuties)
if err != nil {
errCh <- errors.Wrapf(err, "failed to get duties for current epoch `%d`", in.Epoch)
return
}
errCh <- nil
}()
nextEpochDuties, err := c.getDutiesForEpoch(ctx, in.Epoch+1, vals, fetchSyncDuties)
if err != nil {
return nil, errors.Wrapf(err, "failed to get duties for next epoch `%d`", in.Epoch+1)
}
if err = <-errCh; err != nil {
return nil, err
}
return &ethpb.DutiesResponse{
CurrentEpochDuties: currentEpochDuties,
NextEpochDuties: nextEpochDuties,
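
Reviewer note: the errCh pattern above overlaps the two epoch fetches instead of running them back to back. A self-contained sketch of the same join discipline, with fetch standing in for getDutiesForEpoch:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// fetchBoth mirrors the pattern above: the current epoch is fetched in a
// goroutine while the next epoch is fetched inline, and the buffered error
// channel is the only synchronization point.
func fetchBoth(ctx context.Context, epoch uint64, fetch func(context.Context, uint64) ([]string, error)) ([]string, []string, error) {
	errCh := make(chan error, 1)
	var current []string
	go func() {
		var err error
		current, err = fetch(ctx, epoch)
		errCh <- err // nil on success
	}()
	next, err := fetch(ctx, epoch+1)
	if err != nil {
		return nil, nil, fmt.Errorf("next epoch: %w", err)
	}
	// current must not be read before this receive completes.
	if err := <-errCh; err != nil {
		return nil, nil, err
	}
	return current, next, nil
}

func main() {
	cur, nxt, err := fetchBoth(context.Background(), 1, func(_ context.Context, e uint64) ([]string, error) {
		if e > 100 {
			return nil, errors.New("epoch too far ahead")
		}
		return []string{fmt.Sprintf("duty@%d", e)}, nil
	})
	fmt.Println(cur, nxt, err)
}
```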
@@ -71,25 +79,94 @@ func (c beaconApiValidatorClient) getDuties(ctx context.Context, in *ethpb.Dutie
func (c beaconApiValidatorClient) getDutiesForEpoch(
ctx context.Context,
epoch primitives.Epoch,
multipleValidatorStatus *ethpb.MultipleValidatorStatusResponse,
vals []validatorForDuty,
fetchSyncDuties bool,
) ([]*ethpb.DutiesResponse_Duty, error) {
attesterDuties, err := c.dutiesProvider.GetAttesterDuties(ctx, epoch, multipleValidatorStatus.Indices)
if err != nil {
return nil, errors.Wrapf(err, "failed to get attester duties for epoch `%d`", epoch)
indices := make([]primitives.ValidatorIndex, len(vals))
for i, v := range vals {
indices[i] = v.index
}
var syncDuties []*structs.SyncCommitteeDuty
if fetchSyncDuties {
if syncDuties, err = c.dutiesProvider.GetSyncDuties(ctx, epoch, multipleValidatorStatus.Indices); err != nil {
return nil, errors.Wrapf(err, "failed to get sync duties for epoch `%d`", epoch)
// Below variables MUST NOT be used in the main function before wg.Wait().
// This is because they are populated in goroutines and wg.Wait()
// will return only once all goroutines finish their execution.
// Mapping from a validator index to its attesting committee's index and slot
attesterDutiesMapping := make(map[primitives.ValidatorIndex]committeeIndexSlotPair)
// Set containing all validator indices that are part of a sync committee for this epoch
syncDutiesMapping := make(map[primitives.ValidatorIndex]bool)
// Mapping from a validator index to its proposal slot
proposerDutySlots := make(map[primitives.ValidatorIndex][]primitives.Slot)
// Mapping from the {committeeIndex, slot} to each of the committee's validator indices
committeeMapping := make(map[committeeIndexSlotPair][]primitives.ValidatorIndex)
var wg errgroup.Group
wg.Go(func() error {
attesterDuties, err := c.dutiesProvider.GetAttesterDuties(ctx, epoch, indices)
if err != nil {
return errors.Wrapf(err, "failed to get attester duties for epoch `%d`", epoch)
}
for _, attesterDuty := range attesterDuties {
validatorIndex, err := strconv.ParseUint(attesterDuty.ValidatorIndex, 10, 64)
if err != nil {
return errors.Wrapf(err, "failed to parse attester validator index `%s`", attesterDuty.ValidatorIndex)
}
slot, err := strconv.ParseUint(attesterDuty.Slot, 10, 64)
if err != nil {
return errors.Wrapf(err, "failed to parse attester slot `%s`", attesterDuty.Slot)
}
committeeIndex, err := strconv.ParseUint(attesterDuty.CommitteeIndex, 10, 64)
if err != nil {
return errors.Wrapf(err, "failed to parse attester committee index `%s`", attesterDuty.CommitteeIndex)
}
attesterDutiesMapping[primitives.ValidatorIndex(validatorIndex)] = committeeIndexSlotPair{
slot: primitives.Slot(slot),
committeeIndex: primitives.CommitteeIndex(committeeIndex),
}
}
return nil
})
if fetchSyncDuties {
wg.Go(func() error {
syncDuties, err := c.dutiesProvider.GetSyncDuties(ctx, epoch, indices)
if err != nil {
return errors.Wrapf(err, "failed to get sync duties for epoch `%d`", epoch)
}
for _, syncDuty := range syncDuties {
validatorIndex, err := strconv.ParseUint(syncDuty.ValidatorIndex, 10, 64)
if err != nil {
return errors.Wrapf(err, "failed to parse sync validator index `%s`", syncDuty.ValidatorIndex)
}
syncDutiesMapping[primitives.ValidatorIndex(validatorIndex)] = true
}
return nil
})
}
var proposerDuties []*structs.ProposerDuty
if proposerDuties, err = c.dutiesProvider.GetProposerDuties(ctx, epoch); err != nil {
return nil, errors.Wrapf(err, "failed to get proposer duties for epoch `%d`", epoch)
}
wg.Go(func() error {
proposerDuties, err := c.dutiesProvider.GetProposerDuties(ctx, epoch)
if err != nil {
return errors.Wrapf(err, "failed to get proposer duties for epoch `%d`", epoch)
}
for _, proposerDuty := range proposerDuties {
validatorIndex, err := strconv.ParseUint(proposerDuty.ValidatorIndex, 10, 64)
if err != nil {
return errors.Wrapf(err, "failed to parse proposer validator index `%s`", proposerDuty.ValidatorIndex)
}
slot, err := strconv.ParseUint(proposerDuty.Slot, 10, 64)
if err != nil {
return errors.Wrapf(err, "failed to parse proposer slot `%s`", proposerDuty.Slot)
}
proposerDutySlots[primitives.ValidatorIndex(validatorIndex)] =
append(proposerDutySlots[primitives.ValidatorIndex(validatorIndex)], primitives.Slot(slot))
}
return nil
})
committees, err := c.dutiesProvider.GetCommittees(ctx, epoch)
if err != nil {
@@ -104,70 +181,15 @@ func (c beaconApiValidatorClient) getDutiesForEpoch(
slotCommittees[c.Slot] = n + 1
}
// Mapping from a validator index to its attesting committee's index and slot
attesterDutiesMapping := make(map[primitives.ValidatorIndex]committeeIndexSlotPair)
for _, attesterDuty := range attesterDuties {
validatorIndex, err := strconv.ParseUint(attesterDuty.ValidatorIndex, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse attester validator index `%s`", attesterDuty.ValidatorIndex)
}
slot, err := strconv.ParseUint(attesterDuty.Slot, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse attester slot `%s`", attesterDuty.Slot)
}
committeeIndex, err := strconv.ParseUint(attesterDuty.CommitteeIndex, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse attester committee index `%s`", attesterDuty.CommitteeIndex)
}
attesterDutiesMapping[primitives.ValidatorIndex(validatorIndex)] = committeeIndexSlotPair{
slot: primitives.Slot(slot),
committeeIndex: primitives.CommitteeIndex(committeeIndex),
}
}
// Mapping from a validator index to its proposal slot
proposerDutySlots := make(map[primitives.ValidatorIndex][]primitives.Slot)
for _, proposerDuty := range proposerDuties {
validatorIndex, err := strconv.ParseUint(proposerDuty.ValidatorIndex, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse proposer validator index `%s`", proposerDuty.ValidatorIndex)
}
slot, err := strconv.ParseUint(proposerDuty.Slot, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse proposer slot `%s`", proposerDuty.Slot)
}
proposerDutySlots[primitives.ValidatorIndex(validatorIndex)] = append(proposerDutySlots[primitives.ValidatorIndex(validatorIndex)], primitives.Slot(slot))
}
// Set containing all validator indices that are part of a sync committee for this epoch
syncDutiesMapping := make(map[primitives.ValidatorIndex]bool)
for _, syncDuty := range syncDuties {
validatorIndex, err := strconv.ParseUint(syncDuty.ValidatorIndex, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse sync validator index `%s`", syncDuty.ValidatorIndex)
}
syncDutiesMapping[primitives.ValidatorIndex(validatorIndex)] = true
}
// Mapping from the {committeeIndex, slot} to each of the committee's validator indices
committeeMapping := make(map[committeeIndexSlotPair][]primitives.ValidatorIndex)
for _, committee := range committees {
committeeIndex, err := strconv.ParseUint(committee.Index, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse committee index `%s`", committee.Index)
}
slot, err := strconv.ParseUint(committee.Slot, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse slot `%s`", committee.Slot)
}
validatorIndices := make([]primitives.ValidatorIndex, len(committee.Validators))
for index, validatorIndexString := range committee.Validators {
validatorIndex, err := strconv.ParseUint(validatorIndexString, 10, 64)
@@ -176,7 +198,6 @@ func (c beaconApiValidatorClient) getDutiesForEpoch(
}
validatorIndices[index] = primitives.ValidatorIndex(validatorIndex)
}
key := committeeIndexSlotPair{
committeeIndex: primitives.CommitteeIndex(committeeIndex),
slot: primitives.Slot(slot),
@@ -184,16 +205,19 @@ func (c beaconApiValidatorClient) getDutiesForEpoch(
committeeMapping[key] = validatorIndices
}
duties := make([]*ethpb.DutiesResponse_Duty, len(multipleValidatorStatus.Statuses))
for index, validatorStatus := range multipleValidatorStatus.Statuses {
validatorIndex := multipleValidatorStatus.Indices[index]
pubkey := multipleValidatorStatus.PublicKeys[index]
if err = wg.Wait(); err != nil {
return nil, err
}
var attesterSlot primitives.Slot
var committeeIndex primitives.CommitteeIndex
var committeeValidatorIndices []primitives.ValidatorIndex
duties := make([]*ethpb.DutiesResponse_Duty, len(vals))
for i, v := range vals {
var (
attesterSlot primitives.Slot
committeeIndex primitives.CommitteeIndex
committeeValidatorIndices []primitives.ValidatorIndex
)
if committeeMappingKey, ok := attesterDutiesMapping[validatorIndex]; ok {
if committeeMappingKey, ok := attesterDutiesMapping[v.index]; ok {
committeeIndex = committeeMappingKey.committeeIndex
attesterSlot = committeeMappingKey.slot
@@ -202,15 +226,15 @@ func (c beaconApiValidatorClient) getDutiesForEpoch(
}
}
duties[index] = &ethpb.DutiesResponse_Duty{
duties[i] = &ethpb.DutiesResponse_Duty{
Committee: committeeValidatorIndices,
CommitteeIndex: committeeIndex,
AttesterSlot: attesterSlot,
ProposerSlots: proposerDutySlots[validatorIndex],
PublicKey: pubkey,
Status: validatorStatus.Status,
ValidatorIndex: validatorIndex,
IsSyncCommittee: syncDutiesMapping[validatorIndex],
ProposerSlots: proposerDutySlots[v.index],
PublicKey: v.pubkey,
Status: v.status,
ValidatorIndex: v.index,
IsSyncCommittee: syncDutiesMapping[v.index],
CommitteesAtSlot: slotCommittees[strconv.FormatUint(uint64(attesterSlot), 10)],
}
}
@@ -218,6 +242,51 @@ func (c beaconApiValidatorClient) getDutiesForEpoch(
return duties, nil
}
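Reviewer note: the errgroup restructuring is safe only because each goroutine writes to its own map and every read happens after wg.Wait(), exactly as the MUST NOT comment above warns. A compressed sketch of that ownership discipline:

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	attester := make(map[int]string) // written only by the first goroutine
	proposer := make(map[int]string) // written only by the second goroutine

	var wg errgroup.Group
	wg.Go(func() error {
		attester[1] = "slot 5"
		return nil
	})
	wg.Go(func() error {
		proposer[2] = "slot 9"
		return nil
	})
	// Wait is the only synchronization point: reading either map before it
	// returns would race, and sharing one map between goroutines would too.
	if err := wg.Wait(); err != nil {
		panic(err)
	}
	fmt.Println(attester, proposer)
}
```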
func (c *beaconApiValidatorClient) getValidatorsForDuties(ctx context.Context, pubkeys [][]byte) ([]validatorForDuty, error) {
vals := make([]validatorForDuty, 0, len(pubkeys))
stringPubkeysToPubkeys := make(map[string][]byte, len(pubkeys))
stringPubkeys := make([]string, len(pubkeys))
for i, pk := range pubkeys {
stringPk := hexutil.Encode(pk)
stringPubkeysToPubkeys[stringPk] = pk
stringPubkeys[i] = stringPk
}
statusesWithDuties := []string{validator.ActiveOngoing.String(), validator.ActiveExiting.String()}
stateValidatorsResponse, err := c.stateValidatorsProvider.GetStateValidators(ctx, stringPubkeys, nil, statusesWithDuties)
if err != nil {
return nil, errors.Wrap(err, "failed to get state validators")
}
for _, validatorContainer := range stateValidatorsResponse.Data {
val := validatorForDuty{}
validatorIndex, err := strconv.ParseUint(validatorContainer.Index, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse validator index %s", validatorContainer.Index)
}
val.index = primitives.ValidatorIndex(validatorIndex)
stringPubkey := validatorContainer.Validator.Pubkey
pubkey, ok := stringPubkeysToPubkeys[stringPubkey]
if !ok {
return nil, errors.Wrapf(err, "returned public key %s not requested", stringPubkey)
}
val.pubkey = pubkey
status, ok := beaconAPITogRPCValidatorStatus[validatorContainer.Status]
if !ok {
return nil, errors.New("invalid validator status " + validatorContainer.Status)
}
val.status = status
vals = append(vals, val)
}
return vals, nil
}
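Reviewer note: getValidatorsForDuties pushes the status filter to the beacon node (only active_ongoing and active_exiting are requested), so keys with no duties simply never come back, replacing the old client-side UNKNOWN_STATUS filter. The hex round-trip map exists because the response echoes pubkeys as 0x-prefixed strings; a small sketch of that idea:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	// A hex-to-raw map lets response entries be matched back to the
	// original bytes without re-decoding every pubkey.
	raw := [][]byte{{0xaa, 0xbb}, {0xcc, 0xdd}}
	byHex := make(map[string][]byte, len(raw))
	for _, pk := range raw {
		byHex[hexutil.Encode(pk)] = pk
	}
	if pk, ok := byHex["0xaabb"]; ok {
		fmt.Printf("matched %x\n", pk)
	}
}
```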
// GetCommittees retrieves the committees for the given epoch
func (c beaconApiDutiesProvider) GetCommittees(ctx context.Context, epoch primitives.Epoch) ([]*structs.Committee, error) {
committeeParams := url.Values{}

View File

@@ -9,11 +9,8 @@ import (
"strconv"
"testing"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
validatormock "github.com/prysmaticlabs/prysm/v5/testing/validator-mock"
"github.com/prysmaticlabs/prysm/v5/validator/client/iface"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -541,7 +538,6 @@ func TestGetDutiesForEpoch_Error(t *testing.T) {
{
name: "get proposer duties failed",
expectedError: "failed to get proposer duties for epoch `1`: foo error",
fetchAttesterDutiesError: nil,
fetchProposerDutiesError: errors.New("foo error"),
},
{
@@ -720,28 +716,20 @@ func TestGetDutiesForEpoch_Error(t *testing.T) {
testCase.fetchCommitteesError,
).AnyTimes()
vals := make([]validatorForDuty, len(pubkeys))
for i := 0; i < len(pubkeys); i++ {
vals[i] = validatorForDuty{
pubkey: pubkeys[i],
index: validatorIndices[i],
status: ethpb.ValidatorStatus_ACTIVE,
}
}
validatorClient := &beaconApiValidatorClient{dutiesProvider: dutiesProvider}
_, err := validatorClient.getDutiesForEpoch(
ctx,
epoch,
&ethpb.MultipleValidatorStatusResponse{
PublicKeys: pubkeys,
Indices: validatorIndices,
Statuses: []*ethpb.ValidatorStatusResponse{
{Status: ethpb.ValidatorStatus_UNKNOWN_STATUS},
{Status: ethpb.ValidatorStatus_DEPOSITED},
{Status: ethpb.ValidatorStatus_PENDING},
{Status: ethpb.ValidatorStatus_ACTIVE},
{Status: ethpb.ValidatorStatus_EXITING},
{Status: ethpb.ValidatorStatus_SLASHING},
{Status: ethpb.ValidatorStatus_EXITED},
{Status: ethpb.ValidatorStatus_INVALID},
{Status: ethpb.ValidatorStatus_PARTIALLY_DEPOSITED},
{Status: ethpb.ValidatorStatus_UNKNOWN_STATUS},
{Status: ethpb.ValidatorStatus_DEPOSITED},
{Status: ethpb.ValidatorStatus_PENDING},
},
},
vals,
true,
)
assert.ErrorContains(t, testCase.expectedError, err)
@@ -773,40 +761,6 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
committeeSlots := []primitives.Slot{28, 29, 30}
proposerSlots := []primitives.Slot{31, 32, 33, 34, 35, 36, 37, 38}
statuses := []ethpb.ValidatorStatus{
ethpb.ValidatorStatus_UNKNOWN_STATUS,
ethpb.ValidatorStatus_DEPOSITED,
ethpb.ValidatorStatus_PENDING,
ethpb.ValidatorStatus_ACTIVE,
ethpb.ValidatorStatus_EXITING,
ethpb.ValidatorStatus_SLASHING,
ethpb.ValidatorStatus_EXITED,
ethpb.ValidatorStatus_INVALID,
ethpb.ValidatorStatus_PARTIALLY_DEPOSITED,
ethpb.ValidatorStatus_UNKNOWN_STATUS,
ethpb.ValidatorStatus_DEPOSITED,
ethpb.ValidatorStatus_PENDING,
}
multipleValidatorStatus := &ethpb.MultipleValidatorStatusResponse{
PublicKeys: pubkeys,
Indices: validatorIndices,
Statuses: []*ethpb.ValidatorStatusResponse{
{Status: statuses[0]},
{Status: statuses[1]},
{Status: statuses[2]},
{Status: statuses[3]},
{Status: statuses[4]},
{Status: statuses[5]},
{Status: statuses[6]},
{Status: statuses[7]},
{Status: statuses[8]},
{Status: statuses[9]},
{Status: statuses[10]},
{Status: statuses[11]},
},
}
ctrl := gomock.NewController(t)
defer ctrl.Finish()
@@ -824,7 +778,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
dutiesProvider.EXPECT().GetAttesterDuties(
ctx,
epoch,
multipleValidatorStatus.Indices,
validatorIndices,
).Return(
generateValidAttesterDuties(pubkeys, validatorIndices, committeeIndices, committeeSlots),
nil,
@@ -842,7 +796,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
dutiesProvider.EXPECT().GetSyncDuties(
ctx,
epoch,
multipleValidatorStatus.Indices,
validatorIndices,
).Return(
generateValidSyncDuties(pubkeys, validatorIndices),
nil,
@@ -883,7 +837,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
CommitteeIndex: committeeIndices[0],
AttesterSlot: committeeSlots[0],
PublicKey: pubkeys[0],
Status: statuses[0],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[0],
CommitteesAtSlot: 1,
},
@@ -895,7 +849,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
CommitteeIndex: committeeIndices[0],
AttesterSlot: committeeSlots[0],
PublicKey: pubkeys[1],
Status: statuses[1],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[1],
CommitteesAtSlot: 1,
},
@@ -907,7 +861,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
CommitteeIndex: committeeIndices[1],
AttesterSlot: committeeSlots[1],
PublicKey: pubkeys[2],
Status: statuses[2],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[2],
CommitteesAtSlot: 1,
},
@@ -919,7 +873,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
CommitteeIndex: committeeIndices[1],
AttesterSlot: committeeSlots[1],
PublicKey: pubkeys[3],
Status: statuses[3],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[3],
CommitteesAtSlot: 1,
},
@@ -931,7 +885,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
CommitteeIndex: committeeIndices[2],
AttesterSlot: committeeSlots[2],
PublicKey: pubkeys[4],
Status: statuses[4],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[4],
ProposerSlots: expectedProposerSlots1,
CommitteesAtSlot: 1,
@@ -944,7 +898,7 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
CommitteeIndex: committeeIndices[2],
AttesterSlot: committeeSlots[2],
PublicKey: pubkeys[5],
Status: statuses[5],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[5],
ProposerSlots: expectedProposerSlots2,
IsSyncCommittee: testCase.fetchSyncDuties,
@@ -952,47 +906,55 @@ func TestGetDutiesForEpoch_Valid(t *testing.T) {
},
{
PublicKey: pubkeys[6],
Status: statuses[6],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[6],
ProposerSlots: expectedProposerSlots3,
IsSyncCommittee: testCase.fetchSyncDuties,
},
{
PublicKey: pubkeys[7],
Status: statuses[7],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[7],
ProposerSlots: expectedProposerSlots4,
IsSyncCommittee: testCase.fetchSyncDuties,
},
{
PublicKey: pubkeys[8],
Status: statuses[8],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[8],
IsSyncCommittee: testCase.fetchSyncDuties,
},
{
PublicKey: pubkeys[9],
Status: statuses[9],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[9],
IsSyncCommittee: testCase.fetchSyncDuties,
},
{
PublicKey: pubkeys[10],
Status: statuses[10],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[10],
},
{
PublicKey: pubkeys[11],
Status: statuses[11],
Status: ethpb.ValidatorStatus_ACTIVE,
ValidatorIndex: validatorIndices[11],
},
}
validatorClient := &beaconApiValidatorClient{dutiesProvider: dutiesProvider}
vals := make([]validatorForDuty, len(pubkeys))
for i := 0; i < len(pubkeys); i++ {
vals[i] = validatorForDuty{
pubkey: pubkeys[i],
index: validatorIndices[i],
status: ethpb.ValidatorStatus_ACTIVE,
}
}
duties, err := validatorClient.getDutiesForEpoch(
ctx,
epoch,
multipleValidatorStatus,
vals,
testCase.fetchSyncDuties,
)
require.NoError(t, err)
@@ -1018,41 +980,24 @@ func TestGetDuties_Valid(t *testing.T) {
for _, testCase := range testCases {
t.Run(testCase.name, func(t *testing.T) {
statuses := []ethpb.ValidatorStatus{
ethpb.ValidatorStatus_DEPOSITED,
ethpb.ValidatorStatus_PENDING,
ethpb.ValidatorStatus_ACTIVE,
ethpb.ValidatorStatus_EXITING,
ethpb.ValidatorStatus_SLASHING,
ethpb.ValidatorStatus_EXITED,
ethpb.ValidatorStatus_EXITED,
ethpb.ValidatorStatus_EXITED,
ethpb.ValidatorStatus_EXITED,
ethpb.ValidatorStatus_DEPOSITED,
ethpb.ValidatorStatus_PENDING,
ethpb.ValidatorStatus_ACTIVE,
}
pubkeys := make([][]byte, len(statuses))
validatorIndices := make([]primitives.ValidatorIndex, len(statuses))
for i := range statuses {
valCount := 12
pubkeys := make([][]byte, valCount)
validatorIndices := make([]primitives.ValidatorIndex, valCount)
vals := make([]validatorForDuty, valCount)
for i := 0; i < valCount; i++ {
pubkeys[i] = []byte(strconv.Itoa(i))
validatorIndices[i] = primitives.ValidatorIndex(i)
vals[i] = validatorForDuty{
pubkey: pubkeys[i],
index: validatorIndices[i],
status: ethpb.ValidatorStatus_ACTIVE,
}
}
committeeIndices := []primitives.CommitteeIndex{25, 26, 27}
committeeSlots := []primitives.Slot{28, 29, 30}
proposerSlots := []primitives.Slot{31, 32, 33, 34, 35, 36, 37, 38}
statusResps := make([]*ethpb.ValidatorStatusResponse, len(statuses))
for i, s := range statuses {
statusResps[i] = &ethpb.ValidatorStatusResponse{Status: s}
}
multipleValidatorStatus := &ethpb.MultipleValidatorStatusResponse{
PublicKeys: pubkeys,
Indices: validatorIndices,
Statuses: statusResps,
}
ctrl := gomock.NewController(t)
defer ctrl.Finish()
@@ -1070,7 +1015,7 @@ func TestGetDuties_Valid(t *testing.T) {
dutiesProvider.EXPECT().GetAttesterDuties(
ctx,
testCase.epoch,
multipleValidatorStatus.Indices,
validatorIndices,
).Return(
generateValidAttesterDuties(pubkeys, validatorIndices, committeeIndices, committeeSlots),
nil,
@@ -1089,7 +1034,7 @@ func TestGetDuties_Valid(t *testing.T) {
dutiesProvider.EXPECT().GetSyncDuties(
ctx,
testCase.epoch,
multipleValidatorStatus.Indices,
validatorIndices,
).Return(
generateValidSyncDuties(pubkeys, validatorIndices),
nil,
@@ -1143,7 +1088,7 @@ func TestGetDuties_Valid(t *testing.T) {
Data: []*structs.ValidatorContainer{
{
Index: strconv.FormatUint(uint64(validatorIndices[0]), 10),
Status: "pending_initialized",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[0]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1151,7 +1096,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[1]), 10),
Status: "pending_queued",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[1]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1167,7 +1112,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[3]), 10),
Status: "active_exiting",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[3]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1175,7 +1120,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[4]), 10),
Status: "active_slashed",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[4]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1183,7 +1128,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[5]), 10),
Status: "exited_unslashed",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[5]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1191,7 +1136,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[6]), 10),
Status: "exited_slashed",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[6]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1199,7 +1144,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[7]), 10),
Status: "withdrawal_possible",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[7]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1207,7 +1152,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[8]), 10),
Status: "withdrawal_done",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[8]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1215,7 +1160,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[9]), 10),
Status: "pending_initialized",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[9]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1223,7 +1168,7 @@ func TestGetDuties_Valid(t *testing.T) {
},
{
Index: strconv.FormatUint(uint64(validatorIndices[10]), 10),
Status: "pending_queued",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkeys[10]),
ActivationEpoch: strconv.FormatUint(uint64(testCase.epoch), 10),
@@ -1242,27 +1187,16 @@ func TestGetDuties_Valid(t *testing.T) {
nil,
).MinTimes(1)
prysmBeaconChainClient := validatormock.NewMockPrysmBeaconChainClient(ctrl)
prysmBeaconChainClient.EXPECT().GetValidatorCount(
ctx,
gomock.Any(),
gomock.Any(),
).Return(
nil,
iface.ErrNotSupported,
).MinTimes(1)
// Make sure that our values are equal to what would be returned by calling getDutiesForEpoch individually
validatorClient := &beaconApiValidatorClient{
dutiesProvider: dutiesProvider,
stateValidatorsProvider: stateValidatorsProvider,
prysmBeaconChainCLient: prysmBeaconChainClient,
}
expectedCurrentEpochDuties, err := validatorClient.getDutiesForEpoch(
ctx,
testCase.epoch,
multipleValidatorStatus,
vals,
fetchSyncDuties,
)
require.NoError(t, err)
@@ -1270,7 +1204,7 @@ func TestGetDuties_Valid(t *testing.T) {
expectedNextEpochDuties, err := validatorClient.getDutiesForEpoch(
ctx,
testCase.epoch+1,
multipleValidatorStatus,
vals,
fetchSyncDuties,
)
require.NoError(t, err)
@@ -1291,7 +1225,7 @@ func TestGetDuties_Valid(t *testing.T) {
}
}
func TestGetDuties_GetValidatorStatusFailed(t *testing.T) {
func TestGetDuties_GetStateValidatorsFailed(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
@@ -1316,7 +1250,7 @@ func TestGetDuties_GetValidatorStatusFailed(t *testing.T) {
Epoch: 1,
PublicKeys: [][]byte{},
})
assert.ErrorContains(t, "failed to get validator status", err)
assert.ErrorContains(t, "failed to get state validators", err)
assert.ErrorContains(t, "foo error", err)
}
@@ -1325,6 +1259,7 @@ func TestGetDuties_GetDutiesForEpochFailed(t *testing.T) {
defer ctrl.Finish()
ctx := context.Background()
pubkey := []byte{1, 2, 3}
stateValidatorsProvider := mock.NewMockStateValidatorsProvider(ctrl)
stateValidatorsProvider.EXPECT().GetStateValidators(
@@ -1334,7 +1269,13 @@ func TestGetDuties_GetDutiesForEpochFailed(t *testing.T) {
gomock.Any(),
).Return(
&structs.GetValidatorsResponse{
Data: []*structs.ValidatorContainer{},
Data: []*structs.ValidatorContainer{{
Index: "0",
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkey),
},
}},
},
nil,
).Times(1)
@@ -1348,26 +1289,28 @@ func TestGetDuties_GetDutiesForEpochFailed(t *testing.T) {
nil,
errors.New("foo error"),
).Times(1)
prysmBeaconChainClient := validatormock.NewMockPrysmBeaconChainClient(ctrl)
prysmBeaconChainClient.EXPECT().GetValidatorCount(
dutiesProvider.EXPECT().GetAttesterDuties(
ctx,
primitives.Epoch(2),
gomock.Any(),
).Times(1)
dutiesProvider.EXPECT().GetProposerDuties(
ctx,
gomock.Any(),
).Times(2)
dutiesProvider.EXPECT().GetCommittees(
ctx,
gomock.Any(),
).Return(
nil,
iface.ErrNotSupported,
).MinTimes(1)
).Times(2)
validatorClient := &beaconApiValidatorClient{
stateValidatorsProvider: stateValidatorsProvider,
dutiesProvider: dutiesProvider,
prysmBeaconChainCLient: prysmBeaconChainClient,
}
_, err := validatorClient.getDuties(ctx, &ethpb.DutiesRequest{
Epoch: 1,
PublicKeys: [][]byte{},
PublicKeys: [][]byte{pubkey},
})
assert.ErrorContains(t, "failed to get duties for current epoch `1`", err)
assert.ErrorContains(t, "foo error", err)

View File

@@ -11,10 +11,14 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func (c *beaconApiValidatorClient) submitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest) (*ethpb.AggregateSelectionResponse, error) {
func (c *beaconApiValidatorClient) submitAggregateSelectionProof(
ctx context.Context,
in *ethpb.AggregateSelectionRequest,
index primitives.ValidatorIndex,
committeeLength uint64,
) (*ethpb.AggregateSelectionResponse, error) {
isOptimistic, err := c.isOptimistic(ctx)
if err != nil {
return nil, err
@@ -25,29 +29,7 @@ func (c *beaconApiValidatorClient) submitAggregateSelectionProof(ctx context.Con
return nil, errors.New("the node is currently optimistic and cannot serve validators")
}
validatorIndexResponse, err := c.validatorIndex(ctx, &ethpb.ValidatorIndexRequest{PublicKey: in.PublicKey})
if err != nil {
return nil, errors.Wrap(err, "failed to get validator index")
}
attesterDuties, err := c.dutiesProvider.GetAttesterDuties(ctx, slots.ToEpoch(in.Slot), []primitives.ValidatorIndex{validatorIndexResponse.Index})
if err != nil {
return nil, errors.Wrap(err, "failed to get attester duties")
}
if len(attesterDuties) == 0 {
return nil, errors.Errorf("no attester duty for the given slot %d", in.Slot)
}
// First attester duty is required since we requested attester duties for one validator index.
attesterDuty := attesterDuties[0]
committeeLen, err := strconv.ParseUint(attesterDuty.CommitteeLength, 10, 64)
if err != nil {
return nil, errors.Wrap(err, "failed to parse committee length")
}
isAggregator, err := helpers.IsAggregator(committeeLen, in.SlotSignature)
isAggregator, err := helpers.IsAggregator(committeeLength, in.SlotSignature)
if err != nil {
return nil, errors.Wrap(err, "failed to get aggregator status")
}
@@ -77,7 +59,7 @@ func (c *beaconApiValidatorClient) submitAggregateSelectionProof(ctx context.Con
return &ethpb.AggregateSelectionResponse{
AggregateAndProof: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: validatorIndexResponse.Index,
AggregatorIndex: index,
Aggregate: aggregatedAttestation,
SelectionProof: in.SlotSignature,
},

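Reviewer note: with the duty lookup removed, helpers.IsAggregator carries the whole selection decision from the caller-supplied committee length. Per the consensus spec the rule is roughly the sketch below (TARGET_AGGREGATORS_PER_COMMITTEE = 16); this is the spec rule, not Prysm's exact implementation:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

const targetAggregatorsPerCommittee = 16 // spec constant

// isAggregator sketches the spec rule: take the first 8 bytes of
// hash(slot_signature) as a little-endian integer and test it against
// max(1, committee_length / TARGET_AGGREGATORS_PER_COMMITTEE).
func isAggregator(committeeLength uint64, slotSig []byte) bool {
	modulo := committeeLength / targetAggregatorsPerCommittee
	if modulo < 1 {
		modulo = 1
	}
	h := sha256.Sum256(slotSig)
	return binary.LittleEndian.Uint64(h[:8])%modulo == 0
}

func main() {
	sig := make([]byte, 96) // placeholder BLS signature bytes
	fmt.Println(isAggregator(128, sig))
}
```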
View File

@@ -1,12 +1,9 @@
package beacon_api
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/url"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -15,7 +12,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/prysmaticlabs/prysm/v5/validator/client/beacon-api/mock"
test_helpers "github.com/prysmaticlabs/prysm/v5/validator/client/beacon-api/test-helpers"
"go.uber.org/mock/gomock"
@@ -25,26 +21,15 @@ func TestSubmitAggregateSelectionProof(t *testing.T) {
const (
pubkeyStr = "0x8000091c2ae64ee414a54c1cc1fc67dec663408bc636cb86756e0200e41a75c8f86603f104f02c856983d2783116be13"
syncingEndpoint = "/eth/v1/node/syncing"
attesterDutiesEndpoint = "/eth/v1/validator/duties/attester"
validatorsEndpoint = "/eth/v1/beacon/states/head/validators"
attestationDataEndpoint = "/eth/v1/validator/attestation_data"
aggregateAttestationEndpoint = "/eth/v1/validator/aggregate_attestation"
validatorIndex = "55293"
validatorIndex = primitives.ValidatorIndex(55293)
slotSignature = "0x8776a37d6802c4797d113169c5fcfda50e68a32058eb6356a6f00d06d7da64c841a00c7c38b9b94a204751eca53707bd03523ce4797827d9bacff116a6e776a20bbccff4b683bf5201b610797ed0502557a58a65c8395f8a1649b976c3112d15"
slot = primitives.Slot(123)
committeeIndex = primitives.CommitteeIndex(1)
committeesAtSlot = uint64(1)
)
attesterDuties := []*structs.AttesterDuty{
{
Pubkey: pubkeyStr,
ValidatorIndex: validatorIndex,
Slot: "123",
CommitteeIndex: "1",
CommitteeLength: "3",
},
}
attestationDataResponse := generateValidAttestation(uint64(slot), uint64(committeeIndex))
attestationDataProto, err := attestationDataResponse.Data.ToConsensus()
require.NoError(t, err)
@@ -64,22 +49,15 @@ func TestSubmitAggregateSelectionProof(t *testing.T) {
name string
isOptimistic bool
syncingErr error
validatorsErr error
dutiesErr error
attestationDataErr error
aggregateAttestationErr error
duties []*structs.AttesterDuty
validatorsCalled int
attesterDutiesCalled int
attestationDataCalled int
aggregateAttestationCalled int
expectedErrorMsg string
committeesAtSlot uint64
}{
{
name: "success",
duties: attesterDuties,
validatorsCalled: 1,
attesterDutiesCalled: 1,
attestationDataCalled: 1,
aggregateAttestationCalled: 1,
},
@@ -93,60 +71,23 @@ func TestSubmitAggregateSelectionProof(t *testing.T) {
syncingErr: errors.New("bad request"),
expectedErrorMsg: "failed to get syncing status",
},
{
name: "validator index error",
validatorsCalled: 1,
validatorsErr: errors.New("bad request"),
expectedErrorMsg: "failed to get validator index",
},
{
name: "attester duties error",
duties: attesterDuties,
validatorsCalled: 1,
attesterDutiesCalled: 1,
dutiesErr: errors.New("bad request"),
expectedErrorMsg: "failed to get attester duties",
},
{
name: "attestation data error",
duties: attesterDuties,
validatorsCalled: 1,
attesterDutiesCalled: 1,
attestationDataCalled: 1,
attestationDataErr: errors.New("bad request"),
expectedErrorMsg: fmt.Sprintf("failed to get attestation data for slot=%d and committee_index=%d", slot, committeeIndex),
},
{
name: "aggregate attestation error",
duties: attesterDuties,
validatorsCalled: 1,
attesterDutiesCalled: 1,
attestationDataCalled: 1,
aggregateAttestationCalled: 1,
aggregateAttestationErr: errors.New("bad request"),
expectedErrorMsg: "bad request",
},
{
name: "validator is not an aggregator",
duties: []*structs.AttesterDuty{
{
Pubkey: pubkeyStr,
ValidatorIndex: validatorIndex,
Slot: "123",
CommitteeIndex: "1",
CommitteeLength: "64",
},
},
validatorsCalled: 1,
attesterDutiesCalled: 1,
expectedErrorMsg: "validator is not an aggregator",
},
{
name: "no attester duties",
duties: []*structs.AttesterDuty{},
validatorsCalled: 1,
attesterDutiesCalled: 1,
expectedErrorMsg: fmt.Sprintf("no attester duty for the given slot %d", slot),
name: "validator is not an aggregator",
committeesAtSlot: 64,
expectedErrorMsg: "validator is not an aggregator",
},
}
@@ -171,76 +112,6 @@ func TestSubmitAggregateSelectionProof(t *testing.T) {
test.syncingErr,
).Times(1)
valsReq := &structs.GetValidatorsRequest{
Ids: []string{stringPubKey},
Statuses: []string{},
}
valReqBytes, err := json.Marshal(valsReq)
require.NoError(t, err)
// Call validators endpoint to get validator index.
jsonRestHandler.EXPECT().Post(
ctx,
validatorsEndpoint,
nil,
bytes.NewBuffer(valReqBytes),
&structs.GetValidatorsResponse{},
).SetArg(
4,
structs.GetValidatorsResponse{
Data: []*structs.ValidatorContainer{
{
Index: validatorIndex,
Status: "active_ongoing",
Validator: &structs.Validator{
Pubkey: pubkeyStr,
},
},
},
},
).Return(
test.validatorsErr,
).Times(test.validatorsCalled)
if test.validatorsErr != nil {
// Then try the GET call which will also return error.
queryParams := url.Values{}
for _, id := range valsReq.Ids {
queryParams.Add("id", id)
}
for _, st := range valsReq.Statuses {
queryParams.Add("status", st)
}
query := buildURL("/eth/v1/beacon/states/head/validators", queryParams)
jsonRestHandler.EXPECT().Get(
ctx,
query,
&structs.GetValidatorsResponse{},
).Return(
test.validatorsErr,
).Times(1)
}
// Call attester duties endpoint to get attester duties.
validatorIndicesBytes, err := json.Marshal([]string{validatorIndex})
require.NoError(t, err)
jsonRestHandler.EXPECT().Post(
ctx,
fmt.Sprintf("%s/%d", attesterDutiesEndpoint, slots.ToEpoch(slot)),
nil,
bytes.NewBuffer(validatorIndicesBytes),
&structs.GetAttesterDutiesResponse{},
).SetArg(
4,
structs.GetAttesterDutiesResponse{
Data: test.duties,
},
).Return(
test.dutiesErr,
).Times(test.attesterDutiesCalled)
// Call attestation data to get attestation data root to query aggregate attestation.
jsonRestHandler.EXPECT().Get(
ctx,
@@ -290,12 +161,17 @@ func TestSubmitAggregateSelectionProof(t *testing.T) {
jsonRestHandler: jsonRestHandler,
},
}
committees := committeesAtSlot
if test.committeesAtSlot != 0 {
committees = test.committeesAtSlot
}
actualResponse, err := validatorClient.submitAggregateSelectionProof(ctx, &ethpb.AggregateSelectionRequest{
Slot: slot,
CommitteeIndex: committeeIndex,
PublicKey: pubkey,
SlotSignature: slotSignatureBytes,
})
}, validatorIndex, committees)
if test.expectedErrorMsg == "" {
require.NoError(t, err)
assert.DeepEqual(t, expectedResponse, actualResponse)

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/api/client"
eventClient "github.com/prysmaticlabs/prysm/v5/api/client/event"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/validator/client/iface"
log "github.com/sirupsen/logrus"
@@ -82,7 +83,7 @@ func (c *grpcValidatorClient) StreamBlocksAltair(ctx context.Context, in *ethpb.
return c.beaconNodeValidatorClient.StreamBlocksAltair(ctx, in)
}
func (c *grpcValidatorClient) SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest) (*ethpb.AggregateSelectionResponse, error) {
func (c *grpcValidatorClient) SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest, _ primitives.ValidatorIndex, _ uint64) (*ethpb.AggregateSelectionResponse, error) {
return c.beaconNodeValidatorClient.SubmitAggregateSelectionProof(ctx, in)
}

View File

@@ -60,10 +60,13 @@ type Validator interface {
PushProposerSettings(ctx context.Context, km keymanager.IKeymanager, slot primitives.Slot, deadline time.Time) error
SignValidatorRegistrationRequest(ctx context.Context, signer SigningFunc, newValidatorRegistration *ethpb.ValidatorRegistrationV1) (*ethpb.SignedValidatorRegistrationV1, error)
StartEventStream(ctx context.Context, topics []string, eventsChan chan<- *event.Event)
EventStreamIsRunning() bool
ProcessEvent(event *event.Event)
ProposerSettings() *proposer.Settings
SetProposerSettings(context.Context, *proposer.Settings) error
EventStreamIsRunning() bool
GetGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) ([]byte, error)
SetGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, graffiti []byte) error
DeleteGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) error
HealthTracker() *beacon.NodeHealthTracker
}

View File

@@ -135,7 +135,7 @@ type ValidatorClient interface {
GetFeeRecipientByPubKey(ctx context.Context, in *ethpb.FeeRecipientByPubKeyRequest) (*ethpb.FeeRecipientByPubKeyResponse, error)
GetAttestationData(ctx context.Context, in *ethpb.AttestationDataRequest) (*ethpb.AttestationData, error)
ProposeAttestation(ctx context.Context, in *ethpb.Attestation) (*ethpb.AttestResponse, error)
SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest) (*ethpb.AggregateSelectionResponse, error)
SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest, index primitives.ValidatorIndex, committeeLength uint64) (*ethpb.AggregateSelectionResponse, error)
SubmitSignedAggregateSelectionProof(ctx context.Context, in *ethpb.SignedAggregateSubmitRequest) (*ethpb.SignedAggregateSubmitResponse, error)
ProposeExit(ctx context.Context, in *ethpb.SignedVoluntaryExit) (*ethpb.ProposeExitResponse, error)
SubscribeCommitteeSubnets(ctx context.Context, in *ethpb.CommitteeSubnetsSubscribeRequest, duties []*ethpb.DutiesResponse_Duty) (*empty.Empty, error)

View File

@@ -6,12 +6,14 @@ import (
"fmt"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/config/proposer"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -67,7 +69,7 @@ func (v *validator) ProposeBlock(ctx context.Context, slot primitives.Slot, pubK
return
}
g, err := v.getGraffiti(ctx, pubKey)
g, err := v.GetGraffiti(ctx, pubKey)
if err != nil {
// Graffiti is not a critical enough to fail block production and cause
// validator to miss block reward. When failed, validator should continue
@@ -385,9 +387,25 @@ func signVoluntaryExit(
return sig.Marshal(), nil
}
// Gets the graffiti from cli or file for the validator public key.
func (v *validator) getGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) ([]byte, error) {
// When specified, default graffiti from the command line takes the first priority.
// GetGraffiti gets the graffiti for the validator public key from proposer settings, the CLI flag, or the graffiti file.
func (v *validator) GetGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) ([]byte, error) {
if v.proposerSettings != nil {
// Check proposer settings for specific key first
if v.proposerSettings.ProposeConfig != nil {
option, ok := v.proposerSettings.ProposeConfig[pubKey]
if ok && option.GraffitiConfig != nil {
return []byte(option.GraffitiConfig.Graffiti), nil
}
}
// Check proposer settings for default settings second
if v.proposerSettings.DefaultConfig != nil {
if v.proposerSettings.DefaultConfig.GraffitiConfig != nil {
return []byte(v.proposerSettings.DefaultConfig.GraffitiConfig.Graffiti), nil
}
}
}
// When specified, use default graffiti from the command line.
if len(v.graffiti) != 0 {
return bytesutil.PadTo(v.graffiti, 32), nil
}
@@ -396,7 +414,7 @@ func (v *validator) getGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubk
return nil, errors.New("graffitiStruct can't be nil")
}
// When specified, individual validator specified graffiti takes the second priority.
// When specified, graffiti specified for an individual validator takes third priority.
idx, err := v.validatorClient.ValidatorIndex(ctx, &ethpb.ValidatorIndexRequest{PublicKey: pubKey[:]})
if err != nil {
return nil, err
@@ -406,7 +424,7 @@ func (v *validator) getGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubk
return bytesutil.PadTo([]byte(g), 32), nil
}
// When specified, a graffiti from the ordered list in the file take third priority.
// When specified, a graffiti from the ordered list in the file takes fourth priority.
if v.graffitiOrderedIndex < uint64(len(v.graffitiStruct.Ordered)) {
graffiti := v.graffitiStruct.Ordered[v.graffitiOrderedIndex]
v.graffitiOrderedIndex = v.graffitiOrderedIndex + 1
@@ -417,7 +435,7 @@ func (v *validator) getGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubk
return bytesutil.PadTo([]byte(graffiti), 32), nil
}
// When specified, a graffiti from the random list in the file take fourth priority.
// When specified, a graffiti from the random list in the file takes fifth priority.
if len(v.graffitiStruct.Random) != 0 {
r := rand.NewGenerator()
r.Seed(time.Now().Unix())
@@ -433,6 +451,44 @@ func (v *validator) getGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubk
return []byte{}, nil
}
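Reviewer note: GetGraffiti now resolves through a cascade: per-pubkey proposer settings, default proposer settings, the CLI flag, and then the file-based sources. A condensed sketch of the precedence, with stand-in string sources instead of the validator's real fields:

```go
package main

import "fmt"

// pickGraffiti condenses the precedence implemented above; the file-based
// sources (per-validator, ordered, random) are collapsed into one stand-in.
func pickGraffiti(perKey, defaultCfg, cliFlag, fileEntry string) string {
	switch {
	case perKey != "": // 1. proposer settings: per-pubkey GraffitiConfig
		return perKey
	case defaultCfg != "": // 2. proposer settings: DefaultConfig
		return defaultCfg
	case cliFlag != "": // 3. --graffiti CLI flag
		return cliFlag
	default: // 4+. graffiti file: per-validator, then ordered, then random
		return fileEntry
	}
}

func main() {
	fmt.Println(pickGraffiti("", "default graffiti", "cli graffiti", "file graffiti"))
}
```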
func (v *validator) SetGraffiti(ctx context.Context, pubkey [fieldparams.BLSPubkeyLength]byte, graffiti []byte) error {
if graffiti == nil {
return nil
}
settings := &proposer.Settings{}
if v.proposerSettings != nil {
settings = v.proposerSettings.Clone()
}
if settings.ProposeConfig == nil {
settings.ProposeConfig = map[[48]byte]*proposer.Option{pubkey: {GraffitiConfig: &proposer.GraffitiConfig{Graffiti: string(graffiti)}}}
return v.SetProposerSettings(ctx, settings)
}
option, ok := settings.ProposeConfig[pubkey]
if !ok || option == nil {
settings.ProposeConfig[pubkey] = &proposer.Option{GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: string(graffiti),
}}
} else {
option.GraffitiConfig = &proposer.GraffitiConfig{
Graffiti: string(graffiti),
}
}
return v.SetProposerSettings(ctx, settings) // save the proposer settings
}
func (v *validator) DeleteGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) error {
if v.proposerSettings == nil || v.proposerSettings.ProposeConfig == nil {
return errors.New("attempted to delete graffiti without proposer settings, graffiti will default to flag options")
}
ps := v.proposerSettings.Clone()
option, ok := ps.ProposeConfig[pubKey]
if !ok || option == nil {
return fmt.Errorf("graffiti not found in proposer settings for pubkey:%s", hexutil.Encode(pubKey[:]))
}
option.GraffitiConfig = nil
return v.SetProposerSettings(ctx, ps) // save the proposer settings
}
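Reviewer note: both SetGraffiti and DeleteGraffiti clone the settings before mutating and then persist via SetProposerSettings, so a failed save never leaves the in-memory settings half-updated. A sketch of that clone-mutate-persist discipline on a toy settings type:

```go
package main

import "fmt"

// settings is a toy stand-in for proposer.Settings.
type settings struct{ byKey map[string]string }

// clone makes a deep copy so mutations never touch the live settings.
func (s *settings) clone() *settings {
	cp := &settings{byKey: make(map[string]string, len(s.byKey))}
	for k, v := range s.byKey {
		cp.byKey[k] = v
	}
	return cp
}

func main() {
	live := &settings{byKey: map[string]string{"pk": "old graffiti"}}
	next := live.clone()
	next.byKey["pk"] = "new graffiti"
	// Persist next here; only swap it into place once the save succeeds.
	fmt.Println(live.byKey["pk"], next.byKey["pk"])
}
```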
func blockLogFields(pubKey [fieldparams.BLSPubkeyLength]byte, blk interfaces.ReadOnlyBeaconBlock, sig []byte) logrus.Fields {
fields := logrus.Fields{
"proposerPublicKey": fmt.Sprintf("%#x", pubKey),

View File

@@ -8,10 +8,12 @@ import (
"strings"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
lruwrpr "github.com/prysmaticlabs/prysm/v5/cache/lru"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/config/proposer"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
blocktest "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks/testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -955,6 +957,13 @@ func TestGetGraffiti_Ok(t *testing.T) {
validatorClient: validatormock.NewMockValidatorClient(ctrl),
}
pubKey := [fieldparams.BLSPubkeyLength]byte{'a'}
config := make(map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option)
config[pubKey] = &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
}
tests := []struct {
name string
v *validator
@@ -1014,16 +1023,52 @@ func TestGetGraffiti_Ok(t *testing.T) {
},
want: []byte{},
},
{name: "graffiti from proposer settings for specific pubkey",
v: &validator{
validatorClient: m.validatorClient,
proposerSettings: &proposer.Settings{
ProposeConfig: config,
},
},
want: []byte("specific graffiti"),
},
{name: "graffiti from proposer settings default config",
v: &validator{
validatorClient: m.validatorClient,
proposerSettings: &proposer.Settings{
DefaultConfig: &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "default graffiti",
},
},
},
},
want: []byte("default graffiti"),
},
{name: "graffiti from proposer settings , specific pubkey overrides default config",
v: &validator{
validatorClient: m.validatorClient,
proposerSettings: &proposer.Settings{
ProposeConfig: config,
DefaultConfig: &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "default graffiti",
},
},
},
},
want: []byte("specific graffiti"),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if !strings.Contains(tt.name, "use default cli graffiti") {
if !strings.Contains(tt.name, "use default cli graffiti") && tt.v.proposerSettings == nil {
m.validatorClient.EXPECT().
ValidatorIndex(gomock.Any(), &ethpb.ValidatorIndexRequest{PublicKey: pubKey[:]}).
Return(&ethpb.ValidatorIndexResponse{Index: 2}, nil)
}
got, err := tt.v.getGraffiti(context.Background(), pubKey)
got, err := tt.v.GetGraffiti(context.Background(), pubKey)
require.NoError(t, err)
require.DeepEqual(t, tt.want, got)
})
@@ -1053,10 +1098,165 @@ func TestGetGraffitiOrdered_Ok(t *testing.T) {
},
}
for _, want := range [][]byte{bytesutil.PadTo([]byte{'a'}, 32), bytesutil.PadTo([]byte{'b'}, 32), bytesutil.PadTo([]byte{'c'}, 32), bytesutil.PadTo([]byte{'d'}, 32), bytesutil.PadTo([]byte{'d'}, 32)} {
got, err := v.getGraffiti(context.Background(), pubKey)
got, err := v.GetGraffiti(context.Background(), pubKey)
require.NoError(t, err)
require.DeepEqual(t, want, got)
}
})
}
}
func Test_validator_DeleteGraffiti(t *testing.T) {
pubKey := [fieldparams.BLSPubkeyLength]byte{'a'}
tests := []struct {
name string
proposerSettings *proposer.Settings
wantErr string
}{
{
name: "delete existing graffiti ok",
proposerSettings: &proposer.Settings{
ProposeConfig: func() map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option {
config := make(map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option)
config[pubKey] = &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
}
return config
}(),
},
},
{
name: "delete with proposer settings but only default configs",
proposerSettings: &proposer.Settings{
DefaultConfig: &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "default graffiti",
},
},
},
wantErr: "attempted to delete graffiti without proposer settings, graffiti will default to flag options",
},
{
name: "delete with proposer settings but without the specific public key setting",
proposerSettings: &proposer.Settings{
ProposeConfig: func() map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option {
config := make(map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option)
pk := make([]byte, fieldparams.BLSPubkeyLength)
config[bytesutil.ToBytes48(pk)] = &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
}
return config
}(),
},
wantErr: fmt.Sprintf("graffiti not found in proposer settings for pubkey:%s", hexutil.Encode(pubKey[:])),
},
{
name: "delete without proposer settings",
wantErr: "attempted to delete graffiti without proposer settings, graffiti will default to flag options",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
v := &validator{
db: testing2.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{pubKey}, false),
proposerSettings: tt.proposerSettings,
}
err := v.DeleteGraffiti(context.Background(), pubKey)
if tt.wantErr != "" {
require.ErrorContains(t, tt.wantErr, err)
} else {
require.NoError(t, err)
require.Equal(t, v.proposerSettings.ProposeConfig[pubKey].GraffitiConfig == nil, true)
}
})
}
}
func Test_validator_SetGraffiti(t *testing.T) {
pubKey := [fieldparams.BLSPubkeyLength]byte{'a'}
tests := []struct {
name string
graffiti string
proposerSettings *proposer.Settings
wantProposerSettings *proposer.Settings
wantErr string
}{
{
name: "setting existing graffiti ok",
graffiti: "new graffiti",
proposerSettings: &proposer.Settings{
ProposeConfig: func() map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option {
config := make(map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option)
config[pubKey] = &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
}
return config
}(),
},
},
{
name: "set with proposer settings but only default configs",
proposerSettings: &proposer.Settings{
DefaultConfig: &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "default graffiti",
},
},
},
},
{
name: "set with proposer settings but without the specific public key setting",
proposerSettings: &proposer.Settings{
ProposeConfig: func() map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option {
config := make(map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option)
pk := make([]byte, fieldparams.BLSPubkeyLength)
config[bytesutil.ToBytes48(pk)] = &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
}
return config
}(),
},
},
{
name: "set without proposer settings",
graffiti: "specific graffiti",
wantProposerSettings: func() *proposer.Settings {
config := make(map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option)
config[pubKey] = &proposer.Option{
GraffitiConfig: &proposer.GraffitiConfig{
Graffiti: "specific graffiti",
},
}
return &proposer.Settings{ProposeConfig: config}
}(),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
v := &validator{
db: testing2.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{pubKey}, false),
proposerSettings: tt.proposerSettings,
}
err := v.SetGraffiti(context.Background(), pubKey, []byte(tt.graffiti))
if tt.wantErr != "" {
require.ErrorContains(t, tt.wantErr, err)
} else {
require.NoError(t, err)
if tt.wantProposerSettings != nil {
require.DeepEqual(t, tt.wantProposerSettings, v.proposerSettings)
} else {
require.Equal(t, v.proposerSettings.ProposeConfig[pubKey].GraffitiConfig.Graffiti, tt.graffiti)
}
}
})
}
}

View File

@@ -358,3 +358,24 @@ func (v *ValidatorService) GenesisInfo(ctx context.Context) (*ethpb.Genesis, err
nc := ethpb.NewNodeClient(v.conn.GetGrpcClientConn())
return nc.GetGenesis(ctx, &emptypb.Empty{})
}
func (v *ValidatorService) GetGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) ([]byte, error) {
if v.validator == nil {
return nil, errors.New("validator is unavailable")
}
return v.validator.GetGraffiti(ctx, pubKey)
}
func (v *ValidatorService) SetGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, graffiti []byte) error {
if v.validator == nil {
return errors.New("validator is unavailable")
}
return v.validator.SetGraffiti(ctx, pubKey, graffiti)
}
func (v *ValidatorService) DeleteGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) error {
if v.validator == nil {
return errors.New("validator is unavailable")
}
return v.validator.DeleteGraffiti(ctx, pubKey)
}

View File

@@ -58,6 +58,7 @@ type FakeValidator struct {
proposerSettings *proposer.Settings
ProposerSettingWait time.Duration
Km keymanager.IKeymanager
graffiti string
Tracker *beacon.NodeHealthTracker
}
@@ -282,7 +283,25 @@ func (fv *FakeValidator) SetProposerSettings(_ context.Context, settings *propos
return nil
}
// GetGraffiti for mocking
func (f *FakeValidator) GetGraffiti(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte) ([]byte, error) {
return []byte(f.graffiti), nil
}
// SetGraffiti for mocking
func (f *FakeValidator) SetGraffiti(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte, graffiti []byte) error {
f.graffiti = string(graffiti)
return nil
}
// DeleteGraffiti for mocking
func (f *FakeValidator) DeleteGraffiti(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte) error {
f.graffiti = ""
return nil
}
func (*FakeValidator) StartEventStream(_ context.Context, _ []string, _ chan<- *event.Event) {
}
func (*FakeValidator) ProcessEvent(_ *event.Event) {}

View File

@@ -660,6 +660,8 @@ func (c *ValidatorClient) registerRPCService(router *mux.Router) error {
ClientGrpcRetryDelay: grpcRetryDelay,
ClientGrpcHeaders: strings.Split(grpcHeaders, ","),
ClientWithCert: clientCert,
BeaconApiTimeout: time.Second * 30,
BeaconApiEndpoint: c.cliCtx.String(flags.BeaconRESTApiProviderFlag.Name),
Router: router,
})
return c.services.RegisterService(server)

View File

@@ -158,24 +158,29 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
}
pageToken := r.URL.Query().Get("page_token")
publicKeys := r.URL.Query()["public_keys"]
pubkeys := make([][]byte, len(publicKeys))
pubkeys := make([][]byte, 0)
for i, key := range publicKeys {
var pk []byte
if key == "" {
continue
}
if strings.HasPrefix(key, "0x") {
k, ok := shared.ValidateHex(w, fmt.Sprintf("PublicKeys[%d]", i), key, fieldparams.BLSPubkeyLength)
if !ok {
return
}
pk = bytesutil.SafeCopyBytes(k)
pubkeys = append(pubkeys, bytesutil.SafeCopyBytes(k))
} else {
data, err := base64.StdEncoding.DecodeString(key)
if err != nil {
httputil.HandleError(w, errors.Wrap(err, "Failed to decode base64").Error(), http.StatusBadRequest)
return
}
pk = bytesutil.SafeCopyBytes(data)
pubkeys = append(pubkeys, bytesutil.SafeCopyBytes(data))
}
pubkeys[i] = pk
}
if len(pubkeys) == 0 {
httputil.HandleError(w, "no pubkeys provided", http.StatusBadRequest)
return
}
req := &ethpb.ListValidatorsRequest{
PublicKeys: pubkeys,

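Reviewer note: the handler now accepts both 0x-prefixed hex and base64 pubkeys and skips empty entries instead of indexing them, which is what previously produced nil entries in the request. A self-contained sketch of the decoding rule, with hex.DecodeString standing in for shared.ValidateHex:

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeKeys sketches the rule above: 0x-prefixed entries are hex, anything
// else is base64 (the gRPC-gateway encoding), and empty entries produced by
// repeated query parameters are skipped instead of indexed.
func decodeKeys(keys []string) ([][]byte, error) {
	out := make([][]byte, 0, len(keys))
	for _, k := range keys {
		if k == "" {
			continue
		}
		var (
			b   []byte
			err error
		)
		if strings.HasPrefix(k, "0x") {
			b, err = hex.DecodeString(strings.TrimPrefix(k, "0x"))
		} else {
			b, err = base64.StdEncoding.DecodeString(k)
		}
		if err != nil {
			return nil, fmt.Errorf("decode %q: %w", k, err)
		}
		out = append(out, b)
	}
	return out, nil
}

func main() {
	keys, err := decodeKeys([]string{"0xaabb", "", "qrs="})
	fmt.Println(keys, err)
}
```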
View File

@@ -9,6 +9,7 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
@@ -102,3 +103,154 @@ func TestGetBeaconStatus_OK(t *testing.T) {
}
assert.DeepEqual(t, want, resp)
}
func TestServer_GetValidators(t *testing.T) {
tests := []struct {
name string
query string
expectedReq *ethpb.ListValidatorsRequest
chainResp *ethpb.Validators
want *ValidatorsResponse
wantCode int
wantErr string
}{
{
name: "happy path on page_size, page_token, public_keys",
wantCode: http.StatusOK,
query: "page_size=4&page_token=0&public_keys=0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4",
expectedReq: func() *ethpb.ListValidatorsRequest {
b, err := hexutil.Decode("0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4")
require.NoError(t, err)
pubkeys := [][]byte{b}
return &ethpb.ListValidatorsRequest{
PublicKeys: pubkeys,
PageSize: int32(4),
PageToken: "0",
}
}(),
chainResp: func() *ethpb.Validators {
b, err := hexutil.Decode("0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4")
require.NoError(t, err)
return &ethpb.Validators{
Epoch: 0,
ValidatorList: []*ethpb.Validators_ValidatorContainer{
{
Index: 0,
Validator: &ethpb.Validator{
PublicKey: b,
},
},
},
NextPageToken: "0",
TotalSize: 0,
}
}(),
want: &ValidatorsResponse{
Epoch: 0,
ValidatorList: []*ValidatorContainer{
{
Index: 0,
Validator: &Validator{
PublicKey: "0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4",
WithdrawalCredentials: "0x",
EffectiveBalance: 0,
Slashed: false,
ActivationEligibilityEpoch: 0,
ActivationEpoch: 0,
ExitEpoch: 0,
WithdrawableEpoch: 0,
},
},
},
NextPageToken: "0",
TotalSize: 0,
},
},
{
name: "extra public key that's empty still returns correct response",
wantCode: http.StatusOK,
query: "page_size=4&page_token=0&public_keys=0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4&public_keys=",
expectedReq: func() *ethpb.ListValidatorsRequest {
b, err := hexutil.Decode("0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4")
require.NoError(t, err)
pubkeys := [][]byte{b}
return &ethpb.ListValidatorsRequest{
PublicKeys: pubkeys,
PageSize: int32(4),
PageToken: "0",
}
}(),
chainResp: func() *ethpb.Validators {
b, err := hexutil.Decode("0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4")
require.NoError(t, err)
return &ethpb.Validators{
Epoch: 0,
ValidatorList: []*ethpb.Validators_ValidatorContainer{
{
Index: 0,
Validator: &ethpb.Validator{
PublicKey: b,
},
},
},
NextPageToken: "0",
TotalSize: 0,
}
}(),
want: &ValidatorsResponse{
Epoch: 0,
ValidatorList: []*ValidatorContainer{
{
Index: 0,
Validator: &Validator{
PublicKey: "0x855ae9c6184d6edd46351b375f16f541b2d33b0ed0da9be4571b13938588aee840ba606a946f0e8023ae3a4b2a43b4d4",
WithdrawalCredentials: "0x",
EffectiveBalance: 0,
Slashed: false,
ActivationEligibilityEpoch: 0,
ActivationEpoch: 0,
ExitEpoch: 0,
WithdrawableEpoch: 0,
},
},
},
NextPageToken: "0",
TotalSize: 0,
},
},
{
name: "no public keys passed results in error",
wantCode: http.StatusBadRequest,
query: "page_size=4&page_token=0&public_keys=",
wantErr: "no pubkeys provided",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
beaconChainClient := validatormock.NewMockBeaconChainClient(ctrl)
if tt.wantErr == "" {
beaconChainClient.EXPECT().ListValidators(
gomock.Any(), // ctx
tt.expectedReq,
).Return(tt.chainResp, nil)
}
s := &Server{
beaconChainClient: beaconChainClient,
}
req := httptest.NewRequest(http.MethodGet, fmt.Sprintf("/v2/validator/beacon/validators?%s", tt.query), http.NoBody)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.GetValidators(wr, req)
require.Equal(t, tt.wantCode, wr.Code)
if tt.wantErr != "" {
require.StringContains(t, tt.wantErr, wr.Body.String())
} else {
resp := &ValidatorsResponse{}
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
require.DeepEqual(t, tt.want, resp)
}
})
}
}

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"io"
"net/http"
"strings"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -839,3 +840,86 @@ func (s *Server) DeleteGasLimit(w http.ResponseWriter, r *http.Request) {
// we respond "not found".
httputil.HandleError(w, fmt.Sprintf("No gas limit found for pubkey %q", rawPubkey), http.StatusNotFound)
}
func (s *Server) GetGraffiti(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.keymanagerAPI.GetGraffiti")
defer span.End()
if s.validatorService == nil {
httputil.HandleError(w, "Validator service not ready.", http.StatusServiceUnavailable)
return
}
rawPubkey, pubkey, ok := shared.HexFromRoute(w, r, "pubkey", fieldparams.BLSPubkeyLength)
if !ok {
return
}
graffiti, err := s.validatorService.GetGraffiti(ctx, bytesutil.ToBytes48(pubkey))
if err != nil {
if strings.Contains(err.Error(), "unavailable") {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.HandleError(w, err.Error(), http.StatusNotFound)
return
}
httputil.WriteJson(w, &GetGraffitiResponse{
Data: &GraffitiData{
Pubkey: rawPubkey,
Graffiti: string(graffiti),
},
})
}
func (s *Server) SetGraffiti(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.keymanagerAPI.SetGraffiti")
defer span.End()
if s.validatorService == nil {
httputil.HandleError(w, "Validator service not ready.", http.StatusServiceUnavailable)
return
}
_, pubkey, ok := shared.HexFromRoute(w, r, "pubkey", fieldparams.BLSPubkeyLength)
if !ok {
return
}
var req struct {
Graffiti string `json:"graffiti"`
}
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case err == io.EOF:
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if err := s.validatorService.SetGraffiti(ctx, bytesutil.ToBytes48(pubkey), []byte(req.Graffiti)); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
}
func (s *Server) DeleteGraffiti(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.keymanagerAPI.DeleteGraffiti")
defer span.End()
if s.validatorService == nil {
httputil.HandleError(w, "Validator service not ready.", http.StatusServiceUnavailable)
return
}
_, pubkey, ok := shared.HexFromRoute(w, r, "pubkey", fieldparams.BLSPubkeyLength)
if !ok {
return
}
if err := s.validatorService.DeleteGraffiti(ctx, bytesutil.ToBytes48(pubkey)); err != nil {
httputil.HandleError(w, err.Error(), http.StatusNotFound)
return
}
}
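Together the three handlers give the keymanager API a full set/get/delete lifecycle on a single route (registered further down in this change). A client-side walkthrough, sketched with a placeholder base URL and pubkey:

import (
	"net/http"
	"strings"
)

func graffitiLifecycle(baseURL, pubkey string) error {
	route := baseURL + "/eth/v1/validator/" + pubkey + "/graffiti"
	// Set: POST a JSON body matching the anonymous request struct above.
	if _, err := http.Post(route, "application/json", strings.NewReader(`{"graffiti":"hello"}`)); err != nil {
		return err
	}
	// Get: responds with {"data":{"pubkey":"0x...","graffiti":"hello"}}.
	if _, err := http.Get(route); err != nil {
		return err
	}
	// Delete: clears the per-pubkey graffiti.
	req, err := http.NewRequest(http.MethodDelete, route, http.NoBody)
	if err != nil {
		return err
	}
	_, err = http.DefaultClient.Do(req)
	return err
}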

View File

@@ -1898,3 +1898,48 @@ func TestServer_DeleteFeeRecipientByPubkey_InvalidPubKey(t *testing.T) {
require.StringContains(t, "pubkey is invalid", w.Body.String())
}
func TestServer_Graffiti(t *testing.T) {
graffiti := "graffiti"
m := &mock.Validator{}
vs, err := client.NewValidatorService(context.Background(), &client.Config{
Validator: m,
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
}
var request struct {
Graffiti string `json:"graffiti"`
}
request.Graffiti = graffiti
pubkey := "0xaf2e7ba294e03438ea819bd4033c6c1bf6b04320ee2075b77273c08d02f8a61bcc303c2c06bd3713cb442072ae591493"
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodPost, "/eth/v1/validator/{pubkey}/graffiti", &buf)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.SetGraffiti(w, req)
require.Equal(t, http.StatusOK, w.Code)
req = httptest.NewRequest(http.MethodGet, "/eth/v1/validator/{pubkey}/graffiti", http.NoBody)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w = httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.GetGraffiti(w, req)
require.Equal(t, http.StatusOK, w.Code)
resp := &GetGraffitiResponse{}
require.NoError(t, json.Unmarshal(w.Body.Bytes(), resp))
assert.Equal(t, request.Graffiti, resp.Data.Graffiti)
assert.Equal(t, pubkey, resp.Data.Pubkey)
req = httptest.NewRequest(http.MethodDelete, "/eth/v1/validator/{pubkey}/graffiti", http.NoBody)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w = httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.DeleteGraffiti(w, req)
require.Equal(t, http.StatusOK, w.Code)
}

View File

@@ -53,6 +53,8 @@ type Config struct {
GenesisFetcher client.GenesisFetcher
WalletInitializedFeed *event.Feed
NodeGatewayEndpoint string
BeaconApiEndpoint string
BeaconApiTimeout time.Duration
Router *mux.Router
Wallet *wallet.Wallet
}
@@ -130,6 +132,8 @@ func NewServer(ctx context.Context, cfg *Config) *Server {
validatorMonitoringPort: cfg.ValidatorMonitoringPort,
validatorGatewayHost: cfg.ValidatorGatewayHost,
validatorGatewayPort: cfg.ValidatorGatewayPort,
beaconApiTimeout: cfg.BeaconApiTimeout,
beaconApiEndpoint: cfg.BeaconApiEndpoint,
router: cfg.Router,
}
// immediately register routes to override any catchalls
@@ -230,6 +234,10 @@ func (s *Server) InitializeRoutes() error {
s.router.HandleFunc("/eth/v1/validator/{pubkey}/feerecipient", s.SetFeeRecipientByPubkey).Methods(http.MethodPost)
s.router.HandleFunc("/eth/v1/validator/{pubkey}/feerecipient", s.DeleteFeeRecipientByPubkey).Methods(http.MethodDelete)
s.router.HandleFunc("/eth/v1/validator/{pubkey}/voluntary_exit", s.SetVoluntaryExit).Methods(http.MethodPost)
s.router.HandleFunc("/eth/v1/validator/{pubkey}/graffiti", s.GetGraffiti).Methods(http.MethodGet)
s.router.HandleFunc("/eth/v1/validator/{pubkey}/graffiti", s.SetGraffiti).Methods(http.MethodPost)
s.router.HandleFunc("/eth/v1/validator/{pubkey}/graffiti", s.DeleteGraffiti).Methods(http.MethodDelete)
// auth endpoint
s.router.HandleFunc(api.WebUrlPrefix+"initialize", s.Initialize).Methods(http.MethodGet)
// accounts endpoints

View File

@@ -21,6 +21,7 @@ func TestServer_InitializeRoutes(t *testing.T) {
"/eth/v1/validator/{pubkey}/gas_limit": {http.MethodGet, http.MethodPost, http.MethodDelete},
"/eth/v1/validator/{pubkey}/feerecipient": {http.MethodGet, http.MethodPost, http.MethodDelete},
"/eth/v1/validator/{pubkey}/voluntary_exit": {http.MethodPost},
"/eth/v1/validator/{pubkey}/graffiti": {http.MethodGet, http.MethodPost, http.MethodDelete},
"/v2/validator/health/version": {http.MethodGet},
"/v2/validator/health/logs/validator/stream": {http.MethodGet},
"/v2/validator/health/logs/beacon/stream": {http.MethodGet},

View File

@@ -99,6 +99,16 @@ type SetFeeRecipientByPubkeyRequest struct {
Ethaddress string `json:"ethaddress"`
}
// Graffiti keymanager api
type GetGraffitiResponse struct {
Data *GraffitiData `json:"data"`
}
type GraffitiData struct {
Pubkey string `json:"pubkey"`
Graffiti string `json:"graffiti"`
}
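With these types, a successful GetGraffiti response serializes as, for example (values borrowed from the tests above):

{
  "data": {
    "pubkey": "0xaf2e7ba294e03438ea819bd4033c6c1bf6b04320ee2075b77273c08d02f8a61bcc303c2c06bd3713cb442072ae591493",
    "graffiti": "graffiti"
  }
}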
type BeaconStatusResponse struct {
BeaconNodeEndpoint string `json:"beacon_node_endpoint"`
Connected bool `json:"connected"`