Compare commits

..

27 Commits

Author SHA1 Message Date
terence tsao
e9922bb1da Fix consensus_spec sha256 2024-01-18 15:11:37 -08:00
Aditya Asgaonkar
29ad26d4db run bazel gazelle 2024-01-18 14:59:02 -08:00
Aditya Asgaonkar
d14cc5633f update WORKSPACE for spec v1.4.0-beta.6 2024-01-18 13:40:56 -08:00
Aditya Asgaonkar
144f24f63f update tests 2024-01-18 13:40:56 -08:00
Aditya Asgaonkar
f5bdb4f544 implement confirmation rule prerequisite - f.c. filter changes 2024-01-18 13:40:56 -08:00
Radosław Kapka
a608630727 Add Inactivity field to attestation rewards (#13382) 2024-01-18 18:51:35 +00:00
Mario Vega
37739b4193 fix blobsidecar json tag for commitment inclusion proof (#13475)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-18 17:43:43 +00:00
james-prysm
4d2067dbae bugfix: ssz post-requests should check content type not accept (#13482)
* updating POST requests that accept SSZ to check the Content-Type header instead of the Accept header (see the sketch after this entry)

* Radek's review comments to make things clearer
2024-01-18 17:41:31 +00:00
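The fix above is standard HTTP content negotiation: the encoding of a POST body is described by the request's Content-Type header, while Accept only states what the client wants back in the response. A minimal sketch of that distinction in generic Go (not the `httputil.IsRequestSsz`/`httputil.RespondWithSsz` helpers that appear later in this diff):

```go
package apisketch

import (
	"net/http"
	"strings"
)

const octetStreamMediaType = "application/octet-stream"

// isRequestSSZ reports whether the client sent an SSZ-encoded request body.
// The body's encoding is described by Content-Type; Accept says nothing about it.
func isRequestSSZ(r *http.Request) bool {
	return strings.HasPrefix(r.Header.Get("Content-Type"), octetStreamMediaType)
}

// wantsSSZResponse reports whether the client asked for an SSZ-encoded response.
func wantsSSZResponse(r *http.Request) bool {
	return strings.Contains(r.Header.Get("Accept"), octetStreamMediaType)
}
```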
Nishant Das
fc05e306dd Allow Pcli to Run State Transitions Easily (#13484)
* add all this in

* gaz

* add flag
2024-01-18 14:44:06 +00:00
Radosław Kapka
204de13c86 REST VC: Subscribe to Beacon API events (#13453)
* Revert "Revert "REST VC: Subscribe to Beacon API events  (#13354)" (#13428)"

This reverts commit 8d092a1113.

* change logic

* review

* test fix

* fix critical error

* merge flag check

* change error msg

* return on errors

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-18 14:27:41 +00:00
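For context on what subscribing to Beacon API events looks like from the REST validator client's side: the beacon node serves a Server-Sent Events stream (`text/event-stream`, matching the `EventStreamMediaType` constant added later in this diff) and the client reads newline-delimited `event:`/`data:` frames. A rough, generic client sketch; the `http://localhost:3500/eth/v1/events?topics=head` URL is the usual Beacon API shape and a typical local port, both assumed here rather than taken from this changeset:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Subscribe to head events over SSE (endpoint and topic assumed for illustration).
	req, err := http.NewRequest(http.MethodGet, "http://localhost:3500/eth/v1/events?topics=head", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "text/event-stream")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each SSE frame arrives as "event: <topic>" and "data: <json>" lines.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
}
```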
terence
f3ef1b64d6 Enhance block by root log (#13472) 2024-01-18 13:43:10 +00:00
terence
c3dbfa66d0 Change blob latency metrics to ms (#13481) 2024-01-17 23:28:42 +00:00
terence
93aba997f4 Move checking of attribute empty earlier (#13465) 2024-01-17 18:42:56 +00:00
Potuz
79bb7efbf8 Check init sync before getting payload attributes (#13479)
* Check init sync before getting payload attributes

This PR adds a helper to forkchoice that returns the delay of the latest
imported block. It also adds a helper with a heuristic to check whether the
node is in init sync. If the highest imported block was imported with a
delay of less than an epoch, the node is considered to be in regular sync.
If, in addition to being imported late, the highest imported block is more
than two epochs old, the node is considered to be in init sync.

The helper that checks this only uses forkchoice and therefore requires a
read lock. There are four call paths:

1) During regular block processing, we defer a function to send the
   second FCU call with attributes. This function may not be called at
   all if we are not in regular sync.
2) During regular block processing, we compute payload attributes on the
   path `postBlockProcess->getFCUArgs->computePayloadAttributes` when
   syncing a late block. Forkchoice is already locked here, so we add a
   check in `getFCUArgs` to return early if we are not in regular sync.
3) When handling late blocks in `lateBlockTasks`, we simply return early
   if we are not in regular sync (this is the biggest change, as it holds
   the forkchoice lock for longer in `lateBlockTasks`).
4) On attestation processing, in `UpdateHead`, we are already locked, so
   we just add a check to not update head on this path if we are not in
   regular sync.

* fix build

* Fix mocks
2024-01-17 15:39:28 +00:00
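A compact restatement of the heuristic described in the entry above (a paraphrase of the `inRegularSync` helper added further down in this diff, not the code itself; the example numbers in the comments assume mainnet's 32-slot epochs):

```go
package sketch

// regularSync reports whether the node is considered to be in regular sync.
// highestSlot is the slot of the highest imported block, delay is how many
// slots late that block was imported, and currentSlot comes from wall time.
func regularSync(currentSlot, highestSlot, delay, slotsPerEpoch uint64) bool {
	// The highest imported block is less than two epochs old: regular sync.
	if currentSlot-highestSlot < 2*slotsPerEpoch {
		return true
	}
	// Otherwise we are in regular sync only if that block arrived less than
	// an epoch late; e.g. a head at slot 100 with a 40-slot import delay,
	// when the clock says slot 200, means init sync.
	return delay < slotsPerEpoch
}
```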
terence
87b53db3b4 Capitalize Aggregated Unaggregated Attestations Log (#13473) 2024-01-17 13:30:31 +00:00
terence
fe431b9201 Use correct HistoricalRoots (#13477) 2024-01-17 08:14:32 +00:00
james-prysm
790a09f9b1 Improve wait for activation (#13448)
* removing timeout on wait for activation, instead switched to an event driven approach

* fixing unit tests

* linting

* simplifying return

* adding sleep for the remaining slot to avoid cpu spikes

* removing ifstatement on log

* removing ifstatement on log

* improving switch statement

* removing the loop entirely

* fixing unit test

* fixing manu's reported issue with deletion of json file

* missed change around writefile at path

* gofmt

* fixing deepsource issue with reading file

* trying to clean file to avoid deepsource issue

* still getting error trying a different approach

* fixing stream loop

* fixing unit test

* Update validator/keymanager/local/keymanager.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* fixing linting

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-01-16 17:04:54 +00:00
Manu NALEPA
46387a903a getLegacyDatabaseLocation: Change message. (#13471)
* `getLegacyDatabaseLocation`: Change message.

* Update validator/node/node.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-16 11:29:36 +00:00
Nishant Das
6a65e07684 Add Spans to Core Validator Methods (#13467)
* add traces

* gaz
2024-01-16 07:52:46 +00:00
Potuz
abef94d7ad do not check optimistic status if cached attestation (#13462)
* do not check optimistic status if cached attestation

* Gazelle

* Gazelle again

* fix nil panics

* more nil checks

* more nil checks

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-15 18:50:33 +00:00
Manu NALEPA
99a8d0bac6 Validator client - Improve readability - NO FUNCTIONAL CHANGE (#13468)
* Improve `NewServiceRegistry` documentation.

* Improve `README.md`.

* Improve readability of `registerValidatorService`.

* Move `log` in `main.go`.

Since `log` is only used in `main.go`.

* Clean Tos.

* `DefaultDataDir`: Use `switch` instead of `if/elif`.

* `ReadPassword`: Remove unused receiver.

* `validator/main.go`: Clean.

* `WarnIfPlatformNotSupported`: Add Mac OSX ARM64.

* `runner.go`: Use idiomatic `err` handling.

* `waitForChainStart`: Avoid `chainStartResponse` mutation.

* `WaitForChainStart`: Reduce cognitive complexity.

* Logs: `powchain` ==> `execution`.
2024-01-15 14:46:54 +00:00
Preston Van Loon
b585ff77f5 Fix port logging in bootnode (#13457) 2024-01-15 04:38:22 +00:00
Nishant Das
1ff5a43385 Add the Ability to Defragment the Beacon State (#13444)
* Defragment head state

* change log level

* change it to be more efficient

* add flag

* add tests and clean up

* fix it

* gosimple

* Update container/multi-value-slice/multi_value_slice.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review

* unlock it

* remove from fc lock

---------

Co-authored-by: rkapka <rkapka@wp.pl>
2024-01-13 05:44:02 +00:00
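For intuition on what defragmenting the state means here: under the experimental state, field values are roughly shared across state copies, with per-copy overrides tracked for individual (fragmented) indexes; once a field accumulates many fragmented indexes it is reallocated into one fresh slice of its own (see the `defragmentState` comment later in this diff). A toy illustration of that idea, not Prysm's multi-value-slice implementation:

```go
package sketch

// sharedField is a toy stand-in for a field whose values mostly come from a
// shared base slice, with a few per-copy overrides ("fragmented" indexes).
type sharedField struct {
	base      []uint64
	overrides map[int]uint64
}

// defragment materializes the field into a single freshly allocated slice once
// the number of overrides grows past a threshold, so lookups no longer have to
// consult the override map.
func (f *sharedField) defragment(threshold int) {
	if len(f.overrides) < threshold {
		return
	}
	out := make([]uint64, len(f.base))
	copy(out, f.base)
	for i, v := range f.overrides {
		out[i] = v
	}
	f.base = out
	f.overrides = map[int]uint64{}
}
```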
dependabot[bot]
0cfbddc980 Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4 (#13445)
* Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4

Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go) from 0.39.3 to 0.39.4.
- [Release notes](https://github.com/quic-go/quic-go/releases)
- [Changelog](https://github.com/quic-go/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/quic-go/quic-go/compare/v0.39.3...v0.39.4)

---
updated-dependencies:
- dependency-name: github.com/quic-go/quic-go
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Ran gazelle

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-12 18:12:19 +00:00
Manu NALEPA
22a484c45e Fixes issues when running the validator client with the --web flag and a non-existing validator.db file and/or prysm-wallet-v2 directory. (#13460)
* `getLegacyDatabaseLocation`: Add tests.

* `getLegacyDatabaseLocation`: Handle `c.wallet == nil`.

* `saveAuthToken`: Create parent directory if needed (see the sketch after this entry).
2024-01-12 15:53:27 +00:00
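The `saveAuthToken` fix is the usual create-the-parent-directory-first pattern, so a first run with no pre-existing directory cannot fail the write. A generic sketch (the path handling and permissions are placeholders, not Prysm's actual values):

```go
package sketch

import (
	"os"
	"path/filepath"
)

// writeFileEnsuringDir writes data to path, creating any missing parent
// directories first so the write succeeds on a fresh installation.
func writeFileEnsuringDir(path string, data []byte) error {
	if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}
```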
terence
6ddafe1159 Delete invalid blob at block processing (#13456)
* Delete invalid blob at block processing

* Fix test
2024-01-12 08:09:45 +00:00
qinlz2
b8c5af665f [3/5] light client events (#13225)
* add http streaming light client events

* expose ForkChoiceStore

* return error in insertFinalizedDeposits

* send light client updates

* Revert "return error in insertFinalizedDeposits"

This reverts commit f7068663b8c8b3a3bf45950d5258011a5e4d803e.

* fix: lint

* fix: patch the wrong error response

* refactor: rename the JSON structs

* fix: LC finalized stream return correct format

* fix: LC op stream return correct JSON format

* fix: omit nil JSON fields

* chore: gazelle

* fix: make update by range return list directly based on spec

* chore: remove unnecessary json annotations

* chore: adjust comments

* feat: introduce EnableLightClientEvents feature flag

* feat: use enable-lightclient-events flag

* chore: more logging details

* chore: fix rebase errors

* chore: adjust data structure to save mem

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* refactor: rename config EnableLightClient

* refactor: rename feature flag

* refactor: move helper functions to helper pkg

* test: fix broken unit tests

---------

Co-authored-by: Nicolás Pernas Maradei <nicolas@polymerlabs.org>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-11 18:38:59 +00:00
134 changed files with 2268 additions and 723 deletions

View File

@@ -223,7 +223,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.4.0-beta.5"
consensus_spec_version = "v1.4.0-beta.6"
bls_test_version = "v0.1.1"
@@ -239,7 +239,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "9017ffff84d64a7c4c9e6ff9f421f9479f71d3b463b738f54e02158dbb4f50f0",
sha256 = "7dc467d7be97525c88a1d3683665c1354cc86297fd62009e7cf5000905b25652",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -255,7 +255,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "f08711682553fe7c9362f1400ed8c56b2fa9576df08581fcad4c508ba8ad4788",
sha256 = "e163011254b6ce100205fb779ba660faedc9bc9f7bb4408c25746a7aa5e8d8bc",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -271,7 +271,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "7ea3189e3879f2ac62467cbf2945c00b6c94d30cdefb2d645c630b1018c50e10",
sha256 = "b73c81b6386053a2141f6f43b457489668621c7013f740ed93edf9ac0e34f091",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -286,7 +286,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "4119992a2efc79e5cb2bdc07ed08c0b1fa32332cbd0d88e6467f34938df97026",
sha256 = "47726c527512d03ef3e706a8e7f8d5db6a5f2153351db0470dab780f6a87c4dd",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -7,4 +7,6 @@ const (
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)

View File

@@ -6,6 +6,7 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"defragment.go",
"error.go",
"execution_engine.go",
"forkchoice_update_execution.go",

View File

@@ -6,7 +6,10 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -18,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -334,6 +336,11 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index primiti
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface.
func (s *Service) ForkChoicer() f.ForkChoicer {
return s.cfg.ForkChoiceStore
}
// IsOptimistic returns true if the current head is optimistic.
func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
if slots.ToEpoch(s.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
@@ -549,3 +556,18 @@ func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.Slot(root)
}
// inRegularSync applies a heuristic, using only forkchoice, to decide whether
// the node is in regular sync mode or in init sync mode.
// It checks whether the highest received block is behind the current time by
// at least 2 epochs and whether it was imported at least one epoch late. If
// both of these tests pass, then the node is in init sync. The caller of this
// function MUST hold a lock on forkchoice.
func (s *Service) inRegularSync() bool {
currentSlot := s.CurrentSlot()
fc := s.cfg.ForkChoiceStore
if currentSlot-fc.HighestReceivedBlockSlot() < 2*params.BeaconConfig().SlotsPerEpoch {
return true
}
return fc.HighestReceivedBlockDelay() < params.BeaconConfig().SlotsPerEpoch
}

View File

@@ -593,3 +593,26 @@ func TestService_IsFinalized(t *testing.T) {
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}
func TestService_inRegularSync(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
st, blkRoot, err = prepareForkchoiceState(ctx, 128, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-5*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
require.Equal(t, true, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
c.cfg.ForkChoiceStore.SetGenesisTime(uint64(time.Now().Unix()))
require.Equal(t, true, c.inRegularSync())
}

View File

@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/time"
)
var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "head_state_defragmentation_milliseconds",
Help: "Milliseconds it takes to defragment the head state",
})
// This method defragments the state so that any fields with a high number of
// fragmented indexes are reallocated to a new, separate slice for that field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)
stateDefragmentationTime.Observe(float64(elapsedTime.Milliseconds()))
}

View File

@@ -387,7 +387,10 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
// This is an irreparable condition: it would mean a justified or finalized block has become invalid.
return err
}
// TODO: Remove blob here
if err := s.blobStorage.Remove(root); err != nil {
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
log.WithError(err).Debug("Could not remove blob from blob storage")
}
}
return nil
}

View File

@@ -6,6 +6,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
@@ -29,7 +31,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.
@@ -66,7 +67,10 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
startTime := time.Now()
fcuArgs := &fcuConfig{}
defer s.handleSecondFCUCall(cfg, fcuArgs)
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
defer s.sendLightClientFeeds(cfg)
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.signed.Block())
@@ -102,6 +106,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
return nil
}
@@ -577,10 +582,15 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if s.CurrentSlot() == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -595,18 +605,22 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
// handleEpochBoundary requires a forkchoice lock to obtain the target root.
s.cfg.ForkChoiceStore.RLock()
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
s.cfg.ForkChoiceStore.RUnlock()
// return early if we already started building a block for the current
// head root
_, has := s.cfg.PayloadIDCache.PayloadID(s.CurrentSlot()+1, headRoot)
if has {
return
}
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
@@ -617,18 +631,12 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if fcuArgs.attributes.IsEmpty() {
return
}
s.cfg.ForkChoiceStore.RLock()
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
s.cfg.ForkChoiceStore.RUnlock()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}

View File

@@ -7,21 +7,24 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -34,6 +37,9 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.signed.Block().Slot()
if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
return nil
@@ -106,6 +112,128 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
// sendLightClientFeeds sends the light client feeds when the feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.signed, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.signed, finalized, cfg.postState)
}
}
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
if err != nil {
log.WithError(err).Error("Failed to send light client finality update")
} else {
s.lastPublishedLightClientEpoch = finalized.Epoch
}
}
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
finalizedRoot := bytesutil.ToBytes32(finalizedCheckPoint.Root)
finalizedBlock, err = s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
finalizedBlock = nil
}
}
update, err := NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
finalizedBlock,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientFinalityUpdate(update),
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: result,
}), nil
}
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientOptimisticUpdate(update),
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing
// state transition. This function is called on late blocks while still locked,

View File

@@ -150,7 +150,9 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
headBlock: headBlock,
proposingSlot: proposingSlot,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}

View File

@@ -123,6 +123,9 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
daWaitedTime := time.Since(daStartTime)
// Defragment the state before continuing block processing.
s.defragmentState(postState)
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()

View File

@@ -11,6 +11,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/async/event"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
@@ -37,34 +39,35 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
}
// config options for the service.

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
@@ -116,6 +117,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithBLSToExecPool(req.blsPool),
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -15,11 +15,12 @@ import (
// AttDelta contains rewards and penalties for a single attestation.
type AttDelta struct {
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
InactivityPenalty uint64
}
// InitializePrecomputeValidators precomputes individual validator for its attested balances and the total sum of validators attested balances of the epoch.
@@ -251,7 +252,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
if err != nil {
return nil, err
}
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty)
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty+delta.InactivityPenalty)
vals[i].AfterEpochTransitionBalance = balances[i]
}
@@ -351,7 +352,7 @@ func attestationDelta(
if err != nil {
return &AttDelta{}, err
}
attDelta.TargetPenalty += n / inactivityDenominator
attDelta.InactivityPenalty = n / inactivityDenominator
}
return attDelta, nil

View File

@@ -220,7 +220,7 @@ func TestAttestationsDelta(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
// Reward amount should increase as validator index increases due to setup.
@@ -258,7 +258,7 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
// Reward amount should increase as validator index increases due to setup.
@@ -306,7 +306,7 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
for i := range rewards {
wanted[i] += rewards[i]

View File

@@ -57,6 +57,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
historicalRoots, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
@@ -70,7 +74,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: [][]byte{},
HistoricalRoots: historicalRoots,
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),

View File

@@ -14,6 +14,7 @@ import (
func TestUpgradeToDeneb(t *testing.T) {
st, _ := util.DeterministicGenesisStateCapella(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
preForkState := st.Copy()
mSt, err := deneb.UpgradeToDeneb(st)
require.NoError(t, err)
@@ -46,6 +47,12 @@ func TestUpgradeToDeneb(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,

View File

@@ -27,6 +27,10 @@ const (
NewHead
// MissedSlot is sent when we need to notify users that a slot was missed.
MissedSlot
// LightClientFinalityUpdate event
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
)
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -156,7 +156,7 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
}
partialMoved = true
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(time.Since(startTime).Seconds())
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
@@ -180,11 +180,17 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return blocks.VerifiedROBlob{}, err
}
defer func() {
blobFetchLatency.Observe(time.Since(startTime).Seconds())
blobFetchLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
return verification.BlobSidecarNoop(ro)
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
return bs.fs.RemoveAll(rootDir)
}
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.

View File

@@ -73,6 +73,21 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("round trip write, read and delete", func(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
err := bs.Save(testSidecars[0])
require.NoError(t, err)
expected := testSidecars[0]
actual, err := bs.Get(expected.BlockRoot(), expected.Index)
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
require.NoError(t, bs.Remove(expected.BlockRoot()))
_, err = bs.Get(expected.BlockRoot(), expected.Index)
require.ErrorContains(t, "file does not exist", err)
})
}
// pollUntil polls a condition function until it returns true or a timeout is reached.

View File

@@ -6,15 +6,15 @@ import (
)
var (
blobBuckets = []float64{0.00003, 0.00005, 0.00007, 0.00009, 0.00011, 0.00013, 0.00015}
blobBuckets = []float64{3, 5, 7, 9, 11, 13}
blobSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_save_latency",
Help: "Latency of BlobSidecar storage save operations in seconds",
Help: "Latency of BlobSidecar storage save operations in milliseconds",
Buckets: blobBuckets,
})
blobFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_get_latency",
Help: "Latency of BlobSidecar storage get operations in seconds",
Help: "Latency of BlobSidecar storage get operations in milliseconds",
Buckets: blobBuckets,
})
blobsPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{

View File

@@ -81,14 +81,12 @@ func (p *blobPruner) prune(pruneBefore primitives.Slot) error {
}()
} else {
defer func() {
if totalPruned > 0 {
log.WithFields(log.Fields{
"upToEpoch": slots.ToEpoch(pruneBefore),
"duration": time.Since(start).String(),
"filesRemoved": totalPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(totalPruned))
}
log.WithFields(log.Fields{
"upToEpoch": slots.ToEpoch(pruneBefore),
"duration": time.Since(start).String(),
"filesRemoved": totalPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(totalPruned))
}()
}

View File

@@ -2,4 +2,4 @@ package execution
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "powchain")
var log = logrus.WithField("prefix", "execution")

View File

@@ -77,5 +77,6 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -89,17 +89,25 @@ func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finaliz
return nil
}
// getVotingSource returns the voting source if we were to make an FFG vote for this node in the currentEpoch
func (n *Node) getVotingSource(currentEpoch primitives.Epoch) primitives.Epoch {
// If the block is from a past epoch, then return the unrealized justified epoch
if slots.ToEpoch(n.slot) < currentEpoch {
return n.unrealizedJustifiedEpoch
}
// Otherwise return the justified epoch
return n.justifiedEpoch
}
// viableForHead returns true if the node is viable to head.
// Any node with different finalized or justified epoch than
// the ones in fork choice store should not be viable to head.
func (n *Node) viableForHead(justifiedEpoch, currentEpoch primitives.Epoch) bool {
justified := justifiedEpoch == n.justifiedEpoch || justifiedEpoch == 0
if !justified && justifiedEpoch+1 == currentEpoch {
if n.unrealizedJustifiedEpoch+1 >= currentEpoch && n.justifiedEpoch+2 >= currentEpoch {
justified = true
}
if justifiedEpoch == 0 {
return true
}
return justified
votingSource := n.getVotingSource(currentEpoch)
return votingSource == justifiedEpoch || votingSource+2 >= currentEpoch
}
func (n *Node) leadsToViableHead(justifiedEpoch, currentEpoch primitives.Epoch) bool {

View File

@@ -144,9 +144,15 @@ func TestNode_ViableForHead(t *testing.T) {
}{
{&Node{}, 0, true},
{&Node{}, 1, false},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1}, 1, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1}, 2, false},
{&Node{finalizedEpoch: 3, justifiedEpoch: 4}, 4, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1, unrealizedJustifiedEpoch: 1}, 1, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1, unrealizedJustifiedEpoch: 1}, 2, false},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1, unrealizedJustifiedEpoch: 2}, 2, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1, unrealizedJustifiedEpoch: 2}, 3, false},
{&Node{finalizedEpoch: 1, justifiedEpoch: 1, unrealizedJustifiedEpoch: 3}, 3, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 2, unrealizedJustifiedEpoch: 3}, 3, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 2, unrealizedJustifiedEpoch: 2}, 4, false},
{&Node{finalizedEpoch: 1, justifiedEpoch: 2, unrealizedJustifiedEpoch: 3}, 4, true},
{&Node{finalizedEpoch: 1, justifiedEpoch: 3, unrealizedJustifiedEpoch: 4}, 4, true},
}
for _, tc := range tests {
got := tc.n.viableForHead(tc.justifiedEpoch, 5)

View File

@@ -240,6 +240,20 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockDelay returns the number of slots by which the highest
// received block was late when it was received.
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode
if n == nil {
return 0
}
secs, err := slots.SecondsSinceSlotStart(n.slot, f.store.genesisTime, n.timestamp)
if err != nil {
return 0
}
return primitives.Slot(secs / params.BeaconConfig().SecondsPerSlot)
}
// ReceivedBlocksLastEpoch returns the number of blocks received in the last epoch
func (f *ForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
count := uint64(0)

View File

@@ -333,26 +333,29 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64
// Received block last epoch is 1
_, err = s.insert(context.Background(), 64, [32]byte{'A'}, b, b, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-64*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
s.genesisTime = uint64(time.Now().Add(time.Duration((-64*int64(params.BeaconConfig().SecondsPerSlot))-1) * time.Second).Unix())
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(64), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64 65
// Received block last epoch is 2
_, err = s.insert(context.Background(), 65, [32]byte{'B'}, b, b, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-65*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
s.genesisTime = uint64(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(2), count)
require.Equal(t, primitives.Slot(65), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockDelay())
// 64 65 66
// Received block last epoch is 3

View File

@@ -8,6 +8,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
func TestStore_SetUnrealizedEpochs(t *testing.T) {
@@ -63,14 +64,14 @@ func TestStore_UpdateUnrealizedCheckpoints(t *testing.T) {
func TestStore_LongFork(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 75, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
state, blkRoot, err = prepareForkchoiceState(ctx, 80, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 2))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
state, blkRoot, err = prepareForkchoiceState(ctx, 95, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'c'}, 2))
@@ -78,27 +79,28 @@ func TestStore_LongFork(t *testing.T) {
// Add an attestation to c, it is head
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'c'}, 1)
f.justifiedBalances = []uint64{100}
c := f.store.nodeByRoot[[32]byte{'c'}]
require.Equal(t, primitives.Epoch(2), slots.ToEpoch(c.slot))
driftGenesisTime(f, c.slot, 0)
headRoot, err := f.Head(ctx)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
// D is head even though its weight is lower.
// c remains the head even if a block d with higher realized justification is seen
ha := [32]byte{'a'}
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'b'}, [32]byte{'D'}, 2, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{Epoch: 2, Root: ha}))
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'c'}].weight)
// Update unrealized justification, c becomes head
require.NoError(t, f.updateUnrealizedCheckpoints(ctx))
d := f.store.nodeByRoot[[32]byte{'d'}]
require.Equal(t, primitives.Epoch(3), slots.ToEpoch(d.slot))
driftGenesisTime(f, d.slot, 0)
require.Equal(t, true, d.viableForHead(f.store.justifiedCheckpoint.Epoch, slots.ToEpoch(d.slot)))
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'c'}].weight)
}
// Epoch 1 Epoch 2 Epoch 3

View File

@@ -64,6 +64,7 @@ type FastGetter interface {
FinalizedPayloadBlockHash() [32]byte
HasNode([32]byte) bool
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockDelay() primitives.Slot
IsCanonical(root [32]byte) bool
IsOptimistic(root [32]byte) (bool, error)
IsViableForCheckpoint(*forkchoicetypes.Checkpoint) (bool, error)

View File

@@ -114,6 +114,13 @@ func (ro *ROForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return ro.getter.HighestReceivedBlockSlot()
}
// HighestReceivedBlockDelay delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) HighestReceivedBlockDelay() primitives.Slot {
ro.l.RLock()
defer ro.l.RUnlock()
return ro.getter.HighestReceivedBlockDelay()
}
// ReceivedBlocksLastEpoch delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
ro.l.RLock()

View File

@@ -29,6 +29,7 @@ const (
unrealizedJustifiedPayloadBlockHashCalled
nodeCountCalled
highestReceivedBlockSlotCalled
highestReceivedBlockDelayCalled
receivedBlocksLastEpochCalled
weightCalled
isOptimisticCalled
@@ -113,6 +114,11 @@ func TestROLocking(t *testing.T) {
call: highestReceivedBlockSlotCalled,
cb: func(g FastGetter) { g.HighestReceivedBlockSlot() },
},
{
name: "highestReceivedBlockDelayCalled",
call: highestReceivedBlockDelayCalled,
cb: func(g FastGetter) { g.HighestReceivedBlockDelay() },
},
{
name: "receivedBlocksLastEpochCalled",
call: receivedBlocksLastEpochCalled,
@@ -245,6 +251,11 @@ func (ro *mockROForkchoice) HighestReceivedBlockSlot() primitives.Slot {
return 0
}
func (ro *mockROForkchoice) HighestReceivedBlockDelay() primitives.Slot {
ro.calls = append(ro.calls, highestReceivedBlockDelayCalled)
return 0
}
func (ro *mockROForkchoice) ReceivedBlocksLastEpoch() (uint64, error) {
ro.calls = append(ro.calls, receivedBlocksLastEpochCalled)
return 0, nil

View File

@@ -6,6 +6,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/gateway",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//api:go_default_library",
"//api/gateway:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -2,6 +2,7 @@ package gateway
import (
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/gateway"
"github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/flags"
ethpbalpha "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
@@ -40,7 +41,7 @@ func DefaultConfig(enableDebugRPCEndpoints bool, httpModules string) MuxConfig {
},
}),
gwruntime.WithMarshalerOption(
"text/event-stream", &gwruntime.EventSourceJSONPb{},
api.EventStreamMediaType, &gwruntime.EventSourceJSONPb{},
),
)
v1AlphaPbHandler = &gateway.PbMux{

View File

@@ -42,7 +42,7 @@ func (s *Service) prepareForkChoiceAtts() {
switch slotInterval.Interval {
case 0:
duration := time.Since(t)
log.WithField("Duration", duration).Debug("aggregated unaggregated attestations")
log.WithField("Duration", duration).Debug("Aggregated unaggregated attestations")
batchForkChoiceAttsT1.Observe(float64(duration.Milliseconds()))
case 1:
batchForkChoiceAttsT2.Observe(float64(time.Since(t).Milliseconds()))

View File

@@ -38,6 +38,7 @@ go_library(
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],

View File

@@ -11,14 +11,15 @@ import (
)
type Service struct {
HeadFetcher blockchain.HeadFetcher
FinalizedFetcher blockchain.FinalizationFetcher
GenesisTimeFetcher blockchain.TimeFetcher
SyncChecker sync.Checker
Broadcaster p2p.Broadcaster
SyncCommitteePool synccommittee.Pool
OperationNotifier opfeed.Notifier
AttestationCache *cache.AttestationCache
StateGen stategen.StateManager
P2P p2p.Broadcaster
HeadFetcher blockchain.HeadFetcher
FinalizedFetcher blockchain.FinalizationFetcher
GenesisTimeFetcher blockchain.TimeFetcher
SyncChecker sync.Checker
Broadcaster p2p.Broadcaster
SyncCommitteePool synccommittee.Pool
OperationNotifier opfeed.Notifier
AttestationCache *cache.AttestationCache
StateGen stategen.StateManager
P2P p2p.Broadcaster
OptimisticModeFetcher blockchain.OptimisticModeFetcher
}

View File

@@ -29,9 +29,12 @@ import (
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
// AggregateBroadcastFailedError represents an error scenario where
// broadcasting an aggregate selection proof failed.
type AggregateBroadcastFailedError struct {
@@ -56,6 +59,9 @@ func (s *Service) ComputeValidatorPerformance(
ctx context.Context,
req *ethpb.ValidatorPerformanceRequest,
) (*ethpb.ValidatorPerformanceResponse, *RpcError) {
ctx, span := trace.StartSpan(ctx, "coreService.ComputeValidatorPerformance")
defer span.End()
if s.SyncChecker.Syncing() {
return nil, &RpcError{Reason: Unavailable, Err: errors.New("Syncing to latest head, not ready to respond")}
}
@@ -211,6 +217,9 @@ func (s *Service) SubmitSignedContributionAndProof(
ctx context.Context,
req *ethpb.SignedContributionAndProof,
) *RpcError {
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedContributionAndProof")
defer span.End()
errs, ctx := errgroup.WithContext(ctx)
// Broadcasting and saving contribution into the pool in parallel. As one fail should not affect another.
@@ -243,6 +252,9 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
ctx context.Context,
req *ethpb.SignedAggregateSubmitRequest,
) *RpcError {
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedAggregateSelectionProof")
defer span.End()
if req.SignedAggregateAndProof == nil || req.SignedAggregateAndProof.Message == nil ||
req.SignedAggregateAndProof.Message.Aggregate == nil || req.SignedAggregateAndProof.Message.Aggregate.Data == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
@@ -315,6 +327,9 @@ func (s *Service) AggregatedSigAndAggregationBits(
func (s *Service) GetAttestationData(
ctx context.Context, req *ethpb.AttestationDataRequest,
) (*ethpb.AttestationData, *RpcError) {
ctx, span := trace.StartSpan(ctx, "coreService.GetAttestationData")
defer span.End()
if req.Slot != s.GenesisTimeFetcher.CurrentSlot() {
return nil, &RpcError{Reason: BadRequest, Err: errors.Errorf("invalid request: slot %d is not the current slot %d", req.Slot, s.GenesisTimeFetcher.CurrentSlot())}
}
@@ -368,6 +383,14 @@ func (s *Service) GetAttestationData(
},
}, nil
}
// cache miss, we need to check for optimistic status before proceeding
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: err}
}
if optimistic {
return nil, &RpcError{Reason: Unavailable, Err: errOptimisticMode}
}
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
@@ -412,6 +435,9 @@ func (s *Service) GetAttestationData(
// SubmitSyncMessage submits the sync committee message to the network.
// It also saves the sync committee message into the pending pool for block inclusion.
func (s *Service) SubmitSyncMessage(ctx context.Context, msg *ethpb.SyncCommitteeMessage) *RpcError {
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSyncMessage")
defer span.End()
errs, ctx := errgroup.WithContext(ctx)
headSyncCommitteeIndices, err := s.HeadFetcher.HeadSyncCommitteeIndices(ctx, msg.ValidatorIndex, msg.Slot)

View File

@@ -62,7 +62,7 @@ func (s *Server) GetBlock(w http.ResponseWriter, r *http.Request) {
return
}
if httputil.SszRequested(r) {
if httputil.RespondWithSsz(r) {
s.getBlockSSZ(ctx, w, blk)
} else {
s.getBlock(ctx, w, blk)
@@ -105,7 +105,7 @@ func (s *Server) GetBlockV2(w http.ResponseWriter, r *http.Request) {
return
}
if httputil.SszRequested(r) {
if httputil.RespondWithSsz(r) {
s.getBlockSSZV2(ctx, w, blk)
} else {
s.getBlockV2(ctx, w, blk)
@@ -205,7 +205,7 @@ func (s *Server) GetBlindedBlock(w http.ResponseWriter, r *http.Request) {
return
}
if httputil.SszRequested(r) {
if httputil.RespondWithSsz(r) {
s.getBlindedBlockSSZ(ctx, w, blk)
} else {
s.getBlindedBlock(ctx, w, blk)
@@ -953,8 +953,7 @@ func (s *Server) PublishBlindedBlock(w http.ResponseWriter, r *http.Request) {
if shared.IsSyncing(r.Context(), w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
if httputil.IsRequestSsz(r) {
s.publishBlindedBlockSSZ(ctx, w, r, false)
} else {
s.publishBlindedBlock(ctx, w, r, false)
@@ -978,8 +977,7 @@ func (s *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request) {
if shared.IsSyncing(r.Context(), w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
if httputil.IsRequestSsz(r) {
s.publishBlindedBlockSSZ(ctx, w, r, true)
} else {
s.publishBlindedBlock(ctx, w, r, true)
@@ -1250,8 +1248,7 @@ func (s *Server) PublishBlock(w http.ResponseWriter, r *http.Request) {
if shared.IsSyncing(r.Context(), w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
if httputil.IsRequestSsz(r) {
s.publishBlockSSZ(ctx, w, r, false)
} else {
s.publishBlock(ctx, w, r, false)
@@ -1273,8 +1270,7 @@ func (s *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
if shared.IsSyncing(r.Context(), w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
if httputil.IsRequestSsz(r) {
s.publishBlockSSZ(ctx, w, r, true)
} else {
s.publishBlock(ctx, w, r, true)

View File

@@ -1169,7 +1169,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetPhase0().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1199,7 +1199,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetAltair().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Altair))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1224,7 +1224,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1250,7 +1250,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetCapella().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1276,7 +1276,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetDeneb().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Deneb))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1296,7 +1296,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1317,7 +1317,7 @@ func TestPublishBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1335,7 +1335,7 @@ func TestPublishBlockSSZ(t *testing.T) {
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte("foo")))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlock(writer, request)
@@ -1531,7 +1531,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetPhase0().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1561,7 +1561,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetAltair().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Altair))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1587,7 +1587,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1613,7 +1613,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedCapella().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1639,7 +1639,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedDeneb().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Deneb))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1672,7 +1672,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1690,7 +1690,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte("foo")))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlock(writer, request)
@@ -1899,7 +1899,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetPhase0().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1929,7 +1929,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetAltair().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Altair))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1954,7 +1954,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1980,7 +1980,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetCapella().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2006,7 +2006,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetDeneb().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Deneb))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2026,7 +2026,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2047,7 +2047,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2061,7 +2061,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.CapellaBlock)))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -2078,7 +2078,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte("foo")))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -2286,7 +2286,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetPhase0().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2316,7 +2316,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetAltair().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Altair))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2342,7 +2342,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Bellatrix))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2368,7 +2368,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedCapella().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2394,7 +2394,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedDeneb().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Deneb))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2427,7 +2427,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
ssz, err := genericBlock.GetBlindedBellatrix().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Capella))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2441,7 +2441,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.BlindedCapellaBlock)))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -2458,7 +2458,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte("foo")))
request.Header.Set("Accept", api.OctetStreamMediaType)
request.Header.Set("Content-Type", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)

View File

@@ -46,8 +46,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
for i := range verifiedBlobs {
sidecars = append(sidecars, verifiedBlobs[i].BlobSidecar)
}
ssz := httputil.SszRequested(r)
if ssz {
if httputil.RespondWithSsz(r) {
sidecarResp := &eth.BlobSidecars{
Sidecars: sidecars,
}
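
For context, the renamed helpers separate two questions that the old SszRequested check conflated. The sketch below is a simplified illustration of that split, assuming exact header matches; Prysm's actual httputil implementations handle header parsing in more detail:

package httpenc

import "net/http"

const octetStream = "application/octet-stream"

// isRequestSsz-style check: is the request *body* SSZ-encoded?
func isRequestSsz(r *http.Request) bool {
	return r.Header.Get("Content-Type") == octetStream
}

// respondWithSsz-style check: does the client want an SSZ *response*?
func respondWithSsz(r *http.Request) bool {
	return r.Header.Get("Accept") == octetStream
}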

View File

@@ -12,5 +12,5 @@ type Sidecar struct {
SignedBeaconBlockHeader *shared.SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitment string `json:"kzg_commitment"`
KzgProof string `json:"kzg_proof"`
CommitmentInclusionProof []string `json:"commitment_inclusion_proof"`
CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}

View File

@@ -54,7 +54,7 @@ func (s *Server) GetBeaconStateV2(w http.ResponseWriter, r *http.Request) {
return
}
if httputil.SszRequested(r) {
if httputil.RespondWithSsz(r) {
s.getBeaconStateSSZV2(ctx, w, []byte(stateId))
} else {
s.getBeaconStateV2(ctx, w, []byte(stateId))

View File

@@ -8,8 +8,9 @@ go_library(
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/events",
visibility = ["//beacon-chain:__subpackages__"],
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
@@ -18,8 +19,10 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//config/params:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",

View File

@@ -5,8 +5,10 @@ import (
"encoding/json"
"fmt"
"net/http"
time2 "time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
@@ -15,8 +17,10 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/network/httputil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
@@ -48,6 +52,10 @@ const (
ProposerSlashingTopic = "proposer_slashing"
// AttesterSlashingTopic represents a new attester slashing event topic
AttesterSlashingTopic = "attester_slashing"
// LightClientFinalityUpdateTopic represents a new light client finality update event topic.
LightClientFinalityUpdateTopic = "light_client_finality_update"
// LightClientOptimisticUpdateTopic represents a new light client optimistic update event topic.
LightClientOptimisticUpdateTopic = "light_client_optimistic_update"
)
const topicDataMismatch = "Event data type %T does not correspond to event topic %s"
@@ -55,18 +63,20 @@ const topicDataMismatch = "Event data type %T does not correspond to event topic
const chanBuffer = 1000
var casesHandled = map[string]bool{
HeadTopic: true,
BlockTopic: true,
AttestationTopic: true,
VoluntaryExitTopic: true,
FinalizedCheckpointTopic: true,
ChainReorgTopic: true,
SyncCommitteeContributionTopic: true,
BLSToExecutionChangeTopic: true,
PayloadAttributesTopic: true,
BlobSidecarTopic: true,
ProposerSlashingTopic: true,
AttesterSlashingTopic: true,
HeadTopic: true,
BlockTopic: true,
AttestationTopic: true,
VoluntaryExitTopic: true,
FinalizedCheckpointTopic: true,
ChainReorgTopic: true,
SyncCommitteeContributionTopic: true,
BLSToExecutionChangeTopic: true,
PayloadAttributesTopic: true,
BlobSidecarTopic: true,
ProposerSlashingTopic: true,
AttesterSlashingTopic: true,
LightClientFinalityUpdateTopic: true,
LightClientOptimisticUpdateTopic: true,
}
// StreamEvents provides an endpoint to subscribe to the beacon node Server-Sent-Events stream.
@@ -106,16 +116,24 @@ func (s *Server) StreamEvents(w http.ResponseWriter, r *http.Request) {
defer stateSub.Unsubscribe()
// Set up SSE response headers
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Connection", "keep-alive")
w.Header().Set("Content-Type", api.EventStreamMediaType)
w.Header().Set("Connection", api.KeepAlive)
// Handle each event received and context cancellation.
// We send a keepalive dummy message immediately to prevent clients
// stalling while waiting for the first response chunk.
// After that we send a keepalive dummy message every SECONDS_PER_SLOT
// to prevent anyone (e.g. proxy servers) from closing connections.
sendKeepalive(w, flusher)
keepaliveTicker := time2.NewTicker(time2.Duration(params.BeaconConfig().SecondsPerSlot) * time2.Second)
for {
select {
case event := <-opsChan:
handleBlockOperationEvents(w, flusher, topicsMap, event)
case event := <-stateChan:
s.handleStateEvents(ctx, w, flusher, topicsMap, event)
case <-keepaliveTicker.C:
sendKeepalive(w, flusher)
case <-ctx.Done():
return
}
@@ -262,6 +280,72 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
ExecutionOptimistic: checkpointData.ExecutionOptimistic,
}
send(w, flusher, FinalizedCheckpointTopic, checkpoint)
case statefeed.LightClientFinalityUpdate:
if _, ok := requestedTopics[LightClientFinalityUpdateTopic]; !ok {
return
}
updateData, ok := event.Data.(*ethpbv2.LightClientFinalityUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
return
}
var finalityBranch []string
for _, b := range updateData.Data.FinalityBranch {
finalityBranch = append(finalityBranch, hexutil.Encode(b))
}
update := &LightClientFinalityUpdateEvent{
Version: version.String(int(updateData.Version)),
Data: &LightClientFinalityUpdate{
AttestedHeader: &shared.BeaconBlockHeader{
Slot: fmt.Sprintf("%d", updateData.Data.AttestedHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", updateData.Data.AttestedHeader.ProposerIndex),
ParentRoot: hexutil.Encode(updateData.Data.AttestedHeader.ParentRoot),
StateRoot: hexutil.Encode(updateData.Data.AttestedHeader.StateRoot),
BodyRoot: hexutil.Encode(updateData.Data.AttestedHeader.BodyRoot),
},
FinalizedHeader: &shared.BeaconBlockHeader{
Slot: fmt.Sprintf("%d", updateData.Data.FinalizedHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", updateData.Data.FinalizedHeader.ProposerIndex),
ParentRoot: hexutil.Encode(updateData.Data.FinalizedHeader.ParentRoot),
StateRoot: hexutil.Encode(updateData.Data.FinalizedHeader.StateRoot),
},
FinalityBranch: finalityBranch,
SyncAggregate: &shared.SyncAggregate{
SyncCommitteeBits: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeSignature),
},
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientFinalityUpdateTopic, update)
case statefeed.LightClientOptimisticUpdate:
if _, ok := requestedTopics[LightClientOptimisticUpdateTopic]; !ok {
return
}
updateData, ok := event.Data.(*ethpbv2.LightClientOptimisticUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
return
}
update := &LightClientOptimisticUpdateEvent{
Version: version.String(int(updateData.Version)),
Data: &LightClientOptimisticUpdate{
AttestedHeader: &shared.BeaconBlockHeader{
Slot: fmt.Sprintf("%d", updateData.Data.AttestedHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", updateData.Data.AttestedHeader.ProposerIndex),
ParentRoot: hexutil.Encode(updateData.Data.AttestedHeader.ParentRoot),
StateRoot: hexutil.Encode(updateData.Data.AttestedHeader.StateRoot),
BodyRoot: hexutil.Encode(updateData.Data.AttestedHeader.BodyRoot),
},
SyncAggregate: &shared.SyncAggregate{
SyncCommitteeBits: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeSignature),
},
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientOptimisticUpdateTopic, update)
case statefeed.Reorg:
if _, ok := requestedTopics[ChainReorgTopic]; !ok {
return
@@ -431,6 +515,10 @@ func send(w http.ResponseWriter, flusher http.Flusher, name string, data interfa
write(w, flusher, "event: %s\ndata: %s\n\n", name, string(j))
}
func sendKeepalive(w http.ResponseWriter, flusher http.Flusher) {
write(w, flusher, ":\n\n")
}
func write(w http.ResponseWriter, flusher http.Flusher, format string, a ...any) {
_, err := fmt.Fprintf(w, format, a...)
if err != nil {

View File

@@ -375,7 +375,9 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
})
}
const operationsResult = `event: attestation
const operationsResult = `:
event: attestation
data: {"aggregation_bits":"0x00","data":{"slot":"0","index":"0","beacon_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","source":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"target":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"}},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}
event: attestation
@@ -401,7 +403,9 @@ data: {"signed_header_1":{"message":{"slot":"0","proposer_index":"0","parent_roo
`
const stateResult = `event: head
const stateResult = `:
event: head
data: {"slot":"0","block":"0x0000000000000000000000000000000000000000000000000000000000000000","state":"0x0000000000000000000000000000000000000000000000000000000000000000","epoch_transition":true,"execution_optimistic":false,"previous_duty_dependent_root":"0x0000000000000000000000000000000000000000000000000000000000000000","current_duty_dependent_root":"0x0000000000000000000000000000000000000000000000000000000000000000"}
event: finalized_checkpoint
@@ -415,17 +419,23 @@ data: {"slot":"0","block":"0xeade62f0457b2fdf48e7d3fc4b60736688286be7c7a3ac4c9a1
`
const payloadAttributesBellatrixResult = `event: payload_attributes
const payloadAttributesBellatrixResult = `:
event: payload_attributes
data: {"version":"bellatrix","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0x0000000000000000000000000000000000000000"}}}
`
const payloadAttributesCapellaResult = `event: payload_attributes
const payloadAttributesCapellaResult = `:
event: payload_attributes
data: {"version":"capella","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0x0000000000000000000000000000000000000000","withdrawals":[]}}}
`
const payloadAttributesDenebResult = `event: payload_attributes
const payloadAttributesDenebResult = `:
event: payload_attributes
data: {"version":"deneb","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0x0000000000000000000000000000000000000000","withdrawals":[],"parent_beacon_block_root":"0xbef96cb938fd48b2403d3e662664325abb0102ed12737cbb80d717520e50cf4a"}}}
`

View File

@@ -92,3 +92,27 @@ type BlobSidecarEvent struct {
KzgCommitment string `json:"kzg_commitment"`
VersionedHash string `json:"versioned_hash"`
}
type LightClientFinalityUpdateEvent struct {
Version string `json:"version"`
Data *LightClientFinalityUpdate `json:"data"`
}
type LightClientFinalityUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdateEvent struct {
Version string `json:"version"`
Data *LightClientOptimisticUpdate `json:"data"`
}
type LightClientOptimisticUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
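
A hedged consumer sketch for the two new topics defined above. The events route follows the standard Beacon API, while the base URL and port are assumptions; a production SSE client would also handle reconnects and scanner errors:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	url := "http://localhost:3500/eth/v1/events?topics=light_client_finality_update&topics=light_client_optimistic_update"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		// Lines beginning with ":" are SSE comments; the server now sends one
		// immediately and then one every SECONDS_PER_SLOT as a keepalive.
		if line == "" || strings.HasPrefix(line, ":") {
			continue
		}
		// Remaining lines alternate between "event: <topic>" and "data: <json>".
		fmt.Println(line)
	}
}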

View File

@@ -8,9 +8,10 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/wealdtech/go-bytesutil"
"go.opencensus.io/trace"
"github.com/wealdtech/go-bytesutil"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -219,11 +220,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
return
}
response := &LightClientUpdatesByRangeResponse{
Updates: updates,
}
httputil.WriteJson(w, response)
httputil.WriteJson(w, updates)
}
// GetLightClientFinalityUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/finality_update.yaml
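
With the wrapper struct gone, the by-range endpoint now returns a bare JSON array, which is what the updated tests decode into. A minimal client-side decode under that assumption (the route shown is the spec path, not text from this diff):

package lcclient

import (
	"encoding/json"
	"io"
)

// decodeUpdates reads the response body of /eth/v1/beacon/light_client/updates
// directly into a slice; there is no longer an enclosing {"updates": ...} object.
func decodeUpdates(body io.Reader) ([]json.RawMessage, error) {
	var updates []json.RawMessage
	if err := json.NewDecoder(body).Decode(&updates); err != nil {
		return nil, err
	}
	return updates, nil
}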

View File

@@ -171,12 +171,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange(t *testing.T) {
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates))
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp))
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigInputCount(t *testing.T) {
@@ -274,12 +274,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigInputCount(t *tes
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates)) // Even with big count input, the response is still the max available period, which is 1 in test case.
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp)) // Even with big count input, the response is still the max available period, which is 1 in test case.
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_TooEarlyPeriod(t *testing.T) {
@@ -377,12 +377,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_TooEarlyPeriod(t *testi
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates))
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp))
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigCount(t *testing.T) {
@@ -480,12 +480,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigCount(t *testing.
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates))
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp))
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_BeforeAltair(t *testing.T) {
@@ -583,9 +583,6 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_BeforeAltair(t *testing
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusNotFound, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 0, len(resp.Updates))
}
func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {

View File

@@ -296,11 +296,16 @@ func newLightClientUpdateToJSON(input *v2.LightClientUpdate) *LightClientUpdate
nextSyncCommittee = shared.SyncCommitteeFromConsensus(migration.V2SyncCommitteeToV1Alpha1(input.NextSyncCommittee))
}
var finalizedHeader *shared.BeaconBlockHeader
if input.FinalizedHeader != nil {
finalizedHeader = shared.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(input.FinalizedHeader))
}
return &LightClientUpdate{
AttestedHeader: shared.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(input.AttestedHeader)),
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: branchToJSON(input.NextSyncCommitteeBranch),
FinalizedHeader: shared.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(input.FinalizedHeader)),
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(input.FinalityBranch),
SyncAggregate: syncAggregateToJSON(input.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(input.SignatureSlot), 10),

View File

@@ -17,11 +17,11 @@ type LightClientBootstrap struct {
type LightClientUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
NextSyncCommittee *shared.SyncCommittee `json:"next_sync_committee"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header"`
NextSyncCommittee *shared.SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header,omitempty"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch"`
FinalityBranch []string `json:"finality_branch"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}
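
The omitempty tags pair with the nil guard added in newLightClientUpdateToJSON: updates that carry no finality or sync-committee data now serialize without those keys instead of emitting JSON nulls. A toy illustration with stand-in types, not Prysm's:

package main

import (
	"encoding/json"
	"fmt"
)

type header struct {
	Slot string `json:"slot"`
}

type update struct {
	AttestedHeader  *header  `json:"attested_header"`
	FinalizedHeader *header  `json:"finalized_header,omitempty"`
	FinalityBranch  []string `json:"finality_branch,omitempty"`
	SignatureSlot   string   `json:"signature_slot"`
}

func main() {
	u := update{AttestedHeader: &header{Slot: "1"}, SignatureSlot: "2"}
	b, _ := json.Marshal(u)
	// Prints {"attested_header":{"slot":"1"},"signature_slot":"2"} with no null finality fields.
	fmt.Println(string(b))
}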

View File

@@ -62,7 +62,6 @@ func (s *Server) BlockRewards(w http.ResponseWriter, r *http.Request) {
// AttestationRewards retrieves attestation reward info for validators specified by array of public keys or validator index.
// If no array is provided, return reward info for every validator.
// TODO: Inclusion delay
func (s *Server) AttestationRewards(w http.ResponseWriter, r *http.Request) {
st, ok := s.attRewardsState(w, r)
if !ok {
@@ -277,7 +276,10 @@ func idealAttRewards(
IsPrevEpochTargetAttester: true,
IsPrevEpochHeadAttester: true,
})
idealRewards = append(idealRewards, IdealAttestationReward{EffectiveBalance: strconv.FormatUint(effectiveBalance, 10)})
idealRewards = append(idealRewards, IdealAttestationReward{
EffectiveBalance: strconv.FormatUint(effectiveBalance, 10),
Inactivity: strconv.FormatUint(0, 10),
})
break
}
}
@@ -299,6 +301,11 @@ func idealAttRewards(
} else {
idealRewards[i].Target = strconv.FormatUint(d.TargetReward, 10)
}
if d.InactivityPenalty > 0 {
idealRewards[i].Inactivity = fmt.Sprintf("-%s", strconv.FormatUint(d.InactivityPenalty, 10))
} else {
idealRewards[i].Inactivity = strconv.FormatUint(d.InactivityPenalty, 10)
}
}
return idealRewards, true
}
@@ -331,6 +338,11 @@ func totalAttRewards(
} else {
totalRewards[i].Target = strconv.FormatUint(d.TargetReward, 10)
}
if d.InactivityPenalty > 0 {
totalRewards[i].Inactivity = fmt.Sprintf("-%s", strconv.FormatUint(d.InactivityPenalty, 10))
} else {
totalRewards[i].Inactivity = strconv.FormatUint(d.InactivityPenalty, 10)
}
}
return totalRewards, true
}
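
Both reward paths format the new field the same way: a positive InactivityPenalty is reported as a negative decimal string, otherwise "0". A small helper capturing that convention (hypothetical name; the diff inlines the logic instead):

package rewards

import "strconv"

// formatInactivity mirrors the inlined pattern above: penalties become
// negative decimal strings, and zero stays "0".
func formatInactivity(penalty uint64) string {
	if penalty > 0 {
		return "-" + strconv.FormatUint(penalty, 10)
	}
	return strconv.FormatUint(penalty, 10)
}

For example, formatInactivity(4768) returns "-4768", the value the new inactivity test asserts below.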

View File

@@ -396,7 +396,7 @@ func TestAttestationRewards(t *testing.T) {
FinalizationFetcher: mockChainService,
}
t.Run("ok - ideal rewards", func(t *testing.T) {
t.Run("ideal rewards", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
@@ -419,7 +419,7 @@ func TestAttestationRewards(t *testing.T) {
}
assert.Equal(t, uint64(20756849), sum)
})
t.Run("ok - filtered vals", func(t *testing.T) {
t.Run("filtered vals", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
pubkey := fmt.Sprintf("%#x", secretKeys[10].PublicKey().Marshal())
@@ -448,7 +448,7 @@ func TestAttestationRewards(t *testing.T) {
}
assert.Equal(t, uint64(794265), sum)
})
t.Run("ok - all vals", func(t *testing.T) {
t.Run("all vals", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"
request := httptest.NewRequest("POST", url, nil)
writer := httptest.NewRecorder()
@@ -471,36 +471,12 @@ func TestAttestationRewards(t *testing.T) {
}
assert.Equal(t, uint64(54221955), sum)
})
t.Run("ok - penalty", func(t *testing.T) {
st, err := util.NewBeaconStateCapella()
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*3-1))
validators := make([]*eth.Validator, 0, valCount)
balances := make([]uint64, 0, valCount)
for i := 0; i < valCount; i++ {
blsKey, err := bls.RandKey()
require.NoError(t, err)
validators = append(validators, &eth.Validator{
PublicKey: blsKey.PublicKey().Marshal(),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance / 64 * uint64(i),
})
balances = append(balances, params.BeaconConfig().MaxEffectiveBalance/64*uint64(i))
}
t.Run("penalty", func(t *testing.T) {
st := st.Copy()
validators := st.Validators()
validators[63].Slashed = true
require.NoError(t, st.SetValidators(validators))
require.NoError(t, st.SetBalances(balances))
require.NoError(t, st.SetInactivityScores(make([]uint64, len(validators))))
participation := make([]byte, len(validators))
for i := range participation {
participation[i] = 0b111
}
require.NoError(t, st.SetCurrentParticipationBits(participation))
require.NoError(t, st.SetPreviousParticipationBits(participation))
currentSlot := params.BeaconConfig().SlotsPerEpoch * 3
mockChainService := &mock.ChainService{Optimistic: true, Slot: &currentSlot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
params.BeaconConfig().SlotsPerEpoch*3 - 1: st,
@@ -525,8 +501,44 @@ func TestAttestationRewards(t *testing.T) {
resp := &AttestationRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, "0", resp.Data.TotalRewards[0].Head)
assert.Equal(t, "-432270", resp.Data.TotalRewards[0].Source)
assert.Equal(t, "-802788", resp.Data.TotalRewards[0].Target)
assert.Equal(t, "-439299", resp.Data.TotalRewards[0].Source)
assert.Equal(t, "-815841", resp.Data.TotalRewards[0].Target)
assert.Equal(t, "0", resp.Data.TotalRewards[0].Inactivity)
})
t.Run("inactivity", func(t *testing.T) {
st := st.Copy()
validators := st.Validators()
validators[63].Slashed = true
require.NoError(t, st.SetValidators(validators))
inactivityScores, err := st.InactivityScores()
require.NoError(t, err)
inactivityScores[63] = 10
require.NoError(t, st.SetInactivityScores(inactivityScores))
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
params.BeaconConfig().SlotsPerEpoch*3 - 1: st,
}},
TimeFetcher: mockChainService,
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
}
url := "http://only.the.epoch.number.at.the.end.is.important/1"
var body bytes.Buffer
valIds, err := json.Marshal([]string{"63"})
require.NoError(t, err)
_, err = body.Write(valIds)
require.NoError(t, err)
request := httptest.NewRequest("POST", url, &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.AttestationRewards(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &AttestationRewardsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, "-4768", resp.Data.TotalRewards[0].Inactivity)
})
t.Run("invalid validator index/pubkey", func(t *testing.T) {
url := "http://only.the.epoch.number.at.the.end.is.important/1"

View File

@@ -31,6 +31,7 @@ type IdealAttestationReward struct {
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
Inactivity string `json:"inactivity"`
}
type TotalAttestationReward struct {
@@ -38,7 +39,7 @@ type TotalAttestationReward struct {
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
InclusionDelay string `json:"inclusion_delay"`
Inactivity string `json:"inactivity"`
}
type SyncCommitteeRewardsResponse struct {

View File

@@ -388,10 +388,6 @@ func (s *Server) GetAttestationData(w http.ResponseWriter, r *http.Request) {
return
}
if isOptimistic, err := shared.IsOptimistic(ctx, w, s.OptimisticModeFetcher); isOptimistic || err != nil {
return
}
_, slot, ok := shared.UintFromQuery(w, r, "slot", true)
if !ok {
return

View File

@@ -201,7 +201,7 @@ func (s *Server) ProduceBlockV3(w http.ResponseWriter, r *http.Request) {
}
func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *http.Request, v1alpha1req *eth.BlockRequest, requiredType blockType) {
isSSZ := httputil.SszRequested(r)
isSSZ := httputil.RespondWithSsz(r)
v1alpha1resp, err := s.V1Alpha1Server.GetBeaconBlock(ctx, v1alpha1req)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)

View File

@@ -831,10 +831,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: chain,
},
}
@@ -914,10 +915,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: chain,
HeadFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: chain,
HeadFetcher: chain,
FinalizedFetcher: chain,
OptimisticModeFetcher: chain,
},
}
@@ -959,8 +961,9 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
GenesisTimeFetcher: chain,
OptimisticModeFetcher: chain,
FinalizedFetcher: chain,
},
}
@@ -1017,10 +1020,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
HeadFetcher: chain,
GenesisTimeFetcher: chain,
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
OptimisticModeFetcher: chain,
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: chain,
},
}
@@ -1065,10 +1069,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}
@@ -1151,10 +1156,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}

View File

@@ -38,7 +38,6 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
@@ -66,7 +65,6 @@ go_library(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
@@ -91,13 +89,13 @@ go_library(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_protobuf//ptypes/empty",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
@@ -141,6 +139,7 @@ common_deps = [
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -31,12 +31,6 @@ func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.Attestation
if vs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
// An optimistic validator MUST NOT participate in attestation. (i.e., sign across the DOMAIN_BEACON_ATTESTER, DOMAIN_SELECTION_PROOF or DOMAIN_AGGREGATE_AND_PROOF domains).
if err := vs.optimisticStatus(ctx); err != nil {
return nil, err
}
res, err := vs.CoreService.GetAttestationData(ctx, req)
if err != nil {
return nil, status.Errorf(core.ErrorReasonToGRPC(err.Reason), "Could not get attestation data: %v", err.Err)
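
This removal, together with the matching removal in the HTTP handler and the OptimisticModeFetcher now wired into core.Service in the tests and in Start, moves the optimistic-head guard into the shared core path so both the gRPC and Beacon API routes apply it once. A rough sketch of the consolidated check, using hypothetical names rather than Prysm's actual core method:

package attguard

import (
	"context"
	"errors"
)

type optimisticModeFetcher interface {
	IsOptimistic(ctx context.Context) (bool, error)
}

// guardNotOptimistic refuses attestation-data requests while the head is
// optimistic, since an optimistic validator must not sign attestations.
func guardNotOptimistic(ctx context.Context, f optimisticModeFetcher) error {
	optimistic, err := f.IsOptimistic(ctx)
	if err != nil {
		return err
	}
	if optimistic {
		return errors.New("node is currently optimistic and cannot serve attestation data")
	}
	return nil
}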

View File

@@ -61,10 +61,11 @@ func TestAttestationDataAtSlot_HandlesFarAwayJustifiedEpoch(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}

View File

@@ -116,8 +116,9 @@ func TestGetAttestationData_OK(t *testing.T) {
GenesisTimeFetcher: &mock.ChainService{
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
AttestationCache: cache.NewAttestationCache(),
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
@@ -176,7 +177,8 @@ func BenchmarkGetAttestationDataConcurrent(b *testing.B) {
GenesisTimeFetcher: &mock.ChainService{
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
},
}
@@ -222,9 +224,10 @@ func TestGetAttestationData_Optimistic(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: true},
TimeFetcher: &mock.ChainService{Genesis: time.Now()},
CoreService: &core.Service{
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{},
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{},
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: &mock.ChainService{Optimistic: true},
},
}
_, err := as.GetAttestationData(context.Background(), &ethpb.AttestationDataRequest{})
@@ -240,10 +243,11 @@ func TestGetAttestationData_Optimistic(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now()},
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{Optimistic: false, State: beaconState},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{}},
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{Optimistic: false, State: beaconState},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{}},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
_, err = as.GetAttestationData(context.Background(), &ethpb.AttestationDataRequest{})
@@ -260,7 +264,8 @@ func TestServer_GetAttestationData_InvalidRequestSlot(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
CoreService: &core.Service{
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
@@ -301,10 +306,11 @@ func TestServer_GetAttestationData_RequestSlotIsDifferentThanCurrentSlot(t *test
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
CoreService: &core.Service{
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot2, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot2, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
util.SaveBlock(t, ctx, db, block)
@@ -346,8 +352,9 @@ func TestGetAttestationData_SucceedsInFirstEpoch(t *testing.T) {
HeadFetcher: &mock.ChainService{
TargetRoot: targetRoot, Root: blockRoot[:],
},
GenesisTimeFetcher: &mock.ChainService{Genesis: prysmTime.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
GenesisTimeFetcher: &mock.ChainService{Genesis: prysmTime.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}

View File

@@ -359,16 +359,17 @@ func (s *Service) Start() {
})
coreService := &core.Service{
HeadFetcher: s.cfg.HeadFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
SyncChecker: s.cfg.SyncService,
Broadcaster: s.cfg.Broadcaster,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
OperationNotifier: s.cfg.OperationNotifier,
AttestationCache: cache.NewAttestationCache(),
StateGen: s.cfg.StateGen,
P2P: s.cfg.Broadcaster,
FinalizedFetcher: s.cfg.FinalizationFetcher,
HeadFetcher: s.cfg.HeadFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
SyncChecker: s.cfg.SyncService,
Broadcaster: s.cfg.Broadcaster,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
OperationNotifier: s.cfg.OperationNotifier,
AttestationCache: cache.NewAttestationCache(),
StateGen: s.cfg.StateGen,
P2P: s.cfg.Broadcaster,
FinalizedFetcher: s.cfg.FinalizationFetcher,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
}
validatorServer := &validatorv1alpha1.Server{

View File

@@ -22,6 +22,7 @@ type BeaconState interface {
WriteOnlyBeaconState
Copy() BeaconState
CopyAllTries()
Defragment()
HashTreeRoot(ctx context.Context) ([32]byte, error)
Prover
json.Marshaler

View File

@@ -123,6 +123,55 @@ func NewMultiValueValidators(vals []*ethpb.Validator) *MultiValueValidators {
return mv
}
// Defragment checks whether each individual multi-value field in our state is fragmented
// and if it is, it will 'reset' the field to create a new multivalue object.
func (b *BeaconState) Defragment() {
b.lock.Lock()
defer b.lock.Unlock()
if b.blockRootsMultiValue != nil && b.blockRootsMultiValue.IsFragmented() {
initialMVslice := b.blockRootsMultiValue
b.blockRootsMultiValue = b.blockRootsMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.BlockRoots.String()).Inc()
runtime.SetFinalizer(b.blockRootsMultiValue, blockRootsFinalizer)
}
if b.stateRootsMultiValue != nil && b.stateRootsMultiValue.IsFragmented() {
initialMVslice := b.stateRootsMultiValue
b.stateRootsMultiValue = b.stateRootsMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.StateRoots.String()).Inc()
runtime.SetFinalizer(b.stateRootsMultiValue, stateRootsFinalizer)
}
if b.randaoMixesMultiValue != nil && b.randaoMixesMultiValue.IsFragmented() {
initialMVslice := b.randaoMixesMultiValue
b.randaoMixesMultiValue = b.randaoMixesMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.RandaoMixes.String()).Inc()
runtime.SetFinalizer(b.randaoMixesMultiValue, randaoMixesFinalizer)
}
if b.balancesMultiValue != nil && b.balancesMultiValue.IsFragmented() {
initialMVslice := b.balancesMultiValue
b.balancesMultiValue = b.balancesMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.Balances.String()).Inc()
runtime.SetFinalizer(b.balancesMultiValue, balancesFinalizer)
}
if b.validatorsMultiValue != nil && b.validatorsMultiValue.IsFragmented() {
initialMVslice := b.validatorsMultiValue
b.validatorsMultiValue = b.validatorsMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.Validators.String()).Inc()
runtime.SetFinalizer(b.validatorsMultiValue, validatorsFinalizer)
}
if b.inactivityScoresMultiValue != nil && b.inactivityScoresMultiValue.IsFragmented() {
initialMVslice := b.inactivityScoresMultiValue
b.inactivityScoresMultiValue = b.inactivityScoresMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.InactivityScores.String()).Inc()
runtime.SetFinalizer(b.inactivityScoresMultiValue, inactivityScoresFinalizer)
}
}
func randaoMixesFinalizer(m *MultiValueRandaoMixes) {
multiValueCountGauge.WithLabelValues(types.RandaoMixes.String()).Dec()
}
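
Defragment is exposed on the BeaconState interface (see the Defragment() addition above), so any long-lived holder of a state can trigger compaction. A hypothetical caller sketch; running it on a ticker and the five-minute interval are assumptions, not Prysm's actual policy:

package statemaint

import (
	"context"
	"time"

	"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
)

// defragmentLoop periodically asks the state to rebuild any fragmented
// multi-value fields; Defragment is a no-op when nothing is fragmented.
func defragmentLoop(ctx context.Context, st state.BeaconState) {
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			st.Defragment()
		case <-ctx.Done():
			return
		}
	}
}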

View File

@@ -606,7 +606,7 @@ func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
req := &ethpb.BeaconBlocksByRangeRequest{
Count: 64,
Count: 64,
}
topic := p2pm.RPCBlocksByRangeTopicV1

View File

@@ -79,13 +79,6 @@ func (s *Service) processPendingAtts(ctx context.Context) error {
seen := s.seenPendingBlocks[bRoot]
s.pendingQueueLock.RUnlock()
if !seen {
// Pending attestation's missing block has not arrived yet.
log.WithFields(logrus.Fields{
"currentSlot": s.cfg.clock.CurrentSlot(),
"attSlot": attestations[0].Message.Aggregate.Data.Slot,
"attCount": len(attestations),
"blockRoot": hex.EncodeToString(bytesutil.Trunc(bRoot[:])),
}).Debug("Requesting block for pending attestation")
pendingRoots = append(pendingRoots, bRoot)
}
}

View File

@@ -53,7 +53,7 @@ func TestProcessPendingAtts_NoBlockRequestBlock(t *testing.T) {
a := &ethpb.AggregateAttestationAndProof{Aggregate: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{Root: make([]byte, 32)}}}}
r.blkRootToPendingAtts[[32]byte{'A'}] = []*ethpb.SignedAggregateAttestationAndProof{{Message: a}}
require.NoError(t, r.processPendingAtts(context.Background()))
require.LogsContain(t, hook, "Requesting block for pending attestation")
require.LogsContain(t, hook, "Requesting block by root")
}
func TestProcessPendingAtts_HasBlockSaveUnAggregatedAtt(t *testing.T) {

View File

@@ -126,7 +126,6 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
// Request parent block if not in the pending queue and not in the database.
isParentBlockInDB := s.cfg.beaconDB.HasBlock(ctx, parentRoot)
if !inPendingQueue && !isParentBlockInDB && s.hasPeer() {
log.WithFields(logrus.Fields{"currentSlot": b.Block().Slot(), "parentRoot": hex.EncodeToString(parentRoot[:])}).Debug("Requesting parent block")
parentRoots = append(parentRoots, parentRoot)
continue
}
@@ -283,6 +282,8 @@ func (s *Service) sendBatchRootRequest(ctx context.Context, roots [][32]byte, ra
r := roots[i]
if s.seenPendingBlocks[r] || s.cfg.chain.BlockBeingSynced(r) {
roots = append(roots[:i], roots[i+1:]...)
} else {
log.WithField("blockRoot", fmt.Sprintf("%#x", r)).Debug("Requesting block by root")
}
}
s.pendingQueueLock.RUnlock()

View File

@@ -31,11 +31,12 @@ func DefaultDataDir() string {
// Try to place the data folder in the user's home dir
home := file.HomeDir()
if home != "" {
if runtime.GOOS == "darwin" {
switch runtime.GOOS {
case "darwin":
return filepath.Join(home, "Library", "Eth2")
} else if runtime.GOOS == "windows" {
case "windows":
return filepath.Join(home, "AppData", "Local", "Eth2")
} else {
default:
return filepath.Join(home, ".eth2")
}
}
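
For readers outside the repository, the refactored helper is equivalent to this stdlib-only sketch (os.UserHomeDir stands in for Prysm's file.HomeDir, and the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// defaultDataDir mirrors the switch above: one case per OS, with ~/.eth2 as the fallback.
func defaultDataDir() string {
	home, err := os.UserHomeDir()
	if err != nil || home == "" {
		return ""
	}
	switch runtime.GOOS {
	case "darwin":
		return filepath.Join(home, "Library", "Eth2")
	case "windows":
		return filepath.Join(home, "AppData", "Local", "Eth2")
	default:
		return filepath.Join(home, ".eth2")
	}
}

func main() {
	fmt.Println(defaultDataDir())
}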

View File

@@ -16,7 +16,7 @@ type StdInPasswordReader struct {
}
// ReadPassword reads a password from stdin.
func (_ StdInPasswordReader) ReadPassword() (string, error) {
func (StdInPasswordReader) ReadPassword() (string, error) {
pwd, err := terminal.ReadPassword(int(os.Stdin.Fd()))
return string(pwd), err
}

View File

@@ -5,7 +5,6 @@ load("//tools:prysm_image.bzl", "prysm_image_upload")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"main.go",
"usage.go",
],
@@ -30,6 +29,7 @@ go_library(
"//runtime/version:go_default_library",
"//validator/node:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],

View File

@@ -1,5 +0,0 @@
package main
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "main")

View File

@@ -10,6 +10,7 @@ import (
runtimeDebug "runtime/debug"
joonix "github.com/joonix/log"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/cmd"
accountcommands "github.com/prysmaticlabs/prysm/v4/cmd/validator/accounts"
dbcommands "github.com/prysmaticlabs/prysm/v4/cmd/validator/db"
@@ -31,8 +32,10 @@ import (
"github.com/urfave/cli/v2"
)
var log = logrus.WithField("prefix", "main")
func startNode(ctx *cli.Context) error {
// verify if ToS accepted
// Verify if ToS is accepted.
if err := tos.VerifyTosAcceptedOrPrompt(ctx); err != nil {
return err
}
@@ -141,6 +144,8 @@ func main() {
return err
}
logFileName := ctx.String(cmd.LogFileName.Name)
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
@@ -148,8 +153,8 @@ func main() {
formatter.TimestampFormat = "2006-01-02 15:04:05"
formatter.FullTimestamp = true
// If persistent log files are written - we disable the log messages coloring because
// the colors are ANSI codes and seen as Gibberish in the log files.
formatter.DisableColors = ctx.String(cmd.LogFileName.Name) != ""
// the colors are ANSI codes and seen as gibberish in the log files.
formatter.DisableColors = logFileName != ""
logrus.SetFormatter(formatter)
case "fluentd":
f := joonix.NewFormatter()
@@ -167,7 +172,6 @@ func main() {
return fmt.Errorf("unknown log format %s", format)
}
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName); err != nil {
log.WithError(err).Error("Failed to configuring logging to disk.")
@@ -182,8 +186,9 @@ func main() {
}
if err := debug.Setup(ctx); err != nil {
return err
return errors.Wrap(err, "failed to setup debug")
}
return cmd.ValidateNoArgs(ctx)
},
After: func(ctx *cli.Context) error {

View File

@@ -29,7 +29,7 @@ Examples of when not to use a feature flag:
Once it has been decided that you should use a feature flag, follow these steps to safely
release your feature. In general, try to create a single PR for each step of this process.
1. Add your feature flag to shared/featureconfig/flags.go, use the flag to toggle a boolean in the
1. Add your feature flag to `shared/featureconfig/flags.go`, use the flag to toggle a boolean in the
feature config in `shared/featureconfig/config.go`. It is a good idea to use the `enable` prefix for
your flag since you're going to invert the flag in a later step, i.e. you will use the `disable` prefix
later. For example, `--enable-my-feature`. Additionally, [create a feature flag tracking issue](https://github.com/prysmaticlabs/prysm/issues/new?template=feature_flag.md)
@@ -48,7 +48,7 @@ func someExistingMethod(ctx context.Context) error {
3. Add the flag to the end-to-end tests. This set of flags can also be found in `shared/featureconfig/flags.go`.
4. Test the functionality locally and safely in production. Once you have enough confidence that
your new function works and is safe to release then move onto the next step.
5. Move your existing flag to the deprecated section of shared/featureconfig/flags.go. It is
5. Move your existing flag to the deprecated section of `shared/featureconfig/flags.go`. It is
important NOT to delete your existing flag outright. Deleting a flag can be extremely frustrating
to users as it may break their existing workflow! Marking a flag as deprecated gives users time to
adjust their start scripts and workflow. Add another feature flag to represent the inverse of your

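
For step 1, the wiring usually amounts to a flag definition plus a config toggle, as in the hedged sketch below; EnableMyFeature, logEnabled, and cfg are illustrative placeholders, and the EnableLightClient flag added later in this changeset follows the same shape.

// In flags.go (sketch): declare the boolean CLI flag.
EnableMyFeature = &cli.BoolFlag{
	Name:  "enable-my-feature",
	Usage: "Enables my experimental feature.",
}

// In the Configure* function (sketch): toggle the matching boolean in the feature config.
if ctx.IsSet(EnableMyFeature.Name) {
	logEnabled(EnableMyFeature)
	cfg.EnableMyFeature = true
}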
View File

@@ -23,10 +23,11 @@ import (
"sync"
"time"
"github.com/prysmaticlabs/prysm/v4/cmd"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
"github.com/prysmaticlabs/prysm/v4/cmd"
"github.com/prysmaticlabs/prysm/v4/config/params"
)
var log = logrus.WithField("prefix", "flags")
@@ -40,6 +41,7 @@ type Flags struct {
EnableExperimentalState bool // EnableExperimentalState turns on the latest and greatest (but potentially unstable) changes to the beacon state.
WriteSSZStateTransitions bool // WriteSSZStateTransitions to tmp directory.
EnablePeerScorer bool // EnablePeerScorer enables experimental peer scoring in p2p.
EnableLightClient bool // EnableLightClient enables light client APIs.
WriteWalletPasswordOnWebOnboarding bool // WriteWalletPasswordOnWebOnboarding writes the password to disk after Prysm web signup.
EnableDoppelGanger bool // EnableDoppelGanger enables doppelganger protection on startup for the validator.
EnableHistoricalSpaceRepresentation bool // EnableHistoricalSpaceRepresentation enables the saving of registry validators in separate buckets to save space
@@ -237,6 +239,10 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(EnableEIP4881)
cfg.EnableEIP4881 = true
}
if ctx.IsSet(EnableLightClient.Name) {
logEnabled(EnableLightClient)
cfg.EnableLightClient = true
}
cfg.AggregateIntervals = [3]time.Duration{aggregateFirstInterval.Value, aggregateSecondInterval.Value, aggregateThirdInterval.Value}
Init(cfg)
return nil

View File

@@ -140,6 +140,10 @@ var (
Name: "enable-eip-4881",
Usage: "Enables the deposit tree specified in EIP-4881.",
}
EnableLightClient = &cli.BoolFlag{
Name: "enable-lightclient",
Usage: "Enables the light client support in the beacon node",
}
disableResourceManager = &cli.BoolFlag{
Name: "disable-resource-manager",
Usage: "Disables running the libp2p resource manager.",
@@ -204,6 +208,7 @@ var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []c
EnableEIP4881,
disableResourceManager,
DisableRegistrationCache,
EnableLightClient,
}...)...)
// E2EBeaconChainFlags contains a list of the beacon chain feature flags to be tested in E2E.

View File

@@ -96,6 +96,10 @@ import (
"github.com/pkg/errors"
)
// Amount of references beyond which a multivalue object is considered
// fragmented.
const fragmentationLimit = 50000
// Id is an object identifier.
type Id = uint64
@@ -424,6 +428,58 @@ func (s *Slice[V]) MultiValueStatistics() MultiValueStatistics {
return stats
}
// IsFragmented checks if our multivalue object is fragmented (individual references held).
// If the number of references is higher than our threshold we return true.
func (s *Slice[V]) IsFragmented() bool {
stats := s.MultiValueStatistics()
return stats.TotalIndividualElemReferences+stats.TotalAppendedElemReferences >= fragmentationLimit
}
// Reset builds a new multivalue object with respect to the
// provided object's id. The base slice will be based on this
// particular id.
func (s *Slice[V]) Reset(obj Identifiable) *Slice[V] {
s.lock.RLock()
defer s.lock.RUnlock()
l, ok := s.cachedLengths[obj.Id()]
if !ok {
l = len(s.sharedItems)
}
items := make([]V, l)
copy(items, s.sharedItems)
for i, ind := range s.individualItems {
for _, v := range ind.Values {
_, found := containsId(v.ids, obj.Id())
if found {
items[i] = v.val
break
}
}
}
index := len(s.sharedItems)
for _, app := range s.appendedItems {
found := true
for _, v := range app.Values {
_, found = containsId(v.ids, obj.Id())
if found {
items[index] = v.val
index++
break
}
}
if !found {
break
}
}
reset := &Slice[V]{}
reset.Init(items)
return reset
}
func (s *Slice[V]) fillOriginalItems(obj Identifiable, items *[]V) {
for i, item := range s.sharedItems {
ind, ok := s.individualItems[uint64(i)]

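
IsFragmented trips once TotalIndividualElemReferences plus TotalAppendedElemReferences reaches fragmentationLimit (50000), and Reset then rebuilds a compact slice holding only the caller's view. The beacon-state code at the top of this changeset consumes the pair roughly like the sketch below, where b, b.field, and fieldFinalizer are illustrative names:

// Caller-side defragmentation pattern (compare the state changes earlier in this changeset).
if b.field != nil && b.field.IsFragmented() {
	old := b.field
	b.field = b.field.Reset(b) // compact copy containing only b's values
	old.Detach(b)              // release b's reference on the fragmented slice
	runtime.SetFinalizer(b.field, fieldFinalizer)
}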
View File

@@ -326,6 +326,156 @@ func TestDetach(t *testing.T) {
assert.Equal(t, false, ok)
}
func TestReset(t *testing.T) {
s := setup()
obj := &testObject{id: 2}
reset := s.Reset(obj)
assert.Equal(t, 8, len(reset.sharedItems))
assert.Equal(t, 123, reset.sharedItems[0])
assert.Equal(t, 2, reset.sharedItems[1])
assert.Equal(t, 3, reset.sharedItems[2])
assert.Equal(t, 123, reset.sharedItems[3])
assert.Equal(t, 2, reset.sharedItems[4])
assert.Equal(t, 2, reset.sharedItems[5])
assert.Equal(t, 3, reset.sharedItems[6])
assert.Equal(t, 2, reset.sharedItems[7])
assert.Equal(t, 0, len(reset.individualItems))
assert.Equal(t, 0, len(reset.appendedItems))
}
func TestFragmentation_IndividualReferences(t *testing.T) {
s := &Slice[int]{}
s.Init([]int{123, 123, 123, 123, 123})
s.individualItems[1] = &MultiValueItem[int]{
Values: []*Value[int]{
{
val: 1,
ids: []uint64{1},
},
{
val: 2,
ids: []uint64{2},
},
},
}
s.individualItems[2] = &MultiValueItem[int]{
Values: []*Value[int]{
{
val: 3,
ids: []uint64{1, 2},
},
},
}
numOfRefs := fragmentationLimit / 2
for i := 3; i < numOfRefs; i++ {
obj := &testObject{id: uint64(i)}
s.Copy(&testObject{id: 1}, obj)
}
assert.Equal(t, false, s.IsFragmented())
// Add more references to hit fragmentation limit. Id 1
// has 2 references above.
for i := numOfRefs; i < numOfRefs+3; i++ {
obj := &testObject{id: uint64(i)}
s.Copy(&testObject{id: 1}, obj)
}
assert.Equal(t, true, s.IsFragmented())
}
func TestFragmentation_AppendedReferences(t *testing.T) {
s := &Slice[int]{}
s.Init([]int{123, 123, 123, 123, 123})
s.appendedItems = []*MultiValueItem[int]{
{
Values: []*Value[int]{
{
val: 1,
ids: []uint64{1},
},
{
val: 2,
ids: []uint64{2},
},
},
},
{
Values: []*Value[int]{
{
val: 3,
ids: []uint64{1, 2},
},
},
},
}
s.cachedLengths[1] = 7
s.cachedLengths[2] = 8
numOfRefs := fragmentationLimit / 2
for i := 3; i < numOfRefs; i++ {
obj := &testObject{id: uint64(i)}
s.Copy(&testObject{id: 1}, obj)
}
assert.Equal(t, false, s.IsFragmented())
// Add more references to hit fragmentation limit. Id 1
// has 2 references above.
for i := numOfRefs; i < numOfRefs+3; i++ {
obj := &testObject{id: uint64(i)}
s.Copy(&testObject{id: 1}, obj)
}
assert.Equal(t, true, s.IsFragmented())
}
func TestFragmentation_IndividualAndAppendedReferences(t *testing.T) {
s := &Slice[int]{}
s.Init([]int{123, 123, 123, 123, 123})
s.individualItems[2] = &MultiValueItem[int]{
Values: []*Value[int]{
{
val: 3,
ids: []uint64{1, 2},
},
},
}
s.appendedItems = []*MultiValueItem[int]{
{
Values: []*Value[int]{
{
val: 1,
ids: []uint64{1},
},
{
val: 2,
ids: []uint64{2},
},
},
},
}
s.cachedLengths[1] = 7
s.cachedLengths[2] = 8
numOfRefs := fragmentationLimit / 2
for i := 3; i < numOfRefs; i++ {
obj := &testObject{id: uint64(i)}
s.Copy(&testObject{id: 1}, obj)
}
assert.Equal(t, false, s.IsFragmented())
// Add more references to hit fragmentation limit. Id 1
// has 2 references above.
for i := numOfRefs; i < numOfRefs+3; i++ {
obj := &testObject{id: uint64(i)}
s.Copy(&testObject{id: 1}, obj)
}
assert.Equal(t, true, s.IsFragmented())
}
// Share the slice between 2 objects.
// Index 0: Shared value
// Index 1: Different individual value

View File

@@ -3617,8 +3617,8 @@ def prysm_deps():
"gazelle:exclude tools.go",
],
importpath = "github.com/quic-go/quic-go",
sum = "h1:o3YB6t2SR+HU/pgwF29kJ6g4jJIJEwEZ8CKia1h1TKg=",
version = "v0.39.3",
sum = "h1:PelfiuG7wXEffUT2yceiqz5V6Pc0TA5ruOd1LcmFc1s=",
version = "v0.39.4",
)
go_repository(
name = "com_github_quic_go_webtransport_go",

View File

@@ -49,6 +49,23 @@ func FromState(marshaled []byte) (*VersionedUnmarshaler, error) {
return FromForkVersion(cv)
}
// FromBlock uses the known size of an offset and signature to determine the slot of a block without unmarshalling it.
// The slot is used to determine the version along with the respective config.
func FromBlock(marshaled []byte) (*VersionedUnmarshaler, error) {
slot, err := slotFromBlock(marshaled)
if err != nil {
return nil, err
}
copiedCfg := params.BeaconConfig().Copy()
epoch := slots.ToEpoch(slot)
fs := forks.NewOrderedSchedule(copiedCfg)
ver, err := fs.VersionForEpoch(epoch)
if err != nil {
return nil, err
}
return FromForkVersion(ver)
}
var ErrForkNotFound = errors.New("version found in fork schedule but can't be matched to a named fork")
// FromForkVersion uses a lookup table to resolve a Version (from a beacon node api for instance, or obtained by peeking at
@@ -162,7 +179,7 @@ var errBlockForkMismatch = errors.New("fork or config detected in unmarshaler is
// UnmarshalBeaconBlock uses internal knowledge in the VersionedUnmarshaler to pick the right concrete ReadOnlySignedBeaconBlock type,
// then Unmarshal()s the type and returns an instance of block.ReadOnlySignedBeaconBlock if successful.
func (cf *VersionedUnmarshaler) UnmarshalBeaconBlock(marshaled []byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
func (cf *VersionedUnmarshaler) UnmarshalBeaconBlock(marshaled []byte) (interfaces.SignedBeaconBlock, error) {
slot, err := slotFromBlock(marshaled)
if err != nil {
return nil, err
@@ -197,7 +214,7 @@ func (cf *VersionedUnmarshaler) UnmarshalBeaconBlock(marshaled []byte) (interfac
// UnmarshalBlindedBeaconBlock uses internal knowledge in the VersionedUnmarshaler to pick the right concrete blinded ReadOnlySignedBeaconBlock type,
// then Unmarshal()s the type and returns an instance of block.ReadOnlySignedBeaconBlock if successful.
// For Phase0 and Altair it works exactly like UnmarshalBeaconBlock.
func (cf *VersionedUnmarshaler) UnmarshalBlindedBeaconBlock(marshaled []byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
func (cf *VersionedUnmarshaler) UnmarshalBlindedBeaconBlock(marshaled []byte) (interfaces.SignedBeaconBlock, error) {
slot, err := slotFromBlock(marshaled)
if err != nil {
return nil, err

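
A minimal usage sketch of the new helper, which is essentially what pcli's detectBlock function later in this changeset does; the file path is illustrative, and the import path is the one pcli uses:

package main

import (
	"log"
	"os"

	"github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
)

func main() {
	raw, err := os.ReadFile("block.ssz") // illustrative path to an SSZ-encoded signed block
	if err != nil {
		log.Fatal(err)
	}
	vu, err := detect.FromBlock(raw) // peeks at the slot to pick the fork version and config
	if err != nil {
		log.Fatal(err)
	}
	blk, err := vu.UnmarshalBeaconBlock(raw)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("decoded signed block at slot %d", blk.Block().Slot())
}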
View File

@@ -204,6 +204,97 @@ func TestUnmarshalState(t *testing.T) {
}
}
func TestDetectAndUnmarshalBlock(t *testing.T) {
undo := util.HackDenebMaxuint(t)
defer undo()
altairS, err := slots.EpochStart(params.BeaconConfig().AltairForkEpoch)
require.NoError(t, err)
bellaS, err := slots.EpochStart(params.BeaconConfig().BellatrixForkEpoch)
require.NoError(t, err)
capellaS, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
denebS, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
b func(*testing.T, primitives.Slot) interfaces.ReadOnlySignedBeaconBlock
name string
slot primitives.Slot
errExists bool
}{
{
name: "genesis - slot 0",
b: signedTestBlockGenesis,
},
{
name: "last slot of phase 0",
b: signedTestBlockGenesis,
slot: altairS - 1,
},
{
name: "first slot of altair",
b: signedTestBlockAltair,
slot: altairS,
},
{
name: "last slot of altair",
b: signedTestBlockAltair,
slot: bellaS - 1,
},
{
name: "first slot of bellatrix",
b: signedTestBlockBellatrix,
slot: bellaS,
},
{
name: "first slot of capella",
b: signedTestBlockCapella,
slot: capellaS,
},
{
name: "last slot of capella",
b: signedTestBlockCapella,
slot: denebS - 1,
},
{
name: "first slot of deneb",
b: signedTestBlockDeneb,
slot: denebS,
},
{
name: "bellatrix block in altair slot",
b: signedTestBlockBellatrix,
slot: bellaS - 1,
errExists: true,
},
{
name: "genesis block in altair slot",
b: signedTestBlockGenesis,
slot: bellaS - 1,
errExists: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.b(t, c.slot)
marshaled, err := b.MarshalSSZ()
require.NoError(t, err)
cf, err := FromBlock(marshaled)
require.NoError(t, err)
bcf, err := cf.UnmarshalBeaconBlock(marshaled)
if c.errExists {
require.NotNil(t, err)
return
}
require.NoError(t, err)
expected, err := b.Block().HashTreeRoot()
require.NoError(t, err)
actual, err := bcf.Block().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, expected, actual)
})
}
}
func TestUnmarshalBlock(t *testing.T) {
undo := util.HackDenebMaxuint(t)
defer undo()

go.mod
View File

@@ -213,7 +213,7 @@ require (
github.com/prometheus/procfs v0.9.0 // indirect
github.com/quic-go/qpack v0.4.0 // indirect
github.com/quic-go/qtls-go1-20 v0.3.4 // indirect
github.com/quic-go/quic-go v0.39.3 // indirect
github.com/quic-go/quic-go v0.39.4 // indirect
github.com/quic-go/webtransport-go v0.6.0 // indirect
github.com/raulk/go-watchdog v1.3.0 // indirect
github.com/rivo/uniseg v0.4.4 // indirect

go.sum
View File

@@ -1117,8 +1117,8 @@ github.com/quic-go/qpack v0.4.0 h1:Cr9BXA1sQS2SmDUWjSofMPNKmvF6IiIfDRmgU0w1ZCo=
github.com/quic-go/qpack v0.4.0/go.mod h1:UZVnYIfi5GRk+zI9UMaCPsmZ2xKJP7XBUvVyT1Knj9A=
github.com/quic-go/qtls-go1-20 v0.3.4 h1:MfFAPULvst4yoMgY9QmtpYmfij/em7O8UUi+bNVm7Cg=
github.com/quic-go/qtls-go1-20 v0.3.4/go.mod h1:X9Nh97ZL80Z+bX/gUXMbipO6OxdiDi58b/fMC9mAL+k=
github.com/quic-go/quic-go v0.39.3 h1:o3YB6t2SR+HU/pgwF29kJ6g4jJIJEwEZ8CKia1h1TKg=
github.com/quic-go/quic-go v0.39.3/go.mod h1:T09QsDQWjLiQ74ZmacDfqZmhY/NLnw5BC40MANNNZ1Q=
github.com/quic-go/quic-go v0.39.4 h1:PelfiuG7wXEffUT2yceiqz5V6Pc0TA5ruOd1LcmFc1s=
github.com/quic-go/quic-go v0.39.4/go.mod h1:T09QsDQWjLiQ74ZmacDfqZmhY/NLnw5BC40MANNNZ1Q=
github.com/quic-go/webtransport-go v0.6.0 h1:CvNsKqc4W2HljHJnoT+rMmbRJybShZ0YPFDD3NxaZLY=
github.com/quic-go/webtransport-go v0.6.0/go.mod h1:9KjU4AEBqEQidGHNDkZrb8CAa1abRaosM2yGOyiikEc=
github.com/raulk/go-watchdog v1.3.0 h1:oUmdlHxdkXRJlwfG0O9omj8ukerm8MEQavSiDTEtBsk=

View File

@@ -22,5 +22,6 @@ go_test(
deps = [
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -12,8 +12,8 @@ import (
// match a number with optional decimals
var priorityRegex = regexp.MustCompile(`q=(\d+(?:\.\d+)?)`)
// SszRequested takes a http request and checks to see if it should be requesting a ssz response.
func SszRequested(req *http.Request) bool {
// RespondWithSsz takes a http request and checks to see if it should be requesting a ssz response.
func RespondWithSsz(req *http.Request) bool {
accept := req.Header.Values("Accept")
if len(accept) == 0 {
return false
@@ -51,3 +51,8 @@ func SszRequested(req *http.Request) bool {
return currentType == api.OctetStreamMediaType
}
// IsRequestSsz checks if the request object should be interpreted as ssz
func IsRequestSsz(req *http.Request) bool {
return req.Header.Get("Content-Type") == api.OctetStreamMediaType
}
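
Taken together, the request body format is decided by the Content-Type header (IsRequestSsz) while the response format is negotiated from Accept (RespondWithSsz). A stdlib-only sketch of a handler built around that split; the constant mirrors api.OctetStreamMediaType, the route is made up, and the decode/encode steps are elided:

package main

import "net/http"

const octetStream = "application/octet-stream" // mirrors api.OctetStreamMediaType

func handleExample(w http.ResponseWriter, r *http.Request) {
	// Body format: decided by Content-Type, as IsRequestSsz does above.
	if r.Header.Get("Content-Type") == octetStream {
		// decode the request body as SSZ ...
	} else {
		// ... otherwise treat it as JSON
	}

	// Response format: negotiated from Accept, as RespondWithSsz does above
	// (the real helper also honors q-values; this sketch does not).
	if r.Header.Get("Accept") == octetStream {
		w.Header().Set("Content-Type", octetStream)
		// write SSZ bytes ...
	} else {
		w.Header().Set("Content-Type", "application/json")
		// write JSON ...
	}
}

func main() {
	http.HandleFunc("/eth/v1/example", handleExample)
	_ = http.ListenAndServe(":8080", nil)
}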

View File

@@ -1,88 +1,123 @@
package httputil
import (
"bytes"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestSSZRequested(t *testing.T) {
func TestRespondWithSsz(t *testing.T) {
t.Run("ssz_requested", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{api.OctetStreamMediaType}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, true, result)
})
t.Run("ssz_content_type_first", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s", api.OctetStreamMediaType, api.JsonMediaType)}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, true, result)
})
t.Run("ssz_content_type_preferred_1", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s;q=0.9,%s", api.JsonMediaType, api.OctetStreamMediaType)}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, true, result)
})
t.Run("ssz_content_type_preferred_2", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s;q=0.95,%s;q=0.9", api.OctetStreamMediaType, api.JsonMediaType)}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, true, result)
})
t.Run("other_content_type_preferred", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s;q=0.9", api.JsonMediaType, api.OctetStreamMediaType)}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
t.Run("other_params", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{fmt.Sprintf("%s,%s;q=0.9,otherparam=xyz", api.JsonMediaType, api.OctetStreamMediaType)}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
t.Run("no_header", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
result := SszRequested(request)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
t.Run("empty_header", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
t.Run("empty_header_value", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{""}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
t.Run("other_content_type", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{"application/other"}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
t.Run("garbage", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example", nil)
request.Header["Accept"] = []string{"This is Sparta!!!"}
result := SszRequested(request)
result := RespondWithSsz(request)
assert.Equal(t, false, result)
})
}
func TestIsRequestSsz(t *testing.T) {
t.Run("ssz Post happy path", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("something")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", &body)
request.Header["Content-Type"] = []string{api.OctetStreamMediaType}
result := IsRequestSsz(request)
assert.Equal(t, true, result)
})
t.Run("ssz Post missing header", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://foo.example", nil)
result := IsRequestSsz(request)
assert.Equal(t, false, result)
})
t.Run("ssz Post wrong content type", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://foo.example", nil)
request.Header["Content-Type"] = []string{"application/other"}
result := IsRequestSsz(request)
assert.Equal(t, false, result)
})
t.Run("ssz Post json content type", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://foo.example", nil)
request.Header["Content-Type"] = []string{api.JsonMediaType}
result := IsRequestSsz(request)
assert.Equal(t, false, result)
})
}

View File

@@ -103,7 +103,12 @@ func WarnIfPlatformNotSupported(ctx context.Context) {
return
}
if !supported {
log.Warn("This platform is not supported. The following platforms are supported: Linux/AMD64," +
" Linux/ARM64, Mac OS X/AMD64 (10.14+ only), and Windows/AMD64")
log.Warn(`This platform is not supported. The following platforms are supported:
- Linux/AMD64
- Linux/ARM64
- Mac OS X/AMD64 (from 10.14+)
- Mac OS X/ARM64 (from 12.5+)
- Windows/AMD64`,
)
}
}

View File

@@ -31,7 +31,7 @@ type ServiceRegistry struct {
serviceTypes []reflect.Type // keep an ordered slice of registered service types.
}
// NewServiceRegistry starts a registry instance for convenience
// NewServiceRegistry starts a registry instance for convenience.
func NewServiceRegistry() *ServiceRegistry {
return &ServiceRegistry{
services: make(map[reflect.Type]Service),

View File

@@ -35,11 +35,13 @@ var (
log = logrus.WithField("prefix", "tos")
)
// VerifyTosAcceptedOrPrompt check if Tos was accepted before or asks to accept.
// VerifyTosAcceptedOrPrompt checks if Tos was accepted before or asks to accept.
func VerifyTosAcceptedOrPrompt(ctx *cli.Context) error {
if file.Exists(filepath.Join(ctx.String(cmd.DataDirFlag.Name), acceptTosFilename)) {
tosFilePath := filepath.Join(ctx.String(cmd.DataDirFlag.Name), acceptTosFilename)
if file.Exists(tosFilePath) {
return nil
}
if ctx.Bool(cmd.AcceptTosFlag.Name) {
saveTosAccepted(ctx)
return nil
@@ -49,6 +51,7 @@ func VerifyTosAcceptedOrPrompt(ctx *cli.Context) error {
if err != nil {
return errors.New(acceptTosPromptErrText)
}
if !strings.EqualFold(input, "accept") {
return errors.New("you have to accept Terms and Conditions in order to continue")
}

View File

@@ -83,7 +83,7 @@ func runPrecomputeRewardsAndPenaltiesTest(t *testing.T, testFolderPath string) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
totalSpecTestRewards := make([]uint64, len(rewards))

View File

@@ -87,7 +87,7 @@ func runPrecomputeRewardsAndPenaltiesTest(t *testing.T, testFolderPath string) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
totalSpecTestRewards := make([]uint64, len(rewards))

View File

@@ -87,7 +87,7 @@ func runPrecomputeRewardsAndPenaltiesTest(t *testing.T, testFolderPath string) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
totalSpecTestRewards := make([]uint64, len(rewards))

View File

@@ -79,7 +79,7 @@ func runPrecomputeRewardsAndPenaltiesTest(t *testing.T, testFolderPath string) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
totalSpecTestRewards := make([]uint64, len(rewards))

View File

@@ -81,6 +81,20 @@ func (mr *MockNodeClientMockRecorder) GetVersion(arg0, arg1 interface{}) *gomock
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetVersion", reflect.TypeOf((*MockNodeClient)(nil).GetVersion), arg0, arg1)
}
// IsHealthy mocks base method.
func (m *MockNodeClient) IsHealthy(arg0 context.Context) bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret0, _ := ret[0].(bool)
return ret0
}
// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockNodeClientMockRecorder) IsHealthy(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockNodeClient)(nil).IsHealthy), arg0)
}
// ListPeers mocks base method.
func (m *MockNodeClient) ListPeers(arg0 context.Context, arg1 *emptypb.Empty) (*eth.Peers, error) {
m.ctrl.T.Helper()

View File

@@ -67,6 +67,20 @@ func (mr *MockValidatorClientMockRecorder) DomainData(arg0, arg1 interface{}) *g
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "DomainData", reflect.TypeOf((*MockValidatorClient)(nil).DomainData), arg0, arg1)
}
// EventStreamIsRunning mocks base method.
func (m *MockValidatorClient) EventStreamIsRunning() bool {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "EventStreamIsRunning")
ret0, _ := ret[0].(bool)
return ret0
}
// EventStreamIsRunning indicates an expected call of EventStreamIsRunning.
func (mr *MockValidatorClientMockRecorder) EventStreamIsRunning() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "EventStreamIsRunning", reflect.TypeOf((*MockValidatorClient)(nil).EventStreamIsRunning))
}
// GetAttestationData mocks base method.
func (m *MockValidatorClient) GetAttestationData(arg0 context.Context, arg1 *eth.AttestationDataRequest) (*eth.AttestationData, error) {
m.ctrl.T.Helper()
@@ -247,6 +261,20 @@ func (mr *MockValidatorClientMockRecorder) ProposeExit(arg0, arg1 interface{}) *
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ProposeExit", reflect.TypeOf((*MockValidatorClient)(nil).ProposeExit), arg0, arg1)
}
// StartEventStream mocks base method.
func (m *MockValidatorClient) StartEventStream(arg0 context.Context) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "StartEventStream", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// StartEventStream indicates an expected call of StartEventStream.
func (mr *MockValidatorClientMockRecorder) StartEventStream(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StartEventStream", reflect.TypeOf((*MockValidatorClient)(nil).StartEventStream), arg0)
}
// StreamSlots mocks base method.
func (m *MockValidatorClient) StreamSlots(arg0 context.Context, arg1 *eth.StreamSlotsRequest) (eth.BeaconNodeValidator_StreamSlotsClient, error) {
m.ctrl.T.Helper()

View File

@@ -185,7 +185,7 @@ func (h *handler) httpHandler(w http.ResponseWriter, _ *http.Request) {
write(w, []byte("Node ID: "+n.ID().String()+"\n"))
write(w, []byte("IP: "+n.IP().String()+"\n"))
write(w, []byte(fmt.Sprintf("UDP Port: %d", n.UDP())+"\n"))
write(w, []byte(fmt.Sprintf("TCP Port: %d", n.UDP())+"\n\n"))
write(w, []byte(fmt.Sprintf("TCP Port: %d", n.TCP())+"\n\n"))
}
}

View File

@@ -9,13 +9,17 @@ go_library(
visibility = ["//visibility:private"],
deps = [
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//consensus-types/blocks:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//encoding/ssz/equality:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/logging/logrus-prefixed-formatter:go_default_library",
"//runtime/version:go_default_library",
"@com_github_kr_pretty//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",

View File

@@ -11,10 +11,14 @@ import (
"time"
"github.com/kr/pretty"
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz/equality"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prefixed "github.com/prysmaticlabs/prysm/v4/runtime/logging/logrus-prefixed-formatter"
@@ -28,6 +32,7 @@ func main() {
var blockPath string
var preStatePath string
var expectedPostStatePath string
var network string
var sszPath string
var sszType string
@@ -157,8 +162,36 @@ func main() {
Usage: "Path to expected post state file(ssz)",
Destination: &expectedPostStatePath,
},
&cli.StringFlag{
Name: "network",
Usage: "Network to run the state transition in",
Destination: &network,
},
},
Action: func(c *cli.Context) error {
if network != "" {
switch network {
case params.PraterName:
if err := params.SetActive(params.PraterConfig()); err != nil {
log.Fatal(err)
}
case params.GoerliName:
if err := params.SetActive(params.PraterConfig()); err != nil {
log.Fatal(err)
}
case params.SepoliaName:
if err := params.SetActive(params.SepoliaConfig()); err != nil {
log.Fatal(err)
}
case params.HoleskyName:
if err := params.SetActive(params.HoleskyConfig()); err != nil {
log.Fatal(err)
}
default:
log.Fatalf("Unknown network provided: %s", network)
}
}
if blockPath == "" {
log.Info("Block path not provided for state transition. " +
"Please provide path")
@@ -172,11 +205,11 @@ func main() {
}
blockPath = text
}
block := &ethpb.SignedBeaconBlock{}
if err := dataFetcher(blockPath, block); err != nil {
block, err := detectBlock(blockPath)
if err != nil {
log.Fatal(err)
}
blkRoot, err := block.Block.HashTreeRoot()
blkRoot, err := block.Block().HashTreeRoot()
if err != nil {
log.Fatal(err)
}
@@ -193,11 +226,7 @@ func main() {
}
preStatePath = text
}
preState := &ethpb.BeaconState{}
if err := dataFetcher(preStatePath, preState); err != nil {
log.Fatal(err)
}
stateObj, err := state_native.InitializeFromProtoPhase0(preState)
stateObj, err := detectState(preStatePath)
if err != nil {
log.Fatal(err)
}
@@ -206,18 +235,14 @@ func main() {
log.Fatal(err)
}
log.WithFields(log.Fields{
"blockSlot": fmt.Sprintf("%d", block.Block.Slot),
"blockSlot": fmt.Sprintf("%d", block.Block().Slot()),
"preStateSlot": fmt.Sprintf("%d", stateObj.Slot()),
}).Infof(
"Performing state transition with a block root of %#x and pre state root of %#x",
blkRoot,
preStateRoot,
)
wsb, err := blocks.NewSignedBeaconBlock(block)
if err != nil {
log.Fatal(err)
}
postState, err := transition.ExecuteStateTransition(context.Background(), stateObj, wsb)
postState, err := transition.ExecuteStateTransition(context.Background(), stateObj, block)
if err != nil {
log.Fatal(err)
}
@@ -229,12 +254,12 @@ func main() {
// Diff the state if a post state is provided.
if expectedPostStatePath != "" {
expectedState := &ethpb.BeaconState{}
if err := dataFetcher(expectedPostStatePath, expectedState); err != nil {
expectedState, err := detectState(expectedPostStatePath)
if err != nil {
log.Fatal(err)
}
if !equality.DeepEqual(expectedState, postState.ToProtoUnsafe()) {
diff, _ := messagediff.PrettyDiff(expectedState, postState.ToProtoUnsafe())
if !equality.DeepEqual(expectedState.ToProtoUnsafe(), postState.ToProtoUnsafe()) {
diff, _ := messagediff.PrettyDiff(expectedState.ToProtoUnsafe(), postState.ToProtoUnsafe())
log.Errorf("Derived state differs from provided post state: %s", diff)
}
}
@@ -257,6 +282,34 @@ func dataFetcher(fPath string, data fssz.Unmarshaler) error {
return data.UnmarshalSSZ(rawFile)
}
func detectState(fPath string) (state.BeaconState, error) {
rawFile, err := os.ReadFile(fPath) // #nosec G304
if err != nil {
return nil, err
}
vu, err := detect.FromState(rawFile)
if err != nil {
return nil, errors.Wrap(err, "error detecting state from file")
}
s, err := vu.UnmarshalBeaconState(rawFile)
if err != nil {
return nil, errors.Wrap(err, "error unmarshalling state")
}
return s, nil
}
func detectBlock(fPath string) (interfaces.SignedBeaconBlock, error) {
rawFile, err := os.ReadFile(fPath) // #nosec G304
if err != nil {
return nil, err
}
vu, err := detect.FromBlock(rawFile)
if err != nil {
return nil, err
}
return vu.UnmarshalBeaconBlock(rawFile)
}
func prettyPrint(sszPath string, data fssz.Unmarshaler) {
if err := dataFetcher(sszPath, data); err != nil {
log.Fatal(err)

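
The network handling above is just a name-to-config mapping; restated as a hedged helper (not part of pcli, relying on the params and fmt imports already in this file; Goerli intentionally maps to the Prater config because they name the same network):

// setNetworkConfig is an illustrative restatement of the switch in the Action above.
func setNetworkConfig(network string) error {
	switch network {
	case "":
		return nil // no --network flag given: keep the default config
	case params.PraterName, params.GoerliName:
		return params.SetActive(params.PraterConfig())
	case params.SepoliaName:
		return params.SetActive(params.SepoliaConfig())
	case params.HoleskyName:
		return params.SetActive(params.HoleskyConfig())
	default:
		return fmt.Errorf("unknown network provided: %s", network)
	}
}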
View File

@@ -23,7 +23,7 @@ type Wallet interface {
// Read methods for important wallet and accounts-related files.
ReadFileAtPath(ctx context.Context, filePath string, fileName string) ([]byte, error)
// Write methods to persist important wallet and accounts-related files to disk.
WriteFileAtPath(ctx context.Context, pathName string, fileName string, data []byte) error
WriteFileAtPath(ctx context.Context, pathName string, fileName string, data []byte) (bool, error)
// Method for initializing a new keymanager.
InitializeKeymanager(ctx context.Context, cfg InitKeymanagerConfig) (keymanager.IKeymanager, error)
}

View File

@@ -55,19 +55,19 @@ func (w *Wallet) Password() string {
}
// WriteFileAtPath --
func (w *Wallet) WriteFileAtPath(_ context.Context, pathName, fileName string, data []byte) error {
func (w *Wallet) WriteFileAtPath(_ context.Context, pathName, fileName string, data []byte) (bool, error) {
w.lock.Lock()
defer w.lock.Unlock()
if w.HasWriteFileError {
// reset the flag to not contaminate other tests
w.HasWriteFileError = false
return errors.New("could not write keystore file for accounts")
return false, errors.New("could not write keystore file for accounts")
}
if w.Files[pathName] == nil {
w.Files[pathName] = make(map[string][]byte)
}
w.Files[pathName][fileName] = data
return nil
return true, nil
}
// ReadFileAtPath --
@@ -212,3 +212,15 @@ func (m *Validator) SetProposerSettings(_ context.Context, settings *validatorse
m.proposerSettings = settings
return nil
}
func (_ *Validator) StartEventStream(_ context.Context) error {
panic("implement me")
}
func (_ *Validator) EventStreamIsRunning() bool {
panic("implement me")
}
func (_ *Validator) NodeIsHealthy(ctx context.Context) bool {
panic("implement me")
}

Some files were not shown because too many files have changed in this diff Show More