Compare commits

...

29 Commits

Author SHA1 Message Date
Potuz
79bb7efbf8 Check init sync before getting payload attributes (#13479)
* Check init sync before getting payload attributes

This PR adds a helper to forkchoice to return the delay of the latest
imported block. It also adds a helper with a heuristic to check whether the
node is in init sync. If the highest imported block was imported with a
delay of less than an epoch, the node is considered to be in regular sync.
If, on the other hand, that block was imported more than an epoch late and
is also more than two epochs old, the node is considered to be in init sync
(see the sketch after the list below).

The helper to check this only uses forkchoice and therefore requires a
read lock. There are four paths that call it:

1) During regular block processing, we defer a function to send the
   second FCU call with attributes. This function may not be called at
   all if we are not regularly syncing.
2) During regular block processing, we check the payload attributes in
   the path `postBlockProcess->getFCUArgs->computePayloadAttributes` when
   syncing a late block. In this case forkchoice is already locked, and we
   add a check in `getFCUArgs` to return early if not regularly syncing.
3) During handling of late blocks in `lateBlockTasks`, we simply return
   early if not in regular sync (this is the biggest change, as it takes
   a longer FC lock for `lateBlockTasks`).
4) On attestation processing, in `UpdateHead`, we are already locked, so
   we just add a check to skip updating head on this path if not
   regularly syncing.
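
For reference, a minimal sketch of this heuristic, matching the `inRegularSync` helper added to `chain_info.go` later in this diff (the caller must hold at least a read lock on forkchoice):

func (s *Service) inRegularSync() bool {
	currentSlot := s.CurrentSlot()
	fc := s.cfg.ForkChoiceStore
	// The newest imported block is less than two epochs old: regular sync.
	if currentSlot-fc.HighestReceivedBlockSlot() < 2*params.BeaconConfig().SlotsPerEpoch {
		return true
	}
	// Otherwise, the node is in regular sync only if that block arrived less than an epoch late.
	return fc.HighestReceivedBlockDelay() < params.BeaconConfig().SlotsPerEpoch
}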

* fix build

* Fix mocks
2024-01-17 15:39:28 +00:00
terence
87b53db3b4 Capitalize Aggregated Unaggregated Attestations Log (#13473) 2024-01-17 13:30:31 +00:00
terence
fe431b9201 Use correct HistoricalRoots (#13477) 2024-01-17 08:14:32 +00:00
james-prysm
790a09f9b1 Improve wait for activation (#13448)
* removing timeout on wait for activation; switched to an event-driven approach instead

* fixing unit tests

* linting

* simplifying return

* adding sleep for the remaining slot to avoid cpu spikes

* removing if statement on log

* removing if statement on log

* improving switch statement

* removing the loop entirely

* fixing unit test

* fixing manu's reported issue with deletion of json file

* missed change around writefile at path

* gofmt

* fixing deepsource issue with reading file

* trying to clean file to avoid deepsource issue

* still getting error trying a different approach

* fixing stream loop

* fixing unit test

* Update validator/keymanager/local/keymanager.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* fixing linting

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-01-16 17:04:54 +00:00
Manu NALEPA
46387a903a getLegacyDatabaseLocation: Change message. (#13471)
* `getLegacyDatabaseLocation`: Change message.

* Update validator/node/node.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-16 11:29:36 +00:00
Nishant Das
6a65e07684 Add Spans to Core Validator Methods (#13467)
* add traces

* gaz
2024-01-16 07:52:46 +00:00
Potuz
abef94d7ad do not check optimistic status if cached attestation (#13462)
* do not check optimistic status if cached attestation

* Gazelle

* Gazelle again

* fix nil panics

* more nil checks

* more nil checks

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-15 18:50:33 +00:00
Manu NALEPA
99a8d0bac6 Validator client - Improve readability - NO FUNCTIONAL CHANGE (#13468)
* Improve `NewServiceRegistry` documentation.

* Improve `README.md`.

* Improve readability of `registerValidatorService`.

* Move `log` in `main.go`.

Since `log` is only used in `main.go`.

* Clean Tos.

* `DefaultDataDir`: Use `switch` instead of `if/elif`.

* `ReadPassword`: Remove unused receiver.

* `validator/main.go`: Clean.

* `WarnIfPlatformNotSupported`: Add Mac OSX ARM64.

* `runner.go`: Use idiomatic `err` handling.

* `waitForChainStart`: Avoid `chainStartResponse` mutation.

* `WaitForChainStart`: Reduce cognitive complexity.

* Logs: `powchain` ==> `execution`.
2024-01-15 14:46:54 +00:00
Preston Van Loon
b585ff77f5 Fix port logging in bootnode (#13457) 2024-01-15 04:38:22 +00:00
Nishant Das
1ff5a43385 Add the Ability to Defragment the Beacon State (#13444)
* Defragment head state

* change log level

* change it to be more efficient

* add flag

* add tests and clean up

* fix it

* gosimple

* Update container/multi-value-slice/multi_value_slice.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review

* unlock it

* remove from fc lock

---------

Co-authored-by: rkapka <rkapka@wp.pl>
2024-01-13 05:44:02 +00:00
dependabot[bot]
0cfbddc980 Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4 (#13445)
* Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4

Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go) from 0.39.3 to 0.39.4.
- [Release notes](https://github.com/quic-go/quic-go/releases)
- [Changelog](https://github.com/quic-go/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/quic-go/quic-go/compare/v0.39.3...v0.39.4)

---
updated-dependencies:
- dependency-name: github.com/quic-go/quic-go
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Ran gazelle

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-12 18:12:19 +00:00
Manu NALEPA
22a484c45e Fixes issues when running the validator client with the --web flag and a non-existing validator.db file and/or prysm-wallet-v2 directory. (#13460)
* `getLegacyDatabaseLocation`: Add tests.

* `getLegacyDatabaseLocation`: Handle `c.wallet == nil`.

* `saveAuthToken`: Create parent directory if needed.
2024-01-12 15:53:27 +00:00
terence
6ddafe1159 Delete invalid blob at block processing (#13456)
* Delete invalid blob at block processing

* Fix test
2024-01-12 08:09:45 +00:00
qinlz2
b8c5af665f [3/5] light client events (#13225)
* add http streaming light client events

* expose ForkChoiceStore

* return error in insertFinalizedDeposits

* send light client updates

* Revert "return error in insertFinalizedDeposits"

This reverts commit f7068663b8c8b3a3bf45950d5258011a5e4d803e.

* fix: lint

* fix: patch the wrong error response

* refactor: rename the JSON structs

* fix: LC finalized stream return correct format

* fix: LC op stream return correct JSON format

* fix: omit nil JSON fields

* chore: gazelle

* fix: make update by range return list directly based on spec

* chore: remove unnecessary json annotations

* chore: adjust comments

* feat: introduce EnableLightClientEvents feature flag

* feat: use enable-lightclient-events flag

* chore: more logging details

* chore: fix rebase errors

* chore: adjust data structure to save mem

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* refactor: rename config EnableLightClient

* refactor: rename feature flag

* refactor: move helper functions to helper pkg

* test: fix broken unit tests

---------

Co-authored-by: Nicolás Pernas Maradei <nicolas@polymerlabs.org>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-11 18:38:59 +00:00
Radosław Kapka
2875ce6ee1 Use a single rest handler (#13446) 2024-01-11 16:03:35 +00:00
Manu NALEPA
a883ae2a76 BN: Move --db-backup-output-dir as a deprecated flag. (#13450) 2024-01-11 14:11:36 +00:00
Preston Van Loon
3a2b486bde Bazel 7.0.0 (#13321) 2024-01-10 15:34:11 +00:00
terence
283e09569d Remove old blob types (#13438)
* Remove old types

* Gen

* Remove old types

* Gen

* Fix lint

* Rm unused key

* Kasey's comment
2024-01-10 09:38:06 +00:00
Preston Van Loon
69723b4a77 Update go to 1.21.6 (#13440) 2024-01-10 09:37:40 +00:00
psr
4fe6834ba5 http endpoint cleanup (#13432)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-09 23:48:43 +00:00
Preston Van Loon
98e3f2b80f sort static analyzers, add more, fix violations (#13441) 2024-01-09 23:29:36 +00:00
Enrico Del Fante
2aef7a3ec5 Update teku's bootnode (#13437) 2024-01-09 22:28:41 +00:00
Brandon Liu
c41a54be9d fix metric for exited validator (#13379)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 22:15:53 +00:00
Justin Traglia
7e65378f63 Check sidecar index in BlobSidecarsByRoot response (#13180)
* Check sidecar index in BlobSidecarsByRoot response

* Remove unnecessary MaxBlobsPerBlock check
2024-01-09 22:14:56 +00:00
Justin Traglia
cf606e3766 Only process blocks which haven't been processed (#13442)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-09 22:14:03 +00:00
Justin Traglia
703cfc5819 Initialize exec payload fields and enforce order (#13372)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 21:49:35 +00:00
GoodDaisy
c6ebe157a6 Fix typos (#13435) 2024-01-09 21:03:36 +00:00
Preston Van Loon
a3cc81a048 Add nil check for head in IsOptimistic (#13439) 2024-01-09 19:40:26 +00:00
Nishant Das
75bbeb61cc Add Detailed Multi Value Metrics (#13429)
* add it

* pingo

* gaz

* remove pingo

* fix for old forks
2024-01-09 05:16:03 +00:00
160 changed files with 3186 additions and 2354 deletions


@@ -1 +1 @@
6.4.0
7.0.0


@@ -10,7 +10,7 @@
# Prysm specific remote-cache properties.
build:remote-cache --remote_download_minimal
build:remote-cache --experimental_remote_build_event_upload=minimal
build:remote-cache --remote_build_event_upload=minimal
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
# Does not work with rules_oci. See https://github.com/bazel-contrib/rules_oci/issues/292
#build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092


@@ -194,33 +194,6 @@ nogo(
config = ":nogo_config_with_excludes",
visibility = ["//visibility:public"],
deps = [
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/cryptorand:go_default_library",
"//tools/analyzers/errcheck:go_default_library",
@@ -236,6 +209,53 @@ nogo(
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/uintcast:go_default_library",
"@org_golang_x_tools//go/analysis/passes/appends:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
# cgocall disabled
#"@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/framepointer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpmux:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ifaceassert:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/reflectvaluecompare:go_default_library",
# shadow disabled
#"@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sigchanyzer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/slog:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sortslice:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stringintconv:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/testinggoroutine:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/timeformat:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedresult:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedwrite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/usesgenerics:go_default_library",
] + select({
# nogo checks that fail with coverage enabled.
":coverage_enabled": [],

MODULE.bazel (new file)

@@ -0,0 +1,6 @@
###############################################################################
# Bazel now uses Bzlmod by default to manage external dependencies.
# Please consider migrating your external dependencies from WORKSPACE to MODULE.bazel.
#
# For more details, please check https://github.com/bazelbuild/bazel/issues/18958
###############################################################################

MODULE.bazel.lock (generated, new file, 1245 lines): diff suppressed because it is too large.


@@ -182,7 +182,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.21.5",
go_version = "1.21.6",
nogo = "@//:nogo",
)
@@ -356,10 +356,10 @@ filegroup(
http_archive(
name = "com_google_protobuf",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
strip_prefix = "protobuf-23.3",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v23.3.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
],
)


@@ -6,6 +6,7 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"defragment.go",
"error.go",
"execution_engine.go",
"forkchoice_update_execution.go",


@@ -6,7 +6,10 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -18,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -334,12 +336,21 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index primiti
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface.
func (s *Service) ForkChoicer() f.ForkChoicer {
return s.cfg.ForkChoiceStore
}
// IsOptimistic returns true if the current head is optimistic.
func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
if slots.ToEpoch(s.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
return false, nil
}
s.headLock.RLock()
if s.head == nil {
s.headLock.RUnlock()
return false, ErrNilHead
}
headRoot := s.head.root
headSlot := s.head.slot
headOptimistic := s.head.optimistic
@@ -545,3 +556,18 @@ func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.Slot(root)
}
// inRegularSync applies the following heuristics to decide if the node is in
// regular sync mode vs init sync mode, using only forkchoice.
// It checks that the highest received block is behind the current time by at
// least 2 epochs and that it was imported at least one epoch late. If both of
// these tests pass then the node is in init sync. The caller of this function
// MUST hold a lock on forkchoice.
func (s *Service) inRegularSync() bool {
currentSlot := s.CurrentSlot()
fc := s.cfg.ForkChoiceStore
if currentSlot-fc.HighestReceivedBlockSlot() < 2*params.BeaconConfig().SlotsPerEpoch {
return true
}
return fc.HighestReceivedBlockDelay() < params.BeaconConfig().SlotsPerEpoch
}


@@ -429,6 +429,11 @@ func TestService_IsOptimistic(t *testing.T) {
opt, err = c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
// If head is nil for some reason, an error should be returned rather than panicking.
c = &Service{}
_, err = c.IsOptimistic(ctx)
require.ErrorIs(t, err, ErrNilHead)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
@@ -588,3 +593,26 @@ func TestService_IsFinalized(t *testing.T) {
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}
func TestService_inRegularSync(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
st, blkRoot, err = prepareForkchoiceState(ctx, 128, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-5*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
require.Equal(t, true, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
c.cfg.ForkChoiceStore.SetGenesisTime(uint64(time.Now().Unix()))
require.Equal(t, true, c.inRegularSync())
}


@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/time"
)
var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "head_state_defragmentation_milliseconds",
Help: "Milliseconds it takes to defragment the head state",
})
// This method defragments our state, so that any field with a high number of
// fragmented indices is reallocated to a new, separate slice for that field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)
stateDefragmentationTime.Observe(float64(elapsedTime.Milliseconds()))
}


@@ -28,6 +28,8 @@ var (
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")


@@ -387,9 +387,9 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
// This is an irreparable condition; it would mean a justified or finalized block has become invalid.
return err
}
// No op if the sidecar does not exist.
if err := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err != nil {
return err
if err := s.blobStorage.Remove(root); err != nil {
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
log.WithError(err).Debug("Could not remove blob from blob storage")
}
}
return nil


@@ -151,8 +151,10 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
}})
@@ -494,8 +496,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
@@ -597,8 +601,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},


@@ -11,7 +11,6 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//consensus-types/blocks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
@@ -25,7 +24,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",


@@ -1,37 +1,10 @@
package kzg
import (
"fmt"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// IsDataAvailable checks that
// - all blobs in the block are available
// - Expected KZG commitments match the number of blobs in the block
// - That the number of proofs match the number of blobs
// - That the proofs are verified against the KZG commitments
func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.DeprecatedBlobSidecar) error {
if len(commitments) != len(sidecars) {
return fmt.Errorf("could not check data availability, expected %d commitments, obtained %d",
len(commitments), len(sidecars))
}
if len(commitments) == 0 {
return nil
}
blobs := make([]GoKZG.Blob, len(commitments))
proofs := make([]GoKZG.KZGProof, len(commitments))
cmts := make([]GoKZG.KZGCommitment, len(commitments))
for i, sidecar := range sidecars {
blobs[i] = bytesToBlob(sidecar.Blob)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
cmts[i] = bytesToCommitment(commitments[i])
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {


@@ -8,7 +8,7 @@ import (
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/sirupsen/logrus"
)
@@ -58,10 +58,9 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
return commitment, proof, err
}
func TestIsDataAvailable(t *testing.T) {
sidecars := make([]*ethpb.DeprecatedBlobSidecar, 0)
commitments := make([][]byte, 0)
require.NoError(t, IsDataAvailable(commitments, sidecars))
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
}
func TestBytesToAny(t *testing.T) {


@@ -358,6 +358,7 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
for name, val := range refMap {
stateTrieReferences.WithLabelValues(name).Set(float64(val))
}
postState.RecordStateMetrics()
return nil
}


@@ -6,6 +6,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
@@ -29,7 +31,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.
@@ -66,7 +67,10 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
startTime := time.Now()
fcuArgs := &fcuConfig{}
defer s.handleSecondFCUCall(cfg, fcuArgs)
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
defer s.sendLightClientFeeds(cfg)
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.signed.Block())
@@ -102,6 +106,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
return nil
}
@@ -577,10 +582,15 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if s.CurrentSlot() == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -595,12 +605,9 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
// handleEpochBoundary requires a forkchoice lock to obtain the target root.
s.cfg.ForkChoiceStore.RLock()
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
s.cfg.ForkChoiceStore.RUnlock()
// return early if we already started building a block for the current
// head root
_, has := s.cfg.PayloadIDCache.PayloadID(s.CurrentSlot()+1, headRoot)
@@ -626,9 +633,7 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if fcuArgs.attributes.IsEmpty() {
return
}
s.cfg.ForkChoiceStore.RLock()
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
s.cfg.ForkChoiceStore.RUnlock()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}


@@ -7,21 +7,24 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -34,6 +37,9 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.signed.Block().Slot()
if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
return nil
@@ -106,6 +112,128 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
// sendLightClientFeeds sends the light client feeds when the feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.signed, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.signed, finalized, cfg.postState)
}
}
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
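// (With the mainnet SyncCommitteeSize of 512, at least 342 of the 512 sync
// committee bits must be set: 342*3 = 1026 >= 2*512 = 1024.)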
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
if err != nil {
log.WithError(err).Error("Failed to send light client finality update")
} else {
s.lastPublishedLightClientEpoch = finalized.Epoch
}
}
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
finalizedRoot := bytesutil.ToBytes32(finalizedCheckPoint.Root)
finalizedBlock, err = s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
finalizedBlock = nil
}
}
update, err := NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
finalizedBlock,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientFinalityUpdate(update),
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: result,
}), nil
}
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientOptimisticUpdate(update),
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing
// state transition. This function is called on late blocks while still locked,


@@ -923,8 +923,10 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
{
@@ -968,6 +970,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),


@@ -150,7 +150,9 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
headBlock: headBlock,
proposingSlot: proposingSlot,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}


@@ -123,6 +123,9 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
daWaitedTime := time.Since(daStartTime)
// Defragment the state before continuing block processing.
s.defragmentState(postState)
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()


@@ -11,6 +11,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/async/event"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
@@ -37,34 +39,35 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
}
// config options for the service.


@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
@@ -116,6 +117,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithBLSToExecPool(req.blsPool),
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)


@@ -105,6 +105,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -136,6 +137,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -168,6 +170,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),


@@ -853,10 +853,10 @@ func emptyPayloadHeader() (interfaces.ExecutionData, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
})
}
@@ -868,11 +868,11 @@ func emptyPayloadHeaderCapella() (interfaces.ExecutionData, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
}, 0)
}
@@ -884,10 +884,10 @@ func emptyPayload() *enginev1.ExecutionPayload {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
}
}
@@ -899,10 +899,10 @@ func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
}
}


@@ -84,6 +84,7 @@ func TestUpgradeToCapella(t *testing.T) {
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,


@@ -57,6 +57,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
historicalRoots, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
@@ -70,7 +74,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: [][]byte{},
HistoricalRoots: historicalRoots,
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
@@ -101,10 +105,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
ExcessBlobGas: 0,
BlobGasUsed: 0,
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: 0,
BlobGasUsed: 0,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,


@@ -14,6 +14,7 @@ import (
func TestUpgradeToDeneb(t *testing.T) {
st, _ := util.DeterministicGenesisStateCapella(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
preForkState := st.Copy()
mSt, err := deneb.UpgradeToDeneb(st)
require.NoError(t, err)
@@ -46,6 +47,12 @@ func TestUpgradeToDeneb(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
@@ -85,6 +92,7 @@ func TestUpgradeToDeneb(t *testing.T) {
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,


@@ -79,6 +79,7 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),


@@ -79,6 +79,7 @@ func TestUpgradeToBellatrix(t *testing.T) {
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),


@@ -27,6 +27,10 @@ const (
NewHead
// MissedSlot is sent when we need to notify users that a slot was missed.
MissedSlot
// LightClientFinalityUpdate event
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
)
// BlockProcessedData is the data sent with BlockProcessed events.


@@ -245,10 +245,10 @@ func createFullBellatrixBlockWithOperations(t *testing.T) (state.BeaconState,
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: bytesutil.PadTo([]byte{1, 2, 3, 4}, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
},
},
},
@@ -284,11 +284,11 @@ func createFullCapellaBlockWithOperations(t *testing.T) (state.BeaconState,
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: bytesutil.PadTo([]byte{1, 2, 3, 4}, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
},
},
},


@@ -209,6 +209,7 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -269,6 +270,7 @@ func EmptyGenesisStateBellatrix() (state.BeaconState, error) {
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),


@@ -103,7 +103,7 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
return scs, nil
}
// safeCommitemntArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// gratuitous bounds checks.
type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte


@@ -185,6 +185,12 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return verification.BlobSidecarNoop(ro)
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
return bs.fs.RemoveAll(rootDir)
}
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.


@@ -73,6 +73,21 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("round trip write, read and delete", func(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
err := bs.Save(testSidecars[0])
require.NoError(t, err)
expected := testSidecars[0]
actual, err := bs.Get(expected.BlockRoot(), expected.Index)
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
require.NoError(t, bs.Remove(expected.BlockRoot()))
_, err = bs.Get(expected.BlockRoot(), expected.Index)
require.ErrorContains(t, "file does not exist", err)
})
}
// pollUntil polls a condition function until it returns true or a timeout is reached.


@@ -55,9 +55,6 @@ type ReadOnlyDatabase interface {
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// Blob operations.
BlobSidecarsByRoot(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
BlobSidecarsBySlot(ctx context.Context, slot primitives.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillBlockRoot(ctx context.Context) ([32]byte, error)
@@ -93,9 +90,6 @@ type NoHeadAccessDatabase interface {
SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, addrs []common.Address) error
SaveRegistrationsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, regs []*ethpb.ValidatorRegistrationV1) error
// Blob operations.
DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
}


@@ -5,7 +5,6 @@ go_library(
srcs = [
"archived_point.go",
"backup.go",
"blob.go",
"blocks.go",
"checkpoint.go",
"deposit_contract.go",
@@ -39,7 +38,6 @@ go_library(
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -76,7 +74,6 @@ go_test(
srcs = [
"archived_point_test.go",
"backup_test.go",
"blob_test.go",
"blocks_test.go",
"checkpoint_test.go",
"deposit_contract_test.go",
@@ -114,7 +111,6 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",
"//testing/assert:go_default_library",
"//testing/assertions:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",


@@ -1,320 +0,0 @@
package kv
import (
"bytes"
"context"
"sort"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
var (
errBlobSlotMismatch = errors.New("sidecar slot mismatch")
errBlobParentMismatch = errors.New("sidecar parent root mismatch")
errBlobRootMismatch = errors.New("sidecar root mismatch")
errBlobProposerMismatch = errors.New("sidecar proposer index mismatch")
errBlobSidecarLimit = errors.New("sidecar exceeds maximum number of blobs")
errEmptySidecar = errors.New("nil or empty blob sidecars")
errNewerBlobExists = errors.New("Will not overwrite newer blobs in db")
)
// A blob rotating key is represented as bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
type blobRotatingKey []byte
// BufferPrefix returns the first 8 bytes of the rotating key.
// This represents bytes(slot_to_rotating_buffer(blob.slot)) in the rotating key.
func (rk blobRotatingKey) BufferPrefix() []byte {
return rk[0:8]
}
// Slot returns the information from the key.
func (rk blobRotatingKey) Slot() types.Slot {
slotBytes := rk[8:16]
return bytesutil.BytesToSlotBigEndian(slotBytes)
}
// BlockRoot returns the block root information from the key.
func (rk blobRotatingKey) BlockRoot() []byte {
return rk[16:]
}
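// A sketch of the 48-byte key layout implied by the accessors above, where
// bigEndian8 stands for a hypothetical 8-byte big-endian encoder (not part of
// this file):
//
//	key = bigEndian8(slot % maxSlots) ++ bigEndian8(slot) ++ blockRoot[:]
//	      [0:8) buffer prefix           [8:16) slot         [16:48) block root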
// SaveBlobSidecar saves the blobs for a given epoch in the sidecar bucket. When we receive a blob:
//
// 1. Convert slot using a modulo operator to [0, maxSlots] where maxSlots = MAX_EPOCHS_TO_PERSIST_BLOBS*SLOTS_PER_EPOCH
//
// 2. Compute key for blob as bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
//
// 3. Begin the save algorithm: If the incoming blob has a slot bigger than the saved slot at the spot
// in the rotating keys buffer, we overwrite all elements for that slot. Otherwise, we merge the blob with an existing one.
// Trying to replace a newer blob with an older one is an error.
func (s *Store) SaveBlobSidecar(ctx context.Context, scs []*ethpb.DeprecatedBlobSidecar) error {
if len(scs) == 0 {
return errEmptySidecar
}
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlobSidecar")
defer span.End()
first := scs[0]
newKey := s.blobSidecarKey(first)
prefix := newKey.BufferPrefix()
var prune []blobRotatingKey
return s.db.Update(func(tx *bolt.Tx) error {
var existing []byte
sc := &ethpb.DeprecatedBlobSidecars{}
bkt := tx.Bucket(blobsBucket)
c := bkt.Cursor()
for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
key := blobRotatingKey(k)
ks := key.Slot()
if ks < first.Slot {
// Mark older blobs at the same position of the ring buffer for deletion.
prune = append(prune, key)
continue
}
if ks > first.Slot {
// We shouldn't be overwriting newer blobs with older blobs. Something is wrong.
return errNewerBlobExists
}
// The slot isn't older or newer, so it must be equal.
// If the roots match, then we want to merge the new sidecars with the existing data.
if bytes.Equal(first.BlockRoot, key.BlockRoot()) {
existing = v
if err := decode(ctx, v, sc); err != nil {
return err
}
}
// If the slot is equal but the roots don't match, leave the existing key alone and allow the sidecar
// to be written to the new key with the same prefix. In this case sc will be empty, so it will just
// contain the incoming sidecars when we write it.
}
sc.Sidecars = append(sc.Sidecars, scs...)
sortSidecars(sc.Sidecars)
var err error
sc.Sidecars, err = validUniqueSidecars(sc.Sidecars)
if err != nil {
return err
}
encoded, err := encode(ctx, sc)
if err != nil {
return err
}
// don't write if the merged result is the same as before
if len(existing) == len(encoded) && bytes.Equal(existing, encoded) {
return nil
}
// Only prune if we're actually going through with the update.
for _, k := range prune {
if err := bkt.Delete(k); err != nil {
// note: attempting to delete a key that does not exist should not return an error.
log.WithError(err).Warnf("Could not delete blob key %#x.", k)
}
}
return bkt.Put(newKey, encoded)
})
}
// validUniqueSidecars ensures that all sidecars have the same slot, parent root, block root, and proposer index, and
// there are no more than MAX_BLOBS_PER_BLOCK sidecars.
func validUniqueSidecars(scs []*ethpb.DeprecatedBlobSidecar) ([]*ethpb.DeprecatedBlobSidecar, error) {
if len(scs) == 0 {
return nil, errEmptySidecar
}
// If there's only 1 sidecar, we've got nothing to compare.
if len(scs) == 1 {
return scs, nil
}
prev := scs[0]
didx := 1
for i := 1; i < len(scs); i++ {
sc := scs[i]
if sc.Slot != prev.Slot {
return nil, errors.Wrapf(errBlobSlotMismatch, "%d != %d", sc.Slot, prev.Slot)
}
if !bytes.Equal(sc.BlockParentRoot, prev.BlockParentRoot) {
return nil, errors.Wrapf(errBlobParentMismatch, "%x != %x", sc.BlockParentRoot, prev.BlockParentRoot)
}
if !bytes.Equal(sc.BlockRoot, prev.BlockRoot) {
return nil, errors.Wrapf(errBlobRootMismatch, "%x != %x", sc.BlockRoot, prev.BlockRoot)
}
if sc.ProposerIndex != prev.ProposerIndex {
return nil, errors.Wrapf(errBlobProposerMismatch, "%d != %d", sc.ProposerIndex, prev.ProposerIndex)
}
// skip duplicate
if sc.Index == prev.Index {
continue
}
if didx != i {
scs[didx] = scs[i]
}
prev = scs[i]
didx++
}
if didx > fieldparams.MaxBlobsPerBlock {
return nil, errors.Wrapf(errBlobSidecarLimit, "%d > %d", didx, fieldparams.MaxBlobsPerBlock)
}
return scs[0:didx], nil
}
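// Hypothetical usage sketch (editor's addition, not part of the original
// change): duplicate indices collapse to one entry, while slot, root, or
// proposer mismatches are rejected with the errors above.
func exampleValidUniqueSidecars() {
scs := []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}, {Index: 2}}
unique, err := validUniqueSidecars(scs)
_ = err         // nil: slots, roots, and proposer indices all match (zero values)
_ = len(unique) // 2: the duplicate Index 1 entry was dropped
}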
// sortSidecars sorts the sidecars by their index.
func sortSidecars(scs []*ethpb.DeprecatedBlobSidecar) {
sort.Slice(scs, func(i, j int) bool {
return scs[i].Index < scs[j].Index
})
}
// BlobSidecarsByRoot retrieves the blobs for the given beacon block root.
// If the `indices` argument is omitted, all blobs for the root will be returned.
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries, the maximum number of entries a node keeps before the rotating buffer overwrites them.
func (s *Store) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsByRoot")
defer span.End()
var enc []byte
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
// Bucket size is bounded and bolt cursors are fast. A thin caching layer could be added later if needed.
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.HasSuffix(k, root[:]) {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
sc := &ethpb.DeprecatedBlobSidecars{}
if err := decode(ctx, enc, sc); err != nil {
return nil, err
}
return filterForIndices(sc, indices...)
}
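// Hypothetical usage sketch (editor's addition, not part of the original
// change), mirroring the test cases elsewhere in this change: fetch only blob
// indices 0 and 3 for a block root; requesting an out-of-range index returns
// ErrNotFound.
//
//	subset, err := store.BlobSidecarsByRoot(ctx, root, 0, 3)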
func filterForIndices(sc *ethpb.DeprecatedBlobSidecars, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
if len(indices) == 0 {
return sc.Sidecars, nil
}
// This loop assumes that the BlobSidecars value stores the complete set of blobs for a block
// in ascending order, e.g. 0..3, without gaps. This allows us to assume the indices argument
// maps 1:1 with indices in the BlobSidecars storage object.
maxIdx := uint64(len(sc.Sidecars)) - 1
sidecars := make([]*ethpb.DeprecatedBlobSidecar, len(indices))
for i, idx := range indices {
if idx > maxIdx {
return nil, errors.Wrapf(ErrNotFound, "BlobSidecars missing index: index %d", idx)
}
sidecars[i] = sc.Sidecars[idx]
}
return sidecars, nil
}
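// Hypothetical sketch (editor's addition, not part of the original change):
// filterForIndices assumes the stored sidecars are the complete, gapless set
// 0..n-1, so a requested index is used directly as a slice position.
func exampleFilterForIndices() {
stored := &ethpb.DeprecatedBlobSidecars{Sidecars: []*ethpb.DeprecatedBlobSidecar{
{Index: 0}, {Index: 1}, {Index: 2}, {Index: 3},
}}
subset, err := filterForIndices(stored, 0, 3)
_ = err    // nil: both indices are within bounds
_ = subset // [{Index: 0}, {Index: 3}]
}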
// BlobSidecarsBySlot retrieves BlobSidecars for the given slot.
// If the `indices` argument is omitted, all blobs for the slot will be returned.
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries, the maximum number of entries a node keeps before the rotating buffer overwrites them.
func (s *Store) BlobSidecarsBySlot(ctx context.Context, slot types.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsBySlot")
defer span.End()
var enc []byte
sk := s.slotKey(slot)
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
// Bucket size is bounded and bolt cursors are fast. A thin caching layer could be added later if needed.
for k, v := c.Seek(sk); bytes.HasPrefix(k, sk); k, v = c.Next() {
slotInKey := bytesutil.BytesToSlotBigEndian(k[8:16])
if slotInKey == slot {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
sc := &ethpb.DeprecatedBlobSidecars{}
if err := decode(ctx, enc, sc); err != nil {
return nil, err
}
return filterForIndices(sc, indices...)
}
// DeleteBlobSidecars deletes all blob sidecar entries whose keys end with the given beacon block root.
func (s *Store) DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.DeleteBlobSidecar")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blobsBucket)
c := bkt.Cursor()
for k, _ := c.First(); k != nil; k, _ = c.Next() {
if bytes.HasSuffix(k, beaconBlockRoot[:]) {
if err := bkt.Delete(k); err != nil {
return err
}
}
}
return nil
})
}
// We define a blob sidecar key as: bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
// where slot_to_rotating_buffer(slot) = slot % MAX_SLOTS_TO_PERSIST_BLOBS and
// MAX_SLOTS_TO_PERSIST_BLOBS = MAX_EPOCHS_TO_PERSIST_BLOBS * SLOTS_PER_EPOCH.
func (s *Store) blobSidecarKey(blob *ethpb.DeprecatedBlobSidecar) blobRotatingKey {
key := s.slotKey(blob.Slot)
key = append(key, bytesutil.SlotToBytesBigEndian(blob.Slot)...)
key = append(key, blob.BlockRoot...)
return key
}
func (s *Store) slotKey(slot types.Slot) []byte {
return bytesutil.SlotToBytesBigEndian(slot.ModSlot(s.blobRetentionSlots()))
}
func (s *Store) blobRetentionSlots() types.Slot {
return types.Slot(s.blobRetentionEpochs.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
}
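// Worked example (editor's addition, not part of the original change): with
// the spec defaults of MinEpochsForBlobsSidecarsRequest = 4096 and
// SlotsPerEpoch = 32, blobRetentionSlots returns 4096 * 32 = 131072, which is
// where the 131072-entry bound on the blobs bucket comes from.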
var errBlobRetentionEpochMismatch = errors.New("epochs for blobs request value in DB does not match runtime config")
func (s *Store) checkEpochsForBlobSidecarsRequestBucket(db *bolt.DB) error {
uRetentionEpochs := uint64(s.blobRetentionEpochs)
if err := db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(chainMetadataBucket)
v := b.Get(blobRetentionEpochsKey)
if v == nil {
if err := b.Put(blobRetentionEpochsKey, bytesutil.Uint64ToBytesBigEndian(uRetentionEpochs)); err != nil {
return err
}
return nil
}
e := bytesutil.BytesToUint64BigEndian(v)
if e != uRetentionEpochs {
return errors.Wrapf(errBlobRetentionEpochMismatch, "db=%d, config=%d", e, uRetentionEpochs)
}
return nil
}); err != nil {
return err
}
return nil
}
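// Hypothetical usage sketch (editor's addition, not part of the original
// change), mirroring Test_checkEpochsForBlobSidecarsRequestBucket in this
// change: the first call persists the configured retention window, and a
// later call with a different runtime value fails, signalling that the
// on-disk buffer layout no longer matches the config.
//
//	s.blobRetentionEpochs = 4096
//	_ = s.checkEpochsForBlobSidecarsRequestBucket(s.db) // first write
//	s.blobRetentionEpochs = 4097
//	err := s.checkEpochsForBlobSidecarsRequestBucket(s.db)
//	// err wraps errBlobRetentionEpochMismatch: "db=4096, config=4097"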


@@ -1,532 +0,0 @@
package kv
import (
"context"
"crypto/rand"
"fmt"
"testing"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assertions"
"github.com/prysmaticlabs/prysm/v4/testing/require"
bolt "go.etcd.io/bbolt"
)
func equalBlobSlices(expect []*ethpb.DeprecatedBlobSidecar, got []*ethpb.DeprecatedBlobSidecar) error {
if len(expect) != len(got) {
return fmt.Errorf("mismatched lengths, expect=%d, got=%d", len(expect), len(got))
}
for i := 0; i < len(expect); i++ {
es := expect[i]
gs := got[i]
var e string
assertions.DeepEqual(assertions.SprintfAssertionLoggerFn(&e), es, gs)
if e != "" {
return errors.New(e)
}
}
return nil
}
func TestStore_BlobSidecars(t *testing.T) {
ctx := context.Background()
t.Run("empty", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 0)
require.ErrorContains(t, "nil or empty blob sidecars", db.SaveBlobSidecar(ctx, scs))
})
t.Run("empty by root", func(t *testing.T) {
db := setupDB(t)
got, err := db.BlobSidecarsByRoot(ctx, [32]byte{})
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("empty by slot", func(t *testing.T) {
db := setupDB(t)
got, err := db.BlobSidecarsBySlot(ctx, 1)
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("save and retrieve by root (one)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 1)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, 1, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by root (max), per batch", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by root, max and individually", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve valid subset by root", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot), 0, 3)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(expect, got))
require.Equal(t, uint64(0), got[0].Index)
require.Equal(t, uint64(3), got[1].Index)
})
t.Run("error for invalid index when retrieving by root", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot), uint64(len(scs)))
require.ErrorIs(t, err, ErrNotFound)
require.Equal(t, 0, len(got))
})
t.Run("save and retrieve by slot (one)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 1)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, 1, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by slot (max)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by slot, max and individually", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve valid subset by slot", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot, 0, 3)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(expect, got))
require.Equal(t, uint64(0), got[0].Index)
require.Equal(t, uint64(3), got[1].Index)
})
t.Run("error for invalid index when retrieving by slot", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot, uint64(len(scs)))
require.ErrorIs(t, err, ErrNotFound)
require.Equal(t, 0, len(got))
})
t.Run("delete works", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
require.NoError(t, db.DeleteBlobSidecars(ctx, bytesutil.ToBytes32(scs[0].BlockRoot)))
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("saving blob different times", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for i := 0; i < fieldparams.MaxBlobsPerBlock; i++ {
scs[i].Slot = primitives.Slot(i)
scs[i].BlockRoot = bytesutil.PadTo([]byte{byte(i)}, 32)
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{scs[i]}))
br := bytesutil.ToBytes32(scs[i].BlockRoot)
saved, err := db.BlobSidecarsByRoot(ctx, br)
require.NoError(t, err)
require.NoError(t, equalBlobSlices([]*ethpb.DeprecatedBlobSidecar{scs[i]}, saved))
}
})
t.Run("saving a new blob for rotation (batch)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
oldBlockRoot := scs[0].BlockRoot
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(oldBlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
newScs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range newScs {
sc.Slot = sc.Slot + newRetentionSlot
}
require.NoError(t, db.SaveBlobSidecar(ctx, newScs))
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(newScs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(newScs, got))
})
t.Run("save multiple blobs after new rotation (individually)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save multiple blobs after new rotation (batch then individually)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
oldBlockRoot := scs[0].BlockRoot
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(oldBlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save multiple blobs after new rotation (individually then batch)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save equivocating blobs", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
eScs := generateEquivocatingBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
for i, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{eScs[i]}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(eScs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(eScs, got))
})
}
func generateBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateBlobSidecar(t, i)
}
return blobSidecars
}
func generateBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
kzgCommitment := make([]byte, 48)
_, err = rand.Read(kzgCommitment)
require.NoError(t, err)
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'a'}, 32),
Index: index,
Slot: 100,
BlockParentRoot: bytesutil.PadTo([]byte{'b'}, 32),
ProposerIndex: 101,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
}
}
func generateEquivocatingBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateEquivocatingBlobSidecar(t, i)
}
return blobSidecars
}
func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
kzgCommitment := make([]byte, 48)
_, err = rand.Read(kzgCommitment)
require.NoError(t, err)
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'c'}, 32),
Index: index,
Slot: 100,
BlockParentRoot: bytesutil.PadTo([]byte{'b'}, 32),
ProposerIndex: 102,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
}
}
func Test_validUniqueSidecars_validation(t *testing.T) {
tests := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
err error
}{
{name: "empty", scs: []*ethpb.DeprecatedBlobSidecar{}, err: errEmptySidecar},
{name: "too many sidecars", scs: generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock+1), err: errBlobSidecarLimit},
{name: "invalid slot", scs: []*ethpb.DeprecatedBlobSidecar{{Slot: 1}, {Slot: 2}}, err: errBlobSlotMismatch},
{name: "invalid proposer index", scs: []*ethpb.DeprecatedBlobSidecar{{ProposerIndex: 1}, {ProposerIndex: 2}}, err: errBlobProposerMismatch},
{name: "invalid root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockRoot: []byte{1}}, {BlockRoot: []byte{2}}}, err: errBlobRootMismatch},
{name: "invalid parent root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockParentRoot: []byte{1}}, {BlockParentRoot: []byte{2}}}, err: errBlobParentMismatch},
{name: "happy path", scs: []*ethpb.DeprecatedBlobSidecar{{Index: 0}, {Index: 1}}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := validUniqueSidecars(tt.scs)
if tt.err != nil {
require.ErrorIs(t, err, tt.err)
} else {
require.NoError(t, err)
}
})
}
}
func Test_validUniqueSidecars_dedup(t *testing.T) {
cases := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
expected []*ethpb.DeprecatedBlobSidecar
err error
}{
{
name: "duplicate sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
},
{
name: "single sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
},
{
name: "multiple duplicates",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "ok number after de-dupe, > 6 before",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "max unique, no dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
},
{
name: "too many unique",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
{
name: "too many unique with dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}, {Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
u, err := validUniqueSidecars(c.scs)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
}
require.Equal(t, len(c.expected), len(u))
})
}
}
func TestStore_sortSidecars(t *testing.T) {
scs := []*ethpb.DeprecatedBlobSidecar{
{Index: 6},
{Index: 4},
{Index: 2},
{Index: 1},
{Index: 3},
{Index: 5},
{},
}
sortSidecars(scs)
for i := 0; i < len(scs)-1; i++ {
require.Equal(t, uint64(i), scs[i].Index)
}
}
func BenchmarkStore_BlobSidecarsByRoot(b *testing.B) {
s := setupDB(b)
ctx := context.Background()
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'a'}, 32), Slot: 0},
}))
err := s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blobsBucket)
for i := 1; i < 131071; i++ {
r := make([]byte, 32)
_, err := rand.Read(r)
require.NoError(b, err)
scs := []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: r, Slot: primitives.Slot(i)},
}
k := s.blobSidecarKey(scs[0])
encodedBlobSidecar, err := encode(ctx, &ethpb.DeprecatedBlobSidecars{Sidecars: scs})
require.NoError(b, err)
require.NoError(b, bkt.Put(k, encodedBlobSidecar))
}
return nil
})
require.NoError(b, err)
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'b'}, 32), Slot: 131071},
}))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := s.BlobSidecarsByRoot(ctx, [32]byte{'b'})
require.NoError(b, err)
}
}
func Test_checkEpochsForBlobSidecarsRequestBucket(t *testing.T) {
s := setupDB(t)
require.NoError(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db)) // First write
require.NoError(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db)) // First check
s.blobRetentionEpochs += 1
require.ErrorIs(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db), errBlobRetentionEpochMismatch)
}
func TestBlobRotatingKey(t *testing.T) {
s := setupDB(t)
k := s.blobSidecarKey(&ethpb.DeprecatedBlobSidecar{
Slot: 1,
BlockRoot: []byte{2},
})
require.Equal(t, types.Slot(1), k.Slot())
require.DeepEqual(t, []byte{2}, k.BlockRoot())
require.DeepEqual(t, s.slotKey(types.Slot(1)), k.BufferPrefix())
}


@@ -18,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/io/file"
bolt "go.etcd.io/bbolt"
)
@@ -91,7 +90,6 @@ type Store struct {
validatorEntryCache *ristretto.Cache
stateSummaryCache *stateSummaryCache
ctx context.Context
blobRetentionEpochs primitives.Epoch
}
// StoreDatafilePath is the canonical construction of a full
@@ -138,13 +136,6 @@ var Buckets = [][]byte{
// KVStoreOption is a functional option that modifies a kv.Store.
type KVStoreOption func(*Store)
// WithBlobRetentionEpochs sets the variable configuring the blob retention window.
func WithBlobRetentionEpochs(e primitives.Epoch) KVStoreOption {
return func(s *Store) {
s.blobRetentionEpochs = e
}
}
// NewKVStore initializes a new boltDB key-value store at the directory
// path specified, creates the kv-buckets based on the schema, and stores
// an open connection db object as a property of the Store struct.
@@ -217,14 +208,6 @@ func NewKVStore(ctx context.Context, dirPath string, opts ...KVStoreOption) (*St
return nil, err
}
if err := kv.checkEpochsForBlobSidecarsRequestBucket(boltDB); err != nil {
return nil, errors.Wrap(err, "failed to check epochs for blob sidecars request bucket")
}
// set a default so that tests don't break
if kv.blobRetentionEpochs == 0 {
kv.blobRetentionEpochs = params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
}
return kv, nil
}


@@ -6,7 +6,6 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -16,8 +15,7 @@ import (
// setupDB instantiates and returns a Store instance.
func setupDB(t testing.TB) *Store {
opt := WithBlobRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
db, err := NewKVStore(context.Background(), t.TempDir(), opt)
db, err := NewKVStore(context.Background(), t.TempDir())
require.NoError(t, err, "Failed to instantiate DB")
t.Cleanup(func() {
require.NoError(t, db.Close(), "Failed to close database")


@@ -47,10 +47,6 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastValidatedCheckpointKey = []byte("last-validated-checkpoint")
// blobRetentionEpochsKey determines the size of the blob circular buffer and how the keys in that buffer are
// determined. If this value changes, the existing data is invalidated, so storing it in the db
// allows us to assert at runtime that the db state is still consistent with the runtime state.
blobRetentionEpochsKey = []byte("blob-retention-epochs")
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.


@@ -761,8 +761,8 @@ func fullPayloadFromExecutionBlock(
BlockHash: blockHash[:],
Transactions: txs,
Withdrawals: block.Withdrawals,
ExcessBlobGas: ebg,
BlobGasUsed: bgu,
ExcessBlobGas: ebg,
}, 0) // We can't get the block value and don't care about the block value for this instance
default:
return nil, fmt.Errorf("unknown execution block version %d", block.Version)
@@ -941,10 +941,10 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
}, nil
case version.Capella:
return &pb.ExecutionPayloadCapella{
@@ -954,10 +954,10 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
case version.Deneb:
@@ -968,10 +968,10 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
default:


@@ -85,7 +85,7 @@ func FuzzExecutionPayload(f *testing.F) {
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
ExtraData: []byte{},
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},


@@ -2,4 +2,4 @@ package execution
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "powchain")
var log = logrus.WithField("prefix", "execution")


@@ -240,6 +240,20 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockDelay returns the number of slots by which the highest
// received block was late when it was received.
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode
if n == nil {
return 0
}
secs, err := slots.SecondsSinceSlotStart(n.slot, f.store.genesisTime, n.timestamp)
if err != nil {
return 0
}
return primitives.Slot(secs / params.BeaconConfig().SecondsPerSlot)
}
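// Worked example (editor's addition, not part of this change): assuming
// mainnet's 12-second slots, a block whose slot started 13 seconds before it
// arrived yields 13 / 12 = 1, i.e. one slot late; anything under 12 seconds
// yields a delay of 0.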
// ReceivedBlocksLastEpoch returns the number of blocks received in the last epoch
func (f *ForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
count := uint64(0)


@@ -333,26 +333,29 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64
// Received block last epoch is 1
_, err = s.insert(context.Background(), 64, [32]byte{'A'}, b, b, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-64*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
s.genesisTime = uint64(time.Now().Add(time.Duration((-64*int64(params.BeaconConfig().SecondsPerSlot))-1) * time.Second).Unix())
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(64), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64 65
// Received block last epoch is 2
_, err = s.insert(context.Background(), 65, [32]byte{'B'}, b, b, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-65*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
s.genesisTime = uint64(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(2), count)
require.Equal(t, primitives.Slot(65), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockDelay())
// 64 65 66
// Received block last epoch is 3


@@ -64,6 +64,7 @@ type FastGetter interface {
FinalizedPayloadBlockHash() [32]byte
HasNode([32]byte) bool
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockDelay() primitives.Slot
IsCanonical(root [32]byte) bool
IsOptimistic(root [32]byte) (bool, error)
IsViableForCheckpoint(*forkchoicetypes.Checkpoint) (bool, error)


@@ -114,6 +114,13 @@ func (ro *ROForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return ro.getter.HighestReceivedBlockSlot()
}
// HighestReceivedBlockDelay delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) HighestReceivedBlockDelay() primitives.Slot {
ro.l.RLock()
defer ro.l.RUnlock()
return ro.getter.HighestReceivedBlockDelay()
}
// ReceivedBlocksLastEpoch delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
ro.l.RLock()


@@ -29,6 +29,7 @@ const (
unrealizedJustifiedPayloadBlockHashCalled
nodeCountCalled
highestReceivedBlockSlotCalled
highestReceivedBlockDelayCalled
receivedBlocksLastEpochCalled
weightCalled
isOptimisticCalled
@@ -113,6 +114,11 @@ func TestROLocking(t *testing.T) {
call: highestReceivedBlockSlotCalled,
cb: func(g FastGetter) { g.HighestReceivedBlockSlot() },
},
{
name: "highestReceivedBlockDelayCalled",
call: highestReceivedBlockDelayCalled,
cb: func(g FastGetter) { g.HighestReceivedBlockDelay() },
},
{
name: "receivedBlocksLastEpochCalled",
call: receivedBlocksLastEpochCalled,
@@ -245,6 +251,11 @@ func (ro *mockROForkchoice) HighestReceivedBlockSlot() primitives.Slot {
return 0
}
func (ro *mockROForkchoice) HighestReceivedBlockDelay() primitives.Slot {
ro.calls = append(ro.calls, highestReceivedBlockDelayCalled)
return 0
}
func (ro *mockROForkchoice) ReceivedBlocksLastEpoch() (uint64, error) {
ro.calls = append(ro.calls, receivedBlocksLastEpochCalled)
return 0, nil


@@ -398,7 +398,7 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
log.WithField("database-path", dbPath).Info("Checking DB")
d, err := kv.NewKVStore(b.ctx, dbPath, kv.WithBlobRetentionEpochs(b.blobRetentionEpochs))
d, err := kv.NewKVStore(b.ctx, dbPath)
if err != nil {
return err
}
@@ -421,7 +421,7 @@ func (b *BeaconNode) startDB(cliCtx *cli.Context, depositAddress string) error {
return errors.Wrap(err, "could not clear database")
}
d, err = kv.NewKVStore(b.ctx, dbPath, kv.WithBlobRetentionEpochs(b.blobRetentionEpochs))
d, err = kv.NewKVStore(b.ctx, dbPath)
if err != nil {
return errors.Wrap(err, "could not create new database")
}


@@ -42,7 +42,7 @@ func (s *Service) prepareForkChoiceAtts() {
switch slotInterval.Interval {
case 0:
duration := time.Since(t)
log.WithField("Duration", duration).Debug("aggregated unaggregated attestations")
log.WithField("Duration", duration).Debug("Aggregated unaggregated attestations")
batchForkChoiceAttsT1.Observe(float64(duration.Milliseconds()))
case 1:
batchForkChoiceAttsT2.Observe(float64(time.Since(t).Milliseconds()))


@@ -38,6 +38,7 @@ go_library(
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],


@@ -11,14 +11,15 @@ import (
)
type Service struct {
HeadFetcher blockchain.HeadFetcher
FinalizedFetcher blockchain.FinalizationFetcher
GenesisTimeFetcher blockchain.TimeFetcher
SyncChecker sync.Checker
Broadcaster p2p.Broadcaster
SyncCommitteePool synccommittee.Pool
OperationNotifier opfeed.Notifier
AttestationCache *cache.AttestationCache
StateGen stategen.StateManager
P2P p2p.Broadcaster
HeadFetcher blockchain.HeadFetcher
FinalizedFetcher blockchain.FinalizationFetcher
GenesisTimeFetcher blockchain.TimeFetcher
SyncChecker sync.Checker
Broadcaster p2p.Broadcaster
SyncCommitteePool synccommittee.Pool
OperationNotifier opfeed.Notifier
AttestationCache *cache.AttestationCache
StateGen stategen.StateManager
P2P p2p.Broadcaster
OptimisticModeFetcher blockchain.OptimisticModeFetcher
}


@@ -29,9 +29,12 @@ import (
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
// AggregateBroadcastFailedError represents an error scenario where
// broadcasting an aggregate selection proof failed.
type AggregateBroadcastFailedError struct {
@@ -56,6 +59,9 @@ func (s *Service) ComputeValidatorPerformance(
ctx context.Context,
req *ethpb.ValidatorPerformanceRequest,
) (*ethpb.ValidatorPerformanceResponse, *RpcError) {
ctx, span := trace.StartSpan(ctx, "coreService.ComputeValidatorPerformance")
defer span.End()
if s.SyncChecker.Syncing() {
return nil, &RpcError{Reason: Unavailable, Err: errors.New("Syncing to latest head, not ready to respond")}
}
@@ -211,6 +217,9 @@ func (s *Service) SubmitSignedContributionAndProof(
ctx context.Context,
req *ethpb.SignedContributionAndProof,
) *RpcError {
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedContributionAndProof")
defer span.End()
errs, ctx := errgroup.WithContext(ctx)
// Broadcasting and saving contribution into the pool in parallel. As one fail should not affect another.
@@ -243,6 +252,9 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
ctx context.Context,
req *ethpb.SignedAggregateSubmitRequest,
) *RpcError {
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedAggregateSelectionProof")
defer span.End()
if req.SignedAggregateAndProof == nil || req.SignedAggregateAndProof.Message == nil ||
req.SignedAggregateAndProof.Message.Aggregate == nil || req.SignedAggregateAndProof.Message.Aggregate.Data == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
@@ -315,6 +327,9 @@ func (s *Service) AggregatedSigAndAggregationBits(
func (s *Service) GetAttestationData(
ctx context.Context, req *ethpb.AttestationDataRequest,
) (*ethpb.AttestationData, *RpcError) {
ctx, span := trace.StartSpan(ctx, "coreService.GetAttestationData")
defer span.End()
if req.Slot != s.GenesisTimeFetcher.CurrentSlot() {
return nil, &RpcError{Reason: BadRequest, Err: errors.Errorf("invalid request: slot %d is not the current slot %d", req.Slot, s.GenesisTimeFetcher.CurrentSlot())}
}
@@ -368,6 +383,14 @@ func (s *Service) GetAttestationData(
},
}, nil
}
// cache miss, we need to check for optimistic status before proceeding
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return nil, &RpcError{Reason: Internal, Err: err}
}
if optimistic {
return nil, &RpcError{Reason: Unavailable, Err: errOptimisticMode}
}
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
@@ -412,6 +435,9 @@ func (s *Service) GetAttestationData(
// SubmitSyncMessage submits the sync committee message to the network.
// It also saves the sync committee message into the pending pool for block inclusion.
func (s *Service) SubmitSyncMessage(ctx context.Context, msg *ethpb.SyncCommitteeMessage) *RpcError {
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSyncMessage")
defer span.End()
errs, ctx := errgroup.WithContext(ctx)
headSyncCommitteeIndices, err := s.HeadFetcher.HeadSyncCommitteeIndices(ctx, msg.ValidatorIndex, msg.Slot)


@@ -20,6 +20,7 @@ go_library(
"//beacon-chain/rpc/eth/shared:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",


@@ -7,6 +7,9 @@ import (
"net/http"
"github.com/ethereum/go-ethereum/common/hexutil"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
@@ -17,10 +20,9 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/network/httputil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
const (
@@ -48,6 +50,10 @@ const (
ProposerSlashingTopic = "proposer_slashing"
// AttesterSlashingTopic represents a new attester slashing event topic
AttesterSlashingTopic = "attester_slashing"
// LightClientFinalityUpdateTopic represents a new light client finality update event topic.
LightClientFinalityUpdateTopic = "light_client_finality_update"
// LightClientOptimisticUpdateTopic represents a new light client optimistic update event topic.
LightClientOptimisticUpdateTopic = "light_client_optimistic_update"
)
const topicDataMismatch = "Event data type %T does not correspond to event topic %s"
@@ -55,18 +61,20 @@ const topicDataMismatch = "Event data type %T does not correspond to event topic
const chanBuffer = 1000
var casesHandled = map[string]bool{
HeadTopic: true,
BlockTopic: true,
AttestationTopic: true,
VoluntaryExitTopic: true,
FinalizedCheckpointTopic: true,
ChainReorgTopic: true,
SyncCommitteeContributionTopic: true,
BLSToExecutionChangeTopic: true,
PayloadAttributesTopic: true,
BlobSidecarTopic: true,
ProposerSlashingTopic: true,
AttesterSlashingTopic: true,
HeadTopic: true,
BlockTopic: true,
AttestationTopic: true,
VoluntaryExitTopic: true,
FinalizedCheckpointTopic: true,
ChainReorgTopic: true,
SyncCommitteeContributionTopic: true,
BLSToExecutionChangeTopic: true,
PayloadAttributesTopic: true,
BlobSidecarTopic: true,
ProposerSlashingTopic: true,
AttesterSlashingTopic: true,
LightClientFinalityUpdateTopic: true,
LightClientOptimisticUpdateTopic: true,
}
// StreamEvents provides an endpoint to subscribe to the beacon node Server-Sent-Events stream.
@@ -262,6 +270,72 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
ExecutionOptimistic: checkpointData.ExecutionOptimistic,
}
send(w, flusher, FinalizedCheckpointTopic, checkpoint)
case statefeed.LightClientFinalityUpdate:
if _, ok := requestedTopics[LightClientFinalityUpdateTopic]; !ok {
return
}
updateData, ok := event.Data.(*ethpbv2.LightClientFinalityUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
return
}
var finalityBranch []string
for _, b := range updateData.Data.FinalityBranch {
finalityBranch = append(finalityBranch, hexutil.Encode(b))
}
update := &LightClientFinalityUpdateEvent{
Version: version.String(int(updateData.Version)),
Data: &LightClientFinalityUpdate{
AttestedHeader: &shared.BeaconBlockHeader{
Slot: fmt.Sprintf("%d", updateData.Data.AttestedHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", updateData.Data.AttestedHeader.ProposerIndex),
ParentRoot: hexutil.Encode(updateData.Data.AttestedHeader.ParentRoot),
StateRoot: hexutil.Encode(updateData.Data.AttestedHeader.StateRoot),
BodyRoot: hexutil.Encode(updateData.Data.AttestedHeader.BodyRoot),
},
FinalizedHeader: &shared.BeaconBlockHeader{
Slot: fmt.Sprintf("%d", updateData.Data.FinalizedHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", updateData.Data.FinalizedHeader.ProposerIndex),
ParentRoot: hexutil.Encode(updateData.Data.FinalizedHeader.ParentRoot),
StateRoot: hexutil.Encode(updateData.Data.FinalizedHeader.StateRoot),
},
FinalityBranch: finalityBranch,
SyncAggregate: &shared.SyncAggregate{
SyncCommitteeBits: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeSignature),
},
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientFinalityUpdateTopic, update)
case statefeed.LightClientOptimisticUpdate:
if _, ok := requestedTopics[LightClientOptimisticUpdateTopic]; !ok {
return
}
updateData, ok := event.Data.(*ethpbv2.LightClientOptimisticUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
return
}
update := &LightClientOptimisticUpdateEvent{
Version: version.String(int(updateData.Version)),
Data: &LightClientOptimisticUpdate{
AttestedHeader: &shared.BeaconBlockHeader{
Slot: fmt.Sprintf("%d", updateData.Data.AttestedHeader.Slot),
ProposerIndex: fmt.Sprintf("%d", updateData.Data.AttestedHeader.ProposerIndex),
ParentRoot: hexutil.Encode(updateData.Data.AttestedHeader.ParentRoot),
StateRoot: hexutil.Encode(updateData.Data.AttestedHeader.StateRoot),
BodyRoot: hexutil.Encode(updateData.Data.AttestedHeader.BodyRoot),
},
SyncAggregate: &shared.SyncAggregate{
SyncCommitteeBits: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(updateData.Data.SyncAggregate.SyncCommitteeSignature),
},
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientOptimisticUpdateTopic, update)
case statefeed.Reorg:
if _, ok := requestedTopics[ChainReorgTopic]; !ok {
return


@@ -92,3 +92,27 @@ type BlobSidecarEvent struct {
KzgCommitment string `json:"kzg_commitment"`
VersionedHash string `json:"versioned_hash"`
}
type LightClientFinalityUpdateEvent struct {
Version string `json:"version"`
Data *LightClientFinalityUpdate `json:"data"`
}
type LightClientFinalityUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdateEvent struct {
Version string `json:"version"`
Data *LightClientOptimisticUpdate `json:"data"`
}
type LightClientOptimisticUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}


@@ -8,9 +8,10 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/wealdtech/go-bytesutil"
"go.opencensus.io/trace"
"github.com/wealdtech/go-bytesutil"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -219,11 +220,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
return
}
response := &LightClientUpdatesByRangeResponse{
Updates: updates,
}
httputil.WriteJson(w, response)
httputil.WriteJson(w, updates)
}
// GetLightClientFinalityUpdate - implements https://github.com/ethereum/beacon-APIs/blob/263f4ed6c263c967f13279c7a9f5629b51c5fc55/apis/beacon/light_client/finality_update.yaml


@@ -171,12 +171,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange(t *testing.T) {
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates))
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp))
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigInputCount(t *testing.T) {
@@ -274,12 +274,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigInputCount(t *tes
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates)) // Even with big count input, the response is still the max available period, which is 1 in test case.
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp)) // Even with big count input, the response is still the max available period, which is 1 in test case.
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_TooEarlyPeriod(t *testing.T) {
@@ -377,12 +377,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_TooEarlyPeriod(t *testi
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates))
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp))
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigCount(t *testing.T) {
@@ -480,12 +480,12 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_TooBigCount(t *testing.
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Updates))
require.Equal(t, "capella", resp.Updates[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp.Updates[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp.Updates)
var resp []LightClientUpdateWithVersion
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
require.Equal(t, 1, len(resp))
require.Equal(t, "capella", resp[0].Version)
require.Equal(t, hexutil.Encode(attestedHeader.BodyRoot), resp[0].Data.AttestedHeader.BodyRoot)
require.NotNil(t, resp)
}
func TestLightClientHandler_GetLightClientUpdatesByRange_BeforeAltair(t *testing.T) {
@@ -583,9 +583,6 @@ func TestLightClientHandler_GetLightClientUpdatesByRange_BeforeAltair(t *testing
s.GetLightClientUpdatesByRange(writer, request)
require.Equal(t, http.StatusNotFound, writer.Code)
resp := &LightClientUpdatesByRangeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 0, len(resp.Updates))
}
func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {


@@ -296,11 +296,16 @@ func newLightClientUpdateToJSON(input *v2.LightClientUpdate) *LightClientUpdate
nextSyncCommittee = shared.SyncCommitteeFromConsensus(migration.V2SyncCommitteeToV1Alpha1(input.NextSyncCommittee))
}
var finalizedHeader *shared.BeaconBlockHeader
if input.FinalizedHeader != nil {
finalizedHeader = shared.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(input.FinalizedHeader))
}
return &LightClientUpdate{
AttestedHeader: shared.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(input.AttestedHeader)),
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: branchToJSON(input.NextSyncCommitteeBranch),
FinalizedHeader: shared.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(input.FinalizedHeader)),
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(input.FinalityBranch),
SyncAggregate: syncAggregateToJSON(input.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(input.SignatureSlot), 10),


@@ -17,11 +17,11 @@ type LightClientBootstrap struct {
type LightClientUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
NextSyncCommittee *shared.SyncCommittee `json:"next_sync_committee"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header"`
NextSyncCommittee *shared.SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header,omitempty"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch"`
FinalityBranch []string `json:"finality_branch"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}


@@ -2333,10 +2333,10 @@ func ExecutionPayloadHeaderDenebFromConsensus(payload *enginev1.ExecutionPayload
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
WithdrawalsRoot: hexutil.Encode(payload.WithdrawalsRoot),
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
}, nil
}


@@ -325,11 +325,11 @@ type ExecutionPayloadDeneb struct {
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
@@ -345,9 +345,9 @@ type ExecutionPayloadHeaderDeneb struct {
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}


@@ -388,10 +388,6 @@ func (s *Server) GetAttestationData(w http.ResponseWriter, r *http.Request) {
return
}
if isOptimistic, err := shared.IsOptimistic(ctx, w, s.OptimisticModeFetcher); isOptimistic || err != nil {
return
}
_, slot, ok := shared.UintFromQuery(w, r, "slot", true)
if !ok {
return


@@ -831,10 +831,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: chain,
},
}
@@ -914,10 +915,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: chain,
HeadFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: chain,
HeadFetcher: chain,
FinalizedFetcher: chain,
OptimisticModeFetcher: chain,
},
}
@@ -959,8 +961,9 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
GenesisTimeFetcher: chain,
OptimisticModeFetcher: chain,
FinalizedFetcher: chain,
},
}
@@ -1017,10 +1020,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
HeadFetcher: chain,
GenesisTimeFetcher: chain,
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
OptimisticModeFetcher: chain,
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: chain,
},
}
@@ -1065,10 +1069,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}
@@ -1151,10 +1156,11 @@ func TestGetAttestationData(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: chain,
HeadFetcher: chain,
GenesisTimeFetcher: chain,
FinalizedFetcher: chain,
},
}

View File

@@ -38,7 +38,6 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
@@ -66,7 +65,6 @@ go_library(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
@@ -91,13 +89,13 @@ go_library(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_protobuf//ptypes/empty",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
@@ -141,6 +139,7 @@ common_deps = [
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -31,12 +31,6 @@ func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.Attestation
if vs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
// An optimistic validator MUST NOT participate in attestation. (i.e., sign across the DOMAIN_BEACON_ATTESTER, DOMAIN_SELECTION_PROOF or DOMAIN_AGGREGATE_AND_PROOF domains).
if err := vs.optimisticStatus(ctx); err != nil {
return nil, err
}
res, err := vs.CoreService.GetAttestationData(ctx, req)
if err != nil {
return nil, status.Errorf(core.ErrorReasonToGRPC(err.Reason), "Could not get attestation data: %v", err.Err)

View File
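The hunk above removes the optimistic-status gate from the gRPC handler, while the test hunks that follow wire an OptimisticModeFetcher into core.Service, suggesting the check now happens inside the core attestation path instead. Below is a minimal sketch of that relocated gate with stub types; the function shape, the cache-miss condition, and the error value are assumptions for illustration, not the actual Prysm code.

package core // hypothetical placement of the relocated check

import (
	"context"
	"errors"
)

// Stub shapes inferred from this diff: mock.ChainService backs an
// OptimisticModeFetcher, and CoreService errors carry Reason and Err.
type OptimisticModeFetcher interface {
	IsOptimistic(ctx context.Context) (bool, error)
}

type RpcError struct {
	Reason int
	Err    error
}

const unavailable = 14 // mirrors gRPC codes.Unavailable; assumption for this sketch

// Hypothetical: the gate runs in the core path, but only on a cache miss,
// so cached attestation data can be served without touching optimistic status.
func optimisticGate(ctx context.Context, f OptimisticModeFetcher, cached bool) *RpcError {
	if cached {
		return nil
	}
	if optimistic, err := f.IsOptimistic(ctx); err != nil || optimistic {
		return &RpcError{Reason: unavailable, Err: errors.New("node is optimistic")}
	}
	return nil
}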

@@ -61,10 +61,11 @@ func TestAttestationDataAtSlot_HandlesFarAwayJustifiedEpoch(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}

View File

@@ -116,8 +116,9 @@ func TestGetAttestationData_OK(t *testing.T) {
GenesisTimeFetcher: &mock.ChainService{
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
AttestationCache: cache.NewAttestationCache(),
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
@@ -176,7 +177,8 @@ func BenchmarkGetAttestationDataConcurrent(b *testing.B) {
GenesisTimeFetcher: &mock.ChainService{
Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second),
},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
},
}
@@ -222,9 +224,10 @@ func TestGetAttestationData_Optimistic(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: true},
TimeFetcher: &mock.ChainService{Genesis: time.Now()},
CoreService: &core.Service{
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{},
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{},
AttestationCache: cache.NewAttestationCache(),
OptimisticModeFetcher: &mock.ChainService{Optimistic: true},
},
}
_, err := as.GetAttestationData(context.Background(), &ethpb.AttestationDataRequest{})
@@ -240,10 +243,11 @@ func TestGetAttestationData_Optimistic(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now()},
CoreService: &core.Service{
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{Optimistic: false, State: beaconState},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{}},
AttestationCache: cache.NewAttestationCache(),
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now()},
HeadFetcher: &mock.ChainService{Optimistic: false, State: beaconState},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{}},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
_, err = as.GetAttestationData(context.Background(), &ethpb.AttestationDataRequest{})
@@ -260,7 +264,8 @@ func TestServer_GetAttestationData_InvalidRequestSlot(t *testing.T) {
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
CoreService: &core.Service{
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
@@ -301,10 +306,11 @@ func TestServer_GetAttestationData_RequestSlotIsDifferentThanCurrentSlot(t *test
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
TimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
CoreService: &core.Service{
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot2, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
HeadFetcher: &mock.ChainService{TargetRoot: blockRoot2, Root: blockRoot[:]},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*offset) * time.Second)},
StateGen: stategen.New(db, doublylinkedtree.New()),
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}
util.SaveBlock(t, ctx, db, block)
@@ -346,8 +352,9 @@ func TestGetAttestationData_SucceedsInFirstEpoch(t *testing.T) {
HeadFetcher: &mock.ChainService{
TargetRoot: targetRoot, Root: blockRoot[:],
},
GenesisTimeFetcher: &mock.ChainService{Genesis: prysmTime.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
GenesisTimeFetcher: &mock.ChainService{Genesis: prysmTime.Now().Add(time.Duration(-1*offset) * time.Second)},
FinalizedFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: justifiedCheckpoint},
OptimisticModeFetcher: &mock.ChainService{Optimistic: false},
},
}

View File

@@ -225,7 +225,7 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot primitiv
}
for _, c := range kzgCommitments {
if len(c) != fieldparams.BLSPubkeyLength {
return nil, nil, fmt.Errorf("builder returned invalid kzg commitment lenth: %d", len(c))
return nil, nil, fmt.Errorf("builder returned invalid kzg commitment length: %d", len(c))
}
}
}

View File

@@ -113,17 +113,18 @@ func TestServer_setExecutionData(t *testing.T) {
require.NoError(t, err)
bid := &ethpb.BuilderBidCapella{
Header: &v1.ExecutionPayloadHeaderCapella{
ParentHash: params.BeaconConfig().ZeroHash[:],
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BlockNumber: 2,
Timestamp: uint64(ti.Unix()),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
ParentHash: params.BeaconConfig().ZeroHash[:],
Timestamp: uint64(ti.Unix()),
BlockNumber: 2,
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
},
Pubkey: sk.PublicKey().Marshal(),
@@ -176,17 +177,18 @@ func TestServer_setExecutionData(t *testing.T) {
builderValue := bytesutil.ReverseByteOrder(big.NewInt(1e9).Bytes())
bid := &ethpb.BuilderBidCapella{
Header: &v1.ExecutionPayloadHeaderCapella{
ParentHash: params.BeaconConfig().ZeroHash[:],
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BlockNumber: 2,
Timestamp: uint64(ti.Unix()),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: bytesutil.PadTo([]byte{1}, fieldparams.RootLength),
ParentHash: params.BeaconConfig().ZeroHash[:],
Timestamp: uint64(ti.Unix()),
BlockNumber: 2,
WithdrawalsRoot: wr[:],
},
Pubkey: sk.PublicKey().Marshal(),

View File

@@ -336,6 +336,7 @@ func emptyPayload() *enginev1.ExecutionPayload {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
@@ -350,6 +351,7 @@ func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
@@ -365,6 +367,7 @@ func emptyPayloadDeneb() *enginev1.ExecutionPayloadDeneb {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),

View File

@@ -154,7 +154,6 @@ func TestServer_GetBeaconBlock_Altair(t *testing.T) {
SyncAggregate: &ethpb.SyncAggregate{SyncCommitteeBits: scBits[:], SyncCommitteeSignature: make([]byte, 96)},
},
},
Signature: genesis.Signature,
}
blkRoot, err := genAltair.Block.HashTreeRoot()
@@ -244,7 +243,6 @@ func TestServer_GetBeaconBlock_Bellatrix(t *testing.T) {
},
},
},
Signature: genesis.Signature,
}
blkRoot, err := blk.Block.HashTreeRoot()
@@ -363,12 +361,14 @@ func TestServer_GetBeaconBlock_Capella(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
},
},
Signature: genesis.Signature,
}
blkRoot, err := blk.Block.HashTreeRoot()
@@ -388,14 +388,15 @@ func TestServer_GetBeaconBlock_Capella(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: random,
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
BlockNumber: 1,
GasLimit: 2,
GasUsed: 3,
Timestamp: uint64(timeStamp.Unix()),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
}
proposerServer := getProposerServer(db, beaconState, parentRoot[:])
@@ -479,7 +480,6 @@ func TestServer_GetBeaconBlock_Deneb(t *testing.T) {
},
},
},
Signature: genesis.Signature,
}
blkRoot, err := blk.Block.HashTreeRoot()
@@ -731,7 +731,7 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
},
},
{
name: "deneb block some blobs (kzg and blob count missmatch)",
name: "deneb block some blobs (kzg and blob count mismatch)",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBeaconBlockContentsDeneb()
blockToPropose.Block.Block.Slot = 5

View File

@@ -359,16 +359,17 @@ func (s *Service) Start() {
})
coreService := &core.Service{
HeadFetcher: s.cfg.HeadFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
SyncChecker: s.cfg.SyncService,
Broadcaster: s.cfg.Broadcaster,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
OperationNotifier: s.cfg.OperationNotifier,
AttestationCache: cache.NewAttestationCache(),
StateGen: s.cfg.StateGen,
P2P: s.cfg.Broadcaster,
FinalizedFetcher: s.cfg.FinalizationFetcher,
HeadFetcher: s.cfg.HeadFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
SyncChecker: s.cfg.SyncService,
Broadcaster: s.cfg.Broadcaster,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
OperationNotifier: s.cfg.OperationNotifier,
AttestationCache: cache.NewAttestationCache(),
StateGen: s.cfg.StateGen,
P2P: s.cfg.Broadcaster,
FinalizedFetcher: s.cfg.FinalizationFetcher,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
}
validatorServer := &validatorv1alpha1.Server{

View File

@@ -22,6 +22,7 @@ type BeaconState interface {
WriteOnlyBeaconState
Copy() BeaconState
CopyAllTries()
Defragment()
HashTreeRoot(ctx context.Context) ([32]byte, error)
Prover
json.Marshaler
@@ -66,6 +67,7 @@ type ReadOnlyBeaconState interface {
HistoricalSummaries() ([]*ethpb.HistoricalSummary, error)
Slashings() []uint64
FieldReferencesCount() map[string]uint64
RecordStateMetrics()
MarshalSSZ() ([]byte, error)
IsNil() bool
Version() int

View File

@@ -5,6 +5,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native/types"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
multi_value_slice "github.com/prysmaticlabs/prysm/v4/container/multi-value-slice"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
@@ -12,24 +13,26 @@ import (
)
var (
multiValueRandaoMixesCountGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "multi_value_randao_mixes_count",
})
multiValueBlockRootsCountGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "multi_value_block_roots_count",
})
multiValueStateRootsCountGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "multi_value_state_roots_count",
})
multiValueBalancesCountGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "multi_value_balances_count",
})
multiValueValidatorsCountGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "multi_value_validators_count",
})
multiValueInactivityScoresCountGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "multi_value_inactivity_scores_count",
})
multiValueCountGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "multi_value_object_count",
Help: "The number of instances that exist for the multivalue slice for a particular field.",
}, []string{"field"})
multiValueIndividualElementsCountGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "multi_value_individual_elements_count",
Help: "The number of individual elements that exist for the multivalue slice object.",
}, []string{"field"})
multiValueIndividualElementReferencesCountGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "multi_value_individual_element_references_count",
Help: "The number of individual element references that exist for the multivalue slice object.",
}, []string{"field"})
multiValueAppendedElementsCountGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "multi_value_appended_elements_count",
Help: "The number of appended elements that exist for the multivalue slice object.",
}, []string{"field"})
multiValueAppendedElementReferencesCountGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "multi_value_appended_element_references_count",
Help: "The number of appended element references that exist for the multivalue slice object.",
}, []string{"field"})
)
// MultiValueRandaoMixes is a multi-value slice of randao mixes.
@@ -43,7 +46,7 @@ func NewMultiValueRandaoMixes(mixes [][]byte) *MultiValueRandaoMixes {
}
mv := &MultiValueRandaoMixes{}
mv.Init(items)
multiValueRandaoMixesCountGauge.Inc()
multiValueCountGauge.WithLabelValues(types.RandaoMixes.String()).Inc()
runtime.SetFinalizer(mv, randaoMixesFinalizer)
return mv
}
@@ -59,7 +62,7 @@ func NewMultiValueBlockRoots(roots [][]byte) *MultiValueBlockRoots {
}
mv := &MultiValueBlockRoots{}
mv.Init(items)
multiValueBlockRootsCountGauge.Inc()
multiValueCountGauge.WithLabelValues(types.BlockRoots.String()).Inc()
runtime.SetFinalizer(mv, blockRootsFinalizer)
return mv
}
@@ -75,7 +78,7 @@ func NewMultiValueStateRoots(roots [][]byte) *MultiValueStateRoots {
}
mv := &MultiValueStateRoots{}
mv.Init(items)
multiValueStateRootsCountGauge.Inc()
multiValueCountGauge.WithLabelValues(types.StateRoots.String()).Inc()
runtime.SetFinalizer(mv, stateRootsFinalizer)
return mv
}
@@ -89,7 +92,7 @@ func NewMultiValueBalances(balances []uint64) *MultiValueBalances {
copy(items, balances)
mv := &MultiValueBalances{}
mv.Init(items)
multiValueBalancesCountGauge.Inc()
multiValueCountGauge.WithLabelValues(types.Balances.String()).Inc()
runtime.SetFinalizer(mv, balancesFinalizer)
return mv
}
@@ -103,7 +106,7 @@ func NewMultiValueInactivityScores(scores []uint64) *MultiValueInactivityScores
copy(items, scores)
mv := &MultiValueInactivityScores{}
mv.Init(items)
multiValueInactivityScoresCountGauge.Inc()
multiValueCountGauge.WithLabelValues(types.InactivityScores.String()).Inc()
runtime.SetFinalizer(mv, inactivityScoresFinalizer)
return mv
}
@@ -115,31 +118,80 @@ type MultiValueValidators = multi_value_slice.Slice[*ethpb.Validator]
func NewMultiValueValidators(vals []*ethpb.Validator) *MultiValueValidators {
mv := &MultiValueValidators{}
mv.Init(vals)
multiValueValidatorsCountGauge.Inc()
multiValueCountGauge.WithLabelValues(types.Validators.String()).Inc()
runtime.SetFinalizer(mv, validatorsFinalizer)
return mv
}
// Defragment checks whether each individual multi-value field in our state is fragmented
// and if it is, it will 'reset' the field to create a new multivalue object.
func (b *BeaconState) Defragment() {
b.lock.Lock()
defer b.lock.Unlock()
if b.blockRootsMultiValue != nil && b.blockRootsMultiValue.IsFragmented() {
initialMVslice := b.blockRootsMultiValue
b.blockRootsMultiValue = b.blockRootsMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.BlockRoots.String()).Inc()
runtime.SetFinalizer(b.blockRootsMultiValue, blockRootsFinalizer)
}
if b.stateRootsMultiValue != nil && b.stateRootsMultiValue.IsFragmented() {
initialMVslice := b.stateRootsMultiValue
b.stateRootsMultiValue = b.stateRootsMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.StateRoots.String()).Inc()
runtime.SetFinalizer(b.stateRootsMultiValue, stateRootsFinalizer)
}
if b.randaoMixesMultiValue != nil && b.randaoMixesMultiValue.IsFragmented() {
initialMVslice := b.randaoMixesMultiValue
b.randaoMixesMultiValue = b.randaoMixesMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.RandaoMixes.String()).Inc()
runtime.SetFinalizer(b.randaoMixesMultiValue, randaoMixesFinalizer)
}
if b.balancesMultiValue != nil && b.balancesMultiValue.IsFragmented() {
initialMVslice := b.balancesMultiValue
b.balancesMultiValue = b.balancesMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.Balances.String()).Inc()
runtime.SetFinalizer(b.balancesMultiValue, balancesFinalizer)
}
if b.validatorsMultiValue != nil && b.validatorsMultiValue.IsFragmented() {
initialMVslice := b.validatorsMultiValue
b.validatorsMultiValue = b.validatorsMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.Validators.String()).Inc()
runtime.SetFinalizer(b.validatorsMultiValue, validatorsFinalizer)
}
if b.inactivityScoresMultiValue != nil && b.inactivityScoresMultiValue.IsFragmented() {
initialMVslice := b.inactivityScoresMultiValue
b.inactivityScoresMultiValue = b.inactivityScoresMultiValue.Reset(b)
initialMVslice.Detach(b)
multiValueCountGauge.WithLabelValues(types.InactivityScores.String()).Inc()
runtime.SetFinalizer(b.inactivityScoresMultiValue, inactivityScoresFinalizer)
}
}
func randaoMixesFinalizer(m *MultiValueRandaoMixes) {
multiValueRandaoMixesCountGauge.Dec()
multiValueCountGauge.WithLabelValues(types.RandaoMixes.String()).Dec()
}
func blockRootsFinalizer(m *MultiValueBlockRoots) {
multiValueBlockRootsCountGauge.Dec()
multiValueCountGauge.WithLabelValues(types.BlockRoots.String()).Dec()
}
func stateRootsFinalizer(m *MultiValueStateRoots) {
multiValueStateRootsCountGauge.Dec()
multiValueCountGauge.WithLabelValues(types.StateRoots.String()).Dec()
}
func balancesFinalizer(m *MultiValueBalances) {
multiValueBalancesCountGauge.Dec()
multiValueCountGauge.WithLabelValues(types.Balances.String()).Dec()
}
func validatorsFinalizer(m *MultiValueValidators) {
multiValueValidatorsCountGauge.Dec()
multiValueCountGauge.WithLabelValues(types.Validators.String()).Dec()
}
func inactivityScoresFinalizer(m *MultiValueInactivityScores) {
multiValueInactivityScoresCountGauge.Dec()
multiValueCountGauge.WithLabelValues(types.InactivityScores.String()).Dec()
}

View File
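The hunk above collapses six single-purpose gauges into label-keyed GaugeVecs, so each state field becomes a label value rather than its own metric. A standalone illustration of that Prometheus pattern follows; the metric name and help text are taken from the diff, and the main function is scaffolding.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// One labeled vector replaces N single-purpose gauges: each state field
// becomes a label value instead of its own metric.
var multiValueCountGauge = promauto.NewGaugeVec(prometheus.GaugeOpts{
	Name: "multi_value_object_count",
	Help: "The number of instances that exist for the multivalue slice for a particular field.",
}, []string{"field"})

func main() {
	multiValueCountGauge.WithLabelValues("randao_mixes").Inc()
	multiValueCountGauge.WithLabelValues("validators").Inc()
	fmt.Println("incremented per-field counts on a single GaugeVec")
}

Labels keep the metric namespace flat while still allowing per-field queries such as multi_value_object_count{field="validators"}.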

@@ -932,6 +932,68 @@ func (b *BeaconState) FieldReferencesCount() map[string]uint64 {
return refMap
}
// RecordStateMetrics proceeds to record any state related metrics data.
func (b *BeaconState) RecordStateMetrics() {
b.lock.RLock()
defer b.lock.RUnlock()
// Only run this for nodes running with the experimental state.
if !features.Get().EnableExperimentalState {
return
}
// Validators
if b.validatorsMultiValue != nil {
stats := b.validatorsMultiValue.MultiValueStatistics()
multiValueIndividualElementsCountGauge.WithLabelValues(types.Validators.String()).Set(float64(stats.TotalIndividualElements))
multiValueIndividualElementReferencesCountGauge.WithLabelValues(types.Validators.String()).Set(float64(stats.TotalIndividualElemReferences))
multiValueAppendedElementsCountGauge.WithLabelValues(types.Validators.String()).Set(float64(stats.TotalAppendedElements))
multiValueAppendedElementReferencesCountGauge.WithLabelValues(types.Validators.String()).Set(float64(stats.TotalAppendedElemReferences))
}
// Balances
if b.balancesMultiValue != nil {
stats := b.balancesMultiValue.MultiValueStatistics()
multiValueIndividualElementsCountGauge.WithLabelValues(types.Balances.String()).Set(float64(stats.TotalIndividualElements))
multiValueIndividualElementReferencesCountGauge.WithLabelValues(types.Balances.String()).Set(float64(stats.TotalIndividualElemReferences))
multiValueAppendedElementsCountGauge.WithLabelValues(types.Balances.String()).Set(float64(stats.TotalAppendedElements))
multiValueAppendedElementReferencesCountGauge.WithLabelValues(types.Balances.String()).Set(float64(stats.TotalAppendedElemReferences))
}
// InactivityScores
if b.inactivityScoresMultiValue != nil {
stats := b.inactivityScoresMultiValue.MultiValueStatistics()
multiValueIndividualElementsCountGauge.WithLabelValues(types.InactivityScores.String()).Set(float64(stats.TotalIndividualElements))
multiValueIndividualElementReferencesCountGauge.WithLabelValues(types.InactivityScores.String()).Set(float64(stats.TotalIndividualElemReferences))
multiValueAppendedElementsCountGauge.WithLabelValues(types.InactivityScores.String()).Set(float64(stats.TotalAppendedElements))
multiValueAppendedElementReferencesCountGauge.WithLabelValues(types.InactivityScores.String()).Set(float64(stats.TotalAppendedElemReferences))
}
// BlockRoots
if b.blockRootsMultiValue != nil {
stats := b.blockRootsMultiValue.MultiValueStatistics()
multiValueIndividualElementsCountGauge.WithLabelValues(types.BlockRoots.String()).Set(float64(stats.TotalIndividualElements))
multiValueIndividualElementReferencesCountGauge.WithLabelValues(types.BlockRoots.String()).Set(float64(stats.TotalIndividualElemReferences))
multiValueAppendedElementsCountGauge.WithLabelValues(types.BlockRoots.String()).Set(float64(stats.TotalAppendedElements))
multiValueAppendedElementReferencesCountGauge.WithLabelValues(types.BlockRoots.String()).Set(float64(stats.TotalAppendedElemReferences))
}
// StateRoots
if b.stateRootsMultiValue != nil {
stats := b.stateRootsMultiValue.MultiValueStatistics()
multiValueIndividualElementsCountGauge.WithLabelValues(types.StateRoots.String()).Set(float64(stats.TotalIndividualElements))
multiValueIndividualElementReferencesCountGauge.WithLabelValues(types.StateRoots.String()).Set(float64(stats.TotalIndividualElemReferences))
multiValueAppendedElementsCountGauge.WithLabelValues(types.StateRoots.String()).Set(float64(stats.TotalAppendedElements))
multiValueAppendedElementReferencesCountGauge.WithLabelValues(types.StateRoots.String()).Set(float64(stats.TotalAppendedElemReferences))
}
// RandaoMixes
if b.randaoMixesMultiValue != nil {
stats := b.randaoMixesMultiValue.MultiValueStatistics()
multiValueIndividualElementsCountGauge.WithLabelValues(types.RandaoMixes.String()).Set(float64(stats.TotalIndividualElements))
multiValueIndividualElementReferencesCountGauge.WithLabelValues(types.RandaoMixes.String()).Set(float64(stats.TotalIndividualElemReferences))
multiValueAppendedElementsCountGauge.WithLabelValues(types.RandaoMixes.String()).Set(float64(stats.TotalAppendedElements))
multiValueAppendedElementReferencesCountGauge.WithLabelValues(types.RandaoMixes.String()).Set(float64(stats.TotalAppendedElemReferences))
}
}
// IsNil checks if the state and the underlying proto
// object are nil.
func (b *BeaconState) IsNil() bool {

View File

@@ -338,7 +338,7 @@ func TestRoundTripDenebSave(t *testing.T) {
require.NoError(t, undo())
}()
parentRoot := [32]byte{}
c := blobsTestCase{nblocks: 10}
c := blobsTestCase{}
chain, clock := defaultMockChain(t)
c.chain = chain
c.clock = clock

View File

@@ -606,9 +606,7 @@ func TestBlocksFetcher_WaitForBandwidth(t *testing.T) {
p1.Connect(p2)
require.Equal(t, 1, len(p1.BHost.Network().Peers()), "Expected peers to be connected")
req := &ethpb.BeaconBlocksByRangeRequest{
StartSlot: 100,
Step: 1,
Count: 64,
Count: 64,
}
topic := p2pm.RPCBlocksByRangeTopicV1

View File

@@ -154,7 +154,7 @@ func (s *Service) processFetchedData(
}
}
// processFetchedData processes data received from queue.
// processFetchedDataRegSync processes data received from queue.
func (s *Service) processFetchedDataRegSync(
ctx context.Context, genesis time.Time, startSlot primitives.Slot, data *blocksQueueFetchedData) {
defer s.updatePeerScorerStats(data.pid, startSlot)
@@ -173,16 +173,13 @@ func (s *Service) processFetchedDataRegSync(
"firstSlot": data.bwb[0].Block.Block().Slot(),
"firstUnprocessed": bwb[0].Block.Block().Slot(),
}
for _, b := range data.bwb {
for _, b := range bwb {
if err := avs.Persist(s.clock.CurrentSlot(), b.Blobs...); err != nil {
log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Batch failure due to BlobSidecar issues")
return
}
if err := s.processBlock(ctx, genesis, b, s.cfg.Chain.ReceiveBlock, avs); err != nil {
switch {
case errors.Is(err, errBlockAlreadyProcessed):
log.WithError(err).WithFields(batchFields).WithFields(syncFields(b.Block)).Warn("Skipping already processed block")
continue
case errors.Is(err, errParentDoesNotExist):
log.WithFields(batchFields).WithField("missingParent", fmt.Sprintf("%#x", b.Block.Block().ParentRoot())).
WithFields(syncFields(b.Block)).Debug("Could not process batch blocks due to missing parent")
@@ -292,7 +289,7 @@ func validUnprocessed(ctx context.Context, bwb []blocks.BlockWithROBlobs, headSl
parent := bwb[i-1].Block
if parent.Root() != b.Block().ParentRoot() {
return nil, fmt.Errorf("expected linear block list with parent root of %#x (slot %d) but received %#x (slot %d)",
parent, parent.Block().Slot(), b.Block().ParentRoot(), b.Block().Slot())
parent.Root(), parent.Block().Slot(), b.Block().ParentRoot(), b.Block().Slot())
}
}
}
@@ -302,7 +299,7 @@ func validUnprocessed(ctx context.Context, bwb []blocks.BlockWithROBlobs, headSl
if *processed+1 == len(bwb) {
maxIncoming := bwb[len(bwb)-1].Block
maxRoot := maxIncoming.Root()
return nil, fmt.Errorf("headSlot:%d, blockSlot:%d , root %#x:%w", headSlot, maxIncoming.Block().Slot(), maxRoot, errBlockAlreadyProcessed)
return nil, fmt.Errorf("%w: headSlot=%d, blockSlot=%d, root=%#x", errBlockAlreadyProcessed, headSlot, maxIncoming.Block().Slot(), maxRoot)
}
nonProcessedIdx := *processed + 1
return bwb[nonProcessedIdx:], nil

View File
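Two details in the hunks above depend on Go error wrapping: the sync loop branches on errors.Is(err, errBlockAlreadyProcessed), and validUnprocessed now builds that error with a leading %w verb. A standalone demonstration of why the wrap matters:

package main

import (
	"errors"
	"fmt"
)

var errBlockAlreadyProcessed = errors.New("block is already processed")

func main() {
	// %w wraps, so errors.Is can see through the formatted message.
	wrapped := fmt.Errorf("%w: headSlot=%d, blockSlot=%d", errBlockAlreadyProcessed, 100, 96)
	fmt.Println(errors.Is(wrapped, errBlockAlreadyProcessed)) // true

	// %v (or %s) only formats the message, breaking errors.Is.
	flattened := fmt.Errorf("%v: headSlot=%d, blockSlot=%d", errBlockAlreadyProcessed, 100, 96)
	fmt.Println(errors.Is(flattened, errBlockAlreadyProcessed)) // false
}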

@@ -110,7 +110,7 @@ func validateRangeRequest(r *pb.BeaconBlocksByRangeRequest, current primitives.S
if rp.start > maxStart {
return rangeParams{}, p2ptypes.ErrInvalidRequest
}
rp.end, err = rp.start.SafeAdd((rp.size - 1))
rp.end, err = rp.start.SafeAdd(rp.size - 1)
if err != nil {
return rangeParams{}, p2ptypes.ErrInvalidRequest
}

View File

@@ -1074,7 +1074,6 @@ func TestRPCBeaconBlocksByRange_FilterBlocks(t *testing.T) {
func TestRPCBeaconBlocksByRange_FilterBlocks_PreviousRoot(t *testing.T) {
req := &ethpb.BeaconBlocksByRangeRequest{
StartSlot: 100,
Step: 1,
Count: uint64(flags.Get().BlockBatchLimit) * 2,
}

View File

@@ -31,7 +31,7 @@ var ErrInvalidFetchedData = errors.New("invalid data returned from peer")
var errMaxRequestBlobSidecarsExceeded = errors.Wrap(ErrInvalidFetchedData, "peer exceeded req blob chunk tx limit")
var errBlobChunkedReadFailure = errors.New("failed to read stream of chunk-encoded blobs")
var errBlobUnmarshal = errors.New("Could not unmarshal chunk-encoded blob")
var errUnrequestedRoot = errors.New("Received BlobSidecar in response that was not requested")
var errUnrequested = errors.New("Received BlobSidecar in response that was not requested")
var errBlobResponseOutOfBounds = errors.New("received BlobSidecar with slot outside BlobSidecarsByRangeRequest bounds")
// BeaconBlockProcessor defines a block processing function, which allows to start utilizing
@@ -199,13 +199,22 @@ func SendBlobSidecarByRoot(
type blobResponseValidation func(blocks.ROBlob) error
func blobValidatorFromRootReq(req *p2ptypes.BlobSidecarsByRootReq) blobResponseValidation {
roots := make(map[[32]byte]bool)
blobIds := make(map[[32]byte]map[uint64]bool)
for _, sc := range *req {
roots[bytesutil.ToBytes32(sc.BlockRoot)] = true
blockRoot := bytesutil.ToBytes32(sc.BlockRoot)
if blobIds[blockRoot] == nil {
blobIds[blockRoot] = make(map[uint64]bool)
}
blobIds[blockRoot][sc.Index] = true
}
return func(sc blocks.ROBlob) error {
if requested := roots[sc.BlockRoot()]; !requested {
return errors.Wrapf(errUnrequestedRoot, "root=%#x", sc.BlockRoot())
blobIndices := blobIds[sc.BlockRoot()]
if blobIndices == nil {
return errors.Wrapf(errUnrequested, "root=%#x", sc.BlockRoot())
}
requested := blobIndices[sc.Index]
if !requested {
return errors.Wrapf(errUnrequested, "root=%#x index=%d", sc.BlockRoot(), sc.Index)
}
return nil
}

View File
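The change above tightens response validation from root-only to (root, index): a returned sidecar must match both the requested block root and the requested blob index. A self-contained sketch of that two-level lookup follows; the blobID type is a simplification standing in for the ethpb.BlobIdentifier and blocks.ROBlob types used in the real code.

package main

import (
	"errors"
	"fmt"
)

var errUnrequested = errors.New("received BlobSidecar in response that was not requested")

type blobID struct {
	root  [32]byte
	index uint64
}

// validatorFromReq mirrors blobValidatorFromRootReq: build a set keyed by
// (root, index), then reject any response that falls outside that set.
func validatorFromReq(ids []blobID) func(blobID) error {
	requested := make(map[[32]byte]map[uint64]bool)
	for _, id := range ids {
		if requested[id.root] == nil {
			requested[id.root] = make(map[uint64]bool)
		}
		requested[id.root][id.index] = true
	}
	return func(got blobID) error {
		indices := requested[got.root]
		if indices == nil {
			return fmt.Errorf("%w: root=%#x", errUnrequested, got.root)
		}
		if !indices[got.index] {
			return fmt.Errorf("%w: root=%#x index=%d", errUnrequested, got.root, got.index)
		}
		return nil
	}
}

func main() {
	v := validatorFromReq([]blobID{{root: [32]byte{1}, index: 0}})
	fmt.Println(v(blobID{root: [32]byte{1}, index: 1})) // wrong index: rejected
}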

@@ -479,11 +479,12 @@ func TestSendRequest_SendBeaconBlocksByRootRequest(t *testing.T) {
}
func TestBlobValidatorFromRootReq(t *testing.T) {
validRoot := bytesutil.PadTo([]byte("valid"), 32)
invalidRoot := bytesutil.PadTo([]byte("invalid"), 32)
rootA := bytesutil.PadTo([]byte("valid"), 32)
rootB := bytesutil.PadTo([]byte("invalid"), 32)
header := &ethpb.SignedBeaconBlockHeader{}
validb := util.GenerateTestDenebBlobSidecar(t, bytesutil.ToBytes32(validRoot), header, 0, []byte{}, make([][]byte, 0))
invalidb := util.GenerateTestDenebBlobSidecar(t, bytesutil.ToBytes32(invalidRoot), header, 0, []byte{}, make([][]byte, 0))
blobSidecarA0 := util.GenerateTestDenebBlobSidecar(t, bytesutil.ToBytes32(rootA), header, 0, []byte{}, make([][]byte, 0))
blobSidecarA1 := util.GenerateTestDenebBlobSidecar(t, bytesutil.ToBytes32(rootA), header, 1, []byte{}, make([][]byte, 0))
blobSidecarB0 := util.GenerateTestDenebBlobSidecar(t, bytesutil.ToBytes32(rootB), header, 0, []byte{}, make([][]byte, 0))
cases := []struct {
name string
ids []*ethpb.BlobIdentifier
@@ -491,15 +492,21 @@ func TestBlobValidatorFromRootReq(t *testing.T) {
err error
}{
{
name: "valid",
ids: []*ethpb.BlobIdentifier{{BlockRoot: validRoot}},
response: []blocks.ROBlob{validb},
name: "expected",
ids: []*ethpb.BlobIdentifier{{BlockRoot: rootA, Index: 0}},
response: []blocks.ROBlob{blobSidecarA0},
},
{
name: "invalid",
ids: []*ethpb.BlobIdentifier{{BlockRoot: validRoot}},
response: []blocks.ROBlob{invalidb},
err: errUnrequestedRoot,
name: "wrong root",
ids: []*ethpb.BlobIdentifier{{BlockRoot: rootA, Index: 0}},
response: []blocks.ROBlob{blobSidecarB0},
err: errUnrequested,
},
{
name: "wrong index",
ids: []*ethpb.BlobIdentifier{{BlockRoot: rootA, Index: 0}},
response: []blocks.ROBlob{blobSidecarA1},
err: errUnrequested,
},
}
for _, c := range cases {

View File

@@ -12,7 +12,7 @@ var (
// MevRelayEndpoint provides an HTTP access endpoint to a MEV builder network.
MevRelayEndpoint = &cli.StringFlag{
Name: "http-mev-relay",
Usage: "A MEV builder relay string http endpoint, this wil be used to interact MEV builder network using API defined in: https://ethereum.github.io/builder-specs/#/Builder",
Usage: "A MEV builder relay string http endpoint, this will be used to interact MEV builder network using API defined in: https://ethereum.github.io/builder-specs/#/Builder",
Value: "",
}
MaxBuilderConsecutiveMissedSlots = &cli.IntFlag{

View File

@@ -53,7 +53,7 @@ func blobStoragePath(c *cli.Context) string {
var errInvalidBlobRetentionEpochs = errors.New("value is smaller than spec minimum")
// blobRetentionEpoch returns the spec deffault MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST
// blobRetentionEpoch returns the spec default MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST
// or a user-specified flag overriding this value. If a user-specified override is
// smaller than the spec default, an error will be returned.
func blobRetentionEpoch(cliCtx *cli.Context) (primitives.Epoch, error) {

View File
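The corrected comment describes an override-with-floor rule: a user-specified retention value wins unless it is smaller than the spec default MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUEST, in which case an error is returned. A hypothetical sketch of that rule; the function shape and the numeric values are illustrative only.

package main

import (
	"errors"
	"fmt"
)

var errInvalidBlobRetentionEpochs = errors.New("value is smaller than spec minimum")

// retentionEpoch applies the rule the comment describes: the user override
// wins unless it undercuts the spec default.
func retentionEpoch(specDefault, override uint64, overrideSet bool) (uint64, error) {
	if !overrideSet {
		return specDefault, nil
	}
	if override < specDefault {
		return 0, errInvalidBlobRetentionEpochs
	}
	return override, nil
}

func main() {
	fmt.Println(retentionEpoch(4096, 8192, true)) // override accepted
	fmt.Println(retentionEpoch(4096, 10, true))   // below the floor: error
}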

@@ -62,7 +62,6 @@ var appHelpFlagGroups = []flagGroup{
cmd.TracingEndpointFlag,
cmd.TraceSampleFractionFlag,
cmd.MonitoringHostFlag,
cmd.BackupWebhookOutputDir,
flags.MonitoringPortFlag,
cmd.DisableMonitoringFlag,
cmd.MaxGoroutines,
@@ -185,6 +184,12 @@ var appHelpFlagGroups = []flagGroup{
flags.InteropNumValidatorsFlag,
},
},
{
Name: "deprecated",
Flags: []cli.Flag{
cmd.BackupWebhookOutputDir,
},
},
}
func init() {

View File

@@ -31,11 +31,12 @@ func DefaultDataDir() string {
// Try to place the data folder in the user's home dir
home := file.HomeDir()
if home != "" {
if runtime.GOOS == "darwin" {
switch runtime.GOOS {
case "darwin":
return filepath.Join(home, "Library", "Eth2")
} else if runtime.GOOS == "windows" {
case "windows":
return filepath.Join(home, "AppData", "Local", "Eth2")
} else {
default:
return filepath.Join(home, ".eth2")
}
}

View File

@@ -16,7 +16,7 @@ type StdInPasswordReader struct {
}
// ReadPassword reads a password from stdin.
func (_ StdInPasswordReader) ReadPassword() (string, error) {
func (StdInPasswordReader) ReadPassword() (string, error) {
pwd, err := terminal.ReadPassword(int(os.Stdin.Fd()))
return string(pwd), err
}

View File

@@ -33,7 +33,7 @@ prysm_image_upload(
entrypoint = ["/prysmctl"],
repository = "gcr.io/prysmaticlabs/prysm/cmd/prysmctl",
symlinks = {
# Backwards compatiability for images that depended on the old filepath.
# Backwards compatibility for images that depended on the old filepath.
"/app/cmd/prysmctl/prysmctl": "/prysmctl",
},
tags = ["manual"],

View File

@@ -5,7 +5,6 @@ load("//tools:prysm_image.bzl", "prysm_image_upload")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"main.go",
"usage.go",
],
@@ -30,6 +29,7 @@ go_library(
"//runtime/version:go_default_library",
"//validator/node:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
@@ -62,7 +62,7 @@ prysm_image_upload(
entrypoint = ["/validator"],
repository = "gcr.io/prysmaticlabs/prysm/validator",
symlinks = {
# Backwards compatiability for images that depended on the old filepath.
# Backwards compatibility for images that depended on the old filepath.
"/app/cmd/validator/validator": "/validator",
},
tags = ["manual"],

View File

@@ -1,5 +0,0 @@
package main
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "main")

View File

@@ -10,6 +10,7 @@ import (
runtimeDebug "runtime/debug"
joonix "github.com/joonix/log"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/cmd"
accountcommands "github.com/prysmaticlabs/prysm/v4/cmd/validator/accounts"
dbcommands "github.com/prysmaticlabs/prysm/v4/cmd/validator/db"
@@ -31,8 +32,10 @@ import (
"github.com/urfave/cli/v2"
)
var log = logrus.WithField("prefix", "main")
func startNode(ctx *cli.Context) error {
// verify if ToS accepted
// Verify if ToS is accepted.
if err := tos.VerifyTosAcceptedOrPrompt(ctx); err != nil {
return err
}
@@ -141,6 +144,8 @@ func main() {
return err
}
logFileName := ctx.String(cmd.LogFileName.Name)
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
@@ -148,8 +153,8 @@ func main() {
formatter.TimestampFormat = "2006-01-02 15:04:05"
formatter.FullTimestamp = true
// If persistent log files are written - we disable the log messages coloring because
// the colors are ANSI codes and seen as Gibberish in the log files.
formatter.DisableColors = ctx.String(cmd.LogFileName.Name) != ""
// the colors are ANSI codes and seen as gibberish in the log files.
formatter.DisableColors = logFileName != ""
logrus.SetFormatter(formatter)
case "fluentd":
f := joonix.NewFormatter()
@@ -167,7 +172,6 @@ func main() {
return fmt.Errorf("unknown log format %s", format)
}
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName); err != nil {
log.WithError(err).Error("Failed to configuring logging to disk.")
@@ -182,8 +186,9 @@ func main() {
}
if err := debug.Setup(ctx); err != nil {
return err
return errors.Wrap(err, "failed to setup debug")
}
return cmd.ValidateNoArgs(ctx)
},
After: func(ctx *cli.Context) error {

View File

@@ -29,7 +29,7 @@ Examples of when not to use a feature flag:
Once it has been decided that you should use a feature flag. Follow these steps to safely
releasing your feature. In general, try to create a single PR for each step of this process.
1. Add your feature flag to shared/featureconfig/flags.go, use the flag to toggle a boolean in the
1. Add your feature flag to `shared/featureconfig/flags.go`, use the flag to toggle a boolean in the
feature config in shared/featureconfig/config.go. It is a good idea to use the `enable` prefix for
your flag since you're going to invert the flag in a later step. i.e you will use `disable` prefix
later. For example, `--enable-my-feature`. Additionally, [create a feature flag tracking issue](https://github.com/prysmaticlabs/prysm/issues/new?template=feature_flag.md)
@@ -48,7 +48,7 @@ func someExistingMethod(ctx context.Context) error {
3. Add the flag to the end to end tests. This set of flags can also be found in shared/featureconfig/flags.go.
4. Test the functionality locally and safely in production. Once you have enough confidence that
your new function works and is safe to release then move onto the next step.
5. Move your existing flag to the deprecated section of shared/featureconfig/flags.go. It is
5. Move your existing flag to the deprecated section of `shared/featureconfig/flags.go`. It is
important NOT to delete your existing flag outright. Deleting a flag can be extremely frustrating
to users as it may break their existing workflow! Marking a flag as deprecated gives users time to
adjust their start scripts and workflow. Add another feature flag to represent the inverse of your

View File
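Step 1 above pairs a CLI flag with a boolean on the feature config; the EnableLightClient hunks later in this diff follow exactly that pattern. A minimal sketch of the shape, using the doc's own --enable-my-feature example (the flag and config names are illustrative):

package features

import "github.com/urfave/cli/v2"

// flags.go: define the flag with an `enable` prefix, as step 1 recommends.
var EnableMyFeature = &cli.BoolFlag{
	Name:  "enable-my-feature",
	Usage: "Enables my new feature (experimental).",
}

// config.go: the boolean the flag toggles.
type Flags struct {
	EnableMyFeature bool
}

// configure mirrors the ConfigureBeaconChain wiring shown later in this diff.
func configure(ctx *cli.Context, cfg *Flags) {
	if ctx.IsSet(EnableMyFeature.Name) {
		cfg.EnableMyFeature = true
	}
}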

@@ -23,10 +23,11 @@ import (
"sync"
"time"
"github.com/prysmaticlabs/prysm/v4/cmd"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
"github.com/prysmaticlabs/prysm/v4/cmd"
"github.com/prysmaticlabs/prysm/v4/config/params"
)
var log = logrus.WithField("prefix", "flags")
@@ -40,6 +41,7 @@ type Flags struct {
EnableExperimentalState bool // EnableExperimentalState turns on the latest and greatest (but potentially unstable) changes to the beacon state.
WriteSSZStateTransitions bool // WriteSSZStateTransitions to tmp directory.
EnablePeerScorer bool // EnablePeerScorer enables experimental peer scoring in p2p.
EnableLightClient bool // EnableLightClient enables light client APIs.
WriteWalletPasswordOnWebOnboarding bool // WriteWalletPasswordOnWebOnboarding writes the password to disk after Prysm web signup.
EnableDoppelGanger bool // EnableDoppelGanger enables doppelganger protection on startup for the validator.
EnableHistoricalSpaceRepresentation bool // EnableHistoricalSpaceRepresentation enables the saving of registry validators in separate buckets to save space
@@ -237,6 +239,10 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
logEnabled(EnableEIP4881)
cfg.EnableEIP4881 = true
}
if ctx.IsSet(EnableLightClient.Name) {
logEnabled(EnableLightClient)
cfg.EnableLightClient = true
}
cfg.AggregateIntervals = [3]time.Duration{aggregateFirstInterval.Value, aggregateSecondInterval.Value, aggregateThirdInterval.Value}
Init(cfg)
return nil

View File

@@ -140,6 +140,10 @@ var (
Name: "enable-eip-4881",
Usage: "Enables the deposit tree specified in EIP-4881.",
}
EnableLightClient = &cli.BoolFlag{
Name: "enable-lightclient",
Usage: "Enables the light client support in the beacon node",
}
disableResourceManager = &cli.BoolFlag{
Name: "disable-resource-manager",
Usage: "Disables running the libp2p resource manager.",
@@ -204,6 +208,7 @@ var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []c
EnableEIP4881,
disableResourceManager,
DisableRegistrationCache,
EnableLightClient,
}...)...)
// E2EBeaconChainFlags contains a list of the beacon chain feature flags to be tested in E2E.

View File

@@ -34,7 +34,7 @@ var mainnetNetworkConfig = &NetworkConfig{
ContractDeploymentBlock: 11184524, // Note: contract was deployed in block 11052984 but no transactions were sent until 11184524.
BootstrapNodes: []string{
// Teku team's bootnode
"enr:-KG4QMOEswP62yzDjSwWS4YEjtTZ5PO6r65CPqYBkgTTkrpaedQ8uEUo1uMALtJIvb2w_WWEVmg5yt1UAuK1ftxUU7QDhGV0aDKQu6TalgMAAAD__________4JpZIJ2NIJpcIQEnfA2iXNlY3AyNTZrMaEDfol8oLr6XJ7FsdAYE7lpJhKMls4G_v6qQOGKJUWGb_uDdGNwgiMog3VkcIIjKA",
"enr:-KG4QNTx85fjxABbSq_Rta9wy56nQ1fHK0PewJbGjLm1M4bMGx5-3Qq4ZX2-iFJ0pys_O90sVXNNOxp2E7afBsGsBrgDhGV0aDKQu6TalgMAAAD__________4JpZIJ2NIJpcIQEnfA2iXNlY3AyNTZrMaECGXWQ-rQ2KZKRH1aOW4IlPDBkY4XDphxg9pxKytFCkayDdGNwgiMog3VkcIIjKA",
"enr:-KG4QF4B5WrlFcRhUU6dZETwY5ZzAXnA0vGC__L1Kdw602nDZwXSTs5RFXFIFUnbQJmhNGVU6OIX7KVrCSTODsz1tK4DhGV0aDKQu6TalgMAAAD__________4JpZIJ2NIJpcIQExNYEiXNlY3AyNTZrMaECQmM9vp7KhaXhI-nqL_R0ovULLCFSFTa9CPPSdb1zPX6DdGNwgiMog3VkcIIjKA",
// Prylab team's bootnodes
"enr:-Ku4QImhMc1z8yCiNJ1TyUxdcfNucje3BGwEHzodEZUan8PherEo4sF7pPHPSIB1NNuSg5fZy7qFsjmUKs2ea1Whi0EBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpD1pf1CAAAAAP__________gmlkgnY0gmlwhBLf22SJc2VjcDI1NmsxoQOVphkDqal4QzPMksc5wnpuC3gvSC8AfbFOnZY_On34wIN1ZHCCIyg",

Some files were not shown because too many files have changed in this diff.