Compare commits

...

76 Commits

Author SHA1 Message Date
terence
e397f8a2bd Skip origin root when cleaning dirty state (#13521)
* Skip origin root when cleaning dirty state

* Clean up
2024-01-24 17:22:50 +00:00
Radosław Kapka
6438060733 Clear cache everywhere in tests of core helpers (#13509) 2024-01-24 16:11:43 +00:00
Nishant Das
a2892b1ed5 clean up validate beacon block (#13517) 2024-01-24 05:48:15 +00:00
Nishant Das
f4ab2ca79f lower it (#13516) 2024-01-24 01:28:36 +00:00
kasey
dbcf5c29cd moving some blob rpc validation close to peer read (#13511)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 22:54:16 +00:00
james-prysm
c9fe53bc32 Blob API: make errors more generic (#13513)
* make api response more generic

* gaz
2024-01-23 20:07:46 +00:00
terence
8522febd88 Add Holesky Deneb Epoch (#13506)
* Add Holesky Deneb Epoch

* Fix fork version

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix config

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-23 19:29:17 +00:00
james-prysm
75a28310c2 fixing route to match specs (#13510) 2024-01-23 18:04:03 +00:00
kasey
1df173e701 Block backfilling (#12968)
* backfill service

* fix bug where origin state is never unlocked

* support mvslice states

* use renamed interface

* refactor db code to skip block cache for backfill

* lint

* add test for verifier.verify

* enable service in service init test

* cancellation cleanup

* adding nil checks to configset juggling

* assume blocks are available by default

As long as we're sure the AvailableBlocker is initialized correctly
during node startup, defaulting to assuming we aren't in a checkpoint
sync simplifies things greatly for tests.

* block saving path refactor and bugfix

* fix fillback test

* fix BackfillStatus init tests

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 07:54:30 +00:00
terence
3187a05a76 Align aggregated att gossip validations (#13490)
* Align aggregated att gossip validations

* Feedback on reusing existing methods

* Nishant's feedback

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 04:37:06 +00:00
Justin Traglia
4e24102237 Fix minor issue in blsToExecChange validator (#13498)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 03:26:57 +00:00
james-prysm
8dd5e96b29 re-enabling jwt on keymanager API (#13492)
* re-enabling jwt on keymanager API

* adding tests

* Update validator/rpc/intercepter.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* handling error in test

* remove debugging logs

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-22 22:16:10 +00:00
james-prysm
4afb379f8d cleanup duties naming (#13451)
* updating some naming to reflect changes to duties

* fixing unit tests

* fixing more tests
2024-01-22 16:58:25 +00:00
Nishant Das
5a2453ac9c Add Debug State Transition Method (#13495)
* add it

* lint
2024-01-22 14:46:20 +00:00
Nishant Das
e610d2a5de fix it (#13496) 2024-01-22 14:26:14 +00:00
Preston Van Loon
233aaf2f9e e2e: Fix multiclient lighthouse flag removal (#13494) 2024-01-21 21:11:11 +00:00
Nishant Das
a49bdcaa1f fix it (#13493) 2024-01-20 16:15:38 +00:00
Gaki
bdd7b2caa9 chore: typo fix (#13461)
* messsage

* cancellation
2024-01-20 01:07:17 +00:00
terence
8de0e3804b Update Sepolia Deneb fork epoch (#13491) 2024-01-19 18:47:07 +00:00
Ying Quan Tan
bfb648067b Re-enable Slasher E2E Test (#13420)
* re-enable e2e slashing test #12415

* refactored slashing evaluator

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-19 04:44:27 +00:00
terence
852db1f3eb Remove debug setting highest slot log (#13488) 2024-01-19 04:25:15 +00:00
Nishant Das
5d3663ef8d update lighthouse and tests (#13470) 2024-01-19 03:46:36 +00:00
Radosław Kapka
a608630727 Add Inactivity field to attestation rewards (#13382) 2024-01-18 18:51:35 +00:00
Mario Vega
37739b4193 fix blobsidecar json tag for commitment inclusion proof (#13475)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-18 17:43:43 +00:00
james-prysm
4d2067dbae bugfix: ssz post-requests should check content type not accept (#13482)
* updating post requests that accept ssz to check content type instead of accept header

* radek's review comments to make things more clear
2024-01-18 17:41:31 +00:00
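
For illustration, the distinction above can be sketched as follows; this is a minimal example assuming a hypothetical isSSZRequest helper, not the actual Prysm handler code:

package sketch

import (
	"net/http"
	"strings"
)

// isSSZRequest reports whether a POST body is SSZ-encoded. Content-Type
// describes the payload the client is uploading, whereas Accept only states
// which response encoding the client wants back.
func isSSZRequest(r *http.Request) bool {
	return strings.Contains(r.Header.Get("Content-Type"), "application/octet-stream")
}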
Nishant Das
fc05e306dd Allow Pcli to Run State Transitions Easily (#13484)
* add all this in

* gaz

* add flag
2024-01-18 14:44:06 +00:00
Radosław Kapka
204de13c86 REST VC: Subscribe to Beacon API events (#13453)
* Revert "Revert "REST VC: Subscribe to Beacon API events  (#13354)" (#13428)"

This reverts commit 8d092a1113.

* change logic

* review

* test fix

* fix critical error

* merge flag check

* change error msg

* return on errors

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-18 14:27:41 +00:00
terence
f3ef1b64d6 Enhance block by root log (#13472) 2024-01-18 13:43:10 +00:00
terence
c3dbfa66d0 Change blob latency metrics to ms (#13481) 2024-01-17 23:28:42 +00:00
terence
93aba997f4 Move checking of attribute empty earlier (#13465) 2024-01-17 18:42:56 +00:00
Potuz
79bb7efbf8 Check init sync before getting payload attributes (#13479)
* Check init sync before getting payload attributes

This PR adds a helper to forkchoice that returns the delay of the latest
imported block. It also adds a helper with a heuristic to check whether the
node is in init sync. If the highest imported block was imported with a delay
of less than an epoch, then the node is considered in regular sync. If, on the
other hand, the highest imported block is additionally more than two epochs
old, then the node is considered in init sync.

The helper that checks this only uses forkchoice and therefore requires a
read lock. There are four paths that call it:

1) During regular block processing, we defer a function to send the second
   FCU call with attributes. This function may not be called at all if we are
   not regularly syncing.
2) During regular block processing, we check the payload attributes in the
   path `postBlockProcess->getFCUArgs->computePayloadAttributes` when syncing
   a late block. In this case forkchoice is already locked, and we add a call
   in `getFCUArgs` to return early if we are not regularly syncing.
3) When handling late blocks in `lateBlockTasks`, we simply return early if
   not in regular sync (this is the biggest change, as it takes a longer FC
   lock for `lateBlockTasks`).
4) On attestation processing, in `UpdateHead`, we are already locked, so we
   just add a check to not update head on this path if not regularly syncing.

* fix build

* Fix mocks
2024-01-17 15:39:28 +00:00
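
A minimal standalone sketch of the heuristic described above (the actual Service.inRegularSync helper appears in a diff later on this page):

package sketch

// inRegularSync mirrors the heuristic: the node is considered in regular sync
// if its newest imported block is less than two epochs old, or if that block
// was imported less than one epoch after its slot started.
func inRegularSync(currentSlot, highestReceivedSlot, highestReceivedDelay, slotsPerEpoch uint64) bool {
	if currentSlot-highestReceivedSlot < 2*slotsPerEpoch {
		return true
	}
	return highestReceivedDelay < slotsPerEpoch
}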
terence
87b53db3b4 Capitalize Aggregated Unaggregated Attestations Log (#13473) 2024-01-17 13:30:31 +00:00
terence
fe431b9201 Use correct HistoricalRoots (#13477) 2024-01-17 08:14:32 +00:00
james-prysm
790a09f9b1 Improve wait for activation (#13448)
* removing timeout on wait for activation, instead switched to an event driven approach

* fixing unit tests

* linting

* simplifying return

* adding sleep for the remaining slot to avoid cpu spikes

* removing ifstatement on log

* removing ifstatement on log

* improving switch statement

* removing the loop entirely

* fixing unit test

* fixing manu's reported issue with deletion of json file

* missed change around writefile at path

* gofmt

* fixing deepsource issue with reading file

* trying to clean file to avoid deepsource issue

* still getting error trying a different approach

* fixing stream loop

* fixing unit test

* Update validator/keymanager/local/keymanager.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* fixing linting

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-01-16 17:04:54 +00:00
Manu NALEPA
46387a903a getLegacyDatabaseLocation: Change message. (#13471)
* `getLegacyDatabaseLocation`: Change message.

* Update validator/node/node.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-16 11:29:36 +00:00
Nishant Das
6a65e07684 Add Spans to Core Validator Methods (#13467)
* add traces

* gaz
2024-01-16 07:52:46 +00:00
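
The added spans follow the opencensus pattern already visible elsewhere in this changeset; a schematic example (the method and span name here are hypothetical):

package sketch

import (
	"context"

	"go.opencensus.io/trace"
)

// someCoreMethod opens a span on entry and closes it on return.
func someCoreMethod(ctx context.Context) error {
	ctx, span := trace.StartSpan(ctx, "validator.someCoreMethod")
	defer span.End()
	_ = ctx // the derived context would be passed to downstream calls
	return nil
}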
Potuz
abef94d7ad do not check optimistic status if cached attestation (#13462)
* do not check optimistic status if cached attestation

* Gazelle

* Gazelle again

* fix nil panics

* more nil checks

* more nil checks

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-15 18:50:33 +00:00
Manu NALEPA
99a8d0bac6 Validator client - Improve readability - NO FUNCTIONAL CHANGE (#13468)
* Improve `NewServiceRegistry` documentation.

* Improve `README.md`.

* Improve readability of `registerValidatorService`.

* Move `log` in `main.go`.

Since `log` is only used in `main.go`.

* Clean Tos.

* `DefaultDataDir`: Use `switch` instead of `if/elif`.

* `ReadPassword`: Remove unused receiver.

* `validator/main.go`: Clean.

* `WarnIfPlatformNotSupported`: Add Mac OSX ARM64.

* `runner.go`: Use idiomatic `err` handling.

* `waitForChainStart`: Avoid `chainStartResponse` mutation.

* `WaitForChainStart`: Reduce cognitive complexity.

* Logs: `powchain` ==> `execution`.
2024-01-15 14:46:54 +00:00
Preston Van Loon
b585ff77f5 Fix port logging in bootnode (#13457) 2024-01-15 04:38:22 +00:00
Nishant Das
1ff5a43385 Add the Ability to Defragment the Beacon State (#13444)
* Defragment head state

* change log level

* change it to be more efficient

* add flag

* add tests and clean up

* fix it

* gosimple

* Update container/multi-value-slice/multi_value_slice.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review

* unlock it

* remove from fc lock

---------

Co-authored-by: rkapka <rkapka@wp.pl>
2024-01-13 05:44:02 +00:00
dependabot[bot]
0cfbddc980 Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4 (#13445)
* Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4

Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go) from 0.39.3 to 0.39.4.
- [Release notes](https://github.com/quic-go/quic-go/releases)
- [Changelog](https://github.com/quic-go/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/quic-go/quic-go/compare/v0.39.3...v0.39.4)

---
updated-dependencies:
- dependency-name: github.com/quic-go/quic-go
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Ran gazelle

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-12 18:12:19 +00:00
Manu NALEPA
22a484c45e Fixes issues when running validator client with the --web flag and non existing validator.db file AND/OR prysm-wallet-v2 directory. (#13460)
* `getLegacyDatabaseLocation`: Add tests.

* `getLegacyDatabaseLocation`: Handle `c.wallet == nil`.

* `saveAuthToken`: Create parent directory if needed.
2024-01-12 15:53:27 +00:00
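
The last bullet describes a standard pattern; a generic sketch of it (the helper name and permissions are illustrative, not the actual saveAuthToken code):

package sketch

import (
	"os"
	"path/filepath"
)

// writeFileEnsuringDir creates the parent directory before writing the file.
func writeFileEnsuringDir(path string, data []byte) error {
	if err := os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}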
terence
6ddafe1159 Delete invalid blob at block processing (#13456)
* Delete invalid blob at block processing

* Fix test
2024-01-12 08:09:45 +00:00
qinlz2
b8c5af665f [3/5] light client events (#13225)
* add http streaming light client events

* expose ForkChoiceStore

* return error in insertFinalizedDeposits

* send light client updates

* Revert "return error in insertFinalizedDeposits"

This reverts commit f7068663b8c8b3a3bf45950d5258011a5e4d803e.

* fix: lint

* fix: patch the wrong error response

* refactor: rename the JSON structs

* fix: LC finalized stream return correct format

* fix: LC op stream return correct JSON format

* fix: omit nil JSON fields

* chore: gazzle

* fix: make update by range return list directly based on spec

* chore: remove unneccessary json annotations

* chore: adjust comments

* feat: introduce EnableLightClientEvents feature flag

* feat: use enable-lightclient-events flag

* chore: more logging details

* chore: fix rebase errors

* chore: adjust data structure to save mem

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* refactor: rename config EnableLightClient

* refactor: rename feature flag

* refactor: move helper functions to helper pkg

* test: fix broken unit tests

---------

Co-authored-by: Nicolás Pernas Maradei <nicolas@polymerlabs.org>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-11 18:38:59 +00:00
Radosław Kapka
2875ce6ee1 Use a single rest handler (#13446) 2024-01-11 16:03:35 +00:00
Manu NALEPA
a883ae2a76 BN: Move --db-backup-output-dir as a deprecated flag. (#13450) 2024-01-11 14:11:36 +00:00
Preston Van Loon
3a2b486bde Bazel 7.0.0 (#13321) 2024-01-10 15:34:11 +00:00
terence
283e09569d Remove old blob types (#13438)
* Remove old types

* Gen

* Remove old types

* Gen

* Fix lint

* Rm unused key

* Kasey's comment
2024-01-10 09:38:06 +00:00
Preston Van Loon
69723b4a77 Update go to 1.21.6 (#13440) 2024-01-10 09:37:40 +00:00
psr
4fe6834ba5 http endpoint cleanup (#13432)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-09 23:48:43 +00:00
Preston Van Loon
98e3f2b80f sort static analyzers, add more, fix violations (#13441) 2024-01-09 23:29:36 +00:00
Enrico Del Fante
2aef7a3ec5 Update teku's bootnode (#13437) 2024-01-09 22:28:41 +00:00
Brandon Liu
c41a54be9d fix metric for exited validator (#13379)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 22:15:53 +00:00
Justin Traglia
7e65378f63 Check sidecar index in BlobSidecarsByRoot response (#13180)
* Check sidecar index in BlobSidecarsByRoot response

* Remove unnecessary MaxBlobsPerBlock check
2024-01-09 22:14:56 +00:00
Justin Traglia
cf606e3766 Only process blocks which haven't been processed (#13442)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-09 22:14:03 +00:00
Justin Traglia
703cfc5819 Initialize exec payload fields and enforce order (#13372)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 21:49:35 +00:00
GoodDaisy
c6ebe157a6 Fix typos (#13435) 2024-01-09 21:03:36 +00:00
Preston Van Loon
a3cc81a048 Add nil check for head in IsOptimistic (#13439) 2024-01-09 19:40:26 +00:00
Nishant Das
75bbeb61cc Add Detailed Multi Value Metrics (#13429)
* add it

* pingo

* gaz

* remove pingo

* fix for old forks
2024-01-09 05:16:03 +00:00
kasey
5cea6bebb8 minimize syscalls in pruning routine (#13425)
* minimize syscalls in pruning routine

* terence feedback

* Update beacon-chain/db/filesystem/pruner.go

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* pr feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>
2024-01-08 22:31:16 +00:00
Potuz
28596d669b Use proposer index cache for blob verification (#13423)
* Use proposer index cache for blob verification

* add unit test

* Fix test

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-07 03:24:07 +00:00
kasey
0e043d55b4 VerifiedROBlobs in initial-sync (#13351)
* Use VerifiedROBlobs in initial-sync

* Update beacon-chain/das/cache.go

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* Apply suggestions from code review

comment fixes

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* fix lint error from gh web ui

* deepsource fixes

* more deepsource

* fix init wiring

* mark blobless blocks verified in batch mode

* move sig check after parent checks

* validate block commitment length at start of da check

* remove vestigial locking

* rm more copy-locksta

* rm old comment

* fail the entire batch if any sidecar fails

* lint

* skip redundant checks, fix len check

* assume sig and proposer checks passed for block

* inherits most checks from processed block

* Assume block processing handles most checks

* lint

* cleanup unused call and gaz

* more detailed logging for e2e

* fix bad refactor breaking non-finalized init-sync

* self-review cleanup

* gaz

* Update beacon-chain/verification/blob.go

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* terence and justin feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>
2024-01-06 23:47:09 +00:00
Radosław Kapka
8d092a1113 Revert "REST VC: Subscribe to Beacon API events (#13354)" (#13428)
This reverts commit e68b2821c1.
2024-01-06 21:36:42 +00:00
kasey
073c4edc5f use ROForkchoice in blob verifier (#13426)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-06 19:39:03 +00:00
terence
d055db1c31 Unlock forkchoice store if attribute is empty (#13427)
* Unlock forkchoice store if attribute is empty

* Better version
2024-01-06 07:32:56 +00:00
Nishant Das
a974627258 Make Aggregating In Parallel The Permanent Default (#13407)
* make it the permanent default

* gaz
2024-01-06 07:29:06 +00:00
Potuz
67dccc5e43 Break out several helpers from postBlockProcess (#13419)
* Break out several helpers from `postBlockProcess`

In addition fix a bug found by @terencechain where we should use a slot
context instead of the parent context in the second FCU call.

* Remove calls for tracked proposer

getPayloadAttribute already takes care of this
Also compute correctly the time into voting window

* call with attributes only when incoming block is canonical

* check for empty payload instead of only nil

* add unit tests

* move log for non-canonical block

* return early if the incoming block does not change head

* Pass fcuArgs as arguments

* lint
2024-01-06 02:29:07 +00:00
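
The slot-context fix mentioned above boils down to running deferred work under a fresh, slot-bounded context instead of the parent request context; a sketch (slotDeadline is a placeholder duration):

package sketch

import (
	"context"
	"time"
)

// withSlotContext returns a context bounded by the slot deadline rather than
// one that inherits cancellation from a short-lived parent context.
func withSlotContext(slotDeadline time.Duration) (context.Context, context.CancelFunc) {
	return context.WithTimeout(context.Background(), slotDeadline)
}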
terence
ff06e08274 Prune dangling blob (#13424)
* Prune dangling blob

* Fix test

* Kasey's feedback

* Preston's feedback

* Use warning, fix test
2024-01-05 22:29:57 +00:00
james-prysm
d3d25e3ae5 proposer and attester slashing sse (#13414)
* wip

* adding in event notifiers for slashing events

* fixing tests
2024-01-05 15:27:50 +00:00
Nishant Das
929e9ddf4c enable it (#13421) 2024-01-05 05:23:22 +00:00
Nishant Das
7c0e79d432 Make New Engine Methods The Permanent Default (#13406)
* make them the default

* gaz

* fix tests
2024-01-05 04:38:04 +00:00
terence
3c1c0b3c00 Update blob pruning log (#13417) 2024-01-04 18:02:19 +00:00
james-prysm
d439e6da74 adding builder boost factor to get block v3 (#13409)
* adding builder boost factor to functions

* gaz

* fixing linting

* fixing unit tests

* gaz

* addressing review comments

* fixing tests

* addressing review feedback

* gaz

* changing log based on review
2024-01-04 17:25:18 +00:00
Radosław Kapka
e68b2821c1 REST VC: Subscribe to Beacon API events (#13354)
* Initial code for head event streaming

* handle events and error

* keepalive event

* tests

* generate new mock

* remove single case select

* cleanup

* explain eventByteLimit

* use 2 channels in test

* review

* more review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-04 17:14:45 +00:00
Potuz
cfef8f4676 Don't hardcode 4 seconds in forkchoice (#13416) 2024-01-04 16:49:16 +00:00
terence
9709412511 Use Afero Walk for Pruning Blob (#13410)
* Use Afero walk

* Return err

* Use wrap

* More err to the end

* Fix loop
2024-01-04 16:41:00 +00:00
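
For reference, the walk used by this change follows the standard spf13/afero API; a sketch of walking a blob directory (not the actual pruner code):

package sketch

import (
	"os"

	"github.com/spf13/afero"
)

// walkBlobDir visits every file under root on an afero filesystem; a real
// pruner would decide here which expired sidecar files to remove.
func walkBlobDir(fs afero.Fs, root string) error {
	return afero.Walk(fs, root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() {
			return nil
		}
		_ = path
		return nil
	})
}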
367 changed files with 10838 additions and 4746 deletions


@@ -1 +1 @@
6.4.0
7.0.0


@@ -10,7 +10,7 @@
# Prysm specific remote-cache properties.
build:remote-cache --remote_download_minimal
build:remote-cache --experimental_remote_build_event_upload=minimal
build:remote-cache --remote_build_event_upload=minimal
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
# Does not work with rules_oci. See https://github.com/bazel-contrib/rules_oci/issues/292
#build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092


@@ -194,33 +194,6 @@ nogo(
config = ":nogo_config_with_excludes",
visibility = ["//visibility:public"],
deps = [
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/cryptorand:go_default_library",
"//tools/analyzers/errcheck:go_default_library",
@@ -236,6 +209,53 @@ nogo(
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/uintcast:go_default_library",
"@org_golang_x_tools//go/analysis/passes/appends:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
# cgocall disabled
#"@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/framepointer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpmux:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ifaceassert:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/reflectvaluecompare:go_default_library",
# shadow disabled
#"@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sigchanyzer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/slog:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sortslice:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stringintconv:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/testinggoroutine:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/timeformat:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedresult:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedwrite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/usesgenerics:go_default_library",
] + select({
# nogo checks that fail with coverage enabled.
":coverage_enabled": [],

MODULE.bazel (new file, 6 lines)

@@ -0,0 +1,6 @@
###############################################################################
# Bazel now uses Bzlmod by default to manage external dependencies.
# Please consider migrating your external dependencies from WORKSPACE to MODULE.bazel.
#
# For more details, please check https://github.com/bazelbuild/bazel/issues/18958
###############################################################################

MODULE.bazel.lock (generated new file, 1245 lines); diff suppressed because it is too large.


@@ -182,7 +182,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.21.5",
go_version = "1.21.6",
nogo = "@//:nogo",
)
@@ -349,17 +349,17 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "9f66d8d5644982d3d0d2e3d2b9ebe77a5f96638a5d7fcd715599c32818195cb3",
strip_prefix = "holesky-ea39b9006210848e13f28d92e12a30548cecd41d",
url = "https://github.com/eth-clients/holesky/archive/ea39b9006210848e13f28d92e12a30548cecd41d.tar.gz", # 2023-09-21
sha256 = "5f4be6fd088683ea9db45c863b9c5a1884422449e5b59fd2d561d3ba0f73ffd9",
strip_prefix = "holesky-9d9aabf2d4de51334ee5fed6c79a4d55097d1a43",
url = "https://github.com/eth-clients/holesky/archive/9d9aabf2d4de51334ee5fed6c79a4d55097d1a43.tar.gz", # 2024-01-22
)
http_archive(
name = "com_google_protobuf",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
strip_prefix = "protobuf-23.3",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v23.3.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
],
)


@@ -7,4 +7,6 @@ const (
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)
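
These constants relate to the event-stream changes listed above; one hypothetical way a client could use the media type when opening a Beacon API event stream (URL and topic are placeholders):

package sketch

import (
	"context"
	"net/http"
)

// openEventStream requests a text/event-stream response from the events endpoint.
func openEventStream(ctx context.Context, baseURL string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+"/eth/v1/events?topics=head", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "text/event-stream")
	return http.DefaultClient.Do(req)
}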


@@ -6,6 +6,7 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"defragment.go",
"error.go",
"execution_engine.go",
"forkchoice_update_execution.go",
@@ -50,6 +51,7 @@ go_library(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/filters:go_default_library",
@@ -141,6 +143,7 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",


@@ -6,7 +6,10 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -18,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -334,12 +336,21 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index primiti
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface.
func (s *Service) ForkChoicer() f.ForkChoicer {
return s.cfg.ForkChoiceStore
}
// IsOptimistic returns true if the current head is optimistic.
func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
if slots.ToEpoch(s.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
return false, nil
}
s.headLock.RLock()
if s.head == nil {
s.headLock.RUnlock()
return false, ErrNilHead
}
headRoot := s.head.root
headSlot := s.head.slot
headOptimistic := s.head.optimistic
@@ -545,3 +556,18 @@ func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.Slot(root)
}
// inRegularSync applies the following heuristic to decide whether the node is in
// regular sync mode vs init sync mode, using only forkchoice.
// It checks that the highest received block is behind the current time by at least 2 epochs
// and that it was imported at least one epoch late; if both of these
// tests pass, then the node is in init sync. The caller of this function MUST
// hold a lock on forkchoice.
func (s *Service) inRegularSync() bool {
currentSlot := s.CurrentSlot()
fc := s.cfg.ForkChoiceStore
if currentSlot-fc.HighestReceivedBlockSlot() < 2*params.BeaconConfig().SlotsPerEpoch {
return true
}
return fc.HighestReceivedBlockDelay() < params.BeaconConfig().SlotsPerEpoch
}


@@ -429,6 +429,11 @@ func TestService_IsOptimistic(t *testing.T) {
opt, err = c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
// If head is nil, for some reason, an error should be returned rather than panic.
c = &Service{}
_, err = c.IsOptimistic(ctx)
require.ErrorIs(t, err, ErrNilHead)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
@@ -588,3 +593,26 @@ func TestService_IsFinalized(t *testing.T) {
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}
func TestService_inRegularSync(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
st, blkRoot, err = prepareForkchoiceState(ctx, 128, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-5*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
require.Equal(t, true, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
c.cfg.ForkChoiceStore.SetGenesisTime(uint64(time.Now().Unix()))
require.Equal(t, true, c.inRegularSync())
}


@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/time"
)
var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "head_state_defragmentation_milliseconds",
Help: "Milliseconds it takes to defragment the head state",
})
// This method defragments our state, so that any specific fields which have
// a higher number of fragmented indexes are reallocated to a new separate slice for
// that field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)
stateDefragmentationTime.Observe(float64(elapsedTime.Milliseconds()))
}


@@ -28,6 +28,8 @@ var (
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")


@@ -387,9 +387,9 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
// This is an irreparable condition: it would mean a justified or finalized block has become invalid.
return err
}
// No op if the sidecar does not exist.
if err := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err != nil {
return err
if err := s.blobStorage.Remove(root); err != nil {
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
log.WithError(err).Debug("Could not remove blob from blob storage")
}
}
return nil


@@ -151,8 +151,10 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
}})
@@ -494,8 +496,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
@@ -597,8 +601,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
@@ -1096,3 +1102,35 @@ func TestKZGCommitmentToVersionedHashes(t *testing.T) {
require.Equal(t, vhs[0].String(), vh0)
require.Equal(t, vhs[1].String(), vh1)
}
func TestComputePayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
cfg := &postBlockProcessConfig{
ctx: ctx,
blockRoot: [32]byte{'a'},
}
fcu := &fcuConfig{
headState: st,
proposingSlot: slot,
headRoot: [32]byte{},
}
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()).String())
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()))
}


@@ -52,6 +52,41 @@ type fcuConfig struct {
attributes payloadattribute.Attributer
}
// sendFCU handles the logic to notify the engine of a forkchoice update
// for the first time when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions when the
// incoming block is late, preparing payload attributes in this case while it
// only sends a message with empty attributes for early blocks.
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if !s.isNewHead(cfg.headRoot) {
return nil
}
if fcuArgs.attributes != nil && !fcuArgs.attributes.IsEmpty() && s.shouldOverrideFCU(cfg.headRoot, s.CurrentSlot()+1) {
return nil
}
return s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
// sendFCUWithAttributes computes the payload attributes and sends an FCU message
// to the engine if needed
func (s *Service) sendFCUWithAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
cfg.ctx = slotCtx
if err := s.computePayloadAttributes(cfg, fcuArgs); err != nil {
log.WithError(err).Error("could not compute payload attributes")
return
}
if fcuArgs.attributes.IsEmpty() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if _, err := s.notifyForkchoiceUpdate(cfg.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice with payload attributes for proposal")
}
}
// forkchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) error {
_, span := trace.StartSpan(ctx, "beacon-chain.blockchain.forkchoiceUpdateWithExecution")


@@ -11,7 +11,6 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//consensus-types/blocks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
@@ -25,7 +24,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",


@@ -1,42 +1,32 @@
package kzg
import (
"fmt"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// IsDataAvailable checks that
// - all blobs in the block are available
// - Expected KZG commitments match the number of blobs in the block
// - That the number of proofs match the number of blobs
// - That the proofs are verified against the KZG commitments
func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.DeprecatedBlobSidecar) error {
if len(commitments) != len(sidecars) {
return fmt.Errorf("could not check data availability, expected %d commitments, obtained %d",
len(commitments), len(sidecars))
}
if len(commitments) == 0 {
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
blobs := make([]GoKZG.Blob, len(commitments))
proofs := make([]GoKZG.KZGProof, len(commitments))
cmts := make([]GoKZG.KZGCommitment, len(commitments))
if len(sidecars) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(sidecars[0].Blob),
bytesToCommitment(sidecars[0].KzgCommitment),
bytesToKZGProof(sidecars[0].KzgProof))
}
blobs := make([]GoKZG.Blob, len(sidecars))
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs[i] = bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
cmts[i] = bytesToCommitment(commitments[i])
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
// VerifyROBlobCommitment is a helper that massages the fields of an ROBlob into the types needed to call VerifyBlobKZGProof.
func VerifyROBlobCommitment(sc blocks.ROBlob) error {
return kzgContext.VerifyBlobKZGProof(bytesToBlob(sc.Blob), bytesToCommitment(sc.KzgCommitment), bytesToKZGProof(sc.KzgProof))
}
func bytesToBlob(blob []byte) (ret GoKZG.Blob) {
copy(ret[:], blob)
return


@@ -8,7 +8,7 @@ import (
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/sirupsen/logrus"
)
@@ -58,10 +58,9 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
return commitment, proof, err
}
func TestIsDataAvailable(t *testing.T) {
sidecars := make([]*ethpb.DeprecatedBlobSidecar, 0)
commitments := make([][]byte, 0)
require.NoError(t, IsDataAvailable(commitments, sidecars))
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
}
func TestBytesToAny(t *testing.T) {


@@ -358,6 +358,7 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
for name, val := range refMap {
stateTrieReferences.WithLabelValues(name).Set(float64(val))
}
postState.RecordStateMetrics()
return nil
}


@@ -6,12 +6,15 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -28,8 +31,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.
@@ -41,149 +42,71 @@ const depositDeadline = 20 * time.Second
// This defines size of the upper bound for initial sync block cache.
var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
// postBlockProcessConfig is a structure that contains the data needed to
// process the beacon block after validating the state transition function
type postBlockProcessConfig struct {
ctx context.Context
signed interfaces.ReadOnlySignedBeaconBlock
blockRoot [32]byte
headRoot [32]byte
postState state.BeaconState
isValidPayload bool
}
// postBlockProcess is called when a gossip block is received. This function performs
// several duties, most importantly informing the engine if the head was updated,
// saving the new head information to the blockchain package and
// handling attestations, slashings and similar included in the block.
func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, postState state.BeaconState, isValidPayload bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
ctx, span := trace.StartSpan(cfg.ctx, "blockChain.onBlock")
defer span.End()
if err := consensusblocks.BeaconBlockIsNil(signed); err != nil {
cfg.ctx = ctx
if err := consensusblocks.BeaconBlockIsNil(cfg.signed); err != nil {
return invalidBlock{error: err}
}
startTime := time.Now()
b := signed.Block()
fcuArgs := &fcuConfig{}
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, postState, blockRoot); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
if err := s.handleBlockAttestations(ctx, signed.Block(), postState); err != nil {
defer s.sendLightClientFeeds(cfg)
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.signed.Block())
err := s.cfg.ForkChoiceStore.InsertNode(ctx, cfg.postState, cfg.blockRoot)
if err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", cfg.signed.Block().Slot())
}
if err := s.handleBlockAttestations(ctx, cfg.signed.Block(), cfg.postState); err != nil {
return errors.Wrap(err, "could not handle block's attestations")
}
s.InsertSlashingsToForkChoiceStore(ctx, signed.Block().Body().AttesterSlashings())
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
s.InsertSlashingsToForkChoiceStore(ctx, cfg.signed.Block().Body().AttesterSlashings())
if cfg.isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, cfg.blockRoot); err != nil {
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
start := time.Now()
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx)
cfg.headRoot, err = s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
log.WithError(err).Warn("Could not update head")
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
proposingSlot := s.CurrentSlot() + 1
var fcuArgs *fcuConfig
if blockRoot != headRoot {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("could not determine node weight")
}
headWeight, err := s.cfg.ForkChoiceStore.Weight(headRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("could not determine node weight")
}
log.WithFields(logrus.Fields{
"receivedRoot": fmt.Sprintf("%#x", blockRoot),
"receivedWeight": receivedWeight,
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
headState, headBlock, err := s.getStateAndBlock(ctx, headRoot)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
fcuArgs = &fcuConfig{
headState: headState,
headBlock: headBlock,
headRoot: headRoot,
proposingSlot: proposingSlot,
}
} else {
fcuArgs = &fcuConfig{
headState: postState,
headBlock: signed,
headRoot: headRoot,
proposingSlot: proposingSlot,
}
if cfg.headRoot != cfg.blockRoot {
s.logNonCanonicalBlockReceived(cfg.blockRoot, cfg.headRoot)
return nil
}
isEarly := slots.WithinVotingWindow(uint64(s.genesisTime.Unix()))
shouldOverrideFCU := false
slot := postState.Slot()
if s.isNewHead(headRoot) {
// if the block is early send FCU without any payload attributes
if isEarly {
if err := s.forkchoiceUpdateWithExecution(ctx, fcuArgs); err != nil {
return err
}
} else {
// if the block is late lock and update the caches
if blockRoot == headRoot {
if err := transition.UpdateNextSlotCache(ctx, blockRoot[:], postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if slots.IsEpochEnd(slot) {
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
return errors.Wrap(err, "could not handle epoch boundary")
}
}
}
_, tracked := s.trackedProposer(fcuArgs.headState, proposingSlot)
if tracked {
shouldOverrideFCU = s.shouldOverrideFCU(headRoot, proposingSlot)
fcuArgs.attributes = s.getPayloadAttribute(ctx, fcuArgs.headState, proposingSlot, headRoot[:])
}
if !shouldOverrideFCU {
if err := s.forkchoiceUpdateWithExecution(ctx, fcuArgs); err != nil {
return err
}
}
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: signed.Block().Slot(),
BlockRoot: blockRoot,
SignedBlock: signed,
Verified: true,
Optimistic: optimistic,
},
})
if blockRoot == headRoot && isEarly {
go func() {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Error("could not update next slot state cache")
}
if slots.IsEpochEnd(slot) {
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
log.WithError(err).Error("could not handle epoch boundary")
}
}
if _, tracked := s.trackedProposer(fcuArgs.headState, proposingSlot); !tracked {
return
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, fcuArgs.headState, proposingSlot, headRoot[:])
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if _, err := s.notifyForkchoiceUpdate(ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice with payload attributes for proposal")
}
}()
}
defer reportAttestationInclusion(b)
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
@@ -205,7 +128,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -308,8 +231,8 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
}
if err := s.databaseDACheck(ctx, b); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
JustifiedCheckpoint: jCheckpoints[i],
@@ -376,37 +299,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func commitmentsToCheck(b consensusblocks.ROBlock, current primitives.Slot) [][]byte {
if b.Version() < version.Deneb {
return nil
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return nil
}
kzgCommitments, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil
}
return kzgCommitments
}
func (s *Service) databaseDACheck(ctx context.Context, b consensusblocks.ROBlock) error {
commitments := commitmentsToCheck(b, s.CurrentSlot())
if len(commitments) == 0 {
return nil
}
missing, err := missingIndices(s.blobStorage, b.Root(), commitments)
if err != nil {
return err
}
if len(missing) == 0 {
return nil
}
// TODO: don't worry that this error isn't informative, it will be superseded by a detailed sidecar cache error.
return errors.New("not all kzg commitments are available")
}
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -690,10 +582,15 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if s.CurrentSlot() == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -708,23 +605,22 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
// handleEpochBoundary requires a forkchoice lock to obtain the target root.
s.cfg.ForkChoiceStore.RLock()
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
s.cfg.ForkChoiceStore.RUnlock()
_, tracked := s.trackedProposer(headState, s.CurrentSlot()+1)
// return early if we are not proposing next slot.
if !tracked {
return
}
// return early if we already started building a block for the current
// head root
_, has := s.cfg.PayloadIDCache.PayloadID(s.CurrentSlot()+1, headRoot)
if has {
return
}
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
@@ -733,15 +629,14 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
return
}
s.headLock.RUnlock()
s.cfg.ForkChoiceStore.RLock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
s.cfg.ForkChoiceStore.RUnlock()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}


@@ -3,21 +3,28 @@ package blockchain
import (
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -25,6 +32,252 @@ func (s *Service) CurrentSlot() primitives.Slot {
return slots.CurrentSlot(uint64(s.genesisTime.Unix()))
}
// getFCUArgs returns the arguments to call forkchoice update
func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.signed.Block().Slot()
if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
return nil
}
return s.computePayloadAttributes(cfg, fcuArgs)
}
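As a rough sketch of the voting-window guard above (the helper signatures are the ones used in this file; the timing interpretation is an assumption), a block whose slot is still inside the attestation voting window leaves the payload attributes empty and defers that work to the second FCU call:
// Illustrative sketch only, not part of the diff: returns true when attribute
// computation should be deferred to handleSecondFCUCall.
func shouldDeferAttributes(genesisUnix uint64, blockSlot primitives.Slot) bool {
	// Inside the voting window -> keep fcuArgs.attributes empty for now.
	return slots.WithinVotingWindow(genesisUnix, blockSlot)
}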
func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if cfg.blockRoot == cfg.headRoot {
fcuArgs.headState = cfg.postState
fcuArgs.headBlock = cfg.signed
fcuArgs.headRoot = cfg.headRoot
fcuArgs.proposingSlot = s.CurrentSlot() + 1
return nil
}
return s.fcuArgsNonCanonicalBlock(cfg, fcuArgs)
}
// logNonCanonicalBlockReceived prints a message informing that the received
// block is not the head of the chain. It requires that the caller holds a lock
// on forkchoice.
func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]byte) {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("could not determine node weight")
}
headWeight, err := s.cfg.ForkChoiceStore.Weight(headRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("could not determine node weight")
}
log.WithFields(logrus.Fields{
"receivedRoot": fmt.Sprintf("%#x", blockRoot),
"receivedWeight": receivedWeight,
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
}
// fcuArgsNonCanonicalBlock returns the arguments to the FCU call when the
// incoming block is not canonical; in that case the arguments are based on the
// current head root rather than the received block.
func (s *Service) fcuArgsNonCanonicalBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
headState, headBlock, err := s.getStateAndBlock(cfg.ctx, cfg.headRoot)
if err != nil {
return err
}
fcuArgs.headState = headState
fcuArgs.headBlock = headBlock
fcuArgs.headRoot = cfg.headRoot
fcuArgs.proposingSlot = s.CurrentSlot() + 1
return nil
}
// sendStateFeedOnBlock sends an event that a new block has been synced
func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(cfg.blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: cfg.signed.Block().Slot(),
BlockRoot: cfg.blockRoot,
SignedBlock: cfg.signed,
Verified: true,
Optimistic: optimistic,
},
})
}
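For context, a hedged consumer-side sketch of the BlockProcessed event sent above; the Subscribe call on the feed is an assumption modelled on the event-feed pattern used elsewhere, not something introduced by this diff:
// Assumed subscriber sketch: react to BlockProcessed notifications.
ch := make(chan *feed.Event, 1)
sub := notifier.StateFeed().Subscribe(ch) // notifier is an assumed statefeed.Notifier
defer sub.Unsubscribe()
for ev := range ch {
	if ev.Type != statefeed.BlockProcessed {
		continue
	}
	if data, ok := ev.Data.(*statefeed.BlockProcessedData); ok {
		log.WithField("slot", data.Slot).Debug("Observed processed block")
	}
}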
// sendLightClientFeeds sends the light client feeds when feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.signed, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.signed, finalized, cfg.postState)
}
}
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
if err != nil {
log.WithError(err).Error("Failed to send light client finality update")
} else {
s.lastPublishedLightClientEpoch = finalized.Epoch
}
}
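The supermajority check above is plain integer arithmetic; with a sync committee size of 512 (mainnet value, assumed here only for illustration) the update is published once at least 342 bits are set:
// Worked example of the 2/3 threshold, illustration only.
size := uint64(512)
for _, bits := range []uint64{341, 342} {
	publish := bits*3 >= size*2 // 1023 < 1024 fails, 1026 >= 1024 passes
	fmt.Printf("bits=%d publish=%t\n", bits, publish)
}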
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
finalizedRoot := bytesutil.ToBytes32(finalizedCheckPoint.Root)
finalizedBlock, err = s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
finalizedBlock = nil
}
}
update, err := NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
finalizedBlock,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientFinalityUpdate(update),
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: result,
}), nil
}
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientOptimisticUpdate(update),
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing the
// state transition. It is called for late blocks while the forkchoice lock is
// still held, before sending FCU to the engine.
func (s *Service) updateCachesPostBlockProcessing(cfg *postBlockProcessConfig) error {
slot := cfg.postState.Slot()
if err := transition.UpdateNextSlotCache(cfg.ctx, cfg.blockRoot[:], cfg.postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if !slots.IsEpochEnd(slot) {
return nil
}
return s.handleEpochBoundary(cfg.ctx, slot, cfg.postState, cfg.blockRoot[:])
}
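The IsEpochEnd guard means the epoch-boundary work above only runs on the last slot of an epoch; with mainnet's 32 slots per epoch (assumed for this illustration) that looks like:
// Illustration only: slots 31 and 63 are epoch ends, 30 and 32 are not.
for _, s := range []primitives.Slot{30, 31, 32, 63} {
	fmt.Println(s, slots.IsEpochEnd(s))
}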
// handleSecondFCUCall handles a second call to FCU when syncing a new block.
// This is useful when we are proposing in the next slot and want to defer the
// computation of the next slot shuffling.
func (s *Service) handleSecondFCUCall(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
if (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) && cfg.headRoot == cfg.blockRoot {
go s.sendFCUWithAttributes(cfg, fcuArgs)
}
}
// reportProcessingTime reports the metric of how long it took to process the
// current block
func reportProcessingTime(startTime time.Time) {
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
}
// computePayloadAttributes modifies the passed FCU arguments to
// contain the right payload attributes for the tracked proposer. It gets
// called for blocks that arrive after the attestation voting window, or in a
// background routine after syncing early blocks.
func (s *Service) computePayloadAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if cfg.blockRoot == cfg.headRoot {
if err := s.updateCachesPostBlockProcessing(cfg); err != nil {
return err
}
}
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and the incoming block
// is in the correct time window.

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"math/big"
@@ -17,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
@@ -40,7 +40,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -69,7 +68,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
require.NoError(t, err)
blks = append(blks, rwsb)
}
err := service.onBlockBatch(ctx, blks)
err := service.onBlockBatch(ctx, blks, &das.MockAvailabilityStore{})
require.NoError(t, err)
jcp := service.CurrentJustifiedCheckpt()
jroot := bytesutil.ToBytes32(jcp.Root)
@@ -99,7 +98,7 @@ func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.saveInitSyncBlock(ctx, rwsb.Root(), wsb))
blks = append(blks, rwsb)
}
require.NoError(t, service.onBlockBatch(ctx, blks))
require.NoError(t, service.onBlockBatch(ctx, blks, &das.MockAvailabilityStore{}))
}
func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
@@ -567,7 +566,7 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, r, [32]byte{}, postState, true}))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -615,7 +614,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, r, [32]byte{}, postState, true}))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -641,7 +640,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
func TestOnBlock_NilBlock(t *testing.T) {
service, tr := minimalTestService(t)
err := service.postBlockProcess(tr.ctx, nil, [32]byte{}, nil, true)
err := service.postBlockProcess(&postBlockProcessConfig{tr.ctx, nil, [32]byte{}, [32]byte{}, nil, true})
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -689,7 +688,7 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, r, [32]byte{}, postState, false}))
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -924,8 +923,10 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
{
@@ -969,6 +970,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
@@ -1111,7 +1113,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb1)
require.NoError(t, err)
lock.Lock()
require.NoError(t, service.postBlockProcess(ctx, wsb1, r1, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb1, r1, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
@@ -1121,7 +1123,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb2)
require.NoError(t, err)
lock.Lock()
require.NoError(t, service.postBlockProcess(ctx, wsb2, r2, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb2, r2, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
@@ -1131,7 +1133,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb3)
require.NoError(t, err)
lock.Lock()
require.NoError(t, service.postBlockProcess(ctx, wsb3, r3, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb3, r3, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
@@ -1141,7 +1143,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb4)
require.NoError(t, err)
lock.Lock()
require.NoError(t, service.postBlockProcess(ctx, wsb4, r4, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb4, r4, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
@@ -1216,7 +1218,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1234,7 +1236,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1253,7 +1255,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1275,7 +1277,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, firstInvalidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1303,7 +1305,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head is the last invalid block imported. The
// store's headroot is the previous head (since the invalid block did
@@ -1332,7 +1334,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
// Check the newly imported block is head, it justified the right
// checkpoint and the node is no longer optimistic
@@ -1394,7 +1396,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1412,7 +1414,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1432,7 +1434,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1454,7 +1456,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, firstInvalidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1510,7 +1512,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
// Check the newly imported block is head, it justified the right
// checkpoint and the node is no longer optimistic
@@ -1574,7 +1576,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1593,7 +1595,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1612,7 +1614,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, lastValidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1639,7 +1641,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
err = service.postBlockProcess(ctx, wsb, invalidRoots[i-13], postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, invalidRoots[i-13], [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we have justified the second epoch
@@ -1704,7 +1706,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true}))
// Check that the head is still INVALID and the node is still optimistic
require.Equal(t, invalidHeadRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
optimistic, err = service.IsOptimistic(ctx)
@@ -1727,7 +1729,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
@@ -1753,7 +1755,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
@@ -1809,7 +1811,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1827,7 +1829,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1846,7 +1848,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, lastValidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1875,7 +1877,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -1946,7 +1948,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
rwsb, err := consensusblocks.NewROBlock(wsb)
require.NoError(t, err)
// We use onBlockBatch here because the valid chain is missing in forkchoice
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}))
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}, &das.MockAvailabilityStore{}))
// Check that the head is now VALID and the node is not optimistic
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.cfg.ForkChoiceStore.CachedHeadRoot()))
headRoot, err = service.HeadRoot(ctx)
@@ -1990,7 +1992,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
@@ -2049,74 +2051,6 @@ func driftGenesisTime(s *Service, slot, delay int64) {
s.SetGenesisTime(time.Unix(time.Now().Unix()-offset, 0))
}
func Test_commitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) consensusblocks.ROBlock
slot primitives.Slot
}{
{
name: "pre deneb",
block: func(t *testing.T) consensusblocks.ROBlock {
bb := util.NewBeaconBlockBellatrix()
sb, err := consensusblocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
},
{
name: "commitments within da",
block: func(t *testing.T) consensusblocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := consensusblocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) consensusblocks.ROBlock {
d := util.NewBeaconBlockDeneb()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := consensusblocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co := commitmentsToCheck(b, c.slot)
require.Equal(t, len(c.commits), len(co))
for i := 0; i < len(c.commits); i++ {
require.Equal(t, true, bytes.Equal(c.commits[i], co[i]))
}
})
}
}
func TestMissingIndices(t *testing.T) {
cases := []struct {
name string
@@ -2197,6 +2131,35 @@ func TestMissingIndices(t *testing.T) {
}
}
func Test_getFCUArgs(t *testing.T) {
s, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
signed: wsb,
blockRoot: [32]byte{'a'},
postState: st,
isValidPayload: true,
}
// error branch
fcuArgs := &fcuConfig{}
err = s.getFCUArgs(cfg, fcuArgs)
require.ErrorContains(t, "block does not exist", err)
// canonical branch
cfg.headRoot = cfg.blockRoot
fcuArgs = &fcuConfig{}
err = s.getFCUArgs(cfg, fcuArgs)
require.NoError(t, err)
require.Equal(t, cfg.blockRoot, fcuArgs.headRoot)
}
func fakeCommitments(n int) [][]byte {
f := make([][]byte, n)
for i := range f {

View File

@@ -150,13 +150,12 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
headBlock: headBlock,
proposingSlot: proposingSlot,
}
_, tracked := s.trackedProposer(headState, proposingSlot)
if tracked {
if s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice")
}

View File

@@ -112,7 +112,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, tRoot, [32]byte{}, postState, false}))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
@@ -168,7 +168,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, tRoot, [32]byte{}, postState, false}))
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -33,8 +34,8 @@ var epochsSinceFinalitySaveHotStateDB = primitives.Epoch(100)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock) error
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -56,7 +57,7 @@ type SlashingReceiver interface {
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error {
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
// Return early if the block has been synced
@@ -72,6 +73,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
@@ -106,20 +111,35 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := eg.Wait(); err != nil {
return err
}
daStartTime := time.Now()
if err := s.isDataAvailable(ctx, blockRoot, blockCopy); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
if avs != nil {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, blockCopy); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
}
}
daWaitedTime := time.Since(daStartTime)
// Defragment the state before continuing block processing.
s.defragmentState(postState)
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.savePostStateInfo(ctx, blockRoot, blockCopy, postState); err != nil {
return errors.Wrap(err, "could not save post state info")
}
if err := s.postBlockProcess(ctx, blockCopy, blockRoot, postState, isValidPayload); err != nil {
args := &postBlockProcessConfig{
ctx: ctx,
signed: blockCopy,
blockRoot: blockRoot,
postState: postState,
isValidPayload: isValidPayload,
}
if err := s.postBlockProcess(args); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return err
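A minimal caller sketch for the new signature (chainService, signedBlock and blockRoot are placeholder names; the mock store is the one used in the updated tests): passing an AvailabilityStore takes the new path, while nil keeps the previous isDataAvailable behaviour.
// Hedged caller sketch, not part of the diff.
if err := chainService.ReceiveBlock(ctx, signedBlock, blockRoot, &das.MockAvailabilityStore{}); err != nil {
	log.WithError(err).Error("could not receive block")
}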
@@ -196,7 +216,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()
@@ -204,7 +224,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the incoming newly received block batches, one by one.
if err := s.onBlockBatch(ctx, blocks); err != nil {
if err := s.onBlockBatch(ctx, blocks, avs); err != nil {
err := errors.Wrap(err, "could not process block in batch")
tracing.AnnotateError(span, err)
return err

View File

@@ -8,6 +8,7 @@ import (
blockchainTesting "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -146,7 +147,7 @@ func TestService_ReceiveBlock(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(tt.args.block)
require.NoError(t, err)
err = s.ReceiveBlock(ctx, wsb, root)
err = s.ReceiveBlock(ctx, wsb, root, nil)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {
@@ -179,7 +180,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
go func() {
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.ReceiveBlock(ctx, wsb, root))
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
wg.Done()
}()
wg.Wait()
@@ -243,7 +244,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
require.NoError(t, err)
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb})
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb}, &das.MockAvailabilityStore{})
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {

View File

@@ -11,6 +11,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/async/event"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
@@ -37,34 +39,35 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
}
// config options for the service.

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
@@ -116,6 +117,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithBLSToExecPool(req.blsPool),
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -17,6 +17,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/state:go_default_library",

View File

@@ -16,6 +16,7 @@ import (
opfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -208,7 +209,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityStore) error {
if s.State == nil {
return ErrNilState
}
@@ -238,7 +239,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte) error {
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityStore) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}
@@ -320,7 +321,7 @@ func (s *ChainService) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
func (_ *ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestation) error {
func (*ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestation) error {
return nil
}
@@ -400,12 +401,12 @@ func (s *ChainService) RecentBlockSlot([32]byte) (primitives.Slot, error) {
}
// HeadGenesisValidatorsRoot mocks HeadGenesisValidatorsRoot method in chain service.
func (_ *ChainService) HeadGenesisValidatorsRoot() [32]byte {
func (*ChainService) HeadGenesisValidatorsRoot() [32]byte {
return [32]byte{}
}
// VerifyLmdFfgConsistency mocks VerifyLmdFfgConsistency and always returns nil.
func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
func (*ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
if !bytes.Equal(a.Data.BeaconBlockRoot, a.Data.Target.Root) {
return errors.New("LMD and FFG miss matched")
}
@@ -413,7 +414,7 @@ func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attes
}
// ChainHeads mocks ChainHeads and always return nil.
func (_ *ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
func (*ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
return [][32]byte{
bytesutil.ToBytes32(bytesutil.PadTo([]byte("foo"), 32)),
bytesutil.ToBytes32(bytesutil.PadTo([]byte("bar"), 32)),
@@ -422,7 +423,7 @@ func (_ *ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
}
// HeadPublicKeyToValidatorIndex mocks HeadPublicKeyToValidatorIndex and always return 0 and true.
func (_ *ChainService) HeadPublicKeyToValidatorIndex(_ [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
func (*ChainService) HeadPublicKeyToValidatorIndex(_ [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
return 0, true
}
@@ -486,7 +487,7 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
}
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
func (*ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
// IsFinalized mocks the same method in the chain service.
func (s *ChainService) IsFinalized(_ context.Context, blockRoot [32]byte) bool {
@@ -599,12 +600,12 @@ func (s *ChainService) ProposerBoost() [32]byte {
}
// FinalizedBlockHash mocks the same method in the chain service
func (s *ChainService) FinalizedBlockHash() [32]byte {
func (*ChainService) FinalizedBlockHash() [32]byte {
return [32]byte{}
}
// UnrealizedJustifiedPayloadBlockHash mocks the same method in the chain service
func (s *ChainService) UnrealizedJustifiedPayloadBlockHash() [32]byte {
func (*ChainService) UnrealizedJustifiedPayloadBlockHash() [32]byte {
return [32]byte{}
}

View File

@@ -15,11 +15,12 @@ import (
// AttDelta contains rewards and penalties for a single attestation.
type AttDelta struct {
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
InactivityPenalty uint64
}
// InitializePrecomputeValidators precomputes individual validator for its attested balances and the total sum of validators attested balances of the epoch.
@@ -251,7 +252,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
if err != nil {
return nil, err
}
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty)
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty+delta.InactivityPenalty)
vals[i].AfterEpochTransitionBalance = balances[i]
}
@@ -351,7 +352,7 @@ func attestationDelta(
if err != nil {
return &AttDelta{}, err
}
attDelta.TargetPenalty += n / inactivityDenominator
attDelta.InactivityPenalty = n / inactivityDenominator
}
return attDelta, nil
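In other words, the balance deduction applied during the epoch transition now sums all three penalty fields; a quick illustration with made-up values:
// Made-up numbers, illustration only.
d := &AttDelta{SourcePenalty: 10, TargetPenalty: 20, InactivityPenalty: 5}
total := d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty // 35, the amount
// that DecreaseBalanceWithVal now subtracts in ProcessRewardsAndPenaltiesPrecompute.
_ = total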

View File

@@ -220,7 +220,7 @@ func TestAttestationsDelta(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
// Reward amount should increase as validator index increases due to setup.
@@ -258,7 +258,7 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
// Reward amount should increase as validator index increases due to setup.
@@ -306,7 +306,7 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
for i := range rewards {
wanted[i] += rewards[i]

View File

@@ -105,6 +105,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -136,6 +137,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -168,6 +170,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),

View File

@@ -853,10 +853,10 @@ func emptyPayloadHeader() (interfaces.ExecutionData, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
})
}
@@ -868,11 +868,11 @@ func emptyPayloadHeaderCapella() (interfaces.ExecutionData, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
}, 0)
}
@@ -884,10 +884,10 @@ func emptyPayload() *enginev1.ExecutionPayload {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
}
}
@@ -899,10 +899,10 @@ func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
}
}

View File

@@ -84,6 +84,7 @@ func TestUpgradeToCapella(t *testing.T) {
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,

View File

@@ -57,6 +57,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
historicalRoots, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
@@ -70,7 +74,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: [][]byte{},
HistoricalRoots: historicalRoots,
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
@@ -101,10 +105,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
ExcessBlobGas: 0,
BlobGasUsed: 0,
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: 0,
BlobGasUsed: 0,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,

View File

@@ -14,6 +14,7 @@ import (
func TestUpgradeToDeneb(t *testing.T) {
st, _ := util.DeterministicGenesisStateCapella(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
preForkState := st.Copy()
mSt, err := deneb.UpgradeToDeneb(st)
require.NoError(t, err)
@@ -46,6 +47,12 @@ func TestUpgradeToDeneb(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
@@ -85,6 +92,7 @@ func TestUpgradeToDeneb(t *testing.T) {
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,

View File

@@ -79,6 +79,7 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),

View File

@@ -79,6 +79,7 @@ func TestUpgradeToBellatrix(t *testing.T) {
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),

View File

@@ -26,6 +26,12 @@ const (
// BlobSidecarReceived is sent after a blob sidecar is received from gossip or rpc.
BlobSidecarReceived = 6
// ProposerSlashingReceived is sent after a proposer slashing is received from gossip or rpc
ProposerSlashingReceived = 7
// AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
AttesterSlashingReceived = 8
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -61,3 +67,13 @@ type BLSToExecutionChangeReceivedData struct {
type BlobSidecarReceivedData struct {
Blob *blocks.VerifiedROBlob
}
// ProposerSlashingReceivedData is the data sent with ProposerSlashingReceived events.
type ProposerSlashingReceivedData struct {
ProposerSlashing *ethpb.ProposerSlashing
}
// AttesterSlashingReceivedData is the data sent with AttesterSlashingReceived events.
type AttesterSlashingReceivedData struct {
AttesterSlashing *ethpb.AttesterSlashing
}
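A hedged sketch of publishing one of the new events; OperationNotifier and OperationFeed are assumptions modelled on the state feed pattern shown in the blockchain diffs, and opfeed is the import alias used in the mock changes above:
// Assumed publisher sketch, not part of the diff.
s.cfg.OperationNotifier.OperationFeed().Send(&feed.Event{
	Type: opfeed.AttesterSlashingReceived,
	Data: &opfeed.AttesterSlashingReceivedData{AttesterSlashing: slashing},
})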

View File

@@ -27,6 +27,10 @@ const (
NewHead
// MissedSlot is sent when we need to notify users that a slot was missed.
MissedSlot
// LightClientFinalityUpdate event
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
)
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -50,7 +50,6 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"main_test.go",
"randao_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",
@@ -64,6 +63,7 @@ go_test(
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -20,6 +20,8 @@ import (
func TestAttestation_IsAggregator(t *testing.T) {
t.Run("aggregator", func(t *testing.T) {
helpers.ClearCache()
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, 0, 0)
require.NoError(t, err)
@@ -30,6 +32,8 @@ func TestAttestation_IsAggregator(t *testing.T) {
})
t.Run("not aggregator", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
beaconState, privKeys := util.DeterministicGenesisState(t, 2048)
@@ -44,6 +48,8 @@ func TestAttestation_IsAggregator(t *testing.T) {
}
func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
helpers.ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
@@ -204,6 +210,8 @@ func Test_ValidateAttestationTime(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
err := helpers.ValidateAttestationTime(tt.args.attSlot, tt.args.genesisTime,
params.BeaconConfig().MaximumGossipClockDisparityDuration())
if tt.wantedErr != "" {
@@ -216,6 +224,8 @@ func Test_ValidateAttestationTime(t *testing.T) {
}
func TestVerifyCheckpointEpoch_Ok(t *testing.T) {
helpers.ClearCache()
// Genesis was 6 epochs ago exactly.
offset := params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot * 6)
genesis := time.Now().Add(-1 * time.Second * time.Duration(offset))
@@ -285,6 +295,8 @@ func TestValidateNilAttestation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
if tt.errString != "" {
require.ErrorContains(t, tt.errString, helpers.ValidateNilAttestation(tt.attestation))
} else {
@@ -326,6 +338,8 @@ func TestValidateSlotTargetEpoch(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
if tt.errString != "" {
require.ErrorContains(t, tt.errString, helpers.ValidateSlotTargetEpoch(tt.attestation.Data))
} else {

View File

@@ -21,6 +21,8 @@ import (
)
func TestComputeCommittee_WithoutCache(t *testing.T) {
ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
@@ -71,6 +73,8 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
}
func TestComputeCommittee_RegressionTest(t *testing.T) {
ClearCache()
indices := []primitives.ValidatorIndex{1, 3, 8, 16, 18, 19, 20, 23, 30, 35, 43, 46, 47, 54, 56, 58, 69, 70, 71, 83, 84, 85, 91, 96, 100, 103, 105, 106, 112, 121, 127, 128, 129, 140, 142, 144, 146, 147, 149, 152, 153, 154, 157, 160, 173, 175, 180, 182, 188, 189, 191, 194, 201, 204, 217, 221, 226, 228, 230, 231, 239, 241, 249, 250, 255}
seed := [32]byte{68, 110, 161, 250, 98, 230, 161, 172, 227, 226, 99, 11, 138, 124, 201, 134, 38, 197, 0, 120, 6, 165, 122, 34, 19, 216, 43, 226, 210, 114, 165, 183}
index := uint64(215)
@@ -80,6 +84,8 @@ func TestComputeCommittee_RegressionTest(t *testing.T) {
}
func TestVerifyBitfieldLength_OK(t *testing.T) {
ClearCache()
bf := bitfield.Bitlist{0xFF, 0x01}
committeeSize := uint64(8)
assert.NoError(t, VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
@@ -91,7 +97,7 @@ func TestVerifyBitfieldLength_OK(t *testing.T) {
func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
ClearCache()
defer ClearCache()
epoch := primitives.Epoch(1)
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 0, // Epoch 0.
@@ -103,7 +109,7 @@ func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
func TestCommitteeAssignments_NoProposerForSlot0(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
var activationEpoch primitives.Epoch
@@ -190,10 +196,10 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
},
}
defer ClearCache()
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
ClearCache()
validatorIndexToCommittee, proposerIndexToSlots, err := CommitteeAssignments(context.Background(), state, slots.ToEpoch(tt.slot))
require.NoError(t, err, "Failed to determine CommitteeAssignments")
cac := validatorIndexToCommittee[tt.index]
@@ -209,6 +215,8 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -239,6 +247,8 @@ func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *testing.T) {
ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -259,7 +269,7 @@ func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *t
func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
ClearCache()
defer ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -380,9 +390,9 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
},
}
defer ClearCache()
for i, tt := range tests {
ClearCache()
require.NoError(t, state.SetSlot(tt.stateSlot))
err := VerifyAttestationBitfieldLengths(context.Background(), state, tt.attestation)
if tt.verificationFailure {
@@ -395,7 +405,7 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)
@@ -425,7 +435,7 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)

View File

@@ -60,6 +60,8 @@ func TestBlockRootAtSlot_CorrectBlockRoot(t *testing.T) {
}
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
helpers.ClearCache()
s.Slot = tt.stateSlot
state, err := state_native.InitializeFromProtoPhase0(s)
require.NoError(t, err)
@@ -110,6 +112,8 @@ func TestBlockRootAtSlot_OutOfBounds(t *testing.T) {
},
}
for _, tt := range tests {
helpers.ClearCache()
state.Slot = tt.stateSlot
s, err := state_native.InitializeFromProtoPhase0(state)
require.NoError(t, err)

View File

@@ -1,13 +0,0 @@
package helpers
import (
"os"
"testing"
)
// run ClearCache before each test to prevent cross-test side effects
func TestMain(m *testing.M) {
ClearCache()
code := m.Run()
os.Exit(code)
}

View File

@@ -40,6 +40,8 @@ func TestRandaoMix_OK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
require.NoError(t, err)
@@ -74,6 +76,8 @@ func TestRandaoMix_CopyOK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
require.NoError(t, err)
@@ -88,6 +92,8 @@ func TestRandaoMix_CopyOK(t *testing.T) {
}
func TestGenerateSeed_OK(t *testing.T) {
ClearCache()
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
intInBytes := make([]byte, 32)

View File

@@ -14,6 +14,8 @@ import (
)
func TestTotalBalance_OK(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{
{EffectiveBalance: 27 * 1e9}, {EffectiveBalance: 28 * 1e9},
{EffectiveBalance: 32 * 1e9}, {EffectiveBalance: 40 * 1e9},
@@ -27,6 +29,8 @@ func TestTotalBalance_OK(t *testing.T) {
}
func TestTotalBalance_ReturnsEffectiveBalanceIncrement(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{}})
require.NoError(t, err)
@@ -47,6 +51,8 @@ func TestGetBalance_OK(t *testing.T) {
{i: 2, b: []uint64{0, 0, 0}},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Balances: test.b})
require.NoError(t, err)
assert.Equal(t, test.b[test.i], state.Balances()[test.i], "Incorrect Validator balance")
@@ -62,6 +68,8 @@ func TestTotalActiveBalance(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance, ExitEpoch: 1})
@@ -75,8 +83,6 @@ func TestTotalActiveBalance(t *testing.T) {
}
func TestTotalActiveBal_ReturnMin(t *testing.T) {
ClearCache()
defer ClearCache()
tests := []struct {
vCount int
}{
@@ -85,6 +91,8 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: 1, ExitEpoch: 1})
@@ -98,8 +106,6 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
}
func TestTotalActiveBalance_WithCache(t *testing.T) {
ClearCache()
defer ClearCache()
tests := []struct {
vCount int
wantCount int
@@ -109,6 +115,8 @@ func TestTotalActiveBalance_WithCache(t *testing.T) {
{vCount: 10000, wantCount: 10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance, ExitEpoch: 1})
@@ -133,6 +141,8 @@ func TestIncreaseBalance_OK(t *testing.T) {
{i: 2, b: []uint64{27 * 1e9, 28 * 1e9, 32 * 1e9}, nb: 33 * 1e9, eb: 65 * 1e9},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}},
@@ -157,6 +167,8 @@ func TestDecreaseBalance_OK(t *testing.T) {
{i: 3, b: []uint64{27 * 1e9, 28 * 1e9, 1, 28 * 1e9}, nb: 28 * 1e9, eb: 0},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 3}},
@@ -169,6 +181,8 @@ func TestDecreaseBalance_OK(t *testing.T) {
}
func TestFinalityDelay(t *testing.T) {
ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
@@ -199,6 +213,8 @@ func TestFinalityDelay(t *testing.T) {
}
func TestIsInInactivityLeak(t *testing.T) {
ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
@@ -269,6 +285,8 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
{i: 2, b: []uint64{math.MaxUint64, math.MaxUint64, math.MaxUint64}, nb: 33 * 1e9},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}},

View File

@@ -13,6 +13,8 @@ import (
)
func TestShuffleList_InvalidValidatorCount(t *testing.T) {
ClearCache()
maxShuffleListSize = 20
list := make([]primitives.ValidatorIndex, 21)
if _, err := ShuffleList(list, [32]byte{123, 125}); err == nil {
@@ -23,6 +25,8 @@ func TestShuffleList_InvalidValidatorCount(t *testing.T) {
}
func TestShuffleList_OK(t *testing.T) {
ClearCache()
var list1 []primitives.ValidatorIndex
seed1 := [32]byte{1, 128, 12}
seed2 := [32]byte{2, 128, 12}
@@ -47,6 +51,8 @@ func TestShuffleList_OK(t *testing.T) {
}
func TestSplitIndices_OK(t *testing.T) {
ClearCache()
var l []uint64
numValidators := uint64(64000)
for i := uint64(0); i < numValidators; i++ {
@@ -61,6 +67,8 @@ func TestSplitIndices_OK(t *testing.T) {
}
func TestShuffleList_Vs_ShuffleIndex(t *testing.T) {
ClearCache()
var list []primitives.ValidatorIndex
listSize := uint64(1000)
seed := [32]byte{123, 42}
@@ -125,6 +133,8 @@ func BenchmarkShuffleList(b *testing.B) {
}
func TestShuffledIndex(t *testing.T) {
ClearCache()
var list []primitives.ValidatorIndex
listSize := uint64(399)
for i := primitives.ValidatorIndex(0); uint64(i) < listSize; i++ {
@@ -147,6 +157,8 @@ func TestShuffledIndex(t *testing.T) {
}
func TestSplitIndicesAndOffset_OK(t *testing.T) {
ClearCache()
var l []uint64
validators := uint64(64000)
for i := uint64(0); i < validators; i++ {

View File

@@ -18,7 +18,7 @@ import (
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -49,7 +49,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -77,7 +77,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -105,7 +105,7 @@ func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -135,6 +135,8 @@ func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
}
func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -161,6 +163,8 @@ func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
}
func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -188,7 +192,7 @@ func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -219,7 +223,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -260,7 +264,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -288,7 +292,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -318,6 +322,8 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
}
func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -345,7 +351,7 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -372,6 +378,8 @@ func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
}
func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 1,
})
@@ -388,6 +396,8 @@ func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
}
func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch - 1,
LatestBlockHeader: &ethpb.BeaconBlockHeader{StateRoot: params.BeaconConfig().ZeroHash[:]},
@@ -399,7 +409,7 @@ func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),

View File

@@ -9,6 +9,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -271,11 +272,22 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
func cachedProposerIndexAtSlot(slot primitives.Slot, root [32]byte) (primitives.ValidatorIndex, error) {
proposerIndices, has := proposerIndicesCache.ProposerIndices(slots.ToEpoch(slot), root)
if !has {
cache.ProposerIndicesCacheMiss.Inc()
return 0, errProposerIndexMiss
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
cache.ProposerIndicesCacheMiss.Inc()
return 0, errProposerIndexMiss
}
return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
// ProposerIndexAtSlotFromCheckpoint returns the proposer index at the given
// slot from the cache at the given checkpoint
func ProposerIndexAtSlotFromCheckpoint(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, error) {
proposerIndices, has := proposerIndicesCache.IndicesFromCheckpoint(*c)
if !has {
return 0, errProposerIndexMiss
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return 0, errProposerIndexMiss
}
return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil

View File

@@ -7,6 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -178,8 +179,6 @@ func TestIsSlashableValidator_OK(t *testing.T) {
func TestBeaconProposerIndex_OK(t *testing.T) {
params.SetupTestConfigCleanup(t)
ClearCache()
defer ClearCache()
c := params.BeaconConfig()
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
@@ -223,9 +222,9 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
},
}
defer ClearCache()
for _, tt := range tests {
ClearCache()
require.NoError(t, state.SetSlot(tt.slot))
result, err := BeaconProposerIndex(context.Background(), state)
require.NoError(t, err, "Failed to get shard and committees at slot")
@@ -234,9 +233,9 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
}
func TestBeaconProposerIndex_BadState(t *testing.T) {
params.SetupTestConfigCleanup(t)
ClearCache()
defer ClearCache()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
@@ -267,6 +266,8 @@ func TestBeaconProposerIndex_BadState(t *testing.T) {
}
func TestComputeProposerIndex_Compatibility(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -308,12 +309,16 @@ func TestComputeProposerIndex_Compatibility(t *testing.T) {
}
func TestDelayedActivationExitEpoch_OK(t *testing.T) {
ClearCache()
epoch := primitives.Epoch(9999)
wanted := epoch + 1 + params.BeaconConfig().MaxSeedLookahead
assert.Equal(t, wanted, ActivationExitEpoch(epoch))
}
func TestActiveValidatorCount_Genesis(t *testing.T) {
ClearCache()
c := 1000
validators := make([]*ethpb.Validator, c)
for i := 0; i < len(validators); i++ {
@@ -347,7 +352,6 @@ func TestChurnLimit_OK(t *testing.T) {
{validatorCount: 1000000, wantedChurn: 15 /* validatorCount/churnLimitQuotient */},
{validatorCount: 2000000, wantedChurn: 30 /* validatorCount/churnLimitQuotient */},
}
defer ClearCache()
for _, test := range tests {
ClearCache()
@@ -381,9 +385,6 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
{1000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
{2000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
}
defer ClearCache()
for _, test := range tests {
ClearCache()
@@ -416,7 +417,7 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
// Test basic functionality of ActiveValidatorIndices without caching. This test will need to be
// rewritten when releasing some cache flag.
func TestActiveValidatorIndices(t *testing.T) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
//farFutureEpoch := params.BeaconConfig().FarFutureEpoch
type args struct {
state *ethpb.BeaconState
epoch primitives.Epoch
@@ -427,7 +428,7 @@ func TestActiveValidatorIndices(t *testing.T) {
want []primitives.ValidatorIndex
wantedErr string
}{
{
/*{
name: "all_active_epoch_10",
args: args{
state: &ethpb.BeaconState{
@@ -558,7 +559,7 @@ func TestActiveValidatorIndices(t *testing.T) {
epoch: 10,
},
want: []primitives.ValidatorIndex{0, 2, 3},
},
},*/
{
name: "impossible_zero_validators", // Regression test for issue #13051
args: args{
@@ -568,22 +569,21 @@ func TestActiveValidatorIndices(t *testing.T) {
},
epoch: 10,
},
wantedErr: "no active validator indices",
wantedErr: "state has nil validator slice",
},
}
defer ClearCache()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.args.state)
require.NoError(t, err)
require.NoError(t, s.SetValidators(tt.args.state.Validators))
got, err := ActiveValidatorIndices(context.Background(), s, tt.args.epoch)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
return
}
assert.DeepEqual(t, tt.want, got, "ActiveValidatorIndices()")
ClearCache()
})
}
}
@@ -684,6 +684,8 @@ func TestComputeProposerIndex(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
bState := &ethpb.BeaconState{Validators: tt.args.validators}
stTrie, err := state_native.InitializeFromProtoUnsafePhase0(bState)
require.NoError(t, err)
@@ -716,6 +718,8 @@ func TestIsEligibleForActivationQueue(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
assert.Equal(t, tt.want, IsEligibleForActivationQueue(tt.validator), "IsEligibleForActivationQueue()")
})
}
@@ -743,6 +747,8 @@ func TestIsIsEligibleForActivation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
assert.Equal(t, tt.want, IsEligibleForActivation(s, tt.validator), "IsEligibleForActivation()")
@@ -781,6 +787,8 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
}
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
ClearCache()
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
@@ -802,3 +810,20 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
require.NoError(t, err)
require.Equal(t, index, primitives.ValidatorIndex(3))
}
func TestProposerIndexFromCheckpoint(t *testing.T) {
ClearCache()
e := primitives.Epoch(2)
r := [32]byte{'a'}
root := [32]byte{'b'}
ids := [32]primitives.ValidatorIndex{}
slot := primitives.Slot(69) // slot 5 in the Epoch
ids[5] = primitives.ValidatorIndex(19)
proposerIndicesCache.Set(e, r, ids)
c := &forkchoicetypes.Checkpoint{Root: root, Epoch: e - 1}
proposerIndicesCache.SetCheckpoint(*c, r)
id, err := ProposerIndexAtSlotFromCheckpoint(c, slot)
require.NoError(t, err)
require.Equal(t, ids[5], id)
}

View File

@@ -202,3 +202,14 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// The value is defined in the p2p spec configuration:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// Detailed rationale: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
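With the mainnet defaults (MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs, CHURN_LIMIT_QUOTIENT = 65536), this works out to 256 + 65536/2 = 33024 epochs; at 32 slots per epoch and 12 seconds per slot that is about 147 days, the roughly five months quoted above.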

View File

@@ -48,6 +48,7 @@ func TestWeakSubjectivity_ComputeWeakSubjectivityPeriod(t *testing.T) {
t.Run(fmt.Sprintf("valCount: %d, avgBalance: %d", tt.valCount, tt.avgBalance), func(t *testing.T) {
// Reset committee cache - as we need to recalculate active validator set for each test.
helpers.ClearCache()
got, err := helpers.ComputeWeakSubjectivityPeriod(context.Background(), genState(t, tt.valCount, tt.avgBalance), params.BeaconConfig())
require.NoError(t, err)
assert.Equal(t, tt.want, got, "valCount: %v, avgBalance: %v", tt.valCount, tt.avgBalance)
@@ -177,6 +178,8 @@ func TestWeakSubjectivity_IsWithinWeakSubjectivityPeriod(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
sr, _, e := tt.genWsCheckpoint()
got, err := helpers.IsWithinWeakSubjectivityPeriod(context.Background(), tt.epoch, tt.genWsState(), sr, e, params.BeaconConfig())
if tt.wantedErr != "" {
@@ -247,6 +250,8 @@ func TestWeakSubjectivity_ParseWeakSubjectivityInputString(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
wsCheckpt, err := helpers.ParseWeakSubjectivityInputString(tt.input)
if tt.wantedErr != "" {
require.ErrorContains(t, tt.wantedErr, err)
@@ -281,3 +286,21 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}

View File

@@ -245,10 +245,10 @@ func createFullBellatrixBlockWithOperations(t *testing.T) (state.BeaconState,
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: bytesutil.PadTo([]byte{1, 2, 3, 4}, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
},
},
},
@@ -284,11 +284,11 @@ func createFullCapellaBlockWithOperations(t *testing.T) (state.BeaconState,
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: bytesutil.PadTo([]byte{1, 2, 3, 4}, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
},
},
},

View File

@@ -209,6 +209,7 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -269,6 +270,7 @@ func EmptyGenesisStateBellatrix() (state.BeaconState, error) {
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),

View File

@@ -0,0 +1,49 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"availability.go",
"cache.go",
"iface.go",
"mock.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"availability_test.go",
"cache_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -0,0 +1,149 @@
package das
import (
"context"
"fmt"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/runtime/logging"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
)
var (
errMixedRoots = errors.New("BlobSidecars must all be for the same block")
)
// LazilyPersistentStore is an implementation of AvailabilityStore to be used when batch syncing.
// This implementation will hold any blobs passed to Persist until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to disk.
type LazilyPersistentStore struct {
store *filesystem.BlobStorage
cache *cache
verifier BlobBatchVerifier
}
var _ AvailabilityStore = &LazilyPersistentStore{}
// BlobBatchVerifier enables LazilyPersistentStore to manage the verification process
// going from ROBlob->VerifiedROBlob without deciding which individual verifications
// to run or in what order. Since LazilyPersistentStore only tries to verify and save blobs once
// they are all available, the interface takes a slice of blobs, enabling the implementation to optimize
// batch verification.
type BlobBatchVerifier interface {
VerifiedROBlobs(ctx context.Context, blk blocks.ROBlock, sc []blocks.ROBlob) ([]blocks.VerifiedROBlob, error)
}
// NewLazilyPersistentStore creates a new LazilyPersistentStore. This constructor should always be used
// when creating a LazilyPersistentStore because it needs to initialize the cache under the hood.
func NewLazilyPersistentStore(store *filesystem.BlobStorage, verifier BlobBatchVerifier) *LazilyPersistentStore {
return &LazilyPersistentStore{
store: store,
cache: newCache(),
verifier: verifier,
}
}
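As a minimal sketch of the intended call sequence during batch sync (the surrounding helper and its arguments are assumptions; only Persist and IsDataAvailable come from this file):

// Sketch only: stash whatever sidecars arrived with the batch, then run the DA check for the block.
func processBatchDA(ctx context.Context, store *filesystem.BlobStorage, bv BlobBatchVerifier,
	current primitives.Slot, blk blocks.ROBlock, scs []blocks.ROBlob) error {
	avs := NewLazilyPersistentStore(store, bv)
	if err := avs.Persist(current, scs...); err != nil {
		return err // mixed roots, duplicate index, or index out of bounds
	}
	// Verifies every commitment in blk against the cached sidecars and writes them to disk.
	return avs.IsDataAvailable(ctx, current, blk)
}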
// Persist adds blobs to the working blob cache. Blobs stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all blobs referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
if len(sc) == 0 {
return nil
}
if len(sc) > 1 {
first := sc[0].BlockRoot()
for i := 1; i < len(sc); i++ {
if first != sc[i].BlockRoot() {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromSidecar(sc[0])
entry := s.cache.ensure(key)
for i := range sc {
if err := entry.stash(&sc[i]); err != nil {
return err
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// BlobSidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := commitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
}
// Return early for blocks that are pre-deneb or which do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
key := keyFromBlock(b)
entry := s.cache.ensure(key)
defer s.cache.delete(key)
root := b.Root()
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
sidecars, err := entry.filter(root, blockCommitments)
if err != nil {
return errors.Wrap(err, "incomplete BlobSidecar batch")
}
// Do thorough verifications of each BlobSidecar for the block.
// Same as above, we don't save BlobSidecars if there are any problems with the batch.
vscs, err := s.verifier.VerifiedROBlobs(ctx, b, sidecars)
if err != nil {
var me verification.VerificationMultiError
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(log.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
log.WithFields(lf).WithFields(logging.BlockFieldsFromBlob(sidecars[0])).
Debug("invalid BlobSidecars received")
}
return errors.Wrapf(err, "invalid BlobSidecars received for block %#x", root)
}
// Ensure that each BlobSidecar is written to disk.
for i := range vscs {
if err := s.store.Save(vscs[i]); err != nil {
return errors.Wrapf(err, "failed to save BlobSidecar index %d for block %#x", vscs[i].Index, root)
}
}
// All BlobSidecars are persisted - da check succeeds.
return nil
}
func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) (safeCommitmentArray, error) {
var ar safeCommitmentArray
if b.Version() < version.Deneb {
return ar, nil
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return ar, nil
}
kc, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return ar, err
}
if len(kc) > len(ar) {
return ar, errIndexOutOfBounds
}
copy(ar[:], kc)
return ar, nil
}

View File

@@ -0,0 +1,214 @@
package das
import (
"bytes"
"context"
"testing"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
func Test_commitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
err error
}{
{
name: "pre deneb",
block: func(t *testing.T) blocks.ROBlock {
bb := util.NewBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
},
{
name: "commitments within da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
},
{
name: "excessive commitments",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Slot = 100
// block slot is 100, so it is still inside the DA window; the error must come from the commitment count check
d.Block.Body.BlobKzgCommitments = commits
// Double the number of commitments, assert that this is over the limit
d.Block.Body.BlobKzgCommitments = append(commits, d.Block.Body.BlobKzgCommitments...)
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
c, err := rb.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, true, len(c) > fieldparams.MaxBlobsPerBlock)
return rb
},
slot: windowSlots + 1,
err: errIndexOutOfBounds,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co, err := commitmentsToCheck(b, c.slot)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
}
require.Equal(t, len(c.commits), co.count())
for i := 0; i < len(c.commits); i++ {
require.Equal(t, true, bytes.Equal(c.commits[i], co[i]))
}
})
}
}
func daAlwaysSucceeds(_ [][]byte, _ []*ethpb.BlobSidecar) error {
return nil
}
type mockDA struct {
t *testing.T
scs []blocks.ROBlob
err error
}
func TestLazilyPersistent_Missing(t *testing.T) {
ctx := context.Background()
store := filesystem.NewEphemeralBlobStorage(t)
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
mbv := &mockBlobBatchVerifier{t: t, scs: scs}
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[2]))
err := as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All but one persisted, return missing idx
require.NoError(t, as.Persist(1, scs[0]))
err = as.IsDataAvailable(ctx, 1, blk)
require.ErrorIs(t, err, errMissingSidecar)
// All persisted, return nil
require.NoError(t, as.Persist(1, scs...))
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
}
func TestLazilyPersistent_Mismatch(t *testing.T) {
ctx := context.Background()
store := filesystem.NewEphemeralBlobStorage(t)
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
mbv := &mockBlobBatchVerifier{t: t, err: errors.New("kzg check should not run")}
scs[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
as := NewLazilyPersistentStore(store, mbv)
// Only one commitment persisted, should return error with other indices
require.NoError(t, as.Persist(1, scs[0]))
err := as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
require.ErrorIs(t, err, errCommitmentMismatch)
}
func TestLazyPersistOnceCommitted(t *testing.T) {
_, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
as := NewLazilyPersistentStore(filesystem.NewEphemeralBlobStorage(t), &mockBlobBatchVerifier{})
// stashes as expected
require.NoError(t, as.Persist(1, scs...))
// ignores duplicates
require.ErrorIs(t, as.Persist(1, scs...), ErrDuplicateSidecar)
// ignores index out of bound
scs[0].Index = 6
require.ErrorIs(t, as.Persist(1, scs[0]), errIndexOutOfBounds)
_, more := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
require.NoError(t, as.Persist(32+slotOOB, more[0]))
// doesn't ignore new sidecars with a different block root
require.NoError(t, as.Persist(1, more...))
}
type mockBlobBatchVerifier struct {
t *testing.T
scs []blocks.ROBlob
err error
verified map[[32]byte]primitives.Slot
}
var _ BlobBatchVerifier = &mockBlobBatchVerifier{}
func (m *mockBlobBatchVerifier) VerifiedROBlobs(_ context.Context, _ blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
require.Equal(m.t, len(scs), len(m.scs))
for i := range m.scs {
require.Equal(m.t, m.scs[i], scs[i])
}
vscs := verification.FakeVerifySliceForTest(m.t, scs)
return vscs, m.err
}
func (m *mockBlobBatchVerifier) MarkVerified(root [32]byte, slot primitives.Slot) {
if m.verified == nil {
m.verified = make(map[[32]byte]primitives.Slot)
}
m.verified[root] = slot
}

beacon-chain/das/cache.go Normal file (117 lines)
View File

@@ -0,0 +1,117 @@
package das
import (
"bytes"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
var (
ErrDuplicateSidecar = errors.New("duplicate sidecar stashed in AvailabilityStore")
errIndexOutOfBounds = errors.New("sidecar.index > MAX_BLOBS_PER_BLOCK")
errCommitmentMismatch = errors.New("KzgCommitment of sidecar in cache did not match block commitment")
errMissingSidecar = errors.New("no sidecar in cache for block commitment")
)
// cacheKey includes the slot so that we can easily iterate through the cache and compare
// slots for eviction purposes. Whether the input is the block or the sidecar, we always have
// the root+slot when interacting with the cache, so it isn't an inconvenience to use both.
type cacheKey struct {
slot primitives.Slot
root [32]byte
}
type cache struct {
entries map[cacheKey]*cacheEntry
}
func newCache() *cache {
return &cache{entries: make(map[cacheKey]*cacheEntry)}
}
// keyFromSidecar is a convenience method for constructing a cacheKey from a BlobSidecar value.
func keyFromSidecar(sc blocks.ROBlob) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
// keyFromBlock is a convenience method for constructing a cacheKey from a ROBlock value.
func keyFromBlock(b blocks.ROBlock) cacheKey {
return cacheKey{slot: b.Block().Slot(), root: b.Root()}
}
// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *cache) ensure(key cacheKey) *cacheEntry {
e, ok := c.entries[key]
if !ok {
e = &cacheEntry{}
c.entries[key] = e
}
return e
}
// delete removes the cache entry from the cache.
func (c *cache) delete(key cacheKey) {
delete(c.entries, key)
}
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
}
// stash adds an item to the in-memory cache of BlobSidecars.
// Only the first BlobSidecar of a given Index will be kept in the cache.
// stash will return an error if the given blob is already in the cache, or if the Index is out of bounds.
func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
if sc.Index >= fieldparams.MaxBlobsPerBlock {
return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.Index)
}
if e.scs[sc.Index] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.Index, sc.KzgCommitment)
}
e.scs[sc.Index] = sc
return nil
}
// filter evicts sidecars that are not committed to by the block and returns custom
// errors if the cache is missing any of the commitments, or if the commitments in
// the cache do not match those found in the block. If err is nil, then all expected
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROBlob, error) {
scs := make([]blocks.ROBlob, kc.count())
for i := uint64(0); i < fieldparams.MaxBlobsPerBlock; i++ {
if kc[i] == nil {
if e.scs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.scs[i].KzgCommitment)
}
continue
}
if e.scs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !bytes.Equal(kc[i], e.scs[i].KzgCommitment) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitment, kc[i])
}
scs[i] = *e.scs[i]
}
return scs, nil
}
// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// gratuitous bounds checks.
type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte
func (s safeCommitmentArray) count() int {
for i := range s {
if s[i] == nil {
return i
}
}
return fieldparams.MaxBlobsPerBlock
}
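Purely for illustration, this is the shape of the lifecycle that IsDataAvailable drives with the primitives above (the helper itself is hypothetical):

// Hypothetical helper showing the ensure/stash/filter/delete lifecycle used by IsDataAvailable.
func checkAgainstBlock(c *cache, sc blocks.ROBlob, kc safeCommitmentArray) ([]blocks.ROBlob, error) {
	key := keyFromSidecar(sc)
	entry := c.ensure(key) // create-or-get the per-block entry
	if err := entry.stash(&sc); err != nil {
		return nil, err // ErrDuplicateSidecar or errIndexOutOfBounds
	}
	defer c.delete(key) // evict the entry once the check completes
	// Fails with errMissingSidecar or errCommitmentMismatch unless every block commitment has a matching sidecar.
	return entry.filter(sc.BlockRoot(), kc)
}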

View File

@@ -0,0 +1,25 @@
package das
import (
"testing"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestCacheEnsureDelete(t *testing.T) {
c := newCache()
require.Equal(t, 0, len(c.entries))
root := bytesutil.ToBytes32([]byte("root"))
slot := primitives.Slot(1234)
k := cacheKey{root: root, slot: slot}
entry := c.ensure(k)
require.Equal(t, 1, len(c.entries))
require.Equal(t, c.entries[k], entry)
c.delete(k)
require.Equal(t, 0, len(c.entries))
var nilEntry *cacheEntry
require.Equal(t, nilEntry, c.entries[k])
}

beacon-chain/das/iface.go Normal file (19 lines)
View File

@@ -0,0 +1,19 @@
package das
import (
"context"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
// AvailabilityStore describes a component that can verify and save sidecars for a given block, and confirm previously
// verified and saved sidecars.
// Persist guarantees that the sidecar will be available to perform a DA check
// for the life of the beacon node process.
// IsDataAvailable guarantees that all blobs committed to in the block have been
// durably persisted before returning a non-error value.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(current primitives.Slot, sc ...blocks.ROBlob) error
}

beacon-chain/das/mock.go Normal file (32 lines)
View File

@@ -0,0 +1,32 @@
package das
import (
"context"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
// MockAvailabilityStore is an implementation of AvailabilityStore that can be used by other packages in tests.
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(current primitives.Slot, sc ...blocks.ROBlob) error
}
var _ AvailabilityStore = &MockAvailabilityStore{}
// IsDataAvailable satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
if m.VerifyAvailabilityCallback != nil {
return m.VerifyAvailabilityCallback(ctx, current, b)
}
return nil
}
// Persist satisfies the corresponding method of the AvailabilityStore interface in a way that is useful for tests.
func (m *MockAvailabilityStore) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(current, sc...)
}
return nil
}
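A hypothetical consumer-package test might stub persistence like this (the package name and assertions are assumptions, not part of this diff):

package sync_test // hypothetical consumer package

import (
	"testing"

	"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
	"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
	"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)

func TestWithMockAvailabilityStore(t *testing.T) {
	persisted := 0
	var avs das.AvailabilityStore = &das.MockAvailabilityStore{
		PersistBlobsCallback: func(_ primitives.Slot, sc ...blocks.ROBlob) error {
			persisted += len(sc)
			return nil
		},
	}
	// IsDataAvailable returns nil because no VerifyAvailabilityCallback is set.
	if err := avs.Persist(1); err != nil || persisted != 0 {
		t.Fatalf("unexpected persist behaviour: err=%v persisted=%d", err, persisted)
	}
}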

View File

@@ -22,9 +22,6 @@ var ErrNotFoundOriginBlockRoot = kv.ErrNotFoundOriginBlockRoot
// ErrNotFoundBackfillBlockRoot wraps ErrNotFound for an error specific to the backfill block root.
var ErrNotFoundBackfillBlockRoot = kv.ErrNotFoundBackfillBlockRoot
// ErrNotFoundGenesisBlockRoot means no genesis block root was found, indicating the db was not initialized with genesis
var ErrNotFoundGenesisBlockRoot = kv.ErrNotFoundGenesisBlockRoot
// IsNotFound allows callers to treat errors from a flat-file database, where the file record is missing,
// as equivalent to db.ErrNotFound.
func IsNotFound(err error) bool {

View File

@@ -6,6 +6,7 @@ go_library(
"blob.go",
"ephemeral.go",
"metrics.go",
"pruner.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
@@ -19,7 +20,6 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/logging:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -30,7 +30,10 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["blob_test.go"],
srcs = [
"blob_test.go",
"pruner_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/verification:go_default_library",
@@ -40,7 +43,6 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_spf13_afero//:go_default_library",

View File

@@ -1,28 +1,21 @@
package filesystem
import (
"encoding/binary"
"fmt"
"io"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"sync/atomic"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/io/file"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/logging"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
@@ -35,7 +28,6 @@ const (
sszExt = "ssz"
partExt = "part"
bufferEpochs = 2
directoryPermissions = 0700
)
@@ -45,11 +37,11 @@ type BlobStorageOption func(*BlobStorage) error
// WithBlobRetentionEpochs is an option that changes the number of epochs blobs will be persisted.
func WithBlobRetentionEpochs(e primitives.Epoch) BlobStorageOption {
return func(b *BlobStorage) error {
s, err := slots.EpochStart(e + bufferEpochs)
pruner, err := newBlobPruner(b.fs, e)
if err != nil {
return errors.Wrap(err, "could not set retentionSlots")
return err
}
b.retentionSlots = s
b.pruner = pruner
return nil
}
}
@@ -71,14 +63,29 @@ func NewBlobStorage(base string, opts ...BlobStorageOption) (*BlobStorage, error
return nil, fmt.Errorf("failed to create blob storage at %s: %w", base, err)
}
}
if b.pruner == nil {
log.Warn("Initializing blob filesystem storage with pruning disabled")
}
return b, nil
}
// BlobStorage is the concrete implementation of the filesystem backend for saving and retrieving BlobSidecars.
type BlobStorage struct {
fs afero.Fs
retentionSlots primitives.Slot
prunedBefore atomic.Uint64
fs afero.Fs
pruner *blobPruner
}
// WarmCache runs the prune routine with an expiration slot of 0, so nothing will be pruned, but the pruner's cache
// will be populated at node startup, avoiding a costly cold prune (~4s in syscalls) during syncing.
func (bs *BlobStorage) WarmCache() {
if bs.pruner == nil {
return
}
go func() {
if err := bs.pruner.prune(0); err != nil {
log.WithError(err).Error("Error encountered while warming up blob pruner cache.")
}
}()
}
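A sketch of how a node might wire this up at startup (the helper, base path, and retention argument are illustrative; only NewBlobStorage, WithBlobRetentionEpochs, and WarmCache come from this file):

// Sketch only: construct blob storage with pruning enabled and warm the pruner cache at startup.
func openBlobStorage(base string, retention primitives.Epoch) (*BlobStorage, error) {
	bs, err := NewBlobStorage(base, WithBlobRetentionEpochs(retention))
	if err != nil {
		return nil, err
	}
	bs.WarmCache() // runs prune(0) in a goroutine: builds the pruner's cache, deletes nothing
	return bs, nil
}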
// Save saves blobs given a list of sidecars.
@@ -94,7 +101,9 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
log.WithFields(logging.BlobFields(sidecar.ROBlob)).Debug("ignoring a duplicate blob sidecar Save attempt")
return nil
}
bs.tryPrune(sidecar.Slot())
if bs.pruner != nil {
bs.pruner.notify(sidecar.BlockRoot(), sidecar.Slot())
}
// Serialize the ethpb.BlobSidecar to binary data using SSZ.
sidecarData, err := sidecar.MarshalSSZ()
@@ -106,8 +115,12 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
}
partPath := fname.partPath()
partialMoved := false
// Ensure the partial file is deleted.
defer func() {
if partialMoved {
return
}
// It's expected to error if the save is successful.
err = bs.fs.Remove(partPath)
if err == nil {
@@ -141,8 +154,9 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
blobsTotalGauge.Inc()
blobSaveLatency.Observe(time.Since(startTime).Seconds())
partialMoved = true
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
@@ -166,11 +180,17 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return blocks.VerifiedROBlob{}, err
}
defer func() {
blobFetchLatency.Observe(time.Since(startTime).Seconds())
blobFetchLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
return verification.BlobSidecarNoop(ro)
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
return bs.fs.RemoveAll(rootDir)
}
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.
@@ -218,7 +238,7 @@ func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
}
func (p blobNamer) dir() string {
return fmt.Sprintf("%#x", p.root)
return rootString(p.root)
}
func (p blobNamer) fname(ext string) string {
@@ -233,110 +253,6 @@ func (p blobNamer) path() string {
return p.fname(sszExt)
}
// Prune prunes blobs in the base directory based on the retention epoch.
// It deletes blobs older than currentEpoch - (retentionEpochs+bufferEpochs).
// This is so that we keep a slight buffer and blobs are deleted after n+2 epochs.
func (bs *BlobStorage) Prune(pruneBefore primitives.Slot) error {
t := time.Now()
log.Debug("Pruning old blobs")
folders, err := afero.ReadDir(bs.fs, ".")
if err != nil {
return err
}
var totalPruned int
for _, folder := range folders {
if folder.IsDir() {
num, err := bs.processFolder(folder, pruneBefore)
if err != nil {
return err
}
blobsPrunedCounter.Add(float64(num))
blobsTotalGauge.Add(-float64(num))
totalPruned += num
}
}
pruneTime := time.Since(t)
log.WithFields(log.Fields{
"lastPrunedEpoch": slots.ToEpoch(pruneBefore),
"pruneTime": pruneTime,
"numberBlobsPruned": totalPruned,
}).Debug("Pruned old blobs")
return nil
}
// processFolder will delete the folder of blobs if the blob slot is outside the
// retention period. We determine the slot by looking at the first blob in the folder.
func (bs *BlobStorage) processFolder(folder os.FileInfo, pruneBefore primitives.Slot) (int, error) {
f, err := bs.fs.Open(filepath.Join(folder.Name(), "0."+sszExt))
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
slot, err := slotFromBlob(f)
if err != nil {
return 0, err
}
var num int
if slot < pruneBefore {
num, err = bs.countFiles(folder.Name())
if err != nil {
return 0, err
}
if err = bs.fs.RemoveAll(folder.Name()); err != nil {
return 0, errors.Wrapf(err, "failed to delete blob %s", f.Name())
}
}
return num, nil
}
// slotFromBlob reads the ssz data of a file at the specified offset (8 + 131072 + 48 + 48 = 131176 bytes),
// which is derived from the sizes of the fixed-length BlobSidecar fields that precede the slot value
// within the SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
// Delete removes the directory matching the provided block root and all the blobs it contains.
func (bs *BlobStorage) Delete(root [32]byte) error {
if err := bs.fs.RemoveAll(hexutil.Encode(root[:])); err != nil {
return fmt.Errorf("failed to delete blobs for root %#x: %w", root, err)
}
return nil
}
// tryPrune checks whether we should prune and then calls prune
func (bs *BlobStorage) tryPrune(latest primitives.Slot) {
pruned := uint64(pruneBefore(latest, bs.retentionSlots))
if bs.prunedBefore.Swap(pruned) == pruned {
return
}
go func() {
if err := bs.Prune(primitives.Slot(pruned)); err != nil {
log.WithError(err).Errorf("failed to prune blobs from slot %d", latest)
}
}()
}
func pruneBefore(latest primitives.Slot, offset primitives.Slot) primitives.Slot {
// Safely compute the first slot in the epoch for the latest slot
latest = latest - latest%params.BeaconConfig().SlotsPerEpoch
if latest < offset {
return 0
}
return latest - offset
func rootString(root [32]byte) string {
return fmt.Sprintf("%#x", root)
}

View File

@@ -7,7 +7,6 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
@@ -74,47 +73,20 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("check pruning", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
t.Run("round trip write, read and delete", func(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
err := bs.Save(testSidecars[0])
require.NoError(t, err)
// Slot in first half of epoch therefore should not prune
bs.tryPrune(testSidecars[0].Slot())
err = bs.Save(testSidecars[0])
require.NoError(t, err)
actual, err := bs.Get(testSidecars[0].BlockRoot(), testSidecars[0].Index)
require.NoError(t, err)
require.DeepSSZEqual(t, testSidecars[0], actual)
err = pollUntil(t, fs, 1)
expected := testSidecars[0]
actual, err := bs.Get(expected.BlockRoot(), expected.Index)
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
_, sidecars = util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 33, fieldparams.MaxBlobsPerBlock)
testSidecars1, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
// Slot in first half of epoch therefore should not prune
bs.tryPrune(testSidecars1[0].Slot())
err = bs.Save(testSidecars1[0])
require.NoError(t, err)
// Check previous saved sidecar was not pruned
actual, err = bs.Get(testSidecars[0].BlockRoot(), testSidecars[0].Index)
require.NoError(t, err)
require.DeepSSZEqual(t, testSidecars[0], actual)
// Check latest sidecar exists
actual, err = bs.Get(testSidecars1[0].BlockRoot(), testSidecars1[0].Index)
require.NoError(t, err)
require.DeepSSZEqual(t, testSidecars1[0], actual)
err = pollUntil(t, fs, 2) // Check correct number of files
require.NoError(t, err)
_, sidecars = util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 131187, fieldparams.MaxBlobsPerBlock)
testSidecars2, err := verification.BlobSidecarSliceNoop(sidecars)
// Slot in second half of epoch therefore should prune
bs.tryPrune(testSidecars2[0].Slot())
require.NoError(t, err)
err = bs.Save(testSidecars2[0])
require.NoError(t, err)
err = pollUntil(t, fs, 3)
require.NoError(t, err)
require.NoError(t, bs.Remove(expected.BlockRoot()))
_, err = bs.Get(expected.BlockRoot(), expected.Index)
require.ErrorContains(t, "file does not exist", err)
})
}
@@ -188,7 +160,22 @@ func TestBlobStoragePrune(t *testing.T) {
require.NoError(t, bs.Save(sidecar))
}
require.NoError(t, bs.Prune(currentSlot-bs.retentionSlots))
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
require.Equal(t, 0, len(remainingFolders))
})
t.Run("Prune dangling blob", func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 299, fieldparams.MaxBlobsPerBlock)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
for _, sidecar := range testSidecars[4:] {
require.NoError(t, bs.Save(sidecar))
}
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
@@ -196,7 +183,7 @@ func TestBlobStoragePrune(t *testing.T) {
})
t.Run("PruneMany", func(t *testing.T) {
blockQty := 10
slot := primitives.Slot(0)
slot := primitives.Slot(1)
for j := 0; j <= blockQty; j++ {
root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
@@ -208,7 +195,7 @@ func TestBlobStoragePrune(t *testing.T) {
slot += 10000
}
require.NoError(t, bs.Prune(currentSlot-bs.retentionSlots))
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
@@ -237,41 +224,11 @@ func BenchmarkPruning(b *testing.B) {
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := bs.Prune(currentSlot)
err := bs.pruner.prune(currentSlot)
require.NoError(b, err)
}
}
func TestBlobStorageDelete(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
rawRoot := "0xcf9bb70c98f58092c9d6459227c9765f984d240be9690e85179bc5a6f60366ad"
blockRoot, err := hexutil.Decode(rawRoot)
require.NoError(t, err)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, fieldparams.MaxBlobsPerBlock)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
for _, sidecar := range testSidecars {
require.NoError(t, bs.Save(sidecar))
}
exists, err := afero.DirExists(fs, hexutil.Encode(blockRoot))
require.NoError(t, err)
require.Equal(t, true, exists)
// Delete the directory corresponding to the block root
require.NoError(t, bs.Delete(bytesutil.ToBytes32(blockRoot)))
// Ensure that the directory no longer exists after deletion
exists, err = afero.DirExists(fs, hexutil.Encode(blockRoot))
require.NoError(t, err)
require.Equal(t, false, exists)
// Deleting a non-existent root does not return an error.
require.NoError(t, bs.Delete(bytesutil.ToBytes32([]byte{0x1})))
}
func TestNewBlobStorage(t *testing.T) {
_, err := NewBlobStorage(path.Join(t.TempDir(), "good"))
require.NoError(t, err)

View File

@@ -4,26 +4,30 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/spf13/afero"
)
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(_ testing.TB) *BlobStorage {
return &BlobStorage{fs: afero.NewMemMapFs()}
func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
fs := afero.NewMemMapFs()
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
if err != nil {
t.Fatal("test setup issue", err)
}
return &BlobStorage{fs: fs, pruner: pruner}
}
// NewEphemeralBlobStorageWithFs can be used by tests that want access to the virtual filesystem
// in order to interact with it directly, outside of the BlobStorage API.
func NewEphemeralBlobStorageWithFs(_ testing.TB) (afero.Fs, *BlobStorage, error) {
func NewEphemeralBlobStorageWithFs(t testing.TB) (afero.Fs, *BlobStorage, error) {
fs := afero.NewMemMapFs()
s, err := slots.EpochStart(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
if err != nil {
return fs, &BlobStorage{}, err
t.Fatal("test setup issue", err)
}
return fs, &BlobStorage{fs: fs, retentionSlots: s}, nil
return fs, &BlobStorage{fs: fs, pruner: pruner}, nil
}
type BlobMocker struct {

View File

@@ -1,66 +1,28 @@
package filesystem
import (
"fmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/spf13/afero"
)
var (
blobBuckets = []float64{0.00003, 0.00005, 0.00007, 0.00009, 0.00011, 0.00013, 0.00015}
blobBuckets = []float64{3, 5, 7, 9, 11, 13}
blobSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_save_latency",
Help: "Latency of blob storage save operations in seconds",
Help: "Latency of BlobSidecar storage save operations in milliseconds",
Buckets: blobBuckets,
})
blobFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_get_latency",
Help: "Latency of blob storage get operations in seconds",
Help: "Latency of BlobSidecar storage get operations in milliseconds",
Buckets: blobBuckets,
})
blobsPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "blob_pruned_blobs_total",
Help: "Total number of pruned blobs.",
Name: "blob_pruned",
Help: "Number of BlobSidecar files pruned.",
})
blobsTotalGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "blobs_on_disk_total",
Help: "Total number of blobs in filesystem.",
blobsWrittenCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "blobs_written",
Help: "Number of BlobSidecar files written.",
})
)
func (bs *BlobStorage) Initialize() error {
if err := bs.collectTotalBlobMetric(); err != nil {
return fmt.Errorf("failed to initialize blob metrics: %w", err)
}
return nil
}
// collectTotalBlobMetric sets the blobsTotalGauge metric to the number of blobs
// currently present in the filesystem.
func (bs *BlobStorage) collectTotalBlobMetric() error {
totalBlobs := 0
folders, err := afero.ReadDir(bs.fs, ".")
if err != nil {
return err
}
for _, folder := range folders {
num, err := bs.countFiles(folder.Name())
if err != nil {
return err
}
totalBlobs = totalBlobs + num
}
blobsTotalGauge.Set(float64(totalBlobs))
return nil
}
// countFiles returns the length of blob files for a given directory.
func (bs *BlobStorage) countFiles(folderName string) (int, error) {
files, err := afero.ReadDir(bs.fs, folderName)
if err != nil {
return 0, err
}
return len(files), nil
}

View File

@@ -0,0 +1,274 @@
package filesystem
import (
"encoding/binary"
"io"
"path"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
const retentionBuffer primitives.Epoch = 2
var (
errPruningFailures = errors.New("blobs could not be pruned for some roots")
)
type blobPruner struct {
sync.Mutex
prunedBefore atomic.Uint64
windowSize primitives.Slot
slotMap *slotForRoot
fs afero.Fs
}
func newBlobPruner(fs afero.Fs, retain primitives.Epoch) (*blobPruner, error) {
r, err := slots.EpochStart(retain + retentionBuffer)
if err != nil {
return nil, errors.Wrap(err, "could not set retentionSlots")
}
return &blobPruner{fs: fs, windowSize: r, slotMap: newSlotForRoot()}, nil
}
// notify updates the pruner's view of root->blob mappings. This allows the pruner to build a cache
// of root->slot mappings and decide when to evict old blobs based on the age of present blobs.
func (p *blobPruner) notify(root [32]byte, latest primitives.Slot) {
p.slotMap.ensure(rootString(root), latest)
pruned := uint64(windowMin(latest, p.windowSize))
if p.prunedBefore.Swap(pruned) == pruned {
return
}
go func() {
if err := p.prune(primitives.Slot(pruned)); err != nil {
log.WithError(err).Errorf("Failed to prune blobs from slot %d", latest)
}
}()
}
func windowMin(latest primitives.Slot, offset primitives.Slot) primitives.Slot {
// Safely compute the first slot in the epoch for the latest slot
latest = latest - latest%params.BeaconConfig().SlotsPerEpoch
if latest < offset {
return 0
}
return latest - offset
}
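To make the dedup in notify concrete, here is a minimal, self-contained sketch (assuming SlotsPerEpoch = 32, a value this diff does not pin down) of how the epoch-aligned window minimum behaves: it only advances when the latest slot crosses an epoch boundary, so notify kicks off at most one prune goroutine per epoch.
package main
import "fmt"
// windowMinSketch mirrors windowMin above with a hard-coded epoch length, for illustration only.
func windowMinSketch(latest, offset uint64) uint64 {
	const slotsPerEpoch = 32               // assumed value; the real code reads params.BeaconConfig().SlotsPerEpoch
	latest = latest - latest%slotsPerEpoch // first slot of the epoch containing latest
	if latest < offset {
		return 0
	}
	return latest - offset
}
func main() {
	const window = 96 // hypothetical 3-epoch window
	for _, slot := range []uint64{100, 110, 127, 128, 160} {
		// Slots 100..127 share epoch start 96, so the result stays 0 and prune is not re-triggered;
		// at slot 128 the epoch start moves to 128 and the window minimum advances to 32.
		fmt.Println(slot, windowMinSketch(slot, window))
	}
}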
// Prune prunes blobs in the base directory based on the retention epoch.
// It deletes blobs older than currentEpoch - (retentionEpochs+bufferEpochs).
// This is so that we keep a slight buffer and blobs are deleted after n+2 epochs.
func (p *blobPruner) prune(pruneBefore primitives.Slot) error {
p.Lock()
defer p.Unlock()
start := time.Now()
totalPruned, totalErr := 0, 0
// Customize logging/metrics behavior for the initial cache warmup when slot=0.
// We'll never see a prune request for slot 0, unless this is the initial call to warm up the cache.
if pruneBefore == 0 {
defer func() {
log.WithField("duration", time.Since(start).String()).Debug("Warmed up pruner cache")
}()
} else {
defer func() {
log.WithFields(log.Fields{
"upToEpoch": slots.ToEpoch(pruneBefore),
"duration": time.Since(start).String(),
"filesRemoved": totalPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(totalPruned))
}()
}
entries, err := listDir(p.fs, ".")
if err != nil {
return errors.Wrap(err, "unable to list root blobs directory")
}
dirs := filter(entries, filterRoot)
for _, dir := range dirs {
pruned, err := p.tryPruneDir(dir, pruneBefore)
if err != nil {
totalErr += 1
log.WithError(err).WithField("directory", dir).Error("Unable to prune directory")
}
totalPruned += pruned
}
if totalErr > 0 {
return errors.Wrapf(errPruningFailures, "pruning failed for %d root directories", totalErr)
}
return nil
}
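As a rough worked example of the retention window described above (a sketch assuming mainnet values, MinEpochsForBlobsSidecarsRequest = 4096 and SlotsPerEpoch = 32, neither of which is stated in this diff), the pruner's window is the retention period plus the two-epoch buffer, converted to slots:
package main
import "fmt"
func main() {
	const (
		retainEpochs  = 4096 // assumed mainnet MinEpochsForBlobsSidecarsRequest
		bufferEpochs  = 2    // the pruner's retentionBuffer
		slotsPerEpoch = 32   // assumed mainnet SlotsPerEpoch
	)
	// EpochStart(retainEpochs + bufferEpochs) reduces to a multiplication for fixed-length epochs.
	windowSize := (retainEpochs + bufferEpochs) * slotsPerEpoch
	fmt.Println(windowSize) // 131136 slots; blob directories older than roughly latest - windowSize are pruned
}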
func shouldRetain(slot, pruneBefore primitives.Slot) bool {
return slot >= pruneBefore
}
func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int, error) {
root := rootFromDir(dir)
slot, slotCached := p.slotMap.slot(root)
// Return early if the slot is cached and doesn't need pruning.
if slotCached && shouldRetain(slot, pruneBefore) {
return 0, nil
}
// entries will include things that aren't ssz files, like dangling .part files. We need these to
// completely clean up the directory.
entries, err := listDir(p.fs, dir)
if err != nil {
return 0, errors.Wrapf(err, "failed to list blobs in directory %s", dir)
}
// scFiles filters the dir listing down to the ssz encoded BlobSidecar files. This allows us to peek
// at the first one in the list to figure out the slot.
scFiles := filter(entries, filterSsz)
if len(scFiles) == 0 {
log.WithField("dir", dir).Warn("Pruner ignoring directory with no blob files")
return 0, nil
}
if !slotCached {
slot, err = slotFromFile(path.Join(dir, scFiles[0]), p.fs)
if err != nil {
return 0, errors.Wrapf(err, "slot could not be read from blob file %s", scFiles[0])
}
p.slotMap.ensure(root, slot)
if shouldRetain(slot, pruneBefore) {
return 0, nil
}
}
removed := 0
for _, fname := range entries {
fullName := path.Join(dir, fname)
if err := p.fs.Remove(fullName); err != nil {
return removed, errors.Wrapf(err, "unable to remove %s", fullName)
}
// Don't count other files that happen to be in the dir, like dangling .part files.
if filterSsz(fname) {
removed += 1
}
// Log a warning whenever we clean up a .part file
if filterPart(fullName) {
log.WithField("file", fullName).Warn("Deleting abandoned blob .part file")
}
}
if err := p.fs.Remove(dir); err != nil {
return removed, errors.Wrapf(err, "unable to remove blob directory %s", dir)
}
p.slotMap.evict(rootFromDir(dir))
return len(scFiles), nil
}
func rootFromDir(dir string) string {
return filepath.Base(dir) // end of the path should be the blob directory, named by hex encoding of root
}
// Read slot from marshaled BlobSidecar data in the given file. See slotFromBlob for details.
func slotFromFile(file string, fs afero.Fs) (primitives.Slot, error) {
f, err := fs.Open(file)
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
return slotFromBlob(f)
}
// slotFromBlob reads the ssz data of a file at the specified offset (8 + 131072 + 48 + 48 = 131176 bytes),
// which is derived from the sizes of the fixed-length BlobSidecar fields that precede the slot value
// within the SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
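A small sanity check of the 131176-byte offset used above (a sketch; the per-field breakdown is an editorial reading of the fixed-size fields summed in the comment, not something spelled out elsewhere in this diff):
package main
import "fmt"
func main() {
	const (
		indexSize      = 8      // uint64 BlobSidecar index
		blobSize       = 131072 // the blob payload itself
		commitmentSize = 48     // KZG commitment
		proofSize      = 48     // KZG proof
	)
	// All fields before the signed block header are fixed-size, so the header's
	// leading slot field always starts at the same byte offset.
	fmt.Println(indexSize + blobSize + commitmentSize + proofSize) // 131176
}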
func listDir(fs afero.Fs, dir string) ([]string, error) {
top, err := fs.Open(dir)
if err != nil {
return nil, errors.Wrap(err, "failed to open directory descriptor")
}
defer func() {
if err := top.Close(); err != nil {
log.WithError(err).Errorf("Could not close file %s", dir)
}
}()
// re the -1 param: "If n <= 0, Readdirnames returns all the names from the directory in a single slice"
dirs, err := top.Readdirnames(-1)
if err != nil {
return nil, errors.Wrap(err, "failed to read directory listing")
}
return dirs, nil
}
func filter(entries []string, filt func(string) bool) []string {
filtered := make([]string, 0, len(entries))
for i := range entries {
if filt(entries[i]) {
filtered = append(filtered, entries[i])
}
}
return filtered
}
func filterRoot(s string) bool {
return strings.HasPrefix(s, "0x")
}
var dotSszExt = "." + sszExt
var dotPartExt = "." + partExt
func filterSsz(s string) bool {
return filepath.Ext(s) == dotSszExt
}
func filterPart(s string) bool {
return filepath.Ext(s) == dotPartExt
}
func newSlotForRoot() *slotForRoot {
return &slotForRoot{
cache: make(map[string]primitives.Slot, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest*fieldparams.SlotsPerEpoch),
}
}
type slotForRoot struct {
sync.RWMutex
cache map[string]primitives.Slot
}
func (s *slotForRoot) ensure(key string, slot primitives.Slot) {
s.Lock()
defer s.Unlock()
s.cache[key] = slot
}
func (s *slotForRoot) slot(key string) (primitives.Slot, bool) {
s.RLock()
defer s.RUnlock()
slot, ok := s.cache[key]
return slot, ok
}
func (s *slotForRoot) evict(key string) {
s.Lock()
defer s.Unlock()
delete(s.cache, key)
}

View File

@@ -0,0 +1,327 @@
package filesystem
import (
"bytes"
"fmt"
"math"
"os"
"path"
"sort"
"testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"github.com/spf13/afero"
)
func TestTryPruneDir_CachedNotExpired(t *testing.T) {
fs := afero.NewMemMapFs()
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
slot := pr.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, fieldparams.MaxBlobsPerBlock)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
root := fmt.Sprintf("%#x", sc.BlockRoot())
// This slot is right on the edge of what would need to be pruned, so by adding it to the cache and
// skipping any other test setup, we can be certain the hot cache path never touches the filesystem.
pr.slotMap.ensure(root, sc.Slot())
pruned, err := pr.tryPruneDir(root, pr.windowSize)
require.NoError(t, err)
require.Equal(t, 0, pruned)
}
func TestTryPruneDir_CachedExpired(t *testing.T) {
t.Run("empty directory", func(t *testing.T) {
fs := afero.NewMemMapFs()
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
root := fmt.Sprintf("%#x", sc.BlockRoot())
require.NoError(t, fs.Mkdir(root, directoryPermissions)) // make empty directory
pr.slotMap.ensure(root, sc.Slot())
pruned, err := pr.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 0, pruned)
})
t.Run("blobs to delete", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// check that the root->slot is cached
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
cs, cok := bs.pruner.slotMap.slot(root)
require.Equal(t, true, cok)
require.Equal(t, slot, cs)
// ensure that we see the saved files in the filesystem
files, err := listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
pruned, err := bs.pruner.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 2, pruned)
files, err = listDir(fs, root)
require.ErrorIs(t, err, os.ErrNotExist)
require.Equal(t, 0, len(files))
})
}
func TestTryPruneDir_SlotFromFile(t *testing.T) {
t.Run("expired blobs deleted", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// check that the root->slot is cached
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
cs, ok := bs.pruner.slotMap.slot(root)
require.Equal(t, true, ok)
require.Equal(t, slot, cs)
// evict it from the cache so that we trigger the file read path
bs.pruner.slotMap.evict(root)
_, ok = bs.pruner.slotMap.slot(root)
require.Equal(t, false, ok)
// ensure that we see the saved files in the filesystem
files, err := listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
pruned, err := bs.pruner.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 2, pruned)
files, err = listDir(fs, root)
require.ErrorIs(t, err, os.ErrNotExist)
require.Equal(t, 0, len(files))
})
t.Run("not expired, intact", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
// Set slot equal to the window size, so it should be retained.
var slot primitives.Slot = bs.pruner.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// Evict slot mapping from the cache so that we trigger the file read path.
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
bs.pruner.slotMap.evict(root)
_, ok := bs.pruner.slotMap.slot(root)
require.Equal(t, false, ok)
// Ensure that we see the saved files in the filesystem.
files, err := listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
// This should use the slotFromFile code (simulating restart).
// Setting pruneBefore == slot, so the slot sits right at the boundary of the window and is still retained.
pruned, err := bs.pruner.tryPruneDir(root, slot)
require.NoError(t, err)
require.Equal(t, 0, pruned)
// Ensure files are still present.
files, err = listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
})
}
func TestSlotFromBlob(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := sidecars[0]
enc, err := sc.MarshalSSZ()
require.NoError(t, err)
slot, err := slotFromBlob(bytes.NewReader(enc))
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
func TestSlotFromFile(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
require.NoError(t, bs.Save(sc))
fname := namerForSidecar(sc)
sszPath := fname.path()
slot, err := slotFromFile(sszPath, fs)
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
type dirFiles struct {
name string
isDir bool
children []dirFiles
}
func (df dirFiles) reify(t *testing.T, fs afero.Fs, base string) {
fullPath := path.Join(base, df.name)
if df.isDir {
if df.name != "" {
require.NoError(t, fs.Mkdir(fullPath, directoryPermissions))
}
for _, c := range df.children {
c.reify(t, fs, fullPath)
}
} else {
fp, err := fs.Create(fullPath)
require.NoError(t, err)
_, err = fp.WriteString("derp")
require.NoError(t, err)
}
}
func (df dirFiles) childNames() []string {
cn := make([]string, len(df.children))
for i := range df.children {
cn[i] = df.children[i].name
}
return cn
}
func TestListDir(t *testing.T) {
fs := afero.NewMemMapFs()
// parent directory
fsLayout := dirFiles{isDir: true}
// break out each subdir for easier assertions
notABlob := dirFiles{name: "notABlob", isDir: true}
childlessBlob := dirFiles{name: "0x0987654321", isDir: true}
blobWithSsz := dirFiles{name: "0x1123581321", isDir: true,
children: []dirFiles{{name: "1.ssz"}, {name: "2.ssz"}},
}
blobWithSszAndTmp := dirFiles{name: "0x1234567890", isDir: true,
children: []dirFiles{{name: "5.ssz"}, {name: "0.part"}}}
fsLayout.children = append(fsLayout.children, notABlob)
fsLayout.children = append(fsLayout.children, childlessBlob)
fsLayout.children = append(fsLayout.children, blobWithSsz)
fsLayout.children = append(fsLayout.children, blobWithSszAndTmp)
topChildren := make([]string, len(fsLayout.children))
for i := range fsLayout.children {
topChildren[i] = fsLayout.children[i].name
}
fsLayout.reify(t, fs, "")
cases := []struct {
name string
dirPath string
expected []string
filter func(string) bool
err error
}{
{
name: "non-existent",
dirPath: "derp",
expected: []string{},
err: os.ErrNotExist,
},
{
name: "empty",
dirPath: childlessBlob.name,
expected: []string{},
},
{
name: "top",
dirPath: ".",
expected: topChildren,
},
{
name: "custom filter: only notABlob",
dirPath: ".",
expected: []string{notABlob.name},
filter: func(s string) bool {
if s == notABlob.name {
return true
}
return false
},
},
{
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: filterRoot,
},
{
name: "ssz filter",
dirPath: blobWithSsz.name,
expected: blobWithSsz.childNames(),
filter: filterSsz,
},
{
name: "ssz mixed filter",
dirPath: blobWithSszAndTmp.name,
expected: []string{"5.ssz"},
filter: filterSsz,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
result, err := listDir(fs, c.dirPath)
if c.filter != nil {
result = filter(result, c.filter)
}
if c.err != nil {
require.ErrorIs(t, err, c.err)
require.Equal(t, 0, len(result))
} else {
require.NoError(t, err)
sort.Strings(c.expected)
sort.Strings(result)
require.DeepEqual(t, c.expected, result)
}
})
}
}

View File

@@ -13,9 +13,11 @@ go_library(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/backup:go_default_library",
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
],

View File

@@ -11,9 +11,11 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
slashertypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/monitoring/backup"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
@@ -55,12 +57,9 @@ type ReadOnlyDatabase interface {
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// Blob operations.
BlobSidecarsByRoot(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
BlobSidecarsBySlot(ctx context.Context, slot primitives.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillBlockRoot(ctx context.Context) ([32]byte, error)
BackfillStatus(context.Context) (*dbval.BackfillStatus, error)
}
// NoHeadAccessDatabase defines a struct without access to chain head data.
@@ -71,6 +70,7 @@ type NoHeadAccessDatabase interface {
DeleteBlock(ctx context.Context, root [32]byte) error
SaveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) error
SaveBlocks(ctx context.Context, blocks []interfaces.ReadOnlySignedBeaconBlock) error
SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error
SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) error
// State related methods.
SaveState(ctx context.Context, state state.ReadOnlyBeaconState, blockRoot [32]byte) error
@@ -93,9 +93,6 @@ type NoHeadAccessDatabase interface {
SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, addrs []common.Address) error
SaveRegistrationsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, regs []*ethpb.ValidatorRegistrationV1) error
// Blob operations.
DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
}
@@ -112,9 +109,10 @@ type HeadAccessDatabase interface {
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// initialization method needed for origin checkpoint sync
// Support for checkpoint sync and backfill.
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
}
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.

View File

@@ -4,8 +4,8 @@ go_library(
name = "go_default_library",
srcs = [
"archived_point.go",
"backfill.go",
"backup.go",
"blob.go",
"blocks.go",
"checkpoint.go",
"deposit_contract.go",
@@ -39,7 +39,6 @@ go_library(
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -50,6 +49,7 @@ go_library(
"//io/file:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
@@ -75,8 +75,8 @@ go_test(
name = "go_default_test",
srcs = [
"archived_point_test.go",
"backfill_test.go",
"backup_test.go",
"blob_test.go",
"blocks_test.go",
"checkpoint_test.go",
"deposit_contract_test.go",
@@ -110,11 +110,11 @@ go_test(
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",
"//testing/assert:go_default_library",
"//testing/assertions:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

View File

@@ -0,0 +1,44 @@
package kv
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
)
// SaveBackfillStatus encodes the given BackfillStatus protobuf struct and writes it to a single key in the db.
// This value is used by the backfill service to keep track of the range of blocks that need to be synced. It is also used by the
// code that serves blocks or regenerates states to keep track of what range of blocks are available.
func (s *Store) SaveBackfillStatus(ctx context.Context, bf *dbval.BackfillStatus) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBackfillStatus")
defer span.End()
bfb, err := proto.Marshal(bf)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
return bucket.Put(backfillStatusKey, bfb)
})
}
// BackfillStatus retrieves the most recently saved version of the BackfillStatus protobuf struct.
// This is used to persist information about backfill status across restarts.
func (s *Store) BackfillStatus(ctx context.Context) (*dbval.BackfillStatus, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.BackfillStatus")
defer span.End()
bf := &dbval.BackfillStatus{}
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
bs := bucket.Get(backfillStatusKey)
if len(bs) == 0 {
return errors.Wrap(ErrNotFound, "BackfillStatus not found")
}
return proto.Unmarshal(bs, bf)
})
return bf, err
}

View File

@@ -0,0 +1,35 @@
package kv
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"google.golang.org/protobuf/proto"
)
func TestBackfillRoundtrip(t *testing.T) {
db := setupDB(t)
b := &dbval.BackfillStatus{}
b.LowSlot = 23
b.LowRoot = bytesutil.PadTo([]byte("low"), 32)
b.LowParentRoot = bytesutil.PadTo([]byte("parent"), 32)
m, err := proto.Marshal(b)
require.NoError(t, err)
ub := &dbval.BackfillStatus{}
require.NoError(t, proto.Unmarshal(m, ub))
require.Equal(t, b.LowSlot, ub.LowSlot)
require.DeepEqual(t, b.LowRoot, ub.LowRoot)
require.DeepEqual(t, b.LowParentRoot, ub.LowParentRoot)
ctx := context.Background()
require.NoError(t, db.SaveBackfillStatus(ctx, b))
dbub, err := db.BackfillStatus(ctx)
require.NoError(t, err)
require.Equal(t, b.LowSlot, dbub.LowSlot)
require.DeepEqual(t, b.LowRoot, dbub.LowRoot)
require.DeepEqual(t, b.LowParentRoot, dbub.LowParentRoot)
}

View File

@@ -1,320 +0,0 @@
package kv
import (
"bytes"
"context"
"sort"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
var (
errBlobSlotMismatch = errors.New("sidecar slot mismatch")
errBlobParentMismatch = errors.New("sidecar parent root mismatch")
errBlobRootMismatch = errors.New("sidecar root mismatch")
errBlobProposerMismatch = errors.New("sidecar proposer index mismatch")
errBlobSidecarLimit = errors.New("sidecar exceeds maximum number of blobs")
errEmptySidecar = errors.New("nil or empty blob sidecars")
errNewerBlobExists = errors.New("Will not overwrite newer blobs in db")
)
// A blob rotating key is represented as bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
type blobRotatingKey []byte
// BufferPrefix returns the first 8 bytes of the rotating key.
// This represents bytes(slot_to_rotating_buffer(blob.slot)) in the rotating key.
func (rk blobRotatingKey) BufferPrefix() []byte {
return rk[0:8]
}
// Slot returns the information from the key.
func (rk blobRotatingKey) Slot() types.Slot {
slotBytes := rk[8:16]
return bytesutil.BytesToSlotBigEndian(slotBytes)
}
// BlockRoot returns the block root information from the key.
func (rk blobRotatingKey) BlockRoot() []byte {
return rk[16:]
}
// SaveBlobSidecar saves the blobs for a given epoch in the sidecar bucket. When we receive a blob:
//
// 1. Convert slot using a modulo operator to [0, maxSlots] where maxSlots = MAX_EPOCHS_TO_PERSIST_BLOBS*SLOTS_PER_EPOCH
//
// 2. Compute key for blob as bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
//
// 3. Begin the save algorithm: If the incoming blob has a slot bigger than the saved slot at the spot
// in the rotating keys buffer, we overwrite all elements for that slot. Otherwise, we merge the blob with an existing one.
// Trying to replace a newer blob with an older one is an error.
func (s *Store) SaveBlobSidecar(ctx context.Context, scs []*ethpb.DeprecatedBlobSidecar) error {
if len(scs) == 0 {
return errEmptySidecar
}
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlobSidecar")
defer span.End()
first := scs[0]
newKey := s.blobSidecarKey(first)
prefix := newKey.BufferPrefix()
var prune []blobRotatingKey
return s.db.Update(func(tx *bolt.Tx) error {
var existing []byte
sc := &ethpb.DeprecatedBlobSidecars{}
bkt := tx.Bucket(blobsBucket)
c := bkt.Cursor()
for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
key := blobRotatingKey(k)
ks := key.Slot()
if ks < first.Slot {
// Mark older blobs at the same position of the ring buffer for deletion.
prune = append(prune, key)
continue
}
if ks > first.Slot {
// We shouldn't be overwriting newer blobs with older blobs. Something is wrong.
return errNewerBlobExists
}
// The slot isn't older or newer, so it must be equal.
// If the roots match, then we want to merge the new sidecars with the existing data.
if bytes.Equal(first.BlockRoot, key.BlockRoot()) {
existing = v
if err := decode(ctx, v, sc); err != nil {
return err
}
}
// If the slot is equal but the roots don't match, leave the existing key alone and allow the sidecar
// to be written to the new key with the same prefix. In this case sc will be empty, so it will just
// contain the incoming sidecars when we write it.
}
sc.Sidecars = append(sc.Sidecars, scs...)
sortSidecars(sc.Sidecars)
var err error
sc.Sidecars, err = validUniqueSidecars(sc.Sidecars)
if err != nil {
return err
}
encoded, err := encode(ctx, sc)
if err != nil {
return err
}
// don't write if the merged result is the same as before
if len(existing) == len(encoded) && bytes.Equal(existing, encoded) {
return nil
}
// Only prune if we're actually going through with the update.
for _, k := range prune {
if err := bkt.Delete(k); err != nil {
// note: attempting to delete a key that does not exist should not return an error.
log.WithError(err).Warnf("Could not delete blob key %#x.", k)
}
}
return bkt.Put(newKey, encoded)
})
}
// validUniqueSidecars ensures that all sidecars have the same slot, parent root, block root, and proposer index, and
// there are no more than MAX_BLOBS_PER_BLOCK sidecars.
func validUniqueSidecars(scs []*ethpb.DeprecatedBlobSidecar) ([]*ethpb.DeprecatedBlobSidecar, error) {
if len(scs) == 0 {
return nil, errEmptySidecar
}
// If there's only 1 sidecar, we've got nothing to compare.
if len(scs) == 1 {
return scs, nil
}
prev := scs[0]
didx := 1
for i := 1; i < len(scs); i++ {
sc := scs[i]
if sc.Slot != prev.Slot {
return nil, errors.Wrapf(errBlobSlotMismatch, "%d != %d", sc.Slot, prev.Slot)
}
if !bytes.Equal(sc.BlockParentRoot, prev.BlockParentRoot) {
return nil, errors.Wrapf(errBlobParentMismatch, "%x != %x", sc.BlockParentRoot, prev.BlockParentRoot)
}
if !bytes.Equal(sc.BlockRoot, prev.BlockRoot) {
return nil, errors.Wrapf(errBlobRootMismatch, "%x != %x", sc.BlockRoot, prev.BlockRoot)
}
if sc.ProposerIndex != prev.ProposerIndex {
return nil, errors.Wrapf(errBlobProposerMismatch, "%d != %d", sc.ProposerIndex, prev.ProposerIndex)
}
// skip duplicate
if sc.Index == prev.Index {
continue
}
if didx != i {
scs[didx] = scs[i]
}
prev = scs[i]
didx += 1
}
if didx > fieldparams.MaxBlobsPerBlock {
return nil, errors.Wrapf(errBlobSidecarLimit, "%d > %d", didx, fieldparams.MaxBlobsPerBlock)
}
return scs[0:didx], nil
}
// sortSidecars sorts the sidecars by their index.
func sortSidecars(scs []*ethpb.DeprecatedBlobSidecar) {
sort.Slice(scs, func(i, j int) bool {
return scs[i].Index < scs[j].Index
})
}
// BlobSidecarsByRoot retrieves the blobs for the given beacon block root.
// If the `indices` argument is omitted, all blobs for the root will be returned.
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries. That's the most blobs a node will keep before rotating it out.
func (s *Store) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsByRoot")
defer span.End()
var enc []byte
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
// Bucket size is bounded and bolt cursors are fast. Moreover, a thin caching layer can be added.
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.HasSuffix(k, root[:]) {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
sc := &ethpb.DeprecatedBlobSidecars{}
if err := decode(ctx, enc, sc); err != nil {
return nil, err
}
return filterForIndices(sc, indices...)
}
func filterForIndices(sc *ethpb.DeprecatedBlobSidecars, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
if len(indices) == 0 {
return sc.Sidecars, nil
}
// This loop assumes that the BlobSidecars value stores the complete set of blobs for a block
// in ascending order from eg 0..3, without gaps. This allows us to assume the indices argument
// maps 1:1 with indices in the BlobSidecars storage object.
maxIdx := uint64(len(sc.Sidecars)) - 1
sidecars := make([]*ethpb.DeprecatedBlobSidecar, len(indices))
for i, idx := range indices {
if idx > maxIdx {
return nil, errors.Wrapf(ErrNotFound, "BlobSidecars missing index: index %d", idx)
}
sidecars[i] = sc.Sidecars[idx]
}
return sidecars, nil
}
// BlobSidecarsBySlot retrieves BlobSidecars for the given slot.
// If the `indices` argument is omitted, all blobs for the slot will be returned.
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries. That's the most blobs a node will keep before rotating it out.
func (s *Store) BlobSidecarsBySlot(ctx context.Context, slot types.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsBySlot")
defer span.End()
var enc []byte
sk := s.slotKey(slot)
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
// Bucket size is bounded and bolt cursors are fast. Moreover, a thin caching layer can be added.
for k, v := c.Seek(sk); bytes.HasPrefix(k, sk); k, _ = c.Next() {
slotInKey := bytesutil.BytesToSlotBigEndian(k[8:16])
if slotInKey == slot {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
sc := &ethpb.DeprecatedBlobSidecars{}
if err := decode(ctx, enc, sc); err != nil {
return nil, err
}
return filterForIndices(sc, indices...)
}
// DeleteBlobSidecars deletes all blob sidecars stored under the given beacon block root.
func (s *Store) DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.DeleteBlobSidecar")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blobsBucket)
c := bkt.Cursor()
for k, _ := c.First(); k != nil; k, _ = c.Next() {
if bytes.HasSuffix(k, beaconBlockRoot[:]) {
if err := bkt.Delete(k); err != nil {
return err
}
}
}
return nil
})
}
// We define a blob sidecar key as: bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
// where slot_to_rotating_buffer(slot) = slot % MAX_SLOTS_TO_PERSIST_BLOBS.
func (s *Store) blobSidecarKey(blob *ethpb.DeprecatedBlobSidecar) blobRotatingKey {
key := s.slotKey(blob.Slot)
key = append(key, bytesutil.SlotToBytesBigEndian(blob.Slot)...)
key = append(key, blob.BlockRoot...)
return key
}
func (s *Store) slotKey(slot types.Slot) []byte {
return bytesutil.SlotToBytesBigEndian(slot.ModSlot(s.blobRetentionSlots()))
}
func (s *Store) blobRetentionSlots() types.Slot {
return types.Slot(s.blobRetentionEpochs.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
}
var errBlobRetentionEpochMismatch = errors.New("epochs for blobs request value in DB does not match runtime config")
func (s *Store) checkEpochsForBlobSidecarsRequestBucket(db *bolt.DB) error {
uRetentionEpochs := uint64(s.blobRetentionEpochs)
if err := db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(chainMetadataBucket)
v := b.Get(blobRetentionEpochsKey)
if v == nil {
if err := b.Put(blobRetentionEpochsKey, bytesutil.Uint64ToBytesBigEndian(uRetentionEpochs)); err != nil {
return err
}
return nil
}
e := bytesutil.BytesToUint64BigEndian(v)
if e != uRetentionEpochs {
return errors.Wrapf(errBlobRetentionEpochMismatch, "db=%d, config=%d", e, uRetentionEpochs)
}
return nil
}); err != nil {
return err
}
return nil
}

View File

@@ -1,532 +0,0 @@
package kv
import (
"context"
"crypto/rand"
"fmt"
"testing"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assertions"
"github.com/prysmaticlabs/prysm/v4/testing/require"
bolt "go.etcd.io/bbolt"
)
func equalBlobSlices(expect []*ethpb.DeprecatedBlobSidecar, got []*ethpb.DeprecatedBlobSidecar) error {
if len(expect) != len(got) {
return fmt.Errorf("mismatched lengths, expect=%d, got=%d", len(expect), len(got))
}
for i := 0; i < len(expect); i++ {
es := expect[i]
gs := got[i]
var e string
assertions.DeepEqual(assertions.SprintfAssertionLoggerFn(&e), es, gs)
if e != "" {
return errors.New(e)
}
}
return nil
}
func TestStore_BlobSidecars(t *testing.T) {
ctx := context.Background()
t.Run("empty", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 0)
require.ErrorContains(t, "nil or empty blob sidecars", db.SaveBlobSidecar(ctx, scs))
})
t.Run("empty by root", func(t *testing.T) {
db := setupDB(t)
got, err := db.BlobSidecarsByRoot(ctx, [32]byte{})
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("empty by slot", func(t *testing.T) {
db := setupDB(t)
got, err := db.BlobSidecarsBySlot(ctx, 1)
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("save and retrieve by root (one)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 1)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, 1, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by root (max), per batch", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by root, max and individually", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve valid subset by root", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot), 0, 3)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(expect, got))
require.Equal(t, uint64(0), got[0].Index)
require.Equal(t, uint64(3), got[1].Index)
})
t.Run("error for invalid index when retrieving by root", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot), uint64(len(scs)))
require.ErrorIs(t, err, ErrNotFound)
require.Equal(t, 0, len(got))
})
t.Run("save and retrieve by slot (one)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 1)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, 1, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by slot (max)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by slot, max and individually", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve valid subset by slot", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot, 0, 3)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(expect, got))
require.Equal(t, uint64(0), got[0].Index)
require.Equal(t, uint64(3), got[1].Index)
})
t.Run("error for invalid index when retrieving by slot", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot, uint64(len(scs)))
require.ErrorIs(t, err, ErrNotFound)
require.Equal(t, 0, len(got))
})
t.Run("delete works", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
require.NoError(t, db.DeleteBlobSidecars(ctx, bytesutil.ToBytes32(scs[0].BlockRoot)))
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("saving blob different times", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for i := 0; i < fieldparams.MaxBlobsPerBlock; i++ {
scs[i].Slot = primitives.Slot(i)
scs[i].BlockRoot = bytesutil.PadTo([]byte{byte(i)}, 32)
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{scs[i]}))
br := bytesutil.ToBytes32(scs[i].BlockRoot)
saved, err := db.BlobSidecarsByRoot(ctx, br)
require.NoError(t, err)
require.NoError(t, equalBlobSlices([]*ethpb.DeprecatedBlobSidecar{scs[i]}, saved))
}
})
t.Run("saving a new blob for rotation (batch)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
oldBlockRoot := scs[0].BlockRoot
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(oldBlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
newScs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range newScs {
sc.Slot = sc.Slot + newRetentionSlot
}
require.NoError(t, db.SaveBlobSidecar(ctx, newScs))
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(newScs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(newScs, got))
})
t.Run("save multiple blobs after new rotation (individually)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save multiple blobs after new rotation (batch then individually)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
oldBlockRoot := scs[0].BlockRoot
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(oldBlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save multiple blobs after new rotation (individually then batch)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save equivocating blobs", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
eScs := generateEquivocatingBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
for i, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{eScs[i]}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(eScs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(eScs, got))
})
}
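The rotation subtests above push every sidecar forward by newRetentionSlot so the original entries fall out of the circular buffer. A minimal sketch of that arithmetic, assuming the mainnet defaults for the two parameters (the constants below are illustrative, not read from params.BeaconConfig()):

package main

import "fmt"

func main() {
	// Assumed mainnet defaults, stated here only for illustration.
	const minEpochsForBlobsSidecarsRequest = 4096
	const slotsPerEpoch = 32
	retentionSlots := minEpochsForBlobsSidecarsRequest * slotsPerEpoch
	fmt.Println(retentionSlots) // 131072 slots; sidecars that far behind the newest write rotate out
}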
func generateBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateBlobSidecar(t, i)
}
return blobSidecars
}
func generateBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
kzgCommitment := make([]byte, 48)
_, err = rand.Read(kzgCommitment)
require.NoError(t, err)
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'a'}, 32),
Index: index,
Slot: 100,
BlockParentRoot: bytesutil.PadTo([]byte{'b'}, 32),
ProposerIndex: 101,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
}
}
func generateEquivocatingBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateEquivocatingBlobSidecar(t, i)
}
return blobSidecars
}
func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
kzgCommitment := make([]byte, 48)
_, err = rand.Read(kzgCommitment)
require.NoError(t, err)
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'c'}, 32),
Index: index,
Slot: 100,
BlockParentRoot: bytesutil.PadTo([]byte{'b'}, 32),
ProposerIndex: 102,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
}
}
func Test_validUniqueSidecars_validation(t *testing.T) {
tests := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
err error
}{
{name: "empty", scs: []*ethpb.DeprecatedBlobSidecar{}, err: errEmptySidecar},
{name: "too many sidecars", scs: generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock+1), err: errBlobSidecarLimit},
{name: "invalid slot", scs: []*ethpb.DeprecatedBlobSidecar{{Slot: 1}, {Slot: 2}}, err: errBlobSlotMismatch},
{name: "invalid proposer index", scs: []*ethpb.DeprecatedBlobSidecar{{ProposerIndex: 1}, {ProposerIndex: 2}}, err: errBlobProposerMismatch},
{name: "invalid root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockRoot: []byte{1}}, {BlockRoot: []byte{2}}}, err: errBlobRootMismatch},
{name: "invalid parent root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockParentRoot: []byte{1}}, {BlockParentRoot: []byte{2}}}, err: errBlobParentMismatch},
{name: "happy path", scs: []*ethpb.DeprecatedBlobSidecar{{Index: 0}, {Index: 1}}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := validUniqueSidecars(tt.scs)
if tt.err != nil {
require.ErrorIs(t, err, tt.err)
} else {
require.NoError(t, err)
}
})
}
}
func Test_validUniqueSidecars_dedup(t *testing.T) {
cases := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
expected []*ethpb.DeprecatedBlobSidecar
err error
}{
{
name: "duplicate sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
},
{
name: "single sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
},
{
name: "multiple duplicates",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "ok number after de-dupe, > 6 before",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "max unique, no dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
},
{
name: "too many unique",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
{
name: "too many unique with dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}, {Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
u, err := validUniqueSidecars(c.scs)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
}
require.Equal(t, len(c.expected), len(u))
})
}
}
func TestStore_sortSidecars(t *testing.T) {
scs := []*ethpb.DeprecatedBlobSidecar{
{Index: 6},
{Index: 4},
{Index: 2},
{Index: 1},
{Index: 3},
{Index: 5},
{},
}
sortSidecars(scs)
for i := 0; i < len(scs)-1; i++ {
require.Equal(t, uint64(i), scs[i].Index)
}
}
func BenchmarkStore_BlobSidecarsByRoot(b *testing.B) {
s := setupDB(b)
ctx := context.Background()
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'a'}, 32), Slot: 0},
}))
err := s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blobsBucket)
for i := 1; i < 131071; i++ {
r := make([]byte, 32)
_, err := rand.Read(r)
require.NoError(b, err)
scs := []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: r, Slot: primitives.Slot(i)},
}
k := s.blobSidecarKey(scs[0])
encodedBlobSidecar, err := encode(ctx, &ethpb.DeprecatedBlobSidecars{Sidecars: scs})
require.NoError(b, err)
require.NoError(b, bkt.Put(k, encodedBlobSidecar))
}
return nil
})
require.NoError(b, err)
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'b'}, 32), Slot: 131071},
}))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := s.BlobSidecarsByRoot(ctx, [32]byte{'b'})
require.NoError(b, err)
}
}
func Test_checkEpochsForBlobSidecarsRequestBucket(t *testing.T) {
s := setupDB(t)
require.NoError(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db)) // First write
require.NoError(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db)) // First check
s.blobRetentionEpochs += 1
require.ErrorIs(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db), errBlobRetentionEpochMismatch)
}
func TestBlobRotatingKey(t *testing.T) {
s := setupDB(t)
k := s.blobSidecarKey(&ethpb.DeprecatedBlobSidecar{
Slot: 1,
BlockRoot: []byte{2},
})
require.Equal(t, types.Slot(1), k.Slot())
require.DeepEqual(t, []byte{2}, k.BlockRoot())
require.DeepEqual(t, s.slotKey(types.Slot(1)), k.BufferPrefix())
}


@@ -70,25 +70,6 @@ func (s *Store) OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
return root, err
}
// BackfillBlockRoot keeps track of the highest block available before the OriginCheckpointBlockRoot
func (s *Store) BackfillBlockRoot(ctx context.Context) ([32]byte, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.BackfillBlockRoot")
defer span.End()
var root [32]byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
rootSlice := bkt.Get(backfillBlockRootKey)
if len(rootSlice) == 0 {
return ErrNotFoundBackfillBlockRoot
}
root = bytesutil.ToBytes32(rootSlice)
return nil
})
return root, err
}
// HeadBlock returns the latest canonical block in the Ethereum Beacon Chain.
func (s *Store) HeadBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HeadBlock")
@@ -292,55 +273,95 @@ func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySigned
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlocks")
defer span.End()
// Performing marshaling, hashing, and indexing outside the bolt transaction
// to minimize the time we hold the DB lock.
blockRoots := make([][]byte, len(blks))
encodedBlocks := make([][]byte, len(blks))
indicesForBlocks := make([]map[string][]byte, len(blks))
for i, blk := range blks {
blockRoot, err := blk.Block().HashTreeRoot()
robs := make([]blocks.ROBlock, len(blks))
for i := range blks {
rb, err := blocks.NewROBlock(blks[i])
if err != nil {
return err
return errors.Wrapf(err, "failed to make an ROBlock for a block in SaveBlocks")
}
enc, err := s.marshalBlock(ctx, blk)
if err != nil {
return err
}
blockRoots[i] = blockRoot[:]
encodedBlocks[i] = enc
indicesByBucket := createBlockIndicesFromBlock(ctx, blk.Block())
indicesForBlocks[i] = indicesByBucket
robs[i] = rb
}
saveBlinded, err := s.shouldSaveBlinded(ctx)
return s.SaveROBlocks(ctx, robs, true)
}
type blockBatchEntry struct {
root []byte
block interfaces.ReadOnlySignedBeaconBlock
enc []byte
updated bool
indices map[string][]byte
}
func prepareBlockBatch(blks []blocks.ROBlock, shouldBlind bool) ([]blockBatchEntry, error) {
batch := make([]blockBatchEntry, len(blks))
for i := range blks {
batch[i].root, batch[i].block = blks[i].RootSlice(), blks[i].ReadOnlySignedBeaconBlock
batch[i].indices = blockIndices(batch[i].block.Block().Slot(), batch[i].block.Block().ParentRoot())
if shouldBlind {
blinded, err := batch[i].block.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
return nil, errors.Wrapf(err, "could not convert block to blinded format for root %#x", batch[i].root)
}
// Pre-deneb blocks give ErrUnsupportedVersion; use the full block already in the batch entry.
} else {
batch[i].block = blinded
}
}
enc, err := encodeBlock(batch[i].block)
if err != nil {
return nil, errors.Wrapf(err, "failed to encode block for root %#x", batch[i].root)
}
batch[i].enc = enc
}
return batch, nil
}
func (s *Store) SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error {
shouldBlind, err := s.shouldSaveBlinded(ctx)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
// Precompute expensive values outside the db transaction.
batch, err := prepareBlockBatch(blks, shouldBlind)
if err != nil {
return errors.Wrap(err, "failed to encode all blocks in batch for saving to the db")
}
err = s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for i, blk := range blks {
if existingBlock := bkt.Get(blockRoots[i]); existingBlock != nil {
for i := range batch {
if exists := bkt.Get(batch[i].root); exists != nil {
continue
}
if err := updateValueForIndices(ctx, indicesForBlocks[i], blockRoots[i], tx); err != nil {
return errors.Wrap(err, "could not update DB indices")
if err := bkt.Put(batch[i].root, batch[i].enc); err != nil {
return errors.Wrapf(err, "could write block to db with root %#x", batch[i].root)
}
if saveBlinded {
blindedBlock, err := blk.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
return err
}
} else {
blk = blindedBlock
}
}
s.blockCache.Set(string(blockRoots[i]), blk, int64(len(encodedBlocks[i])))
if err := bkt.Put(blockRoots[i], encodedBlocks[i]); err != nil {
return err
if err := updateValueForIndices(ctx, batch[i].indices, batch[i].root, tx); err != nil {
return errors.Wrapf(err, "could not update DB indices for root %#x", batch[i].root)
}
batch[i].updated = true
}
return nil
})
if !cache {
return err
}
for i := range batch {
if batch[i].updated {
s.blockCache.Set(string(batch[i].root), batch[i].block, int64(len(batch[i].enc)))
}
}
return err
}
// blockIndices takes a block's slot and parent root and returns
// a map of bolt DB index buckets corresponding to each particular key for indices for
// data, such as (block slot indices bucket -> big-endian slot bytes).
func blockIndices(slot primitives.Slot, parentRoot [32]byte) map[string][]byte {
return map[string][]byte{
string(blockSlotIndicesBucket): bytesutil.SlotToBytesBigEndian(slot),
string(blockParentRootIndicesBucket): parentRoot[:],
}
}
// SaveHeadBlockRoot to the db.
@@ -417,17 +438,6 @@ func (s *Store) SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32
})
}
// SaveBackfillBlockRoot is used to keep track of the most recently backfilled block root when
// the node was initialized via checkpoint sync.
func (s *Store) SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBackfillBlockRoot")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
return bucket.Put(backfillBlockRootKey, blockRoot[:])
})
}
// HighestRootsBelowSlot returns roots from the database slot index from the highest slot below the input slot.
// The slot value at the beginning of the return list is the slot where the roots were found. This is helpful so that
// calling code can make decisions based on the slot without resolving the blocks to discover their slot (for instance
@@ -726,31 +736,6 @@ func blockRootsBySlot(ctx context.Context, tx *bolt.Tx, slot primitives.Slot) ([
return [][32]byte{}, nil
}
// createBlockIndicesFromBlock takes in a beacon block and returns
// a map of bolt DB index buckets corresponding to each particular key for indices for
// data, such as (shard indices bucket -> shard 5).
func createBlockIndicesFromBlock(ctx context.Context, block interfaces.ReadOnlyBeaconBlock) map[string][]byte {
_, span := trace.StartSpan(ctx, "BeaconDB.createBlockIndicesFromBlock")
defer span.End()
indicesByBucket := make(map[string][]byte)
// Every index has a unique bucket for fast, binary-search
// range scans for filtering across keys.
buckets := [][]byte{
blockSlotIndicesBucket,
}
indices := [][]byte{
bytesutil.SlotToBytesBigEndian(block.Slot()),
}
buckets = append(buckets, blockParentRootIndicesBucket)
parentRoot := block.ParentRoot()
indices = append(indices, parentRoot[:])
for i := 0; i < len(buckets); i++ {
indicesByBucket[string(buckets[i])] = indices[i]
}
return indicesByBucket
}
// createBlockFiltersFromIndices takes in filter criteria and returns
// a map with a single key-value pair: "block-parent-root-indices" -> parentRoot (array of bytes).
//
@@ -838,74 +823,44 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
return blocks.NewSignedBeaconBlock(rawBlock)
}
func (s *Store) marshalBlock(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
shouldBlind, err := s.shouldSaveBlinded(ctx)
func encodeBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
key, err := keyForBlock(blk)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "could not determine version encoding key for block")
}
if shouldBlind {
return marshalBlockBlinded(ctx, blk)
enc, err := blk.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal block")
}
return marshalBlockFull(ctx, blk)
dbfmt := make([]byte, len(key)+len(enc))
if len(key) > 0 {
copy(dbfmt, key)
}
copy(dbfmt[len(key):], enc)
return snappy.Encode(nil, dbfmt), nil
}
// Encodes a full beacon block to the DB with its associated key.
func marshalBlockFull(
_ context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
var encodedBlock []byte
var err error
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, err
}
func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebKey, encodedBlock...)), nil
case version.Capella:
return snappy.Encode(nil, append(capellaKey, encodedBlock...)), nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixKey, encodedBlock...)), nil
case version.Altair:
return snappy.Encode(nil, append(altairKey, encodedBlock...)), nil
case version.Phase0:
return snappy.Encode(nil, encodedBlock), nil
default:
return nil, errors.New("unknown block version")
}
}
// Encodes a blinded beacon block with its associated key.
// If the block does not support blinding, we then encode it as a full
// block with its associated key by calling marshalBlockFull.
func marshalBlockBlinded(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
blindedBlock, err := blk.ToBlinded()
if err != nil {
switch {
case errors.Is(err, blocks.ErrUnsupportedVersion):
return marshalBlockFull(ctx, blk)
default:
return nil, errors.Wrap(err, "could not convert block to blinded format")
if blk.IsBlinded() {
return denebBlindKey, nil
}
}
encodedBlock, err := blindedBlock.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal blinded block")
}
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebBlindKey, encodedBlock...)), nil
return denebKey, nil
case version.Capella:
return snappy.Encode(nil, append(capellaBlindKey, encodedBlock...)), nil
if blk.IsBlinded() {
return capellaBlindKey, nil
}
return capellaKey, nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixBlindKey, encodedBlock...)), nil
if blk.IsBlinded() {
return bellatrixBlindKey, nil
}
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
case version.Phase0:
return nil, nil
default:
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}
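encodeBlock stores each block as snappy(forkKey || SSZ), with no key prefix for phase 0 blocks. A minimal decode-side sketch of that format, using placeholder key values rather than the package's real denebKey/capellaKey/... prefixes:

package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

// Placeholder fork keys for illustration only. With these values the blinded key
// must be checked before its non-blinded counterpart because the check is prefix-based.
var forkKeys = [][]byte{[]byte("deneb-blind"), []byte("deneb"), []byte("capella")}

func splitForkKey(enc []byte) (key, sszBytes []byte, err error) {
	dec, err := snappy.Decode(nil, enc)
	if err != nil {
		return nil, nil, err
	}
	for _, k := range forkKeys {
		if bytes.HasPrefix(dec, k) {
			return k, dec[len(k):], nil
		}
	}
	// Phase 0 blocks are stored with no key prefix.
	return nil, dec, nil
}

func main() {
	enc := snappy.Encode(nil, append([]byte("deneb"), 0xAA, 0xBB))
	key, body, err := splitForkKey(enc)
	fmt.Println(string(key), body, err)
}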


@@ -126,23 +126,6 @@ var blockTests = []struct {
},
}
func TestStore_SaveBackfillBlockRoot(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
_, err := db.BackfillBlockRoot(ctx)
require.ErrorIs(t, err, ErrNotFoundBackfillBlockRoot)
var expected [32]byte
copy(expected[:], []byte{0x23})
err = db.SaveBackfillBlockRoot(ctx, expected)
require.NoError(t, err)
actual, err := db.BackfillBlockRoot(ctx)
require.NoError(t, err)
require.Equal(t, expected, actual)
}
func TestStore_SaveBlock_NoDuplicates(t *testing.T) {
BlockCacheSize = 1
slot := primitives.Slot(20)


@@ -21,3 +21,8 @@ var ErrNotFoundBackfillBlockRoot = errors.Wrap(ErrNotFound, "BackfillBlockRoot")
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")
var errEmptyBlockSlice = errors.New("[]blocks.ROBlock is empty")
var errIncorrectBlockParent = errors.New("unexpected missing or forked blocks in a []ROBlock")
var errFinalizedChildNotFound = errors.New("unable to find finalized root descending from backfill batch")
var errNotConnectedToFinalized = errors.New("unable to finalize backfill blocks, finalized parent_root does not match")


@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -163,6 +164,83 @@ func (s *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, chec
return bkt.Put(previousFinalizedCheckpointKey, enc)
}
// BackfillFinalizedIndex updates the finalized index for a contiguous chain of blocks that are the ancestors of the
// given finalized child root. This is needed to update the finalized index during backfill, because the usual
// updateFinalizedBlockRoots has assumptions that are incompatible with backfill processing.
func (s *Store) BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BackfillFinalizedIndex")
defer span.End()
if len(blocks) == 0 {
return errEmptyBlockSlice
}
fbrs := make([]*ethpb.FinalizedBlockRootContainer, len(blocks))
encs := make([][]byte, len(blocks))
for i := range blocks {
pr := blocks[i].Block().ParentRoot()
fbrs[i] = &ethpb.FinalizedBlockRootContainer{
ParentRoot: pr[:],
// ChildRoot: will be filled in on the next iteration when we look at the descendant block.
}
if i == 0 {
continue
}
if blocks[i-1].Root() != blocks[i].Block().ParentRoot() {
return errors.Wrapf(errIncorrectBlockParent, "previous root=%#x, slot=%d; child parent_root=%#x, root=%#x, slot=%d",
blocks[i-1].Root(), blocks[i-1].Block().Slot(), blocks[i].Block().ParentRoot(), blocks[i].Root(), blocks[i].Block().Slot())
}
// We know the previous index is the parent of this one thanks to the assertion above,
// so we can set the ChildRoot of the previous value to the root of the current value.
fbrs[i-1].ChildRoot = blocks[i].RootSlice()
// Now that the value for fbrs[i-1] is complete, perform encoding here to minimize time in Update,
// which holds the global db lock.
penc, err := encode(ctx, fbrs[i-1])
if err != nil {
tracing.AnnotateError(span, err)
return err
}
encs[i-1] = penc
// The final element is the parent of finalizedChildRoot. This is checked inside the db transaction using
// the parent_root value stored in the index data for finalizedChildRoot.
if i == len(blocks)-1 {
fbrs[i].ChildRoot = finalizedChildRoot[:]
// Final element is complete, so it is pre-encoded like the others.
enc, err := encode(ctx, fbrs[i])
if err != nil {
tracing.AnnotateError(span, err)
return err
}
encs[i] = enc
}
}
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
child := bkt.Get(finalizedChildRoot[:])
if len(child) == 0 {
return errFinalizedChildNotFound
}
fcc := &ethpb.FinalizedBlockRootContainer{}
if err := decode(ctx, child, fcc); err != nil {
return errors.Wrapf(err, "unable to decode finalized block root container for root=%#x", finalizedChildRoot)
}
// Ensure that the existing finalized chain descends from the new segment.
if !bytes.Equal(fcc.ParentRoot, blocks[len(blocks)-1].RootSlice()) {
return errors.Wrapf(errNotConnectedToFinalized, "finalized block root container for root=%#x has parent_root=%#x, not %#x",
finalizedChildRoot, fcc.ParentRoot, blocks[len(blocks)-1].RootSlice())
}
// Update the finalized index with entries for each block in the new segment.
for i := range fbrs {
if err := bkt.Put(blocks[i].RootSlice(), encs[i]); err != nil {
return err
}
}
return nil
})
}
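The containers written above form a doubly linked index: each entry records its block's parent root and the root of its finalized child. A sketch with hypothetical roots for a three-block batch A <- B <- C whose existing finalized descendant is D:

package main

import "fmt"

// finalizedEntry mirrors the two FinalizedBlockRootContainer fields that matter here.
type finalizedEntry struct {
	ParentRoot, ChildRoot string
}

func main() {
	// Index entries BackfillFinalizedIndex would write for the batch A <- B <- C.
	index := map[string]finalizedEntry{
		"A": {ParentRoot: "A.parent", ChildRoot: "B"},
		"B": {ParentRoot: "A", ChildRoot: "C"},
		"C": {ParentRoot: "B", ChildRoot: "D"}, // the last entry points at finalizedChildRoot
	}
	fmt.Println(index)
}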
// IsFinalizedBlock returns true if the block root is present in the finalized block root index.
// A beacon block root contained exists in this index if it is considered finalized and canonical.
// Note: beacon blocks from the latest finalized epoch return true, whether or not they are


@@ -1,6 +1,7 @@
package kv
import (
"bytes"
"context"
"testing"
@@ -14,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
bolt "go.etcd.io/bbolt"
)
var genesisBlockRoot = bytesutil.ToBytes32([]byte{'G', 'E', 'N', 'E', 'S', 'I', 'S'})
@@ -234,3 +236,64 @@ func makeBlocksAltair(t *testing.T, startIdx, num uint64, previousRoot [32]byte)
}
return ifaceBlocks
}
func TestStore_BackfillFinalizedIndex(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
require.ErrorIs(t, db.BackfillFinalizedIndex(ctx, []consensusblocks.ROBlock{}, [32]byte{}), errEmptyBlockSlice)
blks, err := consensusblocks.NewROBlockSlice(makeBlocks(t, 0, 66, [32]byte{}))
require.NoError(t, err)
// set up existing finalized block
ebpr := blks[64].Block().ParentRoot()
ebr := blks[64].Root()
chldr := blks[65].Root()
ebf := &ethpb.FinalizedBlockRootContainer{
ParentRoot: ebpr[:],
ChildRoot: chldr[:],
}
disjoint := []consensusblocks.ROBlock{
blks[0],
blks[2],
}
enc, err := encode(ctx, ebf)
require.NoError(t, err)
err = db.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
return bkt.Put(ebr[:], enc)
})
// reslice to remove the existing blocks
blks = blks[0:64]
// check the other error conditions with a descendant root that really doesn't exist
require.NoError(t, err)
require.ErrorIs(t, db.BackfillFinalizedIndex(ctx, disjoint, [32]byte{}), errIncorrectBlockParent)
require.NoError(t, err)
require.ErrorIs(t, errFinalizedChildNotFound, db.BackfillFinalizedIndex(ctx, blks, [32]byte{}))
// use the real root so that it succeeds
require.NoError(t, db.BackfillFinalizedIndex(ctx, blks, ebr))
for i := range blks {
require.NoError(t, db.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
encfr := bkt.Get(blks[i].RootSlice())
require.Equal(t, true, len(encfr) > 0)
fr := &ethpb.FinalizedBlockRootContainer{}
require.NoError(t, decode(ctx, encfr, fr))
require.Equal(t, 32, len(fr.ParentRoot))
require.Equal(t, 32, len(fr.ChildRoot))
pr := blks[i].Block().ParentRoot()
require.Equal(t, true, bytes.Equal(fr.ParentRoot, pr[:]))
if i > 0 {
require.Equal(t, true, bytes.Equal(fr.ParentRoot, blks[i-1].RootSlice()))
}
if i < len(blks)-1 {
require.DeepEqual(t, fr.ChildRoot, blks[i+1].RootSlice())
}
if i == len(blks)-1 {
require.DeepEqual(t, fr.ChildRoot, ebr[:])
}
return nil
}))
}
}


@@ -18,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/io/file"
bolt "go.etcd.io/bbolt"
)
@@ -91,7 +90,6 @@ type Store struct {
validatorEntryCache *ristretto.Cache
stateSummaryCache *stateSummaryCache
ctx context.Context
blobRetentionEpochs primitives.Epoch
}
// StoreDatafilePath is the canonical construction of a full
@@ -138,13 +136,6 @@ var Buckets = [][]byte{
// KVStoreOption is a functional option that modifies a kv.Store.
type KVStoreOption func(*Store)
// WithBlobRetentionEpochs sets the variable configuring the blob retention window.
func WithBlobRetentionEpochs(e primitives.Epoch) KVStoreOption {
return func(s *Store) {
s.blobRetentionEpochs = e
}
}
// NewKVStore initializes a new boltDB key-value store at the directory
// path specified, creates the kv-buckets based on the schema, and stores
// an open connection db object as a property of the Store struct.
@@ -217,14 +208,6 @@ func NewKVStore(ctx context.Context, dirPath string, opts ...KVStoreOption) (*St
return nil, err
}
if err := kv.checkEpochsForBlobSidecarsRequestBucket(boltDB); err != nil {
return nil, errors.Wrap(err, "failed to check epochs for blob sidecars request bucket")
}
// set a default so that tests don't break
if kv.blobRetentionEpochs == 0 {
kv.blobRetentionEpochs = params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
}
return kv, nil
}


@@ -1,12 +1,12 @@
package kv
import (
"bytes"
"context"
"fmt"
"testing"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -16,8 +16,7 @@ import (
// setupDB instantiates and returns a Store instance.
func setupDB(t testing.TB) *Store {
opt := WithBlobRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
db, err := NewKVStore(context.Background(), t.TempDir(), opt)
db, err := NewKVStore(context.Background(), t.TempDir())
require.NoError(t, err, "Failed to instantiate DB")
t.Cleanup(func() {
require.NoError(t, db.Close(), "Failed to close database")
@@ -118,13 +117,25 @@ func Test_setupBlockStorageType(t *testing.T) {
root, err = wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
wrappedBlinded, err := wrappedBlock.ToBlinded()
require.NoError(t, err)
require.DeepEqual(t, wrappedBlinded, retrievedBlk)
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
// Compare retrieved value by root, and marshaled bytes.
mSrc, err := wrappedBlinded.MarshalSSZ()
require.NoError(t, err)
mTgt, err := retrievedBlk.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(mSrc, mTgt))
rSrc, err := wrappedBlinded.Block().HashTreeRoot()
require.NoError(t, err)
rTgt, err := retrievedBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, rSrc, rTgt)
})
t.Run("existing database with full blocks type should continue storing full blocks", func(t *testing.T) {
store := setupDB(t)
@@ -157,10 +168,21 @@ func Test_setupBlockStorageType(t *testing.T) {
root, err = wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, false, retrievedBlk.IsBlinded())
require.DeepEqual(t, wrappedBlock, retrievedBlk)
// Compare retrieved value by root, and marshaled bytes.
mSrc, err := wrappedBlock.MarshalSSZ()
require.NoError(t, err)
mTgt, err := retrievedBlk.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(mSrc, mTgt))
rTgt, err := retrievedBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, root, rTgt)
})
t.Run("existing database with blinded blocks type should error if user enables full blocks feature flag", func(t *testing.T) {
store := setupDB(t)


@@ -47,10 +47,6 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastValidatedCheckpointKey = []byte("last-validated-checkpoint")
// blobRetentionEpochsKey determines the size of the blob circular buffer and how the keys in that buffer are
// determined. If this value changes, the existing data is invalidated, so storing it in the db
// allows us to assert at runtime that the db state is still consistent with the runtime state.
blobRetentionEpochsKey = []byte("blob-retention-epochs")
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.
@@ -65,8 +61,8 @@ var (
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")
// block root tracking the progress of backfill, or pointing at genesis if backfill has not been initiated
backfillBlockRootKey = []byte("backfill-block-root")
// tracking data about an ongoing backfill
backfillStatusKey = []byte("backfill-status")
// Deprecated: This index key was migrated in PR 6461. Do not use, except for migrations.
lastArchivedIndexKey = []byte("last-archived")


@@ -893,6 +893,7 @@ func createStateIndicesFromStateSlot(ctx context.Context, slot primitives.Slot)
//
// 3.) state with current finalized root
// 4.) unfinalized States
// 5.) not origin root
func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB. CleanUpDirtyStates")
defer span.End()
@@ -907,6 +908,11 @@ func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint pr
}
deletedRoots := make([][32]byte, 0)
oRoot, err := s.OriginCheckpointBlockRoot(ctx)
if err != nil {
return err
}
err = s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(stateSlotIndicesBucket)
return bkt.ForEach(func(k, v []byte) error {
@@ -914,15 +920,31 @@ func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint pr
return ctx.Err()
}
finalizedChkpt := bytesutil.ToBytes32(f.Root) == bytesutil.ToBytes32(v)
root := bytesutil.ToBytes32(v)
slot := bytesutil.BytesToSlotBigEndian(k)
mod := slot % slotsPerArchivedPoint
nonFinalized := slot > finalizedSlot
// The following conditions cover 1, 2, 3 and 4 above.
if mod != 0 && mod <= slotsPerArchivedPoint-slotsPerArchivedPoint/3 && !finalizedChkpt && !nonFinalized {
deletedRoots = append(deletedRoots, bytesutil.ToBytes32(v))
if mod == 0 {
return nil
}
if mod > slotsPerArchivedPoint-slotsPerArchivedPoint/3 {
return nil
}
if bytesutil.ToBytes32(f.Root) == root {
return nil
}
if slot > finalizedSlot {
return nil
}
if oRoot == root {
return nil
}
deletedRoots = append(deletedRoots, root)
return nil
})
})
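The rewritten loop keeps a state when any one of the five conditions in the doc comment holds. A worked check of the two archived-point rules, assuming slotsPerArchivedPoint = 128 as one of the tests below uses (128 - 128/3 = 86 in integer arithmetic):

package main

import "fmt"

// keptByArchivedPointRules reproduces only the first two skip conditions above;
// the finalized-root, non-finalized, and origin-root checks are omitted.
func keptByArchivedPointRules(slot, slotsPerArchivedPoint uint64) bool {
	mod := slot % slotsPerArchivedPoint
	return mod == 0 || mod > slotsPerArchivedPoint-slotsPerArchivedPoint/3
}

func main() {
	const spap = 128 // slotsPerArchivedPoint
	for _, slot := range []uint64{0, 40, 90, 128} {
		// 0 and 128 are archived points, 90 is within the trailing third, 40 is a deletion candidate.
		fmt.Printf("slot %d kept: %v\n", slot, keptByArchivedPointRules(slot, spap))
	}
}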


@@ -675,6 +675,7 @@ func TestStore_CleanUpDirtyStates_AboveThreshold(t *testing.T) {
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), genesisRoot))
require.NoError(t, db.SaveState(context.Background(), genesisState, genesisRoot))
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), [32]byte{'a'}))
bRoots := make([][32]byte, 0)
slotsPerArchivedPoint := primitives.Slot(128)
@@ -720,6 +721,7 @@ func TestStore_CleanUpDirtyStates_Finalized(t *testing.T) {
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), genesisRoot))
require.NoError(t, db.SaveState(context.Background(), genesisState, genesisRoot))
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), [32]byte{'a'}))
for i := primitives.Slot(1); i <= params.BeaconConfig().SlotsPerEpoch; i++ {
b := util.NewBeaconBlock()
@@ -741,6 +743,35 @@ func TestStore_CleanUpDirtyStates_Finalized(t *testing.T) {
require.Equal(t, true, db.HasState(context.Background(), genesisRoot))
}
func TestStore_CleanUpDirtyStates_OriginRoot(t *testing.T) {
db := setupDB(t)
genesisState, err := util.NewBeaconState()
require.NoError(t, err)
r := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), r))
require.NoError(t, db.SaveState(context.Background(), genesisState, r))
for i := primitives.Slot(1); i <= params.BeaconConfig().SlotsPerEpoch; i++ {
b := util.NewBeaconBlock()
b.Block.Slot = i
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(context.Background(), wsb))
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(i))
require.NoError(t, db.SaveState(context.Background(), st, r))
}
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), r))
require.NoError(t, db.CleanUpDirtyStates(context.Background(), params.BeaconConfig().SlotsPerEpoch))
require.Equal(t, true, db.HasState(context.Background(), r))
}
func TestStore_CleanUpDirtyStates_DontDeleteNonFinalized(t *testing.T) {
db := setupDB(t)
@@ -749,6 +780,7 @@ func TestStore_CleanUpDirtyStates_DontDeleteNonFinalized(t *testing.T) {
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), genesisRoot))
require.NoError(t, db.SaveState(context.Background(), genesisState, genesisRoot))
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), [32]byte{'a'}))
var unfinalizedRoots [][32]byte
for i := primitives.Slot(1); i <= params.BeaconConfig().SlotsPerEpoch; i++ {


@@ -8,6 +8,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
)
@@ -17,18 +18,6 @@ import (
// syncing, using the provided values as their point of origin. This is an alternative
// to syncing from genesis, and should only be run on an empty database.
func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error {
genesisRoot, err := s.GenesisBlockRoot(ctx)
if err != nil {
if errors.Is(err, ErrNotFoundGenesisBlockRoot) {
return errors.Wrap(err, "genesis block root not found: genesis must be provided for checkpoint sync")
}
return errors.Wrap(err, "genesis block root query error: checkpoint sync must verify genesis to proceed")
}
err = s.SaveBackfillBlockRoot(ctx, genesisRoot)
if err != nil {
return errors.Wrap(err, "unable to save genesis root as initial backfill starting point for checkpoint sync")
}
cf, err := detect.FromState(serState)
if err != nil {
return errors.Wrap(err, "could not sniff config+fork for origin state bytes")
@@ -50,11 +39,24 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
}
blk := wblk.Block()
// save block
blockRoot, err := blk.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute HashTreeRoot of checkpoint block")
}
pr := blk.ParentRoot()
bf := &dbval.BackfillStatus{
LowSlot: uint64(wblk.Block().Slot()),
LowRoot: blockRoot[:],
LowParentRoot: pr[:],
OriginRoot: blockRoot[:],
OriginSlot: uint64(wblk.Block().Slot()),
}
if err = s.SaveBackfillStatus(ctx, bf); err != nil {
return errors.Wrap(err, "unable to save backfill status data to db for checkpoint sync")
}
log.Infof("saving checkpoint block to db, w/ root=%#x", blockRoot)
if err := s.SaveBlock(ctx, wblk); err != nil {
return errors.Wrap(err, "could not save checkpoint block")


@@ -105,7 +105,6 @@ go_test(
"//beacon-chain/execution/types:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",


@@ -15,7 +15,6 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/types"
"github.com/prysmaticlabs/prysm/v4/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -489,6 +488,10 @@ func (s *Service) GetPayloadBodiesByHash(ctx context.Context, executionBlockHash
defer span.End()
result := make([]*pb.ExecutionPayloadBodyV1, 0)
// Exit early if there are no execution hashes.
if len(executionBlockHashes) == 0 {
return result, nil
}
err := s.rpcClient.CallContext(ctx, &result, GetPayloadBodiesByHashV1, executionBlockHashes)
for i, item := range result {
@@ -621,31 +624,15 @@ func (s *Service) ReconstructFullBellatrixBlockBatch(
}
func (s *Service) retrievePayloadFromExecutionHash(ctx context.Context, executionBlockHash common.Hash, header interfaces.ExecutionData, version int) (interfaces.ExecutionData, error) {
if features.Get().EnableOptionalEngineMethods {
pBodies, err := s.GetPayloadBodiesByHash(ctx, []common.Hash{executionBlockHash})
if err != nil {
return nil, fmt.Errorf("could not get payload body by hash %#x: %v", executionBlockHash, err)
}
if len(pBodies) != 1 {
return nil, errors.Errorf("could not retrieve the correct number of payload bodies: wanted 1 but got %d", len(pBodies))
}
bdy := pBodies[0]
return fullPayloadFromPayloadBody(header, bdy, version)
}
executionBlock, err := s.ExecutionBlockByHash(ctx, executionBlockHash, true /* with txs */)
pBodies, err := s.GetPayloadBodiesByHash(ctx, []common.Hash{executionBlockHash})
if err != nil {
return nil, fmt.Errorf("could not fetch execution block with txs by hash %#x: %v", executionBlockHash, err)
return nil, fmt.Errorf("could not get payload body by hash %#x: %v", executionBlockHash, err)
}
if executionBlock == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionBlockHash)
if len(pBodies) != 1 {
return nil, errors.Errorf("could not retrieve the correct number of payload bodies: wanted 1 but got %d", len(pBodies))
}
if bytes.Equal(executionBlock.Hash.Bytes(), []byte{}) {
return nil, ErrEmptyBlockHash
}
executionBlock.Version = version
return fullPayloadFromExecutionBlock(version, header, executionBlock)
bdy := pBodies[0]
return fullPayloadFromPayloadBody(header, bdy, version)
}
func (s *Service) retrievePayloadsFromExecutionHashes(
@@ -654,19 +641,12 @@ func (s *Service) retrievePayloadsFromExecutionHashes(
validExecPayloads []int,
blindedBlocks []interfaces.ReadOnlySignedBeaconBlock) ([]interfaces.SignedBeaconBlock, error) {
fullBlocks := make([]interfaces.SignedBeaconBlock, len(blindedBlocks))
var execBlocks []*pb.ExecutionBlock
var payloadBodies []*pb.ExecutionPayloadBodyV1
var err error
if features.Get().EnableOptionalEngineMethods {
payloadBodies, err = s.GetPayloadBodiesByHash(ctx, executionHashes)
if err != nil {
return nil, fmt.Errorf("could not fetch payload bodies by hash %#x: %v", executionHashes, err)
}
} else {
execBlocks, err = s.ExecutionBlocksByHashes(ctx, executionHashes, true /* with txs*/)
if err != nil {
return nil, fmt.Errorf("could not fetch execution blocks with txs by hash %#x: %v", executionHashes, err)
}
payloadBodies, err = s.GetPayloadBodiesByHash(ctx, executionHashes)
if err != nil {
return nil, fmt.Errorf("could not fetch payload bodies by hash %#x: %v", executionHashes, err)
}
// For each valid payload, we reconstruct the full block from it with the
@@ -674,32 +654,17 @@ func (s *Service) retrievePayloadsFromExecutionHashes(
for sliceIdx, realIdx := range validExecPayloads {
var payload interfaces.ExecutionData
bblock := blindedBlocks[realIdx]
if features.Get().EnableOptionalEngineMethods {
b := payloadBodies[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil payload body for request by hash %#x", executionHashes[sliceIdx])
}
header, err := bblock.Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromPayloadBody(header, b, bblock.Version())
if err != nil {
return nil, err
}
} else {
b := execBlocks[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil execution block for request by hash %#x", executionHashes[sliceIdx])
}
header, err := bblock.Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromExecutionBlock(bblock.Version(), header, b)
if err != nil {
return nil, err
}
b := payloadBodies[sliceIdx]
if b == nil {
return nil, fmt.Errorf("received nil payload body for request by hash %#x", executionHashes[sliceIdx])
}
header, err := bblock.Block().Body().Execution()
if err != nil {
return nil, err
}
payload, err = fullPayloadFromPayloadBody(header, b, bblock.Version())
if err != nil {
return nil, err
}
fullBlock, err := blocks.BuildSignedBeaconBlockFromExecutionPayload(bblock, payload.Proto())
if err != nil {
@@ -796,8 +761,8 @@ func fullPayloadFromExecutionBlock(
BlockHash: blockHash[:],
Transactions: txs,
Withdrawals: block.Withdrawals,
ExcessBlobGas: ebg,
BlobGasUsed: bgu,
ExcessBlobGas: ebg,
}, 0) // We can't get the block value and don't care about the block value for this instance
default:
return nil, fmt.Errorf("unknown execution block version %d", block.Version)
@@ -976,10 +941,10 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
}, nil
case version.Capella:
return &pb.ExecutionPayloadCapella{
@@ -989,10 +954,10 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
case version.Deneb:
@@ -1003,10 +968,10 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
default:


@@ -85,7 +85,7 @@ func FuzzExecutionPayload(f *testing.F) {
GasLimit: math.MaxUint64,
GasUsed: math.MaxUint64,
Timestamp: 100,
ExtraData: nil,
ExtraData: []byte{},
BaseFeePerGas: big.NewInt(math.MaxInt),
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},


@@ -20,7 +20,6 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
mocks "github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v4/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -758,26 +757,7 @@ func TestReconstructFullBellatrixBlock(t *testing.T) {
encodedBinaryTxs[0], err = txs[0].MarshalBinary()
require.NoError(t, err)
payload.Transactions = encodedBinaryTxs
jsonPayload["transactions"] = txs
num := big.NewInt(1)
encodedNum := hexutil.EncodeBig(num)
jsonPayload["hash"] = hexutil.Encode(payload.BlockHash)
jsonPayload["parentHash"] = common.BytesToHash([]byte("parent"))
jsonPayload["sha3Uncles"] = common.BytesToHash([]byte("uncles"))
jsonPayload["miner"] = common.BytesToAddress([]byte("miner"))
jsonPayload["stateRoot"] = common.BytesToHash([]byte("state"))
jsonPayload["transactionsRoot"] = common.BytesToHash([]byte("txs"))
jsonPayload["receiptsRoot"] = common.BytesToHash([]byte("receipts"))
jsonPayload["logsBloom"] = gethtypes.BytesToBloom([]byte("bloom"))
jsonPayload["gasLimit"] = hexutil.EncodeUint64(1)
jsonPayload["gasUsed"] = hexutil.EncodeUint64(2)
jsonPayload["timestamp"] = hexutil.EncodeUint64(3)
jsonPayload["number"] = encodedNum
jsonPayload["extraData"] = common.BytesToHash([]byte("extra"))
jsonPayload["totalDifficulty"] = "0x123456"
jsonPayload["difficulty"] = encodedNum
jsonPayload["size"] = encodedNum
jsonPayload["baseFeePerGas"] = encodedNum
jsonPayload["transactions"] = []hexutil.Bytes{encodedBinaryTxs[0]}
wrappedPayload, err := blocks.WrappedExecutionPayload(payload)
require.NoError(t, err)
@@ -792,7 +772,7 @@ func TestReconstructFullBellatrixBlock(t *testing.T) {
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": jsonPayload,
"result": []map[string]interface{}{jsonPayload},
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
@@ -869,26 +849,7 @@ func TestReconstructFullBellatrixBlockBatch(t *testing.T) {
encodedBinaryTxs[0], err = txs[0].MarshalBinary()
require.NoError(t, err)
payload.Transactions = encodedBinaryTxs
jsonPayload["transactions"] = txs
num := big.NewInt(1)
encodedNum := hexutil.EncodeBig(num)
jsonPayload["hash"] = hexutil.Encode(payload.BlockHash)
jsonPayload["parentHash"] = common.BytesToHash([]byte("parent"))
jsonPayload["sha3Uncles"] = common.BytesToHash([]byte("uncles"))
jsonPayload["miner"] = common.BytesToAddress([]byte("miner"))
jsonPayload["stateRoot"] = common.BytesToHash([]byte("state"))
jsonPayload["transactionsRoot"] = common.BytesToHash([]byte("txs"))
jsonPayload["receiptsRoot"] = common.BytesToHash([]byte("receipts"))
jsonPayload["logsBloom"] = gethtypes.BytesToBloom([]byte("bloom"))
jsonPayload["gasLimit"] = hexutil.EncodeUint64(1)
jsonPayload["gasUsed"] = hexutil.EncodeUint64(2)
jsonPayload["timestamp"] = hexutil.EncodeUint64(3)
jsonPayload["number"] = encodedNum
jsonPayload["extraData"] = common.BytesToHash([]byte("extra"))
jsonPayload["totalDifficulty"] = "0x123456"
jsonPayload["difficulty"] = encodedNum
jsonPayload["size"] = encodedNum
jsonPayload["baseFeePerGas"] = encodedNum
jsonPayload["transactions"] = []hexutil.Bytes{encodedBinaryTxs[0]}
wrappedPayload, err := blocks.WrappedExecutionPayload(payload)
require.NoError(t, err)
@@ -912,20 +873,12 @@ func TestReconstructFullBellatrixBlockBatch(t *testing.T) {
require.NoError(t, r.Body.Close())
}()
respJSON := []map[string]interface{}{
{
"jsonrpc": "2.0",
"id": 1,
"result": jsonPayload,
},
{
"jsonrpc": "2.0",
"id": 2,
"result": jsonPayload,
},
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": []map[string]interface{}{jsonPayload, jsonPayload},
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
@@ -1288,6 +1241,10 @@ func fixtures() map[string]interface{} {
BlockHash: foo[:],
Transactions: [][]byte{foo[:]},
}
executionPayloadBodyFixture := &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{foo[:]},
Withdrawals: []*pb.Withdrawal{},
}
executionPayloadFixtureCapella := &pb.ExecutionPayloadCapella{
ParentHash: foo[:],
FeeRecipient: bar,
@@ -1459,6 +1416,7 @@ func fixtures() map[string]interface{} {
}
return map[string]interface{}{
"ExecutionBlock": executionBlock,
"ExecutionPayloadBody": executionPayloadBodyFixture,
"ExecutionPayload": executionPayloadFixture,
"ExecutionPayloadCapella": executionPayloadFixtureCapella,
"ExecutionPayloadDeneb": executionPayloadFixtureDeneb,
@@ -2007,10 +1965,6 @@ func newPayloadV3Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.Execu
}
func TestCapella_PayloadBodiesByHash(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableOptionalEngineMethods: true,
})
defer resetFn()
t.Run("empty response works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
@@ -2067,7 +2021,9 @@ func TestCapella_PayloadBodiesByHash(t *testing.T) {
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
bRoot := [32]byte{}
copy(bRoot[:], "hash")
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{bRoot})
require.NoError(t, err)
require.Equal(t, 1, len(results))
@@ -2113,7 +2069,9 @@ func TestCapella_PayloadBodiesByHash(t *testing.T) {
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
bRoot := [32]byte{}
copy(bRoot[:], "hash")
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{bRoot, bRoot, bRoot})
require.NoError(t, err)
require.Equal(t, 3, len(results))
@@ -2154,7 +2112,9 @@ func TestCapella_PayloadBodiesByHash(t *testing.T) {
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
bRoot := [32]byte{}
copy(bRoot[:], "hash")
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{bRoot})
require.NoError(t, err)
require.Equal(t, 1, len(results))
@@ -2204,7 +2164,9 @@ func TestCapella_PayloadBodiesByHash(t *testing.T) {
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
bRoot := [32]byte{}
copy(bRoot[:], "hash")
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{bRoot})
require.NoError(t, err)
require.Equal(t, 2, len(results))
@@ -2247,7 +2209,9 @@ func TestCapella_PayloadBodiesByHash(t *testing.T) {
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
bRoot := [32]byte{}
copy(bRoot[:], "hash")
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{bRoot, bRoot, bRoot})
require.NoError(t, err)
require.Equal(t, 3, len(results))
@@ -2258,10 +2222,6 @@ func TestCapella_PayloadBodiesByHash(t *testing.T) {
}
func TestCapella_PayloadBodiesByRange(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableOptionalEngineMethods: true,
})
defer resetFn()
t.Run("empty response works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
@@ -2509,10 +2469,6 @@ func TestCapella_PayloadBodiesByRange(t *testing.T) {
}
func Test_ExchangeCapabilities(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableOptionalEngineMethods: true,
})
defer resetFn()
t.Run("empty response works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")


@@ -2,4 +2,4 @@ package execution
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "powchain")
var log = logrus.WithField("prefix", "execution")


@@ -11,10 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
// orphanLateBlockFirstThreshold is the number of seconds after which we
// consider a block to be late, and thus a candidate to being reorged.
const orphanLateBlockFirstThreshold = 4
// ProcessAttestationsThreshold is the number of seconds after which we
// process attestations for the current slot
const ProcessAttestationsThreshold = 10
@@ -137,7 +133,8 @@ func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
// slot will have secs = 3 below.
func (n *Node) arrivedEarly(genesisTime uint64) (bool, error) {
secs, err := slots.SecondsSinceSlotStart(n.slot, genesisTime, n.timestamp)
return secs < orphanLateBlockFirstThreshold, err
votingWindow := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
return secs < votingWindow, err
}
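The hard-coded 4-second threshold is replaced by SecondsPerSlot / IntervalsPerSlot. A quick check that, under the assumed mainnet values of 12-second slots and 3 intervals per slot, this is the same 4-second voting window the removed orphanLateBlockFirstThreshold constant encoded:

package main

import "fmt"

func main() {
	// Assumed mainnet values; not read from params.BeaconConfig().
	const secondsPerSlot, intervalsPerSlot = 12, 3
	fmt.Println(secondsPerSlot / intervalsPerSlot) // 4
}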
// arrivedAfterOrphanCheck returns whether this block was inserted after the


@@ -277,6 +277,7 @@ func TestNode_TimeStampsChecks(t *testing.T) {
require.NoError(t, err)
require.Equal(t, false, late)
orphanLateBlockFirstThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
// late block
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
root = [32]byte{'b'}


@@ -28,6 +28,7 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
}
f.ProcessAttestation(ctx, attesters, root, 0)
orphanLateBlockFirstThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
@@ -125,6 +126,7 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
headRoot, err := f.Head(ctx)
require.NoError(t, err)
require.Equal(t, root, headRoot)
orphanLateBlockFirstThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
f.store.headNode.timestamp -= params.BeaconConfig().SecondsPerSlot - orphanLateBlockFirstThreshold
t.Run("head is weak", func(t *testing.T) {
require.Equal(t, parentRoot, f.GetProposerHead())


@@ -240,6 +240,20 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockDelay returns the number of slots by which the highest
// received block was late when it was received.
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode
if n == nil {
return 0
}
secs, err := slots.SecondsSinceSlotStart(n.slot, f.store.genesisTime, n.timestamp)
if err != nil {
return 0
}
return primitives.Slot(secs / params.BeaconConfig().SecondsPerSlot)
}
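HighestReceivedBlockDelay reports lateness in whole slots: seconds since the block's slot started, divided by SecondsPerSlot. A minimal sketch assuming 12-second slots:

package main

import "fmt"

func delaySlots(secsSinceSlotStart, secondsPerSlot uint64) uint64 {
	return secsSinceSlotStart / secondsPerSlot
}

func main() {
	fmt.Println(delaySlots(1, 12))  // 0: received within its own slot
	fmt.Println(delaySlots(12, 12)) // 1: a full slot late, as the updated test asserts for slot 65
}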
// ReceivedBlocksLastEpoch returns the number of blocks received in the last epoch
func (f *ForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
count := uint64(0)


@@ -333,26 +333,29 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64
// Received block last epoch is 1
_, err = s.insert(context.Background(), 64, [32]byte{'A'}, b, b, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-64*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
s.genesisTime = uint64(time.Now().Add(time.Duration((-64*int64(params.BeaconConfig().SecondsPerSlot))-1) * time.Second).Unix())
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(64), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64 65
// Received block last epoch is 2
_, err = s.insert(context.Background(), 65, [32]byte{'B'}, b, b, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-65*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
s.genesisTime = uint64(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(2), count)
require.Equal(t, primitives.Slot(65), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockDelay())
// 64 65 66
// Received block last epoch is 3


@@ -64,6 +64,7 @@ type FastGetter interface {
FinalizedPayloadBlockHash() [32]byte
HasNode([32]byte) bool
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockDelay() primitives.Slot
IsCanonical(root [32]byte) bool
IsOptimistic(root [32]byte) (bool, error)
IsViableForCheckpoint(*forkchoicetypes.Checkpoint) (bool, error)
